[00:00:16] PROBLEM - mw1 php-fpm on mw1 is CRITICAL: PROCS CRITICAL: 0 processes with command name 'php-fpm7.2'
[00:00:18] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.326 second response time
[00:00:19] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.499 second response time
[00:02:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb
[00:03:36] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:04:45] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:07:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.75, 3.61, 2.44
[00:08:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:10:39] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:11:59] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.97, 3.69, 2.80
[00:12:46] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 7.436 second response time
[00:13:57] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeatn
[00:13:59] [miraheze/puppet] paladox 957faa7 - php: Remove "process.priority" OpenVZ 7 restricts this so we have to remove it due to "ERROR: Unable to set priority for the master process: Permission denied (13)".
[00:14:11] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.60, 2.51, 2.48
[00:17:28] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.383 second response time
[00:18:43] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.004 second response time
[00:18:45] RECOVERY - mw1 php-fpm on mw1 is OK: PROCS OK: 7 processes with command name 'php-fpm7.2'
[00:19:04] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.692 second response time
[00:19:37] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
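A note on commit 957faa7 above: process.priority is a global php-fpm directive that renices the master process (and its workers) at startup; inside an unprivileged OpenVZ 7 container the setpriority() call is denied, so any configured value makes the master abort with exactly the quoted error. A minimal sketch of the directive being dropped, assuming a stock Debian-style php-fpm.conf layout rather than Miraheze's actual Puppet template:

    ; /etc/php/7.2/fpm/php-fpm.conf (global section; layout is illustrative)
    [global]
    pid = /run/php/php7.2-fpm.pid
    error_log = /var/log/php7.2-fpm.log
    ; Removed in 957faa7: OpenVZ 7 denies renicing, and a set value makes
    ; php-fpm abort at startup with "Unable to set priority for the master
    ; process: Permission denied (13)". Leave unset to keep the default priority.
    ; process.priority = -19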
[00:20:36] !log repool mw1
[00:21:58] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.99, 4.31, 2.86
[00:22:07] !log restart php7.2-fpm on mw[23]
[00:22:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[00:22:33] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[00:22:35] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:22:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:22:49] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:23:36] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.57, 3.51, 3.30
[00:23:57] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.49, 3.51, 2.74
[00:25:35] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.47, 3.35, 3.27
[00:25:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:25:53] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.41, 4.16, 3.06
[00:27:52] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.59, 3.25, 2.87
[00:31:43] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.66, 4.11, 3.26
[00:32:54] !log depool mw2
[00:33:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[00:33:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.81, 3.80, 3.24
[00:36:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.92, 5.18, 3.83
[00:38:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.49, 3.73, 3.46
[00:38:07] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:38:15] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:39:54] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.93, 2.75, 3.13
[00:40:04] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:40:13] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:46:44] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 4 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[vpncloud@miraheze-internal]
[00:47:51] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[00:48:00] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[00:48:01] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:48:21] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:49:39] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[00:50:25] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:51:42] PROBLEM - mw2 Disk Space on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:51:57] PROBLEM - mw2 php-fpm on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:51:58] PROBLEM - mw2 Current Load on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:52:13] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:52:36] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: HTTP CRITICAL - No data received from host
[00:52:40] PROBLEM - mw2 HTTPS on mw2 is CRITICAL: connect to address 185.52.2.113 and port 443: Connection refused; HTTP CRITICAL - Unable to open TCP socket
[00:53:16] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:53:32] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:53:37] RECOVERY - mw2 Disk Space on mw2 is OK: DISK OK - free space: / 21737 MB (28% inode=98%);
[00:55:10] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.645 second response time
[00:55:52] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:59:26] PROBLEM - mw2 Disk Space on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:00:20] PROBLEM - mw2 SSH on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:00:56] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:01:34] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[01:03:49] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:05:49] PROBLEM - Host mw2 is DOWN: PING CRITICAL - Packet loss = 100%
[01:08:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:09:19] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.399 second response time
[01:09:35] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.003 second response time
[01:10:28] RECOVERY - Host mw2 is UP: PING OK - Packet loss = 0%, RTA = 0.37 ms
[01:10:30] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[01:10:33] RECOVERY - mw2 SSH on mw2 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0)
[01:10:33] RECOVERY - mw2 HTTPS on mw2 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.008 second response time
[01:11:03] RECOVERY - mw2 php-fpm on mw2 is OK: PROCS OK: 7 processes with command name 'php-fpm7.2'
[01:11:03] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 0.10, 0.19, 0.10
[01:11:15] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.631 second response time
[01:11:29] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[01:11:33] RECOVERY - mw2 Disk Space on mw2 is OK: DISK OK - free space: / 21737 MB (28% inode=98%);
[01:13:09] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:15:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[01:18:36] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:19:42] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://git.io/JeaqI
[01:19:44] [miraheze/puppet] paladox 5127039 - Use systemd to mount
[01:19:45] [puppet] paladox created branch paladox-patch-4 - https://git.io/vbiAS
[01:19:47] [puppet] paladox opened pull request #1137: Use systemd to mount - https://git.io/Jeaqt
[01:20:09] !log repool mw2
[01:20:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[01:20:32] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[01:21:22] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:25:09] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:25:54] [miraheze/mediawiki] paladox pushed 1 commit to REL1_33 [+0/-0/±1] https://git.io/Jeaqc
[01:25:55] [miraheze/mediawiki] paladox 5903ffa - Revert "Update ResourceLoader.php" This reverts commit 27a3bbc1b42ded123ad30141772b8b27b6a5a476.
[01:26:10] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:28:05] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.693 second response time
[01:28:49] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:31:57] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb
[01:33:18] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:33:34] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:34:04] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
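On pull request #1137 above ("Use systemd to mount"): a systemd .mount unit replaces an fstab entry or mount script with a unit systemd can order, supervise, and log like any other service. A minimal sketch for a LizardFS client mount; the path, master hostname, and options here are hypothetical, not taken from the Miraheze puppet repo:

    # /etc/systemd/system/mnt-lizardfs.mount (hypothetical; the file name
    # must be the systemd-escaped form of the Where= path)
    [Unit]
    Description=LizardFS client mount
    Wants=network-online.target
    After=network-online.target

    [Mount]
    What=mfsmount
    Where=/mnt/lizardfs
    Type=fuse
    Options=mfsmaster=lizardfs-master.example.org,_netdev

    [Install]
    WantedBy=multi-user.target

Enabling it with "systemctl enable --now mnt-lizardfs.mount" then gives dependency ordering and journal logging that a bare fstab line lacks.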
[01:35:35] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.683 second response time
[01:35:57] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[01:36:05] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.004 second response time
[01:36:19] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.395 second response time
[01:37:42] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.80, 6.08, 4.12
[01:37:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 2.72, 4.73, 3.63
[01:43:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.10, 3.90, 3.76
[01:46:04] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 8.96, 5.50, 4.33
[01:48:02] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.85, 3.98, 3.91
[01:49:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.43, 3.42, 3.83
[01:53:44] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.67, 2.21, 3.26
[01:54:08] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.65, 3.86, 3.82
[01:58:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.70, 2.75, 3.43
[02:00:29] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.04, 2.36, 3.18
[02:27:36] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.09, 6.72, 5.91
[02:29:34] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.77, 6.43, 5.90
[02:42:02] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.70, 3.37, 2.38
[02:42:35] PROBLEM - wiki.pupilliam.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Cannot make SSL connection.
[02:44:01] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.05, 2.49, 2.17
[02:44:33] RECOVERY - wiki.pupilliam.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.pupilliam.com' will expire on Sun 26 Jan 2020 12:52:17 PM GMT +0000.
[02:48:07] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.76, 5.02, 3.36
[02:50:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.63, 3.81, 3.12
[02:52:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.77, 4.60, 3.53
[02:54:08] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.60, 3.13, 3.12
[03:18:26] PROBLEM - wiki.pupilliam.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Cannot make SSL connection.
[03:41:02] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[03:41:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[03:42:58] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.392 second response time
[03:43:15] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[03:47:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[03:47:27] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[03:47:47] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[03:51:50] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 3.845 second response time
[03:51:56] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[03:56:30] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[03:58:26] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[04:02:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[04:19:40] RECOVERY - wiki.pupilliam.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.pupilliam.com' will expire on Sun 26 Jan 2020 12:52:17 PM GMT +0000.
[04:33:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.85, 3.37, 2.61
[04:35:53] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.93, 2.48, 2.38
[04:47:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.83, 3.74, 2.43
[04:48:59] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.47, 2.77, 2.22
[05:07:08] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 81.4.109.133/cpweb
[05:09:14] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[05:16:51] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[05:23:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[05:23:25] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.47, 3.74, 3.07
[05:25:19] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.92, 2.71, 2.77
[05:32:37] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 81.4.109.133/cpweb
[05:33:01] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:34:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[05:38:41] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[05:40:48] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[05:41:09] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:41:44] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:43:08] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.021 second response time
[05:45:59] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.133 second response time
[05:46:15] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 8.348 second response time
[05:47:05] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[05:49:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[05:55:13] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[05:57:11] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[06:10:46] Hello chris46! If you have any questions, feel free to ask and someone should answer soon.
[06:14:05] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[nagios-plugins]
[06:22:07] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures
[06:26:54] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2868 MB (11% inode=94%);
[06:33:38] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.99, 3.65, 2.93
[06:34:59] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.78, 4.69, 3.25
[06:35:36] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.62, 3.32, 2.90
[06:39:01] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.00, 3.89, 3.33
[06:39:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.74, 3.42, 3.06
[06:41:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.50, 3.80, 3.34
[06:41:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.10, 3.67, 3.19
[06:43:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.89, 3.86, 3.42
[06:45:01] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.37, 4.39, 3.66
[06:45:56] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.62, 3.98, 3.43
[06:47:58] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.65, 4.05, 3.52
[06:49:57] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.93, 3.94, 3.54
[06:51:58] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.63, 4.86, 3.94
[07:06:54] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.95, 3.29, 3.95
[07:14:50] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.88, 2.70, 3.38
[07:15:56] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.62, 2.72, 3.72
[07:20:17] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 21.86, 8.91, 5.67
[07:21:12] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 16.30, 9.50, 5.81
[07:21:14] PROBLEM - lizardfs4 Puppet on lizardfs4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[07:21:39] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[07:21:53] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[07:22:12] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[07:22:13] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[07:22:20] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[07:23:09] RECOVERY - lizardfs4 Puppet on lizardfs4 is OK: OK: Puppet is currently enabled, last run 9 minutes ago with 0 failures
[07:23:37] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.422 second response time
[07:23:57] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[07:24:15] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[07:24:15] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[07:24:21] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[07:44:50] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.03, 2.86, 3.88
[07:46:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.79, 4.12, 4.26
[07:48:54] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.77, 3.25, 3.92
[07:50:57] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.72, 3.73, 3.97
[07:52:55] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.88, 2.85, 3.62
[07:54:41] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.37, 2.68, 3.90
[07:54:54] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.63, 2.45, 3.38
[07:56:40] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.78, 4.75, 4.53
[07:58:56] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.42, 3.04, 3.55
[08:01:22] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.50, 3.70, 3.69
[08:03:21] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.12, 2.97, 3.41
[08:05:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.67, 2.69, 3.62
[08:07:32] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.44, 3.93, 3.66
[08:09:02] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.71, 2.41, 3.38
[08:09:31] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.49, 2.97, 3.34
[08:13:12] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.65, 3.05, 3.42
[08:13:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.61, 3.71, 3.62
[08:15:19] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:15:46] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.93, 4.69, 3.93
[08:19:10] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.79, 2.31, 3.01
[08:19:43] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.15, 3.68, 3.73
[08:21:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 10.92, 6.91, 4.92
[08:27:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.70, 4.40, 3.72
[08:29:32] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.95, 3.12, 3.33
[08:37:54] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.35, 5.16, 4.15
[08:39:49] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.87, 4.00, 3.85
[08:41:55] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.89, 5.78, 4.55
[09:05:27] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:06:36] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[09:06:37] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[09:07:37] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.870 second response time
[09:08:34] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:08:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:12:39] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.07, 2.61, 3.81
[09:13:29] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.52, 2.55, 3.95
[09:16:42] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 10.72, 4.75, 4.22
[09:17:17] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.71, 5.13, 4.61
[09:18:40] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.99, 3.41, 3.79
[09:20:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:22:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.17, 3.38, 3.70
[09:24:48] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.69, 2.44, 3.32
[09:25:09] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.04, 3.00, 3.78
[09:27:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.08, 3.86, 4.03
[09:29:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.92, 3.10, 3.73
[09:31:16] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 13.98, 6.06, 4.64
[09:33:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.58, 3.87, 3.70
[09:35:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.60, 3.39, 3.85
[09:35:22] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.09, 2.79, 3.31
[09:41:10] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 11.56, 5.14, 4.21
[09:42:48] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[09:43:12] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.68, 3.83, 3.84
[09:44:08] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:44:25] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 81.4.109.133/cpweb
[09:44:52] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:45:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:46:28] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 3.373 second response time
[09:46:42] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:48:27] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 2.49, 4.42, 3.84
[09:54:40] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.04, 3.28, 3.62
[09:56:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 9.14, 6.02, 4.59
[09:59:35] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.84, 2.88, 3.65
[10:02:01] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 10.24, 6.04, 4.71
[10:22:35] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.70, 2.89, 3.90
[10:24:35] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.62, 2.00, 3.86
[10:26:48] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.97, 4.63, 4.19
[10:26:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.69, 4.06, 4.34
[10:28:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.44, 2.90, 3.88
[10:31:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 11.96, 5.23, 4.53
[10:35:02] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.71, 2.80, 3.69
[10:37:01] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.86, 5.82, 4.74
[10:43:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.11, 3.23, 3.89
[10:49:11] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.35, 2.90, 3.94
[10:49:54] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.01, 2.27, 3.29
[10:51:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 8.07, 4.37, 4.32
[10:53:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.82, 3.30, 3.93
[10:57:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 8.04, 6.10, 4.85
[10:58:16] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 2.10, 4.13, 3.88
[11:00:15] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.89, 2.98, 3.48
[11:02:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[11:04:20] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:04:32] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.88, 2.71, 3.37
[11:04:59] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.22, 3.20, 3.96
[11:07:07] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.22, 4.02, 4.18
[11:09:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.72, 2.74, 3.69
[11:09:52] [landing] oop23 opened pull request #17: Polish Translation - https://git.io/JeanF
[11:13:09] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.01, 3.25, 3.63
[11:13:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[11:15:04] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.73, 4.13, 3.60
[11:15:05] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.49, 3.00, 3.50
[11:17:30] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.18, 5.69, 4.44
[11:18:03] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[11:21:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.18, 3.75, 3.89
[11:22:47] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[11:25:42] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.80, 3.97, 3.90
[11:26:01] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.94, 3.24, 3.80
[11:26:55] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.572 second response time
[11:27:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.64, 3.55, 3.76
[11:27:43] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:28:28] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[11:31:47] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.14, 3.78, 3.78
[11:32:00] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[11:32:55] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[11:33:44] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.53, 3.38, 3.64
[11:34:56] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[11:35:09] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2645 MB (10% inode=94%);
[11:35:39] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.52, 2.75, 3.38
[11:36:14] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.17, 2.39, 3.16
[11:44:14] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:45:40] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 81.4.109.133/cpweb
[11:46:18] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[11:47:53] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeac6
[11:47:54] [miraheze/puppet] paladox a971a11 - Remove vpncloud from mw*
[11:48:07] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 2.94, 4.03, 3.68
[11:48:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[11:51:04] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[11:52:58] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 8.033 second response time
[11:53:13] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 8.647 second response time
[11:53:26] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[11:53:54] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[11:54:10] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.20, 3.79, 3.46
[11:54:35] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:58:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:03:24] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[12:05:22] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[12:13:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb
[12:13:05] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[12:14:58] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:15:19] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.86, 3.13, 3.95
[12:17:17] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.68, 3.66, 4.04
[12:19:02] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[12:19:16] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.28, 3.39, 3.91
[12:25:19] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.64, 2.53, 3.40
[12:28:38] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.20, 2.78, 3.81
[12:30:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.36, 3.05, 3.76
[12:32:38] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.19, 2.97, 3.64
[12:34:45] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.61, 3.06, 3.55
[12:37:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.83, 6.71, 4.65
[12:57:02] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.01, 2.57, 3.80
[12:57:56] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[12:58:25] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[12:59:54] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[13:00:24] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:01:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.17, 3.48, 3.76
[13:02:40] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.77, 3.42, 3.99
[13:08:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[13:08:51] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.99, 3.43, 3.91
[13:08:54] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.81, 2.31, 3.29
[13:10:34] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:12:55] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.61, 2.45, 3.38
[13:14:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb
[13:16:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:18:45] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 2.29, 5.02, 4.41
[13:18:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.84, 3.53, 3.56
[13:21:16] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 10.09, 5.31, 4.15
[13:23:00] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[13:24:53] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.76, 3.15, 3.86
[13:24:57] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:26:51] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.68, 3.83, 4.03
[13:28:48] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.18, 2.85, 3.65
[13:28:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[13:31:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 10.06, 4.73, 4.16
[13:34:54] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.77, 2.90, 3.58
[13:35:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:35:47] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.27, 2.97, 3.71
[13:38:55] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.59, 2.36, 3.21
[13:39:44] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.93, 2.09, 3.20
[13:43:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.06, 4.50, 3.93
[13:43:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.72, 3.04, 3.44
[13:45:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.16, 3.29, 3.56
[13:45:54] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 3.02, 2.91, 3.33
[13:47:08] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.98, 5.15, 4.23
[13:47:40] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[13:49:07] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.64, 3.67, 3.79
[13:51:02] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.03, 5.37, 4.35
[13:53:36] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:53:55] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.24, 3.47, 3.50
[13:54:50] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.70, 3.07, 3.66
[13:56:12] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.81, 3.99, 3.69
[13:58:11] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.77, 3.24, 3.46
[13:58:40] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.05, 2.21, 3.20
[14:00:19] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:03:14] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.22, 5.01, 4.12
[14:04:31] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.79, 3.58, 3.61
[14:06:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.05, 4.60, 3.97
[14:08:42] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.78, 3.62, 3.70
[14:10:46] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.02, 4.30, 3.89
[14:12:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.96, 3.85, 3.78
[14:13:07] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.97, 3.41, 3.77
[14:16:43] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.77, 2.84, 3.37
[14:16:59] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.51, 3.74, 3.77
[14:19:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.00, 3.57, 3.72
[14:21:11] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.82, 4.74, 4.09
[14:21:45] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:22:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[14:22:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:23:43] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.384 second response time
[14:24:40] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:24:51] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:34:51] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.98, 2.89, 3.91
[14:38:48] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.08, 2.15, 3.39
[14:42:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:43:01] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:43:05] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.49, 6.46, 4.77
[14:43:21] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.52, 3.93, 3.20
[14:44:34] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:44:58] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:45:25] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.96, 3.50, 3.11
[14:47:29] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.78, 5.02, 3.76
[14:49:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.84, 3.77, 3.46
[14:51:41] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.56, 4.12, 3.62
[14:53:42] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.75, 3.14, 3.31
[14:58:23] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.70, 6.90, 4.79
[14:59:39] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:03:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb
[15:08:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.13, 3.04, 3.95
[15:09:19] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:09:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:13:13] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:13:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:13:47] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[15:14:27] revi: do you think the CoCC election can be started? I don't think there will be any new nominations
[15:14:44] On a meeting
[15:14:48] bb in an hour
[15:14:57] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.61, 2.19, 3.26
[15:15:17] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 7.382 second response time
[15:18:07] revi: ok
[15:19:40] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:23:11] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.82, 4.00, 3.74
[15:23:14] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:27:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 8.60, 4.98, 4.06
[15:28:57] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.11, 2.29, 3.66
[15:29:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.13, 3.65, 3.67
[15:32:58] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb
[15:35:20] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.41, 2.41, 3.14
[15:36:49] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.64, 2.42, 3.29
[15:37:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:42:43] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[15:42:46] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:42:58] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[15:44:56] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.406 second response time
[15:45:07] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:46:58] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:47:22] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:49:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:53:36] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:54:41] Reception123: and IMO it can wait just 24 hours.... :P
[15:55:21] next year I am probably going to start it around early Oct so we can close it a bit earlier
[16:00:58] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[16:01:52] It was my mistake not starting the vote on 25th, however
[16:01:59] revi: no problem sounds good to me
[16:02:05] tomorrow we get the election started
[16:03:03] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[16:03:06] yup
[16:03:12] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:14] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is WARNING: WARNING: Diff, 306 files, 50.50GB, 2019-10-20 02:22:00 (2.5 weeks ago)
[16:03:23] I have it on my todo list and it will yell at me
[16:03:29] so it is quite... impossible to miss it now
[16:03:37] (with 5 push notifications)
[16:04:42] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:04:59] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:05:14] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.045 second response time
[16:12:02] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:13:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:14:09] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 6.970 second response time
[16:15:05] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:17:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:18:46] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:19:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[16:20:46] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.635 second response time
[16:21:17] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:23:54] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:41:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[16:43:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:44:00] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.17, 6.38, 5.28
[16:45:58] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.22, 6.21, 5.34
[16:47:12] Hello aloo_shu! If you have any questions, feel free to ask and someone should answer soon.
[16:47:40] neat
[16:49:04] Hey
[16:49:55] just came here cuz I saw ExamSoon ask something elsewhere, and got curious about the namespace
[16:52:50] Okay, we’re a free, open source wiki hosting service. Is that something you’re interested in?
[16:53:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 81.4.109.133/cpweb
[16:54:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:54:10] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:55:44] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:55:51] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:56:22] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 9.554 second response time
[16:57:43] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:57:43] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.068 second response time
[16:57:54] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 6.429 second response time
[16:58:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:58:33] RhinosF1: I might. I've bookmarked it, but I'd yet have to make my first wiki. My first thought was, whether the permission structure of wikis could be leveraged to build in some social networking aspects, e.g. pages that can only be accessed by 2 or a group of members, and would function as PM / Room
[17:00:36] I could see if you were not interested in hosting *that*, but I'm using freenode as a learning resource to pick up the most diverse pieces of knowledge, which sometimes does and sometimes doesn't lead to anything tangible.
[17:01:36] aloo_shu: you won’t be able to have a PM aspect by default but social profile might allow it
[17:01:47] Not sure if they can be fully private though
[17:02:57] what is catching my eye here is that, apparently, you are also using the project in order to work with server monitoring and automating it
[17:03:18] We have some automated tools to monitor uptime
[17:04:13] kind of nice to see irc in the loop for that kinda thing, I've never seen server monitoring live
[17:05:23] I don’t know much about how it works tbh just that it somehow does, I keep MediaWiki sorted rather than deal with icinga
[17:13:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[17:19:48] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:22:21] ah ok, mediawiki is free wiki software, isn't it? think I've seen a few free hosting sites that offered it as a preconfigured pkg to install
[17:23:47] aloo_shu: it’s open source and free yes
[17:25:20] both speech and beer, what could be better :) (disclaimer: don't blame me, I haven't invented that metaphor)
[17:27:57] are the datacenters really down that much (and you are having mirrors to cope?), or is it the bot giving false positives when he's running into timeouts?
[17:28:50] aloo_shu: we do sometimes have issues but it’s not always noticeable - it’s mainly down to low budget
[17:30:36] I mean, I'm behind a very poor connection as well (and low/no budget like few others), and so all sorts of approaches to make the best out of what one has, by intelligent use of available tools, is interesting me more than average
[17:32:10] aloo_shu: we work our hardest for the best experience and are constantly looking at better options
[17:36:05] that's probably where my philosophy is different: I'm rather interested in making the least effort and still having the experience I want - which partly involves reformulating 'experience goals', but on the practical side, has a lot to do with getting better functionality out of 'obsolete' gear and 3rd-world-ish setups, and FLOSS is obviously my friend big time in that endeavour
[17:38:13] so in a way, I am more 'backwards facing', and backporters are my heroes more than upgraders
[17:38:30] Oh
[17:40:18] I was never very fascinated with progress, but am attracted to hacking :) in the sense of tinkering
[17:42:44] I see
[17:46:33] I could frame wiki hosting into my view as well - arguably, in the minds of many people, it is an 'antiquated' way of presenting information, and only suitable for niche use, with ofc Wikipedia as the niche that has gained prominence - I bet there is a lot of web functionality that, from a web developer's perspective, wiki format has _not_ - from my user perspective, though, that's a plus, and translates
[17:46:35] into 'probably fully functional in elinks' for instance, and proving the versatility of wiki would look like a worthwhile challenge to me
[17:48:23] MediaWiki isn't ideal for all scenarios. https://www.mediawiki.org/wiki/Manual:Deciding_whether_to_use_a_wiki_as_your_website_type
[17:48:23] [ Manual:Deciding whether to use a wiki as your website type - MediaWiki ] - www.mediawiki.org
[17:50:56] probably not the ideal host for online gaming :) thx for link k6ka , let me have a look
[17:54:59] Ultimately you're the final judge. There are people who still have faith and try to make MediaWiki work for them. There may be better solutions out there, though.
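Picking up the private-pages question from [16:58:33]-[17:01:47] above: core MediaWiki only restricts reading wiki-wide, not per page, which is presumably why "fully private" profile pages were uncertain; per-page or per-namespace read control needs extensions such as Lockdown. A rough LocalSettings.php sketch of the wiki-wide variant (illustrative only, not Miraheze's actual configuration):

    <?php
    // Make the wiki readable and editable only by logged-in members.
    $wgGroupPermissions['*']['read'] = false;  // anonymous users cannot read
    $wgGroupPermissions['*']['edit'] = false;  // anonymous users cannot edit
    // Keep the login page (and optionally the landing page) reachable.
    $wgWhitelistRead = [ 'Special:UserLogin', 'Main Page' ];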
[17:55:57] Rough example... say I wanted a photography website for my online portfolio. I probably wouldn't use MediaWiki and would use something like wixsite, Squarespace, or SmugMug. It's possible to use MediaWiki, but maybe not ideal. [17:56:17] MediaWiki was really built around wikis and wiki-style collaboration, so it's great for things like knowledge bases. [17:56:27] NASA I believe uses MediaWiki internally for that [18:04:00] PROBLEM - mw1 SSH on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:04:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [18:04:33] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:05:04] yeah, been reading a bit into mediawiki's documentation, nicely analytical w/o being tech-y, but offering all the details further down [18:05:31] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:05:45] PROBLEM - mw1 Disk Space on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:06:17] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:06:50] PROBLEM - mw1 MirahezeRenewSsl on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:07:02] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:07:13] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:07:26] i'm aware mw1 has gone down [18:07:28] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:07:40] I'm not sure why it's gone down [18:07:43] *though [18:07:45] PROBLEM - mw1 php-fpm on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:07:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb [18:07:58] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3 [18:08:01] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [18:08:04] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:08:06] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. 
mw1 mw3 [18:08:47] PROBLEM - Host mw1 is DOWN: PING CRITICAL - Packet loss = 100% [18:11:33] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.11, 6.63, 5.55 [18:15:36] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.899 second response time [18:16:12] for what I'd like as a personal website, and out of the few sites that I know what they were running on, I might like something like JomSocial, and the argument offered by wikimedia that more and better frontends exist for more popular software, is certainly true, but from the config snippet given at https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Using_MediaWiki_as_a_content_management_system , [18:16:13] [ Manual:Using MediaWiki as a content management system - MediaWiki ] - www.mediawiki.org [18:16:14] I'd think that doing configuration manually may be less complex compared to doing that elsewhere, and documentation well written [18:16:22] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.90, 6.71, 5.90 [18:17:58] would you know of any generic channel where I could learn from people discussing setting up and running websites with (mainly) FLOSS tools? [18:22:11] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.55, 7.85, 6.63 [18:22:55] RECOVERY - Host mw1 is UP: PING WARNING - Packet loss = 82%, RTA = 0.32 ms [18:22:56] RECOVERY - mw1 SSH on mw1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0) [18:23:03] !log restarted php7.2-fpm on mw2 [18:23:07] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.009 second response time [18:23:08] RECOVERY - mw1 php-fpm on mw1 is OK: PROCS OK: 7 processes with command name 'php-fpm7.2' [18:23:27] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 30 minutes ago with 0 failures [18:23:45] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[18:24:01] RECOVERY - mw1 Disk Space on mw1 is OK: DISK OK - free space: / 17603 MB (23% inode=97%); [18:24:10] !log restarted php7.2-fpm on mw3 [18:24:21] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 5.73, 7.09, 6.52 [18:24:25] RECOVERY - mw1 MirahezeRenewSsl on mw1 is OK: TCP OK - 0.001 second response time on 185.52.1.75 port 5000 [18:24:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [18:24:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [18:24:41] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 2.35, 0.97, 0.40 [18:24:50] RECOVERY - mw1 HTTPS on mw1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.008 second response time [18:25:14] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [18:25:32] PROBLEM - mw2 Puppet on mw2 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 2 minutes ago with 0 failures [18:25:35] ok, there's #web and ##webdev, I see [18:25:46] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.634 second response time [18:25:59] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.394 second response time [18:26:35] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.38, 6.46, 6.36 [18:27:12] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [18:27:26] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [18:27:42] PROBLEM - mw3 Puppet on mw3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 5 minutes ago with 0 failures [18:27:49] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 34 minutes ago with 0 failures [18:30:00] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 5.756 second response time [18:30:09] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jeau0 [18:30:11] [02miraheze/services] 07MirahezeSSLBot 03ecaff3d - BOT: Updating services config for wikis [18:32:11] pretty fascinating to look over your shoulders, I admit, e.g. connecting server monitoring to IRC with a bot, then logging back into a place that is itself mediawiki-run, from IRC, by means of another bot, is exactly the potential for creative modularity that I see as a key strength of FLOSS [18:32:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:36:33] * Hispano76 hello [18:36:48] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [18:37:34] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [18:37:45] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[18:38:57] Hey Hispano76 [18:41:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:41:49] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.011 second response time [18:42:18] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.711 second response time [18:45:15] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb [18:49:19] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:53:23] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [18:54:28] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:55:22] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:59:26] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:01:52] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:10:56] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:05] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:28] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:31] !log restarted php7.2-fpm on mw3 [19:12:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:12:58] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 1.444 second response time [19:13:06] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 1.424 second response time [19:13:30] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.004 second response time [19:13:41] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:16:11] !log restarted php7.2-fpm on mw2 [19:17:18] !log restarted php7.2-fpm on mw3 [19:17:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:17:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:17:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb [19:21:36] !log restarted nginx on mw[12] [19:21:43] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [19:22:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:23:47] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[19:23:51] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:25:50] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 6.496 second response time [19:25:54] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:27:02] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:27:49] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:28:59] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.399 second response time [19:33:44] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [19:33:45] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:44:41] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb [19:45:40] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:46:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:47:37] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [19:50:52] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:51:50] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [19:52:54] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:53:11] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [19:53:54] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:01:49] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [20:06:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:08:06] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:09:58] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:45:11] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb [20:45:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:46:21] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. 
mw1 [20:48:24] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [20:50:23] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.00, 6.50, 6.29 [20:52:24] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.40, 6.45, 6.29 [20:55:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:55:36] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:56:27] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.83, 6.64, 6.39 [21:00:34] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.74, 6.72, 6.48 [21:01:46] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/Jea2a [21:01:48] [02miraheze/puppet] 07paladox 03ef41bb5 - Add lizardfs6 as a mediawiki server Also tweek php-fpm childs to around 100 [21:01:49] [02puppet] 07paladox created branch 03paladox-patch-6 - 13https://git.io/vbiAS [21:01:51] [02puppet] 07paladox opened pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:02:28] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/Jea2r [21:02:30] [02miraheze/puppet] 07paladox 033211477 - Update php.pp [21:02:31] [02puppet] 07paladox synchronize pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:02:59] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/Jea2o [21:03:01] [02miraheze/puppet] 07paladox 0354c19e6 - Update lizardfs6.yaml [21:03:02] [02puppet] 07paladox synchronize pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:03:58] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/Jea26 [21:04:00] [02miraheze/puppet] 07paladox 03a5f4392 - Update lizardfs6.yaml [21:04:01] [02puppet] 07paladox synchronize pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:04:12] [02puppet] 07paladox edited pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:05:28] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/Jea2M [21:05:30] [02miraheze/puppet] 07paladox 03c303874 - mediawiki: Increase timeout to 1500s [21:05:31] [02puppet] 07paladox synchronize pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:05:45] [02puppet] 07paladox closed pull request 03#1138: Add lizardfs6 as a mediawiki server - 13https://git.io/Jea2V [21:05:46] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±4] 13https://git.io/Jea2D [21:05:48] [02miraheze/puppet] 07paladox 0351b9496 - Add lizardfs6 as a mediawiki server (#1138) * Add lizardfs6 as a mediawiki server Also tweek php-fpm childs to around 100 * Update php.pp * Update lizardfs6.yaml * Update lizardfs6.yaml * mediawiki: Increase timeout to 1500s [21:05:49] [02puppet] 07paladox deleted branch 03paladox-patch-6 - 13https://git.io/vbiAS [21:05:51] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-6 [21:06:44] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jea29 [21:06:45] [02miraheze/puppet] 07paladox 03a2522fc - Update lizardfs6.yaml [21:09:12] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[21:12:44] PROBLEM - lizardfs6 php-fpm on lizardfs6 is CRITICAL: PROCS CRITICAL: 0 processes with command name 'php-fpm7.2' [21:18:26] [02mw-config] 07The-Voidwalker opened pull request 03#2794: fix assignment of custom rights on testwiki - 13https://git.io/Jeaat [21:19:34] Voidwalker: will merge once Travis smiles [21:19:57] [02mw-config] 07RhinosF1 closed pull request 03#2794: fix assignment of custom rights on testwiki - 13https://git.io/Jeaat [21:19:58] [02miraheze/mw-config] 07RhinosF1 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jeaas [21:20:00] [02miraheze/mw-config] 07The-Voidwalker 03858f9d6 - fix assignment of custom rights on testwiki (#2794) [21:20:35] Voidwalker: {{done}} - should deploy within 10-15 mins [21:37:46] considering when it was merged, it would have deployed within a minute :P [21:38:52] JohnLewis: I have no clue what puppet's timetable is just that it runs every 10 mins or so [21:41:15] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jeaab [21:41:17] [02miraheze/puppet] 07paladox 030ac683b - Update mfsmaster.cfg.erb [21:41:55] !log restart lizardfs-master on lizardfs6 [21:44:40] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVe [21:44:41] [02miraheze/puppet] 07paladox 03ac3563e - Update lizardfs6.yaml [21:45:26] RECOVERY - lizardfs6 php-fpm on lizardfs6 is OK: PROCS OK: 3 processes with command name 'php-fpm7.2' [21:46:26] !log [21:41:55] <+paladox> !log restart lizardfs-master on lizardfs6 [21:46:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:57:54] RhinosF1: every 10 mins, on the 10th minute [21:58:31] [02miraheze/mediawiki] 07Pix1234 pushed 031 commit to 03REL1_33 [+1/-0/±1] 13https://git.io/JeaVm [21:58:32] [02miraheze/mediawiki] 07Pix1234 03e31fb71 - add FontAwesome for testing for T4852 [21:58:50] [02miraheze/mw-config] 07Pix1234 pushed 031 commit to 03master [+0/-0/±3] 13https://git.io/JeaVO [21:58:51] [02miraheze/mw-config] 07Pix1234 031ea11c6 - Config for testing FontAwesome for T4852 [22:00:51] miraheze/mw-config/master/1ea11c6 - zppix1 The build was broken. https://travis-ci.org/miraheze/mw-config/builds/608439687 [22:01:24] Zppix, unclosed array https://github.com/miraheze/mw-config/compare/858f9d637244...1ea11c62f5a2#diff-633209e879ae25710c1454b6be17da3bR651 [22:01:25] [ Comparing 858f9d637244...1ea11c62f5a2 · miraheze/mw-config · GitHub ] - github.com [22:01:54] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVG [22:01:55] [02miraheze/mw-config] 07paladox 032503d63 - Fix syntax [22:05:16] miraheze/mw-config/master/2503d63 - paladox The build is still failing. https://travis-ci.org/miraheze/mw-config/builds/608441601 [22:06:07] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVl [22:06:08] [02miraheze/mw-config] 07paladox 03312f782 - fix [22:06:24] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2 [22:06:25] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2 [22:06:26] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2 [22:07:33] miraheze/mw-config/master/312f782 - paladox The build was fixed. 
https://travis-ci.org/miraheze/mw-config/builds/608443914 [22:08:20] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [22:08:22] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [22:08:23] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [22:09:53] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVu [22:09:55] [02miraheze/puppet] 07paladox 03d0c0b63 - db: Add lizardfs6 to firewall [22:13:57] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:22:50] [02miraheze/mw-config] 07Pix1234 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVS [22:22:51] [02miraheze/mw-config] 07Pix1234 0320cf336 - syntax [22:23:22] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://git.io/JeaV9 [22:23:24] [02miraheze/puppet] 07paladox 03818aef6 - Add lizardfs6 as a mediawiki backend [22:23:25] Voidwalker: ^ is that good (paladox ) [22:23:26] [02puppet] 07paladox created branch 03paladox-patch-6 - 13https://git.io/vbiAS [22:23:27] [02puppet] 07paladox opened pull request 03#1139: Add lizardfs6 as a mediawiki backend - 13https://git.io/JeaVH [22:24:04] kind of yes Zppix, you need to fix the lint. Probably best you do pulls if you want us to review. [22:24:11] ok [22:24:19] remove the line at https://github.com/miraheze/mw-config/compare/312f7823b8ac...20cf336d5ed7#diff-633209e879ae25710c1454b6be17da3bR654 [22:24:20] [ Comparing 312f7823b8ac...20cf336d5ed7 · miraheze/mw-config · GitHub ] - github.com [22:24:27] and unindent the three lines above it [22:25:10] [02puppet] 07paladox closed pull request 03#1139: Add lizardfs6 as a mediawiki backend - 13https://git.io/JeaVH [22:25:11] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVF [22:25:13] [02miraheze/puppet] 07paladox 0338f2121 - Add lizardfs6 as a mediawiki backend (#1139) [22:26:06] Voidwalker: so only 3 brackets? [22:26:25] [02puppet] 07paladox deleted branch 03paladox-patch-6 - 13https://git.io/vbiAS [22:26:27] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-6 [22:26:44] yeah [22:27:03] you only want as many brackets going in as you have going out [22:27:06] [02miraheze/MirahezeDebug] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVA [22:27:07] [02miraheze/MirahezeDebug] 07paladox 038f19282 - Update popup.html [22:29:04] [02miraheze/mw-config] 07Pix1234 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JeaVx [22:29:05] [02miraheze/mw-config] 07Pix1234 03257cd5b - syntax [22:30:38] Voidwalker: so syntax should be right [22:30:42] ? 
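The lint failure being talked through here is bracket arithmetic in a PHP config array: every opening bracket needs a matching closing one before the statement's semicolon. A contrived sketch ($wgExampleSetting and its keys are made up, not the real mw-config values):

    <?php
    // Broken shape, kept as a comment because it will not parse:
    //   $wgExampleSetting = [
    //       'testwiki' => [
    //           'somekey' => 'value',
    //   ];   // one ']' short -> PHP syntax error, Travis build fails
    //
    // Balanced shape - 'as many brackets going in as you have going out':
    $wgExampleSetting = [
        'testwiki' => [
            'somekey' => 'value',
        ],
    ];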
[22:30:47] yeah [22:31:07] now to move it to 1_34 [22:31:34] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:32:17] have fun with that, I'll be back later [22:32:31] Voidwalker: shouldn't be hard, just have to submodule add to rel134 [22:32:55] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [22:38:41] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jeawt [22:38:43] [02miraheze/puppet] 07paladox 03e9dac84 - Update services.pp [22:39:17] [02miraheze/mediawiki] 07Pix1234 pushed 031 commit to 03REL1_34 [+1/-0/±1] 13https://git.io/Jeawq [22:39:19] [02miraheze/mediawiki] 07Pix1234 0317c1f93 - add FontAwesome for testing [22:42:39] [02miraheze/mediawiki] 07Pix1234 pushed 031 commit to 03REL1_34 [+0/-0/±1] 13https://git.io/Jeaws [22:42:41] [02miraheze/mediawiki] 07Pix1234 03891346b - Update CreatePageUw [23:11:47] !log sudo -u www-data php toggleExtension.php --disable --wiki testwiki fontawesome [23:11:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:20:09] !log running lc on test1 [23:20:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:29:30] !log depool mw[123], rebuild lc and repool mw[123] [23:29:47] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:33:40] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JearU [23:33:41] [02miraheze/mw-config] 07paladox 0344b969f - Add FontAwesome to extension-list
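For completeness, the 'submodule add to rel134' step mentioned at 22:32 would look roughly like the following in a checkout of miraheze/mediawiki (the extension's repository URL here is illustrative; the exact commands run are not in the log):

    # sketch, assuming a local clone of miraheze/mediawiki
    git checkout REL1_34
    git submodule add https://github.com/example/FontAwesome.git extensions/FontAwesome
    git commit -m "add FontAwesome for testing"

which is consistent with the [22:39:17] push above showing one file added and one modified (the new submodule pointer plus the .gitmodules update).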