[00:03:52] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 2400:6180:0:d0::403:f001/cpweb
[00:04:30] PROBLEM - misc3 Current Load on misc3 is CRITICAL: CRITICAL - load average: 10.40, 5.68, 2.47
[00:04:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:04:55] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[00:06:01] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[00:10:19] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:10:36] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:11:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:11:34] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:13:11] PROBLEM - misc3 Current Load on misc3 is WARNING: WARNING - load average: 0.75, 3.87, 3.43
[00:15:05] RECOVERY - misc3 Current Load on misc3 is OK: OK - load average: 0.80, 2.87, 3.11
[01:03:45] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:03:57] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[01:04:05] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[01:04:25] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:04:41] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:04:41] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:04:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:05:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:05:23] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[01:05:26] PROBLEM - misc3 Current Load on misc3 is CRITICAL: CRITICAL - load average: 7.51, 5.67, 2.74
[01:06:33] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 2.519 second response time
[01:06:53] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 5.049 second response time
[01:06:55] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 2.791 second response time
[01:07:26] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[01:08:23] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.681 second response time
[01:10:35] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[01:10:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:11:31] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[01:12:37] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[01:13:42] RECOVERY - misc3 Current Load on misc3 is OK: OK - load average: 0.26, 2.94, 2.92
[01:45:37] PROBLEM - lizardfs7 Disk Space on lizardfs7 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[01:45:47] PROBLEM - lizardfs7 Puppet on lizardfs7 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[01:45:53] PROBLEM - lizardfs7 Current Load on lizardfs7 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[01:49:25] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeujn
[01:49:27] [miraheze/dns] paladox 8048442 - Add lizardfs7 to dns
[01:49:38] PROBLEM - lizardfs7 Puppet on lizardfs7 is UNKNOWN: UNKNOWN: Failed to check. Reason is: no_summary_file
[01:49:39] RECOVERY - lizardfs7 Disk Space on lizardfs7 is OK: DISK OK - free space: / 1779266 MB (99% inode=99%);
[01:49:48] RECOVERY - lizardfs7 Current Load on lizardfs7 is OK: OK - load average: 0.90, 1.10, 0.57
[01:53:27] PROBLEM - lizardfs7 Puppet on lizardfs7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 19 seconds ago with 1 failures. Failed resources (up to 3 shown): User[johnflewis]
[01:57:17] RECOVERY - lizardfs7 Puppet on lizardfs7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[01:58:13] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JeujB
[01:58:15] [miraheze/puppet] paladox f41862e - Add lizardfs7 and remove lizardfs6
[01:58:16] [puppet] paladox created branch paladox-patch-7 - https://git.io/vbiAS
[01:58:18] [puppet] paladox opened pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[01:59:08] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+1/-1/±0] https://git.io/Jeuju
[01:59:09] [miraheze/puppet] paladox d63a8b1 - Update and rename lizardfs6.yaml to lizardfs7.yaml
[01:59:11] [puppet] paladox synchronize pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[01:59:30] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jeujz
[01:59:32] [miraheze/puppet] paladox 9d3222b - Update config.yaml
[01:59:33] [puppet] paladox synchronize pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[01:59:49] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jeujg
[01:59:50] [miraheze/puppet] paladox 71cb38f - Update storage_firewall.yaml
[01:59:52] [puppet] paladox synchronize pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[02:01:35] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jeuj2
[02:01:37] [miraheze/puppet] paladox 11ada24 - Update mfsmaster.cfg.erb
[02:01:38] [puppet] paladox synchronize pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[02:02:26] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JeujV
[02:02:27] [miraheze/puppet] paladox 5620d08 - Update mfschunkserver.cfg.erb
[02:02:29] [puppet] paladox synchronize pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[02:04:29] [miraheze/puppet] paladox pushed 1 commit to master [+1/-1/±5] https://git.io/Jeujo
[02:04:30] [miraheze/puppet] paladox 680f9f7 - Add lizardfs7 and remove lizardfs6 (#1126) * Add lizardfs7 and remove lizardfs6 * Update and rename lizardfs6.yaml to lizardfs7.yaml * Update config.yaml * Update storage_firewall.yaml * Update mfsmaster.cfg.erb * Update mfschunkserver.cfg.erb
[02:04:32] [puppet] paladox closed pull request #1126: Add lizardfs7 and remove lizardfs6 - https://git.io/JeujR
[02:05:55] [miraheze/puppet] paladox deleted branch paladox-patch-7
[02:05:57] [puppet] paladox deleted branch paladox-patch-7 - https://git.io/vbiAS
[02:07:38] PROBLEM - lizardfs7 Lizardfs Master Port 3 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9421: Connection refused
[02:07:41] PROBLEM - lizardfs7 Lizardfs Master Port 1 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9419: Connection refused
[02:07:55] PROBLEM - lizardfs7 Lizardfs Master Port 2 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9420: Connection refused
[02:08:50] PROBLEM - lizardfs7 Puppet on lizardfs7 is WARNING: WARNING: Puppet is currently disabled, message: reason not specified, last run 2 minutes ago with 1 failures
[02:09:35] RECOVERY - lizardfs7 Lizardfs Master Port 3 on lizardfs7 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 9421
[02:09:36] RECOVERY - lizardfs7 Lizardfs Master Port 1 on lizardfs7 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 9419
[02:09:52] RECOVERY - lizardfs7 Lizardfs Master Port 2 on lizardfs7 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 9420
[02:15:51] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:16:18] PROBLEM - Host lizardfs6 is DOWN: PING CRITICAL - Packet loss = 100%
[02:17:27] !log restart lizardfs-master on misc3
[02:25:25] PROBLEM - lizardfs7 Puppet on lizardfs7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 35 seconds ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[02:27:34] PROBLEM - lizardfs7 Puppet on lizardfs7 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 2 minutes ago with 1 failures
[02:34:06] PROBLEM - lizardfs7 Puppet on lizardfs7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[02:39:08] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jeujd
[02:39:10] [miraheze/puppet] paladox 5556bd5 - Rename lizardfs7 to lizardfs6
[02:39:11] [puppet] paladox created branch paladox-patch-7 - https://git.io/vbiAS
[02:39:36] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeujF
[02:39:37] [miraheze/dns] paladox 21d1827 - Rename lizardfs7 to lizardfs6
[02:39:39] [puppet] paladox opened pull request #1127: Rename lizardfs7 to lizardfs6 - https://git.io/Jeujb
[02:40:16] RECOVERY - lizardfs7 Puppet on lizardfs7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[02:42:53] RECOVERY - Host lizardfs6 is UP: PING OK - Packet loss = 0%, RTA = 12.50 ms
[02:43:43] PROBLEM - lizardfs6 Disk Space on lizardfs6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:43:43] RECOVERY - lizardfs6 Disk Space on lizardfs6 is OK: DISK OK - free space: / 1777740 MB (99% inode=99%);
[02:44:02] PROBLEM - lizardfs7 Puppet on lizardfs7 is CRITICAL: CRITICAL: Puppet has 5 failures. Last run 1 minute ago with 5 failures. Failed resources (up to 3 shown)
[03:03:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb
[03:05:46] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[04:28:11] !log deleted account ‘macfan’ on status.miraheze.wiki
[04:28:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[05:03:45] PROBLEM - misc3 Current Load on misc3 is CRITICAL: CRITICAL - load average: 4.18, 3.32, 1.49
[05:05:44] RECOVERY - misc3 Current Load on misc3 is OK: OK - load average: 0.63, 2.26, 1.32
[06:11:14] [mw-config] autoresponder[bot] commented on issue #2783: LoserFan2018 - https://git.io/JezvE
[06:20:11] [mw-config] RhinosF1 commented on issue #2783: LoserFan2018 - https://git.io/Jezva
[06:21:15] ^ spam
[06:26:59] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2829 MB (11% inode=94%);
[10:22:14] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[10:37:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[10:39:01] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:03:45] !log reception@mw1:~$ sudo php /srv/mediawiki/w/maintenance/dumpBackup.php --wiki allthetropeswiki --full --output gzip:/home/reception/allthetropes30102019.xml (part of wikibackups)
[11:03:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:04:07] !log reception@mw2:~$ sudo php /srv/mediawiki/w/maintenance/dumpBackup.php --wiki poserdazfreebieswiki --full --output gzip:/home/reception/poserdazfreebieswiki30102019.xml (part of wikibackups)
[11:04:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:04:52] !log reception@mw3:~$ sudo php /srv/mediawiki/w/maintenance/dumpBackup.php --wiki nonsensopediawiki --full --output gzip:/home/reception/nonsensopediawiki30102019.xml (part of wikibackups)
[11:04:58] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:06:09] !log reception@mw1:~$ sudo php /srv/mediawiki/w/maintenance/dumpBackup.php --wiki nonciclopediawiki --full --output gzip:/home/reception/nonciclopediawiki30102019.xml (part of wikibackups)
[11:06:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:06:16] !log nice 10 for all of the above
[11:06:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:57:41] Hello init2winit! If you have any questions, feel free to ask and someone should answer soon.
[12:33:05] PROBLEM - wiki.om3ga.tk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:37:10] RECOVERY - wiki.om3ga.tk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.om3ga.tk' will expire on Fri 22 Nov 2019 12:00:37 PM GMT +0000.
[12:47:43] PROBLEM - wiki.om3ga.tk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:27:10] [miraheze/DataDump] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/Jeztk
[13:27:11] [miraheze/DataDump] Reception123 c75b89d - rm white space
[13:27:25] ^ blaming paladox for that one
[14:02:48] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:02:49] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:02:50] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:02:53] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
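(Editor's note: the `dumpBackup.php` runs logged at 11:03–11:06 above all follow one pattern: a wiki name, `--full`, and a gzip output file named after the wiki plus the date as DDMMYYYY. A minimal sketch of generating those command lines; the helper function is hypothetical, since the log shows the commands were issued by hand on mw[123], with `nice 10` applied separately afterwards.)

```python
from datetime import date

def dump_backup_command(wiki: str, outdir: str = "/home/reception",
                        day: date = date(2019, 10, 30)) -> str:
    """Build a dumpBackup.php command line like the ones in the log.

    Hypothetical helper for illustration; paths and flags are taken
    verbatim from the logged invocations.
    """
    stamp = day.strftime("%d%m%Y")  # e.g. 30102019
    return ("sudo php /srv/mediawiki/w/maintenance/dumpBackup.php "
            f"--wiki {wiki} --full --output gzip:{outdir}/{wiki}{stamp}.xml")

for wiki in ("nonsensopediawiki", "nonciclopediawiki"):
    print(dump_backup_command(wiki))
```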
[14:02:55] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:02:55] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:03:00] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:03:30] PROBLEM - db5 Puppet on db5 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 2 minutes ago with 17 failures. Failed resources (up to 3 shown)
[14:03:38] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:03:38] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:03:38] PROBLEM - lizardfs4 Puppet on lizardfs4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:03:50] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 3 minutes ago with 18 failures. Failed resources (up to 3 shown)
[14:04:01] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:04:11] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:04:11] PROBLEM - lizardfs5 Puppet on lizardfs5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:04:22] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:04:27] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:13:21] RECOVERY - db5 Puppet on db5 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures
[14:13:30] RECOVERY - lizardfs4 Puppet on lizardfs4 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[14:13:47] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 30 seconds ago with 0 failures
[14:14:02] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 26 seconds ago with 0 failures
[14:14:06] RECOVERY - lizardfs5 Puppet on lizardfs5 is OK: OK: Puppet is currently enabled, last run 54 seconds ago with 0 failures
[14:14:20] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures
[14:14:45] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:14:49] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:10] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:11] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:14] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:16] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:25] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures
[14:15:30] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:15:53] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 54 seconds ago with 0 failures
[14:16:15] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:16:44] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:23:13] PROBLEM - lizardfs7 Lizardfs Master Port 2 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9420: Connection refused
[14:23:22] PROBLEM - lizardfs7 Lizardfs Chunkserver Port on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9422: Connection refused
[14:23:35] PROBLEM - lizardfs7 Current Load on lizardfs7 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[14:23:39] PROBLEM - lizardfs7 Lizardfs Master Port 1 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9419: Connection refused
[14:23:39] PROBLEM - lizardfs7 lizard.miraheze.org HTTPS on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket
[14:23:41] PROBLEM - lizardfs6 Disk Space on lizardfs6 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[14:24:11] PROBLEM - lizardfs7 Lizardfs Master Port 3 on lizardfs7 is CRITICAL: connect to address 54.36.165.161 and port 9421: Connection refused
[14:24:21] PROBLEM - lizardfs7 Disk Space on lizardfs7 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[14:24:27] PROBLEM - lizardfs6 Current Load on lizardfs6 is CRITICAL: connect to address 54.36.165.161 port 5666: Connection refusedconnect to host 54.36.165.161 port 5666: Connection refused
[15:04:25] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+1/-1/±0] https://git.io/JezmO
[15:04:27] [miraheze/puppet] paladox eaf597e - Rename lizardfs7.yaml to lizardfs6.yaml
[15:04:28] [puppet] paladox synchronize pull request #1127: Rename lizardfs7 to lizardfs6 - https://git.io/Jeujb
[15:04:41] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jezms
[15:04:42] [miraheze/puppet] paladox 3c16a0b - Update config.yaml
[15:04:44] [puppet] paladox synchronize pull request #1127: Rename lizardfs7 to lizardfs6 - https://git.io/Jeujb
[15:04:53] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JezmG
[15:04:55] [miraheze/puppet] paladox e37b822 - Update storage_firewall.yaml
[15:04:56] [puppet] paladox synchronize pull request #1127: Rename lizardfs7 to lizardfs6 - https://git.io/Jeujb
[15:05:04] [puppet] paladox closed pull request #1127: Rename lizardfs7 to lizardfs6 - https://git.io/Jeujb
[15:05:06] [miraheze/puppet] paladox pushed 1 commit to master [+1/-1/±3] https://git.io/Jezmn
[15:05:07] [miraheze/puppet] paladox 4100506 - Rename lizardfs7 to lizardfs6 (#1127) * Rename lizardfs7 to lizardfs6 * Rename lizardfs7.yaml to lizardfs6.yaml * Update config.yaml * Update storage_firewall.yaml
[15:05:09] [puppet] paladox deleted branch paladox-patch-7 - https://git.io/vbiAS
[15:05:10] [miraheze/puppet] paladox deleted branch paladox-patch-7
[15:11:32] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/Jezm0
[15:11:33] [miraheze/puppet] paladox 1fc7368 - Change default lizard master to lizardfs6
[15:11:35] [puppet] paladox created branch paladox-patch-7 - https://git.io/vbiAS
[15:11:36] [puppet] paladox opened pull request #1128: Change default lizard master to lizardfs6 - https://git.io/JezmE
[15:12:24] [puppet] paladox closed pull request #1128: Change default lizard master to lizardfs6 - https://git.io/JezmE
[15:12:26] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jezmz
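(Editor's note: the "Lizardfs Master Port" and NRPE alerts above are plain TCP connect checks: the monitoring host tries to open the port and reports CRITICAL on refusal or timeout, OK otherwise. A rough sketch of the idea; this is an illustrative stand-in, not the actual Icinga/NRPE plugin.)

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 10.0) -> str:
    """Icinga-style TCP connect check: OK if the port accepts, CRITICAL otherwise."""
    try:
        # create_connection resolves the host and attempts the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return f"TCP OK - connected to {host} port {port}"
    except OSError as exc:  # covers Connection refused and timeouts alike
        return f"CRITICAL: connect to address {host} and port {port}: {exc}"
```

A refused connection on, say, port 9419 produces a CRITICAL line of the same shape as the lizardfs7 alerts in the log.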
[15:12:27] [miraheze/puppet] paladox 19ce5bf - Change default lizard master to lizardfs6 (#1128)
[15:12:29] [puppet] paladox deleted branch paladox-patch-7 - https://git.io/vbiAS
[15:12:30] [miraheze/puppet] paladox deleted branch paladox-patch-7
[15:13:24] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 58 seconds ago with 1 failures. Failed resources (up to 3 shown): Service[lizardfs-master]
[15:14:21] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 1 minute ago with 1 failures
[15:14:32] PROBLEM - mw2 Puppet on mw2 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 1 minute ago with 0 failures
[15:15:41] PROBLEM - mw3 Puppet on mw3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 3 minutes ago with 0 failures
[15:16:56] PROBLEM - misc3 Puppet on misc3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 0 failures
[15:17:50] !log restart lizardfs-chunkserver & change master
[15:17:56] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:21:10] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Mount[/mnt/mediawiki-static]
[15:21:29] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:22:30] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:22:39] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:22:52] PROBLEM - mw2 HTTPS on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:22:58] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:22:59] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[15:23:11] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:23:17] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:23:21] PROBLEM - mw3 HTTPS on mw3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:23:27] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:23:43] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:23:52] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 19 seconds ago with 1 failures. Failed resources (up to 3 shown): Mount[/mnt/mediawiki-static]
[15:23:53] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[15:23:57] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:24:02] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[15:24:27] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.011 second response time
[15:24:48] RECOVERY - mw2 HTTPS on mw2 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.016 second response time
[15:24:56] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 0 seconds ago with 1 failures. Failed resources (up to 3 shown)
[15:25:17] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.682 second response time
[15:25:20] RECOVERY - mw3 HTTPS on mw3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.024 second response time
[15:25:33] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 5.395 second response time
[15:25:34] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:25:40] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 1.770 second response time
[15:25:50] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 55 seconds ago with 0 failures
[15:25:54] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.643 second response time
[15:26:10] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:26:57] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 49 seconds ago with 0 failures
[15:28:41] !log depool mw[123] and reboot then repool
[15:29:44] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:30:30] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:31:50] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: HTTP CRITICAL - No data received from host
[15:32:00] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[15:32:21] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.006 second response time
[15:32:26] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JezYn
[15:32:28] [miraheze/puppet] paladox d44e31f - Migrate lizardfs[45] data to lizardfs6
[15:32:29] [puppet] paladox created branch paladox-patch-7 - https://git.io/vbiAS
[15:32:31] [puppet] paladox opened pull request #1129: Migrate lizardfs[45] data to lizardfs6 - https://git.io/JezYc
[15:32:46] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JezYC
[15:32:46] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 1.748 second response time
[15:32:47] [miraheze/puppet] paladox c2526a9 - Update lizardfs5.yaml
[15:32:49] [puppet] paladox synchronize pull request #1129: Migrate lizardfs[45] data to lizardfs6 - https://git.io/JezYc
[15:32:54] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.397 second response time
[15:32:58] [puppet] paladox closed pull request #1129: Migrate lizardfs[45] data to lizardfs6 - https://git.io/JezYc
[15:33:00] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/JezYW
[15:33:01] [miraheze/puppet] paladox f21fc3a - Migrate lizardfs[45] data to lizardfs6 (#1129) * Migrate lizardfs[45] data to lizardfs6 * Update lizardfs5.yaml
[15:33:03] [miraheze/puppet] paladox deleted branch paladox-patch-7
[15:33:04] [puppet] paladox deleted branch paladox-patch-7 - https://git.io/vbiAS
[15:33:47] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.006 second response time
[15:33:59] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:34:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:34:08] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:34:20] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:34:34] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[15:34:40] !log reload lizardfs-chunkserver on lizardfs[45]
[15:34:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:43:30] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JezYM
[15:43:31] [miraheze/puppet] paladox fc9981a - Change lizardfs.miraheze.org backend to lizardfs6
[15:43:33] [puppet] paladox created branch paladox-patch-7 - https://git.io/vbiAS
[15:43:34] [puppet] paladox opened pull request #1130: Change lizardfs.miraheze.org backend to lizardfs6 - https://git.io/JezYD
[15:44:13] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-7 [+0/-0/±1] https://git.io/JezYy
[15:44:14] [miraheze/puppet] paladox 83fda05 - Update stunnel.conf
[15:44:16] [puppet] paladox synchronize pull request #1130: Change lizardfs.miraheze.org backend to lizardfs6 - https://git.io/JezYD
[15:45:57] !log switch swap off on lizardfs6
[15:46:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:54:22] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JezOq
[15:54:24] [miraheze/puppet] paladox 3a1681e - bacula: Replace lizardfs4 with lizardfs6
[15:56:13] [puppet] paladox closed pull request #1130: Change lizardfs.miraheze.org backend to lizardfs6 - https://git.io/JezYD
[15:56:15] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/JezO3
[15:56:16] [miraheze/puppet] paladox e533ec1 - Change lizardfs.miraheze.org backend to lizardfs6 (#1130) * Change lizardfs.miraheze.org backend to lizardfs6 * Update stunnel.conf
[15:57:19] [puppet] paladox deleted branch paladox-patch-7 - https://git.io/vbiAS
[15:57:21] [miraheze/puppet] paladox deleted branch paladox-patch-7
[16:01:14] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:12] PROBLEM - misc3 Puppet on misc3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 51 minutes ago with 0 failures
[16:07:02] PROBLEM - bacula1 Bacula Static on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: lizardfs4-fd
[16:09:23] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JezO2
[16:09:25] [miraheze/puppet] paladox db232d2 - Update nrpe.cfg.erb
[17:08:23] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jez3h
[17:08:24] [miraheze/puppet] paladox 5b31e35 - Update mfsmaster.cfg.erb
[17:09:52] !log restart lizardfs-master on lizardfs6
[17:11:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:12:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:12:20] !log restart php7.2-fpm on mw[123]
[17:12:24] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[17:12:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:12:35] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[17:13:01] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[17:14:05] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:14:20] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[17:14:30] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[17:15:01] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:15:38] [mw-config] Pix1234 deleted branch Pix1234-patch-1 - https://git.io/vbvb3
[17:15:39] [miraheze/mw-config] Pix1234 deleted branch Pix1234-patch-1
[17:17:05] [mw-config] paladox closed pull request #2771: ensure we cache api thumbs for commons.wikimedia.org - https://git.io/Je8mm
[17:17:07] [miraheze/mw-config] paladox deleted branch paladox-patch-1
[17:17:09] [mw-config] paladox deleted branch paladox-patch-1 - https://git.io/vbvb3
[17:21:41] [miraheze/mediawiki] paladox pushed 5 commits to REL1_34 [+0/-0/±16] https://git.io/Jezs8
[17:21:42] [miraheze/mediawiki] it-spiderman 5c0938a - Update git submodules * Update extensions/OATHAuth from branch 'REL1_34' to 99b1f06a2f411c08f12e3fa80257ac0a49612345 - Ask for user re-auth only on initial requests Make sure user is asked to re-authenticate (if needed) only on initial request, not after submitting the form Bug: T235645 Change-Id: Ic315f49ac5810da0a703ccf4b51f558d17f905fb
[17:21:44] [miraheze/mediawiki] reedy be6e8ee - Update git submodules * Update extensions/OATHAuth from branch 'REL1_34' to eacb5b281ae3fac18f394342a419d63ad6064d9c - Bump 0.4.4 Change-Id: I3097526954c18c6759461f800168ebeb4a92e9e7
[17:21:45] [miraheze/mediawiki] Krinkle d96006b - Add release notes for discontinuation of IE6/7 support Bug: T232563 Change-Id: I95c693d7c3059f441489d61f3fce597f02bedc0e
[17:21:47] [miraheze/mediawiki] ... and 2 more commits.
[17:24:40] [miraheze/mediawiki] paladox pushed 1 commit to REL1_34 [+0/-0/±1] https://git.io/Jezsu
[17:24:41] [miraheze/mediawiki] Pix1234 aea3a1e - Move all submodule urls to github
[17:27:14] [miraheze/mediawiki] paladox pushed 1 commit to REL1_34 [+0/-0/±1] https://git.io/JezsV
[17:27:15] [miraheze/mediawiki] paladox acdcadf - Update CW
[17:30:59] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.99, 6.79, 6.02
[17:33:03] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.78, 6.78, 6.11
[17:33:47] PROBLEM - misc3 Lizardfs Master Port 3 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9421: Connection refused
[17:34:22] PROBLEM - misc3 Lizardfs Master Port 1 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9419: Connection refused
[17:34:38] PROBLEM - misc3 Lizardfs Master Port 2 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9420: Connection refused
[17:51:39] paladox: ^ expected?
[17:52:44] yes
[18:32:40] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.20, 7.24, 6.67
[18:34:44] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.86, 7.08, 6.68
[18:36:46] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.62, 6.77, 6.60
[18:44:45] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.41, 7.00, 6.74
[18:46:44] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.29, 6.57, 6.60
[19:23:34] hi
[19:23:46] 503
[19:23:50] :(
[19:48:40] hispano76: still?
[19:49:04] it'll still be happening as only 45gb has migrated to the new server...
[19:49:15] so we won't know till everything is migrated
[19:51:51] paladox: LF6?
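The "Lizardfs Master Port" alerts above are plain TCP connect probes against the LizardFS master's ports (9419-9421); "Connection refused" simply means nothing was listening while the master was being restarted on the new host. A minimal sketch of such a probe in Python (a hypothetical helper for illustration, not the actual NRPE plugin):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Rough analogue of the monitoring probe that produced the
    "connect to address ... Connection refused" lines above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

# Example (host/port taken from the alert text above):
# check_tcp("185.52.1.71", 9421)
```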
[19:51:57] yup
[19:52:14] :)
[19:53:04] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.22, 4.48, 2.83
[19:55:01] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.32, 3.92, 2.85
[19:56:58] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.56, 2.75, 2.54
[20:20:04] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.01, 4.08, 3.06
[20:20:33] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[20:21:09] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:21:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:21:42] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3
[20:21:54] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[20:22:02] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.43, 3.80, 3.08
[20:23:59] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.57, 2.90, 2.83
[20:24:25] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[20:25:47] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[20:26:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[20:27:32] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[20:27:50] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[20:59:31] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:00:18] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.92, 3.24, 2.59
[21:02:25] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.28, 3.18, 2.68
[21:03:37] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[21:03:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.41, 3.14, 2.26
[21:04:07] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[21:04:41] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:05:30] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[21:05:46] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.58, 2.43, 2.11
[21:07:31] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:07:33] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[21:07:41] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[21:07:59] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[21:08:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[21:19:37] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.70, 4.34, 2.95
[21:25:38] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.44, 2.82, 2.88
[22:02:26] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.50, 2.71, 2.10
[22:02:58] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[22:04:24] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.20, 2.32, 2.02
[22:04:54] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[22:12:40] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.88, 6.17, 5.68
[22:13:05] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[22:15:01] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:20:47] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:21:14] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:21:30] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[22:21:59] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[22:23:10] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.12, 7.10, 6.40
[22:23:19] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[22:24:02] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[22:25:09] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.20, 6.68, 6.32
[22:25:17] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[22:25:30] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[22:29:25] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[22:29:35] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[22:30:29] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[22:30:50] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:30:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.56, 5.02, 3.20
[22:31:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:31:21] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[22:31:31] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[22:32:30] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[22:34:55] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.74, 3.52, 3.11
[22:36:52] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.47, 2.54, 2.79
[23:06:15] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[23:10:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 81.4.109.133/cpweb
[23:12:36] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:12:37] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:28:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.50, 3.31, 2.43
[23:30:04] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.72, 2.30, 2.17
[23:46:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[23:47:58] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:59:39] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JezRO
[23:59:41] [miraheze/puppet] paladox 62a3b48 - php: Increase max_requests to 10000
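The final commit above ("php: Increase max_requests to 10000") most likely refers to php-fpm's pm.max_requests pool directive, which sets how many requests a worker process serves before it is recycled. A minimal sketch of such a setting (the file path and pool name are illustrative, not taken from the Miraheze puppet repo):

```ini
; /etc/php/7.2/fpm/pool.d/www.conf (illustrative path and pool name)
[www]
; Recycle each worker after this many requests. Raising the limit
; reduces respawn churn under load, at the cost of letting slow
; memory leaks in workers live longer before the process restarts.
pm.max_requests = 10000
```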