[00:02:43] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 9.59, 7.28, 5.58
[00:02:50] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 9.51, 7.01, 5.66
[00:05:01] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.09, 17.86, 13.35
[00:06:03] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 51.195.236.219/cpweb, 51.222.25.132/cpweb
[00:07:04] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 17.71, 17.98, 13.97
[00:07:29] PROBLEM - mw10 MediaWiki Rendering on mw10 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:07:37] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[00:07:46] PROBLEM - cp12 Stunnel Http for mw10 on cp12 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:09:27] RECOVERY - mw10 MediaWiki Rendering on mw10 is OK: HTTP OK: HTTP/1.1 200 OK - 20739 bytes in 0.983 second response time
[00:09:34] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[00:09:44] RECOVERY - cp12 Stunnel Http for mw10 on cp12 is OK: HTTP OK: HTTP/1.1 200 OK - 15240 bytes in 0.364 second response time
[00:10:02] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:11:50] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 8.65, 7.44, 5.65
[00:12:21] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.75, 6.96, 5.51
[00:15:49] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.77, 7.89, 6.30
[00:17:50] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 9.56, 8.32, 6.63
[00:19:49] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.38, 7.66, 6.61
[00:20:18] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.62, 6.47, 5.95
[00:21:49] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 8.70, 7.67, 6.72
[00:23:49] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.71, 7.32, 6.71
[00:25:49] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 3.99, 6.17, 6.36
[00:42:43] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 4.80, 5.81, 7.59
[00:50:43] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 10.38, 7.15, 7.29
[00:52:42] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 4.11, 5.87, 6.81
[00:52:49] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 4.60, 6.89, 7.90
[00:54:41] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.58, 5.42, 6.53
[00:58:49] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 3.34, 4.64, 6.57
[02:29:06] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[02:38:07] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[05:19:22] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[05:29:07] PROBLEM - ping4 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 351.74 ms
[05:31:16] !log disabled puppet on mw*
[05:31:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:32:12] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOynp
[05:32:13] [miraheze/mw-config] Reception123 05a1e68 - switch SimpleBlogPage to use wfLoad
[05:32:15] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[05:33:12] miraheze/mw-config - Reception123 the build passed.
[05:33:20] miraheze/mw-config - Reception123 the build passed.
[05:33:54] PROBLEM - mw11 Puppet on mw11 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 30 minutes ago with 0 failures
[05:34:26] PROBLEM - mw9 Puppet on mw9 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 30 minutes ago with 0 failures
[05:34:57] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±2] https://git.io/JOyc3
[05:34:58] [miraheze/mediawiki] Reception123 a1714e5 - switch SimpleBlogPage to use Universal Omega fork (T7156)
[05:35:18] PROBLEM - mw8 Puppet on mw8 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 31 minutes ago with 0 failures
[05:35:35] PROBLEM - mw10 Puppet on mw10 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 31 minutes ago with 0 failures
[05:38:05] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[05:41:10] !log enable puppet on mw*
[05:41:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:41:59] !log cd /srv/mediawiki/w/extensions && sudo -u www-data git pull ; sudo -u www-data git submodule sync ; sudo -u www-data git submodule update && sudo puppet agent --enable && sudo puppet agent -t on mw*
[05:42:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:42:25] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[05:43:17] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[05:43:38] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures
[05:43:54] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures
[05:46:05] !log disable puppet on mw*
[05:46:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:48:23] PROBLEM - mw9 Puppet on mw9 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 4 minutes ago with 0 failures
[05:48:35] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOyC5
[05:48:36] [miraheze/mw-config] Reception123 dfa6f86 - switch FancyBoxThumbs to use wfLoad
[05:48:38] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[05:49:33] miraheze/mw-config - Reception123 the build passed.
[05:49:39] miraheze/mw-config - Reception123 the build passed.
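The `!log` entry at 05:41:59 packs the whole fork-switch deploy into one line. An annotated expansion, as a sketch: commands and paths are taken verbatim from the log, and running it on each mw* host is how the trailing "on mw*" reads.

```bash
# Annotated expansion of the one-line deploy logged at 05:41:59.
cd /srv/mediawiki/w/extensions
sudo -u www-data git pull              # update the extension tree as the web user
sudo -u www-data git submodule sync    # re-read submodule URLs, picking up swapped forks
sudo -u www-data git submodule update  # check out the commits the superproject now pins
sudo puppet agent --enable             # lift the "forks" disable set at 05:31:16
sudo puppet agent -t                   # immediate run; -t exits 2 when changes were applied
```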
[05:49:42] PROBLEM - dbbackup1 MariaDB c4 on dbbackup1 is UNKNOWN:
[05:50:02] PROBLEM - mw8 Puppet on mw8 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 6 minutes ago with 0 failures
[05:50:09] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±2] https://git.io/JOyWv
[05:50:10] [miraheze/mediawiki] Reception123 c8c6383 - switch FancyBoxThumbs to use Universal Omega fork (T7156)
[05:50:38] PROBLEM - mw10 Puppet on mw10 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 7 minutes ago with 0 failures
[05:51:49] PROBLEM - mw11 Puppet on mw11 is WARNING: WARNING: Puppet is currently disabled, message: Reception123 - forks, last run 8 minutes ago with 0 failures
[05:52:29] PROBLEM - dbbackup1 MariaDB c4 on dbbackup1 is CRITICAL: Can't connect to MySQL server on 'dbbackup1.miraheze.org' (111)
[05:53:51] PROBLEM - datacrondatabase.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for datacrondatabase.com could not be found
[05:53:52] PROBLEM - www.rothwell-leeds.co.uk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.rothwell-leeds.co.uk could not be found
[05:54:00] PROBLEM - olwest.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for olwest.org could not be found
[05:54:05] PROBLEM - trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for trollpasta.com could not be found
[05:54:07] PROBLEM - zh.gyaanipedia.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for zh.gyaanipedia.com could not be found
[05:56:56] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOyWp
[05:56:58] [miraheze/mw-config] Reception123 aad137e - rv FancyBoxThumbs change
[05:56:59] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[05:57:52] miraheze/mw-config - Reception123 the build passed.
[05:58:03] miraheze/mw-config - Reception123 the build passed.
[05:58:07] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/JOylI
[05:58:09] [miraheze/mediawiki] Reception123 346ce61 - switch back to other version
[06:00:41] RECOVERY - www.rothwell-leeds.co.uk - reverse DNS on sslhost is OK: rDNS OK - www.rothwell-leeds.co.uk reverse DNS resolves to cp10.miraheze.org
[06:00:43] RECOVERY - datacrondatabase.com - reverse DNS on sslhost is OK: rDNS OK - datacrondatabase.com reverse DNS resolves to cp10.miraheze.org
[06:00:47] RECOVERY - trollpasta.com - reverse DNS on sslhost is OK: rDNS OK - trollpasta.com reverse DNS resolves to cp10.miraheze.org
[06:00:53] RECOVERY - zh.gyaanipedia.com - reverse DNS on sslhost is OK: rDNS OK - zh.gyaanipedia.com reverse DNS resolves to cp11.miraheze.org
[06:00:58] RECOVERY - olwest.org - reverse DNS on sslhost is OK: rDNS OK - olwest.org reverse DNS resolves to cp11.miraheze.org
[06:02:45] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±2] https://git.io/JOylM
[06:02:46] [miraheze/mediawiki] Reception123 54026f5 - switch back
[06:06:49] !log cd /srv/mediawiki/w/extensions && sudo -u www-data git pull ; sudo -u www-data git submodule sync ; sudo -u www-data git submodule update && sudo puppet agent --enable && sudo puppet agent -t on mw*
[06:06:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:07:01] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:07:49] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[06:08:22] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 13 seconds ago with 0 failures
[06:08:57] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures
[06:25:40] RECOVERY - graylog2 APT on graylog2 is OK: APT OK: 28 packages available for upgrade (0 critical updates).
[06:42:17] RECOVERY - puppet3 APT on puppet3 is OK: APT OK: 27 packages available for upgrade (0 critical updates).
[06:45:20] RECOVERY - ping4 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 261.37 ms
[07:02:03] [mw-config] Reception123 closed pull request #3849: Update SimpleBlogPage config - https://git.io/JOPMj
[07:02:05] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±2] https://git.io/JOyur
[07:02:06] [miraheze/mw-config] Universal-Omega eafc425 - Update SimpleBlogPage config (#3849)
[07:02:08] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[07:03:06] miraheze/mw-config - Reception123 the build passed.
[07:03:08] miraheze/mw-config - Reception123 the build passed.
[07:38:16] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/JOyVJ
[07:38:17] [miraheze/mediawiki] Reception123 1eaa3e8 - Update SimpleBlogPage
[07:58:34] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/JOyr9
[07:58:36] [miraheze/mediawiki] Reception123 c36877d - Update SimpleBlogPage
[08:03:48] !log sudo -u www-data php /srv/mediawiki/w/maintenance/mergeMessageFileList.php --output /srv/mediawiki/config/ExtensionMessageFiles.php --wiki loginwiki on mw*
[08:03:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:03:58] !log sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildLocalisationCache.php --wiki loginwiki on mw*
[08:04:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:09:23] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[08:20:10] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOy6s
[08:20:11] [miraheze/mw-config] Reception123 83e7fbd - extension.json for FancyBoxThumbs
[08:20:13] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[08:21:08] miraheze/mw-config - Reception123 the build passed.
[08:21:15] miraheze/mw-config - Reception123 the build passed.
[08:23:32] revi: ^ MHBot keeps banning GH actions bots
[08:23:34] *RhinosF1
[08:23:40] (sorry for the accidental ping re vi)
[08:30:33] Reception123: it's the _
[08:45:17] ah
[08:45:25] !log disabled puppet on mw*
[08:45:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:45:40] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±2] https://git.io/JOyP9
[08:45:42] [miraheze/mediawiki] Reception123 0bb7531 - try FancyBoxThumbs switch again
[08:46:04] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOyPd
[08:46:05] [miraheze/mw-config] Reception123 0f4774a - use wfLoad for FancyBoxThumbs
[08:46:07] [mw-config] Reception123 synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[08:47:05] miraheze/mw-config - Reception123 the build passed.
[08:47:07] miraheze/mw-config - Reception123 the build passed.
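The two maintenance runs logged at 08:03 are a pair: the first regenerates the merged list of extension message files, which is the input the localisation-cache rebuild then consumes, so the order matters. The commands below are verbatim from the log; only the line wrapping and comments are added.

```bash
# Regenerate the merged extension i18n file list...
sudo -u www-data php /srv/mediawiki/w/maintenance/mergeMessageFileList.php \
    --output /srv/mediawiki/config/ExtensionMessageFiles.php --wiki loginwiki

# ...then rebuild the localisation cache from it (loginwiki is the
# reference wiki used in the log; any wiki on the shared farm would do).
sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildLocalisationCache.php \
    --wiki loginwiki
```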
[08:47:25] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 4 seconds ago with 0 failures
[08:47:54] PROBLEM - mw11 Puppet on mw11 is WARNING: WARNING: Puppet is currently disabled, message: forks, last run 14 minutes ago with 0 failures
[08:47:58] PROBLEM - mw9 Puppet on mw9 is WARNING: WARNING: Puppet is currently disabled, message: forks, last run 13 minutes ago with 0 failures
[08:48:03] PROBLEM - mw10 Puppet on mw10 is WARNING: WARNING: Puppet is currently disabled, message: forks, last run 14 minutes ago with 0 failures
[08:48:58] !log cd /srv/mediawiki/w/extensions && sudo -u www-data git pull ; sudo -u www-data git submodule sync ; sudo -u www-data git submodule update && sudo puppet agent --enable && sudo puppet agent -t on mw*
[08:49:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[08:49:16] PROBLEM - mw8 Puppet on mw8 is WARNING: WARNING: Puppet is currently disabled, message: forks, last run 15 minutes ago with 0 failures
[08:49:54] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[08:49:59] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[08:50:04] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 4 seconds ago with 0 failures
[08:51:16] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:08:01] PROBLEM - jobrunner4 Puppet on jobrunner4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[09:08:26] PROBLEM - jobrunner3 Puppet on jobrunner3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[09:11:25] !log cd /srv/mediawiki/w/extensions && sudo -u www-data git pull ; sudo -u www-data git submodule sync ; sudo -u www-data git submodule update && sudo puppet agent --enable && sudo puppet agent -t on jbr*
[09:11:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[09:12:33] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/JOyMt
[09:12:34] [miraheze/mediawiki] Reception123 6ca05c7 - Update Cosmos
[09:13:57] RECOVERY - jobrunner4 Puppet on jobrunner4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:14:13] RECOVERY - jobrunner3 Puppet on jobrunner3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:37:29] Reception123: I just got 2 exceptions that cleared on refresh
[09:37:45] PROBLEM - jobrunner4 MediaWiki Rendering on jobrunner4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18515 bytes in 0.231 second response time
[09:37:53] PROBLEM - jobrunner3 MediaWiki Rendering on jobrunner3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18515 bytes in 0.255 second response time
[09:38:00] That's fun
[09:38:01] RhinosF1: oh, where?
[09:38:07] Reception123: meta
[09:38:34] Can you check logs
[09:38:45] jbr3/4 are crit
[09:38:51] PROBLEM - cp11 Varnish Backends on cp11 is CRITICAL: 4 backends are down. mw8 mw9 mw10 mw11
[09:38:57] PROBLEM - mw10 MediaWiki Rendering on mw10 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18509 bytes in 0.224 second response time
[09:39:08] urgh well we need to revert Cosmos then
[09:39:12] Yep
[09:39:23] PROBLEM - cp12 Varnish Backends on cp12 is CRITICAL: 4 backends are down. mw8 mw9 mw10 mw11
[09:39:23] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[09:39:27] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[09:39:57] PROBLEM - mw11 MediaWiki Rendering on mw11 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18509 bytes in 0.218 second response time
[09:40:00] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[09:40:03] PROBLEM - mw9 MediaWiki Rendering on mw9 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18508 bytes in 0.230 second response time
[09:40:12] PROBLEM - cp10 Varnish Backends on cp10 is CRITICAL: 4 backends are down. mw8 mw9 mw10 mw11
[09:40:26] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 4 backends are down. mw8 mw9 mw10 mw11
[09:40:55] PROBLEM - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is CRITICAL: CRITICAL - NGINX Error Rate is 65%
[09:41:02] PROBLEM - mw8 MediaWiki Rendering on mw8 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 18508 bytes in 0.230 second response time
[09:41:04] Is that why all wikis are down?
[09:41:08] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 44%
[09:41:27] yes, will be fixed shortly
[09:42:16] PROBLEM - cp11 HTTP 4xx/5xx ERROR Rate on cp11 is CRITICAL: CRITICAL - NGINX Error Rate is 62%
[09:43:08] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 26%
[09:44:15] RECOVERY - cp11 HTTP 4xx/5xx ERROR Rate on cp11 is OK: OK - NGINX Error Rate is 25%
[09:44:55] PROBLEM - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is WARNING: WARNING - NGINX Error Rate is 45%
[09:46:55] PROBLEM - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is CRITICAL: CRITICAL - NGINX Error Rate is 74%
[09:47:04] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 41%
[09:47:11] PROBLEM - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is CRITICAL: CRITICAL - NGINX Error Rate is 60%
[09:48:55] PROBLEM - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is WARNING: WARNING - NGINX Error Rate is 42%
[09:49:05] PROBLEM - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is WARNING: WARNING - NGINX Error Rate is 44%
[09:50:55] PROBLEM - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is CRITICAL: CRITICAL - NGINX Error Rate is 77%
[09:51:02] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 39%
[09:52:54] RECOVERY - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is OK: OK - NGINX Error Rate is 39%
[09:53:30] Ahem, it has been more than 10 minutes and SRE seems online. Updates, please?
[09:54:17] unfortunately unless RhinosF1 has a local clone of mediawiki it will have to wait until I get one as the current one was broken
[09:54:56] Oh, okay.
[09:55:18] yeah, I've had to change extension forks and the last update (of the Cosmos skin) didn't seem to go as planned
[10:00:25] Reception123: use git revert on server and stop puppet
[10:00:33] PROBLEM - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is WARNING: WARNING - NGINX Error Rate is 56%
[10:04:21] RECOVERY - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is OK: OK - NGINX Error Rate is 38%
[10:04:53] RECOVERY - mw8 MediaWiki Rendering on mw8 is OK: HTTP OK: HTTP/1.1 200 OK - 20727 bytes in 0.358 second response time
[10:05:16] @R4356th: coming back up
[10:05:27] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:06:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[10:06:52] Sounds great. 🙂
[10:06:55] RECOVERY - cp12 HTTP 4xx/5xx ERROR Rate on cp12 is OK: OK - NGINX Error Rate is 9%
[10:07:09] !log cd /srv/mediawiki/w/skins && sudo -u www-data git revert 6ca05c7 on mw*
[10:08:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[10:09:17] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 13.59, 7.73, 3.82
[10:09:27] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[10:09:32] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/JOyHf
[10:09:34] [miraheze/mediawiki] Reception123 aa7a74b - Revert "Update Cosmos"
[10:10:00] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[10:33:23] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures
[10:34:57] RECOVERY - mw10 MediaWiki Rendering on mw10 is OK: HTTP OK: HTTP/1.1 200 OK - 20729 bytes in 0.185 second response time
[10:35:36] RECOVERY - jobrunner4 MediaWiki Rendering on jobrunner4 is OK: HTTP OK: HTTP/1.1 200 OK - 20741 bytes in 0.214 second response time
[10:35:51] RECOVERY - jobrunner3 MediaWiki Rendering on jobrunner3 is OK: HTTP OK: HTTP/1.1 200 OK - 20741 bytes in 0.272 second response time
[10:35:53] RECOVERY - mw9 MediaWiki Rendering on mw9 is OK: HTTP OK: HTTP/1.1 200 OK - 20727 bytes in 0.221 second response time
[10:35:57] RECOVERY - mw11 MediaWiki Rendering on mw11 is OK: HTTP OK: HTTP/1.1 200 OK - 20729 bytes in 0.186 second response time
[10:36:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[10:36:02] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 7 backends are healthy
[10:36:12] RECOVERY - cp10 Varnish Backends on cp10 is OK: All 7 backends are healthy
[10:36:51] RECOVERY - cp11 Varnish Backends on cp11 is OK: All 7 backends are healthy
[10:37:23] RECOVERY - cp12 Varnish Backends on cp12 is OK: All 7 backends are healthy
[10:37:27] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:45:17] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 3.28, 4.64, 7.88
[10:49:17] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 3.25, 3.76, 6.75
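The recovery above follows the pattern RhinosF1 suggests at 10:00:25 and Reception123 logs at 10:07:09: stop Puppet from re-applying the broken checkout, revert the bad commit in place as the web user, then spot-check rendering. A sketch with the logged hash; the disable message, `--no-edit`, and the curl check are illustrative additions, not part of the logged command.

```bash
sudo puppet agent --disable 'reverting broken Cosmos update'  # lock message is an assumption

cd /srv/mediawiki/w/skins
sudo -u www-data git revert --no-edit 6ca05c7  # hash from the log; --no-edit skips the editor prompt

# Spot-check that MediaWiki renders again (endpoint chosen for illustration).
curl -sI https://meta.miraheze.org/w/index.php | head -n 1  # expect HTTP 200, not 500

sudo puppet agent --enable
```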
[11:14:19] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 5.17, 4.86, 2.40
[11:16:19] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 1.21, 3.43, 2.17
[11:18:18] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.22, 2.31, 1.91
[13:40:23] [puppet] RhinosF1 opened pull request #1745: CSP: add iwiki.eu.org mirrors - https://git.io/JOSGq
[14:09:53] hey
[14:10:40] hey SPF|Cloud
[14:10:45] Wait, hold on, is SPF|Cloud actually Southparkfan?
[14:10:50] yes
[14:10:55] it's his IRC nickname
[14:11:05] Well, I thought it had to have been him.
[14:11:18] yeah
[14:11:36] Just like how Void-walker is Void here.
[14:14:00] SPF|Cloud is the real Southparkfan :P
[14:14:21] Yeah, go figure.
[14:15:10] Hey SPF|Cloud
[14:16:10] Reception123, not sure if you saw as he posted the commit after you went to bed yesterday, but with this commit (https://git.io/JODgM), you can now directly log to the Tech:Server admin log from your shell session
[14:16:11] [ Comparing 3e03caad3e46...3b0191964ddb · miraheze/puppet · GitHub ] - git.io
[14:16:51] oh, cool
[14:17:28] yeah, eventually it could even be more automated potentially John said, but it should still save you time as you can just scrollback and insert `!log` before your command
[14:17:50] MirahezeLSBot will then issue the command to IRC and MirahezeLogbot will log it on-wiki
[14:18:58] We do at some point want to look at an mwscript wrapper
[14:19:13] RhinosF1, oh what would that do?
[14:19:33] Essentially save writing full commands out and link into the log script
[14:20:00] RhinosF1, ah, yeah that'd be cool :)
[14:20:12] It's not hard
[14:20:16] It's just time
[14:20:42] oh, so potentially you have the technical knowledge to write that wrapper, it's just a matter of finding the time?
[14:21:13] even more bots?
[14:21:19] lol
[14:21:29] !log [southparkfan@mw9] hey
[14:21:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[14:21:32] yeah fully 1/3 of this channel are bots
[14:21:38] awesome
[14:21:46] it is pretty cool, isn't it? :)
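The `!log` round-trip SPF|Cloud demonstrates at 14:21:29 can be pictured as a small shell helper. This is purely illustrative: the log shows the observed behaviour (the message surfaces on IRC as `[user@host] <text>` and is archived on Tech:Server_admin_log), not the implementation from the linked puppet commit, so the function name and the transport below are assumptions.

```bash
# Toy sketch only; the real helper's transport to MirahezeLSBot is not
# shown in the log, so syslog is used here as a stand-in.
log() {
    local msg="[$(whoami)@$(hostname -s)] $*"
    logger -t server-admin-log "$msg"  # stand-in relay; real mechanism unknown
    echo "Logged: $msg"
}

# Usage: log run test backup on db11 with six threads
```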
[14:22:16] this will be very useful for creating cookbooks and wrappers
[14:22:56] cool
[14:23:09] dmehus: oh I could do it asleep
[14:23:17] e.g. 'mwscript' instead of 'sudo -u www-data php' or 'php' (the latter is very bad) -> 'mwscript' will run maintenance scripts with the right privileges and log the run via logsalmsg
[14:23:19] RhinosF1, oh that's cool :)
[14:23:23] Now John has given us MirahezeLSBot
[14:23:43] and a reboot script to automatically log a server reboot and downtime the server in icinga
[14:23:52] automation is a good thing
[14:24:22] SPF|Cloud: the sudo -u www-data php /srv/mediawiki/w/maintenance (with handling for extensions) and then log via logsalmsg was my idea
[14:24:42] SPF|Cloud, that'd be awesome if we could automate the logging of maintenance script runs like that :)
[14:24:53] 'handling for extensions' -> or just go for /srv/mediawiki/w
[14:24:54] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 1.91, 4.63, 2.66
[14:25:18] typing 'maintenance/rebuildrecentchanges.php' instead of 'rebuildrecentchanges.php' is not an issue
[14:25:50] ah
[14:25:56] and I'm looking forward to Semantic MediaWiki
[14:26:17] @Lake, signs ^ seem favourable
[14:26:45] it was an idea of mine to fetch infrastructure data from PuppetDB to create a CMDB on meta wiki
[14:26:53] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.06, 3.40, 2.45
[14:27:06] SPF|Cloud, what does CMDB stand for?
[14:27:23] configuration management database
[14:27:27] ah, okay
[14:29:05] there's a book called 'A Semantic Wiki-based Platform for IT Service Management'
[14:30:09] dmehus: oooooh, I like it
[14:30:37] I might use it because, while I like Cargo and it's pretty easy to store data, it's a bit awkward to display the data the way I want
[14:31:45] and would be pretty cool, I was reading recently that [[mw:skin:Tweeki]] is optimized for Semantic wikis
[14:31:46] https://www.mediawiki.org/wiki/skin:Tweeki
[14:31:47] [ Skin:Tweeki - MediaWiki ] - www.mediawiki.org
[14:33:04] SPF|Cloud: typing rebuildrc.php would auto add maintenance/ and typing extensions/CU/script.php would add extensions/CU/maintenance/script.php
[14:33:28] it's possible
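A minimal sketch of the mwscript wrapper idea discussed above, under stated assumptions: `logsalmsg` is taken on faith as the logging hook named at 14:23:17 (its exact interface is not shown in the log), and the path-rewriting rules are the ones RhinosF1 describes at 14:33:04.

```bash
# Hypothetical mwscript wrapper: run a maintenance script as www-data,
# auto-complete the path, and log the run. Not the real implementation.
mwscript() {
    local script="$1"; shift
    case "$script" in
        maintenance/*.php|extensions/*/maintenance/*.php)
            ;;                                           # already a full relative path
        extensions/*)                                    # extensions/CU/script.php
            script="${script%/*}/maintenance/${script##*/}" ;;
        *)                                               # bare rebuildrecentchanges.php
            script="maintenance/$script" ;;
    esac
    logsalmsg "mwscript $script $*"                      # logging interface assumed
    sudo -u www-data php "/srv/mediawiki/w/$script" "$@" # right privileges, as per 14:23:17
}

# e.g. mwscript rebuildrecentchanges.php --wiki loginwiki
```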
[14:36:22] !log https://phabricator.miraheze.org/T5877#140588: run test backup on db11 with six threads
[14:36:23] [ ⚓ T5877 Revise MariaDB backup strategy ] - phabricator.miraheze.org
[14:36:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[14:37:03] it's great that MirahezeLSBot also says the things logged here :)
[14:38:07] cloud3 offers neat performance
[14:38:47] that server is under high load now (because I am running mydumper with six threads), but wiki response times are still normal
[14:39:36] oh it's interesting that there's a better performing alternative to mysqldump
[14:40:40] the reason I went with mydumper is the option to have per-table sql files
[14:41:37] oh I thought it was because of performance
[14:41:38] got a corrupted table on the largest Miraheze wiki? you don't have to dig through a 20 GB sql file to get the right table, just restore 'testwiki.mytable.sql.gz'
[14:41:57] performance is one of the reasons too, but not as important
[14:42:26] though it is possible to do per-table dumps with mysqldump too, it's just mysqldump [database] [table] > table.sql
[14:42:38] > it's great that MirahezeLSBot also says the things logged here :)
[14:42:38] Reception123, yep. Eventually this channel will consist of bots talking to each other. :)
[14:42:39] I've done that many times to be extra sure I don't do something wrong on a particular table
[14:42:40] but the dumps won't be consistent
[14:42:46] Oh
[14:43:19] dmehus: heh, the automatic tasks and bots have definitely gone up since 2015
[14:43:21] @Lake, yeah, definitely, Cargo is still a very versatile wiki database that is simpler and less verbose than Wikibase :)
[14:43:43] ManageWiki, CreateWiki (soon), SSL renewals
[14:43:44] Do we have Tweeki on Miraheze, Lake?
[14:43:49] imagine dumping the metawiki.logging table five minutes later than metawiki.user, you will see a 'created account' entry for a user account that cannot be found in the metawiki.user table, because the metawiki.logging table was dumped later
[14:43:52] dmehus: we do yes :)
[14:43:55] dmehus: yes, we do
[14:44:05] by the way, I don't know if it was updated
[14:44:12] I know that because once it conflicted with MobileFrontend so we had to add a warning there
[14:44:14] I know the author made a major update to the skin recently
[14:44:36] SPF|Cloud: oh, but how does mydumper change that?
[14:45:27] mydumper dumps every table in one run, it will use a (short) lock to ensure transactional consistency
[14:45:40] Oh, I see
[14:45:41] Reception123, very true about the increased automation, which I completely agree with SPF|Cloud is a good thing :)
[14:45:55] that makes sense then
[14:45:56] if you start the backup at 14:00, all tables will be dumped with the content from 14:00
[14:46:14] even if the backup process takes over four hours
[14:46:27] @Lake, ah, okay, interesting. We could have Reception123 check if there's any updates for Tweeki we can install?
[14:46:30] oh that's nice then
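The two approaches SPF|Cloud compares above, as a sketch. The mydumper flags are real long options, but connection options are omitted, and the output directory and six-thread count mirror the logged test run; the mysqldump lines are the per-table form quoted at 14:42:26.

```bash
# mydumper: one run, one consistent snapshot, six worker threads, and one
# compressed file per table (e.g. testwiki.mytable.sql.gz), per 14:40-14:46.
mydumper --threads 6 --compress --outputdir /srv/backups/db11

# mysqldump can also dump single tables, but each invocation is its own
# snapshot, so tables dumped minutes apart can disagree -- the
# logging-vs-user example at 14:43:49.
mysqldump metawiki user > metawiki.user.sql
mysqldump metawiki logging > metawiki.logging.sql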
[14:46:54] dmehus: I can look yeah, though I've not been having fun with extensions lately and there's still many to go
[14:47:04] Reception123, heh true yeah
[14:50:41] PROBLEM - db11 Current Load on db11 is WARNING: WARNING - load average: 7.28, 6.87, 4.79
[14:52:41] RECOVERY - db11 Current Load on db11 is OK: OK - load average: 6.69, 6.77, 5.01
[14:54:12] Automation is good
[14:54:26] Then we can spend more time doing what we should be doing
[14:56:41] PROBLEM - db11 Current Load on db11 is WARNING: WARNING - load average: 7.10, 6.99, 5.52
[15:00:05] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[15:00:28] Reception123: did they get back ^
[15:03:06] not that I'm aware of
[15:03:22] dmehus: to whoever you were talking to last night, yes we have comment box and it was us that discovered CVE-2021-31550
[15:05:06] @Lake: ^
[15:06:49] ooo interesting
[15:06:51] good job
[15:13:37] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=mw10.miraheze.org&var-port=9100
[15:13:38] [ Grafana ] - grafana.miraheze.org
[15:13:45] why did mw10 stop serving traffic?
[15:18:28] paladox: you mean why mw*
[15:18:40] Because Cosmos update by Reception123
[15:18:46] oh ok
[15:19:34] yeah... whoops :(
[15:21:15] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOS2v
[15:21:17] [miraheze/dns] paladox 13134f1 - Depool cp10
[15:23:24] RhinosF1, yeah we figured it out in #general on Discord after that comment on IRC
[15:23:45] Ok dmehus
[15:26:56] * dmehus is surprised DigitalOcean had an IPO just recently and he never heard about it
[15:40:42] RECOVERY - db11 Current Load on db11 is OK: OK - load average: 5.69, 6.47, 6.73
[15:58:57] high load on db11
[15:58:58] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=db11.miraheze.org&var-port=9100
[15:58:59] [ Grafana ] - grafana.miraheze.org
[15:59:11] SPF|Cloud: wondering are you working on db11?
[15:59:30] yes
[15:59:52] ok
[15:59:59] 16:36:22 <+SPF|Cloud> ![anti-log]log https://phabricator.miraheze.org/T5877#140588: run test backup on db11 with six threads
[16:00:00] [ ⚓ T5877 Revise MariaDB backup strategy ] - phabricator.miraheze.org
[16:00:05] > that server is under high load now (because I am running mydumper with six threads), but wiki response times are still normal
[16:00:05] paladox, ^
[16:00:21] ok
[16:52:41] PROBLEM - db11 Current Load on db11 is WARNING: WARNING - load average: 7.17, 6.75, 6.25
[16:54:41] RECOVERY - db11 Current Load on db11 is OK: OK - load average: 6.04, 6.47, 6.21
[17:04:27] [mw-config] Universal-Omega opened pull request #3852: Convert AutoCreatePage to ExtensionRegistry - https://git.io/JOSQk
[17:05:00] [mw-config] Universal-Omega synchronize pull request #3852: Convert AutoCreatePage to ExtensionRegistry - https://git.io/JOSQk
[17:05:22] miraheze/mw-config - Universal-Omega the build passed.
[17:05:56] miraheze/mw-config - Universal-Omega the build passed.
[17:09:33] RhinosF1: do you know how to make that IRC notify portion not run on fork PRs since it doesn't connect to freenode if it is, since it doesn't have the secrets access? I'm willing to do PRs to do it if you'd like, but I don't know how. If it is also fine how it is then that's good also, just thought you might want to do that.
[17:10:08] Universal_Omega: erm, I will look
[17:10:25] RhinosF1: thanks!
[17:10:48] (Don't want to stop full CI from running though just notify-irc)
[17:11:04] Universal_Omega, I like the IRC notification on my forks, though :(
[17:11:13] Universal_Omega: if: ${{ always() && github.repository_owner == 'miraheze' && ( github.ref == 'refs/heads/master' || github.event_name == 'pull_request' ) }}
[17:11:22] https://github.com/miraheze/mw-config/blob/0f4774adf13f1e640918b95e6b16ca85cc7c7dbd/.github/workflows/continuousIntegration.yml#L28
[17:11:23] [ mw-config/continuousIntegration.yml at 0f4774adf13f1e640918b95e6b16ca85cc7c7dbd · miraheze/mw-config · GitHub ] - github.com
[17:11:55] dmehus: yeah but we want it to connect to Freenode. Email Notifications are still sent on build failure.
[17:12:41] RhinosF1: what are you referencing with that line?
[17:12:53] Universal_Omega, oh do you mean to have miraheze-github identify itself to Freenode on forked PRs but still notify the IRC channel? if so, yeah, I suggested that awhile back so would support that
[17:13:08] Universal_Omega: that looks like it should stop it
[17:13:13] dmehus: we can do that
[17:13:26] RhinosF1, ack, okay, cool :)
[17:13:28] Just change on pull_request to pull_request_target
[17:13:37] cool
[17:13:52] that would stop the sygnal kickbans then too
[17:13:57] But that would need SPF|Cloud as that password is icinga-miraheze and stuff too I think and you could potentially do malicious stuff
[17:14:36] dmehus, RhinosF1: I thought that it's not possible to make it identify on forks, because it doesn't have access to GH secrets. RhinosF1: will do. Thanks! But if we can make it identify for forked PRs then I guess I won't do it.
[17:15:33] What I said about _target
[17:16:42] RhinosF1: I'm confused. That will stop it running on forked PRs? Right now it doesn't run on forks which is why I added that check in the first place but still does when PR is opened to /miraheze, will that stop that also?
[17:17:19] It shouldn't
[17:18:30] RhinosF1: then what does _target do? Make it so it only will stop external PRs (not PRs from internal branches from the same repositories I mean)
[17:19:00] Allows secrets to be read by forks if PR is open
[17:20:17] RhinosF1: that's not possible I thought. I spent 2-3 hours researching how to do that once and all I got back was it is not possible. Maybe it is though and I just didn't find it or missed it?
[17:20:37] That's what _target does
[17:20:59] yeah what RhinosF1 said about _target seems logical to me
[17:21:20] It's possible because bots does it
[17:21:26] oh
[17:21:33] you mean MirahezeBots?
[17:21:35] cool :)
[17:21:51] RhinosF1: oh. Thanks I'll do PRs then to do it. But isn't that a risk that forks can be leaked secrets? Just making sure don't want to mess everything up.
[17:22:24] Yes that's why I said you'll need to ask SPF|Cloud and we'd probably want to make it use its own account first
[17:22:55] oh, like have it use a separate account from icinga-miraheze?
[17:23:06] RhinosF1: oh I understand now so I won't do it then yet.
[17:23:22] dmehus: yes
[17:23:55] RhinosF1, ack
[17:25:05] RhinosF1: I don't think we should do that at all (use _target) "Workflows triggered via pull_request_target have write permission to the target repository. They also have access to target repository secrets." So that means it would cause a real risk, and one that's not worth it I think.
[17:25:35] Yeah it's a huge risk
[17:25:41] For Miraheze
[17:25:52] Yeah and it's not worth it.
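For context, a workflow skeleton showing how the pieces discussed above fit together. Only the `if:` line is quoted from the log (17:11:13); everything else is illustrative. The trade-off is the one the chat lands on: `pull_request` runs triggered from forks receive no repository secrets, so the IRC step cannot identify, while switching the trigger to `pull_request_target` would hand fork PRs both secrets and write access to the target repository, which is the risk Universal_Omega quotes at 17:25:05.

```yaml
# Illustrative skeleton only; not the actual continuousIntegration.yml.
on:
  push:
    branches: [ master ]
  pull_request:   # deliberately NOT pull_request_target (see 17:25:05)

jobs:
  notify-irc:
    runs-on: ubuntu-latest
    steps:
      - name: Notify IRC
        # Guard quoted from the log at 17:11:13:
        if: ${{ always() && github.repository_owner == 'miraheze' && ( github.ref == 'refs/heads/master' || github.event_name == 'pull_request' ) }}
        run: echo "IRC notification step (contents not shown in the log)"
```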
[17:29:04] if the IRC notifications are annoying, we could just have miraheze-github notify a different channel for users that want to still monitor them, potentially, then we wouldn't have to disable them or use _target
[17:29:07] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 8.50, 7.06, 5.72
[17:30:02] dmehus: it's mainly about it being logged in because otherwise both freenode and MirahezeBot cry
[17:30:19] RhinosF1, yeah
[17:31:07] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 6.38, 6.73, 5.76
[17:46:23] PROBLEM - cloud4 APT on cloud4 is CRITICAL: APT CRITICAL: 69 packages available for upgrade (5 critical updates).
[17:49:42] PROBLEM - cloud3 APT on cloud3 is CRITICAL: APT CRITICAL: 115 packages available for upgrade (5 critical updates).
[17:57:09] PROBLEM - cloud5 APT on cloud5 is CRITICAL: APT CRITICAL: 69 packages available for upgrade (5 critical updates).
[17:59:13] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOSxr
[17:59:15] [miraheze/dns] paladox cdf20b7 - Revert "Depool cp10"
[18:00:05] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[18:17:44] db11 backup: 299 / 1742 wikis dumped
[18:18:34] it's been running for 165 minutes now
[18:20:43] rather slow, but at least the backup seems to go on without errors
[18:28:27] SPF|Cloud, are you purging backups of deleted wikis?
[18:40:06] no?
[18:42:06] SPF|Cloud, ah okay, I think I misinterpreted dumping as meaning purging rather than generating dumps of wiki databases
[18:42:44] dumping = generating a https://en.wikipedia.org/wiki/Database_dump
[18:42:45] [WIKIPEDIA] Database dump | "A database dump (also: SQL dump) contains a record of the table structure and/or the data from a database and is usually in the form of a list of SQL statements. A database dump is most often used for backing up a database so that its contents can be restored in the event of data loss. Corrupted databases..."
[18:45:07] SPF|Cloud, yeah, heh...I knew that, I just wasn't thinking when I asked that question
[20:50:43] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JO98j
[20:50:45] [miraheze/dns] paladox 4977d9f - Depool cp10
[20:51:41] [mw-config] R4356th opened pull request #3853: Disable Wikibase entity search UI by default - https://git.io/JO94T
[20:52:37] miraheze/mw-config - R4356th the build passed.
[21:04:09] [mw-config] R4356th opened pull request #3854: Remove wmgUseYandexTranslate - https://git.io/JO9Bo
[21:05:16] miraheze/mw-config - R4356th the build passed.
[21:06:52] PROBLEM - test3 Current Load on test3 is CRITICAL: CRITICAL - load average: 5.05, 3.55, 1.66
[21:12:53] PROBLEM - test3 Current Load on test3 is WARNING: WARNING - load average: 0.93, 3.93, 2.71
[21:14:53] RECOVERY - test3 Current Load on test3 is OK: OK - load average: 0.36, 2.69, 2.40
[21:15:27] PROBLEM - cp10 Puppet on cp10 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 22 minutes ago with 0 failures
[21:16:51] test3 load is me
[21:25:48] PROBLEM - test3 Current Load on test3 is CRITICAL: CRITICAL - load average: 6.44, 5.07, 3.34
[21:26:24] [puppet] R4356th opened pull request #1746: Remove wmgYandexTranslationKey - https://git.io/JO9um
[21:27:30] [puppet] R4356th edited pull request #1746: Remove wmgYandexTranslationKey - https://git.io/JO9um
[21:28:20] PROBLEM - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is WARNING: WARNING - NGINX Error Rate is 57%
[21:29:48] PROBLEM - test3 Current Load on test3 is WARNING: WARNING - load average: 0.60, 3.80, 3.37
[21:30:19] RECOVERY - cp10 HTTP 4xx/5xx ERROR Rate on cp10 is OK: OK - NGINX Error Rate is 39%
[21:31:47] RECOVERY - test3 Current Load on test3 is OK: OK - load average: 0.47, 2.68, 3.01
[21:54:25] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JO928
[21:54:27] [miraheze/dns] paladox 661487e - Revert "Depool cp10"
[21:55:27] RECOVERY - cp10 Puppet on cp10 is OK: OK: Puppet is currently enabled, last run 46 seconds ago with 0 failures
[22:14:36] [mw-config] R4356th edited pull request #3854: Remove wmgUseYandexTranslate - https://git.io/JO9Bo
[22:15:24] [puppet] R4356th synchronize pull request #1746: Remove wmgYandexTranslationKey - https://git.io/JO9um
[22:15:57] [puppet] R4356th commented on pull request #1746: Remove wmgYandexTranslationKey - https://git.io/JO9VP
[22:25:59] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JO9rR
[22:26:01] [miraheze/dns] paladox 1574e8c - Depool cp10
[22:30:47] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 6.94, 6.48, 5.13
[22:31:28] PROBLEM - cp10 Puppet on cp10 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 9 minutes ago with 0 failures
[22:32:48] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.43, 5.77, 5.04
[23:16:15] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.20, 1.89, 1.40
[23:18:15] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.44, 1.41, 1.28
[23:40:02] [mw-config] paladox commented on pull request #3853: Disable Wikibase entity search UI by default - https://git.io/JO9yB
[23:46:43] [mw-config] R4356th synchronize pull request #3853: Disable Wikibase entity search UI by default - https://git.io/JO94T
[23:46:54] [mw-config] R4356th commented on pull request #3853: Disable Wikibase entity search UI by default - https://git.io/JO9Sg
[23:47:49] miraheze/mw-config - R4356th the build passed.
[23:59:48] PROBLEM - test3 Current Load on test3 is CRITICAL: CRITICAL - load average: 10.76, 5.11, 2.16