[00:01:05] [CreateWiki] Universal-Omega edited pull request #214: Create new invalid status for databases that already exist - https://git.io/J3Gub
[00:02:14] PROBLEM - mon2 Puppet on mon2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[00:02:38] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.37, 1.77, 1.37
[00:04:38] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.48, 1.72, 1.40
[00:05:17] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 8.76, 7.16, 5.62
[00:06:17] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:06:37] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.55, 1.33, 1.30
[00:07:15] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 6.41, 6.75, 5.64
[00:08:12] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.23, 5.76, 5.27
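
These Current Load checks compare the kernel's 1-, 5- and 15-minute load averages (the three numbers in each message) against per-host warning and critical thresholds; the mw9 alert is different, an NRPE socket timeout rather than a threshold breach. A rough by-hand equivalent of the threshold check, with an illustrative warning value rather than whatever Icinga is actually configured with:

    # Print the same three load averages the alerts quote.
    read one five fifteen rest < /proc/loadavg
    echo "load average: $one, $five, $fifteen"

    # Flag a warning if the 5-minute average crosses an illustrative threshold.
    awk -v warn=4.0 '$2 >= warn { print "WARNING" }' /proc/loadavg
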
[00:11:10] [CreateWiki] JohnFLewis commented on pull request #214: Create new invalid status for databases that already exist - https://git.io/J3Wpk
[00:22:14] RECOVERY - mon2 Puppet on mon2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:24:21] [CreateWiki] Universal-Omega synchronize pull request #214: Create new invalid status for databases that already exist - https://git.io/J3Gub
[00:25:20] miraheze/CreateWiki - Universal-Omega the build passed.
[01:34:25] RECOVERY - bacula2 Bacula Databases db11 on bacula2 is OK: OK: Full, 442654 files, 36.03GB, 2021-05-02 01:34:00 (24.0 seconds ago)
[03:18:53] RECOVERY - bacula2 Bacula Databases db13 on bacula2 is OK: OK: Full, 278933 files, 66.52GB, 2021-05-02 03:17:00 (1.9 minutes ago)
[03:29:40] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.26, 5.01, 3.81
[03:31:37] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.95, 4.93, 3.93
[04:10:38] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 191s
[04:13:26] PROBLEM - adadevelopersacademy.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for adadevelopersacademy.wiki could not be found
[04:13:28] PROBLEM - savage-wiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for savage-wiki.com could not be found
[04:13:28] PROBLEM - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for phabdigests.bots.miraheze.wiki could not be found
[04:13:30] PROBLEM - baharna.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for baharna.org could not be found
[04:13:33] PROBLEM - www.arru.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.arru.xyz could not be found
[04:13:34] PROBLEM - iesd.mobagenie.my.id - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for iesd.mobagenie.my.id could not be found
[04:13:35] PROBLEM - beaconspace.unrestrictedlorefare.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for beaconspace.unrestrictedlorefare.com could not be found
[04:13:35] PROBLEM - wiki.mobilityengineer.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.mobilityengineer.com could not be found
[04:13:36] PROBLEM - en.famepedia.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.famepedia.org could not be found
[04:13:36] PROBLEM - en.pornwiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.pornwiki.org could not be found
[04:13:37] PROBLEM - en.petrawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.petrawiki.org could not be found
[04:13:40] PROBLEM - savagepedia.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for savagepedia.wiki could not be found
[04:13:40] PROBLEM - oecumene.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for oecumene.org could not be found
[04:13:45] PROBLEM - wiki.usagihime.ml - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.usagihime.ml could not be found
[04:13:46] PROBLEM - www.lukaibrisimovic.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.lukaibrisimovic.org could not be found
[04:13:47] PROBLEM - aman.awiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for aman.awiki.org could not be found
[04:13:48] PROBLEM - www.trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.trollpasta.com could not be found
[04:13:48] PROBLEM - christipedia.nl - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for christipedia.nl could not be found
[04:13:48] PROBLEM - mrlove.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for mrlove.wiki could not be found
[04:13:49] PROBLEM - wiki.responsibly.ai - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.responsibly.ai could not be found
[04:13:51] PROBLEM - en.phgalaxy.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.phgalaxy.xyz could not be found
[04:13:51] PROBLEM - infectowiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for infectowiki.com could not be found
[04:13:58] PROBLEM - test1.miraheze.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for test1.miraheze.org could not be found
[04:14:00] PROBLEM - wiki.beergeeks.co.il - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.beergeeks.co.il could not be found
[04:14:03] PROBLEM - wiki.twilightsignal.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.twilightsignal.com could not be found
[04:14:06] PROBLEM - wiki.helioss.co - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.helioss.co could not be found
[04:14:06] PROBLEM - archive.a2b2.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for archive.a2b2.org could not be found
[04:14:10] PROBLEM - wiki.astralprojections.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.astralprojections.org could not be found
[04:14:11] PROBLEM - petrawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for petrawiki.org could not be found
[04:14:12] PROBLEM - munwiki.info - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for munwiki.info could not be found
[04:14:12] PROBLEM - wiki.erisly.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.erisly.com could not be found
[04:14:17] PROBLEM - portalsofphereon.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for portalsofphereon.com could not be found
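
Each of these reverse-DNS checks forward-resolves the hostname and then asks for the PTR record of the resulting address; dozens of unrelated hostnames warning at once and recovering together a few minutes later points at a transient resolver problem rather than real DNS changes. A rough by-hand version of the check, using one hostname from the log:

    # Forward-resolve the host, then reverse-resolve the address it returns.
    addr=$(dig +short adadevelopersacademy.wiki | head -n1)
    dig +short -x "$addr"   # per the recovery message below, this prints cp10.miraheze.org
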
[04:14:37] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 75s
[04:20:12] RECOVERY - adadevelopersacademy.wiki - reverse DNS on sslhost is OK: rDNS OK - adadevelopersacademy.wiki reverse DNS resolves to cp10.miraheze.org
[04:20:16] RECOVERY - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - phabdigests.bots.miraheze.wiki reverse DNS resolves to cp11.miraheze.org
[04:20:16] RECOVERY - savage-wiki.com - reverse DNS on sslhost is OK: rDNS OK - savage-wiki.com reverse DNS resolves to cp10.miraheze.org
[04:20:18] RECOVERY - baharna.org - reverse DNS on sslhost is OK: rDNS OK - baharna.org reverse DNS resolves to cp11.miraheze.org
[04:20:23] RECOVERY - www.arru.xyz - reverse DNS on sslhost is OK: rDNS OK - www.arru.xyz reverse DNS resolves to cp10.miraheze.org
[04:20:27] RECOVERY - iesd.mobagenie.my.id - reverse DNS on sslhost is OK: rDNS OK - iesd.mobagenie.my.id reverse DNS resolves to cp10.miraheze.org
[04:20:29] RECOVERY - beaconspace.unrestrictedlorefare.com - reverse DNS on sslhost is OK: rDNS OK - beaconspace.unrestrictedlorefare.com reverse DNS resolves to cp10.miraheze.org
[04:20:29] RECOVERY - wiki.mobilityengineer.com - reverse DNS on sslhost is OK: rDNS OK - wiki.mobilityengineer.com reverse DNS resolves to cp11.miraheze.org
[04:20:30] RECOVERY - wiki.usagihime.ml - reverse DNS on sslhost is OK: rDNS OK - wiki.usagihime.ml reverse DNS resolves to cp10.miraheze.org
[04:20:30] RECOVERY - www.lukaibrisimovic.org - reverse DNS on sslhost is OK: rDNS OK - www.lukaibrisimovic.org reverse DNS resolves to cp11.miraheze.org
[04:20:31] RECOVERY - en.famepedia.org - reverse DNS on sslhost is OK: rDNS OK - en.famepedia.org reverse DNS resolves to cp11.miraheze.org
[04:20:32] RECOVERY - aman.awiki.org - reverse DNS on sslhost is OK: rDNS OK - aman.awiki.org reverse DNS resolves to cp11.miraheze.org
[04:20:32] RECOVERY - en.pornwiki.org - reverse DNS on sslhost is OK: rDNS OK - en.pornwiki.org reverse DNS resolves to cp10.miraheze.org
[04:20:33] RECOVERY - en.petrawiki.org - reverse DNS on sslhost is OK: rDNS OK - en.petrawiki.org reverse DNS resolves to cp10.miraheze.org
[04:20:34] RECOVERY - christipedia.nl - reverse DNS on sslhost is OK: rDNS OK - christipedia.nl reverse DNS resolves to cp10.miraheze.org
[04:20:34] RECOVERY - www.trollpasta.com - reverse DNS on sslhost is OK: rDNS OK - www.trollpasta.com reverse DNS resolves to cp11.miraheze.org
[04:20:35] RECOVERY - mrlove.wiki - reverse DNS on sslhost is OK: rDNS OK - mrlove.wiki reverse DNS resolves to cp11.miraheze.org
[04:20:37] RECOVERY - wiki.responsibly.ai - reverse DNS on sslhost is OK: rDNS OK - wiki.responsibly.ai reverse DNS resolves to cp10.miraheze.org
[04:20:38] RECOVERY - en.phgalaxy.xyz - reverse DNS on sslhost is OK: rDNS OK - en.phgalaxy.xyz reverse DNS resolves to cp11.miraheze.org
[04:20:39] RECOVERY - savagepedia.wiki - reverse DNS on sslhost is OK: rDNS OK - savagepedia.wiki reverse DNS resolves to cp10.miraheze.org
[04:20:39] RECOVERY - infectowiki.com - reverse DNS on sslhost is OK: rDNS OK - infectowiki.com reverse DNS resolves to cp11.miraheze.org
[04:20:40] RECOVERY - oecumene.org - reverse DNS on sslhost is OK: rDNS OK - oecumene.org reverse DNS resolves to cp10.miraheze.org
[04:20:47] RECOVERY - wiki.helioss.co - reverse DNS on sslhost is OK: rDNS OK - wiki.helioss.co reverse DNS resolves to cp11.miraheze.org
[04:20:48] RECOVERY - archive.a2b2.org - reverse DNS on sslhost is OK: rDNS OK - archive.a2b2.org reverse DNS resolves to cp10.miraheze.org
[04:20:53] RECOVERY - petrawiki.org - reverse DNS on sslhost is OK: rDNS OK - petrawiki.org reverse DNS resolves to cp11.miraheze.org
[04:20:54] RECOVERY - wiki.astralprojections.org - reverse DNS on sslhost is OK: rDNS OK - wiki.astralprojections.org reverse DNS resolves to cp10.miraheze.org
[04:20:55] RECOVERY - wiki.beergeeks.co.il - reverse DNS on sslhost is OK: rDNS OK - wiki.beergeeks.co.il reverse DNS resolves to cp11.miraheze.org
[04:20:57] RECOVERY - munwiki.info - reverse DNS on sslhost is OK: rDNS OK - munwiki.info reverse DNS resolves to cp10.miraheze.org
[04:20:57] RECOVERY - test1.miraheze.org - reverse DNS on sslhost is OK: rDNS OK - test1.miraheze.org reverse DNS resolves to cp10.miraheze.org
[04:20:58] RECOVERY - wiki.erisly.com - reverse DNS on sslhost is OK: rDNS OK - wiki.erisly.com reverse DNS resolves to cp10.miraheze.org
[04:21:01] RECOVERY - wiki.twilightsignal.com - reverse DNS on sslhost is OK: rDNS OK - wiki.twilightsignal.com reverse DNS resolves to cp11.miraheze.org
[04:21:04] RECOVERY - portalsofphereon.com - reverse DNS on sslhost is OK: rDNS OK - portalsofphereon.com reverse DNS resolves to cp10.miraheze.org
[05:00:36] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 253s
[05:13:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 5.17, 3.59, 2.72
[05:17:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.77, 3.54, 2.95
[05:23:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.25, 3.70, 3.18
[05:27:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.88, 3.73, 3.32
[05:31:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.77, 3.98, 3.50
[05:34:16] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.06, 19.82, 18.02
[05:35:13] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.87, 6.25, 5.24
[05:36:15] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.28, 18.92, 17.92
[05:37:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.92, 5.91, 5.26
[05:42:11] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.92, 20.89, 19.18
[05:44:12] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 16.87, 18.82, 18.60
[05:45:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.95, 3.88, 3.92
[05:51:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.66, 3.90, 3.85
[05:53:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.82, 3.79, 3.82
[05:55:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.71, 4.04, 3.90
[05:57:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.46, 3.90, 3.88
[06:01:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.22, 3.64, 3.74
[06:05:11] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.06, 3.67, 3.75
[06:06:07] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.67, 6.72, 5.62
[06:08:07] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.16, 6.15, 5.55
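
The dbbackup2 replication alerts earlier (191s, then 253s behind) key off MariaDB's own status output: both replication threads stay running throughout, so the check is purely about Seconds_Behind_Master drifting past the warning and critical thresholds, consistent with the sustained load on that host. Checking the same thing by hand on the replica looks roughly like:

    # Show replication thread state and lag on the replica.
    sudo mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
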
[06:13:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 5.37, 4.00, 3.78
[06:19:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.16, 3.83, 3.83
[06:25:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 5.34, 4.21, 3.95
[06:25:36] [CreateWiki] Reception123 closed pull request #213: [BUG FIX] Fix "PHP Notice: Undefined index" warnings - https://git.io/JOo0h
[06:25:38] [miraheze/CreateWiki] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/J38vQ
[06:25:39] [miraheze/CreateWiki] Universal-Omega f424044 - [BUG FIX] Fix "PHP Notice: Undefined index" warnings (#213)
[06:26:37] miraheze/CreateWiki - Reception123 the build passed.
[06:29:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.36, 3.87, 3.88
[06:35:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.10, 2.76, 3.39
[06:35:13] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.22, 6.53, 5.97
[06:37:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.11, 5.56, 5.69
[06:43:04] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.45, 3.81, 3.60
[06:47:05] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.93, 3.98, 3.73
[06:55:04] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.21, 3.66, 3.62
[06:55:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.87, 21.32, 19.00
[06:56:35] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/J38T5
[06:56:36] [miraheze/mediawiki] Reception123 5d5b0ab - Update CreateWiki
[06:56:40] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.20, 5.83, 4.67
[06:57:54] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.77, 23.16, 20.09
[06:58:02] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 3.75, 4.49, 2.77
[06:59:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.68, 3.85, 3.72
[06:59:58] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.99, 3.26, 2.52
[07:00:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.92, 5.92, 4.96
[07:01:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.54, 4.07, 3.82
[07:01:52] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 16.58, 20.27, 19.61
[07:02:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.32, 4.97, 4.73
[07:03:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.17, 3.76, 3.74
[07:05:46] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.88, 21.62, 20.30
[07:09:41] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.76, 23.42, 21.24
[07:10:40] PROBLEM - jobrunner4 Puppet on jobrunner4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:10:57] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:11:18] PROBLEM - mw11 Puppet on mw11 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:11:39] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.64, 23.05, 21.39
[07:11:53] PROBLEM - mw10 Puppet on mw10 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:12:10] PROBLEM - mw8 Puppet on mw8 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:12:13] PROBLEM - mw9 Puppet on mw9 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:12:52] PROBLEM - jobrunner3 Puppet on jobrunner3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[07:13:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.74, 2.96, 3.32
[07:14:52] RECOVERY - jobrunner3 Puppet on jobrunner3 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[07:19:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.90, 3.33, 3.34
[07:19:31] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 15.77, 18.69, 20.09
[07:21:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.01, 3.23, 3.30
[07:27:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.88, 3.70, 3.50
[07:31:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.91, 3.18, 3.33
[07:32:18] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.89, 20.53, 20.08
[07:34:16] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 19.98, 20.18, 20.00
[07:34:57] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[07:38:38] RECOVERY - jobrunner4 Puppet on jobrunner4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[07:39:04] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.74, 3.94, 3.60
[07:43:58] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 7 datacenters are down: 128.199.139.216/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[07:44:04] Huh
[07:44:34] Reception123: we're down
[07:44:37] PROBLEM - sopel.bots.miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for sopel.bots.miraheze.wiki could not be found
[07:45:05] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 2001:41d0:800:178a::5/cpweb
[07:45:24] Mw10 went poo for a minute
[07:45:30] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.23, 6.08, 4.90
[07:45:56] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[07:46:02] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.50, 22.52, 21.07
[07:46:42] strange
[07:47:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.10, 3.74, 3.68
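
The Exec[git_pull_MediaWiki core] Puppet failures above began minutes after the CreateWiki update was pushed to the REL1_35 branch at 06:56, so the agents most plausibly caught the repository mid-update; jobrunner3, test3 and jobrunner4 all recovered on a later run without intervention. On a still-failing host, a foreground agent run is the quickest confirmation that it was transient (standard Puppet, nothing Miraheze-specific):

    # Re-run the agent once in the foreground and watch the failing resource.
    sudo puppet agent --test
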
[07:47:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[07:47:27] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.36, 5.22, 4.71
[07:49:23] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.31, 4.87, 4.64
[07:51:27] RECOVERY - sopel.bots.miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - sopel.bots.miraheze.wiki reverse DNS resolves to cp11.miraheze.org
[07:55:53] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 13.85, 18.60, 19.96
[07:59:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.95, 3.22, 3.40
[08:03:05] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 5.43, 4.21, 3.75
[08:03:52] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures
[08:05:18] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:05:50] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.58, 22.81, 20.87
[08:06:12] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:06:25] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:07:05] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.31, 3.40, 3.54
[08:07:07] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 4.70, 5.77, 3.20
[08:07:48] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 15.63, 20.08, 20.10
[08:10:30] PROBLEM - cloud5 Current Load on cloud5 is CRITICAL: CRITICAL - load average: 24.20, 24.13, 18.07
[08:11:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.59, 3.65, 3.57
[08:12:28] RECOVERY - cloud5 Current Load on cloud5 is OK: OK - load average: 13.02, 19.90, 17.25
[08:13:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.44, 3.44, 3.50
[08:15:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.52, 3.13, 3.37
[08:23:58] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.05, 3.55, 3.44
[08:27:04] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 0.70, 1.69, 3.75
[08:29:49] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.82, 3.60, 3.56
[08:31:07] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 2.26, 1.79, 3.29
[08:31:46] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 1.91, 3.02, 3.35
[08:36:16] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.20, 21.56, 19.98
[08:38:14] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.10, 21.44, 20.07
[08:40:11] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.30, 21.89, 20.37
[08:46:04] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.12, 19.83, 20.05
[09:01:48] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.71, 21.66, 19.87
[09:03:49] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.72, 21.00, 19.85
[09:09:45] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.99, 23.39, 21.13
[09:15:43] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.00, 23.08, 21.71
[09:16:40] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.51, 5.16, 4.64
[09:18:38] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.06, 5.43, 4.80
[09:19:56] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 9.59, 4.44, 2.56
[09:20:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.96, 5.51, 4.93
[09:21:57] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 3.39, 3.61, 2.47
[09:23:36] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 16.25, 19.13, 20.33
[09:23:55] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 5.10, 5.09, 3.21
[09:24:41] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.33, 4.74, 4.74
[09:25:53] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 1.65, 3.87, 2.98
[09:27:31] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.14, 19.99, 20.41
[09:27:51] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 1.35, 2.99, 2.76
[09:28:17] [mw-config] R4356th commented on pull request #3854: Remove wmgUseYandexTranslate - https://git.io/J38wS
[09:29:28] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.65, 19.55, 20.20
[09:36:18] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.57, 21.73, 20.89
[09:38:16] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.91, 20.96, 20.71
[09:41:42] [mw-config] dmehus commented on pull request #3854: Remove wmgUseYandexTranslate - https://git.io/J38ox
[09:42:12] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.85, 23.53, 21.68
[09:44:09] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.76, 23.04, 21.71
[09:44:22] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.17, 5.20, 4.70
[09:46:07] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.51, 23.65, 22.06
[09:46:19] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.82, 5.03, 4.70
[09:48:05] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 17.78, 21.22, 21.35
[09:52:00] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.61, 22.28, 21.63
[09:53:58] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.57, 21.86, 21.56
[09:55:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.42, 3.27, 3.04
[09:56:05] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.90, 5.25, 4.92
[09:57:04] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.10, 3.25, 3.07
[09:57:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 3.23, 4.60, 3.34
[09:58:02] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.79, 5.07, 4.89
[09:59:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.15, 3.77, 3.19
[10:01:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.63, 3.03, 2.98
[10:11:43] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.53, 19.39, 20.31
[10:36:17] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.89, 23.72, 21.14
[10:38:15] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 15.85, 20.48, 20.25
[10:40:13] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 17.92, 19.26, 19.80
[10:44:09] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.30, 20.88, 20.36
[10:46:07] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.91, 22.95, 21.22
[10:47:50] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.32, 5.27, 4.68
[10:48:04] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.35, 4.04, 2.95
[10:48:05] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.41, 22.77, 21.34
[10:50:00] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.44, 3.05, 2.71
[10:50:24] PROBLEM - wiki.candela.digital - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.candela.digital could not be found
[10:50:25] PROBLEM - wiki.graalmilitary.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.graalmilitary.com could not be found
[10:51:43] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.47, 5.31, 4.87
[10:52:11] PROBLEM - debilepedie.gq - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for debilepedie.gq could not be found
[10:53:49] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.47, 3.95, 3.18
[10:55:36] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.26, 4.93, 4.83
[10:55:44] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.07, 4.20, 3.31
[10:55:56] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 15.99, 19.16, 20.32
[10:57:07] RECOVERY - wiki.candela.digital - reverse DNS on sslhost is OK: rDNS OK - wiki.candela.digital reverse DNS resolves to cp10.miraheze.org
[10:57:26] RECOVERY - wiki.graalmilitary.com - reverse DNS on sslhost is OK: rDNS OK - wiki.graalmilitary.com reverse DNS resolves to cp10.miraheze.org
[10:57:39] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.61, 3.63, 3.21
[10:59:01] RECOVERY - debilepedie.gq - reverse DNS on sslhost is OK: rDNS OK - debilepedie.gq reverse DNS resolves to cp11.miraheze.org
[10:59:33] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.78, 2.91, 2.99
[11:03:48] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.94, 19.67, 19.68
[11:05:45] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 19.23, 19.38, 19.57
[11:10:06] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.77, 3.49, 3.27
[11:12:00] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.49, 3.13, 3.16
[11:15:51] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.39, 3.94, 3.50
[11:17:46] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.19, 2.93, 3.18
[11:23:29] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 10.60, 5.35, 4.00
[11:27:22] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.24, 20.00, 18.91
[11:27:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.91, 3.48, 3.54
[11:29:20] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.97, 20.14, 19.13
[11:29:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.55, 2.86, 3.30
[11:33:17] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.07, 23.27, 20.64
[11:33:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.86, 3.55, 3.47
[11:35:14] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.37, 23.07, 20.95
[11:35:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.38, 3.06, 3.30
[11:43:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.86, 3.49, 2.86
[11:43:05] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 18.68, 19.81, 20.32
[11:46:59] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.54, 21.70, 21.15
[11:47:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.19, 3.36, 2.96
[11:48:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.26, 23.66, 21.87
[11:49:37] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.31, 5.84, 5.09
[11:50:39] PROBLEM - wiki.ripto.gq - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.ripto.gq could not be found
[11:50:43] PROBLEM - wiki.zaoace.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.zaoace.com could not be found
[11:50:56] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.81, 22.27, 21.59
[11:51:33] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.57, 5.41, 5.02
[11:53:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.75, 3.60, 3.19
[11:53:30] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.27, 4.61, 4.78
[11:57:04] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.89, 3.25, 3.14
[11:57:29] RECOVERY - wiki.ripto.gq - reverse DNS on sslhost is OK: rDNS OK - wiki.ripto.gq reverse DNS resolves to cp10.miraheze.org
[11:57:37] RECOVERY - wiki.zaoace.com - reverse DNS on sslhost is OK: rDNS OK - wiki.zaoace.com reverse DNS resolves to cp11.miraheze.org
[11:58:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.60, 21.08, 20.91
[12:00:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.25, 21.27, 20.97
[12:03:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.64, 3.53, 3.30
[12:03:18] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.43, 4.94, 4.83
[12:03:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.28, 6.10, 4.34
[12:05:18] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.99, 4.64, 4.75
[12:06:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.12, 22.45, 21.55
[12:07:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.62, 3.68, 3.74
[12:08:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.79, 22.14, 21.55
[12:11:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.36, 3.89, 3.53
[12:11:26] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.57, 4.97, 4.30
[12:13:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.36, 3.69, 3.50
[12:14:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.12, 23.14, 22.06
[12:16:05] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 8.66, 5.98, 5.24
[12:19:58] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.36, 5.56, 5.28
[12:27:47] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.36, 4.89, 5.06
[12:28:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 16.83, 21.73, 23.05
[12:29:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.45, 3.91, 3.69
[12:31:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.78, 3.02, 3.88
[12:32:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 30.22, 21.84, 22.51
[12:37:23] PROBLEM - vise.dayid.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for vise.dayid.org could not be found
[12:37:25] PROBLEM - adadevelopersacademy.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for adadevelopersacademy.wiki could not be found
[12:37:27] PROBLEM - bn.gyaanipedia.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for bn.gyaanipedia.com could not be found
[12:37:27] PROBLEM - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for phabdigests.bots.miraheze.wiki could not be found
[12:37:28] PROBLEM - savage-wiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for savage-wiki.com could not be found
[12:37:30] PROBLEM - www.arru.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.arru.xyz could not be found
[12:37:31] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.33, 5.61, 5.14
[12:38:56] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.64, 23.71, 23.41
[12:39:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.40, 3.81, 3.79
[12:39:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 3.11, 4.02, 3.94
[12:39:27] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.71, 5.46, 5.15
[12:41:23] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.43, 5.83, 5.32
[12:41:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.82, 3.50, 3.75
[12:42:54] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.97, 24.76, 23.70
[12:43:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.10, 3.88, 3.82
[12:43:05] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 3.32, 5.39, 3.34
[12:43:21] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.23, 4.89, 5.04
[12:44:04] RECOVERY - vise.dayid.org - reverse DNS on sslhost is OK: rDNS OK - vise.dayid.org reverse DNS resolves to cp11.miraheze.org
[12:44:09] RECOVERY - adadevelopersacademy.wiki - reverse DNS on sslhost is OK: rDNS OK - adadevelopersacademy.wiki reverse DNS resolves to cp10.miraheze.org
[12:44:11] RECOVERY - bn.gyaanipedia.com - reverse DNS on sslhost is OK: rDNS OK - bn.gyaanipedia.com reverse DNS resolves to cp11.miraheze.org
[12:44:13] RECOVERY - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - phabdigests.bots.miraheze.wiki reverse DNS resolves to cp11.miraheze.org
[12:44:13] RECOVERY - savage-wiki.com - reverse DNS on sslhost is OK: rDNS OK - savage-wiki.com reverse DNS resolves to cp11.miraheze.org
[12:44:17] RECOVERY - www.arru.xyz - reverse DNS on sslhost is OK: rDNS OK - www.arru.xyz reverse DNS resolves to cp11.miraheze.org
[12:44:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.87, 23.78, 23.44
[12:45:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.65, 3.84, 3.81
[12:45:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.00, 4.43, 4.03
[12:47:08] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 3.09, 3.59, 3.02
[12:47:24] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.54, 5.94, 5.41
[12:47:59] JohnLewis: any chance you can do Owen's task
[12:48:11] Because the more I look the less confident I get
[12:48:53] We found that hook you gave doesn't work on private logs like suppress
[12:49:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.10, 3.97, 3.88
[12:49:06] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 3.25, 3.31, 2.98
[12:49:12] And CheckUser won't work without patching CU because it's its own special case
[12:49:16] Is that a requirement?
[12:50:36] JohnLewis: From what Reception123 has said sounds like it but I'm not logged onto Discord
[12:51:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.94, 3.91, 3.86
[12:51:09] Be strange that he would require sensitive information to be transmitted over email unnecessarily though, surely?
[12:51:17] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.22, 5.53, 5.40
[12:51:28] Which CU logs at the very least would contain
[12:51:50] JohnLewis: well I don't think he wanted more than acting user and a link
[12:52:22] https://phabricator.miraheze.org/T7197#142817
[12:52:22] [ ⚓ T7197 Generate Emails on Logged Actions ] - phabricator.miraheze.org
[12:52:52] CheckUser and OS actions would be picked up by a user rights change log though so what would be the need to send the logs themselves over?
[12:53:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.82, 3.40, 3.87
[12:54:23] RhinosF1: what I sent wasn't about CheckUser/OS though
[12:56:19] JohnLewis: true
[12:56:29] Not sure what else is restricted logs
[12:57:09] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.74, 4.85, 5.09
[13:01:00] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 28.25, 23.67, 22.44
[13:01:11] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.31, 2.82, 3.37
[13:01:32] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.35, 5.42, 4.37
[13:02:58] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.58, 23.70, 22.62
[13:03:24] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 5.64, 6.36, 5.72
[13:04:10] Reception123: https://phabricator.miraheze.org/T7197#143453
[13:04:11] [ ⚓ T7197 Generate Emails on Logged Actions ] - phabricator.miraheze.org
[13:04:17] Ask him if ^ makes sense
[13:04:39] PROBLEM - www.portalsofphereon.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.portalsofphereon.com could not be found
[13:04:39] PROBLEM - spcodex.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for spcodex.wiki could not be found
[13:04:40] PROBLEM - heavyironmodding.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for heavyironmodding.org could not be found
[13:04:40] PROBLEM - wiki.fourta.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.fourta.org could not be found
[13:04:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 35.95, 27.91, 24.23
[13:07:20] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 3.93, 5.47, 5.55
[13:08:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.16, 22.99, 23.00
[13:09:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.69, 3.47, 3.89
[13:10:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.61, 23.96, 23.34
[13:11:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.44, 3.85, 3.97
[13:11:30] RECOVERY - heavyironmodding.org - reverse DNS on sslhost is OK: rDNS OK - heavyironmodding.org reverse DNS resolves to cp11.miraheze.org
[13:11:32] RECOVERY - www.portalsofphereon.com - reverse DNS on sslhost is OK: rDNS OK - www.portalsofphereon.com reverse DNS resolves to cp11.miraheze.org
[13:11:32] RECOVERY - spcodex.wiki - reverse DNS on sslhost is OK: rDNS OK - spcodex.wiki reverse DNS resolves to cp11.miraheze.org
[13:11:33] RECOVERY - wiki.fourta.org - reverse DNS on sslhost is OK: rDNS OK - wiki.fourta.org reverse DNS resolves to cp10.miraheze.org
[13:13:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.23, 3.50, 3.82
[13:14:54] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.72, 23.48, 23.33
[13:15:04] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 3.81, 4.44, 3.47
[13:17:04] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 1.90, 3.38, 3.18
[13:17:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 8.92, 5.21, 4.34
[13:19:34] RhinosF1: said that's fine yeah
[13:19:44] Reception123: good
[13:21:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.65, 3.88, 3.97
[13:22:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 33.52, 27.38, 24.54
[13:23:26] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 11.00, 7.60, 5.40
[13:26:46] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.43, 5.81, 5.49
[13:27:04] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 4.84, 4.98, 3.97
[13:28:43] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[13:29:03] PROBLEM - wikiru.wildterra2.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wikiru.wildterra2.com could not be found
[13:29:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 7 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb
[13:29:35] PROBLEM - wiki.autocountsoft.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.autocountsoft.com could not be found
[13:29:39] PROBLEM - podpedia.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for podpedia.org could not be found
[13:29:43] PROBLEM - wiki.patriam.cc - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.patriam.cc could not be found
[13:29:44] PROBLEM - tensegritywiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for tensegritywiki.com could not be found
[13:29:46] PROBLEM - wiki.cdntennis.ca - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.cdntennis.ca could not be found
[13:29:50] PROBLEM - sarovia.graalmilitary.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for sarovia.graalmilitary.com could not be found
[13:30:37] PROBLEM - cyberlaw.ccdcoe.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for cyberlaw.ccdcoe.org could not be found
[13:30:39] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[13:30:41] PROBLEM - guia.cineastas.pt - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for guia.cineastas.pt could not be found
[13:30:42] PROBLEM - bharatwiki.online - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for bharatwiki.online could not be found
[13:30:42] PROBLEM - files.pornwiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for files.pornwiki.org could not be found
[13:30:42] PROBLEM - wiki.gesamtschule-nordkirchen.de - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.gesamtschule-nordkirchen.de could not be found
[13:30:43] PROBLEM - sims.miraheze.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for sims.miraheze.org could not be found
[13:30:43] PROBLEM - arquivo.ucmg.ml - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for arquivo.ucmg.ml could not be found
[13:30:43] PROBLEM - www.baharna.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.baharna.org could not be found
[13:31:08] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 1.49, 3.54, 3.68
[13:31:09] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:32:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.90, 5.90, 5.73
[13:33:05] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 1.11, 2.77, 3.38
[13:34:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.11, 5.99, 5.77
[13:35:51] RECOVERY - wikiru.wildterra2.com - reverse DNS on sslhost is OK: rDNS OK - wikiru.wildterra2.com reverse DNS resolves to cp11.miraheze.org
[13:36:14] RECOVERY - wiki.autocountsoft.com - reverse DNS on sslhost is OK: rDNS OK - wiki.autocountsoft.com reverse DNS resolves to cp10.miraheze.org
[13:36:27] RECOVERY - wiki.patriam.cc - reverse DNS on sslhost is OK: rDNS OK - wiki.patriam.cc reverse DNS resolves to cp11.miraheze.org
[13:36:35] RECOVERY - wiki.cdntennis.ca - reverse DNS on sslhost is OK: rDNS OK - wiki.cdntennis.ca reverse DNS resolves to cp10.miraheze.org
[13:36:38] RECOVERY - podpedia.org - reverse DNS on sslhost is OK: rDNS OK - podpedia.org reverse DNS resolves to cp11.miraheze.org
[13:36:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 3.81, 5.18, 5.50
[13:36:44] RECOVERY - tensegritywiki.com - reverse DNS on sslhost is OK: rDNS OK - tensegritywiki.com reverse DNS resolves to cp11.miraheze.org
[13:36:50] RECOVERY - sarovia.graalmilitary.com - reverse DNS on sslhost is OK: rDNS OK - sarovia.graalmilitary.com reverse DNS resolves to cp10.miraheze.org
[13:37:17] RECOVERY - cyberlaw.ccdcoe.org - reverse DNS on sslhost is OK: rDNS OK - cyberlaw.ccdcoe.org reverse DNS resolves to cp10.miraheze.org
[13:37:24] RECOVERY - arquivo.ucmg.ml - reverse DNS on sslhost is OK: rDNS OK - arquivo.ucmg.ml reverse DNS resolves to cp11.miraheze.org
[13:37:24] RECOVERY - guia.cineastas.pt - reverse DNS on sslhost is OK: rDNS OK - guia.cineastas.pt reverse DNS resolves to cp11.miraheze.org
[13:37:25] RECOVERY - bharatwiki.online - reverse DNS on sslhost is OK: rDNS OK - bharatwiki.online reverse DNS resolves to cp10.miraheze.org
[13:37:25] RECOVERY - files.pornwiki.org - reverse DNS on sslhost is OK: rDNS OK - files.pornwiki.org reverse DNS resolves to cp11.miraheze.org
[13:37:25] RECOVERY - wiki.gesamtschule-nordkirchen.de - reverse DNS on sslhost is OK: rDNS OK - wiki.gesamtschule-nordkirchen.de reverse DNS resolves to cp10.miraheze.org
[13:37:26] RECOVERY - sims.miraheze.org - reverse DNS on sslhost is OK: rDNS OK - sims.miraheze.org reverse DNS resolves to cp11.miraheze.org
[13:37:26] RECOVERY - www.baharna.org - reverse DNS on sslhost is OK: rDNS OK - www.baharna.org reverse DNS resolves to cp11.miraheze.org
[13:38:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.08, 5.60, 5.62
[13:40:40] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.77, 5.45, 5.55
[13:41:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.39, 2.81, 3.93
[13:47:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.04, 2.14, 3.25
[13:48:44] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 8.86, 5.91, 5.63
[13:50:32] PROBLEM - stablestate.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for stablestate.org could not be found
[13:51:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.17, 6.27, 5.00
[13:52:40] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.23, 5.47, 5.51
[13:57:13] RECOVERY - stablestate.org - reverse DNS on sslhost is OK: rDNS OK - stablestate.org reverse DNS resolves to cp10.miraheze.org
[14:00:53] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 17.14, 21.48, 23.71
[14:02:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.90, 4.48, 5.02
[14:04:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 36.61, 25.76, 24.67
[14:06:41] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.67, 5.84, 5.41
[14:26:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.01, 5.62, 5.90
[14:28:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.22, 5.88, 5.96
[14:30:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.08, 5.51, 5.81
[14:34:54] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.65, 21.87, 23.79
[14:36:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.32, 24.95, 24.69
[14:37:15] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 11.74, 8.57, 6.12
[14:37:33] hmm, very high load times for me
[14:38:09] PROBLEM - wiki.wilderyogi.eu - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.wilderyogi.eu could not be found
[14:38:09] PROBLEM - wiki.mastodon.kr - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.mastodon.kr could not be found
[14:38:12] PROBLEM - de.gyaanipedia.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for de.gyaanipedia.com could not be found
[14:38:14] PROBLEM - dc-multiverse.dcwikis.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for dc-multiverse.dcwikis.com could not be found
[14:38:52] PROBLEM - mw10 Check Gluster Clients on mw10 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[14:39:14] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 4.28, 6.82, 5.78
[14:39:20] PROBLEM - mw8 Check Gluster Clients on mw8 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[14:39:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.54, 3.11, 3.98
[14:40:38] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 2.85, 3.83, 4.88
[14:40:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 17.75, 21.10, 23.21
[14:41:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 3.60, 5.77, 5.51
[14:41:52] PROBLEM - mw10 Puppet on mw10 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/mnt/mediawiki-static]
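
The Check Gluster Clients alerts above simply count glusterfs FUSE client processes, so "0 processes" means the static-files mount dropped on that app server, which is also why Puppet then fails on File[/mnt/mediawiki-static]. A by-hand check and remount might look like the following sketch; the gluster3.miraheze.org:/static source is inferred from the volume name used later in this log, so treat it as an assumption rather than the confirmed mount source:

    # Is the FUSE client running, and is the volume still mounted?
    pgrep -fa /usr/sbin/glusterfs
    mountpoint /mnt/mediawiki-static

    # If not, remount it (server and volume name assumed, not confirmed).
    sudo mount -t glusterfs gluster3.miraheze.org:/static /mnt/mediawiki-static
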
[14:42:51] RECOVERY - mw10 Check Gluster Clients on mw10 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[14:44:48] RECOVERY - wiki.wilderyogi.eu - reverse DNS on sslhost is OK: rDNS OK - wiki.wilderyogi.eu reverse DNS resolves to cp10.miraheze.org
[14:44:48] RECOVERY - wiki.mastodon.kr - reverse DNS on sslhost is OK: rDNS OK - wiki.mastodon.kr reverse DNS resolves to cp11.miraheze.org
[14:44:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.86, 24.29, 23.85
[14:44:53] RECOVERY - de.gyaanipedia.com - reverse DNS on sslhost is OK: rDNS OK - de.gyaanipedia.com reverse DNS resolves to cp10.miraheze.org
[14:44:59] RECOVERY - dc-multiverse.dcwikis.com - reverse DNS on sslhost is OK: rDNS OK - dc-multiverse.dcwikis.com reverse DNS resolves to cp11.miraheze.org
[14:45:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.04, 3.54, 3.83
[14:45:51] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:46:01] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.92, 6.49, 5.61
[14:46:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.94, 23.41, 23.55
[14:47:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.41, 2.78, 3.52
[14:47:58] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.22, 6.19, 5.61
[14:50:56] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.43, 22.89, 23.15
[14:51:25] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 3.33, 2.83, 3.35
[14:52:54] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.77, 21.69, 22.66
[14:54:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.68, 23.74, 23.34
[14:55:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.41, 4.51, 3.97
[14:57:27] PROBLEM - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for phabdigests.bots.miraheze.wiki could not be found
[14:57:29] PROBLEM - baharna.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for baharna.org could not be found
[14:58:53] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.61, 22.45, 23.01
[15:01:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.74, 3.46, 3.73
[15:02:05] !log root@gluster3:/home/paladox# gluster volume set static lookup-optimize off
[15:02:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[15:02:11] !log regenerate backups on bacula2
[15:02:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[15:02:59] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.06, 22.76, 22.75
[15:04:14] RECOVERY - phabdigests.bots.miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - phabdigests.bots.miraheze.wiki reverse DNS resolves to cp11.miraheze.org
[15:04:16] RECOVERY - baharna.org - reverse DNS on sslhost is OK: rDNS OK - baharna.org reverse DNS resolves to cp11.miraheze.org
[15:05:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.31, 4.18, 3.88
[15:06:08] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.87, 6.78, 5.87
[15:07:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.73, 3.56, 3.69
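
lookup-optimize is a GlusterFS distribute (DHT) option: roughly speaking, when it is on the client trusts the directory's hash layout and skips the fan-out lookup across the other bricks when the hashed brick has no entry. Turning it off, as logged above, trades some lookup speed for more conservative behaviour, presumably part of chasing the gluster3 load spikes. The !log line is the literal command; verifying the change afterwards takes one more:

    # Disable the optimisation on the 'static' volume (as in the !log entry)...
    gluster volume set static lookup-optimize off
    # ...then confirm the option is now off.
    gluster volume get static lookup-optimize
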
[15:07:57] !log upgrade gluster to 8.4 on gluster[34] [15:08:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:08:23] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.44, 6.80, 6.13 [15:08:56] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.94, 23.74, 23.45 [15:09:08] !log upgrade glusterfs-client glusterfs-common on mw 8, 9, 10, 11 [15:09:43] PROBLEM - mw8 Puppet on mw8 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/mnt/mediawiki-static] [15:09:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:10:09] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.24, 6.19, 5.90 [15:10:21] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.90, 6.31, 6.02 [15:11:09] I found an undeleteable file on awfulmovies.miraheze.org when I started to mass delete poor quality pages [15:11:11] https://phabricator.miraheze.org/T7228 [15:11:12] [ ⚓ T7228 Undeleteable file on awfulmovieswiki ] - phabricator.miraheze.org [15:11:20] RECOVERY - mw8 Check Gluster Clients on mw8 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [15:12:01] PROBLEM - wiki.twilightsignal.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.twilightsignal.com could not be found [15:12:04] PROBLEM - archive.a2b2.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for archive.a2b2.org could not be found [15:12:08] PROBLEM - oecumene.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for oecumene.org could not be found [15:12:09] PROBLEM - speleo.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for speleo.wiki could not be found [15:12:13] PROBLEM - sportsanalytics.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for sportsanalytics.wiki could not be found [15:12:19] PROBLEM - wiki.nevillepedia.eu - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.nevillepedia.eu could not be found [15:12:19] PROBLEM - translate.petrawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for translate.petrawiki.org could not be found [15:12:20] PROBLEM - portalsofphereon.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for portalsofphereon.com could not be found [15:12:21] PROBLEM - pastport.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for pastport.org could not be found [15:12:21] PROBLEM - www.petrawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.petrawiki.org could not be found [15:12:24] PROBLEM - techwiki.techboyg5blog.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for techwiki.techboyg5blog.com could not be found [15:12:25] PROBLEM - wiki.mcpirevival.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.mcpirevival.tk could not be found [15:12:55] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.29, 24.12, 23.55 [15:13:28] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 18.60, 7.80, 5.07 [15:15:13] PROBLEM - trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for trollpasta.com could not be found [15:15:13] PROBLEM - wiki.ciptamedia.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for 
wiki.ciptamedia.org could not be found [15:15:13] PROBLEM - wiki.etn.link - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.etn.link could not be found [15:15:13] PROBLEM - ao90.pinho.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for ao90.pinho.org could not be found [15:15:14] PROBLEM - wiki.mussa.id - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.mussa.id could not be found [15:15:17] PROBLEM - wiki.villagecollaborative.net - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.villagecollaborative.net could not be found [15:15:19] PROBLEM - www.opendatascot.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.opendatascot.org could not be found [15:16:02] !log upgrade glusterfs-client glusterfs-common on jobrunner[34] [15:16:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:16:41] !log upgrade glusterfs-client glusterfs-common on test3 [15:16:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:18:45] RECOVERY - archive.a2b2.org - reverse DNS on sslhost is OK: rDNS OK - archive.a2b2.org reverse DNS resolves to cp10.miraheze.org [15:18:52] RECOVERY - speleo.wiki - reverse DNS on sslhost is OK: rDNS OK - speleo.wiki reverse DNS resolves to cp11.miraheze.org [15:18:58] RECOVERY - sportsanalytics.wiki - reverse DNS on sslhost is OK: rDNS OK - sportsanalytics.wiki reverse DNS resolves to cp10.miraheze.org [15:18:59] RECOVERY - wiki.twilightsignal.com - reverse DNS on sslhost is OK: rDNS OK - wiki.twilightsignal.com reverse DNS resolves to cp11.miraheze.org [15:19:07] RECOVERY - oecumene.org - reverse DNS on sslhost is OK: rDNS OK - oecumene.org reverse DNS resolves to cp11.miraheze.org [15:19:08] RECOVERY - portalsofphereon.com - reverse DNS on sslhost is OK: rDNS OK - portalsofphereon.com reverse DNS resolves to cp10.miraheze.org [15:19:10] RECOVERY - pastport.org - reverse DNS on sslhost is OK: rDNS OK - pastport.org reverse DNS resolves to cp11.miraheze.org [15:19:11] RECOVERY - www.petrawiki.org - reverse DNS on sslhost is OK: rDNS OK - www.petrawiki.org reverse DNS resolves to cp10.miraheze.org [15:19:11] RECOVERY - translate.petrawiki.org - reverse DNS on sslhost is OK: rDNS OK - translate.petrawiki.org reverse DNS resolves to cp10.miraheze.org [15:19:11] RECOVERY - wiki.nevillepedia.eu - reverse DNS on sslhost is OK: rDNS OK - wiki.nevillepedia.eu reverse DNS resolves to cp11.miraheze.org [15:19:20] RECOVERY - techwiki.techboyg5blog.com - reverse DNS on sslhost is OK: rDNS OK - techwiki.techboyg5blog.com reverse DNS resolves to cp11.miraheze.org [15:19:24] RECOVERY - wiki.mcpirevival.tk - reverse DNS on sslhost is OK: rDNS OK - wiki.mcpirevival.tk reverse DNS resolves to cp11.miraheze.org [15:22:02] RECOVERY - wiki.villagecollaborative.net - reverse DNS on sslhost is OK: rDNS OK - wiki.villagecollaborative.net reverse DNS resolves to cp10.miraheze.org [15:22:04] RECOVERY - www.opendatascot.org - reverse DNS on sslhost is OK: rDNS OK - www.opendatascot.org reverse DNS resolves to cp10.miraheze.org [15:22:09] RECOVERY - trollpasta.com - reverse DNS on sslhost is OK: rDNS OK - trollpasta.com reverse DNS resolves to cp10.miraheze.org [15:22:09] RECOVERY - wiki.ciptamedia.org - reverse DNS on sslhost is OK: rDNS OK - wiki.ciptamedia.org reverse DNS resolves to cp10.miraheze.org [15:22:09] RECOVERY - wiki.etn.link - reverse DNS on sslhost is OK: rDNS OK - wiki.etn.link reverse DNS resolves 
to cp10.miraheze.org [15:22:10] RECOVERY - ao90.pinho.org - reverse DNS on sslhost is OK: rDNS OK - ao90.pinho.org reverse DNS resolves to cp10.miraheze.org [15:22:11] RECOVERY - wiki.mussa.id - reverse DNS on sslhost is OK: rDNS OK - wiki.mussa.id reverse DNS resolves to cp11.miraheze.org [15:22:52] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.47, 5.02, 4.34 [15:24:14] PROBLEM - viileapedia.ga - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for viileapedia.ga could not be found [15:24:48] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.60, 4.37, 4.18 [15:26:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 14.75, 20.21, 22.96 [15:31:10] RECOVERY - viileapedia.ga - reverse DNS on sslhost is OK: rDNS OK - viileapedia.ga reverse DNS resolves to cp10.miraheze.org [15:31:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.51, 2.72, 3.82 [15:36:08] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [15:36:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 31.06, 23.96, 22.83 [15:37:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.25, 4.54, 4.12 [15:42:36] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 2001:41d0:800:178a::5/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb [15:42:56] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.65, 5.80, 4.82 [15:43:11] PROBLEM - wiki.insideearth.info - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.insideearth.info could not be found [15:43:12] PROBLEM - dariawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for dariawiki.org could not be found [15:44:33] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [15:44:52] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.28, 4.98, 4.64 [15:49:03] Reception123: can you apply https://paste.taavi.wtf/?e4cd421ee9b83398#BL9weg4JFevhukZZmk8LLc8grj2r4Jyb13Q7NeNKSmUG [15:49:04] [ PrivateBin ] - paste.taavi.wtf [15:49:53] RECOVERY - wiki.insideearth.info - reverse DNS on sslhost is OK: rDNS OK - wiki.insideearth.info reverse DNS resolves to cp10.miraheze.org [15:49:55] RECOVERY - dariawiki.org - reverse DNS on sslhost is OK: rDNS OK - dariawiki.org reverse DNS resolves to cp10.miraheze.org [15:50:42] don't seem to have access to that [15:53:33] Reception123: huh, loads for me [15:55:05] for me https://usercontent.irccloud-cdn.com/file/LCpeAJrs/image.png [15:55:42] I get the same error as Reception123 [15:55:55] so likely RhinosF1 has view access, I'm guessing [15:56:15] Weird [15:56:57] Do you have another private bin elsewhere you can copy it to, to send to Reception123? [15:57:17] unless Majavah is around, of course [15:57:36] he's sent it to me now via DM :) [15:57:40] dmehus: with db transaction errors, if they happen once, it's probably not something we can fix. Twice is worth a closer look and 3 times is an issue.
[15:57:58] RhinosF1, ah [15:58:09] Reception123, ah okay, cool :) [15:58:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.41, 22.14, 23.62 [15:59:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.21, 3.06, 3.98 [15:59:25] dmehus: that's my rule with anything unlikely to happen. Once is a note for next time. Twice is look closer. Three times is suspicious. [15:59:57] Yup, that's what I thought too, one individual issue isn't enough to warrant a further investigation really [16:03:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.83, 3.88, 4.00 [16:05:02] RhinosF1, is that in reference to the weird image deletion issue, or the private pastebin not being visible for Reception123 and me? I know it's your rule, so could apply to both, but just wondering the context in which you were replying [16:05:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.90, 3.46, 3.83 [16:05:33] dmehus: the image deletion specifically but it's a general life rule [16:06:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.94, 24.66, 23.62 [16:08:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.99, 22.53, 22.96 [16:09:22] RhinosF1, ack, yeah [16:09:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 8.59, 4.27, 3.96 [16:10:54] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.99, 23.02, 23.05 [16:11:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.77, 3.57, 3.75 [16:12:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 15.72, 20.40, 22.10 [16:15:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.40, 3.28, 3.53 [16:15:39] Hi Universal_Omega [16:16:09] Hi RhinosF1 [16:17:25] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.82, 2.77, 3.31 [16:20:56] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 13.94, 16.64, 19.69 [16:27:13] hey Universal_Omega [16:27:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.82, 4.91, 3.91 [16:27:40] Hi dmehus [16:28:41] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.21, 4.69, 4.13 [16:29:47] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.85, 22.80, 21.29 [16:31:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.83, 3.87, 3.73 [16:33:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.98, 5.20, 4.21 [16:33:51] [02MirahezeMagic] 07supertassu opened pull request 03#251: Add email notifications on privileged actions - 13https://git.io/J34pI [16:36:38] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.73, 4.86, 4.43 [16:37:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.44, 3.77, 3.85 [16:37:32] Majavah: so we add watched-person to T&S global group [16:37:51] RhinosF1: yes, or whatever you end up naming the right [16:38:08] Majavah: cool [16:38:14] I can set up tonight [16:39:39] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.46, 23.94, 22.96 [16:41:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 10.78, 8.12, 5.55 [16:41:36] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 28.61, 25.99, 23.84 [16:41:50] Reception123: unless
you want to [16:44:19] RhinosF1: go ahead if you want :) [16:44:39] Reception123: I'm mobile for a bit [16:49:01] PROBLEM - miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for miraheze.wiki could not be found [16:49:02] PROBLEM - archive.stellurgists.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for archive.stellurgists.wiki could not be found [16:49:05] PROBLEM - factorio-mods.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for factorio-mods.tk could not be found [16:49:05] PROBLEM - wiki.danvs.net - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.danvs.net could not be found [16:49:06] PROBLEM - celeste.ink - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for celeste.ink could not be found [16:49:07] PROBLEM - rupowerrangersfanon.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for rupowerrangersfanon.tk could not be found [16:49:08] PROBLEM - bots.miraheze.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for bots.miraheze.wiki could not be found [16:49:08] PROBLEM - www.programming.red - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.programming.red could not be found [16:49:10] PROBLEM - kkutu.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for kkutu.wiki could not be found [16:51:30] How long would a bit be? If it's just merging MM and the settings I could do it [16:51:32] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.21, 23.78, 23.84 [16:55:40] RECOVERY - miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - miraheze.wiki reverse DNS resolves to cp11.miraheze.org [16:55:47] RECOVERY - celeste.ink - reverse DNS on sslhost is OK: rDNS OK - celeste.ink reverse DNS resolves to cp11.miraheze.org [16:55:47] RECOVERY - factorio-mods.tk - reverse DNS on sslhost is OK: rDNS OK - factorio-mods.tk reverse DNS resolves to cp11.miraheze.org [16:55:52] RECOVERY - bots.miraheze.wiki - reverse DNS on sslhost is OK: rDNS OK - bots.miraheze.wiki reverse DNS resolves to cp10.miraheze.org [16:55:53] RECOVERY - rupowerrangersfanon.tk - reverse DNS on sslhost is OK: rDNS OK - rupowerrangersfanon.tk reverse DNS resolves to cp11.miraheze.org [16:55:53] RECOVERY - www.programming.red - reverse DNS on sslhost is OK: rDNS OK - www.programming.red reverse DNS resolves to cp11.miraheze.org [16:55:54] RECOVERY - kkutu.wiki - reverse DNS on sslhost is OK: rDNS OK - kkutu.wiki reverse DNS resolves to cp10.miraheze.org [16:55:58] RECOVERY - archive.stellurgists.wiki - reverse DNS on sslhost is OK: rDNS OK - archive.stellurgists.wiki reverse DNS resolves to cp10.miraheze.org [16:56:05] RECOVERY - wiki.danvs.net - reverse DNS on sslhost is OK: rDNS OK - wiki.danvs.net reverse DNS resolves to cp10.miraheze.org [16:57:28] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 35.21, 26.75, 24.67 [16:57:53] miraheze/MirahezeMagic - Reception123 the build passed.
[16:59:26] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.16, 23.55, 23.77 [17:01:16] Reception123: few hours [17:01:24] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.08, 25.05, 24.34 [17:03:35] I can try I guess [17:08:41] Reception123: make sure you add the right to the blacklist [17:08:50] ah yeah, thanks [17:11:24] * Majavah reminds Reception123 that it's untested [17:11:39] Majavah: yeah, will test it out before implementing of course :) [17:11:49] test3 can have some fun [17:14:28] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3BkO [17:14:29] [02miraheze/mw-config] 07Reception123 03c26fbaf - email notifications on privileged actions configuration T7197 [17:14:31] [02mw-config] 07Reception123 created branch 03Reception123-patch-1 - 13https://git.io/vbvb3 [17:14:32] [02mw-config] 07Reception123 opened pull request 03#3869: email notifications on privileged actions configuration T7197 - 13https://git.io/J3Bk3 [17:15:05] RhinosF1: one thing though, would the email-sending part actually work on test3? [17:15:09] that's my only issue [17:15:27] Reception123: yes [17:15:33] miraheze/mw-config - Reception123 the build has errored. [17:16:30] [02mw-config] 07RhinosF1 reviewed pull request 03#3869 commit - 13https://git.io/J3BkK [17:16:32] I'll fix the syntax [17:16:45] Majavah: can you look at my review comment [17:16:58] RhinosF1: there's already the + on the variable though [17:17:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.56, 2.91, 3.96 [17:17:26] I don't think default will apply otherwise [17:17:48] [02mw-config] 07supertassu reviewed pull request 03#3869 commit - 13https://git.io/J3BkQ [17:18:32] RhinosF1: on mobile atm and not sure [17:18:43] Ok [17:19:06] [02mw-config] 07dmehus reviewed pull request 03#3869 commit - 13https://git.io/J3BIv [17:19:27] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 31.93, 13.61, 7.70 [17:19:42] dmehus: that's a good point [17:19:58] Reception123: it should work if we set available rights on just meta [17:20:08] RhinosF1, ack [17:20:20] Because then we can edit the global group [17:20:24] yeah [17:20:35] and it's already blacklisted so no one else can add it [17:20:38] Ye [17:20:58] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3BLW [17:21:00] [02miraheze/mw-config] 07Reception123 036ffcfc6 - fix wgconf end [17:21:01] [02mw-config] 07Reception123 synchronize pull request 03#3869: email notifications on privileged actions configuration T7197 - 13https://git.io/J3Bk3 [17:21:13] that was a weird error, I didn't realise wgConf ended there [17:21:19] oh [17:21:57] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.62, 5.24, 4.73 [17:21:58] miraheze/mw-config - Reception123 the build passed.
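Context for the `];` fix above: mw-config keeps every per-wiki override inside one large `$wgConf->settings` array, and a `+` prefix on a setting name asks MediaWiki's SiteConfiguration to merge the per-wiki value into the default rather than overwrite it. A minimal sketch of the shape being discussed, assuming the usual mw-config conventions (the setting key here is illustrative, not necessarily the exact one in PR #3869):

    $wgConf->settings = [
        // '+' = merge this array into the setting's default value
        // instead of replacing it outright.
        '+wgAvailableRights' => [
            'metawiki' => [ 'watched-person' ],
        ],
        // ... hundreds of other settings ...
    ]; // <- the array closes here; config pasted after this `];` sits
       //    outside $wgConf->settings, which is the syntax error hit above

Registering the right via wgAvailableRights on Meta alone is enough for it to become assignable to a CentralAuth global group from there, which is why "default" could later be swapped for the meta key.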
[17:22:18] dmehus: yeah, the wgConf definition ended with the ]; at the end [17:22:59] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3BLi [17:23:00] [02miraheze/mw-config] 07Reception123 03022941a - array for watched-person group [17:23:02] [02mw-config] 07Reception123 synchronize pull request 03#3869: email notifications on privileged actions configuration T7197 - 13https://git.io/J3Bk3 [17:23:56] Reception123: swap default for meta then go config everywhere + magic go to test3 [17:23:59] miraheze/mw-config - Reception123 the build passed. [17:24:47] Reception123, I was looking for a missed comma following a square bracket, but it ended up being that `];` thing [17:25:42] dmehus: I think that's all bar training stopping you taking over [17:25:52] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.01, 4.81, 4.67 [17:26:04] yeah that wasn't too easy to catch but everything else was okay so I thought it had to be wgConf considering I added the config at the end of the file [17:26:17] RhinosF1: but if it's only on Meta, would it work globally? [17:26:46] and what do you mean by "+ magic"? [17:26:48] Reception123: ye because we only need it to get it onto the global group from meta [17:26:56] + == and [17:27:26] Config can go everywhere then magic PR to test3 to test [17:27:29] RhinosF1: ah sorry I misunderstood and thought you meant I had to change something on MM [17:27:46] but okay we'll try meta then, though I'll replace that with test3 locally [17:28:04] Ye [17:28:04] Reception123, I wasn't sure if the right was needed on all wikis to send the notification via e-mail, but maybe not [17:28:24] I doubt it because global groups are weird [17:28:26] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3BtX [17:28:27] [02miraheze/mw-config] 07Reception123 0387d61b4 - default=>meta [17:28:29] [02mw-config] 07Reception123 synchronize pull request 03#3869: email notifications on privileged actions configuration T7197 - 13https://git.io/J3Bk3 [17:28:33] RhinosF1, yeah [17:28:48] `metawiki` would definitely be preferred, I think [17:29:24] miraheze/mw-config - Reception123 the build passed. [17:33:23] {"user_name":"Reception123","wiki_id":"test3wiki","log_type":"delete\/delete","comment_text":""} [17:33:23] yay! [17:33:42] \o/ [17:33:46] Roll it out [17:33:49] we'll still need to test even when it's in prod to see if it sends them for actions on an external wiki [17:33:51] just to make sure [17:33:52] Let Owen know [17:33:55] Ye [17:33:56] external = not meta [17:34:06] Well you can make it on test3 once you reset [17:34:08] wait it actually works? [17:34:12] It does [17:34:13] Reception123, oh nice :) [17:34:14] Majavah: yup, got an email!
:) [17:34:21] great work [17:34:24] thanks for the code, Majavah :) [17:34:24] [02MirahezeMagic] 07Reception123 closed pull request 03#251: Add email notifications on privileged actions - 13https://git.io/J34pI [17:34:25] oh nice [17:34:25] [02miraheze/MirahezeMagic] 07Reception123 pushed 031 commit to 03master [+2/-0/±2] 13https://git.io/J3BqD [17:34:27] [02miraheze/MirahezeMagic] 07supertassu 03a2273ef - Add email notifications on privileged actions (#251) [17:34:33] yup, thanks a lot for doing it so quickly too [17:34:39] Just make sure to put it onto T&S not Sysadmin [17:34:56] it'll likely need filtering for things like account auto creation at some point [17:35:11] [02miraheze/mediawiki] 07Reception123 pushed 031 commit to 03REL1_35 [+0/-0/±1] 13https://git.io/J3Bqj [17:35:12] [02miraheze/mediawiki] 07Reception123 03c14e85f - Update MirahezeMagic [17:35:15] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 9.42, 6.99, 5.34 [17:35:29] miraheze/MirahezeMagic - Reception123 the build passed. [17:35:43] Majavah, oh yeah, currently you think it might pick up the CA auto account creation logs? [17:35:48] RhinosF1: yeah, after I confirm that it works on other wikis though [17:36:00] dmehus: likely yes [17:36:04] ah [17:36:16] Yep [17:36:23] and basically everything else like creating pages, etc [17:36:34] true [17:36:38] dmehus: anything that's logged [17:36:47] there's a reason it said "untested proof of concept" [17:36:52] [02mw-config] 07Reception123 closed pull request 03#3869: email notifications on privileged actions configuration T7197 - 13https://git.io/J3Bk3 [17:36:53] I know our wiki deletion script doesn't count CA account creations, so we can probably borrow code from there [17:36:53] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/J3BmC [17:36:55] [02miraheze/mw-config] 07Reception123 0339cf010 - email notifications on privileged actions configuration T7197 (#3869) [17:36:56] [02mw-config] 07Reception123 synchronize pull request 03#3845: Merge 'master' into REL1_36 - 13https://git.io/JOKA0 [17:37:08] [02miraheze/mw-config] 07Reception123 deleted branch 03Reception123-patch-1 [17:37:10] [02mw-config] 07Reception123 deleted branch 03Reception123-patch-1 - 13https://git.io/vbvb3 [17:37:12] dmehus: I think you're using separate accounts though too [17:37:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 6.18, 6.53, 5.37 [17:37:20] Like Owen has Owen (Miraheze) [17:37:24] RhinosF1, yeah definitely [17:37:52] miraheze/mw-config - Reception123 the build passed. [17:38:03] How were we able to add it to the `sysadmin` group on `test3wiki`, though? Didn't we only add `watched-person` to `metawiki`? [17:38:18] Local hacking [17:38:25] ?
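The JSON blob pasted at 17:33 is the body of the test email. A minimal sketch of the mechanism being tested, assuming a core log hook — this is not the merged MirahezeMagic code, and the hook choice, class name and addresses here are illustrative:

    class LogEmailHooks {
        /**
         * ManualLogEntryBeforePublish: core hook that fires for every
         * log entry just before it is published.
         */
        public static function onManualLogEntryBeforePublish( ManualLogEntry $logEntry ) {
            $performer = $logEntry->getPerformer();
            // Only act when the performer holds the watched right.
            if ( !$performer->isAllowed( 'watched-person' ) ) {
                return;
            }
            // Mirrors the fields seen in the test output above; json_encode()
            // escapes '/' by default, hence "delete\/delete".
            $body = json_encode( [
                'user_name' => $performer->getName(),
                'wiki_id' => WikiMap::getCurrentWikiId(),
                'log_type' => $logEntry->getType() . '/' . $logEntry->getSubtype(),
                'comment_text' => $logEntry->getComment(),
            ] );
            // Placeholder addresses, not the real recipients.
            UserMailer::send(
                new MailAddress( 'alerts@example.org' ),
                new MailAddress( 'noreply@example.org' ),
                'Privileged action performed',
                $body
            );
        }
    }

Because the hook fires for anything that is logged, an unfiltered version would also mail out page creations and CentralAuth auto account creations, which is the filtering concern raised above.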
[17:38:36] oh [17:38:42] you mean you edited it locally [17:38:45] on test3 [17:38:46] yup [17:38:48] yeah that makes sense [17:38:50] Reception123 did [17:38:51] and then I added it to the global group via test3 [17:38:57] ah, yeah [17:39:14] heh yeah there's lots of local hacking on test3wiki [17:39:24] of course that's only done on test3, as if it were done on mw* it would cause discrepancies between the mw*s which would not be good [17:39:25] There's not [17:39:36] well there is a lot but it's not permanent [17:39:40] after you're done with it you get rid of it [17:39:40] yeah [17:39:53] not a lot, but yeah just testing extensions and things [17:41:08] Reception123: is it still working from meta and another wiki [17:41:16] If so I'll do the global group editing from meta [17:42:05] maybe Owen should add the right to `trustandsafety`? Not sure, but I'd check with him first [17:42:27] RhinosF1: haven't tested it yet [17:42:29] puppet is slow [17:42:33] Reception123: ack [17:42:37] ah [17:42:42] I'll test it to make sure it works [17:42:48] then change the email to owen@ and then we can make the switch [17:42:50] dmehus: it's a technical change already approved though [17:43:12] yeah I think it's fine if we add it as it was an explicit request made on Phab [17:43:38] RhinosF1, yeah, true. Yeah, it's just to define who is notified, not really a user right of any significance [17:43:54] dmehus: I wonder what i18n we should give it [17:43:58] we don't want something too long [17:44:02] Reception123, yeah maybe link the Phab ticket in the log summary [17:44:05] I guess "Email notifications on actions" maybe? [17:44:27] or alternatively we could just leave the name as "Watched user" as well [17:44:31] Reception123, yeah that could work, maybe "Email notifications on log actions" [17:44:38] a bit clearer and still short [17:44:53] !log sudo puppet agent -tv && sudo -u www-data php /srv/mediawiki/w/maintenance/mergeMessageFileList.php --output /srv/mediawiki/config/ExtensionMessageFiles.php --wiki loginwiki && sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildLocalisationCache.php --wiki loginwiki on mw* [17:45:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:45:35] Reception123, that should update the i18n messages I added a few days ago too now, right? [17:45:53] PROBLEM - mw10 Puppet on mw10 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [17:45:53] yes [17:45:57] cool [17:45:58] that's why I did it, that + others [17:46:04] oh heh [17:46:05] urgh nevermind [17:46:07] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 12.90, 8.45, 6.52 [17:46:09] puppet3 is so deceiving! [17:46:16] it shows the text in green even though it's a fail [17:46:26] normally the text is red if you run it locally [17:46:56] oh? [17:47:14] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 8.92, 7.50, 6.20 [17:47:15] The thing to run commands everywhere [17:47:23] PROBLEM - mw8 Puppet on mw8 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [17:47:46] dmehus: yeah, salt [17:47:51] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures [17:48:14] PROBLEM - mw9 Puppet on mw9 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [17:48:48] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.52, 7.17, 6.10 [17:49:07] Reception123, oh [17:50:33] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.34, 7.03, 6.05 [17:50:48] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.44, 6.60, 6.03 [17:50:51] PROBLEM - jobrunner3 Puppet on jobrunner3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [17:52:33] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 5.74, 6.32, 5.90 [17:52:55] [02mw-config] 07Universal-Omega reviewed pull request 03#3869 commit - 13https://git.io/J3B3n [17:53:46] [02mw-config] 07Reception123 reviewed pull request 03#3869 commit - 13https://git.io/J3B3g [17:55:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.76, 2.63, 3.78 [17:55:26] That solution seems a lot more complicated than it needed to be [17:56:07] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.59, 7.52, 7.27 [17:57:14] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 5.15, 7.09, 7.04 [17:57:24] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [17:57:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 7.04, 3.59, 3.95 [17:58:41] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.55, 5.15, 4.53 [17:59:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.38, 6.21, 6.72 [17:59:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.53, 3.29, 3.78 [17:59:40] well it did appear to be urgent, perhaps it can be improved/made simpler in the future [18:00:28] I provided the solution twice though, once on Monday when asked how to do it [18:01:25] not sure why that wasn't followed then [18:02:07] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 8.03, 7.26, 7.14 [18:02:13] Plus what happens if the user right is removed? It allows a degree of malice, while user group membership monitoring wouldn’t as it can’t be changed easily [18:02:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.43, 5.55, 4.87 [18:02:57] JohnLewis: well wouldn't there be a notification about the user right itself having been removed? [18:02:59] or would that not work? [18:03:14] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 9.20, 7.28, 7.00 [18:03:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.08, 3.75, 3.78 [18:03:42] It would, but then all future notifications wouldn’t - so you’d have to search all wikis for log actions if malice is involved [18:04:22] Yeah, fair enough. [18:04:27] JohnLewis: Do you think we should wait before deploying then or just create a task for this to be addressed after? [18:04:33] PROBLEM - graylog2 Current Load on graylog2 is WARNING: WARNING - load average: 3.53, 2.89, 2.15 [18:04:40] what's up with these load alerts?
[18:04:52] RECOVERY - jobrunner3 Puppet on jobrunner3 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures [18:05:14] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.48, 7.27, 7.03 [18:05:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.32, 3.25, 3.59 [18:06:03] https://grafana.miraheze.org/d/xtkCtBkiz/prometheus-blackbox-exporter-test-ferran-tufan?viewPanel=217&orgId=1&from=1619373938761&to=1619978738762 [18:06:03] [ Grafana ] - grafana.miraheze.org [18:06:34] RECOVERY - graylog2 Current Load on graylog2 is OK: OK - load average: 2.12, 2.55, 2.12 [18:06:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.86, 4.73, 4.70 [18:06:46] I've got no idea, all I've done is run puppet [18:07:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.66, 2.71, 3.35 [18:07:58] SPF|Cloud: it's gluster [18:08:01] increased disk utilization on cloud[45] since 01:00 today [18:08:07] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 5.47, 7.57, 7.46 [18:08:09] (utc) [18:08:25] i thought stopping the backup (as i wanted to regenerate all the backups since it was bound to be out of date) would help [18:08:29] but the load was still high [18:08:34] and i have no idea [18:09:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.34, 6.54, 6.80 [18:09:41] Reception123: I don’t know, I’m just reading how the request was worded and my previous suggestions. It would probably be up to Owen but almost 7 days for an urgent request for something we as a team want to get rid of is poor to say the least, especially false promises of ‘it’ll be done tomorrow’ [18:10:19] gluster4 has been back to normal for 20 minutes, gluster3 not yet [18:10:55] JohnLewis: Yeah, the timing wasn't ideal at all. I'll ask Owen what he wants to do considering this oversight, whether he'd rather proceed and we get another task to fix this or whether he doesn't want to proceed until it's fixed [18:12:03] and now phabricator is being backed up [18:13:13] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 6.24, 7.07, 7.02 [18:13:31] yeh, i did it: db11, db13, phab, private then static [18:14:21] Asked Owen if he'd rather wait or go through with it and create a new task [18:14:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.19, 22.12, 23.96 [18:15:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.69, 6.46, 6.80 [18:16:00] He said it has to be addressed first. cc RhinosF1 [18:18:05] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=graylog2.miraheze.org&var-port=9100&from=now-24h&to=now-1m [18:18:05] [ Grafana ] - grafana.miraheze.org [18:18:06] hmm [18:18:10] RECOVERY - bacula2 Bacula Phabricator Static on bacula2 is OK: OK: Full, 88317 files, 9.008GB, 2021-05-02 18:16:00 (2.2 minutes ago) [18:18:12] SPF|Cloud: ^ [18:18:20] Reception123: can we move it to be set in LS.php [18:18:25] good [18:18:28] Which is what I originally suggested [18:18:36] Well today [18:18:49] !log restart graylog-server on graylog2 [18:18:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 31.38, 24.59, 24.38 [18:18:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:18:59] how though?
[18:19:07] That was a question [18:19:22] !log restart elasticsearch on graylog2 [18:19:24] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:20:00] especially cache proxies don't like high I/O on the host [18:20:07] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 9.42, 7.66, 7.29 [18:20:35] hmm es is taking forever to restart [18:20:55] RhinosF1: well I don't see how, I didn't design it. The issue is to not have it as a group that can be removed easily by someone [18:20:56] ok restarted [18:21:22] PROBLEM - graylog2 HTTPS on graylog2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 311 bytes in 0.005 second response time [18:22:08] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.19, 7.05, 7.11 [18:22:11] Reception123: do we assign any global rights in LS [18:22:15] Majavah: ^ [18:22:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.99, 23.21, 23.93 [18:23:10] !log restart mongod on graylog2 [18:23:11] don't think so, we've always used the special page [18:23:14] RhinosF1: CA does not allow that [18:23:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:23:22] RECOVERY - bacula2 Bacula Private Git on bacula2 is OK: OK: Full, 5691 files, 27.00MB, 2021-05-02 18:21:00 (2.4 minutes ago) [18:23:44] Majavah: urgh [18:23:51] I hate MediaWiki at times [18:24:00] Can we make your thing just check global groups [18:24:09] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.39, 6.09, 6.75 [18:24:30] !log killed puppet process (long running) on graylog2 [18:24:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:24:41] still on mobile for maybe next 15 mins [18:24:49] !log killed ssh-salt process on graylog2 (never got rid of?) [18:24:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:24:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 23.21, 24.44, 24.39 [18:25:39] Ack [18:25:44] should be fairly simple, just get the CentralAuthUser instance and get the groups for that [18:26:28] That's still French [18:26:36] (Which was my worst grade) [18:27:20] RECOVERY - graylog2 HTTPS on graylog2 is OK: HTTP OK: HTTP/1.1 200 OK - 1670 bytes in 0.608 second response time [18:27:36] Which is what I said on Monday... I don’t know why people ask for my opinion and help at times [18:28:40] [02miraheze/puppet] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/J3Blm [18:28:42] [02miraheze/puppet] 07Southparkfan 032a1f1df - Install sshfs on database servers (T5877) [18:30:35] PROBLEM - graylog2 Current Load on graylog2 is WARNING: WARNING - load average: 2.22, 3.91, 2.95 [18:32:03] SPF|Cloud: https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?viewPanel=287&orgId=1&from=now-24h&to=now-1m&var-job=node&var-node=cloud4.miraheze.org&var-port=9100 [18:32:03] [ Grafana ] - grafana.miraheze.org [18:32:05] JohnLewis: I was asked to get a solution as fast as possible without any extra details, that was my best idea :/ [18:32:35] RECOVERY - graylog2 Current Load on graylog2 is OK: OK - load average: 0.46, 2.67, 2.61 [18:33:48] Majavah: no, it's fine. This was requested a week ago and I gave a solution a week ago, on the same day the request was made.
The solution you produced works well [18:34:03] Just doesn’t meet the requirements, which you weren’t told it seems :( [18:34:25] !log killed long running puppet process on cp10 & salt process [18:34:36] !log perform backup on all database servers, T5877 [18:35:07] oh my god [18:35:28] apparently stopping salt midway leaves the process open on all servers [18:35:34] bot crashed? [18:35:37] paladox: yup, that's what I noticed with puppet too [18:35:44] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.05, 6.35, 4.47 [18:35:45] if you run it, it keeps running even if you ctrl+c on puppet3 [18:36:05] ok, i'll kill it [18:36:13] !log killed long running puppet process on cloud4 & salt process [18:36:23] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures [18:36:46] !log killed long running puppet process on gluster[34] & salt process [18:37:02] PROBLEM - mon2 icinga.miraheze.org HTTPS on mon2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 311 bytes in 0.003 second response time [18:37:10] PROBLEM - mail2 webmail.miraheze.org HTTPS on mail2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:37:25] phabricator is unresponsive [18:37:41] PROBLEM - wiki.autocountsoft.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.autocountsoft.com could not be found [18:37:50] PROBLEM - wiki.ct777.cf - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.ct777.cf could not be found [18:38:12] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb [18:38:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb, 2607:5300:205:200::1c30/cpweb [18:38:27] must be the backups [18:38:33] !log killed long running puppet process on mw10 & salt process [18:38:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:38:53] !log stop database backups [18:38:59] [02MirahezeMagic] 07supertassu opened pull request 03#252: LogEmail: Use global groups instead of rights - 13https://git.io/J3B8F [18:39:12] JohnLewis: RhinosF1: (maybe others): ^ [18:39:12] RECOVERY - mail2 webmail.miraheze.org HTTPS on mail2 is OK: HTTP OK: HTTP/1.1 200 OK - 5664 bytes in 3.339 second response time [18:39:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:39:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:39:54] miraheze/MirahezeMagic - supertassu the build passed.
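Majavah's #252 follows the approach he sketched earlier: instead of testing a user right, which someone with enough access could quietly strip from the group, look up the performer's CentralAuth global groups directly. A hedged fragment of that idea (the group name is illustrative):

    // Drop-in replacement for the isAllowed() check in the earlier sketch.
    $caUser = CentralAuthUser::getInstance( $performer );
    if ( !$caUser->exists()
        || !in_array( 'trustandsafety', $caUser->getGlobalGroups(), true )
    ) {
        return;
    }

Global group membership changes are themselves logged and are harder to alter unnoticed than a right buried in a group definition, which is the weakness JohnLewis flags above.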
[18:40:09] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [18:40:10] stopping backups fixes the outage [18:40:15] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:40:30] Majavah, oh nice :) [18:40:32] !log killed long running puppet process on mw11 & salt process [18:40:36] !log killed long running puppet process on jobrunner[34] & salt process [18:40:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:40:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:40:53] RECOVERY - mon2 icinga.miraheze.org HTTPS on mon2 is OK: HTTP OK: HTTP/1.1 302 Found - 308 bytes in 0.007 second response time [18:41:00] SPF|Cloud: FYI mw10 went bump earlier today [18:41:04] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.69, 21.32, 23.67 [18:41:13] 'went bump'? [18:41:30] SPF|Cloud: down for like a minute at most [18:41:34] oh? [18:41:58] !log killed long running puppet process on cloud[53] & salt process [18:42:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:42:41] !log killed long running puppet process on cp11 & salt process [18:42:44] SPF|Cloud: 08:44 UK time [18:42:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:42:54] so... running database backups on masters brings our services to a halt [18:43:02] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.15, 23.49, 24.19 [18:43:12] that's bad [18:43:31] !log killed long running puppet process on cp12 & salt process [18:43:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:43:43] oh wait... I forgot a flag to reduce database locking [18:44:16] !log killed long running puppet process on db 11, 12 and 13 & salt process [18:44:21] RECOVERY - wiki.autocountsoft.com - reverse DNS on sslhost is OK: rDNS OK - wiki.autocountsoft.com reverse DNS resolves to cp11.miraheze.org [18:44:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:44:49] RhinosF1: is there a task for investigation? [18:44:53] RECOVERY - wiki.ct777.cf - reverse DNS on sslhost is OK: rDNS OK - wiki.ct777.cf reverse DNS resolves to cp10.miraheze.org [18:45:04] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.65, 6.26, 5.25 [18:45:18] !log killed long running puppet process on mon2 & salt process [18:45:32] SPF|Cloud: no, didn't cause more than a minute's slowness but I can create one [18:45:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:46:28] !log killed long running puppet process on mail2 and ldap2 & salt process [18:46:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:46:34] if you think this issue is solely an issue with mw10, you can [18:46:53] if it was a one-off, up to you [18:47:02] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.25, 5.71, 5.18 [18:47:06] SPF|Cloud: I'd say most likely something important crashed and auto recovered [18:47:17] gluster again? [18:47:23] !log killed long running puppet process on mem[12] & salt process [18:47:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:47:45] SPF|Cloud: it could be. Beyond what icinga says for that time, I don't know anything more.
[18:48:45] well that didn't fix the disk usage [18:48:56] gluster still high and cloud4 is showing almost 100% utilisation [18:49:00] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.70, 21.92, 23.54 [18:49:38] site still online folks? [18:50:57] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.49, 5.89, 5.35 [18:50:58] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.11, 23.70, 23.98 [18:51:01] SPF|Cloud, seems to be. Decent page load times, too, `Queued Jobs: 0326ms (PHP7 via metawiki@mw11 / cp12)` [18:51:09] ok, looking at iotop on cloud4 [18:51:14] it shows gluster3 as the main one [18:52:53] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.37, 5.56, 5.28 [18:52:56] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.29, 23.17, 23.75 [18:53:07] dmehus: thanks, still the same? [18:53:15] should i reboot gluster3 [18:53:26] !log restarted database backups (but with --trx-consistency-only enabled) [18:53:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:53:46] paladox: is there an increase in file retrieval rate? [18:53:46] Queued Jobs: 0194ms (PHP7 via metawiki@mw10 / cp11 [18:54:11] great, then the outage was my fault [18:54:18] writing an incident report [18:54:20] I just see a lot of gluster3 processes. Though I guess bacula is backing up now. But when it stopped things didn't get better. [18:54:24] human error [18:55:01] [02MirahezeMagic] 07Reception123 commented on pull request 03#252: LogEmail: Use global groups instead of rights - 13https://git.io/J3B0u [18:55:32] paladox: if the situation doesn't get better after stopping bacula-fd, it's not bacula [18:55:51] yeh it's not bacula but i don't know what else it could be. [18:56:07] do you monitor the request rate? [18:56:25] has there been an increase in file requests from mediawiki servers? [18:56:52] SPF|Cloud, a little faster, 0180ms. That's quite decent. I'm happy with anything under 1000ms :) [18:57:00] good [18:57:24] !log set gluster i/o threads to 8 (from 12) [18:57:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:57:33] SPF|Cloud: i don't think we monitor that [18:58:19] maybe you should do that in the future [18:58:45] to rule out that a traffic increase is the cause [18:59:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.26, 3.07, 3.95 [18:59:32] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=gluster4.miraheze.org&var-port=9100 [18:59:32] [ Grafana ] - grafana.miraheze.org [18:59:43] it cannot be file uploads SPF|Cloud ^ [18:59:56] otherwise it would be impacting gluster4 [19:00:48] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.16, 5.71, 5.37 [19:01:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.90, 5.00, 4.59 [19:02:45] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.97, 5.54, 5.35 [19:05:36] let's see.. [19:05:54] https://meta.miraheze.org/wiki/Special:IncidentReports/43 if you could review this in the meantime...
[19:05:54] [ Permission error - Miraheze Meta ] - meta.miraheze.org [19:06:17] cc JohnLewis [19:06:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 28.82, 22.65, 22.31 [19:08:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.44, 4.73, 5.02 [19:08:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.60, 21.67, 22.01 [19:10:12] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/J3Bzv [19:10:13] [02miraheze/services] 07MirahezeSSLBot 031004eb9 - BOT: Updating services config for wikis [19:12:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 24.64, 20.98, 21.51 [19:14:20] paladox: cannot find the cause [19:14:33] ok [19:14:40] a restart is possible... [19:14:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.77, 20.67, 21.41 [19:14:56] rebooting takes much longer than restarting glusterfs, you may not want to reboot for that reason [19:15:45] I guess a bacula backup has just started on gluster3 [19:16:31] yeh backups running [19:16:57] again? [19:18:29] yeh [19:20:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.09, 21.76, 21.43 [19:21:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.18, 2.91, 3.83 [19:22:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.89, 5.38, 5.18 [19:24:17] Bacula backups cause downtime for static? [19:24:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.19, 22.42, 21.92 [19:26:26] JohnLewis: Is the global groups PR ok? [19:26:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.37, 22.60, 21.98 [19:27:18] If it does it based on the group yeah, I also think SRE owe an apology for how poorly that request was dealt with [19:27:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.75, 3.39, 3.64 [19:27:28] Reception123: ^ [19:28:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.80, 22.10, 21.87 [19:28:53] I definitely agree that for a task that was opened on April 26 this was not prioritised well enough, and for some reason the comments you made on Monday were not transmitted/taken into account [19:29:03] I'll test the PR now [19:29:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.51, 3.84, 3.80 [19:29:46] JohnLewis: it could still be broken though; renaming the group would be instant and would stop email notifs [19:30:53] if someone has access to the full global groups interface they could just make a new group with the same access [19:31:15] so it's fairly hard to log all the things at that point [19:31:25] Would a separate `trustandsafety` Meta user group be helpful? [19:31:29] True [19:31:32] JohnLewis: no. We were thinking it could have caused the high load.
[19:31:37] dmehus: not really [19:31:38] which would hold the global groups permissions right [19:31:44] RhinosF1, oh ok [19:31:45] but i stopped them earlier to regenerate the backups freshly and the load didn't go down [19:31:52] But those are more obvious than removing a right that could be subtle [19:31:54] There's a million and one ways you could get round it [19:32:05] JohnLewis, yeah [19:32:06] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3BVD [19:32:07] [02miraheze/mw-config] 07Reception123 038aa31bf - change email notifs for group instead of right [19:32:09] [02mw-config] 07Reception123 created branch 03Reception123-patch-1 - 13https://git.io/vbvb3 [19:32:11] [02mw-config] 07Reception123 opened pull request 03#3870: change email notifs for group instead of right - 13https://git.io/J3BVS [19:32:11] They are both just log entries [19:32:41] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.44, 5.65, 5.39 [19:32:50] not sure if I did something wrong in my PR, but the new method doesn't appear to work [19:33:07] The issue is this was a fairly simple and easy-to-understand request. Yet it was massively complicated and drifted from the original request, to the point that a strict deadline set by the Board was not met [19:33:12] miraheze/mw-config - Reception123 the build passed. [19:33:29] what's that parser function you added, Reception123? [19:33:42] Reception123: the array key is "group", not "right" now [19:33:42] > On test3wiki Reception123 deleted "[[Test]]": content was: "{{#createpageifnotex:Test656156|Test16573}} Will I get an email for this?! [[Category:Pages with broken file links]]"; https://test3.miraheze.org/wiki/Special:Log/delete [19:33:45] [ Deletion log - Test3 ] - test3.miraheze.org [19:33:50] that's what I did, yeah, and no email [19:33:55] I pulled in the MM changes + https://github.com/miraheze/mw-config/pull/3870/files [19:33:55] [ change email notifs for group instead of right by Reception123 · Pull Request #3870 · miraheze/mw-config · GitHub ] - github.com [19:34:04] oh, hrm [19:34:05] Majavah: ah, I see [19:34:28] oh right [19:34:40] worked now, perfect then! [19:34:41] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.50, 5.37, 5.33 [19:34:42] yeah I did see Majavah's comment about that on the PR [19:34:51] That seems to be resolved then now that it's assigned to groups [19:34:59] nice :) [19:35:26] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.66, 2.91, 3.36 [19:36:01] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-1 [+0/-0/±1] 13https://git.io/J3Bw4 [19:36:02] [02miraheze/mw-config] 07Reception123 031b57590 - fix and configure for T&S [19:36:04] [02mw-config] 07Reception123 synchronize pull request 03#3870: change email notifs for group instead of right - 13https://git.io/J3BVS [19:36:36] [02MirahezeMagic] 07Reception123 closed pull request 03#252: LogEmail: Use global groups instead of rights - 13https://git.io/J3B8F [19:36:37] [02miraheze/MirahezeMagic] 07Reception123 pushed 031 commit to 03master [+0/-0/±2] 13https://git.io/J3Bwu [19:36:39] [02miraheze/MirahezeMagic] 07supertassu 03a30013e - LogEmail: Use global groups instead of rights (#252) [19:37:06] miraheze/mw-config - Reception123 the build passed. [19:37:33] miraheze/MirahezeMagic - Reception123 the build passed.
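On the mw-config side, #3870 re-keys the condition array by "group" rather than "right", matching Majavah's note at 19:33. A sketch of the assumed shape only; the variable name and the address are placeholders, since the log elides the real ones:

    $wgConf->settings += [
        // Hypothetical variable name; the real mw-config key may differ.
        'wgLogEmailConditions' => [
            'default' => [
                [
                    'group' => 'trustandsafety',     // CentralAuth global group to watch
                    'email' => 'alerts@example.org', // placeholder recipient
                ],
            ],
        ],
    ];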
[19:37:54] [02miraheze/mediawiki] 07Reception123 pushed 031 commit to 03REL1_35 [+0/-0/±1] 13https://git.io/J3Bwr [19:37:56] [02miraheze/mediawiki] 07Reception123 03c16c0fa - Update MirahezeMagic [19:39:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.83, 3.28, 3.45 [19:40:38] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.64, 5.57, 5.41 [19:40:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.94, 22.31, 21.50 [19:41:28] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 31.17, 9.55, 5.48 [19:44:07] [02mw-config] 07Reception123 closed pull request 03#3870: change email notifs for group instead of right - 13https://git.io/J3BVS [19:44:08] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/J3Brg [19:44:10] [02miraheze/mw-config] 07Reception123 0318202ff - change email notifs for group instead of right (#3870) [19:44:11] [02mw-config] 07Reception123 synchronize pull request 03#3845: Merge 'master' into REL1_36 - 13https://git.io/JOKA0 [19:44:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.94, 5.91, 5.62 [19:44:59] miraheze/mw-config - Reception123 the build passed. [19:45:25] [02miraheze/mw-config] 07Reception123 deleted branch 03Reception123-patch-1 [19:45:26] [02mw-config] 07Reception123 deleted branch 03Reception123-patch-1 - 13https://git.io/vbvb3 [19:46:53] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.74, 23.24, 22.57 [19:48:54] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.89, 24.22, 23.01 [19:50:53] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.89, 23.70, 23.01 [19:52:40] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.47, 6.34, 5.76 [19:52:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.46, 26.30, 24.05 [19:53:12] PROBLEM - electowiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for electowiki.org could not be found [19:54:48] PROBLEM - dbbackup1 Disk Space on dbbackup1 is WARNING: DISK WARNING - free space: / 68222 MB (10% inode=97%); [19:56:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.65, 22.47, 23.03 [19:58:41] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.48, 5.55, 5.64 [19:59:57] RECOVERY - electowiki.org - reverse DNS on sslhost is OK: rDNS OK - electowiki.org reverse DNS resolves to cp10.miraheze.org [20:00:43] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.64, 5.66, 5.64 [20:00:56] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.92, 22.01, 22.51 [20:02:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.33, 5.63, 5.64 [20:02:54] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.30, 21.35, 22.23 [20:04:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.00, 24.04, 23.13 [20:05:10] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 10.37, 7.33, 5.85 [20:05:23] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 8.73, 6.70, 5.12 [20:06:07] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.89, 6.79, 5.48 [20:07:10] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load 
[20:07:23] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.43, 5.98, 5.06
[20:08:07] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 3.67, 5.65, 5.22
[20:08:59] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki config]
[20:09:10] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.34, 6.54, 5.87
[20:10:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.64, 5.79, 5.58
[20:10:50] PROBLEM - jobrunner3 Puppet on jobrunner3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[20:12:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.62, 5.80, 5.62
[20:12:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 17.77, 22.47, 23.33
[20:15:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 3.26, 2.94, 3.90
[20:16:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.08, 5.46, 5.49
[20:18:54] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 29.46, 24.56, 23.77
[20:22:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.33, 5.98, 5.91
[20:24:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.68, 22.48, 23.27
[20:25:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.55, 2.43, 3.23
[20:30:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.11, 5.48, 5.61
[20:32:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.10, 22.92, 22.73
[20:32:58] SPF|Cloud: meta is a slug
[20:34:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.36, 5.99, 5.85
[20:34:46] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.83, 6.73, 5.97
[20:34:50] RECOVERY - jobrunner3 Puppet on jobrunner3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:35:15] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 9.35, 8.43, 6.51
[20:35:27] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.65, 3.93, 3.41
[20:35:47] > [3b46c95f4d8453d07963e644] 2021-05-02 20:34:40: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
[20:35:47] ^ returned on `chadsofawiki` when trying to view the deleted file page at https://chadsofa.miraheze.org/w/index.php?search=File:Fatren%20-%201.jpg&title=Special:Search&fulltext=1
[20:35:49] [ Database error - Chads of /a/ ] - chadsofa.miraheze.org
[20:36:06] Yeah, something's not right
[20:36:10] !sre
[20:36:15] File a task?
[20:36:22] UBN
[20:36:26] paladox: ^
[20:36:27] okay
[20:36:32] I'll file
[20:36:45] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.26, 6.50, 5.97
[20:36:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.92, 23.28, 23.03
[20:36:59] There have been recent vandalism attacks on that wiki as of late. I think there should be an investigation into that too, if possible.
[20:37:13] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 5.03, 7.38, 6.37
[20:37:24] dmehus: That's because of ` - ` in search; it's about the upstream task that's been open for years.
[20:37:25] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.47, 3.34, 3.26
[20:37:43] RhinosF1: cannot reproduce slowness
[20:38:02] SPF|Cloud: it was like a snail for me
[20:38:12] it's a bit slow for me as well
[20:38:35] and the search bar is not returning any suggestions
[20:38:38] Can you review
[20:38:40] https://phabricator.miraheze.org/T7229
[20:38:41] [ ⚓ T7229 Encountered fatal DB exception error trying to view deleted file page on `chadsofawiki` ] - phabricator.miraheze.org
[20:39:13] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.27, 6.37, 6.12
[20:39:20] RhinosF1: see my reply.
[20:39:23] I'm concerned something ain't right today
[20:39:31] Universal_Omega: viewing a deleted file?
[20:39:45] Any slowness is likely caused by the gluster backup
[20:39:54] SPF|Cloud: it's way too slow
[20:40:08] Main page loads in 280ms
[20:40:20] Cannot reproduce
[20:40:22] RhinosF1: Yes, it is still using Special:Search, so the issue still exists with the DB query.
[20:40:25] hmm, https://graylog.miraheze.org/streams/5fc826549b05260901034cc2/search?q=&rangetype=relative&streams=5fc826549b05260901034cc2&relative=172800 hasn't shown anything for the last few days
[20:40:40] I just got 1500, SPF|Cloud
[20:40:55] Start an investigation?
[20:41:00] Mobile
[20:41:04] Ah, same..
[20:41:13] But it was unusable when I pinged you
[20:41:35] Universal_Omega, thanks, do you have the upstream task number?
[20:41:36] paladox could try stopping the gluster backup
[20:41:40] then I can close that if you want
[20:41:44] ok
[20:42:16] !log stop gluster backup on bacula2
[20:42:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
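An aside on the `!log stop gluster backup on bacula2` entry above: the log doesn't record how the job was stopped, but with Bacula a running job can usually be cancelled from the console. A minimal sketch, assuming bconsole is available on bacula2 and the job id is known (1234 is a placeholder, not the real job):

    # List running jobs, then cancel one by id (1234 is illustrative only).
    echo -e 'status director\ncancel jobid=1234\nquit' | bconsole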
[20:42:24] dmehus: replied on task.
[20:42:39] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.68, 6.03, 5.84
[20:42:44] dmehus: I was concerned the 2 were related, which is why I wanted UBN
[20:42:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 30.71, 26.06, 24.05
[20:43:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.77, 6.68, 4.65
[20:43:29] You're still discussing a bug on the chadsofawiki?
[20:43:46] No, that's an old one
[20:43:54] We know things are slow though
[20:44:02] Oh.
[20:44:30] seems to be back to normal for me
[20:44:34] Things still slow, RhinosF1?
[20:44:38] ~110ms now for me
[20:44:40] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 3.32, 5.03, 5.50
[20:44:41] RhinosF1: things seem fast for me.
[20:44:53] Universal_Omega, ah okay, thanks, and RhinosF1, ah, no problem :)
[20:44:55] Well, I know this is off-topic, but I've noticed that there's been vandalism happening on that said wiki.
[20:45:07] Well, mine's a bit faster, so it's not slow for me.
[20:45:13] SPF|Cloud: apart from RC, no
[20:45:20] but yeah, ~10 to 20 mins ago it was a bit slow. Not as much as this afternoon (in my timezone), but noticeable
[20:46:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.02, 22.47, 23.02
[20:47:17] So cause = backups
[20:47:18] backups stopped
[20:47:31] but the high load is still happening on gluster3
[20:47:44] so it doesn't look like the cause is bacula?
[20:47:55] should I just reboot?
[20:48:04] Do a restart of glusterfsd first
[20:48:55] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 26.26, 23.91, 23.47
[20:49:18] A reboot takes more time + the glusterfsd process was the only process responsible for the high i/o
[20:49:24] Ok, though I did that earlier today when upgrading gluster to 3.4 (not related to the load, as the graphs show it was happening before)
[20:49:30] *8.4
[20:49:50] !log restart glusterd on gluster3
[20:49:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:50:07] ... then it's an unexpected traffic increase?
[20:50:42] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.36, 5.24, 5.36
[20:50:45] doesn't seem to have helped
[20:50:52] SPF|Cloud: but it's not happening on gluster4?
[20:50:55] (although an increase should affect both gluster servers)
[20:51:08] if it was a traffic increase it would be causing it on both glusterds
[20:51:14] That's what I wanted to say, yes :)
[20:51:29] Let's try a reboot...
[20:51:46] ok
[20:51:50] !log rebooting gluster3
[20:51:58] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:52:50] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 2.38, 0.58, 0.19
[20:53:14] SPF|Cloud: I just got a really slow load but the script only says 1000ms
[20:53:29] Like I'd have said it was 10x that in reality
[20:53:54] rebooted, though the load is still the same :/
[20:54:19] load at 6
[20:54:42] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2607:5300:205:200::1c30/cpweb
[20:54:55] Universal_Omega: it's your script from template wiki
[20:55:02] paladox: icinga ^
[20:55:08] ?
[20:55:11] oh
[20:55:15] cp3
[20:55:30] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.47, 3.34, 3.91
[20:55:33] PROBLEM - wiki.openhatch.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.openhatch.org could not be found
[20:55:33] PROBLEM - www.christipedia.nl - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.christipedia.nl could not be found
[20:55:35] it'll have hit the timeout
[20:55:38] hence why
[20:56:37] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:56:53] RhinosF1: the script only starts once the page loads the script, so if there is a delay in that then it won't be accurate. I don't think anything can be done to fix that.
[20:57:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 3.93, 3.88, 4.06
[20:58:04] Universal_Omega: right, it's that stage that's lagged
[20:58:17] paladox: then we need to act
[20:58:34] I don't know what to do to resolve this...
[20:58:36] I wonder if that was the case earlier too
[20:58:50] Because I said then that mw10 was back to being quick
[20:59:01] they'll all be quick once in a while
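Since the in-page timing script only starts measuring once the page has delivered it, a server-side measurement avoids that blind spot. A minimal sketch using curl's built-in timing variables (the URL is just an example; any wiki page works):

    # Time name lookup, TCP connect, first byte, and full transfer for one request.
    curl -so /dev/null \
      -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
      'https://meta.miraheze.org/'

A high ttfb alongside a normal connect time would point at the application or storage layer rather than the network, which is consistent with the gluster I/O theory being discussed here.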
[20:59:27] HDD disk performance is unstable, very slow and unreliable.
[20:59:38] not to mention the number of PHP children we have available.
[21:00:08] all the VMs are fighting for disk resources
[21:00:23] so gluster & cp* are taking away i/o bandwidth from mw*
[21:00:41] .task 7230
[21:00:43] https://phabricator.miraheze.org/T7230 - Load times high enough to cause depool, authored by RhinosF1, assigned to None, Priority: Unbreak Now!, Status: Open
[21:00:53] Yeah, but a depool is bad
[21:00:56] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 21.97, 23.25, 23.61
[21:02:16] RECOVERY - wiki.openhatch.org - reverse DNS on sslhost is OK: rDNS OK - wiki.openhatch.org reverse DNS resolves to cp11.miraheze.org
[21:02:16] RECOVERY - www.christipedia.nl - reverse DNS on sslhost is OK: rDNS OK - www.christipedia.nl reverse DNS resolves to cp11.miraheze.org
[21:06:52] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.77, 22.97, 23.14
[21:07:58] Looking at SPF|Cloud's dash on Grafana, the timings are obvious on the 2-day view
[21:08:06] prometheus-varnish-exporter isn't running on cp3
[21:08:09] it says it fails
[21:08:14] but when I run it manually it works?
[21:08:43] JohnLewis: how's gluster taking I/O bandwidth got anything to do with the MediaWiki team?
[21:08:52] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.10, 21.77, 22.71
[21:13:25] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.04, 2.90, 3.73
[21:15:02] oh....
[21:16:36] ?
[21:17:09] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/J3BQ7
[21:17:10] [miraheze/puppet] paladox 0701487 - varnish_exporter: Remove user from systemd script
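The commit above removes a user from the exporter's systemd unit. For reference, a hedged sketch of the same kind of change done by hand with a drop-in; the unit name matches the one mentioned at 21:08, but the actual Miraheze unit file may differ:

    # Create a drop-in that clears User=, so the service falls back to the
    # unit's default user instead of one that may lack permissions.
    mkdir -p /etc/systemd/system/prometheus-varnish-exporter.service.d
    printf '[Service]\nUser=\n' > /etc/systemd/system/prometheus-varnish-exporter.service.d/override.conf
    systemctl daemon-reload
    systemctl restart prometheus-varnish-exporter

This fits the symptom described above: a service that fails under systemd but works when run manually often points at a User= or permission mismatch.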
[21:18:55] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.44, 21.65, 21.59
[21:19:15] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.12, 5.19, 3.92
[21:19:26] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.41, 2.73, 3.37
[21:20:59] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.61, 21.03, 21.39
[21:21:13] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.79, 5.57, 4.23
[21:22:07] [[T7232]]
[21:22:08] https://meta.miraheze.org/wiki/T7232?action=edit&redlink=1
[21:22:09] [ T7232 - Miraheze Meta ] - meta.miraheze.org
[21:22:28] DarkMatterMan4500, [[phab:T7232]]
[21:22:49] PROBLEM - wiki.usagihime.ml - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.usagihime.ml could not be found
[21:22:51] PROBLEM - en.phgalaxy.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.phgalaxy.xyz could not be found
[21:22:54] PROBLEM - www.trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.trollpasta.com could not be found
[21:22:56] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 33.23, 24.39, 22.49
[21:23:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 8.10, 6.85, 4.91
[21:23:26] @Doug Thanks for that. I did file a task on importing stuff from FANDOM. I'm sure SpongeBuff1991 won't mind it at all (I hope).
[21:24:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 20.83, 22.52, 22.03
[21:26:07] DarkMatterMan4500, okay, it's in the queue. Should be done in 1-2 days, most likely.
[21:26:34] PROBLEM - runzeppelin.ru - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for runzeppelin.ru could not be found
[21:26:39] PROBLEM - cyberlaw.ccdcoe.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for cyberlaw.ccdcoe.org could not be found
[21:26:39] https://grafana.miraheze.org/d/arhCmd7Mz/nginx-cache-proxies?orgId=1&refresh=5s&from=now-24h&to=now
[21:26:39] PROBLEM - arquivo.ucmg.ml - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for arquivo.ucmg.ml could not be found
[21:26:40] [ Grafana ] - grafana.miraheze.org
[21:26:42] PROBLEM - guia.cineastas.pt - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for guia.cineastas.pt could not be found
[21:26:43] PROBLEM - bharatwiki.online - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for bharatwiki.online could not be found
[21:26:44] PROBLEM - wiki.gesamtschule-nordkirchen.de - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.gesamtschule-nordkirchen.de could not be found
[21:26:44] PROBLEM - files.pornwiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for files.pornwiki.org could not be found
[21:26:44] PROBLEM - sims.miraheze.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for sims.miraheze.org could not be found
[21:26:45] PROBLEM - www.dariawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.dariawiki.org could not be found
[21:26:48] I don't see any abnormal requests, cc SPF|Cloud
[21:26:57] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 28.03, 26.09, 23.49
[21:27:13] @Doug Okay. And yesterday I was importing Vyond-related videos from a wiki called the Horrible Vyonders Wiki, and all I had to do was remove a couple that I didn't need for my Trashy Vyond Videos Wiki.
[21:28:00] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:800:178a::5/cpweb
[21:28:05] PROBLEM - wiki.ct777.cf - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.ct777.cf could not be found
[21:28:08] PROBLEM - nonbinary.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for nonbinary.wiki could not be found
[21:28:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 18.33, 23.34, 22.80
[21:29:32] RECOVERY - wiki.usagihime.ml - reverse DNS on sslhost is OK: rDNS OK - wiki.usagihime.ml reverse DNS resolves to cp10.miraheze.org
[21:29:38] RECOVERY - en.phgalaxy.xyz - reverse DNS on sslhost is OK: rDNS OK - en.phgalaxy.xyz reverse DNS resolves to cp11.miraheze.org
[21:29:39] RECOVERY - www.trollpasta.com - reverse DNS on sslhost is OK: rDNS OK - www.trollpasta.com reverse DNS resolves to cp10.miraheze.org
[21:29:54] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/J3BdE
[21:29:55] [miraheze/dns] paladox ec245a5 - Depool cp10
[21:29:56] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:33:21] RECOVERY - cyberlaw.ccdcoe.org - reverse DNS on sslhost is OK: rDNS OK - cyberlaw.ccdcoe.org reverse DNS resolves to cp11.miraheze.org
[21:33:22] RECOVERY - arquivo.ucmg.ml - reverse DNS on sslhost is OK: rDNS OK - arquivo.ucmg.ml reverse DNS resolves to cp11.miraheze.org
[21:33:25] RECOVERY - guia.cineastas.pt - reverse DNS on sslhost is OK: rDNS OK - guia.cineastas.pt reverse DNS resolves to cp11.miraheze.org
[21:33:25] RECOVERY - bharatwiki.online - reverse DNS on sslhost is OK: rDNS OK - bharatwiki.online reverse DNS resolves to cp11.miraheze.org
[21:33:26] RECOVERY - wiki.gesamtschule-nordkirchen.de - reverse DNS on sslhost is OK: rDNS OK - wiki.gesamtschule-nordkirchen.de reverse DNS resolves to cp11.miraheze.org
[21:33:26] RECOVERY - files.pornwiki.org - reverse DNS on sslhost is OK: rDNS OK - files.pornwiki.org reverse DNS resolves to cp11.miraheze.org
[21:33:27] RECOVERY - sims.miraheze.org - reverse DNS on sslhost is OK: rDNS OK - sims.miraheze.org reverse DNS resolves to cp11.miraheze.org
[21:33:30] RECOVERY - www.dariawiki.org - reverse DNS on sslhost is OK: rDNS OK - www.dariawiki.org reverse DNS resolves to cp11.miraheze.org
[21:33:32] RECOVERY - runzeppelin.ru - reverse DNS on sslhost is OK: rDNS OK - runzeppelin.ru reverse DNS resolves to cp11.miraheze.org
[21:34:56] RECOVERY - nonbinary.wiki - reverse DNS on sslhost is OK: rDNS OK - nonbinary.wiki reverse DNS resolves to cp11.miraheze.org
[21:34:56] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.58, 5.01, 4.78
[21:35:05] RECOVERY - wiki.ct777.cf - reverse DNS on sslhost is OK: rDNS OK - wiki.ct777.cf reverse DNS resolves to cp11.miraheze.org
[21:35:15] !log reboot cp10
[21:35:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:37:25] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.60, 0.42, 0.14
[21:38:36] back @ laptop now
[21:40:55] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 11.08, 16.24, 19.55
[21:40:59] !log shutdown & start gluster3 via proxmox ui
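For the record, the shutdown-and-start done "via proxmox ui" above can also be scripted on the hypervisor with Proxmox's qm tool. A sketch, with the VM id 105 purely illustrative (the real id of gluster3's VM isn't in the log):

    # Gracefully stop the guest, waiting up to two minutes, then start it again.
    qm shutdown 105 --timeout 120
    qm start 105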
[21:41:18] meta's down now
[21:41:24] is that caused by your reboot, paladox?
[21:41:32] I think so
[21:41:56] Shit....
[21:42:07] you could have announced it beforehand then?
[21:42:13] let's put a sitenotice up
[21:42:25] it's back up
[21:42:45] Oh.
[21:42:58] please don't reboot anymore if avoidable
[21:43:50] ok
[21:44:05] I've run out of ideas now
[21:44:45] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2001:41d0:800:1bbd::4/cpweb, 2607:5300:205:200::1c30/cpweb
[21:44:47] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 51.195.236.250/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb
[21:45:21] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 17.68, 20.01, 20.57
[21:46:09] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/J3BbR
[21:46:10] [miraheze/dns] paladox 1574425 - Revert "Depool cp10"
[21:46:42] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.85, 4.64, 1.96
[21:46:45] PROBLEM - wiki.scvo.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.scvo.org could not be found
[21:48:37] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:48:44] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:49:13] well
[21:49:19] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 16.30, 19.01, 20.11
[21:49:22] there's a RAID check running on cloud4 now
[21:49:42] oh?
[21:49:46] whereas the same check is not running on cloud5 at the moment
[21:49:54] take a look at /proc/mdstat
[21:50:02] (yes, that is a file you can 'cat')
[21:50:27] aha
[21:50:41] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.10, 5.22, 2.85
[21:51:04] how did you find it was that, or at least think to look there?
[21:51:07] not sure if this check is the cause of the high i/o
[21:51:34] I was wondering if a disk might have failed
[21:51:53] it'll be finished in the next couple of hours according to the timer
[21:52:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.13, 4.91, 3.02
[21:52:54] and I think gluster may be hit hard because gluster's performance depends on I/O performance
[21:53:02] yeh
[21:53:11] in turn, MediaWiki depends on gluster, and the cache proxies depend on MediaWiki
[21:53:37] RECOVERY - wiki.scvo.org - reverse DNS on sslhost is OK: rDNS OK - wiki.scvo.org reverse DNS resolves to cp11.miraheze.org
[21:53:54] the cache proxies are having a difficult time with the i/o too
[21:53:57] take a look at cp10
[21:54:16] yep, cache proxies are also i/o intensive
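For readers following along, this is roughly what the /proc/mdstat check mentioned above looks like; md0 is an assumption, and the array names on cloud4 may differ:

    # Progress of any running check/resync, per md array; while a check is
    # running a line like the following appears under the array:
    #   [=========>...........]  check = 48.1% (...) finish=210.3min speed=...K/sec
    cat /proc/mdstat
    # The same state is exposed in sysfs; prints "check" while a check runs,
    # "idle" otherwise:
    cat /sys/block/md0/md/sync_action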
[21:55:22] paladox: can you dig through cloud5 logs for anything regarding the mdadm check?
[21:55:37] I checked and it's not currently running
[21:55:44] *logs*
[21:56:14] I think the check finished sooner on cloud5 than on cloud4, hence cloud5's statistics being back to normal since 18:00 :)
[21:57:54] SPF|Cloud: can you also retag infra
[21:57:57] I don't see anything mentioning mdadm within the last 8 hours
[21:58:02] already done
[21:58:14] A RAID check on the cloud server doesn't involve the MW team
[21:58:28] Originally I put SRE in general because it caused a MediaWiki outage
[21:58:28] oh
[21:58:30] found it
[21:58:37] But anything that affects MediaWiki isn't our team
[21:58:38] https://graylog.miraheze.org/messages/graylog_168/4e0eac03-aae1-11eb-9fbb-0200001a24a4
[21:58:39] and
[21:58:45] https://graylog.miraheze.org/messages/graylog_168/4e2abf81-aae1-11eb-9fbb-0200001a24a4
[21:58:46] and
[21:58:56] https://graylog.miraheze.org/messages/graylog_168/4e2e9011-aae1-11eb-9fbb-0200001a24a4
[21:59:51] 00:57 matches with the 'Disk: utilization' graphs on cloud4 and cloud5
[22:00:02] cause found, I guess?
[22:00:06] yup
[22:00:24] great, at least the root cause is known
[22:01:17] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?viewPanel=287&orgId=1&from=1617408000000&to=1617667199000&var-job=node&var-node=cloud4.miraheze.org&var-port=9100 see this
[22:01:18] [ Grafana ] - grafana.miraheze.org
[22:01:39] the mdadm check starts at 00:57 UTC on the first Sunday of the month
[22:11:40] paladox: are we going to stop the check (which is not a good thing, because these checks are necessary to warn us in advance of hardware failures) or wait till the check is finished (up to four hours more of bad performance)?
[22:12:18] I say it's been running for most of the day, might as well let it finish.
[22:12:23] ^
[22:12:41] (ofc if it had another day left I would have made a different decision)
[22:12:42] Gives us a month to work out how not to repeat this
[22:12:43] that was my thinking too
[22:13:10] but even better if we decide together
[22:13:16] yup
[22:13:25] <3 teamwork
[22:13:50] the array check doesn't show up in iotop... very difficult to discover the cause
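The iotop observation is plausible: the check runs inside the kernel's md threads, so its I/O isn't attributed to an ordinary process the way userspace I/O is. The kernel does log the start and finish of a check, though, which is presumably what the Graylog hits above are. A sketch of finding the same messages locally (exact wording can vary by kernel version):

    # Kernel messages such as "md: data-check of RAID array md0":
    journalctl -k --since "2021-05-02" | grep -i 'data-check'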
[22:14:19] the root cause discovery was pure luck :P
[22:14:41] Is there anything we can do over the next 4 weeks to stop it being an issue in June?
[22:14:42] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.81, 5.01, 4.18
[22:16:35] removing the check is an option, but the check allows us to act on hardware failure (bad sectors on disks) before it causes a real outage
[22:16:44] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.07, 5.58, 4.48
[22:17:29] Let's talk at some point during the week, but are there any alternative checks we can use, or ways to make it less intensive?
[22:17:34] I need to sleep
[22:17:43] ttyl
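On making next month's check less intensive: on Debian-style installs the monthly run is typically scheduled from /etc/cron.d/mdadm (which would explain the 00:57-first-Sunday start seen above), and the kernel exposes bandwidth caps for check/resync I/O. A sketch; the limit values are illustrative, not recommendations:

    # Confirm where the monthly check is scheduled from:
    grep -r checkarray /etc/cron.d/ 2>/dev/null
    # Cap per-device check/resync bandwidth (KB/s) to leave I/O for the VMs:
    sysctl -w dev.raid.speed_limit_max=20000
    sysctl -w dev.raid.speed_limit_min=1000
    # Or abort a running check entirely (md0 assumed):
    echo idle > /sys/block/md0/md/sync_action

Lowering the cap makes the check take longer to complete, so it is a trade-off between check duration and guest I/O latency.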
[22:18:59] PROBLEM - wiki.usagihime.ml - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.usagihime.ml could not be found
[22:20:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.82, 5.62, 4.77
[22:22:42] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.02, 6.19, 5.09
[22:24:19] PROBLEM - wiki.mineland.eu - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.mineland.eu could not be found
[22:24:19] PROBLEM - wiki.thefactoryhka.com.pa - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.thefactoryhka.com.pa could not be found
[22:24:22] PROBLEM - viileapedia.ga - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for viileapedia.ga could not be found
[22:24:25] PROBLEM - lama.madooa.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for lama.madooa.org could not be found
[22:24:27] PROBLEM - www.ferrandalmeida.family - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.ferrandalmeida.family could not be found
[22:24:34] PROBLEM - kunwok.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for kunwok.org could not be found
[22:24:34] PROBLEM - heavyironmodding.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for heavyironmodding.org could not be found
[22:24:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.94, 5.79, 5.08
[22:25:42] RECOVERY - wiki.usagihime.ml - reverse DNS on sslhost is OK: rDNS OK - wiki.usagihime.ml reverse DNS resolves to cp11.miraheze.org
[22:25:44] the array check takes substantially longer on cloud4 than on cloud5
[22:26:28] and the I/O stays high for a longer period of time for /dev/sda on cloud4 than for /dev/sdb on cloud4
[22:31:08] RECOVERY - www.ferrandalmeida.family - reverse DNS on sslhost is OK: rDNS OK - www.ferrandalmeida.family reverse DNS resolves to cp10.miraheze.org
[22:31:14] RECOVERY - wiki.thefactoryhka.com.pa - reverse DNS on sslhost is OK: rDNS OK - wiki.thefactoryhka.com.pa reverse DNS resolves to cp11.miraheze.org
[22:31:15] RECOVERY - wiki.mineland.eu - reverse DNS on sslhost is OK: rDNS OK - wiki.mineland.eu reverse DNS resolves to cp11.miraheze.org
[22:31:17] RECOVERY - viileapedia.ga - reverse DNS on sslhost is OK: rDNS OK - viileapedia.ga reverse DNS resolves to cp10.miraheze.org
[22:31:24] RECOVERY - lama.madooa.org - reverse DNS on sslhost is OK: rDNS OK - lama.madooa.org reverse DNS resolves to cp10.miraheze.org
[22:31:24] RECOVERY - heavyironmodding.org - reverse DNS on sslhost is OK: rDNS OK - heavyironmodding.org reverse DNS resolves to cp10.miraheze.org
[22:31:24] RECOVERY - kunwok.org - reverse DNS on sslhost is OK: rDNS OK - kunwok.org reverse DNS resolves to cp11.miraheze.org
[22:34:17] !log raid array check is 85.0% on cloud4, ETA 370 - 550 minutes
[22:34:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:34:39] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.86, 4.83, 5.02
[23:05:09] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 10.26, 7.52, 5.48
[23:07:09] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 5.97, 6.81, 5.47
[23:09:09] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.90, 6.59, 5.56
[23:10:51] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/J3RJP
[23:10:52] [miraheze/mw-config] paladox 82e41da - Fix indentation for wgMirahezeMagicLogEmailConditions
[23:10:54] [mw-config] paladox synchronize pull request #3845: Merge 'master' into REL1_36 - https://git.io/JOKA0
[23:11:47] miraheze/mw-config - paladox the build passed.
[23:31:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.06, 3.71, 2.79
[23:33:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.20, 3.14, 2.70
[23:36:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:800:1bbd::4/cpweb
[23:36:12] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.03, 5.56, 4.85
[23:38:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:38:08] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.41, 5.27, 4.85
[23:43:06] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 6.73, 4.39, 3.30
[23:44:14] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 5.07, 6.85, 5.61
[23:46:13] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 3.22, 5.55, 5.28
[23:47:03] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.39, 3.83, 3.33
[23:51:03] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.43, 3.40, 3.30
[23:52:45] night