[00:01:36] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3273 MB (13% inode=93%);
[00:02:50] PROBLEM - ns1 Current Load on ns1 is CRITICAL: CRITICAL - load average: 3.66, 2.81, 1.58
[00:02:56] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.20, 22.02, 17.98
[00:04:48] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.10, 1.33, 1.26
[00:06:34] PROBLEM - hololive.wiki - ZeroSSL on sslhost is WARNING: WARNING - Certificate 'hololive.wiki' expires in 30 day(s) (Thu 10 Sep 2020 23:59:59 GMT +0000).
[00:08:37] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.56, 22.92, 19.94
[00:12:25] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.07, 19.53, 19.28
[00:16:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.45, 20.16, 19.56
[00:18:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 13.34, 17.62, 18.70
[00:24:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.48, 22.70, 20.55
[00:25:49] PROBLEM - wiki.hibernusmc.net - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.hibernusmc.net could not be found
[00:25:52] PROBLEM - wiki.vinesh.eu.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.vinesh.eu.org could not be found
[00:25:52] PROBLEM - erikapedia.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for erikapedia.com could not be found
[00:28:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.66, 23.38, 21.33
[00:32:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.82, 19.86, 20.34
[00:32:38] RECOVERY - wiki.hibernusmc.net - reverse DNS on sslhost is OK: rDNS OK - wiki.hibernusmc.net reverse DNS resolves to cp6.miraheze.org
[00:32:43] RECOVERY - erikapedia.com - reverse DNS on sslhost is OK: rDNS OK - erikapedia.com reverse DNS resolves to cp7.miraheze.org
[00:32:44] RECOVERY - wiki.vinesh.eu.org - reverse DNS on sslhost is OK: rDNS OK - wiki.vinesh.eu.org reverse DNS resolves to cp6.miraheze.org
[01:00:21] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJD14
[01:00:23] [02miraheze/services] 07MirahezeSSLBot 0339104d4 - BOT: Updating services config for wikis
[01:00:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.60, 20.80, 19.52
[01:04:25] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 19.48, 19.85, 19.38
[02:25:15] Why was I logged out?
[02:25:42] Is it a known problem? Maybe I saw something about it.
[02:30:48] Hmm, browser?
[02:31:22] Chrome, of course
[02:32:53] Yeah, I've been having issues with Chrome; it mostly signs me out of Phabricator, but sometimes wikis too, and I'm not sure why.
[02:33:46] It's only a small problem for me, due to 2FA.
[02:35:05] Do you have a few minutes?
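A minimal sketch of reproducing the sslhost reverse-DNS check above by hand, assuming dig is available; the domain is one of the ones flagged at 00:25, and the IP shown is cp6's address from later in the log:
  dig +short wiki.hibernusmc.net      # forward lookup of the custom domain
  dig +short -x 51.77.107.210         # reverse lookup of the returned address; a healthy result is a cpN.miraheze.org PTR record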
[02:40:07] Not really, unfortunately
[02:40:26] Okay
[03:12:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.85, 21.00, 18.16
[03:14:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.23, 20.80, 18.47
[03:20:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 20.07, 19.65, 18.66
[04:44:16] !log reception@jobrunner1:/srv/mediawiki/w/maintenance$ sudo -u www-data php deleteBatch.php --wiki xedwiki --r "Requested - T6023" /home/reception/xeddel.txt
[04:44:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:55:36] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2648 MB (10% inode=93%);
[07:19:42] !log GDPR script
[07:19:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:44:23] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[07:44:47] PROBLEM - ns1 Current Load on ns1 is CRITICAL: CRITICAL - load average: 5.25, 3.62, 1.85
[07:46:18] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[07:46:47] PROBLEM - ns1 Current Load on ns1 is WARNING: WARNING - load average: 0.12, 1.71, 1.49
[07:47:59] What?
[07:48:04] Silly ns1
[07:48:47] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.00, 0.76, 1.15
[08:03:08] RhinosF1: I don't know why it keeps doing that
[08:03:16] Meh
[10:08:50] RhinosF1 Good morning, can you check Babel on my user page? On Loginwiki > https://login.miraheze.org/wiki/User:MrJaroslavik On my wiki > https://sesupport.miraheze.org/wiki/User:MrJaroslavik
[10:08:51] [ User:MrJaroslavik - Miraheze Login Wiki ] - login.miraheze.org
[10:08:51] [ User:MrJaroslavik - StreamElements Support ] - sesupport.miraheze.org
[10:14:14] @MrJaroslavik: you need to enable Babel on your wiki in ManageWiki/Extensions
[10:14:43] Oh, interesting
[11:12:27] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.12, 21.08, 18.08
[11:14:25] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.77, 20.70, 18.31
[11:16:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.53, 19.10, 18.01
[12:08:17] PROBLEM - espiral.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'espiral.org' expires in 15 day(s) (Thu 27 Aug 2020 12:03:52 GMT +0000).
[12:09:17] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.35, 21.04, 19.12
[12:10:31] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJyc5
[12:10:32] [02miraheze/ssl] 07MirahezeSSLBot 030d24879 - Bot: Update SSL cert for espiral.org
[12:13:04] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.23, 22.04, 19.85
[12:14:58] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.89, 22.69, 20.38
[12:18:46] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.44, 22.41, 20.69
[12:20:41] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.42, 22.68, 20.97
[12:26:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 28.79, 25.27, 22.54
[12:28:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.54, 22.82, 21.97
[12:28:45] RECOVERY - espiral.org - LetsEncrypt on sslhost is OK: OK - Certificate 'espiral.org' will expire on Mon 26 Oct 2020 15:37:20 GMT +0000.
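A minimal sketch of the kind of batch deletion run logged at 04:44, assuming the /srv/mediawiki/w/maintenance layout shown in that log entry; the wiki ID, reason, and title list below are purely illustrative:
  # one page title per line in the input file
  printf 'Unwanted page\nAnother unwanted page\n' > /tmp/titles.txt
  cd /srv/mediawiki/w/maintenance
  sudo -u www-data php deleteBatch.php --wiki examplewiki --r "Cleanup request" /tmp/titles.txt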
[12:30:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.56, 22.99, 22.06
[12:32:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.66, 20.65, 21.28
[12:38:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.53, 23.19, 21.98
[12:42:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.31, 20.78, 21.26
[12:50:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.26, 22.10, 21.42
[12:54:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 15.59, 20.56, 21.08
[12:58:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.56, 18.92, 20.21
[13:04:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.45, 20.73, 20.56
[13:06:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.21, 19.83, 20.31
[13:36:06] [02dns] 07MacFan4000 opened pull request 03#170: add 2 new subdomains for MirahezeBots - 13https://git.io/JJyBW
[13:37:28] Reception123: ^
[13:43:00] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.32, 19.70, 19.55
[13:46:47] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.26, 18.08, 19.02
[13:52:30] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.18, 20.89, 19.97
[13:54:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.28, 19.53, 19.61
[13:56:59] Changing my home wiki in the database is not possible, right?
[14:06:15] no
[14:06:48] technically we could do it in theory, but I wouldn't know the implications
[14:14:47] Not important
[14:15:12] I only have testwiki listed as my home wiki
[14:15:34] due to registration there
[14:20:28] ok
[14:28:55] Maybe we can remove the CheckUser and Oversight buttons from Meta:RfP
[14:28:56] ?
[14:29:54] meh
[14:30:01] you could in theory apply
[14:30:48] But only in theory :D
[14:31:22] [02dns] 07Reception123 closed pull request 03#170: add 2 new subdomains for MirahezeBots - 13https://git.io/JJyBW
[14:31:24] [02miraheze/dns] 07Reception123 pushed 032 commits to 03master [+0/-0/±2] 13https://git.io/JJyus
[14:31:25] [02miraheze/dns] 07MacFan4000 03a745699 - add 2 new subdomains for MirahezeBots sopel.bots is for serving files generated by .help phab-storage.bots is to be used as an alternate file domain for phab.bots.miraheze.wiki
[14:31:27] [02miraheze/dns] 07Reception123 03d20ddf4 - Merge pull request #170 from MacFan4000/patch-6 add 2 new subdomains for MirahezeBots
[14:31:55] I would remove them; they really serve no purpose atm and only encourage WP:SNOWs anyway
[14:33:16] We can add it back after any RfC. But I don't think it would be successful.
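A quick sketch of checking the two MirahezeBots subdomains from dns PR #170 once the zone change is deployed; the full hostnames below are an assumption based on the bots.miraheze.wiki names mentioned in the commit message:
  dig +short sopel.bots.miraheze.wiki           # should resolve once the new record is live
  dig +short phab-storage.bots.miraheze.wiki    # alternate file domain for phab.bots.miraheze.wiki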
[14:37:17] https://meta.miraheze.org/w/index.php?title=Meta:Requests_for_permissions/header&diff=118765&oldid=108692
[14:37:18] [ Difference between revisions of "Meta:Requests for permissions/header" - Miraheze Meta ] - meta.miraheze.org
[14:37:44] ack
[15:02:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.43, 20.30, 19.14
[15:04:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.61, 20.28, 19.30
[15:14:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.66, 22.52, 20.43
[15:22:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.00, 21.89, 20.95
[15:26:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.25, 21.03, 20.85
[15:34:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 32.08, 25.45, 22.57
[15:38:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.21, 22.46, 22.09
[15:50:24] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.67, 22.61, 21.81
[15:52:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.96, 22.61, 21.91
[16:06:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.05, 19.24, 20.28
[16:25:36] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: DISK CRITICAL - free space: / 1431 MB (5% inode=93%);
[16:27:36] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 1465 MB (6% inode=93%);
[16:28:11] ^ @Reception123 (since paladox is on vacation :P)
[16:30:47] dmehus: load is not a worry and disk space should clear itself
[16:31:40] @RhinosF1 Oh okay, yeah, I wasn't too worried about the load, and regarding the disk space, is that because it's one of the cache proxies so, presumably, is regularly cleared of cached data?
[16:32:02] dmehus: probably logs if I'm honest
[16:32:20] @RhinosF1 Ah, okay, that makes sense. Thanks.
[16:33:47] dmehus: as long as it doesn't stay critical for too long it's not a worry. We'll know if it crashes. Cp3 has had a lot of recent usage.
[16:33:56] Probably needs a bigger disk eventually
[16:35:13] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJyrh
[16:35:15] [02miraheze/services] 07MirahezeSSLBot 035a8ddf4 - BOT: Updating services config for wikis
[16:36:08] @RhinosF1, ah, okay, thanks...and yeah, "Cp3 has had a lot of recent usage[,]" likely explains a lot.
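A rough sketch of the log-focused cleanup implied by "probably logs" above and the gzip suggestion at 16:38 below, assuming Debian-style log locations on cp3; the exact paths are illustrative:
  df -h /                                          # confirm the same numbers the Icinga disk check is seeing
  du -xsh /var/log/* 2>/dev/null | sort -h | tail  # find the largest directories under /var/log
  sudo find /var/log/nginx -name '*.log.*' ! -name '*.gz' -mtime +7 -exec gzip {} +   # compress older rotated logs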
[16:37:37] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: DISK CRITICAL - free space: / 1436 MB (5% inode=93%);
[16:38:04] Reception123: maybe see if some stuff can be gzip'd
[16:38:11] It shouldn't go off again that quickly
[16:38:59] No access right now
[17:20:23] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJy6L
[17:20:25] [02miraheze/services] 07MirahezeSSLBot 03fd16cbf - BOT: Updating services config for wikis
[17:37:58] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2001:41d0:800:1056::2/cpweb, 2001:41d0:800:105a::10/cpweb
[17:38:22] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 51.89.160.142/cpweb, 2001:41d0:800:1056::2/cpweb, 2607:5300:205:200::2ac4/cpweb
[17:38:33] Huh
[17:38:43] .ip 128.199.138.216
[17:38:44] [IP/Host Lookup] Hostname: 128.199.138.216 | Location: Singapore | ISP: AS14061 DIGITALOCEAN-ASN
[17:38:45] PROBLEM - mw7 Current Load on mw7 is CRITICAL: CRITICAL - load average: 9.78, 6.87, 4.31
[17:38:52] PROBLEM - mw6 Current Load on mw6 is CRITICAL: CRITICAL - load average: 9.78, 6.44, 3.94
[17:38:52] Singapore
[17:39:14] Sounds right
[17:39:16] .ip 51.77.107.210
[17:39:16] [IP/Host Lookup] Hostname: cp6.miraheze.org | Location: United Kingdom | Region: England | City: London | ISP: AS16276 OVH SAS
[17:39:20] Huh
[17:39:24] Why's cp6 down as well?
[17:39:33] .ip 51.89.160.142
[17:39:34] [IP/Host Lookup] Hostname: cp7.miraheze.org | Location: France | ISP: AS16276 OVH SAS
[17:39:42] Our Asia cp is Singapore iirc
[17:39:43] Zppix: so 3 cps are dodgy
[17:39:46] Yeah
[17:39:52] Asia + UK down
[17:39:55] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:39:58] .gh Miraheze/dns
[17:39:58] https://github.com/Miraheze/dns
[17:40:22] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[17:40:31] cp[367]
[17:40:45] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 4.13, 5.68, 4.17
[17:40:49] RECOVERY - mw6 Current Load on mw6 is OK: OK - load average: 5.03, 5.69, 3.95
[17:40:49] Just cp9 was up
[17:41:35] Reception123, SPF|Cloud: any idea what's upsetting stuff?
[17:42:37] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?viewPanel=285&orgId=1&from=now-6h&to=now-1m&var-job=node&var-node=cp3.miraheze.org&var-port=9100
[17:42:38] [ Grafana ] - grafana.miraheze.org
[17:42:50] What the https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?viewPanel=285&orgId=1&from=now-6h&to=now-1m&var-job=node&var-node=cp6.miraheze.org&var-port=9100
[17:42:52] [ Grafana ] - grafana.miraheze.org
[17:43:03] As I said, I don't currently have access, and without that I can't really guess what it is
[17:43:09] Reception123: okay
[17:51:50] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 10.58, 7.15, 5.37
[17:52:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 51.89.160.142/cpweb, 2001:41d0:800:1056::2/cpweb
[17:52:18] Oh great
[17:52:19] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 97%
[17:52:22] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 2001:41d0:800:1056::2/cpweb, 2001:41d0:800:105a::10/cpweb
[17:52:25] Yey fun
[17:52:30] PROBLEM - mw4 Current Load on mw4 is CRITICAL: CRITICAL - load average: 9.41, 7.25, 5.45
[17:52:49] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:52:50] Pings Zppix SPF|Cloud paladox
[17:53:23] That's 3+6
[17:54:11] PROBLEM - cp6 Stunnel Http for mw4 on cp6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:54:19] PROBLEM - mw4 MediaWiki Rendering on mw4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:54:22] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 18%
[17:54:37] Why mw4 as well?
[17:54:49] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1950 bytes in 1.001 second response time
[17:55:29] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.77, 19.63, 17.87
[17:55:57] PROBLEM - mw6 Current Load on mw6 is CRITICAL: CRITICAL - load average: 12.12, 8.33, 5.68
[17:56:24] RECOVERY - mw4 MediaWiki Rendering on mw4 is OK: HTTP OK: HTTP/1.1 200 OK - 17160 bytes in 8.512 second response time
[17:56:44] @System Administrators possible ongoing issue
[17:56:46] PROBLEM - mw7 Current Load on mw7 is WARNING: WARNING - load average: 7.26, 7.30, 5.50
[17:57:15] PROBLEM - bn.gyaanipedia.co.in - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for bn.gyaanipedia.co.in could not be found
[17:57:17] https://grafana.miraheze.org/d/iWQm-pOZz/nginx-appservers?orgId=1&refresh=5s&var-instance=mw6.miraheze.org:9113&var-instance=mw5.miraheze.org:9113&var-instance=mw4.miraheze.org:9113&var-instance=mw7.miraheze.org:9113
[17:57:18] PROBLEM - mh142.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for mh142.com could not be found
[17:57:18] [ Grafana ] - grafana.miraheze.org
[17:57:22] PROBLEM - tallguysfree.miraheze.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for tallguysfree.miraheze.org could not be found
[17:57:43] seems resolved
[17:57:52] RECOVERY - mw6 Current Load on mw6 is OK: OK - load average: 4.91, 6.78, 5.41
[17:57:56] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 3.18, 6.76, 6.04
[17:57:56] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:58:20] RECOVERY - cp6 Stunnel Http for mw4 on cp6 is OK: HTTP OK: HTTP/1.1 200 OK - 15661 bytes in 0.012 second response time
[17:58:28] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[17:58:30] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 3.25, 6.59, 6.01
[17:58:45] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 2.74, 5.69, 5.13
[17:59:15] paladox: what caused this?
[17:59:17] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.30, 19.37, 18.33
[17:59:35] looks like requests were high at the time, so it hit the php-fpm child limit
[17:59:56] https://github.com/miraheze/puppet/blob/master/modules/mediawiki/manifests/php.pp#L3
[17:59:57] [ puppet/php.pp at master · miraheze/puppet · GitHub ] - github.com
[18:01:14] paladox: a one-off? Could it happen again? Steps to prevent a further incident?
[18:03:31] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
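A quick sketch of confirming the php-fpm child-limit theory from 17:59 on an affected mw host, assuming a Debian-style php-fpm 7.x install; the pool file location and log path are assumptions, not taken from the puppet manifest linked above:
  grep -R 'pm.max_children' /etc/php/*/fpm/pool.d/              # see the configured child limit
  sudo tail -n 100 /var/log/php*-fpm.log | grep max_children    # look for "server reached pm.max_children setting" warnings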
[18:04:02] RECOVERY - bn.gyaanipedia.co.in - reverse DNS on sslhost is OK: rDNS OK - bn.gyaanipedia.co.in reverse DNS resolves to cp6.miraheze.org
[18:04:09] RECOVERY - mh142.com - reverse DNS on sslhost is OK: rDNS OK - mh142.com reverse DNS resolves to cp7.miraheze.org
[18:04:15] RECOVERY - tallguysfree.miraheze.org - reverse DNS on sslhost is OK: rDNS OK - tallguysfree.miraheze.org reverse DNS resolves to cp7.miraheze.org
[18:04:32] PROBLEM - cp9 NTP time on cp9 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[18:08:33] RECOVERY - cp9 NTP time on cp9 is OK: NTP OK: Offset -0.002675682306 secs
[18:15:19] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:22:26] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.14, 20.80, 17.91
[19:24:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.42, 20.84, 18.30
[19:26:24] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.99, 19.53, 18.14
[19:31:40] I think GSs should get the 'commentadmin' right
[19:38:12] > I think GSs should get the 'commentadmin' right @MrJaroslavik That's potentially a good idea for wikis that use that extension, but it would likely require a new RfC to add that right. I suspect the community would likely support it, but would probably want to provide some guidance on when and under what conditions it could be used, too.
[19:44:24] I'm not sure why they don't
[19:45:29] Maybe because 'commentadmin' is not a very well-known or widely used right
[19:46:03] Imma go ahead and add it
[19:47:10] Oh, I know why
[19:47:24] It's not a right that can be assigned globally
[19:48:06] It's added by an extension
[19:48:08] Oh
[19:49:08] @MrJaroslavik if you ever need something done that needs commentadmin, just lmk and I can find a way to handle it
[19:49:45] I will paste it into cvt-private
[19:49:51] Ok
[20:11:18] .help
[20:11:18] dmehus: I've published a list of my commands at: https://sopel.bots.miraheze.wiki/help_prod.html
[20:12:32] .help seen
[20:12:33] dmehus: Reports when and where the user was last seen.
[20:12:48] Adding such rights to the global group is possible, but non-central (you have to do it on a wiki that has the right available)
[20:13:40] Of course
[20:14:12] .help tell
[20:14:12] dmehus: Give someone a message the next time they're seen
[20:14:12] e.g. MirahezeBot, tell dgw he broke something again.
[20:14:39] Maybe a discussion on Talk:GS or at CN would be enough?
[20:15:42] @MrJaroslavik CN, probably, I would say, but I'd recommend asking in the discussion whether the community assents to discussing it at CN or whether it wants an amending RfC.
[20:15:57] CN is probably the better option
[20:25:55] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.81, 20.60, 18.84
[20:27:50] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.19, 18.09, 18.14
[20:52:33] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 7.20, 5.45, 4.67
[20:54:33] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 4.32, 5.18, 4.68
[22:15:14] PROBLEM - gluster1 Puppet on gluster1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[22:31:27] !log root@gluster1:/home/paladox# gluster volume set mvol performance.client-io-threads off
[22:31:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:35:14] RECOVERY - gluster1 Puppet on gluster1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:44:56] [02mw-config] 07MrJaroslavik opened pull request 03#3206: Add 'skipcaptcha' right to sysops on Meta - 13https://git.io/JJybi
[23:48:04] Lol, sysops are autoconfirmed users
[23:48:28] I did not realize that 😆
[23:48:43] But it shouldn't be a problem to add this right
[23:56:20] > Lol, sysops are autoconfirmed users @MrJaroslavik Usually, yes. TestWiki is a common exception to this, as the sysop flag is often given before the user is autopromoted to autoconfirmed. Some sysops never make it to autoconfirmed on TestWiki because they don't meet the second condition (10 edits), having performed all or mostly log actions.
[23:59:10] [02mw-config] 07dmehus commented on pull request 03#3206: Add 'skipcaptcha' right to sysops on Meta - 13https://git.io/JJyNG
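A minimal sketch for verifying the GlusterFS option change logged at 22:31, assuming the gluster CLI on gluster1; 'mvol' is the volume named in that log entry:
  gluster volume get mvol performance.client-io-threads   # confirm the option now reads 'off'
  gluster volume info mvol                                 # review the volume's reconfigured options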