[00:02:27] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2746 MB (11% inode=93%); [00:02:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.04, 21.20, 16.91 [00:06:35] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.67, 19.59, 17.21 [00:20:23] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJbOf [00:20:25] [02miraheze/services] 07MirahezeSSLBot 035b0ab8e - BOT: Updating services config for wikis [00:54:28] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.31, 5.06, 3.84 [00:54:41] PROBLEM - ns1 Current Load on ns1 is WARNING: WARNING - load average: 1.93, 1.89, 0.96 [00:56:25] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.28, 3.61, 3.46 [00:56:40] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.27, 0.97, 0.79 [01:36:27] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2644 MB (10% inode=93%); [02:42:21] musikanimal: hows it going [02:42:33] well! and you? [02:42:56] musikanimal: besides testing positive for covid (im not having symptoms) im fine [02:46:47] oh dear! stay healthy! I've had several friends test positive, none had serious problems [02:48:54] musikanimal: yeah what you been working on? [02:56:38] @Zppix, you tested positive for COVID-19? Do you have any symptoms? [03:07:24] dmehus, I believe he said he did not have any symptoms. Which is great, I might add! Hope it stays that way! [03:09:00] @Universal_Omega Ah, I see that now in the parenthetical. Yeah, touch wood that it stays that way. Take it easy, rest lots. :) [03:13:24] Yep [03:13:44] Well goodnight. And I probably won't be on much tomorrow. [03:27:29] PROBLEM - storytime.jdstroy.cf - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - storytime.jdstroy.cf All nameservers failed to answer the query. [03:34:12] RECOVERY - storytime.jdstroy.cf - reverse DNS on sslhost is OK: rDNS OK - storytime.jdstroy.cf reverse DNS resolves to cp7.miraheze.org [04:00:24] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJbWW [04:00:26] [02miraheze/services] 07MirahezeSSLBot 036ce6be9 - BOT: Updating services config for wikis [04:08:44] PROBLEM - wiki.fourta.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.fourta.org All nameservers failed to answer the query. [04:15:23] RECOVERY - wiki.fourta.org - reverse DNS on sslhost is OK: rDNS OK - wiki.fourta.org reverse DNS resolves to cp7.miraheze.org [06:25:16] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJb2b [06:25:18] [02miraheze/services] 07MirahezeSSLBot 037c8b349 - BOT: Updating services config for wikis [08:05:55] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [08:06:02] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CRITICAL - load average: 5.76, 2.88, 1.32 [08:06:08] Oh great [08:06:38] PROBLEM - cp3 APT on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
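The reverse DNS alerts above (and throughout this log) check that a wiki's custom domain ultimately points back at one of the cp* cache proxies, e.g. "reverse DNS resolves to cp7.miraheze.org". Judging by the check_reverse_dns.py traceback near the end of this log, the plugin resolves the hostname to an address with dnspython and then does a PTR lookup on that address; the sketch below is only an illustration of that flow (it uses the newer dns.resolver.resolve() API rather than the plugin's actual code, which calls query()).

    # Illustrative sketch of a forward-then-reverse DNS check; not the actual
    # check_reverse_dns.py plugin referenced in this log. Requires dnspython.
    import dns.resolver
    import dns.reversename

    def reverse_dns(hostname: str) -> str:
        ip_addr = str(dns.resolver.resolve(hostname, "A")[0])    # forward A lookup
        ptr_name = dns.reversename.from_address(ip_addr)         # x.x.x.x.in-addr.arpa.
        return str(dns.resolver.resolve(ptr_name, "PTR")[0])     # reverse PTR lookup

    # A healthy pooled domain would come back as something like "cp7.miraheze.org."
    print(reverse_dns("storytime.jdstroy.cf"))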
[08:07:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb [08:07:21] Ping Reception|away SPF|Cloud [08:07:32] cp3 has gone and borked itself somehow [08:07:35] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 63% [08:07:44] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 51.89.160.142/cpweb [08:08:46] PROBLEM - ping6 on cp7 is WARNING: PING WARNING - Packet loss = 54%, RTA = 1.96 ms [08:09:02] Oh for _ sake [08:09:09] PROBLEM - ping4 on cp6 is WARNING: PING WARNING - Packet loss = 80%, RTA = 2.03 ms [08:09:36] RECOVERY - cp3 APT on cp3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [08:10:12] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.05, 1.92, 1.02 [08:10:54] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1945 bytes in 0.651 second response time [08:10:58] RECOVERY - ping6 on cp7 is OK: PING OK - Packet loss = 0%, RTA = 1.58 ms [08:11:07] RECOVERY - ping4 on cp6 is OK: PING OK - Packet loss = 0%, RTA = 1.55 ms [08:11:56] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 32.11, 23.98, 18.95 [08:12:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [08:12:07] RECOVERY - cp9 Current Load on cp9 is OK: OK - load average: 0.53, 1.40, 0.94 [08:12:10] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/sudoers] [08:12:11] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 9% [08:12:13] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [08:16:22] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 13.14, 20.20, 18.89 [08:18:40] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 53% [08:19:33] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [08:19:34] PROBLEM - ping6 on cp7 is WARNING: PING WARNING - Packet loss = 73%, RTA = 1.79 ms [08:19:36] PROBLEM - ping4 on cp7 is WARNING: PING WARNING - Packet loss = 80%, RTA = 1.73 ms [08:20:16] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 28%, RTA = 80.33 ms [08:21:06] PROBLEM - ping6 on cp6 is WARNING: PING WARNING - Packet loss = 37%, RTA = 5.18 ms [08:21:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.89.160.142/cpweb, 2001:41d0:800:1056::2/cpweb, 2001:41d0:800:105a::10/cpweb [08:21:20] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 4.83, 3.76, 2.18 [08:21:38] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.222.27.129/cpweb [08:22:09] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 61% [08:22:49] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1944 bytes in 8.833 second response time [08:23:20] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures [08:23:29] PROBLEM - ping4 on cp6 is WARNING: PING WARNING - Packet loss = 61%, RTA = 16.36 ms [08:23:33] PROBLEM - wiki.tallguysfree.com - reverse DNS on sslhost is WARNING: 
rDNS WARNING - reverse DNS entry for wiki.tallguysfree.com could not be found [08:23:34] PROBLEM - wiki.candela.digital - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.candela.digital could not be found [08:25:56] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [08:26:38] PROBLEM - la.gyaanipedia.co.in - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds [08:26:47] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [08:27:29] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 37%, RTA = 117.87 ms [08:28:00] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is CRITICAL: CRITICAL - NGINX Error Rate is 70% [08:28:00] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 44% [08:29:27] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 78.41 ms [08:30:49] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 82.00 ms [08:31:08] Ping SPF|Cloud paladox Zppix Reception|away: help? [08:31:17] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 62% [08:31:19] PROBLEM - cp6 Puppet on cp6 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 5 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled] [08:31:23] RECOVERY - wiki.tallguysfree.com - reverse DNS on sslhost is OK: rDNS OK - wiki.tallguysfree.com reverse DNS resolves to cp6.miraheze.org [08:31:23] RECOVERY - wiki.candela.digital - reverse DNS on sslhost is OK: rDNS OK - wiki.candela.digital reverse DNS resolves to cp6.miraheze.org [08:31:30] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 81% [08:31:41] RECOVERY - ping6 on cp7 is OK: PING OK - Packet loss = 0%, RTA = 3.07 ms [08:31:42] RECOVERY - ping4 on cp7 is OK: PING OK - Packet loss = 0%, RTA = 3.80 ms [08:31:43] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 1778 MB (7% inode=93%); [08:32:00] RECOVERY - ping4 on cp6 is OK: PING OK - Packet loss = 0%, RTA = 2.62 ms [08:32:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [08:32:38] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [08:32:42] RECOVERY - ping6 on cp6 is OK: PING OK - Packet loss = 0%, RTA = 5.26 ms [08:33:15] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 56% [08:33:50] RECOVERY - la.gyaanipedia.co.in - LetsEncrypt on sslhost is OK: OK - Certificate 'en.gyaanipedia.co.in' will expire on Tue 27 Oct 2020 15:20:40 GMT +0000. [08:34:35] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 4 minutes ago with 2 failures. 
Failed resources (up to 3 shown): File[/etc/sudoers],File[antiguabarbudacalypso.com] [08:35:32] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 97% [08:35:43] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.48, 21.47, 20.23 [08:35:46] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 48% [08:35:48] RECOVERY - cp6 Puppet on cp6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [08:36:42] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures [08:37:28] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 51.89.160.142/cpweb [08:38:35] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 50% [08:38:57] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 28%, RTA = 82.90 ms [08:40:04] PROBLEM - ping6 on cp7 is WARNING: PING WARNING - Packet loss = 50%, RTA = 27.33 ms [08:40:05] PROBLEM - ping4 on cp7 is WARNING: PING WARNING - Packet loss = 37%, RTA = 53.43 ms [08:40:12] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [08:40:50] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 24% [08:41:20] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 66% [08:41:26] PROBLEM - cp9 NTP time on cp9 is UNKNOWN: check_ntp_time: Invalid hostname/address - time.cloudflare.comUsage: check_ntp_time -H [-4 [08:41:45] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 85.31 ms [08:42:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [08:43:47] RECOVERY - cp9 NTP time on cp9 is OK: NTP OK: Offset -0.004855126143 secs [08:44:36] RECOVERY - ping6 on cp7 is OK: PING OK - Packet loss = 0%, RTA = 0.69 ms [08:44:38] RECOVERY - ping4 on cp7 is OK: PING OK - Packet loss = 0%, RTA = 1.56 ms [08:44:43] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [08:44:46] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [08:45:22] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 99% [08:45:37] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 76% [08:47:33] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 34% [08:47:39] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 37% [08:48:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [08:48:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [08:49:29] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 13.03, 18.40, 19.96 [08:50:07] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 73%, RTA = 81.06 ms [08:50:45] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 58%, RTA = 82.83 ms [08:50:46] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [08:50:50] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [08:51:41] PROBLEM - cp6 HTTP 4xx/5xx 
ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 51% [08:53:01] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 16%, RTA = 83.94 ms [08:53:47] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 90% [08:53:59] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [08:54:26] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 79.18 ms [08:54:49] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 89% [08:55:27] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 38% [08:55:58] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1947 bytes in 0.643 second response time [08:58:53] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [08:59:11] PROBLEM - wiki.serwerwanilia.pl - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.serwerwanilia.pl' expires in 15 day(s) (Thu 03 Sep 2020 08:53:15 GMT +0000). [08:59:30] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.85, 20.60, 19.96 [08:59:31] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 56% [09:01:05] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJbQg [09:01:06] [02miraheze/ssl] 07MirahezeSSLBot 036b5c706 - Bot: Update SSL cert for wiki.serwerwanilia.pl [09:01:32] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.58, 22.22, 20.59 [09:01:41] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 100% [09:01:59] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: DISK CRITICAL - free space: / 1441 MB (5% inode=93%); [09:03:36] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.93, 21.15, 20.46 [09:05:39] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.37, 19.67, 20.01 [09:07:54] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [09:08:04] PROBLEM - cp3 APT on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [09:10:02] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1936 bytes in 6.770 second response time [09:10:10] RECOVERY - cp3 APT on cp3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [09:10:45] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 0.78, 1.67, 2.00 [09:11:01] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 40% [09:11:35] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 37% [09:13:01] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [09:13:06] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 64%, RTA = 77.60 ms [09:13:34] PROBLEM - cp9 NTP time on cp9 is UNKNOWN: error getting address for time.cloudflare.com: Temporary failure in name resolution [09:15:39] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 64% [09:15:40] RECOVERY - cp9 NTP time on cp9 is OK: NTP OK: Offset -0.004844069481 secs [09:15:54] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 51% [09:16:20] PROBLEM - cp3 Stunnel Http for mw5 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
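The cp3 disk checks in this stretch go WARNING at 10% free and CRITICAL at 5% free, which is the usual percentage-threshold behaviour of a check_disk-style plugin. A minimal sketch of that classification, with the 10%/5% thresholds inferred from these alerts rather than taken from the actual Icinga configuration:

    # Rough sketch of a percentage-based free-space check; the 10%/5% thresholds
    # are inferred from the WARNING/CRITICAL alerts in this log, not from the
    # real monitoring configuration.
    import os

    def check_disk(path: str = "/", warn_pct: float = 10.0, crit_pct: float = 5.0) -> str:
        st = os.statvfs(path)
        free_mb = st.f_bavail * st.f_frsize // (1024 * 1024)
        free_pct = 100.0 * st.f_bavail / st.f_blocks
        if free_pct <= crit_pct:
            state = "DISK CRITICAL"
        elif free_pct <= warn_pct:
            state = "DISK WARNING"
        else:
            state = "DISK OK"
        return f"{state} - free space: {path} {free_mb} MB ({free_pct:.0f}%)"

    print(check_disk("/"))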
[09:16:40] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 44%, RTA = 294.54 ms [09:16:45] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 50%, RTA = 303.17 ms [09:16:55] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [09:17:07] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 32% [09:17:10] PROBLEM - cp3 Stunnel Http for mon1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [09:17:25] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 77.70 ms [09:17:54] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 92% [09:18:43] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 170.82 ms [09:18:44] RECOVERY - ping4 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 157.73 ms [09:18:50] RECOVERY - cp9 Current Load on cp9 is OK: OK - load average: 1.37, 1.25, 1.63 [09:19:14] RECOVERY - cp3 Stunnel Http for mon1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 30411 bytes in 3.964 second response time [09:19:27] RECOVERY - wiki.serwerwanilia.pl - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.serwerwanilia.pl' will expire on Mon 16 Nov 2020 08:00:57 GMT +0000. [09:19:37] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 12% [09:20:21] RECOVERY - cp3 Stunnel Http for mw5 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15653 bytes in 1.501 second response time [09:22:09] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 3 minutes ago with 2 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled],File[/etc/ssl/certs/m.miraheze.org.crt] [09:24:54] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.35, 1.55, 1.61 [09:25:11] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 47% [09:25:36] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [09:25:36] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 42% [09:25:58] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [09:26:04] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [09:26:10] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [09:26:20] PROBLEM - cp3 APT on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [09:26:40] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 29.34, 23.13, 20.86 [09:27:09] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 73% [09:28:13] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1946 bytes in 4.328 second response time [09:28:23] RECOVERY - cp3 APT on cp3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
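The "cp3 HTTPS" and "Stunnel Http" results above come from HTTP probes with a 10-second socket timeout that report the status line, body size and response time (the 301s are reported directly rather than followed). A very rough stand-in using only the standard library; the production checks are the stock check_http/NRPE plugins, not this code, and unlike them urlopen() follows redirects:

    # Very rough sketch of an HTTPS probe with a 10-second socket timeout, in the
    # spirit of the "HTTP OK: HTTP/1.1 200 OK - N bytes in N second response time"
    # lines above. Not the real plugin; urlopen() also follows redirects.
    import time
    import urllib.request

    def probe(url: str, timeout: float = 10.0) -> str:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = resp.read()
        except OSError as exc:        # covers timeouts and TLS/socket errors
            return f"CRITICAL - {exc}"
        elapsed = time.monotonic() - start
        return (f"HTTP OK: HTTP/1.1 {resp.status} {resp.reason} - "
                f"{len(body)} bytes in {elapsed:.3f} second response time")

    print(probe("https://meta.miraheze.org"))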
[09:28:41] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.22, 23.14, 21.19 [09:28:55] RECOVERY - cp9 Current Load on cp9 is OK: OK - load average: 1.36, 1.55, 1.61 [09:29:21] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is WARNING: WARNING - NGINX Error Rate is 46% [09:29:29] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 7% [09:29:50] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 35% [09:29:57] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [09:30:39] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [09:31:09] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 24% [09:31:18] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is CRITICAL: CRITICAL - NGINX Error Rate is 87% [09:33:00] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.48, 2.60, 2.07 [09:33:45] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 75% [09:34:08] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 80% [09:34:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [09:35:05] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 58%, RTA = 82.35 ms [09:35:28] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 74% [09:35:41] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures [09:36:03] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [09:36:44] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.39, 18.91, 20.06 [09:37:06] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 0.92, 1.84, 1.90 [09:37:09] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 82.81 ms [09:39:57] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 80%, RTA = 80.18 ms [09:40:10] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 57% [09:40:10] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2607:5300:205:200::2ac4/cpweb [09:40:52] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.58, 24.09, 22.00 [09:41:20] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 43% [09:41:32] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 44%, RTA = 83.12 ms [09:42:32] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled] [09:43:11] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.36, 1.87, 1.85 [09:43:24] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 80% [09:43:55] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 46% [09:44:00] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
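The "HTTP 4xx/5xx ERROR Rate" checks that dominate this incident report the share of recent requests answered with a 4xx or 5xx status on each cache proxy. The log does not show where the real check reads its data from, so the following only illustrates the calculation against an nginx access log; the log path and the position of the status field are assumptions:

    # Illustrative only: the real "HTTP 4xx/5xx ERROR Rate" check's data source is
    # not shown in this log. This sketch assumes a combined-format access log where
    # the status code is the 9th whitespace-separated field.
    def error_rate(access_log: str = "/var/log/nginx/access.log") -> float:
        total = errors = 0
        with open(access_log, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                fields = line.split()
                if len(fields) < 9 or not fields[8].isdigit():
                    continue
                total += 1
                if fields[8][0] in ("4", "5"):
                    errors += 1
        return 100.0 * errors / total if total else 0.0

    print(f"NGINX Error Rate is {error_rate():.0f}%")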
[09:44:09] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 100% [09:45:00] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.33, 23.80, 22.52 [09:45:27] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 48% [09:45:52] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [09:45:53] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 64% [09:46:21] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 16%, RTA = 78.46 ms [09:47:28] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 94% [09:47:50] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 83.20 ms [09:47:51] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1947 bytes in 0.636 second response time [09:48:04] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [09:48:18] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [09:49:08] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 1.12, 1.92, 1.92 [09:49:26] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 56% [09:49:47] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 43% [09:50:05] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 59% [09:51:03] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.20, 2.00, 1.95 [09:51:23] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 81% [09:52:15] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 78% [09:53:21] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 0.91, 1.63, 1.82 [09:53:29] PROBLEM - cp9 NTP time on cp9 is UNKNOWN: error getting address for time.cloudflare.com: Temporary failure in name resolution [09:53:43] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 53% [09:54:43] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [09:55:02] PROBLEM - ping6 on cp9 is WARNING: PING WARNING - Packet loss = 28%, RTA = 84.46 ms [09:55:12] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.31, 17.87, 20.16 [09:55:35] RECOVERY - cp9 NTP time on cp9 is OK: NTP OK: Offset -0.005997240543 secs [09:55:41] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 69% [09:55:41] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 44%, RTA = 78.41 ms [09:56:06] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 62% [09:56:13] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 46 seconds ago with 0 failures [09:57:29] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.77, 1.93, 1.88 [09:58:54] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 56% [09:59:32] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [09:59:45] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 27 seconds ago with 0 failures [10:00:17] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 16%, RTA = 98.65 ms [10:00:49] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 
is CRITICAL: CRITICAL - NGINX Error Rate is 75% [10:00:51] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [10:02:25] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 56% [10:03:04] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 56% [10:03:44] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 1.23, 1.81, 1.89 [10:04:14] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 27% [10:05:00] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 97% [10:05:13] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 44%, RTA = 77.49 ms [10:05:45] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [10:06:34] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 34% [10:07:48] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 2.32, 1.89, 1.89 [10:08:33] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 69% [10:09:40] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 77.04 ms [10:10:34] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 88% [10:12:48] RECOVERY - ping6 on cp9 is OK: PING OK - Packet loss = 16%, RTA = 83.24 ms [10:12:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.222.27.129/cpweb [10:13:03] PROBLEM - cp9 NTP time on cp9 is UNKNOWN: check_ntp_time: Invalid hostname/address - time.cloudflare.comUsage: check_ntp_time -H [-4 [10:13:22] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:13:36] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.222.27.129/cpweb [10:13:43] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 1.74, 2.00, 1.96 [10:14:51] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [10:15:01] RECOVERY - cp9 NTP time on cp9 is OK: NTP OK: Offset -0.004722982645 secs [10:15:39] PROBLEM - cp9 Current Load on cp9 is CRITICAL: CRITICAL - load average: 4.10, 2.71, 2.22 [10:18:35] PROBLEM - ping4 on cp9 is WARNING: PING WARNING - Packet loss = 37%, RTA = 77.87 ms [10:18:39] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is WARNING: WARNING - NGINX Error Rate is 54% [10:18:45] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled] [10:19:25] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [10:19:42] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1944 bytes in 4.215 second response time [10:20:32] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 57% [10:20:41] RECOVERY - ping4 on cp9 is OK: PING OK - Packet loss = 0%, RTA = 77.63 ms [10:22:19] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 44%, RTA = 156.15 ms [10:22:23] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 54%, RTA = 171.65 ms [10:22:35] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 62% [10:22:44] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 1 failures. 
Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/sudoers] [10:22:53] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is CRITICAL: CRITICAL - NGINX Error Rate is 96% [10:23:23] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [10:23:41] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.222.27.129/cpweb, 2607:5300:205:200::2ac4/cpweb [10:24:21] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 173.87 ms [10:24:21] RECOVERY - ping4 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 155.96 ms [10:24:30] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 50% [10:24:31] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:24:47] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 47% [10:26:30] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CRITICAL - NGINX Error Rate is 69% [10:26:33] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1943 bytes in 2.139 second response time [10:28:41] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 90% [10:29:00] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures [10:29:24] PROBLEM - cp3 NTP time on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:30:10] PROBLEM - cp3 Stunnel Http for mw5 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:31:25] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 39% [10:31:25] PROBLEM - cp3 NTP time on cp3 is UNKNOWN: error getting address for time.cloudflare.com: Temporary failure in name resolution [10:32:09] RECOVERY - cp3 Stunnel Http for mw5 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15653 bytes in 0.674 second response time [10:32:26] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 42% [10:32:49] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is WARNING: WARNING - NGINX Error Rate is 54% [10:33:39] PROBLEM - cp3 NTP time on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:35:14] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 35% [10:36:01] RECOVERY - cp3 NTP time on cp3 is OK: NTP OK: Offset -0.002785265446 secs [10:36:52] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
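The "NTP time" checks compare each host's clock against time.cloudflare.com and report the offset in seconds; the UNKNOWN results above appear to be DNS resolution failures for that hostname rather than clock drift. The production plugin is check_ntp_time; a rough Python equivalent, assuming the third-party ntplib package and made-up warning/critical offsets (the real thresholds are not shown here):

    # Rough equivalent of an NTP offset check; the production plugin is
    # monitoring-plugins' check_ntp_time, not this code. Requires ntplib.
    # The 0.5s/1.0s thresholds below are illustrative assumptions.
    import ntplib

    def ntp_offset(server: str = "time.cloudflare.com", warn: float = 0.5, crit: float = 1.0) -> str:
        try:
            response = ntplib.NTPClient().request(server, version=3, timeout=10)
        except Exception as exc:       # e.g. name resolution failure, as seen above
            return f"UNKNOWN: error getting address for {server}: {exc}"
        offset = response.offset       # local clock minus server time, in seconds
        if abs(offset) >= crit:
            return f"NTP CRITICAL: Offset {offset:.9f} secs"
        if abs(offset) >= warn:
            return f"NTP WARNING: Offset {offset:.9f} secs"
        return f"NTP OK: Offset {offset:.9f} secs"

    print(ntp_offset())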
[10:37:06] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:37:27] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [10:37:28] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.96, 20.24, 18.61 [10:39:01] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 11% [10:39:11] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1945 bytes in 1.486 second response time [10:39:29] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 19.80, 19.94, 18.69 [10:39:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [10:39:58] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [10:41:19] RECOVERY - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is OK: OK - NGINX Error Rate is 11% [10:44:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [10:44:20] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [10:47:51] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.91, 21.89, 19.98 [10:48:24] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [10:49:14] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 61%, RTA = 241.84 ms [10:49:49] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.94, 19.82, 19.45 [10:51:34] PROBLEM - cp3 Stunnel Http for mw6 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:52:13] PROBLEM - cp3 Stunnel Http for test2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:52:43] PROBLEM - cp3 Stunnel Http for mw5 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:52:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [10:53:30] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 54%, RTA = 218.03 ms [10:53:51] RECOVERY - cp3 Stunnel Http for mw6 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15653 bytes in 3.813 second response time [10:54:22] RECOVERY - cp3 Stunnel Http for test2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15655 bytes in 2.540 second response time [10:54:32] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw5 [10:54:41] RECOVERY - cp3 Stunnel Http for mw5 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15653 bytes in 0.898 second response time [10:55:33] RECOVERY - ping4 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 167.51 ms [10:56:15] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [10:56:34] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 7 backends are healthy [10:56:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [11:00:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [11:04:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:04:14] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. 
mw5 mw6 [11:05:33] PROBLEM - mw4 Current Load on mw4 is WARNING: WARNING - load average: 6.96, 5.74, 4.43 [11:06:01] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.42, 20.11, 19.27 [11:07:34] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 2.39, 4.38, 4.09 [11:08:02] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 19.45, 19.47, 19.10 [11:08:02] PROBLEM - cp6 Disk Space on cp6 is WARNING: DISK WARNING - free space: / 2348 MB (8% inode=95%); [11:08:16] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 16%, RTA = 219.24 ms [11:09:56] PROBLEM - cp7 Disk Space on cp7 is WARNING: DISK WARNING - free space: / 2887 MB (10% inode=95%); [11:09:58] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [11:10:26] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 7 backends are healthy [11:11:59] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.79, 23.09, 20.64 [11:13:56] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.88, 22.02, 20.58 [11:15:53] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.27, 20.11, 20.04 [11:16:21] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [11:17:55] PROBLEM - cp6 Disk Space on cp6 is CRITICAL: DISK CRITICAL - free space: / 1552 MB (5% inode=95%); [11:23:29] PROBLEM - cp9 Current Load on cp9 is WARNING: WARNING - load average: 0.08, 0.65, 1.83 [11:25:16] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.47, 5.82, 4.50 [11:25:17] PROBLEM - cp3 Current Load on cp3 is WARNING: WARNING - load average: 0.25, 0.73, 1.84 [11:25:25] RECOVERY - cp9 Current Load on cp9 is OK: OK - load average: 0.23, 0.51, 1.64 [11:27:14] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.14, 0.56, 1.64 [11:29:10] PROBLEM - puppet2 Current Load on puppet2 is CRITICAL: CRITICAL - load average: 8.15, 7.03, 5.29 [11:35:02] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 5.21, 7.25, 6.06 [11:37:00] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.81, 5.36, 5.51 [11:44:45] PROBLEM - puppet2 Current Load on puppet2 is CRITICAL: CRITICAL - load average: 8.34, 6.48, 5.70 [11:46:42] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 6.34, 6.72, 5.91 [11:54:27] PROBLEM - puppet2 Current Load on puppet2 is CRITICAL: CRITICAL - load average: 8.90, 6.93, 6.00 [11:56:24] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.39, 7.20, 6.22 [11:58:22] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 2.44, 5.39, 5.67 [11:59:22] PROBLEM - db7 Check MariaDB Replication on db7 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 202s [12:04:11] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.85, 6.29, 5.85 [12:06:12] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 4.92, 6.23, 5.91 [12:13:10] RECOVERY - db7 Check MariaDB Replication on db7 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [12:13:27] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 38.97, 29.05, 21.95 [12:17:21] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 14.24, 21.92, 20.76 [12:19:19] PROBLEM 
- cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 29.01, 23.11, 21.22 [12:21:17] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.89, 22.47, 21.22 [12:25:17] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.60, 18.75, 20.02 [12:44:49] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.83, 20.91, 19.39 [12:50:39] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.88, 19.87, 19.42 [13:05:10] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 6.85, 5.46, 4.32 [13:06:30] PROBLEM - cp9 Disk Space on cp9 is WARNING: DISK WARNING - free space: / 4244 MB (10% inode=96%); [13:07:07] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.38, 3.97, 3.91 [13:10:05] !log fixing puppet on test2 [13:10:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [13:14:53] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.01, 5.34, 4.33 [13:15:07] ^ probably me somehow [13:16:50] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.49, 3.97, 3.96 [13:25:11] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 30.64, 22.73, 20.00 [13:27:08] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.22, 22.02, 20.04 [13:34:57] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.97, 20.24, 19.98 [13:35:23] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.20, 5.73, 4.53 [13:36:40] PROBLEM - ns1 Current Load on ns1 is CRITICAL: CRITICAL - load average: 0.60, 2.05, 1.30 [13:37:20] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.12, 3.95, 4.02 [13:38:40] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.02, 0.93, 1.01 [13:48:35] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.11, 20.61, 19.91 [13:50:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 32.23, 23.78, 21.09 [13:52:36] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.86, 21.70, 20.69 [13:54:33] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 13.92, 19.11, 19.86 [14:00:39] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.81, 21.79, 20.79 [14:02:36] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.59, 20.35, 20.38 [14:15:23] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.02, 5.65, 4.55 [14:17:10] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.66, 21.64, 20.26 [14:17:20] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.33, 4.08, 4.11 [14:19:07] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.74, 21.72, 20.50 [14:21:04] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.43, 21.91, 20.66 [14:24:59] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.76, 21.76, 20.97 [14:26:58] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 32.42, 24.78, 22.12 [14:30:50] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.73, 23.56, 22.24 [14:32:47] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.33, 24.58, 22.78 [14:36:41] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 
19.59, 22.50, 22.45 [14:46:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.70, 22.65, 22.28 [14:48:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.68, 22.55, 22.26 [14:50:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.09, 24.00, 22.81 [14:52:33] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.89, 22.56, 22.42 [14:55:34] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 6.83, 5.62, 4.62 [14:57:31] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.41, 4.14, 4.20 [15:05:19] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNts [15:05:20] [02miraheze/services] 07MirahezeSSLBot 0340eafab - BOT: Updating services config for wikis [15:05:22] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.62, 5.84, 4.73 [15:07:18] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.62, 4.32, 4.32 [15:08:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.94, 22.55, 21.66 [15:12:36] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.31, 22.45, 21.91 [15:15:11] PROBLEM - puppet2 Current Load on puppet2 is CRITICAL: CRITICAL - load average: 8.54, 5.95, 4.78 [15:17:10] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 2.39, 4.65, 4.46 [15:18:36] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.14, 23.63, 22.68 [15:22:36] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.49, 23.92, 23.35 [15:23:01] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 7.92, 6.10, 5.28 [15:24:07] .op [15:24:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 29.02, 25.39, 23.92 [15:24:56] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 6.60, 5.92, 5.29 [15:26:35] PROBLEM - madzebrascience.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'madzebrascience.wiki' expires in 15 day(s) (Thu 03 Sep 2020 15:19:03 GMT +0000). [15:28:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.83, 23.93, 23.86 [15:28:53] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNmG [15:28:55] [02miraheze/ssl] 07MirahezeSSLBot 03e9e9a2d - Bot: Update SSL cert for madzebrascience.wiki [15:36:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.70, 22.27, 22.75 [15:38:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.66, 21.18, 22.31 [15:39:56] RECOVERY - madzebrascience.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'madzebrascience.wiki' will expire on Mon 16 Nov 2020 14:28:46 GMT +0000. 
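In this log the LetsEncrypt checks start warning once a certificate is about 15 days from expiry, and MirahezeSSLBot then renews it and commits the new certificate to the ssl repository (wiki.serwerwanilia.pl and madzebrascience.wiki both follow that pattern). A small sketch of the expiry-window test itself, using only the standard library; the 15-day threshold is read off these alerts, not taken from the real check's configuration:

    # Sketch of a certificate-expiry check; the 15-day warning window is inferred
    # from the alerts in this log rather than taken from the real plugin's config.
    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(hostname: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                not_after = tls.getpeercert()["notAfter"]   # e.g. 'Nov 16 14:28:46 2020 GMT'
        expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
        return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

    remaining = days_until_expiry("madzebrascience.wiki")
    state = "WARNING" if remaining <= 15 else "OK"
    print(f"{state} - certificate expires in {remaining:.0f} day(s)")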
[15:44:29] PROBLEM - db7 Check MariaDB Replication on db7 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 166s [15:46:29] RECOVERY - db7 Check MariaDB Replication on db7 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [15:46:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 29.48, 23.91, 22.49 [15:48:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.76, 21.41, 21.76 [15:50:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 36.91, 28.68, 24.44 [15:54:20] [02miraheze/dns] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNYz [15:54:21] [02miraheze/dns] 07Southparkfan 035913cc7 - Depool cp3 [15:54:33] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.25, 23.89, 23.51 [16:00:05] !log delete varnishlog [4567] from cp3 to free up space [16:00:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:12:34] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.66, 18.24, 20.14 [16:13:27] [02miraheze/puppet] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNOY [16:13:29] [02miraheze/puppet] 07Southparkfan 03200413a - cp3: reduce cache file size and use new caching system [16:13:45] !log downtime cp3 services in varnish [16:13:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:16:35] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.50, 20.59, 20.80 [16:18:02] I'm afk for a bit, still working on cp3 [16:20:34] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.32, 22.10, 21.30 [16:22:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.88, 20.81, 20.92 [16:26:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.77, 23.23, 21.81 [16:28:35] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.60, 21.13, 21.21 [16:32:34] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.77, 18.12, 19.94 [16:45:26] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.90, 23.88, 21.32 [16:46:42] PROBLEM - mw7 Current Load on mw7 is WARNING: WARNING - load average: 7.39, 6.39, 4.51 [16:47:22] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.99, 22.62, 21.18 [16:48:41] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 3.92, 5.39, 4.36 [16:54:34] PROBLEM - mw4 Current Load on mw4 is CRITICAL: CRITICAL - load average: 8.98, 6.66, 5.30 [16:55:31] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 9.03, 7.25, 5.65 [16:56:32] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 5.36, 6.18, 5.30 [16:57:31] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 5.33, 6.72, 5.67 [17:04:32] PROBLEM - mw4 Current Load on mw4 is CRITICAL: CRITICAL - load average: 9.00, 6.88, 5.74 [17:05:31] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 9.69, 7.41, 6.15 [17:06:09] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.27, 6.27, 5.10 [17:06:33] PROBLEM - mw4 Current Load on mw4 is WARNING: WARNING - load average: 6.59, 6.99, 5.95 [17:06:58] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 
14.35, 18.60, 20.06 [17:07:31] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 4.45, 6.31, 5.91 [17:08:11] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 0.97, 4.19, 4.48 [17:08:33] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 2.90, 5.40, 5.49 [17:10:23] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNsP [17:10:24] [02miraheze/services] 07MirahezeSSLBot 036c278c4 - BOT: Updating services config for wikis [17:14:31] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3499 MB (14% inode=93%); [17:18:42] *op [17:18:54] Zppix: use .op [17:19:01] does it matter? [17:19:19] Zppix: .op is what used to be ZppixBot [17:19:34] but they do the same thing no? [17:19:34] SigmaBotMH is heading hopefully eventually for the dumpster [17:19:39] Zppix: yes [17:25:32] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 7.64, 6.59, 5.62 [17:27:31] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 3.43, 5.30, 5.26 [17:41:59] !log forcing a logrotate job for nginx and varnish logs on cp3 [17:42:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:49:53] [02miraheze/dns] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNZh [17:49:55] [02miraheze/dns] 07Southparkfan 03eec5db3 - Repool cp3 [17:50:33] cp3 fixed, working on cp6 now [17:52:42] !log forcing nginx logrotate on cp6 [17:52:49] PROBLEM - ns1 Current Load on ns1 is WARNING: WARNING - load average: 1.67, 1.85, 0.90 [17:52:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:53:39] PROBLEM - puppet2 APT on puppet2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [17:54:11] PROBLEM - puppet2 Current Load on puppet2 is CRITICAL: CRITICAL - load average: 10.64, 6.62, 5.20 [17:54:48] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.89, 1.20, 0.84 [17:55:32] SPF|Cloud: great! [17:55:39] RECOVERY - puppet2 APT on puppet2 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [17:57:44] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [17:57:58] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Puppet has 37 failures. Last run 2 minutes ago with 37 failures. Failed resources (up to 3 shown) [17:57:59] PROBLEM - cloud3 Puppet on cloud3 is CRITICAL: CRITICAL: Puppet has 15 failures. Last run 2 minutes ago with 15 failures. Failed resources (up to 3 shown) [17:58:03] PROBLEM - db13 Puppet on db13 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 2 minutes ago with 16 failures. Failed resources (up to 3 shown) [17:58:04] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [17:58:09] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.57, 5.52, 5.25 [17:58:16] PROBLEM - cp7 Puppet on cp7 is CRITICAL: CRITICAL: Puppet has 277 failures. Last run 2 minutes ago with 277 failures. Failed resources (up to 3 shown) [17:58:17] PROBLEM - db12 Puppet on db12 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 2 minutes ago with 16 failures. Failed resources (up to 3 shown) [17:58:26] PROBLEM - rdb1 Puppet on rdb1 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 2 minutes ago with 16 failures. 
Failed resources (up to 3 shown) [17:58:29] PROBLEM - jobrunner2 Puppet on jobrunner2 is CRITICAL: CRITICAL: Puppet has 290 failures. Last run 2 minutes ago with 290 failures. Failed resources (up to 3 shown) [17:58:34] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Puppet has 13 failures. Last run 2 minutes ago with 13 failures. Failed resources (up to 3 shown) [17:58:40] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Puppet has 24 failures. Last run 3 minutes ago with 24 failures. Failed resources (up to 3 shown) [17:58:43] PROBLEM - ns2 Puppet on ns2 is CRITICAL: CRITICAL: Puppet has 14 failures. Last run 3 minutes ago with 14 failures. Failed resources (up to 3 shown) [17:58:44] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [17:58:44] PROBLEM - db7 Puppet on db7 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 3 minutes ago with 17 failures. Failed resources (up to 3 shown) [17:58:44] PROBLEM - gluster2 Puppet on gluster2 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown) [17:58:46] PROBLEM - ldap1 Puppet on ldap1 is CRITICAL: CRITICAL: Puppet has 15 failures. Last run 3 minutes ago with 15 failures. Failed resources (up to 3 shown) [17:58:46] PROBLEM - cloud2 Puppet on cloud2 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 3 minutes ago with 16 failures. Failed resources (up to 3 shown) [17:58:47] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Puppet has 23 failures. Last run 3 minutes ago with 23 failures. Failed resources (up to 3 shown) [17:58:50] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [17:58:56] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Puppet has 246 failures. Last run 2 minutes ago with 246 failures. Failed resources (up to 3 shown): File[/etc/rsyslog.conf],File[authority certificates],File[/etc/apt/apt.conf.d/50unattended-upgrades],File[/etc/apt/apt.conf.d/20auto-upgrades] [17:58:58] hmmm [17:59:05] PROBLEM - rdb2 Puppet on rdb2 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 3 minutes ago with 16 failures. Failed resources (up to 3 shown) [17:59:06] PROBLEM - cp6 Puppet on cp6 is CRITICAL: CRITICAL: Puppet has 276 failures. Last run 3 minutes ago with 276 failures. Failed resources (up to 3 shown) [17:59:13] SPF|Cloud: ^ expected? [17:59:15] PROBLEM - gluster1 Puppet on gluster1 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 3 minutes ago with 18 failures. Failed resources (up to 3 shown) [17:59:17] PROBLEM - phab1 Puppet on phab1 is CRITICAL: CRITICAL: Puppet has 22 failures. Last run 3 minutes ago with 22 failures. Failed resources (up to 3 shown) [17:59:24] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Puppet has 300 failures. Last run 3 minutes ago with 300 failures. Failed resources (up to 3 shown) [17:59:30] PROBLEM - puppet2 Puppet on puppet2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [17:59:30] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 30 failures. Last run 3 minutes ago with 30 failures. Failed resources (up to 3 shown) [17:59:32] PROBLEM - db11 Puppet on db11 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 3 minutes ago with 16 failures. 
Failed resources (up to 3 shown) [17:59:34] PROBLEM - cloud1 Puppet on cloud1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/rsyslog.conf] [17:59:39] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Puppet has 289 failures. Last run 3 minutes ago with 289 failures. Failed resources (up to 3 shown) [17:59:42] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Puppet has 288 failures. Last run 3 minutes ago with 288 failures. Failed resources (up to 3 shown) [17:59:47] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Puppet has 288 failures. Last run 3 minutes ago with 288 failures. Failed resources (up to 3 shown) [18:00:26] eh [18:02:04] RECOVERY - cp6 Disk Space on cp6 is OK: DISK OK - free space: / 9072 MB (34% inode=95%); [18:02:34] SPF|Cloud: puppet2 just went green for apt upgrades. Probably transient [18:03:18] RECOVERY - phab1 Puppet on phab1 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures [18:03:28] RECOVERY - puppet2 Puppet on puppet2 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures [18:03:29] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 18 seconds ago with 0 failures [18:03:32] RECOVERY - db11 Puppet on db11 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures [18:03:33] RECOVERY - cloud1 Puppet on cloud1 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures [18:03:42] [02miraheze/puppet] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNnM [18:03:44] [02miraheze/puppet] 07Southparkfan 0333d3e0d - Fix postrotate for nginx logrotate unit [18:03:44] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 11 seconds ago with 0 failures [18:03:53] Weeeeeee, here goes happy puppet! 
[18:03:59] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 43 seconds ago with 0 failures [18:04:00] RECOVERY - cloud3 Puppet on cloud3 is OK: OK: Puppet is currently enabled, last run 55 seconds ago with 0 failures [18:04:03] RECOVERY - db13 Puppet on db13 is OK: OK: Puppet is currently enabled, last run 55 seconds ago with 0 failures [18:04:04] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 56 seconds ago with 0 failures [18:04:10] PROBLEM - puppet2 Current Load on puppet2 is WARNING: WARNING - load average: 7.32, 6.28, 5.50 [18:04:16] RECOVERY - cp7 Puppet on cp7 is OK: OK: Puppet is currently enabled, last run 30 seconds ago with 0 failures [18:04:17] RECOVERY - db12 Puppet on db12 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:26] RECOVERY - rdb1 Puppet on rdb1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:29] RECOVERY - jobrunner2 Puppet on jobrunner2 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures [18:04:34] RECOVERY - bacula2 Puppet on bacula2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:39] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:40] RECOVERY - ns2 Puppet on ns2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:42] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures [18:04:43] RECOVERY - db7 Puppet on db7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:43] RECOVERY - gluster2 Puppet on gluster2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:45] RECOVERY - ldap1 Puppet on ldap1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:45] RECOVERY - cloud2 Puppet on cloud2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:45] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:04:56] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures [18:05:06] RECOVERY - cp6 Puppet on cp6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:06] RECOVERY - rdb2 Puppet on rdb2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:14] RECOVERY - gluster1 Puppet on gluster1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:23] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:26] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 28.34, 23.74, 20.70 [18:05:41] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:45] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:47] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:05:55] SPF|Cloud: load alert want checking? 
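For reference on the "Current Load" alerts above and the threshold discussion that follows, the check boils down to comparing the 1/5/15-minute load averages against the number of logical CPUs on the host. A minimal sketch of that comparison (Unix-only; the 1.0x/1.5x ratios are chosen for illustration and are not the actual Icinga thresholds):

    #!/usr/bin/env python3
    # Sketch: compare load averages with the logical CPU count, the idea
    # behind the "Current Load" WARNING/CRITICAL alerts in this log.
    # The warn/crit ratios are illustrative, not Miraheze's configured values.
    import os

    def load_status(warn_ratio: float = 1.0, crit_ratio: float = 1.5) -> str:
        cpus = os.cpu_count() or 1
        one, five, fifteen = os.getloadavg()  # Unix-only
        level = "OK"
        if five >= cpus * warn_ratio:
            level = "WARNING"
        if five >= cpus * crit_ratio:
            level = "CRITICAL"
        return f"{level} - load average: {one:.2f}, {five:.2f}, {fifteen:.2f} ({cpus} CPUs)"

    if __name__ == "__main__":
        print(load_status())

As noted just below, a 5-minute average above the CPU count is a hint that something may be wrong, not proof of it.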
[18:06:10] RECOVERY - puppet2 Current Load on puppet2 is OK: OK - load average: 1.07, 4.27, 4.86 [18:06:49] I'm not so sure if the thresholds for cloud's load alerts are correct [18:06:50] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:07:24] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.53, 22.74, 20.73 [18:07:48] a 5m load average higher than the number of available CPUs (or threads, when using hyperthreading) can indicate issues, but does not always have to be a problem [18:08:59] Ack [18:11:17] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 20.03, 20.37, 20.11 [18:18:03] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 16.31, 21.64, 21.12 [18:29:47] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.04, 18.74, 20.20 [18:32:37] !log force logrotate for nginx on cp[79] [18:32:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:32:54] RECOVERY - cp7 Disk Space on cp7 is OK: DISK OK - free space: / 2995 MB (11% inode=95%); [18:33:38] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.87, 20.92, 20.81 [18:34:52] PROBLEM - museummiddelland.nl - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/dns/resolver.py", line 213, in __init__ rdclass, rdtype) File "/usr/lib/python3/dist-packages/dns/message.py", line 341, in find_rrset raise KeyErrorKeyErrorDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "/usr/lib/python3/dist-packages/dns/resolver.p [18:34:52] ", line 223, in __init__ dns.rdatatype.CNAME) File "/usr/lib/python3/dist-packages/dns/message.py", line 341, in find_rrset raise KeyErrorKeyErrorDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 95, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 80, in main rdns_hostname = get_reverse_dnshostname(args.hostn [18:34:52] me) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 66, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.query(hostname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 1004, in query raise_on_no_answer) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 232, in __init__ raise NoAnswer(response=response)dns.resolver.NoAnswer: The DNS response does not contain an answer to the question: museum iddelland.nl. IN A
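The museummiddelland.nl traceback above comes from the check_reverse_dns.py plugin: it resolves the hostname's A record with dnspython and then looks up the PTR record for that address, and here the A lookup returned a response with no answer, so dns.resolver raised NoAnswer. A rough approximation of that flow with the failure handled, using the dnspython 2.x resolve() call where the plugin's older dnspython uses query(); this is a sketch of the general shape, not the plugin's actual code:

    #!/usr/bin/env python3
    # Rough approximation of a reverse-DNS check: resolve the host's A record,
    # then resolve the PTR record for that IP. Requires dnspython.
    # Not the actual check_reverse_dns.py code, just the general shape of it.
    import sys

    import dns.resolver
    import dns.reversename

    def get_reverse_dns(hostname: str) -> str:
        try:
            ip_addr = str(dns.resolver.resolve(hostname, "A")[0])
        except dns.resolver.NoAnswer:
            # The failure seen above: the response carried no A record.
            print(f"rDNS WARNING - {hostname} returned no A record")
            sys.exit(1)
        ptr_name = dns.reversename.from_address(ip_addr)
        return str(dns.resolver.resolve(ptr_name, "PTR")[0])

    if __name__ == "__main__":
        print(get_reverse_dns(sys.argv[1]))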
IN A [18:34:53] RECOVERY - cp9 Disk Space on cp9 is OK: DISK OK - free space: / 6001 MB (15% inode=96%); [18:37:33] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.57, 19.56, 20.27 [18:43:23] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.33, 23.04, 21.56 [18:44:11] [02dns] 07MacFan4000 opened pull request 03#173: switch mhbots icinga to tools1 - 13https://git.io/JJNCC [18:51:11] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.21, 18.19, 20.03 [18:55:04] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.29, 20.33, 20.48 [18:57:01] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 18.86, 19.63, 20.19 [19:09:29] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/JJNWC [19:09:31] [02miraheze/puppet] 07paladox 03a023333 - gluster: backup /srv/mvol [19:09:32] [02puppet] 07paladox created branch 03paladox-patch-1 - 13https://git.io/vbiAS [19:09:38] [02puppet] 07paladox opened pull request 03#1482: gluster: backup /srv/mvol - 13https://git.io/JJNWl [19:13:33] [02dns] 07JohnFLewis closed pull request 03#173: switch mhbots icinga to tools1 - 13https://git.io/JJNCC [19:13:35] [02miraheze/dns] 07JohnFLewis pushed 032 commits to 03master [+0/-0/±2] 13https://git.io/JJNWP [19:13:36] [02miraheze/dns] 07MacFan4000 03c60b696 - switch mhbots icinga to tools1 [19:13:38] [02miraheze/dns] 07JohnFLewis 034e102a9 - Merge pull request #173 from MacFan4000/patch-7 switch mhbots icinga to tools1 [19:14:30] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/JJNWD [19:14:32] [02miraheze/puppet] 07paladox 03d86559d - Update bacula-dir.conf [19:14:33] [02puppet] 07paladox synchronize pull request 03#1482: gluster: backup /srv/mvol - 13https://git.io/JJNWl [19:17:14] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/JJNWQ [19:17:16] [02miraheze/puppet] 07paladox 031e786a3 - Update director.pp [19:17:18] [02puppet] 07paladox synchronize pull request 03#1482: gluster: backup /srv/mvol - 13https://git.io/JJNWl [19:18:43] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/JJNW7 [19:18:45] [02miraheze/puppet] 07paladox 03b7bfd81 - Update nrpe.cfg.erb [19:18:52] [02puppet] 07paladox synchronize pull request 03#1482: gluster: backup /srv/mvol - 13https://git.io/JJNWl [19:45:53] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.72, 20.96, 18.69 [19:47:50] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.30, 20.50, 18.73 [19:51:45] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.69, 21.83, 19.50 [19:53:43] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 19.62, 20.26, 19.17 [19:55:13] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNBm [19:55:14] [02miraheze/services] 07MirahezeSSLBot 03d05e6c3 - BOT: Updating services config for wikis [21:05:00] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.47, 19.43, 17.48 [21:06:57] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.97, 17.96, 17.17 [21:12:47] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.17, 20.66, 18.43 [21:14:43] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.19, 19.07, 18.09 [21:35:12] PROBLEM - cloud2 
Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.79, 21.10, 19.01 [21:37:10] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.54, 21.71, 19.50 [21:39:10] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.71, 19.37, 18.89 [21:51:36] https://rottenwebsites.miraheze.org/ <- does miraheze policy allow sites like this? [21:51:37] [ Rotten Websites Wiki ] - rottenwebsites.miraheze.org [21:53:42] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.11, 20.12, 18.75 [21:55:39] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.71, 18.83, 18.42 [21:57:57] Naleksuh: some of them just make it (unfortunately) [21:58:07] what do you mean by "just make it" [22:01:36] Naleksuh: one step in one direction and it would be violating policies and one step the other direction and it wouldn't [22:01:56] so is it CURRENTLY in violation? [22:02:57] no [22:03:06] it's on a fine line [22:03:26] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.87, 20.62, 19.14 [22:05:25] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.42, 18.98, 18.70 [22:28:40] PROBLEM - mw6 Current Load on mw6 is CRITICAL: CRITICAL - load average: 13.87, 8.13, 4.78 [22:30:07] PROBLEM - mw7 Current Load on mw7 is CRITICAL: CRITICAL - load average: 9.70, 7.38, 4.69 [22:31:31] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 11.57, 7.32, 5.07 [22:32:07] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 4.43, 6.24, 4.61 [22:32:40] PROBLEM - mw6 Current Load on mw6 is WARNING: WARNING - load average: 5.17, 7.26, 5.28 [22:32:42] PROBLEM - mw4 Current Load on mw4 is CRITICAL: CRITICAL - load average: 10.42, 8.28, 5.52 [22:33:31] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 6.97, 7.31, 5.36 [22:34:37] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 3.14, 6.40, 5.17 [22:34:40] RECOVERY - mw6 Current Load on mw6 is OK: OK - load average: 3.52, 6.09, 5.10 [22:35:31] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 2.62, 5.63, 4.99 [22:55:11] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJNoA [22:55:12] [02miraheze/services] 07MirahezeSSLBot 03b924838 - BOT: Updating services config for wikis [23:04:19] •eir was opped (+o) by •ChanServ [23:04:20] •eir un-banned Bugambilia!*@* (-b) [23:04:20] •eir un-banned *!*@199.8.201.* (-b) [23:04:20] <--- @Zppix, that's one I haven't seen before. Assuming @eir is some sort of system bot, presumably operated by Freenode, but what prompted it to take those actions? [23:04:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.84, 18.91, 16.88 [23:05:13] eir automatically clears banlist entries after 24h unless explicitly instructed to do otherwise [23:06:33] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 12.49, 16.56, 16.28 [23:06:34] ah, thanks @Voidwalker. That's great. So, basically, unless channel operators define a custom ban timeframe, it clears them after a day. [23:07:20] yeah, and the bot messages the operator who placed the ban and asks them to provide an expiry/comment [23:09:17] @Voidwalker, ah, thanks. That's helpful. One thing one has to love about IRC is the simplicity of the IP ban syntax relative to wiki rangeblocks, eh?
;) [23:09:52] depends ;) [23:38:00] PROBLEM - mw7 Current Load on mw7 is CRITICAL: CRITICAL - load average: 8.27, 5.90, 3.83 [23:38:40] PROBLEM - mw6 Current Load on mw6 is WARNING: WARNING - load average: 7.65, 5.95, 3.94 [23:40:00] PROBLEM - mw7 Current Load on mw7 is WARNING: WARNING - load average: 7.94, 6.61, 4.34 [23:40:41] PROBLEM - mw6 Current Load on mw6 is CRITICAL: CRITICAL - load average: 9.04, 6.95, 4.56 [23:42:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.89.160.142/cpweb, 2607:5300:205:200::2ac4/cpweb [23:43:01] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 2001:41d0:800:1056::2/cpweb, 2001:41d0:800:105a::10/cpweb, 2607:5300:205:200::2ac4/cpweb [23:43:35] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 10.47, 7.84, 5.30 [23:44:00] PROBLEM - mw7 Current Load on mw7 is CRITICAL: CRITICAL - load average: 11.18, 8.30, 5.52 [23:44:12] irc bans are more complicated than wiki blocks though [23:44:39] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [23:44:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [23:45:31] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 5.66, 6.76, 5.20 [23:46:00] PROBLEM - mw7 Current Load on mw7 is WARNING: WARNING - load average: 7.57, 7.81, 5.68 [23:46:45] PROBLEM - mw6 Current Load on mw6 is WARNING: WARNING - load average: 6.78, 7.98, 5.87 [23:47:35] @Naleksuh Is that because of the federated or quasi-federated nature of IRC (i.e., different IRC servers) and the way in which other services can interact (i.e., Matrix.org, etc.), primarily? [23:49:02] irc bans use a somewhat complex syntax involving nicknames, real names, and hosts. some irc clients malform them and you can end up with a ban that does nothing or bans an entire isp, etc... [23:49:59] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 3.40, 5.94, 5.44 [23:50:01] @Naleksuh, ah, okay, that's helpful context. Thanks. :) [23:50:41] RECOVERY - mw6 Current Load on mw6 is OK: OK - load average: 3.53, 5.83, 5.50
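On the ban-syntax exchange above: a channel ban is a nick!user@host mask with shell-style wildcards, which is exactly what the eir entries earlier in the log are (Bugambilia!*@* and *!*@199.8.201.*), and it is why a sloppy host part can catch an entire ISP while an overly specific mask matches nobody. A small illustration of mask matching with fnmatch; the example hostmask and the third mask are invented for this sketch:

    #!/usr/bin/env python3
    # Illustration of IRC ban-mask matching: a ban is a nick!user@host pattern
    # with '*'/'?' wildcards, matched against a user's full hostmask.
    # The example hostmask and the last mask are made up for this sketch.
    from fnmatch import fnmatch

    def ban_matches(mask: str, hostmask: str) -> bool:
        # Lowercase both sides so matching is explicitly case-insensitive.
        return fnmatch(hostmask.lower(), mask.lower())

    if __name__ == "__main__":
        user = "ExampleNick!~ident@199.8.201.42"     # hypothetical user
        for mask in ("Bugambilia!*@*",               # one nick, any user/host
                     "*!*@199.8.201.*",              # a whole /24 - very broad
                     "ExampleNick!*@host.example"):  # too narrow - no match here
            print(f"{mask:30} -> {ban_matches(mask, user)}")

Real servers additionally apply IRC casemapping and support extended ban types, which this sketch ignores.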