[00:02:24] but it's such an obscure and rare issue that making the code more complex isn't worth the payoff imho [00:02:45] especially when the code is likely to change and there's been so much work reducing duplication [00:09:18] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fjqBu [00:09:19] [02miraheze/mw-config] 07paladox 03e8c1325 - Enable ContactPage for wiki guiaslocaiswiki - T4263 [00:09:21] [02mw-config] 07paladox created branch 03paladox-patch-1 - 13https://git.io/vbvb3 [00:09:22] [02mw-config] 07paladox opened pull request 03#2667: Enable ContactPage for wiki guiaslocaiswiki - T4263 - 13https://git.io/fjqBz [00:09:49] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fjqBg [00:09:51] [02miraheze/mw-config] 07paladox 03847dc48 - Update LocalSettings.php [00:09:52] [02mw-config] 07paladox synchronize pull request 03#2667: Enable ContactPage for wiki guiaslocaiswiki - T4263 - 13https://git.io/fjqBz [00:09:59] [02mw-config] 07paladox closed pull request 03#2667: Enable ContactPage for wiki guiaslocaiswiki - T4263 - 13https://git.io/fjqBz [00:10:00] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±2] 13https://git.io/fjqB2 [00:10:02] [02miraheze/mw-config] 07paladox 03fc73fe6 - Enable ContactPage for wiki guiaslocaiswiki - T4263 (#2667) * Enable ContactPage for wiki guiaslocaiswiki - T4263 * Update LocalSettings.php [00:10:03] [02mw-config] 07paladox deleted branch 03paladox-patch-1 - 13https://git.io/vbvb3 [00:10:05] [02miraheze/mw-config] 07paladox deleted branch 03paladox-patch-1 [00:10:22] miraheze/mw-config/paladox-patch-1/e8c1325 - paladox The build was broken. https://travis-ci.org/miraheze/mw-config/builds/518545952 [00:11:49] miraheze/mw-config/paladox-patch-1/847dc48 - paladox The build was broken. 
https://travis-ci.org/miraheze/mw-config/builds/518546135 [00:23:56] thanks paladox [00:24:16] you're welcome :) [00:26:09] !log generating image dump for holycrosswiki - T4262 [00:26:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:26:53] !log generating image dump for mediatecawiki - T4261 [00:26:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:28:30] thanks hehe paladox [00:28:36] :) [00:32:15] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqB1 [00:32:25] [02miraheze/puppet] 07paladox 03e461f7c - Whitelist *.cloudytheology.com fixes T4257 [00:36:32] [02miraheze/phabricator-extensions] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqB9 [00:36:33] [02miraheze/phabricator-extensions] 07paladox 038f79666 - Fix support for latest phabricator update [00:36:35] [02miraheze/phabricator-extensions] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqB9 [00:36:36] [02miraheze/phabricator-extensions] 07paladox 038f79666 - Fix support for latest phabricator update [00:43:52] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqBb [00:43:53] [02miraheze/puppet] 07paladox 03fb263bb - phabricator: Lock "auth.lock-config" [00:49:18] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+1/-0/±0] 13https://git.io/fjqBh [00:49:19] [02miraheze/dns] 07paladox 03c9264fe - Add unrecnations.wiki to dns - T4266 [00:51:52] [02miraheze/ssl] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fjqBj [00:51:54] [02miraheze/ssl] 07paladox 0381c8bc5 - Add unrecnations.wiki ssl certificate - T4266 [00:51:55] [02ssl] 07paladox created branch 03paladox-patch-1 - 13https://git.io/vxP9L [00:51:57] [02ssl] 07paladox opened pull request 03#171: Add unrecnations.wiki ssl certificate - T4266 - 13https://git.io/fjqRe [00:52:21] [02miraheze/ssl] 07paladox pushed 031 commit to 03paladox-patch-1 [+1/-0/±0] 13https://git.io/fjqRv [00:52:23] [02miraheze/ssl] 07paladox 037abdd83 - Create unrecnations.wiki.crt [00:52:24] [02ssl] 07paladox synchronize pull request 03#171: Add unrecnations.wiki ssl certificate - T4266 - 13https://git.io/fjqRe [00:53:00] [02ssl] 07paladox closed pull request 03#171: Add unrecnations.wiki ssl certificate - T4266 - 13https://git.io/fjqRe [00:53:02] [02miraheze/ssl] 07paladox pushed 031 commit to 03master [+1/-0/±1] 13https://git.io/fjqRf [00:53:03] [02miraheze/ssl] 07paladox 039d8691a - Add unrecnations.wiki ssl certificate - T4266 (#171) * Add unrecnations.wiki ssl certificate - T4266 * Create unrecnations.wiki.crt [00:53:05] [02ssl] 07paladox deleted branch 03paladox-patch-1 - 13https://git.io/vxP9L [00:53:06] [02miraheze/ssl] 07paladox deleted branch 03paladox-patch-1 [02:30:13] !log upgraded phabricator on misc4 (an hour and a half ago) [02:30:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [06:00:14] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqEQ [06:00:15] [02miraheze/services] 07MirahezeSSLBot 0345b3982 - BOT: Updating services config for wikis [06:14:50] PROBLEM - lizardfs1 Disk Space on lizardfs1 is WARNING: DISK WARNING - free space: / 16906 MB (11% inode=98%); [06:19:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb [06:21:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all 
datacenters are online [06:32:58] PROBLEM - lizardfs2 Disk Space on lizardfs2 is WARNING: DISK WARNING - free space: / 16900 MB (11% inode=98%); [09:12:29] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 1.71, 1.37, 0.91 [09:14:29] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 1.38, 1.34, 0.94 [10:13:44] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:13:48] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:14:24] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:14:35] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:14:59] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:15:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [10:15:18] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:15:36] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:15:49] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [10:17:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [10:17:52] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 236.91 ms [10:17:55] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.05, 0.07, 0.06 [10:18:28] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 1.447 second response time [10:24:30] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures [10:33:36] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:33:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [10:33:49] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:34:12] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:34:13] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:34:36] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [10:34:44] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [10:34:59] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[10:35:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [10:35:36] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [10:37:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [10:37:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [10:37:39] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 236.91 ms [10:37:43] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 5505 MB (22% inode=95%); [10:38:12] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [10:38:31] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 1.457 second response time [10:38:52] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 11% [11:05:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:05:56] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:05:58] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:06:21] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:06:26] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:06:31] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:06:35] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:06:59] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:07:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [11:07:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [11:07:41] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [11:09:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [11:09:44] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 166.79 ms [11:10:03] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 5495 MB (22% inode=95%); [11:10:05] RECOVERY - cp3 SSH on cp3 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0) [11:10:27] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.11, 0.09, 0.05 [11:10:29] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24074 bytes in 1.343 second response time [11:10:39] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [11:14:51] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:14:53] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:15:04] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:15:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:15:14] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[11:15:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:16:26] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:16:28] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:17:14] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [11:17:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [11:19:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [11:19:17] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 237.01 ms [11:24:44] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 48 seconds ago with 0 failures [11:38:51] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:39:00] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:39:12] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:39:13] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:39:14] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:39:32] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:39:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [11:40:36] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [11:40:41] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [11:41:07] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [11:41:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [11:43:10] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 166.87 ms [11:43:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [11:43:13] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 2% [11:43:20] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.10, 0.06, 0.03 [11:43:39] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [11:59:59] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:00:47] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [12:00:53] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:00:54] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [12:01:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [12:02:57] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 241.45 ms [12:03:01] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:03:01] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[12:03:04] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 2.017 second response time [12:03:07] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 2% [12:03:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [12:03:33] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.08, 0.05, 0.00 [12:04:07] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [12:15:02] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:15:10] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [12:15:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [12:15:16] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:15:18] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [12:15:19] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:15:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [12:15:52] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:16:30] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [12:16:35] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [12:18:38] RECOVERY - Host cp3 is UP: PING OK - Packet loss = 0%, RTA = 171.04 ms [12:19:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [12:19:16] RECOVERY - cp3 SSH on cp3 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0) [12:19:18] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 32% [12:19:25] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 5475 MB (22% inode=95%); [12:19:59] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.00, 0.05, 0.05 [12:21:27] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24084 bytes in 0.912 second response time [12:21:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [12:22:01] Always nice when things aren't broken lol [12:22:29] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3 [12:22:46] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3 [12:24:27] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [12:24:29] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [12:24:46] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [12:24:56] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [12:28:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [12:28:44] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. 
mw1 mw2 mw3 [12:29:36] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3871 bytes in 0.649 second response time [12:31:23] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [12:31:24] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [12:31:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [12:32:18] PROBLEM - db4 Disk Space on db4 is WARNING: DISK WARNING - free space: / 42979 MB (11% inode=96%); [12:32:27] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 49% [12:33:48] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjqrO [12:33:50] [02miraheze/mw-config] 07paladox 03272d09a - Update ManageWiki.php [12:34:27] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 81% [12:34:53] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 59% [12:35:27] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 1.379 second response time [12:35:45] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [12:36:12] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [12:36:27] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 10% [12:36:53] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 7% [12:38:41] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [12:39:18] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [12:39:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [12:43:13] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3 [12:43:17] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw3 [12:45:12] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [12:45:16] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [12:48:15] paladox: are all these alerts just due to garbage collection? 
[12:48:23] Nope [12:49:15] paladox: okay, just wondering; i just knew garbage collection usually happens around this time, and if it wasn't that i wanted to let someone know in case the world just starts to end [12:49:30] Garbage collection affects puppet only [12:49:55] paladox: okay [14:43:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [14:43:29] PROBLEM - cp2 HTTPS on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3858 bytes in 0.401 second response time [14:43:32] PROBLEM - mw3 HTTPS on mw3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:43:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [14:43:40] PROBLEM - mw2 HTTPS on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:44:09] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [14:44:27] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 77% [14:44:29] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [14:44:33] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 63% [14:44:53] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 73% [14:44:53] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [14:45:27] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:45:53] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3858 bytes in 0.009 second response time [14:45:57] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3871 bytes in 0.649 second response time [14:46:33] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 25% [14:46:53] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 46% [14:47:39] RECOVERY - mw3 HTTPS on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 23221 bytes in 0.009 second response time [14:47:44] RECOVERY - mw2 HTTPS on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 23221 bytes in 0.007 second response time [14:47:53] RECOVERY - cp4 HTTPS on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 0.015 second response time [14:47:57] RECOVERY - cp3 HTTPS on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24074 bytes in 1.034 second response time [14:48:27] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 47% [14:48:53] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 4% [14:49:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [14:49:26] RECOVERY - cp2 HTTPS on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24074 bytes in 0.806 second response time [14:49:31] RECOVERY - mw1 HTTPS on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 23221 bytes in 0.009 second response time [14:49:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [14:50:07] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [14:50:28] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 6% [14:50:29] RECOVERY - 
cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [14:50:53] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [16:00:57] [02miraheze/MirahezeMagic] 07translatewiki pushed 031 commit to 03master [+0/-0/±9] 13https://git.io/fjqP7 [16:00:58] [02miraheze/MirahezeMagic] 07translatewiki 03eda930d - Localisation updates from https://translatewiki.net. [16:01:00] [02miraheze/ManageWiki] 07translatewiki pushed 031 commit to 03master [+0/-0/±10] 13https://git.io/fjqP5 [16:01:01] [02miraheze/ManageWiki] 07translatewiki 03c201792 - Localisation updates from https://translatewiki.net. [17:28:06] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fjq1R [17:28:07] [02miraheze/puppet] 07paladox 03cc686b5 - Increase job runner to 2 for basic [19:35:32] PROBLEM - mw3 HTTPS on mw3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:35:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:35:48] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [19:35:53] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3856 bytes in 0.010 second response time [19:36:28] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 72% [19:36:29] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [19:36:53] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 69% [19:37:31] RECOVERY - mw3 HTTPS on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 23221 bytes in 0.010 second response time [19:37:38] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:37:48] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [19:37:53] RECOVERY - cp4 HTTPS on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24051 bytes in 0.015 second response time [19:38:27] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 5% [19:38:29] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [19:38:53] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 4% [19:51:20] [02DataDump] 07eduaddad opened pull request 03#1: Create pt-br.json - 13https://git.io/fjqyG [19:57:21] [02miraheze/DataDump] 07paladox pushed 031 commit to 03master [+1/-0/±0] 13https://git.io/fjqy8 [19:57:23] [02miraheze/DataDump] 07eduaddad 033295dbe - Create pt-br.json add Brazilian Portuguese language [19:57:24] [02DataDump] 07paladox closed pull request 03#1: Create pt-br.json - 13https://git.io/fjqyG [19:59:03] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_32 [+0/-0/±1] 13https://git.io/fjqy0 [19:59:04] [02miraheze/mediawiki] 07paladox 032737c8f - Update PDFEmbed [20:00:50] [02miraheze/PDFEmbed] 07paladox pushed 039 commits to 03master [+5/-0/±10] 13https://git.io/fjqy2 [20:00:52] [02miraheze/PDFEmbed] 07paladox 0333b023e - Fix support for mediawiki 1.31 [20:00:53] [02miraheze/PDFEmbed] 07Alexia 0305e1c49 - Update license, link, and extension.json. Add .gitignore and code standards. [20:00:55] [02miraheze/PDFEmbed] 07pcjtulsa 03cb6dff5 - Add user rights messages [20:00:56] [02miraheze/PDFEmbed] ... and 6 more commits. 
[20:01:54] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_32 [+0/-0/±1] 13https://git.io/fjqyV [20:01:56] [02miraheze/mediawiki] 07paladox 03336869d - Update PDFEmbed [20:02:18] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_32 [+0/-0/±1] 13https://git.io/fjqyo [20:02:20] [02miraheze/mediawiki] 07paladox 03d0c49a0 - Update DD [20:04:27] !log running lc on mw* [20:04:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:01:26] [02miraheze/ManageWiki] 07JohnFLewis pushed 032 commits to 03master [+0/-0/±11] 13https://git.io/fjq98 [22:01:27] [02miraheze/ManageWiki] 07JohnFLewis 032ef590d - remove arguement references [22:01:29] [02miraheze/ManageWiki] 07JohnFLewis 03d1688d3 - Merge branch 'master' of www.github.com:miraheze/ManageWiki [22:40:45] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 833 [22:44:18] PROBLEM - db4 Disk Space on db4 is CRITICAL: DISK CRITICAL - free space: / 40291 MB (10% inode=96%); [22:59:55] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs [23:39:55] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 777 [23:41:58] paladox: do special pages automatically get purged? [23:42:22] They shouldn't even be cached. *I think* [23:42:30] don't quote me on that though :) [23:42:33] paladox: well see https://phabricator.miraheze.org/T4270 [23:42:34] Title: [ ⚓ T4270 Active users won't show on Special:Statistics ] - phabricator.miraheze.org [23:42:42] I think it's a cache-related issue [23:43:42] I'd think it's jobs [23:43:54] we've almost always had an issue with them [23:44:47] Voidwalker: damn jobs, not even the software wants to do a job :P [23:44:52] Ah [23:44:58] updateSpecialPages.php fixed it [23:45:22] heh, as usual [23:45:27] !log ran updateSpecialPages.php against nonciclopediawiki on mw1 [23:45:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:51:21] paladox: is it possible to set wgExtraSignatureNamespaces per wiki? [23:51:45] yes through LS [23:52:10] paladox: what would be an example of proper syntax? [23:52:19] https://www.mediawiki.org/wiki/Manual:$wgExtraSignatureNamespaces [23:52:19] Title: [ Manual:$wgExtraSignatureNamespaces - MediaWiki ] - www.mediawiki.org [23:52:35] paladox: that doesn't tell me how to set it for a certain wiki [23:52:57] it would be like https://github.com/miraheze/mw-config/blob/master/LocalSettings.php#L1077 [23:52:58] Title: [ mw-config/LocalSettings.php at master · miraheze/mw-config · GitHub ] - github.com [23:53:55] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs [23:54:03] paladox: so it would be like ‘default’ => NS, NS? [23:54:09] nope [23:54:17] 'default' => [ NS, ns ] [23:54:28] That's what I meant :D [23:54:35] heh [23:54:41] No quotes in the brackets right? Paladox [23:54:53] nope, not unless it's not a constant [23:55:05] uh wrong terminology for php [23:55:08] i mean define :P
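
The per-wiki syntax discussed at the end follows MediaWiki's standard SiteConfiguration ($wgConf) pattern: a settings array keyed by setting name, with a 'default' entry plus per-database overrides. Below is a minimal sketch of that pattern, assuming a $wgConf-style LocalSettings.php; the database name 'examplewiki' and the namespace constants chosen here are placeholders, and the actual array in miraheze/mw-config (the LocalSettings.php line linked above) may be structured differently.

    <?php
    // Namespace constants such as NS_PROJECT and NS_HELP are defined by
    // MediaWiki before LocalSettings.php is read, so they need no quotes,
    // which matches the closing exchange above.
    $wgConf->settings['wgExtraSignatureNamespaces'] = [
        'default'     => [],                       // no extra signature namespaces anywhere
        'examplewiki' => [ NS_PROJECT, NS_HELP ],  // hypothetical per-wiki override
    ];

    // The farm's config code then resolves the entry for the current wiki,
    // falling back to 'default' when no per-wiki key exists, e.g.:
    $wgExtraSignatureNamespaces = $wgConf->get( 'wgExtraSignatureNamespaces', $wgDBname );

On a standalone wiki (no farm), the same effect is simply $wgExtraSignatureNamespaces = [ NS_PROJECT ]; in LocalSettings.php.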