[00:00:47] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.37, 3.58, 2.11
[00:02:24] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb
[00:02:48] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.94, 2.89, 2.03
[00:02:59] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[00:03:03] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[00:04:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[00:04:24] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[00:04:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:06:07] PROBLEM - lizardfs6 Puppet on lizardfs6 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 3 minutes ago with 0 failures
[00:06:16] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:06:52] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[00:08:02] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:08:18] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:10:25] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:11:08] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[00:12:25] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[00:12:40] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb
[00:13:03] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[00:13:10] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:14:20] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:14:23] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:14:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:21:08] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.85, 2.65, 1.97
[00:23:08] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.56, 2.80, 2.12
[01:18:58] PROBLEM - lizardfs6 GlusterFS port 24007 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 24007: Connection refused
[01:20:58] RECOVERY - lizardfs6 GlusterFS port 24007 on lizardfs6 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 24007
[01:28:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.72, 3.31, 2.12
[01:30:07] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.018 second response time on 54.36.165.161 port 49152
[01:30:12] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.15, 2.43, 1.93
[01:34:07] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[01:46:35] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.57, 4.27, 2.75
[01:48:30] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.73, 3.25, 2.54
[01:56:10] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 49152
[02:00:35] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeg9o
[02:00:37] [miraheze/puppet] paladox 610b483 - Update mediawiki.pp
[02:00:53] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeg9K
[02:00:54] [miraheze/puppet] paladox 1e0c29c - Update init.pp
[02:06:07] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[02:12:07] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.012 second response time on 54.36.165.161 port 49152
[02:16:41] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static-new]
[02:18:27] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[02:20:21] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 49152
[02:22:40] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures
[02:26:46] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 47 seconds ago with 1 failures. Failed resources (up to 3 shown): File[/var/lib/glusterd/secure-access]
[02:31:53] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[02:33:48] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:44:47] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[02:53:50] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.58, 3.34, 1.98
[02:55:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.52, 2.74, 1.94
[03:11:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.26, 3.46, 2.13
[03:13:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.63, 2.77, 2.04
[04:05:12] hmm, I can't seem to access miraheze right now
[04:13:07] cp2 seems to be down
[04:14:18] ah, it's a local problem it would seem
[04:29:11] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.62, 3.66, 2.11
[04:33:11] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.39, 3.14, 2.32
[05:28:12] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.09, 3.91, 2.44
[05:32:10] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.25, 3.57, 2.67
[05:34:10] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.55, 2.47, 2.37
[05:58:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.33, 5.74, 3.16
[05:58:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.03, 3.66, 2.55
[06:00:45] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.49, 3.10, 2.47
[06:18:14] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.90, 3.89, 3.92
[06:24:12] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.58, 2.35, 3.20
[06:26:25] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2736 MB (11% inode=94%);
[06:28:10] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.19, 4.52, 3.88
[06:36:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.96, 2.91, 3.63
[06:38:17] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.86, 2.17, 3.26
[06:46:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.93, 3.13, 2.48
[06:48:05] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.35, 4.36, 3.00
[06:51:57] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.91, 3.93, 3.20
[06:53:51] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.20, 3.01, 2.95
[07:25:18] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.00, 4.16, 2.75
[07:39:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.63, 3.49, 2.53
[07:41:21] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.27, 3.59, 3.96
[07:45:20] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.63, 2.08, 3.26
[07:45:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.21, 3.62, 2.95
[07:46:13] Reception123: ping
[07:51:50] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.87, 2.54, 2.76
[08:14:26] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[09:12:30] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.68, 3.37, 2.40
[09:14:29] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.94, 4.41, 2.90
[09:16:28] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.36, 3.87, 2.91
[09:18:27] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.39, 3.13, 2.76
[09:32:07] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.07, 3.62, 2.45
[09:34:03] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.80, 2.55, 2.20
[09:48:50] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 12.16, 6.83, 3.58
[09:49:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.95, 4.22, 2.73
[09:59:08] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.82, 3.52, 3.69
[09:59:56] Reception123: can you review the unblock request on test wiki
[10:02:10] Ok
[10:03:10] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.88, 2.88, 3.39
[10:07:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.66, 3.08, 3.41
[10:09:16] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.51, 2.85, 3.28
[10:11:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.01, 2.55, 4.00
[10:15:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.08, 1.65, 3.29
[10:41:27] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.06, 4.60, 3.07
[10:43:28] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.93, 3.67, 2.93
[10:45:30] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.14, 2.68, 2.65
[10:47:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.04, 4.04, 2.63
[10:49:51] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.86, 3.78, 2.72
[10:51:50] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.76, 3.04, 2.57
[11:10:57] RhinosF1: meh got a 503 before will look at that soon
[11:19:54] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.25, 2.76, 1.97
[11:26:44] PROBLEM - espiral.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'espiral.org' expires in 15 day(s) (Tue 19 Nov 2019 11:23:39 AM GMT +0000).
[11:26:57] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JegbP
[11:26:59] [miraheze/ssl] MirahezeSSLBot c00c1db - Bot: Update SSL cert for espiral.org
[11:27:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.91, 3.58, 2.93
[11:29:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.95, 2.74, 2.71
[11:32:45] RECOVERY - espiral.org - LetsEncrypt on sslhost is OK: OK - Certificate 'espiral.org' will expire on Sat 01 Feb 2020 10:26:52 AM GMT +0000.
[11:35:31] PROBLEM - wiki.x1c7.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.x1c7.com' expires in 15 day(s) (Tue 19 Nov 2019 11:32:16 AM GMT +0000).
[11:35:44] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JegbQ
[11:35:46] [miraheze/ssl] MirahezeSSLBot 6f0d6d8 - Bot: Update SSL cert for wiki.x1c7.com
[11:43:30] RECOVERY - wiki.x1c7.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.x1c7.com' will expire on Sat 01 Feb 2020 10:35:38 AM GMT +0000.
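The "GlusterFS port 24007/49152" alerts above come from a plain TCP connect probe in the check_tcp style: attempt a connection, time it, and report OK or CRITICAL. A minimal sketch of such a probe — a hypothetical helper, not the actual Icinga plugin these hosts run — could look like:

```python
import socket
import time

def check_tcp(host: str, port: int, timeout: float = 10.0) -> str:
    """Attempt a TCP connect and report in the same style as the
    'TCP OK - 0.013 second response time' / 'Connection refused'
    lines in the log. Hypothetical sketch, not the real plugin."""
    start = time.monotonic()
    try:
        # create_connection raises OSError (e.g. ECONNREFUSED) on failure
        with socket.create_connection((host, port), timeout=timeout):
            elapsed = time.monotonic() - start
        return f"TCP OK - {elapsed:.3f} second response time on {host} port {port}"
    except OSError as exc:
        return f"CRITICAL: connect to address {host} and port {port}: {exc.strerror or exc}"
```

A flapping service, as lizardfs6 shows between 01:18 and 02:20, is simply this probe landing in alternate branches on successive scheduled runs.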
[12:08:05] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.72, 3.72, 2.56
[12:10:09] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.40, 2.76, 2.35
[13:03:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.58, 3.81, 2.30
[13:05:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.83, 2.63, 2.05
[14:05:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JegxF
[14:05:10] [miraheze/services] MirahezeSSLBot b5c881c - BOT: Updating services config for wikis
[14:23:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.43, 2.87, 2.05
[14:25:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.82, 2.52, 2.02
[14:36:58] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[14:36:59] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[14:37:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:37:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb
[14:38:04] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[14:38:54] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:38:54] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:39:12] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:41:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:41:58] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[14:49:35] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.16, 3.92, 2.56
[14:53:35] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.13, 2.93, 2.52
[15:01:55] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.72, 4.41, 2.56
[15:09:51] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.08, 3.13, 2.90
[15:19:13] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb
[15:22:13] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[15:22:18] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[15:22:19] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2
[15:22:23] paladox: ^
[15:22:34] yes, i'm aware :)
[15:23:04] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:27:12] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:27:19] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:28:12] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:28:15] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[15:28:25] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:53:56] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.09, 4.05, 2.92
[15:55:55] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.90, 2.89, 2.63
[16:09:49] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2vM
[16:09:51] [miraheze/puppet] paladox 2a1859c - parsoid: Up workers
[16:33:01] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3
[16:33:06] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw2 mw3
[16:33:48] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:34:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:34:05] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[16:36:52] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[16:37:01] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[16:37:44] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:38:04] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[16:38:05] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:54:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.76, 3.96, 2.55
[17:00:34] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.89, 3.59, 2.96
[17:02:33] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.91, 2.97, 2.81
[17:02:40] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2Js
[17:02:41] [miraheze/mw-config] paladox 5b5225f - Update LocalExtensions.php
[17:04:02] [ssl] Pix1234 opened pull request #236: + wiki.wikidadds.org - https://git.io/Je2Jn
[17:05:39] [ssl] paladox closed pull request #236: + wiki.wikidadds.org - https://git.io/Je2Jn
[17:05:41] [miraheze/ssl] paladox pushed 1 commit to master [+1/-0/±1] https://git.io/Je2Jl
[17:05:42] [miraheze/ssl] Pix1234 594eeb8 - + wiki.wikidadds.org (#236) * Create wiki.wikidadds.org.crt * + wiki.wikidadds.org
[17:06:47] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki config]
[17:07:40] .in 7m managewiki settings for new ssl cert
[17:07:41] Zppix: Okay, I will set the reminder for: 2019-11-03 - 11:14:40CST
[17:13:02] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures
[17:14:41] Zppix: managewiki settings for new ssl cert
[17:48:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:48:53] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[17:50:54] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:52:07] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:53:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.48, 3.94, 2.18
[17:54:59] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.49, 3.06, 2.08
[18:40:27] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.58, 7.01, 6.31
[18:44:25] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.82, 7.92, 6.82
[18:46:23] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.88, 7.93, 6.96
[18:46:28] [3ab4bd20922caaa4a30f2af8] 2019-11-03 18:45:14: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"
[18:46:48] hispano76: what were you doing when it happened
[18:48:03] special undelete / restore Zppix
[18:48:22] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.37, 8.16, 7.16
[18:48:45] hispano76: i see the issue will fix it
[18:50:24] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.81, 7.69, 7.10
[18:54:23] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.25, 5.93, 6.52
[19:01:13] !log UPDATE actor SET actor_name = 'Flow talk page manager2' WHERE actor_id = '485' for ucroniaswiki
[19:01:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[19:01:25] hispano76: try now
[19:02:38] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.90, 4.30, 2.52
[19:04:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.99, 3.77, 2.55
[19:06:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.95, 2.77, 2.33
[19:44:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.66, 3.41, 2.11
[19:45:48] Voidwalker hi
[19:46:02] hello
[19:46:41] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.95, 3.85, 2.45
[19:47:15] error in translate: "denominaciones" -> "nominaciones". Page protected https://meta.miraheze.org/w/index.php?title=Special:Translate&group=Centralnotice-tgroup-Cocc_nominations&language=es&filter=%21translated&action=translate
[19:47:15] [ Translate - Miraheze Meta ] - meta.miraheze.org
[19:47:22] Voidwalker
[19:48:38] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.80, 3.05, 2.31
[19:48:51] thanks
[19:49:41] Voidwalker: lol i was just changing the translation lol
[19:49:57] * Voidwalker too fast for ya
[20:03:04] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.57, 4.55, 2.69
[20:05:05] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.91, 3.96, 2.71
[20:09:07] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.83, 2.85, 2.56
[20:42:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.83, 3.80, 2.47
[20:44:38] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.37, 2.95, 2.33
[21:46:38] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.89, 3.95, 2.36
[21:48:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.19, 2.88, 2.17
[21:49:18] Hello hispano7650! If you have any questions, feel free to ask and someone should answer soon.
[21:53:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.99, 3.51, 2.11
[22:01:29] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.94, 3.70, 2.97
[22:03:29] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.02, 2.79, 2.73
[22:13:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.29, 3.77, 2.97
[22:17:31] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.89, 3.61, 3.16
[22:19:30] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.17, 2.79, 2.91
[22:23:33] we have 503
[22:25:28] should be back now
[22:28:42] !log zppix@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki uncyclomirrorwiki /home/zppix/uncyclopedia-full-2019-05-15.xml
[22:28:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:32:23] k6ka i think i have something to go on now...
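The constant "Current Load" PROBLEM/WARNING/RECOVERY churn throughout this log is a threshold check over the three load averages (1/5/15 minutes), check_load style, with one threshold per average. A rough sketch follows; the warn/crit triples here are hypothetical placeholders, since the log does not show the real values the Icinga checks were configured with:

```python
import os

# Hypothetical thresholds in check_load's -w/-c style (one per average);
# the actual values used by the checks in this log are not shown.
WARN = (4.0, 3.5, 3.0)
CRIT = (6.0, 5.0, 4.0)

def classify_load(loads=None, warn=WARN, crit=CRIT) -> str:
    """Compare each of the 1/5/15-minute load averages against its own
    threshold and emit a log-style status line."""
    if loads is None:
        loads = os.getloadavg()  # current host's averages (Unix only)
    if any(l >= c for l, c in zip(loads, crit)):
        state = "CRITICAL"
    elif any(l >= w for l, w in zip(loads, warn)):
        state = "WARNING"
    else:
        state = "OK"
    return f"{state} - load average: " + ", ".join(f"{l:.2f}" for l in loads)
```

Under these placeholder thresholds the 09:48:50 spike (12.16, 6.83, 3.58) classifies as CRITICAL, matching the alert in the log; the exact boundaries between OK and WARNING depend on the real configuration.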
[22:32:39] based on the access log i see where it starts to increase in page load
[23:01:58] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2qA
[23:02:00] [miraheze/puppet] paladox ac85dbe - Update config.yaml
[23:03:30] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2qp
[23:03:32] [miraheze/puppet] paladox b2365ea - Update config.yaml
[23:03:53] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2qj
[23:03:55] [miraheze/puppet] paladox 89196a4 - Update config.yaml
[23:07:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[23:07:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[23:08:36] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[23:08:50] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[23:09:37] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:09:42] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:10:54] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[23:12:31] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[23:16:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.62, 4.47, 2.65
[23:20:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.72, 3.69, 2.78
[23:21:40] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2Ye
[23:21:42] [miraheze/puppet] paladox f51da7c - Update mediawiki.pp
[23:22:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.81, 2.63, 2.49
[23:23:25] PROBLEM - db4 Disk Space on db4 is WARNING: DISK WARNING - free space: / 40250 MB (10% inode=96%);
[23:27:03] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Mount[/mnt/mediawiki-static]
[23:29:36] root@db4:/home/paladox# ./purge-binary.sh
[23:29:38] !log root@db4:/home/paladox# ./purge-binary.sh
[23:29:44] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:29:52] !log root@db5:/home/paladox# ./purge-binary.sh
[23:29:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:31:26] RECOVERY - db4 Disk Space on db4 is OK: DISK OK - free space: / 58566 MB (15% inode=96%);
[23:32:46] !log restart php-fpm on mw1, was slow
[23:32:56] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:33:14] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:36:59] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.78, 4.15, 2.70
[23:38:58] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.55, 3.16, 2.50
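The "Disk Space" checks bracketing the purge-binary.sh run (db4 drops to 10% free at 23:23, then recovers to 15% once the old binary logs are purged) are simple percentage-free thresholds. A sketch of such a check, where the 10% warning boundary is an assumption inferred from the log rather than the real configuration:

```python
import shutil

def check_disk(path: str = "/", warn_pct: int = 10) -> str:
    """Report free space on a mount in the log's
    'DISK WARNING - free space: / 40250 MB (10% ...)' style.
    warn_pct=10 is inferred from the log, not the real config."""
    usage = shutil.disk_usage(path)
    free_mb = usage.free // (1024 * 1024)
    free_pct = usage.free * 100 // usage.total
    state = "WARNING" if free_pct <= warn_pct else "OK"
    return f"DISK {state} - free space: {path} {free_mb} MB ({free_pct}%);"
```

Freeing space, as purge-binary.sh did on db4, flips the same check from WARNING back to OK on its next run.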