[00:15:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:15:50] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[00:16:00] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[00:17:31] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:17:49] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:17:54] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:32:13] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.75, 3.71, 2.34
[00:34:15] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.61, 3.07, 2.27
[00:52:21] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.52, 3.45, 2.13
[00:54:21] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.73, 2.81, 2.07
[01:34:12] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw2
[01:36:11] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:28:39] !log purged mysql binary on db4
[02:28:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:45:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.14, 4.06, 2.32
[02:47:21] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.48, 2.97, 2.13
[03:01:05] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.73, 3.18, 2.13
[03:03:02] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.91, 2.35, 1.96
[03:51:21] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.01, 3.02, 2.07
[03:57:21] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.70, 3.68, 2.85
[03:59:25] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.13, 2.80, 2.63
[04:13:08] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.46, 4.26, 2.62
[04:15:05] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.33, 3.06, 2.38
[05:03:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.97, 3.60, 2.29
[05:05:29] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.40, 2.73, 2.13
[06:03:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.55, 3.32, 2.35
[06:05:24] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 3.05, 3.33, 2.49
[06:26:33] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2813 MB (11% inode=94%);
[06:27:52] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.16, 4.73, 3.37
[06:27:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb
[06:28:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[06:29:48] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.96, 3.91, 3.23
[06:29:51] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[06:30:11] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[06:33:42] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.76, 2.54, 2.86
[06:36:23] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.56, 3.85, 2.84
[06:44:21] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.27, 3.02, 3.10
[07:33:23] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.80, 3.95, 2.53
[07:35:21] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.21, 3.67, 2.60
[07:37:21] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.29, 2.77, 2.40
[07:58:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.96, 3.18, 2.50
[08:00:18] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.77, 2.44, 2.30
[09:07:08] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.34, 4.44, 2.86
[09:11:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.91, 3.87, 3.07
[09:15:05] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.51, 2.80, 2.83
[09:25:26] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:27:24] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 81004 files, 2.632GB, 2019-10-11 03:03:00 (2.9 weeks ago)
[09:30:04] [ssl] RhinosF1 opened pull request #233: Rmv wiki.omega3.tk - https://git.io/JezwE
[09:32:05] [ssl] RhinosF1 synchronize pull request #233: Rmv wiki.omega3.tk - https://git.io/JezwE
[09:32:05] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.22, 3.73, 3.09
[09:32:49] [ssl] RhinosF1 edited pull request #233: Rmv wiki.om3ga.tk - https://git.io/JezwE
[09:36:06] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.77, 3.10, 3.02
[09:37:27] Reception123: ^
[09:38:16] [ssl] Reception123 closed pull request #233: Rmv wiki.om3ga.tk - https://git.io/JezwE
[09:38:18] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-1/±1] https://git.io/Jezww
[09:38:19] [miraheze/ssl] RhinosF1 1e40347 - Rmv wiki.om3ga.tk (#233) * Rmv wiki.omega3.tk No longer registered * Remove wikiom3ga.tk
[09:38:26] RhinosF1: I assume I need to remove MW stuff in db?
[09:39:36] Reception123: believe so
[09:40:03] Reception123: and you need to as I'm mobile
[09:40:21] ok
[09:40:27] https://phabricator.miraheze.org/T4657 is where it was added
[09:40:28] [ ⚓ T4657 SSL Certificate Request ] - phabricator.miraheze.org
[09:40:31] thanks
[09:42:46] !log removed wgServer setting on om3gawiki via DB (domain no longer pointing)
[09:42:51] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[09:46:47] Reception123: it's still redirecting to the custom
[09:46:59] I saw...
[09:47:15] Reception123: I can't see anything in redirects.ya
[09:47:17] Ml
[09:47:39] RhinosF1: fixed
[09:50:07] Reception123: how
[09:50:27] RhinosF1: forgot to do wgServer:""
[09:50:40] because you can't remove it completely it needs to be = to nothing
[09:51:05] Reception123: it still redirects
[09:54:02] RhinosF1: probably your cache, wfm
[09:55:48] Reception123: nope
[09:58:42] https://usercontent.irccloud-cdn.com/file/4jcb7GeK/image.png
[09:58:49] ^ RhinosF1
[09:59:45] Strange I've cleared cache and tried another browser
[10:00:20] hmm don't know
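The fix Reception123 describes above — keeping the wgServer key in the per-wiki settings but pointing it at an empty value, rather than deleting it outright — implies an override pattern roughly like the following. A minimal PHP sketch of one way such a farm-level override could behave; the variable names and the lookup are hypothetical illustrations, not Miraheze's actual mw-config code:

    <?php
    // Hypothetical illustration only; not Miraheze's real configuration code.
    $perWikiSettings = [ 'wgServer' => '' ];        // value as stored after the fix above
    $default = 'https://om3gawiki.miraheze.org';    // hypothetical farm default URL
    $override = $perWikiSettings['wgServer'] ?? null;
    // A non-empty per-wiki value overrides the farm default; per the exchange
    // above, the stored value apparently cannot simply be removed, so setting
    // it to "" is what neutralises the override:
    $wgServer = ( $override !== null && $override !== '' ) ? $override : $default;

Any redirect still seen after that point would then be served from caching (Varnish or the browser), which matches the "probably your cache" diagnosis in the exchange.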
[10:03:41] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.29, 3.76, 2.47
[10:05:40] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.04, 3.23, 2.45
[10:12:18] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2650 MB (10% inode=94%);
[11:03:28] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.04, 3.96, 2.60
[11:05:27] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.57, 3.01, 2.42
[11:10:03] [ssl] RhinosF1 opened pull request #234: order - https://git.io/Jezo6
[11:12:54] [miraheze/mw-config] RhinosF1 pushed 1 commit to master [+0/-0/±1] https://git.io/Jezo1
[11:12:55] [miraheze/mw-config] RhinosF1 3057557 - spelling
[11:13:10] Reception123: can you merge ssl
[11:15:52] ah good old typos and spelling mistakes
[11:15:57] my favorite way to do something wrong
[11:19:49] [ssl] Reception123 closed pull request #234: order - https://git.io/Jezo6
[11:19:51] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/Jezo7
[11:19:52] [miraheze/ssl] RhinosF1 7774647 - order (#234)
[11:21:12] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.54, 4.17, 2.86
[11:23:10] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.11, 3.62, 2.83
[11:25:20] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.67, 2.49, 2.51
[11:25:48] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki config]
[11:26:58] Reception123: ^
[11:29:37] * RhinosF1 is looking
[11:36:36] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[11:36:52] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[11:37:26] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[11:37:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[11:38:36] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[11:38:47] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[11:39:24] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[11:39:27] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:03:11] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.18, 3.00, 1.90
[12:11:11] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.64, 3.51, 2.81
[12:13:11] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.94, 2.64, 2.57
[13:03:21] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures
[13:23:06] Any updates on https://phabricator.miraheze.org/T4813 ?
[13:23:07] [ ⚓ T4813 "Welcome to 'wiki name'" appears every time the source or visual editor is opened ] - phabricator.miraheze.org
[13:25:17] k6ka: I'll set up a test for the WMF people tonight
[13:25:21] Thanks for the reminder
[13:25:36] .at 21:00 .task T4813
[13:25:37] RhinosF1: Okay, I will set the reminder for: 2019-10-31 - 20:59:59UTC
[13:28:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.32, 3.54, 2.30
[13:30:18] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.56, 2.69, 2.13
[13:30:57] k6ka hi, i think based on what i found, it's related to the downtime.
[13:31:11] Since it should save using the api so it remembers not to show the dialog
[13:31:23] but if the api is slow or fails, it won't save.
[13:32:05] PROBLEM - drag.lgbt - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'drag.lgbt' expires in 15 day(s) (Sat 16 Nov 2019 01:29:22 PM GMT +0000).
[13:32:19] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Jezix
[13:32:19] That would make sense, since this issue didn't appear previously
[13:32:21] [miraheze/ssl] MirahezeSSLBot 68cd35c - Bot: Update SSL cert for drag.lgbt
[13:32:34] PROBLEM - monarchists.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'monarchists.wiki' expires in 15 day(s) (Sat 16 Nov 2019 01:29:40 PM GMT +0000).
[13:32:48] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Jezij
[13:32:50] [miraheze/ssl] MirahezeSSLBot ad2c706 - Bot: Update SSL cert for monarchists.wiki
[13:34:13] PROBLEM - nonciclopedia.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'nonciclopedia.org' expires in 15 day(s) (Sat 16 Nov 2019 01:30:19 PM GMT +0000).
[13:34:27] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPv
[13:34:28] [miraheze/ssl] MirahezeSSLBot a25aba7 - Bot: Update SSL cert for nonciclopedia.org
[13:34:56] paladox: we'll find out soon
[13:35:22] No, i'm 100% sure that it is that.
[13:36:13] paladox: ok, there's an upstream task if you want to update it
[13:36:17] PROBLEM - stablestate.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'stablestate.org' expires in 15 day(s) (Sat 16 Nov 2019 01:33:30 PM GMT +0000).
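paladox's diagnosis above (13:30-13:31) is that VisualEditor remembers the welcome dialog has already been shown by saving a flag through the MediaWiki API, so a slow or failing API call means the flag never persists and the dialog returns on every editor load. A rough server-side sketch of the preference write that the client's background action=options request ultimately performs, assuming a 1.33-era MediaWiki context; the user name is hypothetical, and the exact preference key is an assumption here rather than something confirmed in the log:

    <?php
    // Run inside a MediaWiki context, e.g. a maintenance script (sketch only).
    // 'visualeditor-hidebetawelcome' is believed to be the hidden preference
    // VE sets once the welcome dialog has been dismissed (assumption).
    $user = User::newFromName( 'ExampleUser' ); // hypothetical account
    $user->setOption( 'visualeditor-hidebetawelcome', 1 );
    $user->saveSettings();
    // If the client's background API request carrying this write times out or
    // fails, the flag is never stored and the dialog reappears next time.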
[13:36:30] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPT
[13:36:32] [miraheze/ssl] MirahezeSSLBot 9dcc99d - Bot: Update SSL cert for stablestate.org
[13:36:48] PROBLEM - nonsensopedia.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'nonsensopedia.org' expires in 15 day(s) (Sat 16 Nov 2019 01:32:52 PM GMT +0000).
[13:37:00] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPk
[13:37:02] [miraheze/ssl] MirahezeSSLBot bfad7ff - Bot: Update SSL cert for nonsensopedia.org
[13:37:41] PROBLEM - wiki.joust.ro - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.joust.ro' expires in 15 day(s) (Sat 16 Nov 2019 01:34:14 PM GMT +0000).
[13:37:55] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPt
[13:37:56] [miraheze/ssl] MirahezeSSLBot 23103f0 - Bot: Update SSL cert for wiki.joust.ro
[13:40:10] PROBLEM - guiasdobrasil.com.br - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'guiasdobrasil.com.br' expires in 15 day(s) (Sat 16 Nov 2019 01:37:57 PM GMT +0000).
[13:40:24] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPm
[13:40:25] [miraheze/ssl] MirahezeSSLBot 6f8228d - Bot: Update SSL cert for guiasdobrasil.com.br
[13:42:22] PROBLEM - islamkosh.tk - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'islamkosh.tk' expires in 15 day(s) (Sat 16 Nov 2019 01:40:12 PM GMT +0000).
[13:42:36] RECOVERY - monarchists.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'monarchists.wiki' will expire on Wed 29 Jan 2020 12:32:42 PM GMT +0000.
[13:42:44] RECOVERY - nonsensopedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'nonsensopedia.org' will expire on Wed 29 Jan 2020 12:36:55 PM GMT +0000.
[13:42:45] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPO
[13:42:46] [miraheze/ssl] MirahezeSSLBot 9e96f85 - Bot: Update SSL cert for islamkosh.tk
[13:43:42] RECOVERY - wiki.joust.ro - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.joust.ro' will expire on Wed 29 Jan 2020 12:37:49 PM GMT +0000.
[13:44:12] RECOVERY - drag.lgbt - LetsEncrypt on sslhost is OK: OK - Certificate 'drag.lgbt' will expire on Wed 29 Jan 2020 12:32:13 PM GMT +0000.
[13:44:14] RECOVERY - nonciclopedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'nonciclopedia.org' will expire on Wed 29 Jan 2020 12:34:21 PM GMT +0000.
[13:44:15] RECOVERY - stablestate.org - LetsEncrypt on sslhost is OK: OK - Certificate 'stablestate.org' will expire on Wed 29 Jan 2020 12:36:24 PM GMT +0000.
[13:44:43] PROBLEM - runzeppelin.ru - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'runzeppelin.ru' expires in 15 day(s) (Sat 16 Nov 2019 01:41:34 PM GMT +0000).
[13:44:56] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPn
[13:44:58] [miraheze/ssl] MirahezeSSLBot 0254c8b - Bot: Update SSL cert for runzeppelin.ru
[13:45:38] PROBLEM - wiki.hyperborian.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.hyperborian.org' expires in 15 day(s) (Sat 16 Nov 2019 01:42:54 PM GMT +0000).
[13:45:51] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPc
[13:45:52] [miraheze/ssl] MirahezeSSLBot 473272b - Bot: Update SSL cert for wiki.hyperborian.org
[13:46:26] PROBLEM - wiki.casual4casuals.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.casual4casuals.com' expires in 15 day(s) (Sat 16 Nov 2019 01:42:38 PM GMT +0000).
[13:46:39] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPC
[13:46:41] [miraheze/ssl] MirahezeSSLBot 26d114d - Bot: Update SSL cert for wiki.casual4casuals.com
[13:48:25] PROBLEM - crimesciencewiki.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'crimesciencewiki.org' expires in 15 day(s) (Sat 16 Nov 2019 01:45:18 PM GMT +0000).
[13:48:38] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPl
[13:48:39] [miraheze/ssl] MirahezeSSLBot 3c2c95d - Bot: Update SSL cert for crimesciencewiki.org
[13:49:02] PROBLEM - wiki.casadocarvalho.net - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.casadocarvalho.net' expires in 15 day(s) (Sat 16 Nov 2019 01:45:36 PM GMT +0000).
[13:49:15] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezP8
[13:49:16] [miraheze/ssl] MirahezeSSLBot 53cff2a - Bot: Update SSL cert for wiki.casadocarvalho.net
[13:49:37] PROBLEM - wiki.coderdojosaintpaul.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.coderdojosaintpaul.org' expires in 15 day(s) (Sat 16 Nov 2019 01:46:28 PM GMT +0000).
[13:49:50] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezP4
[13:49:51] [miraheze/ssl] MirahezeSSLBot f1f9479 - Bot: Update SSL cert for wiki.coderdojosaintpaul.org
[13:50:52] PROBLEM - endlesssea.lucyawrey.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'endlesssea.lucyawrey.com' expires in 15 day(s) (Sat 16 Nov 2019 01:48:12 PM GMT +0000).
[13:51:06] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezP0
[13:51:07] [miraheze/ssl] MirahezeSSLBot e53e428 - Bot: Update SSL cert for endlesssea.lucyawrey.com
[13:51:46] PROBLEM - wiki.landan.ca - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.landan.ca' expires in 15 day(s) (Sat 16 Nov 2019 01:48:27 PM GMT +0000).
[13:51:59] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPE
[13:52:01] [miraheze/ssl] MirahezeSSLBot bc4f89b - Bot: Update SSL cert for wiki.landan.ca
[13:52:45] PROBLEM - bebaskanpengetahuan.id - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'bebaskanpengetahuan.id' expires in 15 day(s) (Sat 16 Nov 2019 01:49:33 PM GMT +0000).
[13:52:59] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPg
[13:53:01] [miraheze/ssl] MirahezeSSLBot 82aa1d0 - Bot: Update SSL cert for bebaskanpengetahuan.id
[13:53:38] RECOVERY - wiki.hyperborian.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.hyperborian.org' will expire on Wed 29 Jan 2020 12:45:45 PM GMT +0000.
[13:54:08] RECOVERY - guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'guiasdobrasil.com.br' will expire on Wed 29 Jan 2020 12:40:18 PM GMT +0000.
[13:54:19] PROBLEM - documentation.aqfer.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'documentation.aqfer.com' expires in 15 day(s) (Sat 16 Nov 2019 01:50:30 PM GMT +0000).
[13:54:28] PROBLEM - wiki.ripto.gq - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.ripto.gq' expires in 15 day(s) (Sat 16 Nov 2019 01:50:42 PM GMT +0000).
[13:54:28] RECOVERY - crimesciencewiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'crimesciencewiki.org' will expire on Wed 29 Jan 2020 12:48:31 PM GMT +0000.
[13:54:30] RECOVERY - islamkosh.tk - LetsEncrypt on sslhost is OK: OK - Certificate 'islamkosh.tk' will expire on Wed 29 Jan 2020 12:42:39 PM GMT +0000.
[13:54:30] RECOVERY - wiki.casual4casuals.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.casual4casuals.com' will expire on Wed 29 Jan 2020 12:46:34 PM GMT +0000.
[13:54:32] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPV
[13:54:34] [miraheze/ssl] MirahezeSSLBot a214b02 - Bot: Update SSL cert for documentation.aqfer.com
[13:54:45] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPw
[13:54:46] [miraheze/ssl] MirahezeSSLBot 9ddc099 - Bot: Update SSL cert for wiki.ripto.gq
[13:54:46] RECOVERY - runzeppelin.ru - LetsEncrypt on sslhost is OK: OK - Certificate 'runzeppelin.ru' will expire on Wed 29 Jan 2020 12:44:50 PM GMT +0000.
[13:56:52] PROBLEM - theatlas.pw - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'theatlas.pw' expires in 15 day(s) (Sat 16 Nov 2019 01:53:06 PM GMT +0000).
[13:57:05] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPi
[13:57:07] [miraheze/ssl] MirahezeSSLBot 51255ae - Bot: Update SSL cert for theatlas.pw
[13:58:06] PROBLEM - www.wikimicrofinanza.it - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'www.wikimicrofinanza.it' expires in 15 day(s) (Sat 16 Nov 2019 01:54:46 PM GMT +0000).
[13:58:19] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezP1
[13:58:20] [miraheze/ssl] MirahezeSSLBot 0c81835 - Bot: Update SSL cert for www.wikimicrofinanza.it
[13:58:24] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JezPM
[13:58:25] [miraheze/puppet] paladox 7500104 - upgrade test1 mediawiki to 1.34
[14:02:16] PROBLEM - test1 Puppet on test1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 9 minutes ago with 0 failures
[14:02:35] PROBLEM - wiki.ombre.io - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.ombre.io' expires in 15 day(s) (Sat 16 Nov 2019 01:59:09 PM GMT +0000).
[14:02:40] PROBLEM - wiki.openhatch.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.openhatch.org' expires in 15 day(s) (Sat 16 Nov 2019 02:00:33 PM GMT +0000).
[14:02:48] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPN
[14:02:50] [miraheze/ssl] MirahezeSSLBot 8164ab0 - Bot: Update SSL cert for wiki.ombre.io
[14:02:59] RECOVERY - wiki.casadocarvalho.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.casadocarvalho.net' will expire on Wed 29 Jan 2020 12:49:09 PM GMT +0000.
[14:03:00] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPA
[14:03:02] [miraheze/ssl] MirahezeSSLBot dc322a8 - Bot: Update SSL cert for wiki.openhatch.org
[14:03:36] PROBLEM - wiki.doverdistrictscouts.co.uk - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.doverdistrictscouts.co.uk' expires in 15 day(s) (Sat 16 Nov 2019 02:00:18 PM GMT +0000).
[14:03:45] RECOVERY - wiki.landan.ca - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.landan.ca' will expire on Wed 29 Jan 2020 12:51:54 PM GMT +0000.
[14:03:51] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezPh
[14:03:53] [miraheze/ssl] MirahezeSSLBot 37cdba3 - Bot: Update SSL cert for wiki.doverdistrictscouts.co.uk
[14:04:00] RECOVERY - www.wikimicrofinanza.it - LetsEncrypt on sslhost is OK: OK - Certificate 'www.wikimicrofinanza.it' will expire on Wed 29 Jan 2020 12:58:13 PM GMT +0000.
[14:04:14] RECOVERY - documentation.aqfer.com - LetsEncrypt on sslhost is OK: OK - Certificate 'documentation.aqfer.com' will expire on Wed 29 Jan 2020 12:54:26 PM GMT +0000.
[14:04:30] RECOVERY - wiki.ripto.gq - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ripto.gq' will expire on Wed 29 Jan 2020 12:54:39 PM GMT +0000.
[14:04:41] PROBLEM - wiki.veloren.net - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.veloren.net' expires in 15 day(s) (Sat 16 Nov 2019 02:01:50 PM GMT +0000).
[14:04:42] RECOVERY - bebaskanpengetahuan.id - LetsEncrypt on sslhost is OK: OK - Certificate 'bebaskanpengetahuan.id' will expire on Wed 29 Jan 2020 12:52:53 PM GMT +0000.
[14:04:43] RECOVERY - endlesssea.lucyawrey.com - LetsEncrypt on sslhost is OK: OK - Certificate 'endlesssea.lucyawrey.com' will expire on Wed 29 Jan 2020 12:51:00 PM GMT +0000.
[14:04:51] RECOVERY - theatlas.pw - LetsEncrypt on sslhost is OK: OK - Certificate 'theatlas.pw' will expire on Wed 29 Jan 2020 12:56:59 PM GMT +0000.
[14:04:55] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezXe
[14:04:56] [miraheze/ssl] MirahezeSSLBot cd65444 - Bot: Update SSL cert for wiki.veloren.net
[14:05:22] PROBLEM - wiki.warfra.ml - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.warfra.ml' expires in 15 day(s) (Sat 16 Nov 2019 02:02:41 PM GMT +0000).
[14:05:27] RECOVERY - wiki.coderdojosaintpaul.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.coderdojosaintpaul.org' will expire on Wed 29 Jan 2020 12:49:44 PM GMT +0000.
[14:05:34] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezXv
[14:05:36] [miraheze/ssl] MirahezeSSLBot 5c414be - Bot: Update SSL cert for wiki.warfra.ml
[14:07:07] PROBLEM - prfm.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'prfm.wiki' expires in 15 day(s) (Sat 16 Nov 2019 02:03:31 PM GMT +0000).
[14:07:20] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezXU
[14:07:22] [miraheze/ssl] MirahezeSSLBot 6c56dfa - Bot: Update SSL cert for prfm.wiki
[14:08:15] PROBLEM - encyclopediaofastrobiology.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'encyclopediaofastrobiology.org' expires in 15 day(s) (Sat 16 Nov 2019 02:06:11 PM GMT +0000).
[14:08:29] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezXI
[14:08:30] [miraheze/ssl] MirahezeSSLBot ed93c29 - Bot: Update SSL cert for encyclopediaofastrobiology.org
[14:10:00] PROBLEM - electowiki.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'electowiki.org' expires in 15 day(s) (Sat 16 Nov 2019 02:07:43 PM GMT +0000).
[14:10:21] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JezXq
[14:10:23] [miraheze/ssl] MirahezeSSLBot c517b88 - Bot: Update SSL cert for electowiki.org
[14:13:03] RECOVERY - prfm.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'prfm.wiki' will expire on Wed 29 Jan 2020 01:07:14 PM GMT +0000.
[14:13:22] RECOVERY - wiki.warfra.ml - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.warfra.ml' will expire on Wed 29 Jan 2020 01:05:28 PM GMT +0000.
[14:13:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[14:13:36] RECOVERY - wiki.doverdistrictscouts.co.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.doverdistrictscouts.co.uk' will expire on Wed 29 Jan 2020 01:03:45 PM GMT +0000.
[14:14:15] RECOVERY - encyclopediaofastrobiology.org - LetsEncrypt on sslhost is OK: OK - Certificate 'encyclopediaofastrobiology.org' will expire on Wed 29 Jan 2020 01:08:23 PM GMT +0000.
[14:14:36] RECOVERY - wiki.ombre.io - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ombre.io' will expire on Wed 29 Jan 2020 01:02:42 PM GMT +0000.
[14:14:42] RECOVERY - wiki.veloren.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.veloren.net' will expire on Wed 29 Jan 2020 01:04:48 PM GMT +0000.
[14:14:42] RECOVERY - wiki.openhatch.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.openhatch.org' will expire on Wed 29 Jan 2020 01:02:55 PM GMT +0000.
[14:15:30] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:17:29] Hello absor70! If you have any questions, feel free to ask and someone should answer soon.
[14:23:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.30, 3.77, 2.44
[14:23:59] RECOVERY - electowiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'electowiki.org' will expire on Wed 29 Jan 2020 01:10:15 PM GMT +0000.
[14:25:08] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.69, 2.98, 2.32
[14:28:18] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:36:40] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb
[14:36:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[14:37:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:38:16] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[14:38:26] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2
[14:39:12] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1U
[14:39:13] [miraheze/mw-config] paladox 34e7f87 - Move Scribunto to use extension.json
[14:39:15] [mw-config] paladox created branch paladox-patch-1 - https://git.io/vbvb3
[14:39:16] [mw-config] paladox opened pull request #2784: Move Scribunto to use extension.json - https://git.io/Jez1T
[14:39:39] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1k
[14:39:40] [miraheze/mw-config] paladox 8304391 - Update extension-list
[14:39:42] [mw-config] paladox synchronize pull request #2784: Move Scribunto to use extension.json - https://git.io/Jez1T
[14:40:18] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[14:40:26] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:40:34] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:40:51] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:41:10] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:42:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.88, 3.51, 2.30
[14:42:53] [mw-config] paladox closed pull request #2784: Move Scribunto to use extension.json - https://git.io/Jez1T
[14:42:54] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/Jez1m
[14:42:56] [miraheze/mw-config] paladox 47cf9f7 - Move Scribunto to use extension.json (#2784) * Move Scribunto to use extension.json * Update extension-list
[14:42:57] [miraheze/mw-config] paladox deleted branch paladox-patch-1
[14:42:59] [mw-config] paladox deleted branch paladox-patch-1 - https://git.io/vbvb3
[14:45:16] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1n
[14:45:18] [miraheze/mw-config] paladox 50f2fa0 - Variables use extension.json
[14:45:19] [mw-config] paladox created branch paladox-patch-1 - https://git.io/vbvb3
[14:45:28] [mw-config] paladox opened pull request #2785: Variables use extension.json - https://git.io/Jez1c
[14:45:51] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1C
[14:45:53] [miraheze/mw-config] paladox 494d4bb - Update extension-list
[14:45:54] [mw-config] paladox synchronize pull request #2785: Variables use extension.json - https://git.io/Jez1c
[14:46:11] [mw-config] paladox closed pull request #2785: Variables use extension.json - https://git.io/Jez1c
[14:46:13] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/Jez1l
[14:46:14] [miraheze/mw-config] paladox cc45bde - Variables use extension.json (#2785) * Variables use extension.json * Update extension-list
[14:49:09] [mw-config] paladox deleted branch paladox-patch-1 - https://git.io/vbvb3
[14:49:10] [miraheze/mw-config] paladox deleted branch paladox-patch-1
[14:50:22] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1z
[14:50:24] [miraheze/mw-config] paladox 5790889 - Use extension.json for loops on mw 1.34+
[14:50:25] [mw-config] paladox created branch paladox-patch-1 - https://git.io/vbvb3
[14:50:34] [mw-config] paladox opened pull request #2786: Use extension.json for loops on mw 1.34+ - https://git.io/Jez12
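The mw-config pull requests above (#2784-#2786) all make the same change: moving an extension from its legacy PHP entry point to extension registration via extension.json, gated on the MediaWiki version so the 1.33 wikis keep the old loader while test1 runs 1.34. A sketch of the general shape such a conditional takes in LocalSettings-style configuration; the exact guard Miraheze used may differ:

    <?php
    // Illustrative only; the real mw-config conditional may be structured differently.
    if ( version_compare( $wgVersion, '1.34', '>=' ) ) {
        // Extension registration: reads extensions/Loops/extension.json.
        wfLoadExtension( 'Loops' );
    } else {
        // Legacy PHP entry point, still used on the 1.33 branch.
        require_once "$IP/extensions/Loops/Loops.php";
    }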
[14:51:16] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1V
[14:51:17] [miraheze/mw-config] paladox f686ead - Update extension-list
[14:51:19] [mw-config] paladox synchronize pull request #2786: Use extension.json for loops on mw 1.34+ - https://git.io/Jez12
[14:54:17] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.75, 3.20, 3.34
[14:56:40] [miraheze/mediawiki] paladox pushed 1 commit to REL1_33 [+0/-0/±2] https://git.io/Jez1D
[14:56:42] [miraheze/mediawiki] paladox f52bc2a - Update Loops to REL1_34 There dosen't appear to be any breaking changes so just going to be bold so that we can try to test 1.34 without any hacks.
[14:57:41] [mw-config] paladox synchronize pull request #2786: Use extension.json for loops on mw 1.34+ - https://git.io/Jez12
[14:57:43] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez19
[14:57:44] [miraheze/mw-config] paladox 14545b3 - Update LocalExtensions.php
[14:57:56] [mw-config] paladox synchronize pull request #2786: Use extension.json for loops on mw 1.34+ - https://git.io/Jez12
[14:57:57] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jez1H
[14:57:59] [miraheze/mw-config] paladox d121040 - Update extension-list
[14:58:03] [mw-config] paladox closed pull request #2786: Use extension.json for loops on mw 1.34+ - https://git.io/Jez12
[14:58:04] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/Jez1Q
[14:58:06] [miraheze/mw-config] paladox 0a9cbf0 - Use extension.json for loops on mw 1.34+ (#2786) * Use extension.json for loops on mw 1.34+ * Update extension-list * Update LocalExtensions.php * Update extension-list
[15:06:46] [miraheze/mediawiki] paladox pushed 1 commit to REL1_34 [+0/-0/±1] https://git.io/JezMI
[15:06:47] [miraheze/mediawiki] paladox 63aaad9 - Update SoftRedirector
[15:43:15] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.68, 3.01, 2.05
[15:45:16] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.88, 3.25, 2.27
[15:53:52] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb
[15:59:06] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.09, 6.60, 5.75
[15:59:34] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[15:59:51] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:01:24] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[16:01:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:02:13] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:34] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3
[16:03:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:04:09] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.390 second response time
[16:05:07] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.64, 6.62, 6.10
[16:07:16] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[16:07:38] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[16:07:40] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[16:07:46] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:09:16] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:55:16] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.74, 4.19, 2.63
[16:56:38] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb
[16:57:15] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.20, 2.99, 2.38
[16:58:35] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:37:27] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3
[17:37:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:37:45] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[17:38:04] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: HTTP CRITICAL - No data received from host
[17:38:24] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[17:39:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:40:05] [mw-config] Pix1234 deleted branch paladox-patch-1 - https://git.io/vbvb3
[17:40:07] [miraheze/mw-config] Pix1234 deleted branch paladox-patch-1
[17:40:58] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:41:12] Hit a 502 Bad Gateway error...
[17:41:15] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/Jez90
[17:41:17] [miraheze/ManageWiki] translatewiki 80ed281 - Localisation updates from https://translatewiki.net.
[17:41:18] [ Main page - translatewiki.net ] - translatewiki.net.
[17:42:11] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 6.099 second response time
[17:45:04] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: HTTP CRITICAL - No data received from host
[17:46:54] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.003 second response time
[17:47:10] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 8.774 second response time
[17:48:26] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[17:49:09] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:49:27] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:49:28] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[17:49:36] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[18:52:48] CRITICAL: paladox left the building
[19:00:17] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CRITICAL: Full, 4217 files, 8.761MB, 2019-10-15 18:59:00 (2.3 weeks ago)
[19:31:45] RECOVERY: paladox entered the building again
[19:31:53] lol
[19:31:59] 14:52 < mutante> CRITICAL: paladox left the building
[19:31:59] 14:55 -!- paladox [paladox@wikimedia/paladox] has joined #miraheze
[19:32:00] oh didn't realise you rejoined :P
[19:32:06] haha
[19:52:23] PROBLEM - cp4 Stunnel Http for misc2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[19:52:38] hi
[19:53:05] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3
[19:53:05] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[19:53:06] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[19:53:40] PROBLEM - misc2 HTTPS on misc2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 372 bytes in 6.488 second response time
[19:54:17] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[19:54:18] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[19:54:19] RECOVERY - cp4 Stunnel Http for misc2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 43687 bytes in 0.349 second response time
[19:55:07] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[19:55:57] RECOVERY - misc2 HTTPS on misc2 is OK: HTTP OK: HTTP/1.1 200 OK - 43695 bytes in 0.224 second response time
[19:56:07] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[19:56:24] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/root/ufw-fix]
[19:58:17] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[wiki.counterculturelabs.org]
[19:58:26] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[19:58:29] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[19:58:40] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/root/ufw-fix]
[19:59:11] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[19:59:15] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[20:02:24] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures
[20:02:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:02:44] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 10 seconds ago with 0 failures
[20:03:50] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:04:24] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:04:32] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[20:12:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:14:33] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[20:22:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.69, 3.08, 2.14
[20:24:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.30, 2.87, 2.18
[20:40:09] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jez7y
[20:40:10] [miraheze/puppet] paladox 3cda2b6 - Remove lizardfs from misc3
[20:43:57] !log rebuilding misc3 (restbase will fail for a bit) also moving it to the 1gb plan
[20:44:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:46:26] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:47:17] [miraheze/dns] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/Jez7H
[20:47:18] [miraheze/dns] paladox c966f1a - Update ip for misc3
[20:47:20] [dns] paladox created branch paladox-patch-2 - https://git.io/vbQXl
[20:47:21] [dns] paladox opened pull request #116: Update ip for misc3 - https://git.io/Jez7Q
[20:49:03] miraheze/dns/paladox-patch-2/c966f1a - paladox The build passed. https://travis-ci.org/miraheze/dns/builds/605717995
[20:49:11] PROBLEM - misc3 citoid on misc3 is CRITICAL: connect to address 185.52.1.71 and port 6927: Connection refused
[20:49:19] PROBLEM - misc3 Current Load on misc3 is CRITICAL: connect to address 185.52.1.71 port 5666: Connection refusedconnect to host 185.52.1.71 port 5666: Connection refused
[20:49:21] PROBLEM - misc3 Disk Space on misc3 is CRITICAL: connect to address 185.52.1.71 port 5666: Connection refusedconnect to host 185.52.1.71 port 5666: Connection refused
[20:49:28] PROBLEM - misc3 electron on misc3 is CRITICAL: connect to address 185.52.1.71 and port 3000: Connection refused
[20:49:29] PROBLEM - misc3 restbase on misc3 is CRITICAL: connect to address 185.52.1.71 and port 7231: Connection refused
[20:49:34] PROBLEM - misc3 lizard.miraheze.org HTTPS on misc3 is CRITICAL: connect to address 185.52.1.71 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket
[20:49:47] PROBLEM - misc3 zotero on misc3 is CRITICAL: connect to address 185.52.1.71 and port 1969: Connection refused
[20:52:56] [dns] paladox closed pull request #116: Update ip for misc3 - https://git.io/Jez7Q
[20:52:58] [miraheze/dns] paladox deleted branch paladox-patch-2
[20:52:59] [dns] paladox deleted branch paladox-patch-2 - https://git.io/vbQXl
[20:59:59] RhinosF1: .task T4813
[21:00:14] PROBLEM - misc3 Puppet on misc3 is UNKNOWN: NRPE: Unable to read output
[21:00:41] paladox: did you comment on upstream for
[21:00:47] .task T4813
[21:00:49] https://phabricator.miraheze.org/T4813 - "Welcome to 'wiki name'" appears every time the source or visual editor is opened [Stalled] authored by Andreas, assigned to None
[21:00:58] yes
[21:01:18] RECOVERY - misc3 Current Load on misc3 is OK: OK - load average: 1.77, 1.53, 0.97
[21:01:19] RECOVERY - misc3 Disk Space on misc3 is OK: DISK OK - free space: / 22130 MB (91% inode=95%);
[21:01:31] paladox: no https://phabricator.wikimedia.org/T231763
[21:01:32] [ ⚓ T231763 Don't show VE help/welcome messages to users if seen before ] - phabricator.wikimedia.org
[21:01:48] no
[21:01:55] i only commented at https://phabricator.miraheze.org/T4813
[21:01:56] [ ⚓ T4813 "Welcome to 'wiki name'" appears every time the source or visual editor is opened ] - phabricator.miraheze.org
[21:02:02] i'm not going to comment in two places :)
[21:02:19] I'll cross post
[21:04:28] {{done}}
[21:10:55] RECOVERY - misc3 citoid on misc3 is OK: TCP OK - 0.001 second response time on 185.52.1.71 port 6927
[21:11:13] RECOVERY - misc3 zotero on misc3 is OK: TCP OK - 0.002 second response time on 185.52.1.71 port 1969
[21:11:15] RECOVERY - misc3 restbase on misc3 is OK: TCP OK - 0.002 second response time on 185.52.1.71 port 7231
[21:14:11] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[21:16:24] paladox: MW is fatalling on test1
[21:16:35] b4b70cc2dc2726803c04242b] 2019-10-31 21:15:53: Fatal exception of type "TypeError"
[21:16:40] where?
[21:16:55] paladox: ManageWiki/extensions test1wiki
[21:17:34] TypeError from line 4 of /srv/mediawiki/w/extensions/ManageWiki/includes/formFactory/ManageWikiFormFactory.php: Argument 6 passed to ManageWikiFormFactory::getFormDescriptor() must be an instance of Wikimedia\Rdbms\Database, instance of Wikimedia\Rdbms\MaintainableDBConnRef given, called in /srv/mediawiki/w/extensions/ManageWiki/includes/formFactory/ManageWikiFormFactory.php on line 47
[21:17:36] JohnLewis ^
[21:17:58] paladox: when was ManageWiki last updated?
[21:18:14] Uh, that has nothing to do with this.
[21:18:24] It's running MW 1.34 so shows an incompatibility
[21:18:29] Oh
[21:18:41] It's not the bug that was raised is it
[21:18:53] what bug?
[21:18:55] paladox: what is being passed?
[21:19:37] looking
[21:19:49] paladox: someone from the WMF filed a task with us about something changing
[21:20:02] where?
[21:20:06] task link?
[21:20:39] https://github.com/miraheze/ManageWiki/blob/master/includes/specials/SpecialManageWiki.php#L100
[21:20:40] [ ManageWiki/SpecialManageWiki.php at master · miraheze/ManageWiki · GitHub ] - github.com
[21:20:41] JohnLewis ^
[21:20:45] that'll be it i guess?
[21:21:19] paladox: I can't find it now
[21:21:25] paladox: okay a) it's arg 6. There's no 6 there. b) it's not even in that file. c) it's not even those lines mentioned
[21:21:25] On our phab
[21:21:32] I'll look at it later
[21:21:42] ok
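The fatal quoted at 21:17 is a plain PHP type mismatch: on the MW 1.34 test branch, the database handle ManageWiki receives is a Wikimedia\Rdbms\MaintainableDBConnRef wrapper rather than the concrete Wikimedia\Rdbms\Database that the parameter type demands. One plausible fix — an assumption here, not necessarily the patch that was eventually applied — is to type-hint the parameter against the interface, which the concrete class and the ConnRef wrappers both implement:

    <?php
    use Wikimedia\Rdbms\Database;
    use Wikimedia\Rdbms\IDatabase;

    // Hypothetical stand-in for ManageWikiFormFactory, reduced to the type hints.
    class ExampleFormFactory {
        // Narrow hint: only the concrete class qualifies, so the
        // MaintainableDBConnRef wrapper triggers the TypeError quoted above.
        public function narrow( Database $dbw ) {
        }

        // Wider hint: IDatabase is implemented by Database and by the
        // *DBConnRef wrappers alike, so both code paths are accepted.
        public function wide( IDatabase $dbw ) {
        }
    }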
[21:26:51] hi
[21:31:16] paladox: who provides LF6
[21:31:22] ovh
[21:31:44] Examknow: ^ what about them
[21:31:55] paladox: EK is looking for VPS providers
[21:32:07] oh, we are not using them as a VPS
[21:32:11] though they do provide them
[21:32:21] we have a dedicated server with OVH
[21:32:26] Ah right
[21:32:32] As long as it's an option
[21:32:50] I am looking for a free solution as I really have nothing to spend on this
[21:33:05] oh, there's no free solutions
[21:33:12] Examknow: not many will offer free
[21:33:43] Paladox, RhinosF1: I know I have literally looked all over the web
[21:34:53] Examknow: see -offtopic
[21:38:27] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.71, 3.20, 2.08
[21:40:25] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.50, 2.56, 1.98
[21:45:05] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jezdv
[21:45:07] [miraheze/puppet] paladox 2cc8000 - electron: install libasound2
[21:45:36] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.62, 6.61, 5.98
[21:46:15] RECOVERY - misc3 electron on misc3 is OK: TCP OK - 0.002 second response time on 185.52.1.71 port 3000
[21:47:32] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.50, 6.51, 6.01
[22:03:07] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:03:09] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[22:03:19] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb
[22:09:07] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[22:10:46] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[22:12:45] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[22:15:11] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[22:17:14] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:17:15] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[22:17:19] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:26:01] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.83, 2.92, 2.10
[22:28:02] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.55, 3.05, 2.27
[22:28:38] * mutante heals the lizards
[22:29:31] lololol
[22:29:46] drain the lizard! :D
[22:41:20] paladox: lol
[23:08:07] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[23:08:44] paladox: an alert for entire data centers? ^ and 2 at once? uh oh
[23:09:05] seems things are still up
[23:09:22] what is "cpweb"
[23:09:51] maybe just IPv6 broke
[23:09:59] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[23:10:00] cp[234] mutante
[23:10:17] i dunno why it keeps reporting it as ipv6 :P
[23:10:24] *that's broken
[23:11:21] 2a00:d880:5:8ea::ebc7 is cp4
[23:11:55] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:12:06] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:12:44] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.25, 6.49, 5.72
[23:13:09] paladox: so quick the entire world can change (3 data centers down - 3 data centers up again :)
[23:13:18] lol
[23:13:48] paladox: Halloween night!!!
[23:13:52] :D
[23:13:57] paladox: you should be out.. doing Trick Or Treat
[23:13:59] halloween almost over here!
[23:14:11] now is the right time to get all the candy they have let
[23:14:13] left
[23:14:21] no one came to mine
[23:14:32] lol
[23:14:35] maybe some will give sunday roast slices instead of candy
[23:14:44] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.83, 6.08, 5.66
[23:14:46] never heard of that one :
[23:14:47] *:P
[23:14:48] meh, I wasn't in for about 2.5 hours
[23:22:23] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.41, 3.45, 2.13
[23:24:25] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.51, 3.57, 2.35
[23:26:26] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.18, 2.62, 2.14