[00:45:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjhmb
[00:45:10] [miraheze/services] MirahezeSSLBot 879a9ad - BOT: Updating services config for wikis
[02:23:36] Is there a way to just purge all my actions on the publictestwiki? I've got things sorted and don't really want to delete pages 1 by 1
[02:27:03] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[02:27:04] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[02:27:13] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[02:29:03] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[02:29:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[02:29:13] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:39:04] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[02:39:05] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:39:13] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[02:39:19] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[02:39:24] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[02:39:45] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:39:54] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 92%
[02:40:05] I run my script, critical crashes all around. Coincidence? I damn well hope so.
[02:40:11] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:25] PROBLEM - misc1 webmail.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:40:25] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:32] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:37] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:44] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:46] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:40:53] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:41:03] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[02:41:55] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 54%
[02:42:40] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.588 second response time
[02:42:49] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.004 second response time
[02:43:04] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24500 bytes in 0.692 second response time
[02:44:28] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.393 second response time
[02:44:35] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.394 second response time
[02:44:36] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.634 second response time
[02:44:48] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.004 second response time
[02:45:08] I've not seen it do this before
[02:45:39] Pongles, if the pages were created, it may be possible to nuke em
[02:46:01] I created the pages using a bot script, does that count?
[02:46:34] PROBLEM - misc1 icinga.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:46:35] should be fine, if you didn't use Special:Import or something similar in the API (doubtful), then it can be nuked
[02:46:58] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:47:12] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:47:18] would help if we weren't down
[02:47:23] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:47:51] yeah, this whole "Critical Error" thing is putting a cramp in my plans. :P
[02:48:21] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 40%
[02:48:52] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:48:53] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.393 second response time
[02:48:56] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:49:00] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:49:08] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.004 second response time
[02:49:09] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:49:10] Things have... exploded. icinga.miraheze.org isn't loading. getting an error when trying to access phabricator.
[02:49:29] db4 ran out of space
[02:49:49] paladox, PuppyKun, Reception123, SPF|Cloud ^
[02:49:59] hmm. well, it's been an issue for a while now. we need a permanent solution.
[02:50:28] we have db5, but I'm not sure what went wrong here
[02:57:21] sadly, looks like this isn't going to change in the next few hours
[03:01:17] Keep us posted.
[03:02:12] unfortunately, I have to head off, though I expect that someone will be able to fix it in the next 3-5 hours
[03:15:05] I got a 503 error. (Varnish XID 819036873) via cp2 at Wed, 04 Sep 2019 03:14:37 GMT.
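The "nuke" suggestion above refers to MediaWiki's Nuke extension (Special:Nuke), which can mass-delete recently created pages; the same cleanup can also be scripted against the MediaWiki Action API. Below is a minimal sketch of the API route, assuming a bot-password account that holds the delete right; the endpoint URL, credentials and page titles are placeholders for illustration only, not real Miraheze values.

    # Illustrative only: deleting a list of bot-created pages via the
    # MediaWiki Action API. Endpoint, credentials and titles are placeholders.
    import requests

    API = "https://publictestwiki.example/w/api.php"  # hypothetical endpoint
    session = requests.Session()

    # 1. Log in with a bot password (Special:BotPasswords credentials assumed).
    login_token = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "login", "format": "json"
    }).json()["query"]["tokens"]["logintoken"]
    session.post(API, data={
        "action": "login", "lgname": "ExampleBot@cleanup",
        "lgpassword": "botpassword", "lgtoken": login_token, "format": "json"
    })

    # 2. Fetch a CSRF token, then delete each page (requires the 'delete' right).
    csrf = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json"
    }).json()["query"]["tokens"]["csrftoken"]

    for title in ["Test page 1", "Test page 2"]:  # hypothetical titles
        result = session.post(API, data={
            "action": "delete", "title": title,
            "reason": "Cleaning up bot test edits",
            "token": csrf, "format": "json"
        }).json()
        print(title, "error" if "error" in result else "deleted")

Either way the pages end up deleted rather than hidden, which is why the answer above only rules it out if something like Special:Import was involved.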
[03:18:15] yeah, everything seems to be down zzo38
[03:25:03] Do you know when it would be fixed? It says to try to access Phabricator. The first time I tried that, I got an error message that it is an exception when trying to process the exception.
[03:38:52] Last I heard, the ETA is 3-5 hours
[03:47:00] OK
[05:34:17] !log purge binary logs before '2019-09-04 02:00:00';
[05:34:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[05:38:15] paladox, PuppyKun, Reception123, SPF|Cloud: we're down, 503 - reported issues with phab as well
[05:38:36] I can't access wikis but phab is up for me
[05:39:00] https://www.irccloud.com/pastebin/OLlfywHV
[05:39:01] [ Snippet | IRCCloud ] - www.irccloud.com
[05:39:30] And it magically has recovered
[05:42:43] yeah... too bad no one looked at db4 space again
[05:45:22] Reception123: just missed most of the alerts
[05:47:15] That was a long outage as well. We need to maybe look at doing it more often
[05:47:34] yeah, true
[05:48:48] Reception123: maybe every 24-36 hours?
[05:49:10] yeah, I'll also have to get to moving more to db5 soon
[05:50:19] I think that's a good idea
[06:05:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjh3h
[06:05:10] [miraheze/services] MirahezeSSLBot 89856db - BOT: Updating services config for wikis
[06:11:01] Reception123: around???
[06:14:55] yeah
[06:15:39] Reception123: https://phabricator.miraheze.org/T4695
[06:15:39] [ Login ] - phabricator.miraheze.org
[14:23:28] paladox: ping
[14:23:39] yes?
[14:24:00] paladox: T4695 - Reception123 said u might know
[14:24:27] it's been closed as declided as it does not affect us
[14:24:32] *declined
[14:24:40] *invalid
[14:24:58] uh that ^ (sorry)
[14:25:11] paladox: I can't see it so thanks - how so?
[14:25:23] it's public
[14:25:34] i've left my reason on task, please read :)
[14:25:46] Ah it's public now - cool thx
[14:25:58] authors can also read their own tasks
[14:27:24] paladox: not when Reception123 moved it to S2
[14:27:27] paladox: Reception123 changed it
[14:27:31] oh
[14:27:37] * RhinosF1 was at school after that
[14:32:42] I changed it right after realizing the mistake
[14:47:02] * RhinosF1 had gone by then
[14:57:35] Voidwalker: how long was icinga quieted?
[14:57:57] since around when we went down last night
[14:58:26] Voidwalker: oh, this is why it would be better in a -alerts channel?
[14:59:04] yeah, but if the db runs out of storage, you'll get hundreds of alerts that don't actually tell you what's wrong
[14:59:45] yeah, and in an alerts channel, it doesn't bother people who don't care
[15:01:20] true, true
[15:01:57] paladox, Reception123, JohnLewis: thoughts?
[15:02:05] ?
[15:02:28] paladox: on icinga moving to a -alerts channel?
[15:02:36] As I said, I disagree with an alerts channel
[15:02:41] Because then people will care less anyway
[15:02:50] (The people who are supposed to)
[15:02:55] ^
[15:03:00] We won't check that channel and therefore will miss alerts
[15:03:01] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:03:03] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
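The "purge binary logs before ..." command logged at 05:34 is standard MariaDB/MySQL syntax for freeing binlog space on a database host like db4, and the 05:47-05:50 exchange is about doing it on a schedule (every 24-36 hours) rather than by hand after an outage. A minimal sketch of the same check-and-purge from a script, assuming the pymysql package and placeholder connection details; letting the server expire binlogs itself via expire_logs_days is one way to make the scheduled behaviour automatic.

    # Illustrative sketch: purge old binary logs on a MariaDB host, roughly
    # what the 05:34 admin-log entry does manually. Host, user and password
    # are placeholders; assumes the pymysql package is installed.
    import datetime
    import pymysql

    conn = pymysql.connect(host="db4.example", user="root", password="secret")
    with conn.cursor() as cur:
        # Show how much space the binlogs currently take.
        cur.execute("SHOW BINARY LOGS")
        logs = cur.fetchall()
        total_gib = sum(row[1] for row in logs) / 1024 ** 3
        print(f"{len(logs)} binlog files, {total_gib:.1f} GiB total")

        # Purge everything older than 24 hours (mirrors the manual command).
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=24)
        cur.execute("PURGE BINARY LOGS BEFORE %s",
                    (cutoff.strftime("%Y-%m-%d %H:%M:%S"),))

        # Optionally let the server expire old binlogs on its own instead of
        # relying on a cron job or a human remembering to run the purge.
        cur.execute("SET GLOBAL expire_logs_days = 1")
    conn.close()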
[15:03:04] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:03:06] hmm
[15:03:11] Until there is a lot of traffic here there's no need
[15:03:26] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:03:26] Reception123: it's been +q all day so far, so we've been missing alerts anyway!
[15:03:36] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:03:53] RhinosF1: then we should be fixing the issue, not the notifications
[15:04:12] Reception123: well true, we said this morning we need more on db5
[15:04:21] and to purge the logs more often
[15:04:30] down again
[15:04:46] No, I mean the occasional 503s
[15:04:51] Those are not db4 at all
[15:04:59] Reception123: yeah, we do need to look at them
[15:05:00] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:05:04] paladox: ^ those are misc3 right?
[15:05:04] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:05:05] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:05:21] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.691 second response time
[15:05:27] it's looking like it is
[15:05:33] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24500 bytes in 0.393 second response time
[15:05:56] paladox: so shouldn't we do something soon?
[15:06:04] paladox: based on icinga, it is
[15:06:38] Reception123 yes we should try and resolve this. But this is not an easy fix as i said. If it's the network, it's out of our control.
[15:06:44] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[wildcard.miraheze.org]
[15:07:23] paladox: so maybe we should open a ticket with RN if it's network? See what they say
[15:07:31] i have already :)
[15:12:44] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures
[15:27:09] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 7.77, 6.57, 5.45
[15:29:15] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 6.10, 6.17, 5.44
[15:47:15] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 7.53, 6.30, 5.47
[15:48:06] PROBLEM - papelor.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:49:14] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 6.74, 6.55, 5.66
[15:50:03] RECOVERY - papelor.io - LetsEncrypt on sslhost is OK: OK - Certificate 'papelor.io' will expire on Wed 30 Oct 2019 08:29:29 PM GMT +0000.
[15:56:13] PROBLEM - mw2 Current Load on mw2 is CRITICAL: CRITICAL - load average: 10.73, 8.36, 6.67
[15:58:11] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 5.37, 7.28, 6.49
[16:00:09] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 4.45, 6.14, 6.15
[16:00:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:01:12] looking
[16:02:35] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:55:32] things should hopefully improve with misc3 now
[16:55:45] rn have changed some settings on the server
[17:01:47] paladox: let's hope
[17:01:59] yeh
[17:10:12] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjh5w
[17:10:13] [miraheze/services] MirahezeSSLBot 87996f3 - BOT: Updating services config for wikis
[17:16:13] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:16:20] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:16:53] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:17:09] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:17:14] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:17:19] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[17:17:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:17:23] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:17:26] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:17:29] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[17:17:31] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:17:37] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:17:38] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 42%
[17:17:42] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[17:17:54] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 75%
[17:17:56] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 61%
[17:17:58] !log restarted lizardfs-chunkserver on lizardfs[45]
[17:18:07] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24500 bytes in 0.395 second response time
[17:18:15] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.751 second response time
[17:18:23] now i've learned that restarting all chunkservers at the same time causes everything to go down
[17:18:50] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24500 bytes in 0.790 second response time
[17:18:59] !log [18:17:57] <+paladox> !log restarted lizardfs-chunkserver on lizardfs[45]
[17:19:04] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.005 second response time
[17:19:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:19:09] RECOVERY - mw1 HTTPS on mw1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.009 second response time
[17:19:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[17:19:20] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:19:20] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.397 second response time
[17:19:21] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.737 second response time
[17:19:24] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[17:19:29] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:19:33] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.439 second response time
[17:19:38] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 5%
[17:19:39] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[17:19:54] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 11%
[17:19:56] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 3%
[17:38:22] PROBLEM - papelor.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:40:22] RECOVERY - papelor.io - LetsEncrypt on sslhost is OK: OK - Certificate 'papelor.io' will expire on Wed 30 Oct 2019 08:29:29 PM GMT +0000.
[17:55:17] PROBLEM - papelor.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:20] paladox, Reception123: ^ is that pointed right?
[17:57:10] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 7.30, 6.66, 5.75
[17:57:13] RECOVERY - papelor.io - LetsEncrypt on sslhost is OK: OK - Certificate 'papelor.io' will expire on Wed 30 Oct 2019 08:29:29 PM GMT +0000.
[17:59:10] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 4.82, 6.12, 5.68
[18:01:31] RhinosF1: well it says "NOTICE: This domain name expired on 8/30/2019 and is pending renewal or deletion." so you tell me :)
[18:03:39] Reception123: that would be a no
[18:04:08] And where did it say that?
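The lesson at 17:18:23 (restarting every LizardFS chunkserver at once takes all the chunk replicas offline together, and the wikis with them) is the usual argument for a rolling restart with a pause between hosts. A minimal sketch under assumed conditions: SSH access, systemd-managed lizardfs-chunkserver units, and placeholder host names; the fixed sleep stands in for whatever real health check the monitoring provides.

    # Illustrative rolling restart: one chunkserver at a time with a pause in
    # between, instead of all at once. Host names are placeholders.
    import subprocess
    import time

    CHUNKSERVERS = ["lizardfs4.example", "lizardfs5.example"]  # hypothetical

    for host in CHUNKSERVERS:
        print(f"restarting lizardfs-chunkserver on {host}")
        subprocess.run(
            ["ssh", host, "sudo", "systemctl", "restart", "lizardfs-chunkserver"],
            check=True,
        )
        # Give the chunkserver time to reconnect and re-register its chunks
        # before touching the next one, so replicas stay available throughout.
        time.sleep(120)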
[18:04:10] yeah, we'll have to get rid of it then I guess
[18:04:13] RhinosF1: on the link :D
[18:04:42] Reception123: ha suppose so
[18:05:14] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjhdX
[18:05:16] [miraheze/services] MirahezeSSLBot 6b65c07 - BOT: Updating services config for wikis
[18:05:53] PROBLEM - papelor.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:40:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjhFs
[18:40:12] [miraheze/services] MirahezeSSLBot ac2320c - BOT: Updating services config for wikis
[18:45:19] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjhFc
[18:45:20] [miraheze/services] MirahezeSSLBot 438b822 - BOT: Updating services config for wikis
[20:33:20] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fjhNP
[20:33:21] [miraheze/puppet] paladox 62b0167 - matomo: Increase 'memory_limit' to 256M
[20:49:43] [ssl] RhinosF1 opened pull request #215: Remove papelor.io cert - https://git.io/fjhNN
[20:51:10] [ssl] RhinosF1 synchronize pull request #215: Remove papelor.io cert - https://git.io/fjhNN
[20:52:10] paladox: done the public bit for you ^
[20:52:23] thanks
[20:52:33] paladox: np
[20:52:49] [ssl] paladox closed pull request #215: Remove papelor.io cert - https://git.io/fjhNN
[20:52:51] [miraheze/ssl] paladox pushed 1 commit to master [+0/-1/±1] https://git.io/fjhNp
[20:52:53] [miraheze/ssl] RhinosF1 64c7252 - Remove papelor.io cert (#215) * Delete papelor.io.crt * Update certs.yaml
[20:55:55] paladox: what order is certs.yaml supposed to be in?
[20:56:16] doesn't need to be in order
[20:56:26] we add new certs at the end of the file
[20:56:50] paladox: i worked that out trying to find the cert. Didn't know whether it had just been forgotten at times
[20:56:57] * RhinosF1 thanks ctrl+f
[21:11:28] PROBLEM - cp2 Current Load on cp2 is CRITICAL: CRITICAL - load average: 2.22, 1.59, 0.81
[21:13:23] RECOVERY - cp2 Current Load on cp2 is OK: OK - load average: 1.33, 1.41, 0.83
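The papelor.io alerts above turned out to be the domain registration lapsing rather than the certificate itself, which is why the fix was removing the cert (PR #215) instead of renewing it. A per-domain TLS expiry probe in the same spirit as the "LetsEncrypt on sslhost" checks can be sketched with the standard library; the domain list below is a placeholder, not the actual certs.yaml contents or the real check's implementation.

    # Illustrative expiry probe for custom domains. Domains are placeholders.
    import socket
    import ssl
    from datetime import datetime

    DOMAINS = ["example.org", "example.net"]  # placeholders

    for domain in DOMAINS:
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((domain, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                    cert = tls.getpeercert()
            expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
            print(f"{domain}: certificate expires {expires:%Y-%m-%d}")
        except (OSError, ssl.SSLError) as exc:
            # A lapsed domain registration or dead host shows up here as a
            # timeout or handshake failure, much like papelor.io did above.
            print(f"{domain}: CRITICAL - {exc}")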