[03:36:26] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
[03:44:32] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[04:20:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvOCt
[04:20:09] [miraheze/services] MirahezeSSLBot 97a13d1 - BOT: Updating services config for wikis
[06:16:17] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 4 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[06:26:46] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2955 MB (12% inode=94%);
[09:16:28] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
[09:24:25] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:46:29] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
[09:54:26] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[11:00:46] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2647 MB (10% inode=94%);
[12:36:26] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-81.4.121.113-to-any-port-81]
[12:44:29] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:53:35] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/JvOo7
[14:53:37] [miraheze/ManageWiki] translatewiki 078827e - Localisation updates from https://translatewiki.net.
[14:53:38] [ Main page - translatewiki.net ] - translatewiki.net.
[15:22:45] PuppyKun, SPF|Cloud: Can I get a CU on the two users I’ve just blocked on test.miraheze.org?
[15:23:15] @Stewards: ^
[15:26:14] Make that 3 users
[15:45:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvOi3
[15:45:09] [miraheze/services] MirahezeSSLBot 1618ebe - BOT: Updating services config for wikis
[16:56:26] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
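The cp2 failures above keep cycling through the same ufw Exec resources, whose names encode the rule they apply. A minimal sketch of the command such a resource presumably shells out to (the name-to-command mapping here is an assumption for illustration, not taken from the Miraheze puppet repo):

    # Hypothetical reconstruction: an Exec named
    # "ufw-allow-tcp-from-185.52.1.75-to-any-port-81" presumably wraps a
    # command like the one built below. Puppet reports the resource as
    # failed whenever the wrapped command exits non-zero.
    import subprocess

    def ufw_allow(src: str, port: int, proto: str = "tcp") -> None:
        """Add a ufw allow rule; ufw itself skips exact duplicates."""
        cmd = ["ufw", "allow", "proto", proto, "from", src, "to", "any",
               "port", str(port)]
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

    ufw_allow("any", 80)          # ufw-allow-tcp-from-any-to-any-port-80
    ufw_allow("185.52.1.75", 81)  # one of the port-81 rules that flapped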
[17:04:25] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[17:10:47] [miraheze/puppet] Southparkfan pushed 1 commit to master [+0/-0/±1] https://git.io/JvO1I
[17:10:49] [miraheze/puppet] Southparkfan a94f1e7 - Remove key
[17:21:43] JohnLewis, SPF|Cloud: if you beat void, can you cu those I blocked on testwiki and lock them/block underlying IPs
[17:22:42] Also see paladox about passing the result of the ToU ban onto me
[17:26:30] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
[17:31:56] I don’t have access to check rn but will do later if not beaten
[17:32:58] Hello Traductor2020! If you have any questions, feel free to ask and someone should answer soon.
[17:37:53] Reception123 Hi!! I'm here: https://es.publictestwiki.com/wiki/PruebaWiki:Solicitudes_de_permisos (requesting permissions)
[17:37:56] [ PruebaWiki:Solicitudes de permisos - PruebaWiki ] - es.publictestwiki.com
[17:42:51] Hi!! I'm here: https://es.publictestwiki.com/wiki/PruebaWiki:Solicitudes_de_permisos (requesting permissions)
[17:42:52] [ PruebaWiki:Solicitudes de permisos - PruebaWiki ] - es.publictestwiki.com
[17:44:27] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[17:50:55] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JvOMl
[17:50:56] [miraheze/puppet] paladox cee0dc0 - gluster: Upgrade to 7.2
[17:54:09] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
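The bacula1 alert above (and the Stunnel alerts later tonight) are CHECK_NRPE timeouts: the monitoring host got no answer from the NRPE agent within the plugin's deadline, which says nothing about the state of the service being checked. A rough illustration of that failure mode (not the real check_nrpe plugin; 5666 is NRPE's default port, and the hostname is made up):

    # Illustrative only: a socket timeout means the agent did not respond
    # in time, so the check goes CRITICAL even if the backup job is fine.
    import socket

    def probe_nrpe(host: str, port: int = 5666, timeout: float = 60.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "OK: agent reachable"
        except socket.timeout:
            return "CRITICAL: Socket timeout after %d seconds" % timeout
        except OSError as err:
            return "CRITICAL: %s" % err

    print(probe_nrpe("bacula1.example"))  # hypothetical hostname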
[17:55:41] !log apt-get upgrade - lizardfs6 (gluster, php and puppet-agent + few other packages)
[17:55:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:56:05] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is WARNING: WARNING: Full, 1007552 files, 44.08GB, 2020-01-11 20:01:00 (2.7 weeks ago)
[17:59:36] RECOVERY - bacula1 Bacula Static on bacula1 is OK: OK: Full, 2336892 files, 199.8GB, 2020-01-30 17:58:00 (1.6 minutes ago)
[18:01:32] !staff vandal https://meta.miraheze.org/wiki/Special:Contributions/Traductor2020
[18:01:41] [ User contributions for Traductor2020 - Miraheze Meta ] - meta.miraheze.org
[18:02:01] RhinosF1: JohnLewis Reception123 PuppyKun SPF|Cloud
[18:02:07] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[18:02:54] !log apt-get upgrade - mw[123] (gluster, php and puppet-agent + few other packages)
[18:02:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:04:31] PROBLEM - bacula1 Current Load on bacula1 is CRITICAL: CRITICAL - load average: 2.75, 1.81, 0.82
[18:04:42] help see RecentChanges
[18:06:33] PROBLEM - bacula1 Current Load on bacula1 is WARNING: WARNING - load average: 1.94, 1.81, 0.95
[18:08:33] RECOVERY - bacula1 Current Load on bacula1 is OK: OK - load average: 1.27, 1.62, 0.99
[18:08:51] !ops
[18:09:32] RhinosF1: JohnLewis Reception123 PuppyKun SPF|Cloud paladox See https://meta.miraheze.org/wiki/Special:Contributions/Traductor2020 vandal
[18:09:36] [ User contributions for Traductor2020 - Miraheze Meta ] - meta.miraheze.org
[18:09:50] https://meta.miraheze.org/wiki/User_talk:Traductor2020 is this the block message???
[18:09:51] [ User talk:Traductor2020 - Miraheze Meta ] - meta.miraheze.org
[18:09:57] nothing I can do, only stewards/admins can act.
[18:10:21] oops, sorry paladox
[18:11:09] I got confused paladox thanks also :)
[18:11:16] ok :)
[18:11:20] hispano76 Have you blocked me on Miraheze???
[18:13:26] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:13:39] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 4431 files, 9.358MB, 2020-01-30 18:13:00 (39.0 seconds ago)
[18:17:29] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:20:05] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is WARNING: WARNING: Full, 1007552 files, 44.08GB, 2020-01-11 20:01:00 (2.7 weeks ago)
[18:22:38] Voidwalker What are you doing?
[18:23:15] Voidwalker YOU HAVE BLOCKED ME!!!
[18:23:24] Voidwalker Try to block me on IRC.
[18:23:36] aight
[18:24:08] >try to block me on irc
[18:24:09] lol
[18:45:17] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 2 minutes ago with 6 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80],Exec[ufw-allow-tcp-from-any-to-any-port-443],Exec[ufw-allow-tcp-from-54.36.165.161-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81]
[18:53:09] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 29 seconds ago with 0 failures
[19:46:30] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 6 failures. Last run 3 minutes ago with 6 failures. Failed resources (up to 3 shown): Package[openssh-client],Package[openssh-server],Service[ssh],Service[postfix]
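One-off Puppet failures on Package[...] and Service[...] resources, like the 19:46 one above, often just mean something else (an apt run, unattended-upgrades) held the dpkg lock while the agent ran; that is a guess here, not something this log confirms. A sketch of testing for a held lock, using the same fcntl mechanism apt and dpkg use:

    # Sketch under an assumed diagnosis: if another process holds the dpkg
    # lock, package operations fail until it is released. Needs root, since
    # /var/lib/dpkg/lock is not world-writable.
    import fcntl

    def dpkg_locked(path: str = "/var/lib/dpkg/lock") -> bool:
        with open(path, "w") as f:
            try:
                fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except OSError:
                return True   # apt/dpkg (or another agent run) holds it
            fcntl.lockf(f, fcntl.LOCK_UN)
            return False

    print("dpkg busy:", dpkg_locked())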
[19:54:26] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:55:06] RECOVERY - bacula1 Bacula Databases db4 on bacula1 is OK: OK: Diff, 75994 files, 41.57GB, 2020-01-30 19:53:00 (2.1 minutes ago)
[20:16:31] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 3 minutes ago with 3 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-185.52.1.75-to-any-port-81],Exec[ufw-allow-tcp-from-185.52.2.113-to-any-port-81],Exec[ufw-allow-tcp-from-81.4.121.113-to-any-port-81]
[20:24:31] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:35:44] Voidwalker: what’s the status on socks?
[20:36:50] Voidwalker: ATS on testwiki.wiki is known to interact with himself
[20:37:18] And can we check the vandal hispano76 reported and was seen on IRC
[20:39:03] That meta sock is them based on behaviour, I’d say
[20:52:01] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:52:29] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4208 bytes in 0.044 second response time
[20:52:35] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:53:04] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:53:20] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:53:21] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:53:32] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 2054 bytes in 9.990 second response time
[20:53:38] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.046 second response time
[20:53:43] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 89%
[20:53:48] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 4 backends are down. lizardfs6 mw1 mw2 mw3
[20:53:54] hmm
[20:53:54] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 70%
[20:53:59] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:54:00] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 69%
[20:54:00] PROBLEM - cp2 HTTPS on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4140 bytes in 0.398 second response time
[20:54:05] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 15296 bytes in 0.004 second response time
[20:54:06] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:54:07] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 4 backends are down. lizardfs6 mw1 mw2 mw3
[20:54:08] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 4 backends are down. lizardfs6 mw1 mw2 mw3
[20:54:30] oh
[20:54:31] bugger
[20:54:31] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 15302 bytes in 0.391 second response time
[20:54:43] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4208 bytes in 0.021 second response time
[20:54:44] xd
[20:55:02] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.392 second response time
[20:55:21] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 15295 bytes in 0.005 second response time
[20:55:21] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15296 bytes in 0.506 second response time
[20:56:03] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.023 second response time
[20:56:10] PROBLEM - misc4 phab.miraheze.wiki HTTPS on misc4 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/1.1 500 Internal Server Error
[20:56:21] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115)
[20:56:31] PROBLEM - misc4 phabricator.miraheze.org HTTPS on misc4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4215 bytes in 0.028 second response time
[20:56:42] !log stop mysql, remove bin logs and restart mysql
[20:59:41] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 57%
[21:00:15] RECOVERY - bacula1 Bacula Databases db5 on bacula1 is OK: OK: Diff, 478 files, 63.37GB, 2020-01-30 21:00:00 (14.0 seconds ago)
[21:00:34] paladox: db4 out of space?
[21:01:59] yes
[21:02:01] 8.4G left
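For the record, the fix logged at 20:56 (stop MySQL, delete the binary logs, restart) has a gentler standard equivalent: PURGE BINARY LOGS removes old logs from inside the running server and keeps the log index consistent, with no restart needed. A sketch using mysql-connector-python, with placeholder host and credentials:

    # Sketch only: reclaim binlog space without stopping the server.
    import mysql.connector

    cnx = mysql.connector.connect(host="db4.example", user="root",
                                  password="***")  # placeholders
    cur = cnx.cursor()
    cur.execute("SHOW BINARY LOGS")  # what is eating the disk?
    for row in cur.fetchall():
        print(row[0], row[1])        # log name, size in bytes
    # Drop everything older than a week (pick a horizon replication can tolerate):
    cur.execute("PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY")
    cnx.close()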
[21:02:10] RECOVERY - misc4 phab.miraheze.wiki HTTPS on misc4 is OK: HTTP OK: Status line output matched "HTTP/1.1 200" - 17718 bytes in 0.079 second response time
[21:02:21] RECOVERY - db4 MySQL on db4 is OK: Uptime: 251 Threads: 34 Questions: 3229 Slow queries: 196 Opens: 367 Flush tables: 1 Open tables: 361 Queries per second avg: 12.864
[21:02:23] paladox: hopefully we won’t have long left with this
[21:02:33] RECOVERY - misc4 phabricator.miraheze.org HTTPS on misc4 is OK: HTTP OK: HTTP/1.1 200 OK - 19067 bytes in 0.205 second response time
[21:02:33] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18691 bytes in 1.171 second response time
[21:02:37] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18690 bytes in 0.315 second response time
[21:03:07] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18691 bytes in 1.189 second response time
[21:03:30] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 4%
[21:03:30] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 4%
[21:03:34] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18691 bytes in 1.289 second response time
[21:03:38] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 7 backends are healthy
[21:03:46] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[21:03:46] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 6 backends are healthy
[21:03:51] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 2%
[21:03:52] RECOVERY - cp2 HTTPS on cp2 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1532 bytes in 0.492 second response time
[21:03:59] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:04:04] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 7 backends are healthy
[21:04:05] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18690 bytes in 1.167 second response time
[21:23:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[21:25:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:27:09] RhinosF1, from a technical standpoint, Brownlowe.2 and co are unrelated to ATS on both mh and testwiki.wiki (which I've confirmed with CU on both wikis)
[21:27:54] Voidwalker: weird, what about the vandal/IRC guy from earlier
[21:28:48] confirmed ATS, basically the same as JosueThomasDiez and related
[21:29:25] Voidwalker: weird, I’m baffled because of the interactions + timing that ATS turned up after you unblocked
[21:29:40] @Owen: see email reply
[21:30:05] looks like they were just picking on a new user
[21:31:01] @RhinosF1 emailed the wrong one
[21:33:39] PROBLEM - cp4 Current Load on cp4 is WARNING: WARNING - load average: 1.76, 1.84, 1.24
[21:35:38] RECOVERY - cp4 Current Load on cp4 is OK: OK - load average: 0.52, 1.05, 1.04
[21:35:44] Voidwalker: how strange
[21:36:11] yeah, it's rather weird, and I'm definitely gonna be keeping an eye on it
[21:36:12] @Owen: Thx, don’t forget to update on wiki
[21:36:23] Voidwalker: they emailed me so I’ll reply
[21:36:31] I won't forget 🙂
[21:38:02] Voidwalker: bcc’d ya
[21:42:12] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb
[21:44:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:53:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:55:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:04:10] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[22:06:08] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:14:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb
[22:20:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:30:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb
[22:33:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb
[22:34:28] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:35:14] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:39:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:45:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:46:37] paladox: what’s going on ^
[22:46:59] seems that's cp3
[22:47:07] and another cp
[22:47:19] How fantastic
[22:51:26] "another cp" there's 3 IPv4s listed
[22:51:30] *IPv6
[22:54:03] Guess that should have been more than one cp when I said that.
[22:54:36] well
[22:54:40] cp3, cp2 and cp4 :)
[22:55:05] It was more a hint to try and stabilise things.
[22:55:18] As long as you know what’s going on
[22:55:36] It's at a point where things stabilise themselves without intervention
[22:56:04] Good
[22:56:12] That’s dealt with then
[22:57:43] I see no reason for that to have alerted
[22:57:47] looking at syslog
[22:57:52] shows all backends were up
[22:58:18] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:58:21] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[23:00:06] oh
[23:02:13] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
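The GDNSD alerts that flapped all evening count how many cpweb addresses the DNS daemon's health checks currently consider down; as noted at 22:57, syslog showed the backends themselves were up, so the flapping points at the check path rather than the caches. The shape of the check output is simple enough to sketch (assumed plugin logic; the real check reads gdnsd's own state):

    # Assumed logic: report CRITICAL with the list of addresses gdnsd
    # currently marks down, OK when the set is empty.
    def gdnsd_status(state: dict) -> str:
        down = sorted(addr for addr, up in state.items() if not up)
        if not down:
            return "OK - all datacenters are online"
        return "CRITICAL - %d datacenter%s down: %s" % (
            len(down), " is" if len(down) == 1 else "s are", ", ".join(down))

    print(gdnsd_status({"107.191.126.23/cpweb": True,
                        "2a00:d880:5:8ea::ebc7/cpweb": False}))
    # -> CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb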
[23:02:14] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:04:23] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 15302 bytes in 4.165 second response time
[23:04:24] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:14:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[23:15:32] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[23:16:33] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[23:17:31] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:20:12] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvOd5
[23:20:13] [miraheze/services] MirahezeSSLBot 342418e - BOT: Updating services config for wikis
[23:21:22] PROBLEM - bacula1 Disk Space on bacula1 is WARNING: DISK WARNING - free space: / 51155 MB (10% inode=99%);
[23:50:15] Reception123: Excuse me. I have a question about a translation. Would the phrase "To you" be translated as "A ti"? I want to be sure I'm translating the term correctly.