[00:00:19] https://meta.miraheze.org/wiki/Help_center can the tag be moved to the top of the page or will it break stuff?
[00:00:20] [ Help center - Miraheze Meta ] - meta.miraheze.org
[00:19:36] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw2 mw3
[00:19:38] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[00:19:41] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:19:42] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:21:03] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[00:21:44] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[00:24:49] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.732 second response time
[00:25:51] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[00:25:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:25:55] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[00:25:56] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:26:52] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[00:58:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:01:29] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:13:26] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:13:33] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[01:13:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[01:13:46] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3
[01:14:39] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:15:08] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:15:52] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:15:52] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[01:16:11] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[01:18:28] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[01:18:29] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.729 second response time
[01:18:52] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.392 second response time
[01:19:16] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[01:19:16] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.039 second response time
[01:19:35] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[01:20:08] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.764 second response time
[01:20:11] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[01:20:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[01:20:25] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[01:21:02] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.395 second response time
[01:51:54] hhh
[01:53:28] Apap: Hello
[01:53:39] hey
[01:53:47] Hello apap04! If you have any questions, feel free to ask and someone should answer soon.
[01:54:11] had to change nics
[01:54:15] nicks*
[01:54:29] do /nick and then your preferred nickname
[01:56:57] Apap04: Do you want to know how to set a real name?
[01:57:07] i know ;)
[01:57:27] i know how to set one, i just set my real name as that as a joke
[01:57:57] Please don't do that as it confuses people
[01:58:03] okay
[01:58:05] me for example ;)
[02:33:34] is Mira acting up?
[02:33:36] once again
[02:33:47] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[02:33:48] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[02:33:49] yup
[02:33:58] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw3
[02:33:58] yeah
[02:34:03] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[02:34:07] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3
[02:34:31] icinga messages should relay over to the discord
[02:34:38] just a suggestion
[02:35:21] i'm heading off to bed, ill still be up... i think
[02:37:13] hmm cpu and network activity on misc2 looks unusual, but it's been unusual for a few hours
[02:42:24] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[02:42:27] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[02:42:44] yay
[02:42:44] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[02:42:45] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:42:47] it's back
[02:42:51] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[02:51:45] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[02:52:49] * PuppyKun growls at mw1
[02:53:24] oh no
[02:54:24] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[03:04:15] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[03:04:26] :(
[03:04:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb
[03:04:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[03:05:42] >fire started at Miraheze db center
[03:05:46] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[03:06:01] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[03:06:25] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[03:07:53] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[03:10:52] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.642 second response time
[03:12:49] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.759 second response time
[03:14:17] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[03:14:26] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[03:14:32] ns1 if you could not.. thanks
[03:14:37] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[03:15:05] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[03:15:13] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[03:15:34] PuppyKun, you wanna investigate?
[03:29:44] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3
[03:32:28] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[03:41:15] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[03:43:49] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[04:28:41] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[04:28:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[04:30:01] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[04:30:15] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[04:30:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[04:31:40] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[04:32:39] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[04:32:45] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[04:32:57] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[04:33:45] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[04:55:23] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[04:55:24] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[04:57:53] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[04:57:54] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[05:06:21] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:07:28] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:07:28] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[05:07:56] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[05:08:00] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[05:08:30] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:09:08] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[05:09:09] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[05:09:47] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24592 bytes in 0.390 second response time
[05:10:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[05:10:35] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24592 bytes in 0.004 second response time
[05:10:52] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[05:11:04] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[05:11:13] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.003 second response time
[05:11:36] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[05:11:37] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[05:55:09] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[05:57:24] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[06:26:54] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3088 MB (12% inode=94%);
[08:57:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[09:00:35] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[11:03:22] last night sounded fun
[11:06:13] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[11:15:10] paladox ^ (i know you contributed to puppet before, so i guess you can help?)
[11:15:50] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[12:04:20] apap: it recovered
[12:29:07] PROBLEM - cp3 Stunnel Http for misc2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:31:52] RECOVERY - cp3 Stunnel Http for misc2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 1.007 second response time
[13:07:18] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb
[13:10:14] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:37:58] [miraheze/CreateWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/Jen1O
[14:38:00] [miraheze/CreateWiki] translatewiki bf5c8bc - Localisation updates from https://translatewiki.net.
[14:38:01] [ Main page - translatewiki.net ] - translatewiki.net.
[14:38:01] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/Jen13
[14:38:03] [miraheze/ManageWiki] translatewiki 17fd33f - Localisation updates from https://translatewiki.net.
[14:38:04] [ Main page - translatewiki.net ] - translatewiki.net.
[15:36:56] oops
[16:05:41] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[16:08:07] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[16:33:43] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/JenDM
[16:33:45] [miraheze/ssl] Reception123 a34d02f - add wiki.isina.ir cert
[16:39:04] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JenD7
[16:39:05] [miraheze/puppet] paladox dde3748 - lizardfs: Reduce MASTER_TIMEOUT to 20s
[16:41:17] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JenDj
[16:41:19] [miraheze/puppet] paladox 7a878a1 - lizardfs: Set CHUNKS_LOOP_MAX_CPU to 60
[16:42:37] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/Jenye
[16:42:39] [miraheze/ssl] Reception123 e02706f - add wiki.starship.digital cert
[16:44:21] !log restart lizardfs-chunkserver on lizardfs[45]
[16:44:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:45:07] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Jenyf
[16:45:09] [miraheze/services] MirahezeSSLBot c7e2b7c - BOT: Updating services config for wikis
[16:53:24] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:54:07] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:55:27] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:55:44] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:55:51] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:55:56] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[16:56:10] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
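[Note: the two lizardfs puppet commits above (16:39 and 16:41) tune LizardFS daemon settings. A rough sketch of how the rendered settings would read is below; the config file locations are the stock LizardFS ones and may not match Miraheze's puppet layout, so treat the paths as assumptions.]

    # chunkserver setting from the 16:39 commit
    # (typically /etc/lizardfs/mfschunkserver.cfg or /etc/mfs/mfschunkserver.cfg, path assumed)
    # seconds a chunkserver waits on its master connection before timing out
    MASTER_TIMEOUT = 20

    # master setting from the 16:41 commit
    # (typically /etc/lizardfs/mfsmaster.cfg or /etc/mfs/mfsmaster.cfg, path assumed)
    # cap CPU usage of the master's chunk maintenance loop at 60%
    CHUNKS_LOOP_MAX_CPU = 60

[The 16:44 !log restart of lizardfs-chunkserver on lizardfs[45] is presumably what picks up the new chunkserver value.]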
[17:09:24] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2648 MB (10% inode=94%);
[17:52:21] [ANNOUNCEMENT] Channel Operators please see: https://git.io/JenEA - Ping Zppix, or join #ZppixBot with any questions and thanks for your cooperation
[18:56:48] [miraheze/mediawiki] paladox pushed 1 commit to REL1_33 [+0/-0/±1] https://git.io/Jen9r
[18:56:50] [miraheze/mediawiki] paladox fcabe0f - Update CheckUser
[19:26:55] !log root@mw2:/home/paladox# sudo -u www-data php /srv/mediawiki/w/extensions/CreateWiki/maintenance/renameWiki.php --wiki=loginwiki --rename flawlessfandomuserswiki incrediblewikisanduserswiki paladox - T4755
[19:26:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[19:30:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JenHe
[19:30:10] [miraheze/services] MirahezeSSLBot 1069459 - BOT: Updating services config for wikis
[19:46:21] hello
[20:38:06] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[20:38:08] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[20:38:13] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[20:38:30] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:41:12] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[20:41:12] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[20:41:14] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[20:41:27] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24592 bytes in 0.390 second response time
[21:32:09] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:34:34] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is WARNING: WARNING: Diff, 375 files, 24.02GB, 2019-09-15 02:28:00 (2.4 weeks ago)
[21:37:15] PROBLEM - ICINGA complaint quota exceeded, complaint count: over 9000, quota: -9000
[21:37:26] :P
[22:36:27] !log update flow_revision set rev_user_wiki = 'incrediblewikisanduserswiki' where rev_user_wiki = 'flawlessfandomuserswiki';
[22:36:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:38:59] !log update flow_wiki_ref set ref_src_wiki = '' where ref_src_wiki = 'flawlessfandomus';
[22:39:04] Paladox: Do you know how to set an automatic git fetch?
[22:39:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:39:18] Examknow yes, you can use a cron
[22:39:22] or a systemd timer
[22:39:28] ok thx
[22:40:30] !log update flow_workflow set workflow_wiki = 'incrediblewikisanduserswiki' where workflow_wiki = 'flawlessfandomuserswiki';
[22:40:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:42:28] !log update flow_tree_revision set tree_orig_user_wiki = 'incrediblewikisanduserswiki' where tree_orig_user_wiki = 'flawlessfandomuserswiki';
[22:42:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:43:54] !log update flow_ext_ref set ref_src_wiki = '' where ref_src_wiki = 'flawlessfandomus';
[22:43:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:11:59] !log update flow_revision set rev_mod_user_wiki = 'incrediblewikisanduserswiki' where rev_mod_user_wiki = 'flawlessfandomuserswiki';
[23:12:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:13:07] !log update flow_revision set rev_edit_user_wiki = 'incrediblewikisanduserswiki' where rev_edit_user_wiki = 'flawlessfandomuserswiki';
[23:13:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
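[Note: on the 22:39 question about automating git fetch, a minimal sketch of both suggestions (cron and a systemd timer) follows. The repository path /srv/example-repo, the 15-minute interval, and the unit name git-fetch are placeholders, not anything from Miraheze's actual setup.]

    # cron variant: run `git fetch` every 15 minutes (add via `crontab -e` as the repo's owner)
    */15 * * * * cd /srv/example-repo && /usr/bin/git fetch --all --prune

    # systemd variant: /etc/systemd/system/git-fetch.service
    [Unit]
    Description=git fetch for /srv/example-repo

    [Service]
    Type=oneshot
    WorkingDirectory=/srv/example-repo
    ExecStart=/usr/bin/git fetch --all --prune

    # /etc/systemd/system/git-fetch.timer
    [Unit]
    Description=Run git-fetch.service every 15 minutes

    [Timer]
    OnCalendar=*:0/15
    Persistent=true

    [Install]
    WantedBy=timers.target

    # activate with: systemctl daemon-reload && systemctl enable --now git-fetch.timer

[One point in favour of the timer: failures are visible via `systemctl status git-fetch.service` and the journal, rather than relying on cron mail.]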