[00:01:20] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 1 backends are down. mw4
[00:01:34] PROBLEM - cp4 Stunnel Http for mw4 on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 328 bytes in 4.031 second response time
[00:03:16] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy
[00:06:51] RECOVERY - cp8 Stunnel Http for mw4 on cp8 is OK: HTTP OK: HTTP/1.1 200 OK - 15302 bytes in 0.310 second response time
[00:06:58] RECOVERY - cp3 Stunnel Http for mw4 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15296 bytes in 0.749 second response time
[00:07:31] RECOVERY - cp4 Stunnel Http for mw4 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.079 second response time
[00:07:34] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy
[00:08:31] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy
[00:13:06] PROBLEM - cp8 Disk Space on cp8 is WARNING: DISK WARNING - free space: / 2114 MB (10% inode=93%);
[00:33:12] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 2.04, 1.94, 1.42
[00:35:12] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 1.99, 1.85, 1.44
[00:37:13] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.56, 1.37, 1.32
[00:43:18] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 2.16, 2.67, 1.88
[00:47:19] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.69, 1.58, 1.61
[03:40:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvEMl
[03:40:09] [miraheze/services] MirahezeSSLBot d19488c - BOT: Updating services config for wikis
[05:03:07] RECOVERY - cp8 Disk Space on cp8 is OK: DISK OK - free space: / 3706 MB (19% inode=93%);
[06:25:31] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2739 MB (11% inode=94%);
[08:43:51] PROBLEM - wiki.valentinaproject.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.valentinaproject.org' expires in 15 day(s) (Thu 12 Mar 2020 08:40:10 AM GMT +0000).
[08:44:05] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvE53
[08:44:06] [miraheze/ssl] MirahezeSSLBot 40ee168 - Bot: Update SSL cert for wiki.valentinaproject.org
[08:53:55] RECOVERY - wiki.valentinaproject.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.valentinaproject.org' will expire on Mon 25 May 2020 07:43:58 AM GMT +0000.
[09:16:39] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 4 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[php7.3-redis]
[09:22:35] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[09:51:25] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[09:55:18] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:59:25] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[10:03:27] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[10:13:27] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[10:15:24] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[10:23:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[10:25:20] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[10:32:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[10:34:38] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[10:38:45] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[10:40:43] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:08:53] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[11:10:55] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:15:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[11:15:59] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[11:24:41] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[11:27:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:37:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[11:41:23] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[11:42:14] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[11:44:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[11:45:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[11:46:42] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[11:50:58] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy
[11:56:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[12:00:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvENN
[12:00:09] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[12:00:10] [miraheze/services] MirahezeSSLBot 3bc8d44 - BOT: Updating services config for wikis
[12:04:11] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[12:06:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:10:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[12:18:01] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[wiki.staraves-no.cz_private]
[12:18:59] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:23:12] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[12:28:41] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[12:30:32] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[12:31:34] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:32:50] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[12:40:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[12:42:57] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:49:12] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/logrotate.d/puppet]
[12:51:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[12:55:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[12:59:29] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[13:01:12] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[13:01:30] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:05:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[13:11:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[13:15:33] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[13:17:45] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[13:25:24] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[13:29:20] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[13:36:56] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[13:37:42] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[13:41:09] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:53:31] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[14:01:42] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:05:45] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[14:17:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[14:22:35] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:26:38] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy
[14:29:48] Hello MirahezeCreator7! If you have any questions, feel free to ask and someone should answer soon.
[14:30:04] I can create a Miraheze ToolForge with SSH Access, Kubernetes Cluster Access, etc??? Also, will have a CLI to run a an app from Miraheze Docker Registry???
[14:34:14] I can create a Miraheze ToolForge with SSH Access, Kubernetes Cluster Access, etc??? Also, will have a CLI to run a an app from Miraheze Docker Registry???
[14:37:42] I can create a Miraheze ToolForge with SSH Access, Kubernetes Cluster Access, etc??? Also, will have a CLI to run a an app from Miraheze Docker Registry???
[14:38:46] hello
[14:39:42] I can create a Miraheze ToolForge with SSH Access, Kubernetes Cluster Access, etc??? Also, will have a CLI to run a an app from Miraheze Docker Registry??? Vermont
[14:40:10] MirahezeCreator7: what language do you speak?
[14:40:20] Vermont Spanish
[14:40:27] Okay.
[14:40:52] I'm not a Miraheze expert, but...
[14:41:06] Zppix, do you speak Spanish?
[14:41:32] Vermont Can I create a Miraheze ToolForge to deploy applications such as bots, etc., just like on Wikimedia Toolforge, but on Kubernetes, with a CLI called toolforge???
[14:41:43] I don't know
[14:42:51] and, please, stop with all the “???”
[14:43:15] Vermont Can I create a Miraheze ToolForge to deploy applications such as bots, etc., just like on Wikimedia Toolforge, but on Kubernetes, with a CLI called toolforge?
[14:43:20] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:44:02] * Vermont sighs
[14:44:09] Vermont Can I create a Miraheze ToolForge to deploy applications such as bots, etc., just like on Wikimedia Toolforge, but on Kubernetes, with a CLI called toolforge???
[14:44:38] MirahezeCreator7: Repeating your comment at me will not make me magically know how to help you
[14:45:25] Vermont Yes, it's possible.
[14:46:05] in your dreams
[14:46:55] Vermont Where? In what dream?
[14:47:05] I mean, on what server? Vermont
[14:47:57] Reception123: here?
[14:48:34] Vermont: yes?
[14:48:49] can you help this person
[14:49:15] Reception123 I can create a Miraheze Toolforge with CLI toolforge to deploy apps such as bots webapps etc to Kubernetes???
[14:49:29] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[14:49:36] I'm not sure what you mean, but we don't have a toolforge
[14:50:09] And there are no plans for one at the moment
[14:50:14] Wikimedia have Toolforge???
[14:51:31] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:52:28] Reception123 Wikimedia have Toolforge???
[14:53:11] MirahezeCreator7: yes, Wikimedia has one. Miraheze doesn't
[14:53:54] MirahezeCreator7: https://tools.wmflabs.org/admin/
[14:53:55] [ Wikimedia Toolforge ] - tools.wmflabs.org
[14:54:42] Reception123 I cannot register in Wikimedia Toolforge, but when Miraheze launch it, i will register it???
[14:55:07] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvEjB
[14:55:08] [miraheze/services] MirahezeSSLBot 17563db - BOT: Updating services config for wikis
[14:55:19] MirahezeCreator7: There are no plans for a Miraheze toolforge, as I already said
[14:55:31] It would be appreciated if you didn't repeat the same questions
[14:56:15] Reception123 When launchs Miraheze Toolforge i will launch an Bot for an Miraheze Wiki???
[14:56:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:57:51] MirahezeCreator7: I've told you three times, there is no planned launch for a toolforge here. Please stop spamming the same request.
[14:59:42] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:01:41] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:07:52] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:11:52] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:15:51] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:17:50] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:18:31] Reception123 When i create a Kubernetes Cluster and an main server with CLI called toolforge i will put in DNS nameservers with *.tools.miraheze.org???
[15:18:52] Reception123 SORRY
[15:19:17] Reception123 You cannot ban me
[15:19:18] Reception123: ^
[15:19:30] RhinosF1 ???
[15:21:28] grumble: ^
[15:21:39] Hello dj2020! If you have any questions, feel free to ask and someone should answer soon.
[15:21:49] Reception123 Hello
[15:22:18] dj2020: you have been banned from this channel
[15:22:34] Reception123 Banned???
[15:22:45] Reception123 Try me
[15:31:53] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[15:32:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[15:34:48] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:35:50] Hello kruxy! If you have any questions, feel free to ask and someone should answer soon.
[15:38:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:42:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.161.32.127/cpweb
[15:42:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:42:22] Vermont: my Spanish is not good
[15:43:19] Zppix: same for me :P
[15:43:30] I can understand a bit but not really speak
[15:44:14] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:53:46] I can try to understand
[15:53:58] Based on words i know and context
[15:54:47] Mi casa su casa
[15:54:50] Zppix: yeah same for me
[15:55:03] but when it's about speaking or writing I'm mostly lost
[15:55:44] I've been doing French for 7 years and can't speak or write properly
[15:56:05] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:58:41] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[15:59:24] RhinosF1: it is quite hard to learn
[15:59:33] so many grammar rules and all that
[15:59:56] Reception123: It really is. It doesn't help my english grammar is a mess at times.
[16:00:23] RhinosF1: heh yeah, and English is supposed to be way easier than French
[16:00:43] But yeah, French is really not an easy one
[16:00:56] Reception123: it's how my head works. Words just fall onto paper when I write.
[16:01:44] RhinosF1: so for French your main weak spot is grammar? (note: should probably take it to -offtopic)
[16:02:34] Meh
[16:02:42] Theres no other convo rn
[16:02:57] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:03:31] * RhinosF1 moved
[16:22:49] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2607:5300:205:200::17f6/cpweb
[16:24:53] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:26:15] .wmca Flaf
[16:26:16] https://meta.wikimedia.org/wiki/Special:CentralAuth/Flaf
[16:29:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[16:37:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:43:14] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[16:45:33] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[16:47:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:55:11] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:58:03] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvuJW
[16:58:05] [miraheze/services] MirahezeSSLBot 988b7c0 - BOT: Updating services config for wikis
[17:03:21] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[17:07:21] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:14:19] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[17:17:38] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[17:18:29] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:22:40] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[17:26:45] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:27:46] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:32:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[17:33:01] !log increase puppet2 ram by 1gb (so 5g in total)
[17:33:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:33:38] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JvuUw
[17:33:39] [miraheze/puppet] paladox e5cbf4e - Update puppet2.yaml
[17:37:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:39:12] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[17:41:32] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JvuU9
[17:41:33] [miraheze/puppet] paladox 72fc824 - Revert "Update puppet2.yaml" This reverts commit e5cbf4eec4c8905a6948edcfc516814ead0bc80f.
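The puppet2 RAM change above was recorded as a commit to puppet2.yaml, and the log does not show the command that resized the VM itself. On a Proxmox-style host (the /var/lib/vz/images paths and vm-102 disk names later in the log suggest one, and the 17:48 entry shows puppet2 is VM 102), the allocation could be adjusted with qm roughly as follows; this is a hedged illustration, not the command actually used:

    # Hypothetical sketch only: resize puppet2 (VM 102) on a Proxmox host.
    qm set 102 --memory 5120   # 5 GiB, matching the 17:33 "increase puppet2 ram by 1gb (so 5g in total)"
    qm set 102 --memory 4096   # back to 4 GiB, matching the 17:42 downgrade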
[17:42:06] !log downgrade puppet2 ram back to 4g
[17:42:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:45:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[17:47:12] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:47:23] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:48:08] !log converting puppet2 to a raw image (qemu-img convert vm-102-disk-0.qcow2 vm-102-disk-0.raw)
[17:48:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:03:05] !log root@cloud1:/var/lib/vz/images/102# rm vm-102-disk-0.qcow2 (it's been converted to raw)
[18:03:05] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[18:03:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:03:30] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[18:06:17] !log root@cloud2:/var/lib/vz/images/104# qemu-img convert vm-104-disk-0.qcow2 vm-104-disk-0.raw
[18:06:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:07:07] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[18:09:37] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[18:15:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Jvukf
[18:15:11] [miraheze/services] MirahezeSSLBot 69cf1ae - BOT: Updating services config for wikis
[18:26:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[18:33:00] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[18:37:13] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 2607:5300:205:200::17f6/cpweb
[18:39:11] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[18:42:49] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2607:5300:205:200::17f6/cpweb
[18:43:43] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2a00:d880:5:8ea::ebc7/cpweb, 2607:5300:205:200::17f6/cpweb
[18:49:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[18:50:03] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[19:21:42] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[19:22:42] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[19:23:40] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[19:24:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[19:47:31] !log root@db4:/var/log/mysql# sysctl net.ipv4.tcp_tw_reuse=1
[19:47:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[19:48:58] !log root@db4:/var/log/mysql# sysctl net.ipv4.tcp_fin_timeout=3
[19:49:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[19:52:31] !log root@cp7:/home/paladox# sysctl net.core.somaxconn=4000
[19:52:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
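The 19:47-19:52 sysctl entries above are runtime-only TCP tuning on db4 and cp7: tcp_tw_reuse allows TIME_WAIT sockets to be reused for new outgoing connections, tcp_fin_timeout shortens how long orphaned connections sit in FIN_WAIT_2, and somaxconn raises the listen-backlog ceiling. A minimal sketch of that pattern is below; the persistence step is an assumption rather than something done in the log (the db4 values are reverted at 20:15 further down):

    # Runtime TCP tuning, values as logged on db4 and cp7; not persistent across reboots.
    sysctl net.ipv4.tcp_tw_reuse=1      # reuse TIME_WAIT sockets for new outgoing connections
    sysctl net.ipv4.tcp_fin_timeout=3   # drop FIN_WAIT_2 sockets after 3s instead of the 60s default
    sysctl net.core.somaxconn=4000      # larger accept/listen backlog (applied on cp7)
    # To keep such settings after a reboot one would typically write them to a file under
    # /etc/sysctl.d/ and run `sysctl --system` (assumed step, not shown in the log).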
[19:55:26] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 0.58, 1.76, 1.38
[19:57:32] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 1.69, 1.69, 1.39
[20:15:15] !log root@db4:/var/log/mysql# sysctl net.ipv4.tcp_fin_timeout=60
[20:15:23] !log root@db4:/var/log/mysql# sysctl net.ipv4.tcp_tw_reuse=0
[20:15:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:15:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:44:44] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[20:46:46] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:05:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JvumA
[21:05:09] [miraheze/services] MirahezeSSLBot 38c6f1b - BOT: Updating services config for wikis
[21:18:03] !log root@cloud2:/var/lib/vz/images/104# rm vm-104-disk-0.qcow2
[21:18:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[21:20:34] !log root@cloud2:/var/lib/vz/images/119# qemu-img convert vm-119-disk-0.qcow2 vm-119-disk-0.raw
[21:20:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[21:23:24] PROBLEM - cp4 Stunnel Http for mon1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:23:27] PROBLEM - cp8 Stunnel Http for mon1 on cp8 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:23:32] me ^
[21:24:28] PROBLEM - cp3 Stunnel Http for mon1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:25:01] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 51.161.32.127/cpweb
[21:26:19] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[21:27:04] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[21:27:52] paladox: database server having issues?
[21:28:01] SPF|Cloud db4?
[21:28:16] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:28:21] phab, grafana and matomo aren't performing that well
[21:28:24] oh
[21:28:30] SPF|Cloud that'll be due to mon1 :)
[21:28:32] see the log above
[21:28:38] RECOVERY - cp3 Stunnel Http for mon1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 29496 bytes in 1.020 second response time
[21:28:39] i was migrating it to a raw image
[21:28:53] and i wanted it to be safe, so i stoped the vps, and did it.
[21:29:02] SPF|Cloud should work now
[21:29:29] RECOVERY - cp8 Stunnel Http for mon1 on cp8 is OK: HTTP OK: HTTP/1.1 200 OK - 29527 bytes in 0.313 second response time
[21:29:29] RECOVERY - cp4 Stunnel Http for mon1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 29496 bytes in 0.092 second response time
[21:30:03] SPF|Cloud phab's phab1, so that shouldn't have been affected.
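The qemu-img entries at 17:48-21:20 and the mon1 blip just discussed follow the same migration pattern: stop the guest, convert the qcow2 disk to raw, switch the VM to the new image, and only then delete the old file. A sketch of that workflow under those assumptions, using VM 102 and the /var/lib/vz/images layout from the log; the qm stop and qemu-img info steps are illustrative additions rather than commands copied from it:

    VMID=102
    cd /var/lib/vz/images/$VMID
    qm stop $VMID                       # quiesce the guest first ("i stoped the vps, and did it")
    qemu-img convert -f qcow2 -O raw vm-$VMID-disk-0.qcow2 vm-$VMID-disk-0.raw   # as logged, with explicit format flags
    qemu-img info vm-$VMID-disk-0.raw   # sanity-check the converted image
    # repoint the VM config at the .raw file and start it; once confirmed working:
    rm vm-$VMID-disk-0.qcow2            # matches the 18:03 cleanup entry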
[21:32:56] PROBLEM - mon1 Puppet on mon1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 3 days ago with 0 failures PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 4.91, 4.82, 4.79
[21:33:33] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 42%
[21:33:51] !log root@cloud1:/var/lib/vz/images/112# qemu-img convert vm-112-disk-0.qcow2 vm-112-disk-0.raw
[21:34:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[21:34:07] paladox: why is irc echo combing alerts (see the mon1 puppet alert and see how it also says something about load on jobrunner1)
[21:34:53] heh, that'll be because icinga... oh
[21:34:59] has started on mon1
[21:35:56] Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Puppet::Parser::Compiler failed with error NoMethodError: undefined method `resource' for nil:NilClass on node mon1.miraheze.org
[21:35:58] hmm
[21:36:16] I thought you disable icinga on mon1 due to conflict with misc paladox
[21:36:32] i did, but i stopped and started mon1
[21:36:39] due to a conversion to raw image :)
[21:36:54] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:40:22] PROBLEM - jobrunner1 JobRunner Service on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:40:40] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:41:39] PROBLEM - jobrunner1 Disk Space on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:41:49] PROBLEM - jobrunner1 JobChron Service on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:41:58] PROBLEM - jobrunner1 php-fpm on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:42:08] PROBLEM - jobrunner1 Redis Process on jobrunner1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:42:50] PROBLEM - jobrunner1 SSH on jobrunner1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:43:34] PROBLEM - ping4 on jobrunner1 is CRITICAL: PING CRITICAL - Packet loss = 100%
[21:44:46] PROBLEM - Host jobrunner1 is DOWN: PING CRITICAL - Packet loss = 100%
[21:48:26] RECOVERY - Host jobrunner1 is UP: PING OK - Packet loss = 0%, RTA = 0.26 ms
[21:48:28] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 16 minutes ago with 0 failures
[21:48:30] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[21:48:44] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[21:48:57] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 0.23, 0.08, 0.03
[21:49:01] RECOVERY - jobrunner1 Disk Space on jobrunner1 is OK: DISK OK - free space: / 18047 MB (63% inode=83%);
[21:49:09] RECOVERY - jobrunner1 php-fpm on jobrunner1 is OK: PROCS OK: 7 processes with command name 'php-fpm7.3'
[21:49:28] RECOVERY - jobrunner1 Redis Process on jobrunner1 is OK: PROCS OK: 1 process with args 'redis-server'
[21:49:47] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:49:50] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[21:49:52] RECOVERY - jobrunner1 SSH on jobrunner1 is OK: SSH OK - OpenSSH_7.9p1 Debian-10+deb10u2 (protocol 2.0)
[21:50:50] RECOVERY - ping4 on jobrunner1 is OK: PING OK - Packet loss = 0%, RTA = 0.35 ms
[21:51:17] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[21:51:51] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18701 bytes in 9.289 second response time
[21:51:59] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[21:52:35] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 39%
[21:53:52] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:54:06] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy
[21:54:58] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:55:32] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:55:41] PROBLEM - test2 Puppet on test2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:55:42] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:55:50] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[21:56:14] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:56:23] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
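The Puppet alerts in this stretch describe two different agent-side states: "Puppet is currently disabled, message: paladox" means the agent lock was set with a reason, while "Failed to apply catalog, zero resources tracked" usually means catalog compilation or application aborted (as with the 21:35 Error 500 above). A hedged sketch of the commands typically used to set, clear, and reproduce those states on an affected host; none of them are taken from this log, and the disable reason shown is hypothetical:

    puppet agent --disable "icinga conflicts with misc1"   # records the reason that Icinga later reports
    puppet agent --enable                                   # clears the lock again
    puppet agent --test --noop   # request and compile a catalog without applying it, surfacing
                                 # server-side failures like the NoMethodError seen at 21:35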
[21:56:23] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:56:38] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:56:38] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:56:52] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:56:53] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:57:05] that's me
[21:59:56] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[22:00:10] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18700 bytes in 0.418 second response time
[22:01:26] RECOVERY - jobrunner1 JobChron Service on jobrunner1 is OK: PROCS OK: 1 process with args 'redisJobChronService'
[22:01:48] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures
[22:01:49] RECOVERY - test2 Puppet on test2 is OK: OK: Puppet is currently enabled, last run 36 seconds ago with 0 failures
[22:01:55] PROBLEM - jobrunner1 MediaWiki Rendering on jobrunner1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:02:14] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[22:02:25] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:02:33] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:02:47] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:02] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:02] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:03:10] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:03:15] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:03:42] RECOVERY - jobrunner1 JobRunner Service on jobrunner1 is OK: PROCS OK: 1 process with args 'redisJobRunnerService'
[22:03:56] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:04:14] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:04:52] RECOVERY - jobrunner1 MediaWiki Rendering on jobrunner1 is OK: HTTP OK: HTTP/1.1 200 OK - 18700 bytes in 0.511 second response time
[22:07:51] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 81.4.109.133/cpweb
[22:08:13] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2a00:d880:5:8ea::ebc7/cpweb
[22:11:05] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 4.03, 3.25, 1.82
[22:13:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:14:10] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 3.67, 3.54, 2.21
[22:15:01] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[22:16:13] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:19:55] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 3.16, 3.37, 2.55
[22:20:34] Hi, once more a question:
[22:22:18] I've generated and saved a dump of a wiki yesterday. After some edits today I just wanted to generate another dump. But I got a message "You are only allowed to generate this 1 many dumps." I have hard time understanding the meaning of that sentence. Can you please elaborate?
[22:23:09] uh, that sounds like someone mangled the system messages, paladox :)
[22:23:13] flu-pm hi, you delete the dump, then click generate again.
[22:23:26] oh
[22:24:02] I have no idea where I could delete a dump?
[22:24:30] [miraheze/DataDump] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/Jvu3z
[22:24:32] [miraheze/DataDump] paladox a0379cf - Fix i18n
[22:24:33] [DataDump] paladox created branch paladox-patch-2 - https://git.io/fhhKV
[22:24:35] [DataDump] paladox opened pull request #4: Fix i18n - https://git.io/Jvu3g
[22:24:56] flu-pm there should be a delete dump button (on the screen you see for viewing the download link for the dump)
[22:25:14] if not you need to grant your self the delete dump right through ManageWikiPermissions
[22:25:19] Voidwalker LGTY? ^
[22:25:39] Just looking that up ...
[22:25:57] yup
[22:26:10] [DataDump] paladox closed pull request #4: Fix i18n - https://git.io/Jvu3g
[22:26:12] [miraheze/DataDump] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jvu3V
[22:26:13] [miraheze/DataDump] paladox 33f3edf - Fix i18n (#4)
[22:26:15] [miraheze/DataDump] paladox deleted branch paladox-patch-2
[22:26:16] [DataDump] paladox deleted branch paladox-patch-2 - https://git.io/fhhKV
[22:26:22] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 3.64, 3.62, 2.97
[22:26:27] Success deleting. Now trying to generate another one ...
[22:26:58] [miraheze/DataDump] paladox pushed 2 commits to paladox-patch-1 [+0/-0/±2] https://git.io/Jvu3o
[22:26:59] [miraheze/DataDump] paladox e98ea33 - Merge branch 'master' into paladox-patch-1
[22:27:01] [DataDump] paladox synchronize pull request #3: Redesign DataDump - https://git.io/JvIMk
[22:27:54] Great, that worked! Thank you guys!
[22:28:24] yw :)
[22:28:42] Bye!
[22:30:28] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures
[22:32:09] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 3.29, 1.99, 1.41
[22:34:24] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:34:33] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:34:56] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:35:17] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:35:17] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:35:50] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:36:32] PROBLEM - test2 Puppet on test2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:37:02] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:44:11] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 1.38, 1.99, 1.91
[22:49:02] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[22:50:10] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 1.51, 1.44, 1.67
[22:52:00] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 39 seconds ago with 0 failures
[22:52:05] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 4.28, 3.90, 3.61
[22:52:28] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[22:52:28] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 39 seconds ago with 0 failures
[22:53:09] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:53:22] RECOVERY - test2 Puppet on test2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:53:59] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:54:12] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:54:13] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:54:56] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 3.87, 3.80, 3.62
[23:00:49] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 4.27, 4.02, 3.77
[23:03:55] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 3.86, 3.99, 3.81
[23:14:25] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:14:52] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:15:18] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:15:19] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:15:59] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:16:15] PROBLEM - phab1 Puppet on phab1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:16:25] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:16:48] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:16:48] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:17:06] PROBLEM - test2 Puppet on test2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:21:00] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb
[23:22:15] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.161.32.127/cpweb
[23:23:40] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[23:24:13] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:30:07] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2607:5300:205:200::17f6/cpweb
[23:31:50] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[23:31:55] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 7 seconds ago with 0 failures
[23:32:02] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[23:32:07] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 16 seconds ago with 0 failures
[23:32:26] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 16 seconds ago with 0 failures
[23:33:17] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:33:26] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 3.66, 2.40, 1.53
[23:33:37] RECOVERY - phab1 Puppet on phab1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[23:33:50] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[23:33:51] RECOVERY - test2 Puppet on test2 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[23:34:00] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[23:34:00] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[23:35:38] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 4.06, 3.81, 3.74
[23:37:28] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 1.41, 1.93, 1.54
[23:38:21] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 3.90, 3.85, 3.77
[23:39:27] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.55, 1.42, 1.40
[23:49:44] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 0.17, 0.05, 0.01
[23:52:05] !log increased jobrunner1 core by 1 (so 4 in total)
[23:52:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:55:35] err
[23:55:42] !log actually 3 in total
[23:55:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:56:52] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 5 minutes ago with 2 failures. Failed resources (up to 3 shown)