[02:34:19] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.77, 3.07, 2.37
[02:38:19] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 3.29, 3.20, 2.58
[03:02:19] PROBLEM - misc2 Current Load on misc2 is CRITICAL: CRITICAL - load average: 5.05, 3.90, 3.22
[03:04:19] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.15, 3.56, 3.17
[03:08:19] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 2.24, 3.05, 3.06
[03:32:19] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.58, 3.17, 2.95
[03:34:19] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 2.55, 2.87, 2.86
[04:26:22] PROBLEM - sahitya.shaunak.in - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'sahitya.shaunak.in' expires in 15 day(s) (Fri 05 Jul 2019 04:24:19 AM GMT +0000).
[04:26:36] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjVOd
[04:26:37] [miraheze/ssl] MirahezeSSLBot ebb5fd3 - Bot: Update SSL cert for sahitya.shaunak.in
[04:34:22] RECOVERY - sahitya.shaunak.in - LetsEncrypt on sslhost is OK: OK - Certificate 'sahitya.shaunak.in' will expire on Tue 17 Sep 2019 03:26:30 AM GMT +0000.
[04:37:21] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.62, 3.43, 3.01
[04:44:59] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 2.86, 3.32, 3.16
[05:02:22] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 7.54, 2.98, 1.14
[05:02:43] PROBLEM - cp4 Current Load on cp4 is CRITICAL: CRITICAL - load average: 10.08, 4.34, 1.88
[05:02:48] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:03:26] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 48%
[05:03:26] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 10.28, 6.15, 2.60
[05:04:43] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 12 minutes ago with 0 failures
[05:05:25] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 3%
[05:07:15] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 0.53, 3.35, 2.27
[05:08:13] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.09, 1.38, 1.06
[05:10:36] PROBLEM - cp4 Current Load on cp4 is WARNING: WARNING - load average: 0.57, 1.86, 1.78
[05:12:35] RECOVERY - cp4 Current Load on cp4 is OK: OK - load average: 0.73, 1.47, 1.64
[06:02:17] PROBLEM - cp4 Current Load on cp4 is CRITICAL: CRITICAL - load average: 9.08, 4.48, 2.12
[06:02:24] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 6.39, 2.39, 0.95
[06:03:15] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 10.29, 5.29, 2.20
[06:05:09] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 2.40, 3.99, 2.08
[06:06:13] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.34, 1.49, 0.92
[06:07:07] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 0.49, 2.79, 1.87
[06:10:10] PROBLEM - cp4 Current Load on cp4 is WARNING: WARNING - load average: 1.34, 1.97, 1.90
[06:14:07] RECOVERY - cp4 Current Load on cp4 is OK: OK - load average: 0.43, 1.16, 1.58
[07:52:47] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:52:49] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:52:58] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:53:01] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown)
[07:53:11] PROBLEM - lizardfs3 Puppet on lizardfs3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:53:13] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:53:13] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:53:36] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:53:49] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 207 failures. Last run 2 minutes ago with 207 failures. Failed resources (up to 3 shown)
[07:54:05] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:06] PROBLEM - elasticsearch1 Puppet on elasticsearch1 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 3 minutes ago with 17 failures. Failed resources (up to 3 shown)
[07:54:10] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:17] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:20] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:22] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Puppet has 13 failures. Last run 3 minutes ago with 13 failures. Failed resources (up to 3 shown)
[07:54:24] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:32] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[07:54:32] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Puppet has 14 failures. Last run 3 minutes ago with 14 failures. Failed resources (up to 3 shown)
[07:55:04] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 197 failures. Last run 3 minutes ago with 197 failures. Failed resources (up to 3 shown): File[/etc/rsyslog.d],File[/etc/rsyslog.conf],File[authority certificates],File[/etc/apt/apt.conf.d/50unattended-upgrades]
[08:02:38] PROBLEM - cp4 Current Load on cp4 is CRITICAL: CRITICAL - load average: 8.71, 3.72, 1.58
[08:03:16] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 5.68, 2.75, 1.26
[08:04:17] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 8 seconds ago with 0 failures
[08:04:19] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 5.10, 2.49, 1.00
[08:04:22] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[08:04:24] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[08:04:32] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 35 seconds ago with 0 failures
[08:04:32] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 39 seconds ago with 0 failures
[08:04:47] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures
[08:04:49] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[08:05:01] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:05:11] RECOVERY - lizardfs3 Puppet on lizardfs3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:05:13] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 26 seconds ago with 0 failures
[08:05:13] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:05:36] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 5 seconds ago with 0 failures
[08:05:42] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 56 seconds ago with 0 failures
[08:06:05] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:07:09] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:08:33] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[08:08:34] PROBLEM - misc4 Disk Space on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:10:12] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[08:10:35] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[08:10:56] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:12:23] PROBLEM - netazar.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:12:47] PROBLEM - guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:13:03] PROBLEM - misc4 phd on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:13:04] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:16:03] PROBLEM - misc4 SSH on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:16:55] RECOVERY - misc4 Disk Space on misc4 is OK: DISK OK - free space: / 48367 MB (78% inode=99%);
[08:18:11] RECOVERY - misc4 SSH on misc4 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[08:18:24] RECOVERY - netazar.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.netazar.org' will expire on Mon 19 Aug 2019 08:36:02 PM GMT +0000.
[08:18:33] PROBLEM - misc4 phabricator.miraheze.org HTTPS on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:19:20] RECOVERY - cp4 HTTPS on cp4 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1498 bytes in 9.437 second response time
[08:19:28] RECOVERY - misc4 phd on misc4 is OK: PROCS OK: 1 process with args 'phd'
[08:20:29] PROBLEM - cp4 SSH on cp4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:21:16] PROBLEM - misc4 Disk Space on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:22:05] RECOVERY - elasticsearch1 Puppet on elasticsearch1 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures
[08:22:08] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:15] PROBLEM - cp4 Disk Space on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:22:19] PROBLEM - test1 SSH on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:39] PROBLEM - misc4 phab.miraheze.wiki HTTPS on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:39] PROBLEM - misc4 SSH on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:41] PROBLEM - netazar.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:47] PROBLEM - enc.for.uz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:22:53] PROBLEM - misc4 Prometheus on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:23:20] PROBLEM - test1 Disk Space on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:23:42] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:23:42] PROBLEM - test1 php-fpm on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:23:51] PROBLEM - misc4 parsoid on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:23:53] PROBLEM - misc4 phd on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:24:26] PROBLEM - Host misc4 is DOWN: PING CRITICAL - Packet loss = 100%
[08:24:31] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: misc4-fd
[08:24:45] PROBLEM - Host test1 is DOWN: PING CRITICAL - Packet loss = 100%
[08:24:49] PROBLEM - Host cp4 is DOWN: PING CRITICAL - Packet loss = 100%
[08:26:39] RECOVERY - netazar.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.netazar.org' will expire on Mon 19 Aug 2019 08:36:02 PM GMT +0000.
[08:28:09] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[08:28:54] RECOVERY - enc.for.uz - LetsEncrypt on sslhost is OK: OK - Certificate 'enc.for.uz' will expire on Sat 31 Aug 2019 02:47:02 PM GMT +0000.
[08:29:11] RECOVERY - Host test1 is UP: PING OK - Packet loss = 0%, RTA = 0.46 ms
[08:29:12] RECOVERY - Host cp4 is UP: PING OK - Packet loss = 0%, RTA = 0.76 ms
[08:29:25] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[08:29:29] RECOVERY - guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'guiasdobrasil.com.br' will expire on Tue 03 Sep 2019 02:35:01 PM GMT +0000.
[08:29:29] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[08:30:02] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 3%
[08:31:22] RECOVERY - Host misc4 is UP: PING OK - Packet loss = 0%, RTA = 0.83 ms
[08:31:29] RECOVERY - misc4 Disk Space on misc4 is OK: DISK OK - free space: / 48239 MB (78% inode=99%);
[08:31:33] RECOVERY - bacula1 Bacula Phabricator Static on bacula1 is OK: OK: Full, 79592 files, 1.998GB, 2019-06-16 02:18:00 (3.3 days ago)
[08:31:59] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 2.76, 1.96, 0.79
[08:32:10] RECOVERY - misc4 parsoid on misc4 is OK: TCP OK - 0.006 second response time on 185.52.3.121 port 8142
[08:35:48] PROBLEM - cp4 Current Load on cp4 is CRITICAL: CRITICAL - load average: 16.13, 7.83, 3.08
[08:35:55] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 5.01, 3.63, 1.73
[08:36:18] RECOVERY - misc4 phd on misc4 is OK: PROCS OK: 2 processes with args 'phd'
[08:36:22] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:37:34] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:37:53] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 2.97, 3.35, 1.85
[08:41:47] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 3.82, 3.68, 2.32
[08:43:44] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 5.13, 4.24, 2.68
[08:49:28] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 2.64, 3.55, 2.91
[08:51:24] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 4.23, 3.79, 3.06
[08:53:19] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 3.39, 3.52, 3.04
[08:55:16] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 4.88, 4.00, 3.27
[08:57:11] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 3.01, 3.48, 3.15
[09:01:23] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CRITICAL - load average: 9.92, 5.50, 3.91
[09:03:19] Reception123: any clue what's going on ^^^^
[09:06:14] (or any sysadmin)
[09:11:07] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 0.68, 3.07, 3.70
[09:13:07] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 1.01, 2.41, 3.38
[10:03:12] PROBLEM - cp4 Current Load on cp4 is WARNING: WARNING - load average: 0.54, 1.08, 1.99
[10:07:09] RECOVERY - cp4 Current Load on cp4 is OK: OK - load average: 0.74, 0.83, 1.68
[10:45:26] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 2672
[12:07:25] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[13:04:13] O/
[13:05:08] yo
[13:12:09] Voidwalker: I have to complain, discussion top won't let me use my sig since it has wikilinks that contain a pipe :(
[13:14:31] that's weird, considering I've never had a problem :P
[13:15:19] Voidwalker: I'm assuming it's the pipe at least
[13:15:48] Voidwalker: I mean my sig is pretty basic; the fanciest it gets is wikilinks
[13:16:36] oh, your problem is that you use pipes outside of links
[13:17:16] Is it possible to make it ignore any pipes after the first?
[13:17:49] use &#124; to display the pipe character
[13:18:26] Voidwalker: will it show as a pipe in the source editor? If not, that's going to drive me crazy :D
[13:18:48] nope, but it will display properly on the page
[13:19:09] K
[13:20:50] Voidwalker: "invalid raw sig, check HTML tags" upon saving prefs
[13:20:59] Oh wait
[13:21:00] I forgot ;
[13:21:18] very important, those semicolons
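
A minimal sketch of the workaround discussed above, assuming the goal is simply to swap literal pipes that sit outside [[...]] wikilinks for the HTML entity &#124; before saving the signature; the escape_bare_pipes helper and the regex below are illustrative only, not MediaWiki's actual signature validation:

import re

def escape_bare_pipes(sig: str) -> str:
    # Keep [[...]] wikilinks intact (captured by the group) and only
    # escape "|" in the plain-text pieces between them.
    parts = re.split(r"(\[\[.*?\]\])", sig)
    return "".join(
        part if part.startswith("[[") else part.replace("|", "&#124;")
        for part in parts
    )

print(escape_bare_pipes("[[User:Example|Example]] | [[User talk:Example|talk]]"))
# -> [[User:Example|Example]] &#124; [[User talk:Example|talk]]

The entity (semicolon included) is what gets stored in the raw signature, while the rendered page still shows an ordinary pipe.
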
[13:50:02] I think there was an outage this morning, hence the high load on cp4.
[13:52:56] paladox: sorry, I'll plug it back in
[13:53:01] lol
[13:53:15] paladox: I needed to charge my phone
[13:53:23] heh
[13:54:53] I see nothing in the syslog that could explain it, but looking at ramnode's service checker, it appears some of the NL* nodes have lower uptime (indicating some of them have gone down today)
[13:55:15] which would explain why we had an Icinga notification saying cp4, misc4 and test1 went down
[16:24:47] RECOVERY - test1 Disk Space on test1 is OK: DISK OK - free space: / 27485 MB (67% inode=98%);
[16:25:32] RECOVERY - test1 SSH on test1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[16:25:34] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.011 second response time
[16:25:48] !log apt-get upgrade on test1
[16:25:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:25:58] RECOVERY - test1 php-fpm on test1 is OK: PROCS OK: 3 processes with command name 'php-fpm7.3'
[16:26:09] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 50 seconds ago with 0 failures
[16:26:13] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.54, 0.19, 0.07
[18:09:19] !log upgrade puppet-agent on cp4 && apt-get upgrade on puppet1
[18:09:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:12:20] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:22] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:24] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:32] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:32] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:47] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:48] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:12:58] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:01] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:05] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:11] PROBLEM - lizardfs3 Puppet on lizardfs3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:12] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:13] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:36] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:13:42] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:14:05] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:14:05] PROBLEM - elasticsearch1 Puppet on elasticsearch1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:14:09] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:14:17] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:14:29] !log upgrade puppet-agent on mw*
[18:14:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:16:47] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 26 seconds ago with 0 failures
[18:17:12] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:17:42] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[18:22:48] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures
[18:24:22] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:24:32] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[18:24:32] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:25:50] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 356 failures. Last run 2 minutes ago with 356 failures. Failed resources (up to 3 shown)
[18:26:52] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 354 failures. Last run 3 minutes ago with 354 failures. Failed resources (up to 3 shown)
[18:27:18] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 359 failures. Last run 3 minutes ago with 359 failures. Failed resources (up to 3 shown)
[18:32:06] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 36 seconds ago with 0 failures
[18:32:09] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures
[18:32:18] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures
[18:32:20] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures
[18:32:24] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:32:51] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures
[18:32:58] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:33:01] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:33:08] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 5 seconds ago with 0 failures
[18:33:11] RECOVERY - lizardfs3 Puppet on lizardfs3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:33:12] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:33:15] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:33:36] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[18:33:42] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:35:03] [miraheze/mediawiki] paladox pushed 1 commit to REL1_33 [+0/-0/±2] https://git.io/fjVuL
[18:35:05] [miraheze/mediawiki] paladox 1d23290 - Update MM and VE
[18:36:00] [miraheze/mediawiki] paladox pushed 6 commits to REL1_33 [+0/-0/±11] https://git.io/fjVut
[18:36:02] [miraheze/mediawiki] AaronSchulz a5545ca - Reduce HashRing test load to avoid several seconds of CPU Bug: T225719 Change-Id: I358383e99d7950c4747b48583dc8faf00b3deeab
[18:36:03] [miraheze/mediawiki] kostajh 0ae35bb - Only attempt to deduplicate if there is data in archive and revision The idea is to avoid expensive calls to makeDummyRevisionRow, and speed up installation of MediaWiki on CI. Bug: T225901 Change-Id: I6f69281568218c89eb18353c06cabf7eb1926de8
[18:36:05] [miraheze/mediawiki] jenkins-bot 392fe83 - Merge "Reduce HashRing test load to avoid several seconds of CPU" into REL1_33
[18:36:06] [miraheze/mediawiki] ... and 3 more commits.
[18:48:14] Hello Guest38736! If you have any questions feel free to ask and someone should answer soon.
[18:50:10] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[18:54:05] RECOVERY - elasticsearch1 Puppet on elasticsearch1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:09:40] Hello Vees! If you have any questions feel free to ask and someone should answer soon.
[20:11:00] Hi, just to check how it works here :)