[01:29:10] Hello Void|Whispers! If you have any questions feel free to ask and someone should answer soon.
[02:22:10] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CRITICAL - load average: 11.76, 6.90, 3.76
[02:24:11] PROBLEM - mw1 Current Load on mw1 is WARNING: WARNING - load average: 7.49, 6.69, 4.05
[02:28:29] Voidwalker done (db4) (though i did that only yesterday)
[02:28:49] !log depool mw1
[02:29:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:29:33] yeah, well it's getting low enough that it might need to be cleaned daily
[02:30:07] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 4.73, 6.78, 5.01
[02:30:20] yup
[02:30:20] PROBLEM - bacula1 Bacula Static on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:30:36] !log restarting php-fpm on mw1 (processes stuck in D)
[02:30:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:31:19] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:31:39] !log repool mw1
[02:31:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:32:21] PROBLEM - bacula1 Bacula Static on bacula1 is WARNING: WARNING: Full, 4442156 files, 398.9GB, 2019-06-16 01:05:00 (3.0 weeks ago)
[02:32:56] !log depool mw1
[02:33:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:33:20] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 79592 files, 1.998GB, 2019-06-16 02:18:00 (3.0 weeks ago)
[02:33:22] !log reboot mw1
[02:33:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:36:00] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[02:36:20] PROBLEM - mw1 MirahezeRenewSsl on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:36:22] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[02:36:23] PROBLEM - mw1 Disk Space on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:36:31] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:36:47] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[02:37:03] PROBLEM - mw1 SSH on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:37:05] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:37:31] PROBLEM - mw1 php-fpm on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:37:49] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:38:06] Voidwalker i think i may need to put this into a cron (cleaning bin logs), but will need to ask others how they feel about it before i do that.
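A minimal sketch of what the daily binlog-purge cron being discussed here might look like; the file name, the 09:00 schedule, the 7-day retention window and the passwordless root socket login are all assumptions for illustration, not what was deployed. MariaDB can also enforce the same retention by itself with expire_logs_days in my.cnf, which would avoid the cron entirely.

    # /etc/cron.d/db4-binlog-purge (hypothetical file, hypothetical schedule)
    # Drop binary logs older than 7 days once a day at 09:00; assumes root can
    # reach MariaDB over the local socket without a password prompt.
    0 9 * * * root /usr/bin/mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"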
[02:39:00] RECOVERY - mw1 SSH on mw1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[02:39:35] !log repool mw1
[02:39:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:39:43] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures
[02:40:14] RECOVERY - mw1 MirahezeRenewSsl on mw1 is OK: TCP OK - 0.001 second response time on 185.52.1.75 port 5000
[02:40:18] RECOVERY - mw1 Disk Space on mw1 is OK: DISK OK - free space: / 13869 MB (18% inode=99%);
[02:40:22] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[02:40:29] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 2.14, 0.56, 0.19
[02:40:38] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[02:40:44] definitely something to discuss first
[02:40:53] RECOVERY - mw1 HTTPS on mw1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.050 second response time
[02:41:20] RECOVERY - mw1 php-fpm on mw1 is OK: PROCS OK: 13 processes with command name 'php-fpm7.2'
[02:41:51] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:41:54] !log depooling mw2 (and rebooting due to nginx process stuck in D)
[02:42:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:43:59] should try and figure out what's causing those processes
[02:44:40] PROBLEM - mw2 Disk Space on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:44:58] yeh (appears to be affecting mw1/mw2) so something strange is happening.
[02:45:39] PROBLEM - mw2 HTTPS on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:45:47] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[02:45:55] PROBLEM - mw2 Current Load on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:45:55] PROBLEM - mw2 php-fpm on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:46:02] PROBLEM - mw2 SSH on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:46:22] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[02:46:24] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CRITICAL - load average: 8.80, 5.96, 2.72
[02:46:25] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:46:25] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
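For the nginx/php-fpm processes that keep landing in uninterruptible sleep (state D) on mw1/mw2, a quick way to see which processes are affected and what kernel function they are blocked in; this is a sketch only, not a command that was actually run at the time:

    # List D-state processes with the kernel wait channel they are stuck in.
    ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'
    # With root, and where the kernel exposes it, the full kernel stack of a
    # stuck process (replace <pid> with a PID from the output above):
    cat /proc/<pid>/stack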
[02:46:34] PROBLEM - Host mw2 is DOWN: PING CRITICAL - Packet loss = 100%
[02:47:54] !log repool mw2
[02:48:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[02:48:33] RECOVERY - Host mw2 is UP: PING OK - Packet loss = 0%, RTA = 0.34 ms
[02:48:39] RECOVERY - mw2 Disk Space on mw2 is OK: DISK OK - free space: / 48150 MB (62% inode=99%);
[02:49:33] RECOVERY - mw2 HTTPS on mw2 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.007 second response time
[02:49:41] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[02:50:19] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[02:50:21] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 3.96, 5.73, 3.44
[02:50:22] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[02:53:02] !log depool mw1 && restart nginx (due to D state) && repool
[02:53:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[03:14:35] !log deleting STATIC* on bacula1 (to generate a new backup)
[03:14:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[03:16:36] !log doing the same for DB4 (and other backups)
[03:16:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[03:30:42] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:31:21] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:31:42] PROBLEM - bacula1 Bacula Private Git on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:31:43] PROBLEM - bacula1 Bacula Static on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:32:16] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[03:32:56] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 79592 files, 1.998GB, 2019-06-16 02:18:00 (3.0 weeks ago)
[03:33:30] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CRITICAL: Full, 671533 files, 76.35GB, 2019-06-02 02:58:00 (5.0 weeks ago)
[03:33:43] PROBLEM - bacula1 Current Load on bacula1 is CRITICAL: CRITICAL - load average: 6.43, 4.58, 2.12
[03:33:48] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CRITICAL: Full, 4066 files, 8.300MB, 2019-06-16 02:27:00 (3.0 weeks ago)
[03:33:52] PROBLEM - bacula1 Bacula Static on bacula1 is WARNING: WARNING: Full, 4442156 files, 398.9GB, 2019-06-16 01:05:00 (3.0 weeks ago)
[03:34:18] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 12 minutes ago with 0 failures
[03:37:46] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:38:01] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:38:15] PROBLEM - bacula1 Bacula Private Git on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:38:24] PROBLEM - bacula1 Bacula Static on bacula1 is UNKNOWN: NRPE: Unable to read output
[03:51:48] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CRITICAL: Full, 4066 files, 8.300MB, 2019-06-16 02:27:00 (3.0 weeks ago)
[03:51:50] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 79592 files, 1.998GB, 2019-06-16 02:18:00 (3.0 weeks ago)
[03:51:57] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CRITICAL: Full, 671533 files, 76.35GB, 2019-06-02 02:58:00 (5.0 weeks ago)
[03:52:21] PROBLEM - bacula1 Bacula Static on bacula1 is WARNING: WARNING: Full, 4442156 files, 398.9GB, 2019-06-16 01:05:00 (3.0 weeks ago)
[03:56:48] RECOVERY - bacula1 Disk Space on bacula1 is OK: DISK OK - free space: / 91746 MB (19% inode=99%);
[04:02:16] !log upgrade puppet-agent on bacula1
[04:02:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[04:03:41] PROBLEM - bacula1 Current Load on bacula1 is WARNING: WARNING - load average: 0.70, 0.92, 1.88
[04:03:50] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 4084 files, 8.362MB, 2019-07-07 04:02:00 (1.8 minutes ago)
[04:04:22] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[04:05:41] RECOVERY - bacula1 Current Load on bacula1 is OK: OK - load average: 0.15, 0.66, 1.67
[04:06:22] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[05:40:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjimH
[05:40:12] [miraheze/services] MirahezeSSLBot d2f529b - BOT: Updating services config for wikis
[10:22:21] RECOVERY - bacula1 Bacula Databases db4 on bacula1 is OK: OK: Full, 759946 files, 78.12GB, 2019-07-07 10:22:00 (21.0 seconds ago)
[10:25:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjiOy
[10:25:10] [miraheze/services] MirahezeSSLBot 3c57363 - BOT: Updating services config for wikis
[11:05:04] [miraheze/dns] Reception123 pushed 1 commit to master [+1/-0/±0] https://git.io/fji3f
[11:05:05] [miraheze/dns] Reception123 c27995b - add theliteratureproject.org zone
[11:36:59] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/fji3o
[11:37:00] [miraheze/ssl] Reception123 13acec8 - add theliteratureproject.org cert
[11:44:24] Hello Kirito! If you have any questions feel free to ask and someone should answer soon.
[11:44:37] hello, I made a request to have a wiki; how long until it will be accepted?
[12:04:22] Kirito01: I will take care of it
[12:04:45] oh nevermind, looks like someone else already has
[14:05:50] !log purge binary logs before '2019-07-07 09:00:00';
[14:05:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:54:55] Hey Zppix: btw - you have a msg on Discord and good luck for consul
[14:55:51] Ok lookin
[14:58:21] [mw-config] paladox created branch paladox-patch-2 - https://git.io/vbvb3
[14:58:22] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/fjiGh
[14:58:24] [miraheze/mw-config] paladox ddc660d - Add missing sql to WikiForum
[14:58:25] [mw-config] paladox opened pull request #2693: Add missing sql to WikiForum - https://git.io/fjiZe
[14:59:20] [mw-config] paladox closed pull request #2693: Add missing sql to WikiForum - https://git.io/fjiZe
[14:59:22] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fjiZv
[14:59:23] [miraheze/mw-config] paladox 359c10c - Add missing sql to WikiForum (#2693)
[15:01:23] [mw-config] paladox deleted branch paladox-patch-2 - https://git.io/vbvb3
[15:01:24] [miraheze/mw-config] paladox deleted branch paladox-patch-2
[16:11:22] [mw-config] JohnFLewis commented on commit 359c10c09bf5c6a6f4651e09265a8e1462c901f0 - https://git.io/fjiZF
[16:20:07] [mw-config] paladox commented on commit 359c10c09bf5c6a6f4651e09265a8e1462c901f0 - https://git.io/fjinf
[16:24:44] [mw-config] paladox created branch revert-2693-paladox-patch-2 - https://git.io/vbvb3
[16:24:45] [miraheze/mw-config] paladox pushed 1 commit to revert-2693-paladox-patch-2 [+0/-0/±1] https://git.io/fjink
[16:24:47] [miraheze/mw-config] paladox ed8462f - Revert "Add missing sql to WikiForum (#2693)" This reverts commit 359c10c09bf5c6a6f4651e09265a8e1462c901f0.
[16:24:50] [mw-config] paladox opened pull request #2694: Revert "Add missing sql to WikiForum" - https://git.io/fjinI
[16:25:50] miraheze/mw-config/revert-2693-paladox-patch-2/ed8462f - paladox The build passed. https://travis-ci.org/miraheze/mw-config/builds/555370383
[16:33:35] Paladox: publictestwiki.com user rights changes are saying conflict constantly
[16:36:47] Reception123: would you know why ^^ that's happening?
[16:37:47] Paladox: I can change your rights fine but not my bot's.
[16:38:01] RhinosF1: conflict? what's the error?
[16:39:27] Reception123: See the log as well - it should have more rights https://usercontent.irccloud-cdn.com/file/DSe5O2zo/026234F4-F8CB-4BBE-91C3-EBEEC90F3C27.png
[16:40:58] RhinosF1: so what were you trying to add to that account?
[16:41:39] Reception123: +autopatrolled +bot +IAdmin -sysop
[16:42:04] Same as I set the other day
[16:42:52] paladox: could it be managewiki related?
[16:44:54] Hmm
[16:45:36] I wonder what it is conflicting with
[16:47:06] Try setting the other rights (without setting autopatrolled).
[16:49:35] Paladox: nope
[17:31:32] RhinosF1, suddenly, I can manage your bot's rights again
[17:31:40] no idea what changed
[17:39:44] Voidwalker: neither do I
[17:40:38] I wonder if it's because I attempted to change the userrights from meta, and they were suddenly just there
[17:41:09] Voidwalker: Maybe, it looked to me as if my last change before it broke didn't go through properly
[17:41:25] Reception123, Paladox: ^ ideas?
[17:45:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjinN
[17:45:13] [miraheze/services] MirahezeSSLBot e5da5d3 - BOT: Updating services config for wikis
[18:03:11] Voidwalker: possibly, though I've never encountered that problem (so not sure)
[18:03:35] Maybe see if changing it back to what it was before works?
[18:05:47] Paladox: I've now set the rights locally to what I was starting to set them to
[18:16:56] * Hispano76 says hello
[18:24:15] ah, thanks Voidwalker (didn't see your other message)
[20:05:04] !log root@mw1:/var/log/nginx# sysctl -w net.core.somaxconn=256
[20:05:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:12:13] !log depool mw1
[20:12:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:15:11] !log restarting nginx on mw1
[20:15:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:15:42] !log repool mw1
[20:15:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:33:18] !log depool mw1 && remove "use epoll" from nginx (to see how it does) && repool mw1
[20:33:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:37:09] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 0 failures
[20:39:28] Voidwalker: if you're around on meta, someone named system operator is creating pages on meta thinking it's their wiki
[20:40:29] they've only created a user page and talk page
[20:40:43] Voidwalker: they made a page in module ns
[20:40:46] !log reverted nginx config
[20:40:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:41:08] Oh, it was just an edit, I'll revert
[20:41:09] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[20:41:12] nope, just edited an existing one
[20:42:23] Voidwalker: I know, I just undid the edit; I thought they created it
[20:42:36] !log changing php-fpm to use TCP rather than socket on mw1 (includes depooling and repooling mw1)
[20:42:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
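A rough sketch of the php-fpm change being tried here (unix socket versus TCP); the pool file path, socket path and port are Debian-style defaults used for illustration, not the exact mw1 configuration. The "resource temporarily unavailable" (EAGAIN) errors mentioned just below are typically what nginx logs when it cannot queue another connection to the FPM listener, which is also why the net.core.somaxconn bump at 20:05 and FPM's listen.backlog are relevant to this experiment.

    ; /etc/php/7.2/fpm/pool.d/www.conf (illustrative path)
    ; current setup: listen on a unix socket
    listen = /run/php/php7.2-fpm.sock
    ; the variant being tested: listen on local TCP instead
    ;listen = 127.0.0.1:9000
    ; accept queue requested at listen(); the kernel clamps this value
    ; to net.core.somaxconn, whichever transport is used
    listen.backlog = 511

Whichever listener is chosen, nginx has to point at the same thing: fastcgi_pass unix:/run/php/php7.2-fpm.sock; for the socket variant, or fastcgi_pass 127.0.0.1:9000; for the TCP variant.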
[20:42:47] Voidwalker: you gonna leave them a msg or shall i?
[20:43:29] paladox: hmm i see tcp what you messing with now
[20:43:34] S/messing/breaking
[20:43:34] Zppix meant to say: paladox: hmm i see tcp what you breaking with now
[20:43:34] tbh, I wasn't going to touch it, and you did the revert so...
[20:43:45] Zppix I'm trying to debug why nginx keeps going into D state on mw*
[20:44:02] and I'm seeing a lot of "resource temporarily unavailable" errors
[20:44:08] Voidwalker: I figured since I dragged you into this I'd ask :P
[20:47:09] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 0 failures
[20:47:43] Left them a msg, Voidwalker; lmk if you think I need to add more
[20:53:39] paladox: Or any other staffers, OM?
[20:53:49] s/OM?/PM?
[20:53:49] RhinosF1 meant to say: paladox: Or any other staffers, PM?
[20:54:11] RhinosF1: hi, what's it about?
[20:54:20] Sure, though you can always pm us, no need to ask :)
[20:55:24] If you state what it's about it could also potentially be solved w/o a staffer
[20:58:01] Zppix: ToU enforcement by Paladox
[20:58:16] Ah
[20:58:42] Is it bad I don't even know what's in the ToU off the top of my head
[21:00:23] Zppix: thx for the wiki creator vote - I don't know the ToU off my head - most of it is easy not to break
[21:00:36] Challenge accept
[21:00:39] Ed*
[21:00:59] Zppix: Breaking the ToU should not be a challenge
[21:01:25] Sssh
[21:55:09] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[21:55:27] !log depool mw1 && revert nginx / php-fpm change && repool
[21:55:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:03:13] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:03:43] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Puppet has 13 failures. Last run 2 minutes ago with 13 failures. Failed resources (up to 3 shown)
[23:03:58] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 52 failures. Last run 2 minutes ago with 52 failures. Failed resources (up to 3 shown)
[23:04:03] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:04:06] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:04:17] PROBLEM - elasticsearch1 Puppet on elasticsearch1 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown)
[23:04:19] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 2 minutes ago with 16 failures. Failed resources (up to 3 shown)
[23:04:20] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Puppet has 13 failures. Last run 2 minutes ago with 13 failures. Failed resources (up to 3 shown)
[23:04:24] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 188 failures. Last run 2 minutes ago with 188 failures. Failed resources (up to 3 shown): File[/etc/rsyslog.conf],File[authority certificates],File[/etc/apt/apt.conf.d/50unattended-upgrades],File[/etc/apt/apt.conf.d/20auto-upgrades]
[23:04:37] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:04:41] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 213 failures. Last run 3 minutes ago with 213 failures. Failed resources (up to 3 shown)
[23:04:58] PROBLEM - lizardfs3 Puppet on lizardfs3 is CRITICAL: CRITICAL: Puppet has 13 failures. Last run 3 minutes ago with 13 failures. Failed resources (up to 3 shown)
[23:05:00] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:05:06] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Puppet has 12 failures. Last run 3 minutes ago with 12 failures. Failed resources (up to 3 shown)
[23:05:07] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:05:08] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:05:11] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 210 failures. Last run 3 minutes ago with 210 failures. Failed resources (up to 3 shown)
[23:05:18] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 216 failures. Last run 3 minutes ago with 216 failures. Failed resources (up to 3 shown)
[23:05:31] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Puppet has 201 failures. Last run 3 minutes ago with 201 failures. Failed resources (up to 3 shown)
[23:12:58] RECOVERY - lizardfs3 Puppet on lizardfs3 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[23:13:06] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[23:13:07] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 20 seconds ago with 0 failures
[23:13:08] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[23:13:13] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures
[23:13:42] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:13:57] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:14:03] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 50 seconds ago with 0 failures
[23:14:07] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:14:18] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:14:20] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:14:29] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 7 seconds ago with 0 failures
[23:14:37] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:14:44] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:15:07] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:15:24] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:15:30] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[23:17:00] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 27 seconds ago with 0 failures
[23:32:17] RECOVERY - elasticsearch1 Puppet on elasticsearch1 is OK: OK: Puppet is currently enabled, last run 13 seconds ago with 0 failures
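When every host reports "Failed to apply catalog, zero resources tracked by Puppet" within the same couple of minutes, as in the 23:03-23:05 burst above, the shared cause is usually a transient problem compiling catalogs on the puppetmaster rather than anything host-specific, and it cleared on its own here. A quick, non-destructive way to surface the underlying error from any one affected agent; a sketch only, not something that was run at the time:

    # Run the agent once in the foreground without applying any changes;
    # if the catalog fails to compile, the real error message is printed.
    puppet agent --test --noop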