[01:53:23] Lua error in package.lua at line 80: module 'Module:Protection banner' not found. [02:13:07] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:13:13] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CRITICAL: Puppet has 21 failures. Last run 2 minutes ago with 21 failures. Failed resources (up to 3 shown) [02:13:13] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:13:16] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:13:17] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:13:21] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 2 minutes ago with 17 failures. Failed resources (up to 3 shown) [02:13:33] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:13:50] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:03] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Puppet has 180 failures. Last run 2 minutes ago with 180 failures. Failed resources (up to 3 shown): File[/etc/rsyslog.d],File[/etc/rsyslog.conf],File[authority certificates],File[/etc/apt/apt.conf.d/50unattended-upgrades] [02:14:04] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Puppet has 15 failures. Last run 3 minutes ago with 15 failures. Failed resources (up to 3 shown) [02:14:06] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:29] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:29] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:34] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:34] PROBLEM - db5 Puppet on db5 is CRITICAL: CRITICAL: Puppet has 15 failures. Last run 3 minutes ago with 15 failures. Failed resources (up to 3 shown) [02:14:35] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:36] PROBLEM - lizardfs5 Puppet on lizardfs5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [02:14:38] PROBLEM - lizardfs4 Puppet on lizardfs4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[02:23:45] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures [02:24:02] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures [02:24:08] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures [02:24:28] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures [02:24:33] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 11 seconds ago with 0 failures [02:24:34] RECOVERY - db5 Puppet on db5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:24:35] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:24:35] RECOVERY - lizardfs5 Puppet on lizardfs5 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures [02:24:41] RECOVERY - lizardfs4 Puppet on lizardfs4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:25:11] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 20 seconds ago with 0 failures [02:25:12] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:25:12] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:25:14] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures [02:25:22] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:25:31] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:26:18] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 22 seconds ago with 0 failures [02:26:28] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [02:27:07] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [06:25:49] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2863 MB (11% inode=94%); [07:10:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Je2SQ [07:10:12] [miraheze/services] MirahezeSSLBot 1579378 - BOT: Updating services config for wikis [10:52:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.49, 3.96, 2.13 [10:56:34] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%); [10:56:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.10, 3.85, 2.52 [10:58:13] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.39, 3.91, 2.29 [10:58:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.89, 2.82, 2.31 [11:00:16] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.57, 3.93, 2.52 [11:04:17] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.43, 2.87, 2.43 [11:23:20] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.30, 3.62, 2.91 [11:23:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.86, 3.71, 2.96 [11:25:16] RECOVERY - lizardfs4 Current Load on
lizardfs4 is OK: OK - load average: 2.60, 3.36, 2.92 [11:25:21] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.88, 3.15, 2.84 [11:33:15] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 8.33, 6.00, 4.15 [11:35:44] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [11:37:45] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [11:47:59] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.65, 4.08, 3.10 [11:48:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.63, 2.98, 3.84 [11:54:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.14, 2.41, 3.29 [12:04:13] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.75, 3.55, 3.89 [12:08:13] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.14, 2.23, 3.27 [13:56:55] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.17, 3.31, 2.20 [13:58:53] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.43, 2.68, 2.12 [14:06:54] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.09, 4.35, 3.10 [14:08:48] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.46, 3.14, 2.80 [14:12:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.72, 3.77, 2.88 [14:12:51] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.05, 4.47, 3.51 [14:14:08] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.83, 2.68, 2.59 [14:14:47] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.16, 3.28, 3.18 [14:18:12] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.22, 3.71, 3.11 [14:18:40] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.14, 3.85, 3.54 [14:20:14] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.34, 3.25, 3.01 [14:22:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.65, 3.21, 3.37 [14:28:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.70, 3.64, 3.43 [14:30:45] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.87, 3.48, 3.42 [14:32:41] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.37, 3.02, 3.28 [14:41:37] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.65, 3.88, 2.98 [14:43:36] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.46, 3.47, 2.95 [14:47:17] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.95, 4.47, 3.62 [14:49:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.42, 3.59, 3.39 [14:49:38] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.80, 4.11, 3.37 [14:51:19] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.41, 6.13, 4.37 [14:55:33] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.95, 3.17, 3.37 [14:59:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.07, 3.04, 3.65 [15:01:15] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 11.61, 5.92, 
4.59 [15:02:49] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 2.95, 4.22, 3.80 [15:04:47] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.01, 3.08, 3.43 [15:05:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.20, 3.59, 3.90 [15:06:46] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.94, 2.45, 3.16 [15:08:52] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.82, 2.38, 3.37 [15:12:52] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.84, 3.45, 3.69 [15:14:53] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.69, 3.64, 3.68 [15:15:09] paladox: around? [15:15:56] RhinosF1: I’m about to go [15:15:57] But what’s up? I won’t be able to help right now. [15:16:37] paladox: I was going to say can you fix those wiki requests I mentioned this morning while I do some school work [15:17:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.10, 3.77, 3.75 [15:17:20] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.30, 4.41, 3.71 [15:17:26] RhinosF1: sorry cannot atm, mobile [15:17:36] You’ll probably get there first [15:17:44] I still have another hour [15:17:45] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [15:18:05] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [15:19:00] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.77, 2.73, 3.37 [15:19:28] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.99, 3.53, 3.46 [15:19:44] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [15:20:07] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [15:21:27] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.64, 4.94, 3.99 [15:22:17] paladox: maybe, it’s an English essay and french quiz [15:22:49] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.41, 3.32, 3.52 [15:24:45] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.18, 2.85, 3.32 [15:30:43] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Je2AC [15:30:44] [miraheze/services] MirahezeSSLBot d057bc7 - BOT: Updating services config for wikis [15:44:07] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.98, 3.04, 3.78 [15:46:10] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 8.23, 4.46, 4.16 [15:46:59] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.98, 4.06, 3.34 [15:48:08] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.49, 3.16, 3.72 [15:48:53] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.02, 2.86, 2.98 [15:52:09] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.22, 2.35, 3.27 [15:53:05] [ManageWiki] kenng69 opened pull request #120: Add more system messages - https://git.io/Je2xs [16:09:17] ^ poor syntax, directed to translatewiki [16:12:09] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.22, 4.19, 3.39 [16:13:17] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. 
mw1 [16:13:20] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [16:13:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [16:14:07] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.26, 3.10, 3.08 [16:14:20] [ManageWiki] RhinosF1 commented on pull request #120: Add more system messages - https://git.io/Je2xM [16:14:21] [ManageWiki] RhinosF1 closed pull request #120: Add more system messages - https://git.io/Je2xs [16:15:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [16:15:21] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [16:15:29] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [16:16:01] !log sudo -u www-data php /srv/mediawiki/w/extensions/CreateWiki/maintenance/populateMainPage.php --wiki=kangarootestingwiki [16:16:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [16:17:16] @Stewards can you add rights for https://kangarootesting.miraheze.org/wiki/Special:ListUsers [16:17:17] [ Permission error - Kangaroo Testing ] - kangarootesting.miraheze.org [16:18:10] !log sudo -u www-data php /srv/mediawiki/w/extensions/CreateWiki/maintenance/populateMainPage.php --wiki=garbageandcleanredditwiki [16:18:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [16:20:43] @Stewards https://meta.miraheze.org/wiki/Special:RequestWikiQueue/9783 as well pls [16:20:44] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [16:20:47] needs rights [16:21:32] !log sudo -u www-data php /srv/mediawiki/w/extensions/CentralAuth/maintenance/createLocalAccount.php --wiki=garbageandcleanredditwiki GasMask0217 (was fixing broke wiki requests 9783 and 9773) [16:21:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [16:33:06] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [16:33:09] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2 [16:33:11] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2 [16:33:19] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.38, 7.04, 4.84 [16:33:21] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.25, 5.53, 3.93 [16:33:22] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. 
mw2 [16:35:04] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [16:35:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [16:35:10] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [16:35:20] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [16:39:05] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.35, 3.43, 3.55 [16:41:14] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.26, 4.06, 3.78 [16:43:12] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.13, 2.95, 3.40 [16:43:22] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.07, 3.38, 4.00 [16:53:26] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.93, 2.40, 3.28 [16:56:50] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.88, 3.61, 3.49 [16:57:14] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [16:58:44] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.92, 2.58, 3.12 [16:59:11] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [17:11:38] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.21, 3.91, 3.24 [17:13:33] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.52, 2.85, 2.92 [17:15:42] JohnLewis: Can you do rights for some broke wiki requests? [17:22:02] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.55, 4.36, 3.52 [17:22:32] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3 [17:23:01] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.24, 3.50, 2.95 [17:23:57] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.34, 3.16, 3.18 [17:24:30] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [17:24:59] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.90, 2.89, 2.79 [17:30:32] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [17:32:35] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [17:47:18] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.75, 4.27, 3.08 [17:47:41] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3 [17:47:45] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw2 mw3 [17:48:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [17:48:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [17:48:51] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [17:49:23] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. 
mw1 mw3 [17:50:52] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.402 second response time [17:51:29] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [17:51:37] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [17:51:47] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [17:52:25] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [17:52:27] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:03:24] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [18:03:27] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [18:03:34] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3 [18:04:00] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw3 [18:05:17] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.65, 2.89, 3.89 [18:05:28] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:05:31] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [18:06:04] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [18:07:28] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:13:04] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.59, 2.57, 3.33 [18:13:26] !log changed status for 2 wiki requests to approved from inreview [18:13:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [18:17:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.72, 5.38, 4.29 [18:17:20] Examknow: you around? [18:18:46] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.33, 4.09, 2.85 [18:20:21] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2 [18:20:37] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [18:20:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.72, 3.48, 2.77 [18:21:18] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [18:21:25] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [18:21:35] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. 
mw3 [18:22:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [18:22:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:22:43] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.69, 3.14, 2.73 [18:22:55] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.60, 3.39, 3.79 [18:23:26] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [18:23:36] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [18:26:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.08, 4.01, 3.21 [18:27:22] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:28:40] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.04, 2.31, 3.21 [18:36:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.24, 3.86, 3.84 [18:40:50] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.22, 3.50, 3.65 [18:42:47] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.76, 3.06, 2.96 [18:44:50] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.82, 2.60, 2.80 [18:44:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.03, 3.14, 3.51 [18:46:36] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [18:46:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1 [18:46:44] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [18:46:51] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.41, 2.58, 3.26 [18:47:17] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [18:48:32] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:48:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [18:48:44] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [18:49:16] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [18:57:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.60, 3.40, 3.03 [18:59:33] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.22, 3.16, 3.00 [19:14:11] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://git.io/JeafP [19:14:12] * hispano76 hello! 
[19:14:13] [miraheze/puppet] paladox c9fc14b - varnish: Raise timeout for varnish health checker to 20s [19:14:14] [puppet] paladox created branch paladox-patch-4 - https://git.io/vbiAS [19:14:22] [puppet] paladox opened pull request #1136: varnish: Raise timeout for varnish health checker to 20s - https://git.io/JeafX [19:14:41] [miraheze/dns] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JeafM [19:14:42] [miraheze/dns] paladox b590efa - Raise timeout to 20s [19:14:44] [dns] paladox created branch paladox-patch-2 - https://git.io/vbQXl [19:14:47] NO [19:14:51] [puppet] paladox closed pull request #1136: varnish: Raise timeout for varnish health checker to 20s - https://git.io/JeafX [19:14:53] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jeafy [19:14:54] [miraheze/puppet] paladox 81dee73 - varnish: Raise timeout for varnish health checker to 20s (#1136) [19:14:56] [dns] paladox opened pull request #117: Raise timeout to 20s - https://git.io/JeafS [19:15:00] [dns] paladox closed pull request #117: Raise timeout to 20s - https://git.io/JeafS [19:15:01] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeafQ [19:15:03] [miraheze/dns] paladox 279111d - Raise timeout to 20s (#117) [19:15:30] [miraheze/dns] JohnFLewis pushed 1 commit to revert-117-paladox-patch-2 [+0/-0/±1] https://git.io/Jeaf7 [19:15:31] [miraheze/dns] JohnFLewis e780286 - Revert "Raise timeout to 20s (#117)" This reverts commit 279111d1865e16e1e751405d95b76346408b5da3. [19:15:33] [dns] JohnFLewis created branch revert-117-paladox-patch-2 - https://git.io/vbQXl [19:15:48] [dns] JohnFLewis opened pull request #118: Revert "Raise timeout to 20s" - https://git.io/Jeaf5 [19:15:59] [dns] JohnFLewis closed pull request #118: Revert "Raise timeout to 20s" - https://git.io/Jeaf5 [19:16:00] [miraheze/dns] JohnFLewis pushed 2 commits to master [+0/-0/±2] https://git.io/Jeafd [19:16:02] [miraheze/dns] JohnFLewis f485a8c - Merge pull request #118 from miraheze/revert-117-paladox-patch-2 Revert "Raise timeout to 20s" [19:16:03] [miraheze/dns] JohnFLewis deleted branch revert-117-paladox-patch-2 [19:16:05] [dns] JohnFLewis deleted branch revert-117-paladox-patch-2 - https://git.io/vbQXl [19:16:24] miraheze/dns/master/279111d - paladox The build was broken. https://travis-ci.org/miraheze/dns/builds/607809134 [19:16:48] miraheze/dns/paladox-patch-2/b590efa - paladox The build was broken. 
https://travis-ci.org/miraheze/dns/builds/607809022 [19:17:08] [puppet] paladox deleted branch paladox-patch-4 - https://git.io/vbiAS [19:17:09] [miraheze/puppet] paladox deleted branch paladox-patch-4 [19:20:58] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.71, 3.67, 2.98 [19:22:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.68, 4.40, 3.33 [19:22:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [19:25:27] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb [19:25:57] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.89, 3.71, 3.31 [19:26:41] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [19:27:25] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [19:27:54] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.85, 3.11, 3.14 [19:37:32] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.69, 3.94, 3.44 [19:45:30] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.66, 3.92, 3.69 [19:49:22] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.06, 2.46, 3.15 [19:55:38] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.76, 2.44, 3.94 [20:01:46] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.94, 3.20, 3.61 [20:03:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 7.27, 4.45, 3.35 [20:03:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.35, 3.44, 3.68 [20:05:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.24, 4.00, 3.87 [20:06:58] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.75, 3.57, 3.28 [20:07:42] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.18, 2.97, 3.50 [20:08:53] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.83, 2.60, 2.95 [20:09:41] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.03, 2.53, 3.27 [20:21:16] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:21:27] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [20:21:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.08, 3.47, 3.17 [20:21:47] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.46, 3.60, 3.35 [20:23:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:23:24] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:23:34] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.02, 2.61, 2.89 [20:23:46] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - 
load average: 1.16, 2.67, 3.04 [20:27:57] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.30, 3.88, 3.46 [20:28:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 2.61, 4.60, 3.94 [20:30:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.27, 4.09, 3.55 [20:31:54] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.90, 3.55, 3.41 [20:32:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.56, 3.27, 3.56 [20:33:49] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.01, 3.50, 3.39 [20:34:08] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.60, 2.65, 3.29 [20:37:42] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.81, 3.08, 3.27 [20:48:17] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.74, 3.83, 3.28 [20:52:23] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.29, 3.87, 3.46 [20:54:24] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.39, 4.21, 3.64 [21:00:32] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.16, 3.55, 3.22 [21:00:40] Seems you have fixed your 503 errors [21:00:46] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [21:01:12] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [21:02:45] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.91, 4.50, 3.62 [21:03:26] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:07:22] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [21:09:06] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.88, 3.69, 3.69 [21:11:00] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.80, 4.25, 3.93 [21:11:22] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.41, 3.55, 3.98 [21:12:59] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.32, 3.43, 3.66 [21:17:20] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.76, 3.32, 3.61 [21:19:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.38, 3.05, 3.49 [21:20:42] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.89, 2.83, 3.36 [21:21:23] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.99, 2.53, 3.23 [21:25:24] Hello chris95! If you have any questions, feel free to ask and someone should answer soon. [21:34:17] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 8.83, 5.11, 3.71 [21:34:41] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 9.10, 5.20, 3.75 [21:37:46] Hi, anybody know how to delete your own account? 
[21:45:05] User accounts cannot be deleted chris95 [21:46:29] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [21:48:27] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:48:30] huh, that's a surprise. Deactivated perhaps? I accidentally made 2 accounts, figured I would remove one. Do they ever get archived/closed or something if I just stop using it? [21:50:02] Hello sario528! If you have any questions, feel free to ask and someone should answer soon. [21:50:51] !log depool mw1 [21:51:57] paladox, seems you fixed the 503s [21:52:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:52:14] BurningPrincess it's not fixed yet, i don't think [21:52:23] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [21:52:24] or have you seen better behaviour? [21:52:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [21:54:43] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.18, 2.83, 3.78 [21:54:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.90, 2.78, 3.84 [21:56:23] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:56:37] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [22:00:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [22:00:42] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [22:00:42] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.45, 2.61, 3.36 [22:00:54] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.09, 2.36, 3.29 [22:05:30] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1 [22:05:53] PROBLEM - mw1 php-fpm on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:06:07] PROBLEM - mw1 Disk Space on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:06:09] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:06:33] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:06:43] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [22:06:49] PROBLEM - mw1 SSH on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:06:59] PROBLEM - mw1 MirahezeRenewSsl on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:07:10] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.16, 3.67, 3.71 [22:07:22] PROBLEM - Host mw1 is DOWN: PING CRITICAL - Packet loss = 100% [22:07:27] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. 
mw1 [22:07:49] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:08:00] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:08:36] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [22:08:37] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:10:31] chris95: a steward can merge them if you ask on Stewards Noticeboard on meta [22:11:56] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.84, 2.56, 3.22 [22:14:23] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [22:14:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [22:14:57] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [22:18:40] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.51, 3.53, 3.49 [22:20:19] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 9.502 second response time [22:21:27] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.94, 2.73, 3.18 [22:21:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.95, 4.26, 3.56 [22:22:13] paladox, seems better [22:24:27] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.24, 3.52, 3.39 [22:25:27] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [22:25:31] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:26:52] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.47, 2.73, 3.11 [22:32:30] RECOVERY - Host mw1 is UP: PING OK - Packet loss = 0%, RTA = 0.30 ms [22:32:31] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: connect to address 185.52.1.75 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [22:32:53] RECOVERY - mw1 Disk Space on mw1 is OK: DISK OK - free space: / 14008 MB (18% inode=97%); [22:33:07] RECOVERY - mw1 MirahezeRenewSsl on mw1 is OK: TCP OK - 0.001 second response time on 185.52.1.75 port 5000 [22:33:17] PROBLEM - mw1 HTTPS on mw1 is CRITICAL: connect to address 185.52.1.75 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [22:33:22] RECOVERY - mw1 SSH on mw1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0) [22:34:27] PROBLEM - db4 Disk Space on db4 is WARNING: DISK WARNING - free space: / 40291 MB (10% inode=96%); [22:39:06] PROBLEM - mw1 Disk Space on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[22:39:36] PROBLEM - Host mw1 is DOWN: PING CRITICAL - Packet loss = 100% [22:43:18] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [22:44:15] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.08, 6.73, 6.36 [22:45:55] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:46:40] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.90, 6.26, 6.22 [23:04:14] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [23:04:49] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [23:04:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [23:06:13] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [23:08:51] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 1.657 second response time [23:28:19] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [23:29:53] RECOVERY - Host mw1 is UP: PING OK - Packet loss = 0%, RTA = 0.36 ms [23:29:55] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [23:29:56] RECOVERY - mw1 HTTPS on mw1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 442 bytes in 0.008 second response time [23:29:56] RECOVERY - mw1 Disk Space on mw1 is OK: DISK OK - free space: / 14008 MB (18% inode=97%); [23:30:19] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 0.22, 0.23, 0.13 [23:32:22] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.368 second response time [23:32:59] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 3.658 second response time [23:34:10] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 7.77, 7.14, 6.56 [23:34:27] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [23:35:16] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.93, 7.04, 6.67 [23:37:16] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.91, 6.46, 6.52 [23:38:15] PROBLEM - mw2 Current Load on mw2 is CRITICAL: CRITICAL - load average: 8.22, 7.43, 6.79 [23:38:42] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.683 second response time [23:40:10] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 6.24, 6.68, 6.58 [23:43:50] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[23:50:23] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 9.180 second response time [23:53:23] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [23:53:25] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.008 second response time [23:54:04] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.421 second response time [23:54:24] RECOVERY - mw1 php-fpm on mw1 is OK: PROCS OK: 7 processes with command name 'php-fpm7.2' [23:54:25] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.665 second response time [23:55:15] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [23:55:42] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [23:55:43] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [23:57:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb [23:58:39] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [23:59:16] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1 [23:59:34] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [23:59:34] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.003 second response time [23:59:38] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [23:59:41] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1