[00:02:02] RECOVERY - cp8 Disk Space on cp8 is OK: DISK OK - free space: / 3741 MB (19% inode=93%);
[01:41:22] hello all
[01:41:30] this is Examknow on mIRC
[02:14:41] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JvzPs
[02:14:42] [miraheze/puppet] paladox c253dd4 - Update init.pp
[06:25:39] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2751 MB (11% inode=94%);
[14:35:28] !log MariaDB [(none)]> set global max_heap_table_size = 67108864;
[14:35:32] !log that’s on db4
[14:35:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:35:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:36:01] !log MariaDB [(none)]> set global tmp_table_size = 67108864; - db4
[14:36:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:38:37] paladox: fixed your log msg you sent at 8:35AM (UTC-5)
[14:38:48] thanks!
[14:39:02] Np i like a cleaner SAL onwiki paladox
[14:39:09] :)
[14:55:39] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 2001:41d0:800:1056::2/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb
[14:56:45] PROBLEM - db6 Puppet on db6 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 5 minutes ago with 0 failures
[14:58:54] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:11:16] !log MariaDB [(none)]> set global innodb_io_capacity=1000; - db4
[15:11:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:11:54] !log MariaDB [(none)]> set global innodb_io_capacity_max=3000; - db4
[15:12:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:24:28] !log restart php7.3-fpm on mw* and lizardfs6
[15:24:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
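The 14:35–15:12 entries above tune db4 at runtime. A minimal shell sketch of the same changes, assuming a privileged mysql client login on db4; note that SET GLOBAL alone does not survive a mysqld restart, and the config path shown for persistence is an assumption (a common Debian MariaDB location), not something taken from the log:

    # apply at runtime on db4, equivalent to the interactive statements in the SAL
    mysql -e "SET GLOBAL max_heap_table_size = 67108864;"    # 64 MB
    mysql -e "SET GLOBAL tmp_table_size = 67108864;"         # 64 MB
    mysql -e "SET GLOBAL innodb_io_capacity = 1000;"
    mysql -e "SET GLOBAL innodb_io_capacity_max = 3000;"

    # to keep the values across a restart they would also need to go into the
    # server config, e.g. /etc/mysql/mariadb.conf.d/50-server.cnf (path assumed):
    #   [mysqld]
    #   max_heap_table_size    = 64M
    #   tmp_table_size         = 64M
    #   innodb_io_capacity     = 1000
    #   innodb_io_capacity_max = 3000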
[16:23:58] SPF|Cloud: do you have time for a quick look at something?
[16:47:34] * hispano76 greetings
[16:48:34] Hi
[16:58:56] RECOVERY - db6 Puppet on db6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:59:51] Zppix: can you look into MH_Discord being down?
[17:07:26] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2651 MB (10% inode=94%);
[17:12:37] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2651 MB (11% inode=94%);
[17:15:06] !log restarted php7.3-fpm on mw1
[17:15:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:18:30] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2645 MB (10% inode=94%);
[17:29:09] PROBLEM - mw4 Puppet on mw4 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 7 minutes ago with 0 failures
[17:30:32] PROBLEM - mw5 Puppet on mw5 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 8 minutes ago with 0 failures
[17:32:07] PROBLEM - mw6 Puppet on mw6 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 10 minutes ago with 0 failures
[17:32:11] PROBLEM - cp6 Stunnel Http for mw7 on cp6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:38:27] RECOVERY - cp6 Stunnel Http for mw7 on cp6 is OK: HTTP OK: HTTP/1.1 200 OK - 15343 bytes in 0.004 second response time
[17:40:25] PROBLEM - cp7 Varnish Backends on cp7 is CRITICAL: 2 backends are down. mw4 mw5
[17:41:26] PROBLEM - cp6 Varnish Backends on cp6 is CRITICAL: 1 backends are down. mw4
[17:41:55] PROBLEM - mw4 Current Load on mw4 is WARNING: WARNING - load average: 1.07, 6.87, 5.43
[17:42:23] PROBLEM - cp7 Stunnel Http for mw4 on cp7 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:42:36] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 1 backends are down. mw6
[17:42:58] PROBLEM - cp3 Stunnel Http for mw4 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[17:43:06] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw6
[17:43:17] !log reboot mw[45]
[17:43:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:43:37] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw4
[17:44:39] RECOVERY - cp6 Varnish Backends on cp6 is OK: All 11 backends are healthy
[17:45:16] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 1.17, 0.41, 0.15
[17:45:57] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy
[17:47:51] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 3 minutes ago with 0 failures
[17:48:26] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 3 minutes ago with 0 failures
[17:48:51] RECOVERY - cp7 Stunnel Http for mw4 on cp7 is OK: HTTP OK: HTTP/1.1 200 OK - 15343 bytes in 0.004 second response time
[17:49:09] RECOVERY - cp3 Stunnel Http for mw4 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15357 bytes in 0.748 second response time
[17:49:19] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy
[17:49:31] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 53 seconds ago with 0 failures
[17:49:35] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy
[17:49:57] RECOVERY - cp7 Varnish Backends on cp7 is OK: All 11 backends are healthy
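The 17:29–17:49 window above is a Puppet-disabled maintenance on mw4/mw5/mw6 followed by a reboot of mw4 and mw5 and the backend recoveries. A minimal sketch of that kind of sequence on one host, assuming the stock Puppet agent CLI and sudo access; this is the shape the alerts imply, not a claim about the exact commands run:

    # disable puppet runs with a reason (the reason shows up in the icinga warning)
    sudo puppet agent --disable "maintenance"

    # do the maintenance, here a reboot as logged at 17:43
    sudo reboot

    # after the host is back, re-enable puppet and trigger a run so the
    # "Puppet is currently disabled" warning clears
    sudo puppet agent --enable
    sudo puppet agent --test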
[17:54:38] as if one annoying bot wasn't enough
[17:57:24] Reception123: I’m guessing both are sigyn exempt seeing as they haven’t been killed
[18:02:04] RhinosF1: unfortunately they are :P
[18:02:14] Haha
[18:02:24] RhinosF1: they're not exempt from my +q though :D
[18:02:31] I think I set the exemptions up
[18:02:39] and that has actually been done a few times when it's been too annoying
[18:02:46] not-* might not be
[18:03:09] Reception123: i remember, a lot ignore them
[18:03:18] RhinosF1: I think it is, or else it would've banned it with all that paladox spam :P
[18:03:42] I need to tell Zppix to add mirahezebots_ to the ignore list for mh-discord when it’s back
[18:04:00] Reception123: i guess so, i never did it to my knowledge
[18:04:04] yeah
[18:04:11] But then my memory is appalling
[18:04:15] RhinosF1: it must’ve been done by someone
[18:04:23] I think I might remember something about that
[18:04:27] Yeah
[18:04:42] Probably whoever did Sigyn when it first came
[18:04:56] I just added it back
[18:06:31] heh
[19:41:17] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 0.84, 1.72, 1.34
[19:44:08] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 1.51, 1.60, 1.35
[19:50:31] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 1.27, 1.78, 1.57
[19:53:23] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.35, 1.19, 1.38
[20:07:27] PROBLEM - cp8 Disk Space on cp8 is WARNING: DISK WARNING - free space: / 2113 MB (10% inode=93%);
[22:11:15] RECOVERY - bacula1 Disk Space on bacula1 is OK: DISK OK - free space: / 469392 MB (98% inode=99%);
[22:11:19] !log decom bacula1
[22:11:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:34:08] !log reinstall bacula2
[22:34:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:36:46] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: connect to address 172.245.38.205 port 5666: Connection refusedconnect to host 172.245.38.205 port 5666: Connection refused
[22:38:59] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[22:44:06] PROBLEM - bacula1 Bacula Daemon on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:46:35] PROBLEM - bacula1 Disk Space on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[22:47:06] PROBLEM - bacula1 Current Load on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[22:47:57] PROBLEM - bacula1 SSH on bacula1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:48:13] PROBLEM - bacula2 Disk Space on bacula2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[22:48:24] PROBLEM - bacula2 SSH on bacula2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:49:18] PROBLEM - bacula2 Current Load on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:49:37] PROBLEM - bacula2 Bacula Phabricator Static on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:49:57] PROBLEM - bacula2 Bacula Daemon on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:50:15] PROBLEM - bacula2 Bacula Private Git on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:50:31] PROBLEM - bacula2 Bacula Databases db5 on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:50:31] PROBLEM - bacula2 Bacula Databases db4 on bacula2 is CRITICAL: connect to address 168.235.111.29 port 5666: Connection refusedconnect to host 168.235.111.29 port 5666: Connection refused
[22:50:36] PROBLEM - Host bacula1 is DOWN: PING CRITICAL - Packet loss = 100%
[22:56:27] PROBLEM - mon1 Puppet on mon1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 0 failures
[23:20:24] RECOVERY - bacula2 SSH on bacula2 is OK: SSH OK - OpenSSH_7.9p1 Debian-10+deb10u2 (protocol 2.0)
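The burst of "Connection refused" and socket-timeout alerts above is what the NRPE-based checks report while bacula1 is being decommissioned and bacula2 reinstalled. A minimal sketch of testing NRPE reachability by hand from the monitoring host, assuming the usual Debian plugin path (the path, and mon1 as the host running the check, are assumptions; the IP and port are from the log):

    # from the monitoring host, poke the NRPE agent on bacula2 directly
    /usr/lib/nagios/plugins/check_nrpe -H 168.235.111.29 -p 5666
    # "Connection refused" -> the nrpe daemon is not installed or not running yet
    # "Socket timeout"     -> the connection never completes or nrpe never replies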
[23:41:43] PROBLEM - bacula2 Bacula Phabricator Static on bacula2 is UNKNOWN: NRPE: Unable to read output
[23:42:02] PROBLEM - bacula2 Bacula Databases db5 on bacula2 is UNKNOWN: NRPE: Unable to read output
[23:42:03] PROBLEM - bacula2 Bacula Private Git on bacula2 is UNKNOWN: NRPE: Unable to read output
[23:42:29] PROBLEM - bacula2 Bacula Databases db4 on bacula2 is UNKNOWN: NRPE: Unable to read output
[23:42:40] RECOVERY - bacula2 Current Load on bacula2 is OK: OK - load average: 1.17, 0.83, 0.35
[23:43:15] RECOVERY - bacula2 Bacula Daemon on bacula2 is OK: PROCS OK: 2 processes with UID = 112 (bacula)
[23:43:16] PROBLEM - bacula2 Puppet on bacula2 is UNKNOWN: NRPE: Unable to read output
[23:43:39] RECOVERY - bacula2 Disk Space on bacula2 is OK: DISK OK - free space: / 950698 MB (99% inode=99%);
[23:44:53] RECOVERY - bacula2 Bacula Phabricator Static on bacula2 is OK: OK: Full, 82043 files, 3.092GB, 2020-02-26 23:42:00 (1.8 days ago)
[23:45:02] RECOVERY - bacula2 Bacula Databases db5 on bacula2 is OK: OK: Full, 3069 files, 47.08GB, 2020-02-27 03:03:00 (1.6 days ago)
[23:45:04] RECOVERY - bacula2 Bacula Private Git on bacula2 is OK: OK: Full, 3944 files, 9.117MB, 2020-02-26 23:32:00 (1.8 days ago)
[23:45:29] RECOVERY - bacula2 Bacula Databases db4 on bacula2 is OK: OK: Full, 1072605 files, 46.70GB, 2020-02-27 02:25:00 (1.6 days ago)
[23:47:05] PROBLEM - bacula2 Puppet on bacula2 is UNKNOWN: UNKNOWN: Failed to check. Reason is: no_summary_file
[23:47:49] PROBLEM - bacula2 Bacula Static on bacula2 is CRITICAL: CRITICAL: no terminated jobs
[23:52:00] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[23:52:14] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
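The "no terminated jobs" alert for the Bacula Static job and the lingering Puppet catalog failure above are the sort of thing that gets checked by hand after a reinstall. A minimal sketch of inspecting the state from bacula2, using only stock bconsole and Puppet agent commands; job names and schedules are whatever the local configuration defines, so none are hard-coded here:

    # query the freshly reinstalled director: daemon status and job history
    echo -e "status director\nlist jobs\nquit" | sudo bconsole

    # a verbose puppet run would show why the catalog failed to apply
    sudo puppet agent --test --verbose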