[00:45:36] RhinosF1: we kind of worked on it, but ran out of time
[04:02:19] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.80, 3.32, 2.94
[04:04:15] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 3.32, 3.27, 2.97
[04:33:27] PROBLEM - misc2 Current Load on misc2 is CRITICAL: CRITICAL - load average: 4.31, 3.49, 3.02
[04:35:27] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.71, 3.49, 3.07
[04:39:27] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 2.91, 3.31, 3.09
[04:43:26] PROBLEM - misc2 Current Load on misc2 is WARNING: WARNING - load average: 3.27, 3.48, 3.22
[04:47:27] RECOVERY - misc2 Current Load on misc2 is OK: OK - load average: 1.80, 3.03, 3.12
[05:00:14] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjoAu
[05:00:15] [miraheze/services] MirahezeSSLBot d5a443a - BOT: Updating services config for wikis
[07:56:10] Paladox: k, let me know when u have time. And I authorize u to take any global actions needed to get it to work without any local intervention
[10:19:09] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 321
[10:21:09] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[10:39:55] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 350
[10:47:42] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[11:00:28] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 385
[11:04:21] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[11:08:17] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 668
[11:45:15] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[14:41:08] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:41:18] PROBLEM - netazar.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:41:47] PROBLEM - misc4 phabricator.miraheze.org HTTPS on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:41:54] PROBLEM - cp4 Current Load on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:41:54] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:41:59] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:42:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:42:09] PROBLEM - misc4 Current Load on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:42:13] PROBLEM - enc.for.uz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:42:22] PROBLEM - test1 Current Load on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:42:28] PROBLEM - misc4 phab.miraheze.wiki HTTPS on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:42:48] PROBLEM - cp4 Varnish Backends on cp4 is WARNING: No backends detected. If this is an error, see readme.txt
[14:42:48] PROBLEM - guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:42:55] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:44:16] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:44:35] PROBLEM - misc4 Disk Space on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:44:46] PROBLEM - misc4 phd on misc4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:44:53] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:44:56] PROBLEM - test1 SSH on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:45:05] PROBLEM - misc4 SSH on misc4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:45:11] PROBLEM - test1 php-fpm on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:45:13] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:45:30] PROBLEM - test1 Disk Space on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:45:31] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:45:37] PROBLEM - cp4 SSH on cp4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:47:30] Hmmm
[14:47:32] RECOVERY - netazar.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.netazar.org' will expire on Mon 19 Aug 2019 08:36:02 PM GMT +0000.
[14:48:28] RECOVERY - enc.for.uz - LetsEncrypt on sslhost is OK: OK - Certificate 'enc.for.uz' will expire on Sat 31 Aug 2019 02:47:02 PM GMT +0000.
[14:49:50] RECOVERY - cp4 SSH on cp4 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[14:50:08] RECOVERY - misc4 phabricator.miraheze.org HTTPS on misc4 is OK: HTTP OK: HTTP/1.1 200 OK - 19059 bytes in 0.500 second response time
[14:50:26] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 17 minutes ago with 0 failures
[14:50:53] RECOVERY - misc4 Disk Space on misc4 is OK: DISK OK - free space: / 47921 MB (77% inode=99%);
[14:50:54] RECOVERY - guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'guiasdobrasil.com.br' will expire on Tue 03 Sep 2019 02:35:01 PM GMT +0000.
[14:50:55] RECOVERY - misc4 phab.miraheze.wiki HTTPS on misc4 is OK: HTTP OK: Status line output matched "HTTP/1.1 200" - 17725 bytes in 0.193 second response time
[14:50:57] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:51:00] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:51:04] RECOVERY - misc4 phd on misc4 is OK: PROCS OK: 1 process with args 'phd'
[14:51:06] RECOVERY - test1 SSH on test1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[14:51:12] RECOVERY - test1 php-fpm on test1 is OK: PROCS OK: 5 processes with command name 'php-fpm7.3'
[14:51:18] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.008 second response time
[14:51:23] RECOVERY - misc4 SSH on misc4 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u6 (protocol 2.0)
[14:51:29] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 18 minutes ago with 0 failures
[14:51:32] RECOVERY - test1 Disk Space on test1 is OK: DISK OK - free space: / 27715 MB (67% inode=98%);
[14:51:46] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 18 minutes ago with 0 failures
[14:52:02] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:52:05] RECOVERY - cp4 HTTPS on cp4 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1498 bytes in 0.010 second response time
[14:52:30] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 3%
[15:10:22] PROBLEM - misc4 Current Load on misc4 is WARNING: WARNING - load average: 0.40, 0.62, 3.58
[15:12:20] RECOVERY - misc4 Current Load on misc4 is OK: OK - load average: 0.33, 0.47, 3.16
[15:18:12] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 0.00, 0.10, 1.84
[15:20:12] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.07, 0.08, 1.62
[15:23:44] PROBLEM - cp4 Current Load on cp4 is WARNING: WARNING - load average: 0.13, 0.20, 1.94
[15:27:44] RECOVERY - cp4 Current Load on cp4 is OK: OK - load average: 0.22, 0.22, 1.54
[16:13:04] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CRITICAL - load average: 8.85, 6.60, 4.63
[16:15:05] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 3.66, 5.47, 4.47
[16:15:58] PROBLEM - knuxwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:17:57] RECOVERY - knuxwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'knuxwiki.com' will expire on Fri 12 Jun 2020 11:59:59 PM GMT +0000.
[17:23:12] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 1980
[18:02:32] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[18:04:32] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[18:59:09] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[19:36:15] PROBLEM - browndust.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'browndust.wiki' expires in 15 day(s) (Tue 16 Jul 2019 07:33:00 PM GMT +0000).
[19:36:30] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjKJh
[19:36:31] [miraheze/ssl] MirahezeSSLBot 75514cc - Bot: Update SSL cert for browndust.wiki
[19:44:15] RECOVERY - browndust.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'browndust.wiki' will expire on Sat 28 Sep 2019 06:36:24 PM GMT +0000.