[00:55:15] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 10.02, 6.92, 5.18 [01:01:14] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 5.67, 6.85, 5.74 [01:03:14] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.00, 5.87, 5.51 [01:25:24] PROBLEM - cp3 Stunnel Http for misc2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:25:36] PROBLEM - cp4 Stunnel Http for misc2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:25:43] PROBLEM - misc2 HTTPS on misc2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [01:26:32] PROBLEM - cp2 Stunnel Http for misc2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:27:23] RECOVERY - cp3 Stunnel Http for misc2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 0.920 second response time [01:27:39] RECOVERY - cp4 Stunnel Http for misc2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 7.289 second response time [01:29:49] RECOVERY - misc2 HTTPS on misc2 is OK: HTTP OK: HTTP/1.1 200 OK - 41810 bytes in 0.079 second response time [01:30:06] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.05, 1.76, 1.29 [01:30:40] RECOVERY - cp2 Stunnel Http for misc2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 0.617 second response time [01:32:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.89, 1.80, 1.37 [01:37:52] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.12, 1.68, 1.46 [02:03:08] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.83, 1.71, 1.42 [02:07:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.07, 1.45, 1.38 [02:38:28] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.97, 1.80, 1.53 [02:42:20] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.59, 1.69, 1.54 [02:48:12] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.56, 2.00, 1.68 [02:52:04] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.38, 1.85, 1.69 [02:54:01] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.99, 1.58, 1.61 [03:11:30] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.20, 1.71, 1.56 [03:13:26] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.95, 1.75, 1.59 [03:15:22] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.31, 1.60, 1.55 [03:20:58] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 10.62, 5.42, 2.76 [03:21:19] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 9.99, 5.54, 2.82 [03:21:20] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [03:21:30] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [03:21:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. 
mw1 mw2 mw3 [03:22:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [03:22:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [03:22:19] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [03:22:25] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [03:22:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2 [03:23:17] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.682 second response time [03:24:19] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24592 bytes in 2.246 second response time [03:24:24] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.389 second response time [03:25:29] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [03:25:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [03:26:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [03:26:16] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [03:26:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [03:28:02] PROBLEM - glusterfs1 GlusterFS port 49152 on glusterfs1 is CRITICAL: connect to address 81.4.100.90 and port 49152: Connection refused [03:28:58] PROBLEM - glusterfs2 Current Load on glusterfs2 is WARNING: WARNING - load average: 0.66, 3.96, 3.67 [03:29:19] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 0.22, 2.87, 3.00 [03:30:59] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 0.79, 2.90, 3.31 [03:47:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.23, 1.76, 1.44 [03:49:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.70, 1.82, 1.51 [03:51:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.27, 1.64, 1.48 [04:07:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.06, 1.71, 1.48 [04:09:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.81, 1.79, 1.54 [04:11:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.23, 1.66, 1.52 [04:19:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.75, 2.00, 1.67 [04:25:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.24, 1.87, 1.74 [04:31:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.61, 1.67, 1.70 [04:51:41] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.30, 1.89, 1.68 [05:05:15] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.77, 1.89, 1.88 [05:07:12] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.02, 2.01, 1.92 [05:09:08] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.85, 1.89, 1.88 [05:11:05] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.19, 2.04, 
1.94 [05:17:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.44, 1.97, 1.96 [05:23:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.19, 1.83, 1.86 [05:25:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.37, 1.66, 1.79 [05:43:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.03, 1.87, 1.82 [05:45:22] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 7.43, 3.77, 1.65 [05:45:22] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:47:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.86, 1.94, 1.85 [05:49:10] PROBLEM - glusterfs1 SSH on glusterfs1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [05:49:26] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 17 minutes ago with 0 failures [05:50:02] RECOVERY - glusterfs1 GlusterFS port 49152 on glusterfs1 is OK: TCP OK - 0.005 second response time on 81.4.100.90 port 49152 [05:51:05] RECOVERY - glusterfs1 SSH on glusterfs1 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0) [05:54:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [05:54:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [05:54:19] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:54:25] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:54:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [05:54:32] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:54:40] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 65% [05:54:52] PROBLEM - misc1 webmail.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [05:54:53] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:55:01] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:55:09] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:55:11] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:55:13] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 88% [05:55:19] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 1.03, 3.45, 3.09 [05:55:20] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [05:55:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [05:55:45] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
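The recurring "Current Load" alerts above report the standard 1-, 5- and 15-minute load averages for the host, and the state flips between OK, WARNING and CRITICAL as those values cross per-host thresholds. As a rough sketch of how such a check is usually wired with the stock monitoring-plugins check_load (the plugin path and the threshold triples below are illustrative, not Miraheze's actual values):

    # run on the monitored host, typically via NRPE; -w/-c take 1,5,15-minute thresholds
    /usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,6,4
    # sample output: CRITICAL - load average: 10.02, 6.92, 5.18

The short spikes on test1 that clear within a check interval or two are what produce the repeated PROBLEM/RECOVERY flapping seen through the night.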
[05:55:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 55% [05:56:03] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [05:56:40] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 29% [05:57:19] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.08, 2.72, 2.87 [05:59:56] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.681 second response time [05:59:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 76% [06:00:26] PROBLEM - misc1 icinga.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [06:03:13] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 56% [06:03:29] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 6.998 second response time [06:03:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 56% [06:04:26] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [06:05:13] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 80% [06:05:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 80% [06:07:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.09, 1.92, 1.83 [06:07:57] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [06:09:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.68, 1.86, 1.81 [06:09:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 51% [06:11:15] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.389 second response time [06:11:25] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.004 second response time [06:11:58] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 8% [06:15:03] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.08, 1.97, 1.86 [06:15:25] paladox, Reception123, SPF|Cloud: we’re down [06:15:39] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [06:15:43] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
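A note on the two alert types that dominate this stretch of the log: the "Stunnel Http for mwN on cpX" checks are run by the NRPE agent on each cache proxy, which probes the MediaWiki backend through its local stunnel port, so "CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds" means the backend answered too slowly for the agent to reply to Icinga within NRPE's 10-second timeout, not that the proxy itself was unreachable. The "HTTP 4xx/5xx ERROR Rate" checks likewise run locally on each proxy and report the share of error responses in recent NGINX traffic. A sketch of the NRPE chain, with hypothetical hostnames, command names and ports (the real command definitions are not shown in this log):

    # on the Icinga server: ask the NRPE agent on cp2 to run its local check
    /usr/lib/nagios/plugins/check_nrpe -H cp2.miraheze.org -c check_mw1_http -t 10
    # on cp2, that command would map to a local HTTP probe of the stunnel-forwarded
    # backend, e.g. (port is a placeholder):
    /usr/lib/nagios/plugins/check_http -I 127.0.0.1 -p 8080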
[06:15:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 55% [06:16:05] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 8.938 second response time [06:16:08] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.039 second response time [06:16:11] :( [06:16:16] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.666 second response time [06:17:08] Reception123: I have a feeling it could be the dB based on phab’s error [06:17:19] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 66% [06:17:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 64% [06:19:17] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 32% [06:19:42] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.004 second response time [06:19:46] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.392 second response time [06:19:54] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.389 second response time [06:19:58] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 58% [06:20:01] RhinosF1: fixed [06:20:03] !log purge binary logs before '2019-09-20 02:00:00'; [06:20:08] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.014 second response time [06:20:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [06:20:35] RECOVERY - misc1 webmail.miraheze.org HTTPS on misc1 is OK: HTTP OK: Status line output matched "HTTP/1.1 401 Unauthorized" - 5799 bytes in 0.027 second response time [06:20:45] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.672 second response time [06:21:01] RECOVERY - misc1 icinga.miraheze.org HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 341 bytes in 0.014 second response time [06:21:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.76, 1.86, 1.84 [06:21:13] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 2% [06:21:37] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.671 second response time [06:21:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [06:21:58] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 4% [06:21:59] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [06:22:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [06:22:17] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [06:22:52] !log purge binary logs before '2019-09-20 02:00:00'; [06:22:58] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [06:25:30] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3055 MB (12% inode=94%); [06:27:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.31, 1.49, 1.67 [06:27:19] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 5.10, 3.60, 2.36 [06:29:19] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 3.76, 3.55, 2.50 [06:31:19] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - 
load average: 2.22, 3.15, 2.49 [06:53:17] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/1.xml --wiki philosiversewiki [06:53:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:03:59] !log rhinos@mw1:~$ sudo /root/ssl-certificate -d yellowiki.xyz -g -o [07:04:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:06:06] [02ssl] 07RhinosF1 opened pull request 03#222: Create yellowiki.xyz.crt - 13https://git.io/Je3rY [07:07:30] [02ssl] 07RhinosF1 synchronize pull request 03#222: Create yellowiki.xyz.crt - 13https://git.io/Je3rY [07:08:25] Reception123: ^ [07:10:14] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/2.xml --wiki philosiversewiki [07:10:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:11:08] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/[345].xml --wiki philosiversewiki [07:11:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:16:51] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki philosiversewiki (post import maintenance) [07:16:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:19:55] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --update --wiki philosiversewiki [07:20:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [07:37:09] [02ssl] 07RhinosF1 synchronize pull request 03#222: Create yellowiki.xyz.crt - 13https://git.io/Je3rY [07:37:22] Reception123: ^ [11:31:20] No access atm [11:31:25] Will do later [11:31:32] K [12:05:32] !log install nload on glusterfs1 [12:06:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [12:06:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [12:08:16] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [13:55:10] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3MT [13:55:11] [02miraheze/services] 07MirahezeSSLBot 03c28a5e9 - BOT: Updating services config for wikis [14:16:59] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_clone_phabricator-extensions] [14:22:59] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures [15:18:28] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%); [15:31:12] Zppix: you here? [16:44:10] !log granted RhinosF1 view rights on matomo [16:44:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [17:19:16] RhinosF1: whats up? 
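For context on the 06:20 fix above: "purge binary logs before '2019-09-20 02:00:00';" is a MariaDB/MySQL statement that deletes binary log files older than the given timestamp on the server that wrote them, and it is a common way to reclaim disk space quickly when a database host is running out of room (a similar purge is run on db4 at 19:10). The log does not spell out the root cause, but the "it could be the dB" guess followed immediately by "fixed" after the purge points in that direction. A minimal sketch of the same operation run from a shell on the database host; the invocation details are illustrative, and replicas must already have read any binlogs being purged:

    # list the binlog files and their sizes first
    sudo mysql -e "SHOW BINARY LOGS;"
    # then drop everything written before the cut-off
    sudo mysql -e "PURGE BINARY LOGS BEFORE '2019-09-20 02:00:00';"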
[17:41:41] [02mw-config] 07RhinosF1 created branch 03RhinosF1-patch-2 - 13https://git.io/vbvb3 [17:41:43] [02miraheze/mw-config] 07RhinosF1 pushed 031 commit to 03RhinosF1-patch-2 [+0/-0/±1] 13https://git.io/Je3Hv [17:41:44] [02miraheze/mw-config] 07RhinosF1 032766840 - Update LocalSettings.php [17:41:47] [02mw-config] 07RhinosF1 opened pull request 03#2763: Update LocalSettings.php - 13https://git.io/Je3Hf [17:43:03] paladox, SPF|Cloud: ^ [17:44:36] [02mw-config] 07paladox reviewed pull request 03#2763 commit - 13https://git.io/Je3HI [17:47:10] [02miraheze/mw-config] 07RhinosF1 pushed 031 commit to 03RhinosF1-patch-2 [+0/-0/±1] 13https://git.io/Je3H3 [17:47:12] [02miraheze/mw-config] 07RhinosF1 03c88eafa - Update LocalSettings.php [17:47:13] [02mw-config] 07RhinosF1 synchronize pull request 03#2763: Update LocalSettings.php - 13https://git.io/Je3Hf [17:47:28] paladox: ^ [17:50:40] [02mw-config] 07RhinosF1 closed pull request 03#2763: Update LocalSettings.php - 13https://git.io/Je3Hf [17:50:41] [02miraheze/mw-config] 07RhinosF1 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3H4 [17:50:43] [02miraheze/mw-config] 07RhinosF1 037681fac - Sitenotice per T4724 PR #2763 [18:10:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb [18:10:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [18:10:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [18:10:52] paladox: meta is fine [18:11:04] Quircwiki is slow but up [18:11:11] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3 [18:11:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2 [18:12:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [18:13:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [18:14:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [18:14:16] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [18:15:10] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [18:22:00] paladox: publictestwiki are slow to save changes [18:22:34] hmm [18:24:48] Zppix does it work now? [18:24:54] yes [18:26:15] ok :) [18:53:14] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.83, 6.68, 5.42 [18:55:14] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.97, 6.12, 5.37 [19:07:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1 [19:09:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [19:10:59] !log purge binary logs before '2019-09-20 16:00:00'; on db4 [19:11:08] Thx [19:11:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:13:22] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [19:15:18] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [19:42:58] PROBLEM - glusterfs2 Current Load on glusterfs2 is WARNING: WARNING - load average: 3.53, 2.92, 2.22 [19:44:58] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 2.58, 2.56, 2.16 [19:56:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. 
mw1 mw2 [19:57:06] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1 [19:58:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [19:59:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [19:59:24] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx] [20:00:03] paladox: ^ that u with test? [20:00:16] yes, you already know that though :) [20:01:01] It's been disabled up until now, wanted to make sure you expected it to fail paladox although tbh puppet is good at GC [20:01:29] when disabling puppet it can save the prevous state [20:01:39] Ah [20:06:11] !log reboot test1 [20:07:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2 [20:07:47] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 3 minutes ago with 0 failures [20:08:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [20:08:14] PROBLEM - cp3 Stunnel Http for test1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [20:08:28] PROBLEM - cp4 Stunnel Http for test1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [20:08:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3 [20:09:06] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3 [20:09:07] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:09:30] PROBLEM - cp2 Stunnel Http for test1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [20:09:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [20:10:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [20:11:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [20:16:00] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx] [20:17:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2 [20:18:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:18:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:18:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2 [20:19:01] !log apt upgrade on test1 [20:19:06] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. 
mw3 [20:19:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [20:19:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [20:20:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:20:16] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:20:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [20:21:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [20:22:52] RECOVERY - cp3 Stunnel Http for test1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24569 bytes in 0.850 second response time [20:22:56] RECOVERY - cp4 Stunnel Http for test1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24569 bytes in 0.018 second response time [20:23:40] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.008 second response time [20:24:19] RECOVERY - cp2 Stunnel Http for test1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24569 bytes in 0.498 second response time [20:24:25] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [20:28:56] paladox: what exactly is an "stunnel"? [20:29:14] stunnel encrypts the connection between cp* and mw* [20:29:39] paladox: so like a dumbed down vpn in a way? [20:29:42] since varnish does not support natively encrypting the connection :( [20:29:59] i guess so, yup. [20:30:03] cool [20:55:27] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3dY [20:55:28] [02miraheze/puppet] 07paladox 03f909304 - Update mediawiki.pp [20:59:06] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw2 [20:59:10] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [20:59:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw3 [21:01:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy [21:01:07] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [21:01:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [21:03:13] !log rebooted glusterfs[12] [21:03:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:04:48] PROBLEM - glusterfs1 SSH on glusterfs1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:04:50] PROBLEM - glusterfs1 GlusterFS port 24007 on glusterfs1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:05:13] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:15] PROBLEM - glusterfs2 Disk Space on glusterfs2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:19] PROBLEM - glusterfs1 Disk Space on glusterfs1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:22] PROBLEM - glusterfs2 Puppet on glusterfs2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:36] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:42] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:05:43] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. 
mw1 mw2 [21:06:05] PROBLEM - glusterfs2 SSH on glusterfs2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:06:10] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb [21:06:37] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Mount[/mnt/mediawiki-static-new] [21:06:43] RECOVERY - glusterfs1 SSH on glusterfs1 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0) [21:06:44] PROBLEM - glusterfs2 GlusterFS port 24007 on glusterfs2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:06:48] RECOVERY - glusterfs1 GlusterFS port 24007 on glusterfs1 is OK: TCP OK - 0.001 second response time on 81.4.100.90 port 24007 [21:06:59] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [21:07:15] RECOVERY - glusterfs2 Disk Space on glusterfs2 is OK: DISK OK - free space: / 164769 MB (52% inode=88%); [21:07:18] RECOVERY - glusterfs1 Disk Space on glusterfs1 is OK: DISK OK - free space: / 163354 MB (51% inode=88%); [21:07:31] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.89, 0.69, 0.25 [21:07:43] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy [21:08:04] RECOVERY - glusterfs2 SSH on glusterfs2 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0) [21:08:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:08:41] RECOVERY - glusterfs2 GlusterFS port 24007 on glusterfs2 is OK: TCP OK - 0.001 second response time on 81.4.100.77 port 24007 [21:08:56] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [21:09:29] RECOVERY - glusterfs2 Puppet on glusterfs2 is OK: OK: Puppet is currently enabled, last run 6 minutes ago with 0 failures [21:09:38] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 3.03, 1.72, 0.70 [21:13:03] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:14:31] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:23:13] !log restarted networking on db5 [21:23:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:23:22] PROBLEM - misc2 HTTPS on misc2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 372 bytes in 0.007 second response time [21:23:53] PROBLEM - db5 SSH on db5 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:24:03] PROBLEM - db5 Puppet on db5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:24:05] PROBLEM - cp2 Stunnel Http for misc2 on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 372 bytes in 0.296 second response time [21:24:30] PROBLEM - db5 Disk Space on db5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [21:24:49] PROBLEM - db5 Current Load on db5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
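On the stunnel question answered at 20:29 above: Varnish on the cp* cache proxies cannot encrypt connections to its backends itself, so each proxy runs stunnel in client mode; Varnish sends plain HTTP to a local stunnel listener, and stunnel carries it over TLS to the mw* application server. A minimal sketch of what such a client-mode service looks like in stunnel's configuration; the service name, addresses, ports and CA path below are invented for illustration and are not Miraheze's actual config:

    ; illustrative /etc/stunnel/mw1.conf on a cache proxy
    [mw1]
    client = yes
    ; Varnish points its backend definition at this local plaintext port
    accept = 127.0.0.1:8081
    ; stunnel opens the TLS connection to the real backend
    connect = mw1.miraheze.org:443
    verify = 2
    CAfile = /etc/ssl/certs/ca-certificates.crt

This is presumably also why the Icinga checks above are named "Stunnel Http": they appear to test each backend through the same local stunnel hop that Varnish uses.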
[21:24:59] PROBLEM - cp3 Stunnel Http for misc2 on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 372 bytes in 0.474 second response time [21:25:08] PROBLEM - cp4 Stunnel Http for misc2 on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 372 bytes in 0.060 second response time [21:25:10] PROBLEM - Host db5 is DOWN: PING CRITICAL - Packet loss = 100% [21:27:55] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [21:28:15] db5 is not down, the network is not coming back [21:28:23] this channel becomes so much more quiet after ignoring icinga-miraheze :P [21:33:04] phab admins, should https://phabricator.miraheze.org/H35 and https://phabricator.miraheze.org/H25 be disabled? [21:33:06] [ ☿ add AmandaCatherine project to requests tasks ] - phabricator.miraheze.org [21:33:07] [ ☿ add macfan project to requests tasks ] - phabricator.miraheze.org [21:36:58] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/6.xml --wiki philosiversewiki [21:37:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:47:34] hope no one minds https://meta.miraheze.org/w/index.php?title=MediaWiki:Gadget-HotCat.js&curid=9795&diff=83474&oldid=43709 [21:47:35] [ Difference between revisions of "MediaWiki:Gadget-HotCat.js" - Miraheze Meta ] - meta.miraheze.org [21:48:14] nah you're fine void [21:50:23] !log rhinos@mw1:/srv/mediawiki/w/extensions/CreateWiki/maintenance$ sudo -u www-data php populateMainPage.php --wiki=bestmusicandsongswiki [21:50:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:52:12] !log rhinos@mw1:/srv/mediawiki/w/extensions/CentralAuth/maintenance$ sudo -u www-data php createLocalAccount.php --wiki=bestmusicandsongswiki Inkster [21:52:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:52:22] Voidwalker: ^ can u add the rights pls?
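The !log entries above (and the earlier import at 06:53-07:20) all follow the same MediaWiki-farm pattern: a maintenance script run as the web user against one wiki selected with --wiki. For the dump imports specifically, the sequence used in this log was import, then rebuild derived tables, then refresh the site statistics. A sketch with placeholder file and wiki names:

    # import an XML dump into a single wiki of the farm
    sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/example.xml --wiki examplewiki
    # post-import maintenance: rebuild link tables and recent changes
    sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki examplewiki
    # recount pages and edits so the wiki's statistics reflect the import
    sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --update --wiki examplewiki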
[21:52:30] sure [21:52:59] RECOVERY - cp3 Stunnel Http for misc2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 0.928 second response time [21:53:08] RECOVERY - cp4 Stunnel Http for misc2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 0.077 second response time [21:53:09] d [21:53:19] !log rebooted db5 [21:53:20] RECOVERY - misc2 HTTPS on misc2 is OK: HTTP OK: HTTP/1.1 200 OK - 41810 bytes in 0.080 second response time [21:53:24] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:53:25] now db5 recovers [21:53:48] RECOVERY - Host db5 is UP: PING OK - Packet loss = 0%, RTA = 0.48 ms [21:54:05] RECOVERY - cp2 Stunnel Http for misc2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 41802 bytes in 0.565 second response time [21:54:06] RECOVERY - db5 Current Load on db5 is OK: OK - load average: 0.88, 0.27, 0.10 [21:55:07] RECOVERY - bacula1 Bacula Databases db5 on bacula1 is OK: OK: Diff, 375 files, 24.02GB, 2019-09-15 02:28:00 (5.8 days ago) [21:55:10] Voidwalker yes [21:55:21] done [21:55:28] !log disabled https://phabricator.miraheze.org/H35 and https://phabricator.miraheze.org/H25 [21:55:30] [ ☿ add AmandaCatherine project to requests tasks ] - phabricator.miraheze.org [21:55:31] [ ☿ add macfan project to requests tasks ] - phabricator.miraheze.org [21:55:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [21:59:08] 503 [21:59:33] paladox: [22:00:07] :O [22:00:18] works for me [22:00:30] paladox: must have just recovered [22:00:31] nvm [22:00:35] yeh [22:00:50] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [22:00:52] paladox: I'm still getting 503 on publictestwiki [22:00:56] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb [22:01:04] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2 [22:01:29] why is it sooo unstable? [22:01:46] paladox: you're the one with sysadmin, not me :P [22:02:01] heh [22:02:47] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:02:54] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [22:02:59] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy [22:03:12] RECOVERY - db5 Puppet on db5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:04:42] T [22:05:31] !log rhinos@mw1:/srv/mediawiki/w/maintenance$ sudo -u www-data php sql.php --wiki=metawiki --query="update cw_requests set cw_status = 'approved' where cw_id = 9311;" [22:05:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:05:36] @Meu Hi [22:05:44] Hey [22:05:50] RhinosF1: why did you do that? [22:06:00] Zppix: CW was broken [22:06:15] Wait [22:06:16] oh [22:06:19] What chat am I in? [22:06:24] irc-relay [22:06:32] What chat are you in? [22:06:37] Are you talking via a wiki? [22:06:39] IRC [22:06:41] Wiki*? [22:06:43] #miraheze and IRC [22:06:48] What is IRC? [22:06:54] Sorry, I'm new to this. [22:06:58] meta.miraheze/wiki/IRC [22:07:03] meta.miraheze.org/wiki/IRC [22:07:23] @Meu If you have any questions, feel free to ask them in #general. This is good if you're not familiar with IRC.
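The 22:05:31 entry above uses MediaWiki's sql.php maintenance script to run an ad-hoc UPDATE against metawiki after the CreateWiki ("CW") workflow broke; sql.php connects with the wiki's own configured database credentials, so --wiki selects the right database without typing connection details by hand. A sketch of the same pattern, with an illustrative read-only query first as a sanity check before changing the row:

    # inspect the target row (this SELECT is shown for illustration only)
    sudo -u www-data php /srv/mediawiki/w/maintenance/sql.php --wiki=metawiki --query="SELECT cw_id, cw_status FROM cw_requests WHERE cw_id = 9311;"
    # then apply the fix, as done in the log
    sudo -u www-data php /srv/mediawiki/w/maintenance/sql.php --wiki=metawiki --query="UPDATE cw_requests SET cw_status = 'approved' WHERE cw_id = 9311;"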
[22:07:30] Okay :) [22:16:22] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3FV [22:16:24] [02miraheze/puppet] 07paladox 03f3fa17a - Update mount.pp [22:17:26] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:18:12] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3Fr [22:18:14] [02miraheze/puppet] 07paladox 03913108c - Update mount.pp [22:19:25] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures [22:28:27] !log rhinos@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php /home/rhinos/infobox.xml --wiki philosiversewiki [22:28:44] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:42:34] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3F9 [22:42:35] [02miraheze/dns] 07paladox 03b406a13 - add ipv6 address to glusterfs[12] [23:28:50] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Je3bY [23:28:51] [02miraheze/puppet] 07paladox 03dbf072f - Update mediawiki.pp [23:35:34] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Mount[/mnt/mediawiki-static-new] [23:39:43] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
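The final alerts concern Puppet applying a Mount[/mnt/mediawiki-static-new] resource on test1 and mw3, presumably mounting the new GlusterFS-backed static-files share, and the brief mw1 "zero resources tracked ... might be a dependency cycle" failure at 22:17 clears after the follow-up mount.pp commit. When a mount like this fails, it can be reproduced by hand from the client host; the hostname and volume name below are placeholders, and only the 24007 management port is taken from the checks earlier in the log:

    # confirm the GlusterFS management port is reachable from the client
    nc -zv glusterfs1.miraheze.org 24007
    # try the mount manually (the volume name "static" is a guess for illustration)
    sudo mount -t glusterfs glusterfs1.miraheze.org:/static /mnt/mediawiki-static-new
    # verify it is mounted
    findmnt /mnt/mediawiki-static-new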