[00:02:35] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:04:35] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:29:45] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.42, 2.99, 1.86
[00:33:44] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.23, 3.01, 2.16
[00:34:38] ho boy, that's a bit of flooding
[00:34:39] whoops
[00:35:51] yeh
[01:31:30] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 7.02, 4.58, 2.96
[01:32:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 4.91, 3.80, 2.51
[01:44:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 2.80, 3.72, 3.33
[01:44:57] PROBLEM - glusterfs2 Current Load on glusterfs2 is WARNING: WARNING - load average: 3.03, 3.73, 3.66
[01:45:08] paladox: what is going on jeez
[01:45:18] Zppix highload on gluster
[01:48:47] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 1.90, 2.98, 3.38
[01:50:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 4.47, 3.70, 3.40
[01:52:39] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 4.43, 3.89, 3.66
[01:54:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 3.12, 3.71, 3.49
[01:54:34] PROBLEM - glusterfs2 Current Load on glusterfs2 is WARNING: WARNING - load average: 1.67, 3.24, 3.46
[01:56:00] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 0.63, 2.59, 3.10
[01:56:29] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 0.24, 2.20, 3.05
[02:22:38] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeYzM
[02:22:40] [miraheze/puppet] paladox 2102de2 - Update mediawiki.pp
[02:24:12] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[02:28:31] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:28:50] PROBLEM - test1 php-fpm on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:28:52] that's me
[02:28:59] PROBLEM - cp3 Stunnel Http for test1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:29:18] PROBLEM - test1 Current Load on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:29:32] PROBLEM - test1 Disk Space on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:29:34] PROBLEM - cp4 Stunnel Http for test1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:30:02] PROBLEM - test1 SSH on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:30:05] PROBLEM - Host test1 is DOWN: PING CRITICAL - Packet loss = 100%
[02:30:27] PROBLEM - cp2 Stunnel Http for test1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
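Most of the CRITICAL lines above are the Icinga NRPE checks timing out while test1 is rebooted, not the checked services themselves failing. A minimal sketch of reproducing one of these checks by hand from the monitoring host; the plugin path is the Debian default and the check command names are assumptions, not taken from the log:

    # run from the Icinga host; "check_load" / "check_disk" are guessed command names
    /usr/lib/nagios/plugins/check_nrpe -H test1 -c check_load -t 10
    /usr/lib/nagios/plugins/check_nrpe -H test1 -c check_disk -t 10

If the agent is down or the host is mid-reboot, these return the same "Socket timeout after 10 seconds" seen in the alerts.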
[02:32:05] RECOVERY - Host test1 is UP: PING OK - Packet loss = 0%, RTA = 0.44 ms
[02:32:09] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:32:40] PROBLEM - test1 Puppet on test1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 7 minutes ago with 0 failures
[02:34:45] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 19 seconds ago with 1 failures. Failed resources (up to 3 shown): Service[nginx]
[02:35:18] RECOVERY - cp3 Stunnel Http for test1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.878 second response time
[02:35:39] RECOVERY - cp4 Stunnel Http for test1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.009 second response time
[02:36:31] RECOVERY - cp2 Stunnel Http for test1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.492 second response time
[02:36:33] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.010 second response time
[02:36:45] PROBLEM - test1 Puppet on test1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 2 minutes ago with 1 failures
[02:42:52] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx]
[02:46:38] PROBLEM - cp2 Stunnel Http for test1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:46:50] PROBLEM - test1 php-fpm on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:46:57] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:47:19] PROBLEM - test1 Current Load on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:47:29] PROBLEM - Host test1 is DOWN: PING CRITICAL - Packet loss = 100%
[02:47:40] PROBLEM - cp4 Stunnel Http for test1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:47:46] PROBLEM - cp3 Stunnel Http for test1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[02:49:30] RECOVERY - Host test1 is UP: PING OK - Packet loss = 0%, RTA = 0.47 ms
[02:49:46] RECOVERY - test1 Disk Space on test1 is OK: DISK OK - free space: / 8912 MB (21% inode=98%);
[02:49:57] RECOVERY - test1 SSH on test1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0)
[02:51:11] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[02:54:05] RECOVERY - cp3 Stunnel Http for test1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.875 second response time
[02:54:47] RECOVERY - cp2 Stunnel Http for test1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.490 second response time
[02:55:23] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.012 second response time
[02:55:41] RECOVERY - cp4 Stunnel Http for test1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.009 second response time
[02:57:09] PROBLEM - test1 Puppet on test1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 3 minutes ago with 1 failures
[05:26:50] PROBLEM - glusterfs2 Puppet on glusterfs2 is CRITICAL: CRITICAL: Puppet has 47 failures. Last run 3 minutes ago with 47 failures. Failed resources (up to 3 shown): Exec[ufw-logging-low],Exec[ufw-allow-tcp-from-any-to-any-port-22],Exec[ufw-allow-tcp-from-any-to-any-port-5666],Exec[ufw-allow-tcp-from-185.52.3.121-to-any-port-9100]
[05:46:10] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 10.06, 5.39, 2.99
[05:49:17] PROBLEM - glusterfs2 Disk Space on glusterfs2 is CRITICAL: connect to address 81.4.100.77 port 5666: Connection refusedconnect to host 81.4.100.77 port 5666: Connection refused
[05:51:22] PROBLEM - glusterfs2 SSH on glusterfs2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:53:17] RECOVERY - glusterfs2 SSH on glusterfs2 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0)
[06:02:08] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 0.52, 1.98, 3.33
[06:03:04] RECOVERY - glusterfs2 Puppet on glusterfs2 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[06:03:17] RECOVERY - glusterfs2 Disk Space on glusterfs2 is OK: DISK OK - free space: / 243024 MB (77% inode=93%);
[06:15:46] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: CRITICAL: Puppet has 43 failures. Last run 3 minutes ago with 43 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-9102],Exec[ufw-allow-tcp-from-81.4.100.90-to-any-port-24007],Exec[ufw-allow-tcp-from-81.4.100.90-to-any-port-24008],Exec[ufw-allow-tcp-from-81.4.100.90-to-any-port-24009]
[06:25:34] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2991 MB (12% inode=94%);
[06:25:58] PROBLEM - glusterfs1 SSH on glusterfs1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:26:29] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[06:27:56] RECOVERY - glusterfs1 SSH on glusterfs1 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0)
[06:33:54] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:38:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 1.36, 2.82, 3.83
[06:40:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeY2G
[06:40:11] [miraheze/services] MirahezeSSLBot f4edc17 - BOT: Updating services config for wikis
[06:42:00] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.13, 1.99, 3.25
[07:25:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeY2a
[07:25:11] [miraheze/services] MirahezeSSLBot 65fdf2c - BOT: Updating services config for wikis
[08:51:11] @steward
[08:54:34] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 6.13, 3.70, 2.77
[08:55:25] @Stewards
[08:57:36] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: CRITICAL: Puppet has 44 failures. Last run 20 seconds ago with 44 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-185.52.3.121-to-any-port-9100],Exec[ufw-allow-tcp-from-any-to-any-port-9102],Exec[ufw-allow-tcp-from-81.4.100.90-to-any-port-24007],Exec[ufw-allow-tcp-from-81.4.100.90-to-any-port-24008]
[09:02:19] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 1.19, 3.96, 3.81
[09:03:30] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[09:06:07] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 0.81, 2.56, 3.31
[09:40:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeYaP
[09:40:12] [miraheze/services] MirahezeSSLBot 599da97 - BOT: Updating services config for wikis
[09:45:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeYaM
[09:45:10] [miraheze/services] MirahezeSSLBot fedb6b1 - BOT: Updating services config for wikis
[09:57:35] PROBLEM - glusterfs2 Puppet on glusterfs2 is CRITICAL: CRITICAL: Puppet has 36 failures. Last run 3 minutes ago with 36 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-81.4.100.77-to-any-port-24007],Exec[ufw-allow-tcp-from-81.4.100.77-to-any-port-24008],Exec[ufw-allow-tcp-from-81.4.100.77-to-any-port-24009],Exec[ufw-allow-tcp-from-81.4.100.77-to-any-port-111]
[10:15:15] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 10.04, 4.99, 3.12
[10:16:15] PROBLEM - glusterfs2 SSH on glusterfs2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:19:17] PROBLEM - glusterfs2 Disk Space on glusterfs2 is CRITICAL: connect to address 81.4.100.77 port 5666: Connection refusedconnect to host 81.4.100.77 port 5666: Connection refused
[10:21:56] If an @Stewards responds I'm busy so ask @Reception123
[10:22:20] RECOVERY - glusterfs2 SSH on glusterfs2 is OK: SSH OK - OpenSSH_7.9p1 Debian-10 (protocol 2.0)
[10:23:17] RECOVERY - glusterfs2 Disk Space on glusterfs2 is OK: DISK OK - free space: / 240749 MB (76% inode=93%);
[10:24:18] RECOVERY - glusterfs2 Puppet on glusterfs2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[10:32:35] PROBLEM - glusterfs2 Current Load on glusterfs2 is WARNING: WARNING - load average: 0.68, 2.20, 3.81
[10:36:26] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 0.45, 1.49, 3.18
[10:45:53] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: CRITICAL: Puppet has 47 failures. Last run 2 minutes ago with 47 failures. Failed resources (up to 3 shown): Exec[ufw-logging-low],Exec[ufw-allow-tcp-from-any-to-any-port-22],Exec[ufw-allow-tcp-from-any-to-any-port-5666],Exec[ufw-allow-tcp-from-185.52.3.121-to-any-port-9100]
[10:56:09] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: CRITICAL - load average: 4.39, 3.89, 2.84
[10:58:03] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.53, 3.01, 2.64
[11:04:03] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[12:31:23] !log reimage gluserfs[12]
[12:31:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[12:32:49] PROBLEM - glusterfs2 Puppet on glusterfs2 is CRITICAL: connect to address 81.4.100.77 port 5666: Connection refusedconnect to host 81.4.100.77 port 5666: Connection refused
[12:33:21] I will let you know when I see JohnLewis around here
[12:33:21] @notify JohnLewis
[12:33:24] PROBLEM - glusterfs2 Disk Space on glusterfs2 is CRITICAL: connect to address 81.4.100.77 port 5666: Connection refusedconnect to host 81.4.100.77 port 5666: Connection refused
[12:33:35] I will let you know when I see Voidwalker around here
[12:33:35] @notify Voidwalker
[12:33:47] PROBLEM - glusterfs1 Puppet on glusterfs1 is CRITICAL: connect to address 81.4.100.90 port 5666: Connection refusedconnect to host 81.4.100.90 port 5666: Connection refused
[12:33:49] PROBLEM - glusterfs1 Disk Space on glusterfs1 is CRITICAL: connect to address 81.4.100.90 port 5666: Connection refusedconnect to host 81.4.100.90 port 5666: Connection refused
[12:34:00] PROBLEM - glusterfs1 Current Load on glusterfs1 is CRITICAL: connect to address 81.4.100.90 port 5666: Connection refusedconnect to host 81.4.100.90 port 5666: Connection refused
[12:34:07] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: connect to address 81.4.100.77 port 5666: Connection refusedconnect to host 81.4.100.77 port 5666: Connection refused
[12:38:01] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.45, 0.69, 0.29
[12:39:46] PROBLEM - glusterfs1 Puppet on glusterfs1 is UNKNOWN: UNKNOWN: Failed to check. Reason is: no_summary_file
[12:39:49] RECOVERY - glusterfs1 Disk Space on glusterfs1 is OK: DISK OK - free space: / 312224 MB (99% inode=99%);
[12:40:08] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 1.48, 0.85, 0.37
[12:40:47] PROBLEM - glusterfs2 Puppet on glusterfs2 is UNKNOWN: NRPE: Unable to read output
[12:41:19] RECOVERY - glusterfs2 Disk Space on glusterfs2 is OK: DISK OK - free space: / 312292 MB (99% inode=99%);
[12:43:46] RECOVERY - glusterfs1 Puppet on glusterfs1 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[12:46:47] RECOVERY - glusterfs2 Puppet on glusterfs2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[12:51:15] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:51:23] !log reboot test1
[12:51:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[12:53:18] PROBLEM - test1 Current Load on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:53:24] PROBLEM - cp3 Stunnel Http for test1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:53:32] PROBLEM - test1 Disk Space on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
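During the glusterfs reimage above, the "Connection refused" alerts on port 5666 and the no_summary_file UNKNOWN simply mean NRPE is not yet installed and Puppet has not completed a first run. A sketch of how one might confirm the hosts are coming back, using the IPs/hostnames from the alerts; neither command is taken from the log:

    nc -zv 81.4.100.77 5666                      # glusterfs2: refused until NRPE is back up
    ssh glusterfs2 'sudo puppet agent --test'    # a completed run clears the no_summary_file UNKNOWN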
[12:53:34] PROBLEM - cp4 Stunnel Http for test1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:54:02] PROBLEM - test1 SSH on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:54:15] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:54:27] PROBLEM - cp2 Stunnel Http for test1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:54:49] PROBLEM - test1 php-fpm on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[12:55:12] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.33, 0.29, 0.10
[12:55:24] PROBLEM - test1 Puppet on test1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 10 hours ago with 1 failures
[12:55:30] RECOVERY - test1 Disk Space on test1 is OK: DISK OK - free space: / 8828 MB (21% inode=98%);
[12:55:57] RECOVERY - test1 SSH on test1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0)
[12:56:44] RECOVERY - test1 php-fpm on test1 is OK: PROCS OK: 3 processes with command name 'php-fpm7.3'
[12:57:30] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.08, 6.03, 4.53
[12:59:29] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 3.52, 4.98, 4.32
[13:44:59] yey my password manager likes icinga now
[13:56:55] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.24, 1.61, 1.06
[14:06:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.39, 1.78, 1.47
[14:08:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.50, 2.10, 1.63
[14:09:20] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.011 second response time
[14:10:34] RECOVERY - cp4 Stunnel Http for test1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.014 second response time
[14:10:51] RECOVERY - cp3 Stunnel Http for test1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 1.029 second response time
[14:11:08] RECOVERY - cp2 Stunnel Http for test1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.499 second response time
[14:20:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeYrd
[14:20:12] [miraheze/services] MirahezeSSLBot 2116c5f - BOT: Updating services config for wikis
[14:28:22] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[15:08:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.11, 1.62, 1.99
[15:10:27] !log rhinos@mw1:/srv/mediawiki/w/maintenance$ sudo -u www-data php importDump.php --wiki colchaguawiki /home/rhinos/colchaguawiki.xml
[15:10:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:16:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.26, 1.96, 1.97
[15:18:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.88, 1.92, 1.94
[15:20:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.28, 2.10, 2.01
[15:21:30] !log restarting with screen
[15:21:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:22:25] RhinosF1: manage to break anything since you got mw-admin (if not then your not doing your job right)
[15:22:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.48, 1.93, 1.95
[15:23:16] Zppix: only timing out a wiki's images
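The import above was restarted under screen so that it survives an SSH disconnect. A sketch of that workflow, using the path, wiki name, and dump file from the !log entries; the session name is made up:

    screen -S colchagua-import        # made-up session name
    cd /srv/mediawiki/w/maintenance
    sudo -u www-data php importDump.php --wiki colchaguawiki /home/rhinos/colchaguawiki.xml
    # detach with Ctrl-A d; reattach later with: screen -r colchagua-import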
[15:23:31] RhinosF1: pssh i can do that myself xD
[15:23:31] Because they wanted ImageSize increasing
[15:25:13] Zppix: the current team break everything first then leave me wondering what on earth it's complaining about now
[15:26:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 3.41, 2.51, 2.15
[15:28:24] RhinosF1: im just giving you crap :P
[15:28:49] Zppix: his time will come :D
[15:28:53] Zppix: I knoe
[15:29:09] Reception123: that sounds ominous
[15:29:33] Zppix: so far no sysadmin has escaped
[15:31:32] Reception123: heh
[15:34:27] !log move import to test1 due to redis/JQ error
[15:34:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:38:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.39, 1.71, 1.96
[16:50:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.09, 1.72, 1.77
[17:10:06] RhinosF1, Reception123, still need someone?
[17:10:18] Voidwalker: pioneer sorted it
[17:10:42] good :)
[18:52:33] paladox: RhinosF1 wow cp2 hates giving me the favicon I wish i knew why it did that
[18:53:19] Zppix: I'm mobile atm and don't have cp* Anyway but it does
[18:53:34] RhinosF1: what CP* do you get?
[18:54:03] Zppix: I only have mw* and test1
[18:54:12] RhinosF1: no i meant when going on like meta
[18:54:27] Zppix: what?
[18:54:40] RhinosF1: when going to meta what CP* usually serves you
[18:54:40] Oh, I get you
[18:54:46] I believe cp2
[18:55:26] paladox: RhinosF1 its because he favicon get request get's 503'd
[18:55:46] s/he/the
[18:55:46] RhinosF1 meant to say: Zppix: tthe current team break everything first then leave me wondering what on earth it's complaining about now
[18:55:56] Zppix: what the?
[18:56:02] ZppixBot: even
[18:56:06] ZppixBot: are you ok?
[18:56:22] oh i know what it did
[18:56:38] it cant find he in your last msg so it gets the latest one it can
[18:57:05] Zppix: I want it to just get the last message
[18:57:16] i believe its like this
[18:57:26] RhinosF1 s/the/The
[18:57:29] erg
[18:57:39] let me look into it i know what you trying to do
[18:57:57] .source
[18:57:58] Zppix: My code can be found here: https://github.com/Pix1234/ZppixBot-Source
[18:58:00] Zppix do you know which mw? or is it all of them?
[18:58:17] paladox: I can find out, if you tell me
[18:58:21] how
[18:58:37] Zppix does curl work?
[18:58:44] curl -I https://meta.miraheze.org/favicon.ico
[18:59:11] im on windows paladox
[18:59:21] windows supports curl
[18:59:56] https://stackoverflow.com/questions/9507353/how-do-i-install-and-use-curl-on-windows
[18:59:57] [ How do I install and use curl on Windows? - Stack Overflow ] - stackoverflow.com
[19:00:20] or
[19:00:27] if you have git for windows installed zpi
[19:00:30] * Zppix
[19:00:35] i got git installed
[19:01:11] paladox: curl works
[19:01:22] ok
[19:03:11] hmm
[19:03:50] Zppix curl -I https://meta.miraheze.org/favicon.ico works on cp2
[19:03:58] yes
[19:04:01] So it may be your end?
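To narrow a problem like this down to a specific cache proxy, rather than whichever one GeoDNS happens to hand you, curl can be pointed at a proxy explicitly. A sketch only; substitute the real cp2 address for the placeholder, and note that which cache-identifying headers appear depends on the local Varnish/nginx configuration:

    curl -I https://meta.miraheze.org/favicon.ico
    # pin the request to one proxy instead of relying on GeoDNS routing:
    curl -I --resolve meta.miraheze.org:443:<cp2-ip> https://meta.miraheze.org/favicon.ico
    # compare the status line (200/304 vs 503) and any cache headers the proxy adds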
[19:04:27] paladox: anyway to confirm what cache proxy from mobile I get
[19:04:45] I'm not sure, i know on desktop you can do curl -I https://meta.miraheze.org/favicon.ico
[19:04:58] idk
[19:05:04] RhinosF1 you would be using cp4
[19:05:11] paladox: k
[19:05:15] I thought you ment one of the mw*
[19:05:27] cp4 is for the europe, cp2 for america, and cp3 for asia
[19:07:20] Ive seen cp2 mentioned on the 503s before I think
[19:07:20] paladox: weird its back now and all i did was a normal refresh
[19:07:32] I really do just think cp2 hates me
[19:07:44] Zppix: maybe it had a blip (or cp2 hates u)
[19:08:03] RhinosF1: well its being consistent
[19:08:16] Zppix: daily yes
[19:08:18] Hmm
[19:08:25] Zppix do you use firefox?
[19:08:30] paladox: yes
[19:08:52] did you access it at 17:51pm utc?
[19:08:58] and 17:55pm utc
[19:09:04] .time utc
[19:09:05] 2019-09-14 - 19:09:04UTC
[19:09:24] i see 200 status code around then
[19:10:34] paladox: i just pm'd you my ip cause i dont know when i access the site :P
[19:10:38] thanks
[19:12:17] i see a 503 at 15:47 utc
[19:12:26] but not meta
[19:12:43] weird
[19:13:48] Zppix i see a 304 for /favicon.ico at 19:00
[19:14:27] i even see 499
[19:15:08] I dont know... its just weird its never done this before wikimedia got ddos'd
[19:15:39] Weird
[19:45:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeYPN
[19:45:12] [miraheze/services] MirahezeSSLBot 9b59b15 - BOT: Updating services config for wikis
[20:41:28] !log root@mw1:/mnt/mediawiki-static# mkdir -p swedishmuseumwiki
[20:41:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[21:15:31] paladox: is phab's oauth for meta broken?
[21:15:49] not that i'm aware off
[21:16:16] paladox: when trying to use oauth for phab it says You are not allowed to execute the action you have requested. on the oauth dialog on meta
[21:16:24] hmm
[21:17:34] Zppix: hmm, Is that you getting that or from AN?
[21:17:38] worked for me
[21:17:56] RhinosF1: idk who AN is but i cant login to it using oauth
[21:18:32] Zppix: working fine for me
[21:18:41] Zppix: Admin Noticeboard meta
[21:18:49] paladox: oauth is throttled by amount of actions an account is doing is it
[21:19:04] * RhinosF1 waits for totp
[21:19:04] isnt*
[21:19:57] mwoauthmanagemygrants is not assigned on meta
[21:20:14] wonder how that happened
[21:20:20] Voidwalker: since when?
[21:20:37] it can't be managed by ManageWiki either, so I'm not sure
[21:20:43] * RhinosF1 needs to look at something
[21:20:55] I mean jeez if you want me off phab that much you just have to say so JK
[21:21:53] may have to override it via localsettings (iirc) or somthing then
[21:23:11] Zppix: I'm working on it
[21:24:03] ah
[21:24:22] Voidwalker do i add mwoauthmanagemygrants to 'user'?
[21:25:24] [mw-config] The-Voidwalker opened pull request #2756: enable mwoauthmanagemygrants for users on meta - https://git.io/JeYXN
[21:25:25] paladox: iirc user is logged in
[21:25:30] like so
[21:25:37] paladox: https://github.com/miraheze/mw-config/blob/39b6ebe073f5ea390ad5b92dd600a8379ce5e360/LocalSettings.php#L1898 prohibits it from MW
[21:25:37] [ mw-config/LocalSettings.php at 39b6ebe073f5ea390ad5b92dd600a8379ce5e360 · miraheze/mw-config · GitHub ] - github.com
[21:25:48] yup
[21:27:11] paladox, https://git.io/JeYXN
[21:27:13] [ enable mwoauthmanagemygrants for users on meta by The-Voidwalker · Pull Request #2756 · miraheze/mw-config · GitHub ] - git.io
[21:27:25] Voidwalker thanks!!
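For readers unfamiliar with how such a right is granted: in MediaWiki terms the fix discussed here comes down to a single $wgGroupPermissions line in mw-config's LocalSettings.php. This is a sketch of the general pattern, not the literal diff from PR #2756:

    // grant the OAuth "manage my grants" right to all logged-in users;
    // the real change lives in miraheze/mw-config, not a standalone file like this
    $wgGroupPermissions['user']['mwoauthmanagemygrants'] = true;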
[21:27:30] i'm going to merge
[21:27:39] paladox: go ahead pls
[21:27:39] [mw-config] paladox closed pull request #2756: enable mwoauthmanagemygrants for users on meta - https://git.io/JeYXN
[21:27:41] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeYXh
[21:27:42] [miraheze/mw-config] The-Voidwalker 73686b1 - enable mwoauthmanagemygrants for users on meta (#2756)
[21:28:03] paladox: we get it from sysadmin group which is why we weren't impacted
[21:28:11] paladox: fixed
[21:28:13] ty
[21:28:21] Zppix: good
[21:28:22] RhinosF1 oh
[21:28:53] should possibly consider making it available to users on all wikis
[21:31:01] Yeh
[21:31:07] Voidwalker mind creating a task?
[21:31:14] *please
[21:32:09] could just create a new PR to move mwoauthmanagemygrants to the default section instead of the metawiki section
[21:32:26] Voidwalker: that'll work
[21:33:03] oddly enough, testwiki still has it available to users
[21:33:12] :O
[21:35:02] [mw-config] The-Voidwalker opened pull request #2757: make mwoauthmanagemygrants available by default - https://git.io/JeY1L
[21:35:19] paladox: any issues or shall I merge
[21:35:39] I doin't see any issues with that. Since wikis carn
[21:35:46] *carn't change it etc.
[21:35:50] +1 to merging
[21:36:13] * RhinosF1 will merge in a sec
[21:36:48] [mw-config] RhinosF1 closed pull request #2757: make mwoauthmanagemygrants available by default - https://git.io/JeY1L
[21:36:50] [miraheze/mw-config] RhinosF1 pushed 1 commit to master [+0/-0/±1] https://git.io/JeY1m
[21:36:51] [miraheze/mw-config] The-Voidwalker 73f1b11 - make mwoauthmanagemygrants available by default (#2757)
[21:36:55] :)
[21:38:41] paladox: seen as you've got puppet off you'll have to force it deploy for test1
[21:38:56] you can easily git pull :)
[21:39:04] sudo -u www-data git pull
[21:40:51] paladox: rhinos@test1:~$ sudo -u www-data git pull
[21:40:51] fatal: not a git repository (or any of the parent directories): .git
[21:41:01] wrong place
[21:41:05] /srv/mediawiki/config
[21:42:00] !log rhinos@test1:/srv/mediawiki/config$ sudo -u www-data git pull (to deploy local settings change due to puppet being off)
[21:42:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[21:51:44] also paladox is there a way to get rid of that stupid box that appears when editting a page
[21:51:59] Zppix: which stupid box? VE Welcome?
[21:52:04] yes
[21:52:10] Zppix it should remember when you click "Start Editing".
[21:52:19] paladox: "should"
[21:52:46] Zppix: only on a per wiki level but I've complained it shouldn't appear every time you edit on wm phab
[21:52:47] other's have reported that
[21:54:10] it happens on meta and other wikis
[21:54:24] Zppix: we know - see your PMs
[21:54:34] * RhinosF1 has seen it on wikimedia projects
[21:54:41] ugh
[22:15:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeY1u
[22:15:10] [miraheze/services] MirahezeSSLBot 5a16e42 - BOT: Updating services config for wikis
[22:26:20] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[22:27:07] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[22:27:09] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw2 mw3
[22:27:13] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[22:27:22] huh
[22:27:43] recovered
[22:28:20] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 8 backends are healthy
[22:29:07] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[22:29:09] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 8 backends are healthy
[22:29:13] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 8 backends are healthy
[22:40:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeY1y
[22:40:11] [miraheze/services] MirahezeSSLBot e76a8d2 - BOT: Updating services config for wikis
[23:23:04] !log upgrade phabricator - misc4
[23:23:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
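For the brief "Varnish Backends" blip at 22:26-22:29 above: the quickest way to see Varnish's own view of backend health on a cp host is its admin CLI. A sketch, assuming shell access to the affected proxy and a default varnishadm setup:

    # on the affected cp host; lists each backend and whether its probe reports Healthy or Sick
    varnishadm backend.list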