[00:00:58] PROBLEM - cp8 Disk Space on cp8 is WARNING: DISK WARNING - free space: / 2110 MB (10% inode=93%); [00:02:22] !log root@cloud1:/var/lib/vz/images/106# qemu-img convert vm-106-disk-0.qcow2 vm-106-disk-0.raw [00:02:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:02:52] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures [00:08:43] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuZE [00:08:45] [02miraheze/puppet] 07paladox 035e06f5c - jobrunner: Lower runners by 1 [00:15:18] !log root@cloud1:/var/lib/vz/images/100# qemu-img convert vm-100-disk-0.qcow2 vm-100-disk-0.raw [00:15:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:19:08] PROBLEM - cp6 Stunnel Http for test2 on cp6 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [00:22:40] PROBLEM - cp6 Stunnel Http for test2 on cp6 is UNKNOWN: NRPE: Command 'check_stunnel_test2' not defined [00:25:40] !log root@cloud1:/var/lib/vz/images/107# qemu-img convert vm-107-disk-0.qcow2 vm-107-disk-0.raw [00:25:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:36:41] !log varnish> ban req.http.Host == messengergeek.miraheze.org (cp8) [00:36:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:45:32] !log varnish> ban req.http.Host == messengergeek.miraheze.org (cp4) [00:45:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:57:00] !log varnish> ban req.http.Host == static.miraheze.org (cp8|cp4) [00:57:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [00:57:34] ah wrong host then? 
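For context on the ban entries above: a Varnish "ban" marks every cached object matching the given expression as invalid, so the next request for that hostname is fetched fresh from the MediaWiki backends instead of being served stale by the cache proxies. The "varnish>" prompt suggests these were typed at the varnishadm console on each cp host; a minimal sketch of the equivalent one-shot invocation (hostname illustrative, assuming a stock Varnish install) is:

    # invalidate all cached objects for one hostname on this cache proxy
    varnishadm ban 'req.http.Host == static.miraheze.org'

    # confirm the ban has been registered
    varnishadm ban.list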
[00:57:48] yeh [00:57:56] since the file page is fetching it from static.m.org [00:58:25] could have realized that sooner :P [00:59:27] :P [01:11:54] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jvunu [01:11:55] [02miraheze/puppet] 07paladox 03e7508f3 - varnish: Potential fix [01:16:47] RECOVERY - cp8 Disk Space on cp8 is OK: DISK OK - free space: / 4082 MB (21% inode=93%); [01:17:16] !log apt-get upgrade - cp3 [01:17:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [01:18:58] !log apt-get install linux-image-amd64 [01:19:02] !log that's on cp3 [01:19:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [01:19:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [01:20:35] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2448 MB (10% inode=93%); [01:21:19] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvunD [01:21:20] [02miraheze/puppet] 07paladox 030289ee6 - Update db.pp [01:21:41] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jvuny [01:21:43] [02miraheze/puppet] 07paladox 03cc48bc6 - Update firewall.yaml [01:27:11] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/Jvun7 [01:27:13] [02miraheze/puppet] 07paladox 031e3439f - Update varnish.pp [01:44:59] !log root@cloud1:/var/lib/vz/images/111# qemu-img convert vm-111-disk-0.qcow2 vm-111-disk-0.raw [01:45:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [01:53:46] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 2.42, 2.10, 1.29 [01:55:49] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 1.32, 1.67, 1.22 [01:55:58] !log root@cloud2:/var/lib/vz/images/115# qemu-img convert vm-115-disk-0.qcow2 vm-115-disk-0.raw [01:56:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [02:04:48] PROBLEM - ping4 on test2 is CRITICAL: PING CRITICAL - Packet loss = 100% [02:08:05] RECOVERY - ping4 on test2 is OK: PING OK - Packet loss = 0%, RTA = 0.35 ms [02:22:57] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb [02:24:57] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [06:27:38] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3319 MB (13% inode=93%); [06:40:26] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [06:42:22] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [06:45:02] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [06:47:45] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [07:26:07] !log reception@mw1:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki bigbrotherwikiwiki /home/reception/bigbrother_pages_full.xml [07:26:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [09:50:08] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuzD [09:50:10] [02miraheze/services] 07MirahezeSSLBot 03666b8b5 - BOT: Updating services config for wikis [12:14:47] Hello faag14! 
If you have any questions, feel free to ask and someone should answer soon. [14:31:42] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2607:5300:205:200::17f6/cpweb [14:33:44] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 51.161.32.127/cpweb [14:35:43] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [14:35:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [14:36:57] Hello Ontzak! If you have any questions, feel free to ask and someone should answer soon. [14:37:35] Reception123 Hello [14:38:17] Guest66168: hi, what can I help with? [14:39:41] Reception123 I don't know when launchs Miraheze Toolforge. I cannot register in Wikimedia Toolforge because Wikimedia team removes my request closing it, and when i can register in Miraheze Toolforge??? [14:43:02] Hello MikelTube! If you have any questions, feel free to ask and someone should answer soon. [14:43:29] Reception123 Sorry. [14:45:57] PROBLEM - wiki.starship.digital - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.starship.digital' expires in 15 day(s) (Fri 13 Mar 2020 02:43:31 PM GMT +0000). [14:46:10] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuP4 [14:46:12] [02miraheze/ssl] 07MirahezeSSLBot 033c5dddd - Bot: Update SSL cert for wiki.starship.digital [14:53:54] RECOVERY - wiki.starship.digital - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.starship.digital' will expire on Tue 26 May 2020 01:46:04 PM GMT +0000. [15:17:38] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4222 bytes in 0.446 second response time [15:17:44] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4222 bytes in 0.515 second response time [15:17:51] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [15:17:58] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4222 bytes in 0.399 second response time [15:17:59] PROBLEM - cp4 HTTPS on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4150 bytes in 0.007 second response time [15:17:59] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4224 bytes in 0.424 second response time [15:18:25] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [15:18:34] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [15:18:36] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 4 backends are down. 
mw1 mw2 mw3 lizardfs6 [15:18:40] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4222 bytes in 0.113 second response time [15:18:41] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 97% [15:18:59] PROBLEM - mw4 MediaWiki Rendering on mw4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.131 second response time [15:19:06] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 78% [15:19:10] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [15:19:16] PROBLEM - jobrunner1 MediaWiki Rendering on jobrunner1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.087 second response time [15:19:41] PROBLEM - mw5 MediaWiki Rendering on mw5 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.107 second response time [15:20:00] PROBLEM - test2 MediaWiki Rendering on test2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.118 second response time [15:20:14] PROBLEM - mw6 MediaWiki Rendering on mw6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.152 second response time [15:20:14] PROBLEM - mw7 MediaWiki Rendering on mw7 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.131 second response time [15:20:30] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [15:21:42] PROBLEM - cp7 Varnish Backends on cp7 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [15:21:46] PROBLEM - cp6 Varnish Backends on cp6 is CRITICAL: 4 backends are down. 
mw1 mw2 mw3 lizardfs6 [15:23:07] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is WARNING: WARNING - NGINX Error Rate is 59% [15:24:53] PROBLEM - misc1 HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:25:11] PROBLEM - misc1 icinga.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:25:15] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 15% [15:27:24] RECOVERY - mw4 MediaWiki Rendering on mw4 is OK: HTTP OK: HTTP/1.1 200 OK - 18674 bytes in 2.260 second response time [15:27:49] RECOVERY - cp4 HTTPS on cp4 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1544 bytes in 0.008 second response time [15:27:59] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18673 bytes in 0.847 second response time [15:28:45] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 27% [15:32:01] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4222 bytes in 0.424 second response time [15:32:24] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 66% [15:32:50] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 90% [15:33:58] PROBLEM - mw4 MediaWiki Rendering on mw4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4225 bytes in 0.182 second response time [15:34:30] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 23% [15:34:54] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 54% [15:36:57] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 91% [15:38:13] !log restart mysql - db4 ran out of space [15:38:35] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 74% [15:38:53] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 92% [15:39:43] RECOVERY - misc1 HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 334 bytes in 0.068 second response time [15:39:54] RECOVERY - misc1 icinga.miraheze.org HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 334 bytes in 0.009 second response time [15:41:12] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115) [15:43:05] PROBLEM - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is CRITICAL: CRITICAL - NGINX Error Rate is 79% [15:43:11] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [15:43:13] RECOVERY - db4 MySQL on db4 is OK: Uptime: 284 Threads: 76 Questions: 6811 Slow queries: 368 Opens: 843 Flush tables: 1 Open tables: 837 Queries per second avg: 23.982 [15:44:15] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 1.850 second response time [15:44:46] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 30% [15:45:03] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 32% [15:46:00] RECOVERY - cp7 HTTP 4xx/5xx ERROR Rate on cp7 is OK: OK - NGINX Error Rate is 3% [15:47:16] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 8 backends are down. 
mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [15:47:33] holy shit that guy is making a lot of pages on bluepageswiki [15:47:38] it's filling up the feed channel [15:47:52] it's not vandalism, just annoying, but not much I can do about it [15:48:16] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4220 bytes in 0.418 second response time [15:48:52] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 82% [15:49:00] paladox: [15:49:07] FoxsideMiners hi [15:49:07] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 77% [15:49:13] might want to do something about all these errors [15:49:21] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115) [15:49:42] FoxsideMiners I am :) [15:49:50] okay [15:49:53] don't break anything [15:49:56] PROBLEM - phab1 phabricator.miraheze.org HTTPS on phab1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4216 bytes in 0.050 second response time [15:51:20] RECOVERY - db4 MySQL on db4 is OK: Uptime: 263 Threads: 52 Questions: 8803 Slow queries: 423 Opens: 835 Flush tables: 1 Open tables: 829 Queries per second avg: 33.471 [15:51:21] RECOVERY - mw7 MediaWiki Rendering on mw7 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 0.935 second response time [15:51:32] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18673 bytes in 0.895 second response time [15:51:41] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [15:51:50] stuff seems to be working again [15:52:08] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 1.827 second response time [15:52:12] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18674 bytes in 1.979 second response time [15:52:25] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 0.960 second response time [15:52:27] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18673 bytes in 0.847 second response time [15:52:44] RECOVERY - phab1 phabricator.miraheze.org HTTPS on phab1 is OK: HTTP OK: HTTP/1.1 200 OK - 19053 bytes in 1.018 second response time [15:52:46] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [15:52:48] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [15:52:49] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [15:52:52] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 5% [15:52:53] RECOVERY - cp7 Varnish Backends on cp7 is OK: All 11 backends are healthy [15:52:53] RECOVERY - cp6 Varnish Backends on cp6 is OK: All 11 backends are healthy [15:52:55] * hispano76 greetings [15:53:09] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [15:53:09] RECOVERY - jobrunner1 MediaWiki Rendering on jobrunner1 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 1.440 second response time [15:53:11] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 1% [15:53:28] RECOVERY - mw4 MediaWiki Rendering on mw4 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 1.357 second response time [15:53:28] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 3% [15:53:46] RECOVERY - mw6 MediaWiki Rendering on mw6 is OK: HTTP OK: HTTP/1.1 200 OK - 18675 bytes in 1.527 second response time [15:53:51] RECOVERY -
test2 MediaWiki Rendering on test2 is OK: HTTP OK: HTTP/1.1 200 OK - 18674 bytes in 1.038 second response time [15:54:00] RECOVERY - mw5 MediaWiki Rendering on mw5 is OK: HTTP OK: HTTP/1.1 200 OK - 18674 bytes in 1.740 second response time [15:54:05] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [16:31:30] Hello Costamente! If you have any questions, feel free to ask and someone should answer soon. [16:32:12] hello Reception123 [16:32:22] RhinosF1 [16:32:25] RhinosF1 Hello [16:32:45] grumble: ^ [16:33:30] RhinosF1 Hello!!! When Miraheze Toolforge is available??? [16:34:06] Never for you [16:34:26] RhinosF1 Why??? [16:34:43] I’m hoping to fund it :) [16:34:51] And will be banning you [16:34:57] Reception123: kick pls as well [16:38:18] Hello Constantemente! If you have any questions, feel free to ask and someone should answer soon. [16:38:35] Reception123: ^ [16:38:41] RhinosF1 WTF??? [16:48:02] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 6 backends are down. mw1 mw2 lizardfs6 mw4 mw5 mw7 [16:48:31] PROBLEM - cp3 Stunnel Http for mon1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:50:00] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [16:50:10] PROBLEM - cp3 Stunnel Http for mw5 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:50:23] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb [16:50:47] PROBLEM - cp3 Stunnel Http for mw7 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:51:12] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [16:52:39] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:53:16] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:53:18] RECOVERY - cp3 Stunnel Http for mw7 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.752 second response time [16:54:29] PROBLEM - cp3 Stunnel Http for mw4 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:54:39] RECOVERY - cp3 Stunnel Http for mw5 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.751 second response time [16:54:40] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [16:54:46] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.07, 0.13, 0.18 [16:55:02] PROBLEM - cp3 Stunnel Http for mw6 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:56:46] RECOVERY - cp3 Stunnel Http for mw4 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.766 second response time [16:58:57] PROBLEM - cp3 Stunnel Http for mw7 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [16:59:41] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [16:59:43] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [17:00:38] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [17:02:34] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2697 MB (11% inode=93%); [17:04:08] .... 
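The run of "CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds" alerts above means the monitoring host could not get an answer from the NRPE agent on cp3 within its 10-second window, which points at the host or its network path rather than any single service. A rough way to reproduce one of these checks by hand from the monitoring server (host and check names illustrative, assuming the stock Debian plugin path) would be:

    # ask cp3's NRPE agent to run one of its defined checks, with the same 10s timeout
    /usr/lib/nagios/plugins/check_nrpe -H cp3.miraheze.org -c check_stunnel_mw7 -t 10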
[17:04:21] well there seems to be network slowness on cp3 [17:04:28] RECOVERY - cp3 Stunnel Http for mw6 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.763 second response time [17:04:40] !log reboot cp3 - network felt slow [17:04:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [17:04:57] RECOVERY - cp3 Current Load on cp3 is OK: OK - load average: 0.23, 0.05, 0.02 [17:05:49] RECOVERY - cp3 Stunnel Http for mw7 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15288 bytes in 0.755 second response time [17:05:53] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [17:05:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [17:06:16] !log removing old kernel from cp3 using sudo apt autoremove [17:06:16] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 12% [17:06:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [17:06:51] RECOVERY - cp3 Stunnel Http for mon1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 29496 bytes in 1.272 second response time [17:07:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [17:08:38] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [17:27:00] Interesting, there's a new beta feature for a better way to deal with edit conflicts [17:28:08] Reception123: TwoColEditConflict? [17:28:50] maybe, just saw a post about it didn't look much [17:29:04] Reception123: link? [17:30:01] RhinosF1: found it, https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Edit_Conflicts#Edit_conflicts_on_talk_pages [17:30:02] [ WMDE Technical Wishes/Edit Conflicts - Meta ] - meta.wikimedia.org [17:31:17] Reception123: on talk pages is new [17:31:24] Yey [17:32:06] RhinosF1: I really like the idea would make it way nicer [17:32:29] Yeah, if it’s as good as on normal pages then it should [19:03:48] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2607:5300:205:200::17f6/cpweb [19:05:44] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [20:06:23] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 4 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[php7.3-redis] [20:06:24] !log move the previous import to jobrunner1 (per redis issues) [20:06:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [20:22:46] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 49 seconds ago with 0 failures [20:29:46] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 2607:5300:205:200::17f6/cpweb [20:31:45] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:43:10] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3 [20:43:31] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 2 backends are down. 
mw2 mw3 [20:43:59] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [20:44:12] paladox: ^ [20:45:09] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [20:45:25] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [20:46:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:04:57] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:11:08] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuFc [21:11:10] [02miraheze/dns] 07paladox 031f043e1 - Add bacula2 to dns [21:13:15] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:21:05] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFK [21:21:07] [02miraheze/puppet] 07paladox 035f85318 - bacula: Setup bacula2 as the backup host [21:21:08] [02puppet] 07paladox created branch 03paladox-patch-11 - 13https://git.io/vbiAS [21:21:14] [02puppet] 07paladox opened pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:22:13] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+1/-0/±0] 13https://git.io/JvuFi [21:22:14] [02miraheze/puppet] 07paladox 0392dbcbc - Create bacula2.yaml [21:22:16] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:23:03] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuF1 [21:23:04] [02miraheze/puppet] 07paladox 030bef009 - Update bacula-fd.conf [21:23:06] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:23:45] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFD [21:23:47] [02miraheze/puppet] 07paladox 03bb647b9 - Update bacula-sd.conf [21:23:48] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:24:06] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFy [21:24:08] [02miraheze/puppet] 07paladox 032afa6b9 - Update tray-monitor.conf [21:24:09] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:26:14] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuF5 [21:26:15] [02miraheze/puppet] 07paladox 039c5c39b - Update bacula-dir.conf [21:26:17] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:26:24] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFb [21:26:26] [02miraheze/puppet] 07paladox 030a6677e - Update bconsole.conf [21:26:27] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:26:40] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFx [21:26:42] [02miraheze/puppet] 07paladox 03f4c4a3b - 
Update bacula-fd.conf [21:26:43] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:26:53] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-11 [+0/-0/±1] 13https://git.io/JvuFh [21:26:54] [02miraheze/puppet] 07paladox 03baf7505 - Update groups.conf [21:26:56] [02puppet] 07paladox synchronize pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:27:13] [02puppet] 07paladox closed pull request 03#1264: bacula: Setup bacula2 as the backup host - 13https://git.io/JvuF6 [21:27:15] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+1/-0/±8] 13https://git.io/JvuFj [21:27:16] [02miraheze/puppet] 07paladox 030ad7342 - bacula: Setup bacula2 as the backup host (#1264) * bacula: Setup bacula2 as the backup host * Create bacula2.yaml * Update bacula-fd.conf * Update bacula-sd.conf * Update tray-monitor.conf * Update bacula-dir.conf * Update bconsole.conf * Update bacula-fd.conf * Update groups.conf [21:28:14] PROBLEM - bacula1 Puppet on bacula1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 7 minutes ago with 4 failures [21:31:09] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvubL [21:31:10] [02miraheze/puppet] 07paladox 03e23b0c8 - fix [21:31:45] PROBLEM - bacula1 Bacula Static on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: lizardfs6-fd [21:32:38] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: db4-fd [21:33:48] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: db5-fd [21:41:09] the h [21:44:52] paladox: ? [21:44:57] ? [21:52:02] paladox: whats with icinga [21:52:14] as you can see i'm setting up bacula2 [21:52:48] Ok [21:52:57] c^: expected [22:12:10] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuNa [22:12:12] [02miraheze/dns] 07paladox 03d5e19eb - bacula2: Add ipv6 address [22:14:26] [02puppet] 07paladox deleted branch 03paladox-patch-11 - 13https://git.io/vbiAS [22:14:28] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-11 [22:39:49] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuAF [22:39:51] [02miraheze/dns] 07paladox 03218342c - Update miraheze.org [22:58:33] [02landing] 07Pix1234 closed pull request 03#28: Add translations to Polish - 13https://git.io/JvlZ5 [22:58:35] [02miraheze/landing] 07Pix1234 pushed 031 commit to 03master [+0/-0/±3] 13https://git.io/Jvuxn [22:58:36] [02miraheze/landing] 07alex4401 03c925706 - Add translations to Polish (#28) * Add Polish translation of strings in translations.php * Add Polish as a language to choose * Add an option to choose Polish on the donation page * Add a semicolon that went missing after adding PL translation of a string [23:03:35] [02landing] 07Pix1234 opened pull request 03#29: ensure github issues are notified to use phabriactor instead - 13https://git.io/JvuxR [23:03:44] [02landing] 07Pix1234 closed pull request 03#29: ensure github issues are notified to use phabriactor instead - 13https://git.io/JvuxR [23:03:45] [02miraheze/landing] 07Pix1234 pushed 031 commit to 03master [+2/-0/±0] 13https://git.io/Jvux0 [23:03:47] [02miraheze/landing] 07Pix1234 0311920f6 - ensure github issues are notified to use phabriactor instead (#29) * Create ISSUE_TEMPLATE.md * Create ISSUE_REPLY_TEMPLATE.md [23:12:52] [02miraheze/dns] 07paladox 
pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvuxD [23:12:54] [02miraheze/dns] 07paladox 033da2673 - bacula2: Remove ipv6 address [23:18:02] !log apt-get dist-upgrade - phab1 [23:18:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:24:55] !log root@bacula2:/home/paladox# ip -6 address add 2604:180:f3::382/64 dev ens3 [23:25:02] PROBLEM - phab1 Puppet on phab1 is CRITICAL: CRITICAL: Puppet has 5 failures. Last run 3 minutes ago with 5 failures. Failed resources (up to 3 shown): Package[nagios-plugins],Package[php7.3-cli],Package[php7.3-fpm],Package[php7.3-dev] [23:25:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:25:37] !log root@bacula2:/home/paladox# ip addr del fe80::f816:3eff:fe31:f5f/64 dev ens3 [23:25:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:30:37] RECOVERY - phab1 Puppet on phab1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [23:32:02] PROBLEM - cp8 Disk Space on cp8 is WARNING: DISK WARNING - free space: / 2108 MB (10% inode=93%); [23:45:30] !log root@cloud2:/var/lib/vz/images/113# qemu-img convert vm-113-disk-0.qcow2 vm-113-disk-0.raw [23:45:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:48:38] Phab not working? [23:49:10] hispano76: it looks like paladox may be doing stuff with it [23:49:26] Yeah, migrating phab to a raw image [23:50:22] Ah, it was weird that my wiki worked and not Phab hehe :) [23:50:55] hispano76: that's because phab is hosted on a different server from the wikis [23:51:21] :) [23:51:44] Although they share some "servers"? [23:51:58] hispano76: what do you mean? [23:52:27] !log root@cloud2:/var/lib/vz/images/113# rm vm-113-disk-0.qcow2 [23:52:32] Phab's now back up [23:52:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:52:45] Um, I don't know how to explain :/ [23:53:20] hispano76: if you're talking about how the wikis are hosted, they're load balanced between several MediaWiki hosts [23:54:08] Yeah, I do know [23:54:43] Phab's a VPS on the cloud hosts. [23:54:58] MediaWiki will be on the cloud hosts too, but we only have one phab VPS [23:55:04] whereas we load balance mw* [23:56:17] ok
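Pieced together from the !log entries through the day, the qcow2-to-raw migration on the cloud hosts appears to follow a simple pattern: convert the VM's disk image in place on the Proxmox host, check the result, and delete the old qcow2 once the VM is confirmed healthy (as done for vm-113 above). A minimal sketch of that sequence (VM ID and paths illustrative; the -p flag only adds a progress bar) is:

    cd /var/lib/vz/images/113

    # convert the copy-on-write image to a raw image alongside it
    qemu-img convert -p -f qcow2 -O raw vm-113-disk-0.qcow2 vm-113-disk-0.raw

    # sanity-check the new image before pointing the VM at it
    qemu-img info vm-113-disk-0.raw

    # once the VM is running from the raw image, reclaim the space
    rm vm-113-disk-0.qcow2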