[00:18:19] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:18:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[00:21:57] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[00:24:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[00:34:12] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jel6h
[00:34:13] [miraheze/puppet] paladox 37f0621 - Update default.vcl
[01:02:38] [puppet] paladox created branch paladox-patch-2 - https://git.io/vbiAS
[01:02:39] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/Jeliq
[01:02:41] [miraheze/puppet] paladox 01c04fa - Update mediawiki-includes.conf.erb
[01:02:42] [puppet] paladox opened pull request #1097: Update mediawiki-includes.conf.erb - https://git.io/Jelim
[01:03:49] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JeliY
[01:03:51] [miraheze/puppet] paladox 920fca8 - Update mediawiki.conf
[01:03:52] [puppet] paladox synchronize pull request #1097: Update mediawiki-includes.conf.erb - https://git.io/Jelim
[01:04:04] [puppet] paladox closed pull request #1097: Update mediawiki-includes.conf.erb - https://git.io/Jelim
[01:04:05] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/JeliO
[01:04:07] [miraheze/puppet] paladox 1cdf6ec - Update mediawiki-includes.conf.erb (#1097) * Update mediawiki-includes.conf.erb * Update mediawiki.conf
[01:04:08] [puppet] paladox deleted branch paladox-patch-2 - https://git.io/vbiAS
[01:04:10] [miraheze/puppet] paladox deleted branch paladox-patch-2
[02:07:31] !log keeping puppet disabled on mw[123] overnight. php-fpm config hacked to experiment with different settings.
[02:07:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[04:24:57] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[04:27:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[06:26:40] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3071 MB (12% inode=94%);
[07:40:02] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/Jel1p
[07:40:04] [miraheze/ssl] Reception123 0383cae - add wiki.apap04.com
[07:44:34] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[07:45:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[07:45:07] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[07:45:07] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[07:45:38] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[07:48:41] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[07:49:13] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[07:50:50] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[07:51:14] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[07:51:14] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[07:53:55] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 9.14, 7.69, 5.94
[07:54:36] Reception123: back ^ load is high though
[07:57:33] [miraheze/dns] Reception123 pushed 1 commit to master [+1/-0/±0] https://git.io/JelMf
[07:57:34] [miraheze/dns] Reception123 7070724 - add oecumene.org zone
[08:01:32] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 5.96, 7.39, 6.60
[08:04:48] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.91, 6.13, 6.24
[08:06:45] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[08:07:04] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[08:07:12] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[08:09:20] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:09:20] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[08:10:33] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:12:42] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/JelMt
[08:12:44] [miraheze/ssl] Reception123 c04995a - add oecumene.org cert
[08:13:20] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:14:19] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:15:25] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:15:43] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:16:15] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:16:48] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:17:04] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 50%
[08:17:17] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.389 second response time
[08:18:05] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 8.954 second response time
[08:18:10] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw3
[08:18:24] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 5.374 second response time
[08:18:50] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.005 second response time
[08:18:57] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[08:18:59] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.390 second response time
[08:19:00] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[08:19:23] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.695 second response time
[08:19:45] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.006 second response time
[08:20:01] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 3%
[08:20:20] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.389 second response time
[08:20:21] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[08:20:47] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[08:21:06] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[08:21:23] How long will it be before you fix your 503 errors that happen like three times a day? Otherwise I will have to move my wikis somewhere else. Sorry if you don't want to hear this
[08:21:55] BurningPrincess: we're looking into it, hopefully sometime soon
[08:22:52] RhinosF1: Thank you, it's just getting annoying and it's happening a lot. I really don't want to end up having to move the wiki
[08:25:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JelMG
[08:25:10] [miraheze/services] MirahezeSSLBot d95de48 - BOT: Updating services config for wikis
[08:25:33] BurningPrincess: I understand, stay with us and we'll get there. Donations welcome if you can though, no matter how small
[08:29:04] RhinosF1: I am sorry, it's annoying to have the wiki 503 when I needed to link it... I can't donate, sorry - I don't even have any bitcoin or anything
[08:30:24] BurningPrincess: no problem
[08:31:27] Moving a wiki would be annoying anyway, as I would have to update a journal full of links and tell everyone who is writing it to write on the new wiki
[08:33:12] BurningPrincess: you won't have to move
[08:34:12] Thank you. ShoutWiki is really annoying in that it does not allow XML dump importing, which is a real pain
[08:34:49] That is, yes
[08:36:04] I don't know why they don't allow it though
[08:36:43] Me neither
[08:38:11] My wiki would be a pain to move via copy-pasting, though I did do that before
[08:38:56] Yeah that's not easy
[08:39:13] RhinosF1: should we move this to PM?
[08:39:53] It's fine here but you can
[09:28:36] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[09:31:38] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:52:43] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2
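The recurring "backends are down" / "All 5 backends are healthy" flaps above come from Varnish's own health probes on the cp* cache proxies. As an aside, the same state can be inspected by hand on a proxy with the stock varnishadm CLI - a manual equivalent, not necessarily how the Icinga check itself is implemented:

    # On a cp* host: list every backend and its current health as varnishd sees it.
    varnishadm backend.list
    # Add probe detail for backends matching a glob, e.g. just mw1:
    varnishadm backend.list -p '*mw1*'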
[09:55:21] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[12:03:47] hi
[12:04:24] sdiy wiki logo: the png preview of the svg is all black https://sdiy.info/wiki/File:SDIY_wiki_logo.svg
[12:04:25] [ File:SDIY wiki logo.svg - Synth DIY wiki ] - sdiy.info
[12:06:50] PROBLEM - mh142.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'mh142.com' expires in 15 day(s) (Mon 28 Oct 2019 12:03:08 PM GMT +0000).
[12:06:59] any idea what settings are required while exporting to svg from Adobe Illustrator?
[12:07:04] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JelyB
[12:07:05] [miraheze/ssl] MirahezeSSLBot 0516c75 - Bot: Update SSL cert for mh142.com
[12:08:13] PROBLEM - www.mh142.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'mh142.com' expires in 15 day(s) (Mon 28 Oct 2019 12:03:08 PM GMT +0000).
[12:12:06] RECOVERY - mh142.com - LetsEncrypt on sslhost is OK: OK - Certificate 'mh142.com' will expire on Fri 10 Jan 2020 11:06:58 AM GMT +0000.
[12:13:25] RECOVERY - www.mh142.com - LetsEncrypt on sslhost is OK: OK - Certificate 'mh142.com' will expire on Fri 10 Jan 2020 11:06:58 AM GMT +0000.
[12:17:11] never mind, fixed by opening and re-saving the svg from Inkscape
[12:53:22] [mw-config] GustaveLondon776 opened pull request #2770: Allow cvt block the email - https://git.io/JelSL
[12:53:52] ^ declined, should be done via an on-wiki RfC
[12:54:21] [mw-config] RhinosF1 commented on pull request #2770: Allow cvt block the email - https://git.io/JelSY
[12:54:22] [mw-config] RhinosF1 closed pull request #2770: Allow cvt block the email - https://git.io/JelSL
[12:55:17] paladox: Can you remind him about submitting pointless PRs, as my internet is slow? He's already been warned once
[12:56:20] Is it pointless though?
[12:56:30] Maybe he didn't know it needed an RfC?
[12:56:47] RhinosF1 ^
[12:57:22] paladox: yes, and not just an RfC - it's never been needed for global groups in MediaWiki
[12:57:36] You can use Special:GlobalGroupPermissions
[12:58:12] Also, he's been told before to look at what's on Phabricator or ask us rather than just making PRs
[12:58:27] oh ok
[12:59:48] I feel that if I remind him, it'll discourage others from contributing.
[13:00:11] paladox: word it well then, or shall I do it?
[13:00:25] can you do it please?
[13:00:34] Yes
[13:00:40] make sure that it doesn't discourage others from contributing please.
[13:04:14] {{done}} - I've explicitly said that you can ask us first if you are not sure
[13:04:35] thanks
[13:06:52] .status mhtest on internet about as slow as snail mail
[13:07:03] That might cause a bug
[13:08:06] .status mhtest bot
[13:08:09] RhinosF1 updating User:RhinosF1/Status!
[13:08:17] RhinosF1: Done!
[13:08:31] .status mhtest on internet about as slow as snail mail
[13:08:50] I've probably just crashed ZppixBot making so many tests
[13:10:24] Zppix: I found another bug, will fix it tonight
[13:11:04] .status mhtest on-internet-about-as-slow-as-snail-mail
[13:11:06] RhinosF1 updating User:RhinosF1/Status!
[13:11:15] RhinosF1: Done!
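On the SVG question at 12:04-12:17: the manual fix (open and re-save in Inkscape) can also be done headlessly. A sketch, assuming the Inkscape 0.9x command line of the era; re-saving as plain SVG typically drops the Illustrator-specific markup that confuses MediaWiki's rasterizer:

    # Inkscape 0.9x: re-save the Illustrator export as plain SVG.
    inkscape SDIY_wiki_logo.svg --export-plain-svg=SDIY_wiki_logo-plain.svg
    # Inkscape 1.x renamed the flags:
    # inkscape SDIY_wiki_logo.svg --export-plain-svg --export-filename=SDIY_wiki_logo-plain.svg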
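The mh142.com messages above show the sslhost expiry monitor firing and MirahezeSSLBot renewing the certificate. One way to verify the served certificate's expiry by hand with standard openssl tooling - a manual check, not necessarily what the monitor itself runs:

    # Print the notAfter date of the certificate mh142.com is actually serving.
    echo | openssl s_client -servername mh142.com -connect mh142.com:443 2>/dev/null \
        | openssl x509 -noout -enddate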
[13:13:16] [puppet] paladox created branch paladox-patch-2 - https://git.io/vbiAS
[13:13:17] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSE
[13:13:19] [miraheze/puppet] paladox 028f069 - php: Tweek config
[13:13:20] [puppet] paladox opened pull request #1098: php: Tweek config - https://git.io/JelSu
[13:14:16] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSg
[13:14:18] [miraheze/puppet] paladox 75b5b9d - Update php.pp
[13:14:20] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:17:41] [puppet] paladox closed pull request #1092: php: Reduce emergency_restart_interval to 30s - https://git.io/JeZB8
[13:20:14] [puppet] paladox deleted branch paladox-patch-4 - https://git.io/vbiAS
[13:20:15] [miraheze/puppet] paladox deleted branch paladox-patch-4
[13:21:46] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSr
[13:21:48] [miraheze/puppet] paladox 4b35c7d - Update php_fpm.pp
[13:21:49] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:22:02] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSo
[13:22:03] [miraheze/puppet] paladox abb816b - Update mw1.yaml
[13:22:05] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:22:16] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelS6
[13:22:17] [miraheze/puppet] paladox f3103e5 - Update mw2.yaml
[13:22:19] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:22:27] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSi
[13:22:28] [miraheze/puppet] paladox 0009151 - Update mw3.yaml
[13:22:30] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:22:38] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-1/±0] https://git.io/JelSP
[13:22:39] [miraheze/puppet] paladox 8a591ae - Delete mw4.yaml
[13:22:41] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:24:06] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelSX
[13:24:07] [miraheze/puppet] paladox 2a61dd0 - Update misc2.yaml
[13:24:09] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[13:24:24] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelS1
[13:24:26] [miraheze/puppet] paladox 277f63e - Update init.pp
[13:24:27] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[14:06:15] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[14:07:10] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[14:07:16] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
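The PR #1098 stream above (php.pp, php_fpm.pp, per-host mw*.yaml) is the php-fpm tuning trailed in the 02:07 !log. The real values live in the PR; the excerpt below only illustrates the kind of php-fpm directives involved. The 30s figure is taken from the title of the closed PR #1092; every other value is a placeholder, not Miraheze's actual config:

    ; Global php-fpm.conf: restart the master if children keep dying in a short window.
    emergency_restart_threshold = 5      ; placeholder value
    emergency_restart_interval = 30s     ; figure from PR #1092's title
    ; Pool config: process-manager knobs of the sort tuned in experiments like this.
    pm = dynamic                         ; placeholder
    pm.max_children = 32                 ; placeholder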
[14:09:47] known ^
[14:11:30] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.56, 7.13, 6.03
[14:14:14] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:14:22] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.28, 6.57, 6.02
[14:15:12] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:15:31] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[14:23:40] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:24:14] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:24:17] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:26:26] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:26:53] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:26:56] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[15:15:31] .addchannel #miraheze-offtopic
[15:15:31] RhinosF1: Request sent! Action upon the request should be taken shortly. Thank you for using ZppixBot!
[15:19:25] o.o
[15:31:13] Zppix: sudo php -u www-data importImages.php --wiki wikidb /path/to/file/ - the -u www-data would go before the php
[15:31:15] Hey apap
[15:31:36] Reception123: shoot, I knew that :P that's me multitasking :P
[15:32:25] Reception123: also, I meant to add to that: if there's a huge number of images that could slow the process down, I would use nice as required
[15:32:25] hey
[15:32:28] apap: 2019-10-07 - 05:38:19UTC tell apap Wikimedia cloaks can take a long time to be issued, don't worry. I waited a few months when I had one.
[15:32:34] ok
[15:32:55] oh yeah, I saw that on discord ;), thanks
[15:33:31] Hi apap
[15:33:43] hello RhinosF1
[15:34:50] apap: feel free to join us in #miraheze-offtopic as well now
[15:38:13] yay
[15:42:09] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:43:00] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[15:43:00] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw2
[15:44:09] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:44:49] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:45:33] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3
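Putting the 15:31 correction together: sudo's -u flag goes before php, and nice can be prepended for big batches as Zppix suggests at 15:32. "wikidb" and the path are placeholders, exactly as in the original message:

    # Run MediaWiki's importImages.php maintenance script as the web user.
    sudo -u www-data php importImages.php --wiki wikidb /path/to/file/
    # Large-batch variant: lowest CPU priority so the app server stays responsive.
    sudo -u www-data nice -n 19 php importImages.php --wiki wikidb /path/to/file/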
[15:47:05] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:47:35] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 0.004 second response time
[15:48:16] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:48:18] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:49:04] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:49:05] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[15:52:23] Zppix: look at the Tech namespace, there's some (reasonably/now) up-to-date docs there
[15:52:35] paladox: jeez, I didn't mean to take down varnish by accepting a wiki request :P jk
[15:52:42] lol
[15:53:20] RhinosF1: (I know it's unrelated) but it seems I don't need sysadmin to break the servers, I can do it by pressing create wiki :P
[15:53:41] Zppix: heh
[16:02:56] ATTENTION: The relay will be going offline for a short period of time to deploy a new config change!!! It should resume normal operation shortly!
[16:03:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2604:180:0:33b::2/cpweb
[16:06:42] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:43:13] paladox: how do the mw hosts keep each other updated with the same info?
[16:43:31] Zppix what do you mean by that?
[16:44:01] paladox: like how do they keep track of each other's status, like what they have stored
[16:44:22] paladox: like if you run a script on mw2, how does it keep the other mw hosts updated with the same info
[16:44:31] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[16:44:33] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:45:14] Zppix the db
[16:45:46] paladox: oh, so they don't have anything locally?
[16:46:09] Not really, the dblist is updated by a script pulling in from the db every 10 mins.
[16:46:28] Files are stored on a network file storage so all mw* can access them
[16:46:31] ah
[16:47:05] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:47:07] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[17:12:09] !log stopping lizardfs-master on misc3, short downtime
[17:12:26] !log resizing misc3 to 3gb SKVM
[17:14:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:16:24] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:16:41] ^ blame paladox
[17:16:41] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[17:16:56] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[17:18:10] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.293 second response time
[17:18:40] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.482 second response time
[17:19:58] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3
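A minimal sketch of the pattern paladox describes at 16:46 - a periodic job on each mw* host regenerating the local wiki list from the central database every 10 minutes. All names here (db host, table, column, paths) are illustrative, not Miraheze's actual tooling:

    #!/bin/bash
    # refresh-dblist.sh (illustrative): rebuild the local dblist from the db.
    mysql -h db1 -N -e 'SELECT wiki_dbname FROM cw_wikis' > /srv/mediawiki/dblists/all.dblist.tmp
    # Atomic swap so readers never see a half-written list.
    mv /srv/mediawiki/dblists/all.dblist.tmp /srv/mediawiki/dblists/all.dblist
    # Cron entry matching the 10-minute cadence mentioned above:
    # */10 * * * * www-data /usr/local/bin/refresh-dblist.sh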
[17:20:21] PROBLEM - misc3 Lizardfs Master Port 2 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9420: Connection refused
[17:20:25] PROBLEM - misc3 Lizardfs Master Port 3 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9421: Connection refused
[17:20:25] PROBLEM - misc3 Lizardfs Master Port 1 on misc3 is CRITICAL: connect to address 185.52.1.71 and port 9419: Connection refused
[17:20:33] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[17:21:30] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[17:21:55] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[17:22:44] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24592 bytes in 0.425 second response time
[17:23:05] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24586 bytes in 0.632 second response time
[17:23:28] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[17:24:19] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[17:24:47] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[17:25:30] PROBLEM - misc3 Current Load on misc3 is CRITICAL: CRITICAL - load average: 11.50, 8.37, 3.81
[17:27:16] PROBLEM - misc3 zotero on misc3 is CRITICAL: connect to address 185.52.1.71 and port 1969: Connection refused
[17:30:59] RECOVERY - misc3 zotero on misc3 is OK: TCP OK - 0.001 second response time on 185.52.1.71 port 1969
[17:32:10] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 9 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[17:32:55] RECOVERY - misc3 Current Load on misc3 is OK: OK - load average: 2.24, 0.68, 0.24
[17:35:01] RECOVERY - misc3 Lizardfs Master Port 2 on misc3 is OK: TCP OK - 0.001 second response time on 185.52.1.71 port 9420
[17:35:08] RECOVERY - misc3 Lizardfs Master Port 1 on misc3 is OK: TCP OK - 0.002 second response time on 185.52.1.71 port 9419
[17:35:10] RECOVERY - misc3 Lizardfs Master Port 3 on misc3 is OK: TCP OK - 0.003 second response time on 185.52.1.71 port 9421
[17:37:23] PROBLEM - lizardfs4 Puppet on lizardfs4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 5 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[17:38:21] https://www.youtube.com/watch?v=sonLd-32ns4 :P
[17:38:21] [ Skeeter Davis ~ The End of The World (1962) - YouTube ] - www.youtube.com
[17:40:15] !log [18:12:26] <+paladox> !log resizing misc3 to 3gb SKVM (bst time)
[17:40:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:40:24] Zppix hahaha
[17:42:27] :P
[17:42:49] RECOVERY - lizardfs4 Puppet on lizardfs4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[17:43:16] paladox: that's one of my fave Fallout songs too xD
[17:43:21] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[17:43:22] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[17:44:22] Zppix lol
[17:44:37] paladox: stop judging me, Fallout has some good music :P
[17:44:44] lolol
[17:45:18] paladox: Bethesda has one of the best selections of music for their video games, although with Red Dead Redemption 2, Rockstar is a very, very close second
[17:45:38] heh
[17:46:03] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static]
[17:47:08] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jel71
[17:47:10] [miraheze/puppet] paladox b9c671a - Update mediawiki-includes.conf.erb
[17:47:18] I feel like if icinga-miraheze were human, it would have been diagnosed with schizophrenia, dementia, multiple personality disorder and ADHD
[17:48:00] heh
[17:49:12] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[17:50:17] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 128.199.139.216/cpweb
[17:51:46] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[17:53:15] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[18:12:30] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.23, 7.30, 6.48
[18:15:33] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.15, 6.15, 6.17
[18:37:26] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.26, 6.96, 6.41
[18:42:49] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.17, 6.54, 6.44
[18:51:17] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[19:29:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[19:30:02] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[19:32:34] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[19:32:48] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[19:43:15] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 10 minutes ago with 0 failures
[19:48:19] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[19:48:50] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[19:49:07] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[19:49:24] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[19:49:42] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[19:51:19] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[19:51:48] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24570 bytes in 6.346 second response time
[19:52:26] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[19:52:36] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[19:52:59] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[19:55:24] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 9.75, 8.39, 6.16
[19:55:36] PROBLEM - mw2 Current Load on mw2 is CRITICAL: CRITICAL - load average: 8.05, 6.88, 4.41
[19:58:06] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 5.97, 7.57, 6.22
[19:58:29] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 2.09, 5.07, 4.15
[20:01:37] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 3.98, 5.77, 5.79
[20:04:56] PROBLEM - mw2 Puppet on mw2 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 10 minutes ago with 0 failures
[20:07:08] PROBLEM - mw3 Puppet on mw3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 1 failures
[20:09:06] paladox: what are you changing?
[20:18:53] php-fpm
[20:19:00] Ah
[21:12:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:19:47] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[21:19:53] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[21:21:03] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:21:57] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[21:22:52] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[21:24:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:25:37] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[21:50:26] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 2604:180:0:33b::2/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[21:53:16] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:56:35] !log echo "DirectIO=true" > .lizardfs_tweaks on mw[123]
[21:56:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:32:04] !log depool mw1 and reboot - high cpu and load
[22:32:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:34:08] !log repool mw1
[22:34:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[22:49:04] [puppet] paladox synchronize pull request #1098: php: Tweek config - https://git.io/JelSu
[22:49:06] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JelAc
[22:49:07] [miraheze/puppet] paladox 463f742 - Update php.pp
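For context on the 21:56 !log: LizardFS clients expose runtime switches through a magic .lizardfs_tweaks file in the mount root, and DirectIO is one of them. A hedged reconstruction of what was run on mw1-mw3; the mount path is an assumption inferred from the Exec[/mnt/mediawiki-static] Puppet resources earlier in the log, not confirmed by it:

    # Enable direct I/O on the LizardFS client mount on each app server.
    # Mount path assumed, not confirmed by the log.
    for host in mw1 mw2 mw3; do
        ssh "$host" 'echo "DirectIO=true" > /mnt/mediawiki-static/.lizardfs_tweaks'
    done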