[00:25:21] hi
[00:30:49] hello
[00:53:44] .at september 26 Zppix's 5th year on English Wikipedia
[00:53:44] Zppix: Sorry, but I didn't understand, please try again.
[00:53:47] .help at
[00:53:48] Zppix: The documentation for this command is too long; I'm sending it to you in a private message.
[00:55:02] .in 13205m 44s Zppix's 5th year on English Wikipedia
[00:55:03] Zppix: Okay, I will set the reminder for: 2019-09-26 - 00:00:46CDT
[02:26:11] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 9.34, 6.80, 5.48
[02:28:11] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.38, 6.86, 5.66
[02:30:11] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.70, 6.64, 5.72
[04:03:12] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.75, 1.60, 1.46
[04:05:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeO0u
[04:05:12] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.67, 1.57, 1.46
[04:05:12] [miraheze/services] MirahezeSSLBot f4e8549 - BOT: Updating services config for wikis
[04:13:12] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.74, 1.62, 1.49
[04:15:12] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.52, 1.53, 1.46
[04:47:06] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.72, 1.61, 1.50
[04:51:05] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.41, 1.54, 1.49
[05:01:03] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.94, 1.74, 1.56
[05:05:03] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.66, 1.62, 1.54
[05:11:02] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.14, 1.73, 1.58
[05:15:02] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.91, 1.75, 1.61
[05:17:01] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.23, 1.64, 1.59
[05:21:00] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.26, 1.92, 1.69
[05:24:59] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.62, 1.87, 1.72
[05:26:58] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.06, 1.87, 1.74
[05:29:59] RhinosF1: -staff
[05:32:57] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.89, 2.00, 1.84
[05:40:55] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.09, 1.97, 1.85
[05:44:54] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.64, 1.92, 1.87
[05:46:40] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[05:46:42] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:47:00] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:47:08] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:47:11] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[05:47:25] PROBLEM - misc1 webmail.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:47:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[05:47:30] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 60%
[05:47:40] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 94%
[05:47:43] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[05:47:48] PROBLEM - cp3 Stunnel Http for mw2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:47:55] Reception123: ^ what's up?
[05:47:59] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:48:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[05:48:07] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:48:22] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:48:23] 503
[05:48:24] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:49:40] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 57%
[05:50:05] Don't ask me...
[05:51:15] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 44%
[05:51:30] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 48%
[05:51:40] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 38%
[05:52:03] PROBLEM - misc1 icinga.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:52:41] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[05:53:30] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 71%
[05:55:12] Reception123: check RN
[05:55:15] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 21%
[05:55:40] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 93%
[05:56:03] RECOVERY - cp3 Stunnel Http for mw2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.659 second response time
[05:56:11] RECOVERY - misc1 icinga.miraheze.org HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 341 bytes in 0.010 second response time
[05:56:11] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.007 second response time
[05:56:31] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.389 second response time
[05:56:39] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.670 second response time
[05:56:40] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.645 second response time
[05:56:50] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.005 second response time
[05:56:51] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.01, 1.78, 1.77
[05:57:19] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.389 second response time
[05:57:20] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24516 bytes in 0.389 second response time
[05:57:21] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24522 bytes in 0.006 second response time
[05:57:50] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (111 "Connection refused")
[05:58:50] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.63, 1.69, 1.73
[05:58:56] PROBLEM - misc4 phab.miraheze.wiki HTTPS on misc4 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/1.1 500 Internal Server Error
[05:58:58] PROBLEM - misc4 phabricator.miraheze.org HTTPS on misc4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4221 bytes in 0.088 second response time
[05:59:15] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 41%
[05:59:21] RhinosF1: can't now :(
[05:59:25] No access
[05:59:59] paladox, SPF|Cloud, PuppyKun: ^
[06:00:30] Doubt PuppyKun is up :p
[06:00:35] Reception123: I would send an email out to wake the rest up but afaik mail went down and Icinga is dead so I can't check
[06:01:01] Terminated perhaps?
[06:01:15] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 33%
[06:01:25] SPF|Cloud: pls check but it's taken out most of our services
[06:01:26] SPF|Cloud: it could be the usual suspension due to bandwidth
[06:01:43] Matomo is up as I was on it when we went down
[06:02:49] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.33, 1.61, 1.69
[06:03:45] Nothing suspended
[06:04:25] SPF|Cloud: db4? It's DBs that are failed for Icinga and Grafana now
[06:05:04] db4 is online per RamNode
[06:05:15] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 53%
[06:05:27] SPF|Cloud: Does it have disk space?
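The disk-space question above turned out to be the key one: MySQL was refusing connections on db4 while the host itself stayed up, which is the classic signature of mysqld having died or being unable to write, here because binary logs had filled the disk. As a rough sketch of how binlog growth can be confirmed from the SQL side (not the commands actually run in this incident, and assuming a reachable MariaDB instance):

    -- List every binary log with its size; on a disk-full host this list is long
    SHOW BINARY LOGS;
    -- Find where the binlogs are written, so OS-level free space can be checked there
    SHOW GLOBAL VARIABLES LIKE 'log_bin_basename';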
[06:05:29] * SPF|Cloud is booting his laptop already
[06:07:40] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 32%
[06:07:47] !log stopping mysql on db4
[06:08:47] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.78, 1.77, 1.73
[06:09:06] RhinosF1: your assumption was right
[06:09:15] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 23%
[06:09:31] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 55%
[06:09:59] SPF|Cloud: phew, we're back. I think Void mentioned it last night, and given Icinga and Grafana have DB errors, it added up.
[06:10:19] Should be back
[06:10:34] * RhinosF1 waits for a barrage of Recovery alerts
[06:10:43] still 503ing
[06:11:05] I have issued a start command for db4 mysql but it's not returning an exit code
[06:11:18] SPF|Cloud: hmm, you stopped MySQL so could it still need it - can we clear the logs on it?
[06:11:30] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 72%
[06:11:40] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 65%
[06:13:03] I'm trying to remove binlogs manually so I can purge them from the indexes afterwards
[06:13:30] K
[06:14:48] !log abort restart, trying to start in recovery mode to purge logs
[06:15:30] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is WARNING: WARNING - NGINX Error Rate is 55%
[06:16:03] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4112 bytes in 0.040 second response time
[06:17:11] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4112 bytes in 0.027 second response time
[06:17:13] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4112 bytes in 0.026 second response time
[06:17:44] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4112 bytes in 0.029 second response time
[06:18:56] RECOVERY - misc4 phab.miraheze.wiki HTTPS on misc4 is OK: HTTP OK: Status line output matched "HTTP/1.1 200" - 17724 bytes in 0.096 second response time
[06:18:58] RECOVERY - misc4 phabricator.miraheze.org HTTPS on misc4 is OK: HTTP OK: HTTP/1.1 200 OK - 19058 bytes in 0.202 second response time
[06:19:01] RECOVERY - misc1 webmail.miraheze.org HTTPS on misc1 is OK: HTTP OK: Status line output matched "HTTP/1.1 401 Unauthorized" - 5799 bytes in 0.037 second response time
[06:19:11] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[06:19:12] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18971 bytes in 0.300 second response time
[06:19:13] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18947 bytes in 0.022 second response time
[06:19:29] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[06:19:31] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 13%
[06:19:40] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 2%
[06:19:43] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[06:19:44] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18948 bytes in 0.029 second response time
[06:19:50] RECOVERY - db4 MySQL on db4 is OK: Uptime: 233 Threads: 69 Questions: 77607 Slow queries: 2202 Opens: 6815 Flush tables: 2 Open tables: 800 Queries per second avg: 333.077
[06:20:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[06:20:03] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18948 bytes in 0.265 second response time
[06:20:40] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[06:21:38] SPF|Cloud: back
[06:21:46] not yet
[06:21:49] But keep clearing as that's way too high
[06:21:58] SPF|Cloud: wikis and Icinga are
[06:22:11] !log issued another regular restart job after killing manual mysqld_safe with recovery enabled
[06:23:43] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[06:23:50] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (111 "Connection refused")
[06:24:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[06:24:40] I'm on a bus so..
[06:24:40] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[06:24:44] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.39, 1.58, 1.66
[06:25:07] SPF|Cloud: ah
[06:25:33] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3127 MB (12% inode=94%);
[06:25:43] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[06:25:50] RECOVERY - db4 MySQL on db4 is OK: Uptime: 238 Threads: 61 Questions: 69662 Slow queries: 2643 Opens: 3995 Flush tables: 1 Open tables: 800 Queries per second avg: 292.697
[06:26:02] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[06:26:31] Thank god
[06:26:40] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[06:26:59] Give me a few mins to get into a classroom so I have decent internet (:
[06:32:57] Ok, looks like we're fine now
[06:34:23] SPF|Cloud: should be
[08:11:17] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.82, 1.72, 1.54
[08:13:16] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.21, 1.52, 1.49
[09:45:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeOgw
[09:45:11] [miraheze/services] MirahezeSSLBot 41fea19 - BOT: Updating services config for wikis
[12:02:11] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.76, 6.19, 5.24
[12:04:11] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.34, 5.94, 5.27
[12:22:22] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.98, 1.81, 1.55
[12:28:21] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.40, 1.70, 1.59
[12:38:18] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.92, 1.75, 1.62
[12:42:17] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.48, 1.63, 1.60
[13:14:26] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.68, 6.64, 5.63
[13:16:25] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.19, 6.39, 5.65
[13:22:18] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.67, 7.46, 6.30
[13:28:13] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.15, 7.83, 6.91
[13:32:11] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.13, 7.75, 7.04
[13:34:11] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.20, 6.54, 6.69
[13:49:40] RhinosF1: https://meta.miraheze.org/wiki/User:Townler is an advertisement
[13:49:40] [ User:Townler - Miraheze Meta ] - meta.miraheze.org
[13:53:49] Zppix: will look later - need to do an incident report
[13:53:56] And still walking home
[13:54:00] RhinosF1: for simple spam?
[13:54:13] oh nevermind
[13:54:25] I swear sometimes I turn into a blond
[13:56:20] Zppix: for this morning
[13:56:27] Yeah i realized that
[14:02:59] RhinosF1: lol i just triggered an abuse filter on loginwiki trying to create my global userpage because i put a link to publictestwiki
[14:03:59] lol 09:02, September 17, 2019: User:Zppix (login.miraheze.org) triggered filter 19, performing the action "edit" on User:Zppix. Actions taken: Disallow; Filter description: External links on userpages (details | examine)
[14:04:39] I feel like some global groups should be added as an exemption to that filter lol
[14:04:58] Zppix: I'll look
[14:05:05] RhinosF1: meh i could do it
[14:05:14] RhinosF1: I'll talk to the stewards and CVT in private beforehand
[14:05:29] Zppix: won't [[mh:test:User:Zppix]] work?
[14:05:50] i don't think publictestwiki has an interwiki link
[14:06:04] RhinosF1:
[14:06:07] Zppix: all wikis do
[14:06:24] RhinosF1: that's not true iirc
[14:06:27] RhinosF1: try using it
[14:06:47] Zppix: I'll look but I've tried it before
[14:07:19] RhinosF1: it won't work because publictestwiki uses a custom domain
[14:07:34] They redirect
[14:07:51] RhinosF1: nope, for me i get an error
[14:08:20] a 404
[14:08:24] https://login.miraheze.org/wiki/User_talk:RhinosF1
[14:08:25] [ User talk:RhinosF1 - Miraheze Login Wiki ] - login.miraheze.org
[14:09:04] RhinosF1: odd, try just visiting it directly as meta.miraheze.org/mh:publictestwiki:
[14:09:54] nvm
[14:09:55] i see
[14:10:23] Zppix: that's not how interwikis work, I don't think, or the right DB name
[14:10:40] it's not :P
[14:10:46] * Zppix is still waking up
[14:11:32] Zppix: I believe the software just redirects it when it sees it in the links table
[14:11:42] I'm still tired :P
[14:22:31] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 8.03, 6.75, 6.27
[14:24:29] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.79, 7.02, 6.42
[14:26:27] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.70, 6.52, 6.31
[14:34:18] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.49, 6.78, 6.40
[14:36:17] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.76, 6.40, 6.31
[16:00:00] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2648 MB (10% inode=94%);
[16:01:17] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.26, 7.07, 6.16
[16:03:16] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.35, 6.71, 6.12
[16:32:02] @seen Anderlaxe
[16:32:02] RhinosF1: I have never seen Anderlaxe
[16:35:09] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeO6d
[16:35:10] [miraheze/services] MirahezeSSLBot da90d72 - BOT: Updating services config for wikis
[17:17:32] [miraheze/puppet] Southparkfan pushed 1 commit to master [+0/-0/±1] https://git.io/JeOiN
[17:17:33] [miraheze/puppet] Southparkfan 5516bbd - Reduce expire-logs-days: We're lacking disk space on db4.
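The Puppet commit above shortens the binlog retention window on the database servers, and the next log entry applies the same value at runtime so no restart is needed. A minimal sketch of that pattern at the MariaDB level, assuming the stock binlog-expiry interface; the actual Puppet-managed my.cnf stanza is not shown in this log:

    -- Apply the shorter retention immediately; SET GLOBAL is lost on restart,
    -- which is why the same value also lands in the Puppet-managed config
    SET GLOBAL expire_logs_days = 4;
    -- Optionally purge anything already older than the new window right away
    PURGE BINARY LOGS BEFORE NOW() - INTERVAL 4 DAY;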
[17:20:34] !log run "SET GLOBAL expire_logs_days=4;" on db[45] to apply https://git.io/JeOiN without restart
[17:20:35] [ Comparing 6531b84023af...5516bbd1f57a · miraheze/puppet · GitHub ] - git.io
[17:20:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:28:32] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.19, 6.26, 5.71
[17:30:30] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.30, 5.90, 5.65
[18:05:15] PROBLEM - mw1 Current Load on mw1 is WARNING: WARNING - load average: 7.09, 6.36, 5.50
[18:07:12] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 5.31, 5.91, 5.43
[18:15:12] so, Microsoft warns me that my account may be compromised, but there's not even been any attempted logins
[18:15:17] thanks, Microsoft
[18:15:32] lol
[18:25:15] PROBLEM - glusterfs2 Current Load on glusterfs2 is CRITICAL: CRITICAL - load average: 5.91, 3.10, 1.56
[18:26:57] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.43, 1.81, 1.47
[18:27:58] PROBLEM - glusterfs1 Current Load on glusterfs1 is WARNING: WARNING - load average: 3.50, 3.71, 1.99
[18:29:15] RECOVERY - glusterfs2 Current Load on glusterfs2 is OK: OK - load average: 2.22, 3.08, 1.96
[18:30:01] RECOVERY - glusterfs1 Current Load on glusterfs1 is OK: OK - load average: 1.97, 2.95, 1.92
[18:32:55] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.60, 1.96, 1.66
[18:34:54] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.32, 1.69, 1.59
[18:38:53] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 3.60, 2.74, 2.04
[18:54:49] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.50, 1.86, 1.99
[19:02:47] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.99, 1.40, 1.70
[19:45:28] .discord
[19:45:29] Zppix: You can join discord by going to https://discord.is/miraheze!
[19:46:00] Zppix: did u sort Vermont an account?
[19:46:05] yes
[19:46:16] Consul powers on publictestwiki helped
[19:46:16] Good
[19:46:19] Ah
[19:46:33] RhinosF1: blacklist wouldn't let me either, but then i was like, i wonder if consul would let me override it
[19:46:52] Zppix: cool
[19:47:19] * Zppix starts up an account creation biz, charges 3usd an account xD
[19:47:59] RhinosF1: i'm kinda shocked CentralAuth didn't throw a fit, but it behaved too
[19:48:45] Zppix: it does about the rename - the lock doesn't look to have transferred from the log
[19:48:55] RhinosF1: the other account is locked
[19:49:01] just doesn't show any details
[19:49:15] Zppix: ah strange, is that upstream?
[19:49:18] I could unlock and relock it
[19:49:47] RhinosF1: I don't know, it could just need something to update the log
[19:49:53] Zppix: can do; for the record, pls upstream the issue as well
[20:04:21] PROBLEM - mw1 Current Load on mw1 is WARNING: WARNING - load average: 6.86, 6.52, 5.70
[20:08:22] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 4.42, 5.96, 5.69
[20:22:44] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:24:54] PROBLEM - test1 Current Load on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:25:16] PROBLEM - test1 HTTPS on test1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:25:37] PROBLEM - cp2 Stunnel Http for test1 on cp2 is CRITICAL: HTTP CRITICAL - No data received from host
[20:25:39] PROBLEM - test1 Disk Space on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:25:51] PROBLEM - Host test1 is DOWN: PING CRITICAL - Packet loss = 100%
[20:25:52] PROBLEM - cp3 Stunnel Http for test1 on cp3 is CRITICAL: HTTP CRITICAL - No data received from host
[20:25:56] paladox: RhinosF1
[20:26:02] yup, aware
[20:26:10] Aware
[20:26:34] PROBLEM - cp4 Stunnel Http for test1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:30:18] We don't need to announce test1 being down
[20:30:54] It helps people like Zppix know why Icinga is warning of issues, and says we know
[20:32:00] RhinosF1: meh, it's fine; if anyone asks after i'm told, i can just tell them. I only ping because I don't know if you are aware
[20:32:12] If you guys let us know when you were aware, i wouldn't ping xD
[20:32:33] Zppix: we were aware about a few minutes before you
[20:32:39] that's my point
[20:32:42] I didn't know
[20:32:52] That's us letting you know now
[20:33:08] nevermind, it appears you're misunderstanding what i was saying
[20:34:26] paladox: so if i got this right, the bandwidth issue is because of RN, not us, right?
[20:34:42] Well we used all the bandwidth up so it would be us :
[20:34:44] *:P
[20:35:51] paladox: why does it use so much bandwidth? isn't it just used to test config changes and stuff?
[20:36:10] yes, it used so much because i'm transferring data :)
[20:36:45] paladox: tsk tsk, bad bad
[20:36:46] xD
[20:37:05] heh
[20:42:36] RECOVERY - Host test1 is UP: PING OK - Packet loss = 0%, RTA = 0.43 ms
[20:42:37] RECOVERY - test1 SSH on test1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0)
[20:42:40] RECOVERY - test1 Disk Space on test1 is OK: DISK OK - free space: / 8894 MB (21% inode=98%);
[20:42:40] RECOVERY - test1 php-fpm on test1 is OK: PROCS OK: 3 processes with command name 'php-fpm7.3'
[20:43:06] welcome back test1
[20:43:09] Zppix: don't forget to flood if you're global blocking
[20:43:21] RhinosF1: that's only if it's a massive amount :)
[20:43:39] RhinosF1: the massgblock script forces me to use flood if I mass block
[20:43:58] Zppix: k, I was pre-warning you
[20:47:51] RECOVERY - cp2 Stunnel Http for test1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.497 second response time
[20:47:57] RECOVERY - cp4 Stunnel Http for test1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 0.017 second response time
[20:48:06] RECOVERY - cp3 Stunnel Http for test1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24499 bytes in 1.988 second response time
[20:48:47] RECOVERY - test1 HTTPS on test1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 444 bytes in 0.009 second response time