[00:41:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [00:41:41] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.422 second response time [00:42:03] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.437 second response time [00:42:27] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [00:42:30] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.121 second response time [00:42:32] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 85% [00:42:36] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [00:43:28] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.555 second response time [00:43:39] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [00:43:41] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [00:43:53] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.482 second response time [00:46:31] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 48% [00:48:42] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 88% [00:49:00] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 73% [00:49:22] PROBLEM - misc1 HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [00:49:43] PROBLEM - misc1 icinga.miraheze.org HTTPS on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [00:51:03] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 54% [00:51:26] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is WARNING: WARNING - NGINX Error Rate is 45% [00:53:05] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 18% [00:53:31] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 23% [00:55:35] RECOVERY - misc1 HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 334 bytes in 7.752 second response time [00:55:52] RECOVERY - misc1 icinga.miraheze.org HTTPS on misc1 is OK: HTTP OK: HTTP/1.1 302 Found - 334 bytes in 0.009 second response time [00:57:12] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 70% [00:58:47] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115) [00:59:28] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 74% [01:00:29] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18705 bytes in 0.411 second response time [01:00:43] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 21% [01:00:45] RECOVERY - db4 MySQL on db4 is OK: Uptime: 280 Threads: 56 Questions: 12705 Slow queries: 622 Opens: 1177 Flush tables: 1 Open tables: 1000 Queries per second avg: 45.375 [01:00:48] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18689 bytes in 1.960 second response time [01:01:21] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 11% [01:01:30] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18689 bytes in 1.884 second response time [01:01:31] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [01:01:32] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [01:01:36] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 8% [01:01:44] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 1.571 second response time [01:01:49] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18687 bytes in 0.848 second response time [01:02:01] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [01:02:23] !log restarted mariadb - db4 as ran out of space [01:02:32] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [01:02:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [01:02:46] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [01:06:53] PROBLEM - db4 Puppet on db4 is WARNING: WARNING: Puppet is currently disabled, message: reason not specified, last run 5 minutes ago with 0 failures [01:08:22] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:08:35] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.453 second response time [01:08:36] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 92% [01:08:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [01:08:43] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115) [01:08:44] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.416 second response time [01:09:17] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CRITICAL - NGINX Error Rate is 77% [01:09:19] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.399 second response time [01:09:25] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [01:09:28] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:09:35] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 80% [01:09:37] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.419 second response time [01:09:49] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.438 second response time [01:09:57] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:10:36] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 58% [01:10:40] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 2.816 second response time [01:10:44] RECOVERY - db4 MySQL on db4 is OK: Uptime: 267 Threads: 69 Questions: 13081 Slow queries: 666 Opens: 1278 Flush tables: 1 Open tables: 1000 Queries per second avg: 48.992 [01:10:45] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18689 bytes in 1.849 second response time [01:11:03] flood xd [01:11:19] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 14% [01:11:24] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18689 bytes in 1.919 second response time [01:11:25] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [01:11:28] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [01:11:36] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 3% [01:11:36] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18687 bytes in 0.965 second response time [01:11:48] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18686 bytes in 0.378 second response time [01:11:58] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [01:12:16] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [01:12:36] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 4% [01:12:43] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [01:12:52] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [01:14:43] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.442 second response time [01:14:45] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.434 second response time [01:14:45] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (115) [01:15:21] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.404 second response time [01:15:25] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [01:15:27] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:15:35] PROBLEM - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is CRITICAL: CRITICAL - NGINX Error Rate is 75% [01:15:35] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4212 bytes in 0.430 second response time [01:15:49] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4210 bytes in 0.444 second response time [01:15:57] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:16:16] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 8 backends are down. mw1 mw2 mw3 lizardfs6 mw4 mw5 mw6 mw7 [01:16:29] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 96% [01:16:45] RECOVERY - db4 MySQL on db4 is OK: Uptime: 246 Threads: 20 Questions: 2936 Slow queries: 144 Opens: 144 Flush tables: 1 Open tables: 138 Queries per second avg: 11.934 [01:16:47] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [01:17:16] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is WARNING: WARNING - NGINX Error Rate is 44% [01:17:22] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 0.854 second response time [01:17:24] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [01:17:25] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [01:17:33] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18689 bytes in 0.883 second response time [01:17:36] RECOVERY - cp8 HTTP 4xx/5xx ERROR Rate on cp8 is OK: OK - NGINX Error Rate is 1% [01:17:49] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 0.930 second response time [01:17:56] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [01:18:15] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [01:18:28] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 6% [01:18:44] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 0.951 second response time [01:18:47] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [01:18:50] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18688 bytes in 0.961 second response time [01:19:16] RECOVERY - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is OK: OK - NGINX Error Rate is 1% [03:06:33] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 4 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[php7.3-redis] [03:12:30] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 26 seconds ago with 0 failures [05:03:06] RECOVERY - cp8 Disk Space on cp8 is OK: DISK OK - free space: / 3682 MB (19% inode=93%); [06:25:34] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2706 MB (11% inode=94%); [10:30:09] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEOp [10:30:10] [02miraheze/services] 07MirahezeSSLBot 0311cd2f2 - BOT: Updating services config for wikis [11:18:16] [02miraheze/puppet] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEsB [11:18:17] [02miraheze/puppet] 07Reception123 03d74ffaf - add twitch.tv to whitelist requested in T5267, can be trusted [11:22:26] paladox: why is Auto Create Category Pages restricted? [11:24:03] paladox: ah ok got it https://phabricator.miraheze.org/T4712#90161 [11:24:06] [ ⚓ T4712 Requesting some extensions ] - phabricator.miraheze.org [11:29:02] So why is it i guess i cant see the reason? Reception123 [11:29:23] " it's to stop people enabling it then realising they have lots of categories they need to delete and reduce possible impact of vandalism." [11:29:38] Ah [11:29:41] Zppix: any ideas for https://phabricator.miraheze.org/T5248 though? [11:29:44] [ ⚓ T5248 MobileFrontEnd - Not functioning after login. ] - phabricator.miraheze.org [11:30:46] Reception123: do they have it set on this browser to always display desktop version of sites, or iirc theres a setting in prefs to make it auto redirect to desktop version? [11:31:00] Their* [11:31:04] Zppix: not their fault, I tried and I also get desktop mode without having any of those settings [11:31:40] I need to click mobile view on the footer to get back to mobile [11:31:44] Zppix: could you try too? [11:31:49] Do they have anything in mw:xommon[js/cs]? [11:32:31] Zppix: nope and it's weird because for me when I logged in I got to desktop mode but then if I click "mobile view" it works [11:33:23] Reception123: same result for me [11:33:31] I cant debug really as im mobile [11:34:00] Zppix: don't see anything too strange in https://indoctrinated.miraheze.org/wiki/Special:Log/managewiki [11:34:02] [ Login required - Indoctrinated Wiki ] - indoctrinated.miraheze.org [11:34:43] Reception123: can you try disabling mobilefrontend clear your cache then enable it again and see if it still does it? [11:35:17] Zppix: ok [11:36:05] Is desktop/mobile view stored as a cookie? or is it simply just a url param? [11:36:17] Reception123 [11:36:37] Zppix: I think it's a cookie but not entirely sure [11:37:29] If its a cookie it could be just the fact its doing what it believes it should Reception123 [11:38:25] Cause i dont see an issue that is obvious [11:38:34] Zppix: seems to work for me now! [11:38:38] Zppix: could you take a look? [11:38:50] Reception123: after disable and reenable? [11:38:57] Zppix: oh nevermind, forgot to log in :P [11:39:51] Lmao [11:41:12] Zppix: nope it's still being annoying [11:41:17] Zppix: but why on login and not before I don't get it [11:42:05] Zppix: this is really strange this is what happened 1) LOGIN - mobile 2) 2FA - desktop 3) REDIRECT PAGE - mobile 4) CLICKING HOME - desktop [11:49:45] PROBLEM - panadev.ir - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'panadev.ir' expires in 15 day(s) (Wed 11 Mar 2020 11:47:06 AM GMT +0000). [11:50:01] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEGC [11:50:02] [02miraheze/ssl] 07MirahezeSSLBot 035b4fba3 - Bot: Update SSL cert for panadev.ir [12:01:55] RECOVERY - panadev.ir - LetsEncrypt on sslhost is OK: OK - Certificate 'panadev.ir' will expire on Sun 24 May 2020 10:49:54 AM GMT +0000. [12:11:43] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEGb [12:11:44] [02miraheze/mw-config] 07Reception123 03b15f8ad - wgCompressRevisions for onepiecewiki per @paladox when I asked before about the import (which is very large) [14:41:54] !log MariaDB [metawiki]> set global table_open_cache=50000; - db4 [14:42:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [14:42:24] !log MariaDB [metawiki]> set global table_definition_cache=40000; - db4 [14:42:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [14:43:46] [02miraheze/ManageWiki] 07translatewiki pushed 031 commit to 03master [+0/-0/±2] 13https://git.io/JvECj [14:43:48] [02miraheze/ManageWiki] 07translatewiki 03ef8fe75 - Localisation updates from https://translatewiki.net. [14:43:49] [ Main page - translatewiki.net ] - translatewiki.net. [14:43:53] !log restart php7.3-fpm on mw* and lizardfs6 [14:44:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [14:48:50] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEWk [14:48:51] [02miraheze/puppet] 07paladox 034380653 - matomo: Update to 3.13.3 [14:55:31] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-9 [+0/-0/±1] 13https://git.io/JvEWY [14:55:32] [02miraheze/puppet] 07paladox 03cedf4d3 - mariadb: Tweak config [14:55:34] [02puppet] 07paladox created branch 03paladox-patch-9 - 13https://git.io/vbiAS [14:55:35] [02puppet] 07paladox opened pull request 03#1263: mariadb: Tweak config - 13https://git.io/JvEWO [15:04:04] !log MariaDB [metawiki]> set global table_open_cache=50000; - db5 [15:04:10] !log MariaDB [metawiki]> set global table_definition_cache=40000; - db5 [15:04:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [15:04:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [15:04:32] !log MariaDB [metawiki]> set global table_open_cache=50000; - db6 [15:04:34] !log MariaDB [metawiki]> set global table_definition_cache=40000; - db6 [15:04:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [15:04:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [15:13:13] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb [15:13:42] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [15:19:08] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [15:19:31] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [15:21:40] RhinosF1: around for a PM? [15:21:58] sort of [16:24:32] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[php7.3-redis] [16:24:55] ZppixBot is known - Config change [16:32:44] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 48 seconds ago with 0 failures [17:15:14] !log set read_only on db6 [17:15:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [17:33:04] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2650 MB (10% inode=94%); [17:39:58] !log set global read_only=0; - db6 [17:40:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [18:24:24] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEuu [18:24:25] [02miraheze/puppet] 07paladox 03abd14ed - Update cloud1.yaml [18:24:34] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEuz [18:24:36] [02miraheze/puppet] 07paladox 0379c5bbc - Update cloud2.yaml [18:42:34] Hello coconut! If you have any questions, feel free to ask and someone should answer soon. [18:49:19] Hi Guest99199 [18:51:28] Hello, RhinosF1 [18:51:40] How can we help? [18:52:31] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEzG [18:52:32] [02miraheze/puppet] 07paladox 03d3ceabc - Update mw4.yaml [18:52:38] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEzZ [18:52:39] [02miraheze/puppet] 07paladox 03f5284c7 - Update mw4.yaml [18:52:44] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEzC [18:52:46] [02miraheze/puppet] 07paladox 0332378cf - Update cloud2.yaml [18:52:59] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEzl [18:53:01] [02miraheze/puppet] 07paladox 037597ff0 - Update cloud1.yaml [18:57:58] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEzu [18:58:00] [02miraheze/puppet] 07paladox 03a279b01 - cloud: Do not install salt [18:59:33] !log apt-get upgrade - mon1 [18:59:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [19:00:20] RhinosF1, Nothing now, thanks. [19:00:36] :) [19:05:59] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown) [19:12:13] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 6 seconds ago with 0 failures [20:35:54] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [20:35:58] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1 [20:36:06] [02miraheze/mw-config] 07Pix1234 created branch 03Pix1234-patch-2 13https://git.io/JvEap [20:36:07] [02mw-config] 07Pix1234 created branch 03Pix1234-patch-2 - 13https://git.io/vbvb3 [20:36:32] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1 [20:37:30] [02miraheze/mw-config] 07Pix1234 pushed 031 commit to 03Pix1234-patch-2 [+0/-0/±1] 13https://git.io/JvEVf [20:37:31] [02miraheze/mw-config] 07Pix1234 03fab618e - more specfic button text [20:37:53] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:38:05] [02mw-config] 07Pix1234 opened pull request 03#2904: more specfic button text for MissingWiki page - 13https://git.io/JvEVJ [20:38:29] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [20:40:02] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [20:41:04] [02mw-config] 07paladox closed pull request 03#2904: more specfic button text for MissingWiki page - 13https://git.io/JvEVJ [20:41:06] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JvEVt [20:41:07] [02miraheze/mw-config] 07Pix1234 03b333c6e - more specfic button text (#2904) [20:42:31] [02mw-config] 07Pix1234 deleted branch 03Pix1234-patch-2 - 13https://git.io/vbvb3 [20:42:32] [02miraheze/mw-config] 07Pix1234 deleted branch 03Pix1234-patch-2 [20:48:37] * hispano76 greetings [21:57:56] !log restart php-fpm on mon1 [21:58:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:01:31] !log restart php-fpm again (on mon1) [22:01:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:04:18] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 1.07, 3.31, 2.04 [22:05:53] !log restarted php-fpm again [22:06:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [22:08:26] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.23, 1.64, 1.65 [22:51:16] https://en.wikipedia.org/wiki/Wikipedia:CSVLoader/Walkthrough Has anyone used it? how can I make it create pages in a certain namespace ? [22:51:17] [WIKIPEDIA] Wikipedia:CSVLoader/Walkthrough | "Click the pictures for expanded view..." [22:51:17] [WIKIPEDIA] Wikipedia:CSVLoader/Walkthrough | "Click the pictures for expanded view..." [23:02:41] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 51.161.32.127/cpweb [23:04:40] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [23:11:08] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4224 bytes in 0.421 second response time [23:12:15] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:12:41] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [23:13:03] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [23:13:26] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3 [23:13:44] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4224 bytes in 0.413 second response time [23:13:50] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:13:54] PROBLEM - cp8 Varnish Backends on cp8 is CRITICAL: 4 backends are down. mw1 mw2 mw3 lizardfs6 [23:14:04] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4224 bytes in 0.393 second response time [23:14:16] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 51.161.32.127/cpweb, 2607:5300:205:200::17f6/cpweb [23:17:23] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 11 backends are healthy [23:17:31] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 18701 bytes in 0.992 second response time [23:17:43] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 18701 bytes in 2.056 second response time [23:17:54] RECOVERY - cp8 Varnish Backends on cp8 is OK: All 11 backends are healthy [23:17:57] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 18700 bytes in 1.404 second response time [23:18:05] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 18701 bytes in 3.060 second response time [23:18:06] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 18700 bytes in 1.431 second response time [23:18:10] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 1.13, 2.09, 1.39 [23:18:14] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online [23:18:39] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 9 backends are healthy [23:18:55] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [23:20:14] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 1.46, 1.81, 1.37 [23:22:16] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.72, 1.47, 1.30 [23:28:31] !log root@mw4:/var/log# sysctl net.core.somaxconn=512 [23:28:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master [23:32:34] PROBLEM - cp8 Current Load on cp8 is CRITICAL: CRITICAL - load average: 2.05, 2.46, 1.80 [23:34:34] PROBLEM - cp8 Current Load on cp8 is WARNING: WARNING - load average: 0.87, 1.94, 1.69 [23:36:52] RECOVERY - cp8 Current Load on cp8 is OK: OK - load average: 0.62, 1.49, 1.56 [23:56:53] PROBLEM - cp8 Stunnel Http for mw4 on cp8 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 328 bytes in 0.761 second response time [23:57:31] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw4 [23:58:28] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw4 [23:58:50] PROBLEM - cp3 Stunnel Http for mw4 on cp3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 328 bytes in 1.374 second response time