[00:01:23] [statichelp] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/statichelp/commit/a87e5426cea95958b54b381d32bfcc0dba8e85c3
[00:01:23] statichelp/main WikiTideBot a87e542 Bot: Auto-update Tech namespace pages 2025-12-04 00:01:21
[00:12:07] RECOVERY - cp171 Disk Space on cp171 is OK: DISK OK - free space: / 73598MiB (16% inode=99%);
[00:13:13] RECOVERY - cp191 Disk Space on cp191 is OK: DISK OK - free space: / 73862MiB (16% inode=99%);
[00:14:26] RECOVERY - cp201 Disk Space on cp201 is OK: DISK OK - free space: / 72877MiB (16% inode=99%);
[04:05:22] [ssl] WikiTideBot pushed 1 new commit to main https://github.com/miraheze/ssl/commit/70087dd33aa35d0ce85f906ccd7d3e4a5c5ba141
[04:05:22] ssl/main WikiTideBot 70087dd Bot: Auto-update domain lists
[15:34:26] PROBLEM - cp201 Disk Space on cp201 is WARNING: DISK WARNING - free space: / 49774MiB (10% inode=99%);
[16:19:13] PROBLEM - cp191 Disk Space on cp191 is WARNING: DISK WARNING - free space: / 49858MiB (10% inode=99%);
[16:28:07] PROBLEM - cp171 Disk Space on cp171 is WARNING: DISK WARNING - free space: / 49817MiB (10% inode=99%);
[18:45:16] !log [somerandomdeveloper@test151] starting deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'CommentStreams'} to test151
[18:45:17] !log [somerandomdeveloper@test151] finished deploy of {'versions': ['1.44', '1.45'], 'upgrade_extensions': 'CommentStreams'} to test151 - SUCCESS in 2s
[18:45:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:45:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:45:32] !log [somerandomdeveloper@mwtask181] starting deploy of {'versions': '1.44', 'upgrade_extensions': 'CommentStreams'} to all
[18:45:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:45:57] !log [somerandomdeveloper@mwtask181] finished deploy of {'versions': '1.44', 'upgrade_extensions': 'CommentStreams'} to all - SUCCESS in 24s
[18:46:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:18:07] !log test
[19:18:19] um
[19:18:29] @abaddriverlol are our servers doing good
[19:18:30] PROBLEM - cp201 HTTPS on cp201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:34] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:18:36] LOL
[19:18:39] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:18:41] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:41] PROBLEM - mw203 HTTPS on mw203 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:42] PROBLEM - mw193 MediaWiki Rendering on mw193 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:18:42] i am a prophet
[19:18:44] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:18:45] PROBLEM - mw181 HTTPS on mw181 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:45] OH GOD
[19:18:46] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[19:18:50] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:18:54] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:56] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:18:59] which db is it this time
[19:19:03] PROBLEM - db161 Current Load on db161 is CRITICAL: LOAD CRITICAL - total load average: 100.20, 45.97, 18.40
[19:19:04] 161
[19:19:09] PROBLEM - mw202 MediaWiki Rendering on mw202 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:19:09] PROBLEM - mw192 MediaWiki Rendering on mw192 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.023 second response time
[19:19:12] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:15] omg i get to update our status page
[19:19:20] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:19:23] PROBLEM - mw201 MediaWiki Rendering on mw201 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:19:25] should i announce
[19:19:26] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[19:19:27] PROBLEM - cp161 HTTPS on cp161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:30] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:31] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:32] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:33] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:38] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:39] PROBLEM - cp171 HTTPS on cp171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:19:50] PROBLEM - cp191 HTTPS on cp191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:20:00] PROBLEM - mw203 MediaWiki Rendering on mw203 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[19:20:00] PROBLEM - mw191 HTTPS on mw191 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[19:20:02] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.016 second response time
[19:20:05] PROBLEM - cp191 Varnish Backends on cp191 is CRITICAL: 11 backends are down. mw151 mw161 mw162 mw181 mw153 mw163 mw173 mw191 mw192 mw193 mw201
[19:20:10] PROBLEM - mw202 HTTPS on mw202 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[19:20:10] PROBLEM - cp201 Varnish Backends on cp201 is CRITICAL: 11 backends are down. mw151 mw161 mw162 mw181 mw153 mw163 mw173 mw191 mw192 mw193 mw201
[19:20:12] PROBLEM - mw201 HTTPS on mw201 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[19:20:17] PROBLEM - mw191 MediaWiki Rendering on mw191 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:19] PROBLEM - cp171 Varnish Backends on cp171 is CRITICAL: 11 backends are down. mw151 mw161 mw162 mw181 mw153 mw163 mw173 mw191 mw192 mw193 mw201
[19:20:21] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:24] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[19:20:27] PROBLEM - mw192 HTTPS on mw192 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10000 milliseconds with 0 bytes received
[19:20:27] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:29] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:32] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:33] PROBLEM - mw193 HTTPS on mw193 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[19:20:33] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received
[19:20:38] doing
[19:21:13] PROBLEM - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is CRITICAL: CRITICAL - NGINX Error Rate is 81%
[19:21:22] !log [somerandomdeveloper@mwtask181] starting deploy of {'config': True, 'force': True} to all
[19:22:14] mwdeploy is not mwdeploying
[19:22:33] !log [somerandomdeveloper@mwtask181] starting deploy of {'config': True, 'force': True} to all
[19:22:35] PROBLEM - db161 MariaDB Connections on db161 is UNKNOWN:
[19:22:44] PROBLEM - db161 APT on db161 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
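Editor's aside: the monitoring-bot messages above follow a fixed shape (`PROBLEM`/`RECOVERY` - service on host is STATE: detail). A minimal sketch of a parser for that shape, inferred from the lines in this log (it may not cover every check type the real bot emits):

```python
import re

# Pattern inferred from the icinga-style bot lines in this log;
# "service" is matched lazily so host names inside service names
# ("cp201 Disk Space on cp201 ...") resolve correctly.
ALERT = re.compile(
    r"(?P<event>PROBLEM|RECOVERY) - (?P<service>.+?) on (?P<host>\S+)"
    r" is (?P<state>OK|WARNING|CRITICAL|UNKNOWN):\s*(?P<detail>.*)"
)

def parse_alert(line):
    """Return a dict of alert fields, or None for ordinary chat lines."""
    m = ALERT.match(line)
    return m.groupdict() if m else None
```

For example, feeding it the 15:34:26 disk-space warning yields `host="cp201"`, `state="WARNING"`, and the free-space detail; a plain chat message returns `None`.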
[19:22:46] @rhinosf1 can we add an option to skip the canary check in mwdeploy
[19:22:59] it's hanging after syncing the config to mw151
[19:23:19] RECOVERY - mw192 MediaWiki Rendering on mw192 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 3.008 second response time
[19:23:26] it's deploying but incredibly slowly
[19:23:30] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.063 second response time
[19:23:31] RECOVERY - mw201 MediaWiki Rendering on mw201 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.247 second response time
[19:23:33] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 7.012 second response time
[19:23:36] RECOVERY - cp161 HTTPS on cp161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4328 bytes in 1.918 second response time
[19:23:36] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.063 second response time
[19:23:37] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.062 second response time
[19:23:37] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 4.011 second response time
[19:23:39] RECOVERY - cp171 HTTPS on cp171 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4273 bytes in 0.218 second response time
[19:23:45] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.063 second response time
[19:23:50] RECOVERY - cp191 HTTPS on cp191 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4273 bytes in 0.066 second response time
[19:23:50] PROBLEM - db161 MariaDB on db161 is UNKNOWN:
[19:23:55] !log [somerandomdeveloper@mwtask181] finished deploy of {'config': True, 'force': True} to all - SUCCESS in 82s
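The 19:22:46 request ("an option to skip the canary check in mwdeploy") refers to the common canary pattern: sync one host first, verify it is healthy, then fan out to the rest. A hypothetical sketch of that pattern with the requested escape hatch — names and structure are invented for illustration, this is not the actual mwdeploy code:

```python
def deploy(hosts, sync, healthy, skip_canary=False):
    """Sync to hosts[0] (the canary) first; abort the fan-out if its
    health check fails, unless skip_canary is set.  sync and healthy
    are stand-ins for the real sync/health-check operations."""
    canary, rest = hosts[0], hosts[1:]
    sync(canary)
    if not skip_canary and not healthy(canary):
        raise RuntimeError(f"canary {canary} failed health check; aborting")
    for host in rest:
        sync(host)
```

The trade-off discussed in the channel: the canary check catches a bad config before it reaches every appserver, but during an incident (when the canary itself may be unhealthy for unrelated reasons) a skip flag lets a fix go out anyway.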
[19:24:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:24:00] RECOVERY - mw203 MediaWiki Rendering on mw203 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.261 second response time
[19:24:02] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.221 second response time
[19:24:03] RECOVERY - mw191 HTTPS on mw191 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.070 second response time
[19:24:05] RECOVERY - cp191 Varnish Backends on cp191 is OK: All 31 backends are healthy
[19:24:10] RECOVERY - cp201 Varnish Backends on cp201 is OK: All 31 backends are healthy
[19:24:10] RECOVERY - mw202 HTTPS on mw202 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.072 second response time
[19:24:15] RECOVERY - mw201 HTTPS on mw201 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.069 second response time
[19:24:16] RECOVERY - mw191 MediaWiki Rendering on mw191 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.253 second response time
[19:24:19] RECOVERY - cp171 Varnish Backends on cp171 is OK: All 31 backends are healthy
[19:24:21] !log depool c2
[19:24:25] RECOVERY - cp201 HTTPS on cp201 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4273 bytes in 0.072 second response time
[19:24:26] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.064 second response time
[19:24:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:24:28] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.216 second response time
[19:24:28] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.210 second response time
[19:24:32] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.062 second response time
[19:24:34] RECOVERY - mw192 HTTPS on mw192 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.074 second response time
[19:24:36] I got a processlist this time
[19:24:36] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.225 second response time
[19:24:37] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.211 second response time
[19:24:40] RECOVERY - mw193 HTTPS on mw193 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.073 second response time
[19:24:41] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.067 second response time
[19:24:41] RECOVERY - mw193 MediaWiki Rendering on mw193 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.214 second response time
[19:24:41] RECOVERY - mw203 HTTPS on mw203 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.066 second response time
[19:24:44] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.215 second response time
[19:24:46] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.245 second response time
[19:24:46] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.199 second response time
[19:24:48] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.224 second response time
[19:24:50] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.207 second response time
[19:24:51] RECOVERY - mw181 HTTPS on mw181 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.072 second response time
[19:24:56] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.053 second response time
[19:25:03] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.065 second response time
[19:25:04] PROBLEM - db161 Puppet on db161 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:25:09] RECOVERY - mw202 MediaWiki Rendering on mw202 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.217 second response time
[19:25:12] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 410 - Status line output matched "HTTP/2 410" - 4224 bytes in 0.074 second response time
[19:25:13] PROBLEM - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is WARNING: WARNING - NGINX Error Rate is 55%
[19:25:20] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.236 second response time
[19:25:46] PROBLEM - db161 MariaDB on db161 is CRITICAL: Can't connect to server on 'db161.fsslc.wtnet' (115)
[19:27:36] the server is so overloaded I can't even restart or kill the process
[19:27:49] restart the server
[19:29:32] !log force-restart (reset) db161 via proxmox
[19:29:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:29:39] RECOVERY - db161 MariaDB on db161 is OK: Uptime: 35 Threads: 55 Questions: 122 Slow queries: 0 Opens: 22 Open tables: 16 Queries per second avg: 3.485
[19:30:07] RECOVERY - db161 Puppet on db161 is OK: OK: Puppet is currently enabled, last run 16 minutes ago with 0 failures
[19:30:31] RECOVERY - db161 MariaDB Connections on db161 is OK: OK connection usage: 1% Current connections: 10
[19:30:33] RECOVERY - db161 APT on db161 is OK: APT OK: 127 packages available for upgrade (0 critical updates).
[19:31:13] RECOVERY - cp161 HTTP 4xx/5xx ERROR Rate on cp161 is OK: OK - NGINX Error Rate is 36%
[19:31:27] for some reason it's getting connections to the parsercache DB even though it's still depooled, we should look into that
[19:31:31] (cc @paladox)
[19:33:02] RECOVERY - db161 Current Load on db161 is OK: LOAD OK - total load average: 3.22, 7.31, 3.57
[19:33:43] !log [somerandomdeveloper@mwtask181] starting deploy of {'config': True, 'force': True} to all
[19:33:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:34:05] !log [somerandomdeveloper@mwtask181] finished deploy of {'config': True, 'force': True} to all - SUCCESS in 22s
[19:34:08] unexploded?
[19:34:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:34:30] !log repool c2
[19:34:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:34:35] hopefully yes
[19:35:01] time to look at the processlist
[19:36:04] @pskyechology you can probably update the status page
[19:36:07] I don't have access lol
[19:36:46] just did, i'll keep it at monitoring for like 5 minutes then mark it as resolved
[19:37:01] just in case it explodes again xd
[19:37:13] that has never happened so far fortunately
[19:37:27] but it's probably still good to not resolve it immediately
[19:41:23] ya want it?
[19:44:32] I don't think there's a scenario where there's an outage and I have time to edit the statuspage, but sure
[19:49:23] check email
[19:49:28] btw @pskyechology if there's a db server outage and nobody from infra is online, the best thing to do is usually to depool the cluster; the db server will likely crash and restart by itself after some time
[19:52:54] honestly my dumbass just never remembered that c1 = cloud15 and etc. and i had nowhere to look it up during an outage
[19:53:18] working memory of wait was i talking about again
[19:53:31] worked (the dashboard is terrible btw, it reloads like 5 times when I'm trying to log in, now I can understand why people hate Jira (also an atlassian product)), thanks!
[19:53:54] lmfao should be fine after the first time
[19:54:05] i have this in a separate channel in my personal dc server
[19:54:14] IT WAS DEEP IN LS ALL THIS TIME????
[19:54:15] DAMN
[19:54:24] skill issue tbh
[19:54:33] maybe we should add it to meta so we can access it via the tech docs
[19:54:59] trying to think what the best place would be
[19:55:48] not sure, maybe Tech:MariaDB, although depooling is rather related to MW
[19:56:24] Tech:MariaDB doesn't even document how to execute mariadb queries (when trying this for the first time I wasn't aware I had to `sudo su`), I should definitely expand that page at some point
[19:57:06] [[Tech:MediaWiki_appserver]] maybe
[19:57:06]
[19:58:37] i dont even know how i would access a certain db server to delete a wiki off it (currently torturing claire for her secret methodologies)
[19:59:34] wow apparently its stupid easy
[20:05:21] i might just be silly :3
[20:07:24] Does the mw user have drop db rights?
[20:07:47] Cause you should only have mw user not root
[20:07:57] https://discord.com/channels/1006797886027214949/1225518258451517492/1436467737282084947
[20:08:03] yes
[20:08:21] I dropped dbs when I was mw specialist
[20:08:48] Hmm
[20:08:53] sql.php my beloved
[20:08:55] Why does the mediawiki user need that
[20:09:05] You can just type sql
[20:09:25] Literally as simple as sql metawiki
[20:09:26] i know but i needed to specify i mean the maintenance script and not sql itself
[20:09:32] idk but mw specialists do
[20:09:54] I'm not going to think too hard about that
[20:10:15] idk but very useful when renaming a wiki to a marked-as-deleted-but-not-actually-deleted wiki
[20:11:41] We could give you different credentials somehow
[20:12:00] But just not the web user
[20:12:01] PROBLEM - mwtask181 Current Load on mwtask181 is WARNING: LOAD WARNING - total load average: 23.37, 18.22, 12.14
[20:13:01] that happens once a day btw because of a checkuser maint script running daily on all wikis
[20:13:20] is it supposed to try and explode mwtask181
[20:13:30] @rhinosf1 do you think we could move some timers away from the canary server to another mwtask?
[20:13:57] and maybe also ia backups
[20:14:12] if disk space were to run out again it wouldn't block deployments anymore
[20:14:30] We were going to split them up a while ago
[20:14:35] But CA
[20:14:49] So we have no signed off infra plan
[20:14:54] oh
[20:16:08] at least it doesn't go up to load 400 unlike our dbs
[20:16:59] quick, before it strikes midnight
[20:19:10] we should absolutely do a major deployment at 23:59:59
[20:19:50] actually at 23:58:59 because it starts at 23:59
[20:21:40] miraheze/dns - paladox the build passed.
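Editor's aside on the load checks above: icinga reports three load averages (1/5/15-minute) and flags WARNING or CRITICAL against per-host thresholds. A toy classifier on the 1-minute figure — the thresholds here are assumptions for illustration (the mwtask181 lines in this log warn above roughly 20 and go critical around 25; the real check config may differ):

```python
def classify_load(one_min, warn=20.0, crit=25.0):
    """Classify a 1-minute load average against assumed icinga-style
    thresholds (warn/crit values are illustrative, not the real config)."""
    if one_min >= crit:
        return "CRITICAL"
    if one_min >= warn:
        return "WARNING"
    return "OK"
```

With these assumed thresholds, the 20:12:01 reading (23.37) classifies as WARNING and the 20:24:01 reading (25.01) as CRITICAL, matching the bot's output.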
[20:24:01] PROBLEM - mwtask181 Current Load on mwtask181 is CRITICAL: LOAD CRITICAL - total load average: 25.01, 23.02, 18.09
[20:26:01] PROBLEM - mwtask181 Current Load on mwtask181 is WARNING: LOAD WARNING - total load average: 21.67, 22.28, 18.41
[20:27:43] miraheze/dns - paladox the build passed.
[20:30:23] It's 23:59 tomorrow
[20:30:55] awh
[20:32:01] RECOVERY - mwtask181 Current Load on mwtask181 is OK: LOAD OK - total load average: 10.68, 17.40, 17.82
[20:40:57] Also see Mattermost on enforcement
[20:41:35] Hello! Any idea how long it takes for a request to be reviewed?
[20:41:39] Basically, the major freeze is at discretion and basically means if you screw up and cause problems, you're on your own and it will be seen as an aggravating factor
[20:41:46] Sorry if I am talking in the wrong channel
[20:41:59] The full freeze will be strictly enforced
[20:42:12] I have no idea what type of request you're even talking about?
[20:42:24] Wiki? Import? SSL? Steward?
[20:42:26] Request for a wiki
[20:42:33] My bad
[20:42:40] miraheze/dns - paladox the build passed.
[20:42:45] Definitely not a question for the tech team
[20:42:57] I just saw after sending the message 😭 I'm sorry for that
[20:42:58] miraheze/dns - paladox the build passed.
[20:43:00] But if it's gone to a human, when a human is free
[20:43:12] miraheze/dns - paladox the build passed.
[20:43:25] queue times vary but usually its somewhere around 3 days
[20:43:38] Thank you
[21:16:05] MacFan4000: I think MacFanBot is down, it didn't log any git commits
[21:16:23] (btw, where is the bot actually hosted? I don't see anything in puppet, so I assume not on our infra?)
[21:18:31] PROBLEM - db202 Puppet on db202 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 2 minutes ago with 2 failures. Failed resources (up to 3 shown): Package[ntp],Service[pdns-recursor]
[21:18:32] its on a free vm from oci cloud
[21:18:36] PROBLEM - db202 NTP time on db202 is UNKNOWN: check_ntp_time: Invalid hostname/address - ntp.fsslc.wtnet Usage: check_ntp_time -H [-4|-6] [-w ] [-c ] [-v verbose] [-o