[03:24:40] [1/4] Not sure what to make of the MediaSpoiler error. It says [03:24:40] [2/4] > Error: Couldn't fetch Wikimedia\Parsoid\DOM\Element [03:24:40] [3/4] I thought it was related to the parser change, but the message implies a missing composer dependency. This error occurred locally on my machine, but running `composer update` fixed it. If we were missing some composer dependencies it should affect more extensions, so this is really odd. [03:24:40] [4/4] Random is probably asleep by now so I'll just send it here. [03:27:02] It might be vendor. [03:29:27] What confuses me is that if something as important as `Wikimedia\Parsoid\DOM\Element` is missing, the whole wiki should be non-functional, not just one extension. [09:13:18] That was fixed recently; what version of core and PHP are you using? [09:14:23] It wasn't fully backported yet, so that might be the problem [09:18:45] [1/3] MW 1.45.1 (30bca66) [09:18:45] [2/3] PHP 8.4.16 (fpm-fcgi) [09:18:46] [3/3] Beta is using 8.2 and shouldn't be affected, so that might have been a different issue from mine. [09:21:05] Still weird, since it should theoretically be fixed in 1.45 [09:30:00] Is our version updated enough? [09:30:24] I assume not, but we don't use PHP 8.4 yet [09:31:56] Do we have the sodium extension installed in PHP? [09:33:03] Idk, this is odd [09:33:07] yes [09:33:28] why [09:33:36] it isn't affecting beta [09:33:44] right now at least [09:33:53] This is. [09:34:29] I was trying to see if it's possible that whatever breaks it in 8.4 could also break it in 8.2, but I think it's a different issue. [09:35:25] Unless I misunderstood, the error is also occurring for MediaSpoiler on beta. [09:51:32] @abaddriverlol around? [09:51:40] I need some help I think. [09:51:46] yeah [09:51:55] db161 has crashed. I rebooted it and it immediately crashed again. [09:52:13] Load skyrocketed to 500+ and memory is full at a constant 120GB. [09:53:15] Do you mind putting the clusters into maintenance mode in the config to let it calm down?
[09:53:55] yes give me a sec [09:55:00] my termius won't open for some reason [09:55:14] nvm it did [09:55:28] Thank you! It's almost 3AM for me and I simply can't stay up this late anymore lol. I can stay up a little longer, but not for much longer. [09:56:29] o/ [09:56:39] deploying but canary checks are slow af [09:56:43] we really need an option to skip them [09:56:56] I should make --force skip them [09:57:02] yes [09:57:51] @abaddriverlol you going to reboot the db again? [09:58:31] load is going down slowly, so not sure whether it's necessary [09:58:48] I had to do it in Proxmox last time, as the server wouldn't even reboot from the shell. [09:59:20] will do since it's faster [10:00:33] Thank you! I'm going to sleep; feel free to repool when you feel it's safe. Hopefully everything is okay for me to go now anyway? [10:00:40] Sleep well [10:01:00] yes, should be fine [10:01:02] good night [10:01:23] Awesome thanks! [10:01:37] I see [10:01:44] Can't understand shit here [10:02:01] @weebshitvaid then listen and let us work [10:02:14] Yea that's what I was thinking [10:02:21] We are watching #general as well, we'll reply to questions as we can there [10:02:22] So what u guys doing on wiki [10:02:35] Oh my bad this is not general [10:02:39] Nothing, we're trying to fix the 25% of wikis that are down [10:03:47] [1/2] Ohh i c [10:03:48] [2/2] How much time it gonna take like avg- [10:03:53] No idea [10:04:10] Should be resolved [10:04:12] [1/2] No issues mate 🥀 [10:04:12] [2/2] Thnx for working [10:04:14] There you go [10:04:28] It's come back happy after a little encouragement from @abaddriverlol [10:04:51] (a little encouragement = basically hitting it with a hammer) [10:04:54] That doesn't look like the outages we've seen the last few days [10:05:04] no that's a new one [11:24:56] 
https://cdn.discordapp.com/attachments/1006789349498699827/1462045016816877766/G-wXjXtX0AAuuI3.png?ex=696cc307&is=696b7187&hm=81cc157ac31157b4c787ea34df8b6d943bc4d7ce65705db9d8b8f3e41b1e6b5b& [11:25:10] A new one 😱 [11:35:23] I'll take the apt upgrades [11:37:01] python-urllib3 [11:37:04] I'm deploying it [12:00:55] Sigh [12:01:04] @pskyechology do you need my help with mwtask171? [12:01:21] whats up with it [12:01:28] Alerting [12:01:30] And it ain't me [12:01:59] i have no idea what happened there or where to hit with a hammer [12:02:03] im not even connected rn [12:02:14] so yes i would like your help [12:02:47] @pskyechology can you ssh in? [12:03:25] 161 is grumpy too [12:03:57] aye [12:04:11] Check fpm status and see what's wrong [12:04:23] ...how do i do that [12:04:44] Try sudo service php8.whatever-fpm status [12:04:52] also trip [12:04:55] Htop [12:05:02] and not systemctl? damn [12:05:18] Or systemctl status ye [12:05:25] active running [12:05:53] @pskyechology does it give you worker counts? [12:06:26] [1/29] this? 
[12:06:27] [2/29]–[29/29]
```
CGroup: /system.slice/php8.2-fpm.service
        ├─3165833 "php-fpm: master process (/etc/php/8.2/fpm/php-fpm.conf)"
        ├─3165834 "php-fpm: pool www"
        ├─3165835 "php-fpm: pool www"
        ├─3165836 "php-fpm: pool www"
        ├─3165837 "php-fpm: pool www"
        ├─3165838 "php-fpm: pool www"
        ├─3165839 "php-fpm: pool www"
        ├─3165840 "php-fpm: pool www"
        ├─3165841 "php-fpm: pool www"
        ├─3165842 "php-fpm: pool www"
        ├─3165843 "php-fpm: pool www"
        ├─3165844 "php-fpm: pool www"
        ├─3165845 "php-fpm: pool www"
        ├─3165846 "php-fpm: pool www"
        ├─3165847 "php-fpm: pool www"
        ├─3165848 "php-fpm: pool www"
        ├─3165849 "php-fpm: pool www"
        ├─3165850 "php-fpm: pool www"
        ├─3165851 "php-fpm: pool www"
        ├─3165852 "php-fpm: pool www"
        ├─3165853 "php-fpm: pool www"
        ├─3165854 "php-fpm: pool www"
        ├─3165855 "php-fpm: pool www"
        ├─3165856 "php-fpm: pool www"
        └─3165857 "php-fpm: pool www"
```
[12:06:45] ` Status: "Processes active: 24, idle: 0, Requests: 745243, slow: 509, Traffic: 0.00req/sec"` [12:06:50] 0? 
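The `Status:` line above packs the useful counters into one string; a minimal sketch of pulling them out programmatically (the parsing helper below is hypothetical, not an existing tool):

```python
import re

def parse_fpm_status(line: str) -> dict:
    """Split a php-fpm systemd Status line into named numeric fields."""
    fields = {}
    for key, value in re.findall(r"([A-Za-z][A-Za-z ]*?): ([\d.]+)", line):
        fields[key.strip()] = float(value) if "." in value else int(value)
    return fields

status = 'Processes active: 24, idle: 0, Requests: 745243, slow: 509, Traffic: 0.00req/sec'
metrics = parse_fpm_status(status)
# {'Processes active': 24, 'idle': 0, 'Requests': 745243, 'slow': 509, 'Traffic': 0.0}
```

Here `idle: 0` together with `Traffic: 0.00req/sec` is exactly the warning sign discussed below: every worker is busy and no requests are completing.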
[12:06:52] That's bad [12:07:07] do i scream restart at it [12:07:14] 171 fixed itself [12:07:23] But 100% usage is bad [12:07:27] awh i am on 171 [12:07:31] Especially at 0.0req/s [12:07:49] 5.5 now [12:07:53] Hmm [12:08:27] i think someone had a really large video or smth given that 161 also just recovered [12:08:38] Maybe [12:09:09] If it screams again I'll look [12:09:36] service is now a wrapper around systemctl [12:09:43] It still looks bad [12:09:51] 161 is still sad [12:10:43] 161 is like me when i see my reflection [12:10:46] i think [12:10:51] 4 req/sec [12:11:03] Requests: 34618792 [12:11:06] um [12:11:16] chat is that a lot [12:11:32] findyou- [12:11:45] I'm bouncing it [12:12:10] what does this mean for a girlboss like me [12:12:20] not much [12:12:21] Oh fuck task [12:12:30] service is like systemctl, but different syntax [12:12:32] 151 and 171 are duff now [12:12:39] What the hell is running [12:12:54] whats the find out command [12:13:07] i just use htop ngl [12:13:42] htop is topped by mcrouter and prometheus-statsd [12:14:09] sorted by cpu that is [12:15:10] It's task and it looks to be flappy, so I'm going to see if it fixes itself in like half an hour [12:15:48] Memory looks a bit high [12:15:55] 171 has php-fpm maxxing out cpu now [12:16:04] cpumaxxing [12:16:22] this girl who still hasn't reenabled shell is just chilling lol [12:16:41] why bother when you have more girl [12:16:46] trueeeeeeeeeeee [12:16:57] each girl can have two more girls on her! 
[12:17:24] Oh we have 1.1 million jobs [12:17:42] can i apply [12:17:43] That explains the sadness [12:18:03] would it be a coincidence that i touched two templates on loginwiki [12:18:08] GUP [12:18:32] how did we forgor [12:18:53] It shouldn't be [12:19:12] I'm going to give this half an hour and see if it settles [12:21:02] I blame you cause I have nothing better [12:21:37] fair enough [13:56:08] @pskyechology it looks stable [13:56:17] Time to clear the job queue is 60 minutes [15:29:30] forsaken wiki will release on [15:29:34] prepare for the worst [15:38:02] Prepared to pull the plug in the server room [15:55:33] watch as miraheze's servers instantly blow up [15:56:09] im gonna have grafana open on a third monitor [16:01:37] who needs foreign agents to ddos the website when there are >100million roblox players [16:15:30] We are gonna throw tomatoes at you all [16:15:41] banana peels too [22:31:03] https://cdn.discordapp.com/attachments/1006789349498699827/1462212651055714346/image.png?ex=696d5f26&is=696c0da6&hm=1557a2e0deeb20302a1347bd5cce4a11a3ad1f30bbd4394af0ab6f65de969225& [22:35:18] Where is that at? [22:35:32] https://forsaken.wiki/Main_Page [22:35:39] a few tens of thousands of people are on it right now [22:36:13] We are working on expanding our resources as well. I will look into this shortly and see what we can do about that. [22:45:58] maybe we could temporarily increase if DB load isn't too high [22:46:59] it spiked earlier but still seems relatively fine [22:47:19] [1/2] There's also https://discord.com/channels/407504499280707585/407537962553966603/1462216406010691772 [22:47:19] [2/2] Extremely high traffic pages should be using `` [22:47:44] Other pages are doing fine, and this is the most suspicious thing on the main page. 
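As a sanity check on the job-queue figures mentioned above (a 1.1 million job backlog, roughly 60 minutes to clear), the implied drain rate is easy to work out:

```python
# Back-of-the-envelope drain rate for the reported job-queue backlog.
jobs = 1_100_000     # backlog reported on the task servers
clear_minutes = 60   # observed time for the queue to settle

jobs_per_sec = jobs / (clear_minutes * 60)
print(f"{jobs_per_sec:.0f} jobs/sec")  # ≈ 306 jobs/sec across the job runners
```

That rate is aggregate across all job runners, which is consistent with the task servers pinning CPU while the queue drained.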
[22:48:47] oh yeah that's definitely an issue [22:58:32] https://cdn.discordapp.com/attachments/1006789349498699827/1462219566980792529/image.png?ex=696d6597&is=696c1417&hm=f3ecca2e80e3f7029213252c53eb7987d109d7de27f9bc89f4257fed104165d1& [23:19:38] I think this behaved exactly as expected tbh [23:20:03] Probably, but we could increase it a bit. [23:20:05] making an extremely high traffic page uncached is bad, and it's very good that the pool counter kicked in to prevent it [23:20:14] Yeah that part is true. [23:20:16] I don't think we need to or should [23:20:37] Let's see if it happens again while cached; if so we can consider it, and if not we don't need to.
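The pool counter behaviour being praised above — capping how many workers may render the same hot uncached page at once, so a popular main page can't stampede the parsers — can be sketched with a per-key counter. This is a simplified illustration, not MediaWiki's actual PoolCounter implementation; all names here are made up:

```python
import threading
from collections import defaultdict

class PoolCounterSketch:
    """Per-key concurrency cap: workers over the cap are rejected, not queued up."""

    def __init__(self, max_concurrent: int):
        self._lock = threading.Lock()
        self._active = defaultdict(int)
        self._max = max_concurrent

    def acquire(self, key: str) -> bool:
        with self._lock:
            if self._active[key] >= self._max:
                return False  # over the cap: caller serves a stale copy or a "busy" notice
            self._active[key] += 1
            return True

    def release(self, key: str) -> None:
        with self._lock:
            self._active[key] -= 1

pool = PoolCounterSketch(max_concurrent=2)
grants = [pool.acquire("Main_Page") for _ in range(5)]
# only the first two concurrent renders of the hot page are allowed through
```

Callers that get `False` fall back to a cached or "busy" response instead of starting another expensive parse, which matches the behaviour seen on the forsaken.wiki main page.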