[00:22:25] time for sleep
[00:22:28] night folks :)
[00:36:59] night :)
[04:04:17] PROBLEM - wiki.mnzp.xyz - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mnzp.xyz All nameservers failed to answer the query.
[04:11:11] RECOVERY - wiki.mnzp.xyz - reverse DNS on sslhost is OK: rDNS OK - wiki.mnzp.xyz reverse DNS resolves to cp10.miraheze.org
[06:38:42] PROBLEM - en.famepedia.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'en.famepedia.org' expires in 15 day(s) (Fri 30 Apr 2021 06:30:37 GMT +0000).
[06:45:36] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JO3yZ
[06:45:37] [miraheze/ssl] MirahezeSSLBot cedd62e - Bot: Update SSL cert for en.famepedia.org
[10:43:35] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 5.87, 5.15, 2.45
[10:45:34] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 0.89, 3.49, 2.16
[10:47:33] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.18, 2.37, 1.91
[11:43:48] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 353.31 ms
[11:47:56] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 286.42 ms
[12:10:35] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 2.36, 4.34, 2.38
[12:12:33] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.61, 3.04, 2.14
[12:29:15] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 304.60 ms
[12:41:38] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 298.31 ms
[12:43:26] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.23, 5.07, 2.94
[12:45:24] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.01, 3.52, 2.62
[12:45:51] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 544.09 ms
[12:47:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.44, 2.44, 2.32
[12:49:58] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 338.28 ms
[12:52:02] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 424.02 ms
[12:52:23] JohnLewis: that's getting pointless ^
[12:52:37] Critical alerts shouldn't be going off if action ain't needed
[12:58:13] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 283.96 ms
[13:02:24] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 370.83 ms
[13:04:22] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 10.95, 6.89, 4.22
[13:06:31] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 271.18 ms
[13:08:22] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.01, 3.78, 3.59
[13:10:22] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.26, 2.57, 3.16
[13:16:08] The ping isn’t optimal, action is needed
[13:17:22] PROBLEM - meta.orain.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for meta.orain.org could not be found
[13:17:51] JohnLewis: then can we see a task or something
[13:19:04] Action being needed and actionable are different unfortunately. A task would be declined as there is nothing we can do besides remove cp3
[13:24:03] RECOVERY - meta.orain.org - reverse DNS on sslhost is OK: rDNS OK - meta.orain.org reverse DNS resolves to cp11.miraheze.org
[13:28:08] JohnLewis: if no action can be done then it's not critical
[13:28:20] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 376.83 ms
[13:28:23] Given we're operating fine
[13:28:34] Just because no action can be done does not mean a valid monitoring alert becomes invalid
[13:29:20] there is a problem, in that a simple ping request takes 500+ms sometimes
[13:29:54] Is it 'CRITICAL' though?
[13:30:23] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 282.64 ms
[13:31:14] Yes?
[13:31:33] 500ms+ is a critical length of time for a simple ping
[13:36:38] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 348.96 ms
[13:38:44] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 354.73 ms
[13:47:20] Do we know the cause of the lag?
[13:51:12] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 330.56 ms
[13:53:16] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 363.80 ms
[13:59:28] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 346.96 ms
[14:03:35] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 388.61 ms
[14:14:32] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-3 [+0/-0/±1] https://git.io/JOsN0
[14:14:34] [miraheze/puppet] paladox 444b3fd - gluster: Don't use exec to create mount directory
[14:14:35] [puppet] paladox created branch paladox-patch-3 - https://git.io/vbiAS
[14:14:37] [puppet] paladox opened pull request #1740: gluster: Don't use exec to create mount directory - https://git.io/JOsNu
[14:15:40] [puppet] paladox closed pull request #1740: gluster: Don't use exec to create mount directory - https://git.io/JOsNu
[14:15:41] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOsNX
[14:15:43] [miraheze/puppet] paladox 55c092f - gluster: Don't use exec to create mount directory (#1740)
[14:15:45] [puppet] paladox deleted branch paladox-patch-3 - https://git.io/vbiAS
[14:15:46] [miraheze/puppet] paladox deleted branch paladox-patch-3
[14:17:07] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOsAI
[14:17:08] [miraheze/puppet] paladox c5eecfd - gluster: Fix param in file {}
[14:18:15] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOsA4
[14:18:17] [miraheze/puppet] paladox fc9a20c - gluster: Change user permission on mount
[14:19:43] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-3 [+0/-0/±1] https://git.io/JOsA1
[14:19:45] [miraheze/puppet] paladox cba0f39 - gluster: Remove gluster.[pem|key|ca] file
[14:19:46] [puppet] paladox created branch paladox-patch-3 - https://git.io/vbiAS
[14:19:48] [puppet] paladox opened pull request #1741: gluster: Remove gluster.[pem|key|ca] file - https://git.io/JOsAD
[14:21:52] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-3 [+0/-0/±1] https://git.io/JOsxY
[14:21:54] [miraheze/puppet] paladox 42579c8 - Update init.pp
[14:21:55] [puppet] paladox synchronize pull request #1741: gluster: Remove gluster.[pem|key|ca] file - https://git.io/JOsAD
[14:42:42] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 263.26 ms
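For context on where the WARNING/CRITICAL boundaries in the ping6 alerts above come from: they are thresholds in the Icinga check definition, which isn't quoted in this log. The following is a minimal sketch of a monitoring-plugins check_ping invocation with round-trip-time thresholds inferred from the alerts (roughly 300 ms warning, 350 ms critical); the hostname, packet count and exact values are assumptions, not Miraheze's actual configuration. Demoting the alert would simply mean raising the critical RTA figure.

    # Hypothetical check command for cp3 over IPv6.
    # -w / -c take "rta_ms,packet_loss%" pairs; values are illustrative only.
    /usr/lib/nagios/plugins/check_ping -6 -H cp3.miraheze.org \
        -w 300.0,20% -c 350.0,60% -p 5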
[15:24:11] Sario: It's the path chosen by upstream network providers to get to Singapore from the UK
[15:35:03] JohnLewis: so short of moving providers, there's not much we can do to improve the connection
[15:36:18] Not really without creating our own network peering points
[15:40:55] > Not really without creating our own network peering points
[15:40:55] Not sure what the cost of that would be, but it does sound like it'd be a lot of network infrastructure to manage that would add complexity
[15:42:18] PROBLEM - m.miraheze.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'm.miraheze.org' expires in 15 day(s) (Fri 30 Apr 2021 15:38:35 GMT +0000).
[15:46:18] Quite a bit, as you can't operate a receiving VPS, you'd need a network presence as well in Singapore which would be rather costly - so you'll have the cost of essentially going for colocation and operating a full server just for a cache proxy
[15:50:02] ~£100/mo alone for the peering
[15:50:20] peering is more towards enterprise grade networking
[15:52:31] our physical servers and VMs are quite cheap (perhaps not for us, but in major companies, our expenses are next to nothing), I don't expect good network performance from them
[15:52:54] ^^
[15:52:58] yeah
[15:53:17] and in the end, it beats not having a cp3 in Singapore
[15:54:33] cp3 is up for renewal in the next few months anyway
[15:54:36] if we had more choice, we wouldn't be with our current partners
[15:54:47] but contracted with enterprise hosters instead
[15:55:35] and/or paying for a better SLA
[15:56:24] yeah, and I think cp3 is the one Wikimedia's Indonesian chapter pays for, right?
[15:56:35] Ye
[15:56:44] ack
[15:58:40] although most major companies outsource most of the stuff that we have been self-hosting for years :P
[15:59:01] use PaaS and SaaS solutions, why not..
[16:01:15] FYI: my quick maths says we need to raise £180 by the end of the year if we stick to the original budget. In April - December last year, we lost £192
[16:01:28] SPF|Cloud: that might be worth mentioning at next board ^
[16:04:37] Sure
[16:04:40] Brb, dinner
[16:05:40] SPF|Cloud: ack, I know owen wanted to have reserves as well in the bank at a certain level so someone probably needs to make clear at what point we decide on early fundraising etc
[16:05:48] Like beyond the annual december one
[16:06:28] Essentially my question is at what point does it become we need to act now
[16:06:54] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[16:11:08] RhinosF1, our revenues do not need to exceed our expenditures in any given month, though, as we do budget on an annual basis. Our fundraising is such that typically most months, barring several months once a year, our expenses will greatly exceed revenues. We may need to have another fundraiser this year, but I do feel that we should be fine until our next annual fundraiser (November-December-ish), taking into account some added donations
[16:11:08] throughout the year and the recurring donations from our GitHub sponsor users
[16:23:30] dmehus: my maths is based on current balance and what spf said in the plan we'd spend
[16:24:07] Them figures above would mean running out in November
[16:25:59] Do we have a method of recurring donations other than GitHub Sponsors?
[16:27:11] Not that I know of
[16:27:32] DD might work
[16:28:14] JohnLewis: https://meta.miraheze.org/wiki/Miraheze_Vacancies restructured :)
[16:28:15] [ Miraheze Vacancies - Miraheze Meta ] - meta.miraheze.org
[16:28:25] also added the Board's announcement as it's a good place to centralise these things
[16:29:27] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.26, 4.83, 2.63
[16:31:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.08, 3.43, 2.37
[16:33:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.72, 2.55, 2.18
[16:35:35] !log reception@jobrunner3:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/deleteBatch.php --wiki=metawiki --r "[[phab:T7137|Requested]]" /home/reception/metadel.txt
[16:35:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:38:38] Reception123: looks good
[16:39:16] great :) I've posted an announcement on Discord and Facebook/Twitter
[16:39:25] now we need to hope someone sees it
[16:39:34] Soon you'll be the biggest team again ;)
[16:39:46] heh, I hope so!
[16:41:24] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.09, 7.99, 4.82
[16:49:23] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.03, 3.78, 3.91
[16:53:23] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 8.84, 4.39, 3.97
[16:53:42] Some "Pre-requisites" for that new role? (Trust and Safety Responder) Only "good judgement to apply appropriate responses.."? Also, did you really want to share the link to the last section when you mentioned "software developer"?
[16:55:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.59, 3.10, 3.55
[16:55:28] JohnLewis: your expired Cisco certification will come in handy when we'll manage our own network ;-)
[16:56:40] managing network equipment, looking forward to that... must be a complex thing
[16:57:19] SPF|Cloud: it is stressful at times - glad I no longer do it as a 'job' but more when I get bored and I can pick and choose it :P
[16:57:27] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.10, 2.41, 3.24
[16:57:48] I've always found managing networks to be way more complex than managing servers
[16:58:14] It is
[16:58:17] IaC for servers isn't rocket science, but can't say that about network equipment
[16:59:27] plus relying on open source technologies is harder in networking
[17:03:47] there are two truths in IT: 1) it's always a network issue 2) if 1) is not true, it must be DNS
[17:04:17] and the root cause is PEBKAC in approximately 100% of the incidents
[17:04:42] Dmehus: GitHub Sponsors now has custom donation amounts
[17:06:10] JohnLewis, Oh, is that a recent change? It didn't when Reception123 and I checked a few months ago
[17:07:21] yeah, I didn't know that was a thing either until right now
[17:07:22] JohnLewis, I don't see a custom amount tier at https://github.com/sponsors/miraheze. Perhaps we just need to enable that then?
[17:07:22] [ Sponsor @miraheze on GitHub Sponsors · GitHub ] - github.com
[17:07:26] I thought it was limited to the custom sums
[17:07:33] yeah
[17:08:47] Reception123, do you have access to the GitHub Sponsors configuration for the miraheze project, to see if it's just a matter of enabling a custom amount tier?
[17:08:51] JohnLewis: dmehus yes, there's an option to enable them
[17:08:59] ah, cool
[17:09:19] JohnLewis: SPF|Cloud it wants a default amount for custom amounts, what do you think I should set that to?
[17:10:04] Hrm, well, the idea is users may want to have a custom amount, so if we want only fixed tier amounts on GitHub Sponsors, then we still need a Patreon account
[17:10:08] Reception123: I would recommend either $1 or $5
[17:10:29] I'd add two more tiers, $1/mo and $5/mo
[17:10:33] or just $1/mo
[17:10:47] They're supposed to be based on costs
[17:11:18] RhinosF1, many users may not want to donate $2/mo, though. It should be based on donor preferences as well
[17:11:33] We need to be flexible
[17:11:44] it's not tiers that it's asking for
[17:11:46] it's just a default sum
[17:12:01] Reception123, what do you mean?
[17:12:02] so basically when someone would select custom amount it would default to something
[17:12:12] oh
[17:12:13] gotcha
[17:12:17] I see what you mean
[17:12:20] the current tiers would still exist, but another box somewhere would appear and would have a default sum that's proposed
[17:12:30] IMO, we should remove the $2 tier, and set the default custom amount to $1
[17:12:34] yeah as long as they can change the custom amount, I'm fine with whatever the default tiers are
[17:12:58] s/default tiers are/default amount is
[17:12:58] dmehus meant to say: yeah as long as they can change the custom amount, I'm fine with whatever the default amount is
[17:12:59] regarding the tiers, SPF|Cloud designed them so you'll have to discuss that with him
[17:13:17] I feel like the default custom amount could be $5, any issues with that?
[17:13:26] it's not that important either way, it's just a default sum
[17:13:27] 5 sounds good
[17:13:29] that'd be fine, as long as it can be changed yeah
[17:13:36] yeah, it can easily be changed
[17:13:48] ohh, I see
[17:14:01] or we could just set custom amounts for all the tiers, allowing users to change the amounts
[17:14:08] it also allows a one-time payment through GH now
[17:14:10] with the custom amount
[17:14:14] oh nice
[17:14:18] that's actually ideal
[17:14:23] and that's not possible, the custom amount is a separate deal
[17:14:35] I've updated it so you can have a look now
[17:14:40] okay
[17:14:41] looking
[17:15:10] It's definitely a nice addition
[17:15:12] Reception123, LGTM
[17:15:17] Reception123: current settings LGTM
[17:15:19] I'm 💯 happy with that
[17:15:29] as it makes it obvious it can be changed
[17:15:40] yeah, it's a box
[17:15:45] yep
[17:15:48] * Reception123 is curious if anyone will use that
[17:16:41] oh, there's also one-time tiers available now
[17:17:02] I'll let SPF review the tiers though (both recurring and one-time) as you can now have 10
[17:17:14] yeah, that's cool, I like that
[17:17:28] as that was one of my other "wants," one-time donations via GitHub Sponsors
[17:18:35] yeah, it's nice
[17:20:14] yep
[17:39:10] [CreateWiki] Universal-Omega opened pull request #209: Partially undo pull request #208 - https://git.io/JOGVB
[17:40:16] miraheze/CreateWiki - Universal-Omega the build passed.
[17:41:21] [CreateWiki] Reception123 closed pull request #209: Partially undo pull request #208 - https://git.io/JOGVB
[17:41:22] [miraheze/CreateWiki] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JOGVH
[17:41:24] [miraheze/CreateWiki] Universal-Omega 3dec1ca - Partially undo pull request #208 (#209)
[17:42:22] miraheze/CreateWiki - Reception123 the build passed.
[17:49:16] Reception123: RhinosF1: so looking at https://grafana.miraheze.org/d/3L3WYylMz/mediawiki-job-queue?orgId=1&from=now-2d&to=now - I think it's fair to conclude that we don't have a problem with the jobrunner/jobqueue like we previously thought we did?
[17:49:17] [ Grafana ] - grafana.miraheze.org
[17:59:23] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 2.58, 4.71, 2.67
[17:59:45] JohnLewis: so far so good. Would be nice to see it under a surge of like 15 edits to GUP at once
[18:00:45] Such a rare occasion and naturally flooding a service with loads of requests will cause problems. But we always thought we had a problem, and over the span of a week, we've seen there isn't one
[18:01:53] yeah, it seems fine
[18:02:02] I wonder if it's because of those unclaimed jobs that you found just sticking around
[18:02:14] JohnLewis: it's not amazingly rare tbh
[18:02:37] Has it happened in the past 7 days though - a reasonable time period I'm asking you to consider?
[18:03:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.55, 3.29, 2.52
[18:04:18] True
[18:04:26] I'm saying so far it seems good
[18:04:59] Which now ticks off the "jobqueue has a problem" issue in my mind, so now I can start considering proposals to get rid of jobrunner*
[18:07:23] Yeah I definitely don't see an issue
[18:07:32] There's always a cap to what something can run at
[18:07:46] My comment was simply I wonder if that's higher
[18:08:54] Considering our plan is to essentially double the number of jobrunner processes, we only need to consider whether 4 (current) is capable of handling demand and whether there are any other obvious problems left unaddressed
[18:11:16] My other major one was it's unmaintained and that's solved by us maintaining it, so I don't have obvious issues
[18:13:20] Would it be worth creating a MediaWiki.org documentation page for our forked version of Jobrunner that we maintain?
[18:13:45] I wonder if other non-WMF wikis would be interested in using our fork?
[18:14:04] I don't think it's even documented there for the original
[18:14:09] oh, wow
[18:14:23] Because it's the farm situation
[18:14:28] true
[18:14:48] more related to backend infrastructure than actual MediaWiki software
[18:14:58] would be documented on WikiTech if anywhere, probably
[18:15:17] For 99% of users that have 1 wiki on 1 server, they'll use runJobs probably in systemd with --wait or a cron
[18:15:35] It's on WikiTech in the history of jobrunners page
[18:16:45] The jobrunner process is essentially runJobs with systemd and --wait - just had a lot of Redis stuff because it's better than using the database for jobs
[18:17:20] Iirc, jobs go into Redis and it runs a specific wiki and job type, doesn't it
[18:17:38] --wait would be at least 1 runner per wiki
[18:17:46] Which we definitely don't do
[18:18:10] We recycle a lot - but on paper it essentially is --wait
[18:18:27] It's just a case of recycle rather than --wait
[18:18:40] Yeah
[18:19:29] > The jobrunner process is essentially runJobs with systemd and --wait - just had a lot of Redis stuff because it's better than using the database for jobs
[18:19:29] ah, interesting. when you put it that way, that really simplifies and encapsulates jobrunner's purpose / how it works
[18:20:12] The job runner is extremely basic in its logic
[18:20:14] It is really easy to explain what it does, it just looks complicated because Redis allows more to be done than the DB version of job storage does
[18:20:35] So we utilise the fact Redis is more advanced
[18:22:11] It's the backends that do the complex stuff with dedupe and whatever else
[18:22:35] Whether that be Redis like us or Kafka like upstream
[18:22:55] (DB like default does nothing complex and might as well just be a list iirc)
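To make the "runJobs with systemd and --wait" comparison above concrete, here is a minimal sketch of that single-wiki pattern. The unit name, paths and the metawiki example are assumptions for illustration only; Miraheze's actual jobrunners use the forked Redis-based jobrunner service that recycles runners across wikis, as discussed above.

    # /etc/systemd/system/mw-jobrunner-metawiki.service  (hypothetical example)
    [Unit]
    Description=MediaWiki job runner for a single wiki (sketch)
    After=network.target

    [Service]
    User=www-data
    # --wait keeps the process alive, polling for new jobs instead of exiting
    # when the queue is empty; this is why plain --wait implies one runner
    # per wiki, which does not scale to a farm.
    ExecStart=/usr/bin/php /srv/mediawiki/w/maintenance/runJobs.php --wiki=metawiki --wait
    Restart=always

    [Install]
    WantedBy=multi-user.target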
[18:30:47] is there a limit for API requests (outside of a wiki)?
[18:31:10] Same as normal requests I think
[18:31:15] paladox: ^
[18:44:56] ?
[18:45:12] paladox: rate limit for api.php
[18:46:40] https://github.com/miraheze/puppet/blob/master/modules/varnish/templates/default.vcl#L205 12 over 2s
[18:46:41] [ puppet/default.vcl at master · miraheze/puppet · GitHub ] - github.com
[18:49:16] paladox: https://grafana.miraheze.org/d/uOLD33lMz/ldap?orgId=1 can you categorise this dashboard in Grafana please :)
[18:49:16] [ Grafana ] - grafana.miraheze.org
[18:49:27] Sure
[18:50:36] Done
[18:50:37] https://grafana.miraheze.org/dashboards/f/a5R8aFXMz/monitoring
[18:50:38] [ Grafana ] - grafana.miraheze.org
[18:51:12] paladox: a monitoring folder on a monitoring platform? Aren't all dashboards there for monitoring?
[18:51:57] Ah right, you are right. Fixed
[18:52:00] changed it to LDAP
[18:52:23] Thanks
[18:53:02] > https://github.com/miraheze/puppet/blob/master/modules/varnish/templates/default.vcl#L205 12 over 2s
[18:53:02] oh yeah, that's the limit that used to be a bit lower last fall I think. Can't remember the old limit
[18:53:02] [ puppet/default.vcl at master · miraheze/puppet · GitHub ] - github.com
[19:02:21] oh, thanks for the limit :) I'm not very familiar with the MW API system, but if the limit is exceeded, does it show an error or does it just stop?
[19:04:15] It shows an error
[19:10:55] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[19:17:10] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JOGST
[19:17:11] [miraheze/puppet] paladox ce40d4f - gluster: Set remount to true
[19:30:28] https://phabricator.miraheze.org/T7139
[19:30:29] [ ⚓ T7139 MediaWiki Capacity Proposal ] - phabricator.miraheze.org
[19:37:37] Sounds good JohnLewis
[19:46:05] [RottenLinks] pastakhov opened pull request #30: Fix sql patches for case when rottenlinks.rl_externallink is primary key - https://git.io/JOGQl
[19:46:07] @Lake, yeah, that's the Varnish rate limit, though. That can't be exceeded at all. There's also a MediaWiki API rate limit, which can be exceeded, provided it's less than the Varnish rate limit, of course, if the user has the high API limits user right, afaik. I'm not sure what the MediaWiki API rate limits are set to though
[19:47:04] I see. It's because my friend is doing a college project and he thought of using the API of my wiki on a demo app
[19:47:12] miraheze/RottenLinks - pastakhov the build passed.
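For readers unfamiliar with the two layers discussed above: the "12 over 2s" figure is enforced at the Varnish edge before requests reach MediaWiki. Below is a minimal sketch of how such a limit is commonly written with the vsthrottle vmod; the URL pattern, key, and status code are assumptions for illustration, not a copy of the default.vcl linked above.

    import vsthrottle;

    sub vcl_recv {
        # Illustrative only: allow at most 12 api.php requests per client
        # in any 2-second window, then answer with an HTTP error (the
        # "shows an error" behaviour mentioned at 19:04).
        if (req.url ~ "^/w/api\.php" &&
            vsthrottle.is_denied(client.identity, 12, 2s)) {
            return (synth(429, "Too Many Requests"));
        }
    }

On the MediaWiki side, per-action throttles come from $wgRateLimits (bypassed by the noratelimit right), while the "high API limits" right mentioned at 19:46 (apihighlimits) raises the maximum result sizes the API returns per query rather than the request rate; which of these Miraheze has tuned is not stated in the log.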
[19:47:51] Lake, ah [19:48:55] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.49, 1.88, 1.27 [19:50:55] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 1.02, 1.57, 1.22 [20:43:53] [02WikiDiscover] 07Universal-Omega opened pull request 03#40: Add skin aliases for {{NUMBEROFWIKISBYSETTING}} magic word - 13https://git.io/JOGN5 [20:44:59] miraheze/WikiDiscover - Universal-Omega the build passed. [21:01:23] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 1.93, 5.47, 3.55 [21:05:23] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.45, 3.76, 3.37 [21:07:25] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 8.06, 4.36, 3.57 [21:09:24] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.83, 3.21, 3.24 [21:45:23] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 2.22, 4.09, 2.59 [21:47:23] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.43, 2.78, 2.29 [21:52:42] PROBLEM - wiki.insideearth.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:54:41] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.71, 1.46, 1.08 [21:56:41] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 1.54, 1.50, 1.14 [21:59:23] RECOVERY - wiki.insideearth.info - LetsEncrypt on sslhost is OK: OK - Certificate 'sni.cloudflaressl.com' will expire on Thu 02 Sep 2021 12:00:00 GMT +0000. [22:10:54] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp10.miraheze.org