[01:08:06] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.15, 1.73, 1.26
[01:10:07] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.84, 1.65, 1.28
[01:12:07] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.79, 1.30, 1.20
[01:12:48] PROBLEM - mw10 Disk Space on mw10 is WARNING: DISK WARNING - free space: / 1879 MB (9% inode=73%);
[01:16:53] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[01:51:54] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.69, 1.93, 1.41
[01:53:54] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 1.31, 1.61, 1.35
[04:03:47] PROBLEM - mw9 Disk Space on mw9 is CRITICAL: DISK CRITICAL - free space: / 1083 MB (5% inode=74%);
[04:07:47] PROBLEM - mw9 Disk Space on mw9 is WARNING: DISK WARNING - free space: / 1192 MB (6% inode=74%);
[04:07:54] PROBLEM - mw11 Disk Space on mw11 is WARNING: DISK WARNING - free space: / 2082 MB (10% inode=73%);
[04:16:53] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[04:25:53] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[05:05:46] PROBLEM - mw9 Disk Space on mw9 is CRITICAL: DISK CRITICAL - free space: / 1064 MB (5% inode=74%);
[05:07:46] PROBLEM - mw9 Disk Space on mw9 is WARNING: DISK WARNING - free space: / 1191 MB (6% inode=74%);
[06:05:46] PROBLEM - mw9 Disk Space on mw9 is CRITICAL: DISK CRITICAL - free space: / 1063 MB (5% inode=74%);
[06:07:46] PROBLEM - mw9 Disk Space on mw9 is WARNING: DISK WARNING - free space: / 1190 MB (6% inode=74%);
[06:10:52] PROBLEM - mw8 Disk Space on mw8 is WARNING: DISK WARNING - free space: / 2053 MB (10% inode=73%);
[06:25:53] RECOVERY - mw11 Disk Space on mw11 is OK: DISK OK - free space: / 2093 MB (11% inode=73%);
[07:10:53] PROBLEM - mw11 Disk Space on mw11 is WARNING: DISK WARNING - free space: / 1957 MB (10% inode=73%);
[07:24:53] !log reception@jobrunner3:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki maintenantswiki /home/reception/delbackups2/maintenantswiki.xml
[07:24:56] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[07:30:00] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-1/±2] https://git.io/JOWn6
[07:30:01] [miraheze/ssl] Reception123 53da1b4 - remove bluevies.org cert No longer pointing
[07:32:20] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-1/±1] https://git.io/JOWce
[07:32:22] [miraheze/ssl] Reception123 9dcbaba - remove wiki.yapsavun.com No longer pointing
[07:43:23] paladox: how would /tmp be emptied without making the server go down (that's what happened a while ago when I tried)
[07:43:39] /tmp across mw* is between 3.5-5G causing icinga to warn about disk space
[07:43:45] so I'd like to delete the useless stuff from there
[07:50:08] I said rm -rf /tmp/*
[07:50:32] Note the /* because last time deleting the tmp dir iirc caused issues with perms to recreate stuff
[07:51:00] Reception123 thought a specific file might have been needed in there
[07:51:05] yeah, just making sure since last time there was also a file issue or something yeah
[07:51:07] Although then it shouldn't be in tmp
[07:51:11] I don't really remember though, just want to make sure
[08:09:33] PROBLEM - mw9 Disk Space on mw9 is CRITICAL: DISK CRITICAL - free space: / 1026 MB (5% inode=74%);
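
The /* in the thread above is the key detail: `rm -rf /tmp/*` empties the directory but leaves /tmp itself alone, so its 1777 sticky-bit permissions survive, whereas removing /tmp outright means recreating it with the right mode before services can write temp files again. A minimal sketch of both operations (purely illustrative, not the exact commands run that day):

```bash
# Safe: empty /tmp but keep the directory itself, so its mode
# (1777, sticky bit) and ownership stay untouched.
sudo rm -rf /tmp/*

# If /tmp ever does get deleted outright, recreate it with the
# sticky bit set, or services will fail to write temp files.
sudo install -d -m 1777 -o root -g root /tmp
```

Two caveats: the shell glob skips dot-files, and space held by a file a running process still has open is not released until that process closes it.
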
[08:09:54] RECOVERY - wiki.globasa.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.globasa.net' will expire on Sun 11 Jul 2021 16:41:59 GMT +0000.
[08:11:16] RECOVERY - en.famepedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'en.famepedia.org' will expire on Tue 13 Jul 2021 05:45:30 GMT +0000.
[10:06:22] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.13, 3.69, 2.38
[10:10:10] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.79, 2.50, 2.20
[10:26:28] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp10.miraheze.org
[10:41:36] @owen: can you check your email
[10:44:07] @RhinosF1 I can do it in the next 30 minutes unless it's really urgent
[10:44:39] @Owen: 30 minutes should be okay, it's about the most comprehensive complaint I've ever seen
[11:36:49] @RhinosF1 Are you able to come onto Discord so I can send you my thoughts?
[11:47:50] 2 minutes @Owen
[12:23:15] Reception123: what Rhinos says, or just remove the files that you see lots of or the ones that have large sizes
[12:27:38] paladox: I've been logged out on both my devices
[12:43:56] Ok, I'll try that and hopefully nothing bad happens
[13:11:10] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 4.34, 3.73, 1.92
[13:13:10] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.72, 2.55, 1.72
[13:36:22] RECOVERY - mw8 Disk Space on mw8 is OK: DISK OK - free space: / 5848 MB (30% inode=73%);
[13:37:21] Reception123: https://phabricator.miraheze.org/T7131 what is the priority of this task in the MW team?
[13:37:22] [ ⚓ T7131 Consistent 50x errors ] - phabricator.miraheze.org
[13:37:29] and/or can I help you?
[13:37:59] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.52, 5.47, 3.44
[13:38:14] SPF|Cloud: your help would be very useful yes :)
[13:38:17] loading 247 thumbnails from Wikimedia Commons in a navigation bar is 'too much'
[13:38:27] since you've already done some investigatigating
[13:38:30] *investigating
[13:38:32] 'some'
[13:38:42] SPF|Cloud: is it via InstantCommons?
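
The thread continues below: the page in question pulls 247 thumbnails from Wikimedia Commons via InstantCommons, so every uncached parse makes 247 network round trips. A rough way to quantify that is to time a single thumbnail fetch and multiply (the thumbnail URL here is only illustrative):

```bash
# Time one Commons thumbnail round trip (illustrative URL).
curl -s -o /dev/null \
  -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
  'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Example.jpg/120px-Example.jpg'
```

Even at an optimistic 0.1-0.2 s per uncached fetch, 247 fetches inside one parse lands in PHP execution-timeout territory, which matches the timeouts described below.
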
[13:38:48] I think it's quite comprehensive :)
[13:38:53] yes, it's instantcommons
[13:39:03] sorry yeah, definitely more than 'some'
[13:39:06] RECOVERY - mw9 Disk Space on mw9 is OK: DISK OK - free space: / 5841 MB (30% inode=74%);
[13:39:13] it's fine :)
[13:39:48] SPF|Cloud: I know upstream have tasks for limiting the number of images in a page
[13:39:50] we could dig further into the network latency statistics, but 247 thumbnails won't work
[13:39:53] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 1.13, 3.88, 3.10
[13:39:53] So I assume you want that
[13:40:37] I would recommend to upload the images locally (no InstantCommons required) or to reduce the number of images
[13:41:06] it seems like that page has been changed already and I was able to access it pretty much fine now - https://socdemwiki.miraheze.org/w/index.php?title=Template:Geonav&action=history
[13:41:07] [ Revision history of "Template:Geonav" - SocDemWiki ] - socdemwiki.miraheze.org
[13:41:12] locally would be less latent but it's still gonna have a limiy
[13:41:15] limit*
[13:41:31] because parsercache works and I assume the images are cached after a while
[13:41:48] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.60, 3.02, 2.87
[13:42:05] but I have seen multiple timeouts when trying to parse the article content
[13:42:24] images are cached
[13:42:51] until they aren't cached anymore
[13:43:43] ye
[13:44:20] but until this wiki causes a DoS, it's not up to me
[13:44:48] networking wise I see little room for improvement
[13:46:24] JohnLewis is around now, maybe he has a great idea
[13:46:25] I see little we can do to help
[13:46:25] (brb)
[13:46:51] yeah, I can't really see much we can do either
[13:47:19] For a lot of images on a wiki page? Not really without removing them
[13:48:01] mw9 shows packet loss to one of the hops (when running mtr)
[13:48:15] err, five hops out of twenty actually
[13:49:32] !log sudo rm -rf /tmp/* on mw*
[13:49:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[13:49:44] (brb now)
[13:49:44] RECOVERY - mw11 Disk Space on mw11 is OK: DISK OK - free space: / 6846 MB (36% inode=73%);
[13:50:19] RECOVERY - mw10 Disk Space on mw10 is OK: DISK OK - free space: / 6058 MB (31% inode=73%);
[14:05:42] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.88, 3.93, 2.63
[14:07:43] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.79, 2.82, 2.38
[15:33:42] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.32, 4.26, 2.39
[15:35:44] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.04, 3.01, 2.16
[16:32:26] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[16:37:42] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.23, 3.62, 2.25
[16:39:43] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.10, 2.65, 2.06
[17:07:44] JohnLewis: bullseye due by end of May
[17:08:21] RhinosF1, Bullseye is the codename for the new Debian version, right?
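
On the packet loss SPF|Cloud saw with mtr above ("five hops out of twenty"): report mode turns the interactive screen into per-hop loss and latency numbers that can be pasted straight into a task. The host name is illustrative:

```bash
# 100 probe cycles, wide report with per-hop loss% and latency.
mtr --report --report-wide --report-cycles 100 mw9.miraheze.org
```

Loss that shows at intermediate hops but not at the final hop is usually routers de-prioritising ICMP rather than real packet loss, so the last line of the report is the one that matters.
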
[17:08:28] dmehus: yes
[17:08:33] ah, cool
[17:10:11] Fun fun
[17:11:47] JohnLewis: I'm thinking is it worth holding off the new job servers you proposed to save having to reimage a few weeks later
[17:12:14] Mw cluster is easy to upgrade I think
[17:12:21] Assuming php7 is default
[17:12:49] I'm proposing getting rid of jobservers so
[17:13:37] JohnLewis: yeah but there's jobchron and a task server in your proposal
[17:13:44] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 5.55, 4.50, 2.60
[17:13:55] But yes jbr* can just be binned
[17:14:17] They can't be binned until those servers are in place though
[17:15:02] But if MediaWiki want to wait a few weeks, it's fine
[17:15:29] But I'm going to do the jobrunner service changes today as they need to be monitored for a period of time to understand impact
[17:15:43] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.93, 3.40, 2.42
[17:19:28] JohnLewis: I want to avoid us having to go through reimaging just after installing. Jobchron will be user impacting when that's down, won't it? So that's another thing
[17:19:55] jobchron isn't user facing
[17:22:12] JohnLewis: it'll affect processing of jobs though right?
[17:23:06] Yeah, but as we've seen currently in the stats, it'll not cause any long-lasting impact, an hour at most
[17:23:41] Ack
[17:24:57] Then I suppose it's whether infra want to have to reimage just after installing JohnLewis. I'm mainly pointing out that we should consider avoiding having to do installs twice and multiple maintenance periods
[17:25:50] Technically MW would handle the reimaging of the servers and arranging that
[17:26:59] JohnLewis: do we even have access to that? I suppose Reception123 would
[17:27:17] Reception does
[17:27:30] MWEs don't, but SREs do
[17:27:55] Reception123: what do you want to do then about setting up jobchron and task server?
[17:28:41] JohnLewis: if we hold then how would it impact you
[17:28:55] Because you've got 4 new servers you want to image
[17:29:10] It doesn't affect me
[17:29:39] I'm doing a task for the MW team, if they don't want to do it now, it doesn't really impact me as I'm volunteering my time to do it for you guys
[17:30:34] JohnLewis: you can do the first two tick boxes though in your plan
[17:30:57] I plan to shortly :)
[17:31:17] Ack
[17:31:39] The other thing Reception123 pointed out is 1.36
[17:31:46] So we'll come up with a plan
[17:31:56] Whenever Reception123 wants a team meeting
[17:33:30] Team meetings should be easier to arrange for both teams. John and Paladox are in the same time zone, and there's only a 1 hour difference between you and Reception123
[17:34:04] Yeah I mean there's only me and Reception123 in our team
[17:34:26] So it is really just a case of Reception123 replying and saying their thoughts in the team chat
[17:34:26] yeah, that's what I mean. Can literally be a DM when you're both free
[17:34:43] We have an MWE irc channel
[17:34:50] or that too yeah
[17:36:52] dmehus: meeting times are on Phab
[17:37:43] https://phabricator.miraheze.org/calendar/
[17:37:44] [ ⌨ Query: Month View ] - phabricator.miraheze.org
[17:39:50] JohnLewis, ah, cool. I don't think I've ever looked at the calendar application on Phab, at least not the month view
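
On the reimage-or-wait question above, the underlying inventory check is worth scripting before planning a Bullseye cycle; a quick sketch, with an assumed host list:

```bash
# Print the Debian release each app server is on (host list assumed).
for h in mw8 mw9 mw10 mw11; do
  printf '%s: ' "$h"
  ssh "$h" cat /etc/debian_version
done
```
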
[17:40:35] Yeah, it's not really used too much but probably should/could be used more often
[17:42:01] Apparently it should have been today
[17:43:48] oh yeah
[17:44:25] looks like Paladox and John have a team meeting on Monday
[17:45:02] [miraheze/puppet] JohnFLewis pushed 1 commit to master [+0/-0/±5] https://git.io/JOl72
[17:45:04] [miraheze/puppet] JohnFLewis 36d66f5 - reduce jobrunner processes to 1, but deploy to mw*
[18:43:49] JohnLewis: me and Reception123 do think we'd prefer to wait until we can set them up from clean with 1.36+Bullseye so it's probably gonna be late May / early June
[18:44:29] It's fine, it needs Reception123 to endorse before they can be created anyway per approvals process
[18:44:32] yeah, though if 1.36 seems like it will take longer we can do it before
[18:44:44] and anyway that will give us time to monitor mw* servers
[18:45:32] Which currently seem fine from the past 45 minutes
[18:45:43] The jobs graph looks fine
[18:45:47] Haven't seen any others
[18:46:10] the jobs graph looked fine before, so it wasn't the most useful graph I expected it to be
[18:48:37] I was pleasantly surprised by that
[18:49:57] Me too, but it kind of confirmed what I thought was always the case: the problem with jobs was never power, it was just poor monitoring and thinking a restart is more useful than debugging
[18:55:53] I will say the global renames are completed much more quickly since we reconfigured the way those abandoned/unclaimed jobs were handled
[18:56:26] yeah the only very minor/niche issue left with those is https://phabricator.miraheze.org/T7011
[18:56:26] [ ⚓ T7011 Renames getting stuck on deleted wikis ] - phabricator.miraheze.org
[18:56:27] whereas a global rename used to take up to 1-2 hours to be completed, it can now be completed within, I'd say, < 2-5 minutes
[18:56:39] * Reception123 wonders if JohnLewis would have any idea how to fix that, even though it's very niche
[18:56:54] Reception123, lol yeah it would be nice if we could fix that but I agree it's very niche
[18:57:02] as RhinosF1 said, it couldn't be more niche
[18:57:12] yeah, but it will happen again eventually
[18:57:18] yeah
[18:57:33] wait I just had an idea
[18:58:09] I wonder if that was related to the issue with jbr, and if we should try fixing that stuck global rename then rerunning it now that we've reconfigured things?
[18:58:10] Reception123: any errors/logs associated with it? If not, I can't help really
[18:58:10] Yes I did say that
[18:58:28] JohnLewis: iirc we left a rename open at the time for debugging
[18:58:45] https://meta.miraheze.org/wiki/Special:GlobalRenameProgress?username=RenameTestAccount
[18:58:46] [ Global rename progress - Miraheze Meta ] - meta.miraheze.org
[18:58:46] Were any logs recovered from that though?
[18:58:55] JohnLewis, basically when the top wiki in a user's CentralAuth is a deleted wiki, according to the default sort, it gets stuck
[18:58:57] iirc there weren't any
[18:58:57] Not when I looked
[18:59:11] Reception123, nothing in graylog?
[18:59:17] No errors
[18:59:20] oh
[18:59:23] nope, I didn't see any
[18:59:27] or else I would've for sure pasted them
[18:59:48] What is the top wiki?
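
For the stuck rename being debugged here: the CentralAuth extension ships a fixStuckGlobalRename.php maintenance script for renames that stall mid-way. The invocation below is only a sketch modelled on the importDump.php call logged earlier in the day; the path and the exact arguments are assumptions, so check the script's --help before running anything:

```bash
# Sketch: retry a global rename stuck in progress. Old and new user
# names passed as arguments; path mirrors the layout seen above.
sudo -u www-data php /srv/mediawiki/w/extensions/CentralAuth/maintenance/fixStuckGlobalRename.php \
  --wiki metawiki 'RenameTestAccount2' 'RenameTestAccount'
```
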
[18:59:59] * dmehus is trying to remember which one it was
[19:00:04] let me look
[19:00:46] JohnLewis, `aaawiki` or `testrenamewiki`, I think
[19:00:54] believe the former
[19:01:04] It'll be aaawiki
[19:01:18] Because that one is the first in the list test
[19:01:22] yeah
[19:01:38] `testrenamewiki` was from the first rename test, not that one
[19:02:18] Ye
[19:02:19] yeah it would be `aaawiki` afaik
[19:02:25] There's logs on aaawiki creating the account so it can't be a jobrunner error. Likely the job was just never inserted to any wikis
[19:02:54] So you'll need to look at the code for job insertions on a rename and see if there is anything there that can trip it out
[19:03:01] > Likely the job was just never inserted to any wikis
[19:03:01] Yeah, that seems likely. Any ideas on possible causes?
[19:03:07] That sounds fun
[19:03:16] If we knew a cause, it wouldn't be an issue
[19:03:28] > So you'll need to look at the code for job insertions on a rename and see if there is anything there that can trip it out
[19:03:28] ah, yeah, makes sense
[19:23:34] Reception123: see task
[19:24:16] JohnLewis: Failed to unserialize configuration array.
[19:25:14] https://github.com/wikimedia/mediawiki/blob/467a4f32b0475fd2840264ada94e3ce87518d22a/includes/SiteConfiguration.php#L557
[19:25:14] [ mediawiki/SiteConfiguration.php at 467a4f32b0475fd2840264ada94e3ce87518d22a · wikimedia/mediawiki · GitHub ] - github.com
[19:26:14] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.16, 1.97, 1.62
[19:26:21] Is it something we could fairly easily submit a PR on Gerrit to correct, or would it be better to fix that locally and just re-patch it locally with updated MediaWiki versions?
[19:26:40] JohnLewis: how does that work?
[19:26:53] Have a look at the script it runs and see
[19:28:11] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.89, 1.61, 1.53
[19:28:19] JohnLewis: is that trying to parse LS
[19:29:22] Whatever it's doing there's a lot of regex involved
[19:29:31] Which for me I can't imagine is needed
[19:32:00] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org
[19:34:44] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.18, 4.75, 2.64
[19:36:39] Red heron - nevermind
[19:38:52] so https://phabricator.miraheze.org/T7011#141803 is it? then the question is why didn't it?
[19:38:52] [ ⚓ T7011 Renames getting stuck on deleted wikis ] - phabricator.miraheze.org
[19:40:33] Reception123: because the wiki was never accessed from test3 before :)
[19:40:42] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.66, 3.35, 2.96
[19:40:54] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com
[19:42:18] ah heh I misread it, you just meant that the eval results you found weren't correct because it was on test3
[20:00:32] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 6.87, 6.34, 4.00
[20:01:49] https://meta.miraheze.org/wiki/Special:GlobalRenameProgress?username=RenameTestAccount
[20:01:51] [ Global rename progress - Miraheze Meta ] - meta.miraheze.org
[20:02:38] JohnLewis, did you run the script to fix the stuck rename, or if not, what did you do to fix that?
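
The wiki.mlpwiki.net rDNS alerts flapping throughout this log are easy to reproduce by hand: forward-resolve the domain, then reverse-resolve whichever address comes back. When a domain has several A records and one still points at an old host, each check can land on a different answer:

```bash
# Reverse-resolve every address the domain currently returns; a stale
# record shows up as a foreign PTR (the unifiedlayer.com one here)
# instead of the expected cp1x.miraheze.org.
for ip in $(dig +short A wiki.mlpwiki.net); do
  echo "$ip -> $(dig +short -x "$ip")"
done
```
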
[20:03:02] See task
[20:03:35] oh I note the accounts are not attached now, looking now though
[20:04:24] ah it's what you thought initially... the job isn't getting queued up initially
[20:04:31] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.66, 3.32, 3.29
[20:04:59] Important to note though, I didn't insert the job, I did the whole process from scratch - essentially I did Special:GlobalUserRename just server side
[20:05:09] ah
[20:05:17] and it didn't attach the wiki accounts
[20:05:32] can we fix that on the server side, or should I fix that on-wiki?
[20:06:11] For a test account, unsure why we really care :)
[20:07:36] Well, I mean they seem like they're merged into the global account, but they're not showing as "attached" and by what "method" https://usercontent.irccloud-cdn.com/file/q7OEjFn0/2021-04-16%2013.06.05%20meta.miraheze.org%2043ec638ca286.jpg
[20:12:40] dmehus: do we care?
[20:13:28] Because I'm too lazy to fix on that many wikis
[20:14:22] RhinosF1, yeah, I mean, I do. Though it's unlikely we'll need to do anything with this account, it's technically still in a broken state as the unmerge checkboxes aren't available, so there's probably just a config we can run in `eval.php` maybe, no? then we wouldn't need to run the maintenance script on each wiki
[20:22:17] Idea: Maybe I could try renaming the account back to RenameTestAccount2, as a dual purpose? For one, the act of initiating the rename would detach all local accounts from the old account, then reattach them correctly to the new account. For two, we could test to see if the bug where the first listed wiki is a deleted wiki still exists
[20:53:29] dmehus: just leave it alone
[21:32:19] [mw-config] R4356th commented on pull request #3831: Do not enable DarkMode by default - https://git.io/JO8na
[21:33:28] [mw-config] R4356th commented on pull request #3831: Do not enable DarkMode by default - https://git.io/JO8nX
[21:39:20] [mw-config] RhinosF1 commented on pull request #3831: Do not enable DarkMode by default - https://git.io/JO8c4
[21:44:37] [mw-config] R4356th opened pull request #3833: Header modifications for pokemundowiki (T7146) - https://git.io/JO8cF
[21:45:41] miraheze/mw-config - R4356th the build passed.
[21:47:07] [dns] dmehus opened pull request #202: Creating zone file for trollpasta.com - https://git.io/JO8Cv
[21:51:01] dmehus: https://github.com/dmehus/dns/pull/1/files
[21:51:03] [ Fix messed up copy paste by RhinosF1 · Pull Request #1 · dmehus/dns · GitHub ] - github.com
[21:51:18] RhinosF1, ty
[21:51:19] looking
[21:51:22] You've got things all on one line that shouldn't be
[21:51:28] God knows what you've copied
[21:52:07] [dns] dmehus synchronize pull request #202: Creating zone file for trollpasta.com - https://git.io/JO8Cv
[21:52:24] RhinosF1, ah, right, I copied from electowiki.org
[21:53:12] dmehus: no you didn't
[21:53:25] You copied the nice fancy view of the file
[21:53:31] Not the raw version
[21:53:32] oh
[21:53:39] yeah that's what I did
[21:53:55] dmehus: if you're copying code, use edit or raw view
[21:53:58] so always copy from the raw version
[21:54:00] ah
[21:54:03] RhinosF1, ack
[21:54:42] Github makes the dns look ugly when it puts in its fancy view
[21:54:54] I have no idea why or to what benefit
[21:55:06] yeah...
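
The copy-paste failure above is the classic one: GitHub's rendered "fancy" file view is display markup, and copying from it can drop the newlines, while the Raw view is the committed file byte for byte. Fetching the raw file skips the clipboard entirely (the repo path here is illustrative):

```bash
# Pull the committed zone file exactly as stored, bypassing the
# rendered view (path assumed for illustration).
curl -O https://raw.githubusercontent.com/miraheze/dns/master/zones/electowiki.org
```
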
[21:56:32] paladox: can you merge
[21:56:55] RhinosF1, will
[21:56:55] ```; CAA (issue: letsencrypt.com, iodef: operations)
[21:56:55] @ TYPE257 \# 22 000569737375656C657473656E63727970742E6F7267
[21:56:55] @ TYPE257 \# 37 0005696F6465666D61696C746F3A6F7065726174696F6E73406D69726168657A652E6F7267```
[21:56:55] get updated when Reception123 runs the cert script?
[21:57:13] CAA records aren't TLS certs
[21:57:14] [dns] paladox reviewed pull request #202 commit - https://git.io/JO8Wq
[21:57:21] RhinosF1, ah
[21:57:24] They're things that tell CAs who can issue certs
[21:57:26] [dns] paladox reviewed pull request #202 commit - https://git.io/JO8Wq
[21:57:48] RhinosF1, ah, so that's why it's the same for all domains then, essentially, all those that use LetsEncrypt anyway
[21:57:58] dmehus: yes
[21:58:12] [dns] dmehus reviewed pull request #202 commit - https://git.io/JO8WG
[21:58:56] [dns] dmehus synchronize pull request #202: Creating zone file for trollpasta.com - https://git.io/JO8Cv
[21:59:11] RhinosF1, ack, cool
[21:59:14] [dns] paladox closed pull request #202: Creating zone file for trollpasta.com - https://git.io/JO8Cv
[21:59:16] [miraheze/dns] paladox pushed 1 commit to master [+1/-0/±0] https://git.io/JO8Wc
[21:59:17] [miraheze/dns] dmehus 850bce1 - Creating zone file for trollpasta.com (#202)
[21:59:26] [dns] dmehus reviewed pull request #202 commit - https://git.io/JO8WW
[22:01:18] RhinosF1, I just realized we don't have GitHub Actions reviewing commits in the DNS repo. Guessing it's not needed?
[22:01:40] I don't think there's a way we have to test it tbh
[22:01:46] ah
[22:05:03] [mw-config] MusikAnimal commented on pull request #3831: Do not enable DarkMode by default - https://git.io/JO8WN
[22:35:43] [miraheze/ssl] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/JO84S
[22:35:45] [miraheze/ssl] paladox f599442 - Add trollpasta.com
[22:35:46] [ssl] paladox created branch paladox-patch-1 - https://git.io/vxP9L
[22:35:48] [ssl] paladox opened pull request #404: Add trollpasta.com - https://git.io/JO849
[22:36:10] [miraheze/ssl] paladox pushed 1 commit to paladox-patch-1 [+1/-0/±0] https://git.io/JO847
[22:36:12] [miraheze/ssl] paladox 1622636 - Create trollpasta.com.crt
[22:36:13] [ssl] paladox synchronize pull request #404: Add trollpasta.com - https://git.io/JO849
[22:36:35] [ssl] paladox closed pull request #404: Add trollpasta.com - https://git.io/JO849
[22:36:37] [miraheze/ssl] paladox pushed 1 commit to master [+1/-0/±1] https://git.io/JO84F
[22:36:38] [miraheze/ssl] paladox 0d77f4d - Add trollpasta.com (#404)
[22:36:40] [miraheze/ssl] paladox deleted branch paladox-patch-1
[22:36:41] [ssl] paladox deleted branch paladox-patch-1 - https://git.io/vxP9L
[22:39:27] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±0] https://git.io/JO8BJ
[22:39:29] [miraheze/ssl] paladox 376254b - Update trollpasta.com.crt
[22:44:44] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JO8B6
[22:44:46] [miraheze/ssl] paladox b1c2121 - Redirect www.trollpasta.com to trollpasta.com
[22:44:57] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JO8B1
[22:44:58] [miraheze/ssl] paladox 5c1f9a5 - fix
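
The TYPE257 records pasted above are ordinary CAA records written in RFC 3597 "unknown type" syntax; the hex decodes to issue "letsencrypt.org" and iodef "mailto:operations@miraheze.org" (the comment in the paste says letsencrypt.com, but the bytes say .org). A resolver will decode them, assuming the records above are live in the zone:

```bash
# Query the CAA records in readable form; dig decodes TYPE257 itself.
dig +short CAA trollpasta.com
# Expected shape of the answer:
#   0 issue "letsencrypt.org"
#   0 iodef "mailto:operations@miraheze.org"
# In the raw hex, byte 1 is the flags field, byte 2 the tag length,
# then the tag ("issue"/"iodef") and value follow as ASCII.
```

As the thread says, these only constrain which CA may issue for the domain; they are not certificates, so the cert renewal script never touches them.
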
[22:52:05] PROBLEM - www.trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.trollpasta.com could not be found
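
To close the loop on the trollpasta.com certificate work above, the serving certificate's validity window can be checked directly, which is roughly what the sslhost LetsEncrypt check does on its own schedule:

```bash
# Fetch the live certificate and print its notBefore/notAfter dates.
echo | openssl s_client -connect trollpasta.com:443 -servername trollpasta.com 2>/dev/null \
  | openssl x509 -noout -dates
```

The rDNS warning in the last line, by contrast, is about the PTR record for the address www.trollpasta.com resolves to, not about the certificate at all.
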