[00:10:56] JohnLewis, around?
[00:13:39] I have a quick question I noticed when doing a global rename for a user a few days ago. If you'll look at the rename user log on this one wiki (https://alternatehistory.miraheze.org/wiki/Special:Log/renameuser), all the "Requested" links are redlinks, and what's interesting is that there's no interwiki prefix in the links, except if you cross-reference that to the actual glbrename user logs on Meta, most of them did use an interwiki prefix. I
[00:13:40] wondered if maybe the Interwiki extension wasn't installed on that wiki, but that doesn't seem to be the case. So there's got to be a MediaWiki variable set to false or something on that wiki, but I'm not sure which one. I tried asking in the -discreet channel on Discord, but none of the SREs knew why
[00:13:41] [ User rename log - Alternate History ] - alternatehistory.miraheze.org
[00:13:44] Any ideas?
[00:14:23] s/glbrename/gblrename
[00:14:24] dmehus meant to say: I have a quick question I noticed when doing a global rename for a user a few days ago. If you'll look at the rename user log on this one wiki (https://alternatehistory.miraheze.org/wiki/Special:Log/renameuser), all the "Requested" links are redlinks, and what's interesting is that there's no interwiki prefix in the links, except if you cross-reference that to the actual gblrename user logs on Meta, most of them did u
[00:48:36] night
[00:52:05] Command sent from Discord by Doug:
[00:52:05] .tell SPF|Cloud night, SPF|Cloud. Sent by @Doug (dmehus) on Discord
[00:52:05] MH-Discord: I'll pass that on when SPF|Cloud is around.
[01:22:57] Strange.
[01:45:03] PROBLEM - ping6 on dbbackup2 is CRITICAL: PING CRITICAL - Packet loss = 70%, RTA = 101.44 ms
[01:47:06] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 103.39 ms
[01:57:26] Test
[01:57:34] Done
[01:57:48] 👍 to @MacFan4000
[05:15:08] RECOVERY - ping6 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 99.01 ms
[05:30:10] PROBLEM - wiki.cyberfurs.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:36:59] RECOVERY - wiki.cyberfurs.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.cyberfurs.org' will expire on Fri 26 Mar 2021 00:23:16 GMT +0000.
[05:46:56] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 100.02 ms
[05:48:58] RECOVERY - ping6 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 99.12 ms
[06:05:32] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 102.81 ms
[06:53:22] RECOVERY - services4 APT on services4 is OK: APT OK: 26 packages available for upgrade (0 critical updates).
[06:54:34] RECOVERY - services3 APT on services3 is OK: APT OK: 26 packages available for upgrade (0 critical updates).
[07:47:45] PROBLEM - wiki.yumeka.team - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:25:11] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JtVrD
[08:25:13] [miraheze/services] MirahezeSSLBot 59dede5 - BOT: Updating services config for wikis
[08:40:37] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 7.01, 4.99, 2.88
[08:43:32] RECOVERY - wiki.yumeka.team - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.team' will expire on Mon 07 Feb 2022 23:59:59 GMT +0000.
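As a quick way to rule the "Interwiki extension isn't installed" theory in or out for the rename-log question above, the wiki's interwiki map can be read directly from the action API. This is only a sketch, assuming the `requests` package; the wiki URL and the prefixes checked are just examples taken from the discussion, and the `/w/api.php` path is assumed to be the API endpoint for that wiki.

```python
# Check whether a given interwiki prefix (e.g. "m") is actually defined on a wiki.
# Uses the standard MediaWiki action API (meta=siteinfo&siprop=interwikimap).
import requests

API = "https://alternatehistory.miraheze.org/w/api.php"  # example wiki from the discussion

def interwiki_prefixes(api_url):
    """Return the wiki's interwiki map as {prefix: target_url}."""
    resp = requests.get(api_url, params={
        "action": "query",
        "meta": "siteinfo",
        "siprop": "interwikimap",
        "format": "json",
    }, timeout=10)
    resp.raise_for_status()
    return {row["prefix"]: row["url"] for row in resp.json()["query"]["interwikimap"]}

if __name__ == "__main__":
    prefixes = interwiki_prefixes(API)
    for prefix in ("m", "meta"):
        print(prefix, "->", prefixes.get(prefix, "NOT DEFINED"))
```

If the prefix shows up here, the interwiki table is fine and the problem lies elsewhere (as it eventually turned out later in this log).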
[08:56:38] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.03, 3.94, 3.63
[08:58:41] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.62, 4.06, 3.71
[09:00:37] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.57, 3.60, 3.59
[09:00:47] PROBLEM - wiki.yumeka.team - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:06:38] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 1.16, 2.42, 3.11
[09:49:36] RECOVERY - wiki.yumeka.team - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.team' will expire on Mon 07 Feb 2022 23:59:59 GMT +0000.
[10:44:08] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 251.82 ms
[10:46:12] RECOVERY - ping4 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 249.22 ms
[10:52:25] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 259.69 ms
[10:57:37] RECOVERY - ping6 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 99.34 ms
[11:03:27] PROBLEM - guia.cineastas.pt - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - guia.cineastas.pt All nameservers failed to answer the query.
[11:03:46] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 101.47 ms
[11:10:22] RECOVERY - guia.cineastas.pt - reverse DNS on sslhost is OK: rDNS OK - guia.cineastas.pt reverse DNS resolves to cp11.miraheze.org
[11:48:50] !log disabled three spam accounts on Phab
[11:48:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[11:49:07] !log removed spam from the below Phab accounts' profiles
[11:49:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[11:56:57] paladox: it's not possible to ban IPs on Phabricator is it?
[11:57:06] we've got 3 spambots who registered today from the same IP
[12:39:28] No
[12:39:29] But you can on meta
[13:47:39] oh, true, thanks
[13:50:47] np
[14:15:34] PROBLEM - ping6 on dbbackup2 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 103.73 ms
[14:17:38] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 101.99 ms
[14:33:43] PROBLEM - wiki.yumeka.team - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:40:34] RECOVERY - wiki.yumeka.team - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.team' will expire on Mon 07 Feb 2022 23:59:59 GMT +0000.
[14:49:51] PROBLEM - wiki.yumeka.team - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:57:20] Responding to @R4356th and @DarkMatterMan4500's 🤨 emoji reaction... I said something before Reception123 said "oh, true, thanks," but I guess it didn't reach Freenode as I was one of the IRCCloud users affected by a temporary service disruption (hence why I got logged out unexpectedly). So when I replied "np," it was in reply to a message I thought Reception123 had seen and was thanking me for lol
[14:58:07] Oh.
[15:10:53] Ah, I see.
[15:38:33] RECOVERY - wiki.yumeka.team - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.team' will expire on Mon 07 Feb 2022 23:59:59 GMT +0000.
[15:54:39] PROBLEM - wiki.yumeka.team - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:58:16] Universal_Omega, is Special:Statistics MediaWiki core or an extension? In either case, regarding your idea to add the number of jobs to your BackendPerformance.js user script/gadget, I had an additional idea.
Can we add an upstream feature to Special:Statistics to parse and return the number of jobs outstanding on a given wiki on that special page? I think that'd be ideal. :)
[16:01:30] RECOVERY - wiki.yumeka.team - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.team' will expire on Mon 07 Feb 2022 23:59:59 GMT +0000.
[16:11:27] Special:Statistics is mediawiki core
[16:12:42] paladox, ah, thanks. Might be something useful worth adding then.
[16:20:27] PROBLEM - ping6 on dbbackup2 is CRITICAL: PING CRITICAL - Packet loss = 100%
[16:22:29] PROBLEM - ping6 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 102.93 ms
[16:35:47] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JtwfH
[16:35:48] [miraheze/puppet] paladox 5f74c11 - Lower test3 php childs to 18
[17:05:24] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JtwUX
[17:05:26] [miraheze/puppet] paladox 9c5d211 - Update puppet3.yaml
[17:06:40] [miraheze/MirahezeMagic] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://git.io/JtwUN
[17:06:42] [miraheze/MirahezeMagic] Universal-Omega 54277b0 - Add Special:OAuth to TitleReadWhitelist hook
[17:06:43] [MirahezeMagic] Universal-Omega created branch Universal-Omega-patch-1 - https://git.io/fQRGX
[17:06:45] [MirahezeMagic] Universal-Omega opened pull request #204: Add Special:OAuth to TitleReadWhitelist hook - https://git.io/JtwUA
[17:07:07] Universal_Omega: do you have any idea why the interwiki links aren't working properly on https://alternatehistory.miraheze.org/wiki/User_talk:Sapphire_Williams ?
[17:07:08] [ User talk:Sapphire Williams - Alternate History ] - alternatehistory.miraheze.org
[17:08:00] miraheze/MirahezeMagic - Universal-Omega the build passed.
[17:08:21] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[17:10:35] Reception123: nope, that's weird.
[17:10:47] yeah, I don't get why it's doing that
[17:10:54] Universal_Omega: my only guess is some extension interfering in some way?
[17:11:00] it's usually those that cause weird errors like this one
[17:11:11] Yeah likely.
[17:13:39] Reception123: anywhere else that happens (on that wiki or another), or is it just on the talk page?
[17:13:59] I've just tested it on testwiki to make sure, nothing there
[17:14:00] going to test2
[17:14:13] test3 sorry :D
[17:14:54] Reception123: won't work on test3. I know that. We have too many interwiki conflicts between local and global there; it causes issues, and our mirahezemagic interwiki won't even work there at all.
[17:15:06] Universal_Omega: it actually did
[17:15:12] https://test3.miraheze.org/wiki/Test Stewards works from here :)
[17:15:13] [ Test - Test3 ] - test3.miraheze.org
[17:16:26] Reception123: oh, weird, I just tested and it didn't seem to work. Maybe that specific prefix I used doesn't then.
[17:17:12] yeah, if it works on test3 then it can't really be an extension by itself, there would have to be some setting
[17:17:17] but what could cause interwiki links not to work?
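As an aside on the Special:Statistics job-count idea discussed above: the job queue figure is already exposed through the action API via `meta=siteinfo&siprop=statistics`, which returns a `jobs` field (an estimate of the outstanding job queue length) alongside pages, edits and user counts, so a gadget or an upstream patch would mostly be a matter of surfacing that number. A minimal sketch, assuming the `requests` package; the wiki URL is only an example.

```python
# Fetch site statistics, including the (approximate) number of queued jobs.
import requests

def site_statistics(api_url):
    resp = requests.get(api_url, params={
        "action": "query",
        "meta": "siteinfo",
        "siprop": "statistics",
        "format": "json",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["statistics"]

if __name__ == "__main__":
    stats = site_statistics("https://meta.miraheze.org/w/api.php")  # example wiki
    print("Outstanding jobs (estimate):", stats["jobs"])
```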
[17:19:39] I'll try on my own user talk
[17:19:44] Reception123: also, weirdly, on test3 our mirahezemagic interwiki links don't show up at all, they are completely invisible, and unlike on other wikis, where if used in logs they appear as red links to the local wiki, on test3 they appear in logs as blue links linking to 0.miraheze.org.... And what could cause interwiki links to not work is local conflicts with global, the page with the prefix as its title existing locally, or
[17:19:44] possibly only on talk pages?
[17:20:05] hmm, let me check
[17:20:48] https://alternatehistory.miraheze.org/wiki/User_talk:Reception123 yup, same thing, just ignores the M:
[17:20:49] [ User talk:Reception123 - Alternate History ] - alternatehistory.miraheze.org
[17:21:27] Reception123: hmm. Weird.
[17:21:37] Universal_Omega: yeah interesting, searching "M:" on any wiki redirects to the Meta mainpage
[17:21:43] while searching it on this wiki redirects to its own main page
[17:22:26] Does a page titled M exist on there? Reception123
[17:22:30] figured it out
[17:22:32] Universal_Omega: it's this https://alternatehistory.miraheze.org/wiki/Special:ManageWiki/namespaces/0
[17:22:33] [ Manage this wiki's namespaces - Alternate History ] - alternatehistory.miraheze.org
[17:22:38] an M namespace
[17:22:49] that redirects to
that's the conflict
[17:22:55] dmehus: ^ there's your conundrum :)
[17:23:33] Oh. Interesting. Well, good that's figured out then.
[17:24:12] Now I'm back to trying to get mirahezemagic interwiki links working in log summaries.
[17:25:54] Reception123, that can't be, though, as it doesn't occur on other wikis, and I used `m:` not `M:`
[17:26:14] dmehus: have you tried on another wiki that uses a local "M" namespace?
[17:26:17] and I don't think it's case-sensitive
[17:26:36] Universal_Omega: ah, you mean you're working on https://phabricator.miraheze.org/T6222 ?
[17:26:37] [ ⚓ T6222 Update configuration for MirahezeMagic interwiki wikilinks to apply to log actions, edit summaries, and page previews ] - phabricator.miraheze.org
[17:26:46] Reception123: yep.
[17:26:52] Reception123, the name of their main namespace is `(Main)`, though, no?
[17:26:53] sounds good, that task is quite old :)
[17:27:02] dmehus: yes, but the M namespace redirects to the Main one
[17:27:10] so that's why it's overriding the global interwiki M that goes to Meta
[17:27:11] Oh
[17:27:15] Interesting
[17:27:17] because their M goes to their own Main namespace instead
[17:27:24] Reception123: Or trying to figure it out anyways, as the behavior on test3 makes it very odd.
[17:27:30] dmehus: so you can fix the issue by using Meta: instead :)
[17:28:10] Well, actually, I can't. We should probably delete the redirect in lieu of local administration, and explain why we deleted it, which they can recreate if they wish
[17:29:06] but lol it's funny you figured this out and JohnLewis and I both looked at it
[17:29:36] I figured it was like a localisation cache thing, but he reminded me that that wouldn't change the input (`m:`)
[17:30:26] Universal_Omega, wow, yeah that T6222 task I created has been open for a while, that'd be great. Mind doing the mass patrol DB query for me first though?
[17:30:51] also the BackendPerformance.js script fix would be great, too ;)
[17:32:12] Oh, right, one second.
[17:34:43] Universal_Omega, ty :)
[17:35:16] Reception123, I figured it out, actually. Is `$wgCapitalLinks` checked or unchecked by default?
[17:35:36] dmehus: I'm quite sure it's set to true by default
[17:35:38] but not completely
[17:35:49] yeah that's what I thought
[17:35:54] does `true` = checked?
[17:36:46] dmehus: yes checked means the value is set to true :)
[17:37:01] yeah, that's what I thought, but wanted to be sure
[17:37:06] Reason I ask is because an `M:` redirect doesn't exist on the wiki, so I'm not sure that we've completely solved the issue actually
[17:37:27] since the redirect doesn't exist, how does it conflict with `Main` namespace?
[17:37:27] dmehus: hm? what do you mean? the redirect exists in ManageWiki/namespaces
[17:37:34] oh
[17:37:36] as an alias
[17:37:40] crap lol
[17:37:55] yes, as an alias :)
[17:37:57] * dmehus wonders why you'd have a shortcut for Main namespace
[17:38:01] I should've used that word heh
[17:38:12] * Reception123 forgot it
[17:38:52] if we set `$wgCapitalLinks` to false, then, I bet that would solve the issue. Can we test that on `test3wiki` and if it does, I'll suggest that to Sapphire Williams?
[17:39:27] well maybe they want their m: redirect :D
[17:39:32] i.e., try creating an `M` alias on test3wiki, then seeing if the problem corrects when `$wgCapitalLinks` is set to false
[17:39:43] well no that's not what I'm suggesting
[17:40:03] I'm suggesting maybe we can resolve that by suggesting they set `$wgCapitalLinks` to `false`
[17:40:10] which they could consider
[17:41:09] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtwkh
[17:41:10] [miraheze/MatomoAnalytics] paladox 5453df3 - Delete cache when deleting site from matomo table
[17:41:12] [MatomoAnalytics] paladox created branch paladox-patch-1 - https://git.io/fN4LT
[17:41:13] [MatomoAnalytics] paladox opened pull request #34: Delete cache when deleting site from matomo table - https://git.io/Jtwkj
[17:41:52] dmehus: ok sure
[17:42:14] miraheze/MatomoAnalytics - paladox the build passed.
[17:46:56] dmehus: now I really get that task about having a search function in ManageWiki
[17:47:05] tbh most categories aren't really helpful to find something like $wgCapitalLinks
[17:47:25] intuitively it would be `Edit` but it's not :(
[17:47:35] ah, it was `link`
[17:50:53] dmehus: the fix unfortunately doesn't seem to work
[17:54:16] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/JtwIP
[17:54:17] [miraheze/MatomoAnalytics] paladox da03b1e - Update MatomoAnalytics.php
[17:54:19] [MatomoAnalytics] paladox synchronize pull request #34: Delete cache when deleting site from matomo table - https://git.io/Jtwkj
[17:55:19] miraheze/MatomoAnalytics - paladox the build passed.
[18:05:55] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JtwLO
[18:05:56] [miraheze/MatomoAnalytics] paladox 36b4556 - Delete cache when deleting site from matomo table
[18:05:58] [MatomoAnalytics] paladox created branch paladox-patch-2 - https://git.io/fN4LT
[18:05:59] [MatomoAnalytics] paladox opened pull request #35: Delete cache when deleting site from matomo table - https://git.io/JtwL3
[18:06:58] miraheze/MatomoAnalytics - paladox the build passed.
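On the alias/interwiki conflict worked out above: local namespace names and aliases are resolved before interwiki prefixes, and (as far as I know) both lookups are effectively case-insensitive, which would also explain why toggling `$wgCapitalLinks` didn't change anything. The kind of shadowing found on alternatehistorywiki (a local "M" alias hiding the global `m:` prefix) can be spotted programmatically; here is a rough sketch, assuming the `requests` package, `formatversion=2` API output, and a wiki URL that is only an example.

```python
# Flag local namespace names/aliases that shadow an interwiki prefix,
# e.g. a local "M" alias hiding the global "m:" interwiki prefix.
import requests

def shadowed_prefixes(api_url):
    resp = requests.get(api_url, params={
        "action": "query",
        "meta": "siteinfo",
        "siprop": "interwikimap|namespaces|namespacealiases",
        "format": "json",
        "formatversion": "2",
    }, timeout=10)
    resp.raise_for_status()
    q = resp.json()["query"]

    local_names = {ns["name"].lower() for ns in q["namespaces"].values() if ns["name"]}
    local_names |= {alias["alias"].lower() for alias in q["namespacealiases"]}
    prefixes = {row["prefix"].lower() for row in q["interwikimap"]}
    return sorted(local_names & prefixes)

if __name__ == "__main__":
    # example wiki from the discussion
    print(shadowed_prefixes("https://alternatehistory.miraheze.org/w/api.php"))
```

Anything this prints is a prefix that will resolve locally instead of as an interwiki link, which matches what was observed on the talk pages above.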
[18:12:47] [MatomoAnalytics] JohnFLewis closed pull request #34: Delete cache when deleting site from matomo table - https://git.io/Jtwkj
[18:13:07] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JtwLg
[18:13:09] [miraheze/MatomoAnalytics] paladox 29d976b - Update MatomoAnalytics.php
[18:13:10] [MatomoAnalytics] paladox synchronize pull request #35: Delete cache when deleting site from matomo table - https://git.io/JtwL3
[18:13:52] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JtwL2
[18:13:54] [miraheze/MatomoAnalytics] paladox 522f938 - Update CHANGELOG
[18:13:55] [MatomoAnalytics] paladox synchronize pull request #35: Delete cache when deleting site from matomo table - https://git.io/JtwL3
[18:14:05] [miraheze/MatomoAnalytics] paladox pushed 1 commit to paladox-patch-2 [+0/-0/±1] https://git.io/JtwLV
[18:14:06] [miraheze/MatomoAnalytics] paladox 2edf2a6 - Update extension.json
[18:14:08] [MatomoAnalytics] paladox synchronize pull request #35: Delete cache when deleting site from matomo table - https://git.io/JtwL3
[18:14:14] Reception123, it doesn't seem to be a namespace alias conflict, as `alternatehistorywiki` doesn't have an `M` alias for `Main` namespace (https://uesrpg.miraheze.org/wiki/Special:ManageWiki/namespaces/0)
[18:14:15] [ Manage this wiki's namespaces - UESRPG Wiki ] - uesrpg.miraheze.org
[18:14:18] miraheze/MatomoAnalytics - paladox the build passed.
[18:14:58] Reception123, oh yeah, I know what you mean, though intuitively I did look under "Links" for `$wgCapitalLinks`
[18:15:00] dmehus: you've linked another wiki
[18:15:13] miraheze/MatomoAnalytics - paladox the build passed.
[18:15:16] oh right
[18:15:24] let me look on that other wiki lol
[18:15:28] miraheze/MatomoAnalytics - paladox the build passed.
[18:15:52] Reception123: ^
[18:16:11] okay, yeah they do have an alias
[18:16:15] Yup
[18:16:26] [MatomoAnalytics] JohnFLewis closed pull request #35: Delete cache when deleting site from matomo table - https://git.io/JtwL3
[18:16:27] [miraheze/MatomoAnalytics] JohnFLewis pushed 5 commits to master [+0/-0/±7] https://git.io/JtwLr
[18:16:29] [miraheze/MatomoAnalytics] JohnFLewis 3b6f1ae - Merge pull request #35 from miraheze/paladox-patch-2
[18:16:31] * dmehus wonders how many shortcut redirects they even have prefaced by `M:`
[18:17:03] None! lol
[18:17:25] I'm going to suggest to Sapphire Williams that they just remove those conflicting aliases for the main namespace then
[18:17:29] miraheze/MatomoAnalytics - JohnFLewis the build passed.
[18:17:48] Reception123, ^ Did you confirm on test3wiki that this is the issue?
[18:18:14] Yes
[18:19:22] okay
[18:19:29] I'll suggest that then
[18:19:44] it also conflicts with their `AH` and `MP` redirects to their main page
[18:20:05] so there are two reasons for them to ditch the aliases for `(Main)` namespace
[18:20:32] Feel free to :)
[18:20:58] 👍
[18:35:40] [MatomoAnalytics] Universal-Omega deleted branch paladox-patch-2 - https://git.io/fN4LT
[18:35:41] [miraheze/MatomoAnalytics] Universal-Omega deleted branch paladox-patch-2
[19:49:00] I edited an inactive wiki, [[mh:discord:]], hours ago but it still has the inactivity site notice. Any idea why, or is it that the script that changes the CW or MW wiki state has not been run yet?
[19:49:00] https://discord.miraheze.org/wiki/
[19:49:02] [ The Discord Wiki ] - discord.miraheze.org
[20:50:38] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.47, 2.44, 1.45
[20:53:31] @R4356th, the wiki must be manually made active by a bureaucrat. Once that wiki goes inactive, the notice isn't removed. It's just a reminder, essentially. You can't automatically make a wiki inactive though. Nonetheless, editing on a wiki is still possible until it becomes closed, and the deletion script goes by the latest RC entry, not the date when the wiki was marked as inactive/closed
[20:54:28] The script seems to be automatically run, though, every day, at around noon-ish UTC
[20:54:37] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.32, 3.04, 1.94
[21:01:30] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.18, 3.45, 2.56
[21:03:25] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.23, 3.37, 2.64
[21:07:14] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtwcv
[21:07:15] [miraheze/puppet] paladox d821d82 - mediawiki: Restrict which user runs npm to www-data
[21:07:17] [puppet] paladox created branch paladox-patch-1 - https://git.io/vbiAS
[21:07:21] [puppet] paladox opened pull request #1640: mediawiki: Restrict which user runs npm to www-data - https://git.io/Jtwcf
[21:08:42] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/JtwcJ
[21:08:43] [miraheze/puppet] paladox 295083f - Update servicessetup.pp
[21:08:45] [puppet] paladox synchronize pull request #1640: mediawiki: Restrict which user runs npm to www-data - https://git.io/Jtwcf
[21:09:37] paladox, not sure what exactly those commits ^ do, but it seems generally like a good idea to run those as `www-data` rather than `root`, so likely prudent security practice-related?
[21:10:22] that's what the commit is changing... from root to www-data
[21:10:53] yeah... is the reason behind that related to good security practices, generally speaking?
[21:11:22] yes
[21:11:25] ah, cool
[21:11:28] thanks :)
[21:12:44] [puppet] paladox closed pull request #1640: mediawiki: Restrict which user runs npm to www-data - https://git.io/Jtwcf
[21:12:45] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jtwcm
[21:12:47] [miraheze/puppet] paladox 4c21c2d - mediawiki: Restrict which user runs npm to www-data (#1640)
[21:12:49] [miraheze/puppet] paladox deleted branch paladox-patch-1
[21:12:50] [puppet] paladox deleted branch paladox-patch-1 - https://git.io/vbiAS
[21:13:22] !log disable puppet on dbbackup2
[21:13:22] SPF|Cloud: 2021-02-08 - 00:52:05UTC tell SPF|Cloud night, SPF|Cloud.
Sent by @Doug (dmehus) on Discord
[21:13:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:13:47] right
[21:16:24] PROBLEM - dbbackup2 Puppet on dbbackup2 is WARNING: WARNING: Puppet is currently disabled, message: SPF, last run 3 minutes ago with 0 failures
[21:17:47] !log disable puppet on dbbackup1
[21:17:51] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:18:51] !log dbbackup[12]: set global innodb_flush_sync=0; <- to see if that reduces load on the backup servers
[21:18:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:20:28] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?viewPanel=287&orgId=1&var-job=node&var-node=dbbackup2.miraheze.org&var-port=9100&from=now-15m&to=now-1m I'm convinced this graph is a lie
[21:20:29] [ Grafana ] - grafana.miraheze.org
[21:21:27] PROBLEM - dbbackup1 Puppet on dbbackup1 is WARNING: WARNING: Puppet is currently disabled, message: SPF, last run 22 minutes ago with 0 failures
[21:22:19] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[21:25:17] !log revert innodb_flush_sync hack (set back to 1)
[21:25:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:35:40] !log dbbackup[12]: set global innodb_flush_log_at_trx_commit=2;
[21:35:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:36:00] ^ this is very bad, but... let's try
[21:52:37] !log dbbackup[12]: revert to innodb_flush_log_at_trx_commit=1
[21:52:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:05:31] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.00, 5.40, 4.18
[22:07:31] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.25, 5.22, 4.26
[22:15:09] Why is it "very bad," SPF|Cloud? What are the potential negative ramifications of trying whatever it is you tried?
[22:15:54] a crash causes corruption of the data
[22:16:12] ah
[22:16:22] I'm very frustrated now
[22:17:01] yeah... not great for sure. Is this a backup of the backup database, though? Couldn't we just wipe the server and back up our backup database again?
[22:17:30] how will a wipe fix the performance problem?
[22:17:54] Well, it won't, but it would solve the problem of data potentially being corrupted
[22:17:59] the database load > the load these virtual machines can handle
[22:18:14] yeah, that I don't have an answer to :P
[22:19:00] you could opt for a full re-import in case the server crashes, but that's just adding even more debt
[22:19:15] ah, true
[22:19:42] do paladox or JohnLewis have any creative ideas to try re: database load being > the load the VMs can handle?
[22:20:24] I don't want to add extra duties to volunteers (us) that are already swamped with work
[22:20:46] and the impact of the setting is not positive enough to resolve the issue, unfortunately
[22:21:04] yeah, not add to duties, just discuss / share thoughts
[22:23:17] you could try https://serverfault.com/questions/486677/should-we-mount-with-data-writeback-and-barrier-0-on-ext3
[22:23:17] [ centos - Should we mount with data=writeback and barrier=0 on ext3? - Server Fault ] - serverfault.com
[22:23:46] but this is HDDs so you're unlikely to improve anything substantially.
[22:24:52] Replicas with SSDs would solve all issues, but...
money
[22:37:58] heh
[22:57:35] enough for today
[22:58:18] !log enable puppet on dbbackup[12] & restart replica on dbbackup2
[22:58:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:59:18] PROBLEM - wiki.finnsoftware.net - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.finnsoftware.net could not be found
[22:59:26] RECOVERY - dbbackup1 Puppet on dbbackup1 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures
[23:43:15] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtw0n
[23:43:17] [miraheze/puppet] paladox 026d6bb - services: Do not use root to install npm modules
[23:43:18] [puppet] paladox created branch paladox-patch-1 - https://git.io/vbiAS
[23:43:20] [puppet] paladox opened pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:43:59] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtw0C
[23:44:01] [miraheze/puppet] paladox 58cdf69 - Update mathoid.pp
[23:44:02] [puppet] paladox synchronize pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:45:10] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtw0l
[23:45:11] [miraheze/puppet] paladox 9220209 - Update proton.pp
[23:45:13] [puppet] paladox synchronize pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:45:43] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtw04
[23:45:44] [miraheze/puppet] paladox 9990c8a - Update restbase.pp
[23:45:46] [puppet] paladox synchronize pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:47:05] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/Jtw0R
[23:47:07] [miraheze/puppet] paladox 024c6a6 - Update proton.pp
[23:47:08] [puppet] paladox synchronize pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:47:38] [puppet] paladox closed pull request #1641: services: Do not use root to install npm modules - https://git.io/Jtw0c
[23:47:39] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±4] https://git.io/Jtw00
[23:47:41] [miraheze/puppet] paladox 567675a - services: Do not use root to install npm modules (#1641)
[23:47:42] [puppet] paladox deleted branch paladox-patch-1 - https://git.io/vbiAS
[23:47:44] [miraheze/puppet] paladox deleted branch paladox-patch-1
[23:49:02] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jtw0a
[23:49:04] [miraheze/puppet] paladox 09b2643 - Fix
[23:50:40] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jtw0o
[23:50:41] [miraheze/puppet] paladox c475390 - Fix
[23:50:44] PROBLEM - services3 Puppet on services3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 seconds ago with 1 failures. Failed resources (up to 3 shown)
[23:54:44] RECOVERY - services3 Puppet on services3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
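For reference on the dbbackup[12] tuning earlier in the log (21:18 to 21:52): `innodb_flush_log_at_trx_commit` trades durability for write throughput, which is why setting it to 2 was described as "very bad" and then reverted. A minimal sketch of inspecting and changing it, assuming the PyMySQL package and placeholder connection details (host, user, password below are not real values).

```python
# innodb_flush_log_at_trx_commit: 1 = write and fsync the redo log on every
# commit (fully durable, the default); 2 = write on commit, fsync ~once per
# second; 0 = write and fsync ~once per second. With 0 or 2 an OS/host crash
# can lose roughly the last second of transactions, which on a replica tends
# to mean a full re-import, hence the quick revert in the log above.
import pymysql  # assumes PyMySQL is installed

def set_trx_flush(host, user, password, value):
    assert value in (0, 1, 2)
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            # SET GLOBAL is not persistent; it resets when mysqld/mariadbd restarts.
            cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = %s", (value,))
            cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit'")
            print(cur.fetchone())
    finally:
        conn.close()

if __name__ == "__main__":
    set_trx_flush("dbbackup2.example", "admin", "secret", 1)  # placeholders only
```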