[00:07:57] [miraheze/mediawiki] Universal-Omega pushed 1 commit to revert-1336-revert-1335-revert-1334-Universal-Omega-patch-1 [+0/-0/±1] https://git.io/Jml6F
[00:07:59] [miraheze/mediawiki] Universal-Omega be3b085 - Revert "Revert "Revert "Update IncidentReporting (#1334)" (#1335)" (#1336)"
[00:08:00] [mediawiki] Universal-Omega created branch revert-1336-revert-1335-revert-1334-Universal-Omega-patch-1 - https://git.io/vbL5b
[00:08:27] [mediawiki] Universal-Omega opened pull request #1337: Revert "Update IncidentReporting" - https://git.io/Jmli1
[00:08:33] [mediawiki] Universal-Omega closed pull request #1337: Revert "Update IncidentReporting" - https://git.io/Jmli1
[00:08:35] [miraheze/mediawiki] Universal-Omega pushed 1 commit to REL1_35 [+0/-0/±1] https://git.io/Jmli5
[00:08:36] [miraheze/mediawiki] Universal-Omega fdd85c6 - Revert "Revert "Revert "Update IncidentReporting (#1334)" (#1335)" (#1336)" (#1337)
[00:08:38] [mediawiki] Universal-Omega deleted branch revert-1336-revert-1335-revert-1334-Universal-Omega-patch-1 - https://git.io/vbL5b
[00:08:39] [miraheze/mediawiki] Universal-Omega deleted branch revert-1336-revert-1335-revert-1334-Universal-Omega-patch-1
[00:10:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 253s
[00:11:31] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JmlMy
[00:11:32] [miraheze/puppet] paladox ba62520 - mediawiki: Update vmtouch-mediawiki-files.list
[00:12:19] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 74s
[00:18:21] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JmldG
[00:18:22] [miraheze/puppet] paladox 226522f - mediawiki: Notify vmtouch service when making changes to /etc/vmtouch-mediawiki-files.list
[00:39:37] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures.
Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[01:17:18] !log add dingedbwiki to matomo
[01:17:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:17:43] !log add civwikiwiki to matomo
[01:17:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:28:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 243s
[01:32:18] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[02:12:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 200s
[02:14:19] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 199s
[02:16:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 303s
[02:22:19] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 2s
[02:54:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 329s
[02:56:18] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[08:56:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 167s
[08:58:18] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[11:08:19] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 193s
[11:10:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 211s
[11:14:18] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[12:47:23] [miraheze/CreateWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/Jmg0h
[12:47:25] [miraheze/CreateWiki] translatewiki 551541d - Localisation updates from https://translatewiki.net.
[12:47:26] [ Main page - translatewiki.net ] - translatewiki.net
[12:47:26] [miraheze/DataDump] translatewiki pushed 1 commit to master [+1/-0/±0] https://git.io/Jmg0j
[12:47:28] [miraheze/DataDump] translatewiki a0f9da8 - Localisation updates from https://translatewiki.net.
[12:47:29] [ Main page - translatewiki.net ] - translatewiki.net
[12:47:29] [miraheze/IncidentReporting] translatewiki pushed 1 commit to master [+1/-0/±3] https://git.io/JmgEe
[12:47:31] [miraheze/IncidentReporting] translatewiki ae94e28 - Localisation updates from https://translatewiki.net.
[12:47:32] [ Main page - translatewiki.net ] - translatewiki.net
[12:47:32] [miraheze/MirahezeMagic] translatewiki pushed 1 commit to master [+0/-0/±2] https://git.io/JmgEU
[12:47:34] [miraheze/MirahezeMagic] translatewiki afe5e10 - Localisation updates from https://translatewiki.net.
[12:47:35] [ Main page - translatewiki.net ] - translatewiki.net
[12:47:35] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/JmgET
[12:47:37] [miraheze/ManageWiki] translatewiki 65c9dd0 - Localisation updates from https://translatewiki.net.
[12:47:37] [ Main page - translatewiki.net ] - translatewiki.net
[12:47:38] [miraheze/landing] translatewiki pushed 1 commit to master [+0/-0/±1] https://git.io/JmgEk
[12:47:40] [miraheze/landing] translatewiki f5154e8 - Localisation updates from https://translatewiki.net.
[12:47:41] ...
[12:48:27] miraheze/IncidentReporting - translatewiki the build passed.
[12:48:28] miraheze/DataDump - translatewiki the build passed.
[12:48:34] miraheze/CreateWiki - translatewiki the build passed.
[12:48:37] miraheze/MirahezeMagic - translatewiki the build passed.
[12:48:43] miraheze/ManageWiki - translatewiki the build passed.
[12:48:52] miraheze/landing - translatewiki the build passed.
[12:52:16] Can a chanop please remove those bans ^^
[13:36:12] hi
[13:39:32] !log dbbackup1: stop replication for c4 to test impact on load; current stats: c2 lagged by 43 hours, c4 lagged by 67.5 hours
[13:39:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[13:41:06] !log sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --update --wiki minecraftwiki
[13:41:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[13:41:25] dbbackup1 has issues catching up, despite stopping replication for c4
[13:41:34] shrug
[14:04:36] PROBLEM - en.nocyclo.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for en.nocyclo.tk could not be found
[14:06:50] PROBLEM - meta.nocyclo.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for meta.nocyclo.tk could not be found
[14:09:28] PROBLEM - es.nocyclo.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for es.nocyclo.tk could not be found
[14:28:55] Those are bad stats for replication, yikes
[14:30:02] Sario: late but done
[14:30:45] JohnLewis: thanks
[14:42:19] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 285s
[14:43:02] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jm2uF
[14:43:03] [miraheze/mw-config] paladox 1fbb469 - Use http_response_code rather than header for setting 404
[14:44:26] miraheze/mw-config - paladox the build passed.
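The 1fbb469 mw-config commit above swaps header() for http_response_code() when emitting a 404. A minimal sketch of the difference the commit message describes (hypothetical code, not the actual mw-config diff):

```php
<?php
// Hypothetical illustration only; not the actual mw-config change.

// Before: the status line is written by hand via header(), which
// hard-codes the protocol version and reason phrase.
header('HTTP/1.1 404 Not Found');

// After: http_response_code() sets just the status code, and PHP
// emits a status line matching the protocol the client actually used.
http_response_code(404);
```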
[14:49:56] PROBLEM - pubwiki.lab.co.pl - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - pubwiki.lab.co.pl All nameservers failed to answer the query.
[15:03:40] RECOVERY - pubwiki.lab.co.pl - reverse DNS on sslhost is OK: rDNS OK - pubwiki.lab.co.pl reverse DNS resolves to cp11.miraheze.org
[15:18:18] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[15:26:15] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.19, 3.57, 1.89
[15:28:15] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.51, 2.48, 1.69
[15:47:51] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.59, 1.76, 1.22
[15:49:51] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.37, 1.23, 1.08
[16:25:30] Reception123: "As ironic as it may be, this page itself is outdated and does not correctly state which docs are problematic or not. This will be rectified soon" I found that so funny :)
[16:26:16] JohnLewis: the truth must be admitted to :P
[16:29:18] True
[16:40:04] well, https://meta.miraheze.org/wiki/Tech:Server_usage should finally be mostly accurate (except some specs perhaps). Hope I didn't leave out any server
[16:40:05] [ Tech:Server usage - Miraheze Meta ] - meta.miraheze.org
[16:42:10] Reception123: I wouldn't bother manually updating such a list
[16:42:51] I guess it is a bit time-consuming, but it's not bad to have a list of all servers available
[16:43:04] make MediaWiki pages structured via Cargo (since we don't have SMW), deploy a tool on all VMs to fetch information every 24 hours
[16:43:15] https://www.researchgate.net/publication/230636122_Towards_a_Collaborative_Semantic_Wiki-based_Approach_to_IT_Service_Management
[16:43:59] oh, that's an interesting idea
[16:44:02] Cargo is one of the most unstable extensions we have installed; seemingly every day there's a complaint that it doesn't work
[16:44:08] ^
[16:44:40] I was afraid someone would mention that...
[16:45:10] well, who wants to deploy Wikibase on meta? ;)
[16:45:58] Wikibase isn't bad now
[16:46:15] I think we fixed all its issues
[16:46:45] is there a known root cause for all Cargo issues?
[16:48:00] Seems a mix of upstream bugs and the extension being updated without checking it works, for days/weeks on end
[16:48:14] "Semantic MediaWiki – Miraheze does not currently have the resources to support SMW. You can use Cargo instead."
[16:48:45] JohnLewis: yeah, regarding it being updated without checking, I've just made a more general task https://phabricator.miraheze.org/T6997 which would maybe involve switching Cargo back to REL1_35 per Universal_Omega's recommendation
[16:48:46] [ ⚓ T6997 Check which extensions can be switched back from master to REL_ ] - phabricator.miraheze.org
[16:48:49] assuming SMW is similar to Cargo, but doesn't give you headaches every day
[16:49:13] as some of our extensions really have no reason to still use master branches, and it shouldn't surprise us that it causes errors if they're actually only compatible with the next version of MW
[16:50:03] Mhm, there's very little reason to use master really
[16:50:23] is it worth spending engineering time on migrating wikis from Cargo to SMW, if this unlocks new use cases and reduces load on MWEs, given all the bugs Cargo comes with?
[16:50:29] yeah, it's mostly been done because there was some upstream error at some point that wouldn't/couldn't be backported
[16:50:31] I've always wondered where the 'we don't have the resources for SMW' came from, as I don't understand why we don't
[16:50:51] I have no idea where it came from, but I do remember us saying that forever and me thinking that was the situation
[16:52:30] if people find it impossible to install and maintain SMW, I'd like to know why
[16:52:45] Sounds like it might be time to re-investigate SMW?
[16:52:46] PROBLEM - wiki.yumeka.icu - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.yumeka.icu could not be found
[16:53:26] I really have no idea how Translate uses master
[16:53:30] I recall an SMW-like extension is amongst the top of the 'most requested extensions' list; Cargo clearly doesn't fit the requirements ('do not cause issues every day')
[16:53:32] https://phabricator.miraheze.org/T452#5946 is the earliest mention of it being declined, ironically by me
[16:53:33] [ ⚓ T452 SemanticWiki and SemanticResultsFormat Extension for aryaman.miraheze.org ] - phabricator.miraheze.org
[16:53:51] almost five years have passed since then
[16:55:47] the documentation looks excellent (if not better than any Wikimedia extension, to date), and it even has professional support available
[16:58:07] Yeah, I think we should at least try to install it
[16:59:50] structured data in MediaWiki is a compelling method to fulfill lots of use cases
[17:02:36] Reception123: Translate follows a different release schedule
[17:05:12] RhinosF1: ah, ok
[17:05:39] Reception123: it's part of the MLEB group
[17:06:09] that makes sense, or else it would be very surprising for us to not be using REL for such an extension
[17:12:45] I hope we can have both Cargo and SMW as available options. Cargo works well for me with smaller things, but it's not very flexible in the way it works
[17:57:08] MLEB has a master policy
[18:05:22] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 6.90, 5.48, 4.44
[18:07:21] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.74, 5.16, 4.45
[18:07:54] PROBLEM - jobrunner3 Current Load on jobrunner3 is CRITICAL: CRITICAL - load average: 6.30, 4.88, 3.29
[18:09:54] RECOVERY - jobrunner3 Current Load on jobrunner3 is OK: OK - load average: 4.06, 4.64, 3.41
[18:19:23] [puppet] RhinosF1 opened pull request #1718: puppetserver: +ssh key - https://git.io/JmVRO
[18:19:46] paladox: ^
[18:19:55] Needs an actual key generating
[18:24:03] [puppet] JohnFLewis commented on pull request #1718: puppetserver: +ssh key - https://git.io/JmV09
[18:29:54] [puppet] RhinosF1 commented on pull request #1718: puppetserver: +ssh key - https://git.io/JmVu2
[18:30:41] [puppet] JohnFLewis commented on pull request #1718: puppetserver: +ssh key - https://git.io/JmVuy
[18:31:29] JohnLewis: that massively complicates it
[18:32:01] Which is what I said in my comment on why it's on hold
[18:34:13] JohnLewis: I wish someone would have said so 2 hours ago
[18:35:33] Discussed it with Reception + who did you approach in Infra to discuss the task with today?
[18:35:57] My bad, yeah, there were many side discussions via PM and I didn't get the chance to relay that message
[18:38:19] Any progress on renewing the SSL process is indirectly blocked on https://phabricator.miraheze.org/T6974 as we're wanting to reconsider the allocation of resources to the MediaWiki front-facing infrastructure, which has been an active and ongoing discussion the past few days and has the support of both EMs. It wouldn't make sense to improve a system which might be going away in its entirety
[18:38:20] [ ⚓ T6974 Jobs Statistics in Grafana ] - phabricator.miraheze.org
[18:39:03] JohnLewis okay
[18:41:22] How are ACME challenges going to end up working from LE?
[18:41:30] As they're based on the existence of a file
[18:42:10] If I knew the answer, it wouldn't be an open problem to solve
[18:42:42] but before we even know if we're going forward, we need that task above resolved so we can make an informed decision over whether it's a good idea or not
[19:32:00] Can someone try going to https://publictestwiki.com/wiki/Special:RecentChanges - I'm getting "internal error"
[19:32:01] [ Recent changes - TestWiki ] - publictestwiki.com
[19:44:38] Me too, I got this error on Special:RecentChanges on Public Test Wiki: [52ced638f5a892c0994da8e5] 2021-03-18 19:44:01: Fatal exception of type "MWException"
[19:44:50] it works fine for me, that's strange
[19:44:54] JohnLewis: ^ can you reproduce this error?
[19:45:28] Works for me, do you have access to graylog to look up that exception?
[19:47:22] Yup, was just going to do that
[19:48:56] oh wow, that's confusing, logbot entries are in graylog now too
[19:49:10] `Language::sprintfDate: The timestamp 100000101005959 should have 14 characters`
[19:49:11] hmm
[19:49:22] more precisely: `[52ced638f5a892c0994da8e5] /wiki/Special:RecentChanges MWException from line 1174 of /srv/mediawiki/w/languages/Language.php: Language::sprintfDate: The timestamp 100000101005959 should have 14 characters`
[19:49:31] Do the graylogs mean something? Sorry for asking.
[19:50:20] @DarkMatterMan4500 graylog is the service we use for all sorts of logs, including error logs
[19:50:31] see https://meta.miraheze.org/wiki/Tech:Graylog
[19:50:32] [ Tech:Graylog - Miraheze Meta ] - meta.miraheze.org
[19:50:53] Oh, so it's not just for global account logs of that sort, right?
[19:51:16] it has nothing to do with that, no, it's a sysadmin tool
[19:51:35] Oh. So only Stewards and Global Sysops have control of that?
[19:53:46] @DarkMatterMan4500 Not at all, that's my point, this is a technical, system administrator tool for errors. Stewards and Global Sysops have nothing at all to do with Graylog and cannot access it
[19:54:06] It's a purely sysadmin tool, and for example in this case it lets us know what the full error is on publictestwiki
[19:54:33] Ah, okay.
[19:57:42] Hmm, this maybe has something to do with time settings.
[19:57:42] I changed my timezone in Special:Preferences from Europe/Paris to the default one (UTC) and Special:RecentChanges works again
[19:57:42] This is weird
[20:02:30] let me try to change to CET and see what happens
[20:03:05] that's it, yes
[20:03:16] that's odd
[20:07:04] ^ paladox would you perhaps have any idea why this error is happening?
[20:07:16] I found https://phabricator.wikimedia.org/T174221 but I don't see why it's happening here when you have the CET timezone
[20:07:17] [ ⚓ T174221 Language::sprintfDate doesn't like infinity ] - phabricator.wikimedia.org
[20:07:28] oh, i'm not sure
[20:08:19] HeartsDo: is today the first time it happened?
[20:09:43] A good investigative point is to figure out what timestamp is causing it, and why it's not 14 characters (probably in the DB?)
[20:10:03] I discovered it yesterday
[20:10:57] Yeah, I wonder why it's only on testwiki
[20:12:02] PROBLEM - ping4 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 137.91 ms
[20:19:27] MariaDB [testwiki]> select * from recentchanges where length(rc_timestamp) <> 14;
[20:19:28] Empty set (0.004 sec)
[20:19:31] hmm, doesn't seem to be that
[20:28:11] RECOVERY - ping4 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 99.14 ms
[20:29:10] Reception123: what about in the logging table?
[20:29:28] tried that too, and found nothing :(
[20:30:04] I even tried a more unconventional way: I dumped the SQL then did cat testwiki.sql | grep "100000101005959" and that didn't seem to do it either
[20:36:17] PROBLEM - ping4 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 128.89 ms
[20:38:53] JohnLewis: https://graylog.miraheze.org/messages/graylog_123/acb28771-8829-11eb-b6dc-0200001a24a4 gets us closer, but I'm not really sure what that means
[20:39:42] That's a web search
[20:40:27] oh, so it doesn't get us anywhere then :(
[20:41:17] this is when it seems the first error was spotted: 2021-03-17 16:37:38.000 +00:00
[20:43:39] PROBLEM - ping6 on ns1 is WARNING: PING WARNING - Packet loss = 0%, RTA = 120.62 ms
[20:43:46] `11:46 Brewster239 talk contribs block protected Protection test [Edit=Allow only administrators] (expires 23:59, 31 December 9999) [Move=Allow only administrators] (expires 23:59, 31 December 9999) [Delete=Allow only administrators] (expires 23:59, 31 December 9999) [Protect=Allow only administrators] (expires 23:59, 31 December 9999) ‎(Test protect)`
[20:43:51] I have a feeling this is it
[20:44:21] RECOVERY - ping4 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 100.55 ms
[20:44:33] or it could just be a coincidental other strange timestamp
[20:45:40] 9999? That year won't happen for almost 8 thousand years from now.
[20:47:04] very strongly doubt the concept of a wiki will be known to humanity by then :P
[20:47:39] RECOVERY - ping6 on ns1 is OK: PING OK - Packet loss = 0%, RTA = 104.08 ms
[20:48:27] Oh, it's this! Special:RecentChanges works again, but the page with that log crashes with an MWException
[20:49:02] HeartsDo: which page is giving an exception?
[20:49:10] https://test.miraheze.org/w/index.php?title=Special:Log&page=Protection+test
[20:49:12] [ All public logs - TestWiki ] - test.miraheze.org
[20:49:46] interesting, so it would be that
[20:50:02] but RecentChanges doesn't work for me when I'm on CET
[20:53:12] 🤔
[21:03:52] Brewster239's log is the culprit for sure. I tried changing my RecentChanges settings to show 100 changes in 7 days, same issue, so every page where this log entry appears will fail with an MWException on CET (and maybe other timezones 🤷‍♂️)
[21:04:30] it does seem to be very likely, but the issue is when I look at the DB I can't find any timestamps that are more than 14 characters
[21:08:52] Hiding the log entry might help maybe? 👀
[21:09:15] I don't really think so, as it would still be somewhere in the DB, but you can try if you want, can't hurt :D
[21:12:52] Tested and... no, same issue
[21:13:02] HeartsDo: heh, it seems to work for me now actually
[21:13:35] HeartsDo: I also hid another revision that you didn't ("Create protection test"), so maybe try now?
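A plausible reconstruction of the failure traced above, sketched with plain PHP DateTime rather than MediaWiki's actual Language::sprintfDate code path: the year-9999 protection expiries are stored as 14-character UTC timestamps (99991231235959), so they look fine in the database, but rendering one for a viewer in Europe/Paris (+01:00) rolls it over into year 10000, producing a 15-character value:

```php
<?php
// Plausible reconstruction, not MediaWiki's actual code path; plain
// DateTime stands in for MediaWiki's timestamp/timezone handling.
$expiry = '99991231235959'; // "23:59, 31 December 9999", stored in UTC (TS_MW)

$ts = DateTime::createFromFormat('YmdHis', $expiry, new DateTimeZone('UTC'));
$ts->setTimezone(new DateTimeZone('Europe/Paris')); // the CET viewer preference

echo $ts->format('YmdHis'); // 100000101005959 - 15 characters,
                            // the exact value the exception rejected
```

That would also explain why the queries against recentchanges and logging found nothing over 14 characters: the stored value is valid, and the over-long timestamp only exists transiently at render time.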
[21:13:48] even if it works though, I'll surely file an upstream task
[21:14:15] oh, it works now :p
[21:14:28] yay, I'll do the upstream report then
[21:17:06] Reception123: are we going to reconsider SMW?
[21:17:20] SPF|Cloud: considering the discussion above, I don't see why we shouldn't try
[21:18:00] great, I guess it's up to you to create a Phab task and schedule this :)
[21:19:24] and using SMW to create a CMDB of our infrastructure (take a look at the research report I sent) could be a nice first use case
[21:20:13] yeah
[21:20:16] HeartsDo: https://phabricator.wikimedia.org/T277809 :)
[21:20:16] [ ⚓ T277809 MWException when setting protection date to year 9999 ] - phabricator.wikimedia.org
[21:22:17] SPF|Cloud: https://phabricator.miraheze.org/T7000
[21:22:18] [ ⚓ T7000 Reconsider implementing Semantic MediaWiki ] - phabricator.miraheze.org
[21:22:21] and there goes our 7000th task!
[21:22:28] Thanks, and I added myself as a subscriber on the task! :p
[21:22:48] HeartsDo: yeah, though tbh I doubt it will get solved any time soon, it's very, very minor
[21:23:26] https://phabricator.wikimedia.org/T277809#6926831 heh yes, I was sure it would get lowest priority
[21:23:27] [ ⚓ T277809 MWException when setting protection date to year 9999 ] - phabricator.wikimedia.org
[21:36:11] using MediaWiki syntax in a Phabricator comment doesn't work well
[21:37:45] @SRE, I can't recall who was talking about it (and where I read it), but I saw some messages regarding automating the addition of a new certificate (for HTTPS). Have you considered https://wikitech.wikimedia.org/wiki/Acme-chief?
[21:37:46] [ Acme-chief - Wikitech ] - wikitech.wikimedia.org
[21:47:08] PROBLEM - ping4 on dbbackup2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 128.70 ms
[21:49:10] RECOVERY - ping4 on dbbackup2 is OK: PING OK - Packet loss = 0%, RTA = 117.72 ms
[21:56:50] [miraheze/puppet] Southparkfan pushed 1 commit to master [+0/-0/±2] https://git.io/JmwGB
[21:56:51] [miraheze/puppet] Southparkfan 8e1d7af - Prometheus blackbox: add HTTPS metrics for more sites (T6800)
[22:15:08] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.59, 1.83, 1.36
[22:17:05] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.88, 1.51, 1.31
[22:58:20] !log start backup on dbbackup2 (command: https://phabricator.miraheze.org/T5877#132335) for c3, to test impact on load
[22:58:21] [ ⚓ T5877 Revise MariaDB backup strategy ] - phabricator.miraheze.org
[22:58:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:00:27] night
[23:02:18] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 286s
[23:06:40] PROBLEM - mail2 IMAP on mail2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:39:38] RECOVERY - mail2 IMAP on mail2 is OK: IMAP OK - 0.009 second response time on 51.195.236.253 port 143 [* OK [CAPABILITY IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE LITERAL+ STARTTLS LOGINDISABLED] Dovecot (Debian) ready.]
[23:43:30] so... maybe we can see SMW on MH already this week?
👀
[23:47:14] PROBLEM - ping4 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 334.52 ms
[23:47:50] PROBLEM - mail2 IMAP on mail2 is CRITICAL: No data received from host
[23:49:46] RECOVERY - mail2 IMAP on mail2 is OK: IMAP OK - 0.010 second response time on 51.195.236.253 port 143 [* OK [CAPABILITY IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE LITERAL+ STARTTLS LOGINDISABLED] Dovecot (Debian) ready.]
[23:59:41] PROBLEM - mail2 IMAP on mail2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds