[00:08:29] RECOVERY - MariaDB Slave Lag: s4 on dbstore1001 is OK: OK slave_sql_lag Replication lag: 89628.37 seconds
[00:08:43] Krenair: https://wikitech.wikimedia.org/wiki/MariaDB#dbstore1001_.26_dbstore2001
[00:09:30] oh, ok
[00:09:30] useful if something deletes a bunch of stuff and that gets replicated out :)
[00:10:03] the delayed ones can be stopped and used to rebuild more easily
[00:22:22] yeah :)
[02:12:11] 10Operations, 10DC-Ops: document all scs connections - https://phabricator.wikimedia.org/T175876#3612626 (10ayounsi) 1/ Longer term that data should be in Netbox or similar - T170144. Until then spreadsheet or Wikitech seems fine to me. 2/ Rancid seems to be able to pull and archive configuration from OpenGea...
[02:18:31] RECOVERY - MariaDB Slave Lag: s7 on dbstore1001 is OK: OK slave_sql_lag Replication lag: 89704.70 seconds
[02:35:09] PROBLEM - puppet last run on lvs1005 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[03:03:39] RECOVERY - puppet last run on lvs1005 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures
[03:13:11] (03PS5) 10TerraCodes: Remove overlapping userrights [mediawiki-config] - 10https://gerrit.wikimedia.org/r/370791 (https://phabricator.wikimedia.org/T101983)
[04:21:59] PROBLEM - HHVM rendering on mw1198 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:22:50] RECOVERY - HHVM rendering on mw1198 is OK: HTTP OK: HTTP/1.1 200 OK - 73998 bytes in 3.373 second response time
[04:37:28] (03CR) 10Gergő Tisza: [C: 031] add tgr to pdfrender-admin sudo group [puppet] - 10https://gerrit.wikimedia.org/r/378060 (https://phabricator.wikimedia.org/T175882) (owner: 10RobH)
[04:39:12] 10Operations, 10Ops-Access-Requests, 10Patch-For-Review: Requesting access to scb* and pdfrender-admin for tgr - https://phabricator.wikimedia.org/T175882#3612638 (10Tgr) a:05Tgr>03RobH Signed. >>! In T175882#3608405, @RobH wrote: > It seems that he only needs pdfrender-admin, not anything else. I'll l...
[05:21:30] PROBLEM - Eqiad HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [1000.0]
[05:22:09] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [1000.0]
[05:22:29] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [1000.0]
[05:24:09] PROBLEM - Ulsfo HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [1000.0]
[05:36:59] 10Operations, 10Traffic: Text eqiad varnish 503 spikes - https://phabricator.wikimedia.org/T175803#3612658 (10Steinsplitter) >>! In T175803#3612226, @Samtar wrote: > It looks like cp1052 had a spike, but has since recovered The problem seems to be back, getting such errors: ``` Request from via cp1053 cp...
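
(A note on the delayed-replica exchange at 00:08-00:22 above: the icinga check reports a lag of ~89,000 seconds as OK because dbstore1001 is intentionally kept about a day behind its master, precisely so a destructive change can be caught before it replays. A minimal sketch of how such a deliberately delayed replica can be configured, assuming native delayed replication via MASTER_DELAY — available in MySQL 5.6+ and MariaDB 10.2.3+ — rather than whatever mechanism dbstore1001 actually used at the time:)

```bash
# Hedged sketch: keep a replica a fixed interval behind its master so a
# mass delete or bad migration can be stopped before the replica applies it.
# MASTER_DELAY is native to MySQL 5.6+ / MariaDB 10.2.3+; older setups
# typically used an external tool such as pt-slave-delay instead.
mysql -e "STOP SLAVE;
          CHANGE MASTER TO MASTER_DELAY = 86400;  -- stay 24 hours behind
          START SLAVE;"
```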
[05:42:19] PROBLEM - Codfw HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[05:47:05] 10Operations, 10Traffic: Text eqiad varnish 503 spikes - https://phabricator.wikimedia.org/T175803#3603561 (10APerson) Problem still persists: PROBLEM - Codfw HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[06:11:10] RECOVERY - MariaDB Slave Lag: s3 on dbstore1001 is OK: OK slave_sql_lag Replication lag: 89890.17 seconds
[06:14:29] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [1000.0]
[06:14:39] RECOVERY - Eqiad HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:14:59] RECOVERY - Codfw HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:23:10] RECOVERY - Ulsfo HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:23:39] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:24:19] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:27:49] PROBLEM - graphite.wikimedia.org on graphite1003 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 398 bytes in 0.002 second response time
[06:28:49] RECOVERY - graphite.wikimedia.org on graphite1003 is OK: HTTP OK: HTTP/1.1 200 OK - 1547 bytes in 0.013 second response time
[06:32:19] PROBLEM - Ulsfo HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [1000.0]
[06:33:39] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[06:33:50] PROBLEM - Eqiad HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[06:34:29] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [1000.0]
[06:50:09] RECOVERY - Eqiad HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:50:39] RECOVERY - Ulsfo HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:50:40] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[06:51:59] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:09:09] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [1000.0]
[07:15:09] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:32:19] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [1000.0]
[07:32:39] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [1000.0]
[07:32:49] PROBLEM - Eqiad HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[07:34:19] PROBLEM - Ulsfo HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [1000.0]
[07:34:20] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:34:39] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:37:20] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [1000.0]
[07:37:39] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 55.56% of data above the critical threshold [1000.0]
[07:43:29] RECOVERY - Ulsfo HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:43:49] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:44:30] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:44:59] RECOVERY - Eqiad HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:52:39] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [1000.0]
[07:53:59] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[07:57:09] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[08:00:49] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[08:06:19] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[08:06:59] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[08:11:46] !log restart varnish backend on cp1053 - recurrent mailbox lag
[08:12:02] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[08:12:25] 10Operations, 10Traffic: Text eqiad varnish 503 spikes - https://phabricator.wikimedia.org/T175803#3612747 (10Samtar) {F9596771} {F9596801} The frequency of spikes seems to be increasing over the last 24 hours when compared to the last seven days {F9596862} {F9596845}
[08:14:00] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[08:14:17] elukey: morning o/ would you agree the frequency of these spikes is increasing? ^
[08:14:19] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[08:17:44] TheresNoTime: hi! It might be due to some traffic patterns, but afaics cp1053 and cp1052 are the heavy hitters during the past 10 hours
[08:49:51] (03PS6) 10Zoranzoki21: Add new throttle rules.. [mediawiki-config] - 10https://gerrit.wikimedia.org/r/378393 (https://phabricator.wikimedia.org/T176037)
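
(On the "recurrent mailbox lag" behind the 08:11:46 restart above: in Varnish 4, cached objects are "mailed" to a dedicated expiry thread, and a growing gap between mailed and received objects means that thread is falling behind, which eventually produces the 503 spikes being chased here. A hedged sketch of how that gap can be read from the counters; the lag-as-simple-difference computation is an assumption about how the icinga check works, not taken from the log:)

```bash
# Approximate the expiry "mailbox lag" on a Varnish 4 backend as the gap
# between objects mailed to and received by the expiry thread.
mailed=$(varnishstat -1 -f MAIN.exp_mailed | awk '{print $2}')
received=$(varnishstat -1 -f MAIN.exp_received | awk '{print $2}')
echo "mailbox lag: $((mailed - received))"
```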
[09:20:29] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[09:22:19] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[09:27:39] PROBLEM - Esams HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[09:28:19] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[09:29:39] RECOVERY - Esams HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[09:44:39] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[09:46:08] last two peaks are cp1052, I'll restart the backend in there too
[09:46:14] !log restart varnish backend on cp1052 - recurrent mailbox lag
[09:46:30] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[09:49:29] ok varnishlog looks ok
[09:50:42] 10Operations, 10Traffic: Text eqiad varnish 503 spikes - https://phabricator.wikimedia.org/T175803#3612761 (10elukey) Restarted varnish-backend on cp1053 and cp1052 since they were showing up frequently in the X-caches ints.
[09:51:19] PROBLEM - Wikitech and wt-static content in sync on labtestweb2001 is CRITICAL: wikitech-static CRIT - wikitech and wikitech-static out of sync (210851s 200000s)
[09:52:49] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[09:53:09] PROBLEM - Wikitech and wt-static content in sync on silver is CRITICAL: wikitech-static CRIT - wikitech and wikitech-static out of sync (210851s 200000s)
[09:58:32] so https://wikitech-static.wikimedia.org/w/api.php seems up, not sure what's happening
[10:00:22] andrewbogott: ---^
[10:14:41] (03PS3) 10MarcoAurelio: New 'abusefilter-helper' configuration for en.wikipedia [mediawiki-config] - 10https://gerrit.wikimedia.org/r/377473 (https://phabricator.wikimedia.org/T175684)
[10:21:39] RECOVERY - MariaDB Slave Lag: s1 on dbstore1001 is OK: OK slave_sql_lag Replication lag: 89862.52 seconds
[11:21:29] RECOVERY - Check systemd state on restbase1009 is OK: OK - running: The system is fully operational
[11:31:03] (03PS9) 10MarcoAurelio: Cloud VPS configuration for hi.wikivoyage [puppet] - 10https://gerrit.wikimedia.org/r/371096 (https://phabricator.wikimedia.org/T173013)
[11:46:24] elukey, that check is not about the site being up
[11:46:29] elukey, the problem is that the site is out of date
[11:46:47] hence 'out of sync'
[11:47:27] the latest content on wikitech-static is from Thursday
[11:48:04] but wikitech has revisions today
[11:48:40] you should check what's going on with the script on wikitech-static
[11:57:28] 10Operations, 10TemplateStyles, 10Traffic, 10Wikimedia-Extension-setup, and 5 others: Deploy TemplateStyles to svwiki - https://phabricator.wikimedia.org/T176082#3612936 (10Nirmos)
[12:40:30] <_joe_> !log restarting hhvm on some api appservers, to ease the cpu overload
[12:40:44] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[12:49:56] <_joe_> !log taking a full debug dump of mw1288 after depooling it
[12:50:09] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[12:51:30] PROBLEM - HHVM rendering on mw1288 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:52:19] PROBLEM - Nginx local proxy to apache on mw1288 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:52:19] PROBLEM - Apache HTTP on mw1288 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:52:37] guess that is expected? ^
[12:54:07] Sagan, yes
[12:55:14] <_joe_> Sagan: yes, see my entry in the SAL just before
[12:55:26] <_joe_> I'm taking a full dump of the stack of that machine
[12:55:48] <_joe_> before doing a rolling restart of all hhvm servers
[12:55:53] <_joe_> in the api cluster
[12:56:19] RECOVERY - Apache HTTP on mw1288 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 612 bytes in 0.036 second response time
[12:56:19] RECOVERY - Nginx local proxy to apache on mw1288 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 613 bytes in 0.045 second response time
[12:56:19] <_joe_> I want to be able to investigate this further, but then I just want to fix the issue and go back to what I was doing on sunday :P
[12:56:30] RECOVERY - HHVM rendering on mw1288 is OK: HTTP OK: HTTP/1.1 200 OK - 74786 bytes in 0.098 second response time
[12:56:40] PROBLEM - Disk space on mw1288 is CRITICAL: DISK CRITICAL - free space: /tmp 0 MB (0% inode=99%)
[12:58:49] RECOVERY - Disk space on mw1288 is OK: DISK OK
[12:59:10] PROBLEM - puppet last run on mw1288 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:00:49] PROBLEM - HHVM rendering on mw1198 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 1308 bytes in 0.001 second response time
[13:01:49] RECOVERY - HHVM rendering on mw1198 is OK: HTTP OK: HTTP/1.1 200 OK - 74790 bytes in 1.426 second response time
[13:05:52] Krenair: I know it, but my point was that the static site was up and running, so the verification of what's appening (that I am not really clear how to do) could be delayed to tomorrow
[13:06:35] ok
[13:06:47] *happening
[13:25:40] RECOVERY - puppet last run on mw1288 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[15:34:06] 10Operations, 10Analytics, 10Analytics-Wikistats, 10Wikidata, and 6 others: Create Wikiversity Hindi - https://phabricator.wikimedia.org/T168765#3613113 (10Dzahn) The last unchecked checkbox was "labs". The comments above sound like there are db replicas now in labs. So.. ticket resolved?
[15:36:09] PROBLEM - MegaRAID on db1046 is CRITICAL: CRITICAL: 1 LD(s) must have write cache policy WriteBack, currently using: WriteThrough
[15:57:23] elukey: the 'in sync' error is about wikitech-static diverging from wikitech. I'll see what's happening with the rsync
[17:12:37] thanks andrewbogott!
[17:43:11] 10Operations: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613217 (10Andrew)
[17:43:56] elukey: ^ I've located the problem, but it seems to be the result of a mediawiki bug. I'm not immediately sure who to pass it on to.
[17:45:04] If anyone feels like fixing an interesting mediawiki bug, T176090 seems like a good one.
[17:45:05] T176090: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090
[17:46:10] ACKNOWLEDGEMENT - Wikitech and wt-static content in sync on silver is CRITICAL: wikitech-static CRIT - wikitech and wikitech-static out of sync (230181s 200000s) andrew bogott This is T176090
[17:48:09] RECOVERY - salt-minion processes on labtestvirt2001 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/salt-minion
[17:48:19] RECOVERY - salt-minion processes on labtestvirt2002 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/salt-minion
[17:53:39] RECOVERY - DPKG on labtestvirt2002 is OK: All packages OK
[18:44:09] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/page/{language}/{title}{/revision} (Fetch enwiki Oxygen page) timed out before a response was received: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received
[18:45:10] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[18:46:32] andrewbogott, wtf wow
[18:46:33] ok
[18:46:49] 10Operations, 10MediaWiki-Maintenance-scripts: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613291 (10Krenair)
[18:49:50] 10Operations, 10MediaWiki-Maintenance-scripts: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613205 (10Reedy) We seem to have two issues here... Why is MW emitting invalid XML? Also, this is likely a "dupe" of {T175444}, as you can see at https://wikitech.wikimedia.org/wiki/File:G...
[18:56:19] RECOVERY - MegaRAID on db1046 is OK: OK: optimal, 1 logical, 2 physical, WriteBack policy
[19:03:16] 10Operations, 10MediaWiki-Maintenance-scripts: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613323 (10Reedy) @Andrew Is it only Files that are generating broken xml?
[19:08:37] somebody who can help me with some easy regexp at shell?
[19:08:42] *somebody here
[19:10:59] Sagan, hmm?
[19:11:34] Krenair: I want to construct a if in a shell file for my icinga, which matches if a param contains *ZNC*
[19:14:24] krenair@bastion-01:~$ echo aZNCa | grep -q ZNC
[19:14:24] krenair@bastion-01:~$ echo $?
[19:14:24] 0
[19:14:24] krenair@bastion-01:~$ echo aZNa | grep -q ZNC
[19:14:25] krenair@bastion-01:~$ echo $?
[19:14:26] 1
[19:18:38] so you could do something like this:
[19:18:40] echo $1 | grep -q ZNC
[19:18:40] if [ $? -eq 0 ]
[19:18:40] then
[19:18:40] echo "match"
[19:18:40] fi
[19:20:19] after a quick google it turns out you can also do this:
[19:20:24] if [[ $1 == *"ZNC"* ]]
[19:20:24] then
[19:20:24] echo "match"
[19:20:24] fi
[19:21:07] Sagan
[19:23:01] Krenair: I will try it, thx :)
[19:32:57] Krenair: it works, thank you very much :)
[19:33:04] np
[19:33:59] PROBLEM - puppet last run on dbmonitor1001 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[20:02:29] RECOVERY - puppet last run on dbmonitor1001 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[20:23:16] (03CR) 10MarcoAurelio: [C: 04-1] "recheck" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/370791 (https://phabricator.wikimedia.org/T101983) (owner: 10TerraCodes)
[20:36:00] (03CR) 10Luke081515: "Did you removed all task references which got unuseful now?" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/370791 (https://phabricator.wikimedia.org/T101983) (owner: 10TerraCodes)
[20:41:04] 10Operations, 10MediaWiki-Maintenance-scripts: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613406 (10Andrew) @Reedy I'm on holiday and so only got as far as seeing that that one use-case produces the problem. I don't know immediately how to find all the mismatches, although there...
[21:41:20] (03CR) 10TerraCodes: [C: 031] "> Did you removed all task references which got unuseful now?" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/370791 (https://phabricator.wikimedia.org/T101983) (owner: 10TerraCodes)
[21:43:59] PROBLEM - Check Varnish expiry mailbox lag on cp1099 is CRITICAL: CRITICAL: expiry mailbox lag is 2028479
[22:13:59] RECOVERY - Check Varnish expiry mailbox lag on cp1099 is OK: OK: expiry mailbox lag is 0
[22:47:29] PROBLEM - Ensure NFS exports are maintained for new instances with NFS on labstore1004 is CRITICAL: CRITICAL - Expecting active but unit nfs-exportd is inactive
[22:48:39] PROBLEM - puppet last run on labstore1004 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[22:50:30] RECOVERY - Ensure NFS exports are maintained for new instances with NFS on labstore1004 is OK: OK - nfs-exportd is active
[22:53:25] ^ got it
[22:53:39] RECOVERY - puppet last run on labstore1004 is OK: OK: Puppet is currently enabled, last run 5 seconds ago with 0 failures
[23:06:54] (03PS1) 10Madhuvishy: firstboot: Force puppet run after ensure NFS mounts available [puppet] - 10https://gerrit.wikimedia.org/r/378639 (https://phabricator.wikimedia.org/T171508)
[23:07:36] (03CR) 10Madhuvishy: [C: 032] firstboot: Force puppet run after ensure NFS mounts available [puppet] - 10https://gerrit.wikimedia.org/r/378639 (https://phabricator.wikimedia.org/T171508) (owner: 10Madhuvishy)
[23:59:19] 10Operations, 10MediaWiki-Maintenance-scripts: wikitech-static sync failing - https://phabricator.wikimedia.org/T176090#3613524 (10Reedy) So the offending code is https://github.com/wikimedia/mediawiki/blob/master/includes/export/XmlDumpWriter.php#L406 Pass it `''` and it's fine, pass it `null` and it breaks...
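
(Consolidating Krenair's two substring-matching approaches from the 19:14-19:20 exchange above into one self-contained script; the script framing and output strings are illustrative, not from the log:)

```bash
#!/bin/bash
# Two equivalent ways to test whether the first argument contains "ZNC".

# 1. Pipe through grep; -q suppresses output and the exit code carries
#    the result, so the if can test it directly.
if echo "$1" | grep -q ZNC; then
    echo "match (grep)"
fi

# 2. Pure-bash pattern match, no external process needed.
if [[ $1 == *"ZNC"* ]]; then
    echo "match (bash pattern)"
fi
```

(Note that the `[[ ... ]]` form is a bashism, so the script needs a bash shebang rather than plain `sh`.)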