[00:24:34] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[00:26:24] PROBLEM - Ulsfo HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[00:35:14] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[00:37:03] RECOVERY - Ulsfo HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[01:06:17] 6operations, 6Phabricator, 6Project-Creators, 6Triagers: Requests for addition to the #project-creators group (in comments) - https://phabricator.wikimedia.org/T706#1984525 (10Danny_B) Please add me to the #project-creators group. I would like to clean up the #tracking bugs backlog. Thanks.
[02:04:40] can somebody run a statistics query on preferences for me? labs do not replicate the necessary data (toolserver used to, though)
[02:25:01] !log mwdeploy@tin sync-l10n completed (1.27.0-wmf.10) (duration: 10m 14s)
[02:25:08] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[02:32:12] !log l10nupdate@tin ResourceLoader cache refresh completed at Sun Jan 31 02:32:12 UTC 2016 (duration 7m 11s)
[02:32:17] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[02:32:55] PROBLEM - Unmerged changes on repository mediawiki_config on mira is CRITICAL: There is one unmerged change in mediawiki_config (dir /srv/mediawiki-staging/).
[02:40:13] (03CR) 10Danny B.: "The commit message is incorrect - it is sk.wikisource, not wikipedia." [mediawiki-config] - 10https://gerrit.wikimedia.org/r/265896 (https://phabricator.wikimedia.org/T122175) (owner: 10Dereckson)
[02:57:15] PROBLEM - puppet last run on cp3012 is CRITICAL: CRITICAL: puppet fail
[03:25:23] RECOVERY - puppet last run on cp3012 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[03:35:04] PROBLEM - puppet last run on mw1097 is CRITICAL: CRITICAL: Puppet has 1 failures
[03:35:37] Danny_B: we can't fix a commit message once merged into master, alas :/
[03:37:26] But, well, your explicit note in the code review is helpful to know more quickly whether the issue is the commit message or the change itself.
[04:03:03] RECOVERY - puppet last run on mw1097 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[04:07:04] (03PS1) 10Tim Landscheidt: dynamicproxy: Remove obsolete code [puppet] - 10https://gerrit.wikimedia.org/r/267523
[04:11:20] (03CR) 10Tim Landscheidt: "Tested on Toolsbeta." [puppet] - 10https://gerrit.wikimedia.org/r/267523 (owner: 10Tim Landscheidt)
[04:16:39] Danny_B: What kind of query?
[04:22:44] PROBLEM - Incoming network saturation on labstore1003 is CRITICAL: CRITICAL: 22.22% of data above the critical threshold [100000000.0]
[05:34:59] !log restarted extensions/CentralAuth/maintenance/resetGlobalUserTokens.php
[05:35:04] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[05:39:54] PROBLEM - Incoming network saturation on labstore1003 is CRITICAL: CRITICAL: 14.29% of data above the critical threshold [100000000.0]
[06:08:04] RECOVERY - Incoming network saturation on labstore1003 is OK: OK: Less than 10.00% above the threshold [75000000.0]
[06:27:13] PROBLEM - Kafka Broker Under Replicated Partitions on kafka1022 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [10.0]
[06:30:14] PROBLEM - puppet last run on mw2081 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:30:43] PROBLEM - puppet last run on mw2018 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:30:44] PROBLEM - puppet last run on mw2073 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:31:24] PROBLEM - puppet last run on mw1135 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:32:13] PROBLEM - puppet last run on mw2129 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:32:13] PROBLEM - puppet last run on mw2045 is CRITICAL: CRITICAL: Puppet has 1 failures
[06:34:57] (03CR) 10BBlack: "That's probably the better way in the long run, but for now this is the less-invasive change I think. We'd have to specify all possible w" [puppet] - 10https://gerrit.wikimedia.org/r/267381 (https://phabricator.wikimedia.org/T125176) (owner: 10GWicke)
[06:36:52] (03CR) 10BBlack: MW parsoid URLs: s/parsoidcache/parsoid/ (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/267234 (https://phabricator.wikimedia.org/T110472) (owner: 10BBlack)
[06:43:38] (03CR) 10Mobrovac: [C: 031] MW parsoid URLs: s/parsoidcache/parsoid/ (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/267234 (https://phabricator.wikimedia.org/T110472) (owner: 10BBlack)
[06:52:25] PROBLEM - Ulsfo HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 14.29% of data above the critical threshold [1000.0]
[06:54:13] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 44.44% of data above the critical threshold [1000.0]
[06:56:44] RECOVERY - puppet last run on mw2081 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[06:56:44] RECOVERY - puppet last run on mw2129 is OK: OK: Puppet is currently enabled, last run 28 seconds ago with 0 failures
[06:57:44] RECOVERY - puppet last run on mw1135 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:58:34] RECOVERY - puppet last run on mw2045 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:58:54] RECOVERY - puppet last run on mw2018 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:58:54] RECOVERY - puppet last run on mw2073 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[07:01:24] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[07:03:13] RECOVERY - Ulsfo HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[08:07:53] PROBLEM - Kafka Broker Replica Max Lag on kafka1018 is CRITICAL: CRITICAL: 56.00% of data above the critical threshold [5000000.0]
[08:11:35] PROBLEM - Kafka Broker Replica Max Lag on kafka1022 is CRITICAL: CRITICAL: 53.85% of data above the critical threshold [5000000.0]
[08:21:53] RECOVERY - Kafka Broker Replica Max Lag on kafka1018 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[08:22:04] RECOVERY - Kafka Broker Replica Max Lag on kafka1022 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[09:26:24] PROBLEM - puppet last run on db2011 is CRITICAL: CRITICAL: Puppet has 1 failures
[09:32:08] 6operations, 10Wikimedia-Video, 5Patch-For-Review: 1gb file upload limit is too restrictive for conference presentation videos - https://phabricator.wikimedia.org/T116514#1984679 (10Reedy)
[09:32:41] 6operations, 10Wikimedia-Video, 5Patch-For-Review: 1gb file upload limit is too restrictive for conference presentation videos - https://phabricator.wikimedia.org/T116514#1751367 (10Reedy) @fgiunchedi Want to give a yes/no to this or the changeset? :)
[09:52:44] RECOVERY - puppet last run on db2011 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[10:00:57] (03CR) 10Alexandros Kosiaris: [C: 032] "PuppetSWAT is for production, not deployment-prep. We can merge this whenever. In fact, merging it right now" [puppet] - 10https://gerrit.wikimedia.org/r/267416 (owner: 10Mobrovac)
[10:16:09] (03PS1) 10Alexandros Kosiaris: servermon: Remove useless AllowOverride [puppet] - 10https://gerrit.wikimedia.org/r/267526
[10:22:01] (03CR) 10Alexandros Kosiaris: [C: 032] servermon: Remove useless AllowOverride [puppet] - 10https://gerrit.wikimedia.org/r/267526 (owner: 10Alexandros Kosiaris)
[10:29:25] PROBLEM - puppet last run on ms-be1010 is CRITICAL: CRITICAL: puppet fail
[10:30:41] (03CR) 10Hashar: "Additionally, the beta cluster has its own puppet master ( deployment-puppetmaster.deployment-prep.eqiad.wmflabs ) so you can cherry-pick " [puppet] - 10https://gerrit.wikimedia.org/r/267416 (owner: 10Mobrovac)
[10:35:13] PROBLEM - check_mysql on db1008 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 614
[10:39:04] PROBLEM - puppet last run on mw2171 is CRITICAL: CRITICAL: puppet fail
[10:40:13] PROBLEM - check_mysql on db1008 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 915
[10:45:13] RECOVERY - check_mysql on db1008 is OK: Uptime: 1019215 Threads: 2 Questions: 6420553 Slow queries: 6908 Opens: 2775 Flush tables: 2 Open tables: 430 Queries per second avg: 6.299 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0
[10:46:03] (03PS2) 10Faidon Liambotis: reprepro: add HP's MCP repository to updates [puppet] - 10https://gerrit.wikimedia.org/r/267262 (https://phabricator.wikimedia.org/T97998)
[10:47:51] (03CR) 10Alexandros Kosiaris: [C: 031] reprepro: add HP's MCP repository to updates [puppet] - 10https://gerrit.wikimedia.org/r/267262 (https://phabricator.wikimedia.org/T97998) (owner: 10Faidon Liambotis)
[10:56:24] PROBLEM - Kafka Broker Replica Max Lag on kafka1018 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [5000000.0]
[10:59:55] RECOVERY - Kafka Broker Replica Max Lag on kafka1018 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[11:05:24] RECOVERY - puppet last run on mw2171 is OK: OK: Puppet is currently enabled, last run 4 seconds ago with 0 failures
[12:06:23] (03PS1) 10Alexandros Kosiaris: bacula: minor linting [puppet] - 10https://gerrit.wikimedia.org/r/267534
[12:07:15] (03CR) 10Alexandros Kosiaris: [C: 032] bacula: minor linting [puppet] - 10https://gerrit.wikimedia.org/r/267534 (owner: 10Alexandros Kosiaris)
[12:07:22] (03PS2) 10Alexandros Kosiaris: bacula: minor linting [puppet] - 10https://gerrit.wikimedia.org/r/267534
[12:20:24] PROBLEM - Kafka Broker Replica Max Lag on kafka1013 is CRITICAL: CRITICAL: 59.09% of data above the critical threshold [5000000.0]
[12:24:03] PROBLEM - Kafka Broker Replica Max Lag on kafka1018 is CRITICAL: CRITICAL: 73.91% of data above the critical threshold [5000000.0]
[12:34:34] RECOVERY - Kafka Broker Replica Max Lag on kafka1018 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[12:38:03] RECOVERY - Kafka Broker Replica Max Lag on kafka1013 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[13:11:04] PROBLEM - Kafka Broker Replica Max Lag on kafka1020 is CRITICAL: CRITICAL: 57.14% of data above the critical threshold [5000000.0]
[13:24:54] RECOVERY - Kafka Broker Replica Max Lag on kafka1020 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[13:33:06] bd808: What was the reason to revert to wmf.10 this time?
[13:40:44] PROBLEM - swift-account-server on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:40:44] PROBLEM - swift-container-updater on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:40:54] PROBLEM - Check size of conntrack table on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:35] PROBLEM - swift-container-auditor on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:43] PROBLEM - swift-object-updater on ms-be1010 is CRITICAL: Timeout while attempting connection
[13:41:43] PROBLEM - swift-object-auditor on ms-be1010 is CRITICAL: Timeout while attempting connection
[13:41:43] PROBLEM - swift-account-reaper on ms-be1010 is CRITICAL: Timeout while attempting connection
[13:41:43] PROBLEM - salt-minion processes on ms-be1010 is CRITICAL: Timeout while attempting connection
[13:41:43] PROBLEM - swift-object-server on ms-be1010 is CRITICAL: Timeout while attempting connection
[13:41:43] PROBLEM - swift-container-server on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:43] PROBLEM - RAID on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:44] PROBLEM - swift-container-replicator on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:44] PROBLEM - swift-object-replicator on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:45] PROBLEM - configured eth on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:45] PROBLEM - dhclient process on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:41:46] PROBLEM - swift-account-auditor on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:42:03] PROBLEM - Disk space on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:42:23] PROBLEM - DPKG on ms-be1010 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[14:05:43] PROBLEM - Kafka Broker Replica Max Lag on kafka1018 is CRITICAL: CRITICAL: 57.69% of data above the critical threshold [5000000.0]
[14:19:34] RECOVERY - Kafka Broker Replica Max Lag on kafka1018 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[16:01:52] !log changed wikiversions.php on mw1017 to serve wmf.10 for SessionManager-related debugging
[16:01:55] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[17:57:03] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 10.00% of data above the critical threshold [1000.0]
[18:04:05] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[18:41:03] PROBLEM - Kafka Broker Replica Max Lag on kafka1014 is CRITICAL: CRITICAL: 62.50% of data above the critical threshold [5000000.0]
[18:51:24] RECOVERY - Kafka Broker Replica Max Lag on kafka1014 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[19:46:43] PROBLEM - SSH on ms-be1010 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:48:14] RECOVERY - SSH on ms-be1010 is OK: SSH OK - OpenSSH_6.6.1p1 Ubuntu-2ubuntu2wmfprecise2 (protocol 2.0)
[20:27:36] (03PS1) 10MarcoAurelio: Set $wgEnotifMinorEdits = true on huwiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/267558 (https://phabricator.wikimedia.org/T125351)
[20:33:23] (03CR) 10Luke081515: [C: 031] Set $wgEnotifMinorEdits = true on huwiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/267558 (https://phabricator.wikimedia.org/T125351) (owner: 10MarcoAurelio)
[20:57:48] (03CR) 10Tacsipacsi: [C: 031] Set $wgEnotifMinorEdits = true on huwiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/267558 (https://phabricator.wikimedia.org/T125351) (owner: 10MarcoAurelio)
[21:10:22] 6operations, 10MediaWiki-Cache, 10MediaWiki-JobQueue, 10MediaWiki-JobRunner, and 2 others: Investigate massive increase in htmlCacheUpdate jobs in Dec/Jan - https://phabricator.wikimedia.org/T124418#1985222 (10ori) Distribution of purge URLs by hostname: ``` [fluorine:/a/mw-log] $ field 7 AdHocDebug.log |...
[21:33:12] !log ori@mira Synchronized php-1.27.0-wmf.10/includes/jobqueue/jobs/HTMLCacheUpdateJob.php: Live-hacked wfDebugLog() call for T124418 (duration: 01m 31s)
[21:33:15] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[21:37:19] !log https://tools.wmflabs.org/sal/production missing data from 2016-01-30 until now
[21:37:22] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[22:00:24] 6operations, 10MediaWiki-Cache, 10MediaWiki-JobQueue, 10MediaWiki-JobRunner, and 2 others: Investigate massive increase in htmlCacheUpdate jobs in Dec/Jan - https://phabricator.wikimedia.org/T124418#1985305 (10Lydia_Pintscher) FYI: Wiktionary isn't supported yet by Wikidata so at least that part can't come...
[22:03:53] !log backfilled missing data in https://tools.wmflabs.org/sal/production from https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[22:03:57] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[22:08:42] any root around to run a couple of commands on ruthenium for me? (a) /home/ssastry/bin/update-code.sh (b) systemctl restart parsoid-rt-client.service
[22:08:57] not urgent.
[22:09:22] subbu: (a) errors with:
[22:09:24] You are not currently on a branch. Please specify which
[22:09:24] branch you want to merge with. See git-pull(1) for details.
[22:09:24] git pull
[22:09:49] let me check.
[22:10:27] for some reason /usr/lib/parsoid/src is not on master.
[22:11:50] (03PS4) 10Tim Landscheidt: geturls: Fix pyflakes warnings [software] - 10https://gerrit.wikimedia.org/r/169253
[22:11:51] should I put it on master?
[22:12:15] ori, yes please.
[22:14:08] !log Updated parsoid on ruthenium and restarted parsoid-rt-client on ruthenium, per subbu's request.
[22:14:11] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[22:14:22] thanks.
[22:15:17] np
[22:23:06] 6operations, 6Commons, 6Multimedia, 10Traffic: Commons API fails (413 error) to upload file within 100MB threshold - https://phabricator.wikimedia.org/T86436#1985332 (10Nemo_bis) > The file was 57MB in size. If the original DjVu is 57 MB, then it's likely that the upload was indeed over 100 MB, due to htt...
[22:30:29] testreduce code has a bug that is being exposed with node 4.2 .. ori if you are around, can you do another restart of parsoid-rt-client ? Starting tomorrow, I should have sudo access to most of this and I'll investigate the bug.
[22:30:59] subbu: sure
[22:31:25] !log restarted parsoid-rt-client.service
[22:31:28] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[22:34:47] subbu: just so you know, I'll be heading out in about 20 minutes. Happy to sudo on your behalf until then. Tim should be online soon, too.
[22:34:59] ori, this is good.
[22:35:29] i just wanted to get rt testing kicked off.
[22:35:43] it looks like there was a stuck client that didn't cleanly restart the first time around.
[22:36:22] i am going to head off soon myself. have a good rest of your afternoon.
[22:41:19] csteipp_afk: is it possible to block a certain user-agent from editing?
[22:41:26] you too
[22:41:34] mafk: No, not at this time
[22:42:04] csteipp_afk: ah, well. It's because we're under a spambot attack today, all of them using the same UA
[22:42:16] several hours now, and all IPs
[22:42:42] mafk: Someone in ops might be able to block that at the varnish layer--- that would at least slow them down until they adapted.
[22:42:56] (unless it's already a common UA)
[22:43:23] I've blacklisted the pattern of the page they're creating in the title blacklist
[22:43:54] see for yourself https://meta.wikimedia.org/w/index.php?title=Special:AbuseLog&wpSearchFilter=104
[22:44:35] mafk: Thanks for doing that. Any idea of the scale of this? How many IPs are involved, and how much spam is it producing?
[22:44:57] like a thousand today csteipp_afk
[22:45:16] stryn and other stewards have been globally blocking IPs all day
[22:47:00] 270 hits on filter 104 today, but that's just one they're triggering
[22:47:21] Oh wow (scrolling through that abuse log)
[22:48:15] mafk: Is the UA something that other people use? Or is it unique enough we could just block it all?
[22:48:36] csteipp_afk: due to privacy, I'd better pm you with it if you don't mind?
[22:48:41] but it looks common to me
[22:49:15] mafk: I don't really need it.. just wondered if it was "Spam Bot" vs some specific but legitimate browser..
[22:49:33] looks common to me
[22:50:01] feel free to CU any of those IPs if you need the tech data... unless you can get it by other ways ;)
[22:50:29] If it was specific, we could add a totally hacky hook to just block it. But I'd rather not cut off legitimate users.
[22:51:34] maybe they're all hitting the title blacklist
[22:51:40] now, I mean
[22:51:46] you could check the hits
[22:51:58] the title blacklist log is not enabled for regular users
[22:57:31] 6operations, 10MediaWiki-Cache, 10MediaWiki-JobQueue, 10MediaWiki-JobRunner, and 2 others: Investigate massive increase in htmlCacheUpdate jobs in Dec/Jan - https://phabricator.wikimedia.org/T124418#1985370 (10ori) Distribution of (indirect) callers of `HTMLCacheUpdate::__construct` for the past 20 minutes...
[23:05:31] mafk: I feel bad there's not a lot I can offer you all right now for fighting this. I think this is common enough that it would be good to figure out a good way of handling it. The AbuseFilter solution got put on hold since it introduced privacy issues, but maybe we need a specific extension just for this that only stewards can access.
[23:07:13] csteipp_afk: just out of curiosity, can you access the title blacklist log in the DB and check whether they're attempting to edit after the TBL entry?
[23:07:38] mafk: I think so, one sec..
[23:09:57] Trying to figure out where we log that...
[23:13:27] title blacklist log goes into the logging table
[23:25:24] csteipp_afk: well, don't worry, got to go now.
[23:33:43] PROBLEM - Kafka Broker Replica Max Lag on kafka1018 is CRITICAL: CRITICAL: 63.64% of data above the critical threshold [5000000.0]
[23:40:43] RECOVERY - Kafka Broker Replica Max Lag on kafka1018 is OK: OK: Less than 50.00% above the threshold [1000000.0]
[23:58:52] !log krenair@mira Synchronized php-1.27.0-wmf.11/extensions/VisualEditor/extension.json: https://gerrit.wikimedia.org/r/#/c/267617/ (duration: 01m 28s)
[23:58:54] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log, Master
[23:59:37] Krenair: wmf/1.27.0-wmf.11.nosessionmanager in /core would need a corresponding version in /vendor for CI to pass, I think – it's falling back to master.