[08:13:23] Oh, shit. That's right, no merging...
[09:32:15] Lydia_WMDE: What to do now? Should we finally revoke the merge rights?
[09:33:42] sjoerddebruin: aude is looking into it. it looks like an issue in core. so i'd say yes for now and we'll get it fixed asap
[09:33:44] sjoerddebruin: i'm investigating
[09:34:07] hopefully we can get a fix this afternoon, but i'm still not exactly sure what the problem is
[09:34:40] Hm
[09:34:55] sjoerddebruin: can you do it?
[09:35:39] Lydia_WMDE: I don't know who to contact. It's a configuration change i think.
[09:35:51] ah
[09:35:57] ok let me look into it then
[09:36:36] Or it could be a steward task, IDK.
[09:37:01] Nope, that's global.
[10:28:11] aude: did you end up solving https://phabricator.wikimedia.org/T115892 ?
[10:28:14] just spotted it
[10:29:27] He just found out the cause, it seems.
[10:30:29] She ;)
[10:30:41] Oh, I always forget... :(
[10:30:48] :D
[10:31:36] There are a lot of male users on nlwiki with feminine names. :/
[10:32:10] And aude sounds so male...
[10:33:56] https://en.wikipedia.org/wiki/Aude
[10:33:57] :P
[10:35:00] * nikki just uses they most of the time
[10:42:44] hi multichill
[10:42:51] addshore: commented in the ticket about the cause
[10:43:26] now need to figure out a reasonable solution
[10:44:03] hoi
[10:46:06] Lydia_WMDE: Found something yet?
[10:47:36] sjoerddebruin: no not yet :/
[10:47:48] Should I just create another Phab task then?
[10:50:19] sjoerddebruin: that'd be great
[10:55:53] Lydia_WMDE: https://phabricator.wikimedia.org/T115994 hope it's clear enough
[10:56:31] sjoerddebruin: <3
[10:57:02] bye
[11:22:25] is anyone here familiar with the julian calendar? if ruwiki says 29 july (11 august) 1861, I assume one is the julian date and the other is the gregorian one, but I have no idea which is which
[11:25:03] google says...
[11:25:21] gregorian introduced... 13 removed
[11:25:39] So julian should be later
[11:26:03] https://en.wikipedia.org/wiki/Julian_calendar
[11:26:16] Or am I confused
[11:26:27] Yup looks like it
[11:26:28] "Consequently, the Julian calendar is currently 13 days behind the Gregorian calendar; for instance, 1 January in the Julian calendar is 14 January in the Gregorian."
[11:45:59] Reedy: sadly reality is more complicated. you need to know which julian calendar. there are multiple ones, they switched the year in different month and day of the month.
[11:46:24] jzerebecki: It should be relative though :P
[11:51:54] nikki: that seems a bit too far apart to be the normal difference between the usual russian julian calendar and gregorian (oktober revolution happened in november, which fits with the 13 days Reedy mentioned)
[11:53:36] addshore: Reedy does https://gerrit.wikimedia.org/r/#/c/247546/1/wmf-config/Wikibase.php look ok? (temporary action until we have real fix)
[11:53:56] tested locally, (as sysop / bureaucrat) i'm not allowed now to access Special:MergeItems
[11:54:29] and the gadget (api) denies permission
[11:56:21] thanks :)
[12:00:09] jzerebecki: I'm not sure I understand what you mean... I'm mostly trying to figure out what to change the current value to (it's "29" right now, which is definitely wrong). if it fits with what Reedy said, does that mean setting it to 29th july (julian) would be ok?
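The temporary mitigation aude links above (the wmf-config/Wikibase.php change) boils down to revoking the Wikibase "item-merge" right, so that Special:MergeItems and the merge gadget's API calls are denied. A minimal sketch of that kind of change, assuming the standard $wgGroupPermissions mechanism; the group names here are illustrative, and the real change is the Gerrit patch linked above:

    // Temporarily revoke the "item-merge" right while the underlying bug is fixed.
    // Sketch only; group names are illustrative, not the actual production patch.
    $wgGroupPermissions['*']['item-merge'] = false;
    $wgGroupPermissions['user']['item-merge'] = false;
    $wgGroupPermissions['sysop']['item-merge'] = false; // even sysops/bureaucrats are blocked, as aude tested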
[12:02:39] sjoerddebruin: disallowed item-merge for now, until we have the real fix
[12:03:00] hopefully today, although the bug is a little bit more complex :/
[12:06:50] nikki: i'm only saying that a) I don't know what I'm doing and b) those dates do not fit the normal difference for russian julian and gregorian. they might fit a different julian calendar, which is not available for entering dates on wikidata.org . or it might be something entirely different. do you have the Qid?
[12:07:10] https://www.wikidata.org/wiki/Q10860165
[12:11:12] nikki: forget what I said earlier. it fits. 29th july is the julian date.
[12:11:53] ok, thanks :)
[12:14:25] nikki: What do you think about this: https://phabricator.wikimedia.org/T115981
[12:22:57] that sounds like a reasonable idea (although I imagine it wouldn't be implemented very soon since it would mean looking somewhere completely different for the name)
[12:23:16] Don't know if it's possible. :)
[13:13:12] aude: *looks*
[13:13:33] yeh looks good
[13:13:33] :D
[13:16:31] addshore: thanks :)
[13:17:14] I still haven't looked at the issue to understand it though!
[13:17:22] *looks again*
[13:18:03] think we need SiteLinkUniquenessValidator to be aware of some allowed conflicts to ignore
[13:18:12] e.g. sitelinks on the from-merge item
[13:18:47] hmmmmmmm
[13:18:49] else, an in-process caching SiteLinkLookup perhaps that is aware of the new item
[13:19:15] oh, so without reading any more, we are getting site link conflicts from the merged from item when adding to the to item?
[13:19:36] the way everything is hooked together is complex
[13:19:41] yep
[13:19:54] addshore: before, the order of execution was
[13:20:07] 1) modify old item and new item (with change ops)
[13:20:17] 2) save from-item (removing everything)
[13:20:30] 3) immediately do secondary data updates (update items per site table)
[13:20:34] 4) save to-item
[13:20:45] and now those updates are deferred, okay, gotcha!
[13:20:46] 5) perform validations in the process (site link uniqueness)
[13:20:51] addshore: exactly
[13:21:03] making SiteLinkUniquenessValidator indeed seems like the nicest solution right now
[13:21:05] so, site link table still has the from-item site links when checking uniqueness
[13:21:13] *making it aware of things to ignore
[13:21:15] addshore: what i am thinking....
[13:21:25] yeh, I just read on phab ;)
[13:21:31] potentially this is an issue also with label constraints, but not sure where it would actually be a problem
[13:21:48] yeh, it should also be an issue with those
[13:21:53] each action is quite independent
[13:22:11] the second save knows nothing about the from-item or the fact that it is a merge action
[13:22:18] but magic deferred timing is magic
[13:22:26] yeah :/
[13:22:50] per phab it might look like a bit of work to get SiteLinkUniquenessValidator to do what we want?
[13:22:58] indeed :(
[13:23:02] hmmm
[13:23:33] *pulls wikibase*
[13:24:19] there is a SiteLinkConflictLookup interface, but basically it's SiteLinkTable that implements it and what we use
[13:25:20] all coming from the EntityConstraintProvider for the changeop
[13:25:49] not just change op
[13:26:07] also ItemHandler
[13:28:08] could have another implementation of SiteLinkConflictLookup wrapping SiteLinkTable ? that can be constructed with $sitelinksToIgnore ?
[13:28:35] maybe but how to hook that into the code?
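aude's wrapper idea above could look roughly like the sketch below: a decorating SiteLinkConflictLookup that delegates to the real lookup (SiteLinkTable) and drops conflicts caused by item IDs the caller already knows about, such as the from-item of a merge. The method name and the shape of the conflict arrays are assumptions for illustration, not the actual Wikibase interface, and namespaces/use statements are omitted:

    // Sketch: decorate the real conflict lookup and filter out conflicts caused
    // by item IDs the caller has told us to ignore (e.g. the from-item of a merge).
    class ConflictIgnoringSiteLinkConflictLookup implements SiteLinkConflictLookup {

        private $innerLookup;

        /** @var string[] serialized item IDs whose conflicts should be ignored */
        private $itemIdsToIgnore;

        public function __construct( SiteLinkConflictLookup $innerLookup, array $itemIdsToIgnore ) {
            $this->innerLookup = $innerLookup;
            $this->itemIdsToIgnore = $itemIdsToIgnore;
        }

        // Assumed interface method: returns an array of conflict descriptors,
        // each containing the ID of the item already using a site link.
        public function getConflictsForItem( Item $item, $db = null ) {
            $conflicts = $this->innerLookup->getConflictsForItem( $item, $db );
            $ignore = $this->itemIdsToIgnore;

            return array_filter( $conflicts, function ( array $conflict ) use ( $ignore ) {
                // Keep only conflicts that are NOT caused by an ignored item.
                return !in_array( (string)$conflict['itemId'], $ignore, true );
            } );
        }
    }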
[13:28:39] mhhhm
[13:28:49] *goes to look at ItemHandler too*
[13:29:32] mhhhm, it's ugly of course because we only know what we want to ignore at runtime
[13:29:45] welll, deep in the runtime... :/
[13:30:11] there is an event dispatcher thing in WikiPageEntityStore
[13:30:48] maybe that is the place to make it aware about merges and then let SiteLinkConflictLookup know about stuff to ignore
[13:32:21] $errors = $this->removeConflictsWithEntity( $errors, $fromId );
[13:32:38] ah
[13:32:52] that's already in ChangeOpsMerge?
[13:34:22] there is
[13:34:27] oh
[13:34:32] but then i guess it validates again on save?
[13:34:41] but stuff is validated again on save.... yes
[13:34:51] mhhhm
[13:34:57] we can't remove the conflicting site links on save
[13:35:12] except we can ignore such violations potentially
[13:35:25] *goes to look at ItemMergeInteractor*
[13:35:30] possibly bad to ignore them though
[13:35:44] as things genuinely could have changed
[13:36:42] adding some sort of flag to entityStore->saveEntity might work?
[13:37:02] then when checking the conflicts the same logic as in the changeop could be applied
[13:37:39] well, a flag wouldn't work, but an extra param :/
[13:38:20] hm :/
[13:38:31] or, WikiPageEntityStore could collect the entity ids that it has saved in this run and then not shout about conflicts with them (hmm that seems ugly too)
[13:38:46] we can register a watcher some place
[13:39:13] still not sure that's right though :/ although in most cases there is only 1 edit in a runtime anyway
[13:40:15] really something should throw an exception regarding the conflicts, then further up the tree something looks at it and says, no, these conflicts are fine and saves again? :/
[13:40:37] and in that case there could be a flag for ignoring conflicts for saveEntity
[13:41:02] maybe
[13:41:35] I still don't think a flag is enough there though, as the thing further up the tree should explicitly say which conflicts it is okay with
[13:42:21] but problem is $status
[13:42:23] not ok
[13:42:29] it's not an exception
[13:42:48] EntityStore could have an addConflictsToIgnore method? :/
[13:43:29] so it could be.. save(), status with conflicts returned, conflicts we are okay with added to ignore list, resave :/
[13:45:52] hm
[13:46:56] would probably also need a uniform Conflict object?
[14:08:01] * aude starts with splitting SiteLinkConflictLookup from SiteLinkTable
[14:08:31] then may experiment with registering conflicts or site links with it, upon item merge save action
[14:10:00] okay :) ping me if you need any review aude!
[14:11:23] ok
[14:11:30] let's see if this works and is not too hacky
[14:15:04] :D
[14:16:41] SiteLinkConflictLookup could also be potentially moved to repo
[14:18:01] aude: it's used by SiteLinkTable which is used in DirectSqlStore which is in lib :P
[14:25:19] what?
[14:25:20] ok
[14:25:47] that's what i'm removing / splitting
[14:27:09] * aude submits a draft and see if jenkins approves
[14:29:37] :)
[14:30:05] i really think the only actual use is in repo for constraint checks, though MockRepository still implements SiteLinkConflictLookup
[14:30:33] though maybe that is not needed and it can all be moved :)
[14:37:03] good afternoon Lydia_WMDE!
[14:37:19] harej: hey
[14:37:43] harej: hello mr. president! :)
[14:37:53] ;-)
[14:50:03] andre__: per your question on #wm-bot the other day, the list is not on github! You manage the setting per channel through the bot!
[14:50:26] Didn't see anyone answer you there so figured I'd poke you :)
[15:08:20] addshore: thanks. makes sense. I asked because I was wondering if there is a "pull that single config file" way to check which IRC channels are logged, for some Community Metrics)
[15:09:55] andre__: http://bots.wmflabs.org/~wm-bot/dump/systemdata.htm
[15:10:04] that has all of the info on it, updated every 20 mins
[15:10:35] addshore: uh! ♥!!! Thanks.
[15:10:38] there isn't a structured form currently though
[15:11:30] andre__: I just opened a pull request to make pulling out the number of users for a channel easier https://github.com/benapetr/wikimedia-bot/pull/49 ;)
[15:11:44] could probably do one to make the other bits easier too
[15:11:46] hah
[15:11:50] it might just be worth making a json though!
[15:11:52] * aude adds tests for SiteLinkConflictLookup which never had tests in SiteLinkTable :(
[15:12:03] andre__: it seems we both looked for something like this at the same time :D
[15:12:59] addshore: well, I was just expanding/updating the list of "tracked" IRC channels on korma.wmflabs.org in the context of https://phabricator.wikimedia.org/T56230#1738468
[15:13:04] but that's still very messy territory
[15:13:20] well, "Iterations. We can believe in." I guess
[15:14:44] cool!
[15:37:49] Hmm, why is the topic locked and the access list so short? The merging thing should probably be in the topic
[15:40:06] multichill: poke Lydia_WMDE
[15:40:39] unlock the topicccc
[15:40:46] on it
[15:40:49] on it
[15:40:50] :D
[15:40:56] * Lydia_WMDE needs to look up the command
[15:41:10] aude: Besides breaking merge, did references also change?
[15:41:16] Suddenly all of them are expanded!
[15:41:16] Lydia_WMDE: I'll take over :)
[15:41:26] multichill: o really?
[15:41:26] JohnFLewis: but but but
[15:41:31] then i'll never learn :D
[15:41:36] can't imagine it's related
[15:41:52] maybe it's an issue with a gadget?
[15:42:01] I've noticed that references keep being expanded, it seems to go away when I edit the page, so maybe it's something to do with caching
[15:42:16] Lydia_WMDE: /cs set #wikidata mlock -t :)
[15:42:19] hmmm, maybe we need to bump parser cache
[15:42:25] For the future anyway
[15:42:27] JohnFLewis: aha!
[15:42:27] https://www.wikidata.org/wiki/Q581285 is all expanded for me
[15:42:29] * Lydia_WMDE tries
[15:42:42] o_O
[15:42:56] ok should be done
[15:42:56] can bump the cache
[15:43:09] it's not nice to do but apparently necessary :/
[15:43:24] multichill: still?
[15:43:28] i purged and it went away
[15:43:44] so indeed likely a caching issue
[15:46:04] * aude has a deploy window soon, although lots of things
[16:04:38] * hoo waves from SF :)
[16:05:22] enjoy the nice weather :)
[16:06:04] I will
[16:06:17] But also need to get some things done
[16:06:35] Will be in the office later on
[16:07:20] * aude waves
[16:07:43] Hi aude :)
[16:08:04] * aude waiting to deploy stuff
[16:08:16] wmf3 ?
[16:08:18] but there might be a problem with the jobqueue / redis
[16:08:25] Or backports?
[16:08:40] hoo: yeah, updating our branch + enabling wikibase client on new wikis :)
[16:09:14] If you wait for 1h or so, I can support
[16:09:25] But need breakfast first
[16:10:31] we have 2 hours or suppose however long needed
[16:14:03] Can I remove ORM yet? :P
[16:14:23] No :(
[17:43:05] multichill: https://www.wikidata.org/wiki/User:Multichill/Fuzzy_painter_matches
[17:43:07] :)
[17:43:12] yah!
[17:43:22] <3
[17:46:58] aude: Deployed, yet?
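On the expanded-references problem discussed above (stale parser cache entries that go away after an edit or a purge), the blunt wiki-wide fix aude and hoo mention is bumping the parser cache epoch. A sketch of the core mechanism only, not the exact production change:

    // Any cached page rendering older than this timestamp is considered stale
    // and re-parsed on the next view. Purging a single page (?action=purge)
    // achieves the same for just that page.
    $wgCacheEpoch = '20151020154500'; // illustrative timestamp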
[17:49:48] apparently not
[17:51:31] hoo: running scap, etc while ops are debugging job queue / db load issues seemed like a bad idea to me
[17:51:44] i'll try again tomorrow
[17:52:14] I can take care of it, if you want
[17:52:23] it's only 11am here
[17:54:15] well.... we have several things
[17:54:41] I see the change for pushing the branch to Wikibase wmf3 and several backports
[17:54:46] 1) update 1.27.3 wikidata
[17:55:08] 2) update wikimedia messages 1.27.3 and 1.27.2 with new messages (i hope and think it's the right thing to do)
[17:55:31] probably scap at this point
[17:55:36] Yeah, we did that before :)
[17:55:47] 3) enable adding site links to the new wikis on test.wikidata*
[17:55:53] make sure it works
[17:56:12] probably have to touch some things like SitesModule*
[17:56:12] Amir1: What happened with the formatting?
[17:56:33] SitesModule is "self purging"... will take up to 15m, though
[17:56:39] 4) once i18n stuff is good, then enable site links on wikidata
[17:56:44] make sure it works again :)
[17:56:46] worst case (10m own caching + 5m RL cache)
[17:56:58] then enable wikibase client on mediawiki.org, metawiki and wikispecies
[17:57:10] (i already updated sites table and added wbc_entity_usage)
[17:57:16] Nice :)
[17:57:17] don't have patch for this yet
[17:57:32] Is it possible to query wiki with FreeBase IDs like "/m/01mf0"?
[17:57:34] then check other projects i18n still works
[17:57:44] Also need to bump the cache epoch after adding the new sites for sitelinks, right?
[17:57:48] (on wikidata)
[17:58:05] hoo: not for this, afaik but people are reporting unrelated problems so yes, bump cache epoch
[17:58:16] there is no new site link section
[17:58:22] Oh, right... they will just be "other sites"
[17:58:33] * aude also planned to enable geodata (and maybe should do tomorrow)
[17:58:59] means new table on wikidata and update cirrus mapping config (in consultation with the search people)
[17:59:08] but no refreshlinks yet
[17:59:45] Ok :)
[17:59:47] the backports would be nice (esp. for wmf3) also
[17:59:50] I'll see what I can do today
[17:59:55] ok
[18:00:07] will certainly go for wmf3 and depending on how much comes up also for the sitelinks
[18:00:31] i already said on phabricator that we'd do sitelinks tomorrow, but don't think people would mind them earlier
[18:00:56] hehe :D
[18:01:02] wmf3 + wikimedia messages + scap would already help
[18:02:27] Yeah, certainly
[18:03:10] Is there anything else I should take a look at? Review, bugs, ...?
[18:04:19] i am working on item merge bug
[18:04:29] Ah, just wanted to ask about that
[18:04:42] Can it be reproduced on testwikidata?
[18:04:52] I see that the permission has been revoked there as well
[18:05:08] https://phabricator.wikimedia.org/T115892
[18:05:20] hoo: if you want to unrevoke, go ahead
[18:05:38] it's easily reproduced on my local wiki
[18:07:41] Ok... and the patch up for backporting addresses the issue?
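For context, the "enable wikibase client on mediawiki.org, metawiki and wikispecies" step in aude's list above is a wmf-config switch of roughly the following shape. The setting name and database names are written from memory and should be treated as illustrative, not as the actual patch:

    // Illustrative InitialiseSettings.php-style entry (names are assumptions):
    $wgConf->settings['wmgUseWikibaseClient'] = [
        'default'       => false,
        'wikipedia'     => true, // project families already enabled (illustrative)
        'mediawikiwiki' => true, // newly added: mediawiki.org
        'metawiki'      => true, // newly added
        'specieswiki'   => true, // newly added
    ];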
[18:08:01] Offhand I don't see why it would
[18:10:24] hoo: no
[18:10:41] but they are related bugs (one was appearing in the logs)
[18:10:45] Ok
[18:11:00] so not critical, but nice to have
[18:11:11] i think i have a solution, but then open to other solutions
[18:11:16] I think giving a master connection to SiteLinkTable would work as a stop gap
[18:11:22] no
[18:11:27] :/
[18:11:30] afaik it might have master already
[18:11:46] problem is updating site links table is now deferred
[18:12:19] so SiteLinkConflictFinder thinks there is a conflict when saving the to-item (based on still existing site links in the table for the from-item)
[18:12:30] err SiteLinkConflictLookup*
[18:12:38] oh, I see
[18:12:44] i want to add a mechanism for during merge, to ignore these
[18:12:48] and during merge only
[18:13:08] it's nasty how all the services are pulled together
[18:13:29] and then we never had enough tests for this stuff
[18:16:04] Could also post filter errors in ChangeOpsMerge::applyConstraintChecks
[18:16:12] But probably not going to be very nice
[18:20:32] aude: You might want to remove your -2 from https://gerrit.wikimedia.org/r/246782 now
[18:25:35] ok
[18:25:52] hoo: problem is not during change ops (only)
[18:25:57] it's on save, when we validate
[18:26:12] We validate twice?
[18:26:25] Well, depends on what services are used to save AFAIR
[18:26:25] we always validate on save
[18:27:48] hoo: https://gerrit.wikimedia.org/r/#/c/247626/ is what i came up with though it's not that nice
[18:28:08] and we might have similar issue (though not sure if it affects anything) with label constraints
[18:28:36] * aude needs to figure out how to test this
[18:29:34] :/ That's a little bit of black magic
[18:29:42] to just ignore what has been changed
[18:30:03] :(
[18:30:04] https://en.wikipedia.org/wiki/Magic_%28programming%29#Variants
[18:30:41] would be nicer to use an in process site link lookup, but the constraints don't use lookup
[18:33:22] mh
[18:33:47] anyway, open to ideas :)
[18:34:11] * hoo goes to update his stuff and reproduce this
[18:34:23] but think making site link update non-deferred is not a good option (if that's even possible now)
[18:35:01] Don't we have a (non-hacky) way to make it synchronous again?
[18:35:22] i doubt and not sure we want that
[18:35:58] entity store does not know that a merge is happening, for example
[18:36:04] maybe we could pass a flag
[18:36:27] Well, we want these updates to happen very fast
[18:36:49] unlike say updating the various link tables, the items per site table really shouldn't lag behind much
[18:37:44] true :/
[18:38:56] https://gerrit.wikimedia.org/r/#/c/244407/1/includes/page/WikiPage.php
[18:40:11] "For web requests this will be at request-end (post-send on HHVM)."
[18:40:15] I guess that's ok in general
[18:40:43] hoo: we could make these updates no longer EntityModificationUpdates if we really want
[18:40:58] but it's not great imho
[18:46:56] I have a fix
[18:47:00] but it's not very nice
[18:47:03] want me to upload it?
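To spell out the failure mode aude describes above in pseudo-Wikibase code (the class and method names here are assumptions, not the real code paths): the from-item's wb_items_per_site rows are only removed in a deferred update, so the conflict check that runs while saving the to-item still sees them.

    // 1) Saving the emptied from-item schedules the secondary-data cleanup
    //    as a deferred update, which only runs at the end of the request:
    DeferredUpdates::addCallableUpdate( function () use ( $siteLinkStore, $fromId ) {
        $siteLinkStore->deleteLinksOfItem( $fromId ); // wb_items_per_site rows removed post-send
    } );

    // 2) Still within the same request, the to-item (now holding the merged
    //    sitelinks) is saved, and its on-save validation reads the table:
    $conflicts = $siteLinkConflictLookup->getConflictsForItem( $toItem );
    // => the from-item's stale rows are reported as sitelink conflicts,
    //    so the merge save is rejected.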
[18:47:04] \o/
[18:47:06] sure
[18:48:00] * aude still would like my split of SiteLinkConflictLookup merged :)
[18:52:42] aude: https://gerrit.wikimedia.org/r/247633
[18:52:48] k
[18:52:53] Basically just not checking constraints on save is the fix
[18:53:16] hmmm, depends on what constraints
[18:53:17] We already check all of them in changeopsmerge (minus the ones we *don't want to* check), that is fine to do
[18:53:20] might be ok
[18:53:39] We check all constraints in ChangeOpsMerge
[18:53:47] the same we check on update
[18:53:48] k
[18:53:53] so that should be fine as far as I can tell
[18:54:41] * aude tries the patch
[18:56:19] think it will work though
[18:56:40] Go ahead :)
[18:57:04] I even tested it with a real conflict (by bringing my wb_items_per_site in an inconsistent state on purpose)
[18:58:53] * aude is enough satisfied
[18:58:59] \o/
[18:59:45] if you can also come up with some sort of tests that would have caught this, would be nice
[18:59:55] but doesn't have to be this patch
[19:00:01] I doubt we can
[19:00:05] would need browser tests for that
[19:00:17] yeah :(
[19:00:17] because the thing that matters here are post request handlers
[19:00:54] * aude doesn't make hoo write browser tests
[19:00:59] but would be nice
[19:01:08] I should learn how to do that again at some point
[19:01:10] if we had such
[19:01:16] last time I touched them they were still living in Wikibase
[19:01:26] me too
[19:01:55] * aude still thinks https://gerrit.wikimedia.org/r/#/c/247588/ is useful improvement though
[19:02:07] Yeah, it is
[19:02:08] but not critical or anything
[19:02:20] not sure why I didn't do that after splitting up the interfaces
[19:02:30] Ok with you to backport my fix later on?
[19:03:12] cherry picking
[19:03:17] but go ahead with them
[19:05:27] :)
[19:05:43] * hoo hopes that it's ok to deploy later on
[19:05:45] * aude should find food now and get back to my hostel :P
[19:05:56] last night at the hostel for now :)
[19:06:23] maybe around some later
[19:06:24] Are you in Berlin again?
[19:06:27] yes
[19:08:51] :)
[19:11:07] oh, so we want to go to wmf.3b
[19:11:48] Confusing :D
[19:17:04] addshore: Any reason you didn't tag DataTypes 0.5.1 yet?
[19:17:12] I saw you update the release notes
[20:43:42] JeroenDeDauw: addshore: Released DataTypes 0.5.1 now
[20:43:53] also github has problems with timezones :D
[20:44:00] " @mariushoch mariushoch released this 6 hours ago " :D
[20:47:36] * aude back :)
[20:48:24] :)
[20:48:41] I'm about to start with the deployment
[20:48:46] greg-g is ok with it
[20:49:08] so... just fyi, I just found out our jobqueue is growing without shrinking, pretty badly, I need to figure this out in the next 10 minutes
[20:49:29] that's why i stopped
[20:50:47] aude: Any reason you also did the cherry pick for wmf1? Or was that just in case we don't want to go to wmf3b now?
[20:50:52] i was afk for the morning and didn't realize it was still going on
[20:58:02] hoo: thought i did wmf1 (still deployed now on wikidata)
[20:58:23] and wmf3b which is what i want for today on group0 and tomorrow on wikidata
[20:58:40] wmf3 is still on test.wikidata but won't go to wikidata
[20:58:57] aude: I wanted to put wmf3 to Wikidata, then update messages, then scap
[20:59:54] wm3 what?
[20:59:57] wmf38
[20:59:58] ah
[21:01:29] hoo: group1 is tomorrow afaik
[21:01:33] hoo: y u no release in the future? way cooler
[21:01:45] * aude would like new code on test first
[21:02:16] aude: so... wait for test to go to wmf4, then push our wmf3b to wmf4
[21:02:28] In that case, our wmf3b should be a wmf4
[21:03:11] there is no wmf4
[21:03:15] until next week
[21:03:25] oh right
[21:03:27] * hoo moans
[21:03:48] In that case we would need to downgrade Wikidata first
[21:03:49] https://wikitech.wikimedia.org/wiki/Deployments#Tuesday.2C.C2.A0October.C2.A020
[21:03:59] * hoo doesn't want to do that
[21:04:23] no, test.wikidata has wmf3 core + wmf1 wikibase
[21:04:34] aaaaaaaaaaaaaaaa
[21:04:37] so does wikidata
[21:04:45] or am I confused now?
[21:04:55] no, test has wmf3 core + wmf3 wikibase
[21:04:58] afaik
[21:05:03] how's that possible?
[21:05:09] and wikidata has wmf2 core + wmf1 wikibase
[21:05:20] in that case Special:Version on WD is off
[21:05:26] because we halted deployments on wednesday before anything got deployed
[21:05:29] * aude looks
[21:06:02] according to wikiversions.json Wikidata is on wmf3
[21:06:11] oh
[21:06:28] * aude wonders how/when
[21:06:52] hm
[21:07:38] could explain the merge items bug and why it appeared now
[21:08:02] yeah, almost surely
[21:08:46] travis is broken *sigh*
[21:08:59] oh, I see why
[21:09:01] easy fix
[21:11:23] i see "group1 wikis to 1.27.0-wmf.3" in the git log
[21:12:06] but thought that never was deployed yet before there were issues last week
[21:20:14] aude: Fixes travis (and probably jenkins on master) https://gerrit.wikimedia.org/r/247724
[21:20:26] eg. https://travis-ci.org/wikimedia/mediawiki-extensions-Wikibase/jobs/86489248
[21:20:33] 9:43 PM " @mariushoch mariushoch released this 6 hours ago " :D
[21:20:34] nice
[21:21:18] hi addshore :D
[21:21:19] hoo: aude anything interesting breaking this time? ;)
[21:21:23] The release actually broke stuff
[21:21:31] reallyyY?
[21:21:33] addshore: We fixed the merge stuff
[21:21:36] >.>
[21:21:41] Just the tests
[21:21:43] hoo: yay! how?
[21:21:47] but still, surprising
[21:21:53] https://travis-ci.org/wikimedia/mediawiki-extensions-Wikibase/jobs/86489248
[21:23:08] hoo: i'm not so happy that the deployment calendar is wrong now :/
[21:23:18] what is the significance of the b in this branch name? wmf/1.27.0-wmf.3b
[21:23:30] addshore: things are very confusing
[21:23:35] addshore: b is the new wmf3 branch
[21:23:38] all new and shiny
[21:23:54] according to the deployment calendar, test wikis have wmf3 core + wmf3 wikibase
[21:24:01] and wikidata has wmf2 core + wmf1 wikibase
[21:24:16] but apparently wmf3 is on wikidata also
[21:24:26] hoo: the new branch? what happened to the old branch?
[21:24:40] addshore: Which old branch?
[21:24:41] addshore: wmf3 is on wikidata now
[21:24:44] wmf3 is currently deployed
[21:24:45] yes
[21:24:56] why is it called wmf/1.27.0-wmf.3b not wmf/1.27.0-wmf.3 then?
[21:25:00] so we have the options of pushing new code directly to wikidata without time on test* only
[21:25:02] oh, beta branch?
[21:25:05] O_o
[21:25:16] aude: Yes
[21:25:23] or downgrade Wikidata
[21:25:28] but that sounds even more horrible
[21:25:34] I'd go to Wikidata directly
[21:25:39] doesn't seem *so* scary
[21:25:41] i'd say temporary downgrade maybe
[21:25:50] ugh :(
[21:25:54] but then wikivoyage etc. still has wmf3
[21:26:08] and downgrading all group1 seems bad
[21:26:23] hoo: how about we can try the code on mw1017?
[21:26:31] That sounds sensible
[21:26:58] * aude confident the code is unproblematic but still scary to go right to wikidata + wikivoyage etc
[21:27:15] yeah, let's go to mw1017 first
[21:27:27] Still, need to figure what's up with the jobs first
[21:27:31] and probably would be good to let greg know the deployment calendar is inaccurate
[21:27:34] not sure anyone is actually looking into that right now
[21:27:38] yeah
[21:27:43] i thought ori was
[21:28:06] from a wikibase perspective, it would be nice if we could reduce the impact of the addUsages jobs
[21:28:15] How?
[21:28:19] but not sure that's the issue and it's a general issue that affects both that and refreshlinks
[21:28:24] I think there's a to do about that
[21:28:26] https://grafana-admin.wikimedia.org/dashboard/db/job-queue
[21:29:00] and http://graphite.wikimedia.org/render/?from=-7d&width=1054&height=487&_salt=1445369534.106&target=MediaWiki.jobqueue.abandons.deleteLinks.count&target=MediaWiki.jobqueue.inserts.*.count
[21:29:06] //TODO: Before posting a job, check slave database. If no changes are needed, skip update.
[21:29:13] yeah
[21:29:38] https://phabricator.wikimedia.org/F2746456 (3 months)
[21:30:03] Wow, that's pretty much
[21:30:06] but not too surprising
[21:30:18] given it basically has the same scope refreshlinks has
[21:30:20] it's count for pop
[21:30:34] scope as in pages affected
[21:30:47] yep
[21:32:16] * hoo points out https://gerrit.wikimedia.org/r/247724 again
[21:32:20] trivial
[21:32:32] * aude clicks
[21:34:28] ori is ok with scaping the things out
[21:34:31] so let's do that
[21:34:35] ok
[21:43:31] hoo: bah, we have to specify a specific version of data types or else backport your fix
[21:43:51] and update datatypes
[21:44:00] aude: No, we can just not update it on the branch
[21:44:10] * hoo did update wikibase/wikibase only
[21:44:21] Unless there's other stuff we want to pull in
[21:44:22] true, but jenkins says no
[21:44:36] argh
[21:44:37] * aude prefers jenkins says yes, even though we could ignore this
[21:44:37] right
[21:44:56] I was only thinking about the build
[21:45:01] ok, so let's backport that as well
[21:45:16] k
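The //TODO aude quotes above (for the addUsages jobs) amounts to comparing the newly collected usages against what a replica already records and only enqueueing a job when something actually changed. A sketch under assumed names — the lookup, the accumulator and the job parameters are illustrative, not the real Wikibase client code:

    // Read the current usage entries for the page from a replica (assumed lookup).
    $currentUsages = $usageLookup->getUsagesForPage( $pageId );

    // Usages collected while parsing the page (assumed accumulator).
    $newUsages = $usageAccumulator->getUsages();

    // Only post a job if there is anything new to record.
    $added = array_diff_key( $newUsages, $currentUsages );
    if ( $added !== [] ) {
        JobQueueGroup::singleton()->push(
            new JobSpecification( 'wikibase-addUsagesForPage', [
                'pageId' => $pageId,
                'usages' => array_values( $added ),
            ], [], $title ) // $title: the Title object of the page being updated
        );
    }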