[00:00:01] Over and out.
[02:30:36] !bug 36795
[02:30:36] https://bugzilla.wikimedia.org/show_bug.cgi?id=36795
[07:42:22] New patchset: Hashar; "fix android nightly builds" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/14173
[07:42:43] New review: Hashar; "Been on production for a few days already." [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/14173
[07:42:45] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/14173
[12:45:52] hashar: the run is completed
[12:45:53] no new failures in IE (textSelection in IE9/10 and mw.Uri in IE6-8)
[12:45:54] 5 and 7
[12:45:54] https://gerrit.wikimedia.org/r/#/c/14057/
[12:46:02] \O/
[12:46:53] I'm currently experimenting with letting 2 instances of node-browserstack run simultaneously. So far we've never done that, not even at jQuery
[12:46:56] (to speed it up)
[12:47:13] I requested 2 accounts and both are working now
[12:48:02] should get us up to 20 simultaneous browsers (the BrowserStack API allows ~10 simultaneous browsers per account)
[12:48:20] though 10 different browsers. the same browser ID can only be started 2 times at a time
[12:48:26] so 20 browsers, or 4 of the same (with 2 accounts)
[12:48:33] pretty neat
[12:49:48] hashar: Can you point me to where the code is that outputs something to the jenkins build console?
[12:49:53] I want to have the url to the testswarm job there
[12:50:05] the api returns that url, so that should be perfectly possible
[12:51:43] hey guys
[12:52:33] Krinkle: that would be the jobs/_shared/build.xml ant script
[12:52:41] okay
[12:52:50] hashar: eh...
[12:52:55]
[12:52:59] can you guys help me sync a change to the mediawiki-config repo I just merged?
[12:53:00] hashar: How do I read the request response, and parse JSON and output to console?
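[Aside: the "parse JSON and output to console" question above could be answered with a few lines of node. This is a hedged sketch only — the response shape ({ addjob: { id } }) and the swarm base URL are assumptions, not TestSwarm's documented API.]

```javascript
// Hedged sketch: turn a TestSwarm-style addjob JSON response into a
// clickable job URL for the Jenkins build console. The response shape
// and URL layout are assumptions for illustration.
function jobUrlFromResponse(swarmBase, body) {
    var data = JSON.parse(body);
    if (!data.addjob || !data.addjob.id) {
        throw new Error('Unexpected TestSwarm response: ' + body);
    }
    return swarmBase + '/job/' + data.addjob.id;
}

// Example of what the submit step could log to the console:
var url = jobUrlFromResponse(
    'https://integration.example.org/testswarm',
    '{"addjob":{"id":1234,"runTotal":18}}'
);
console.log('Job submitted to ' + url);
```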
[12:53:10] which is the worst ant target ever :(
[12:53:23] oh sorry wrong target
[12:53:26] I mean
[12:53:43] so yeah the is supposed to pass the output to stdout
[12:53:45] Right, so I should just do it from the php script, and ant will output it?
[12:53:54] testswarm-submit.php that is
[12:53:56] you can try by playing with ant locally
[12:54:05] yeah that .php script ${testswarm.submit.script}
[12:54:11] I am not sure it returns anything right now
[12:54:25] ok
[12:54:27] so something like: print "Job submitted to $url\n"; would work
[12:54:31] simple enough :-]
[12:57:11] ok
[13:36:09] Krinkle: I will be out for the rest of the day in a few minutes
[13:36:22] Krinkle: but will definitely review / deploy anything you send :-]
[13:58:21] hashar: okay. For now I'm plugging along on labs.
[13:58:38] hopefully by the end of this week or during the hackathon next week we'll set up wmf labs as primary testswarm
[14:08:18] Krinkle: have you puppetized it somehow ? :D
[14:08:33] no, not at all
[14:08:38] when was I supposed to do that :P
[14:08:42] anyway, switching is just about changing the submit script to send to a different URL I guess
[14:08:48] yeah
[14:08:49] we could even submit to both instances
[14:08:57] so you can eventually compare
[14:09:04] hashar: by the way, do you know yet what we're going to do with jenkins?
[14:09:10] https://fr.wikipedia.org/wiki/Fichier:Wikipedian.png
[14:09:19] According to Ryan we shouldn't have the primary jenkins be in labs
[14:09:23] Jenkins will be moved to labs
[14:09:24] because gerrit depends on it
[14:09:27] which makes sense
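[Aside: the ant side of the above could look roughly like this. The target name and the exact layout of jobs/_shared/build.xml are assumptions; only the ${testswarm.submit.script} property name comes from the conversation.]

```xml
<!-- Hedged sketch: run the PHP submit script and let its stdout pass
     through to ant's own stdout, which Jenkins captures in the build
     console. A print "Job submitted to $url\n" in the script would
     then show up in the console log. -->
<target name="testswarm-submit">
  <exec executable="php" failonerror="true">
    <arg value="${testswarm.submit.script}" />
  </exec>
</target>
```

With `<exec>` and no `output` attribute, ant forwards the child process output, so nothing extra is needed on the Jenkins side.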
[14:09:41] i.e. no production in labs
[14:09:46] ohhh
[14:09:48] well hmm
[14:10:05] that is quite the opposite of what we thought we would do
[14:10:11] aka move the integration system entirely to labs
[14:10:18] so we can "play" with it without resorting to ops
[14:10:23] Right
[14:10:38] I also want at some point to set up Jenkins slaves
[14:10:42] hashar: we can play with it though, that's not the point. But would it be the primary?
[14:10:47] to get tests run against postgres/mysql as well
[14:11:16] maybe the main jenkins instance could stay in production though
[14:11:19] is performance an issue? I imagine it may be slow in labs, we don't want that to happen.
[14:11:31] then we'll need node and stuff :)
[14:11:42] shouldn't be too impossible
[14:11:55] we don't have to use npm as package manager, node is just node.
[14:12:40] I guess labs is in a best effort status
[14:12:49] anyway, jenkins is not going to migrate anytime soon
[14:12:55] got too many things to do in July already
[14:13:02] and in August I am on vacation for most of the month
[14:13:05] I want to get linting running
[14:13:13] which needs node
[14:13:24] and result aggregation into the build graphs, which needs node
[14:13:31] we had a debian package at some point
[14:13:36] (so that builds fail if testswarm fails)
[14:13:37] though it did not receive much love
[14:13:46] node is in apt-get now
[14:13:54] the central ubuntu one
[14:14:02] well we can't wait for testswarm when doing a build, that needs to be done asynchronously I think
[14:14:25] I didn't know but I found out last week
[14:14:26] although with all the browserstack power, tests might run fast
[14:14:35] hashar: I'm not sure, I think we should wait for it. But we need to make it run faster, yes.
[14:14:47] And we need to make sure it is easy to disable that requirement in case browserstack has issues.
[14:15:11] Ubuntu Precise ships node 0.6.12
[14:15:12] which so far I haven't seen at jquery (it's been 6 months), but still.
[14:15:45] I would prefer the Testswarm result to be a second comment
[14:16:00] so we get Jenkins to report linting / PHPUnit failures and just fail the build
[14:16:12] report the change OK
[14:16:15] then send to testswarm
[14:16:23] then add yet another OK / FAIL status
[14:16:36] I disagree, we don't want to merge something that fails.
[14:16:41] for a lot of changes, we probably are not interested in testswarm results
[14:16:51] I'm usually not interested in db or php results
[14:16:57] hehe
[14:17:10] but you need db/php tests to pass before installing a mediawiki
[14:17:17] or you risk having false positives in testswarm tests
[14:17:27] sure, the child jobs run in order
[14:17:32] that's always been the case
[14:17:36] as an example, a backend API could be faulty, and that would show up as a failing test in QUnit
[14:17:38] if any child job fails, the next doesn't start, right?
[14:17:45] that depends
[14:17:58] the PHPUnit tests are always run IIRC
[14:18:04] not if lint fails
[14:18:11] and if not, we can make it that way
[14:18:16] yeah lint is a build step of the main project
[14:18:48] it is not very clear in the Jenkins configuration though :-]
[14:19:10] so merge >| lint >| phpunit >| testswarm
[14:19:44] I would say so
[14:19:52] or we can do phpunit / testswarm in parallel
[14:20:01] I definitely need to rewrite the testswarm-snapshot target
[14:20:05] should be made simpler
[14:20:24] but first I want to add Wikidata and extensions testing
[14:20:31] anyway, short term: * report testswarm url in console * get js linting * report testswarm url in gerrit comment * (long term future): make build dependent on outcome
[14:20:33] that is what I am going to focus on for the next weeks
[14:20:47] url in console is trivial
[14:20:50] definitely
[14:20:55] I know
[14:21:41] for nodejs we have 0.4.9-wm2
[14:21:47] which is probably totally out of date
[14:22:28] lacking patches etc
[14:22:43] it might be possible to backport the latest node version to Ubuntu Lucid
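[Aside: the merge >| lint >| phpunit >| testswarm chain agreed above is a short-circuiting pipeline. A minimal sketch of that behavior — stage names are from the conversation, the runner itself is purely illustrative, not Jenkins code:]

```javascript
// Hedged sketch of the discussed build pipeline: run each stage in
// order and stop at the first failure, the way dependent child jobs do.
// Each stage is an object with a name and a run() returning pass/fail.
function runPipeline(stages) {
    var ran = [];
    for (var i = 0; i < stages.length; i++) {
        ran.push(stages[i].name);
        if (!stages[i].run()) {
            return { status: 'FAILED', ran: ran };
        }
    }
    return { status: 'PASSED', ran: ran };
}

// Example: lint fails, so phpunit and testswarm never start.
var result = runPipeline([
    { name: 'merge', run: function () { return true; } },
    { name: 'lint', run: function () { return false; } },
    { name: 'phpunit', run: function () { return true; } },
    { name: 'testswarm', run: function () { return true; } }
]);
console.log(result.status, result.ran.join(' >| '));
// → FAILED merge >| lint
```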
[14:23:12] reporting the testswarm url in gerrit, you could use ssh gerrit approve "(some comment)"
[14:23:29] that can be done as an ant target by parsing the output of the submit script
[14:23:40] "make build dependent on outcome" I am not sure what you mean there
[14:24:51] hashar: that means we create a child job (like for the various PHPUnit tests) that uses node-testswarm to submit the job (instead of curl), gets a callback when the job is done in testswarm, makes the pretty graphs, and returns PASSED or FAILED, which will affect the main git fetcher job
[14:25:49] ahhhh
[14:25:53] yeah that would be nice
[14:25:57] hashar: I'd like the url to testswarm to be in the main comment, no need to create a separate comment. This url should be able to go in there, because the url is known right away when the submission happens, which happens before the comment is created.
[14:26:33] the main comment is a configuration statement in Jenkins :/
[14:26:38] not sure we can alter it easily
[14:26:58] will have to be looked at
[14:27:00] we'll figure out a way
[14:27:07] but I am pretty sure we will have to get a second comment
[14:27:13] or perhaps disable the comment thing from the Gerrit plugin and create our own entirely
[14:27:27] well I would prefer we don't reinvent something :-]
[14:27:35] we will see
[14:27:42] it is not that important anyway
[14:27:44] No, that's a broken workflow. Getting 2 notifications and stuff and everything. Then I'd rather find the url myself from the console
[14:28:04] I mean it will be annoying, but not a blocker per se
[14:28:05] ;)
[14:28:11] sure
[14:28:27] hashar: You're going to be at Wikimania during the dev days?
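[Aside: the child job proposed at 14:24:51 (node-testswarm submit, callback on completion, return PASSED or FAILED) could reduce to something like the following. The per-browser result shape here is a hypothetical stand-in, not node-testswarm's actual callback data:]

```javascript
// Hedged sketch of the proposed child job's final step: once TestSwarm
// reports the job finished, reduce per-browser results to one verdict.
// The { browser, passed, failed } shape is an assumed example format.
function swarmVerdict(browserResults) {
    var anyFailed = browserResults.some(function (r) {
        return r.failed > 0;
    });
    return anyFailed ? 'FAILED' : 'PASSED';
}

// A FAILED verdict would make the child job exit non-zero, which in
// turn fails the main git fetcher job.
var verdict = swarmVerdict([
    { browser: 'IE7', passed: 120, failed: 0 },
    { browser: 'Firefox 13', passed: 120, failed: 0 }
]);
console.log(verdict); // → PASSED
process.exitCode = verdict === 'PASSED' ? 0 : 1;
```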
[14:28:29] I don't want us to be blocked on publishing the URL just because we have to spend tons of time writing something new
[14:28:43] I prefer to publish the URL as an annoying second comment, then fix it up :-]
[14:28:45] you said yourself, it is as easy as ssh gerrit approve "(some comment)"
[14:28:50] at least the users get the URL
[14:28:54] yeah
[14:28:57] probably easy
[14:29:00] will have to look at it
[14:29:09] and I am not going to Wikimania
[14:29:16] skipped that to leave my slot to someone else
[14:29:34] + got my little family that really needs my attention :-D
[14:29:38] since it is not blocking, I'd say no second comment. Either do it well or not at all. There are other ways to get the url (namely clicking the jerkins build > console > testswarm)
[14:29:41] ok
[14:30:42] disclaimer: "Jerkins" is not a joke spelling, it is mac os x correcting "jenk" to "jerk" while I'm typing the word.
[14:31:14] * Krinkle adds word "jenk" spelling
[14:31:42] hashar: can you merge https://gerrit.wikimedia.org/r/#/c/14057/ by the way?
[14:32:43] sure
[14:33:29] Krinkle: I am not confident in merging that one sorry
[14:33:29] :-(
[14:33:34] ok
[14:33:39] I need to look at it carefully
[14:33:47] just commented about how it would be nice to have that in the swarm
[14:34:06] maybe by resubmitting a patchset that will resubmit a job to the swarm and make that run first?
[14:34:14] I think testswarm distributes the latest job to the clients
[14:34:26] Like I said, we already have the swarm result
[14:34:26] it's complete
[14:34:37] from the link in my comment
[14:34:43] hooo
[14:35:07] still has some failing tests ;-D
[14:35:12] I guess they were there already
[14:35:19] yes, but those are in testswarm-2 as well.
[14:35:22] Have been for months
[14:35:25] anyway, no time to review today sorry, might poke at it tomorrow
[14:35:43] Okay if you don't want to merge it, no problem.
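[Aside: combining the two pieces discussed above — parsing the submit script's "Job submitted to $url" line, then posting it via ssh gerrit approve — could look like this. The host, port, and exact `gerrit approve` flags are assumptions, not verified against this Gerrit install:]

```javascript
// Hedged sketch: extract the TestSwarm URL from the submit script's
// stdout and build the ssh command to post it as a Gerrit comment.
// Hostname, port, and the --message flag are illustrative assumptions.
function extractSwarmUrl(stdout) {
    var m = /Job submitted to (\S+)/.exec(stdout);
    return m ? m[1] : null;
}

function gerritCommentCommand(change, patchset, url) {
    return 'ssh -p 29418 jenkins@gerrit.wikimedia.org gerrit approve ' +
        '--message "TestSwarm results: ' + url + '" ' +
        change + ',' + patchset;
}

var url = extractSwarmUrl('Job submitted to https://swarm.example/job/1234\n');
console.log(gerritCommentCommand(14057, 2, url));
```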
Just wanted to make sure you know that the swarm has checked it, re: -1
[14:35:52] yeah that is great ;)
[14:35:57] can't merge it now*
[14:36:00] also
[14:36:05] we had to revert a change this morning
[14:36:09] I noticed
[14:36:15] already on top of it :) Bug report and upstream bug
[14:36:18] probably trivial to fix
[14:36:27] not really, unfortunately
[14:36:29] but we had no idea how to fix it :-D so we just reverted
[14:36:45] sounds good.
[14:37:06] commented on 14057
[14:37:09] and I am off now
[14:37:20] have to prepare a dinner for tonight. See you tomorrow! ;)
[14:37:28] okay, have a nice holiday (in case I don't speak to you before then)
[14:37:38] enjoy WikiMania :-]
[14:37:47] cya!
[18:35:38] Crazy question, is it possible to further subdivide an extension's gerrit repository to contain multiple things?
[18:36:12] Just a curiosity at this point, but I have multiple things that I maintain as a part of this project, it'd be nice to have them all handy at once
[18:49:54] submodules?
[18:50:22] Reedy: Right. But can I create those sans admin intervention?
[18:50:31] yup
[18:50:37] Add a .gitmodules file
[18:52:54] Reedy: Any more guidance? A link perhaps? Or shall I search-engine with that much?
[18:54:47] http://wikitech.wikimedia.org/view/How_to_deploy_code#Case_1d:_new_extension
[18:54:55] Vaguely shows you what you need
[18:56:14] Reedy: Hm, seems like I'll need to create the repository before I do that
[18:57:43] yeah..
[18:57:51] Perhaps I've been less than specific, but I have three repositories for separate projects, indeed, projects which stand alone. But they are things I maintain as part of this project, and should probably be included in the WMF repos somewhere. But they're not MW extensions, so there's no place for them really.
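[Aside: the .gitmodules approach Reedy mentions looks like this. The submodule name, path, and URL below are placeholders, not real repositories:]

```ini
# Hypothetical example of a .gitmodules entry; path and url are
# placeholders for illustration only.
[submodule "plugins/ep_example"]
	path = plugins/ep_example
	url = https://gerrit.wikimedia.org/r/p/example/ep_example.git
```

In practice you would run `git submodule add <url> <path>` in the parent repository, which writes this entry and stages the submodule's commit pointer; people cloning the repo then run `git submodule update --init` to fetch the contents.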
[18:58:32] that's why we have mediawiki/tools
[18:58:50] Reedy: Well, they aren't tools either, they're plugins for Etherpad Lite
[18:59:18] heh
[18:59:40] well, you can host them offsite (github or whatever)
[18:59:49] or just ask for a top-level set of repos to be created
[18:59:52] like there are for translatewiki
[19:00:18] Reedy: *nod* I've had them on gitorious
[19:00:23] I guess that's an OK solution for now
[19:01:41] Ah well.
[19:02:05] Reedy: Pretty lonely around here today, eh? Everyone's off on this side of the pond
[19:02:13] indeed
[19:02:16] Midweek slackers
[19:02:26] Reedy: Aye. Not I, said the Mark.
[19:02:41] (though I'm starting to realize the shortness of my TODO list)
[21:37:30] any wikidata folks here?
[21:38:05] ah #wikimedia-wikidata
[21:38:07] interesting
[23:06:09] Krinkle: accidentally undid your rebase when I tried to check in my changes :(
[23:10:19] Krinkle: fixing...
[23:10:28] why escaping is better than validation: http://www.theregister.co.uk/2012/07/04/accenture_slips_up_on_pcehr_again/
[23:12:20] :/
[23:15:07] Seems like you guys hire the same sort of people the NHS does here
[23:17:42] TimStarling: amen. I hate it when sites have stupid validation restrictions.
[23:17:55] especially when it comes to names, phone numbers and e-mail addresses
[23:18:05] actually, pretty much any form value.
[23:19:03] intel asked me for my phone number recently when I registered to download a free trial of some software
[23:19:13] so I gave it to them, but it said "that's not a valid phone number"
[23:19:21] so I gave them the WMF office number instead
[23:20:08] their salesmen can call reception
[23:21:16] Krinkle: Looks like it didn't matter anyway, the rebase just picked up a change to a deleted file.
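[Aside on the validation tangent above: the anti-pattern being complained about is a regex that encodes one local number format. This is a made-up illustration, not code from any site mentioned; the permissive alternative just normalizes and keeps the leading plus:]

```javascript
// Hedged illustration: a strict pattern like this rejects perfectly
// valid international numbers (this regex is an example of the
// anti-pattern, invented for illustration).
var strict = /^[0-9]{3}-[0-9]{3}-[0-9]{4}$/;

// More forgiving: strip formatting characters, keep a leading "+",
// and reject only input that contains no usable digits.
function normalizePhone(input) {
    var s = input.trim().replace(/[\s().-]/g, '');
    return /^\+?[0-9]+$/.test(s) ? s : null;
}

console.log(strict.test('+44 20 7946 0958'));      // the strict regex rejects it
console.log(normalizePhone('+44 (20) 7946-0958')); // the normalizer accepts it
```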
My revision of the Example extension is checked in now: https://gerrit.wikimedia.org/r/#/c/14273/
[23:21:51] I think it was the plus sign, but you can't have an international number without a plus sign
[23:22:03] kaldari: not a deleted file
[23:22:14] kaldari: HelloWorld.i18n.php was preserved
[23:22:50] it is?
[23:23:52] looks deleted to me
[23:24:28] nope
[23:24:29] it's a rename
[23:24:30] gerrit doesn't always know how to show those, but git-rebase will know.
[23:24:44] you don't even have to use 'git mv' sometimes. it knows based on content that it is related
[23:25:06] Krinkle: I think you're thinking of HelloWorld/HelloWorld.alias.php
[23:25:11] er HelloWorld/HelloWorld.alias.php
[23:25:30] indeed
[23:30:38] Krinkle: What are your thoughts about whether or not to include magic word functionality in BoilerPlate?
[23:31:03] I didn't include it, and you asked to remove the left-over I forgot to remove.
[23:31:10] which I did
[23:31:21] I don't want to have to remove anything when cloning BoilerPlate
[23:31:32] ^ that being the general use case
[23:33:46] yeah, I agree
[23:44:27] kaldari: I added comments to your /examples change set, but I wrote them before you submitted PS2, so make sure you do see them; just navigate to PS1 in gerrit
[23:44:28] they still apply mostly
[23:45:00] will do
[23:45:15] I'm not sure how to fix the HEAD on your change that I messed up :(
[23:45:57] everything I've tried hasn't worked
[23:50:55] kaldari: What do you mean?
[23:51:51] I need to undo that patch to your change
[23:53:27] but it's not a commit, so I'm not sure how to undo it
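[Aside: the content-based rename detection described at 23:24:44 is easy to demonstrate. A hedged sketch in a throwaway repo — the file names just echo the conversation; `-M` enables rename detection explicitly, though recent git turns it on by default:]

```shell
# Hedged demo: git infers a rename from content, even without `git mv`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name example
git config user.email example@example.org

printf 'example i18n content\n' > HelloWorld.i18n.php
git add HelloWorld.i18n.php
git commit -qm 'initial'

# Plain mv, no `git mv`:
mv HelloWorld.i18n.php Example.i18n.php
git add -A
git commit -qm 'rename file'

# --name-status with -M reports the rename; an identical file shows
# up as R100 (100% similarity).
status=$(git show -M --name-status --format= HEAD)
echo "$status"
```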