[00:36:22] Can someone with Gerrit admin please abandon https://gerrit.wikimedia.org/r/#/c/20233/ ? It was an incorrect submit.
[00:40:47] siebrand: Done
[00:41:14] RoanKattouw: ty
[02:56:34] hi pastakhov__!
[03:15:49] sumanah: hi!
[03:16:03] pastakhov__: happy to see the contributions you've been making lately
[03:16:12] pastakhov__: will you be joining in some of our live tech chats in the near future?
[03:16:32] https://www.mediawiki.org/wiki/Meetings/2012-11-29
[03:18:36] I speak a little English :(
[03:20:21] sumanah: language barrier prevents me.
[03:21:13] pastakhov__: :( thank you for telling me that, that's important for me to remember
[03:24:35] sumanah: In any case, thank you for the invitation :-)
[03:24:42] sure! :)
[03:25:09] also pastakhov__ I want you to know about https://meta.wikimedia.org/wiki/Participation:Support
[03:25:20] to help you attend events that might help you contribute more effectively
[11:16:32] New patchset: Hashar; "silent all pipelines for production deployment" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34703
[11:16:56] New review: Hashar; "Going to double check that in labs." [integration/zuul-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/34703
[11:16:56] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34703
[11:57:10] New patchset: Hashar; "correct junit path in Zuul jobs." [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/34707
[11:57:11] New patchset: Hashar; "zuul job: mediawiki-core-install-sqlite" [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/34708
[11:57:32] Change merged: Hashar; [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/34707
[11:57:39] Change merged: Hashar; [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/34708
[12:00:23] New patchset: Hashar; "Merge in Zuul triggered jobs." [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34709
[12:00:54] New review: Hashar; "The Zuul branch hosted my experiment in labs. That is now landing in production." [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/34709
[12:00:54] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34709
[12:52:40] New review: Erik Zachte; "No comments, looks good" [analytics/wikistats] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/33858
[14:20:11] New patchset: Hashar; "template to install mw against a database" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34713
[14:20:11] New patchset: Hashar; "fix build.xml path in production" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34714
[14:20:58] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34713
[14:21:08] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34714
[15:11:59] New patchset: Hashar; "use full path to build.xml file" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34715
[15:52:26] New patchset: Hashar; "WMF wrapper around grunt" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34720
[15:53:14] New review: Hashar; "Added saper and PLatonides since I think they like hacking shell scripts." [integration/jenkins] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/34720
[18:42:50] New patchset: Stefan.petrea; "Added conf for wikistats only for editors" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/33858
[20:15:28] New review: Krinkle; "Once we start using it in multiple places, I will. I want to make sure this works as intended and ma..." [integration/grunt-contrib-wikimedia] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/34488
[20:16:44] New patchset: Stefan.petrea; "Added conf for wikistats only for editors" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/33858
[20:25:33] New patchset: Hashar; "gate job is now Independent" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34733
[20:25:53] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34733
[20:37:05] New review: Diederik; "Ok." [analytics/wikistats] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/33858
[20:37:05] Change merged: Diederik; [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/33858
[21:55:01] New patchset: Hashar; "project for operations/puppet" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34847
[21:55:15] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/34847
[22:46:19] hashar: ping
[22:48:43] Krinkle: heading to bed right now :-]
[22:49:04] I got like a couple of minutes
[22:49:23] hashar: okay, I wanted to quickly hear your thoughts on something
[22:49:51] hashar: If we start deferring the main test suite to on-merge (and only do lint on submission to Gerrit), that means the duration of the test suite becomes even more important.
[22:50:03] Because that means submitting two changes can still cause a merge conflict.
[22:50:28] (because clicking "Submit change #" would effectively be an asynchronous command)
[22:50:33] My point/question is this:
[22:50:53] If we do TestSwarm/qunit asynchronously also (not blocking the main build), then its results will become useless
[22:51:08] hitting submit in gerrit will still merge
[22:51:14] to trigger tests one will have to CR+2
[22:51:16] In the current situation if TestSwarm reports 30 minutes after the main Jenkins comments, that's OK. It will still be seen before merging in most cases.
[22:51:31] hashar: Hm...
[22:51:53] So people have to wait for jenkins to finish and then go back to the change and press submit to merge it?
[22:52:10] That's not what Ryan and you sold me when saying how awesome openstack's flow is.
[22:52:11] yeah that is not changing from today
[22:52:22] Well, it would be changing.
[22:52:29] we can adopt openstack flow
[22:52:30] It means adding another process to the workflow for the reviewer.
[22:52:37] cause people oppose having jenkins merge changes in the repo
[22:52:41] they still want to manually merge
[22:52:46] but that will change eventually
[22:52:51] that is one of my goals for next year
[22:53:11] Right now: Committer pushes to gerrit for review, lint+unit tests run, once that is done, reviewer can +2&submit in 1 go.
[22:53:45] What you describe: Committer pushes to gerrit for review, lint tests run, once that is done, reviewer can +2, unit tests will run, once done the reviewer can merge.
[22:53:58] That is unacceptable/unworkable in practice imho.
[22:54:02] Later: Committer pushes, lint runs. Someone reviews and if happy approves the change by CR+2. Tests are run.
[22:54:15] if tests are fine, Jenkins verify+1 the change
[22:54:31] if tests fail Jenkins verify-1 and sends a nice warning error
[22:54:38] once verified +1, one can press Submit
[22:54:46] That's what I just said.
[22:54:57] (which is sooo dumb that everyone will agree that Jenkins should submit whenever Verified+1 is reached)
[22:55:07] yeah I am rephrasing to make sure we both understand each other
[22:55:13] Nono
[22:55:22] so that is not what you said :-]
[22:55:35] the idea is to defer running tests until AFTER code review
[22:55:58] There is a big difference between automatically merging "Verified and CR+2" and running tests on-merge.
[22:56:01] Yes
[22:56:11] But not in such a way that the reviewer has to visit the change page twice.
[22:56:16] with 10 minutes in between
[22:56:17] that's unacceptable
[22:56:26] what is exactly unacceptable ?
[22:56:36] going back to a page to press submit once the tests have been run ?
[22:56:44] That when I do code review and want to merge, I have to do CR+2, wait for jenkins, and then merge
[22:56:51] Yes, that is unacceptable
[22:57:05] and unworkable because it can cause a merge conflict in the time I wait for jenkins
[22:57:58] """That when I do code review and want to merge, I have to do CR+2, wait for jenkins.""" so up to that there is no issue
[22:58:00] It will add more complication and cruft to the workflow. Why not go straight to what Ryan and us have been talking about for a year (namely to hook into gerrit's on-merge and hold on to that, run tests and if success continue merge)
[22:58:06] so not on +2 but on merge.
[22:58:09] if we get jenkins to merge, problem solved
[22:58:43] Yes
[22:58:59] It will also solve the annoyance of submit vs. +2
[22:59:06] or rather, confusion (not annoyance)
[22:59:19] I got zuul deployed on production this afternoon btw :)
[22:59:20] +2 will then mean "submit after tests", right?
[22:59:49] kind of
[22:59:59] can you elaborate?
[23:00:04] +2 will be yeah this change is fine, let's merge it
[23:00:12] then tests are run to make sure it is not going to kill the master
[23:00:21] Right
[23:00:26] if any test fails => v -1
[23:00:36] what I want is that if v+1 then we submit automatically
[23:00:47] v+1 and CR+2
[23:00:49] I am describing the full workflow tomorrow
[23:00:55] will post it to you / ryan and a few others
[23:01:01] then to engineering then wikitech
[23:01:19] hashar: Just to make sure you're not reinventing the wheel, be sure to check the wiki document we/I wrote last year.
[23:01:21] openstack has a different workflow which is that you need two different people to CR+2
[23:01:32] then one can approve +1 and that will trigger the tests + merge
[23:01:33] So tests will only be run once it's been approved by a reviewer?
[23:01:45] I don't mean "everything openstack does", I'm referring to a specific part of their workflow.
[23:01:54] Krinkle: yeah I will base my work on the awesome workflow you wrote down on mw.org
[23:02:11] that one http://www.mediawiki.org/wiki/Continuous_integration/Workflow_specification
[23:02:12] Krenair: More info later, too much context to reiterate now.
[23:02:18] ok
[23:02:25] Krenair: yeah the idea is to speed things up
[23:02:55] hashar: So the thing I wanted your opinion on..
[23:02:56] Krenair: there is no point in running the full test suite for any tiny patchset. When we receive lots of patchsets (i.e. during busy hours) gallium ends up too busy
[23:03:47] meaning, we will still test every commit, but only when it is reviewed. So instead of testing first and waiting for someone to approve, we do human review first, and if it is considered wanted and good, then we confirm it works properly.
[23:04:00] there's a bunch of other stuff attached that will be announced later.
[23:04:07] Krenair: so we will just lint / check style on submission. Review will be done by an actual human. That also means that the submitter should run the test suite before sending the patch.
[23:04:08] hashar: so the thing..
[23:04:32] hashar: if we defer testing to after CR, that means any non-blocking tests (such as TestSwarm in this case) will never be seen or taken into consideration.
[23:04:53] by non-blocking I mean, the Jenkins build will continue without it.
[23:04:56] (which is fine)
[23:04:57] unless we make testswarm a blocker
[23:05:14] hashar: Yes, but it takes about 15 minutes right now (although I'm working on speeding that up)
[23:05:18] which is possible if we make jenkins send the request to testswarm and then have the job wait for the result
[23:05:21] and browserstack could be down in theory.
[23:05:40] well one could still submit in Gerrit.
[23:05:49] hashar: Yes, I already have that logic in jQuery's jenkins (the node-testswarm hook that waits for testswarm in jenkins)
[23:06:04] might want to require a second CR+2 before enabling submit (cause that will bypass tests)
[23:06:20] hashar: I'm not sure what you mean.
[23:06:44] basically, we can enforce a merge without waiting for tests
[23:06:46] hashar: This is not an exceptional case we're discussing, this is going to be every single core commit.
[23:07:04] sure, by doing V+1 manually
[23:07:25] but that can't be part of the workflow.
[23:07:28] obviously.
[23:07:45] hashar: So you'd be okay with having testswarm's 10 minutes of fame as part of the main job?
[23:08:07] kind of
[23:08:14] (with a timeout of course, just in case, defaulting to true in that case)
[23:08:14] I don't think it is going to be that much of a trouble
[23:08:21] we don't really need changes to hit master ASAP
[23:08:44] sure, I just wanted your opinion on it. It isn't super long, but it is twice as long as the build currently takes (3-4 minutes)
[23:09:02] hashar: And we'd do phantomjs first which is super fast and on the server.
[23:09:09] exactly
[23:09:17] which might catch some common javascript errors
[23:09:26] yeah, and then never submit to testswarm in the first place
[23:09:34] indeed
[23:09:45] that is trivial to do in Zuul
[23:09:48] phantomjs is already puppetized so that's great
[23:09:52] he.. not puppetized
[23:09:59] I mean gruntified.
[23:10:44] and phantomjs 1.6.0 has been debianized : http://packages.debian.org/sid/phantomjs ;-]
[23:11:09] awesomized!
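[Editor's note] The "wait for TestSwarm with a timeout, defaulting to true" idea discussed above could be sketched as a generic shell helper. This is only an illustration; the polled command here (`"$@"`) is a stand-in for a real TestSwarm API check, not an existing tool:

```shell
# Sketch: block a Jenkins build step on an external result, with a bounded
# number of polls. On timeout we default to success, as discussed for
# TestSwarm, so a stalled browser farm never blocks a merge.
poll_with_timeout() {
  tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    if "$@"; then
      echo "passed"
      return 0
    fi
    tries=$((tries - 1))
    # a real job would sleep between polls, e.g. `sleep 30`
  done
  echo "timeout-default-pass"   # timed out: do not block the merge
  return 0
}

poll_with_timeout 3 true    # result available immediately: prints "passed"
poll_with_timeout 3 false   # never succeeds: prints "timeout-default-pass"
```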
[23:11:15] 1.4.0 is in precise
[23:11:21] might be "just enough" for us
[23:11:43] though I am not quite sure how you could run the QUnit tests in phantomjs
[23:11:47] what do you mean, by "is in precise"
[23:12:01] but your awesomeness probably already wrote the javascript to do so
[23:12:09] no, not really
[23:12:09] Precise is the ubuntu version we are using
[23:12:11] I don't have to
[23:12:15] well hmm
[23:12:18] hashar: I know, but how is phantomjs inside that?
[23:12:39] hashar: qunit runs from HTML (It can run without an html document, but we need the html document, since we use it, and we also call api.php etc.)
[23:12:52] hashar: So we need to do the static snapshot thing regardless of testswarm.
[23:13:05] require('webpage').create().open( localhost/mediawiki/core/33123/3/Special:JavaScriptTest );
[23:13:27] that'd be one way to do it, yes.
[23:13:41] hashar: actually, we may not have to do the /123123/123 stuff
[23:13:49] since we must only do one build at a time
[23:14:09] I am not sure how zuul handles the builds
[23:14:15] it might run them in parallel
[23:14:21] I need to check that
[23:14:24] what do you mean, it might?
[23:14:29] we control that from jenkins
[23:14:35] this is a blocker
[23:14:36] na from zuul :-]
[23:14:50] so zuul has two different workflows known as "pipelines"
[23:14:54] with conflicts and stuff and making sure we test the actual result
[23:15:09] one is "independent", each job is run against latest master + patch regardless of possible conflicts
[23:15:22] we can't have any parallelisation there.
[23:15:37] the other is "Dependent", which analyzes all changes submitted and only tests the merge of all of them.
[23:15:45] hashar: but about phantomjs, what do you mean by in precise? Can't 1.6 run on precise? it's just software.
[23:15:47] if that merge passes the tests, everything is merged
[23:16:22] Precise is a snapshot of some OSS
[23:16:46] it has 1.4.0
[23:16:48] phantomjs is not preinstalled on Ubuntu precise, right?
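[Editor's note] The two pipeline types hashar describes map onto Zuul's layout configuration roughly like this. This is a sketch based on Zuul of that era; the pipeline names and trigger details are illustrative, not copied from integration/zuul-config:

```yaml
pipelines:
  # "Independent": each change is tested against latest master + that patch
  # in isolation; suited to lint/style checks on every new patchset.
  - name: check
    manager: IndependentPipelineManager
    trigger:
      - event: patchset-created

  # "Dependent": queued changes are tested as a merged series, so the gate
  # only passes (and everything merges) if the combined result is sound.
  - name: gate
    manager: DependentPipelineManager
    trigger:
      - event: comment-added
        approval:
          - approved: 1
```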
[23:17:05] but ubuntu also has 1.6.0 in a future version, so we can probably install that one in Precise by backporting the package
[23:17:21] it is available in the Ubuntu repository but not installed by default
[23:17:27] I'm not sure, are you saying that the debian/apt repository is versioned as a whole?
[23:17:46] so that is just about: package { "phantomjs": ensure => latest; }
[23:17:55] http://packages.ubuntu.com/search?keywords=phantomjs
[23:18:20] that is a search for any packages in ubuntu containing "phantomjs"
[23:18:27] precise, quantal, raring are ubuntu versions
[23:18:38] Precise is the one WMF uses on the cluster
[23:18:47] Quantal is the future version
[23:18:47] ok
[23:18:56] Raring is the next after Quantal
[23:19:00] yes
[23:19:04] OS versions?
[23:19:11] http://en.wikipedia.org/wiki/List_of_Ubuntu_releases :-]
[23:19:15] Yes, I know those
[23:19:19] ahh
[23:19:33] so from time to time they take a copy of their master
[23:19:41] and name that with some funny name then release it
[23:19:45] how is that related to the software packages? If I apt-get install phantomjs, I get the latest stable version of phantomjs regardless of my Ubuntu version, no?
[23:20:02] you get the version shipped with your distribution
[23:20:08] and which has been actually tested on it
[23:20:11] ok
[23:20:19] with all its dependencies satisfied and tested
[23:20:21] (somehow)
[23:20:24] * Krinkle is a noob with apt, yesterday I messed up a server by doing "apt-get remove apt"
[23:20:35] if phantomjs suddenly needed a new nodejs version you would have to upgrade nodejs as well
[23:20:36] I wanted to remove "ack" (since I accidentally installed that instead of ack-grep)
[23:20:41] but mistyped it
[23:20:43] and then all the other packages that use nodejs and so on
[23:20:50] took me about 2 hours to install it by hand
[23:20:51] so versions are freezed
[23:20:53] frozen
[23:21:04] ohhh
[23:21:06] poor timo :(
[23:21:06] from debian.org and with dpkg -i ddd.deb
[23:21:08] yeah
[23:21:13] apt-get remove ack
[23:21:16] yes
[23:21:19] apt-get install ack-grep
[23:21:22] ubuntu names ack ack-grep instead of ack
[23:21:23] ;-]
[23:21:29] so I installed ack, then ack-grep, and wanted to remove ack again
[23:21:31] and typed apt
[23:21:36] then ln -s /usr/bin/ack-grep /usr/local/bin/ack
[23:21:52] that's not the problem, and wouldn't have helped.
[23:22:01] apt-get is still gonna name it ack-grep
[23:22:23] that is the package and binary names indeed
[23:22:33] to avoid a conflict in case you want both ack and ack-grep
[23:22:48] I installed the wrong package, and removed it again, but accidentally removed all of apt instead of ack.
[23:22:52] yeah
[23:22:57] I know the history
[23:23:06] ahhh
[23:23:07] :(
[23:23:12] 2 letters :)
[23:23:23] Anyhow, I got to use wget, hadn't used that in a while.
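[Editor's note] The "versions are frozen" point above is exactly why precise stays on phantomjs 1.4.0 even though 1.6.0 exists in sid/quantal. A quick way to check whether a distro-shipped version meets a required minimum, using GNU `sort -V` (the version numbers are the ones from the conversation; this is an illustration, not part of any WMF script):

```shell
# Does the shipped version satisfy the minimum we want?
# `sort -V -C` exits 0 only when its input is already in version order,
# so feeding "want" then "have" tests whether have >= want.
have=1.4.0   # phantomjs shipped in precise
want=1.6.0   # version debianized in sid / packaged in quantal
if printf '%s\n%s\n' "$want" "$have" | sort -V -C; then
  echo "shipped version is new enough"
else
  echo "needs a backport"    # this branch is taken for 1.4.0 < 1.6.0
fi
```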
[23:23:25] http://upload.wikimedia.org/wikipedia/en/timeline/b806407d9209e69f8f2230bd1bd504ea.png
[23:23:27] sooo nice
[23:23:30] (to download them from the debian mirrors)
[23:23:34] continuous integration :)
[23:24:08] yeah
[23:24:28] there are lots of minor versions missing there
[23:24:31] what happened to those
[23:24:41] 5.04 > 5.10
[23:24:55] They have a thing for .04 don't they?
[23:25:06] except for 6.06
[23:25:23] they're planning them too, 13.04
[23:25:25] ?
[23:25:27] lol
[23:25:56] the major version keeps increasing
[23:25:59] minor version is the month
[23:26:03] err
[23:26:05] year.month
[23:26:06] sorry
[23:26:10] oh
[23:26:11] lol
[23:26:20] which is also used by Juniper
[23:26:39] so when you see you have v10.04 you know it is from April 2010
[23:26:42] "Canonical has released new versions of Ubuntu every six months"
[23:26:47] 4 10 4 10
[23:26:49] that makes sense
[23:27:02] and you need to upgrade that box over the next 5 months (assuming a 3-year lifetime)
[23:27:03] except that it doesn't when you omit 20 from the year
[23:27:07] could've fooled me
[23:27:18] when you look at mediawiki 1.14 … that is not that helpful :-]
[23:27:27] like they know 12.01, 12.02, 12.03 will suck so they go straight for 12.04
[23:27:42] :P
[23:27:48] this way they also know when they will release 99.03
[23:27:53] I will be dead by that time though
[23:28:12] "the first immortal people are already alive today"
[23:28:21] my daughter will most probably still be alive :-]
[23:28:46] anyway 00:30
[23:28:53] strange timezone you have
[23:28:54] and I have a workflow to write down tomorrow
[23:28:56] ohh also
[23:29:00] UTC +01:02
[23:29:01] :P
[23:29:05] I need to write a report for our sprint
[23:29:12] will do that tomorrow too
[23:29:19] 00:28 here
[23:29:20] and send it to you and zelko
[23:29:26] anyhow, thanks for the conversation
[23:29:42] We're on the right track here, gonna make some more cool stuff
[23:30:01] After this week I have to lower ci again on my priority list though.
[23:30:22] yeah :(
[23:30:28] need to attempt to enroll some other people
[23:30:38] like Ori and S
[23:30:41] hashar: I'd like to get phantomjs and maybe testswarm going this week/weekend.
[23:30:59] you could set testswarm to run against master once per hour
[23:31:00] phantomjs in production and testswarm in labs (but continuously, like we did the old testswarm)
[23:31:04] that will already be a nice addition
[23:31:08] hashar: Yes
[23:31:19] hashar: How though? time-based jenkins job?
[23:31:25] yeah time based
[23:31:27] cool
[23:31:28] keep it simple :-]
[23:31:33] tell me more about it tomorrow
[23:31:38] we can figure out the rest later on
[23:31:58] and that hourly build can be blocking too, since it wouldn't report back to gerrit
[23:32:01] since testswarm is almost done… I guess it is higher priority than phantomjs
[23:32:21] phantomjs has priority imho, because we can get that in the every-commit build without issues
[23:32:27] and is easy to install in production
[23:32:32] yeah probably
[23:32:48] whereas testswarm needs some more puppetisation and we need to figure out the security aspects
[23:33:00] convinced me :-]
[23:33:04] so phantomjs first ?
[23:36:27] so yeah phantomjs first :-]
[23:36:29] * hashar takes notes
[23:36:56] bye bye everyone!
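[Editor's note] The hourly, time-based Jenkins job they settle on for TestSwarm could be expressed in Jenkins Job Builder syntax (the format used by integration/jenkins-job-builder-config) roughly as below. The job name, node label, and shell command are hypothetical placeholders:

```yaml
- job:
    name: mediawiki-core-qunit-testswarm
    node: labs
    triggers:
      # cron syntax: at the top of every hour, run against latest master
      - timed: '0 * * * *'
    builders:
      - shell: '/usr/local/bin/testswarm-submit.sh master'
```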