[09:17:00] New patchset: Hashar; "migrate mw GIT fetching job to mediawiki/core" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/3906
[09:17:09] yeahhh
[09:17:12] lovely bot
[09:17:37] New review: Hashar; "Merging back. It is already in production." [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/3906
[09:17:40] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/3906
[13:18:40] hashar: where are the names for Jenkins jobs defined?
[13:18:41] from the web or somewhere in the git repo?
[13:30:26] web
[13:30:38] Krinkle: the git repo is just a way to save job names
[13:30:42] err
[13:30:54] Krinkle: the git repo is just a way to save up the jenkins configuration
[13:31:04] break brb
[13:31:18] hashar: k, can I have access to that on Jenkins?
[13:31:38] https://integration.mediawiki.org/ci/ and connect with your labs account
[13:31:41] Just familiarizing myself so that I can set it up on labs or locally in a similar way
[13:31:47] I think you get admin access there already
[13:31:47] LDAP?
[13:31:49] ok
[13:32:04] I didn't think it would be LDAP-connected
[13:32:06] very nice
[13:32:14] or if you get a jenkins instance, fetch the integration/jenkins.git repo and boot your jenkins using it
[13:33:58] nah, not yet.
[13:34:09] Just checking out what the options are and what they're set to
[13:34:17] k k
[13:34:31] getting a break and some fresh air. Will be back in 10 - 15 minutes
[13:50:31] Krinkle: back
[13:50:51] hashar: The namings confuse me a bit, I'm trying to clean them up
[13:50:57] oh no please
[13:51:02] I haven't done anything
[13:51:20] :D
[13:51:26] But they really have to change
[13:51:32] the reason is that the name is used as a directory under /var/lib/jenkins/jobs/
[13:51:38] which will confuse the local git repo
[13:51:55] I figured
[13:51:58] what I did is that all MediaWiki related jobs are prefixed with 'MediaWiki-'
[13:52:18] the ones that are suffixed with -phpunit are going to die
[13:52:32] why are those going to die
[13:52:45] Merge them into 1 job instead of downstream?
[13:52:57] finally the old MediaWiki-analysis and MediaWiki-lint are also going to be rewritten as something else
[13:53:46] the -lint needs to be made a child job of GIT-fetching
[13:54:09] also needs to be made faster and to support linting of JavaScript
[13:55:12] Universal linter!
[13:55:36] exactly
[13:55:36] hashar: But the lint check and the unit tests run at different times
[13:55:49] and I am probably going to make it only lint the files that were changed
[13:55:52] Lint at check-in, unit test at review
[13:55:55] to make it lightning fast
[13:56:36] there should be no time between unit test and merge, and there is also the execution of arbitrary code
[13:57:12] hashar: https://www.mediawiki.org/wiki/Continuous_integration/Workflow_specification#footer
[13:57:22] I've changed the bottom section
[13:57:34] does that look good? The job structure that we're going to aim for
[13:59:04] oh the spec
[13:59:23] so I have been looking for a way to execute tests on patch submission
[13:59:31] would require building some kind of chroot to host the patch
[13:59:39] which might just be too scary
[13:59:41] or
[14:00:08] only trigger it after someone at least looked at it. Not sure how to do that though
[14:00:20] or just live with it and run the tests on patch submission
[14:00:33] I don't think it makes sense to run unit tests at check-in. They won't represent the result post-merge because there will likely be many hours (if not more) between check-in and review/merge. Plus it is a security issue, and also partly wasted resources on code that may be boldly rejected
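For anyone reading along, a minimal sketch of the "fetch the integration/jenkins.git repo and boot your jenkins using it" idea. The repo URL and the clone-as-JENKINS_HOME layout are assumptions drawn from the conversation, not a documented procedure:

```sh
# Assumption: integration/jenkins.git is laid out like a JENKINS_HOME,
# i.e. jobs/<job name>/config.xml -- which is why renaming a job in the
# web UI moves the directory and confuses the local git repo.
export JENKINS_HOME=/var/lib/jenkins
git clone https://gerrit.wikimedia.org/r/p/integration/jenkins.git "$JENKINS_HOME"
ls "$JENKINS_HOME/jobs/"                # e.g. MediaWiki-GIT-Fetching/config.xml
java -jar jenkins.war --httpPort=8080   # boots Jenkins from the cloned config
```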
[14:00:50] hashar: They run when the merge is approved
[14:00:58] Jenkins performs the actual merge
[14:00:58] no
[14:01:05] actually they run on submission
[14:01:16] I'm talking about the spec, not the current status
[14:01:21] and I have submitted a hook change to have gerrit trigger a notification on merge
[14:01:50] Gerrit reviewer approves > Jenkins runs > Jenkins approves and merge auto-completes or auto-rejects
[14:02:05] that hook rebases the target to origin/master, applies the change, then runs the tests. If that works, you get a merge; if not, the merge fails
[14:02:19] hashar: It shouldn't rebase it
[14:02:30] But you have the code for this already
[14:03:04] https://gerrit.wikimedia.org/r/#change,2495
[14:03:13] stolen from openstack IIRC
[14:03:48] when does it run? post-merge or on-merge (right before)
[14:03:55] (that change set proposal)
[14:04:14] that change seems to be post merge
[14:04:58] Jenkins has built-in support for polling a git repository right? (which we'd use for the timeline of the master branch)
[14:05:08] It doesn't poll, it listens
[14:05:12] This is the Gerrit Trigger Plugin
[14:05:25] which we have
[14:05:29] I know
[14:05:32] you're missing part of my comment
[14:05:41] hashar: BTW, OpenStack's new version of GTP can listen for a commit-pushed event which occurs for every commit pushed, even direct pushes :)
[14:05:51] I'm talking about the to-be-specified jenkins job that tests the result in master, not pre-merge or on-checkin
[14:05:52] (not sure on the exact name but that's what it does)
[14:06:48] direct pushes are still allowed in some repos? Is that for technical reasons (gerrit limitation) or convenience?
[14:07:02] Not on master
[14:07:10] ohh
[14:07:11] k
[14:07:11] But on some other branches yes
[14:07:21] you know, I haven't looked at jenkins for like a month or so :-D
[14:08:20] Gerrit Trigger is at 2.3.1 right now, 2.5.1 available!
[14:14:55] hashar: Is it possible to install mediawiki multiple times from 1 git clone for different database backends?
[14:15:07] that is what is being done actually
[14:15:10] 1 clone
[14:15:11] 1 install
[14:15:17] e.g. a little bit like a wikifarm with a switch in LocalSettings
[14:15:28] per database backend
[14:15:33] exactly
[14:15:46] that is what I was working on when 1.19 / git ticked ;-D
[14:15:47] and it runs install.php multiple times?
[14:15:50] I've not migrated my dev installs yet...
[14:15:50] yup
[14:16:00] so we can test the installer against each DB
[14:16:12] hashar: Does it have a switch in LocalSettings or is it synchronous?
[14:16:19] sync
[14:16:24] if it is synchronous we can just alter LocalSettings after each one
[14:16:51] and leave it with 1, and that's the one it will keep for the public version that TestSwarm will run on
[14:17:26] well once tests are run, we could surely make a snapshot of that code + DB and publish that somewhere for testswarm use
[14:17:38] Why make a snapshot?
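A sketch of the "1 clone, run install.php once per backend" loop described just above. The option names follow MediaWiki's maintenance/install.php of that era, but treat the exact flags and the phpunit wrapper path as assumptions to verify against the checkout:

```sh
cd /srv/mediawiki-core
for DBTYPE in sqlite mysql postgres; do     # sqlite additionally needs --dbpath
    rm -f LocalSettings.php                 # install.php refuses to overwrite it
    php maintenance/install.php \
        --dbtype "$DBTYPE" --dbname "testwiki_$DBTYPE" \
        --dbuser wikiuser --dbpass secret \
        --pass adminpass TestWiki Admin     # exercises the installer per DB
    php tests/phpunit/phpunit.php --group Database   # tests that need a DB
done
# Whichever backend ran last is the state LocalSettings.php is left in,
# which is what a public TestSwarm-facing copy would see.
```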
[14:17:53] After the last db backend is run, we can leave it in that state
[14:18:08] the dir it is in can be aliased to a www public dir
[14:18:13] just like we do for testswarm already
[14:19:05] it needs a snapshot
[14:19:15] why
[14:19:18] cause all tests use the same mediawiki checkout directory
[14:19:34] that is to avoid having to copy the 150MB or so on disk between each test
[14:19:38] to speed up the process
[14:19:59] we'll need a separate clone for each one anyway, we already know that from testswarm
[14:20:02] because it is asynchronous
[14:20:18] and also an up to date master
[14:20:22] well we can surely make a copy for testswarm
[14:20:29] and make it publicly available and set up
[14:20:43] so jenkins undoes the cherry-pick?
[14:20:50] ??
[14:20:50] for the next one
[14:20:56] that is something like :
[14:21:03] jenkins detects a new change
[14:21:53] change is checked out, lint test run
[14:21:59] then various databaseless tests are run
[14:22:04] you're skipping something
[14:22:10] then attempt to install MW against sqlite/mysql/postgres
[14:22:11] you say it reuses the git clone
[14:22:15] then run the tests requiring a DB
[14:22:26] so it doesn't do a new 150MB clone each time
[14:22:37] and it applies the commit with cherry-pick
[14:22:38] then save a copy of that installation, make it available on some www and publish testswarm jobs
[14:22:51] * Reedy rebases hashar
[14:22:52] so it undoes the cherry-pick at the end of the test?
[14:22:57] nop
[14:23:02] ?
[14:23:04] why not
[14:23:11] at the end of all tests the working copy is in some unknown state
[14:23:32] whether at the beginning or the end doesn't matter
[14:23:44] the first job though will make it clean (aka the job applying the patch just does a reset --hard on the working copy)
[14:23:53] so that is at the beginning :-D
[14:24:09] for each build, right?
[14:24:24] whenever a change is applied by the first job
[14:24:44] https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching
[14:24:48] "MediaWiki-GIT-Fetching" is a job
[14:24:54] "first job" doesn't make sense in this context
[14:24:55] yup
[14:25:09] you mean build?
[14:25:12] parent job if you want
[14:25:17] a build is the result of a job
[14:25:19] for each build at the start it resets
[14:25:25] so each change will technically have several builds
[14:25:33] yes
[14:25:38] (which would mostly consist of test results)
[14:25:45] ok
[14:25:49] something else
[14:25:53] aka the lint build, installer build, API tests build etc..
[14:25:57] is it correct that you want to get rid of the parent-job structure?
[14:26:12] what do you mean by parent-job struct?
[14:26:20] https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching/
[14:26:25] i.e. have it all in one job, that just takes mediawiki-core, applies the commit, runs sqlite, then switches LocalSettings, runs postgres or whatever
[14:26:28] that one currently has two children
[14:26:33] yes
[14:26:40] do you want to remove those children and merge it?
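The parent-job behaviour hashar describes above boils down to a few git commands. A sketch, where $GERRIT_REFSPEC comes from the Gerrit Trigger plugin and everything else is an assumed shell build step, not the job's actual configuration:

```sh
cd "$WORKSPACE"                      # one persistent checkout, no 150MB clone
git fetch origin
git reset --hard origin/master       # leave the working copy clean per build
git fetch origin "$GERRIT_REFSPEC"   # e.g. refs/changes/95/2495/1
git cherry-pick FETCH_HEAD           # apply the submitted change on top
```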
[14:26:47] and I am going to add a whole lot more jobs
[14:26:55] brb
[14:27:06] I need to talk to some journalist
[14:27:22] Krinkle: overall, I will try to have this more or less implemented today and especially tomorrow
[14:27:25] then write some notes about it
[14:27:37] and publish that sometime next week
[14:27:50] most is in my head only unfortunately, so I want to brain dump that to jenkins
[14:27:51] I'd rather have it written first so we know what we're doing without trouble later
[14:27:54] then publish what I did
[14:28:03] it doesn't take much time
[14:28:08] then I will look at your nice workflow and rewrite what I did with you :-]
[14:28:36] yeah fully agree
[14:28:40] did that on paper already :-]
[14:28:48] I need to disconnect a bit. Will be back!
[14:29:14] ok, so back to square 1
[14:34:44] hashar: I think we may need to rethink it a little
[14:34:57] I am currently writing a little to the specification page and running into a problem
[14:35:19] The child job for "TestSwarm" will aggregate its results from the TestSwarm API
[14:36:03] however, asynchronous or not, until those results are in, it can't do another build right?
[14:36:32] because when the builds are over for a particular commit it is going to do irc notifications and gerrit stuff
[14:36:48] and it can't do that until TestSwarm is done, just like it has to wait for phpunit results
[14:50:23] Fuck dependencies
[14:51:33] I know I could probably cheat by reapplying stuff via gerrit's patch link, but I guess that's maybe not the best idea in the world
[14:51:43] (and then abandon the other changes)
[14:53:18] Reedy: How may I help you? :)
[14:53:33] I'm trying to work out how to fix this dependency mess on the collection commits
[14:53:34] :p
[14:54:07] OK
[14:54:08] Links?
[14:54:44] Do you have 1) screwed-up dependencies because of amends in the middle or 2) A and B depend on each other in Gerrit but not conceptually and you want to detach B from A?
[14:55:09] They're your commits ;)
[14:55:09] https://gerrit.wikimedia.org/r/3442
[14:55:17] https://gerrit.wikimedia.org/r/3445
[14:55:50] Reedy: one of the recently created wikis has wgLogo pointing to an unprotected Commons file
[14:56:21] Can't we ask a commons admin (or I think I can do it...) to protect it?
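Stepping back to the TestSwarm aggregation problem above: one way out is a child job that simply blocks, polling until the swarm finishes, and only then lets Jenkins vote and notify. The API URL and the response matching below are hypothetical, not TestSwarm's documented interface:

```sh
SWARM=https://integration.mediawiki.org/testswarm   # hypothetical endpoint
JOB_ID=123                                          # placeholder swarm job id
while curl -s "$SWARM/api.php?action=job&item=$JOB_ID&format=json" \
        | grep -q '"status": *"running"'; do
    sleep 60                      # still waiting for browsers in the swarm
done
curl -s "$SWARM/api.php?action=job&item=$JOB_ID&format=json" \
    | grep -q '"status": *"passed"'   # exit status drives the Gerrit/IRC report
```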
[14:56:33] Reedy: https://commons.wikimedia.org/wiki/File:Wikipedia-logo-v2-lez.png
[14:57:13] Let's see here
[14:57:16] Protected
[14:58:05] Thanks
[15:00:28] that logo doesn't match the rest afaik
[15:00:33] 135x155
[15:01:07] Wouldn't be the first; a lot aren't correctly sized, so just use a 135px thumbnail usually
[15:01:10] Reedy: 3442 is fixed
[15:01:25] I mean it is 135
[15:01:26] It depended on 3441 which depended on an older version of 3440, but hashar had abandoned 3441 and squashed it into 3440
[15:01:31] but 135x135 instead of 135x155
[15:01:37] positioned too high
[15:02:58] Need to get hashar's [[Gerrit/resolve conflict]] guide updated, seems a bit too simplistic
[15:03:13] I am back
[15:07:16] Reedy: I think they're solved now, but they're all blocked on 3440 getting reviewed
[15:07:25] Reedy: I warned against this for stacked changes
[15:07:33] ...after I introduced a bunch of them :)
[15:09:04] hashar: I've simplified https://www.mediawiki.org/wiki/Continuous_integration/Workflow_specification a lot and made it more accurate with what we've discussed
[15:11:00] :D
[15:11:46] you are so awesome
[15:13:06] Krinkle: something we will want to do is to drop php -l
[15:13:06] I write better than I do git
[15:13:10] it is too slow :-]
[15:13:18] Edit as you wish
[15:13:22] What's the alternative?
[15:13:25] well git is something you will eventually learn
[15:13:34] good writing is a gift :D
[15:13:57] thx
[15:14:04] for php linting, Tim wrote a php script that does that using some pecl parsing extension built on PHP's .c tokenizer
[15:14:12] Ah, I remember that
[15:14:16] that saves all the time needed to restart php
[15:14:22] and makes linting much faster
[15:14:28] it is in tools/code-utils IIRC
[15:14:35] I thought it was only for checking certain conventions and whitespace etc. not as a replacement for php -l
[15:14:55] but if it does do that, then by all means, let's drop it
[15:14:58] svn+ssh://svn.wikimedia.org/svnroot/mediawiki/trunk/tools/code-utils
[15:15:07] conventions is stylize.php
[15:15:27] oh no, that one actually alters a php file
[15:15:28] and check-vars.php also does stuff
[15:15:34] check-vars.php is the one checking conventions
[15:15:39] yeah
[15:15:50] I would want that one to be replaced by PHP CodeSniffer
[15:16:13] ?
[15:16:34] http://www.squizlabs.com/php-codesniffer
[15:16:39] * Krinkle googled
[15:16:40] that is a tool that lets you describe rules
[15:16:44] but would it be a replacement for
[15:16:49] php lint check?
[15:16:58] that already comes with nice reporting and a … guess what?
[15:17:02] … a jenkins plugin!! ;-D
[15:17:11] it would replace check-vars.php
[15:17:23] I did a basic syntax file at https://github.com/hashar/MediaWiki-CodeSniffer
[15:17:41] but can we use it inside a job? cause we also lint other file types
[15:17:46] re linting, Tim's script is lint.php
[15:17:49] css / js and perhaps more
[15:18:04] which needs the `parsekit` PHP extension
[15:18:40] anyway the universal linter remains to be written
[15:18:40] k
[15:18:41] do we have that on the production machine for integration already?
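The speed argument, as two hedged invocations: `php -l` pays a full PHP startup per file, while Tim's lint.php (in tools/code-utils, needs the pecl parsekit extension) can take a whole file list in one process. Whether lint.php accepts arguments exactly like this is an assumption:

```sh
CHANGED=$(git diff --name-only HEAD~1 -- '*.php')   # lint only changed files
# slow: one interpreter startup per file
echo "$CHANGED" | xargs -n1 php -l
# fast: a single process, parsekit-based parsing
echo "$CHANGED" | xargs php tools/code-utils/lint.php
```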
[15:18:53] I thought it could first establish a list of files that were modified (git log --stat or something)
[15:19:01] then regroup them per type (PHP, JS, CSS)
[15:19:07] I suppose the universal linter would just be a lightweight wrapper with other libs in its externals
[15:19:07] then for each group, run its associated linter
[15:19:19] then report an aggregate result with a simple FAIL / SUCCESS
[15:19:26] everything else being reported in a log file somewhere
[15:19:30] e.g. Tim's lint.php for php files, node jslint for js files etc.
[15:19:35] exactly
[15:19:41] oh +2 on using node jslint :-D
[15:19:57] so in turn we need to convince ops to have node.js installed
[15:20:05] and will need a node.js Debian package
[15:20:15] let me input all of that in bugzilla
[15:22:22] hashar: Looks like there is a fairly popular csslint script that has a node.js version as well
[15:22:24] for CLI
[15:22:32] so that's 2 reasons to get nodeJS
[15:22:43] I think Jeroen did a deb package already
[15:22:49] there might be one in ubuntu too
[15:23:04] whether that csslint is capable of skipping IE hacks is unclear though
[15:23:04] contint tracking bug : https://bugzilla.wikimedia.org/show_bug.cgi?id=35584
[15:23:39] hashar: Hm.. isn't the "Testing infrastructure" component a tracker already?
[15:23:58] https://bugzilla.wikimedia.org/buglist.cgi?query_format=advanced&list_id=103664&component=Testing%20Infrastructure&resolution=---&product=Wikimedia
[15:24:11] oh yeah
[15:24:27] should have named that something like: "make Jenkins work"
[15:24:43] Although we could use a tracking bug for "Implementing continuous integration specification 1.0"
[15:24:59] e.g. https://www.mediawiki.org/wiki/Continuous_integration/Workflow_specification
[15:26:23] added url
[15:26:27] oh you already did
[15:26:54] hm.. didn't get a collision
[15:27:56] I am creating a bug for the universal lint checker
[15:28:06] cool
[15:29:10] https://bugzilla.wikimedia.org/show_bug.cgi?id=35585
[15:34:27] PHP Code Style : https://bugzilla.wikimedia.org/show_bug.cgi?id=35588
[15:39:32] !b 31236
[15:39:32] https://bugzilla.wikimedia.org/show_bug.cgi?id=31236
[15:48:42] !b 31518 | hashar
[15:48:42] hashar: https://bugzilla.wikimedia.org/show_bug.cgi?id=31518
[15:48:43] dupe?
[15:49:13] ;)
[15:49:34] that one is about code coverage
[15:49:37] which is another issue :-]
[15:49:40] ah
[15:49:41] I see
[15:49:54] it is about automatically checking which areas of the code are run when running tests
[15:50:01] yeah, I see
[15:50:02] now
[15:50:05] thus you could potentially find areas of code which are untested
[15:50:13] is analysis part of the contint plan? Maybe that should be on a lower priority
[15:50:24] we could surely rename 31518 to something like : job to do codecoverage
[15:50:49] I think that is part of the plan, since that helps writing new tests
[15:51:00] though it is not that important a thing to have
[15:51:23] you could have 100% coverage of the code file and yet not analyze all the possible code paths
[16:17:56] I tried submitting a commit for review and got the following error:
[16:17:58] remote: Resolving deltas: 0% (0/9)
[16:17:58] To ssh://kaldari@gerrit.wikimedia.org:29418/mediawiki/core.git
[16:17:58] ! [remote rejected] HEAD -> refs/for/master/bug/27757 (change 3896 closed)
[16:17:59] error: failed to push some refs to 'ssh://kaldari@gerrit.wikimedia.org:29418/mediawiki/core.git'
[16:18:22] any idea how to fix that?
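Hashar's universal-linter outline above maps almost line for line onto a dispatch script. A sketch only; the jslint/csslint command names are the node tools mentioned in the discussion, and their exact CLIs are assumptions:

```sh
FAIL=0
for f in $(git diff --name-only HEAD~1); do   # 1. list the modified files
    case "$f" in                              # 2. group per type
        *.php) php -l "$f"  || FAIL=1 ;;      # 3. run the associated linter
        *.js)  jslint "$f"  || FAIL=1 ;;      #    (or Tim's lint.php for PHP)
        *.css) csslint "$f" || FAIL=1 ;;
    esac
done >lint.log 2>&1                           # details go to a log file
[ "$FAIL" -eq 0 ] && echo SUCCESS || echo FAIL   # 4. one aggregate result
exit "$FAIL"
```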
[16:19:49] looks like someone did a squish/merge on one of my commits in the branch and now I can't submit any more commits
[16:20:28] <^demon|away> kaldari: The change was squashed into another one and abandoned.
[16:20:38] <^demon|away> You can't push a new patchset to an abandoned change.
[16:20:57] so how do I do any more development on it?
[16:21:17] <^demon|away> Well if it was squished elsewhere, you'd have to work from that patch
[16:21:32] how do I do that?
[16:22:15] <^demon|away> Same way you amended and resubmitted that patch, only with the one it was squished into.
[16:22:20] <^demon|away> Seems like https://gerrit.wikimedia.org/r/#change,3890
[16:22:55] yeah, that's the original commit
[16:23:15] after someone did the squish, I pulled the master and rebased
[16:23:24] <^demon|away> Yeah, that one should be amended with your changes.
[16:23:50] but when I try to do git review, it wants to submit all the original commits (including the abandoned one)
[16:24:55] <^demon|away> Your local history probably still has the abandoned commit, that's why. Two ways to fix this:
[16:25:06] <^demon|away> a) rebase your local history to squash the commits, or
[16:25:45] <^demon|away> b) Start with a fresh branch, probably via `git review -d 3890`
[16:25:52] <^demon|away> (b) is probably easier.
[16:26:36] <^demon|away> RoanKattouw_away: Ping me when you're around again.
[16:27:07] ah, I think the problem is I rebased my master instead of the branch
[16:27:23] sorry I'm git dyslexic :)
[16:27:32] I'm getting there though
[16:27:35] <^demon|away> Practice makes perfect :)
[16:28:12] ^demon: new git extensions repos are not being created right now, and old ones aren't being migrated either, right?
[16:28:56] <^demon> Yeah, I'm giving priority to extensions already in svn.
[16:29:06] <^demon> I'll send something about that tomorrow hopefully.
[16:29:28] * vvv misses git
[16:30:09] hmm, I tried rebasing the branch, but it didn't do anything ("Current branch tokensAPI is up to date."). You mentioned that I should rebase my local history. Is that a different procedure?
[16:30:38] <^demon> Could you pastebin your `git log` somewhere?
[16:30:45] sure...
[16:31:21] eh, that seems to be quite lengthy
[16:31:26] you want the whole thing?
[16:32:35] it's 280,000 lines
[16:32:58] I'll just paste the top of it
[16:33:28] <^demon> Yeah, just the last couple of entries :p
[16:33:37] <^demon> I don't need the log from 8 years ago.
[16:33:39] <^demon> ;-)
[16:33:45] http://pastie.org/3692730
[16:34:28] the 2nd one is the one that was squished by hashar I think
[16:34:55] <^demon> Mmmk, yeah. What we're gonna want to do here is rebase those last 2 onto the 3rd.
[16:35:02] <^demon> *latest 2
[16:35:21] * ^demon is trying to do this from memory now
[16:36:37] personally, I'd prefer to keep the most recent one separate
[16:36:46] as it is an optional addition
[16:37:12] and could theoretically be rejected separately from the other 2
[16:37:19] <^demon> Ah gotcha.
[16:43:50] kaldari: are you free ?
[16:44:01] It also looks like I undid the merge that hashar did on gerrit, so now that change is just lost as far as gerrit can tell.
[16:44:23] had to discuss some things about the GSoC project on UploadWizard
[16:44:33] drecodeam: could you give me a little while, working on another problem at the moment
[16:44:51] ya that's fine
[16:44:55] thanks
[16:46:59] ^demon: so how do I merge 2 changes locally?
[16:48:05] <^demon> Oh man kaldari, I forgot about you :( I need to stop multitasking.
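^demon's option (b) in command form: `git review -d` checks a Gerrit change out into a fresh local branch, so the stale local history never enters the picture. 3890 is the surviving change from the log; the amend step is the assumed way to layer new work onto it:

```sh
git review -d 3890        # download change 3890 into a fresh local branch
# ...redo the new work on top of it...
git commit -a --amend     # keeps the surviving change's Change-Id footer
git review                # uploads the result as a new patchset of 3890
```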
[16:48:32] all I've found is documentation for merging a branch, but not squashing 2 individual changes together
[16:49:10] <^demon> `git rebase -i HEAD~3` should open a text file where you see those 3 commits
[16:49:46] yep
[16:49:58] <^demon> Ok, the 2nd line, change 'pick' to 'squash'
[16:50:03] <^demon> Then save and quit.
[16:51:01] <^demon> Now what does your `git log` look like?
[16:52:06] <^demon> Oh, it'll probably ask you about combining the commit messages first. Combine them in a way that makes sense, using only the older commit's change-id.
[16:56:23] OK, the log has just 2 commits now, and one of them has 2 changeIDs. I assume that's correct?
[16:56:50] oops, should have read your other message :)
[16:59:39] ha, I used the reword command to fix it
[16:59:44] <^demon> :)
[16:59:50] I'm a git ninja now :)
[17:01:53] OK, I submitted it for review and it looks like everything worked!
[17:02:07] ^demon: Thank you so much for your assistance!
[17:02:14] <^demon> You're welcome :)
[17:16:23] drecodeam: You still around?
[17:16:39] hey ! ya i am still here
[17:16:57] so what are the new ideas for UploadWizard?
[17:17:36] i was talking to JeroenDeDauw yesterday, he also asked me to consider adding remote upload to commons as a feature
[17:18:19] i think it can be done using the API, as we are already doing the uploads through the API, so it should not be very difficult to implement. But i have not looked much into the details for this one.
[17:18:21] remote? Like a non-web-based client?
[17:18:45] or non-browser-based I mean
[17:19:16] no, by remote he meant adding files to commons through other mediawiki installations
[17:19:26] oh, that would be cool
[17:19:40] remote upload without authentication/authz? How would you do this?
[17:19:57] you can use the API to login as well
[17:20:28] i haven't looked into the login API, but if it's there in the API, then it can be done through that
[17:20:29] <^demon|away> I'm not sure I'd trust such an extension--you'd be passing your wmf login credentials to a 3rd-party wiki.
[17:20:29] on a 3rd party app? You shouldn't ask users to give away their credentials
[17:20:33] <^demon|away> Makes me feel icky :(
[17:20:42] but you would have to prompt them for user/pass, which might be dangerous
[17:21:07] yeah we should not support anything like this, not even on the toolserver
[17:21:24] I can collect your admin credentials and sell them on ebay
[17:21:29] ya that would not be the right way of doing it
[17:22:50] it's also going to be difficult due to cross-domain scripting limitations since commons is on a different domain name than wikipedia
[17:23:22] hexmode: ping?
[17:23:24] 20% checkin?
[17:23:28] kaldari: but that can be handled through a hidden iframe i guess
[17:23:35] <^demon|away> kaldari: Well scripting isn't an issue, you can do the whole thing via the API.
[17:23:48] there's a thread suggesting an RFC for basic OAuth support on the engineering list, that's the only way to support this kind of application in a secure way
[17:23:50] sumanah: pong
[17:23:50] <^demon|away> It's not a matter of impracticability, but paranoia :)
[17:24:01] <^demon|away> DarTar: Yes, that's the right way to go :)
[17:24:24] hexmode: AaronSchulz is the one to talk to today, right?
[17:24:27] but since asking for username and pwd is not the right thing, i think we can wait till OAuth gets implemented
[17:24:34] and preilly?
[17:24:53] sounds right.
[17:25:04] does the RfC exist yet, or is it just a suggestion in an email thread?
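Backing up to ^demon's squash recipe: this is what the `git rebase -i HEAD~3` todo list looks like for it. The hashes and subjects are placeholders, not kaldari's actual commits:

```sh
git rebase -i HEAD~3
# The editor shows something like:
#   pick   1a2b3c4 tokens API: original change
#   squash 5d6e7f8 follow-up that was squashed/abandoned in Gerrit  <- was "pick"
#   pick   9a0b1c2 optional addition, deliberately kept separate
# Saving then opens the combined commit message; keep only the OLDER
# commit's Change-Id line so Gerrit updates the surviving change.
```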
[17:25:09] sumanah: I'm busy today with operations changes for zero
[17:25:17] sumanah: I'm going to do 20% tomorrow
[17:25:20] zero!
[17:25:29] sumanah: but, that will be mostly code review
[17:25:31] AaronSchulz: around?
[17:25:33] preilly: if Thursday's usually a bad day, do you want to just switch to a different day of the week?
[17:25:45] sumanah: it's not usually a bad day
[17:25:52] sumanah: it's just been the last two weeks
[17:26:33] preilly: ok. Good luck with Wikipedia Zero
[17:26:35] ^demon|away: Pong
[17:26:36] exciting stuff, huh?
[17:26:55] sumanah: thanks!
[17:27:04] kaldari: as of now what i have in mind to work on would be flickr and geolocation integration
[17:27:18] RoanKattouw_away: how can two people who aren't here talk?
[17:27:44] if you change gerrit's config via puppet, it'll restart automatically
[17:28:22] Oh, hiya Ryan_Lane
[17:28:38] hola RoanKattouw
[17:28:40] DarTar: I'm here now, sorry for being late. Was doing post-dinner cleanup and lost track of time
[17:28:45] np
[17:28:54] * Ryan_Lane isn't actually here
[17:29:01] if you are not available now we could do it after noon PT
[17:31:03] drecodeam: Would you be able to write up a proposal of what you have in mind so far? Then I could help you refine it further.
[17:31:36] I am available
[17:31:40] kaldari: drecodeam https://www.mediawiki.org/wiki/Summer_of_Code_2012/Application_template
[17:32:18] kaldari: i would do it tonight, i had been working on implementing the drag and drop feature to get an understanding of the code. Got done with it and now i am working on the proposal
[17:32:21] DarTar: Let me debug that AFT4 / ClickTracking code that's apparently not giving you page titles and revids
[17:32:36] sumanah: already bookmarked it ! thanks
[17:32:43] RoanKattouw: cool
[17:32:44] sumanah: Is there a place on mediawiki.org where people should post their proposals (before they are posted to Google)?
[17:32:59] kaldari: indeed there is.
[17:33:13] kaldari: http://www.mediawiki.org/wiki/Category:Summer_of_Code_2012_applications
[17:33:17] kaldari: https://www.mediawiki.org/wiki/Category:Summer_of_Code_2012_applications
[17:33:17] <^demon|away> Ryan_Lane: This is actually changed in a db table, requires a gerrit restart.
[17:33:22] <^demon|away> Not changed via puppet :(
[17:33:35] easy enough
[17:33:38] roan has root
[17:33:40] cool
[17:33:43] I won't be back till tuesday
[17:33:45] drecodeam: great, glad you know about that :-)
[17:33:55] <^demon|away> Ryan_Lane: Yeah I figured. I'll make the change and then Roan can restart gerrit :)
[17:33:56] if you break it, you buy it. don't forget
[17:34:12] kaldari: https://www.mediawiki.org/wiki/Summer_of_Code_2012#Student_applications has instructions
[17:34:16] drecodeam: cool, then just ping me when it's up and I'll look over it
[17:34:31] <^demon|away> Updating `approval_category_values` because "I'd prefer it if you didn't submit this" offends.
[17:34:34] kaldari: by the way, i worked on the drag and drop implementation. When you have time you can take a look at that : https://gerrit.wikimedia.org/r/#change,3808
[17:34:42] ping rsterbin
[17:34:48] sure, I'll take a look this weekend
[17:35:30] kaldari: working on the proposal now. Would get back to you soon. Thanks a lot.
[17:35:42] just added myself as a reviewer
[17:35:47] no problem
[17:38:30] hi folks
[17:39:11] kaldari or sumanah: i am working on wikimania sponsorships and trying to figure out who to contact at yahoo!
[17:39:39] https://twitter.com/#!/gyehuda looks good
[17:39:53] any other suggestions or does anyone know Gil?
[17:40:12] aude: hmmm, I don't know Gil, but you might try Frederick Kenneth Schmidt
[17:40:22] aude: just pm'd you his email
[17:40:22] oooh, what does he do?
[17:40:26] thanks! :)
[17:40:42] aude: he's Director, Academic Relations at Yahoo Labs in NYC. So maybe he can point you in the right direction
[17:40:47] very cool
[17:40:53] you are welcome, aude
[17:41:12] the YDN and Yahoo Labs are what i was looking at and would be a good fit for the conference
[17:42:59] sumanah: and if you missed it, google is a diamond sponsor :)
[17:43:06] aude: oh great!
[17:43:29] yes!!! :) and we have a few others like WikiHow :)
[17:44:15] rock.
[17:44:35] aude: I don't think I know anyone at Yahoo :(
[17:55:00] kaldari: that's okay but do you know anyone anywhere else that might be a good fit as a sponsor?
[17:55:48] have you talked to anyone at Wikia?
[17:55:49] * aude thinks yahoo's developer programs and tools are cool and interesting and would be cool to learn more about them at the hackathon
[17:56:03] how they could help us with mangling data
[17:56:19] kaldari: i just emailed wikia (again) but a different, better contact
[17:56:35] cool, if you don't hear anything back, let me know
[17:56:48] and I'll dig up some other contacts there
[17:57:54] DarTar: what's up?
[17:59:07] kaldari: thanks!
[17:59:24] kaldari: they've had some changeover in staff but i'm hopeful and
[17:59:35] DarTar: I found a bug
[17:59:37] eager to hear about wikia's tech stuff
[17:59:55] DarTar: Remind me, do you need the title|revid data to show up in the 'additional' field at the very end of the line, or as a suffix to the event name?
[18:00:39] RoanKattouw: it's the additional field
[18:01:28] OK
[18:01:31] Then I have a fix
[18:01:38] You'll recognize this
[18:02:54] DarTar, rsterbin: https://gerrit.wikimedia.org/r/3926
[18:04:35] Reedy: Could you do a quick review&approval of https://gerrit.wikimedia.org/r/3926 please?
[18:04:44] re
[18:04:47] Hmm, maybe this git migration is a good thing
[18:04:57] RoanKattouw: yep, it's the additional field
[18:05:00] I rebase stuff for people, and in return they review stuff for me
[18:05:19] as per these specs: http://meta.wikimedia.org/wiki/Research:Article_feedback/Clicktracking#Additional_data
[18:06:30] Done
[18:06:53] Thanks
[18:08:11] RoanKattouw, Reedy: thank you
[18:08:28] off to a meeting, will be back by noon
[18:11:15] DarTar, DarTar_clone: (Whichever is the real Dario) fix deployed
[18:11:26] (clap)
[18:11:44] I'll check the logs in a moment
[18:12:15] RoanKattouw: howdy
[18:12:41] Morning
[18:12:48] Hey -- I have a working HTML DOM -> linear model converter
[18:12:59] I'm writing documentation now, will submit to Gerrit soon
[18:15:56] i could use some git help
[18:16:00] OK
[18:16:01] specifically gerrit related
[18:16:04] Fire away
[18:16:53] so, there's this branch that Christian made, refs/heads/supertest (when running git ls-remote origin in the visual editor repo)
[18:17:12] we were just messing around, we don't really want to keep that, how do we delete it?
[18:17:55] like, not locally (cause git branch -r -d origin/supertest does that, but git pull brings you right back to having a reference to it)
[18:18:07] In Gerrit, go to Admin -> Projects -> VisualEditor
[18:18:15] Then Branches
[18:18:25] Check supertest and click Delete
[18:18:29] lol
[18:18:31] duh
[18:26:46] RoanKattouw: do you understand the different choices of project options in admin > projects > visual editor > general?
[18:27:15] Most of them yes
[18:27:27] I don't think it would be wise to change any of them
[18:27:32] But I can explain them to you if you want
[18:27:53] yeah, i'm looking for an explanation, not saying it needs changing
[18:28:11] the merge if necessary setting
[18:28:40] how does that compare to other options in that list
[18:28:45] that's mostly what I'm wondering
[18:30:28] Ok
[18:30:42] Do you know what a fast-forward merge is?
[18:31:04] it pulls first?
[18:31:09] maybe not?
[18:31:21] No
[18:31:29] Let me prepare some ASCII art to explain this
[18:31:33] lolz
[18:31:41] i could probably find a definition online..
[18:32:06] i should know this, i've read at least half of the o'reilly book on git
[18:32:42] http://pastebin.com/epBdp2Tb
[18:32:58] So when you merge the branch with D and E back into the mainline, you need a merge commit
[18:33:04] Because both branches have diverged
[18:33:21] So git creates a special kind of commit that has two parents (E and C)
[18:33:36] Makes sense, right? This is what merges normally look like
[18:33:41] However, there is a special case
[18:33:47] Imagine that B and C don't exist
[18:34:04] So someone branched off master at A, commits D and E to their feature branch, and then wants to merge that back in
[18:34:17] However, master doesn't have any new commits
[18:34:53] So you could create a merge commit with parents A and E, but that's kind of superfluous, because you can also just shift D and E on top of A and pretend the whole branching never happened
[18:35:06] The latter is called a fast-forward merge
[18:36:14] http://pastebin.com/mPwDTakK
[18:37:36] So "Merge if necessary" is what git merge does by default. It does a fast-forward merge if possible (i.e. if there are no commits in master between the branch point and the merge point), and a regular merge otherwise
[18:38:07] "Always merge" is the equivalent of git merge --no-ff, it always does a regular merge, so it always creates a merge commit, even if it could've been avoided with a fast-forward merge
[18:38:44] "Fast Forward Only" will simply refuse to create merge commits, so it requires that you rebase things on top of master before you can merge them
[18:39:46] "Cherry Pick" doesn't really merge at all, it just cherry-picks the submitted revision on top of master. This is kind of evil
[18:40:11] It's somewhat similar to git merge --squash, but subtly different
[18:40:34] So these four choices are for the merge strategy that Gerrit uses when you've approved a change and tell it to merge that change into master
[18:40:40] TrevorParscal: Am I making sense?
[18:41:00] yes
[18:41:39] ff is essentially an optimization type of thing, or a cleanup-in-advance sort of thing, to make it less confusing to look at the history and such
[18:41:41] yes?
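The two pastebin links above have long since expired; this is a reconstruction of the kind of diagrams being described, not Roan's original ASCII art:

```
Diverged branches -> a real merge commit M with two parents (E and C):

    A---B---C-------M      (master)
     \             /
      D-----------E        (feature)

If B and C don't exist, nothing has diverged, so git can just move the
master pointer forward to E -- a fast-forward merge, no merge commit:

    A---D---E              (master after `git merge feature`)
```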
[18:41:54] it avoids a "merge" if it's not really needed
[18:41:57] Yeah
[18:42:22] right on
[18:42:23] It makes for less confusing graphs
[18:42:31] agreed
[18:42:37] and I agree that the setting is currently ideal
[18:42:53] by the way, we are now using the trunk branch of mediawiki/extensions/VisualEditor.git for our "fast paced collaboration" stuff, pushing multiple changes to master at once when we have something reasonable done
[18:43:31] ^demon|away: when you get a chance could you update https://bugzilla.wikimedia.org/34138 ??
[18:43:56] TrevorParscal: trunk branch in Gerrit or Github?
[18:44:01] gerrit
[18:44:09] Aha, gerrit
[18:44:13] RoanKattouw: this will help when say inez and christian are working in tandem on a single discrete feature
[18:44:18] For sure
[18:44:28] we jokingly called it trunk cause it's sort of like our little svn
[18:44:34] You know, you can magically create branches in our Gerrit by just pushing to them
[18:44:46] Just in case you find yourself wanting to create branches with more descriptive names
[18:44:58] as opposed to using the UI you mean?
[18:45:24] yeah, I think that's how supertest was made - but i guess gerrit rejects attempts to delete these branches from the command line
[18:45:28] at least it did so for rob and me
[18:45:35] maybe if I had god privileges or something...
[18:45:46] Yeah, I don't think you can delete them from the CLI
[18:46:39] also, if we did work in a branch called foo, merged it into master then deleted foo
[18:46:52] does that cause any problems?
[18:46:54] No
[18:47:04] i mean, it merged the full history, so we can just toss the branch right?
[18:47:13] The commits will still be there, a branch is just a pointer
[18:47:14] keep it clean, drop it when you are done using it
[18:47:18] yeah
[18:47:22] understood
[18:47:33] The only bad thing you can do is orphan commits by deleting the last pointer to them
[18:48:09] Also, note that git push -f (--force) is disabled in Gerrit (it works in github)
[18:48:17] do they then remain in the system forever or is there some garbage collection that can be done?
[18:48:22] There is gc
[18:48:30] figured there would be
[18:48:50] By default, git push won't do things like rebase published commits
[18:48:50] so the problem with orphaned commits is?
[18:49:00] The problem is you might accidentally orphan them
[18:49:22] You can usually get them back with some effort
[18:49:38] i must be missing something then, if you for instance make a branch, commit 5 times, give up on it and delete the branch, you now have 5 orphaned commits (right?)
[18:49:42] and gc will kill them eventually
[18:49:44] Yes
[18:49:48] but you chose to abandon them, so why is that a problem?
[18:49:56] Well, you might not have meant it :)
[18:49:57] because gerrit still thinks they exist or something?
[18:50:02] oh, sure...
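"Magically create branches by just pushing to them" in command form, given the right Gerrit permissions; the branch name is illustrative:

```sh
git push origin HEAD:refs/heads/some-descriptive-name   # creates the branch
# Deleting it from the CLI is refused by this Gerrit setup, as noted above;
# use Admin -> Projects -> Branches in the web UI instead.
```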
[18:50:02] No, this is a general git thing
[18:50:23] so, effectively that's like "the problem with windows is you might accidentally delete your program files folder"
[18:50:30] More or less
[18:50:33] ok
[18:50:48] we all know the best way to get windows to run fast is to delete that folder :)
[18:50:56] deleting System is also good
[18:51:01] lolz
[18:51:05] Well for instance, when you're rebasing or even just amending commits, you're not really changing them, you're just creating new ones and updating the branch pointer
[18:51:19] So the pre-amend / pre-rebase commits are still around, they'll just be orphaned
[18:51:28] sure
[18:51:33] So when I'm doing a rebase I'm not 100% comfortable with, I run "git branch rescue" first
[18:51:51] what does that do exactly?
[18:51:54] That creates a branch "rescue" pointing to the pre-rebase state
[18:52:19] ok, so rescue isn't some special keyword or branch name, it's just your clever naming
[18:52:23] Yeah
[18:52:28] Sorry, that was ambiguous
[18:53:02] sorry, git commands are sometimes tricky because not all options appear to be preceded by - or --
[18:53:07] Then if I've screwed up, I can go back with git reset --hard rescue
[18:53:09] Yeah
[18:53:27] Hmm, well
[18:53:37] I think generally all switches are prefixed with - or --
[18:53:43] It's only the command (commit/rebase/branch/etc) that isn't
[18:53:48] sure
[18:54:05] that's true
[18:54:23] But this is not necessarily obvious :)
[18:54:46] I've been using git for personal projects for a while, say a year and a half at least, but it's pretty deep so thanks for the help, I'm sure I will get much better at it quickly now that I'm using it for work
[18:54:59] Heh no worries
[18:55:13] I couldn't rebase my way out of a paper bag either, before I started using git for puppet
[18:55:53] btw, for whatever reason, ve team has a side-channel in Skype - i guess so we can randomly use audio and video or something
[18:56:02] feel free to open yet another communication channel
[18:56:07] hehe
[18:56:18] I will use that as a back-channel instead of g-chat in the future probably
[18:56:34] * RoanKattouw would prefer to just have #wikimedia-veteam or something
[18:56:34] at least when talking to the ve team
[18:56:39] Then I don't have to run Yet Another Application
[18:56:40] yeah, me too
[18:56:47] not sure why they got on skype....
[18:56:54] maybe one of those inez ideas..
[18:56:59] Tell them to move? You're the boss ;)
[18:57:09] I can see if I can get them to
[18:57:10] lolz
[18:57:22] Also, did Inez and Christian move yet?
[18:57:36] they did, they are where ryan and ian used to be respectively
[18:57:43] Cool
[18:57:59] What did you think of my suggestion of doing a quick standup meeting each morning?
[18:58:07] Oh and did you succeed in wheeling in a whiteboard?
[18:58:11] Those two kind of go together
[18:59:26] Inez_, ChristianWikia ?
[18:59:45] RoanKattouw: yes, I will get you a photo
[18:59:56] Nice
[19:00:29] So basically what I suggested was that we do daily standups in the morning, share our progress and plans, and write that on the whiteboard
[19:00:36] crap, late for lunch with a friend, cya
[19:01:02] Then we have more of a record of our progress, and Terry said that an acceptable method of keeping him up to speed on the project was for him to join or listen in on that standup once a week
[19:02:34] is there a workaround for this? https://bugzilla.wikimedia.org/show_bug.cgi?id=17565
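Roan's rescue-branch trick from a few lines up, end to end. "rescue" is just an ordinary branch name, not a git keyword:

```sh
git branch rescue        # bookmark the pre-rebase commits
git rebase -i HEAD~5     # the risky operation (HEAD~5 is arbitrary here)
# if it went wrong:
git reset --hard rescue  # back to the bookmarked state
# if it went fine:
git branch -D rescue     # drop the bookmark; the old commits become orphans
```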
[19:03:03] trying to view an rss feed of contribs showing diffs instead of the whole page content
[19:03:33] Weird; this works for RecentChanges, right?
[19:04:00] apparently so, i'll test it
[19:06:34] yes the recentchanges works, roankattouw
[19:56:54] <^demon|away> RoanKattouw: Got a minute?
[19:57:00] Sure
[19:57:48] <^demon|away> Ok, I'm going to make the change in the reviewdb, then I'll need you to stop/start gerrit
[19:57:59] OK
[19:58:05] manganese, right?
[19:58:15] <^demon|away> Yep
[19:58:43] <^demon|away> The script is `/var/lib/gerrit2/review_site/bin/gerrit.sh stop|start`
[19:59:07] <^demon|away> You may have to `export GERRIT_SITE=/var/lib/gerrit/review_site/`
[20:00:02] It has a 'restart' command you know
[20:00:08] Anything wrong with using that instead?
[20:00:11] he needs downtime i guess
[20:00:17] Oh, right
[20:00:22] Change the DB during the downtime
[20:00:35] ^demon|away: OK, I'm ready, tell me when
[20:00:39] <^demon|away> No, I can change it before.
[20:00:49] OK
[20:00:52] <^demon|away> But I just realized the field is varchar(50), and our suggested text is 53 chars long.
[20:00:58] lol
[20:01:10] So you need downtime for the ALTER TABLE?
[20:01:41] <^demon|away> No, I'm not doing any alter tables. The restart just gave me an error on gerrit-dev earlier so I figured stop|start was ok
[20:01:50] <^demon|away> Anyway
[20:01:55] OK
[20:02:04] Just say when
[20:02:26] Or do you need time to apply creativity and trim 3 chars off the text?
[20:02:47] <^demon|away> Yeah I need a minute.
[20:02:54] OK
[20:02:56] <^demon|away> "There is a problem with this patchset, please improve."
[20:03:03] <^demon|away> Find 3 characters I can trim.
[20:03:11] <^demon|away> We already changed fix -> improve on purpose.
[20:03:26] patchset->change ?
[20:03:44] Meh that's -2
[20:04:09] Oh, "There's" would be -1
[20:04:19] <^demon|away> Ah, that'll do
[20:04:45] <^demon|away> Perfect. Ok, go ahead and stop|start it
[20:05:23] stopping
[20:05:26] stopped
[20:05:28] starting
[20:05:45] started
[20:06:42] <^demon|away> https://gerrit.wikimedia.org/r/#change,3898 - whee :D
[20:06:58] But!
[20:07:02] https://gerrit.wikimedia.org/r/#change,publish,3793,1
[20:07:14] wtf
[20:07:21] It's updated on that change but not on the other?
[20:07:28] I mean, when viewing the review form
[20:07:52] <^demon|away> I'm confused.
[20:07:53] <^demon|away> What?
[20:07:55] <^demon|away> Looks ok to me
[20:08:15] Hmm
[20:08:24] Maybe there was caching somewhere
[20:08:33] wfm now
[20:09:31] <^demon|away> Yeah, all the review forms + the review table that has a -1 in it should be updated.
[20:09:52] <^demon|away> Old comments are not, those are added to the comment and thus immutable.
[20:09:58] <^demon|away> Not worth cleaning up imho.
[21:12:48] RoanKattouw: article_title and rev_id are now showing up in the AFT4 logs, good job
[21:13:05] yay
[21:14:27] the last annoying thing was the CTA throttling cookie, fabrice was able to reproduce the same problem, if you have any idea why this is still happening...
[21:14:52] I could try to rip out the code that respects the cookie
[21:15:55] my understanding is that we don't want to support that cookie-based throttling at all so it's ok to nuke it
[21:19:39] RoanKattouw you know what, I still see events like ext.articleFeedback@10-pitch-edit-show with no additional data, is that as expected or is it a matter of caching?
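The stop/start dance from this exchange, collected in one place. Note the two slightly different paths quoted above (gerrit2 vs gerrit); which one is right on manganese is not resolved in the conversation:

```sh
export GERRIT_SITE=/var/lib/gerrit2/review_site
"$GERRIT_SITE/bin/gerrit.sh" stop     # window in which reviewdb can be edited
"$GERRIT_SITE/bin/gerrit.sh" start
# gerrit.sh also has a `restart` command, but it reportedly errored on
# gerrit-dev, hence the explicit stop|start here.
```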
[21:20:55] My additional data fix only applies to save-attempt and save-complete
[21:21:10] I didn't know the other events were also lacking additional data
[21:21:31] ah ic - in AFTv5 we actually now have additional data added to all events
[21:21:44] sorry, I was probably not clear about this
[21:22:43] it's not as urgent as save_attempt and save_complete, but it would be good to have to make sure the two log formats are identical
[21:22:57] is that a complex fix?
[21:23:36] Well it takes more time than I want to spend at 11:30pm :)
[21:23:59] sure, maybe we can schedule this and the cookie removal for another day/time
[21:24:19] tomorrow/Monday ?
[21:25:12] I have tomorrow off
[21:25:30] np, let's touch base on Monday then
[21:25:38] OK
[21:25:45] I'll poke at it on Monday and we'll deploy it on Wednesday
[21:26:08] Speaking of which -- I think there's supposed to be a deployment on Wednesday but there's no calendar event and I haven't heard from Fabrice about it
[21:26:35] Could you poke him about this? I'll need to know what to review&deploy on Monday because I'll be on planes all day on Tuesday
[21:29:25] ( DarTar ---^^ )
[21:29:44] sure
[23:25:56] New patchset: Diederik; "Tim rewrote append_char function Removed exotic for/loop Replaced all strtok with strchr." [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/3222
[23:30:23] rmoen: How do I test that HTML5 drag and drop on UW?
[23:31:22] Reedy: I haven't tested it yet. looking