[00:02:01] New patchset: Spage; "Count most popular field values in .csv files." [analytics/E3Analysis] (master) - https://gerrit.wikimedia.org/r/21850
[00:51:54] chrismcmahon: Considering that both of those are generally solved problems, I'm not super-worried about it, but hey, that's me
[00:59:49] New patchset: Spage; "List & count most popular field values in .csv files." [analytics/E3Analysis] (master) - https://gerrit.wikimedia.org/r/21850
[09:28:02] !g I29d729173e673b0422a587a289bf22df9c1ab4ea
[09:28:02] https://gerrit.wikimedia.org/r/#q,I29d729173e673b0422a587a289bf22df9c1ab4ea,n,z
[09:37:15] hashar: I like your idea of documenting the internal scripts with man pages; it shows a sense of professionalism to be willing to produce good, easy-to-use documentation.
[09:37:31] Dereckson: thanks :-)))
[09:37:51] Dereckson: I think it will help people who want to play with beta
[09:37:58] there is a lot of work to do though
[09:40:27] By the way, I see at https://gerrit.wikimedia.org/r/#/c/16606/ that patchset 1 was published as a DRAFT. How do you do that?
[09:41:21] git push origin master:refs/drafts/master
[09:41:31] git-review has a switch too, can't remember which one
[09:41:35] maybe git-review draft
[09:42:04] -D or --draft
[09:47:49] Thank you, that works.
[09:53:35] Dereckson: I fixed the man page issues https://gerrit.wikimedia.org/r/#/c/16606/ Patchset 6
[09:53:39] thanks a ton for the review :-)
[10:11:29] You're welcome.
[11:59:15] ahh, Thunderbird 15 comes with a new GUI :-)
[12:46:11] New patchset: Hashar; "Job for mediawiki/extensions/TitleBlacklist" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21874
[12:46:11] New patchset: Hashar; "TitleBlackList job now report back to Gerrit" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21875
[12:46:41] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21875
[12:46:41] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21874
[15:38:20] * marktraceur applauds wildly as hashar enters the room
[15:38:41] *blushes*
[15:38:46] marktraceur: have you reached Krinkle
[15:38:54] about your JS tests?
[15:39:03] I took myself off the review since I am not really a JS guy
[15:39:13] hashar: Yes! He replied several times and I'm going to jump on it as soon as the UW sprint is over
[15:39:22] great!!
[15:39:25] I am off
[15:39:29] attending a barcamp IRL :)
[15:39:36] cya tomorrow
[15:42:28] * marktraceur is sad that it has nothing to do with a bar
[15:46:19] New patchset: Hashar; "fix up Wikibase injection" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21892
[15:46:48] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/21892
[15:49:19] fun bit of trivia: the first BarCamp was held at Socialtext, because they were annoyed with how elitist FooCamp was. Socialtext was the first commercial wiki company (to my knowledge). (I worked for Socialtext for two years)
[15:49:33] chrismcmahon: Cool!
[15:49:46] chrismcmahon: (I'm halfway through [[Foobar]])
[16:31:00] chrismcmahon: OK, I'm getting the config to work for the tests
[16:31:55] chrismcmahon: And it's beautiful, it really is... except that my local install and Commons are two very different beasts, so there are failing tests everywhere :/
[16:52:18] glad to hear it's beautiful, looking forward to seeing it in action!
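A minimal sketch of the draft workflow discussed above (09:40-09:47): pushing a change to Gerrit as a DRAFT instead of a normal review. The remote name "origin" and the target branch "master" are assumptions, and the exact git-review flag spelling can differ between versions, as noted in the log.

    # Plain git: push to the drafts ref instead of refs/for/
    git push origin master:refs/drafts/master

    # With git-review (flag spelling varies by version)
    git review --draft        # some versions also accept -D

Draft changes stay visible only to the owner and to reviewers who are explicitly added, until the draft is published.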
[16:54:58] chrismcmahon: I'm a little curious as to why the two interfaces have been so starkly different
[16:55:08] I'm getting weird PLURAL bugs on my copy, but not in production
[16:55:48] marktraceur: dunno. lots of changes merged but not yet deployed?
[16:56:37] Hrmmm
[16:57:30] Aha
[16:58:23] My server is cleverly (not so much) sending only the plural form of the message
[16:58:26] But whyyyy
[17:00:03] hashar: the Ext-TitleBlacklist job has a flaw, it seems: it runs twice
[17:00:14] https://integration.mediawiki.org/ci/job/Ext-TitleBlacklist/18/consoleFull#ant-target-12
[17:00:48] I thought one run was for core and one for the extension, but that's not the case
[17:01:14] it should only install once, and then run the full phpunit suite (including the core unit tests), so that it also uses the hooks
[17:01:57] btw, what does it use as mediawiki core? the latest stable release, or the head of master?
[19:01:50] marktraceur: +1 for writing accurate tests, and I'm really happy I was able to contribute to the PLURAL discovery
[19:03:49] *nod* I wish it weren't so complicated to test/fix it
[19:59:28] hey AaronSchulz - do you have a little time today to review a few volunteers' patches?
[20:00:15] maybe
[20:01:23] AaronSchulz: https://gerrit.wikimedia.org/r/#/dashboard/16 has a few that I've added you as a reviewer for
[20:07:50] <^demon> sumanah: Fyi, dashboard/# won't work with 2.5. There's no direct way to see "what does [john doe] have to review" -- you can get close with some queries
[20:07:56] <^demon> And custom "dashboard" things
[20:08:00] whuzzawha
[20:08:05] ok
[20:08:31] <^demon> The "dashboard" is private. It was never really *meant* to be public, which is why it's got such a terrible URL with an opaque UID.
[20:08:52] <^demon> Clicking on a username now gives an "owner:foo" query, which makes more sense with what the other clickable links do.
[20:10:03] ok. And "reviewer:" still works to see which changesets have that person as a reviewer, right?
[20:10:18] <^demon> I don't think anything about that has changed, no.
[20:10:25] <^demon> Lemme get the gerrit-dev instance back up and we'll have docs.
[20:13:56] thanks ^demon
[20:15:49] RoanKattouw: btw, nice work on CORS.
[20:17:25] ^demon: if you have additional Gerrit issues in their code.google.com issue list that you'd like me to star, let me know :/
[20:18:13] sumanah: can you link me as well (so I can star it :) )
[20:18:19] :)
[20:18:36] <^demon> http://code.google.com/p/gerrit/issues/detail?id=1496 is annoying :)
[20:19:15] <^demon> 1436, 1382, 1124 as well.
[20:19:19] <^demon> (All ldap things)
[20:19:35] Krinkle: 429: With branch level permissions, users need an easy way to find an approver
[20:19:47] <^demon> 429 would be nice.
[20:20:46] 764: Reviewer/submitter suggestions based on a change's target repo and the set of files it modifies
[20:21:00] 1479: Expose logs of changes to groups & group membership
[20:21:08] 1300: Expose logs of changes to groups & group membership
[20:21:09] FYI: marktraceur and I are working on https://bugzilla.wikimedia.org/show_bug.cgi?id=39771
[20:21:11] Krinkle: ^
[20:21:27] As it appears difficult to debug, we've decided to do the analysis in here.
[20:21:55] marktraceur zoomed in on https://gerrit.wikimedia.org/r/#/c/20666/ but cannot get it to reproduce anymore.
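The "owner:" and "reviewer:" operators discussed around 20:08-20:10 also work from Gerrit's SSH query interface, which is handy for scripting; this is a sketch rather than anything from the log, and the account name "jdoe" is a placeholder (29418 is Gerrit's usual SSH port).

    # Open changes still waiting on a given reviewer
    ssh -p 29418 jdoe@gerrit.wikimedia.org gerrit query 'reviewer:jdoe status:open'

    # Open changes owned by that person -- what clicking a username now links to
    ssh -p 29418 jdoe@gerrit.wikimedia.org gerrit query --format=JSON 'owner:jdoe status:open'

Space-separated terms are ANDed, so adding status:open keeps the list to what actually needs attention.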
[20:22:07] sumanah: thx, done
[20:22:11] oh, and Krinkle and ^demon, Issue 1274: Support localization/internationalization of gerrit UI
[20:22:12] I initially ran git-bisect and came up with e71bf6850cf74dd3133085cb16fba011c5d496e6 as the cause, but later, when checking out that commit, I couldn't reproduce it. master still has the problem, of course.
[20:22:14] I don't see anything in that patch set that would cause the behavior he observes.
[20:22:26] <^demon> sumanah: http://www.mediawiki.org/wiki/File:DemonsAnnoyingGerritIssues.png
[20:22:29] <^demon> Krinkle: ^
[20:22:36] I can try git-bisecting with e71bf6850cf74dd3133085cb16fba011c5d496e6 as the good commit, and see what happens
[20:23:00] ^demon: truly your annoyance goes beyond anyone else's ;-)
[20:24:33] RoanKattouw: I get "SSH_AUTH_SOCK not set or not pointing to a socket. Did you start your ssh-agent?" when I try sync-file for 8dc8ee8463da26f6c19204317351a60d06b6b982 (jquery.js)
[20:24:54] I know what it is related to, but not sure how to handle it
[20:25:11] Are you using screen on fenari?
[20:25:18] no, regular ssh
[20:25:20] OK
[20:25:26] Did you use ssh -A to forward your agent?
[20:26:08] not that I know of, no. I just have my ssh key added with ssh-add locally (it's listed in ~/.ssh/config) and I connect through ssh fenari.wikimedia.org
[20:26:19] <^demon> Fun gerrit trick: if you're pushing from an environment in which you have no key or ssh access to gerrit (like: labs, in screen, sudo'd), generate a one-time password from gerrit and push over HTTPS. When you're done, just revoke the password :)
[20:26:21] OK, so log out of fenari and ssh back in with ssh -A
[20:26:38] Then on fenari, verify that ssh-add -l lists your key, and you should be good to go
[20:27:03] lol
[20:27:27] RoanKattouw: okay, now it works.
[20:27:30] found it: https://gerrit.wikimedia.org/r/#/c/20911/3/includes/MessageBlobStore.php
[20:27:39] RoanKattouw: Does that forward the private key itself, or a special version of it only?
[20:27:49] <^demon> I don't recommend pushing over HTTPS regularly because having to type a password is annoying (and saving it in .netrc is insecure).
[20:27:56] <^demon> But for one-off usages, it's so freaking useful.
[20:27:57] I assume this is necessary because sync-file connects to another server where it actually happens?
[20:28:01] Krinkle: It sends challenge/responses back to your local agent, the key data doesn't go anywhere
[20:28:09] ah, okay
[20:28:18] (which makes me wonder, why can't fenari do it directly?)
[20:28:26] * RoanKattouw high-fives Krinkle for his first deployment :)
[20:28:33] Indeed!
[20:28:34] congrats Krinkle!
[20:29:02] Nikerabbit: Awesome, do you want to do the honors of uploading a change?
[20:29:06] By default, your local agent won't allow itself to be queried by fenari, you have to explicitly allow that with ssh -A
[20:29:24] siebrand: Evidently Nikerabbit found the problem
[20:29:45] right, otherwise someone else on fenari could do things on my behalf if they're root. though that's a pseudo-security issue in this case, I guess, since it's on fenari anyway.
[20:29:50] Because, you know, forwarding isn't completely risk-free. Anyone who has root on fenari can hijack your agent process and impersonate you as long as the connection is active.
[20:30:01] marktraceur: working on it
[20:30:01] right
[20:30:05] marktraceur: He's a rock star :)
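A sketch of the agent-forwarding steps RoanKattouw walks Krinkle through above. fenari.wikimedia.org is taken from the log; everything else is stock OpenSSH. As discussed, only forward your agent to hosts you trust, since root there can use it while your connection is open.

    # Locally: make sure your key is loaded in the agent
    ssh-add -l                       # should list the key; if not, run ssh-add

    # Reconnect with agent forwarding; challenges get answered by the local
    # agent, the private key itself never leaves your machine
    ssh -A fenari.wikimedia.org

    # On fenari, before running sync-file, confirm the forwarded agent is visible
    ssh-add -l
    echo "$SSH_AUTH_SOCK"            # should point at a socket, not be empty

    # To make it permanent for this host, add to your local ~/.ssh/config:
    #   Host fenari.wikimedia.org
    #       ForwardAgent yes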
[20:30:11] But in practice ... exactly, it's fenari, and most of the roots have other ways of impersonating you if they want
[20:30:28] why is logmsgbot now in wikimedia-tech again, instead of -dev or -operations?
[20:30:38] It's always been in -tech, it never moved
[20:30:46] I thought we moved all that stuff to -operations and left -tech as the community tech channel ([[m:tech]])
[20:30:49] I'm not sure why we never moved it
[20:30:53] ok
[20:31:03] does !log work from anywhere else?
[20:31:04] I do know that !log works in both channels because morebots joins both -tech and -operations now
[20:31:07] right
[20:31:11] there are two bots
[20:31:17] logmsgbot and morebots, yeah
[20:31:19] * sumanah would love for someone to update https://www.mediawiki.org/wiki/MediaWiki_on_IRC to reflect current reality
[20:31:30] Krenair: Thehelpfulone ^ if you feel like some gnoming :-)
[20:31:54] sumanah: https://meta.wikimedia.org/wiki/IRC/Channels is far more up-to-date
[20:35:31] marktraceur: indeed, when I updated [[Mailing lists]], meta was more up-to-date than mediawiki.org
[20:37:57] sumanah: done
[20:38:10] Hm.. yeah
[20:38:14] that doesn't belong on mediawiki.org
[20:38:18] (the wikimedia-* ones)
[20:39:12] this is that old can o' worms
[20:40:01] until we have a real wikitech.wikimedia.org or similar that covers non-MediaWiki tech, I'm fine with things like #wikimedia-tech being in the mw.org list
[20:40:17] Thank you for the update, Krinkle
[20:49:55] chrismcmahonafk: All right, we got it working, and my tests are running now. I'll push it up to gitorious, but I'd also like some help setting up more disparate tests if you have a sec later
[20:56:09] kaldari: https://gitorious.org/mwe-upwiz-testing/mwe-upwiz-testing <-- Oh yes we did.
[20:56:31] sweet
[20:57:02] how do I use it?
[20:57:19] kaldari: Download it via git; the README should be enough to install and run
[20:57:28] ok
[20:57:44] And if you have cases you see not tested, let me know and I can add them
[20:58:04] (kind of shooting fish in a frying pan at this point, since there are so few tests)
[21:18:05] * chrismcmahon watches devs pass around browser test code with satisfaction. It's going to be fun to grow those tests.
[21:18:47] chrismcmahon: Though not simple, since at least some of us don't know Ruby :/
[21:20:42] marktraceur: I think it took you like 5 minutes before you were making improvements? One of the nice things about that testing "stack" is that it makes it (fairly) easy to focus just on the page and on the test. I've seen examples in other languages where you also have to hack on the Page Object model itself, the job-runner, the reporting structure, etc. etc.
[21:21:31] I'd sort of be OK with that if it were JS... I know how to do stuff in JS
[21:21:53] I don't even understand Ruby's conditionals just yet, just kinda winging it
[21:22:10] marktraceur: I did an interview today with a candidate for QA Engineer who found my repo on her own, immediately grokked the Page Object stuff there, and is now refactoring her Java tests with a Page Object model.
[21:22:20] oh that's lovely
[21:22:24] Oh cool!
[21:22:37] marktraceur: conditionals in UI tests are a pretty big smell imnsho
[21:22:56] chrismcmahon: True, but there already are several in this set
[21:23:09] chrismcmahon: I had to add an "or" clause to allow for blacklist/no blacklist
[21:23:27] (Commons blacklists DSC*, my wiki isn't set up for that, so I get two different errors)
[21:23:40] OK. In the long run we'll hack the blacklist at runtime, I hope
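For kaldari's "how do I use it?" above (20:57), a sketch of getting the browser tests running locally. The clone URL scheme, the gem list, and the rspec invocation are assumptions based on the tools named in the log (watir-webdriver, page-object, spec_helper.rb); the repo's own README is the authoritative set of steps.

    # Fetch the UploadWizard browser tests (URL scheme assumed)
    git clone git://gitorious.org/mwe-upwiz-testing/mwe-upwiz-testing.git
    cd mwe-upwiz-testing

    # Install the Ruby pieces mentioned in the log (exact list per the README)
    gem install rspec watir-webdriver page-object

    # Run the specs against your test wiki
    rspec spec/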
[21:29:33] marktraceur: fun bit of trivia: the Fitnesse wiki (for ATDD) expressly forbids conditionals and loops by design. it allows re-usable routines in the form of macros. (people write conditionals into their fixtures for Fitnesse, but it's not good practice)
[21:32:20] someone available for a review?
[21:32:37] Platonides: Urgent? Count me in.
[21:32:57] https://gerrit.wikimedia.org/r/21960
[21:33:06] chrismcmahon: Interesting! So basically it's a more-pure functional subset of the language. Crazy stuff.
[21:33:22] Oh hm, I've seen this bug before, it's a WLM bug
[21:33:33] it's not hard, but I'd like to bribe someone to deploy it before September
[21:33:52] yep, I finally got to it myself
[21:33:56] marktraceur: UI (and API) test folks really don't like conditionals. or loops.
[21:34:05] turned out to be quite simple
[21:34:26] Platonides: I think kaldari and I might be able to do that as part of tomorrow's also-WLM-related deployment of UW. I mean, he'll have to confirm or deny that, I'm not totally sure.
[21:34:37] good
[21:35:04] should I tag it in some way?
[21:35:42] Platonides: I'll let him know when he gets a sec (and he's been pinged, so he'll surely be in here momentarily to say "augh why did mark volunteer me")
[21:35:48] xD
[21:36:47] at which hour are you planning to deploy? the earlier the safer, as there's more time to detect a breakage
[21:37:06] Platonides: I'm not sure, let's take a field trip to the deployment schedule
[21:37:06] we don't want a change merged on the 31st to avoid uploads :)
[21:37:37] Hm, evidently there's no such window yet
[21:37:41] -.-
[21:38:27] marktraceur: https://bugzilla.wikimedia.org/show_bug.cgi?id=39778
[21:38:32] I saw
[21:43:54] btw, there was a report today on the mailing list: "As soon i receive that an image with the same name already exists, I will have a really nightmare."
[21:44:03] (uploading with UW)
[21:44:12] is it in that set of changes?
[21:44:37] Platonides: We have soooo many changes to the error handling code
[21:44:43] | this many |
[21:45:14] I'll take it as "probably yes" :)
[21:46:54] *nod*
[22:02:27] chrismcmahon: UW asks for a leave-page confirmation while in the upload process, is there any way you can think of to handle that in the selenium tests?
[22:02:55] (I'm triggering it with a call to visit_page(UploadWizardPage))
[22:02:58] we've merged like 15 changes in the past 2 days
[22:03:09] so no telling how many problems will be fixed
[22:03:17] maybe 15
[22:03:33] Maybe 15 THOUSAND
[22:03:36] (probably 15)
[22:05:25] marktraceur: I think that's a FF profile thing. I ran into it as a problem for a bit, then it was handled automatically for me. if it's an issue for you, Se can manipulate FF's profile settings at start, see the docs.
[22:06:06] marktraceur: I'd be interested in what happens when you use a different browser
[22:07:09] chrismcmahon: Profile, huh? All right, I'll see.
[22:07:23] marktraceur: updated FF recently?
[22:07:41] chrismcmahon: Ehhh....14.something IIRC
[22:08:13] should be fine
[22:08:47] marktraceur: also do 'gem update' just to be sure, I think the Se crew deal with the "Leave Page" thing all the time
[22:08:57] chrismcmahon: What part of the profile should I be modifying?
[22:09:11] marktraceur: beats me, I never had to go there
[22:09:22] Uhhh
[22:09:28] and it still works for me on both OSX and Linus
[22:11:30] marktraceur: I can do legwork here if you need it. also, #selenium on freenode usually has experts with quick answers
[22:12:02] chrismcmahon: I just imagined you typing on a macbook while Linus Torvalds gave you a piggyback ride. Carry on.
[22:12:23] LOL for realz
[22:12:41] I guess I'll hop on #selenium\
[22:12:49] oops just realized what I typed :)
[22:17:13] chrismcmahon: I believe I've found the _selenium_ part of what I need, but I have no idea how I'm supposed to use the WebDriver instance from the spec file. Tips?
[22:18:44] marktraceur: I should get your latest code and see....
[22:19:09] I only encountered the "leave page" prompt at the very end of the test, when it didn't matter
[22:19:14] Right
[22:19:29] Now, I'm trying to reload the page and do a new test
[22:19:34] Which means I need to handle the alert
[22:19:46] ok
[22:23:54] marktraceur: that alert is pretty strange, looking more
[22:24:53] chrismcmahon: FYI I thought @page.confirm(true) do visit_page(UploadWizardPage) end would work, but it didn't
[22:25:04] (that's in the PageObject docs you sent me)
[22:26:52] marktraceur: funny, SeIDE doesn't even see that alert
[22:27:11] ...
[22:27:38] I'm pretty sure it's an alert
[22:29:04] Hm, maybe not
[22:29:11] Maybe it's an onbeforeunload event
[22:29:12] marktraceur: a workaround might be to do an explicit close on the browser, then re-open (see config.after(:all) in spec_helper.rb)
[22:29:33] chrismcmahon: closing the browser also triggers onbeforeunload
[22:30:01] marktraceur: yeah, but Se will do basically a force-close then (I think)
[22:30:06] chrismcmahon: https://developer.mozilla.org/en-US/docs/DOM/window.onbeforeunload
[22:30:13] chrismcmahon: After a few seconds, but....
[22:30:31] OK, so can I just set a custom handler for the event?
[22:30:56] I think so. I'd be interested in what #selenium has to say
[22:31:08] Right
[22:31:10] Will ask
[22:31:19] me too, watching
[22:41:44] marktraceur: jarib (Jari Bakken) is a freakin' genius. he's the maintainer for se-webdriver and watir-webdriver, and a super-nice guy
[22:41:56] Noted!
[22:50:42] marktraceur: so we're OK now?
[22:52:45] Should be
[22:52:52] I just need to fix stuff
[22:54:23] :) life of a tester
[22:58:19] marktraceur: we're actually dealing with three levels of API here: selenium, watir, and page-object. each level should pass any instruction it doesn't understand down to the next-lowest level, until the command is executable. But that makes things like calling 'execute_script' on the @page object read oddly.
[22:59:02] *nod*
[23:00:08] but it would be the same but worse if we rolled our own, it's still Se at the heart
[23:03:19] hmm, /me wonders if I should talk to the page-object maintainer about that. it might be worth having a well-known representation of the lower-level api(s) available for exactly a situation like calling 'execute_script'
[23:06:43] kaldari: All kinds of new patchsets just for you
[23:08:03] kaldari: https://gerrit.wikimedia.org/r/21820 <-- patchset three is a rebase that should pull in the other change
[23:08:13] Oh and I needed to....hold
[23:10:17] kaldari: 3 and 4 are now rebases, 5 should have the fix necessary
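A rough sketch of the "patchset 3 is a rebase that pulls in the other change" step above: fetch your change and the change it should depend on from Gerrit, rebase, and push the result as a new patchset. Change 21820 is from the log; the patchset number and the dependency's ref are placeholders, and git-review offers "git review -d <change>" as a shortcut for the download step.

    # Check out your change (refs/changes/<last-two-digits>/<change-number>/<patchset>)
    git fetch origin refs/changes/20/21820/2    # patchset number is a guess
    git checkout FETCH_HEAD

    # Fetch the change it should sit on top of, and rebase onto it
    git fetch origin refs/changes/NN/NNNNN/N    # placeholder dependency ref
    git rebase FETCH_HEAD

    # Push the rebased commit back for review
    git push origin HEAD:refs/for/master

Because the rebased commit keeps its Change-Id footer, Gerrit records the push as a new patchset of the same change rather than opening a new one.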
[23:46:57] Hey robla, have time for a make-release question?
[23:47:09] csteipp: sure, what's up?
[23:47:35] If I'm making a new 1.19 release... do I include the latest extensions?
[23:47:59] That looks like what it's doing... but I didn't know if I should pull in past versions of the extensions?
[23:48:49] oh, gross, that's probably a bug
[23:49:20] well, hmm... maybe it's a matter of tagging things correctly
[23:49:59] we haven't been bundling extensions for that long, which was why my first instinct was to go there
[23:50:42] Which *should* be fine... but I haven't tested ConfirmEdit (etc.) with the 1.19 branch...
[23:51:12] It looks like exportExtension just clones and packages the head
[23:52:55] ok... revising theory after glancing at the code
[23:53:40] when the conversion to git happened, we stopped taking branch into account
[23:54:43] TimStarling: advice on what we should do in the brave new world of Git + bundling extensions + making MediaWiki 1.18 tarball?
[23:55:29] "nothing like this will ever be ok from a host on our cluster."
[23:55:35] sorry wrong channel
[23:56:38] it will be fun
[23:57:11] I don't think we should start bundling the extensions in a minor release of 1.18, we should probably just hack the script to make minor releases work
[23:58:22] TimStarling: didn't we already bundle extensions back then? or did Sam start retro bundling? :)
[23:58:30] * robla looks at when that started
[23:59:12] 1.18.2 comes with extensions...
[23:59:30] so you're just worried about the versions changing?
[23:59:41] we don't have to change the versions, we can just use whatever was previously bundled
[23:59:49] Oh, good call
[23:59:53] I'll do that
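To make the "use whatever was previously bundled / take the branch into account" idea concrete, a sketch of how the export step could pin an extension instead of packaging master's HEAD. ConfirmEdit is named in the log; the clone URL form, the REL1_19 branch name, and the placeholder commit are illustrative, not taken from the actual make-release script.

    # What the script effectively does today: package whatever master HEAD is
    git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmEdit.git

    # Safer for a 1.19.x tarball: pin to the matching release branch...
    git clone --branch REL1_19 https://gerrit.wikimedia.org/r/p/mediawiki/extensions/ConfirmEdit.git

    # ...or, for a minor release, check out exactly the commit that shipped
    # in the previous tarball
    cd ConfirmEdit
    git checkout <previously-bundled-sha>    # placeholder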