[14:07:40] <^demon> hexmode: Busy?
[14:08:08] not too busy to help you :)
[14:08:48] <^demon> hexmode: Busy?
[14:09:54] well, if you would stick around, ^demon, I would help you :)
[14:10:15] <^demon> Stupid wifi.
[14:10:17] <^demon> hexmode: Busy?
[14:10:22] heh
[14:10:38] ^demon: you kept popping up and then disappearing!
[14:10:47] sup?
[14:10:55] <^demon> I wanna take a stab at jenkins
[14:11:13] *hexmode hands ^demon a knife
[14:11:28] <^demon> http://ci2.tesla.usability.wikimedia.org:8080/ is the url. No users/logins yet.
[14:12:10] k... now what?
[14:12:18] <^demon> I have Sebastian's "php-template" to get us started.
[14:12:26] <^demon> We should be able to clone that for parts of it.
[14:13:12] ooo... clicking "configure"
[14:13:39] <^demon> Created "MediaWiki" job by cloning php-template.
[14:13:43] <^demon> Didn't make any changes yet.
[14:14:36] <^demon> Added our repo URL.
[14:15:13] <^demon> Ok, this is where I got stuck last time.
[14:15:19] :)
[14:15:27] <^demon> Basically what we need is a script to run install.php each try.
[14:15:35] https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project how to set it up for push
[14:15:51] k, and that can be an ant file, right?
[14:16:31] I think you can just use the build.xml from ci.... lemme fetch it
[14:16:56] <^demon> Ooh that might work :D
[14:17:02] <^demon> paths probably changed.
[14:17:52] most of that is just from the properties file, though
[14:17:58] but yes, some
[14:18:44] <^demon> I thought it was in /trunk/tools
[14:19:12] not sure I checked it in :P
[14:19:24] <^demon> Could've sworn we had.
[14:19:27] I did once but....
[14:19:42] let me check the current version against svn
[14:22:03] ^demon: Can you summarize why Jenkins instead of CruiseControl ? Not that I care too much about _php_ unit, just curious and helps to explain it to others.
[14:22:35] <^demon> It seems to have more cool features I like (like integration with phploc, graphing of results with graphviz).
[14:22:42] <^demon> + larger dev community, seems better supported
[14:22:45] aight.
[14:22:46] <^demon> + lots and lots of plugins
[14:22:55] <^demon> So lots of possibilities IMO
[14:23:03] There seems to be a trend on the phrase "lots and lots" as of this/last month.
[14:23:12] <^demon> Lots of that :)
[14:23:12] I first noticed it during Sue's talk in Haifa.
[14:23:20] "lots and lots of .."
[14:24:47] Krinkle: you should stick around more Americans
[14:24:48] ^demon: Wow, testSwarm has a LOT of work to do (s/TestSwarm/Krinkle). Caching, OOP, Efficiency, Code cleanup/whitespace consistency, file structure, database interaction..
[14:25:00] then you'd here lots and lots more of that
[14:25:02] hear
[14:25:17] It's going to be rewritten from scratch basically. At least that's what the jQuery Testing team is gonna do.
[14:25:42] <^demon> Oh, they're doing work on it now?
[14:25:43] Or rather they want to do it, no progress on it yet though.
[14:25:44] By the jQ testing team you don't mean you, hopefully?
[14:25:54] <^demon> Rather than you doing all the changes, and them pulling them :p
[14:26:00] I'm one of a dozen members.
[14:26:04] Got commit access yesterday.
[14:26:05] Oh, OK
[14:26:10] dozen=12, literally.
[14:26:19] (BTW, my suitcase was delivered to my doorstep about an hour ago, you got yours yet?)
[14:26:30] RoanKattouw: Yep, around 10am got mine.
[14:26:34] Pfft lucky
[14:26:37] Mine got here at 3:30
[14:26:56] Makes sense, though, considering where we live
[14:27:01] RoanKattouw: Maybe you can take a quick look ? http://jquerytesting.pbworks.com/w/page/44121636/TestSwarm-OOP
[14:27:36] Right now it's a bunch of functions and variables with no infrastructure. url-parameter based php-includes.
[14:27:53] The logic is pretty good, the code is just bad.
[14:28:34] Sounds sane, but is missing logic/UI separation
[14:28:40] You'll probably want data model classes
[14:28:53] e.g. if there is such a thing as a testcase, you may want a TestCase class
[14:29:07] <^demon> But it's not a huge amount of PHP, I can't imagine it'd take too long to clean up
[14:29:16] Other than that I can't comment on it much because I have zero familiarity with this code
[14:29:46] <^demon> Hehe https://github.com/Krinkle/testswarm/blob/master/logic/logout.php
[14:32:02] Actually, the logic is pretty small. The core parts it has: addjob-page (adds stuff to the job-table), run.js (ajax request for new jobs), get test-api (query to get most recent un-ran test and returns id and url, passed to run.js which opens up an iframe for it), inject.js (ran from within the iframe, hooks into QUnit-ready event, when fired, it submits through ajax to TestSwarm with test results for the test id and current user agent).
[14:32:04] RoanKattouw: ^
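A minimal sketch of the kind of data-model class RoanKattouw suggests above, applied to the job table Krinkle just described. Everything in it is hypothetical: the class, table and column names, and the use of PDO standing in for whatever database layer a rewrite would actually use.

```php
<?php
// Illustration of the "data model class" idea, not actual TestSwarm code:
// the table, column and class names are hypothetical, and PDO stands in
// for whatever database layer the rewrite would really use.

class SwarmJob {
    private $id;
    private $name;

    private function __construct( $id, $name ) {
        $this->id = (int)$id;
        $this->name = $name;
    }

    // One place that knows the SQL; page scripts ask for a SwarmJob
    // object instead of running their own queries and echoing results.
    public static function newFromId( PDO $db, $id ) {
        $stmt = $db->prepare( 'SELECT job_id, job_name FROM jobs WHERE job_id = ?' );
        $stmt->execute( array( (int)$id ) );
        $row = $stmt->fetch( PDO::FETCH_OBJ );
        return $row ? new self( $row->job_id, $row->job_name ) : null;
    }

    public function getId() {
        return $this->id;
    }

    public function getName() {
        return $this->name;
    }
}
```

The point is only the separation Roan is getting at: pages work with objects, and one class owns the database access.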
[14:34:13] Heck, now that I think about it, we might turn it into an Extension. That would solve the need to duplicate SpecialPage, OutputPage, WebRequest, DatabaseMysql. The Extension would have a few tables (user agents, user agent groups, jobs, runs, clients) and two special pages. The rest can be re-used from mediawiki (user, user groups, Special:userrights)
[14:35:04] <^demon> Eh, I think that's overkill.
[14:35:11] <^demon> You don't need 90% of the MW stack.
[14:35:30] True
[14:35:41] Nevermind
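Purely for illustration, the extension route Krinkle floated above (and then dropped) would have looked roughly like this on the special-page side, assuming the classic $wgSpecialPages registration used at the time; the page, class, table and column names are invented.

```php
<?php
// Hypothetical illustration only: roughly what one of the "two special
// pages" of a TestSwarm extension could have looked like. Names are
// invented, not a real design.

$wgSpecialPages['SwarmJobs'] = 'SpecialSwarmJobs';

class SpecialSwarmJobs extends SpecialPage {
    public function __construct() {
        parent::__construct( 'SwarmJobs' );
    }

    public function execute( $par ) {
        global $wgOut;
        $this->setHeaders();

        // Read recent jobs from a hypothetical swarm_jobs table through
        // MediaWiki's database layer instead of hand-rolled MySQL calls.
        $dbr = wfGetDB( DB_SLAVE );
        $res = $dbr->select(
            'swarm_jobs',
            array( 'job_id', 'job_name' ),
            array(),
            __METHOD__,
            array( 'ORDER BY' => 'job_id DESC', 'LIMIT' => 20 )
        );

        foreach ( $res as $row ) {
            $wgOut->addWikiText( "* Job #{$row->job_id}: {$row->job_name}" );
        }
    }
}
```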
[14:36:07] <^demon> hexmode: Still going to check in that build.xml?
[14:36:52] ^demon: yes otp 1s
[14:37:17] <^demon> k :)
[14:58:05] <^demon> hexmode: It was in /trunk/test-server/
[14:58:15] lame
[14:59:12] k, off the phone
[14:59:22] now... diffs
[15:07:03] ^demon: committed
[15:15:25] <^demon> Yuck, jenkins seems to expect build.xml to be in the source dir.
[15:16:00] <^demon> Ah I can change that
[15:18:53] <^demon> FATAL: Unable to find build script at /var/lib/jenkins/jobs/${env.BUILD_TAG}/workspace/build.xml
[15:19:00] <^demon> I must not be doing that right :\
[15:52:17] <^demon> hexmode: It's running :D
[15:52:20] <^demon> http://ci2.tesla.usability.wikimedia.org:8080/job/MediaWiki/15/console
[15:52:24] <^demon> If you want to follow along
[15:52:28] <^demon> Only took 15 builds, heh
[15:52:32] \o/
[15:53:19] console is killer app of jenkins
[15:53:49] <^demon> Yeah, I love being able to actually follow along with the build.
[15:53:55] FAIL
[15:57:00] <^demon> Almost succeeded :)
[15:57:09] <^demon> The actual tests passed.
[16:01:07] <^demon> Hah
[16:01:09] <^demon> ERROR: Directory '/var/lib/jenkins/jobs/MediaWiki/workspace/api' exists but failed copying to '/var/lib/jenkins/jobs/MediaWiki/builds/2011-08-09_15-55-44/htmlreports/API_Documentation'.
[16:01:09] <^demon> ERROR: This is especially strange since your build otherwise succeeded.
[16:15:34] <^demon> hexmode: Aww https://issues.jenkins-ci.org/browse/JENKINS-7390 :(
[16:18:59] <^demon> Oh crap... phpcs sent the vm into swap
[16:24:22] neilk_: Good morning. Trevor said you were working on an API for authentication or something yesterday (around 3pm) and that I should talk to you to see if you need help/thoughts/whatever, but my brain was fried at the time
[16:24:46] yeah, I am thinking I should post something to wikitech or whatever
[16:24:53] mediawiki.org?
[16:24:56] api for authentication?
[16:24:58] it's hard to get across all the requirements just casually talking
[16:25:12] right, Ryan_Lane -- RobLa told me you & Chad were working on something similar.
[16:25:18] ...
[16:25:27] heh
[16:25:28] not OpenID/OAuth though
[16:25:32] The proxy academic thing
[16:25:39] why don't we just get it over with and do openid and oauth?
[16:25:41] oh right, he explained it
[16:25:59] Ryan_Lane, are you offering to make the extension not suck? :P
[16:26:05] the proxy thing is supposed to use openid
[16:26:20] <^demon> Right.
[16:26:35] in my case, O{penid,auth} doesn't do what I want, it's not for external users, it's for internal systems that might not be mediawiki, and don't map in a 1:1 way with the mediawiki instances.
[16:26:36] is it better to fix the extension, or continue to write non-standard methods over and over and over and over?
[16:26:52] Oh, right, it's THAT
[16:26:58] This is for Hackpad/Etherpad, right?
[16:27:00] how does openid/oauth not handle this?
[16:27:01] exactly
[16:27:40] also, while you all were in Israel or whatever, raindrift decided to work on the Wikia chat thing (and also is investigating a more standard solution such as a Jabber server)
[16:27:45] so now we both need this
[16:28:37] so the idea is how can we authenticate users to a particular wiki (in other words same username as some other wiki, they *can* vary) but in a system that we control, which we want to know as little about mediawiki as possible
[16:28:57] Oh, that's cool
[16:29:01] CentralAuth is not an option really because we want usernames to match if you invoke a chat/etherpad
[16:29:22] How is he working on the Chat thing exactly? It's open source, right? In what way does it not meet our needs?
[16:29:53] meh. I'll just wait for the wikitech-l post, and I'll discuss there :)
[16:30:00] RoanKattouw - he's not in yet, probably on his way
[16:30:04] otherwise I'd get him to explain
[16:30:05] but
[16:30:08] stupid irc
[16:30:23] so, Wikia whipped up this thing in Node.js, which is designed around 1 chat per wiki
[16:30:26] I'm getting responses in batches :D
[16:30:50] but, what if you want a chat on many different pages of the wiki. Maybe every Category page, maybe *every* page, potentially
[16:31:00] you might want to list channels and things
[16:31:03] Oh, right
[16:31:05] I remember now
[16:31:06] I'm confused how openid/oauth don't support this
[16:31:08] and Wikia's thing doesn't do that.
[16:31:29] Ryan_Lane: I can explain, but I'll answer Roan's q first
[16:31:33] anyway
[16:31:50] also, we talked to the Wikia guy, Sean somebody, whilst y'all were in Haifa.
[16:31:59] openid is based on urls, which properly identify users as a specific user on a specific wiki
[16:32:30] *RoanKattouw finds it very ironic the words "whilst" and "y'all" are used adjacent to each other in that sentence
[16:32:45] and, we asked, hey, why not ejabberd, this is scalable and existed already. He was kind of uncertain about why they didn't use jabber and said it had something to do with there not being group chat available for ejabberd. But there is!
[16:33:28] So it may have come down to they Did Not Do The Research, because Ian had this working almost instantly.
[16:33:30] *Ryan_Lane sighs
[16:33:37] my connection is obviously too bad for this :(
[16:33:44] Ryan_Lane: where are you now?
[16:33:57] Jerusalem?
[16:34:02] Or Petra?
[16:34:28] jerusalem
[16:34:33] anyway raindrift/Ian can explain better
[16:34:36] neilk_: So the whole node.js backend for Chat was another case of NIH, basically?
[16:34:41] yup
[16:34:59] Alright, so OpenID/OAuth. I'm also kind of wondering why that can't be used
[16:35:28] ok, it can be, but why would you *want* to do this. OpenID & OAuth are for cross-domain authentication & authorization, not same-domain.
[16:35:45] so the minute you want to pull up the chat window / edit an article, you want them to type in a URL?
[16:35:50] that doesn't make sense
[16:37:14] We just want to have an easy way for a service -- inside our very own cluster -- to verify that a user is who s/he says she is on a particular wiki. That's way easier than /O.*/
[16:37:48] basically, we can just pass the session id in as a kind of token to that service.
[16:38:08] and say "hey, go talk to frwiki at this API url, they'll verify me"
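A rough sketch of that verification step from the service's side, assuming the stock action=query&meta=userinfo API module; the helper name, wiki URL and cookie value below are made up for illustration, not an agreed design.

```php
<?php
// Hypothetical sketch of the idea above: an internal service (chat,
// etherpad) forwards the wiki session it was handed to the wiki's own
// API and trusts the answer. The endpoint shape relies on the stock
// action=query&meta=userinfo module; the function name, URL and cookie
// string are invented.

function verifyWikiUser( $apiUrl, $sessionCookie ) {
    $ch = curl_init( $apiUrl . '?action=query&meta=userinfo&format=json' );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
    // Present the user's session cookie to the wiki; the wiki does the
    // actual check, the service never sees a password.
    curl_setopt( $ch, CURLOPT_COOKIE, $sessionCookie );
    $data = json_decode( curl_exec( $ch ), true );
    curl_close( $ch );

    $info = isset( $data['query']['userinfo'] ) ? $data['query']['userinfo'] : null;
    // Anonymous sessions come back flagged as "anon"; reject those.
    if ( !$info || isset( $info['anon'] ) ) {
        return false;
    }
    return $info['name']; // the wiki username the service can now trust
}

// e.g. verifyWikiUser( 'http://fr.wikipedia.org/w/api.php', 'frwikiSession=...' );
```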
[16:38:45] Ryan_Lane: ping
[16:38:57] seems sending stuff occasionally helps :)
[16:39:01] If I use OpenID/OAuth, I have to create two entire new account systems for chat and concurrent editing and then link accounts and crap.
[16:39:22] neilk_: no. it should handle it for them
[16:39:27] "it" ?
[16:40:05] there isn't any reason the user needs to know anything
[16:40:35] when the user logs in, give them an openid cookie. when they hit the secondary service, check for that cookie, and automatically use openid from our domain
[16:40:49] trust our own applications to act on the user's behalf
[16:40:53] Ryan_Lane: sounds awesome, when can I have that?
[16:40:54] without user interaction
[16:41:09] hide those applications from the list
[16:41:25] as far as the user knows, it's just another part of our app
[16:41:57] <^demon> Ryan_Lane: sounds awesome, when can I have that?
[16:42:03] :D
[16:42:17] <^demon> ^ Hopefully sooner rather than later. I'm tired of us reinventing the wheel with user auth every damn time because nobody bites the bullet and does it right.
[16:42:19] I'm saying we should put our effort into building support for that instead of another method
[16:42:31] ^demon: exactly
[16:42:41] I agree that reinventing the wheel sucks, on the other hand I'd like to be able to demo something *this* week.
[16:43:02] this is stopping raindrift & I from going forward with this fancy new live chat & concurrent editing
[16:43:13] neilk_: "it" being the external app
[16:43:19] chat, or etherpad
[16:43:24] *^demon wishes more people cared about backend support for such things.
[16:43:27] <^demon> I know Ryan_Lane does :)
[16:43:28] Ryan_Lane is lagged like 60 seconds...
[16:44:41] <^demon> Ok, it's time for some lunch 'round here.
[16:45:00] <^demon|away> hexmode: I've got a full build with phpcs, pdepend, etc etc etc running.
[16:45:04] <^demon|away> Maybe it'll finish before I return
[16:45:07] I would be very much in favor of doing the right thing, and if that's Openid, excellent. After hacking away at this for a couple of days I appreciate the complexities of getting this right, and if we can leverage others' work I'm all for it.
[16:45:08] <^demon|away> Build #21.
[16:45:14] That said, the OpenID 2.0 people are all insane
[16:45:56] ^demon|away: awesome
[16:46:03] heh
[16:46:31] well, anyway, I'll respond back to the wikitech-l post when I see it and get a chance
[16:46:49] I understand the need for something right now
[16:47:04] I'm just so tired of all these hacky authn/authz things
[16:47:13] Yay, CA
[16:47:22] Isn't aaron on IRC ?
[16:47:56] Oh, he's in #mediawiki
[16:47:58] nvm
[16:49:38] Ryan_Lane: +1
[17:23:02] neilk_: you want me to set up a toolserver site with the upload speed info?
[17:23:13] (but not today)
[17:23:35] hexmode: I was curious how you gathered stats, but I wasn't asking for any new service
[17:23:58] neilk_: did I answer your q?
[17:24:37] *hexmode tries to make sure he doesn't leave you hanging
[17:25:15] hexmode: where did you explain what method you used to decide that the bug was fixed? I don't see that anywhere.
[17:27:15] neilk_: I was just going by other reports on the bug, that is all. But I'll defer to your judgment if you think it is still a problem.
[17:27:43] hexmode: you said something about timing the uploads, though.
[17:27:51] hexmode: I have no evidence either way
[17:27:55] neilk_: given the lag problems on the toolserver, I'm not sure I trust reports that a bot on the toolserver is slow.
[17:28:15] I'm looking for some reasonable way to know whether the problem exists on our end or if it's just certain users.
[17:28:22] also, I'm lazy
[17:28:33] also, it's not really my job, except that people think that it is.
[17:28:40] so I get complaints anyway.
[17:29:03] neilk_: so, I uploaded data from the toolserver bot that showed an *approximation* of that bot's upload rate
[17:29:27] basically time between upload timestamps/size
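A small sketch of that approximation, with a hypothetical input format and function name: each rate is estimated as a file's size divided by the time since the previous upload's timestamp.

```php
<?php
// Sketch of the approximation described above: upload rate estimated
// from the gap between consecutive upload timestamps and each file's
// size. The input format and function name are hypothetical.

// $uploads: list of array( unix timestamp of the upload, size in bytes ).
function approximateUploadRates( array $uploads ) {
    usort( $uploads, function ( $a, $b ) {
        return $a[0] - $b[0];
    } );

    $rates = array();
    $count = count( $uploads );
    for ( $i = 1; $i < $count; $i++ ) {
        $seconds = $uploads[$i][0] - $uploads[$i - 1][0];
        if ( $seconds > 0 ) {
            // Bytes per second, assuming the whole gap was spent uploading,
            // so this is at best a lower bound on the real transfer rate.
            $rates[] = $uploads[$i][1] / $seconds;
        }
    }
    return $rates;
}
```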
[17:30:17] I also gave the data to a researcher who is really good at visualizing data
[17:30:26] she may find something better
[17:31:02] But I think a report that doesn't focus on one user would be best
[17:31:28] as long as we can find a particular user in it
[17:31:43] *hexmode makes a small change in his script for this
[20:35:12] <^demon> hexmode: So, since the code sniffing, analyzers and everything else take a really really long time to run... I'm thinking of scheduling those for some n-day interval. The normal "build & phpunit" should be fine to do per-commit since it only takes ~2mins or so
[21:13:41] question for people familiar with ResourceLoader and OutputPage: I'm trying to generate a page that has no mediawiki chrome (no sidebar, etc), but I want to keep using RL for fetching JavaScript resources and CSS. Is there a way to do that? If I use $wgOut->setArticleBodyOnly(true);, I lose all the page output (headers,