[01:07:30] hi [01:07:34] hi [01:07:56] ^demon: the auto-update does not show yet :( [01:08:29] <^demon> Hmm...I did the same thing I did for mediawiki/core. [01:10:07] <^demon> Fale is your username on packagist, right? [01:10:17] ^demon: yes [01:10:40] <^demon> Hmm, the username is correct, and the token is the one you /msg'd me yesterday :\ [01:10:50] <^demon> Too bad github has no logs for this, impossible to tell why it's failing. [01:11:24] ^demon: is there any weird box to be flagged? [01:13:01] <^demon> Nope. I tried the "Test Hook" button, and it looked successful. [01:13:34] <^demon> I don't see DataValues as a package on packagist--link to that? [01:13:50] https://packagist.org/packages/mediawiki/data-values [01:14:52] <^demon> Hrm...I really dunno. [01:15:00] <^demon> It Just Worked(tm) when we did it before. [01:17:06] ^demon: I see :( We could try adding you as co-maintainer of the packagist package [01:17:27] <^demon> I don't have a packagist account :p [01:17:46] <^demon> Let's see what user we used for mediawiki/core. [01:17:48] Oh, Ok :D [01:17:50] <^demon> Maybe we can add it [01:17:56] exactly :) [01:18:15] <^demon> user is "mediawiki" [01:18:28] I'm going to add it [01:18:43] ^demon: done :) [01:19:21] ^demon: auto-update is now active :) [01:19:39] <^demon> I hadn't changed it yet. [01:19:51] ^demon: BUT IT WORKS [01:19:54] (sorry for cap :)) [01:19:56] <^demon> Well, yay :P [01:20:18] ^demon: I think that the repository owner (on github) has to be a maintainer [01:20:30] or something like that [01:24:38] ^demon: thanks a lot for your help :) [01:24:42] <^demon> yw [01:43:28] * SigmaWP waves to anomie  [01:43:31] Wanna file a bug for me? [01:43:48] hi SigmaWP, what's up? 
[01:44:17] anomie: The camelCase naming convention at https://www.mediawiki.org/wiki/Extension:Scribunto/Lua_reference_manual#Scribunto_libraries goes contrary to Lua's standard library [01:44:47] kaldari: (I know you asked a while ago but) you might find http://gerrit-stats.wmflabs.org/ interesting [01:44:58] In order to maintain consistency, aliases that comply with the standard library should be added [01:45:10] Damianz: thanks! [01:45:12] or the functions could be renamed completely [01:45:20] You'll want to talk to TimStarling about that one. MediaWiki generally uses camelCase for functions, and Tim decided our Lua code should match [01:45:30] ah. [01:46:40] thanks anomie [01:48:33] runningyourwordsintogetherisjuststupid [01:48:49] sadlyicanstillreadwhatyouaresaying [01:48:50] ilikefollowingconventionsbutyouhavetodrawthelinesomewherebetweenconsistencyandstupid [01:49:00] what_about_this? [01:49:13] itsaconventionwhichoftenconfusespeopleespeciallynonnativespeakersofenglish [01:49:14] :) [01:49:16] CamelCaseIsFarMoreReadableReally [01:49:33] <^demon> lowerCamelCaseIsFarSuperiorToUpperCamelCase [01:49:43] lower_case is best_case! [01:49:43] ButCamelsHave2Humps [01:49:49] you know, arabic has this problem [01:49:53] underscores_are_more_readable_but_are_ugly [01:50:01] <^demon> TimStarling: Camel humps? [01:50:14] psh, you need more Python [01:50:17] We could just write a NLP for the mediawiki source tree :D [01:50:22] garden path sentences [01:51:04] you know, when you read a line without spaces, if you guess wrong, it might take another few words before you realise your mistake [01:51:27] really? I just make up the words in the middle to match :D [01:52:01] if the sentence is plausible with another analysis for a particularly long time, like the whole sentence, then it's a garden path sentence [01:52:16] ah [01:52:24] because it leads you down the garden ptah [01:52:25] path [01:53:01] * SigmaWP nudges, so what about lower_case, eh? :) [01:54:24] what about it? 
[01:54:33] it's not anyone's convention [01:55:35] Hm, ok then [01:58:24] * SigmaWP waves [02:57:57] should I be able to see checkuser data from September 2012 still? [02:58:18] it's been more than 3 months but sometimes the script misses [02:59:02] but by like two months? [02:59:42] jdelanoy: no [03:00:08] :\ [03:00:59] Hmmmmm. [03:01:14] sswiktionary is the project I'm looking at now, specifically http://ss.wiktionary.org/w/index.php?title=Special:CheckUser&user=60.169.77.119&reason= if someone wants to look at it [03:01:22] Something seems broken. [03:01:35] jdelanoy: Deletions from the database table are triggered by every 100th edit or something, I think. [03:01:41] hmm [03:01:44] So on an inactive wiki, data might linger. [03:01:49] oh, ok [03:01:51] But please file a bug if you think there may be one. [03:01:55] Someone can investigate. [03:02:02] I don't actually know if there is one or not [03:02:03] https://en.wikipedia.org/wiki/Clapper_v._Amnesty_International looks terribly broken to me. [03:02:28] Susan: I agree [03:03:13] fixed [03:03:16] purged and fixed [03:04:23] legoktm: always try purging first :P [03:04:39] meh [03:04:48] Purging ruins the test case. [09:04:26] Reedy: perhaps we can continue in here [09:04:31] heh [09:05:17] A quick bash script to query all the tables would suggest the lowest (earliest) expiry date would look to be correct [09:05:29] so delete stuff created 6 weeks ago, where the expiry limit is a year: look for things that have an expiry time of now + limit - 6 weeks, right? [09:05:32] something like that [09:05:48] which is in fact what the script does now [09:05:56] ok that's cool. [09:06:13] when you ran it what did you give as options? 
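The CheckUser retention behaviour described above (deletions triggered only as a side effect of edits, so stale data lingers on quiet wikis like sswiktionary) can be sketched roughly like this. This is an illustration, not the actual CheckUser extension code; the constants, function names, and simplified cu_changes schema are assumptions:

```python
import random
import sqlite3
import time

# Assumed values for illustration; the real retention period is a
# config setting, and the real purge trigger lives in CheckUser code.
CU_MAX_AGE = 90 * 24 * 3600   # roughly "more than 3 months"
PURGE_CHANCE = 1 / 100        # "every 100th edit or something"

def purge_old_rows(db, now):
    """Delete checkuser rows older than the retention window."""
    cutoff = now - CU_MAX_AGE
    db.execute("DELETE FROM cu_changes WHERE cuc_timestamp < ?", (cutoff,))

def on_edit(db, now=None):
    """Purging only happens as a side effect of edits, so a wiki with
    no recent edits keeps rows well past the retention window."""
    if random.random() < PURGE_CHANCE:
        purge_old_rows(db, now if now is not None else time.time())
```

The point of the sketch is the coupling: with no edits, `on_edit` never fires, and data from September 2012 can still be visible months past the cutoff.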
[09:06:49] ah found it [09:06:57] mwscript purgeParserCache.php aawiki --age=2592000 [09:06:58] Deleting objects expiring before 08:39, 27 February 2013 [09:06:59] hm [09:07:26] which seems wrong [09:07:41] yeah, gotta figure that out [09:08:27] "Delete objects created more than this many seconds ago, assuming $wgParserCacheExpireTime has been consistent." [09:08:38] of course the script does say 'assuming that the parser cache expiry time is constant' which it isn't, as of now :-/ [09:08:44] yep [09:09:47] so instead of things that expire now + 1 year - 30 days or whatever, it will do now + 1 month - 30 days [09:09:54] yeah that's not any good [09:11:40] wait. that's ok. [09:12:12] this is about 'created a month ago or more' not 'expired a month ago or more' [09:12:43] ohh [09:13:28] so the next thing is the impact of the parser cache expiry limit [09:13:45] there will be a bunch of stuff in there that expires [09:13:50] 9 months from now [09:13:57] it won't get expired for a looooong time [09:14:09] (9 months is an example) [09:14:48] it will get expired when we get to 8 months from now, I guess (sound right?) [09:16:46] $baseConds = array( 'exptime < ' . $db->addQuotes( $dbTimestamp ) ) [09:17:49] er 9 months from now gah [09:18:39] because at 9 months from now, it will check the expiry timestamp, see it's the same as 'now', see the parser cache expiry limit is 30, and say 'ah this must have gone in 30 days ago, time to toss it' [09:19:29] so that config change was problematic for that reason [09:19:55] we can just kill some parser cache entries that don't expire for ages [09:20:16] and/or replace the expiry date rather than nuking them from the cache [09:20:30] anything > say the end of march, set to the end of march [09:20:37] sure [09:21:51] which means we might have stuff in there as much as 60 days old before it gets cleaned up, but at end of march it will be fixed up [09:23:05] that foundationwiki item turn up in any of those shards? 
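The cutoff arithmetic being worked out above can be written down directly. The table stores only the expiry time, so purgeParserCache.php has to infer the creation time from it, which is why the script warns "assuming $wgParserCacheExpireTime has been consistent". A minimal model of that inference (hypothetical names, not the script's actual code):

```python
# Durations used only for the example.
YEAR = 365 * 24 * 3600
MONTH = 30 * 24 * 3600

def purge_cutoff(now, age, expire_time):
    """Entries are stored with exptime = created + expire_time, so
    'created more than `age` seconds ago' has to be inferred as
    exptime < now - age + expire_time.  The inference is only valid
    while expire_time is the same value that was in effect when the
    entry was written.
    """
    return now - age + expire_time
```

Under the old one-year expiry, --age of 30 days yields a cutoff roughly 11 months in the future. After the expiry drops to a month, the same flag produces a cutoff of about `now`, so entries written under the old setting (exptime far in the future) are not selected until their exptime actually arrives, which is the lingering-entries problem discussed above.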
[09:27:05] SELECT exptime FROM pc\d{3} WHERE SUBSTRING( keyname, 0, 20 ) = "foundationwiki:pcache"; [09:29:50] foundationwiki:pcache:idhash:21087-0!*!0!!*!4!* and timestamp 20120919200207 (copying in here for sanity's sake) [09:31:16] I didn't know the table name could take regexps like that [09:31:24] is foundationwiki even popular enough in needing to be cached? [09:32:13] not 0 based indexed [09:32:19] SELECT exptime FROM pc\d{3} WHERE SUBSTRING( keyname, 1, 21 ) = "foundationwiki:pcache"; [09:32:55] Oh, it can't [09:33:02] I don't think.. [09:33:25] I was just doing it to remember I needed to change it [09:34:26] reedy@fenari:~$ wc -l foundation.txt [09:34:26] 9522 foundation.txt [09:34:38] minus up to 2 x 255 [09:34:38] ah [09:34:58] reedy@fenari:~$ grep 2012 foundation.txt [09:34:58] reedy@fenari:~$ [09:35:00] Nada, apparently [09:35:44] 95 2013, 8917 2014 [09:37:33] The difficulty is trying to find broken pages [09:37:45] Made slightly harder by the fact resources now exist ;) [09:38:18] er [09:38:25] well wtf [09:41:32] Bleh, I should get some sleep, meeting in just under 8 hours? [09:41:45] er yeah [09:42:26] guess the footer will wait [09:42:30] ttyl [09:42:56] Yaa :) [09:44:45] Reedy: http://www.guardian.co.uk/science/2013/feb/25/sleeping-six-hours-night-activity-genes they say [09:45:21] I've got 6-6.5 before I need to get up ;) [09:45:44] do it for your genes! [09:47:20] people have time to sleep? I thought that is what coffee was for? [09:50:53] don't like coffee bleah [10:03:52] apergos: you are not the only one here :) [10:04:20] yay for that [16:26:49] Reedy: around? [16:26:49] Krinkle: RoanKattouw_away: i forget, who do i talk to about tablesorter bugs? [16:27:07] someone reported one to OTRS, i can reproduce it [16:27:11] the Enroll action on course pages is giving a fatal error. [16:27:14] tried debug mode and got a stack [16:27:19] jeremyb_: afaik neither of us maintain that, report it on bugzilla. [16:27:23] ragesoss: what error? 
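The index fix above (SUBSTRING( keyname, 0, 20 ) becoming SUBSTRING( keyname, 1, 21 )) works because MySQL string positions are 1-based: position 0 matches nothing, so the first query compared an empty string against "foundationwiki:pcache" and returned no rows. A small Python model of MySQL's SUBSTRING behaviour, for illustration only:

```python
def mysql_substring(s, pos, length):
    """Model MySQL's SUBSTRING(s, pos, len): positions are 1-based,
    pos = 0 yields the empty string, and a negative pos counts back
    from the end of the string."""
    if pos == 0:
        return ""
    if pos > 0:
        start = pos - 1
    else:
        start = len(s) + pos
        if start < 0:  # |pos| longer than the string: nothing matches
            return ""
    return s[start:start + length]
```

Note also that "foundationwiki:pcache" is 21 characters, so the length had to change from 20 to 21 along with the start position.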
[16:27:40] http://en.wikipedia.org/wiki/Special:Enroll/Example_University/Example_Course_(2013_Q1) [16:27:40] PHP fatal error in /usr/local/apache/common-local/php-1.21wmf10/extensions/EducationProgram/includes/TimelineGroup.php line 67: [16:27:41] Class 'UnknownGroup' not found [16:27:56] Krinkle: well, now i've got a stack, i was thinking maybe someone could help me make sense of it :) [16:28:04] Krinkle: i guess i'll just file [16:28:16] (this is my first time debugging in chrome fwiw) [16:28:58] ragesoss: and the ID of that particular instance of the error? [16:29:24] jeremyb_: how do I find that? [16:29:38] ragesoss: should be something on the same page where you just pasted from [16:29:57] nope [16:30:36] huh. i can reproduce ragesoss's error and don't see a hash [16:31:55] jeremyb_: I could help you out, but I'm in the middle of something [16:32:03] I might be the one to look at it later, or someone else might. [16:32:05] Krinkle: k. i'll play with it a little [16:32:15] Krinkle: i thought one of you was the owner or something. maybe [16:38:43] bbl [16:42:55] ragesoss, that error doesn't make sense [16:43:05] the class UnknownGroup is defined in that same file! [16:48:16] :( [16:54:40] Platonides, jeremyb_ it looks like we still enrolled successfully. [16:54:41] ;/ [16:54:55] http://en.wikipedia.org/wiki/Special:Log/student [17:51:00] !log updated payments cluster to eab489d0889a [17:51:02] Logged the message, Master [18:25:30] marktraceur: ping re EtherpadLite [18:25:35] sumanah: Hi! [18:25:38] it seems down [18:25:44] I saw your other ping, Reedy said it went down [18:25:47] it's down again now [18:25:52] Aha [18:26:01] Caught in the act [18:26:06] sumanah: Is there load on the server? [18:26:09] yes [18:26:14] ~25 people in a meeting right now [18:26:32] That might do it [18:27:03] sumanah: That was an actual crash [18:27:22] ok. What should we do now? [18:27:43] Hm. 
[18:27:47] * marktraceur thought he put it back up [18:31:20] I'm still getting a 503, marktraceur [18:31:30] sumanah: The service is running now, but I can't reach it either [18:31:32] (not a nag, just checking in) [18:32:15] ok. marktraceur can you possibly grab a particular pad's data for me? [18:32:20] I can put it on etherpad.wikimedia.org [18:32:24] Hm [18:32:32] * marktraceur hasn't tried that before but might be able to [18:34:37] sumanah: Yes, I can [18:34:58] ok. [18:35:00] sumanah: But it looks like it's up again [18:35:10] sumanah: I'd suggest transferring it now :) [18:35:52] thank you marktraceur [18:36:03] sumanah: Sorry about that, I was trying to figure out something with the logs and it must have caused a fatal permissions error or something [18:36:09] wow [18:41:00] Question: Does the number of votes a bug has actually affect how quickly someone will fix it? [18:41:18] No. [18:41:21] Not directly. [18:41:30] how about indirectly? [18:41:30] We are considering turning off votes/voting for that reason [18:41:49] andre__: ^ maybe you can speak to this [18:41:57] ah. well that's kind of depressing. [18:42:22] YairRand, I don't think that anybody cares about votes, to be honest. [18:42:32] so what is an effective way to get bugs noticed? spamming IRC? :) [18:43:02] yes [18:43:04] tbh [18:43:10] YairRand: That's a good question. So, have you heard about volunteer product management? [18:43:16] and indirectly it even misses any relation if you don't know the highest votes in the system. - are 50 votes a lot? or not? [18:43:19] on kde we have lists for e.g. "most wanted feature" etc. 
there we use votes for it [18:43:25] no one looks at votes that I know of, they are just another way to keep the community at bay :-P [18:43:36] YairRand: https://blog.wikimedia.org/2012/11/21/lead-development-process-product-adviser-manager/ [18:44:05] YairRand: also, andre__ as Bug Wrangler is always available as your first point of contact to say that a particular bug is urgent or important and explain why; he can then escalate [18:44:12] yepp [18:45:13] and if nobody looks at votes I'd like to switch them off. However some would prefer to document the votes (database extraction and then write script to add last number of votes as a Bugzilla comment before switching it off?). But that's very low priority for me. [18:45:17] YairRand: another important way to get bugs noticed: explain clearly, in the bug's title ("summary") and comment, why it is urgent -- even things that may seem obvious to you may not be obvious to another person [18:45:46] *that* may seem obvious to you but it hasn't been obvious to other bug reporters :-) [18:46:42] ah. well, Bug 27488 is getting really annoying for the enwikt community, as it's blocking deployment of a major script that was approved by vote a couple months ago... [18:47:02] !b 27488 [18:47:02] https://bugzilla.wikimedia.org/27488 [18:47:28] "Allow user scripts to load from position top" [18:47:35] YairRand, hmm, that ticket is assigned to Roan. [18:48:02] (for two years now. Hmm.) Plus has seen quite some WONTFIX/reopening. [18:48:35] Hmm indeed. [18:49:03] so I'd first try to contact Roan if I was you and ask if he's really assigned to this (and if not, he should reassign to default). :-/ [18:49:51] also YairRand *please* state in the bug that this is a blocker for your community, and link liberally to the script, decision, other discussion, etc. [18:59:02] Thanks YairRand [19:04:36] Would it be a bad idea to have as a general rule that the on-wiki response to "why doesn't this thing [borked by some bug] do what we want?" 
is "It's bug XXX, and it's probably not going to be fixed anytime soon unless people go spam irc."? [19:05:31] That's a bad idea [19:05:32] I think [19:05:47] I think the general answer "it's bug xxx and so we should bug Andre" is better [19:06:06] telling people about the bug in Bugzilla is an excellent idea [19:06:19] ah [19:06:40] also please tell people about https://blog.wikimedia.org/2012/11/21/lead-development-process-product-adviser-manager/ which is an opportunity to influence this kind of prioritization [19:07:37] sumanah: where is the list of those volunteer PM/advisers? [19:08:01] There's no public list consolidating them; do you think there needs to be? [19:08:34] Nemo_bis: https://www.mediawiki.org/wiki/Wikimedia_Platform_Engineering and https://www.mediawiki.org/wiki/Wikimedia_Features_engineering and hub pages like that will list product managers [19:08:38] in case you don't get answers on stuck bugs I'm happy to help. [19:08:39] and that includes volunteers [19:08:46] best is to ask assigned developers first, if existing. [19:09:33] but yeah, manpower is limited, so providing good reasons on a bug report why something is important is very helpful to be able to judge its importance [19:10:07] sumanah: I was just curious [19:10:21] Nemo_bis: Right now we have Jan (Kozuch) on Lua/Scribunto & Mariya on data dumps. 
[21:10:21] maybe the blogpost could be copied to a wiki page [21:10:29] ah [21:10:55] Nemo_bis: guillom is working on improving the volunteer product management intake/pipeline so I'll point him to your suggestion (copy the blogpost to a wiki page) [21:11:06] "the blogpost" being https://blog.wikimedia.org/2012/11/21/lead-development-process-product-adviser-manager/ [21:11:29] ah ok, already being taken care of I see :) [21:11:57] Nemo_bis: sometimes the summaries on pages like https://www.mediawiki.org/wiki/Wikimedia_Platform_Engineering (activity statuses) talk about stuff like this [21:12:04] Nemo_bis: I'm in the process of adapting the blog post to https://www.mediawiki.org/wiki/Product_development [20:05:25] !log updated payments cluster to fcbf9211c9836b [20:05:28] Logged the message, Master [20:47:54] mwalker|grumpy, pgehres|foods: I love you guys. [20:48:05] Although the food might help with the grumpy. You never know. [21:08:59] Isarra: what have we done today to receive this love? [21:09:10] ...something. [21:09:28] * Isarra hefts a bottle of cough medicine in salute. [21:09:52] fair enough; lifts his water bottle in return! [21:33:38] could someone please tell me where we define wikis as RTL? which file at http://noc.wikimedia.org/conf/ ? [21:34:52] Is it not based on the site language? [21:35:10] it is, I think [21:35:44] and hence languages/MessagesFOOBAR.php [21:35:45] $rtl = true; [21:35:59] okay, so where do we define the languages that are RTL? [21:35:59] right. [21:36:14] As above [21:36:22] except path fail [21:36:31] this is in mediawiki core, in languages/messages/ [21:36:43] languages/messages/MessagesAr.php [21:37:10] okay, so in MW core, there is an inbuilt list that basically knows the languages and says RTL [21:37:21] <^demon> No, there's no list of known rtl languages. [21:37:27] <^demon> But when you load a language, you can know it's rtl. 
[21:38:16] that might be right, but I don't follow the logic [21:38:35] sDrewth: http://dpaste.de/cWN40/raw/ [21:38:43] click [21:39:12] okay, so they are defined [21:39:45] so the config files specify $wgLanguageCode, mediawiki goes and looks at the messages file for the language specified by that value, and checks whether it's rtl or not [21:41:13] afaik, at any rate. [21:52:43] ori ... En_rtl ... we have a wiki that puts EN in reverse, or is that a test place? [22:01:59] AaronSchulz: do you know what actually results in this query? https://ishmael.wikimedia.org/more.php?host=db1043&hours=24&checksum=10443322987523193794 [22:03:06] Special:ActiveUsers [22:04:30] ok, very infrequent [22:04:43] seems slower than it used to be [22:04:59] AaronSchulz: the most common enwiki watchlist query in the slow log is https://ishmael.wikimedia.org/more.php?host=db1043&hours=24&checksum=3519460255199439721 [22:05:23] it's hanging on results 150-200 [22:05:48] sDrewth: heh, no idea. http://www.mediawiki.org/wiki/Special:Code/MediaWiki/96955 gives a hint about its purpose. 
[22:07:08] the slow log is 100% of actually slow queries, i don't see anything that would point directly to 10s watchlist page loads [22:07:22] AaronSchulz: the more common version from the sample log is https://ishmael.wikimedia.org/sample/more.php?host=db1043&hours=24&checksum=11865465699465735771 [22:08:23] 10s does indeed not seem to be common [22:08:49] heh, http://en.wikipedia.org/w/index.php?title=Special:ActiveUsers&offset=1+Promotional timed out at squid ;) [22:09:30] TimStarling: :) [22:10:18] cookies [22:11:14] * TimStarling licks yum yum [22:11:20] one for now, one for later [22:17:10] binasher: I don't get why the activeusers thing joins on user [22:18:21] it can just use rc_user [22:18:22] * AaronSchulz sighs [22:18:35] * AaronSchulz changes some code [22:18:52] AaronSchulz: Make sure it's the right code [22:19:45] actually if it used user_editcount instead of grouping it would be fine to join on user [22:19:58] of course it wouldn't be the "recent" edits anymore [22:21:27] AaronSchulz: the performance of ActiveUsersPager is amazingly better with mariadb [22:21:45] just the difference in the explains makes me :D [22:22:22] joins are so much smarter now. it's finally a proper rdbms! [22:22:24] I'm removing the user table, at least that avoids massive random looks for rows to throw away [22:22:37] *removing from the query ;) [22:22:41] I was about to say [22:22:49] though that would fix it too [22:23:04] There would've been no point you and Tim investigating the token issue then [22:23:42] binasher: Do you have any stats for how much quicker stuff overall seems to be on mariadb? [22:23:49] quicker/less memory/whatever [22:24:57] AaronSchulz: http://pastebin.mozilla.org/2181832 [22:25:58] yeah... [22:26:05] heh [22:26:25] i should migrate the enwiki special / watchlist / etc. db to mariadb [22:26:56] sounds like a big win [22:30:12] I guess they're all getting migrated in the near future? 
[22:32:56] at some point [22:33:09] enwiki at least seems safe to migrate now [22:34:22] other wikis don't work with mariadb 5.5.28 which is what we've been testing, but the issue is fixed with the current stable release, 5.5.29 [22:35:49] i'm packaging a new build of 5.5.29 but other issues might crop up though [22:36:08] What was wrong with them? [22:40:27] Reedy: 5.5.28 has a nice little bug where replace queries utilizing a unique key that isn't the pk fail [22:40:44] bah [22:40:48] i'm surprised it hasn't been an issue with enwiki [22:41:00] wikidatawiki was the first to hit it [22:43:24] binasher, maybe you should add a test for that [22:43:32] nowadays you can't even rely on the database :( [22:43:38] there are also some general incompatibilities between mysql 5.1 and 5.5.. the only one we've hit so far is that unsigned int types are actually enforced. the AFT schema has an unsigned int column that it occasionally tried to set to a negative value, which is an error on 5.5. that code has been fixed but i wouldn't be surprised if extensions used on projects other than enwiki have issues [22:45:37] when i opened a bug against mariadb for the replace key issue, monty responded personally within a few hours [22:49:14] that's good [22:49:21] and a pretty big bug it was [22:52:29] Ryan_Lane: Around? [22:52:33] yes [22:52:35] what's up? [22:53:07] Ryan_Lane: hashar and I are chatting about Parsoid on Jenkins and came across a call to npm in the Parsoid puppet class - was that intentional? [22:53:58] eh? 
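The 5.5.28 bug mentioned above concerns REPLACE resolving a conflict through a unique key that is not the primary key. What the correct behaviour looks like can be demonstrated with SQLite, which implements the usual REPLACE semantics (delete the conflicting row, insert the new one); the table and column names below are invented for the example, not the actual wikidatawiki schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# ss_key is UNIQUE but not the primary key -- the case that failed
# on MariaDB 5.5.28.
conn.execute("""CREATE TABLE site_stats (
    ss_id INTEGER PRIMARY KEY AUTOINCREMENT,
    ss_key TEXT UNIQUE,
    ss_value INTEGER
)""")
conn.execute("INSERT INTO site_stats (ss_key, ss_value) VALUES ('edits', 100)")

# The conflict is detected via the unique ss_key, not the primary key:
# the old row is deleted and the new row inserted in its place.
conn.execute("REPLACE INTO site_stats (ss_key, ss_value) VALUES ('edits', 101)")

count, value = conn.execute(
    "SELECT COUNT(*), MAX(ss_value) FROM site_stats WHERE ss_key = 'edits'"
).fetchone()
```

After the REPLACE there is still exactly one 'edits' row, now holding 101; on 5.5.28 a query of this shape failed outright, which is what wikidatawiki hit first.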
[22:54:06] I have no clue [22:54:10] I didn't write any of that stuff [22:54:13] Asking you because you reviewed it [22:54:21] https://gerrit.wikimedia.org/r/#/c/15856/ [22:54:22] that adds the npm package [22:54:40] which supposedly let the parsoid team update their node_modules in production using npm [22:55:10] Which is good, because we have a lot of them, but we're just chatting about how npm isn't desirable in production [22:55:10] I think it installs it from a location we manage [22:55:17] ohh [22:55:26] RoanKattouw_away: ^^ ? [22:55:28] kind of a wikimedia repository of nodejs modules ? [22:55:47] (I could use a similar setup for python modules) [22:57:32] I'm not seeing anything that would send it somewhere else, but if RoanKattouw_away has anything else to say it would be helpful [22:57:42] Also gwicke if you have some knowledge on the subject [23:09:18] lets put the modules in a git repo :-] [23:09:23] and deploy them from that git repo [23:12:18] hashar: that may actually be what's occurring [23:12:28] hashar: parsoid uses git deploy [23:12:49] grrr [23:12:50] :D [23:13:08] the node modules are not in a git repo [23:13:14] so I guess someone either ran npm on tin [23:13:25] or used scp from his laptop to tin [23:13:26] it would likely be in a repo local to tin [23:13:44] haven't found any such repo though :/ [23:13:58] really we need RoanKattouw_away here to discuss this [23:14:07] yeah [23:14:12] lets take a cab to Stanford [23:14:18] That guy, always furthering his education [23:14:37] James_F: Can we borrow your Roan-whip? [23:15:04] He's still ill. [23:15:08] So... no. :-) [23:15:19] oh poor Roan :( [23:15:26] marktraceur: so lets create mediawiki/extensions/Parsoid/js/contrib ? [23:15:30] Yes yes [23:15:50] hashar: Poor me. You weren't the one holding his sick bucket last night. [23:16:07] James_F: Doesn't he have a toilet? Geez. [23:16:09] I would have done it if I knew! [23:16:22] * James_F grins. 
[23:16:31] marktraceur: git clone ssh://gerrit.wikimedia.org:29418/mediawiki/extensions/Parsoid/js/contrib [23:19:43] * marktraceur thinks that repository may be broken slightly [23:20:22] hashar: Going to need to create a master branch, I think. [23:42:20] AaronSchulz: it looks like the memcached debug log shows all operations, but doesn't note if gets are hits or misses [23:44:50] gah