[12:22:19] RoanKattouw: around?
[12:22:27] Yes
[12:23:11] RoanKattouw: there is a new extension that brings webfont support to MediaWiki... I just converted it to use ResourceLoader and implemented an ugly GUI for it imitating Narayam
[12:24:00] I got a CC from GerardM about something like that, yeah
[12:24:15] it's active at http://translatewiki.net/sw/Main_Page
[12:24:41] Do you have a test page where the fonts are being used? :)
[12:24:44] obviously it needs some more work and rethinking of the GUI
[12:24:57] RoanKattouw: append ?uselang=ml ? :)
[12:25:00] I see the dropdown but it doesn't do anything, obviously, because the entire main page is in English
[12:25:05] Right :)
[12:25:22] lol, it overlaps with Narayam's dropdown
[12:26:00] probably
[12:26:19] doesn't overlap for me, but that is what you get with absolute positioning :(
[12:26:45] I'll see if I can get the code into our svn
[12:30:24] Narayam and the webfonts will mostly go together .. they are for people who do not have proper support on the system they are using (think internet cafes)
[12:31:35] Nikerabbit, RoanKattouw: are you happy if I blog about it at this stage already?
[12:32:01] As long as you don't make it sound like I've committed to doing anything (cause I haven't) it's fine with me
[12:36:39] Roan clear
[16:15:01] hmm, did we change settings on #mediawiki_security? it won't let me in :(
[16:15:29] Poke RobH
[16:15:40] He's in charge of the access controls there
[16:16:09] bah
[16:16:16] haha
[16:16:18] weehhhhhh
[16:16:29] my chat is broken can you make it go
[16:16:30] <^demon> Turns out "ssssshhhhh protection" wasn't enough
[16:16:33] brion: nope, you mentioned the secret channel, you cannot be trusted.
[16:16:46] if i knew of it it couldn't be that secret
[16:16:52] i also need you to turn in your cabal badge
[16:16:56] and your puppet gun
[16:16:57] the REAL secret channel's the one none of us know about
[16:17:02] Or is it just a double bluff?
[16:17:04] sockpuppet gun, my bad
[16:17:12] you're all on double secret probation
[16:20:12] RoanKattouw: howdy
[16:20:23] Morning
[16:20:45] so, I need to make ArticleFeedback actually ignore old ratings when they expire...
[16:20:58] so, that's going to be fun and interesting to do in a scalable way
[16:21:26] RESOLVED WONTFIX
[16:21:32] ha ha
[16:21:34] Very scalable
[16:21:54] When are they supposed to expire?
[16:22:11] 30 revisions
[16:22:22] hmm
[16:22:26] well, I have an idea actually - but it's only going to work as long as we stick to revisions as the ONLY way to expire ratings
[16:23:16] Expire on edit?
[16:23:27] we can have totals for each revision; then the most expensive a grand-total calculation could be is "select sum(total) limit 30 order by revision_id"
[16:23:48] right now we have totals for each rating, which span across revisions - they are all munged together
[16:24:02] my idea is much the way we do UserDailyContribs
[16:24:12] Recalc in an after-edit hook?
[16:24:18] yeah
[16:24:26] because the revision is what expires things
[16:24:34] Sounds sane
[16:24:42] ... dang it
[16:24:46] now I have to actually do it
[16:24:50] I was gonna flag things as expired in that hook but recalculating the sums is better
[16:25:11] i imagine it will be cheaper
[16:25:27] i just have to make sure that when you re-rate something, we clean up after ourselves
[16:25:47] Don't we already do that?
[16:25:51] We have the aggregates table for that
[16:25:54] article_feedback_pages
[16:25:58] You could just subtract things there
[16:26:07] The addition is already implemented there
[16:26:14] because if the original rating was for 5 revs ago, and the new refreshed rating is for the current rev, we need to dec the -5 rev row and inc the current rev row
[16:26:26] yeah, well we do, but we are doing it in a single place
[16:26:46] Oh grrr
[16:26:49] we need to add some logic to subtract from the previous rev row, and add to the new rev row, in the case that they are not the same
[16:26:53] shouldn't be too hard
[16:27:01] No, don't modify the data
[16:27:15] The way it currently works is this
[16:27:19] don't modify the sums and counts?
[16:27:26] You can modify those
[16:27:30] But not the historic ratings
[16:27:37] correct
[16:27:39] I know that
[16:27:42] The subtract-and-add-if-not-equal thing is ALSO implemented already
[16:27:45] I'm only talking about the sums
[16:27:53] All that stuff is already done in article_feedback_pages
[16:27:59] because article_feedback_pages is going to have to have lots more data in it
[16:28:07] How come?
[16:28:37] one row for each revision up to the expiration point (30 by default) for each article with ratings - instead of 1 per article
[16:28:46] Oh you wanted to do it that way
[16:28:53] how would you do it?
[16:29:00] yet another column?
[16:29:05] i mean table
[16:29:10] No, more smartness
[16:29:17] In the on-edit-recalc logic
[16:29:18] ??
[16:29:31] Upon edit, figure out which ratings just fell off, and subtract them
[16:29:45] yeah
[16:29:48] It's complicated cause you need to make sure the rating wasn't overwritten later etc
[16:29:55] I'm with you on that
[16:29:56] But it can be done in-place
[16:30:06] As opposed to changing the table structure and having to migrate the existing data
[16:30:07] on a single sum you mean?
[16:30:11] right
[16:30:18] ok, I get what you mean
[16:31:26] so when a rev happens, we say all ratings on the edge (related to the single, now expired, rev that's 30 revs ago) will be subtracted from the sums and totals
[16:31:32] seems reasonable
[16:31:39] Yes
[16:31:46] But be careful
[16:32:08] *TrevorParscal looks for alligators
[16:32:13] If there's a more recent rating by the same user, skip it
[16:32:22] Cause it'll have already been covered by the re-rate
[16:32:28] oooh
[16:32:42] It's complicated cause you need to make sure the rating wasn't overwritten later etc
[16:32:53] yeah... should we set a column on ratings that says they are inactive, which can be set upon re-rating?
[16:33:05] that could make this easier
[16:33:40] then you could say "select * from ratings where active = 1"
[16:33:53] Could do
[16:34:00] and you only have to do an update at the same time as you do the insert for the new rating
[16:34:12] And upon post-edit expiry
[16:34:26] But it'd allow you to only have to look at the ratings for one rev in the post-edit expiry logic
[16:35:02] right
[16:35:04] which is cheaper
[16:35:08] and we do enough crap on save
[16:35:46] It means you have to modify the re-rating logic that finds the old rating to mark that old rating as expired
[16:35:50] Ahm
[16:35:51] Inactive
[16:36:15] right
[16:36:23] rather than just appending willy-nilly
[16:36:39] Well that logic already looks up that rating to account for it when adjusting the total
[16:36:43] but we already look that thing up to perform a proper subtraction
[16:36:46] exactly
[16:38:04] what's the kosher way to do bool in our DB schema?
[16:38:22] int(1)? binary(1)?
[16:38:36] Ah
[16:38:51] a bit will do 0 or 1
[16:39:01] Also, why does our schema need to be Jewish?
[16:39:02] We seem to use bool a lot in tables.sql
[16:39:19] Because kosher is nice and short
[16:39:20] Reedy: ha ha
[16:39:46] Trevor could have said "How do I make a bool a temple in our schema?" but that's just more cumbersome
[16:39:51] "bool NOT NULL default 0"
[16:39:56] ("my bool a temple")
[16:40:37] Morning alolita
[16:40:51] Roan: Hi!
[16:41:37] *RoanKattouw hoped someone would get the Mormon reference there, but alas
[16:44:08] given I'm going to be selecting on this active column a bit
[16:44:14] i'm thinking an index is in order
[16:44:40] seems like adding it to aa_user_page_revision would work
[16:44:49] any ideas there?
[16:45:02] Hmm
[16:45:42] when looking things up by rev we are going to always be asking for active stuff or inactive stuff explicitly
[16:45:50] so it seems to make sense to add it to that index
[16:45:59] Right
[16:46:15] If the typical query is one where you have a constant WHERE on all of those fields, then it makes sense
[16:47:03] However, user_id is at the front of the index for some reason
[16:47:15] You'll probably just want an index on (aa_revision, aa_active) instead
[16:47:24] ok
[16:47:34] Because you'll be doing WHERE aa_revision=123 AND aa_active=1 or something
[16:47:44] You won't have a WHERE on aa_user_id I don't think
[16:48:02] And since that's the first field in the index, that instantly makes aa_user_page_revision useless
[16:48:04] in some cases I do, but not in all
[16:48:22] Although
[16:48:28] The primary key starts with aa_revision
[16:48:33] So per-revision lookups are indexed
[16:48:52] Resolving the WHERE the old-fashioned way shouldn't be too horrible
[16:49:00] It's a boolean field anyway
[16:50:40] no, I'm planning on storing the boolean value as a blob, which contains a haiku, and if the haiku is more positive than negative then the value is true. We will have to outsource labor to 3rd world countries to make this call each time, but I think it will be worth it in the end because the world needs more haikus
[16:51:03] But is it webscale?
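For readers following along, the active-flag and index scheme discussed above can be sketched with a toy schema. This is a minimal sketch, assuming a simplified table: the names `article_feedback`, `aa_user_id`, `aa_revision`, `aa_active`, and `aa_rating_value` are modeled on the column names mentioned in the chat, not the real ArticleFeedback DDL, and sqlite3 stands in for MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Simplified stand-in for the ratings table under discussion;
# the real ArticleFeedback schema has more columns.
cur.execute("""
    CREATE TABLE article_feedback (
        aa_user_id      INTEGER NOT NULL,
        aa_page_id      INTEGER NOT NULL,
        aa_revision     INTEGER NOT NULL,
        aa_rating_value INTEGER NOT NULL,
        aa_active       INTEGER NOT NULL DEFAULT 1  -- the bool column
    )
""")

# The index Roan suggests: leading with aa_revision means
# WHERE aa_revision=? AND aa_active=? is fully covered.
cur.execute(
    "CREATE INDEX aa_revision_active "
    "ON article_feedback (aa_revision, aa_active)")

cur.executemany("INSERT INTO article_feedback VALUES (?, ?, ?, ?, ?)", [
    (1, 10, 100, 4, 1),
    (2, 10, 100, 2, 0),   # inactive: user 2 re-rated on a later revision
    (2, 10, 101, 5, 1),
])

# The lookup the index serves: active ratings for one revision.
cur.execute(
    "SELECT aa_user_id, aa_rating_value FROM article_feedback "
    "WHERE aa_revision = ? AND aa_active = 1", (100,))
print(cur.fetchall())  # [(1, 4)]
```

Note the design point from the chat: because `aa_user_id` led the old `aa_user_page_revision` index, a query with no `WHERE` on the user column could not use it, which is why the narrower `(aa_revision, aa_active)` index wins here.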
[16:51:17] :-D
[16:51:24] Yes, because the 3rd world labor is cheaper than recovering your data from /dev/null
[16:51:39] Make Chinese children remember numbers?
[16:51:58] One kid can probably remember at least 10 MD5 password hashes for users
[16:53:07] how fast can they recall it though?
[16:53:34] I'm not sure
[16:53:44] One way to find out
[16:53:59] stand back, he's going to do science
[16:54:06] uh oh
[16:54:20] That could be a dissertation project
[16:54:22] *apergos puts on "Blinded by Science"
[16:54:42] Are Indian or Chinese children better at remembering MD5 strings?
[16:55:05] ha ha ha
[16:55:12] she blinded me with science!
[16:55:14] Actually
[16:55:23] We could include Roan in for European comparison
[16:55:50] actually (correction) "She blinded me with science"
[16:55:52] no, that's a totally corrupt sample
[16:55:59] http://www.youtube.com/watch?v=2IlHgbOWj4o
[16:56:17] (correcting me, not you, brion)
[16:56:20] Roan can decrypt MD5 hashes in his head
[16:56:21] We could just rule Roan out early on, as there's not so much of an abundance of those
[16:56:44] I'm not scalable
[16:56:51] Indeed
[16:57:03] Chinese children are in abundance, and horizontally scalable
[16:57:04] Dang it RoanKattouw, start making babies already!
[16:57:36] Hmm... I always thought that people from Texas were the most horizontally scalable...
[16:57:40] that's not scalable either.
[16:57:46] paying other people to make babies,
[16:57:49] now that has something
[16:58:13] *TrevorParscal starts building the clone army of the future using Roan's DNA
[16:58:35] Who are you gonna hire to raise those kids?
[16:58:50] just let em run around on the streets
[16:58:53] They're all gonna have Asperger's growing up, remember?
[16:58:54] toughens em up
[16:59:47] *TrevorParscal starts thousands of Montessori schools
[16:59:59] There you go
[17:00:10] we can do it!
[17:00:22] In fact you should look into getting some of Maria Montessori's DNA, get her clones to run the schools
[17:00:51] RoanKattouw: ok, so if I make the active column default to 1, then I have to run a script to deactivate non-current ratings, but at least inserts are automatically active
[17:01:03] Yes
[17:01:26] where do I put that script?
[17:01:42] should that be a schema update?
[17:02:08] It should be in the updater yes
[17:02:28] You can specify functions as custom updaters
[17:02:38] and as soon as I call $updater->addExtensionUpdate it will run the update right? not just queue it?
[17:03:04] It'll run it when update.php is run
[17:03:32] right, i'm just saying, i need the column to be there before I run this function
[17:04:51] Yes
[17:04:53] So add the updaters in the right order
[17:05:18] cool
[17:34:24] Why do we get random people that we've seemingly never heard of signing up to be mentors?
[17:35:54] do I need to sign up on the Google site?
[17:36:03] or is my name on the mw page sufficient?
[17:36:26] I think it's sufficient for now
[17:36:42] Until the proper proposals get under way, when I'd guess someone has to assign mentors and students
[17:36:44] okey dokey
[17:36:54] at some point I think I do have to sign up there
[17:37:00] I did last year, it's just a bit fuzzy
[17:37:07] yeah
[17:37:16] seems a bit early now
[18:27:51] Morning mdale
[18:28:15] mdale: So I took a stab at reviewing TimedMediaHandler, wrote up some notes at https://secure.wikimedia.org/wikipedia/mediawiki/wiki/User:Catrope/Extension_review/TimedMediaHandler
[18:28:27] Sweet, thanks RoanKattouw
[18:28:31] But the sheer amount of JS, both in TMH and MwEmbedSupport, scares me
[18:28:40] sure
[18:28:52] some MwEmbedSupport is shared with upload handler
[18:28:55] I'm not gonna do all that stuff on my own, it'll take more than a week with the bandwidth I have for work right now (vs. school stuff)
[18:28:59] Right
[18:29:07] So I was thinking about enlisting Neil
[18:29:09] yea but the overview is very helpful
[18:29:19] It's mostly the server-side stuff
[18:29:42] RoanKattouw: this script I am going to write is to deactivate all ratings farther than 30 revisions in the past, or not the most current rating for a given article
[18:29:43] yea that's what I thought would be good.. I will add some comments to your notes.
[18:29:52] I could use some help thinking about it
[18:29:59] With the caveat that those media backend things are kind of over my head (so we should pull in Tim or Bryan there) and the JS is like 2k lines in TMH alone
[18:30:11] wut?
[18:30:13] I also created a TMH component in Bugzilla, made you the default assignee, and reported two bugs
[18:30:22] TrevorParscal: Hmm
[18:30:35] Well you'll want to iterate over the pages in the table I guess
[18:30:54] No index on page unfortunately :(
[18:31:03] i can add one
[18:31:15] Just for the migration script?
[18:31:26] You could iterate over revs too, that's indexed
[18:32:13] for each ( page that's been rated ) { for each ( rating for this page ) { if ( !first ) { set active = 0; } } }
[18:32:35] I was thinking this
[18:33:20] for each (revision that's been rated) { find its page; find the 30 most recent revs for that page; if (rev not in that list) UPDATE .. SET active=0 WHERE aa_revision=N }
[18:33:59] can't I just say if rev < current rev - 30 ?
[18:34:06] For performance you'll want to 1) cache the "30 most recent revs" lists keyed by page ID and 2) do the UPDATE queries in batches of 100, max 500 rows
[18:34:08] No
[18:34:14] Cause revids are incremented globally
[18:34:19] ah
[18:34:30] well, the logic in the expiry is broken already
[18:34:32] Otherwise expiring a rating on enwiki would take 1 or 2 seconds
[18:34:41] It's not really, is it?
[18:34:47] no
[18:34:49] nevermind
[18:35:08] RoanKattouw: TMH is only 2K lines? I think it's more like 11K
[18:35:11] You need to grab SELECT rev_id FROM revision WHERE rev_page=N ORDER BY rev_timestamp DESC LIMIT 30
[18:35:11] but it does use <
[18:35:16] ( just js )
[18:35:20] mdale: Well there's one file that's like 1600 lines
[18:35:26] sure
[18:35:34] That's where I was like OK you know what, NO
[18:36:04] I've spent 5 hours combing through your code, I'm not gonna do another 1600 lines of JS today
[18:36:11] yea that's fine
[18:36:42] BTW the feature set is awesome
[18:38:26] some things work a little oddly because it also works stand-alone and supports another 10K of Kaltura-specific features. but I have worked pretty hard to make sure there is no leakage and everything extends everything through the bind / event model
[18:38:29] The subtitle editor was broken on prototype so I didn't get a chance to check that out
[18:38:41] Yeah that makes everything a bit weirder
[18:38:51] A lot of it also seems to use the previous incarnation of the MW JS API
[18:38:56] (i.e. gM() and all of that)
[18:39:18] yea working on porting things over .. but some functionality still is not in core / in UploadWizard
[18:39:27] all that goes into the mwEmbedSupport ext atm
[18:39:28] With MwEmbedSupport presumably mapping that to either MW or Kaltura stuffs
[18:39:36] Yeah stuff like language is in there
[18:40:04] no Kaltura stuff in mwEmbedSupport .. all the stand-alone stuff is in http://svn.wikimedia.org/svnroot/mediawiki/branches/MwEmbedStandAloneRL1_17/
[18:40:16] Oh that's right
[18:40:26] Well there's this other standalone backend isn't there
[18:42:17] I am moving the other stand-alone backend to a uniform RL_117-style API and it shares a bit of the actual RL files..
[18:42:27] The self-contained folders are called mwEmbedModules.. they can be copied and pasted either into stand-alone or as part of an extension.
[18:42:48] so they include everything they need: messages, configuration, styles, assets etc.
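The migration pseudocode above (iterate over rated revisions, cache the "30 most recent revs" per page, deactivate anything that fell off, and batch the UPDATEs) can be sketched roughly like this. It is a sketch under stated assumptions: the `article_feedback` / `revision` tables here are toy stand-ins with only the columns the chat mentions, sqlite3 stands in for MySQL, and the batch size is just the figure Roan suggests.

```python
import sqlite3

EXPIRE_REVS = 30   # ratings expire after 30 revisions (per the chat)
BATCH_SIZE = 100   # batch the UPDATE queries, as suggested above

def deactivate_expired(conn):
    """One-off migration sketch: mark a rating inactive when its revision
    is no longer among its page's EXPIRE_REVS most recent revisions."""
    cur = conn.cursor()
    recent_cache = {}   # page_id -> set of the 30 most recent rev ids
    to_deactivate = []

    cur.execute("SELECT DISTINCT aa_page_id, aa_revision "
                "FROM article_feedback WHERE aa_active = 1")
    for page_id, rev_id in cur.fetchall():
        if page_id not in recent_cache:
            # the query from the chat: SELECT rev_id FROM revision
            # WHERE rev_page=N ORDER BY rev_timestamp DESC LIMIT 30
            cur.execute("SELECT rev_id FROM revision WHERE rev_page = ? "
                        "ORDER BY rev_timestamp DESC LIMIT ?",
                        (page_id, EXPIRE_REVS))
            recent_cache[page_id] = {r[0] for r in cur.fetchall()}
        if rev_id not in recent_cache[page_id]:
            to_deactivate.append(rev_id)

    # Batched updates so no single query touches too many rows.
    for i in range(0, len(to_deactivate), BATCH_SIZE):
        batch = to_deactivate[i:i + BATCH_SIZE]
        cur.execute("UPDATE article_feedback SET aa_active = 0 "
                    "WHERE aa_revision IN (%s)" % ",".join("?" * len(batch)),
                    batch)
    conn.commit()

# Tiny demo: page 7 has 35 revisions, so a rating on rev 3 has expired.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE revision (rev_id INTEGER PRIMARY KEY,
                           rev_page INTEGER, rev_timestamp INTEGER);
    CREATE TABLE article_feedback (aa_page_id INTEGER,
                                   aa_revision INTEGER,
                                   aa_active INTEGER NOT NULL DEFAULT 1);
""")
cur.executemany("INSERT INTO revision VALUES (?, 7, ?)",
                [(i, i) for i in range(1, 36)])
cur.executemany("INSERT INTO article_feedback VALUES (7, ?, 1)",
                [(3,), (35,)])
conn.commit()
deactivate_expired(conn)
cur.execute("SELECT aa_revision FROM article_feedback WHERE aa_active = 1")
print(cur.fetchall())  # [(35,)] - only the recent rating stays active
```

This also illustrates why `rev < current_rev - 30` would be wrong, as Roan points out: rev ids increment globally across all pages, so the cutoff has to come from each page's own revision history.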
[18:43:51] Yeah I saw some of the init logic in MwEmbedSupport
[18:45:40] RoanKattouw: I won't have time to do any large reviews until I've had this fscking hypersonic aerodynamics exam in 4 weeks
[18:46:08] Similar situation here
[18:46:23] it's my last exam though :D
[18:46:25] some outstanding patches remain to be done for trunk; 26901, for instance, would greatly reduce the footprint of enabled modules by loading their configuration at time of invocation ( similar to how we handle messages )
[18:50:46] what are the extensions that *should* be installed by default on all wmf wikis?
[18:51:01] *hexmode is coming up with a list of things that people ask for anyway
[18:51:12] or what they would ask for if they knew
[18:51:24] Reedy has already said AbuseFilter
[18:57:26] hexmode: You mean besides the ones already installed on all wmf wikis?
[18:58:05] Krinkle: yes. Just trying to make a list of what people most often ask for
[18:58:08] I know....
[18:58:16] I should start a wikipage!
[18:59:32] :-P
[19:00:13] ok, come on, the meta-ness is brilliant!
[19:00:25] Isn't Danese (WMF) on IRC? Or do I just happen to miss danese every time I join?
[19:00:44] She's not usually in public channels
[19:01:55] She's in the staff channels though, want me to grab her?
[19:02:49] yeah
[19:03:28] Hi krinkle:
[19:03:44] Not necessarily in 'public' but via IRC, yes.
[19:03:46] Oh there you are :)
[19:03:53] PM ok?
[19:04:07] yes
[19:23:44] http://myownplayground.atspace.com/cookietest.html
[19:36:31] Meeting is in 20 minutes, right?
[19:36:39] 30 cookies per domain
[19:36:49] oh the total size... not so much
[19:47:17] Krinkle: now 7 min
[19:48:06] thx
[20:07:39] Etherpad link plox?
[20:07:56] Krinkle: post your URL here please?
[20:08:00] http://eiximenis.wikimedia.org/CodeReviewTracker
[20:08:04] http://eiximenis.wikimedia.org/Bug27339
[20:12:11] I'm still in my suit
[20:32:57] vvv, still about?
[20:33:01] Yep
[20:33:13] Any chance you could either fix https://bugzilla.wikimedia.org/show_bug.cgi?id=27470 or give me a hint or 2? :)
[20:39:09] Reedy: probably someone removed the check for page creation permission from the upload module
[20:39:21] Was it rewritten in 1.17?
[20:39:27] I think so
[20:39:30] thedj, about?
[20:40:14] I've got a strong feeling it's likely changed
[20:43:14] Reedy: it looks like UploadBase assumes that 'edit', 'upload', etc are not per-page restrictions
[20:45:38] Reedy: oh, and it also allows uploading images even when they are create-protected
[20:45:58] hexmode, https://bugzilla.wikimedia.org/show_bug.cgi?id=26751 "Extensions that should really be core functionality (tracking)"
[20:46:47] vvv, looks like we've got a couple of breakages then
[20:46:51] Yep
[20:47:05] All we have to do is fix UploadBase::isAllowed
[20:49:19] Yeah, looks a bit simple
[20:49:51] I'll log that as a bug then
[20:52:29] Thanks vvv
[20:52:37] No problems :)
[20:52:50] That explains why the extension vaguely looked correct
[20:52:56] and I wasn't getting anywhere testing the code
[20:54:28] hexmode: I've posted it over here http://eiximenis.wikimedia.org/BugTriage
[20:55:00] Since you mentioned "1.17-post" will be over soon, we could perhaps continue the bug triage anyway, without being constrained to "1.17 blockers"
[20:55:49] Time to de-suit
[20:56:08] Olala, Reedy striptease
[20:56:15] ;)
[20:56:42] I've had it on nearly 12 hours, I'm beyond bored with it
[20:58:17] Reedy: So, about the "from"/"start" option I discussed yesterday in CodeReview; you mentioned "dir" as a parameter, but I didn't get it to work
[20:58:28] basically I want the opposite of "offset".
[20:58:36] so instead of all revisions before N, only revisions after N
[20:58:36] Let me actually get changed, and I'll have a proper look
[20:58:45] sure
[21:00:18] I'll be back after dinner
[21:05:02] hexmode: Dude I am SO sorry for missing the meeting
[21:05:19] It was scheduled for 9pm my time so I was blissfully watching a movie
[21:07:33] RoanKattouw: I need to adjust the ApiQueryArticleFeedback.php file so that it provides sums of counts and totals for multiple rows (up to 30) with a common page_id and rating_id
[21:07:53] Could you rephrase that?
[21:07:57] ha ha
[21:07:58] yes
[21:08:03] Maybe it's because it's 10pm but I failed to parse that
[21:08:07] it was hard to type, I'm sure it was hard to read
[21:08:33] basically, right now it just dumps the article_feedback_pages table, joining in the i18n keys from article_feedback_ratings
[21:08:55] I need it to do something different
[21:09:19] now I have a revision column in article_feedback_pages, which is part of the primary key
[21:10:09] so sums and totals need to be summed between all rows for a given page_id, for up to 30 rows
[21:10:30] so, I'm thinking I need to do something more complex than the ApiQueryBase class was designed to do
[21:10:31] Wait
[21:10:37] So you *did* go with the per-revision thing?
[21:10:38] k
[21:10:48] yes, because the other way was problematic in other ways
[21:10:55] How so?
[21:11:37] RoanKattouw: I'll forgive you *this* time :)
[21:12:07] We did discuss two weekly meetings, one per hemisphere
[21:12:13] hah
[21:12:19] I'd be up for that if it would help
[21:12:31] Wasn't an abnormal meeting time for me really
[21:12:36] I just kinda burned out on work today
[21:12:38] And forgot
[21:12:48] no, just a TV-watching time ;)
[21:12:58] See that's the thing
[21:13:02] no prob
[21:13:05] I usually work when I should really be watching TV
[21:13:14] hahaha
[21:13:14] lol
[21:13:20] I just did that tonight
[21:13:24] I put on the news *twice*
[21:13:27] This'll all be fixed in 7-9 months
[21:13:27] two different channels.
[21:13:35] no good, worked through both of em
[21:13:39] I don't own a TV, so I pretty much work all the time :P
[21:14:01] brb
[21:14:01] While it's technically possible, it was awkward - plus it stored information about whether the rating was the most current one, when that information could be derived from the timestamp or the revision number, so it was sort of redundant
[21:14:17] Redundant sure
[21:14:20] "watching tv" is often synonymous with "having a life" ;)
[21:14:28] But normalization is the enemy of performance sometimes
[21:14:32] hexmode: Paradoxically, that is true
[21:16:03] TrevorParscal: Could you explain what you want more clearly while I'm afk for 5 mins?
[21:16:29] I will try and poke at it and retry if I'm still in need of help :)
[21:33:45] robla: You asked about automating etherpad CRT numbers, right
[21:33:51] (CodeReviewTracker)
[21:38:43] Well, most of the needed API is in place for CR ;)
[21:39:14] Reedy: The revision count may be a problem though, since apparently the "Total count" does not take the "Status" filter into account.
[21:39:27] Yeah
[21:39:34] But as per before, it's fixed in trunk
[21:40:03] I heard CR will be copied to wmf directly from trunk soon?
[21:40:08] Something like that
[21:40:15] Chad was going to finish reviewing it
[21:40:16] or do we wait till the next wmf branch (1.17wmf2 / 1.18wmf1)?
[21:40:20] Haha, hell no
[21:40:28] CR will get updated out like it did before
[21:40:48] There is only one outstanding issue noted in CR, it's not a major blocker, just an annoyance
[21:40:59] But it would be nice to tidy it up
[21:41:10] Then a couple of maintenance scripts, and 2 indexes to add before pushing it live
[21:45:09] https://bugzilla.wikimedia.org/show_bug.cgi?id=28122 being the blocker
[21:45:56] It *will* happen ;)
[21:46:06] Right, need to sort food myself. Will look at what you asked when I get back
[22:21:13] Reedy, RoanKattouw: http://meta.wikimedia.org/wiki/WMF_Extensions which of these should be tarball blockers?
[22:21:40] since I think we're really ready to focus on kicking out the tarball now :)
[22:21:53] Bug 27339 is mostly cleaned up
[22:24:23] I'm gonna pass on that one for the day
[22:24:38] If you really want my thoughts on that send me an e-mail and look in the morning
[22:24:51] If I hadn't gotten all excited about travel plans I'd've been asleep already
[22:24:51] k, will do.
[22:25:12] travel is boooring!
[22:25:20] *hexmode tries to fool RoanKattouw
[22:25:28] Pfft
[22:25:35] hexmode: So next week we'll be continuing on http://eiximenis.wikimedia.org/BugTriage ? I've copied most of it now.
[22:26:02] Krinkle: yes, good idea.
[22:26:14] RoanKattouw: just trying to help you sleep ;)
[22:28:22] Reedy, Krinkle, RoanKattouw, about to make the triage thing recurring. Sounds like the peeps in CA could do an hour earlier and it might be better for you guys
[22:28:37] Agree
[22:28:40] Cool
[22:28:48] That effectively keeps the time constant for us
[22:28:50] Especially with the summer hour-shift coming soon
[22:28:52] Cause we're starting DST on Sunday
[22:28:53] I'll gather some devs closer to Tim and see if we could come up with a second meeting
[22:28:53] indeed
[22:29:25] hexmode: 2nd meeting, as in 2nd one this week?
[22:29:26] Well Australia will go out of DST soon I think
[22:29:43] 2nd meeting for the other hemisphere
[22:30:00] was danese's suggestion, I think
[22:30:15] Maybe some devs in India?
[23:04:31] Krinkle, like http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki&offset=70000&sort=cr_id&asc=1 ?
[23:05:12] :)
[23:05:44] Hm.. clicking the "rev" in the header to go to desc loses the offset
[23:05:52] but that's the bug you opened the other day, right?
[23:06:16] Errm
[23:09:51] by the way, the count thing I mentioned earlier, it's perfectly visible here:
[23:09:52] http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki/status/new&offset=82674&limit=500&path=%2Ftrunk%2Fphase3&sort=cr_id&asc=1
[23:10:24] argh, nevermind. I always paste the wrong links when I'm giving examples...
[23:10:32] computers suck
[23:10:38] this one: http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki/status/new&offset=77974&path=/trunk/phase3
[23:10:44] it says 838 but it's actually 8
[23:10:52] 830 after 77974
[23:11:09] Oh, yeah
[23:11:12] That's fixed in trunk
[23:11:14] OK
[23:11:26] And the API?
[23:11:46] Was the count fixed there already?
[23:11:54] I didn't know the API gave a count
[23:13:36] I don't know how the API works for CR; I think it doesn't have path, status, dir, offset etc., right?
[23:14:11] crstart - Timestamp to start listing at
[23:14:13] No direction
[23:15:16] basically what I'm aiming at is getting numbers for: number of fixmes in trunk/phase3 before the branch point, number of "new"s before, and number of "new"s after the branch point.
[23:15:30] I guess I'll use toolserver
[23:15:43] yeah
[23:15:46] DB queries work nicely
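The toolserver query Krinkle has in mind (counts per review status, split around a branch-point revision) can be sketched in one grouped query. This is a toy illustration, not CodeReview's real schema: the `code_rev` table and `cr_id` / `cr_status` / `cr_path` columns are assumptions modeled loosely on the extension's naming, the data is made up, the branch-point revision is hypothetical, and sqlite3 stands in for the toolserver's MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy stand-in for CodeReview's revision table (columns are assumptions).
cur.execute("CREATE TABLE code_rev "
            "(cr_id INTEGER PRIMARY KEY, cr_status TEXT, cr_path TEXT)")
cur.executemany("INSERT INTO code_rev VALUES (?, ?, ?)", [
    (77970, "fixme", "/trunk/phase3"),
    (77972, "new",   "/trunk/phase3"),
    (77975, "new",   "/trunk/phase3"),
    (77976, "new",   "/trunk/extensions"),  # outside the path filter
    (77980, "fixme", "/trunk/phase3"),
])

BRANCH_POINT = 77974  # hypothetical branch-point revision

# One grouped query gives before/after counts for every status at once;
# SUM over a boolean comparison counts the rows where it is true.
cur.execute("""
    SELECT cr_status,
           SUM(cr_id <= ?) AS before_branch,
           SUM(cr_id >  ?) AS after_branch
    FROM code_rev
    WHERE cr_path LIKE '/trunk/phase3%'
    GROUP BY cr_status
    ORDER BY cr_status
""", (BRANCH_POINT, BRANCH_POINT))
print(cur.fetchall())  # [('fixme', 1, 1), ('new', 1, 1)]
```

The `SUM(condition)` trick avoids running one query per status/side combination, which matters when the real table has tens of thousands of revisions.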