[02:47:07] Technical question for the blackout landing page....
[02:47:40] If I use $wgOut->disable(); does $wgOut->setSquidMaxage( foobar ); still work?
[02:48:00] or do I need to set cache headers manually?
[02:49:44] TimStarling: ^^
[03:00:48] kaldari: in trunk, sendCacheControl (which handles setSquidMaxage) is ignored if disable() is set
[03:01:38] cool, I'll just set them manually
[03:01:39] OutputPage::output: if disabled: return; ... later ... sendCacheControl(); OutputPage::sendCacheControl: handles mSquidMaxage set by setSquidMaxage()
[03:01:56] thanks for digging that up. much appreciated!
[03:01:56] yeah, manually afaics
[03:02:10] yw :)
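For reference, kaldari's "set them manually" approach boils down to something like the following sketch. This is illustrative only, not the extension's actual code: $blackoutHtml and the one-hour s-maxage are invented placeholders. The Cache-Control line mirrors what OutputPage::sendCacheControl() would emit when a Squid max-age is set:

```php
// Inside whatever hook builds the blackout page:
global $wgOut;
$wgOut->disable(); // suppress all normal OutputPage output, including sendCacheControl()

// Mirror what OutputPage::sendCacheControl() would have sent for a squid max-age:
header( 'Cache-Control: s-maxage=3600, must-revalidate, max-age=0' );
header( 'Content-Type: text/html; charset=UTF-8' );

echo $blackoutHtml; // placeholder for the pre-rendered landing page markup
```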
[10:31:30] neilk_: should CongressLookup be added to Translatewiki? or is this extension important for US people only?
[10:32:06] Raymond: well the US has many speakers of other languages
[10:32:22] I see
[10:32:37] I will add it in a few minutes
[10:32:40] Raymond_afk: in theory... yes. In practice, maybe not, as this is for the wednesday protest on enwiki anyway
[10:32:45] no wait ...
[10:32:48] ....
[10:33:05] join #wikimedia-sopa and ask Eloquence or something... I don't know
[12:17:29] hi, will there be a possibility tomorrow to see the code of an article? (API/m.enwp)
[13:00:26] hello wikipedians ... does anyone know if the mediawiki API is going to be down tomorrow?
[13:05:20] or where the right place is to ask that question?
[13:08:03] kennyr04, which API are you talking about, exactly?
[13:52:42] guillom: this one: http://en.wikipedia.org/w/api.php
[14:01:05] kennyr04, afaik only the write API will be disabled
[14:01:31] MaxSem: cool thanks
[14:02:14] kennyr04, you should subscribe to the Mediawiki-api-announce mailing list
[18:57:52] Is there a features team meeting later today?
[19:00:47] JeroenDeDauw: afaik between now and a few minutes from now
[19:01:19] Krinkle: where's the etherpad?
[19:01:38] http://etherpad.wikimedia.org/FeaturesTeam2012-W03
[19:08:08] gwicke: hi
[19:08:49] update on the parser / visual editor
[19:10:15] benny, robm: update on feedback dashboard
[19:10:17] so- I worked on template expansions
[19:10:30] gwicke: welcome back on irc
[19:10:40] surfacing MarkAsHelpful on Feedbackboard page
[19:10:40] sorry for the delay..
[19:10:48] bsitu: go ahead..
[19:11:08] *gwicke was asleep at the wheel..
[19:11:25] bsitu: let's give gabriel some time :-) while you give an update
[19:11:43] okay, :)
[19:12:02] Email copy update to feedbackdashboard response email
[19:12:22] Deployment & Testing of Moodbar & Concurrency
[19:12:36] i asked robla and he's cool with reedy helping w cr and deployment
[19:12:41] so please ping reedy
[19:13:12] okay, I will
[19:13:50] rmoen: any updates
[19:14:24] raindrift: thanks for extensionizing the concurrency code
[19:14:48] alolita: rmoen is having connection issues
[19:15:02] bsitu: thanks
[19:15:47] raindrift: did you want to add more on the concurrency extension; are you planning to add documentation on mw.org
[19:15:50] alolita: sorry about that, my chat was frozen
[19:16:01] rmoen: np
[19:16:56] rmoen: any updates from you
[19:17:21] alolita: today i am preparing for deployment of the concurrency extension, testing and adding documentation. As well, looking into some future solutions for click tracking logging
[19:17:39] Hi
[19:17:45] reedy: hi
[19:18:11] reedy: you've been nominated to help us with cr and deployment for feedback dashboard changes on thursday morning pst
[19:18:18] alolita: Some small front end adjustments to moodbar as well
[19:18:31] rmoen: great; thanks
[19:18:41] alolita: np
[19:19:11] reedy: given the fast moving development happening right now for the sopa blackout; we have pushed the fb deployment to thursday from wed (tomorrow)
[19:19:41] reedy: so would really appreciate your working with rmoen and bsitu on cr/deployment for fb changes
[19:20:22] thanks rmoen, bsitu
[19:20:27] gwicke: ready now
[19:20:37] yep, I am
[19:20:46] template expansions are mostly working now
[19:20:50] including the parser tests
[19:21:02] That sounds fine to me
[19:21:02] which was a long time coming
[19:21:15] so now I am fixing up things around it
[19:21:24] like paragraph post-expansion etc
[19:21:27] Reedy: can we move the discussion here, I'd like to make sure rsterbin is part of it
[19:21:30] reedy: thanks much! appreciate your help :-)
[19:21:40] what's up?
[19:21:47] and starting to add parser functions and magic variables
[19:21:48] Just give me a ping when you're wanting to do stuff :)
[19:22:00] or maybe we can wait until the weekly checkin is done
[19:22:01] alolita: i don't really have a lot to add about InterfaceConcurrency. It works, and we're using it. If it turns out to be really useful, maybe we'll move it into core for 1.20. I'll document it shortly. :)
[19:22:09] reedy: will do
[19:22:12] rsterbin: ready to deploy the clicktracking patch
[19:22:23] good to go
[19:22:24] Reedy can help with that
[19:22:30] we also had some very good discussions about the editor <-> parser integration
[19:22:49] raindrift: thanks - documentation will really help
[19:22:55] will try to catch brion about parser <-> mw stuff this week
[19:22:55] question is: do we need to push to test first or has it been extensively tested on your end?
[19:23:07] evening
[19:23:09] gwicke: yup the discussions were great progress
[19:23:33] that's it from me
[19:23:40] yoni and i have tested it both locally and on prototype
[19:23:42] gwicke: thanks!
[19:24:09] aharoni, nikerabbit: update on i18n
[19:24:25] hi
[19:24:26] changes deployed yesterday
[19:24:34] Bug fixes
[19:24:42] yup mostly bug fixes
[19:24:51] DarTar: this testing takes the form of: check contents of clicktracking table, do stuff, check contents again
[19:25:08] we are mostly working on documentation and testing.
[19:25:11] if that's sufficient testing for you, i think we can push
[19:25:18] Santhosh - for Narayam and {{GRAMMAR}}.
[19:25:29] Niklas - for Translate.
[19:25:40] myself - for WebFonts.
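gwicke's update above (19:20-19:21) is about adding parser functions and magic words to the new (JavaScript) parser. For comparison, this is roughly how the stock PHP parser wires one up; 'example' and both function names here are invented, not anything from the log:

```php
$wgHooks['ParserFirstCallInit'][] = 'wfExampleParserFunctionInit';

function wfExampleParserFunctionInit( $parser ) {
	// 'example' must also be declared as a magic word in the
	// extension's .i18n.magic.php file for the parser to find it.
	$parser->setFunctionHook( 'example', 'wfExampleParserFunctionRender' );
	return true;
}

function wfExampleParserFunctionRender( $parser, $arg = '' ) {
	// The return value is wikitext: {{#example:foo}} renders as "Hello, foo".
	return "Hello, $arg";
}
```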
[19:25:41] rsterbin I'd like to make sure that for every click event on the trigger we also have a matching impression event for the overlay
[19:25:54] that would be enough for me as a test
[19:26:03] that was showing up properly on prototype
[19:26:11] and code review of course, we had a sprint on friday and reached something like 135/35, but it's gotten worse again
[19:26:25] new/fixme
[19:26:29] brb
[19:26:38] k
[19:26:42] nikerabbit: the friday sprint on cr helped greatly
[19:26:45] 190/29
[19:26:57] but we seem to have fallen back again
[19:27:03] and by we I don't only mean l10n team, Reedy was there too and others
[19:27:17] nikerabbit: yup
[19:27:42] aharoni: thanks for the update
[19:27:43] plus, i fought with some weirdness trying to set up PHPunit and make tests pass, but hashar, reedy and krinkle helped to sort that out.
[19:28:15] rsterbin, Reedy: just checked with fabriceflorin - he says it's ok to push to production and we'll be looking at the data we're collecting in real time to see if it's kosher
[19:28:23] and fixed some fixmes in the process :)
[19:28:24] Hi guys! Are we comfortable releasing the AFT5 patch, given that it has been tested on prototype and approved by
[19:28:25] aharoni: talks submitted for gnunify right?
[19:28:33] cool
[19:28:35] true.
[19:28:54] Sorry, ... and approved by Roan. Reha, are you comfortable with this?
[19:28:56] raindrift: update on upload wizard
[19:29:11] It's live now ;)
[19:29:29] Thanks, Reedy.
[19:29:32] yep
[19:29:40] aharoni: we can arrange a call / video chat if you want :-b
[19:29:58] thanks Reedy, looking up right now what we get in the log
[19:30:02] Dario is checking the data right now, to see if it works as intended.
[19:30:06] hashar: that would be very helpful
[19:30:08] aharoni: if you need any help or review about testing let me know
[19:30:39] raindrift: you there?
[19:30:43] alolita: UW: we fixed a bug with the custom license feature, but that's all the work I've done on it in the last week. In the previous weeks a lot of code reorganization happened, but I wasn't very involved with it. The focus for multimedia has been on TMH.
[19:31:20] btw alolita: rmoen and bsitu confirmed that we're not writing clicktracking data to the DB any more, everything is being logged via UDP, it's just not properly documented
[19:31:25] raindrift: yup - we pushed the custom license feature fix last week
[19:32:02] DarTar: thanks - that clarifies things since it would be kinda crazy to log to the db if we're using udplogger
[19:32:06] *DarTar is preparing an email with an action list for clicktracking stuff
[19:32:13] totally
[19:32:32] hashar, thanks a lot. Most of all, it would be most helpful if you could improve https://www.mediawiki.org/wiki/Manual:PHP_unit_testing as much as possible.
[19:33:05] raindrift: did you want to add anything else for TMH
[19:33:09] i improved it a little bit, but there's probably a lot more to add.
[19:33:57] aharoni: good start on PHP unit testing documentation - will be very helpful for other features also
[19:34:35] so other than that - a brief wikipedia-blackout update
[19:34:46] alolita: there's not much to add for TMH yet. I'm working with MDale, Ryan, and J on getting a labs deployment together, still.
[19:34:54] aharoni: yeah it is a bit incomplete
[19:35:22] I'm AFK for 10015
[19:35:25] also, it sounds like we'll be basing our test plan at least partly on what Kaltura uses, but we're also going to have to test many things they don't, since they don't care about old browsers.
[19:35:25] *10-15
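The UDP clicktracking pipeline DarTar mentions above (19:31) works roughly like the fire-and-forget pattern below. A minimal sketch: the host, port, and line format are assumptions, not the extension's actual configuration. The point of UDP here is that a log send can never block or take down the page request the way a DB write could:

```php
function sendClickTrackingLine( $line, $host = '127.0.0.1', $port = 8420 ) {
	$sock = socket_create( AF_INET, SOCK_DGRAM, SOL_UDP );
	if ( $sock === false ) {
		return; // logging must never break the request
	}
	socket_sendto( $sock, $line, strlen( $line ), 0, $host, $port );
	socket_close( $sock );
}

// Example call with a made-up event name in the style seen later in this log:
sendClickTrackingLine( "enwiki optionN-impression-overlay " . time() . "\n" );
```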
[19:35:35] aharoni: I have added that to my todo list. Meanwhile you will have to ask me questions directly :-))
[19:35:40] dinner time!
[19:35:57] raindrift: thanks; some developers from the features team (kaldari, jorm, neilk, katie) have been doing a lot of work on design, development of javascript functionality, data access for congress info
[19:36:04] rsterbin, fabriceflorin, Reedy: confirmed, the first optionN-impression-overlay events are coming through and they all have a matching triggerD-click-overlay event
[19:36:09] awesome
[19:36:38] a lot of us were in yesterday to work on implementation; we're still working on it
[19:36:48] will go live 9pm PST tonight
[19:36:52] the data should be kosher now, I'll let it run for a while before I run another sanity check
[19:37:05] krinkle: did you have an update?
[19:37:10] alolita: yep on the pad
[19:37:20] 1.19 deployment-related on-wiki resourceloader migration; helping with code and documentation
[19:37:21] cool; anything to add
[19:37:25] Thanks, DarTar, that's wonderful! Thanks so much for spearheading this release, much appreciated.
[19:37:33] np
[19:37:36] alolita: nope, that's pretty much it. doing code review right now
[19:37:53] krinkle: thanks; appreciate your help
[19:37:56] and I've offered a little availability to sopa-team if they need codereview
[19:38:10] although not needed so far.
[19:38:16] krinkle: excellent; please feel free to ping neilk and kaldari
[19:38:28] sure
[19:38:57] I wanted to do some SOPA review but everything was marked [ok] already :-)
[19:39:20] any other updates from the features team? else we should be pretty much done
[19:39:33] I have only looked at the congress representative extension though
[19:39:37] trevor and roan did a good presentation at linux conf au
[19:39:51] are there eclipse users ??
[19:40:07] hashar: great; feel free to add any comments if you see any issues
[19:40:59] that's all I have; any questions on the sopa blackout or any other projects?
[19:41:34] jorm reports he's deep in sopa blackout thinking
[19:41:41] and coding.
[19:41:55] i'm doing an html page for the congress lookup thing right now.
[19:41:55] and coding ;-)
[19:42:17] thanks all - ttyl!
[19:51:27] alolita: hey
[19:51:42] Oren: hi
[19:52:26] I've started development on the new Search in the last couple of days
[19:53:11] Oren: awesome; welcome
[19:54:30] it would be useful if someone with more subversion knowledge than me copied search-lucene-3
[19:54:47] I'm back
[19:55:16] that way I would not have to worry about messing up the current version
[19:56:13] I mean copied search-lucene-2 to search-lucene-3
[19:56:15] Oren: sure; the devs to talk with re: svn are roankattouw, reedy, brion
[19:56:20] ok
[19:57:23] ok back to coding...
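Krinkle's 1.19 note above (19:37) refers to migrating on-wiki scripts to ResourceLoader. For context, a minimal module registration on the PHP side looks like this; the module name and file paths are invented for the example:

```php
// In the extension's setup file; 1.17+ ResourceLoader registration.
$wgResourceModules['ext.exampleBanner'] = array(
	'scripts'       => 'ext.exampleBanner.js',
	'styles'        => 'ext.exampleBanner.css',
	'dependencies'  => array( 'mediawiki.util' ),
	'localBasePath' => dirname( __FILE__ ) . '/modules',
	'remoteExtPath' => 'ExampleBanner/modules',
);

// Then loaded on a page from PHP:
$wgOut->addModules( 'ext.exampleBanner' );
```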
[20:01:01] you don't mind if I add that link to the terminology page :-)
[21:36:36] *werdna waves
[21:36:52] I guess I should drop in on this SOPA sprint in a few hours
[22:10:16] apergos: ping
[22:10:31] ponnngggg
[22:10:32] i jumped on board with the CongressLookup stuff with katie and kaldari
[22:10:37] ok
[22:10:50] we ran into some interesting issues
[22:10:53] heh
[22:10:56] particularly around zip-9 stuff
[22:10:59] well I noticed something about the data
[22:11:11] which is why you see 4 tables and potentially 4 lookups
[22:11:17] but it's not just the 9 digit data
[22:11:18] and we're thinking about dropping support for zip-9 since we're getting down to the wire
[22:11:23] the 5 digit data has the same problem
[22:11:39] so if there are 2 or 3 congresspeople with the same 5-digit ZIP, what, show them all?
[22:11:40] indeed it does - although i made a schema change to deal with it
[22:11:44] yeah
[22:11:49] path of least resistance
[22:12:06] we can throw in some explanatory text for the user
[22:12:15] that would certainly be quicker
[22:12:17] :)
[22:12:34] can you give me the 30 second sound byte version of the zip-9 issues?
[22:13:00] so yeah i changed cl_zip5 to have an autoincrement id field and just index the clz5_zip field, then we can just pull rep data out of that table and not worry about checking against zip9
[22:13:07] as for the zip9 issue...
[22:13:17] K4-713 can probably sum it up better for you
[22:14:37] *apergos waits for K4-713 to notice the ping
[22:14:48] heh just gave her an irl ping
[22:14:54] nice
[22:15:00] apergos: Ah, here I am.
[22:15:03] Hello.
[22:15:04] heh
[22:15:18] so I'm told you can give me a 30 second sound byte of the zip9 issues
[22:15:27] *K4-713 tries to catch up to the present tense
[22:16:02] Right, okay.
[22:16:26] Basically, in most cases, we don't have the data to do a direct lookup anyway.
[22:16:40] It's due to the way this data is saved in the files we're parsing.
[22:16:45] er?
[22:17:01] which files?
[22:17:22] There's a text file that kaldari got from somewhere...
[22:17:28] zip4.. something?
[22:17:47] It was just called zip.txt when he sent it to me.
[22:17:50] afaict the data is ok in there... got an example?
[22:17:57] (if it's the file I'm thinking of)
[22:18:35] I'm finding what they did a little difficult to describe.
[22:18:44] just grab an entry
[22:18:49] They report only the most significant digits necessary to define a group, and then stop.
[22:18:52] yes
[22:19:00] so I set up the zip9 stuff to deal with that
[22:19:05] for anything that's > 5 digits
[22:19:11] Okay.
[22:19:44] I split out the entries to 9 digit ones -> table with 9 digit entries... 8 digit ones -> table with 8 digit entries etc
[22:20:14] then if we get a zip+4 from the user, we try the 9 digit table, if not there, drop a digit off the end, try the 8 digit
[22:20:22] repeat til we find or fail at table with 6
[22:20:27] So, we've set things up, for maximum speed, to just look up the 5-digit in the zip5 table, thinking that we'd wire it up to get more precise if it came back with more than one entry.
[22:20:34] If we had time.
[22:20:52] right
[22:21:00] so this does that, it tries the 5 digit table first
[22:21:05] We're worried that the servers will melt if most people require more than one db lookup.
[22:21:08] only then if it fails will it go through the rest
[22:22:01] Okay. That makes total sense to me...
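apergos's lookup order above, sketched against MediaWiki's database layer. Only cl_zip5 and its clz5_zip field actually appear in this log; the cl_zip6..cl_zip9 table names and the clz_zip column are guesses for illustration:

```php
function lookupByZip( $dbr, $zip ) {
	$digits = preg_replace( '/[^0-9]/', '', $zip ); // "02631-2433" -> "026312433"

	// Fast path: the plain 5-digit table covers almost everyone.
	$row = $dbr->selectRow( 'cl_zip5', '*',
		array( 'clz5_zip' => substr( $digits, 0, 5 ) ) );
	if ( $row ) {
		return $row;
	}

	// Fallback for zip+4 input: longest prefix wins, from 9 digits down to 6.
	for ( $len = 9; $len >= 6; $len-- ) {
		if ( strlen( $digits ) < $len ) {
			continue;
		}
		$row = $dbr->selectRow( "cl_zip$len", '*',
			array( 'clz_zip' => substr( $digits, 0, $len ) ) );
		if ( $row ) {
			return $row;
		}
	}
	return false; // no table matched: up to four lookups, as noted above
}

// Called as: lookupByZip( wfGetDB( DB_SLAVE ), $userInput );
```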
[22:22:13] if this seems like more complexity than you want right now,
[22:22:15] you could:
[22:22:44] tag or something the current copy (so my changes don't get lost) and then revert stuff, or
[22:22:50] meh
[22:23:26] you could probably just return fail right away in the lookup code, so the rest is never reached
[22:23:43] (nicer would be a config var but who needs to be nice right now)
[22:23:44] I'm honestly not sure what's going on in the actual code right now. I've had my head buried in the database for... ever.
[22:23:48] ah
[22:24:08] i think a vast majority of users are going to enter a 5-digit zip code. for those who enter a 9 digit zip code, a vast majority of those are going to have a 1:1 mapping of 5-digit zip to representative
[22:24:09] so the 5 digit zips are stored that same way you know, some of them are represented as 2 or 3 digit wildcards
[22:24:21] Not in our data, actually.
[22:24:29] ah, you've expanded them all?
[22:24:41] They can safely be padded out with leading zeroes.
[22:24:47] um, not leading
[22:24:51] As far as I know, ours were always that way.
[22:25:03] Yeah: 504 in our db is 00504.
[22:25:09] not 504??
[22:25:27] That is the source of my entire set of headaches right now.
[22:25:49] hmm
[22:26:06] However, the zip3 table would be trailing wildcards.
[22:26:10] maybe you want to work with the zip4 file as I got it from gov blot?
[22:26:23] oh also I added some padding normalization stuff to the code
[22:26:29] that might help some of your issues
[22:27:05] from my perspective, it seems the best way to avoid issues right now would be to drop the complexity associated with the 9 digit zips
[22:27:13] ok
[22:27:15] because things seem to otherwise... work?
[22:27:33] if you decide to revert everything can you keep a branch or something so it can be integrated later?
[22:27:40] i made some minor tweaks to the schema and to the database class that will make things work for just doing zip-5 support
[22:27:49] Maybe we should check and see if 00501 still does something rational.
[22:28:34] http://www.govtrack.us/developers/data.xpd this is where I got the zip4 file from
[22:28:56] apergos: sure - we can also just revert things with svn merge -c - so the changes will still exist in trunk - but if it's easier/clearer for you i can set up a branch
[22:28:57] where numbers shorter than 5 digits are wildcard
[22:29:12] either way, I don't have a preference
[22:29:30] okidoke - this will be super useful in the future with 9 digit support
[22:29:46] in the future we ought to do geocoding against street addresses
[22:29:52] but that's the distant future
[22:30:08] also! i think we already have some fancy arrangement with google maps api to do that kind of stuff
[22:30:11] So, as far as I can tell, everything is still actually working. ;)
[22:30:47] oh that would be nice (but other users would not have that same sort of access we do)
[22:31:05] awjr: is there a place where you all are collecting corrections for the congressional contact info?
[22:31:51] http://en.wikipedia.org/wiki/Wikipedia:SOPA_initiative/Congress_data
[22:32:04] perfect, thanks apergos
[22:32:14] sure
[22:32:16] yeah, what apergos said :)
[22:33:14] we had this whole sopa extension in the dev channel instead of the sopa channel? :-D
[22:33:23] I just now noticed...
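apergos's "nicer would be a config var" aside (22:23), spelled out: a kill switch that makes the lookup fail fast past the zip5 path instead of reverting the code. $wgCongressLookupUseZip9 is an invented name for this sketch, not a real setting:

```php
// LocalSettings.php: ship with zip+4 support off for the blackout.
$wgCongressLookupUseZip9 = false;

// In the lookup code, just before the 6..9 digit fallback tables:
global $wgCongressLookupUseZip9;
if ( !$wgCongressLookupUseZip9 ) {
	return false; // a zip5 miss stays a miss; fallback tables never queried
}
```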
[22:33:36] ok apergos, im going to revert r109225
[22:33:40] oh i didnt know there was a sopa channel :(
[22:33:44] there's more than one rev
[22:33:47] k
[22:33:53] that was the most recent one i saw from you
[22:33:59] so prolly all of them with my name on it have to go
[22:34:30] nah 109153 is still good
[22:34:36] which one is that?
[22:34:40] an important one!
[22:34:49] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/109153
[22:35:08] table definitions and the data model
[22:35:28] yeah but those table defs are wrong now, some of them
[22:35:50] or rather they are insufficient
[22:35:57] as I discovered later.
[22:36:33] you need to pull the whole thing
[22:36:43] ah ok but we dont need to revert that rev - i think the only one in the way is the latest
[22:37:03] oh and 109202
[22:37:07] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/109202
[22:37:10] well that rev (109153) has broken 9digit lookup code in it, see
[22:37:40] if you leave those in, you should...
[22:37:50] oh wait i see - sorry i was reading the non-existent diff in the code review tool as an indication that the file was added
[22:37:58] but those are just mods
[22:38:57] actually you shouldn't leave em in
[22:39:21] so i'll back out that rev as well
[22:39:39] yes, all of em with my name on them
[22:39:49] should i leave in the changes you made to tripZip() ?
[22:39:51] er
[22:39:53] trimZip()
[22:40:06] no
[22:40:11] kk
[22:40:24] I don't know how they will interact with the original code
[22:40:30] alright
[22:44:48] 2631 and 02631
[22:44:59] give different results, reported by jeremyb earlier
[22:45:17] K4-713 ^
[22:45:58] everything should be MA of course
[22:49:19] awjr: Argh.
[22:49:33] ...I'll test that.
[22:50:34] Everything is fine on my local instance for both 2631 and 02631.
[22:51:05] K4-713: the house is the same but the senators are different
[22:51:13] (how could that be?!)
[22:51:29] Page is identical for me. Again: Local instance.
[22:51:43] K4-713: did you try testwiki too?
[22:51:47] jeremyb: Where are you testing?
[22:51:48] (just so you can see it)
[22:52:03] http://test.wikipedia.org/?banner=blackout
[22:52:06] *jeremyb tries again
[22:52:22] Ah, okay. I have no idea when the last deploy to test was. Anybody?
[22:52:24] still broke
[22:52:55] i could understand if they were both wrong or if the house was wrong and senate was right
[22:53:08] but i don't get how the house is right and senate is wrong!
[22:53:16] Doesn't appear to be a problem in trunk, anyway.
[22:53:28] Anybody know what's actually up on test?
[22:53:50] K4-713: you're not running 1.18wmf1 or whatever it is?
[22:53:53] (locally)
[22:54:32] Actually, I am likely to be running the special fundraising version of mediawiki.
[22:54:38] ruhroh
[22:54:49] which would be mw 1.17
[22:55:04] for this you want to be developing against what's being used on the cluster
[22:55:15] which is...
[22:55:17] wait for it...
[22:55:27] svn.wikimedia.org/svnroot/mediawiki/branches/wmf/1.18wmf1
[22:55:30] jeremyb: it breaks because the senator lookup is with the first three digits of the zip code
[22:55:33] you don't need more
[22:55:33] Well, so here's the thing. I'm not actually writing any of that code.
[22:55:50] oh then nevermind
[22:55:51] apergos: but why doesn't it use 236 as first 3 digits?
[22:56:01] and if the code doesn't (I forget which) put the zero both times, strip it both times, whichever it is...
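One consistent answer to the 2631 vs. 02631 discrepancy being discussed: normalize the zip the same way on import and on lookup, so the 3-digit senator prefix match sees "026" either way. An illustrative helper under stated assumptions, not the extension's actual trimZip():

```php
function normalizeZip5( $zip ) {
	$digits = preg_replace( '/[^0-9]/', '', $zip ); // drop dashes and spaces
	if ( strlen( $digits ) > 5 ) {
		// Treat as zip+4; assumes the zip5 part itself kept its leading zero.
		$digits = substr( $digits, 0, 5 );
	}
	// Assume short input lost leading zeroes (a form or spreadsheet ate them).
	return str_pad( $digits, 5, '0', STR_PAD_LEFT ); // "2631" -> "02631"
}

// substr( normalizeZip5( '2631' ), 0, 3 ) and
// substr( normalizeZip5( '02631' ), 0, 3 ) both yield "026" -> MA.
```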
[22:56:08] because those aren't the first three
[22:56:11] it's a 5 digit zip
[22:56:28] *K4-713 is still wondering how stale the deploy on test is.
[22:56:29] the first three really are 026 I guess
[22:56:45] 17 22:52:22 < K4-713> Ah, okay. I have no idea when the last deploy to test was. Anybody?
[22:57:00] ...Bueller?
[22:57:11] (26, 'MA'),
[22:57:15] er
[22:57:17] 023
[22:57:19] whichever
[22:57:38] (236, 'VA'),
[22:57:50] (23, 'MA'),
[22:57:52] that's why
[22:58:05] join wikipedia-sopa
[22:58:23] lol ^^
[23:03:57] K4-713: if the code is stable right now you should make sure all revs (not reverted) are reviewed and then
[23:04:04] get someone to deploy it, keeping ps in the loop
[23:04:20] I'm saying this about other people because very soon I will go to bed, it's 1 am here
[23:04:28] *ops in the loop
[23:04:36] apergos: Understood.
[23:04:57] As soon as my split zipcode script finishes doing its magic, I'll have more data to throw in there as well.
[23:05:02] ok
[23:05:03] Then I'll get on somebody to do all that.
[23:05:06] what does your script do?
[23:06:03] It finds all 5-digit zips that are split districts, and makes sure they're all listed in the zip5 table.
[23:06:35] That way, if we only get 5 digits, we can display all possibilities under district, or something like that.
[23:06:36] ah
[23:06:39] right
[23:07:03] so did your data not have any wildcard 5 digit zips in it when you got it?
[23:07:13] just out of curiosity
[23:07:24] No, because all the split districts have resolution out to past 5 digits.
[23:07:35] I triple-checked that.
[23:07:36] um yes but what I mean is
[23:07:57] the zip4 file I had from wherever it was, lists some 5 digit zip codes with 2 or 3 or 4 digits because
[23:08:03] They're all directly look-upable 5 digit zips.
[23:08:12] All 5 digits are for real.
[23:08:26] if you put * after the number and wildcard match it, all those 5 digit zips are in the same district
[23:08:33] so it just aggregated them like that, see
[23:08:46] hmm so you got already processed data, interesting
[23:09:08] Not exactly. There are wildcards, but they're all after the 5 digits, a dash, and then at least one digit.
[23:09:14] Of the extension.
[23:09:15] that's different
[23:09:28] I could send you the file kaldari sent me last night.
[23:09:34] That would probably help.
[23:09:44] I'm talking about zips that are 2, 3 or 4 digits *total*, intended to be wildcard for 5 digits...
[23:09:50] Right.
[23:09:52] if you don't have em, so much the better!
[23:10:12] Yeah, all the split districts are by necessity more precise than just 5 digits.
[23:10:17] In this file, anyway.
[23:10:18] only send it if you have time and feel like entertaining me :-D
[23:10:26] Oh, I totally do. ;)
[23:10:46] these aren't split districts... :-D
[23:11:01] Argh!
[23:11:11] Split _zipcodes_.
[23:11:23] someone just didn't want to put a bunch of entries for xyz01 xyz02... xyz99 when they could just write xyz districtname and be done with it :-D
[23:11:24] *K4-713 hits head on keyboard
[23:11:30] Right.
[23:11:31] Totally.
[23:11:34] ok :-D
[23:11:40] ...it's a lot easier to explain if you see the file.
[23:11:52] sure
[23:11:54] None of them are split unless they have all 5.
[23:11:58] right
[23:12:18] someone expanded them or... there's another source someplace
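The xyz01...xyz99 aggregation apergos describes, as code: a short prefix entry stands for every 5-digit zip underneath it, so expanding a govtrack-style wildcard row back into full zips is a short loop. Purely illustrative, not K4-713's actual script:

```php
function expandZipPrefix( $prefix, $district ) {
	$zips = array();
	$missing = 5 - strlen( $prefix ); // "236" -> 2 digits to fill in
	for ( $i = 0; $i < pow( 10, $missing ); $i++ ) {
		$zip = $prefix . str_pad( $i, $missing, '0', STR_PAD_LEFT );
		$zips[$zip] = $district; // "23600".."23699" all map to one district
	}
	return $zips;
}
```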