[00:12:58] hi robla
[00:13:08] hey there
[00:13:14] been learning a lot at the Linux Foundation summit thingy
[00:13:24] are there any meetings you want me at today?
[00:13:31] like, now + into early evening?
[00:13:44] sumanah: nope....keep learning
[00:13:53] robla: :D ok
[00:14:47] robla: ok, see you tomorrow then.
[00:15:08] see ya!
[18:54:41] hi everybody
[18:54:46] hi alolita
[18:55:02] hi denis, hi alolita
[18:55:28] hi denis, mmerz
[18:57:04] hi, this is Manuel from WMDE.. did everybody get my email?
[19:08:11] hi alolita, mmerz
[19:08:59] denis, mmerz: so the only info i have is that the de chapter plans to do a chapter survey
[19:09:14] do you have any details mapped out on what you would like to use for this survey?
[19:09:19] as you know, we use LimeSurvey
[19:10:29] alolita: you didn't get all the papers, abstracts and so on we already sent to eric and zack?
[19:10:54] no, i don't have any of these details :-)
[19:11:30] great. we intend to use another survey tool, as everything is already prepared
[19:11:39] if you send me the details - we can take a look and note our questions on your requirements for the survey
[19:12:10] what survey tool are you planning to use?
[19:12:19] mmerz?
[19:12:22] we use EFS Survey
[19:12:48] http://www.globalpark.com/
[19:13:08] but the problem is not the survey itself
[19:13:13] ok
[19:13:24] the problem is the sampling of users
[19:13:30] ok -
[19:14:06] are you involved in the current WMF editor survey?
[19:14:13] yes
[19:14:18] perfect :-)
[19:14:41] how far are you with the banner solution? (a banner that shows only once per user)
[19:15:41] btw.. the WMF editor survey was planned to run in the first week of april.. did it already run?
[19:15:58] no, i think it's going to be run next week
[19:16:09] ok
[19:16:12] so our banner solution was - to use CentralNotice
[19:16:22] to display the notice only once to editors
[19:16:33] once they click on the survey - they don't see the notice anymore
[19:17:12] ah, ok, I see.. this is a good way of doing it
[19:17:40] we would need a similar solution for our WMF/WMDE survey
[19:18:15] cool
[19:18:33] also - let's map out if there are localization needs you have
[19:18:47] are these documented anywhere?
[19:18:56] don't want to ask you redundant questions
[19:19:40] okay, i would propose we send you the papers
[19:20:11] and we meet back again in a few days
[19:20:22] I sent you an email with our additional requirements.. we should discuss which of them can be implemented
[19:20:31] ok, that's wise :-) since we can actually drill down into an action plan then
[19:20:36] excellent
[19:21:02] you have my contact, right?
[19:21:15] yep
[19:21:44] how about meeting again next wednesday then?
[19:22:44] wed next does not work for me
[19:23:16] alolita: what's better?
[19:23:37] can we do tuesday at 1:30p?
[19:23:46] 1:30p to 2p - is that too late?
[19:24:18] alolita: mmerz: maybe an hour sooner, fine for you?
[19:24:43] let's see, hold on
[19:25:50] denis: thursday at 11-11:30 PDT
[19:25:54] does that work?
[19:26:10] alolita: one moment
[19:27:07] alolita: okay
[19:27:48] awesome - so please send me the docs and let's review and map out a game plan next time :-)
[19:27:50] alolita: ok
[19:28:03] thanks then! ttyl!
[19:28:23] let's make it so! cu!
[19:28:26] fine and cu
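
(The "display once" banner approach above is, per the log, handled with CentralNotice. As a rough illustration of the general idea only — CentralNotice's real implementation is client-side and uses its own cookie names — a cookie-gated check could look like this in PHP; every name here is hypothetical.)

<?php
// Hypothetical sketch of a "display once" banner gate, as discussed above.
// CentralNotice's actual mechanism differs; the names here are invented.
function shouldShowSurveyBanner() {
	// Editors who already clicked through to the survey have the
	// hide cookie set, so the banner is suppressed for them.
	return !isset( $_COOKIE['hideSurveyBanner'] );
}

function markSurveyClicked() {
	// Once the survey link is followed, hide the banner for a year.
	setcookie( 'hideSurveyBanner', '1', time() + 365 * 24 * 60 * 60, '/' );
}
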
[20:55:17] before I write one, does anyone know of an extension that collects email addresses and sends auto-replies?
[20:55:50] Special:EvilEmailHarvestingMonster ?
[20:56:12] lol, please please call it that :)
[20:56:51] * TrevorParscal looks at http://www.mediawiki.org/wiki/Category:Email_extensions
[21:17:05] I just did "SELECT ... " | bulkmailer
[21:41:30] TrevorParscal: where'd OptIn move to?
[21:42:04] TrevorParscal: btw, moving the extensions out of UsabilityInitiative is causing us problems ;)
[22:32:45] Ryan_Lane: dude, that happened like 6 months ago
[22:32:54] what wiki are you poking at?
[22:33:41] we are creating wikis
[22:33:51] where?
[22:33:53] and the addwiki script was importing sql from non-existent extensions
[22:33:55] it's fixed now
[22:34:34] this might sound like a silly question, but what's a guaranteed way to make invalid wiki markup?
[22:34:34] but this is a perfect example of why I recommended against renaming these on wikitech-l ;)
[22:34:55] I'm trying stuff but the parser just gives up and interprets it as text
[22:36:17] TrevorParscal: ^^
[22:37:43] Ryan_Lane: I did that because I wanted to cause you pain!
[22:37:55] I was pretty sure that was the reason...
[22:38:33] I know that's not what you were aiming for, but renaming extensions is a major PITA for ops folk
[22:39:05] neilk_, there is no such way
[22:39:19] any input will lead to some sort of output, unless the program has a bug or breaks
[22:39:35] the concept of 'invalid wikitext' isn't really valid
[22:39:42] brion: I seem to remember that Wikipedia has complained a few times when I tried submitting some broken wikitext.
[22:40:03] it's like asking what malformed English sentence will cause a phone call to fail
[22:40:18] ok
[22:40:23] there might be a couple of warnings
[22:40:33] might also be some AbuseFilter entries on particular wikis
[22:40:35] complete aside, but is this going to be the behaviour going forward too?
[22:40:39] yes
[22:40:59] nothing a human can type should cause the document to fail
[22:41:25] it might not give you what you wanted, but it'll sit there and stare back at you :)
[22:41:26] including, say ?
[22:41:29] sure
[22:41:39] also FLKISD(*RTUMW#$FU(*)FSY*M(R$)(YFNM(SW(&^H(*&^HT*&(#$RT*&HT*^&N$T^H#&^T%R&SD*^F(*&SED(*(&S(*^&(#^*&^*#HGGG%G
[22:42:18] is there an image i can draw in photoshop that photoshop will reject as invalid? :)
[22:42:55] hm
[22:43:25] maybe I don't know a lot about parsers, but if we are going to build something with a grammar, that does lead to some pathological cases
[22:43:52] like a document which consisted of "" repeated a million times
[22:44:24] or '{{{{{{{{{{{{{{{ ... '
[22:44:28] I think it's better to consider that in wikitext, everything is output as plain text unless the parser can match against it, in which case the behavior is undefined
[22:45:10] yeah
[22:45:17] that's simple if everything is regexes
[22:45:27] if you're trying to parse it, the parser may get lost in backtracking
[22:45:41] if it's that kind of parser
[22:45:56] I think PEG-types are less vulnerable to this, but I'm not an expert
[22:46:52] my original approach was: break things into blocks, then break blocks into spans, and spans into more spans, by iteratively applying pattern matching to them
[22:47:12] right, that's normal rec-descent
[22:47:16] whatever is undetected is just plain text
[22:47:36] you are reminding me of what led to Beautiful Soup
[22:47:36] what happens when it encounters 1000 '{{' in a row?
[22:47:47] and it seemed to be working fine, on real wikipedia articles - but the edge cases could certainly get interesting
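
(A rough sketch of the depth-tracing idea described above — parse one level by matching braces in a single linear scan, split the span there, and re-apply the same scan to the pieces. This is illustrative only, not the parser being discussed, and all names are invented. The point is that pathological input like 1000 '{{' in a row costs one pass and then falls through as plain text, with no regex backtracking.)

<?php
// Hypothetical sketch: find the span of one balanced {{ ... }} pair by
// counting depth in a single left-to-right scan. Returns a half-open
// [start, end) pair, or null if nothing balances — in which case the
// text is simply left alone as plain text.
function findTemplateSpan( $text, $offset = 0 ) {
	$start = strpos( $text, '{{', $offset );
	if ( $start === false ) {
		return null; // no opening braces at all
	}
	$depth = 0;
	$len = strlen( $text );
	for ( $i = $start; $i < $len - 1; $i++ ) {
		$pair = substr( $text, $i, 2 );
		if ( $pair === '{{' ) {
			$depth++;
			$i++; // skip past the second brace of the pair
		} elseif ( $pair === '}}' ) {
			$depth--;
			$i++;
			if ( $depth === 0 ) {
				// One level matched; the caller splits here and
				// recurses into the interior, one level at a time.
				return array( $start, $i + 1 );
			}
		}
	}
	return null; // unbalanced braces stay as plain text
}
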
[22:47:56] sumanah, my inbox is getting spammed! :P
[22:48:19] I'm sorry, Reedy -- you can turn off some notifications in the Melange interface
[22:48:40] (I'm assuming you are complaining about GSoC notifications, Reedy?)
[22:48:43] or, more to the point
[22:48:47] Yeah, I am
[22:48:56] {{template|{{template|{{template| .... }}
[22:49:08] Reedy: in your "profile"
[22:49:11] my parser handled that ok
[22:49:16] Yeah, just looking
[22:49:58] okay, can I ask someone's help with TitleBlacklist? I don't know why, but I can't get it to actually reject files
[22:50:38] Are you an admin?
[22:50:42] (wherever you are testing)
[22:50:46] If so, that is an override
[22:50:47] i'm not logged in as one
[22:50:55] my parser was a little brute force though - it would, say, parse 1 level (using depth tracing to match braces, for instance) - that would split a single span into multiple spans, then I would parse each of those 1 level, until no more splitting occurred
[22:52:05] Reedy: http://pastebin.com/Pc8ZWEju should that config be sufficient?
[22:52:21] Reedy: the contents of MediaWiki:Titleblacklist are exactly the same as Commons
[22:52:34] That's exactly what I put for testing
[22:52:37] And copied from commons
[22:53:29] uh oh
[22:53:40] I think there is a bug in MediaWiki here
[22:53:58] I get a rejection if I go through regular Special:Upload, but *not* if I go through Special:UploadWizard.
[22:54:14] Ahh
[22:54:17] That wouldn't surprise me
[22:54:23] in other words, when the file is "revived" from stash, it doesn't get the same strictures applied.
[22:54:45] but wait
[22:54:55] I *do* see that bug happening live, I think...
[22:58:05] i need to name an extension. it provides an API method for adding an email address to a table, and sends out an auto-responder - it's going to be used for harvesting emails from people who want to "help wikipedia"
[22:58:20] is Special:HungryHungryEmailHippo a bad name?
[22:58:22] What, like a mailing list?
[22:58:23] :D
[22:58:35] TrevorParscal, I'd like to see you localise that...
[22:58:35] not a list, just a volunteer thing I guess
[22:58:38] ha ha
[22:58:42] well, it is a list...
[22:58:45] Technically
[22:58:54] MailOut ?
[22:59:04] apergos likes naming things :D
[22:59:09] Reedy: neilk_ told me the other day of the difficulties in localizing the admonition "Don't be that guy"
[22:59:18] Special:ICANHAZEMAILZ ?
[22:59:35] definitely not!
[22:59:50] (i.e. noyoucannothazemailz :-P)
[23:00:07] Special:IMMAGONNASPAMYOU
[23:00:14] :-D
[23:00:17] yes
[23:01:12] well, it's going to be an api call really... not a special page...
[23:01:30] MailingList or DistributionList or something seems to make sense
[23:01:40] api.php?action=stalkmeplz
[23:01:55] the api call adds the emails to the table?
[23:01:57] it's just a way to capture emails, not send them to the whole list at once
[23:02:00] yes
[23:02:05] or adds them and also sends email to them? or...?
[23:02:15] just sends a confirmation, to validate them
[23:02:21] EmailCapture or some such
[23:02:37] boring but it will be understandable
[23:02:53] yes, everything is boring in mediawiki land
[23:03:25] when names get too exciting we have no idea what the heck the calls do :-P
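
(The module being named here would need roughly the following shape: an API action that stores an address in a table and mails back a confirmation. The sketch below is illustrative only — the class name, action name, table, and message wording are all invented, not the eventual extension's code.)

<?php
// Hypothetical sketch of the "email capture" API module discussed above;
// the table, action name, and messages are invented for illustration.
class ApiEmailCapture extends ApiBase {
	public function execute() {
		$params = $this->extractRequestParams();
		$email = $params['email'];

		// Store the address in a table for later follow-up.
		$dbw = wfGetDB( DB_MASTER );
		$dbw->insert( 'email_capture', array( 'ec_email' => $email ), __METHOD__ );

		// Auto-respond with a confirmation so the address can be validated.
		global $wgPasswordSender;
		UserMailer::send(
			new MailAddress( $email ),
			new MailAddress( $wgPasswordSender ),
			'Thanks for offering to help Wikipedia',
			'Please follow up to confirm your email address.'
		);

		$this->getResult()->addValue( null, $this->getModuleName(),
			array( 'result' => 'success' ) );
	}

	public function getAllowedParams() {
		return array(
			'email' => array( ApiBase::PARAM_TYPE => 'string' ),
		);
	}
}

(Wiring it up would be something like $wgAPIModules['emailcapture'] = 'ApiEmailCapture'; in the extension's setup file.)
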
[23:04:43] $wgFluxCapacitor->hyperdrive( 'SuperCollider::exploderate', MW_GAMMA_RAY );
[23:05:02] seems like a far superior way to program to me...
[23:05:17] I see no possible way of being confused, whatsoever
[23:05:21] :)
[23:05:24] :-D
[23:05:27] :)
[23:05:30] The new parser needs an awesome name
[23:05:38] Reedy: in Berlin!
[23:05:43] that can be a fun thing we do in Berlin
[23:05:51] Name That Parser!
[23:06:13] Der Parser
[23:06:19] meh
[23:06:22] now that is boring
[23:06:30] ?? Parser
[23:06:38] how about, instead of getting rid of globals (because not everyone believes it's possible or needed), we just rename all globals to something more interesting
[23:06:47] hahaha
[23:06:48] $wgTitle is now $wgWhereYouAt
[23:06:55] ooohh oohhh
[23:07:00] $wgRequest is now $wgHolla
[23:07:05] we could have filters that let you change the names at install
[23:07:09] so you could use ...
[23:07:23] remember the redneck dialog in the install dialog from Red Hat like 5.x or something??
[23:07:38] $wgOut is now $wgYoCheckIt
[23:07:41] so you could select to have all the globals in piratespeak
[23:08:00] that's pretty useful as well...
[23:08:37] that should have been "redneck dialect"
[23:08:44] understood
[23:08:46] sorry... 2 am, really oughta sleep...
[23:09:57] apergos is $wgOut
[23:09:58] cya
[23:10:06] l8r
[23:11:18] Reedy: are we sure that TitleBlacklist is working on Commons? I've tried several ways to trigger it and nothing.
[23:11:35] Reedy: http://commons.wikimedia.org/wiki/File:DVC01234.JPG
[23:11:37] It was on trunk
[23:11:47] I'm not sure what got merged or didn't get merged
[23:12:21] There is an incomplete list of patterns that Special:Upload blocks in the UI, but the TitleBlacklist things don't seem to work
[23:13:05] was able to upload this: http://commons.wikimedia.org/wiki/File:Tumblr_1234567890123456789_01.png (since deleted)
[23:13:52] Wow, artwork :D
[23:14:10] Roan looks to have merged them out
[23:14:18] them == ?
[23:14:28] the revisions i made fixing it etc
[23:14:32] yay
[23:14:38] anything I can do to help?
[23:14:55] yay was ironic, if that wasn't clear.
[23:16:03] I'd try and test it locally if I wasn't feeling crappy atm
[23:16:32] well, let's see
[23:16:36] Commons haven't reported it broken again, so i'm guessing it's fixed..
[23:17:03] ...I just uploaded three files that ought to be blacklisted
[23:17:24] in a non-admin account
[23:18:47] There might be a needed revision that hasn't been merged
[23:19:09] ok
[23:19:14] so
[23:19:22] can you test it on trunk?
[23:19:25] locally I can get it to do something via Special:Upload
[23:19:40] but not Special:UploadWizard, which is more indirect (stashing then unstashing)
[23:19:49] locally = on trunk
[23:19:50] yeah
[23:19:54] the calls are weird
[23:20:01] it might be that the code path you're going through doesn't hit it
[23:20:02] errm
[23:20:12] weird because other people reported this as occurring
[23:20:22] possibly an older version of whatever was deployed DID do this.
[23:20:37] well, thanks for fixing my bug then :)
[23:20:40] my work here is done
[23:20:53] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/84605
[23:21:28] uh, line 402?!
[23:22:48] What about it?
[23:22:58] - if ( $nt->exists() ) {
[23:22:58] + if ( !$nt->exists() ) {
[23:22:58] $permErrorsCreate = $nt->getUserPermissionsErrors( 'createpage', $user );
[23:23:01] it flipped the logic...
[23:23:06] purposefully
[23:23:10] ok
[23:23:27] If you look, it's doing an existence check, and then if it does exist, whinges about maybe not having permissions to create the page
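
(In context, the flipped condition from r84605 reads roughly as below — only a page that does not exist yet needs the 'createpage' permission check, which is what the explanation above describes. This is a paraphrase of the quoted diff, not the exact upstream code.)

// Paraphrase of the r84605 fix quoted above, for context.
// Before the fix the condition was $nt->exists(), so the 'createpage'
// check ran for pages that already existed — backwards.
if ( !$nt->exists() ) {
	// Uploading would create the file's description page,
	// so make sure the user is allowed to create pages.
	$permErrorsCreate = $nt->getUserPermissionsErrors( 'createpage', $user );
}
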
[23:23:38] well, it says it's merged?
[23:23:48] Yup
[23:23:53] It shouldn't be live on the site
[23:24:05] we pushed on Wednesday/Thursday night though
[23:24:09] so it probably is
[23:24:33] that was merged a fortnight ago..
[23:24:52] i think I lack context here
[23:24:58] is this a change we want to see in prod or not?
[23:26:23] yes
[23:27:07] so that sounds like it should be working
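
(For reference: the pastebin config tested earlier in the log isn't preserved, but a minimal TitleBlacklist setup of that sort would look something like the following in LocalSettings.php. This is only a guess at its shape, and the exact source-type syntax varies between versions of the extension.)

<?php
// Guess at the shape of the config being tested; the actual pastebin
// contents are not preserved. This points the extension at the on-wiki
// MediaWiki:Titleblacklist page mentioned in the discussion.
require_once "$IP/extensions/TitleBlacklist/TitleBlacklist.php";
$wgTitleBlacklistSources = array(
	array(
		'type' => 'localpage',
		'src'  => 'MediaWiki:Titleblacklist',
	),
);
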