[00:29:07] huh: ni
[00:29:12] Ni?
[00:29:15] *no
[02:45:48] If RSS on private wikis doesn't work, then what's the sense of having it there?
[02:53:54] What do you mean?
[02:53:57] The RSS extension?
[02:54:28] Esuba: ^
[02:58:38] gloria: oh, I don't know what provides it. Atom feeds of RC, Watchlist and whatever
[02:59:05] e.g. on otrswiki it exists but is always empty
[02:59:08] Esuba: Presumably the RSS feed for private wikis is private.
[02:59:20] It shouldn't be empty if you auth against it.
[02:59:23] That might be a bug, dunno.
[02:59:57] I think you can have user authentication with the RSS feed by passing it through an entry point.
[11:51:29] andre__: is that enWS request a configuration issue, or will it need coding? I couldn't find any detail at MW:
[11:51:43] sDrewth, I have no idea either, I must admit.
[11:51:52] I hope that somebody else can provide input
[11:52:14] k
[11:52:20] * sDrewth hands out white canes
[11:55:23] sDrewth: what was the bug again?
[11:56:50] twkozlowski, https://bugzilla.wikimedia.org/show_bug.cgi?id=62521
[11:57:34] odder: the one you removed yourself from
[12:07:45] that will probably need some custom coding
[12:08:34] It would probably make sense to limit it to using the contentns config option
[12:43:27] https://www.mediawiki.org/wiki/Talk:Analytics/Wikistats
[17:30:31] If I want to create a script that extracts specific information from *all* pages on Wikipedia, once every 2 months, can I do so on Wikipedia directly or do I need to use a local copy?
[17:31:32] binni: better use a dump
[17:33:32] matanya, okay, but can I use libraries like mwclient https://github.com/mwclient/mwclient on the dump? Isn't the dump just some XML files that can't be accessed through a MediaWiki API?
[17:34:31] binni: there is pywikibot
[17:34:48] the dump is an XML file
[18:05:24] matanya, so I set up a MediaWiki instance on my machine, import the XML dump, and then query the data through the MediaWiki API with pywikibot or mwclient? Or do I need additional files as well?
[18:21:36] Who are Hfung and Asharma?
[18:22:15] twkozlowski: Howie Fung, head of Product; Alolita Sharma, head of Language Engineering
[18:22:21] Those titles may not be totally accurate
[18:22:26] http://wikimediafoundation.org/wiki/Staff_and_contractors?showall=1
[18:22:34] "Director of Language Engineering"
[18:22:40] "Director of Product Development"
[18:23:06] Ah, that's why they can't moderate an e-mail I sent to a mailing list about 20 hours ago
[18:35:15] binni: that would work
[18:54:41] [[Tech]]; Ruslik0; /* Background */; https://meta.wikimedia.org/w/index.php?diff=7800636&oldid=7789289&rcid=5095178
[21:19:52] The "New York office" was rather effective :) http://lists.wikimedia.org/pipermail/wikitech-l/2014-March/075146.html
[21:29:35] I don't understand, was that a netsplit or are there really that many people on irccloud? :)
[21:30:00] nah, str4nd surely isn't
[22:21:43] MediaWiki:Rclistfrom is borked
[22:22:05] in wmf17
[22:24:47] the whole message is swallowed by a link, causing HTML to show up in bad ways. Anyone know if this change was intentional?
[22:25:09] (I'm not sure if I should file a bug, if it might not be a bug...)
[22:58:43] gwicke, does the parser ever insert non-breaking spaces?
[23:05:39] mwalker, it does
[23:06:37] there can be non-breaking spaces anywhere in content; IIRC the parser also adds an nbsp when there is a space before a colon
[23:07:01] the PHP parser uses entities, we use UTF-8
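On the private-wiki feed question above (around [02:59:08]–[02:59:57]): a minimal sketch of fetching the recent-changes Atom feed after authenticating, assuming a Python requests session and the legacy two-step action=login flow. The wiki URL and credentials are placeholders, not the actual otrswiki setup, and the details of how the feed handles auth may differ per wiki.

```python
import requests

# Placeholders: not the real otrswiki URLs or credentials.
API = "https://private.wiki.example.org/w/api.php"
INDEX = "https://private.wiki.example.org/w/index.php"
USER, PASSWORD = "ExampleUser", "example-password"

session = requests.Session()

# Legacy two-step action=login: the first call returns a token, the second
# call repeats the credentials together with that token.
first = session.post(API, data={"action": "login", "lgname": USER,
                                "lgpassword": PASSWORD, "format": "json"}).json()
token = first["login"].get("token")
if token:
    session.post(API, data={"action": "login", "lgname": USER, "lgpassword": PASSWORD,
                            "lgtoken": token, "format": "json"})

# With the login cookies on the session, the recent-changes Atom feed should
# no longer come back empty on a private wiki.
feed = session.get(INDEX, params={"title": "Special:RecentChanges", "feed": "atom"})
print(feed.text[:400])
```

And on binni's extraction question (around [17:30:31]–[18:35:15]): importing the dump into a local MediaWiki and querying it through the API works, but pywikibot can also read the XML dump directly without any local wiki. A rough sketch, assuming pywikibot's xmlreader module and a downloaded pages-articles dump; the file name and the "{{Infobox" check are stand-ins for whatever information is actually being extracted.

```python
from pywikibot import xmlreader

# Placeholder file name; plain .xml or compressed .xml.bz2 dumps both work.
DUMP = "enwiki-latest-pages-articles.xml.bz2"

# XmlDump.parse() streams the dump, yielding one entry per page,
# so the whole dump never has to fit in memory.
for entry in xmlreader.XmlDump(DUMP).parse():
    # Stand-in for the "specific information" being extracted.
    if "{{Infobox" in entry.text:
        print(entry.title)
```

Either approach avoids crawling the live site for every page; importing the dump into a local MediaWiki (e.g. with the importDump.php maintenance script) and pointing mwclient at the local api.php would also work, as confirmed at [18:35:15].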
[23:15:59] I am having an issue where I cannot see any article content. I see the articles and the table of contents, but each section only contains the heading title and a link to edit, with no content. Any idea?
[23:23:03] mwalker: there are several cases in which an nbsp is inserted, yes
[23:23:20] (and there are more being requested, particularly by Germans)
[23:23:32] (out of envy for French, of course)
[23:25:56] ok; makes sense, I guess... I'm trying to figure out why https://meta.wikimedia.org/w/index.php?title=Translations:Fundraising/Translation/Thank_you_email_20131202/5/ca becomes "L�objectiu de la Viquip�dia �s reunir la totalitat del coneixement hum� i oferir-lo..."
[23:26:31] but if it's something that is expected and I need to deal with, I need to figure out why my code is stripping the entity when it gets the parsed content
[23:28:13] you fetch parsed content? :o
[23:29:25] yay http://code.google.com/p/gerrit/issues/detail?id=2456
[23:57:10] Nemo_bis, yes; I'm fetching parsed content
[23:57:23] because I'm using that as input to another system that is not MediaWiki-aware
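On mwalker's mojibake and the disappearing nbsp above: a speculative sketch of two ways this pattern can show up, not a claim about where the actual fundraising pipeline breaks. It assumes Python 3, and the sample strings and byte values are made up for illustration: unescaping the PHP parser's &nbsp; entity yields U+00A0, which a naive whitespace cleanup silently flattens, and decoding Latin-1-style bytes as UTF-8 with replacement produces exactly the "L�objectiu" pattern.

```python
import html
import re

# The PHP parser emits &nbsp; as an entity; unescaping turns it into U+00A0.
fragment = html.unescape("coneixement&nbsp;:")
print(repr(fragment))                       # 'coneixement\xa0:'

# A naive whitespace normalisation treats U+00A0 as ordinary whitespace and
# collapses it into a plain space, which looks like the entity being "stripped".
print(repr(re.sub(r"\s+", " ", fragment)))  # 'coneixement :'

# If the fetched bytes are Latin-1/Windows-1252 but get decoded as UTF-8 with
# replacement, each accented character collapses into a single U+FFFD.
# The byte values here are illustrative, not the real fetched content.
print(b"L\xb4objectiu de la Viquip\xe8dia \xe9s".decode("utf-8", errors="replace"))
# -> L�objectiu de la Viquip�dia �s
```

If that matches what the downstream system receives, checking the declared charset on the fetch and avoiding \s-based cleanup on Unicode text would be the first things to try.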