[01:31:56] Is there a good reason why my file queries via the API are failing to return the image repository as normal?
[01:46:49] http://lists.wikimedia.org/pipermail/wikimedia-l/ is giving a 403
[01:47:14] mutante: ^
[01:49:54] legoktm, I just had an API problem that stopped occurring like 2 seconds ago
[01:50:02] * Magog_the_Ogre wonders if WMF is having issues
[01:50:08] I don't think so
[01:50:11] what issue are you having?
[01:50:16] were I guess.
[01:52:07] legoktm, the API wasn't returning proper file repository information
[01:52:14] ...but it is now?
[01:52:19] yes, it is now
[01:52:40] as soon as I turned on trace mode in my program, the problem was gone
[01:53:13] weird.
[03:25:26] Hi
[03:25:41] Who updates Special:UncategorizedPages?
[03:31:53] Anyone?
[03:33:32] LlamaAl: it runs on its own once in a while. why?
[03:34:08] jackmcbarn: Some users on eswiki want to know how to update it themselves
[07:08:48] Seeing the occasional search error at enWP: "An error has occurred while searching: Pool queue is full"
[12:06:43] if I want to scrape Wikipedia using the MediaWiki API, do I scrape Wikipedia directly (http://en.wikipedia.org/w/api.php) or use another source?
[12:06:50] and if another source, which/where?
[12:06:59] I want to scrape article links and backlinks and feed that info into a database
[12:07:40] binni, dumps.wikimedia.org
[12:08:33] MaxSem, thanks
[12:17:02] binni: article links are already directly available in the dumps, but also in the replicated databases on Tool Labs
[15:20:39] I cannot open it.wikipedia.org in any browser on my laptop, while it works on other machines
[15:21:00] also I can ping it, resolve it, and open other Wikimedia websites
[15:21:43] it's definitely a problem, though I wonder if someone has already experienced a similar issue
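For the link-scraping question above ([12:06:43]–[12:17:02]), here is a minimal sketch of pulling a page's outgoing links and backlinks through the MediaWiki API, assuming the public en.wikipedia.org endpoint mentioned in the log; the `User-Agent` string and helper names (`build_url`, `api_get`, etc.) are illustrative, not part of any official client:

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"  # public endpoint cited in the log

def build_url(params):
    """Build an API URL, always requesting JSON output."""
    return API + "?" + urllib.parse.urlencode(dict(params, format="json"))

def api_get(params):
    """Fetch one page of API results and decode the JSON reply."""
    req = urllib.request.Request(
        build_url(params),
        headers={"User-Agent": "link-scraper-sketch/0.1"},  # arbitrary example UA
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def outgoing_links(title):
    """Yield titles that `title` links to (prop=links), following pagination."""
    params = {"action": "query", "prop": "links", "titles": title, "pllimit": "max"}
    while True:
        data = api_get(params)
        for page in data["query"]["pages"].values():
            for link in page.get("links", []):
                yield link["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carry plcontinue into the next request

def backlinks(title):
    """Yield titles that link to `title` (list=backlinks), following pagination."""
    params = {"action": "query", "list": "backlinks", "bltitle": title, "bllimit": "max"}
    while True:
        data = api_get(params)
        for link in data["query"].get("backlinks", []):
            yield link["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carry blcontinue into the next request
```

As MaxSem and the [12:17:02] reply point out, for bulk extraction the dumps (dumps.wikimedia.org) or the replicated databases on Tool Labs are the better source; the live API above is paginated and rate-limited, so it suits small, targeted queries rather than whole-wiki link harvesting.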