[00:24:50] gn8 folks
[01:00:19] Whenever I render a book right now, the PDF/ODT loads except it doesn't contain any article content
[01:00:43] And the book rendering screen is showing html tags in the text rather than rendering them
[01:01:08] (Which I tested and verified on multiple books that have previously worked and on Firefox and Safari)
[03:24:01] I think the Books namespace has an error, I can't get any books to load on PDF or pediapress. Sent here from #en-help
[04:11:26] what errors?
[04:11:53] Aaron|home: In regards to the books namespace? It's just a bunch of blank pages.
[04:12:36] Aaron|home: None of the books are rendering properly
[04:12:44] It renders titles but no actual page content
[04:13:31] yeah I see
[04:13:42] I also see double-escaped html
[14:59:30] :)
[14:59:44] James_F: I was just looking at that competency matrix
[15:00:02] Aaron|home: It's fun, isn't it?
[15:00:18] bookmarked
[15:41:59] Aaron|home: ping?
[15:42:37] hm
[15:42:46] hi
[15:43:56] we had two rendering LVS flaps today and I've been looking at possible causes
[15:44:07] I tried looking at graphite to see possible NFS lag
[15:44:17] and I found this instead: https://gdash.wikimedia.org/dashboards/parser/
[15:44:39] unrelated to my investigation afaik, but noteworthy :)
[15:45:16] odd
[15:45:52] yes
[15:56:21] paravoid: I should start enabling short thumbnail names beyond testwiki now
[15:56:51] seems to work fine (after finding out an FF address paste bug was messing up my manual testing)
[15:57:00] FF address paste bug?
[15:57:17] I couldn't reproduce your findings, was that the FF bug?
[15:57:53] a) click a short thumb url, b) paste over thumbnail.jpg with the full name [this works and redirects to the short name]
[15:58:27] if you repeat that again the paste actually breaks the url so it has the http://upload.wikimedia.org and a bunch of stuff there twice
[15:58:35] which gives the 402 error or whatever
[15:58:47] I didn't notice this until I pasted out what was in the address bar
[15:58:55] opera doesn't have this problem
[16:29:00] !log Set abbrvThreshold to 160 for thumbnails
[16:29:11] Logged the message, Master
[16:29:13] hm?
[16:55:19] hey all
[16:55:37] what's the currently acceptable spidering limit?
[16:56:09] One spider per net ;)
[16:56:22] wah wah
[16:57:02] don't worry, no DDoS, just one 10 MBit/s machine
[16:57:13] Depends what you're requesting
[16:57:23] Logged in/logged out
[16:57:38] don't give any ideas
[16:57:44] stuff like http://de.wikipedia.org/w/index.php?title=Wikipedia:Hauptseite&oldid=107589666&action=render
[16:57:55] where oldid is in 90% of the cases the current revision of the page
[16:58:37] what i'm intending is to retry my experiments with static HTML dumps which I did two years ago on toolserver
[16:58:39] Don't we have a meta page about this?
[16:58:56] back then, i couldn't finish it because the toolserver disks were NFS and unsuitable for the millions of files
[16:59:34] now I have my own server, 8x 250GB SSD in RAID which should (tm) be performant enough XD
[17:00:18] Right, doing old ID is going to give a cache miss, all the way up, I think
[17:01:02] http://wikitech.wikimedia.org/view/Robot_policy
[17:01:37] "Most users should limit themselves to a single connection, with a small delay (100ms or so) between requests to avoid a tight loop when there is an error condition."
[17:02:36] HardDisk_WP: ALSO!
[17:02:43] Please use a decent user agent string :)
[17:03:15] will do, will do. is there still the problem that anything containing php in the useragent string gets a 403 Forbidden?
[17:03:43] TIAS?
[17:03:52] :D
[17:03:58] I wonder if using the api rather than action=render would give a better cache hit rate
[17:05:17] Hmm
[17:05:27] No squid cache, but I seem to recall poking this code in parse..
[17:05:28] https://de.wikipedia.org/w/api.php?action=parse&oldid=107589666&format=xml
[17:05:37] holy crap, the API has evolved in the last two years...
[17:06:12] lol. action=parse has been around a while
[17:06:19] // If for some reason the "oldid" is actually the current revision, it may be cached
[17:06:19] if ( $rev->isCurrent() ) {
[17:06:24] i was just reading through the docs :D
[17:06:33] For which we hit the parser cache (hopefully). yay
[17:06:39] I'm not sure if action=render does the same thing
[17:09:01] It might..
[17:11:35] looks like i'll just use the API
[17:11:57] Great :)
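
The exchange above boils down to: fetch old revisions through action=parse (which can hit the parser cache when the oldid happens to be the current revision), keep to a single connection with a ~100 ms delay per the robot policy quoted at 17:01, and send a descriptive user agent string. Below is a minimal sketch of such a fetcher in Python, using only the standard library; the User-Agent string and contact URL are placeholders, and the oldid is the one from the example URL above.

import json
import time
import urllib.parse
import urllib.request

API = "https://de.wikipedia.org/w/api.php"
# Descriptive user agent, as asked for in the channel; contact URL is a placeholder.
HEADERS = {"User-Agent": "static-html-dump-experiment/0.1 (https://example.org/contact)"}

def fetch_parsed_revision(oldid):
    """Fetch the rendered HTML of one revision via action=parse."""
    params = urllib.parse.urlencode({
        "action": "parse",
        "oldid": oldid,
        "format": "json",
    })
    req = urllib.request.Request(f"{API}?{params}", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["parse"]["text"]["*"]

def crawl(oldids):
    # Single stream of requests with ~100 ms between them, per the robot policy.
    for oldid in oldids:
        html = fetch_parsed_revision(oldid)
        yield oldid, html
        time.sleep(0.1)

for oldid, html in crawl([107589666]):
    print(oldid, len(html), "bytes of HTML")
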
[17:19:26] http://commons.wikimedia.org/wiki/File:Peugeot_306_%C5%BEandarmerija.jpg What to do with files like that? (Incomplete file move)
[17:51:55] hoo: move it back and try again?
[17:52:26] Reedy: That's what I thought... going to give it a try...
[17:53:44] "A page of that name already exists, or the name you have chosen is not valid. Please choose another name. " -.-
[17:53:53] Since when can't I move over redirects?
[17:55:15] Reedy: Shouldn't I be able to move that page back? The page history only has one entry https://commons.wikimedia.org/w/index.php?title=File:Peugeot_306_%C5%BEandarmerija.jpg&action=history and it's a redirect to the new one...
[17:56:43] * hoo should have moved it with suppressredirect ...
[18:10:30] on commons, if i go to Special:UncategorizedFiles and i pick a file like e.g. File:Bruxelles Java Masque Wayang 02 10 2011 06.jpg and try to add a category using HotCat, i get "Could not retrieve the page text from the server. Therefore, your category changes cannot be saved. We apologize for the inconvenience.". It seems there are a few files that are in this state and therefore have never been categorized.. i wonder how
[18:12:19] mutante: mysql> SELECT page_latest FROM page WHERE page_namespace = 6 AND page_title = 'Bruxelles_Java_Masque_Wayang_02_10_2011_06.jpg';
[18:12:21] gives 0
[18:12:40] http://commons.wikimedia.org/w/index.php?title=File:Bruxelles_Java_Masque_Wayang_02_10_2011_06.jpg&action=history
[18:12:44] and history is empty
[18:12:57] yet the file is there somehow http://commons.wikimedia.org/wiki/File:Bruxelles_Java_Masque_Wayang_02_10_2011_06.jpg
[18:14:13] http://commons.wikimedia.org/w/index.php?page=File%3ABruxelles+Java+Masque+Wayang+02+10+2011+06.jpg&title=Special%3ALog
[18:18:30] http://commons.wikimedia.org/wiki/File:Bruxelles_Java_Masque_Wayang_02_10_2011_06.jpg => The revision #0 of the page named "Bruxelles Java Masque Wayang 02 10 2011 06.jpg" does not exist.
[18:18:30] This is usually caused by following an outdated history link to a page that has been deleted. Details can be found in the deletion log.
[18:18:31] lol
[18:19:10] mutante, http://commons.wikimedia.org/wiki/Commons:Administrators%27_noticeboard/Archive_31#System_problems => known bug
[18:19:23] https://bugzilla.wikimedia.org/show_bug.cgi?id=32551
[18:19:30] thanks :)
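
For reference, a small sketch of how one might flag files in the broken state discussed above (page_latest = 0, empty history) without database access, by asking the Commons API for the latest revision of a title. It assumes, as the symptoms above suggest, that such pages come back from prop=revisions with no revision list; the User-Agent string is a placeholder.

import json
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"
HEADERS = {"User-Agent": "revision-check/0.1 (placeholder contact)"}

def latest_revision_id(title):
    """Return the latest revision id of a page, or None if no revision is readable."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "ids",
        "titles": title,
        "format": "json",
    })
    req = urllib.request.Request(f"{API}?{params}", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        pages = json.load(resp)["query"]["pages"]
    page = next(iter(pages.values()))
    revisions = page.get("revisions")
    return revisions[0]["revid"] if revisions else None

title = "File:Bruxelles Java Masque Wayang 02 10 2011 06.jpg"
revid = latest_revision_id(title)
print(title, "->", revid if revid else "no readable revision (page_latest may be 0)")
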
[21:24:14] hey all
[21:24:30] why do the servers ignore my Accept-Encoding header within a curl request?
[21:24:31] hello
[21:24:34] headers are here: http://pastebin.com/ynjn8f2V
[21:25:15] Oh forget it, it doesn't... there's a Content-Encoding header
[21:25:27] but why is X-Vary-Options: Accept-Encoding; present then?
[21:27:32] "In Accept-Encoding, we only care whether "gzip" is present or not,"
[21:27:52] so it's to distinguish those who are capable of gzip and those who are not.
[21:31:06] ah k
[23:39:32] gn8 folks
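
To illustrate the Accept-Encoding exchange above: a short Python sketch that requests the same page with and without gzip in Accept-Encoding and prints the Content-Encoding the server answers with, which, per the quoted comment, is the only thing X-Vary-Options: Accept-Encoding keys on. The URL is the Hauptseite page mentioned earlier; the User-Agent string is a placeholder.

import urllib.request

URL = "https://de.wikipedia.org/wiki/Wikipedia:Hauptseite"

def content_encoding(accept_encoding):
    """Request the page and report the Content-Encoding the server chose."""
    headers = {"User-Agent": "encoding-probe/0.1 (placeholder contact)"}
    if accept_encoding:
        headers["Accept-Encoding"] = accept_encoding
    req = urllib.request.Request(URL, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Content-Encoding", "identity")

# Only the presence of "gzip" should matter for how the response is encoded and cached.
print("no Accept-Encoding:      ", content_encoding(None))
print("Accept-Encoding: gzip    ", content_encoding("gzip"))
print("Accept-Encoding: deflate ", content_encoding("deflate, br"))
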