[01:46:01] gn8 folks
[02:17:34] !log LocalisationUpdate completed (1.19) at Sun Mar 4 02:17:34 UTC 2012
[02:17:40] Logged the message, Master
[02:35:16] !log LocalisationUpdate completed (1.18) at Sun Mar 4 02:35:16 UTC 2012
[02:35:19] Logged the message, Master
[05:12:47] Does anyone know when the visual editor will go live on EN?
[05:22:16] Pine: you might want to check the wikitext-l archives
[05:22:51] jeremyb: ok, where can I find them?
[05:22:55] Pine: or http://www.mediawiki.org/wiki/Visual_editor#Status
[05:23:18] thanks
[05:24:05] Pine: http://lists.wikimedia.org/pipermail/wikitext-l/ of course ;-) (linked from https://lists.wikimedia.org/mailman/listinfo/wikitext-l which is linked from https://lists.wikimedia.org/)
[05:24:44] Pine: also, try the test sandbox
[05:25:00] jeremyb: thanks, the list doesn't say much about it recently
[05:25:11] Pine: the wiki page is pretty recent
[05:25:22] Yeah, http://www.mediawiki.org/wiki/Visual_editor#Status is good enough for my purposes. Thanks
[05:25:45] longer version: http://www.mediawiki.org/wiki/Visual_editor/status
[05:26:20] Pine: are you a tree?
[05:27:17] * Pine leaves rustle
[07:04:03] Holy... cow...
[07:30:38] hi everyone!
[07:35:11] why are download URLs for images on Commons of the form upload.../d/df/ ?
[07:35:39] can the c/c6/ and d/df/ be calculated easily?
[08:07:54] you can ask the API to provide the URL of an image
[08:08:42] they can be calculated easily, actually
[08:10:05] if the name is provided with spaces converted to underscores, then:
[08:10:10] hashpath = getHashPathForLevel(fname, 2)
[08:10:18] def getHashPathForLevel( name, levels ):
[08:10:35]     summer = hashlib.md5()
[08:10:35]     summer.update( name )
[08:10:36]     md5Hash = summer.hexdigest()
[08:10:50]     path = ''
[08:10:50]     for i in range( 1, levels+1 ):
[08:10:51]         path = path + md5Hash[0:i] + '/'
[08:10:51]     return path
[08:11:12] that's all there is to it.
[08:12:48] the urls include hashes because we have millions of images, and millions of images in one directory = impossibly slow for a server to retrieve
[08:26:13] but the calculation could be done by the server, e.g. via .htaccess
[08:26:38] Wikipedia has millions of articles, named e.g. .../Sweden
[08:26:59] the article texts aren't stored as flat files in a directory
[08:27:17] I, the user, don't know that
[08:27:23] they live in databases that are built to handle millions of rows (with careful indexing, particular attention to query structure, etc)
[08:27:40] the question was why the urls are like this;
[08:27:47] so I was answering the question
[08:27:55] well, yes and no
[08:28:24] you asked why the urls are of the form x/y/filename ... this is why
[08:28:27] of course there can be hashes and filesystems involved. But that doesn't imply some of the hash structure should be exposed in external URLs
[08:29:02] is there a good reason to expose the hashes in the URLs? Or was that just the way it happened?
[08:29:19] LA2: If we didn't have hashes in the filenames, there would be extra processing time, and we do have files that have duplicate names, thus presenting clashes
[08:29:41] a duplicate name will mean the same file
[08:30:05] for all files, transfer time (1 second for 1 Mbyte at 8 Mbit/s) far exceeds that processing time
[08:30:05] a file with the same name on multiple projects is of course stored in a separate area per project
[08:31:20] we didn't have a lot of people clamoring for simplified urls at the time, I guess; a two-level hash only adds about 4 characters to the url
[08:33:02] yes, but 4 characters that make it impossible to do: for f in 1 2 3 4 ; do wget .../file_$f.jpg ; done
[08:33:48] you just have to add 3-4 lines to that script
[08:34:49] it's not a script, it's what I occasionally type at the command line
[08:35:15] time to make it one I guess
[08:35:41] and put it on the server?
[08:36:34] if the URL doesn't match [a-z]/[a-z][a-z]/ then pass it through a script that calculates the hash
[08:38:13] no
[08:38:23] we wouldn't put it on the server(s)
[08:38:49] time for *you* to make a script :-P
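A minimal sketch of the script discussed above: given a Commons filename, derive the two-level hash path and build the full download URL. The upload.wikimedia.org/wikipedia/commons base URL and the example loop are assumptions for illustration, and filenames containing special characters would additionally need percent-encoding.

    import hashlib

    def upload_url(filename, levels=2):
        # Stored filenames use underscores instead of spaces.
        name = filename.replace(' ', '_')
        md5_hash = hashlib.md5(name.encode('utf-8')).hexdigest()
        # The first hex digits give the "d/df/"-style directory prefix.
        hash_path = '/'.join(md5_hash[:i] for i in range(1, levels + 1))
        # Base URL for Commons files; other projects use their own area
        # (an assumption based on the discussion above).
        return 'https://upload.wikimedia.org/wikipedia/commons/%s/%s' % (hash_path, name)

    # The occasional command-line loop, now with the hash handled:
    for f in range(1, 5):
        print(upload_url('file_%d.jpg' % f))  # feed these to wget/curl

Alternatively, as noted at 08:07, the API can return the URL directly (prop=imageinfo with iiprop=url), which avoids hardcoding any of the path structure.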
[14:22:28] Gday to anyone who is awake. In trying to delete a file, there has twice been the error ... Error deleting file: A non-identical file already exists at mwstore://local-backend/local-deleted/9/s/5/9s5e0ypsw2y72d5emu4extzuut8slsc.jpg.
[14:22:56] is this immediately fixable, or should it progress through a bugzilla?
[14:24:27] the error has occurred for two different deletors
[14:30:35] I don't know, so I would recommend bugzilla
[15:23:58] an old bug is re-appearing! https://bugzilla.wikimedia.org/show_bug.cgi?id=31577
[15:24:01] (nl-wiki)
[15:24:04] together with: https://bugzilla.wikimedia.org/show_bug.cgi?id=31576
[15:30:26] this category should be empty now: http://nl.wikipedia.org/wiki/Categorie:Wikipedia:Beginnetje_nog_niet_onderverdeeld
[15:33:04] this category should be empty too: http://nl.wikipedia.org/wiki/Categorie:Wikipedia:Pagina%27s_met_ontbrekende_references
[15:33:15] caused by magic words / parser functions malfunctioning
[15:45:05] sDrewth, it's good it gave you that error
[15:45:14] otherwise, the deletion would have produced data loss
[15:45:40] which files were they?
[15:47:25] just the one
[15:47:32] the one in the subject line
[15:47:42] oh duh
[15:47:45] I did a bugzilla
[15:48:23] (NEW) Deletion fails https://commons.wikimedia.org/w/index.php?title=File:Jennifer_Nettles_in_David_Meister.jpg&action=delete - https://bugzilla.wikimedia.org/34959 normal; Wikimedia: Site requests; (billinghurst)
[15:56:06] billing?
[16:38:50] hi
[16:39:34] https://bugzilla.wikimedia.org/show_bug.cgi?id=34412
[16:40:22] When will it be processed?
[16:47:19] Does nobody handle it?
[16:48:20] Who can do it?
[16:57:00] Is requesting it through bugzilla futile? If so, tell me not to request through bugzilla. Who will handle it?
[17:00:01] hi
[17:12:16] hi, there
[17:52:59] Hello, I'm not sure if this is the right place to report this, but I just tried to scrape data from wikipedia two different ways and got similar errors, telling me that I can report the error to Wikimedia System Administrators
[17:53:31] And I quote: "If you report this error to the Wikimedia System Administrators, please include the details below. Request: GET http://wikipedia.org/wiki/Biblical_names, from 24.12.9.161 via sq75.wikimedia.org (squid/2.7.STABLE9) to () Error: ERR_ACCESS_DENIED, errno [No Error] at Sun, 04 Mar 2012 17:35:14 GMT"
[17:53:55] And, similarly: "If you report this error to the Wikimedia System Administrators, please include the details below. Request: GET http://wikipedia.org/wiki/Biblical_names, from 50.16.131.104 via sq72.wikimedia.org (squid/2.7.STABLE9) to () Error: ERR_ACCESS_DENIED, errno [No Error] at Sun, 04 Mar 2012 17:37:44 GMT"
[17:54:32] Perhaps this is irrelevant, but the first one used the datasciencetoolkit and the second used the asciinator (http://www.aaronsw.com/2002/html2text/)
[17:55:15] Is this the right place to report this error? Or should I report it elsewhere? -- Thanks
[18:08:53] gabe_g: the user-agents are banned
[18:11:49] gabe_g: to make it a bit clearer: you have to use a different user-agent with some easy way to identify you in case the bot goes mad :-)
[18:12:00] gabe_g: something like freenode/gabe_g would probably do it
[18:19:27] Has anyone seen this before? "Wikimedia Foundation Error Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes. If you report this error to the Wikimedia System Administrators, please include the details below. Request: GET http://wikipedia.org/wiki/Biblical_names, from 24.12.9.161 via sq74.wikimedia.org (squid/2.7.STABLE9) to () Error
[18:26:54] gabe_g: did you try what hashar suggested?
[18:27:07] Oh, different error
[18:27:43] gabe_g: in that form, you're doing weird redirecting queries
[18:27:59] At least use https://en.wikipedia.org/wiki/Biblical_names
[18:37:33] Thanks Reedy and hashar
[18:38:00] Reedy, when I use "https://en.wikipedia.org/wiki/Biblical_names" I get the same error
[18:38:18] Does it actually give an error code?
[18:38:40] No, just "Request: GET http://en.wikipedia.org/wiki/Biblical_names, from 208.80.154.9 via cp1002.eqiad.wmnet (squid/2.7.STABLE9) to () Error: ERR_ACCESS_DENIED, errno [No Error] at Sun, 04 Mar 2012 18:37:47 GMT"
[18:39:00] I didn't understand what Hashar was saying--
[18:39:05] Well, that shows more information with the ERR-ACCESS-DENIED
[18:39:23] What are you trying to use to access Wikipedia?
[18:39:41] I've tried two things and gotten similar errors. They are both meant to scrape text from html.
[18:39:52] * Reedy barfs
[18:39:54] The asciinator (http://www.aaronsw.com/2002/html2text/)
[18:40:09] and the data science toolkit
[18:40:11] * ToAruShiroiNeko wonders who cleans up after the ReedyOne
[18:40:21] Reedy, why did that make you barf?
[18:40:35] Screen scraping
[18:40:36] gabe_g apologizes for making Reedy sick
[18:40:45] also apologizes for the mess on the floor
[18:40:48] and for his ignorance
[18:40:59] I want to get a list of biblical names
[18:41:03] perhaps I'm being stupid
[18:41:10] but it was the first way I thought of
[18:41:17] there is no such thing as a stupid question, until you ask it :)
[18:41:19] :p
[18:41:50] Ok, so is what I am trying to do discouraged?
[18:41:57] Or is it just 'in bad taste'?
[18:42:13] And is there a better way, or a way to make it work?
[18:42:58] It's somewhat bad taste
[18:43:23] Ok. Is there a more elegant solution?
[18:43:24] Depending on what you're actually trying to do
[18:43:25] https://en.wikipedia.org/wiki/List_of_biblical_names?action=raw
[18:44:17] ?
[18:44:57] Ok, that downloads the text from that page, but I want to do it for each of the.... https://en.wikipedia.org/wiki/List_of_biblical_names_starting_with_A
[18:45:05] for each letter of the alphabet
[18:45:31] Ok, I think I got what you're saying.
[18:45:40] Sorry--as is obvious I'm sure, I'm quite a noob
[18:46:22] So now I need to go through each of those letters....using curl?
[18:46:24] perhaps?
[18:47:35] And, also, do you know why my previous attempts using those ready-made tools didn't work?
[18:47:48] is it because there is some specific protocol that they are not following or...?
[18:48:56] By the way--thanks Reedy for the direction
[18:53:10] Bye
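A sketch of the per-letter fetch discussed above, using action=raw plus an identifying user-agent per hashar's advice. The user-agent string, the output filenames, and the assumption that a list page exists for every letter are illustrative; note that action=raw returns wikitext, which still needs parsing to extract the actual names.

    import time
    import urllib.error
    import urllib.request

    # An identifying user-agent; the exact string is an assumption.
    USER_AGENT = 'biblical-names-fetcher/0.1 (freenode/gabe_g)'

    for letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
        url = ('https://en.wikipedia.org/w/index.php'
               '?title=List_of_biblical_names_starting_with_%s&action=raw' % letter)
        req = urllib.request.Request(url, headers={'User-Agent': USER_AGENT})
        try:
            with urllib.request.urlopen(req) as resp:
                wikitext = resp.read().decode('utf-8')
        except urllib.error.HTTPError as err:
            print('%s: %s' % (letter, err))  # some letters may have no page
            continue
        with open('names_%s.txt' % letter, 'w', encoding='utf-8') as out:
            out.write(wikitext)
        time.sleep(1)  # be polite to the servers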
[19:44:13] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34690 - Changing the name in the title bar to Assamese'
[19:44:18] Logged the message, Master
[19:48:24] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34931 - Add namespaces aliases on as.wikipedia.org'
[19:48:28] Logged the message, Master
[19:53:12] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34867 - Switch Sango wiktionary logo'
[19:53:15] Logged the message, Master
[20:01:50] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34766 - Logo of Sanskrit Wikisource'
[20:01:53] Logged the message, Master
[20:08:08] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34618 - Install MoodBar on fr.wikisource'
[20:08:11] Logged the message, Master
[20:12:28] !log reedy synchronized wmf-config/InitialiseSettings.php 'Variablise moodbarconfig infoUrl'
[20:12:31] Logged the message, Master
[20:14:33] !log reedy synchronized wmf-config/CommonSettings.php 'Variablise moodbarconfig infoUrl'
[20:14:36] Logged the message, Master
[20:25:03] !log reedy synchronized wmf-config/InitialiseSettings.php 'Create wmgMoodBarCutoffTime'
[20:25:07] Logged the message, Master
[20:25:45] !log reedy synchronized wmf-config/CommonSettings.php 'wmgMoodBarCutoffTime'
[20:25:48] Logged the message, Master
[20:29:38] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34694 - Install the Quiz extension on de.wikibooks'
[20:29:41] Logged the message, Master
[20:31:15] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34715 - Please modify the import sources for the Spanish Wikiversity'
[20:31:18] Logged the message, Master
[20:42:44] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34567 - New logo for Arabic Wiktionary'
[20:42:47] Logged the message, Master
[21:02:50] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34897 - Enable Special:Import on Catalan wikisource'
[21:02:53] Logged the message, Master
[21:41:22] !log reedy synchronized wmf-config/ 'Bug 32726 - Set =true for Commons'
[21:41:25] Logged the message, Master
[21:42:23] saving a page gives a blank screen
[21:42:28] nl-wikt
[21:42:41] nlwp/enwp/commons all 500 errors
[21:42:44] everything gives blank pages
[21:43:13] Are you doing some maintenance?
[21:43:18] I'm getting blank pages..
[21:43:23] Not even an error
[21:43:23] not planned afaik
[21:43:30] Let the stampede begin
[21:43:40] [23:39:43] +logmsgbot> !log reedy synchronized wmf-config/ 'Bug 32726 - Set =true for Commons'
[21:44:00] I assume Reedy is already fixing it
[21:44:24] let's see
[21:44:58] !log reedy synchronized wmf-config/InitialiseSettings.php 'fix .'
[21:45:01] Logged the message, Master
[21:45:39] thanks Reedy
[21:45:54] syntax error?
[21:45:59] http://en.wikipedia.org/wiki/Special:NewPages - shows blank for me
[21:46:04] It's not giving an error
[21:46:11] working for me now
[21:46:11] it's just not giving a page
[21:46:39] MaxSem: apparently sync-dir doesn't syntax check, sync-file and scap do
[21:46:47] Reedy: Hehe, you crashed all the sites because of a PHP syntax error? LOL :-)
[21:47:07] Not the first time ;)
[21:47:21] Working again
[21:47:24] What broke?
[21:47:30] everything
[21:47:59] * Qcoder00 wonders if the wiki devs do patching on the production server ;)
[21:48:24] I was worried for 3 minutes there that I might have to get a life temporarily.
[21:48:36] I was basically staring into the blank white screen abyss.
[21:48:43] !log reedy synchronized wmf-config/ 'Bug 32726 - Set =true for Commons'
[21:48:46] Logged the message, Master
[21:48:49] getting a life... what was that like before the internet came along?
[21:49:20] Don't ask me, too young :P
[21:52:05] there should be some hook to check the syntax before syncing the changes
[21:52:11] :O
[21:52:17] There isn't already?
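A minimal sketch of the kind of pre-sync check suggested above: lint every staged PHP file with `php -l` and refuse to sync if any of them fails to parse. The staging path is an assumption for illustration, and wiring this into sync-dir itself is left out.

    import subprocess
    import sys
    from pathlib import Path

    # Staged config directory; this path is an assumption for illustration.
    STAGING = Path('/srv/common/wmf-config')

    errors = 0
    for php_file in sorted(STAGING.rglob('*.php')):
        # `php -l` only lints (parses) the file; it exits non-zero on a syntax error.
        result = subprocess.run(['php', '-l', str(php_file)],
                                capture_output=True, text=True)
        if result.returncode != 0:
            sys.stderr.write(result.stdout + result.stderr)
            errors += 1

    if errors:
        sys.exit('%d file(s) failed lint; refusing to sync' % errors)
    # ...otherwise hand off to sync-dir/scap as usual.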
[23:36:51] Hi, seems like something strange is going on... much of *.com just went dead from Norway.. Are you there USA?
[23:37:12] *.org is also missing, *.net is up
[23:38:41] Have the bots finally taken control? Kill Cyberdyne Systems! o_O
[23:41:42] HardDisk_WP: de.wikipedia.ORG is still working from Germany
[23:45:06] IA disk space http://www.archive.org/~tracey/mrtg/df.html
[23:45:12] HardDisk_WP: sorry, wrong key
[23:46:25] Nemo_bis: maybe somebody deleted his/her old porn?
[23:46:52] DaBPunkt, no, they added 200 TB a few days ago
[23:47:44] Unless something weird is going on from Norway, there are massive routing problems
[23:51:02] gn8 folks
[23:52:33] jeblad: so, are you sure it's routing and not just DNS?
[23:52:43] jeblad: have you tried alternate NS?
[23:54:02] I can't get a reply from DNS servers outside Norway, it seems..
[23:54:17] jeblad: did you try 8.8.8.8 ?
[23:54:33] !log reedy synchronized wmf-config/ 'Bug 32726 - Set =true for Commons'
[23:54:36] Heh.
[23:54:38] $variable replacement?
[23:54:51] Yup
[23:54:52] lol
[23:54:53] I'm not sure what it is, the cellphone works.. nothing else, but I can get to some Norwegian newspapers
[23:55:15] After 30 years, you'd think there'd be something a bit smarter in use...
[23:55:54] for all who want to click a link ;) https://bugzilla.wikimedia.org/show_bug.cgi?id=32726
[23:56:39] yeay, works
[23:57:04] now Extension:Babel is nearly as good as the good old templates ;)
[23:57:09] jeblad: can you see http://dpaste.com/711614/plain/ ?
[23:57:43] nope
[23:58:58] all lookups on google.com fail
[23:59:05] jeblad: linux ?
[23:59:12] yes
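A quick triage for the failure jeblad describes, separating local DNS trouble from routing trouble: compare what the system resolver returns with a direct query to 8.8.8.8. The host list is arbitrary and dig is assumed to be installed. If the system resolver fails while 8.8.8.8 answers, the ISP's resolver is the problem; if the direct query times out as well, packets probably aren't leaving the country at all, i.e. routing.

    import socket
    import subprocess

    HOSTS = ['google.com', 'en.wikipedia.org', 'archive.org']

    for host in HOSTS:
        # 1. The system resolver (whatever /etc/resolv.conf points at).
        try:
            local = socket.gethostbyname(host)
        except socket.gaierror as err:
            local = 'FAIL (%s)' % err
        # 2. Google's public resolver, queried directly.
        dig = subprocess.run(['dig', '@8.8.8.8', '+short', '+time=3', host],
                             capture_output=True, text=True)
        direct = dig.stdout.strip() or 'FAIL (no answer)'
        print('%-20s system: %-22s 8.8.8.8: %s' % (host, local, direct))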