[00:03:50] twkozlowski: why "Wiki Loves Monuments" instead of "Wikipedia" or "Wikimedia"? Just to be shorter?
[00:04:21] Wikipedia wouldn't make sense
[00:05:01] Wiki doesn't make sense either, Reedy
[00:05:20] Does it refer to Ward Cunningham's website or the type of editable project?
[00:05:28] "website or database developed collaboratively by a community of users, allowing any user to add and edit content" Loves Monuments
[00:07:06] <^d> I don't love monuments.
[00:07:13] but the wiki does
[00:07:36] <^d> MY WIKI DOESN'T!
[00:07:40] <^d> MY WIKI LOVES NO ONE
[00:07:50] ^d: wrong, I'm sure it loves you
[00:07:51] ^d: no, it's referring to WikiWikiWeb obviously
[00:07:57] <^d> Nemo_bis: It does not.
[00:08:07] ^d: I won't believe it
[00:08:13] <^d> My wiki hates me!
[00:08:19] <^d> You would too, if I kept changing how you worked
[00:08:21] ^d: it's just pretending
[00:08:30] <^d> And broke you in inexplicable ways.
[00:08:39] hmm, sounds promising
[00:08:55] * ^d gives Nemo_bis 3 extra arms and a job queue, because he can
[00:09:01] .*? Loves Monuments
[00:09:26] * Nemo_bis asks ^d for a help page
[00:09:29] huh: that's a good question.
[00:09:34] That also matches "Everyone hates Monuments. Nobody Loves Monuments", Reedy
[00:09:40] Nobody = Nemo
[00:09:43] <^d> Indeed.
[00:09:49] <^d> Nobody Loves Monuments
[00:10:17] I think the topic back at #wikilovesmonuments doesn't agree with that statement, ^d
[00:10:39] huh: I think the name is a poor attempt at being cool and modern
[00:10:40] <^d> Good thing I'm not in there!
[00:10:41] <^d> :)
[00:10:47] since everyone calls 'Wikipedia' 'Wiki'
[00:11:02] twkozlowski: I've never seen it called that
[00:11:06] but it came in useful later on, when no one had to ask for trademark permission :)
[00:11:21] at least outside of China.
[00:11:52] huh: then maybe that's our European and US craziness.
[00:12:05] yes, and Google
[00:12:27] https://www.mediawiki.org/wiki/Wikipmediawiki
[00:12:29] anyway, seriously, there are three requirements satisfied by the name
[00:12:37] meh
[00:12:45] Nemo_bis: which are?
[00:12:47] 1) It doesn't use WMF trademarks, so it's easy to use without oppressive bureaucracy
[00:13:10] 1a) That's unless you want to use the Wikipedia logo to promote the competition.
[00:13:11] 2) It can mean Commons, which is where the images have to be uploaded
[00:13:34] 3) It can be sold as Wikipedia, which is how people were attracted into participating
[00:13:55] 'Wikimedia' can be sold as both Commons and Wikipedia
[00:13:59] the bad thing is, 'monuments' is really, really vague.
[00:14:05] 1) it kinda sounds like "Wikipedia"
[00:14:07] twkozlowski: sure, I meant just for the name
[00:14:13] 2) it has "media" in its name, aka Commons
[00:14:14] twkozlowski: it's not a bug, it's a feature
[00:14:23] 3) it sounds like MediaWiki, so obviously they're the same
[00:14:27] Nah, you want the internet to laugh at us for making a typo in the name of a competition...
[00:14:36] wik(ibooks|idata|imedia|inews|ipedia|iquote|isource|iversity|ivoyage|tionary) Loves Monuments
[00:14:48] huh: we sometimes use typos intentionally, but it's not best practice
[00:14:48] Yes.
[00:15:18] Reedy: Thanks for not including Wikispecies. They only care about life forms, not monuments
[00:15:22] (like the mythical "Wikipedia Executive Director" in CentralNotice)
[00:15:46] I mean, monuments. The US-ians had to change that to 'heritage sites.'
[00:15:59] (or more recently edsu's email cc'd to commons-l, where there are two typos and finally the correct version, a good compromise)
[00:16:12] Well, to be fair, "UNESCO World Heritage Site" is a thing
[00:16:19] who cares about the USA, that's the periphery of the empire
[00:16:26] when it comes to monuments
[00:16:35] huh: good thing they used that name
[00:16:46] that way Wikipedia can be a World Heritage Site
[00:16:52] It's a SITE, right?
[00:17:00] lol
[00:17:01] and it's in the WORLD
[00:17:12] that's a majority of the words!
[00:17:15] I also think it was inherited
[00:17:16] I remember that was on Meta's main page or strategywiki's for a few years
[00:17:20] forgot which
[00:17:27] huh: Meta... and only a few months
[00:17:35] it felt like years
[00:17:45] it was larger than the rest of the news
[00:17:58] we've never had anything to place there
[00:18:35] it's still better than "the latest new perennial proposal is... *DRUMS DRUMS*"
[00:18:58] go on a blackout?
[00:19:14] no, whatever is on the Meta main page nowadays
[00:19:22] steward elections
[00:19:31] a bunch of project proposals
[00:19:35] aka Special:RandomInCategory/Proposed_projects
[00:20:00] exactly
[00:20:14] hmm, second time I got https://meta.wikimedia.org/wiki/Category:Proposed_projects_-_directory
[00:20:19] I didn't know it included subcats
[00:22:23] Nemo_bis: try a few Special:Random on bug.wikipedia.org
[00:22:33] what % are French communes?
[00:22:44] what's wrong with French communes
[00:22:57] bugwiki is almost completely French communes
[00:23:03] on en.wiki I got an "unincorporated community" whose only feature is NOT having a post office
[00:23:19] pl.wp had an idea of creating articles on Tibetan and Mongolian villages
[00:23:32] I was trying to generate trigrams for every WP language, but bugwiki's didn't work out, as all pages there are identical apart from two substitutions
[00:24:04] "$randomcommune iyanaritu séuwa komun ri déparetema $department ri Perancis." (roughly: "$randomcommune is a commune in the $department department of France")
[00:25:14] so what's the problem
[00:25:29] once the bot owner goes away, another bot owner will run a bot to delete them all
[00:25:42] it's the lifecycle of all botpedias
[00:25:54] like the Big Bang and the collapse of the universe
[00:26:28] They could at least throw in a random Italian comune and a few Romanian rivers
[00:26:31] comuni
[00:27:13] I prefer asteroids and proteins
[00:27:45] it just depends on the size of your ambitions
[00:28:03] first you decide how many pages you want to create by bot, then you pick the database to import
[00:28:17] if you want 40k or so, French communes are the best option
[00:29:24] ocwiki was too ambitious with their "import all the Wikidata items" strategy
[00:30:12] did they manage to bring the site down again?
[00:30:36] I have no idea, but a lot of the articles are useless
[00:30:51] https://war.wikipedia.org/w/index.php?title=Cubaris_nepalensis&action=history <- this wiki is in the top 10 by articles, right?
[00:31:19] yep, the lsjbot package offers a 1M one-size-fits-all import
[00:31:41] for those who only think at 10^6 scale
[08:03:43] MatmaRex: is https://gerrit.wikimedia.org/r/#/c/108024/ deployed on testwiki?
[08:04:10] I assumed yes because ULS was updated to master, but I didn't really check
[08:04:16] no
[08:04:19] ULS was cherry-picked
[08:04:24] "included in: master"
[08:04:47] it's on beta labs, though, I guess
[08:07:28] Nemo_bis: no idea
[08:08:44] but it's disabled, unless there's a beta wiki which inherits testwiki config
[08:08:50] thanks MatmaRex and legoktm
[08:11:04] ugh, it took me four tries to create an account with these captchas
[08:11:50] &usecaptcha=no
[08:12:58] Nemo_bis: yeah, the beta labs version does seem to be fixed
[08:13:40] MatmaRex: oki, can you comment on the bug?
[08:13:43] it still looks pretty silly with some language names being pretty and antialiased, and some not, but it's definitely better
[08:15:47] Nemo_bis: done
[08:15:50] thanks
[08:21:55] ori: (and anyone awake?) https://dpaste.de/KDMt#L implies .done is OK and .fail is not OK; but in some cases the API returns an error, the code still enters .done, and only the understandJson(data) step fails to find the nodes it needs (https://github.com/Gryllida/Afch2/blob/master/script.js)
[08:31:58] https://github.com/Gryllida/Afch2/blob/master/script.js#L331 line 340 of interest
[08:45:57] do scripts suffer namespace pollution problems, e.g. is it OK to say 'function Foo', or is it preferred to say 'function myscriptname.foo'? (does the software already take care of such things for me by isolating them somehow?)
[08:47:44] Hi, maybe someone has advice for me or could point me in the right direction? I'm currently trying to understand some existing templates on a wiki, and I wonder if there is something like a translation tool or prettifier for those? I see a lot of brackets but can't really make any sense of them. I know what they should do, but I'm uncertain.
[08:59:07] good question, thanks
[09:00:27] https://en.wikipedia.org/wiki/User:Ais523/bracketmatch.js comes to mind
[09:12:16] That does look helpful. Thanks. :) Definitely a way to get some better navigation in that forest of brackets.
[09:15:05] you should ask for help when you don't understand; if writing something, Lua templates are a slightly more readable option
[09:23:52] There actually is something I can't make sense of. The following bit should check whether or not parameter 3/2 is set, but how does the second #if evaluate? It's part of the condition as far as I can tell. Would this cause an implicit AND? {{#if: {{{parameter3|}}}{{#if: {{{parameter2|}}}||1}}|sometext}}
[09:30:14] Ah, no, I found the bit I needed. It should check whether parameter 3 is set, or insert a true value if parameter 2 is set, right?
[09:35:42] ori: https://github.com/Gryllida/Afch2/blob/master/script.js - I started from scratch, please look for XXX on the page and suggest something, I'm trying to just get the DOM sorted (it means 3 ajax calls, the 2nd and 3rd of which should wait for the 1st one to complete)
[09:45:23] GIJoe: {{#if: {{{parameter3|}}}{{#if: {{{parameter2|}}}||1}}|sometext}} looks to me like there might be a missing | after {{{parameter3|}}}. If it weren't missing, I would expect it to return nothing if both params are set, '1' if only param 3 is set, and 'sometext' if no params are set
[09:45:50] (but that's just a guess; I haven't used the #if statement myself, and experimenting would be more reliable than asking me)
[09:47:35] Hehe, OK. Thanks for your input on this! :)
[10:23:19] gry: "constructor without argument, function calls without arguments." -- these things are totally OK
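A note on the .done/.fail question at 08:21:55: the MediaWiki API answers most requests with HTTP 200 even when it is reporting an error, so jQuery's .done fires anyway and .fail only catches transport-level failures. A minimal sketch, assuming the code runs as a user script on the wiki itself (so the relative api.php path is same-origin); the query parameters and handler bodies are illustrative, not taken from script.js:

```js
// Sketch: the MediaWiki API reports its own errors inside an HTTP 200
// response, so .done still fires and must check for data.error itself.
$.getJSON( '/w/api.php', {
	action: 'query',
	prop: 'revisions',
	titles: 'Example',
	format: 'json'
} ).done( function ( data ) {
	if ( data.error ) {
		// API-level error: the HTTP request succeeded, the API request did not.
		console.log( 'API error: ' + data.error.code + ' - ' + data.error.info );
		return;
	}
	// Only now is it safe to dig for the nodes the script expects.
	console.log( data.query.pages );
} ).fail( function ( jqXHR, textStatus ) {
	// .fail only sees transport-level problems: network down, timeout, HTTP 4xx/5xx.
	console.log( 'HTTP failure: ' + textStatus );
} );
```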
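And on the namespace question at 08:45:57: nothing isolates a user script automatically, so a top-level `function Foo() {}` does become a global (and `function myscriptname.foo` is not valid JavaScript syntax at all). The usual convention is to wrap the whole script in an immediately invoked function expression; a minimal sketch:

```js
// Wrap the script in an IIFE so its names stay local instead of global.
( function ( $ ) {
	'use strict';

	// Visible only inside this script, not to other gadgets on the page.
	function buildGreeting() {
		return $( '<div>' ).text( 'hello from an isolated scope' );
	}

	$( function () {
		$( 'body' ).append( buildGreeting() );
	} );
}( jQuery ) );
```

If other scripts genuinely need to call in, exposing a single namespace object (e.g. `window.myScript = { foo: foo };`) keeps the pollution down to one global name.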
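As for the {{#if:}} snippet from 09:23:52: ParserFunctions' `{{#if: test | then | else }}` yields `then` exactly when `test` is non-empty after trimming whitespace, so concatenating two tests acts as a logical OR, not an AND. A sketch of the snippet's logic, modelled in JavaScript for illustration and taking the wikitext as written (without the missing pipe Nemo_bis hypothesized):

```js
// Model of {{#if: test | then | else }}: 'then' when test is non-blank.
function pfIf( test, thenVal, elseVal ) {
	return test.trim() !== '' ? thenVal : elseVal;
}

// {{#if: {{{parameter3|}}}{{#if: {{{parameter2|}}}||1}}|sometext}}
function evaluate( parameter2, parameter3 ) {
	// Inner #if: '' when parameter2 is set, '1' when it is empty.
	var inner = pfIf( parameter2, '', '1' );
	// Concatenation makes the outer test an OR of the two conditions.
	return pfIf( parameter3 + inner, 'sometext', '' );
}

console.log( evaluate( '', '' ) );   // "sometext" - neither parameter set
console.log( evaluate( 'x', '' ) );  // ""         - only parameter2 set
console.log( evaluate( '', 'x' ) );  // "sometext" - only parameter3 set
console.log( evaluate( 'x', 'x' ) ); // "sometext" - both set
```

That is, the snippet returns 'sometext' unless parameter2 is set while parameter3 is empty: an implicit OR rather than an AND.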
[12:29:25] Hi guys
[12:29:48] I notice it's impossible to patrol edits made to some pages on Commons
[12:29:54] https://commons.wikimedia.org/w/index.php?title=Special:RecentChanges&days=30&from=&hideliu=1&hidepatrolled=1
[12:30:10] twkozlowski: can you edit them?
[12:30:22] the pages?
[12:31:34] https://commons.wikimedia.org/w/index.php?title=Commons:First_steps/Account/es&diff=prev&oldid=113263181
[12:31:47] this is a translation page, and I can't mark it as patrolled
[12:32:02] it shouldn't be left as unpatrolled then
[12:32:21] translation pages can't be patrolled, yes
[12:32:36] I thought it was about TimedText
[12:33:02] https://bugzilla.wikimedia.org/show_bug.cgi?id=42162
[12:33:02] why are edits marked as unpatrolled then? :)
[12:33:12] that too
[12:33:58] because they're not patrolled :P
[12:34:08] everything is unpatrolled until patrolled
[12:34:27] if you can't patrol it...
[12:34:43] hence the bug
[12:38:09] ah, btw twkozlowski, remember only sysops and patrollers can use the link above
[12:38:31] the others can't use hidepatrolled=1
[12:38:43] (another bug, fix pending review for a year now)
[16:31:10] hmmm.
[16:31:11] https://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=hume.wikimedia.org&v=823574&m=Global_JobQueue_length&r=hour&z=default&jr=&js=&st=1365625056&z=large
[16:31:37] yep, mentioned it some time ago on the channel(s)
[16:31:39] <^d> Yeah, the job queue got pretty big after we deployed Cirrus to enwiki.
[16:31:49] <^d> We got backed up by about 2 million jobs.
[16:31:51] but ori told me that's not worth a !log
[16:31:55] did the job queue really get drained so abruptly at the end of last week, or has something gone wrong with collecting stats?
[16:32:02] the latter
[16:32:14] <^d> Latter, trust me, the job queue's pretty full :)
[16:32:17] -nan clearly is wrong :)
[16:32:28] yeah
[16:33:35] <^d> Here's enwiki's job queue: http://p.defau.lt/?qra3LD3_RnWBXEIWDarujQ :)
[16:33:47] !log still no data in Global_JobQueue_length ganglia graph on hume (3 days now)
[16:33:54] Logged the message, Master
[16:34:02] I thought it was worse!
[16:34:12] not too bad.
[16:34:31] <^d> Yeah, it was worse the other day :)
[16:34:40] <^d> We're making headway on those cirrusSearchLinksUpdate jobs now
[16:34:44] worse is 7 million jobs
[16:34:47] :D
[16:35:10] quite busy: https://gdash.wikimedia.org/dashboards/jobq/
[16:35:17] I do think the job queue is something we need to look into next year....
[16:35:40] <^d> Why? We just reworked it this year.
[16:35:47] a dozen times
[16:35:50] <^d> It's way more reliable with Redis now :)
[16:35:58] reliable, yes
[16:36:08] but I'm not sure it's smart enough yet.
[16:36:10] removing the live sleep hack doesn't seem to have visibly increased activity in the job runners: https://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Jobrunners%20eqiad&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2&st=1376083124&g=network_report&z=large
[16:36:27] Wikidata doesn't work :-(
[16:36:39] <^d> Nemo_bis: The limiting factor for jobs is mostly the databases, actually.
[16:36:53] <^d> We could very easily ramp up the number of job runners.
[16:37:03] <^d> But that would hurt other things :)
[16:37:27] sure
[16:37:43] well, from my point of view as an en.wiki editor, it has been under enormous stress this year. much more than in 2011 and 2012
[16:38:02] not as much perhaps as 2007/8 though.
[16:38:06] <^d> We're doing way more with the job queue than before.
[16:38:13] indeed.
[16:38:23] <^d> We probably need to find ways to increase throughput without hurting the databases.
[16:38:30] <^d> But architecturally, I think it's pretty sound now.
[16:38:42] <^d> Aaron's been pondering some slightly better deduplication, but otherwise I think we're good.
[16:39:16] separate into job runners that need the master DB vs. those that don't
[16:39:30] I guess that will be difficult
[16:39:35] <^d> Yeah. I was thinking about that recently.
[16:39:46] <^d> Wondering if we could have some dedicated slave for jobs that don't need master data.
[16:39:57] I mean, the email queue probably doesn't need the master, I guess...
[16:41:14] anyway, if we have 7 million jobs that take 3 months to clear, that's just not nice, even though it works as expected.
[16:41:36] <^d> Right.
[16:41:38] and our Lua builders have hardly started with their meta templates :D
[16:42:01] <^d> Right. We do a decent job of keeping up with normal day-to-day job rates.
[16:42:13] <^d> It's when a megatemplate is edited or something and we get backed up that problems happen.
[16:42:16] it's those bursts that kill us
[16:42:18] <^d> Like you say, the 7 mil job case.
[16:42:50] someone was already suggesting creating copies of modules in modules, just to keep the queue down. :D
[16:43:42] <^d> We should... file a bug
[16:43:53] <^d> "Handle gigantic bursts of jobs better"
[16:43:54] <^d> or something
[16:49:22] <^d> thedj: https://bugzilla.wikimedia.org/show_bug.cgi?id=60348
[16:50:23] ^d, you type fast... I only had about half of that text typed in that time :D
[16:50:29] <^d> :)
[16:52:58] is Wikidata slow as a snail for anyone else?
[16:53:14] https://www.wikidata.org/wiki/Q567 has been loading for > 30 secs
[16:53:23] oh, got a 'Page Unresponsive' warning now
[16:53:28] is that a new thing? do you often load that page?
[16:53:37] Only today.
[16:54:03] <^d> My wifi's been a little slow since last night... I'm not a good comparison :\
[16:54:26] it's a known bug that in some circumstances entries are extremely slow to load, check Bugzilla
[16:54:30] twkozlowski: 9 seconds here, but the biggest chunk was the round-robin login starting at 3.5 seconds
[16:54:52] oh, wait, checking the specific item
[16:55:28] unresponsive script warning :/
[16:56:23] stopping the script gets me the page content, at least
[16:57:43] noscript all the things
[16:58:22] :)
[17:01:27] https://bugzilla.wikimedia.org/show_bug.cgi?id=54098 Nemo_bis?
[17:02:25] yep, and friends
[18:50:35] hi everyone..
[19:09:25] ori: what about everything else? ajax calls that need to wait for one?
[20:08:12] [[Tech]]; Albertojuanse; /* Ve on esWiki */ new section; https://meta.wikimedia.org/w/index.php?diff=7191322&oldid=7159721&rcid=4879013
[20:08:36] [[Tech]]; Albertojuanse; /* Ve on esWiki */; https://meta.wikimedia.org/w/index.php?diff=7191326&oldid=7191322&rcid=4879015
[21:38:01] RoanKattouw_away, marktraceur: Is it accurate that oojs is now in core, but oojs-ui is still only available via VE?
[22:06:06] kaldari: Yes
[22:06:22] marktraceur: thanks :)
[22:06:29] kaldari: MMV is using oojs already, so it's going well :)
[22:06:35] MMV?
[22:06:43] multimedia viewer
[22:06:44] :)
[22:07:40] marktraceur: how are you building your dialogs? Are you depending on oojs-ui for that, or writing your own UI code on top of oojs?
[22:08:41] kaldari: We only have one dialog... I used jQuery.dialog, but I feel really bad about it
[22:09:07] hehe
[22:10:25] marktraceur: well, until oojs-ui is in core, I guess you shouldn't feel too bad :P
[22:11:20] True
[22:11:27] We totally hacked in oojs before it was in core
[22:11:32] So it's not without precedent
[22:21:14] kaldari, marktraceur: So I've been meaning to put oojs-ui in core; it's almost done, it's just one of those things on my TODO list that I haven't had time for yet
[22:21:20] And this week isn't helping :S
[22:22:10] kaldari, marktraceur: Specifically, RoanKattouw needs to come up with a way to add the i18n into MW core. Which'll be fun.
[22:22:45] No, that's actually not the problem any more
[22:23:04] We have JSON i18n support in core now
[22:24:28] I just need to get around to polishing that update script and adapting it to core
[23:15:05] Can someone help me find something?
[23:15:14] it may not even exist, but I think it does
[23:15:27] No
[23:15:39] I want to search a page's history on metawiki
[23:15:41] ##god appears to be available
[23:15:54] Nemo_bis: :P
[23:16:16] Ctrl + F?
[23:16:22] I would like to find some revisions in the history of a page with many revisions.
[23:16:33] It does not have archives, but requests are removed all the time.
[23:17:00] http://wikipedia.ramselehof.de/wikiblame.php ?
[23:17:09] I can't find it in WikiBlame, Nemo_bis, I tried
[23:17:23] it only scans a few revisions, but this was in 2010, maybe
[23:17:25] did you force wikitext search? linear?
[23:17:30] ah
[23:17:33] oh, hm
[23:17:38] * huh tries linear search
[23:17:41] did you select the year/month?
[23:17:58] I don't know when it was, honestly
[23:17:59] sorry
[23:18:10] probably before 2011(?)
[23:18:11] an upper or lower limit is a start
[23:18:21] saving 3 years of revisions is not bad :)
[23:19:27] this is like looking for a needle in a haystack
[23:19:37] Google?
[23:19:42] Probably not
[23:20:14] I'm beginning to think the text was never even there, but I'd like a definitive answer.
[23:20:34] the page "only" has 2198 edits...
[23:20:47] dbquery?
[23:21:02] that's not much
[23:21:55] just restrict the search to several months and it will eventually find it
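Returning to the dependent-AJAX question from 09:35:42 and 19:09:25 (three calls where the 2nd and 3rd must wait for the 1st): jQuery deferreds chain for exactly this. A minimal sketch; the endpoints and parameters are illustrative, not taken from the script under discussion:

```js
// The first call must finish before the two dependent calls start.
function firstCall() {
	return $.getJSON( '/w/api.php', { action: 'query', meta: 'siteinfo', format: 'json' } );
}

firstCall().then( function ( data ) {
	// Runs only after the first request succeeded; returning $.when
	// makes the next .then wait for both dependent requests.
	return $.when(
		$.getJSON( '/w/api.php', { action: 'query', titles: 'A', format: 'json' } ),
		$.getJSON( '/w/api.php', { action: 'query', titles: 'B', format: 'json' } )
	);
} ).then( function ( resultA, resultB ) {
	// With $.when, each argument is the [ data, textStatus, jqXHR ]
	// triple of one of the two requests.
	console.log( resultA[ 0 ], resultB[ 0 ] );
} ).fail( function () {
	console.log( 'one of the requests failed at the transport level' );
} );
```

As with the sketch at 08:21:55, API-level errors still arrive in the success path here, so each handler would need its own data.error check.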