[00:05:17] <{{Guy}}> ← "Template:Guy" pun intended.. [00:06:17] * {{Guy}} thought it was funny and witty... tough crowd... [04:43:13] !ask [04:43:13] https://www.mediawiki.org/wiki/Extension:Ask [04:43:16] Heh. [05:53:37] cute. [05:54:05] spending warm summer days indoors / writing frightening verse / to a buck-toothed girl in luxembourg [05:54:41] poet-ori [05:55:40] https://www.youtube.com/watch?v=CEpAtTe-oJY [05:56:17] ori-l: stuck on GPRS, will be network suicide if I open it [05:58:12] YuviPanda: it's the song 'ask' by the smiths, from which i excerpted the lyric [06:00:43] ah :) [06:00:49] i need to read that [07:13:19] apergos: we are under attack [07:13:51] ? [07:13:56] Hi matanya. [07:14:03] mass spambot creation [07:14:12] hi Elsie [07:14:20] matanya: Links are helpful. :-) [07:14:29] join #cvn-sw [07:14:43] I'd rather not. Is it happening on one wiki or more? [07:14:46] can it be throttled in some way? [07:14:51] cw [07:15:02] Yes, but only if you provide more details. [07:15:13] cw? [07:15:32] cross wiki [07:15:33] Greylist [[wikt:de:User:198.199.89.42]] used edit summary "Insurance" in creating [[wikt:de:User:Aqwcjkw]] (+388) URL: http://de.wiktionary.org/w/index.php?oldid=3070814&rcid=3098767 "Neue Seite: Should you be planing a trip to Mexico or the Caribbean in the course of hurricane time of year, it's an intelligent idea to buy vacation insurance, if it's within your budget. How [07:15:33] eve�" [07:15:33] Blacklist [[wikt:chr:User:65.49.14.78]] used edit summary "Insurance" in creating [[wikt:chr:User:Johkpckq]] (+366) URL: http://chr.wiktionary.org/w/index.php?oldid=9085&rcid=7625 "Created page with "If you are visiting Mexico or maybe the Caribbean in the course of hurricane year, it's an intelligent concept to get travel insurance, if it's affordable. [07:16:03] matanya: If we had the Global Abusefiilter, this wouldn't be much of an issue [07:16:07] Greylist [[q:bs:User:65.49.14.78]] used edit summary "purchase" in creating [[q:bs:User:Kkapfgosfd]] (+371) URL: http://bs.wikiquote.org/w/index.php?oldid=49876&rcid=48645 "Napravljena stranica sa 'In case you are planing a trip to Mexico or the Caribbean throughout hurricane season, it's a sensible concept to purchase vacation insurance, if it's affordable. Sadly,...'" [07:16:07] New user [[gl:User:Ies.francisco.aguiar]] created. Block: http://gl.wikipedia.org/wiki/Special:Blockip/Ies.francisco.aguiar [07:16:07] User [[ta:User:Sengai Podhuvan]] Possible gibberish? [[ta:மானாவூர்ப் பதிகம்]] (+1553) Diff: http://ta.wikipedia.org/?diff=1464689&oldid=1464684 "" [07:16:09] Blacklist [[q:ar:User:183.62.192.187]] used edit summary "Insurance" in creating [[q:ar:User:Gdjixyeyf]] (+361) URL: http://ar.wikiquote.org/w/index.php?oldid=31654&rcid=31884 "أنشأ الصفحة ب'If you are planing a trip to Mexico or perhaps the Caribbean while in hurricane time of year, it's a smart idea to get journey insurance plan, if it's affordable. Howeve...'" [07:16:15] I know [07:16:17] But it's not enabled on every wiki yet [07:16:19] but this is insane [07:16:28] matanya: Can you pastebin the logs, please? [07:16:36] yes Elfix [07:16:37] It's difficult to understand what the rate is from snippets. [07:16:40] Poor Elfix, heh. [07:16:43] sorry [07:16:47] :-) [07:18:16] Bsadowski1: please help me here: http://etherpad.wikimedia.org/yeZfmYkMu2 [07:19:21] see here Elsie ^ [07:20:03] I see a few IP addresses. [07:20:18] those are the spamming ip's [07:21:56] So globally block them? :-) [07:22:27] easy to say [07:23:08] Is global blocking not working? 
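The rate question above is hard to answer from pasted snippets alone. A minimal sketch of summarising a pasted #cvn-sw log: it pulls out the acting IP, the wiki prefix, and the timestamp of each bot report and counts them per IP and per minute. The line format and the Greylist/Blacklist prefixes are inferred from the snippets quoted above, not from any documented CVN format.

    import re
    from collections import Counter

    # Matches report lines like:
    #   [07:15:33] Greylist [[wikt:de:User:198.199.89.42]] used edit summary "Insurance" in creating ...
    # The format is inferred from the excerpts above, not from a CVN specification.
    CVN_LINE = re.compile(
        r"\[(?P<time>\d\d:\d\d:\d\d)\]\s+"
        r"(?:Greylist|Blacklist)\s+"
        r"\[\[(?P<wiki>[^:\]]+(?::[^:\]]+)?):User:(?P<ip>\d{1,3}(?:\.\d{1,3}){3})\]\]"
    )

    def summarise(pasted_log: str) -> None:
        per_ip = Counter()
        per_minute = Counter()
        for line in pasted_log.splitlines():
            m = CVN_LINE.search(line)
            if not m:
                continue
            per_ip[m.group("ip")] += 1
            per_minute[m.group("time")[:5]] += 1  # bucket by hh:mm
        print("creations per IP:", per_ip.most_common())
        print("creations per minute:", sorted(per_minute.items()))

    # summarise(open("etherpad-paste.txt").read())  # hypothetical filename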
[07:23:25] There are emergency settings that can be implemented, but you have to demonstrate that there's an emergency. [07:24:28] Global blocking is working, but the rate of the spam creations was very fast across different wikis. [07:24:44] Has it slowed? [07:25:15] Appears so.. if you join #cvn-sw you can see for yourself when it happens. [07:33:46] I can't handle that much excitement. I'm only one man. [07:34:15] Elsie: you can :) [07:34:45] yes matanya [07:35:00] sorry, stupid tab compilation [07:35:20] completion ;-) [07:35:42] morning typos... [07:36:06] * matanya must not type before 12:00 [10:42:38] Elsie: under attack again [13:34:59] <{{Guy}}> *What am I doing in pteranadon nest?* [16:12:32] apergos: ping [16:13:39] aude: pong [16:13:48] but I'm about to be in a meeting. what's up? [16:14:10] curious about https://bugzilla.wikimedia.org/51225 [16:14:23] if i knew where wikidump.conf is (e.g. gerrit?), i could make a patch [16:14:33] i see we'd need something like the flaggedrevs dblist [16:14:37] for geodata [16:14:55] I should do better than that [16:14:59] not urgent, but soonish would be awesome [16:15:02] apergos: ok :) [16:15:05] we should start doing: if db has table x then [16:15:09] yeah [16:15:28] right now i'm trying to go through all the geodata on toollabs and add to wikidata :) [16:15:28] there are some things where we need to rely on lists (which to skip, which to run, which are closed/private) [16:15:32] much easier if i had a dump [16:15:33] most of the rest, not so much [16:15:40] apergos: ok [16:16:06] can't you just mysqldump the table? that's all I would do [16:16:11] just wrapped in a bunch of python [16:16:15] i'll try [16:16:21] yes, what i'm trying [16:16:23] ok [16:16:32] it's not terribly large or outrageous [16:16:43] it has ~1.4 million entries for enwiki [16:16:53] so i can work on chunks [16:17:17] but also want to geocode everything, which means use postgres [16:17:30] so this means conversion [16:17:44] which you would be stuck with whether I generated your mysqldumped table or you did [16:17:45] yeah, and then postgres is not on labs, so [16:17:46] bleah [16:18:00] i'm using external vm for postgres [16:18:02] yeah [16:18:21] a dump would mean, i can download to my vm and load to postgres [16:18:38] convert / load [16:19:12] yeah, I'm not putting you off, I just know I won't get to this soon [16:19:19] even though it's short [16:19:20] :/ [16:19:47] my dump related cycles are going to migration of the bulk of the jobs to eqiad and to reviving the media dumps [16:19:56] ok [16:20:02] and I'm told I should not use very many cycles on dumps now so... [16:20:08] alright [16:20:32] i suppose wikidump.conf is not public? [16:20:39] no [16:20:44] but I wouldn't go at it that way anyways [16:20:47] ok [16:20:55] seriously, this is the time to start 'if db has this table then add this job' [16:20:59] yeah!
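A rough sketch of the "add the job only if the wiki actually has the table" idea from the exchange above, with mysqldump wrapped in a little Python as suggested. The table name geo_tags, the pymysql dependency, the connection details, and the helper names are illustrative assumptions; this is not the real wikidump.conf machinery.

    import subprocess
    import pymysql  # assumed available; any MySQL client library would do

    def wiki_has_table(host: str, db: str, table: str) -> bool:
        # information_schema tells us whether this wiki has the optional table at all
        conn = pymysql.connect(host=host, user="dumpuser", password="...",  # hypothetical credentials
                               database="information_schema")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT 1 FROM tables WHERE table_schema = %s AND table_name = %s",
                    (db, table),
                )
                return cur.fetchone() is not None
        finally:
            conn.close()

    def dump_table_job(host: str, db: str, table: str, outfile: str) -> None:
        # plain mysqldump wrapped in Python; chunking could be added later with
        # --where clauses on the primary key if the table turns out to be large
        with open(outfile, "wb") as out:
            subprocess.check_call(
                ["mysqldump", "-h", host, "--single-transaction", db, table],
                stdout=out,
            )

    if __name__ == "__main__":
        HOST, DB = "db.example", "enwiki"          # hypothetical connection details
        if wiki_has_table(HOST, DB, "geo_tags"):   # only schedule the job if the table exists
            dump_table_job(HOST, DB, "geo_tags", "enwiki-geo_tags.sql")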
[16:21:00] it would all be right in the python [16:21:21] ok, i don't want to keep you from your meeting [16:21:30] * apergos looks around [16:21:46] toollabs is reasonably useful and we might get postgres there [16:22:09] ok [16:22:23] thanks anyway and i'll wait for the dump :) [16:22:24] apergos: hello, i'm here; i don't see parent; i pinged him on jabber a few minutes ago, but so far no response [16:22:34] I was just about to look for him there [16:22:39] whenever it happens, will be awesome [16:22:51] (aude, eventually :-) ) [16:22:57] :D [16:23:07] or geodata will just use wikidata [16:23:19] because i move everything to wikidata (all the coordinates) [16:23:25] whichever comes first :D [16:23:32] :-) [16:23:43] Svick: I guess we could get started [16:24:04] what's been happening today? [16:24:44] indexes; i think i'm almost done, i just have to fix some bug and then test it properly; i think it will be done today [16:25:53] sweet [16:26:57] out of curiosity, i also looked at text ids: it seems that on enwiki there are 520M revisions with text ids in the dump and they use 517M different text ids [16:27:13] 3 million dups? ouch [16:27:51] yeah, looks that way [16:28:15] are those reverts? that would make sense, reverting is relatively common [16:28:27] I think for first pass you can just keep the dup texts but it's worth discussing at a later point [16:28:39] I don't remember now what makes that happen [16:28:45] yeah, i don't plan to do anything with this now [16:30:25] what's coming up next? the library? [16:30:28] hi, is it possible to use js on wikipedia to get content from a labs page, or are they on different servers? [16:30:35] oh, it looks like it's page moves [16:30:40] ahhh [16:30:47] well that's annoying [16:30:56] Is Chris S. on vacation? [16:31:07] I thought reverts weren't noticed as something special by the software [16:31:24] I mean, they are just another edit as far as mw is concerned [16:32:08] yeah, i think reverts aren't noticed; what i meant is that the text id duplication is caused by page moves, because that creates a new revision with the same text [16:32:32] at least the first revision with a duplicate text id is a move [16:33:59] yes, sorry, the "I thought" was "yeah I didn't think it was reverts but more likely something else" [16:34:22] and there are ~3M moves in the logging table, which confirms it [16:34:30] and there we have it [16:35:34] I wonder which wikis have the highest % of moves [16:35:46] ie the wiktionaries, the wikipedias... [16:35:52] anyways that can wait til later [16:36:26] which means it probably doesn't make much sense to worry about this at all, because delta compression will take care of it [16:36:40] if that's the route you go, yes [16:38:05] yeah, i originally thought i would have to implement the delta compression myself, which made compressing in groups a safer choice; but now that i know about the existing libraries, i think i will try those first [16:38:13] :-) [16:38:22] yay for open source eh? [16:38:28] yeah :-) [16:39:13] when i'm done with indexes, i think i'll work on outputting the dump as XML [16:39:21] oh fun fun fun [16:39:33] both stub and content I guess [16:39:48] yeah [16:40:47] if all metadata and the content dumps are in one file then you can conceivably do that in one pass if the user wants both [16:41:13] s/content dumps/revision contents/ [16:43:03] yeah, i can basically just output the XML to stdout and that's all [16:43:28] ah you have that issue...
we don't want to compress [16:43:34] well meh [16:43:52] what do you mean? [16:44:43] I mean that we write uncompressed to stdout for speed [16:44:49] and let the user do whatever with it [16:45:19] that's going to be a bit limiting but we'll see how people work with it [16:45:28] don't worry about it for now [16:45:28] exactly; if the user wants compressed dump, he can easily create it himself from that [16:47:15] anything you want to talk about as regards the conversion back to xml (or anything else)? [16:47:18] and if some tool reads compressed XML dumps now, it should be relatively simple to modify it to read it from output of idumps [16:48:20] no, nothing; i'll let you know if i have some questions later [16:50:07] ok, cool [16:50:21] well parentxxyy didn't make it in time so he will have to read the logs [16:50:40] and... see ya tomorrow :-) [16:50:45] bye [16:53:01] Parent5446 [17:13:40] Is it just me or did the recent deployment brought some (unexpected from users view) changes: https://de.wikipedia.org/wiki/MediaWiki:Revreview-unlocked/en is displayed at editing. Never saw this message before. And https://gerrit.wikimedia.org/r/75611 doesn't work for me apart from the fact that I think the style changed [17:14:52] AaronSchulz: ^ [17:15:13] se4598: what style changed? [17:15:17] se4598: also, you mean in VE, right? [17:15:37] MatmaRex: edit notice in VE as in old wikitext editor [17:16:11] se4598: yeah, VE was fixed to show these notices [17:16:13] the notify thing: this changed and the box is now in the upper corner but doesn't scroll with [17:16:37] se4598: that's rather impossible. [17:16:40] browser? [17:16:48] firefox 22 as well in chrome [17:17:19] hm, true [17:17:21] magic [17:17:55] MatmaRex: and the review thing: I'm pretty sure to not saw the notice (in the old editor) while editing in my user-namespace (but I can be wrong) [17:18:30] Ha [17:18:38] se4598: That notice is my fault, fixing [17:18:47] se4598: isee it inboth… [17:19:00] https://de.wikipedia.org/w/index.php?title=Benutzer:Matma_Rex&action=edit&uselang=en [17:20:05] yeah, but I didn't remember that it was there before although the message could exist since 2010: https://de.wikipedia.org/w/index.php?title=MediaWiki:Revreview-unlocked&action=history [17:20:21] ah. then i dunno. [17:20:49] ah, RoanKattouw'sfixing that [17:20:57] Yeah [17:20:57] https://gerrit.wikimedia.org/r/75648 [17:21:06] Trivial oversight in my FlaggedRevs change, one-line fix [17:22:26] RoanKattouw: Is the message used elsewere or is that some old debris? [17:22:36] So, it's used in an obscure case [17:22:59] Which is if you use Special:Stabilization to explicitly set the page to show the latest version if the wiki would normally show stable, or vice versa [17:25:38] MatmaRex: are you on the notify thing?, because now not only the style changed but a message also blocks the user bar on the top right [17:30:13] se4598: Fix for the FlaggedRevs notice going out now [17:31:06] thanks [17:31:47] FR notices should be fixed now [17:32:43] RoanKattouw: trying to load VE, but api response is: PHP fatal error in /usr/local/apache/common-local/php-1.22wmf10/extensions/FlaggedRevs/frontend/FlaggablePageView.php line 918: [17:32:43] Call to a member function isReviewable() on a non-object [17:32:56] RARGH [17:32:57] Of course [17:32:59] * RoanKattouw stabs ->load() [17:33:52] * ^d signs RoanKattouw up for an anger management course ;-) [17:34:16] <^d> And I'm taking your knife away. 
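Back to the incremental-dumps exchange further up (around 16:43): writing the XML uncompressed to stdout keeps the tool simple and lets whoever runs it pick the compressor. A minimal sketch of that pattern, using a simplified element structure rather than the real export schema:

    import sys
    from xml.sax.saxutils import escape

    def write_revisions_xml(revisions) -> None:
        # Emit uncompressed XML straight to stdout; the consumer decides whether
        # and how to compress it, e.g.:
        #   python dump_to_xml.py | bzip2 > pages-meta-history.xml.bz2
        out = sys.stdout
        out.write("<mediawiki>\n")
        for rev in revisions:
            out.write("  <revision>\n")
            out.write("    <id>%d</id>\n" % rev["id"])
            out.write("    <text>%s</text>\n" % escape(rev["text"]))  # omitted for stub dumps
            out.write("  </revision>\n")
        out.write("</mediawiki>\n")

    if __name__ == "__main__":
        write_revisions_xml([{"id": 1, "text": "example wikitext"}])  # placeholder data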
[17:34:30] RoanKattouw: sigh [17:34:40] Having just returned from dinner: I suppose that the VE is in the middle of an update right now? [17:34:40] AaronSchulz: sorry :( [17:34:48] This is why I should test my changes, even one-liner [17:34:54] Excirial: Yes [17:35:02] AaronSchulz: Thanks [17:35:17] Ok, in that case the "Error while loading data from server: error." can be explained :) [17:35:56] Yes :( [17:37:00] Oh, and is there a clear "New VE version has been deployed" log somewhere? Digging trough the channel logs to see if a milestone is live is ehm.. suboptimal. [17:38:05] Excirial: stuff's currently slightly broken [17:38:14] I'm about to deploy the fix [17:38:23] so let RoanKattouw clean it up before pestering him :) [17:38:23] The bug is in FlaggedRevs actually [17:38:24] :D [17:38:26] But it 's still my fault [17:39:04] Matma - i noticed that. But i like to retest the bugs i reported when it is live. Last time i could close a few bugs marked as "New" as having been fixed. :) [17:40:55] MatmaRex: notification: rollback, fixing or leaving it? https://bugzilla.wikimedia.org/show_bug.cgi?id=50870#c14 [17:41:42] OK that error should be fixed now [17:42:40] se4598: no idea, but that shouldn't be happening [17:42:57] Krinkle: ^ [17:43:56] Yes [17:44:10] se4598: it worksfor me on pl.wp [17:44:21] WFM too [17:44:22] you sure it's not a local customisation breaking things? [17:44:27] se4598: Are you using Vector or Monobook? [17:44:35] vector [17:44:54] MatmaRex: pl.wp works for me too [17:45:15] it'sbroken onde.wp for me, though. [17:45:42] On http://en.wikipedia.org/wiki/IEEE_Software, executing mw.notify('a') a few times I notice something weird [17:45:48] though not horribly broken, it seems to switch from layout to floating when offset is > 0 px instead of > offset [17:45:56] e.g. when scrolling anywhere not the absolute top [17:46:05] I can't reproduce that locally or on other wikis though [17:46:13] Yes [17:46:18] I did notice that locally as well [17:46:29] you did? [17:46:37] It starts floating as soon as you scroll down the tiniest bit [17:47:01] Before you've closed the 7em gap from the top [17:47:05] so its only on dewiki [17:47:10] The code explicitly prevents that (supposed to anyway), it's like VE toolbar. Only once the scrolltop is > original offset [17:47:25] Krinkle: Are you measuring the offset correctly? Are you sure it's not relative to the parent or something? [17:47:38] Nope, pretty sure. Because it worked locally [17:47:46] MatmaRex also verified [17:47:50] yeah [17:47:57] this wasn't happening [17:48:04] It happens for me in FF [17:48:07] and, well, it works on pl.wp right now [17:48:11] I think it also happened in Chrome but I'm not sure [17:48:23] On dewiki it floats immediately in FF 22 [17:48:39] RoanKattouw: Was it backported or forwarded from master? [17:48:40] e.g. new wmf branch or backport [17:48:41] se4598: I don't see broken notification behavior on dewiki using Vector in FF 22 [17:48:48] Krinkle: Backported [17:48:53] Did I miss a change? [17:49:05] RoanKattouw: Not that I know of but it might conflict with something else we backported [17:49:09] I didn't update MW core to master, I'm not /that/ crazy [17:49:16] RoanKattouw: css not applied at time of calling offset()? [17:49:23] that's got to be it [17:49:38] No, it's broken in master too [17:49:47] hm [17:49:54] RoanKattouw: If you can reproduce it locally on master (which I can't) can you insert console.log for the offset value it gets? 
[17:49:58] i refreshed and it behaves corrently on de.wp now [17:49:59] D: [17:50:09] the one cached once on $(init) [17:50:26] Wait, you cache the offset value? [17:50:36] How is that not begging for breakage? [17:50:59] its expensive to do while scrolling, and it shouldn't change. [17:51:28] Right [17:51:31] Yeah I guess that's fair [17:51:45] it causes a reflow every move of the scrollbar, and the scrollbar and page will not move until all event handlers are called to completion and reflows it triggers [17:51:53] Cached offset Object {top: 0, left: 1571.203125} [17:52:06] that's a problem :) [17:52:08] That's master on Chrome [17:52:35] Hmm when I compute it later it's correct [17:52:40] RoanKattouw: tried it on dewiki: works in IE10 but in FF and Chrome its still there (FF logged in and chrome out) [17:52:48] yeah, so it is computed before css is applied I guess [17:53:00] wait, I know what happened [17:53:20] MatmaRex: Remember I removed the class name and made it happen on update instead (and call that) [17:53:27] we calculate offset between those two points in time [17:53:39] so it is in neither floating nor layout mode. [17:53:45] hm [17:53:50] I think that's it [17:54:03] isn't position: fixed applied bydefault? [17:54:41] sure, but the skin offset is only done when area-floating [17:54:43] (or absolute) [17:54:52] area-layout* [17:55:03] hm, true [17:55:10] Right [17:55:14] So you need to apply the class earlier [17:56:54] RoanKattouw: well, the class may have to be removed before rendering if your initial scroll position is > 0 (e.g. when using a # anchor in the url, or when refreshing to a scroll position as browsers do, or if you scroll and mw.notify is called later) [17:57:05] Ugh, right [17:57:05] but for the offset calculation, it needs to be in layout mode [17:57:18] So, why don't we lazy-init the offset variable [17:57:23] Measure it on the first scroll event [17:57:32] RoanKattouw: https://gerrit.wikimedia.org/r/#/c/75662/ [17:57:40] RoanKattouw: no can do [17:57:41] Hopefully it's impossible to trigger a scroll event before we've finalized the initial positioning of the notification area, right? [17:57:56] Oh, because the page can be scrolled down [17:57:58] Right, of course [17:58:00] RoanKattouw: It needs to be calculated in layout mode, first scroll event may not be during layout mode [17:58:12] It worked first, I just restored it to an earlier patch set version [17:58:14] We need to position it as if we're not scrolled down first, then measure, then put it in its actual position [17:58:16] can you verify? [17:58:26] RoanKattouw: http://i.imgur.com/SK7H6mB.png [17:58:35] It starts in layout mode, we do the offset, then we trigger the first update [17:58:59] se4598: That's ... 
not at all what I see in Chrome [17:59:00] we do the same in VE actually, I just removed it here to "optimise" it by removing it since update() adds it, but then I forgot that that would break this [17:59:36] Krinkle: +2ed [17:59:36] I'll deploy [18:01:24] Krinkle: do you know why I see http://i.imgur.com/SK7H6mB.png although I cleared that cache [18:01:56] se4598: it's broken [18:02:02] se4598: Krinkle and RoanKattouw are fixing it [18:02:05] give them a minute :D [18:02:10] Almost done [18:02:40] MatmaRex: yeah but Roan said he can't experience it [18:02:55] MatmaRex: Look at his screenshot, his notification thing is really broken in Chrome [18:02:57] * Krinkle silently resumes dinner peeking at the laptop behind him [18:03:05] It's probably user/Gadget CSS or something though [18:03:10] Krinkle: We're doing the standup now-ish, too [18:03:13] RoanKattouw: yeah, that's how it looked for me? [18:03:17] MatmaRex: Not for me [18:03:26] yes for me :) [18:03:45] and Krinkle's change should fix it, imo [18:04:07] se4598: thanks for the quick report, hold on for a while :) [18:04:13] It's deployed [18:04:21] I'll be away from my desk for 15 mins, brb [18:09:31] MatmaRex, RoanKattouw_away: fixed for me, thanks for the quick response. there must have been some race condition on dewiki apparently, nevermind it [20:43:35] Are there any large wikis (top 100 in total pages) that bypass varnish? Or some other cache layer? [20:43:50] My impression has been that nearly all large wikis use some layer of caching. However, I have heard some theories that large wikis can function without cache -- despite having lots of DPL. Is this true? Or am I completely mistaken? [20:44:21] with enough resources everything can work [20:44:39] Are there any wikis that currently do this? [20:44:54] not owned by wmf [20:45:04] I assume we would need an incredible amount of web servers and an enormous database server and memcache [20:45:12] well, you can cache elsewhere [20:45:14] even databases cache [20:45:20] memcache is caching! no memcache! [20:45:44] it would be a fun exercise to build wikipedia without frontend cache [20:45:44] Right, I meant specifically about squid/varnish [20:45:45] yeah it would be silly web servers and tons of fusion io drives [20:45:45] not that difficult, tbh [20:45:53] LeslieCarr: lies [20:45:56] the dataset is tiny [20:46:12] LeslieCarr: also, one would simplify the UI logic a lot in that case [20:46:13] but they'd have to pull the data and render it every time [20:46:17] domas, I heard Google has 1M servers. and that we serve fewer connections per second ;) [20:46:29] are we talking with mediawiki or with a newly written "newwiki" ? [20:46:30] MaxSem: I don't know what you've heard [20:46:41] LeslieCarr: simply deploying HHVM would give 3x boost already :) [20:46:50] LeslieCarr: also, lots of HTTP requests handled by squid are quite cheap [20:46:52] MaxSem: google also does things completely differently than us. sort of comparing apples and ponies [20:47:05] and MongoDB! [20:47:10] it's webscale! [20:47:48] Besides the WMF sites in the top 100 (based on total number of pages), are there any others that anyone is aware of that don't use a varnish/squid layer? [20:47:58] LeslieCarr: also, more dynamic loading, less pure http "give me everything" [20:48:23] Geoff_: wiki ones or nonwiki ones? :) [20:48:28] Wiki ones [20:48:38] of the top 100 Mediawiki sites [20:48:44] Geoff_, write a short script that looks at headers?
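A minimal sketch of the "short script that looks at headers" suggested above: fetch a page from each candidate site and look for cache-layer fingerprints such as X-Cache, X-Varnish or Via, or a Server header that names Varnish or Squid. Which headers a given site exposes varies, so treat these names as heuristics rather than a definitive test; the requests dependency and the site list are assumptions.

    import requests  # assumed available

    CACHE_HINT_HEADERS = ("x-cache", "x-varnish", "x-cache-lookup", "via")

    def looks_cached(url: str) -> bool:
        resp = requests.get(url, timeout=10, headers={"User-Agent": "cache-probe/0.1 (example)"})
        headers = {k.lower(): v for k, v in resp.headers.items()}
        server = headers.get("server", "").lower()
        has_hint = any(h in headers for h in CACHE_HINT_HEADERS)
        return has_hint or "varnish" in server or "squid" in server

    if __name__ == "__main__":
        # illustrative list; extend with whatever "top 100" wikis are being checked
        for site in ("https://en.wikipedia.org/wiki/Main_Page",):
            verdict = "cache layer detected" if looks_cached(site) else "no obvious cache headers"
            print(site, "->", verdict)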
[20:48:45] mediawiki is stupid, generally [20:48:55] so you need squid/varnish for it [20:49:06] mediawiki is 10x slower than it could be [20:49:12] (or than what it was) [20:49:46] and requires more and more memory for execution on every new version :( [20:50:00] yup [20:50:03] ridiculous parser, slow startup [20:50:11] Domas: Yes, that has been my impression as well, that we are significantly slower without varnish [20:50:12] I had startup to ~5ms in my tests [20:50:18] down to ~5ms [20:50:24] millions of lines of code that PHP needs to parse [20:50:29] could make it even faster with a bit bigger code reorg [20:50:37] Vulpix: no need to parse code to serve mediawiki tbh [20:50:44] thats what opcode caches (or JITing VMs) are for [20:50:44] :) [20:51:31] Yes, without cache on, how do you resolve the slow parseing time? [20:51:38] Can you throw enough database/server resources at it? [20:51:39] indeed, although excluding the parsing time, it needs a lot of memory anyway [20:52:06] Geoff_, we have memcached for parrser cache [20:52:08] or you mean wikitext parse? [20:52:23] it needs lots of memory because nobody optimized memory usage [20:52:29] it needs cpu because people don't really work on optimizing cpu [20:52:32] *shrug* [20:52:43] and because it's written on PHP :P [20:52:53] Just general parsing in the page load time. When serving up a page directly from the database, without varnish, the load times can be dreadful -- even with memcached turned up [20:53:04] without parser cache, you'll need an infinite amount of apaches and pages will still be slow like hell [20:53:12] Vulpix: PHP can be fast [20:53:18] PHP is fastest interpreted language at the moment [20:53:23] That is, if you bother to optimise it. [20:53:34] also, you shouldn't call "parser cache" a "cache" [20:53:34] domas, faster than JS in V8? :P [20:53:35] it is mandatory [20:53:40] MaxSem: probably [20:53:54] didn't see the numbers lately [20:54:19] dunno! [20:54:35] it is also different on how large scope codebase you're optimizing [20:54:50] anyway, rename parser cache into "parsed text storage" [20:54:59] and you will have mediawiki fast without too much caching :-D [20:55:04] http://benchmarksgame.alioth.debian.org/u32/benchmark.php?test=all&lang=v8&lang2=php&data=u32 [20:56:03] MaxSem: you're comparing against Zend, not HHVM [20:56:22] try comparing JS running on IE 3.0 [20:56:26] ;-) [21:03:06] anyway, if wikipedia had different performance needs, it would have different solutions [21:03:15] it is easier to throw hardware at the problem right now [21:03:15] :) [21:33:30] StarCraft II - Account Action Notification‏ Blizzard Entertainm​ent (noreply@blizzard.com) Add to contacts 12:58 PM To: r**********@*******.*** Picture of Blizzard Entertainment From: Blizzard Entertainment (noreply@blizzard.com) Microsoft [21:33:32] SmartScreen classified this message as junk. Sent: Wed 7/24/13 12:58 PM To: r**********@*******.*** Microsoft SmartScreen marked this message as junk and we'll delete it after ten days. Wait, it's safe! Greetings HydraulicsMe#1578, [21:33:33] Battle.net Account: r**********@*******.*** BattleTag: HydraulicsMe#1578 Action: 3 Hour Suspension Violation: Harassment - Spamming This includes sending an excessive number of in-game messages or unwanted friend invitations over a short period [21:33:35] of time. Details (Listed in Greenwich Mean Time): This is a warning against the above behavior, which Blizzard deems unacceptable for StarCraft II. 
In addition to this warning, your game license has been issued a suspension as detailed above. [21:33:36] Your game license will not be available for play during this time. As the account holder, you are responsible for the activity associated with this game license. Further violations will result in harsher suspensions or permanent closure. This page [21:33:38] contains details on how suspensions are appealed and reviewed: http://www.battle.net/support/article/6741 Regards, Customer Support Blizzard Entertainment http://battle.net/support [21:33:41] Battle.net Account Locked - Action Required‏ Blizzard Entertainm​ent (noreply@blizzard.com) Add to contacts 1:08 PM [Keep this message at the top of your inbox] To: r**********@*******.*** Picture of Blizzard Entertainment Greetings, Blizzard [21:33:42] has locked the Battle.net account ***********@*******.*** due to an unusual change in its access pattern. If you recently changed your connection pattern, you can unlock your account with a simple password reset. Visit the Password Reset page to [21:33:44] begin the process. The form can also be reached by clicking “Can’t log in” on any Battle.net login page. If you did not recently change your connection pattern, someone else may have attempted to access your account using your password. We encourage [21:33:45] you to follow our security checklist to secure your computer, and then performing a Password Reset to unlock access. Once you’ve regained access, consider adding Battle.net SMS Protect, a free service that allows you to quickly recover your [21:33:47] Battle.net account using a mobile device. For more information, see the SMS Protect FAQ. Regards, Blizzard Entertainment http://battle.net/support [21:33:49] i'm proud here.. [21:33:54] what? [21:33:55] i'm gonna have to kill the person who reported me [21:34:06] all i have is a pocket knife i dont even have her address i dont give a shit about her [21:34:09] Huh? [21:34:14] GrandmaAlive: this is a channel for wikimedia technical information [21:34:33] thanks QueenOfFrance [21:34:42] np [21:34:43] :) [21:35:20] what just happened [21:35:22] Reedy: Hi. Can you pastebin the output of "show create table page_props" for me? [21:35:38] Reedy: I'm trying to figure out if the index from https://bugzilla.wikimedia.org/show_bug.cgi?id=45316 is on Wikimedia wikis. [21:35:49] Just a troll c [21:36:00] QueenOfFrance took care of it. [21:36:23] very very strange troll [21:36:47] * Technical_13 has seen stranger... [21:37:00] Elsie: it has to be [21:37:09] Does it? [21:37:12] Elsie: otherwise special:pageswithprop would fail miserably [21:37:15] !wp special:pageswithprop [21:37:15] https://en.wikipedia.org/?title=special%3apageswithprop [21:37:26] What the fuck kind of URL is that... [21:37:43] one with percent-encoded parameters. [21:37:45] !wp [21:37:45] https://en.wikipedia.org/?title=$url_encoded_* [21:37:55] !p percent-encoding [21:37:57] /?title= is bizarre. [21:37:58] !wp percent-encoding [21:37:58] https://en.wikipedia.org/?title=percent-encoding [21:37:59] grumble. [21:38:08] I know what percent-encoding is. [21:38:19] it works, unlike, say, [21:38:20] !mw [21:38:20] https://www.mediawiki.org/wiki/$1 [21:38:33] Elsie: It's on enwiki at least [21:38:35] !mw Main Page [21:38:35] https://www.mediawiki.org/wiki/Main [21:38:38] Reedy: K. [21:38:40] oh no, fails. [21:38:49] MatmaRex: _ [21:38:50] Bots are evil. [21:38:56] ... I'll see if I can make a $encoded_wiki_* later like there is for recentchanges feed now. 
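On the !wp expansion above: the bot percent-encodes the whole title into a ?title= query, while MediaWiki's prettier /wiki/ links use "wiki encoding", i.e. underscores for spaces with a few extra characters left unescaped. A rough sketch of the difference; the exact set of characters MediaWiki leaves alone is an assumption here and should be checked against wfUrlencode before copying it into the bot.

    from urllib.parse import quote

    def query_style(title: str) -> str:
        # what the bot does now: everything percent-encoded into a title= parameter
        return "https://en.wikipedia.org/?title=" + quote(title, safe="")

    def wiki_style(title: str) -> str:
        # rough approximation of MediaWiki's link encoding: underscores for spaces,
        # and a handful of characters left as-is (this safe set is an assumption)
        return "https://en.wikipedia.org/wiki/" + quote(title.replace(" ", "_"), safe=":/!$()*,;@")

    print(query_style("Special:PagesWithProp"))  # .../?title=Special%3APagesWithProp
    print(wiki_style("Special:PagesWithProp"))   # .../wiki/Special:PagesWithProp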
[21:39:23] Reedy: feel free to implement wiki-encoding for the bot :D [21:39:28] domas: Yeah, the dataset is small enough that it'd be pretty cheap to put everything in RAM. I'm not sure if that would count as a cache. [21:40:00] putting RAM is "cache" over accessing disks [21:40:05] *into [21:40:05] MatmaRex: wm-bot already has wiki encoding on recentchanges feed.. think I can add it to infobot too. [21:40:11] putting into disks is "cache" over accessing from tapes [21:40:15] Heh. [21:40:20] putting into tapes is "cache" over accessing from stone tablets. [21:41:05] in this case LeslieCarr was waving her hands around the "oh we will need lots of flash" without actually looking much at the problem [21:41:07] :) [21:41:31] :p [21:41:32] domas: lists.wikimedia.org/pipermail/foundation-l/2009-May/051683.html [21:41:37] Oh, shit, protocol. [21:41:41] domas: Everything needs Abobe Flash [21:41:41] http://lists.wikimedia.org/pipermail/foundation-l/2009-May/051683.html [21:41:57] That whole thread is pretty great. :-) [21:43:51] brb [21:50:37] skimming that whole page is amazing. Gdansk, Google Wave, the licensing vote [22:38:22] Hello [22:38:47] hi Qcoder00 [22:39:04] http://commons.wikimedia.org/wiki/File:Flag_of_the_Mongol_Empire_2.svg - How do I find other links that need updating like in this diff? [22:39:13] Other than by checking every single link? [22:40:59] I.E Is there a way of finding out if an interwiki link from Commons to what is claimed to be a local image is in fact local, and not just an alias for an image already at Commons? [22:48:09] Qcoder00: hm [22:48:33] http://commons.wikimedia.org/wiki/Commons:Village_pump#Link_repointing [22:49:09] https://www.mediawiki.org/wiki/API:Iwbacklinks [22:49:14] there's this API module [22:50:39] but i can't seem to make it work. [22:53:03] I've asked on both enwiki and COmmons VP [22:53:16] Something will happen [22:53:19] I hope [23:00:39] gn8 folks
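For the interwiki-backlink question at the end, a rough sketch of calling the API:Iwbacklinks module mentioned above from Python. The iwblprefix/iwbltitle parameter names are taken from that documentation page as remembered here, so verify them (and add continuation handling) before relying on the output; the requests dependency and the example target are assumptions.

    import requests  # assumed available

    def iw_backlinks(api_url: str, prefix: str, title: str, limit: int = 50):
        """List pages that link to an interwiki target, e.g. Commons pages still
        pointing at a file on another wiki instead of the local/Commons copy."""
        params = {
            "action": "query",
            "list": "iwbacklinks",
            "iwblprefix": prefix,   # interwiki prefix as used on the querying wiki, e.g. "en" or "w"
            "iwbltitle": title,     # target title on the other wiki
            "iwbllimit": limit,
            "format": "json",
        }
        data = requests.get(api_url, params=params, timeout=30).json()
        return data.get("query", {}).get("iwbacklinks", [])

    # hypothetical usage: Commons pages linking to [[en:File:Flag of the Mongol Empire 2.svg]]
    for page in iw_backlinks("https://commons.wikimedia.org/w/api.php",
                             "en", "File:Flag of the Mongol Empire 2.svg"):
        print(page["title"])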