[00:00:15] binasher: AbuseFilter stores the text id [00:02:09] jeremyb: I'm about to leave for the day, is there something I can help with quickly? [00:02:55] chrismcmahon: bouncing it your way [00:03:17] Platonides stole it? [00:04:01] I was 'answering' [00:04:07] jeremyb: probably be tomorrow before I reply, hope that's OK [00:04:11] sending a bunch of questions, actually :P [00:04:49] (sent) [00:04:52] Platonides: do you think it's probably something with the weird unstyled headings because of cache ttl? [00:05:03] Platonides: that was my first guess [00:05:17] brion: rebased [00:05:32] jeremyb, it could be [00:05:48] although he talked about printing, which doesn't match too well with it [00:06:07] I thought of "he changed his zoom settings" [00:06:10] but who knows [00:06:18] with the little information he gave [00:06:25] brion: also see https://gerrit.wikimedia.org/r/#/c/35566/ :) [00:06:41] gn8 folks [00:08:48] btw, where's that queue in the hierarchy? [00:09:55] hmm, funny [00:10:05] I could see and reply that mail, but I can't view the queue [00:10:45] or perhaps it's just empty [00:13:58] Platonides: is that just a response name or your real name? [00:17:33] just a response name [00:17:44] Herpy Derpington [00:17:50] xD [00:24:17] night [00:47:19] awjr: you guys finished with your deployment? I was going to do a config change. [00:47:27] kaldari: yup go for it [00:47:29] thanks [00:51:31] hi, is anyone looking into the broken parser functions reported at http://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#Recent_changes_to_MediaWiki ? 
[00:52:43] Guessing it's related to the php-1.21wmf5/extensions/ParserFunctions/Expr.php synch made an hour ago [00:52:59] Mmm [00:54:19] I'll revert it out [00:54:24] ori-l: ^^ [00:55:06] Thanks, there are already 3 sections at VPT which are probably related [00:55:17] lol [00:55:27] the issue is actually finding the causes of it originally [00:56:01] Somewhere on the cluster there are some that were broken with the previous code [01:04:25] that fixed them all [01:04:29] That's annoying [01:07:35] Great, thanks Reedy. [02:27:57] Reedy: hah! [02:28:32] (int)0.001 === 0. hilarious. did you revert or cast to float? [02:28:46] I just reverted it for now [02:32:16] ori-l: Though, I was a good boy and added some tests to prevent the regression happening again [02:32:46] do the error logs at least show what other falsy value was sneaking through? [02:33:04] Nope [02:33:21] If it did, fixing it would've been much more obvious [02:34:17] Reedy: is that the entire change? [02:34:34] lol [02:34:35] nope [02:34:38] I failed [02:34:46] * Reedy glares at Reedy. [02:41:30] It is nearly 3am [02:41:33] wat [02:41:45] (float) 0 === 0 [02:41:45] false [02:42:22] rasmus lerdorf should pay reparations [02:43:40] Reedy: sleep! i'll submit another patch set [02:43:53] With some sweary comments? [02:44:10] sure [02:47:01] (float) 0 === (float) 0 [02:48:15] i'm not trusting that until i type that into a repl and see for myself [02:48:22] i learned my lesson [02:49:05] enwiki is a great place to test [02:50:28] I still want to know what the broken condition is [02:50:40] PHP, evidently [02:50:43] heh [02:50:45] wfSuppressWarnings() [02:50:48] try / catch? [02:51:04] would need a custom error handler [02:51:26] @ ftw [02:55:04] ok, so to confirm [02:55:56] you want to throw an ExprError if the variable is numeric and equal to zero [02:56:15] but you explicitly _want_ to let other cases fall through? [02:56:28] or do you want exprerror to catch empty array, null, etc? 
[02:56:45] no, i guess you want ExprError so you have a trace [02:57:10] but is ExprError going to produce one, or does it get caught and handled by the caller? [02:57:16] ^ Reedy [02:57:38] we have some weird code [02:57:39] if ( false === ( $stack[] = pow( $left, $right ) ) ) { [02:57:39] throw new ExprError( 'division_by_zero', $this->names[$op] ); [02:58:00] The exception makes a big red error message instead of the weird number [03:01:02] yeah, that's a bit odd [03:02:13] that's your code :P [03:02:43] I'm fairly sure i'm not the original author :p [03:03:53] i'm off for tonight now.. I think :p [03:03:54] so ExprErrors are caught and a message is outputted to the user [03:04:12] so if you catch all values that would be cast to zero, you'll eliminate the warning, but [03:04:54] whether or not that's desirable depends on whether other falsey values can be inputted by the user [03:05:05] or whether that's indicative of something broken in the implementation [03:05:47] I hope if you give it 2 / foobar it doesn't even try to evaluate it.. [03:06:18] i have to put my son to bed but i'll look at it more in a bit [03:06:26] meanwhile, sleep! [03:06:44] Expression error: Unrecognised word "foobar". [03:06:55] So it's some weird edge value [03:06:56] laters [04:49:16] I need to do a tiny bugfix deployment to the UploadWizard stuff I deployed earlier today [04:49:59] would anyone object to that? [04:52:24] Silence is golden :) [04:52:58] heh [04:53:44] I think most people are asleep. [04:54:42] let them sleep [04:55:04] the gnomes will fix everything in the meantime [05:02:23] all done [05:06:10] Yay [05:12:02] kaldari: want to look at https://gerrit.wikimedia.org/r/#/c/37372/ ? [05:12:20] sure... [05:13:38] I have no idea what this does, but the code looks fine :) [05:14:04] merged [05:14:42] it's a bug 42715 thing [05:15:32] looking...
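The cast-and-compare confusion in the Expr.php debugging above can be stated precisely: PHP's `===` requires equal value *and* the same type, so `(int)0.001 === 0` is true (the cast truncates to integer 0), while `(float)0 === 0` is false (float versus int). A small Python model of those semantics (the `php_identical` helper is an illustration of PHP's operator, not PHP itself):

```python
def php_identical(a, b):
    """Model of PHP's === operator: same type AND equal value."""
    return type(a) is type(b) and a == b

# (int)0.001 truncates to 0, so the strict comparison against 0 succeeds:
assert php_identical(int(0.001), 0)

# (float)0 === 0 is false in PHP because float and int are different types:
assert not php_identical(float(0), 0)

# ...but two floats compare identical, as verified in the channel:
assert php_identical(float(0), float(0))

print("comparisons behave as described in the log")
```

This is why a guard written with `===` against an integer zero can let a float zero "sneak through" while rejecting values like `0.001` once they have been cast.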
[05:16:31] oh that's annoying [05:17:24] I closed a bug from 2009 today, BTW [05:17:35] it was a good vintage [05:18:32] alright, time to close the computer [05:18:37] night all [05:20:21] bye [09:13:16] hello, good morning [09:16:02] hi [09:16:12] I'm working with a MediaWiki for a company, and I'm having a bit of trouble with the resourceloader: I've blocked access for outside users so they can't read the pages, since the wiki is private to the employees, and I'd like to be able to redirect the main page to the login page, instead of getting that page that says: "You need to log in... you must log [09:16:23] I don't know if I'm making myself clear [09:16:24] -> #mediawiki [09:19:07] can I speak in Spanish? [10:21:29] Dereckson: so hmm shebangs are behaving differently under Linux and BSD :-) Arguments are not split and "#!/usr/bin/env perl -w" ends up looking for a file named 'perl -w' [10:21:38] Dereckson: replied on https://gerrit.wikimedia.org/r/#/c/37374/ with a solution :-] [10:22:03] Dereckson: you must be using FreeBSD or mac os :-) [10:39:49] hashar: yes, I'm a FreeBSD user. [10:41:08] Dereckson: in this specific case the fix is easy, simply drop -w [10:41:15] Dereckson: does the same as "use warnings;" [10:41:17] So imagine you need to call php -s to offer syntax highlighting in a CGI environment. How would you achieve that? [10:41:52] using env ?
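The workaround hashar describes below (a `phpn` wrapper script) generalizes: on Linux, everything after the shebang interpreter is passed as a single argument, so the shebang must name exactly one word and any flags get baked into a wrapper found via `$PATH`. A runnable sketch of the pattern (all filenames are hypothetical, and `printf` stands in for `php -n` so the demo runs without PHP installed):

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()

# Wrapper with the flags baked in; printf plays the role of `php -n`
# here so the sketch is self-contained.
wrapper = os.path.join(d, "phpn")
with open(wrapper, "w") as f:
    f.write('#!/bin/sh\nexec printf "args:%s\\n" "$@"\n')
os.chmod(wrapper, 0o755)

# A script whose shebang names only ONE word after env -- the portable
# form, since Linux would hand "phpn -x" to env as a single argument.
tool = os.path.join(d, "tool")
with open(tool, "w") as f:
    f.write("#!/usr/bin/env phpn\n")
os.chmod(tool, 0o755)

env = dict(os.environ, PATH=d + os.pathsep + os.environ.get("PATH", ""))
out = subprocess.run([tool], env=env, capture_output=True, text=True).stdout
print(out.strip())  # the wrapper received the script's path as its argument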
[10:41:55] I have no idea [10:42:28] would need to fix the linux kernel maybe [10:43:21] (well the example isn't a real world one - we have to ask the webserver or any other component to directly handle .php and .phps; for example, editing SuEXEC (which calls some execv functions) to ask it to directly call path/to/php for .php and '/path/to/php', '-s' for .phps) [10:43:36] You're right, alas :/ [10:44:01] In 2002, a popular way to "execute" .php without a shebang was to add a linux kernel module called something like bin_msft [10:44:14] it allowed associating extensions with an executable, like on Windows [10:48:39] Thank you for drawing my attention to this detail, by the way; I've increased my knowledge of Linux specifics. [10:49:17] Dereckson: that is really a corner case [10:49:55] Dereckson: I was struck by it when trying to use something like: #!/usr/bin/env php -n [10:50:00] (where -n means do not use php.ini) [10:50:17] I ended up creating a "phpn" shell script [12:13:25] andre__: in about 30 h it will be a week since https://bugzilla.wikimedia.org/show_bug.cgi?id=42614 was reported [12:14:52] Nemo_bis: :( You're free to ask Aaron for updates as it's assigned to him. [12:16:41] nah [12:44:16] <^demon> DanielK_WMDE: Good morning. [12:44:45] ^demon: hey! thanks for coming on early! [12:45:17] <^demon> No problem. So, end of day yesterday we ended up with the 3 things done I wanted to accomplish: [12:45:20] ^demon: I found two things that needed fixing before we can proceed. well, one logic fix, and one change for injecting wfDebugLog calls. [12:45:24] <^demon> 1) Crons in place for prune [12:45:27] <^demon> 2) Changes table is back on [12:45:38] <^demon> 3) Re rebuild entity per page (you know) [12:46:15] <^demon> Which version are we planning on deploying? Do we have a branch or will it be from master? [12:46:26] yea, cool. now we need pollForChanges. which needs https://gerrit.wikimedia.org/r/#/c/37400/ first.
we are about to backport this [12:46:47] for investigating why we don't see the links on test2, we'll need https://gerrit.wikimedia.org/r/#/c/37402 [12:47:14] <^demon> Yeah, I saw 37402. [12:50:41] aude and tobi are backporting. please pull these in when they are done. then... [12:50:50] let's start with the debug patch [12:51:00] what do you think, should we set up polling first, or debug the client? [12:51:02] ok, debug [12:51:02] i'd be more comfortable seeing the links there before we start polling [12:51:09] right [12:51:12] it'll tell us that much is working right [12:51:20] it's technically unrelated, but I understand [12:51:28] running pollForChanges with some bug might not be so nice [12:52:59] <^demon> DanielK_WMDE: Which branch are we backporting to? mw1.21-wmf5? [12:53:03] yes [12:53:06] ^demon: can i update the wikibase tag point [12:53:11] for core wmf5 [12:53:17] or you can do it? [12:53:48] ok, i'm afk for a few minutes. quick concentration break [12:53:50] <^demon> Just make a new tag, it doesn't really make a difference. [12:53:51] we want 38edfe0a3e6c288d09e35ed4218383135fa9b8a7 deployed [12:54:04] <^demon> Yeah, the sha1 is what we need. [12:54:57] ok, that's it [12:55:22] <^demon> The current tag is deployed-to-wikidata.org_2012-12-05, just make deployed-to-wikidata.org_2012-12-07 :p [12:55:24] that introduces the debug log points, but not the needed pollForChanges patch yet [12:55:36] we could merge both and then use that tag [12:55:43] <^demon> Yes, do that. [12:55:46] <^demon> So I only have to sync once. [12:55:48] ok, one moment [12:55:51] ^demon: btw, 37402 has a dependency on another change. i already backported that. perhaps give it a quick eyeball. it's facc4bcae5e909d [12:56:49] 37402 is ok [12:57:00] it merged cleanly into the branch [12:57:03] <^demon> DanielK_WMDE: I can't find that commit, git show shows nothing. Link?
[12:57:12] it's in our branch [12:57:29] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Wikibase.git;a=log;h=refs/heads/mw1.21-wmf5 [12:57:58] * aude can try tagging it [13:04:04] ^demon: https://gerrit.wikimedia.org/r/#/c/37404/ [13:05:03] uh, i got a merge conflict while pulling the branch now. what happened? [13:05:32] https://gerrit.wikimedia.org/r/37424 [13:05:51] spoiled my local branch, it seems [13:06:02] DanielK_WMDE: everything is clean for me [13:06:05] ok, everything seems to be in [13:06:14] ^demon: there's the tag for core [13:06:31] aude: yea, fixed it with git reset origin/mw1.21-wmf5 [13:06:56] ok :) [13:07:19] ^demon: just to verify - what's the current setting for repoDatabase and changesDatabase? [13:07:36] both should be set to "wikidatawiki" [13:07:57] <^demon> Yes, both are. [13:08:00] it is, at least in mediawiki-config [13:08:04] ok, thanks [13:08:07] which i'm sure reedy deployed [13:08:07] <^demon> http://noc.wikimedia.org/conf/highlight.php?file=CommonSettings.php [13:08:46] looks good [13:11:12] <^demon> Sync'ing. [13:13:33] yay [13:13:51] hi hashar [13:14:08] lo [13:15:31] ^demon: btw, saving the status of pollForChanges in a local file is a nasty stopgap hack. i'll rewrite that, using db based locking, so it will work nicely for multiple target wikis. [13:15:40] i'll write the spec today (or monday) [13:15:50] <^demon> Sounds good. [13:18:33] ok, https://test2.wikipedia.org/wiki/New_York_City has one local link and the "edit links" thing [13:19:08] it can read the wb_items_per_site table on wikidatawiki [13:23:11] ^demon: how can we get sane log output? we only need the wikibase channel, but preferably only the requests we do ourselves. is there a good way to achieve that? [13:23:30] alternatively, the full output of the wikibase log channel would do, I suppose [13:23:37] <^demon> I'm trying to track down a root to create the file on fluorine.
[13:23:49] <^demon> And then I'll deploy https://gerrit.wikimedia.org/r/#/c/37426/ [13:26:30] ^demon: isn't udp2log supposed to create the file for us ? [13:26:40] seems the dir belongs to udp2log:udp2log [13:26:56] <^demon> I don't think it creates the file :\ [13:27:04] <^demon> I seem to remember people always manually creating them. [13:27:07] it does on beta [13:27:12] though that is a different setup [13:27:13] does anybody know which exact DNS blocklist operator is used for wikidata-test? [13:27:13] <^demon> Oh hmm, maybe that's fixed. [13:27:23] worth trying it :) [13:28:51] <^demon> a-ha! [13:28:56] <^demon> It did appear. [13:29:22] <^demon> DanielK_WMDE: [13:29:23] <^demon> http://p.defau.lt/?vbRupU0WDN_ngOqhoQn_9Q [13:29:38] magic [13:30:18] ^demon: so, seems like Revision::getRevisionText is unable to fetch the content from ES. [13:30:23] any idea why that would be? [13:30:45] can you give us the raw blob from the text table, associated with rev 631469 on wikidatawiki? [13:31:19] <^demon> I don't know where to look, honestly. [13:31:20] ^demon: this is cross-wiki ES access... maybe the load balancer magic doesn't like that? [13:31:42] ^demon: for the text blob? hold on a second [13:31:42] andre__, I asked DanielK_WMDE, who directed me to Silke_WMDE; I just asked her in #wikimedia-wikidata [13:31:53] oh thanks [13:32:35] Platonides, because I want to close as WONTFIX a bug report filed by a user affected by it, but it's unfriendly to not tell them where to complain instead :P [13:32:47] bug 42792 :) [13:34:06] ^demon: ok, so the blob is at DB://cluster25/316188 [13:36:06] ^demon: my guess is that it's failing in ExternalStore, line 72: $store = self::getStoreObject( $proto, $params ); [13:36:25] do you know how ES is configured? can a wiki in one shard access the ES cluster of another shard? [13:36:32] that seems to be our problem here [13:36:47] (I did test LB_multi, but not ES) [13:37:13] <^demon> I really don't know, have never tried.
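The `DB://cluster25/316188` string being chased above is an ExternalStore address: a protocol, then a cluster name, then a blob id in that cluster's blobs table. A small parser sketch in Python (field names are mine; the URL layout is as used in the log):

```python
from typing import NamedTuple

class ESAddress(NamedTuple):
    protocol: str   # e.g. "DB" -> the database-backed external store
    cluster: str    # e.g. "cluster25" -> which ES server group holds the blob
    blob_id: str    # row id within that cluster's blobs table

def parse_es_url(url: str) -> ESAddress:
    """Split an ExternalStore URL like DB://cluster25/316188."""
    proto, rest = url.split("://", 1)
    cluster, blob_id = rest.split("/", 1)
    return ESAddress(proto, cluster, blob_id)

addr = parse_es_url("DB://cluster25/316188")
print(addr)  # ESAddress(protocol='DB', cluster='cluster25', blob_id='316188')
```

Seen this way, the cross-shard question is whether the client wiki's load balancer knows how to open a connection to `cluster25` at all, which matches why the same fetch behaves differently depending on which wiki it runs from.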
[13:37:29] <^demon> If we could come up with a minimal test case, I could eval.php it and see what happens. [13:37:56] ^demon: ExternalStore::fetchFromURL( "DB://cluster25/316188" ); [13:38:19] <^demon> Yeah, I just put 2 and 2 together :) [13:38:31] ^demon: ExternalStore::getStoreObject( "DB", "cluster25/316188" ); [13:39:17] and finally ExternalStore::getStoreObject( "DB", "cluster25/316188" )->fetchFromURL( "DB://cluster25/316188" ); [13:39:42] <^demon> First two worked. [13:39:50] ah, wait, the second one is wrong [13:40:01] ExternalStore::getStoreObject( "DB", array() ); [13:40:17] the first one worked?? [13:40:21] hm... [13:40:21] <^demon> Ok, that works, as did the first. [13:40:27] damn [13:40:27] <^demon> ExternalStore::getStoreObject( "DB", "cluster25/316188" )->fetchFromURL( "DB://cluster25/316188" ); [13:40:41] <^demon> Gave: DB connection error: Access denied for user 'wikiadmin'@'208.80.152.%' to database 'c' (10.0.0.237) [13:40:49] o_O [13:41:06] ah, wait [13:41:17] ExternalStore::getStoreObject( "DB", array() )->fetchFromURL( "DB://cluster25/316188" ); [13:41:21] like that [13:41:28] so, if that worked... what didn't? [13:41:34] <^demon> Works. [13:42:17] hm. [13:43:48] ^demon: http://pastebin.com/qzupY63f [13:44:20] <^demon> *whoops* [13:44:29] <^demon> I should've been testing this from test2wiki, not wikidatawiki, right? [13:44:58] * aude back [13:45:23] ^demon: heh [13:45:23] please give me working language links for my presentation :D [13:45:25] yes! [13:45:53] aude: seems like the problem is accessing ES servers in another shard. [13:46:46] i see [13:46:58] the blobs [13:47:09] <^demon> DanielK_WMDE: The minimal case from the pastebin works on test2. [13:48:21] o_O [13:51:03] this is strange... it looks like there are messages missing between the last line and the line before [13:51:15] i can't see what path is taken there. very strange [13:52:46] orr! why does LangLinkHandler::getEntityLinks take a Parser, not a Title object? 
[13:52:56] that makes it impossible to test, now :/ [13:53:24] ^demon: can you please try the same request again? [13:55:08] <^demon> Which one? [13:55:42] ^demon: whatever created the log file [13:56:04] <^demon> Ah, https://test2.wikipedia.org/wiki/New_York_City?action=purge [13:56:37] ^demon: yea... but that's not updating the log file, is it? [13:56:50] <^demon> Yes it is. [13:57:12] http://p.defau.lt/?vbRupU0WDN_ngOqhoQn_9Q still shows me 13:28:32 [13:57:19] <^demon> That's just the pastebin'd version [13:57:25] <^demon> The actual log is on fluorine. [13:57:38] ah [13:57:54] sorry, didn't pay attention to the url :) [13:57:56] <^demon> Updated version: http://p.defau.lt/?3ldfjcuEnV51DNjpta_yDg [13:58:01] can you give me the latest, please? [13:58:33] :( [13:59:47] ^demon: thanks. hm. here's the mystery: i see one log entry from extensions/Wikibase/client/includes/store/sql/WikiPageEntityLookup.php line 246. Then I see one from extensions/Wikibase/client/includes/LangLinkHandler.php line 61. [14:00:16] but it's not hitting... oh! [14:00:37] the other log message in WikiPageEntityLookup is a wfWarning, so it doesn't show in the channel. [14:00:46] so, that mystery is solved. [14:00:48] hm [14:01:12] ^demon: so... \Revision::getRevisionText( $row ); fails for unknown reasons. [14:01:35] ugh [14:01:36] it would be good to know what's in $row->old_text and $row->old_flags [14:04:22] <^demon> Live debugging fun ;-) [14:05:00] <^demon> I'm var_dump()ing $row on wikidata.org, so get your output fast ;-) [14:06:05] <^demon> Whoops, I meant on test2wiki. [14:06:10] <^demon> It's still too early :\ [14:06:18] <^demon> DanielK_WMDE, aude: See the output of $row? [14:06:27] ^demon: https://gerrit.wikimedia.org/r/#/c/37427/ [14:06:47] ^demon: yea, if you have it, please!
[14:07:18] <^demon> https://test2.wikipedia.org/wiki/New_York_City [14:07:21] <^demon> Just look at the top ;-) [14:09:25] ^demon: please try calling Revision::getRevisionText( $row ) in eval.php, where $row is that array. perhaps use var_export() on the page, so you can copy & paste :) [14:09:43] it *looks* fine. [14:09:56] * aude looks [14:10:10] :D [14:10:51] DB://cluster25/316188 [14:11:06] * aude doesn't understand exactly how that part works [14:11:10] aude: yes. but chad tried that "url" in eval, and it worked... [14:11:24] heh [14:11:25] huh [14:11:48] <^demon> Hmm, $blob is false. [14:12:05] ^demon: right. now the question is... why?! [14:12:20] ugh [14:13:04] ^demon: if we can't find that out, perhaps leave it for now and ask tim for help. we can set up polling anyway, doesn't make a difference. [14:13:17] <^demon> Tim's on vacation. [14:13:26] yay :P [14:13:54] well, someone else who knows about ES. asher, or brion, or someone. [14:14:16] we now at least have narrowed down the cause a lot. i'm confident that it doesn't impact change propagation. [14:14:25] we just can't show anything :P [14:14:37] <^demon> I just cherry picked https://gerrit.wikimedia.org/r/#/c/37427/ into production. [14:15:30] ^demon: cool! can you paste the respective line from the log? [14:15:41] <^demon> 2012-12-07 14:15:22 srv265 test2wiki: blob spec: flags=utf-8,gzip,external, text=DB://cluster25/316188 [14:15:44] <^demon> As expected, really. [14:15:51] hmhm [14:16:17] well, we could try getRevisionText( $row ) on the full row, as suggested. [14:16:23] but if it works in eval()... [14:16:26] *shrug* [14:16:43] seems like a bug in the load balancer to me. or i don't really understand how the ES stuff in there works [14:17:35] <^demon> We could throw some wfDebugLog() stuff into revision or externalstore. [14:18:19] into the correct store implementation. there it would be useful. 
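The log line above, `flags=utf-8,gzip,external, text=DB://cluster25/316188`, names the three transformations that fetching the revision text must undo, outermost first: resolve the external address, then gunzip, then treat as UTF-8. A sketch of that unwrapping in Python (`fetch_external` is a stand-in for the ExternalStore lookup; the assumption that the gzip flag means a raw deflate stream, matching PHP's `gzinflate()`, is mine):

```python
import zlib

def get_revision_text(old_text, old_flags, fetch_external):
    """Unwrap a text-table row according to its flags.
    Returns False when the external fetch fails -- the
    '$blob is false' symptom being debugged in the channel."""
    flags = old_flags.split(",")
    blob = old_text
    if "external" in flags:
        blob = fetch_external(blob)  # old_text was a DB://cluster/id URL
        if blob is False:
            return False
    if "gzip" in flags:
        # Raw deflate stream (PHP gzinflate); wbits=-15 selects that format.
        blob = zlib.decompress(blob, -15)
    if "utf-8" in flags and isinstance(blob, bytes):
        blob = blob.decode("utf-8")
    return blob

# Round trip against a fake external store:
co = zlib.compressobj(wbits=-15)
store = {"DB://cluster25/316188": co.compress(b"example revision text") + co.flush()}
text = get_revision_text("DB://cluster25/316188", "utf-8,gzip,external",
                         lambda url: store.get(url, False))
print(text)  # example revision text
```

The structure makes the failure mode legible: if the external stage returns false, everything downstream is skipped, and the caller only sees a blank result with no hint of which stage failed.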
[14:18:47] ^demon, aude: how about I file a bug about this, we try to get that some attention, and we move on? [14:19:01] it'd be very hacky but I suppose an alternative is to fetch entity data from the api [14:19:09] at least something external clients could do [14:19:29] yes. it's just a lot of load. [14:19:34] but we can do that. [14:19:36] * aude nods [14:19:40] but it's silly - this should work [14:19:42] <^demon> I'd rather not. [14:19:44] I expect gadgets will do stuff like that [14:20:00] sure. [14:20:27] <^demon> Perhaps we need another set of eyes. AaronSchulz? [14:20:56] <^demon> Oh, it's only quarter after 6 there. [14:20:57] <^demon> Ugh [14:21:22] :( [14:25:03] * DanielK_WMDE is writing a bug report [14:25:44] ^demon: have you thought about how often and for how long the poll script should be running? [14:26:33] <^demon> I'm thinking since it's just a stopgap, we could do every 5m on hume. [14:26:43] <^demon> That's often enough for testing, and since it's only one wiki we won't overload anything. [14:28:00] it should work [14:29:19] <^demon> The --once operation would be more useful if you could specify the number of batches to do before exiting. [14:29:30] <^demon> So you could do --batches=5, or --batches=1, etc. [14:35:06] aude, ^demon: https://bugzilla.wikimedia.org/show_bug.cgi?id=42825 [14:35:15] afk [14:35:54] <^demon> Sounds about right. I'm gonna get some breakfast. [14:41:09] Computers suck [14:42:48] <^demon|brb> Reedy: Indeed.
If you're bored, play with ES :p [14:43:03] lol [14:46:00] Reedy: got yet another lame API patch https://gerrit.wikimedia.org/r/#/c/37430/ :D [14:46:11] Reedy: to let us split the api.log file which is wayyyy too big [14:46:37] Reedy: would be great if you had some idea regarding https://bugzilla.wikimedia.org/show_bug.cgi?id=42825 [14:52:56] Got a quick-ish errand to run and then I'll have a look and see if I can help any [15:08:48] DanielK_WMDE: thanks [15:48:47] ^demon: wb [15:49:56] ^demon: so, as far as I can see, pollForChanges should run every few minutes. Persistent (default) mode would be ideal for us, but I can imagine that you don't like it. So, perhaps go for the --all option. Or --once, if you think --all can be problematic. [15:50:40] oh, wait... [15:53:06] <^demon> Hmmm. [15:53:53] ah, yea, it'll work. until we merge Ia65666b7. then it would break - needs fixing on master [15:54:05] <^demon> --all should be fine. I just ran it. [15:54:18] <^demon> (One off, I wanted to see what performance was like) [15:54:27] ^demon: how long did it take for the entire backlog? [15:54:35] <^demon> Like, 45s? [15:54:39] https://test2.wikipedia.org/w/index.php?title=Special:RecentChanges&hidewikidata=0 :o [15:54:39] <^demon> Maybe a minute [15:55:02] cool [15:55:05] it links to me on wikidata.org :D [15:55:36] we need to improve the edit summary for slurpinterwiki, i see [15:56:14] if it's multiple, then it could just be "Update of interwikis from enwiki" ... same as one on wikidata [15:56:40] East Peoria, Illinois (Q506116); 13:48 . . Aude (Talk | contribs) (Language link added: ca:East Peoria) [15:56:42] neat [15:57:08] aude: it could use our magic /* stuff */, so we could even internationalize their messages [15:57:31] ^demon: cool! hm... when you add this to cron, it will run again. and inject dupes. [15:58:10] <^demon> Is there a patch to fix the dupe issue yet?
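The dupe problem raised here exists because pollForChanges only knows where it stopped via its continuation state: if that is lost, it replays the whole wb_changes backlog. The pattern discussed below (persist the last processed change_id, resume with ids above it, seed with --startid) can be sketched like this (function and file names are illustrative, not the actual script's):

```python
import os
import tempfile

def load_last_id(state_file):
    """Last processed change_id, or 0 when no state has been saved yet."""
    try:
        with open(state_file) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0

def poll_once(state_file, fetch_changes, dispatch):
    """One pass: handle every change after the saved id, then persist
    the new high-water mark so the next run skips what was done."""
    last = load_last_id(state_file)
    for change in fetch_changes(after_id=last):
        dispatch(change)
        last = max(last, change["id"])
    with open(state_file, "w") as f:
        f.write(str(last))
    return last

# Demo against an in-memory stand-in for the wb_changes table:
wb_changes = [{"id": i, "type": "wikibase-item~update"} for i in (1, 2, 3)]
fetch = lambda after_id: [c for c in wb_changes if c["id"] > after_id]

state = os.path.join(tempfile.mkdtemp(), "WBpollForChanges_test2wiki.state")
seen = []
poll_once(state, fetch, seen.append)
poll_once(state, fetch, seen.append)  # re-run: nothing is handled twice
print(len(seen))  # 3
```

The scheme also shows why the state must survive between cron invocations and why the auto-increment ids must never reset: the saved integer is the only thing standing between a 5-minute incremental pass and a full replay.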
[15:58:13] DanielK_WMDE: it is internationalized [15:58:20] aude: ah, neat [15:58:26] test2wiki is english [15:58:52] ^demon: not really. but i can tell you how to avoid it. if the continuation stuff works right, there will be no dupes [15:58:58] https://test2.wikipedia.org/w/index.php?title=Special:RecentChanges&hidewikidata=0&uselang=de [15:59:04] ^demon: where did you try to run it manually? on the same box cron is running? [15:59:12] right now, the edit summary just shows the first link added :/ [15:59:29] <^demon> I ran it from fenari, not hume :\ [15:59:53] * aude is 89.204.154.56 too [15:59:59] ^demon: grab max(change_id) from the table, run it again from hume with --startid [16:00:03] must log in to use slurpinterwiki [16:00:08] ^demon: that will create an appropriate continuation file [16:00:59] <^demon> Gotcha. [16:01:00] ...in /tmp, by the way. unless you use --statefile, or /var/run/ is writable [16:01:16] ^demon: an explicit --statefile is probably a good idea [16:03:47] <^demon> Eh, /tmp will be fine for now. It's not meant to run on hume forever anyway. [16:04:40] <^demon> DanielK_WMDE: If https://gerrit.wikimedia.org/r/#/c/37429/2/manifests/misc/maintenance.pp,unified looks good to you, I'll go bug an opsen. [16:05:29] looks okay ^demon [16:05:45] if all is well, maybe make it 5 minutes or something shorter [16:06:00] <^demon> It is every 5 minutes. [16:06:23] ^demon, aude: if the file is killed from /tmp, it will re-do the entire backlog, creating dupes... [16:06:38] DanielK_WMDE: ^demon we can't have that [16:06:54] so, don't let it use /tmp [16:07:15] find a good place under /var or something. [16:07:18] <^demon> Argh, and hume is going to be rebuilt soon too. [16:07:24] <^demon> So I need to puppetize this. [16:07:41] sorry for not thinking about that earlier :/ [16:07:41] <^demon> Whatever file it goes in [16:07:55] <^demon> It's fine, just need to think real quick. [16:10:32] <^demon> Let's stash it in /home/wikipedia.
That's nfs, so it'll be the same on fenari & hume. [16:13:23] uh... [16:13:41] aude: do the entries injected into RC contain the original change ID? [16:13:57] if they do, we could just find the newest such entry, look at the change ID, and go from there... [16:14:06] that would be a LOT nicer than a local file :) [16:15:06] DanielK_WMDE: might be in rc_params [16:15:39] we just can't ever truncate the wb_changes or do anything to reset the auto increment ids [16:16:31] truncating is ok [16:16:43] but autoincrement ids must stay. [16:17:07] aude: but that is true no matter where and how we store the continuation state. it always uses change_id for reference. [16:17:08] rc_params: a:1:{s:20:"wikibase-repo-change";a:12:{s:2:"id";i:1012;s:4:"type";s:20:"wikibase-item~remove";s:4:"time";s:14:"20121206230756";s:9:"object_id";s:7:"q480993";s:7:"user_id";i:3102;s:11:"revision_id";i:0;s:11:"entity_type";s:4:"item";s:9:"user_text";s:20:"Katie Filbert (WMDE)";s:7:"page_id";i:0;s:6:"rev_id";i:0;s:9:"parent_id";i:0;s:7:"comment";a:2:{s:7:"message";s:18:"wbc-comment-remove";s:8:"sitecode";s:6:"enwiki";}}} [16:17:14] ugly but that's what's there [16:17:39] i don't see the change id [16:17:42] in the recent change save patch i have, i used the revision id [16:17:44] but we can add that. [16:17:56] it's more likely to be reliable, IMHO [16:18:10] i... don't think so. [16:18:18] but lets discuss this at a better time [16:19:28] ^demon: btw - how often does pruneChanges run? [16:19:28] * aude is wary of change_id unless we are 110% sure it will never reset or otherwise corrupt [16:19:34] ...and with what parameters? [16:19:37] <^demon> DanielK_WMDE: Every 15m. [16:20:09] <^demon> No special options, just defaults. [16:20:33] ^demon: you may want --number-of-days 1. The default is 7, and the table grows fast.
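Recovering the continuation point from the newest wikibase RC row, as floated above, would mean pulling an id out of a PHP-serialized rc_params blob like the one pasted. The participants weren't certain the `id` field in that blob is the repo change id, so treat this as a sketch of the extraction technique only, not a statement about the schema; for a single integer field, a regex over `serialize()` syntax suffices:

```python
import re

# Abbreviated copy of the rc_params blob pasted in the channel:
rc_params = (
    'a:1:{s:20:"wikibase-repo-change";a:12:{s:2:"id";i:1012;'
    's:4:"type";s:20:"wikibase-item~remove";s:4:"time";s:14:"20121206230756";'
    's:9:"object_id";s:7:"q480993";s:7:"user_id";i:3102;}}'
)

def extract_repo_change_id(blob):
    """Pull the integer 'id' field out of a PHP-serialized
    wikibase-repo-change array: s:2:"id";i:<n>; in serialize() syntax."""
    m = re.search(r's:2:"id";i:(\d+);', blob)
    return int(m.group(1)) if m else None

print(extract_repo_change_id(rc_params))  # 1012
```

A `SELECT` for the newest wikibase-flagged RC row plus this extraction would replace the state file entirely, which is exactly why it was attractive compared to something living in /tmp.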
[16:20:51] i'm working on making it smaller, but after 7 days, we may be hitting some wall again [16:21:00] i think that's about as long as it took the first time around [16:21:39] * aude is heading out for dinner.... [16:21:44] aude: have fun! [16:21:47] ok [16:22:18] i'm happy pollForChanges works okay (except for the continuation stuff) [16:27:27] ^demon: i'll hang around a bit to see how things go. let me know when it's up. [16:28:51] <^demon> prune has the new param live on hume. Finishing off the one for poll. [16:29:56] thanks [16:39:31] <^demon> aude, DanielK_WMDE: cron for polling for test2. [16:39:40] <^demon> I've already put that file in place. [16:39:44] <^demon> https://gerrit.wikimedia.org/r/#/c/37429/5 [16:40:54] ^demon: uh, the state file should not have a .pid extension - it's not a pid file (that also exists, in /tmp, and could be configured with --pidfile; but losing the pid file is not a big deal at the moment) [16:41:19] this will work as is, it's just a bit confusing. and if you change it later, you have to actually rename the file to keep the state [16:41:20] oh, well [16:41:34] anyway [16:41:38] ^demon: when will it run? [16:41:45] <^demon> I'll just drop the .pid [16:41:59] or use .state or whatever, yea [16:41:59] <^demon> Every 5 minutes, that's the minute => "*/5" [16:42:04] or .changeid [16:42:10] ok cool [16:42:20] can you post the log when it has run? [16:43:18] <^demon> Ok, uploaded PS6 to fix the pid -> changeid. [16:43:29] <^demon> I've gotta bug someone to merge it and run puppet before it'll start. [16:43:35] <^demon> But yeah, after that I'll post the log. [16:43:51] ok cool. [16:44:08] i'll go fix donner & do chores.
will peek in every now and then [16:52:18] hey guys, anyone know if anything has suddenly changed with the jobqueue system? because it's suddenly jumped in the last 3 hours [16:52:20] http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=506&m=enwiki_JobQueue_length [17:13:33] Seddon: https://gdash.wikimedia.org/dashboards/jobq/ suggests that there might be a peak in submitted jobs; but the queue hasn't been cleared for en.wiki in many days [17:27:25] when there's 5M, it's not exactly surprising.. [17:27:55] Reedy: the queue is being cleared for the other wikis [17:28:05] enwiki gets a lower ranking [17:28:06] en.wiki and fr.wiki are the only ones where it doesn't decrease [17:28:09] so other queues do get done [17:28:19] dunno [17:29:38] fr worries me still [17:29:44] PHP Notice: Trying to get property of non-object in /home/wikipedia/common/php-1.21wmf5/extensions/CentralAuth/CentralAuthUser.php on line 115 [17:29:48] Didn't Tim fix that one? :/ [17:29:59] how is that still.... [17:30:59] I suspect CA hasn't been updated.. [17:34:47] reedy@fenari:~$ mwscript runJobs.php zhwiktionary [17:34:47] reedy@fenari:~$ [17:34:48] wth [17:35:09] there's 65 jobs.. [17:49:03] * Seddon does have a funny feeling that the gradient of the graph over the last hour and a half is distinctly steeper than previously [17:49:50] Maybe [17:49:56] But half the job types aren't being run [17:50:01] so nothing is surprising [17:52:38] * Reedy glares at AaronSchulz
[19:17:37] (I have to leave in a short while, and you left #mediawiki, so as a fyi) [19:18:13] sorry about that valhallasw - wifi disconnected [19:18:25] np [19:18:29] bye! [19:21:43] valhallasw, probably you would need to import that page and all its dependencies to check... [19:22:53] last time we tried to reproduce itwikisource on another wiki it took years and we were never done ^^ [19:23:26] I know Zaran was working on that + beta labs in January [19:26:18] Platonides: yes, but not all dependencies are included (e.g. the Page: namespace), as well as the dependencies on commons [19:26:45] in addition, you'd need to also set up all other used extensions [19:27:09] (and in this specific case, there also was the problem that I couldn't even render the page in the original situation) [19:37:23] anyway, it would be nice to have a better way to test things like this than crowdsourcing it during deployment ;-) [19:37:35] but I really have to go now. See you! [20:00:56] !log synchronized payments cluster to 351b8cfa58dc99 [20:01:05] Logged the message, Master [20:27:49] kaldari: you there? [20:28:14] kaldari: this is gonna sound bad, but i just got the bad heading styles. [20:29:27] kaldari: link to the styles: http://bit.ly/STcS71 [20:29:47] cache headers: [20:29:47] X-Cache: strontium miss (0), cp3022 hit (1) [20:30:38] annnd it's gone now. hm, i hope these headers come from the bad version, not the good one [20:32:32] good cache headers: X-Cache: sq69 miss (0), cp3019 miss (0) [20:32:50] Ryan_Lane: hello, i just hit that heading CSS bug again, and this time i have debugging information. [20:33:04] (you were working on it yesterday, right?) 
[20:33:12] I was not working on it [20:33:13] no [20:33:17] kaldari was, I think [20:34:09] kaldari is missing now :( [20:34:31] and i've got the x-cache headers, as well as full file contents [20:34:50] (bad one was X-Cache: strontium miss (0), cp3022 hit (1), i just copied it before you joined) [20:38:39] ooh [20:38:47] was eating lunch [20:39:28] i'm pretty sure that i didn't have this file cached before, as i emptied my cache recently in relation to this bug [20:39:57] unfortunately i got the good file after refreshing the page, so it might be a false alarm [20:40:03] plus the client-side caches are supposed to expire after 5 minutes anyway [20:40:16] but i thought it worth reporting [20:40:31] you saw the bad headers just recently? [20:54:45] ^demon: *poke* [20:55:50] <^demon> Hi, cron was just deployed. [20:56:38] <^demon> It's playing catch up right now [21:00:22] <^demon> DanielK_WMDE: http://p.defau.lt/?dMapco_FA1chPgeW4IuaTg [21:00:39] <^demon> It's actually fine. It's run every 5m since 20:40, and takes ~2m each time. [21:01:06] 20:40 utc? [21:01:14] <^demon> Yep. [21:01:16] with --all? [21:01:26] <^demon> Yep. [21:01:49] that means it should have caught up. but i don't see much in rc... let me check again [21:02:55] <^demon> Hmm, I'm not seeing RC either. [21:03:08] ^demon: doesn't look like it's injecting anything into rc https://test2.wikipedia.org/w/index.php?title=Special:RecentChanges&hidewikidata=0 [21:04:25] gah. [21:04:36] so what is it doing?... [21:05:40] ^demon: doesn't seem to work :/ [21:05:47] <^demon> I know, I'm not sure. [21:05:52] shall we let it run anyway? [21:06:13] <^demon> I say yes. It worked when we ran it one-off the first time. [21:06:20] <^demon> So I'm not sure why it's not running in the cron. [21:06:21] true [21:06:42] i hope it's connecting to the right DB and not injecting into some other wiki's rc feed :P [21:07:25] <^demon> I'm doing it for --wiki test2wiki [21:09:06] which should work.
[21:09:18] https://test2.wikipedia.org/w/index.php?title=Special:RecentChanges&hidewikidata=0 [21:09:20] looks good [21:09:21] <^demon> Hmm, it worked when I ran it from hume as one-off just now [21:09:30] <^demon> That's not the cron [21:09:34] it's just that there are not a lot of pages on test2 [21:09:46] most of them simple topics that are probably already populated [21:10:19] that's no coincidence [21:10:38] <^demon> Those should've been caught earlier. They showed up when I did it manually. [21:10:45] ^demon: run it again in a few minutes? just to see if we then see changes again [21:11:02] <^demon> Yeah, once the cron finishes (it just started) [21:11:07] don't you see my changes? [21:11:27] aude: yes we do. but they are the only ones since 14:30. [21:11:37] i made new ones [21:11:38] aude: and they only showed when ^demon ran the script by hand [21:11:50] did he just run the script by hand? [21:11:54] yes [21:11:57] oh [21:12:23] wait until he does again. after that, make more edits. see if they show after 5 minutes [21:12:29] oh [21:12:31] ok [21:14:09] <^demon> I'm running it manually now. [21:14:58] someone just made [[Comet]] and it has a test2 page [21:15:05] is the script on the cronjob still? [21:15:14] comet, the wikidata item [21:15:22] <^demon> The cron's having trouble with the pid file. [21:15:31] <^demon> Warning: file_put_contents(/tmp/WBpollForChanges_test2wiki.pid): failed to open stream: Permission denied in /home/wikipedia/common/php-1.21wmf5/extensions/Wikibase/lib/includes/Utils.php on line 547 [21:15:36] oh no [21:15:52] i have to run mine as sudo -u www-data [21:16:06] <^demon> Ha, my fault. [21:16:07] * Damianz frowns [21:16:08] which somehow has the permissions and permission to write in /tmp [21:16:08] <^demon> I stole ownership. [21:16:12] ah [21:16:27] What's wrong with /var/run/? You know like FHS and that shiz [21:17:32] Damianz: that's actually what it tries to use by default. if it can write there, it will.
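[Editor's note: the `Permission denied` warning above is the classic PID-file failure mode: a manual run under a different user left a file the cron user can't overwrite. A sketch of the usual pattern, preferring `/var/run` per FHS as Damianz suggests and falling back to the system temp dir, with a staleness check before refusing to start. This is illustrative Python, not the extension's actual PHP; the function names are invented:]

```python
import os
import tempfile

def pid_file_path(name, run_dir="/var/run"):
    """Prefer /var/run (FHS); fall back to the system temp dir when the
    preferred directory isn't writable by the current user."""
    base = run_dir if os.access(run_dir, os.W_OK) else tempfile.gettempdir()
    return os.path.join(base, name + ".pid")

def acquire(name):
    """Refuse to start if a previous instance's PID file names a live
    process; otherwise record our own PID and return True."""
    path = pid_file_path(name)
    if os.path.exists(path):
        try:
            with open(path) as f:
                old = int(f.read().strip())
        except ValueError:
            old = 0  # corrupt PID file: treat as stale
        if old > 0:
            try:
                os.kill(old, 0)  # signal 0: existence check only
                return False     # previous run is still alive
            except OSError:
                pass             # stale PID file, safe to take over
    with open(path, "w") as f:
        f.write(str(os.getpid()))
    return True
```

[The second `acquire` for the same name from a live process returns False, which is exactly the "already running" behaviour the cron hits below.]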
[21:17:42] Thank god for that [21:17:50] * Damianz gives DanielK 1 cookie [21:17:57] * DanielK_WMDE noms [21:19:02] kaldari: sorry, i missed your reply earlier - yes, i did see incorrect CSS just before i reported it here, then i refreshed the page to be able to access the headers (i believe the contents didn't change), i copied the x-cache header as pasted here, then i refreshed it again and got "good" CSS this time [21:20:17] ^demon: hm... /tmp isn't hardcoded, it uses the system's temp dir setting. cron tends to have a very minimalistic environment... maybe sys_get_temp_dir returns something silly? [21:20:24] ^demon: does the pid file get updated by cron? [21:20:30] * aude impatiently hits cmd + R :D [21:20:45] <^demon> DanielK_WMDE: the pid file is getting updated now, since I fixed the permissions. [21:21:15] any idea whether the permissions were broken before? [21:22:01] is there a reason we don't use wfTempDir()? [21:22:08] in pollforchanges? [21:22:19] would that help to get the correct thing? [21:23:13] OK; done with the sync / being a bad weekend-pooper. thanks for indulging. [21:23:20] aude: would probably be better, but if the pid file is being updated as ^demon says, that isn't the problem. [21:23:46] * ^demon is still digging [21:24:12] um, ok [21:24:26] ^demon: try --verbose ? [21:24:42] <^demon> A-ha! [21:24:44] don't think verbose helps a lot, but ok [21:24:58] ? [21:25:13] might give a hint or two [21:25:17] uh oh, i see duplicates [21:25:19] but it's not really debug level output [21:25:24] x4 [21:25:33] <^demon> --statefile doesn't seem to be updated after being run as the cron. [21:25:39] <^demon> But I'm not seeing any errors, hrm. [21:25:46] <^demon> Just same value before & after running.
[21:25:50] not good [21:26:10] that would indicate it's not processing them [21:26:12] and it didn't find comet [21:26:14] for whatever reason [21:26:28] only repeated my changes x4 [21:26:33] x3 [21:27:42] i see the old changes seven times [21:27:52] i suppose the state file wasn't readable at some point [21:28:00] so the script re-did everything? [21:28:17] bah [21:28:25] i see my east peoria edits 11x [21:28:44] this is getting strange [21:29:35] Dred Scott x15 :( [21:30:03] aude: hm, local links are missing the namespace prefix (see Good article, 08:27) [21:30:09] that's a template [21:30:31] HUH [21:31:01] oops [21:31:45] <^demon> Crap, we need to cut off the cron if the file's not going to update. It's taking long enough on each run that we're hitting "already running" [21:31:52] ^demon: i'll put in more verbose/debug output over the weekend, monday at the latest [21:31:53] no, it's also an article apparently but need to check [21:31:55] https://test2.wikipedia.org/wiki/Good_article [21:32:09] https://wikidata.org/wiki/Q5303 is a template :( [21:32:25] <^demon> Can I go ahead and disable the cron for the weekend? I don't want it running if we're not going to be around to debug it further. [21:32:30] aude: ...points to a template on enwiki [21:32:37] yeah, can fix it [21:32:38] ^demon: yes, please [21:36:14] MatmaRex: and you said you're using Chrome I think? [21:36:44] kaldari: Opera [21:36:50] ok [21:37:00] kaldari: want the cache keys from those? [21:37:22] no, the info you gave might be useful though [21:37:23] (the ones in CSS comments at the bottom) [21:37:27] alright [21:37:32] thanks, and i hope it helps :) [21:37:47] clearing cookies and cache seems to be working for people, though [21:40:23] <^demon> DanielK_WMDE, aude: Cron's back off now. We'll pick this back up Monday unless anyone has a brilliant idea come to them over the weekend :) [21:45:32] ^demon: ok [21:46:07] i can probably look more at the puppet stuff, etc.
and see how hume works / is configured [21:47:31] ^demon: thanks for your help [21:48:00] <^demon> You're welcome. [23:44:15] apergos, is $revision, parameter of CheckStorage::importRevision() a Revision object?