[02:25:09] !log LocalisationUpdate completed (1.20wmf5) at Mon Jun 18 02:25:09 UTC 2012
[02:25:18] Logged the message, Master
[02:47:49] !log LocalisationUpdate completed (1.20wmf4) at Mon Jun 18 02:47:49 UTC 2012
[02:47:55] Logged the message, Master
[08:17:53] !log hashar synchronized wmf-config/InitialiseSettings.php ' (bug 37672) Use odf on collection for ml projects '
[08:18:00] Logged the message, Master
[10:43:24] apergos: ping
[10:47:02] Vito_away: pong
[10:47:12] apergos: any news from tiscali?
[10:47:22] I had exchanged a couple emails
[10:47:43] I think it's in their court, I can ping em again in a couple days, see if they need help ;-)
[10:47:50] they did seem to want to do it
[12:39:11] Hello. Is the media-handling (e.g. chunked upload) respected in replication lag (maxlag param) or only what immediately affects the database?
[12:39:18] It's because we had someone on Commons who uploaded tons of data during a bot-testrun but respected the lag. Nevertheless other uploads were refused during this time or were incredibly slow.
[12:42:11] is chunked uploading even touching the database?
[12:43:09] only the last step (completing the upload), I guess. But that's why I am here :-)
[12:44:44] Well, any comment that is founded on knowledge (not guessing) is welcome on https://commons.wikimedia.org/wiki/Commons:Bots/Requests/Fbot#Discussion
[12:44:50] Thanks.
[12:53:39] apergos: To resolve, or not to resolve bug 27939?
[12:54:30] sure I'll close it I guess.
[12:54:37] :)
[12:55:11] and possibly bug 1298
[12:58:24] 2 down, 12 to go :)
[12:58:57] heh
[13:02:36] wut? bug 28956
[13:02:50] its a long time ago that this was implemented...
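The maxlag question raised at 12:39–12:44 is about the client-side protocol: a request sent with `&maxlag=N` is refused by MediaWiki with error code "maxlag" whenever database replication lag exceeds N seconds, and a polite bot backs off and retries. A minimal sketch of that bot-side check, assuming the JSON error format; the `is_lagged` helper name is hypothetical:

```shell
# Sketch of the bot-side maxlag protocol discussed above (assumed JSON
# error shape; is_lagged is a hypothetical helper, not a real tool).
is_lagged() {
  # True (exit 0) if the API response text in $1 is a maxlag refusal.
  case "$1" in
    *'"code":"maxlag"'*|*'"code": "maxlag"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Typical retry loop (the curl call is illustrative only):
#   resp=$(curl -s 'https://commons.wikimedia.org/w/api.php?action=query&maxlag=5&format=json')
#   while is_lagged "$resp"; do sleep 5; resp=$(curl -s ...); done
```

Note that maxlag only gates requests against replication lag; whether each chunk of a chunked upload touches the database at all (and so can trip maxlag) is exactly the open question in the discussion above.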
[13:17:53] Hydriz: please don't add me to the static html dumps
[13:18:07] those are not me whatsoever
[13:18:44] also the incrementals dumps are only really adds/changes dumps, I need to do more with that which is why that bug was not resolved
[13:19:28] bad docs :(
[13:19:43] I explain that in detail actually
[13:19:51] I will point to that when I re-open
[13:21:25] these don't handle moves, renames, deletions.
[13:22:02] ah, I see...
[13:22:13] removed myself from the static html dump one
[13:22:56] assigned to Hydriz :P
[13:24:13] all righty then :-)
[13:25:28] and just to clarify your email...
[13:25:38] incremental media is twice a month?
[13:25:45] full is once a month?
[13:31:08] well that's the hope
[13:31:20] I don't want to commit to more often than that because fulls take a long time to run
[13:31:34] and if we run into network or vm lockups then we have delays
[13:31:44] frankly fulls once a month is fine
[13:32:04] incrementals is cause I'm a nice person :-P
[13:32:05] but it seems to take more than one month to completely dump full dumps
[13:32:21] the last fulls are from june 3
[13:32:23] they are done
[13:32:29] how is that more than a month?
[13:32:39] possibly that was the first set
[13:32:52] the second set was faster, yes
[13:32:59] I was monitoring its progress :P
[13:33:00] the first run was in many pieces, with tests
[13:33:04] removals
[13:33:07] and many outages.
[13:33:11] it was a *test* run after all
[13:33:21] samples too, for us :P
[13:33:35] anyways the plan atm is: monthly fulls
[13:33:40] one, perhaps two incrementals
[13:33:55] but incrementals, from when to when?
[13:34:11] from the last full to the
[13:34:27] I forget what I had in mind.
probably from the last full period
[13:34:42] no wonder the latest set was so small
[13:34:45] easier that way
[13:34:50] enwiki is only 25G
[13:35:07] so you only need a full and an incremental to get up to close to current
[13:35:10] or rather, the largest media tarballs are from dewiki(s)
[13:35:35] the plan seems good :)
[13:35:52] as long as (I hope) the incrementals don't get shockingly big
[13:36:01] with a once a month run, no
[13:36:04] of the fulls
[13:36:23] I mean if people start uploading 1T of stuff a month to use on en
[13:36:29] then two things will happen
[13:36:37] 1) the incrementals will get large
[13:36:44] 2) we'll run out of space :-D
[13:37:04] willing to give us an estimate of how much space is available atm?
[13:37:11] I don't know
[13:37:14] heh
[13:37:18] swift backend
[13:37:28] which we're not using yet
[13:38:18] putting the incrs to the IA now...
[13:38:30] what about the new fulls?
[13:38:37] and it doesn't seem like there is any update on the historical media over there
[13:38:56] at least, I was promised, start of June
[13:39:09] promised?
[13:39:24] you gotta stop taking projects as personal promises :-D
[13:39:29] *projections
[13:39:32] lol
[13:39:43] told maybe?
[14:35:54] petan: https://gerrit.wikimedia.org/r/#q,11745,n,z
[14:35:55] so
[14:35:56] hashar: can you review & deploy? :P
[14:36:22] I have no idea what the `campaign` is for :/
[14:36:46] !b 37662
[14:36:46] https://bugzilla.wikimedia.org/show_bug.cgi?id=37662
[14:36:48] explanation
[14:36:51] kind of
[14:37:02] heb: hi
[14:37:06] heb: can you explain it
[14:37:22] he is away a bit
[14:37:27] we tried it on labs
[14:37:30] it works fine
[14:42:48] joining #wikipedia-da
[14:44:55] petan: looks like the 'dk' campaign already got configured :-]
[14:45:23] hashar: so...
[14:46:18] hashar: did you ask in that chan?
[14:48:31] yeah
[14:48:36] reworking the patch right now
[14:48:40] and doing other stuff too
[14:48:41] ok
[14:48:47] while having an IRL conversation hehe
[14:49:10] what is that
[14:49:22] @regsearch [Ii][Rr][Ll]
[14:49:22] No results were found, remember, the bot is searching through content of keys and their names
[14:49:27] :o
[14:50:41] syncing now
[14:51:06] petan: you poked?
[14:51:11] hashar: you has review
[14:51:27] jeremyb: maybe few days ago?
[14:51:43] 17 08:30:19 #wikimedia-tech: < petan> jeremyb: around?
[14:51:52] it's possible I poked but I have no idea why
[14:52:29] i responded yesterday
[14:52:33] aha
[14:52:39] and now i am again ;)
[14:52:42] !log hashar synchronized wmf-config/InitialiseSettings.php '(bug 37662) change wgUploadNavigationUrl @ dawiki'
[14:52:47] Logged the message, Master
[14:52:49] let me know if you remember ;)
[14:56:19] hashar: so... 11161?
[14:59:57] !g 11161
[14:59:57] https://gerrit.wikimedia.org/r/#q,11161,n,z
[15:00:58] jeremyb: done :)
[15:00:59] thanks!
[15:01:11] hashar: danke ;)
[18:03:29] Reedy: enwiki deployment time?
[18:04:49] and we'll all be in a meeting, of course ;-)
[18:05:52] 11:05, so I guess so
[18:05:57] mmm
[18:06:05] * AaronSchulz was staring at jobvite
[18:06:32] apergos: which is perfect because a) it *should* mean you're all paying attention and not screwing around with the infrastructure, and b) we know where to find you if something goes wrong :)
[18:06:43] AaronSchulz: you're not supposed to let us know when you're looking for a new job
[18:06:53] ^
[18:07:19] * apergos eyes robla and thinks about moving traffic around
[18:07:35] LeslieCarr: :p
[18:07:41] <^demon|zzz> Not to mention, the deployments are downright uneventful these days :)
[18:08:00] indeeeed
[18:08:07] ...and ^demon|zzz totally jinxes us. thanks!
[18:08:12] * Reedy mashes his keyboard
[18:08:15] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.20wmf5
[18:08:20] Logged the message, Master
[18:08:36] srv281: ssh: connect to host srv281 port 22: Connection timed out
[18:08:36] /usr/local/bin/sync-wikiversions: line 24: sudo -u mwdeploy rsync -l 10.0.5.8::common/wikiversions.* /usr/local/apache/common-local: No such file or directory
[18:09:18] Reedy: just that one?
[18:09:28] Yeah, only one gave that error
[18:12:29] Ok, so why is enwiki still on wmf4
[18:13:29] just wondering that myself
[18:13:39] -enwiki php-1.20wmf4 *
[18:13:39] +enwiki php-1.20wmf5 *
[18:13:43] git log wikiversions.dat ?
[18:13:50] maybe an issue with wmflabs branch :/
[18:13:51] I've not committed it
[18:14:35] @ver:enwikiphp-1.20wmf5
[18:14:35] ^@^@^@^@^@^@^@ext:enwiki
[18:14:40] Looks vaguely right in the cdb
[18:14:44] last is 664ad88b by Arthur on June 4th made enwiki wmf3 to wmf4
[18:15:13] !log reedy synchronized wikiversions.cdb 'sync using sync-file'
[18:15:18] Logged the message, Master
[18:15:23] I did not touch that for sure
[18:15:24] That fixed it
[18:15:29] yup
[18:15:42] AaronSchulz: looks like the combined copier in the newer sync-wikiversions doesn't work right...
[18:15:55] yeah, I suspect that might be it
[18:16:03] ala
[18:16:07] ddsh -cM -g mediawiki-installation -o -oSetupTimeout=30 -F30 'sudo -u mwdeploy rsync -l 10.0.5.8::common/wikiversions.* /usr/local/apache/common-local'
[18:16:57] That's a small issue
[18:17:30] globbing fail?
[18:17:46] https://gerrit.wikimedia.org/r/#/c/7823/
[18:19:25] Any op dealing with Wikimedia mailboxes around?
[18:20:30] vvv: try in #wikimedia-operations maybe RobH , also specify which mailbox :)
[18:23:11] robla: ignoring sync-wikiversions, all looks ok
[18:23:24] \o/
[18:23:38] !log reedy synchronized wikiversions.dat
[18:23:43] Logged the message, Master
[18:24:05] Reedy, are you upgrading enwiki?
[18:24:15] I already have, yes
[18:24:58] Reedy: it should be changed to use --include= I guess
[18:25:07] so we could have wikiversions.* there
[18:25:14] ah, without EducationProgram, ok
[18:25:49] preilly just asked in the office to bump up the time of the MobileFrontend deployment. Since enwiki looks fine, I figure it should be fine to do it in 10-15 minutes
[18:25:54] <^demon|zzz> robla: Hehe, https://bugzilla.wikimedia.org/show_bug.cgi?id=36228#c4
[18:26:34] robla: sounds fine to me
[18:26:53] I used to have a book on Prolog...I think I ditched it a decade ago
[18:27:05] someone threw their future away
[18:27:14] <^demon|zzz> robla: How much do you remember? ;-)
[18:27:33] ^demon|zzz: that I had a book :-/
[18:27:51] xD
[18:28:08] <^demon|zzz> robla: Well so far that makes you the prolog expert around here.
[18:28:11] maybe looking at existing lines on rules.pl help
[18:28:29] <^demon|zzz> Oh man, where's the link for that...
[18:28:44] <^demon|zzz> Ah, here it is. The default rules: http://code.google.com/p/gerrit/source/browse/gerrit-server/src/main/prolog/gerrit_common.pl
[18:36:43] weird
[18:36:54] that is definitely not perl
[18:37:20] <^demon|zzz> No, it's prolog :)
[18:37:31] ^demon|zzz: I am 100% sure I can do that using a jenkins job that listen to any changes made to mediawiki/*
[18:37:38] then look if some .sql file got changed
[18:37:55] <^demon|zzz> And why would you do that when we could write prolog?
[18:38:30] I was busy with girls when we had the 101 prolog :-(
[18:40:00] https://en.wikibooks.org/wiki/Prolog
[18:40:17] sorry hashar, there isn't a french one
[18:40:25] thanks Sam
[18:40:36] well maybe we could reimplements that set of rules using Object CAML
[18:41:39] the funny thing is that the prolog rules seems to be converted to Java when building Gerrit
[18:42:03] awesome
[18:42:05] <^demon|zzz> Yup.
[18:42:23] <^demon|zzz> It's one of the slower steps of `mvn package` actually, calculating all the rule permutations.
[18:42:24] there is even a prolog shell to test your prolog program in Gerrit! http://review.coreboot.org/Documentation/pgm-prolog-shell.html
[18:43:52] I heard you liek prolog, so here's a prolog program to test your prolog while you write prolog..
[18:45:13] <^demon|zzz> RoanKattouw: Wanna learn prolog?
[18:45:25] Ahm, no?
[18:45:31] ChuckNorris: can you help with prolog?
[18:51:48] !log maxsem synchronized php-1.20wmf4/extensions/MobileFrontend/
[18:51:53] Logged the message, Master
[18:52:40] !log maxsem synchronized php-1.20wmf5/extensions/MobileFrontend/
[18:52:45] Logged the message, Master
[19:37:15] Reedy: you're sure that sync script doesn't work?
[19:37:34] It didn't move enwiki to 1.20wmf5
[19:38:18] so fairly
[19:41:17] does [Special:MostLinkedPages] still exist?
[19:41:54] Reedy: running that command one hume with -vvv seems to look ok
[19:42:04] lol
[19:42:39] ^demon|zzz: so instead of prolog, we can use gerrit query :-D
[19:48:27] Reedy: we can also do sudo -u mwdeploy rsync -l 10.0.5.8::common/wikiversions.{dat,cdb} /usr/local/apache/common-local/
[19:51:37] Reedy: I can confirm that works on hume
[19:51:48] ...
[19:54:56] Reedy: I think it's kind of cute :)
[19:55:46] i had some instruction in prolog...don't remember much of it other than i used it to solve the zebra puzzle
[19:56:03] * BobTheWikipedian gasps...very late post
[20:43:16] !log preilly synchronized php-1.20wmf5/extensions/MobileFrontend 'update to remove bad code'
[20:43:21] Logged the message, Master
[20:43:44] !log preilly synchronized php-1.20wmf4/extensions/MobileFrontend 'update to remove bad code'
[20:43:49] Logged the message, Master
[20:54:06] Platonides: feel free to assign that to you
[20:54:14] Platonides: I rejected my change
[22:26:48] !log aaron synchronized php-1.20wmf5/extensions/FlaggedRevs 'deployed e53310f548cf3f3e4f1ddfa10f5efd0eff06eeec'
[22:26:53] Logged the message, Master
[22:28:10] University of Robla?
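The sync-wikiversions thread above (the 18:08 srv281 failure, the "globbing fail?" guess, and the `wikiversions.{dat,cdb}` workaround confirmed at 19:51) hinges on who expands the source pattern. A sketch of the difference, using the host and module paths as pasted in the log (not independently verified):

```shell
# With a daemon-module source, the 'wikiversions.*' pattern reaches the
# remote rsync for expansion (the local shell has no matching files, so
# it passes the pattern through untouched). Brace expansion instead
# happens in the *local* shell, so rsync receives two explicit source
# arguments and nothing depends on remote glob handling:
#
#   rsync -l 10.0.5.8::common/wikiversions.*           # remote end expands the glob
#   rsync -l 10.0.5.8::common/wikiversions.{dat,cdb}   # local shell expands the braces
#
# What the second form actually hands to rsync. Brace expansion is a
# bash feature, not POSIX sh, hence the explicit bash -c here -- worth
# remembering when the command is shipped to remote hosts via ddsh:
args=$(bash -c 'echo 10.0.5.8::common/wikiversions.{dat,cdb}')
echo "$args"
```

The `--include=` alternative floated at 18:24 would instead keep a single `::common/` source and filter with rsync's own rules, roughly `rsync --include='wikiversions.*' --exclude='*' 10.0.5.8::common/ DEST`; either way the point is to stop relying on how the remote daemon expands the glob.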
[22:28:23] apply now
[22:29:29] just like Kensington University?
[22:30:15] only more ossm
[22:31:32] !log updated production civicrm to r1814
[22:31:37] Logged the message, Master
[22:41:37] binasher: can you nuke the profiling stuff ending in -local-backend?
[22:43:31] AaronSchulz: sure
[22:43:49] should be nuked from graphite
[22:56:53] \o/
[22:57:03] yay
[22:57:06] it is the Schulz
[23:04:10] binasher: have page_touched invalidations gone up today?
[23:05:31] AaronSchulz: what went out today?
[23:12:19] nothing other than enwiki upgrade
[23:12:27] I just see a lot of thumbnail purges though
[23:25:48] binasher: where is the non-slow query sample profiling?
[23:26:19] AaronSchulz: /sample/
[23:26:31] it should be linked to from the dbtree too
[23:27:15] i think i'm going to make uncensored versions of these pages that are only visible to me
[23:27:24] would you prefer that?
[23:28:14] I wish I could see the username
[23:28:28] * AaronSchulz wonders if there is a way to tell who is doing purges
[23:33:28] which site
[23:33:29] ?
[23:35:17] commons
[23:35:25] O
[23:35:38] I think someone was doing mass null edits/purges to update a template...
[23:36:00] Betacommand ^ Dispenser maybe?
[23:36:36] is there something going on on the job queue?
[23:36:47] dunno, is it getting backed up?
[23:37:15] I don't think so
[23:37:43] but in that case it'd probably show at the joq table
[23:37:53] http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=9835&m=enwiki_JobQueue_length&r=hour&z=small&jr=&js=&st=1340062647&z=large
[23:38:30] reedy@fenari:~$ mwscript showJobs.php commonswiki
[23:38:30] 5649
[23:39:41] job runners certainly are gonna be busy
[23:44:50] htmlCacheUpdate: 67
[23:44:51] refreshLinks2: 5513
[23:45:42] that's interesting …
[23:47:13] http://en.wikipedia.org/w/index.php?title=Cosmic_string&action=history
[23:47:32] * AaronSchulz snickers
[23:48:32] Reedy: now why do we need to purges thumbnails of file description pages that use a template due to a template change?
[23:49:10] grumble
[23:49:45] AaronSchulz: from memory, it's a VERY high use template, and months after the change, they're all not migrated
[23:51:03] Reedy: it seems like purging the thumbnails is accidental
[23:51:13] Quite possibly
[23:51:25] I mean they didn't change, only the parsed wikitext
[23:51:56] So editing license templates must slow down file ops due to all the purges
[23:52:54] well, the actual jobs don't seem to have this problem
[23:52:57] meh
[23:54:14] hopefully someone can elabourate
[23:56:30] binasher: AaronSchulz: " Reedy: Yup, 11,000 null edits/purge-link-updates per hour right now"
[23:59:21] [00:58:14] screen with 10 sessions
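The queue check at 23:38 prints a single total, while the 23:44 paste is a per-type breakdown. A sketch of summing such a breakdown into one number; the sample text is taken from the log, and the `mwscript showJobs.php commonswiki --group` invocation shown in the comment is an assumption about how that breakdown was produced (counts drift between runs as job runners consume jobs, which is why 67 + 5513 need not match the earlier 5649):

```shell
# Summing a per-type job breakdown like the one pasted above. On the
# cluster this would come from something like:
#   mwscript showJobs.php commonswiki --group
# (the --group flag is assumed available in this MediaWiki version).
sample='htmlCacheUpdate: 67
refreshLinks2: 5513'
total=$(printf '%s\n' "$sample" | awk '{sum += $2} END {print sum}')
echo "$total"   # prints 5580
```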