[00:44:03] !b 34995
[00:44:03] https://bugzilla.wikimedia.org/show_bug.cgi?id=34995
[00:47:36] wtf
[00:47:45] centralauth has two myisam tables in production
[02:17:55] !log LocalisationUpdate completed (1.19) at Wed Apr 11 02:17:55 UTC 2012
[02:18:00] Logged the message, Master
[02:26:28] !log LocalisationUpdate completed (1.20wmf1) at Wed Apr 11 02:26:28 UTC 2012
[02:26:30] Logged the message, Master
[06:44:03] About that 1.20wmf1... is it possible to create test.wikisource.org so we could test mw with ProofreadPage and so on... ?
[06:44:12] Should I file a bug for that?
[07:17:11] Beau_: just file a bug asking for it to be enabled on test2
[07:17:26] (and any other extensions you can think of)
[07:17:46] but extension-wise it shouldn't break
[07:18:51] since it's just the same code from svn (mostly) but getting deployed from git (and since it hasn't broken the mw wiki, extensions should be fine)
[07:19:19] p858snake|l: PP currently deployed is broken for opera users, it broke after migration to 1.20
[07:19:22] eee... 1.19
[07:19:50] so it is a good idea to do some tests before deployment
[07:22:13] well your change hasn't been reviewed/merged (https://gerrit.wikimedia.org/r/#change,4194) so there will be no difference when it's changed over
[07:22:32] p858snake|l: but there may be some other breakages ;-)
[07:23:33] I'll just file a bug for enabling and configuring ProofreadPage on test2 as you suggested.
[07:24:55] Yes, that would be the best way. As for what changes when the git switch is flicked, only stuff that is merged will change (see: https://gerrit.wikimedia.org/r/#q,status:merged+project:mediawiki/extensions/ProofreadPage,n,z )
[07:25:16] p858snake|l: yeah, I am aware of that
[12:47:33] hi, does anyone know how come dumps.wikimedia.org is terribly slow (<10 KB/s)? i'm trying to download the dumps of this month so far, like this one file: http://dumps.wikimedia.org/other/pagecounts-raw/2012/2012-04/pagecounts-20120406-110000.gz (81 MB) but at some point the download even stopped and the connection was refused :(
[12:51:35] jorn: Works for me... ok, not terribly fast, but I got 150-200 Kbyte/s
[12:52:19] hoo: normally it serves at around 10 MB/s
[12:53:28] jorn: it will limit (or i think actually block) you if you have multiple simultaneous connections
[12:53:58] jeremyb: i don't have... at least not that i'm aware of
[12:56:21] nope, checked it again, just one connection
[12:57:15] bu tok, maybe it's just overwhelmed with requests at the moment
[12:57:29] s/bu tok/but ok/
[12:58:19] sDrewth: looked at 35826 recently?
[12:58:22] !b 35826
[12:58:22] https://bugzilla.wikimedia.org/show_bug.cgi?id=35826
[12:58:50]
[12:59:21] yes, I was waiting for another comment
[13:00:14] my point was that if someone was using http at, say, mw, then followed a link at labs that took them back to mw, they would essentially be logged out
[13:00:28] http -> https -> https
[13:02:47] in the end, for me, your solution solves *my* issue
[13:05:49] sDrewth: that shouldn't be. in that case you should still be logged in
[13:07:18] are you sure? I constantly face the issue of non-protocol-relative addresses
[13:07:58] (mw)http -> (labs)https -> (mw)https
[13:08:28] err...
[13:08:40] i'm not saying you can't do those hops
[13:09:01] i'm saying if you do then you will still be logged in. (if you were before the first click)
[13:09:36] well, I am not understanding you
[13:09:39] then
[13:10:07] I can see that as being a very real progression just following links
[13:10:07] it's not complicated...
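The exchange just above is about whether a login session survives hops between http and https hosts; sDrewth's complaint concerns links that hard-code a scheme rather than being protocol-relative. Purely as an illustration (the hostnames and paths below are made up, and this is a sketch of how such a link resolves in general, not of how MediaWiki generates its URLs), Python's standard urljoin shows that a scheme-relative href simply inherits whatever protocol the current page was loaded over, so a reader who started on https stays on https:

from urllib.parse import urljoin

# A protocol-relative link ("//host/path") takes its scheme from the page it
# appears on. The hostnames and paths here are purely illustrative.
link = "//wikitech.example.org/wiki/Some_page"
print(urljoin("https://www.mediawiki.org/wiki/Foo", link))  # -> https://wikitech.example.org/wiki/Some_page
print(urljoin("http://www.mediawiki.org/wiki/Foo", link))   # -> http://wikitech.example.org/wiki/Some_page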
[13:30:17] jeremyb: are you going to finish the sentence? or is it guess-the-conclusion?
[13:31:22] I didn't see it as complex; however, from what you have said, and from my experience, we seem to be either hung up on jargon or talking at cross-purposes
[13:34:07] found an issue with 1.20 at mw, AbuseLog cannot be accessed: https://www.mediawiki.org/wiki/Special:AbuseLog
[13:48:27] !log reedy synchronized php-1.20wmf1/extensions/AbuseFilter/special/SpecialAbuseLog.php
[13:48:29] Logged the message, Master
[13:49:43] thx reedy, that was quick :-)
[13:50:10] Obvious bug was very obvious! ;)
[13:51:38] with that I was unsure whether to file it under the extension or MediaWiki; is there a preference, or in the end does it not matter as you guys will fix it anyway?
[13:52:13] Pretty much, chances are it'll get moved to the right component when someone notices it
[15:15:40] !log py synchronized wmf-config/lucene.php 'pushing search pool 3 to eqiad. for realz this time!'
[15:15:42] Logged the message, Master
[15:31:29] !log py synchronized wmf-config/lucene.php 'pushing search pool 2 to eqiad. for realz this time!'
[15:31:31] Logged the message, Master
[15:45:43] !log py synchronized wmf-config/lucene.php 'pushing search pool 1 and prefix pool to eqiad. for realz this time!'
[15:45:46] Logged the message, Master
[15:59:08] !log py synchronized wmf-config/lucene.php 'pushing search pool 4 to eqiad. for realz this time!'
[15:59:11] Logged the message, Master
[16:14:22] actually all other dumps on http://dumps.wikimedia.org are slow as well (also tried it from an external ip and they were quite slow yesterday as well), any idea whom to ask / notify about this?
[16:23:42] what do you mean, "slow"?
[16:23:45] slow to download?
[16:24:00] jorn:
[16:24:02] yes, it's about 50 KB/s
[16:24:13] we cap downloads
[16:25:06] in eqiad we should not cap downloads :)
[16:25:13] I should see how the your.org mirror is doing and announce it if it looks stable
[16:25:17] i read the notice on the page, yes, but usually you have like 10 MB/s download speed
[16:25:23] rsyncs take precedence
[16:26:30] i'd be honored if our university could host a mirror
[16:26:39] that would be pretty great
[16:27:13] have you had a look at our space/bw needs?
[16:27:27] depending on what they would be willing to host
[16:27:42] well, i'm sure the bandwidth won't decrease if there are no mirrors ;)
[16:27:51] I mean
[16:27:55] bw for a mirror
[16:27:57] http://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps
[16:28:32] if your university has the spare space and bw, someone can shoot me an email
[16:28:45] I'm also available here of course
[16:30:23] email: ariel at wikimedia
[16:31:52] it's not like we have a shortage of bandwidth though
[16:32:00] so why are we capping?
[16:32:37] we have had downloaders who easily use 1/3 of a 1 Gb pipe
[16:32:45] you should upgrade it to 10G
[16:32:45] so we cap
[16:32:48] or at least multiple gige
[16:33:00] I have a bonded interface on the host now
[16:33:09] i'll ask our department's IT and the university IT department which hosts some mirrors already... the advantage of having a mirror in a German university would be that it would be internal traffic for the whole DFN then (http://www.dfn.de/en/)
[16:33:17] and using 1/3rd of the pipe is not a problem
[16:33:18] 2 gb
[16:33:23] they can use the full pipe
[16:33:27] it is when we have multiple such
[16:33:29] as long as there's no bandwidth starvation :)
[16:33:35] well that is what we had
[16:33:43] set up fair queuing then
[16:34:01] equal share for everyone
[16:34:06] what I want is that our rsyncers always have precedence
[16:34:20] easy enough
[16:34:40] I'll add it to my todo list
[16:34:56] lartc.org
[16:35:36] <^demon> apergos: How actively are you searching out new mirrors right now?
[16:35:43] we'd love em
[16:36:06] it's not like we don't have "the WMF is looking for mirrors" in giant red letters plastered over a bunch of related pages :_D
[16:36:08] :-D
[16:36:17] <^demon> I can't make any promises, but I know the guy who manages our mirrors of various Linux distros and some other open source projects. I can ask him what our space/bandwidth is like and if he'd be interested in having VCU mirror the dumps.
[16:36:23] coool
[16:36:30] you have the link and the power
[16:36:33] do it! :-)
[16:36:49] <^demon> I'll shoot him an e-mail.
[16:36:49] jorn: I would really like it if we had a mirror in Europe
[16:37:01] ...apergos
[16:37:04] more than one would be nice, but I would settle for at least one as a start :-D
[16:37:04] set one up in Europe?
[16:37:11] uh huh
[16:37:12] it's not like we don't have rackspace and bandwidth there :P
[16:37:25] <^demon> apergos: We're a pretty large university and we're on the east coast, so I imagine our connection is pretty good. I think it'll be more a matter of interest/manpower than ability.
[16:37:28] ^demon: do you have some scaptrap list somewhere for this deployment?
[16:37:30] *cough*content issues*cough*
[16:37:36] <^demon> Nikerabbit: Ask Reedy.
[16:37:48] I asked legal about this: if it's a third party it's not an issue
[16:37:49] Reedy: do you have some scaptrap list somewhere for this deployment
[16:37:57] if it's us then we have to revisit that whole thing
[16:38:04] Not really
[16:38:04] http://etherpad.wikimedia.org/DeploymentChecklist
[16:38:05] which is fine, I think it should be revisited but
[16:38:09] <^demon> apergos: Are dumps in ashburn or tampa?
[16:38:15] yes
[16:38:20] the rsync host is in eqiad
[16:38:27] we will have one in both places eventually
[16:38:50] <^demon> I was just thinking, if we set it up right, mirroring DC -> Richmond should be pretty fast.
[16:39:12] I would hope so
[16:39:38] <^demon> But anyways, I'll e-mail him and see.
[16:39:44] great
[16:53:09] apergos: just as a thought: maybe put that note about seeking mirrors on http://dumps.wikimedia.org/ itself as well... and contact info
[16:53:21] and as you mentioned rsync... there's an rsync endpoint as well?
[16:53:30] yes, for our mirrors
[16:53:35] ah ok
[16:55:02] I've had it on there in the past... sometime during some cleanup I must have removed it
[16:55:05] hmm
[16:55:45] ok well it's on my todo list now
[16:56:05] ;)
[18:09:00] good night~
[19:23:00] !log reedy synchronized live-1.5/ 'fix resources symlinks'
[19:23:02] Logged the message, Master
[19:28:03] !log reedy synchronized live-1.5/ 'fix resources symlinks'
[19:28:05] Logged the message, Master
[19:40:28] apergos: sorry to come back to this, just re-read our conversation earlier... are you really limiting the speed to around 50 KB/s?
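A side note on the fair-queuing exchange earlier in this stretch: apergos wants the mirror rsyncs to always take precedence over HTTP downloaders and points at lartc.org. Purely as a sketch of that lartc-style approach, and not the actual configuration of the dumps host, the dry run below builds an HTB hierarchy in which traffic from the rsync daemon port gets a guaranteed, higher-priority class while everything else borrows what is left over. The interface name, rates, class ids and the port match are all assumptions.

#!/usr/bin/env python3
# Dry-run sketch only: prints the tc commands instead of running them.
# All numbers and names are illustrative, not the real dumps-host setup.
import subprocess

DEV = "eth0"  # assumed interface; the real host is described as having a bonded 2 Gb link
COMMANDS = [
    f"tc qdisc add dev {DEV} root handle 1: htb default 20",
    f"tc class add dev {DEV} parent 1: classid 1:1 htb rate 2gbit",
    # rsync (mirror sync) class: guaranteed bandwidth, served first (prio 0)
    f"tc class add dev {DEV} parent 1:1 classid 1:10 htb rate 1500mbit ceil 2gbit prio 0",
    # default class for HTTP downloaders: smaller guarantee, may borrow up to the ceiling
    f"tc class add dev {DEV} parent 1:1 classid 1:20 htb rate 500mbit ceil 2gbit prio 1",
    # classify traffic sent from the rsync daemon port (873) into the priority class
    f"tc filter add dev {DEV} parent 1: protocol ip prio 1 u32 match ip sport 873 0xffff flowid 1:10",
]

for cmd in COMMANDS:
    print(cmd)
    # subprocess.run(cmd.split(), check=True)  # uncomment to actually apply (needs root)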
[19:41:02] the limit is what's on the downloads page
[19:42:08] because the hourly pagecounts are around 90 MB these days, and at that rate it takes me a quarter of an hour to download them... in other words a quarter of a year for a year of logs
[19:42:18] on dumps.wikimedia.org it says: "Please note that we have rate limited downloaders and we are capping the number of per-ip connections to 2. This will help to ensure that everyone can access the files with reasonable download times. Clients that try to evade these limits may be blocked."
[20:01:29] good night then
[20:57:24] Can someone tell me whether my article is OK now? This is my first time doing this, so I don't know..
[22:27:19] good night =_=
[23:14:03] LeslieCarr: without any deeper knowledge of the issue: maybe nrpe keeps a copy of the config somewhere else and uses that? or it just doesn't load from the fs but keeps on using the config it has in memory?
[23:14:29] hrm
[23:14:36] config it has in memory, that would be weird but possible
[23:14:39] thanks for the idea :)
[23:14:57] other "strange" theory my mind just produced: have you checked that it can read the new config file, access-wise?
[23:16:08] yeah, it can
[23:17:01] hum ... can you kill the nrpe without someone getting really nervous? ;) if so, I'd try a shutdown and then a "cold" restart
[23:17:16] (of nrpe, not the box, ofc)
[23:17:35] oh, already restarted nrpe like 50 times :)
[23:17:57] with nrpe restart? or stop, wait, start?
[23:18:39] stop wait start, and just running it by hand instead of using service
[23:19:09] hum, that pretty much kills my idea of the conf in mem :S
[23:25:55] so insane
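On the closing nrpe puzzle, both theories raised above ("it can't actually read the new file" and "it is still serving the config it loaded into memory") can be probed from the outside. The following is a hypothetical helper, not something that was run in this conversation: it assumes a Linux /proc layout, that nrpe was started with the usual "-c /path/to/nrpe.cfg" argument (falling back to a commonly used default path), and it checks readability as the invoking user, which may differ from the daemon's own user.

#!/usr/bin/env python3
# Sketch: for each running nrpe process, report which config file it was
# started with, whether that file is readable, and whether the file on disk
# is newer than the process (i.e. the daemon may still hold an older config
# in memory). Linux-only; the default path below is an assumption.
import os

def nrpe_processes():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                argv = [a.decode() for a in f.read().split(b"\0") if a]
        except OSError:
            continue  # process exited or is not readable by us
        if any("nrpe" in a for a in argv):
            yield int(pid), argv

for pid, argv in nrpe_processes():
    cfg = argv[argv.index("-c") + 1] if "-c" in argv else "/etc/nagios/nrpe.cfg"
    readable = os.access(cfg, os.R_OK)             # as *this* user, not necessarily nrpe's
    cfg_mtime = os.stat(cfg).st_mtime if readable else None
    proc_start = os.stat(f"/proc/{pid}").st_mtime  # rough proxy for process start time
    newer = cfg_mtime is not None and cfg_mtime > proc_start
    print(f"pid {pid}: config={cfg} readable={readable} config_newer_than_process={newer}")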