[01:38:01] from time to time i get "Our servers are currently experiencing a technical problem. […]" along with an edit conflict. details:
[01:38:12] Request: POST http://de.wikipedia.org/w/api.php, from 208.80.154.8 via cp1018.eqiad.wmnet (squid/2.7.STABLE9) to 10.64.0.124 (10.64.0.124)
[01:38:15] Error: ERR_READ_TIMEOUT, errno [No Error] at Sun, 02 Dec 2012 23:19:56 GMT
[01:39:09] errr, i don't see how you could have both that error and also an edit conflict
[01:39:15] rephrase?
[01:40:53] let me see my code first
[01:47:11] ok, it seems that i edit-conflict with myself
[01:48:21] and that the edit was made but i didn't get a proper response about it
[01:58:33] i've seen that before
[02:08:50] giftpflanze: that will happen if the page takes too long to convert from wikitext to HTML
[02:09:25] you should remove any instance of {{REVISIONID}} from it
[02:09:43] even if i use the api?
[02:09:51] yes
[02:10:00] what page was it?
[02:10:11] let me see
[02:11:05] TimStarling: why is that a particularly bad magic word?
[02:11:25] because it causes the page to be parsed twice during save, instead of once
[02:12:08] once to see if parsing fails and then again to use the right value of REVISIONID?
[02:12:27] Bundesstraße 2; Osttimor; S-Bahn Berlin; Liste der Berliner U-Bahnhöfe
[02:12:34] basically, yes, the first time is for the spam blacklist and abusefilter
[02:12:48] they need parsed results because they need to know what links were added
[02:13:22] but we don't allocate a revision ID because it's not known if the page will actually be saved or if the hook will reject it
[02:13:32] right
[02:14:27] where i guess allocation is really done by the mysql master, and that only happens on insert, and the insert doesn't happen until after the hooks etc. pass
[02:15:43] 2012-12-02 07:33:50 srv225 dewiki: 14.45 Osttimor
[02:16:03] 2012-12-02 20:58:34 srv278 dewiki: 14.65 S-Bahn_Berlin
[02:16:13] and i also have Frankreich
[02:16:17] 14.65 is seconds?
[02:17:03] 2012-12-03 02:16:50 srv255 dewiki: 22.11 Frankreich
[02:17:06] yes, seconds
[02:17:26] so these are times that you would not expect to cause a timeout
[02:17:37] well, there is this:
[02:17:39] 2012-12-02 08:53:29 srv299 dewiki: 37.81 Frankreich
[02:17:47] not sure what happened there, maybe the page changed
[02:18:40] giftpflanze: what username did your bot use?
[02:19:08] GiftBot
[02:20:29] it's strange that i got the 3rd timeout on Frankreich now, even though the article is already saved
[02:21:26] api.log is 30GB, it will take me a while to find your log entries
[02:24:26] you make quite a lot of API requests, don't you?
[02:25:05] several every second?
[02:26:57] yeah, 3.2 req/s over a 9 minute log snippet
[02:29:13] what's recommended?
[02:30:16] giftpflanze: well first of all: don't do multiple reqs at the same time. if one's running already then wait before starting another
[02:30:30] giftpflanze: but also you should maybe do <= 1 / sec
[02:30:50] we have a log entry for Osttimor at 23:44:11 which took 63s
[02:30:56] giftpflanze: and you should also always have a good UA string with your bot's contact info, etc.
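(Editor's note: a minimal sketch of the client-side etiquette suggested above — one request in flight at a time, at most one request per second, a descriptive User-Agent with contact info, plus batching several titles per query as suggested a bit further down. It assumes the third-party Python `requests` library; the class name, bot URL, and email address are illustrative, not anything taken from the log.)

```python
import time
import requests

class ApiClient:
    """Hypothetical API wrapper enforcing the rate-limit advice above."""

    def __init__(self, endpoint="https://de.wikipedia.org/w/api.php"):
        self.endpoint = endpoint
        self.session = requests.Session()
        # Identify the bot and its operator (example values only).
        self.session.headers["User-Agent"] = (
            "GiftBot/1.0 (https://de.wikipedia.org/wiki/Benutzer:GiftBot; "
            "operator@example.org)"
        )
        self._last_request = 0.0

    def get(self, **params):
        # All requests funnel through this one method, so only one is ever
        # in flight; sleeping here also enforces <= 1 request per second.
        wait = 1.0 - (time.monotonic() - self._last_request)
        if wait > 0:
            time.sleep(wait)
        self._last_request = time.monotonic()
        params.setdefault("format", "json")
        resp = self.session.get(self.endpoint, params=params, timeout=180)
        resp.raise_for_status()
        return resp.json()

    def page_info(self, titles):
        # Batch several titles into a single query instead of one request each.
        return self.get(action="query", prop="info", titles="|".join(titles))
```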
[02:31:12] and one at 23:44:56 which took 43s
[02:32:43] i don't do multiple requests at the same time (except when multiple scripts are running at the same time, but those only make a few requests in total [i hope])
[02:35:05] most of the requests only take 20ms or so
[02:35:35] you could speed up your script by specifying multiple titles in each query
[02:35:54] obviously editing is a different matter
[02:36:27] ok, so preview is only 15.8
[02:38:40] anyway, in terms of bot programming, yes a 503 response will sometimes be sent
[02:40:17] if you get a 503 response, the underlying request will still be running for up to 3 minutes
[02:41:07] so an appropriate response would be to check the article history to see if the request completed, once every minute or so for 3 minutes
[02:41:34] why do you time out requests so early?
[02:42:21] because squid is limited in how many concurrent connections it can have
[02:42:45] and if there is a problem with the network or the backend, it will smack into that limit pretty hard
[02:42:58] limiting the timeout is one way to reduce the number of connections in such a case
[02:43:41] does varnish have the same issue?
[02:44:21] well, I say squid, but I mean the linux kernel
[02:44:33] and do you mean concurrent backend connections or client or total?
[02:44:44] or the 64K ephemeral port limit
[02:44:44] ok, so you mean FDs
[02:45:31] squid will run out of ephemeral ports to connect to the backend
[02:45:42] or, that is to say, the kernel will run out
[02:46:06] there's no practical limit to how many FDs you can have
[02:46:29] we have plenty of RAM
[02:49:18] giftpflanze: if the request is not completed after 3 minutes, it should probably not be retried
[02:49:25] certainly it should not be retried more than once
[02:49:49] okay
[02:50:11] because the most likely cause is a problem with the specific article taking too long to render
[02:50:28] but I'm still not sure why your requests took 60s
[02:54:54] huh, i thought there was some kind of FD limit somewhere that was actually hit within WM infra. can't remember the details though
[02:55:22] most servers will claim they are out of FDs when they are out of ephemeral ports
[02:55:33] maybe that's it
[02:55:35] since the error is the same
[03:02:20] TimStarling: do you give 503's for other errors as well? because i have implemented waiting 5 seconds and retrying (that was intended for maxlag handling). will it always be adequate to retry once and then wait for 3 minutes and then check the article history?
[03:02:55] https://www.mediawiki.org/wiki/Manual:Maxlag_parameter
[03:03:09] this has information on how to not retry when you get a 503 that isn't related to maxlag
[10:33:37] Hello.
[10:33:47] It would be nice if someone could do this: https://bugzilla.wikimedia.org/show_bug.cgi?id=42644 . Thanks.
[10:40:58] aharoni: You're asking them to remove the account creation limit for en.wikipedia?
[10:41:03] meh
[10:41:16] Only for that IP.
[10:41:20] Ah :P
[10:41:27] Okay good lol
[10:56:13] Hey guys, does anyone know what's causing this: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=506&m=enwiki_JobQueue_length
[10:56:37] and how long the delay is on the job queue?
[10:58:29] Seddon: maybe https://bugzilla.wikimedia.org/show_bug.cgi?id=42614 ?
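(Editor's note: a sketch of the 503 handling recommended above — retry at most once, and after a render-timeout 503 poll the article history once a minute for up to 3 minutes rather than re-sending. It assumes, per Manual:Maxlag_parameter, that maxlag-related 503s carry Retry-After/X-Database-Lag headers while timeout 503s do not; `save_edit` and `edit_landed` are hypothetical helper names, and `session` is a `requests.Session` like the one in the previous example.)

```python
import time

def save_edit(session, endpoint, edit_params, title, username):
    for attempt in range(2):  # retry at most once, as advised above
        resp = session.post(endpoint, data=edit_params, timeout=180)
        if resp.status_code != 503:
            return resp.json()
        if "X-Database-Lag" in resp.headers:
            # maxlag 503: the edit was never attempted, so waiting and
            # retrying once is safe.
            time.sleep(int(resp.headers.get("Retry-After", 5)))
            continue
        # Timeout-style 503: the request may still be running server-side
        # for up to 3 minutes, so poll the history instead of re-sending.
        for _ in range(3):
            time.sleep(60)
            if edit_landed(session, endpoint, title, username):
                return None  # the original request completed after all
        break  # not completed after 3 minutes: give up rather than retry
    raise RuntimeError("edit did not go through")

def edit_landed(session, endpoint, title, username):
    # Check whether the latest revision of the page is ours.
    data = session.get(endpoint, params={
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "user", "rvlimit": 1, "format": "json",
    }, timeout=60).json()
    page = next(iter(data["query"]["pages"].values()))
    revs = page.get("revisions", [])
    return bool(revs) and revs[0].get("user") == username
```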
[10:59:30] there's been quite a big drop of queued jobs after the 29th/30th https://gdash.wikimedia.org/dashboards/jobq/deploys
[11:02:40] hmmm, the delay at the moment could be as much as two days based on that
[11:02:49] that is not good
[11:04:07] oh hang on, it's not that bad.... but it'll be at least 18 hours or so
[11:06:14] Nikerabbit: can this be the same as your bug?
[11:08:13] Nemo_bis: possibly
[11:09:37] Nikerabbit: ok thanks, commented
[11:11:26] Nemo_bis: thanks for the comment :) this particular job is marking a new translation. I noticed it wasn't filtering through as quickly as it should so guessed something was up.
[11:12:15] I hope we'll be able to have a 1.20.x release with a working job queue..
[11:13:36] yeah, I noticed that it's been pretty volatile since september
[11:16:43] 1.20 is quite unlucky https://bugzilla.wikimedia.org/buglist.cgi?list_id=164057&product=MediaWiki&query_format=advanced&resolution=---&target_milestone=1.20.x%20release&order=priority%2Cbug_severity%2Cbug_id&query_based_on=
[11:21:26] Nikerabbit: now that WMF uses bi-weekly deployment, TWN is the only early/responsive adopter not only of master but also of the 1.x.y releases! :(
[11:32:12] andre__: sorry about the short notice on https://bugzilla.wikimedia.org/show_bug.cgi?id=42644 .
[11:33:19] aharoni: yeah, I'm not convinced you'll find folks to fix it in time, also bearing the timezone difference in mind.
[11:33:32] nobody in Europe?
[11:34:13] I have no idea where the shell access folks are located...
[11:34:42] mark might be able to?
[11:36:03] although I don't think he normally picks up things like that
[11:36:41] no
[12:00:35] aharoni: for enwiki, find an admin in #wikipedia-en and ask for the accountcreator flag temporarily so you can go over the rate limit
[12:03:29] legoktm: he's aware of this solution
[12:03:44] (which is explained twice, on Meta and on the mw.o help pages)
[12:06:41] legoktm: btw on en.wiki a sysop is needed to completely address the problem, account creators don't have sufficient rights https://en.wikipedia.org/wiki/Wikipedia_talk:Account_creator#Confirm_users
[12:07:15] hello, anybody with powers online? could somebody look at and approve this config change: https://gerrit.wikimedia.org/r/#/c/36197/ ?
[12:19:43] Nemo_bis: right, that's a good point
[12:21:14] legoktm: maybe you could comment that my proposal makes sense ;)
[12:21:21] it's such a trivial and useful addition
[12:22:32] the problem is that enwp'ers are hesitant to let non-admins alter user rights
[12:22:39] while it's a logical idea
[12:22:52] this is something that's added automatically, normally...
[12:23:22] "Course coordinators" alter rights btw
[12:23:32] oh wait
[12:23:42] so as soon as the account is created, it would get confirmed status?
[12:24:04] i'm not sure that's the best idea either...
[12:25:37] if one bothers creating an account, it's fairly certain one also thinks it won't be used for page move vandalism and such
[12:27:01] errr i'm not sure about that. but then again, i haven't done ACC stuff for 4 years
[12:27:21] and that was the time when grawp had just started
[13:02:34] Hello.
[13:02:47] Has someone already handled the Library of Israel editathon throttle?
[13:09:05] andre__: if you can find someone to quickly deploy https://gerrit.wikimedia.org/r/36548 this is ready
[13:10:36] <^demon> andre__, Dereckson: On it.
[13:12:16] <^demon> Sync'd.
[13:15:02] Thanks, ^demon
[13:26:23] hi ^demon
[13:26:32] <^demon> Hi.
[13:26:53] when are we deploying the new wikidata stuff?
[13:36:21] NEVAR
[13:37:33] reedy :D
[13:42:42] <^demon> aude: During the normal deploy window.
[13:43:01] ^demon: which is approximately when?
[13:43:22] note that we want to deploy the mw1.21-wmf5 branch, which is what was tagged on Friday + security fixes requested by Chris
[13:43:40] <^demon> 19:00-21:00 UTC.
[13:44:36] Prep work (creating the submodule hash updates etc) can be done ahead of time
[13:44:53] ^demon: works for us
[13:46:09] ah, that's right.... we already have the 1.21wmf5 core branch so we just need to tag the submodule with the right hash
[13:46:54] <^demon> Yep, that's how it's done. We'll tag it just a bit before the deploy window--in case any last minute fixes still need to happen.
[13:47:13] right now, we are running all our tests with the branch
[13:47:30] not quite ready yet, but it'll certainly be ready well before the window
[14:20:54] hi
[14:21:38] https://gerrit.wikimedia.org/r/#/c/34964 wants to be merged and deployed today
[14:21:59] hashar: Ah, right, I'm not sure it is SQLite after all, just a question of whether you use/don't use PHPUnit to run the tests
[14:22:19] Jarry1250: I use parserTests.php locally (non-PHPUnit-based)
[14:22:27] Jarry1250: Jenkins uses the PHPUnit variant
[14:22:36] both should more or less give the same results
[14:22:47] hashar: Intriguing, they fail on parserTests.php for you? *investigates*
[14:23:00] yup that is what I pasted previously
[14:23:19] Jarry1250: what is the change # already ?
[14:23:54] hashar: 25838
[14:23:59] At least I'm reproducing it now
[14:24:05] ahhh
[14:24:07] good news :-]
[14:24:53] Jarry1250: Running test SVG thumbnails with no language set... FAILED
[14:25:06] but the two other tests do work, "with language de" and "invalid language code"
[14:25:28] Jarry1250: http://dpaste.org/HIjXE/
[14:27:00] hashar: Argh. I'm going to ignore parserTests.php for the moment though
[14:41:02] hashar: Does Jenkins run on a Swift file-backend, do you know, or a "regular" one?
[14:41:16] Jarry1250: a local file backend I guess
[14:45:05] hashar: kk
[14:50:35] AaronSchulz: Can you run a FileBackend store() operation where instead of a 'src' path you provide raw content? Or do you write to a temp file first?
[15:11:53] AaronSchulz: never mind, found an example
[16:53:55] Anyone with shell access to the Jenkins box around?
[16:54:42] I don't think it has librsvg installed but it would be great for someone to try running "rsvg --version" and check
[16:57:07] ^demon: Do you know which box it's on, btw?
[16:57:22] <^demon> gallium.
[16:58:05] <^demon> librsvg isn't installed, just checked.
[16:59:03] ^demon: Ah, thanks. I *think* we're back on repo versions, so I think it should be easy to install, but I'll go check the old bug reports
[17:05:09] ^demon: Is Gallium on Precise yet?
[17:05:35] (not really required, but it would be nice to have the same version of rsvg)
[17:06:10] I suppose I could just have as the predicted outcome an "rsvg not found" error...!
[17:06:31] Jarry1250, the test could be skipped if there's no rsvg
[17:06:33] 12.04
[17:07:03] Platonides: Well yes. But I'd rather have unit tests that are run by Jenkins...
[17:07:17] Reedy: Could we just install librsvg then?
[17:07:29] "just"
[17:07:29] yeah
[17:07:35] Add it to the relevant puppet manifest
[17:08:01] Reedy: Yes, "just" was more hope than expectation :P
[17:08:08] *investigates*
[17:12:11] Anyone know which manifest Gallium's in? (Gerrit?)
[17:13:07] <^demon> It's in manifests/misc/contint.pp
[17:14:10] ^demon: Yeah, thanks, just found it myself. Okay, right, better have a go at this then.
[17:39:52] Reedy: Hi!
[17:39:55] Can you check, if possible, whether djvutxt works fine on the WM servers with 1.20-wmf5, please?
[17:39:57] This utility may be the cause of this bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=42466
[17:40:24] note, the djvutxt version doesn't change with mediawiki versions...
[17:40:58] What's the test case?
[17:41:06] /a relevant test case
[17:42:15] If it's broken "recently" it likely matches 10.04 -> 12.04
[17:43:06] It's been broken since the deployment of 1.20wmf4 or some days after.
[17:43:26] djvutxt --detail=page 'PATH TO A DJVU FILE WITH A TEXT LAYER'
[17:44:53] If you can't, a call to "djvutxt" that returns the version of the utility may also help us.
[17:44:55] Got a test file?
[17:45:01] reedy@srv219:~$ djvutxt
[17:45:01] DDJVU --- DjVuLibre-3.5.24
[17:45:16] ^demon, Reedy, etc.: Done -- https://gerrit.wikimedia.org/r/#/c/36583/ . My first ever interaction with puppet, mind. Probably messed even that tiny change up.
[17:45:43] Jarry1250: hey, at least jenkins is happy!
[17:45:52] Looks sane
[17:46:04] based on the fact imagemagick is already there
[17:46:24] Reedy: The package name is from the imagescaler manifest
[17:46:36] ii librsvg2-bin 2.36.1-1wm1 command-line and graphical viewers for SVG file
[17:46:59] reedy: https://commons.wikimedia.org/wiki/File:The_Life_and_Times_of_Selina,_Countess_of_Huntingdon_Vol._2.djvu for example.
[17:47:52] jeremyb: Always a positive, given that the whole point of the exercise is to get Jenkins off my back on a different rev :)
[17:47:59] https://upload.wikimedia.org/wikipedia/commons/thumb/6/62/The_Life_and_Times_of_Selina%2C_Countess_of_Huntingdon_Vol._2.djvu/page1-369px-The_Life_and_Times_of_Selina%2C_Countess_of_Huntingdon_Vol._2.djvu.jpg
[17:48:27] https://upload.wikimedia.org/wikipedia/commons/6/62/The_Life_and_Times_of_Selina%2C_Countess_of_Huntingdon_Vol._2.djvu even
[17:49:20] Tpt: There's tonnes of text...
[17:49:33] (page 419 1480 1473 1821 "\037\035\013PLEASE DO NOT REMOVE \nCARDS OR SLIPS FROM THIS POCKET \n\037UNIVERSITY OF TORONTO LIBRARY \n\037\035\013\013"
[17:49:51] Reedy: So the utility is working. Good
[17:50:05] Thanks a lot!
[18:00:21] Nemo_bis: so the changes eventually filtered through, quicker than I expected but slower than it should take.
[18:02:39] Nemo_bis: I reckon it took maybe 3 hours to get cleared
[18:03:24] Seddon: could be worse indeed!
[18:04:07] Nemo_bis: quicker than previously, but the queue isn't yet at the same length that it was... let's hope it doesn't get that bad....
[18:05:48] Seddon: ask robla ;)
[18:08:22] * Seddon pounces robla
[18:18:42] andre__: could you update the link in webstatscollector to http://dumps.wikimedia.org/other/pagecounts-raw/ ? the current one is a 404
[18:19:08] Seddon: what am I being asked about?
[18:19:39] Nemo_bis, done. thanks
[18:19:48] robla: I think Nemo_bis was implying you would be the solution to all my problems regarding the job queue length :)
[18:19:59] JobQueue issues. Bug 42614.
[18:20:10] !b 42614
[18:20:10] https://bugzilla.wikimedia.org/show_bug.cgi?id=42614
[18:21:42] csteipp: poke
[18:21:48] Hi aude
[18:21:52] also when it comes to really bad bugs, https://bugzilla.wikimedia.org/show_bug.cgi?id=42592 is another one in that category. And we've got our first "immediate" ticket.
[18:21:58] we fixed the two issues for wikibase
[18:22:04] Oh great!
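(Editor's note: an aside on the djvutxt test earlier in this log — a small Python sketch that runs the same `djvutxt --detail=page` command quoted above and checks whether a text layer comes back. `djvutxt` ships with DjVuLibre; the file name below is a placeholder, not a real test file.)

```python
import subprocess

def djvu_text_layer(path):
    # Dump the per-page text layer, as in the command quoted above.
    result = subprocess.run(
        ["djvutxt", "--detail=page", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    text = djvu_text_layer("example-with-text-layer.djvu")  # placeholder path
    print(text[:200] if text.strip() else "no text layer found")
```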
[18:22:09] Are they in the branch now?
[18:22:19] the api remove-claims code isn't being used, so we removed the file and fixed the other thing
[18:22:24] yes, in the branch now
[18:22:35] aude, you have the changesets?
[18:22:44] https://gerrit.wikimedia.org/r/#/c/36585/
[18:22:56] https://gerrit.wikimedia.org/r/#/c/36584/
[18:22:58] here we go
[18:23:08] Ah, git pull just finally finished. Looks good.
[18:23:25] Denny_WMDE: they are merged
[18:32:31] robla: bug 42370 might justify a 1.20.2 tarball release (see https://bugzilla.wikimedia.org/show_bug.cgi?id=42592#c11) - any idea how to proceed?
[18:32:42] !bug 42370
[18:32:42] https://bugzilla.wikimedia.org/show_bug.cgi?id=42370
[18:36:42] andre__: I'm looking, but I need to scope things. has hexmode been around?
[18:37:05] oh, he's on #mediawiki. we should probably have this type of conversation there
[18:37:29] let's try.
[18:37:36] (in #mediawiki)
[18:41:12] Reedy: ^demon: don't use it yet, but https://gerrit.wikimedia.org/r/#/c/36587/ is the commit point to use
[19:11:12] Reedy: dewiki (and all others) to 1.21wmf5 will follow tomorrow or on Wednesday?
[19:11:30] wednesday
[19:12:39] thanks
[19:42:11] hello
[19:43:20] hi there
[20:02:05] go figure, I was looking for a way to change HEAD in a Gerrit repository. The first link points to a post by ^demon :D
[20:03:32] see
[20:03:43] it's coming home, it's coming home...
[20:06:56] <^demon> hashar: I wrote a gerrit plugin to do it :p
[20:07:37] ^demon: yeah I have seen an abandoned change :-]
[20:08:16] <^demon> Yeah, someone suggested there was a better way to do it.
[20:08:21] <^demon> I agree, but just didn't have time.
[20:09:19] ^demon: can we change it in the db ?
[20:11:02] <^demon> No, it's in the git repo.
[20:11:15] <^demon> I think it can be done with `git symbolic-ref`
[20:12:08] yeah bare repo /var/lib/gerrit2/review_site/git/operations/puppet.git
[20:12:09] \O/
[20:12:46] ^demon: also how can I get the integration/* repos replicated on GitHub ?
[20:12:58] is there a specific puppet file I should amend or is that hardcoded somewhere?
[20:13:11] <^demon> No. Two steps.
[20:13:28] <^demon> 1) Create the repo on github (same name as gerrit, /'s become -'s)
[20:13:41] <^demon> 2) Add mediawiki-replication to the Read permissions on the repo (or parent repo)
[20:15:16] ^demon: that process is too easy and ruins much of the hacking fun :-] Kudos!
[20:16:43] ^demon: and I had a feature request for Gerrit but lost it during brain context switching :-D
[20:17:26] chrismcmahon, DarTar went to enwiki as a logged-in user and has the messed up bold tabs and headings.
[20:17:34] ^demon: You don't appear to have permission to create repositories for this organization. Sorry about that.
[20:17:35] sniff
[20:17:45] ^demon: not a big deal, will poke ya about it later
[20:18:23] They're h3s so it's the new HTML, kaldari thinks it may be a RL problem
[20:18:31] <^demon> hashar: Gave you owner on github.
[20:18:49] anyone know the bug for this?
[20:19:31] spagewmf: crud. did it look like this? http://bug-attachment.wikimedia.org/attachment.cgi?id=11424
[20:19:46] chrismcmahon: yes
[20:19:53] hi DarTar
[20:20:02] looks exactly the same, I have a screenshot handy if needed
[20:20:04] ^demon: thanks.
[20:20:15] ^demon: I got the branch updated: git symbolic-ref HEAD refs/heads/production
[20:20:17] \O/
[20:21:20] kaldari, skipping lunch to work on the site!
[20:21:21] DarTar: spagewmf kaldari that's https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 . kaldari, is this the "residual issues" you mentioned in that BZ?
[20:22:48] Belize?
[20:22:54] :)
[20:22:59] I reopened the bug
[20:23:20] http://bug-attachment.wikimedia.org/attachment.cgi?id=11456
[20:23:24] how recently was the deploy to en.wiki?
[20:23:31] 83 minutes
[20:23:41] that's a lot more than 5 minutes of doom
[20:24:08] fundraising said they were having similar issues with a deployment on Thursday
[20:24:37] RL not providing a new global JS var unless you used debug=true
[20:25:34] touch startup.js
[20:27:01] FWIW I don't see the big bold headings on enwiki in my browsers. *BUT*, in the left-hand nav I see indented "Navigation" , "Interaction" instead of "Support", and no expand/contract arrows on "Interaction" and "Toolbox"
[20:28:55] I'm seeing the same
[20:28:56] touching, should I sync it now?
[20:29:02] yup
[20:29:32] syncing
[20:29:50] spagewmf: yep, that's the same bug
[20:30:19] 'Aborted due to syntax errors'
[20:30:28] ah, that must be the problem
[20:30:39] startup.js can't sync due to syntax errors
[20:30:49] Use sync-common-file , not sync-file
[20:31:00] * RoanKattouw stabs sync-file for trying to PHP-lint things with a .js extension
[20:31:08] ok...
[20:31:57] syncing now
[20:32:10] done
[20:33:44] still messed up on en.wiki :(
[20:34:54] hmm, I see newer CSS now, but still missing the collapse triangles
[20:35:05] did you guys update the Vector extension on en.wiki?
[20:35:14] how's it look for you DarTar?
[20:35:22] links in the sidebar are not collapsing...
[20:35:22] yep
[20:35:52] * MatmaRex is going to have to avoid enwiki for a while if they realize it's (partially) my fault :(
[20:35:53] kaldari: looks good now
[20:36:34] oh wait, I also have the same indentation problem spagewmf was reporting
[20:39:30] for some reason the background css definition from Vector isn't getting applied
[20:39:43] works with debug=true though
[20:40:27] New profile and shift-reload don't fix it, though &debug=1 does
[20:40:52] (at least it's not my fault then)
[20:41:15] Yeah, the extension CSS that is loading is old
[20:41:25] let me touch and sync
[20:42:58] sync complete
[20:43:46] Hi. Just coming by to shout that CollapsibleNav (part of the Vector extension) is broken on enwiki.
[20:44:29] Edokter: known, people were fixing it just now
[20:44:56] How come only enwiki seems affected?
[20:45:24] Edokter: because wmf5 was just deployed on enwiki only?
[20:45:28] "Why the hell did they let something so broken ship, and without any sort of notice? Hasteur (talk) 20:29, 3 December 2012 (UTC)"
[20:45:36] i didn't even have to check to know this guy's a texan.
[20:45:44] andre__: hi! I assume you're paying attention to this :/
[20:45:46] lol
[20:46:08] it's fixed for me now
[20:46:10] anyone else?
[20:46:12] Edokter: https://www.mediawiki.org/wiki/MediaWiki_1.21/Roadmap , 1.21wmf5, phase 3
[20:46:34] sumanah, yeah, https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 got reopened because of this
[20:46:42] Commons and test also have wmf5; they have no issues.
[20:46:48] kaldari: WFM
[20:47:00] but I cannot reproduce it either on en.wp currently
[20:47:09] looks like kaldari fixed it
[20:47:41] i kinda like how you guys taught everyone to purge until it works
[20:47:49] (see [[w:en:WP:VPT]])
[20:47:59] now they think they fixed it, cute
[20:47:59] :D
[20:48:45] Seems fixed now!
[20:49:25] what was the fix?
[20:50:01] touched and synced the startup.js and ext.vector.collapsibleNav.css files
[20:50:36] I'm still not sure what the actual cause of the problem was though
[20:50:56] kaldari perhaps the new revisions had earlier times than before so RL didn't use them?
[20:51:19] binasher: hello. Do you have a moment to run a short query on db1033 for me?
[20:51:30] maybe, a lot of stuff got weird from all the rolling back last week
[20:52:55] all is well, back to usual. Bye.
[20:54:54] brion: what about the iPad, do you happen to have one handy?
[20:55:13] paravoid: just tested, it still gets the desktop site as it should
[20:55:17] I guess I can install a UA spoofer
[20:55:18] oh
[20:55:19] cool
[20:55:21] great, thanks.
[20:55:23] :)
[20:56:19] and most importantly, it still shows the fundraiser message ;)
[20:57:10] :D
[20:57:24] brion: how great ;/
[21:08:13] Reedy: Phe and I have found the cause of the bug in the extraction of the djvu text layer. I've submitted a patch that temporarily fixes the issue. Can you review it, please? https://gerrit.wikimedia.org/r/#/c/36632/
[21:13:50] and i'm hoping to get a config change deployed. could someone review? https://gerrit.wikimedia.org/r/#/c/36197/
[21:28:01] PHP Notice: Trying to get property of non-object in /home/wikipedia/common/php-1.21wmf5/extensions/CentralAuth/CentralAuthUser.php on line 115
[21:28:14] Reedy: no wonder the job queue has problems
[21:29:52] Ryan_Lane (and hashar): Thanks for the review/merge, how often does puppet run on a box like gallium?
[21:30:05] Jarry1250: once per hour maybe?
[21:30:15] Jarry1250: let me check on the box
[21:30:51] Jarry1250: librsvg2-bin is not installed yet, puppet ran 59 minutes ago
[21:31:43] hashar: Thanks
[21:33:30] I can force a run
[21:33:54] I should get myself sudo rights to force-run puppet on that box :-D
[21:37:04] hashar: gallium?
[21:37:17] running it
[21:37:21] mutante: it ran :)
[21:37:40] though the package is not installed hmm
[21:37:41] hashar: there are quite a few packages to be upgraded...want me to ?
[21:38:18] mutante: oh hmm will get the packages upgraded with mark / paravoid during the european morning
[21:38:32] mutante: if something screws up, I prefer it to happen early in the european morning :-]
[21:38:52] sure, I can help you with whatever this is tomorrow.
[21:38:53] ok..good
[21:39:38] paravoid: apt-get upgrade on the contint box :) will ping you. thanks!
[21:39:57] Why wouldn't it install the package? Sorry, bit slow on all this puppet stuff.
[21:40:00] I am wondering if there is an error on gallium, librsvg2-bin did not get installed
[21:40:53] gallium puppet-agent[8479]: (/Stage[main]/Misc::Contint::Test::Packages/Package[librsvg2-bin]/ensure) change from purged to present failed:
[21:41:05] Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming
[21:41:23] librsvg2-bin : Depends: librsvg2-2 (>= 2.36.1-1wm1) but it is not going to be installed
[21:42:04] it wants librsvg2-2 and thinks that is > 2.36 of the wikimedia version of it
[21:43:33] the package dependencies have always confused me :/
[21:43:46] both seem to have a 2.36.1-1wm1 candidate
[21:44:38] yet it does not get that the "-wm" package is a valid one it could use to fulfill the dependency
[21:45:00] hmm.. do you know why we have a -wm version of it in the first place
[21:45:04] apt-cache show librsvg2-bin gives me a version of 2.36.1-1wm1 but lists as a dependency librsvg2-2 (>= 2.36.1-0ubuntu1)
[21:45:11] no idea
[21:45:13] exactly
[21:46:45] This all works on the imagescaler boxes
[21:46:53] They ensure => latest, mind
[21:52:29] Jarry1250: yeah the packages are broken
[21:52:44] Jarry1250: will check with our deb expert to get it fixed, will not happen tonight though, sorry :(
[21:53:10] hashar: Okay, thanks, I've got other things to be working on, don't worry :)
[21:53:52] !log updated payments to 54d3f8f0f9c7bdc
[21:53:59] Logged the message, Master
[22:01:51] hashar: Jarry1250: there are also tons of package:i386 packages on it that it would remove as "no longer needed".. even though it is a 64bit system
[22:02:16] got a list to paste ?
[22:03:52] from #operations - Merlissimo: are problems with Special:BlockList after the wmf5 update already known?
[22:03:52] !log updated payments to 8eb328e96a1ef6ac
[22:04:00] Logged the message, Master
[22:04:29] hashar: http://wikitech.wikimedia.org/view/User:Dzahn/pastebin
[22:05:00] mutante: I guess they are no longer needed since we have the amd64 versions
[22:05:18] that might be related to the upgrade from Lucid to Precise
[22:05:21] hashar: yep, and librsvg2-2 is also i386 and conflicts
[22:05:41] as opposed to the imagescaler box, which uses the 2 wmf packages and is also 64bit
[22:07:09] hashar: Jarry1250: should be better now
[22:07:17] ii librsvg2-bin 2.36.1-1wm1
[22:07:24] \O/
[22:07:53] !log gallium - removed librsvg2-2:i386, installed librsvg2-2, librsvg2-bin
[22:08:03] Logged the message, Master
[22:08:48] it also pulled in libcroco3 ..and removed ia32-libs ia32-libs-multiarch:i386
[22:09:27] i.e. the 32bit libs migration stuff
[22:09:55] so now we will be able to test out rsvg rendering!
[22:11:55] !log updated payments cluster to d7946a804d763
[22:12:03] Logged the message, Master
[22:18:25] hashar: Is there a way to retrigger Jenkins for my patchset?
[22:18:32] (apart from resubmitting it)
[22:18:46] Jarry1250: in mediawiki/core ?
[22:18:57] hashar: Yes.
[22:19:00] Jarry1250: log in to https://integration.mediawiki.org/ci/ using your labs account
[22:19:12] on the left there will be a link Query and Trigger Gerrit Patches
[22:19:21] which sends you to https://integration.mediawiki.org/ci/gerrit_manual_trigger/?
[22:19:24] where you can look for a change
[22:19:31] (simply enter the change number: 12345)
[22:19:40] click the patchset you want to retrigger (usually the very last one)
[22:19:42] then submit :-)
[22:20:44] hashar: Ah, cool
[22:22:55] AaronSchulz: Did you deal with that?
[22:23:33] http://www.wikidata.org/wiki/Special:BlockList is CA row related....
[22:25:38] hashar: "rsvg: command not found" :(
[22:25:48] rsvg-convert iirc
[22:26:30] hashar: Well I can change it, but rsvg is the default on Unix AFAIK
[22:26:43] Anyway, let's see.
[22:29:30] AaronSchulz: https://bugzilla.wikimedia.org/show_bug.cgi?id=42662
[22:33:03] hashar: rsvg-convert: command not found . Do you have command line access? Does rsvg --version work? Does rsvg-convert --version work?
[22:34:09] reedy@gallium:~$ rsvg-convert --version
[22:34:09] The program 'rsvg-convert' is currently not installed. To run 'rsvg-convert' please ask your administrator to install the package 'librsvg2-bin'
[22:34:26] Has anyone forced a puppet run?
[22:34:32] The last Puppet run was at Mon Dec 3 22:17:44 UTC 2012 (16 minutes ago).
[22:34:33] hm
[22:35:11] Yeah, it's not installed (yet)
[22:35:34] heh, puppet uninstalled it somehow :-]
[22:35:45] we get a very nasty dependency issue on gallium
[22:35:47] was it there?
[22:36:02] yup mutante installed librsvg2-bin manually earlier
[22:36:18] looks like puppet fixed the fix and reset everything back to the broken situation
[22:36:50] reedy@gallium:~$ aptitude why librsvg2-bin
[22:36:50] i imagemagick Recommends libmagickcore4-extra
[22:36:50] pB libmagickcore4-extra Depends librsvg2-2 (>= 2.14.4)
[22:36:51] p librsvg2-2 Suggests librsvg2-bin
[22:36:55] Jarry1250: I sent myself reminders, will have a look at this tomorrow
[22:37:17] Yay. I still don't understand why gallium has a deps problem and the imagescalers don't. Weird.
[22:37:31] Other shit installed. (tm)
[22:37:56] reedy@srv219:~$ aptitude why librsvg2-bin
[22:37:56] i wikimedia-task-appserver Depends librsvg2-bin
[22:47:24] Who is responsible for http://translatewiki.net/ ? Just wanted to say that the site is down
[22:48:12] se4598: #mediawiki-i18n
[23:04:05] csteipp: Tim-away: See also https://bugzilla.wikimedia.org/show_bug.cgi?id=42662
[23:04:40] Aaron's bug https://bugzilla.wikimedia.org/show_bug.cgi?id=42614
[23:05:23] #7 /home/wikipedia/common/php-1.21wmf5/includes/GlobalFunctions.php(3832): Hooks::run('UserArrayFromRe...', Array)
[23:05:34] They both go through that hook into CA
[23:06:51] [internal function]: CentralAuthHooks::onUserArrayFromResult(NULL, Object(ResultWrapper))
[23:07:02] It's not CA's fault it's being passed null...
[23:11:56] just updating my gits
[23:12:28] TimStarling: is it ok for me to set wgShowExceptionDetails on in cli mode?
[23:12:45] yes
[23:16:14] TimStarling: and don't forget about srv193/lua :)
[23:16:50] not sure I'm an SPOF on that one
[23:17:38] Reedy: null is correct
[23:17:55] it's a reference, the hook sets it to something other than null
[23:18:07] Ah
[23:18:13] Just looked out of place in the stack trace
[23:19:01] TimStarling: heh, I pinged notpeter a while back too
[23:19:37] yeah, I get it
[23:24:37] root@srv193:~# php -i | grep luasandbox
[23:24:37] /etc/php5/cli/conf.d/luasandbox.ini,
[23:24:37] luasandbox
[23:24:37] luasandbox support => enabled
[23:29:40] AaronSchulz: you say jobs were run on this box?
[23:29:54] where is the jobs-loop.sh?
[23:30:18] or is there a cron job or something?
[23:30:35] good night folks
[23:30:46] gah, it's just fenari not 193
[23:31:01] so it probably doesn't matter normally
[23:32:07] it's still on lucid
[23:32:28] hume too?
[23:32:34] yeah
[23:32:37] hume too
[23:33:52] I'll install lua on fenari manually, on the assumption that it will be upgraded to precise very soon
[23:34:16] !log installed php-luasandbox on fenari
[23:34:24] Logged the message, Master
[23:36:02] $caRow = isset( $this->globalData[$row->user_name] ) ? $this->globalData[$row->user_name] : false;
[23:36:24] that code makes it seem like it somehow won't explode if there is no data
[23:38:11] yeah, but it does explode
[23:38:35] there will be no data if there's no globaluser row
[23:38:46] which will be the case for unmerged accounts
[23:39:15] you know I had forgotten we even had unmerged accounts until JamesF started talking about it as a top technical priority
[23:39:33] it seems to happen a lot now, probably after the 28th though
[23:39:44] * AaronSchulz wonders what changed
[23:40:55] from the blame it seems like it should have been broken since 2008
[23:49:18] CentralAuthUser::loadFromRow() doesn't mind having no row
[23:52:06] yeah, no $row is probably just used for "user doesn't exist", like maybe account creation
[23:52:42] maybe it has no effect, apart from the notice
[23:52:51] maybe that notice has always been there but you are the first person to notice it
[23:53:10] $caUser = new self( $row->gu_name );
[23:53:24] construct object with null name, throw notice
[23:53:29] $caUser->loadFromRow( $row, $fromMaster );
[23:53:49] initialise mIsAttached to false, mGlobalId to 0
[23:54:48] static function onUserGetEmail( $user, &$email ) {
[23:54:48] $ca = CentralAuthUser::getInstance( $user );
[23:54:48] if ( $ca->isAttached() ) {
[23:54:48] $email = $ca->getEmail();
[23:55:06] $ca->isAttached() is false, so no action, unattached email is used as expected
[23:58:29] mw1 enwiki DatabaseBase::getMasterPos 10.0.6.73 1227 Access denied; you need the SUPER,REPLICATION CLIENT privilege for this operation (10.0.6.73) SHOW MASTER STATUS
[23:58:32] strange