[02:14:46] !log LocalisationUpdate completed (1.19) at Wed Apr 25 02:14:46 UTC 2012
[02:14:57] Logged the message, Master
[02:28:47] !log LocalisationUpdate completed (1.20wmf1) at Wed Apr 25 02:28:47 UTC 2012
[02:28:50] Logged the message, Master
[03:23:51] !log asher synchronized wmf-config/db.php 'adding db58 to s7 as a new slave with a low weight'
[03:23:58] Logged the message, Master
[03:24:34] !log asher synchronized wmf-config/db.php 'pulling db58'
[03:24:37] Logged the message, Master
[03:43:13] !log asher synchronized wmf-config/db.php 'adding db58 to s7 as a new slave with a low weight'
[03:43:15] Logged the message, Master
[06:36:42] Upload at Commons failed: Request: POST http://commons.wikimedia.org/wiki/Special:Upload, from 91.198.174.42 via amssq44.esams.wikimedia.org (squid/2.7.STABLE9) to 208.80.152.73 (208.80.152.73)
[06:36:46] Error: ERR_READ_TIMEOUT, errno [No Error] at Wed, 25 Apr 2012 06:33:24 GMT
[10:05:20] hello
[11:10:27] session failure
[11:15:12] persistent?
[11:15:33] nah, spasmodic
[11:15:55] sorry, was just battling with the submit button
[11:16:04] grr, again
[11:34:41] grr
[12:06:39] Nikerabbit: again, and mostly I am seeing it at meta
[12:06:53] though that is where a little more of my work is concentrated at the moment
[15:40:52] Who is responsible for mailing list administration, if I may ask?
[16:56:02] Hi.
[16:56:07] http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29#Still_happening-Mozilla_Anything_-_Mozilla_Firefox_11_and_logging_in
[16:56:23] I keep getting logged out, like all those people who are using Firefox.
[16:59:01] Hm. I found that it might be related to this (reopened) bug. https://bugzilla.wikimedia.org/show_bug.cgi?id=35900
[16:59:19] That's the ticket for it, yes
[16:59:41] juancarlos: could you add your +1 voice on bugzilla?
[17:00:01] Sure.
[17:00:43] Hmmm
[17:00:57] Now that I've reread mctest.php it looks like we might have more flaky memc boxes
[17:01:42] One thing, though, is that I haven't been logged out while *editing*, just while navigating the site. I'll state that in my comment on the bugzilla, but just wanted to make that clear here, too.
[17:01:59] If only mctest.php would give me a *consistent* result
[17:02:05] It's different boxes being flaky at different times
[17:04:49] Zomg
[17:04:54] memc integrity is terrible
[17:05:06] Different boxes come up with different results all the time
[17:06:20] Maybe the cache is just very full and turnover is extremely high? I'd find that strange though
[17:07:39] juancarlos, so it's still happening to you?
[17:07:42] RoanKattouw: where can i read mctest.php?
[17:08:09] jeremyb: It's in maintenance/ in the MediaWiki core repo
[17:08:15] I just live-hacked it to be a bit more useful
[17:08:29] And it's showing some boxes failing to retain data for even 200ms
[17:08:52] Platonides: as of 10-15 minutes ago
[17:10:14] just happened again
[17:10:53] oh weird
[17:11:01] RoanKattouw: what will the live test affect? nagios?
[17:11:14] It doesn't affect much if anything
[17:11:23] Nagios doesn't check for memc integrity, just whether memc is up
[17:11:33] This test writes data into memc then reads it back out and compares it
[17:11:34] oh. so it's all manual
[17:11:45] I'm just manually running it and grepping out the 100% success lines
[17:12:42] i wonder if we graph stats on e.g. max age or 90%ile age across all objects in memcache
[17:12:59] that's a question for asher I guess
[17:16:04] jeremyb, can you do that?
[17:16:28] Platonides: i haven't a clue.
[17:17:26] i guess with poison pills of some kind
[17:17:26] (but hopefully some other way)
[17:17:33] it would need to be provided by the stats command
[17:19:28] jeremyb, I don't see any age in the stats command
[17:19:44] there are hits and misses, but those aren't very significant
[17:19:58] we could do a wfIncrStats() for each session we don't find, though
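The write/read-back integrity check RoanKattouw describes above boils down to something like the following sketch (using the standard pecl memcached client; the server address, key name, payload, and delay are illustrative inventions, not mctest.php's actual code):

```php
<?php
// Minimal sketch of a memcached integrity probe: write a value,
// wait briefly, read it back, and compare. A healthy box returns
// the value; a "flaky" one loses it within a few hundred ms.
$mc = new Memcached();
$mc->addServer( '127.0.0.1', 11211 ); // hypothetical server address

$key   = 'mctest-probe-' . mt_rand();
$value = (string)microtime( true );

$mc->set( $key, $value, 60 ); // 60s TTL, far longer than the test window
usleep( 200 * 1000 );         // 200ms, the window mentioned in the log

if ( $mc->get( $key ) !== $value ) {
	echo "FAIL: value lost within 200ms\n";
} else {
	echo "OK\n";
}
```

Running a loop of probes like this against each box and grepping for the failures is essentially the manual workflow described above.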
[17:20:33] juancarlos, could you try whether it also happens in http://en.wikipedia.beta.wmflabs.org/wiki/ ?
[17:21:39] Yeah, I'll try.
[17:22:27] switchover to 1.20wmf1 for *.wikipedia.org happening in 40 min
[17:22:49] Anyone know what could cause a NULL value for the user.user_registration field on en.wiki? It is a user who registered ~2009-ish
[17:23:09] a user who was registered before the column was added
[17:23:23] it was only added in 2009?
[17:23:32] whatever blame says :)
[17:23:37] I think brion added it
[17:23:52] https://en.wikipedia.org/wiki/Special:ListUsers/Chaoslover
[17:23:53] | 106472 | Chaoslover | NULL | 137 | 1 |
[17:24:08] | user_id | user_name | user_registration |
[17:26:52] Platonides: Haven't been logged out so far. I've visited probably 50 different pages.
[17:27:52] AaronSchulz: The last time that line was modified in tables.sql according to svn:annotate on svn.wm.o was 2007
[17:27:53] http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/maintenance/tables.sql?r1=23238&r2=23239&
[17:27:54] and it already existed in the left side of the diff
[17:27:56] so it was added earlier
[17:28:06] so the existence of the field doesn't explain the missing information
[17:28:34] according to the "Harvard-Sciences Po Wikipedia study" a few months back, they were missing information on user_registration for quite a few users
[17:28:52] users from 2009?
[17:29:01] they were using the API action "userdailycontribs" to get the information, which returns 0 for registration if it is NULL in the database
[17:29:11] AaronSchulz: registration date
[17:29:28] sure, some people who entered the survey registered in 2009 or whatever year
[17:30:08] if people who made accounts in 2009 don't have that field, that sounds like a bug
[17:30:17] I guess one could fix it by using the log table
[17:30:19] also, looking at Special:Log there is no create entry either, so it's not just the db field in the user table
[17:30:26] or not ;)
[17:30:36] first edit then, I guess
[17:30:49] first edit from this user in particular was 2009-02
[17:30:56] * AaronSchulz is tired of db corruption
[17:30:59] yeah
[17:31:21] tell me about it, especially annoying when working on Toolserver with data; with ts issues lately I can't tell where the error is
[17:31:23] could be anything
[17:36:06] btw, found the addition of user_registration http://mediawiki.org/wiki/Special:Code/MediaWiki/12207
[17:36:19] AaronSchulz: brion indeed
[17:37:05] brion: Any idea what could cause a NULL value for user_registration (and no logging entry for 'create') for a user who signed up in 2008 or later?
[17:37:23] hmm
[17:37:24] Krinkle: What's their user ID? I could check in the live DB to see if it's also NULL there
[17:37:33] newish users should have a user_registration date i think
[17:37:51] enwiki:106472 judging from scrollback
[17:37:53] maybe some weird manual creation, or a glitch, or ...
[17:37:53] RoanKattouw: Harvard gave 2 example user names that they got NULL on from the API
[17:37:54] mysql> SELECT user_id, user_name, user_registration FROM user WHERE user_name IN ("Chaoslover", "UpstateNYer");
[17:37:55] ^
[17:37:56] | 106472 | Chaoslover | NULL |
[17:37:57] | 134690 | UpstateNYer | NULL |
[17:38:14] Yup, same here
[17:38:24] * Krinkle ran on ts-willow
[17:38:27] That's not ts corruption, the live DB also has NULLs there
[17:38:31] ok
[17:38:45] They used the userdailycontribs API, which returned "0", but that is probably the result of intval() casting.
[17:38:48] it's just live corruption :)
[17:42:03] Rawr
[17:42:12] I tried looking at the user creation log
[17:42:27] Or rather, the first 5 log entries by these users
[17:42:42] But they're not in there either AFAICT
[17:42:43] and?
[17:42:45] yeah
[17:42:52] one of them uploaded a picture once
[17:44:02] UpstateNYer has patrol log entries
[17:44:18] Chaoslover has 1 log entry which is an upload, correct
[17:44:55] I'll recommend Harvard check the log and usercontribs then, and use the earliest date from the first entry of each
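The fallback Krinkle recommends above (earliest of first edit and first log entry) could be looked up along these lines. This is a sketch against the MediaWiki DB layer of that era; the function name is invented here, and the exact fallback policy is the one suggested in the discussion, not something that was actually deployed:

```php
<?php
// Sketch: approximate a registration date for users whose
// user_registration is NULL by taking their earliest revision
// and, where one exists, their earliest log entry.
function approximateRegistration( $userId ) {
	$dbr = wfGetDB( DB_SLAVE );
	$firstEdit = $dbr->selectField(
		'revision',
		'MIN(rev_timestamp)',
		array( 'rev_user' => $userId ),
		__METHOD__
	);
	$firstLog = $dbr->selectField(
		'logging',
		'MIN(log_timestamp)',
		array( 'log_user' => $userId ),
		__METHOD__
	);
	// Use the earliest non-null timestamp of the two, or null if
	// the user has neither edits nor log entries.
	$candidates = array_filter( array( $firstEdit, $firstLog ) );
	return $candidates ? min( $candidates ) : null;
}
```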
[18:04:08] * robla shakes fist at wireless office access
[18:04:39] Reedy: AaronSchulz: ready to finish off the 1.20wmf1 deployment?
[18:05:11] Scary ;)
[18:06:43] Reedy: what should we start with?
[18:06:49] dewiki?
[18:07:00] dunno, was just going to look
[18:07:19] looks like Aaron is already picking a target
[18:09:30] :o
[18:09:40] dewiki first?
[18:10:08] !log aaron rebuilt wikiversions.cdb and synchronized wikiversions files: Moving all remaining wikis to php-1.20wmf1
[18:10:11] Logged the message, Master
[18:10:12] lol
[18:11:26] Reedy: so much for subtlety
[18:11:31] AaronSchulz: http://memegenerator.net/instance/19389135
[18:13:09] just a few old Collection.templates.php, not much in the logs
[18:13:12] * AaronSchulz browses around
[18:16:26] PHP Fatal error: Undefined class constant 'CACHE_VERSION' in /usr/local/apache/common-local/php-1.20wmf1/extensions/FeaturedFeeds/SpecialFeedItem.php on line 39
[18:16:35] hmm, Max has a fix for that
[18:16:37] Hi. How soon will the update to MediaWiki 1.20wmf1 begin?
[18:16:44] begin?
[18:16:47] it already began
[18:16:51] ....
[18:16:59] Finished
[18:17:02] Wiki13: Thanks :)
[18:17:05] ah
[18:17:13] finished?
[18:17:15] Wiki13: it was boring :p
[18:17:17] so fast o.O
[18:17:25] as of 8 minutes ago
[18:17:51] * AaronSchulz needs a third screen
[18:17:53] is it now done for all wikis, i guess?
[18:18:25] yup
[18:18:35] ah
[18:19:33] !g 5780
[18:19:37] chrismcmahon: could you help me visit all of the wikipedia home pages? http://meta.wikimedia.org/wiki/Special:SiteMatrix
[18:20:16] I've done a-e so far
[18:20:33] robla: what are you looking for there?
[18:20:47] chrismcmahon: making sure the home page loads
[18:21:26] had we done that last week, we would have caught that we had botched the wikiversity migration
[18:21:35] ah, ok
[18:22:22] AaronSchulz: Are you deploying https://gerrit.wikimedia.org/r/#change,5780 or shall I?
[18:22:51] I'm done through G.
[18:23:44] robla: I'll start at Z and work backward. then I'll automate this, I already have a script that's close
[18:26:17] chrismcmahon: automation on this would also be helpful, but I think it's worth our time to take one quick visual scan of all of those pages, since sometimes we'll have things like error messages at the top of the page and such
[18:26:29] robla: looks sketchy to you? http://zh.wiktionary.org/wiki/Wiktionary
[18:26:32] RoanKattouw: you can
[18:26:37] I'm still parsing the instructions
[18:26:49] Alright
[18:26:55] Since it's a fatal fix I'll just go ahead and do it
[18:27:15] live hack?
[18:27:15] chrismcmahon: that's looking fine to me. note... just pay attention to the wikipedia column. don't worry about the others
[18:27:22] probably much easier :)
[18:27:28] I mean backport to wmf/1.20wmf1 and all that
[18:27:33] It'll take me like 5 mins
[18:27:38] ok
[18:28:05] chrismcmahon: that said, I'm kinda curious what you're seeing that looks out of place to you on zh.wiktionary.org
[18:29:31] * robla is done with A-K now
[18:29:35] Meh, I'm lazy, I'll just update FeaturedFeeds to master
[18:29:43] The only thing in there except the fatal fix is i18n updates
[18:29:44] lol
[18:29:51] * AaronSchulz considered that
[18:31:04] robla: when I click from sitematrix I actually get directed to http://zh.wiktionary.org/wiki/Wiktionary:%E9%A6%96%E9%A1%B5 which seems to have styling issues
[18:31:17] AaronSchulz: https://gerrit.wikimedia.org/r/5821 go ahead and take it from there
[18:32:05] what about http://za.wikipedia.org/wiki/Yiebdaeuz ?
[18:32:40] * AaronSchulz is merging 779f224
[18:33:01] chrismcmahon: that's also looking fine to me. what problem are you seeing?
[18:34:07] just the lots of whitespace
[18:34:33] RoanKattouw: why does that have no diff?
[18:35:09] lack of section headers and such
[18:35:43] hashar: RoanKattouw: On int.mw.o/testswarm I'm experiencing something I also experienced once with a local mediawiki install: if I load up several tabs that are slow, it appears it can only process one at a time (the later ones are blocked until the first one completes). However I don't have this when using Wikipedia or some random other major site.
[18:35:59] hashar: RoanKattouw: Could this be related to how apache is configured? or maybe to the database?
[18:36:01] chrismcmahon: that's probably just the way that wiki has always been
[18:36:51] !log aaron synchronized php-1.20wmf1/img_auth.php 'Deployed f7e49bd71bd8356751242c5ce1cbae076a27cf7a'
[18:36:53] Logged the message, Master
[18:37:12] e.g. I open up a page and do a POST that takes > 1 sec, then I open 10 tabs with simple GET requests that are usually very snappy; those are stalled until that tab with the POST is finished
[18:37:29] chrismcmahon: just do the quickest of skims. I look at each page no more than 0.5 sec in many cases... just enough to see if the page has loaded and there's no weird "PHP warning: blah blah..." at the top of the page
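The automated version of this skim that chrismcmahon mentions could look something like the following. A hypothetical sketch (not his actual script): it pulls the site list from the SiteMatrix API and flags home pages that fail to load or show the error text robla describes; filtering to only the Wikipedia column is left out for brevity:

```php
<?php
// Sketch: fetch every site URL from Special:SiteMatrix via the API
// and check that its home page loads without visible PHP warnings.
$matrix = json_decode( file_get_contents(
	'https://meta.wikimedia.org/w/api.php?action=sitematrix&format=json'
), true );

foreach ( $matrix['sitematrix'] as $group ) {
	if ( !is_array( $group ) || !isset( $group['site'] ) ) {
		continue; // skip the 'count' and 'specials' style entries
	}
	foreach ( $group['site'] as $site ) {
		$html = @file_get_contents( $site['url'] );
		if ( $html === false ) {
			echo "FAIL (no load): {$site['url']}\n";
		} elseif ( strpos( $html, 'PHP Warning' ) !== false
			|| strpos( $html, 'Fatal error' ) !== false
		) {
			echo "FAIL (error text): {$site['url']}\n";
		}
	}
}
```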
[18:37:34] robla: done Z down to U, starting T
[18:37:58] chrismcmahon: cool, once you're done with that, we're done (I did A-S)
[18:38:21] well we put our warnings in logs, so you wouldn't see them
[18:38:31] except fatals ;)
[18:40:49] * robla takes down central notice about maintenance
[18:40:50] robla: nothing obviously wrong
[18:49:17] robla: I was wondering why there were translations on meta :)
[18:49:52] [19:48:21] don't know if this has been reported
[18:49:53] [19:48:28] but I can block account names that don't even exist
[18:49:53] [19:48:29] https://en.wikipedia.org/w/index.php?title=Special%3ALog&type=&user=&page=User%3AThehelpfulone+is+the+evil&year=&month=-1&tagfilter=&hide_patrol_log=1&hide_review_log=1
[18:49:54] [19:48:55] this could be something for #-tech actually
[18:49:57] !log aaron synchronized php-1.20wmf1/extensions/FeaturedFeeds/SpecialFeedItem.php 'Deployed 4fb14a7b2ca9be715b820a9847d999f21c7d2cfc'
[18:49:59] Logged the message, Master
[18:52:20] Krinkle: It's possible that something is holding a lock somewhere, I guess. Wouldn't happen when you have N servers
[18:52:52] RoanKattouw: I noticed TestSwarm is using mysql_pconnect; I've only heard bad things about it. Any advice?
[18:53:14] TestSwarm does make a lot of small queries and does so continuously in ajax requests (polling)
[18:53:41] I have no idea
[18:53:42] I'm not sure if that justifies it (it is one of the small parts of testswarm db.php still left from how jresig created it; I just kept it)
[18:53:53] persistent connections
[18:56:13] Thehelpfulone: I don't think that's been reported. AaronSchulz: ideas on what's up with that? ^
[18:56:45] To repeat my earlier question (now that more people are around): who is in charge of mailing list administration? I need access to the toolserver-announce list
[18:57:26] not really
[18:57:33] * AaronSchulz is looking at GlobalBlocking
[19:00:35] DaBPunkt: Casey Brown (cbrown1023); mutante on the wikimedia operations team side
[19:00:54] and philippe is taking on casey's job in his absence
[19:01:06] Krinkle: fwiw, I used mysql_pconnect a lot at my last job, and had very few issues with it in the last couple of years. I think php fixed a lot of the issues they used to have.
[19:01:26] Thehelpfulone: ok, tnx
[19:01:36] that's not true
[19:01:42] no one person in ops is in charge of mailing lists.
[19:01:52] Thehelpfulone & DaBPunkt ^
[19:02:19] RobH: then who is?
[19:02:32] csteipp: one issue I hear a lot is locking issues with slow queries (since there is only one connection), and race conditions with mysql_last_id() not corresponding to the insert query right before it if there are multiple httpd connections using the same mysql connection
[19:02:35] no one person; best thing is to ask here, or in -operations, and get someone to open an RT ticket for your request
[19:02:39] anyway, gotta go for dinner, brb in an hour
[19:02:52] DaBPunkt: i can tell you who admins that list though
[19:02:57] which is the specific thing to do =]
[19:02:59] lemme pull it up
[19:03:16] RobH: I know that already: River did :)
[19:03:31] ahhh, river admins it, so i take it you want to take over the admin?
[19:03:38] yes
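On the mysql_pconnect question above: the usual trade-off with the old mysql_* API that TestSwarm used at the time is connection reuse versus session state leaking between requests. A generic illustration (hostnames and credentials invented), not TestSwarm's actual db.php:

```php
<?php
// Persistent connections skip the TCP/auth handshake on each request...
$link = mysql_pconnect( 'db.example.org', 'swarm', 'secret' );

// ...but the same connection is reused by later requests in the same
// Apache child, so anything left behind leaks forward: open
// transactions, GET_LOCK() locks, temporary tables, SET session vars.
mysql_query( 'SELECT GET_LOCK("job", 10)', $link );
// If the script dies here, the next request on this child inherits
// the lock. With mysql_connect() the connection (and the lock) would
// be torn down at the end of the request.
```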
[19:03:53] i just dont want you guys thinking poor daniel has to handle those requests personally; he's just done the past half dozen or so ;]
[19:04:00] RoanKattouw: I guess some hook set $result to a string and didn't return false before GlobalBlocking::getUserPermissionsErrors tried to append an item
[19:04:11] DaBPunkt: have you announced your intent to the list itself?
[19:04:19] cuz if the list has no issues, then it makes it easier to approve it
[19:04:52] DaBPunkt: this way if someone has an issue (which, knowing you, i dont think there is) it's easier to justify
[19:04:55] since it's an announce list, that might be a problem without admin access :)
[19:05:05] It's an announce list, yeah
[19:05:08] true, i can approve the announcement via the list though
[19:05:12] using my special powerz
[19:05:16] All posts in the past months are by DaB.
[19:05:19] lol
[19:05:21] RobH: no, but AFAIK I am the only one who can write to the list anyway, so nobody could respond
[19:05:24] all this sounds ok to me....
[19:05:45] i am gonna go with better to beg forgiveness than ask permission, eh?
[19:05:54] yeah, go for it
[19:05:57] RobH: ah ok thanks
[19:06:18] DaBPunkt: so whats yer email address?
[19:06:31] im just gonna make you the admin and if someone hates it, it's all your fault and I had nothing to do with it ;]
[19:06:36] cuz river is MIA
[19:06:48] *g* wp@dabpunkt.eu
[19:07:05] Krinkle-away: I am pretty sure that is caused by your browser limiting requests to the server
[19:07:12] preilly: can you tail fatal.log in /h/w/log?
[19:07:25] DaBPunkt: I am going to just add you and keep river on there, and email you both the new random admin pass i am going to create
[19:07:29] seems pretty zomgs
[19:07:30] and adding you to the admin field
[19:07:31] Krinkle-away: or Apache on gallium has a low number of children
[19:07:33] AaronSchulz: in_array issue?
[19:07:51] no, though I forgot to mention that should be fixed in master too
[19:08:05] RobH: ok, sounds good
[19:09:50] DaBPunkt: added you, emailed the new admin pass to both you and river
[19:09:55] you should be all set
[19:12:33] feel free to reset the admin pass again so just you and river have it, or whatever
[19:12:47] but i already deleted the email from my sent, and i have root, so my having it is kinda a non-issue
[19:14:48] RobH: ok, works. Thanks for your help
[19:18:00] hashar: I know for sure it's not my browser; I don't have this problem with most other servers.
[19:19:00] hashar: I guess it is either apache limiting the number of requests per IP or something like that (or even not allowing concurrent requests globally), or mysql causing a block
[19:21:33] or PHP that just sucks
[19:21:40] ok, I already used that argument :D
[19:22:41] Thehelpfulone: I can't block non-existing users on my testwiki (on master)
[19:23:07] hmm, well, isolated to enwiki?
[19:23:11] * Thehelpfulone tries on meta
[19:23:58] interesting AaronSchulz
[19:24:04] it doesn't work on meta either
[19:24:09] but I was using a user script to do the blocking..
[19:24:26] can't repro in 1.20wmf1 either
[19:24:43] it must be the script
[19:24:46] maybe an API bug
[19:24:51] * Thehelpfulone gets a link
[19:25:00] http://en.wikipedia.org/wiki/User:Animum/easyblock.js
[19:26:23] AaronSchulz: so that script allows me to do the blocking
[19:27:04] the script may have a bug, but that api must as well
[19:27:10] (api.php)
[19:27:26] * AaronSchulz isn't seeing much user validation in ApiBlock
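The missing check Aaron is describing would amount to something like the following: a hypothetical sketch of target validation in an API block path, not the patch that eventually fixed the bug (range handling is elided, and the error code is invented):

```php
<?php
// Sketch: refuse to block a target that is neither a valid IP
// address nor an existing account, instead of blocking it blindly.
$target = $params['user']; // parameter name as in the block API of the era
if ( !IP::isIPAddress( $target ) ) {
	$user = User::newFromName( $target );
	if ( !$user || $user->getId() === 0 ) {
		// Inside an ApiBase subclass; 'nosuchuser' is a made-up code
		$this->dieUsage( "User '{$target}' doesn't exist", 'nosuchuser' );
	}
}
```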
[19:33:39] !log catrope synchronized php-1.20wmf1/extensions/Math/ 'Deploying 4c9e7dbe761c798ce15d7e2acef829a1582c058b'
[19:33:41] Logged the message, Master
[19:34:24] RoanKattouw: speaking of bashrc, feel free to steal my tab completion
[19:34:32] Thanks :)
[19:34:50] though it should be puppetized
[19:38:24] !log catrope synchronized php-1.20wmf1/extensions/ZeroRatedMobileAccess/ZeroRatedMobileAccess.body.php 'Attempted fatal fix'
[19:38:26] Logged the message, Master
[20:29:38] hello gangleri
[20:30:26] People are complaining about errors on it.wiki which sound like "page can't be shown because of an invalid or unsupported compression format": what's this, minified JS? A browser not supporting it?
[20:30:53] more info about the error?
[20:31:03] only this
[20:31:28] where does it show up? is it generated by the browser or the server? as the result of which request?
[20:31:37] heh, all good questions
[20:32:30] let's try to catch the user somewhere – calling him on the phone seems excessive
[20:33:18] who's suggesting a phone call? ;)
[20:33:32] Nemo_Bis: I need a modification of http://test.wikipedia.org/?curid=13629 to support "?curid=" without "w/index"
[20:33:54] gangleri, I can edit it
[20:33:59] jeremyb, me :p
[20:34:05] Nemo_bis: you can?
[20:34:33] jeremyb, what's strange?
[20:34:39] fine, please do so; please add another portlet (I do not know which is more appropriate)
[20:35:10] Nemo_bis: huh?
[20:36:07] Nemo_bis: you can edit what?
[20:36:17] gangleri, ehm, I can edit the page but I don't understand (almost) anything of JS
[20:36:31] jeremyb, that MediaWiki page
[20:36:36] omg this discussion is a mess
[20:36:39] Is anyone using that gadget on other wikis?
[20:36:44] this looks like a very outdated version of the gadget
[20:36:46] yay Krinkle
[20:36:53] Hi
[20:37:04] I am a bureaucrat at test.wikipedia; else please edit the talk page
[20:38:28] gangleri, I've restored your rights
[20:38:37] gangleri: also, curid is not the same as permalink
[20:38:49] oldid is permalink
[20:38:51] i think
[20:38:52] hi Krinkle: I posted about authority control issues http://test.wikipedia.org/?curid=41080 — the [[commons:template:Normdaten]] supports one value per entity only
[20:38:57] gangleri: permalink is a permanent link to the current version of the article (revision oldid)
[20:39:25] gangleri: "curid" and "articleId" are the page id, which, when used, will always show the latest version of the article
[20:39:31] curid/articleId is like title
[20:39:57] wgArticleId can be used as "shortestlinks"
[20:40:09] so shortening a permalink to only ?curid= will break the permalink, making it no longer a permalink
[20:40:24] so instead insert a new portlet link using mw.util.addPortletLink
[20:41:09] Krinkle: one should use another portlet, not the MediaWiki permanent link, for shortestlinks
[20:41:18] that's what I say
[20:42:11] I am typing fast; my eyes are bad and the computer's font is small and gray
[20:44:50] gangleri: https://toolserver.org/~krinkle/tmp/psty/shorturl-portlet/
[20:45:05] Krinkle, please compare http://yi.wiktionary.org/?curid=1564&uselang=en#sysops **and** http://yi.wiktionary.org/wiki/%D7%B0%D7%99%D7%A7%D7%99%D7%B0%D7%A2%D7%A8%D7%98%D7%A2%D7%A8%D7%91%D7%95%D7%9A:Administrators#sysops
[20:45:58] gangleri: I know what it is for
[20:46:13] but note that page ids are less stable than titles
[20:46:36] a page may be split or renamed; the origin page will keep the page id, so the ID does not always mean the same subject
[20:47:05] e.g. a page named "Foo" may be moved to "Foo (bar)", and "Foo" will be created as a new page with a different ID
[20:47:12] authors get disambiguations (sometimes they move)
[20:47:18] for that reason it is recommended to use short urls that shorten to the title itself
[20:47:30] not the internal page id
[20:48:00] Krinkle: And sometimes they delete pages for history merges, which makes them get a totally new page id, without any (visible) change
[20:48:06] Also note that most browsers and programs can handle UTF-8 / multi-byte characters, so those % escapings are not always required.
[20:48:15] hoo: yep, exactly
[20:48:39] page ids are internal; do not use them in public features or things that non-programmers will deal with (e.g. do not use ?curid= in a tweet)
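A portlet link of the kind Krinkle suggests, using the wgArticleId-based short form gangleri asks for, might look like this in a gadget. This one block is JavaScript rather than PHP because mw.util.addPortletLink is a client-side API; the label and element ids are invented here, and this is not Krinkle's shorturl-portlet code (note the gadget would need the mediawiki.util module loaded):

```js
// Add a "Short link" entry to the toolbox, pointing at ?curid=<page id>.
// Caveat from the discussion above: page ids are less stable than titles.
$( function () {
	var articleId = mw.config.get( 'wgArticleId' );
	if ( articleId ) {
		mw.util.addPortletLink(
			'p-tb',                                          // toolbox portlet
			mw.config.get( 'wgServer' ) + '/?curid=' + articleId,
			'Short link',                                    // link text
			't-shorturl',                                    // element id
			'Shortest URL to the current page'               // tooltip
		);
	}
} );
```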
[20:48:45] Oh god, better leave the escapings in; UTF-8 in URLs can fail, badly :/
[20:48:55] oldid and diffid are fairly stable
[20:49:01] but not curid, not yet at least
[20:49:11] the ShortURL extension is close to deployment (or maybe live already)
[20:49:15] oldid is stable
[20:49:24] Unless the revision it points to is oversighted
[20:49:25] until the revision is deleted
[20:49:27] yeah
[20:49:46] the ShortURL extension uses a mapping table to the title instead of the page id
[20:53:11] "[06:49] until the revision is deleted" — p858snake|l: sure, if it is undeleted it'll have the same revision id
[20:53:39] but when it is deleted, it can't be shown
[20:54:02] and in most cases when someone links to an oldid, they probably didn't care about that specific revision, just about the contents of it at that point
[20:54:10] so a previous revision would be fine too
[21:19:28] * halfak tests
[21:19:43] * jeremyb halves
[21:20:08] LeslieCarr: I'm not able to get to emery.wikimedia.org (just sitting down with Dario). Did you take my key from internproxy?
[21:20:45] halfak: yes
[21:20:51] right now i'm breaking mobile
[21:20:54] give me about 10 minutes
[21:21:01] kk
[21:21:12] thanks LeslieCarr
[21:25:49] uh Error creating thumbnail: convert: no decode delegate for this image format `/tmp/magick-XXDjAfDj' @ error/constitute.c/ReadImage/532.
[21:25:50] convert: missing an image filename `/tmp/transform_54c0309-1.jpg' @ error/convert.c/ConvertImageCommand/2970.
[21:35:36] ok halfak
[21:36:11] LeslieCarr: When I try to access emery using my key, I get "Permission denied (publickey)."
[21:36:31] I'm using the same key that works with internproxy.
[21:36:45] I literally copy-pasted the line in my ssh config.
[21:36:54] I'm expecting to use the username "halfak"
[21:37:39] ok
[21:39:42] actually hold on, have to deal with more vendor shit
[21:40:04] OK :P
[21:40:05] halfak: also, we usually have these discussions on wikimedia-operations, as most people in -tech don't care
[21:40:21] Indeed. Dario directed me to find you here.
[21:40:29] yeah, sorry about that
[21:40:40] SHAME!
[21:42:37] well, move on over channels
[22:23:36] good night folks
[22:34:29] Can someone please check if there's something fishy with the Meta-wiki job queue?
[22:34:52] I've been expecting a job to complete for the past half hour, but nothing yet.
[22:38:35] Thehelpfulone: anyway, you should file an api bug about the blocking
[22:39:08] ok
[22:42:54] AaronSchulz: can you check if the meta job queue is healthy?
[22:43:57] siebrand: 206 jobs
[22:44:03] * AaronSchulz wonders if this stuff is graphed, or maybe that's just enwiki
[22:44:25] Reedy: would those have timestamps (i.e. what's the oldest one?)
[22:45:02] less than 6 hours
[22:45:15] 20120425182454
[22:45:29] Hmm. I see; that's the problem, then. Thanks Sam.
[22:45:46] The queue there is usually <5 mins.
[22:45:50] MessageIndexRebuildJob: 5
[22:45:52] RenderJob: 78
[22:45:54] enotifNotify: 86
[22:45:56] refreshLinks2: 28
[22:46:09] * AaronSchulz wonders what RenderJob is for
[22:46:25] I think the jobs might not be working
[22:46:25] Now it takes more than an hour before a translatable page is indexed through MessageIndexRebuildJob.
[22:46:35] There's a load of spam for enotif related jobs
[22:46:38] I guess we'll have to find another solution for WMF wikis.
[22:46:49] we need a new queue system
[22:46:51] Trying to get property of non-object in CentralAuthUser.php on line 115
[22:47:07] At twn MessageIndexRebuildJob takes 10 seconds or so, but it does contain 50k keys.
[22:47:18] some abstract class hierarchy with subclasses that use a queue system that smart people already invented ;)
[22:47:31] That's not the volume I expect any WMF wiki to have soon, so we might just make it part of the request.
[22:50:11] AaronSchulz: I think the RenderJobs are for updating translatable pages.
[22:50:43] AaronSchulz: extensions/Translate/tag/RenderJob.php.
[22:50:48] huh, "render" made me think "files"
[22:50:50] ok
[22:50:50] siebrand: it is possible to give certain jobs a higher weighting
[22:50:56] which can be done fairly easily
[22:51:04] Reedy: oh, how?
[22:51:06] so they essentially gain priority
[22:51:15] Needs a change to the job runner scripts, merging, and pushing to production
[22:51:42] Reedy: you mean runJobs.php, or a customization?
[22:52:10] Most of your render jobs have just run... (manually)
[22:52:17] I thought that our queue system didn't have any priorities.
[22:52:46] Reedy: busy FuzzyBot :) https://meta.wikimedia.org/wiki/Special:Contributions/FuzzyBot
[22:52:55] there's some more..
[22:53:55] no jobs for meta now
[22:54:42] Reedy: great. For me it was mainly about the MessageIndexRebuildJob, but the queue was indeed stuck somehow?
[23:01:16] The job queue is a dark art
[23:01:39] It does get stuck from time to time
[23:02:21] AaronSchulz: I thought one of the misc hosts had something on ganglia
[23:03:08] I think it was just an enwiki graph
[23:03:47] yeah
[23:03:50] but I can't even find that
[23:09:59] AaronSchulz: duh. Right under my nose
[23:10:12] Looks like for the last 5 hours the job queue length for enwiki has just been growing steadily
[23:10:43] http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=3335&m=enwiki%20JobQueue%20length&r=hour&z=small&jr=&js=&st=1335395131&z=large
[23:11:13] yep
[23:11:27] RoanKattouw_away: ^
[23:39:26] nighty
[23:48:14] Reedy: did you figure out what was going on with the job queue?
[23:48:32] Nope
[23:48:52] looks like it corresponds to when we moved everything to 1.20
[23:49:03] Reedy: you manually ran those jobs, right?
[23:49:07] which is a little weird, because enwiki *shouldn't* have been affected
[23:49:07] On meta, yes
[23:49:14] hmm, ok
[23:49:54] it started to balloon up shortly after 18:00 UTC
[23:50:15] 18:10 logmsgbot_: aaron rebuilt wikiversions.cdb and synchronized wikiversions files: Moving all remaining wikis to php-1.20wmf1
[23:50:41] AaronSchulz: https://gdash.wikimedia.org/dashboards/jobq/
[23:50:59] Job runners aren't running
[23:51:39] * AaronSchulz wonders where Reedy finds these graphs
[23:51:43] they're standing in one place, doubled over, panting
[23:51:52] https://gdash.wikimedia.org/
[23:52:05] AaronSchulz: I found it existed by searching for "job" in the puppet repo ;)
[23:52:07] the green peak in the last hour was when I ran those on meta
[23:52:17] sure
[23:53:17] AaronSchulz: the 1.19 directories are still around, right?
[23:53:29] I assume
[23:53:34] * AaronSchulz didn't delete them
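Draining one wiki's queue by hand, as Reedy did for meta above, is done with the runJobs.php maintenance script. A sketch using the WMF multiversion wrapper (mwscript) seen later in this log; flag names are as in 1.19/1.20-era runJobs.php, and the job counts are illustrative:

```sh
# Run only the stuck job class on one wiki
mwscript runJobs.php --wiki=metawiki --type=MessageIndexRebuildJob --maxjobs 500

# Or drain everything queued for that wiki
mwscript runJobs.php --wiki=metawiki
```

The --type filter is the per-job-class knob behind the "higher weighting" idea discussed above: the runner scripts can invoke it more often for job classes that should gain priority.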
[23:54:50] * robla tries to puzzle out why migrating wikis other than enwiki would kill enwiki's job queue
[23:55:11] /bin/bash /usr/local/apache/common/php/extensions/WikimediaMaintenance/jobs-loop.sh
[23:56:32] Haha
[23:56:36] I know why
[23:56:36] php -n MWScript.php nextJobDB.php --wiki=aawiki
[23:56:42] reedy@srv231:/usr/local/apache/common/multiversion$ php -n MWScript.php nextJobDB.php --wiki=aawiki
[23:56:42] CACHE_ACCEL requested but no suitable object cache is present. You may want to install APC.
[23:56:42] Backtrace:
[23:56:42] #0 [internal function]: ObjectCache::newAccelerator(Array)
[23:57:04] ah... serial execution. whee!
[23:57:09] aawiki I guess was still on 1.19
[23:57:13] sounds like a rubbish error, since we use memcached
[23:58:16] Simple fix is to put aawiki back to 1.19 for the moment
[23:58:48] it'll probably get stuck again, though, no?
[23:59:24] depending on what the job is that's clogging the queue
[23:59:35] It's not a job clogging the queue
[23:59:51] the job runners aren't being told which wikis they need to run jobs for
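The fatal above occurs because `php -n` skips php.ini, so no accelerator extension (APC and friends) is loaded when CACHE_ACCEL is requested, and the resulting exception kills nextJobDB.php, leaving jobs-loop.sh with no wiki to dispatch to. A hypothetical guard of the following shape would degrade instead of throwing; this is not the actual fix (the workaround discussed was moving aawiki back to 1.19):

```php
<?php
// Sketch: pick an accelerator cache only if a suitable backend is
// actually compiled in; otherwise fall back to a no-op cache rather
// than fataling the way ObjectCache::newAccelerator() does above.
function newAcceleratorWithFallback( array $params ) {
	if ( function_exists( 'apc_fetch' )
		|| function_exists( 'xcache_get' )
		|| function_exists( 'wincache_ucache_get' )
	) {
		return ObjectCache::newAccelerator( $params );
	}
	// No accelerator available (e.g. run under `php -n`): degrade
	// to a cache that stores nothing instead of throwing.
	return new EmptyBagOStuff();
}
```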