[01:15:43] !log synchronized the payments cluster to 85e9007aef3e75
[01:15:59] Logged the message, Master
[04:03:05] TimStarling: Happy Tim Starling Day!
[04:03:29] yay
[04:07:36] * Aaron|home keeps hearing the same annoying alarm go off every 10 min
[07:25:31] where do those info on the user in the tagline come from in http://bug-attachment.wikimedia.org/attachment.cgi?id=11247 ?
[07:26:07] "An edit filter manager and afttest, 4 months old, with 6 edits. Last edited 58 seconds ago."
[10:52:06] hey all anyone know how to use the api with the logged in user?
[12:36:58] Hi
[12:37:02] Any devs in?
[13:15:22] Qcoder00: -> mediawiki
[13:15:33] OK
[16:57:51] !log Truncating wikibase.wb_changes
[16:57:54] Logged the message, Master
[18:00:09] Reedy: 1.21wmf3 time?
[18:00:31] yeah
[18:09:06] 8 Catchable fatal error: Argument 1 passed to ReaderFeedbackHooks::ratingToolboxLink() must be an instance of Skin, instance of VectorTemplate given in /usr/local/apache/common-local/php-1.21wmf3/extensions/ReaderFeedback/ReaderFeedback.hooks.php on line 178
[18:09:06] 1 Catchable fatal error: Argument 1 passed to ContentHandler::getContentText() must implement interface Content, boolean given, called in /usr/local/apache/common-local/php-1.21wmf3/includes/Article.php on line 390 and defined in /usr/local/apache/common-local/php-1.21wmf3/includes/content/ContentHandler.php on line 88
[18:09:14] The second one is likely education program
[18:11:21] I'm gonna just remove the type hint for RF
[18:14:45] Reedy: I think Jeroen might have had a fix in master for the latter one
[18:14:55] https://gerrit.wikimedia.org/r/#/c/30901/
[18:14:59] I'm not merging that as the fix...
[18:15:22] holy crap
[18:15:30] yeah, not that one
[18:15:36] https://bugzilla.wikimedia.org/show_bug.cgi?id=41496
[18:15:50] there was a smaller one that he did first that might make a good backport
[18:16:09] https://gerrit.wikimedia.org/r/#/c/30780/
[18:17:59] Reedy: actually, come to think of it, now I'm a little confused. did the EP fatal just coincidentally happen during this window, or is EP deployed to more than just test2 + enwiki?
[18:18:18] I think it's somewhat co-incidental
[18:18:31] k...let's not futz with it then
[18:18:48] the readerfeedback one is happening a lot more frequently, hence fixing that
[18:18:58] just waiting for the error logs to quieten down
[18:24:11] Those errors are filtering
[18:24:12] time to do some more
[18:28:24] how is this possible?
[18:28:52] I deleted a page, then saw that it was meant to be a user page
[18:29:11] so I restored it and moved it to the right place (and I had to delete it again and restore it again as the user already created a user page too)
[18:29:35] so now all the edits are on the right place, but this is the only thing I see in the history: https://commons.wikimedia.org/w/index.php?title=User:Zev_Rothkoff&action=history
[18:29:41] can you help?
[18:43:39] Reedy: I rebased https://gerrit.wikimedia.org/r/#/c/28631/ you approved but were in merge conflict state.
[18:54:32] https://commons.wikimedia.org/wiki/Commons:Village_pump#Edits_secretly_removed.3F
[18:54:42] please reply or do something about it
[18:54:44] thnx
[18:55:16] andre__: ^^
[19:33:35] !log updated payments cluster to 5d8309e21
[19:33:48] Logged the message, Master
[19:39:21] domas are you around by any chance?
[19:39:58] nope
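
The ReaderFeedback hotfix Reedy describes at 18:11 ("just remove the type hint") would look roughly like the sketch below. The hook signature is illustrative — the real method takes more parameters than shown — but the hinted first argument and the Skin/VectorTemplate mismatch come from the fatal quoted at 18:09.

    // extensions/ReaderFeedback/ReaderFeedback.hooks.php (illustrative excerpt)
    class ReaderFeedbackHooks {
        // Before: public static function ratingToolboxLink( Skin $skin ) { ... }
        // Dropping the Skin hint means a VectorTemplate no longer trips
        // the catchable fatal; the hook body itself is unchanged.
        public static function ratingToolboxLink( $skin ) {
            // ...unchanged body: builds the rating toolbox link...
            return true;
        }
    }
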
[19:41:39] Reedy: you around? any idea why the version link for MediaWiki off http://en.wikisource.org/wiki/Special:Version would be 404?
[19:42:09] For the git hash?
[19:42:12] yes
[19:42:23] 82c284e6988f797c049b5b6eb23d11fe1598d75a
[19:42:30] let's see what git log says..
[19:43:15] 82c284e6988f797c049b5b6eb23d11fe1598d75a
[19:43:54] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=log;h=refs/heads/wmf/1.21wmf2
[19:44:08] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=commit;h=6d7e0d323d8a4bdfb35b1dc7d97b430c8b75457f
[19:45:14] Ah
[19:45:20] chrismcmahon: because it's a merge commit at the top
[19:45:38] I think..
[19:47:33] OK. spot check of wikisource in FF, Chrome, IE7, things look pretty OK, other than IE7 is dog-slow and has non-fatal errors that have been there for some time already
[19:51:02] domas urgh, please let me know when you are available, it concerns https://bugzilla.wikimedia.org/process_bug.cgi and remark of Roan mentioning you
[19:51:18] gah! https://bugzilla.wikimedia.org/show_bug.cgi?id=19986
[19:51:43] and also the later remark by Hashar
[19:55:00] ToAruShiroiNeko: You should probably ask asher
[20:02:28] Reedy I have, which is why he posted that link there
[20:03:22] I dont know who to contact in regards to wiki renames, it seems like a few test runs are all thats needed :/
[20:09:15] !log Restarting Jenkins
[20:09:29] Logged the message, Master
[20:13:28] Reedy: power cycle!
[20:19:00] I wonder what it's doing...
[20:19:35] me guesses twiddling thumbs
[20:21:17] <^demon> Ah, and there's hashar.
[20:21:20] <^demon> Just when we need him :)
[20:21:28] wonderbar
[20:21:29] what is the problem ?
[20:21:48] <^demon> Jenkins got hung on a build. Reedy kicked jenkins to restart.
[20:21:56] <^demon> The build queue is still pretty backed up.
[20:21:58] :-(
[20:22:12] I know it already happened twice
[20:22:40] HOLY S<####censored>
[20:24:46] <^demon> It seems to be slowly catching up, so unless it hangs I'm thinking of just letting it do its thing.
[20:25:00] <^demon> (Although upping the number of executors might be nice :p)
[20:25:13] we can't really up it right now
[20:25:18] I have lowered it from 4 to 3
[20:25:23] <^demon> Ah.
[20:25:28] cause too many tests running at the same time cause a lot of disk I/O
[20:26:01] might want to switch to MySQL
[20:26:12] or host the sqlite database in a RAM disk
[20:26:16] or get some SSD on the server
[20:26:43] I traced a process that took 2 seconds to write the english l10ncache cdb file
[20:27:20] internet connection flappy :/
[20:27:41] hashar: how much ram is there?
[20:28:12] <^demon> Well changing sqlite to ram disk wouldn't fix the CDB problem.
[20:28:19] so we had two php processes with a 3.1GB RSIZE
[20:28:30] and 8.8GB VSIZE :/
[20:28:32] <^demon> Easiest way to fix the re-cache problem is to do a manual recache before starting the test run, and disable recaching for the duration.
[20:28:45] is https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching/6955/ hanging?
[20:28:45] I guess we should wrap the php process in some ulimit call
[20:28:45] <^demon> (from my armchair)
[20:30:11] hashar: did someone change the parser tests?
[20:30:37] hashar: And what about letting the tests time out after X?
[20:30:45] atop output at 8pm utc: http://dpaste.org/SfOip/
[20:30:59] ton of mem use and swap :/
[20:31:15] hoo: the tests are supposed to timeout ;-/
[20:31:23] AaronSchulz: git log it ?
[20:31:54] hashar: I know, but that seems to not always work... I thought about auto killing the executors
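
The ulimit wrapper hashar floats at 20:28 might look like the sketch below. The wrapper function and the specific limits are hypothetical; only the idea of capping the test's PHP process comes from the log.

    # Hypothetical wrapper: cap address space and CPU time so a leaky
    # run (like the 8.8GB-VSIZE one above) is killed by the kernel
    # instead of swamping the Jenkins host.
    run_capped_php() {
        (
            ulimit -v 3145728   # ~3 GB of virtual memory, in KB
            ulimit -t 600       # 10 minutes of CPU time
            exec php "$@"
        )
    }
    # e.g.: run_capped_php maintenance/update.php --quick
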
[20:35:38] ahhhh
[20:35:52] ^demon: so Ext-Wikibase seems to have a nasty mem leak
[20:35:53] php /var/lib/jenkins/jobs/Ext-Wikibase/workspace/maintenance/update.php
[20:36:04] that is the command line that was running :/
[20:37:20] 19:23:30 [exec] ...site_identifiers table already exists.
[20:37:21] 20:06:53 Build was aborted
[20:37:22] hmm
[20:39:41] no idea what is going on, should probably enable debug log
[20:39:47] for mediawiki
[20:40:01] hashar: speaking of logs... could you perhaps pull something from the error log for me?
[20:40:15] which error log ?
[20:40:22] wikidata.org
[20:40:23] [76c44306] 2012-10-31 20:34:13: Fatal exception of type MWException.
[20:40:51] please post the full info to https://bugzilla.wikimedia.org/show_bug.cgi?id=41574
[20:41:39] looking
[20:41:53] if I still remember where to find the exception logs :-]
[20:42:35] Reedy probably does :)
[20:42:40] but he's not responding.
[20:42:50] found
[20:43:05] hashar: is it the same place where thumbnails?.log is?
[20:43:29] DanielK_WMDE: pasted it :-]
[20:43:40] though I should probably attach it
[20:44:51] DanielK_WMDE: http://bug-attachment.wikimedia.org/attachment.cgi?id=11271
[20:45:13] Nemo_bis: yes. We have logs written in some places
[20:45:18] and its made available readonly on some server
[20:45:21] (as I understand it)
[20:45:31] at least there is only one place to remember about :-]
[20:45:54] hashar: thanks a lot, that helps!
[20:46:25] hashar: it's not mentioned anywhere on wikitech
[20:46:36] does it contain private data or could it be shared somehow?
[20:46:45] definitely private / security data sorry :-(
[20:46:55] you need to sign a non disclosure agreement to read those logs
[20:46:58] (I think)
[20:47:36] I see
[20:51:47] Sorted?
[20:53:18] Reedy: yea, thanks
[20:57:11] Krinkle: does your prototype cleanup include dirs like RoanKattouw_away's http://prototype.wikimedia.org/logs/ ?
[20:58:15] Nemo_bis: When that server is going to be decommissioned, the entire harddrive is to be considered nonexistent, gone.
[20:58:23] however I don't do decommissioning.
[20:58:40] I just nuke what I believe can go and has had a chance to be migrated, so that decommissioning becomes easier.
[20:59:17] Nemo_bis: I won't delete that, but if you need it, make sure whoever owns it makes a backup.
[20:59:17] Nemo_bis: Note that the elephant bot isn't in use anymore
[20:59:21] that log dir is idle now
[20:59:25] we use wm-bot now
[20:59:31] which is on labs
[20:59:53] Krinkle: I know
[21:00:01] http://bots.wmflabs.org/~petrb/logs/
[21:00:03] petan: can you copy that dir on labs?
[21:00:18] ah, populating it as archive
[21:00:30] that should work, there is no overlapping history afaik.
[21:00:42] Krinkle: can you generate a gzip or something?
[21:00:45] note though that you shouldn't use labs as permanent storage either afaik.
[21:00:46] to avoid wgetting
[21:01:06] not until we have backups and tool labs anyway.
[21:01:12] but whatev, it should be fine.
[21:01:18] heh, I know
[21:01:28] I just wrote something about it somewhere
[21:01:30] certainly more reliable than prototype I suppose
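
hashar's "should probably enable debug log for mediawiki" (20:39) could translate into LocalSettings.php additions like the following — a sketch with a hypothetical path, not the actual CI configuration:

    // Hypothetical additions to the test wiki's LocalSettings.php:
    $wgDebugLogFile = '/var/lib/jenkins/logs/mw-debug.log'; // path is illustrative
    $wgShowExceptionDetails = true; // surface full traces for fatals like [76c44306]
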
[21:01:34] Nemo_bis: link?
[21:01:42] looking
[21:02:21] Nemo_bis: I'm making a tarball now
[21:03:05] Krinkle: https://bugzilla.wikimedia.org/show_bug.cgi?id=34953#c6
[21:03:12] Krinkle: thanks, I'll also copy it to archive.org
[21:04:37] (c6 was in reply to hashar btw)
[21:04:42] I'm not sure where it is stored on the server, its not in public-html
[21:04:48] hm
[21:05:10] probably aliased from apache conf
[21:06:16] Alias /logs/ /home/catrope/mwbotlogs/
[21:06:20] yep
[21:07:18] mwbot has been replaced by wmbot AFAIK
[21:07:27] yes, we're archiving it :)
[21:07:59] Oh good
[21:08:28] RoanKattouw: `tar -zcvf name.tar.gz path/to/dir` should do it, right ?
[21:08:36] Hm.. now how do I get it from ssh to my hard drive..
[21:09:10] Krinkle: On your local machine, scp prototype.wikimedia.org:/path/to/file /local/path
[21:09:15] aha
[21:09:52] scp prototype.wikimedia.org:/home/krinkle/mwbotlogs-prototypewikimediaorg-20121031.tar.gz .
[21:09:53] nice
[21:10:05] you can also use sftp username@host
[21:10:09] cd around and then do get filename
[21:10:34] lol, I suppose I could've just put it in /var/www/ though
[21:10:34] and download it
[21:10:48] that too
[21:10:55] I'll do that anyway since you need it, Nemo_bis
[21:11:02] :)
[21:11:37] Krinkle: http://www.expandrive.com/ <- Useful tool for using remote SFTP etc as local drives
[21:12:28] Nemo_bis: http://prototype.wikimedia.org/tmp/
[21:13:25] tyvm
[21:14:10] Reedy: nice
[21:20:15] https://archive.org/details/WikimediaIrcLogs -> http://archive.org/download/WikimediaIrcLogs/mwbotlogs.zip/
[21:21:39] Hoarder!
[21:22:38] heh
[21:22:58] isn't zipview.php pretty
[21:23:40] hey Reedy out of curiosity, did anyone figure out anything about the job queue?
[21:23:42] or was it all actually ok?
[21:24:08] Asher noticed that only some types of jobs were being run
[21:24:22] IIRC, it was the priority ones according to the jobs loop
[21:24:31] and yet we weren't seeing huge accumulation on the job tables
[21:24:36] hmmmm
[21:24:40] using !log sometimes? :p
[21:24:54] I think robla asked Tim to have a look, but I don't know if he did/if he did what the result of it was..
[21:24:59] ok
[21:25:12] anyways looks like you guys are on it
[21:25:14] http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=43352&m=enwiki_JobQueue_length&r=hour&z=small&jr=&js=&st=1351718706&z=large
[21:25:20] thanks for the update
[21:25:22] enwiki job queue is still slowly climbing :(
[21:25:34] ah argh
[21:25:34] so it is
[21:26:21] it has a long way to go before it gets as backed up as zh was
[21:26:30] lol
[21:26:33] hopefully the cause is found well before that
[21:26:45] so...yeah....that
[21:27:00] here's what we know....
[21:27:13] we seem to only have enotif jobs coming through
[21:27:22] yay spamming
[21:27:27] being run? so the refreshlinks ones don't run?
[21:27:33] Yeah
[21:27:35] apergos: correct
[21:27:41] this jives with what I was seeing when watching the command line
[21:27:45] If you watch runJobs.log on fluorine
[21:27:50] uh huh
[21:28:09] I wonder why it doesn't pick the other ones up, so bizarre
[21:28:10] what I don't have any visibility into is whether refreshlinks is hung or just not running or what
[21:28:41] I think that might require someone with root to figure that out
[21:29:06] tail -f /a/mw-log/runJobs.log | grep -v enotifNotify
[21:29:06] a whole lotta nothing
[21:29:20] well it would need to actually be running on a host in order to be hung
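
The archive hand-off RoanKattouw and Krinkle work through above (21:08–21:11), as one end-to-end sketch; the commands are the ones quoted in the log, with the aliased log directory as the input:

    # On prototype: pack the Apache-aliased log directory into a tarball.
    tar -zcvf mwbotlogs-prototypewikimediaorg-20121031.tar.gz /home/catrope/mwbotlogs/
    # On your local machine: pull the tarball down over SSH.
    scp prototype.wikimedia.org:/home/krinkle/mwbotlogs-prototypewikimediaorg-20121031.tar.gz .
    # Alternative: interactive SFTP, then cd around and `get filename`.
    sftp username@prototype.wikimedia.org
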
[21:29:53] apergos: when things are working, how does the job get kicked off?
[21:30:11] example, I am on mw12
[21:30:19] here's /usr/local/bin/jobs-loop.sh -t 300
[21:30:37] I see it spawn a
[21:30:39] robla: can you !log please? :)
[21:30:46] php -n MWScript.php nextJobDB.php --wiki=aawiki --type=sendMail (or various other types)
[21:30:48] very short lived
[21:31:05] Nemo_bis: what am I !logging?
[21:31:08] ArticleFeedback, enotifNotify MoodBarHTMLMailer
[21:31:12] !log please
[21:31:26] Logged the message, Master
[21:31:26] ossm
[21:31:40] now I see a pile of enotif for itwiki
[21:31:54] not one refreshLinks
[21:31:58] Oooh, MoodBarHTMLMailerJob
[21:31:59] so I just don't see how it can be hung on this end
[21:32:10] Vito_away: what did you do?
[21:32:22] Reedy: smoking gun?
[21:32:28] types="sendMail enotifNotify uploadFromUrl fixDoubleRedirect MoodBarHTMLMailerJob ArticleFeedbackv5MailerJob RenderJob"
[21:32:29] No, just something different
[21:32:57] but I don't think we are doing a type based one
[21:33:18] ah but it will pick those up before anything else, right
[21:34:09] yeah, they're weighted
[21:34:28] Optimisation du code svg. minorEdit=0 oldid=0 watchers=Array STARTING
[21:34:28] Optimisation du code svg. minorEdit=0 oldid=0 watchers=Array t=55 good
[21:34:28] one could hmm set -x on this bash script, run it with output to a file for a bit
[21:34:28] o_0
[21:34:33] see what it thinks it's doing
[21:34:56] or
[21:35:46] !log payments synchronized to 5d40dbe27d05
[21:36:00] Logged the message, Master
[21:36:38] db=`php -n MWScript.php nextJobDB.php --wiki=aawiki`
[21:36:42] echo $db
[21:36:42] Fatal error: Call to undefined method Job::defaultQueueConditions() in /usr/local/apache/common-local/php-1.21wmf3/maintenance/nextJobDB.php on line 103
[21:36:53] heh
[21:36:59] someone wanna look at that?
[21:37:22] Oooh
[21:37:24] Let's see
[21:37:34] now we're talkin'
[21:37:48] Why isn't that appearing in the error logs...
[21:37:48] works fine for specific types, but not without the --type option
[21:37:56] it winds up shoved into the db variable
[21:38:16] if ( $type === false ) {
[21:38:16] $conds = Job::defaultQueueConditions( );
[21:38:16] } else {
[21:38:16] $conds = array( 'job_cmd' => $type );
[21:38:16] }
[21:38:32] Yeah, defaultQueueConditions doesn't exist
[21:38:37] er :-D
[21:39:12] Hmm
[21:39:21] Do the job runners only run php code from one version?
[21:39:27] OH
[21:39:35] Do they use the php shortcut?
[21:39:47] ie /usr/local/apache/common/php
[21:39:53] invoked by
[21:39:54] AaronSchulz: ^
[21:40:01] php -n MWScript.php blah
[21:40:10] hmm, no...
[21:40:16] mwscript should take care of that
[21:40:49] nice -n 20 php MWScript.php runJobs.php --wiki="$db" --procs=12 --maxtime=$maxtime &
[21:40:51] I can't see anything obvious that Job::defaultQueueConditions() should be related with
[21:41:06] that's the invocation for default (nonpriority) .... once db is set to something useful
[21:41:32] nice -n 20 php MWScript.php runJobs.php --wiki="$db" --procs=12 --type="$type" --maxtime=$maxtime &
[21:41:36] that's the invocation for specific types
[21:41:45] where db is set by
[21:41:46] db=`php -n MWScript.php nextJobDB.php --wiki=aawiki --type="$type"`
[21:42:00] and that works properly (or db is empty, since there's nothing pending of the specific type)
[21:42:15] Based on the code that is in wmf2 for defaultQueueConditions(), I'm not quite sure why it was removed..
[21:42:23] these are all invoked from the cwd /usr/local/apache/common-local/multiversion
[21:42:28] ok
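
The by-hand diagnosis above, condensed (run from /usr/local/apache/common-local/multiversion on a job runner): the PHP fatal is printed to stdout, so the backtick command substitution silently swallows it into the variable — which is why jobs-loop.sh never surfaced the error.

    # Works: with --type, the code path calling defaultQueueConditions()
    # is never reached, and $db comes back as a wiki name (or empty).
    db=$(php -n MWScript.php nextJobDB.php --wiki=aawiki --type=sendMail)
    # Breaks: without --type, the fatal error text itself becomes the "db name".
    db=$(php -n MWScript.php nextJobDB.php --wiki=aawiki)
    echo "$db"
    # => Fatal error: Call to undefined method Job::defaultQueueConditions() ...
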
[21:43:10] JobQueueGroup getDefaultQueueTypes() seems to be its replacement...
[21:43:17] But it's not a static method (though, could be made one)
[21:43:41] who added it?
[21:43:54] and do they happen to be around now? :-)
[21:44:07] * robla is looking through git now
[21:44:08] I'm highly guessing it was moved by AaronSchulz's rewrite of job stuff
[21:44:14] ditto
[21:45:02] well he oughta be able to chime in pretty quickly on whether you could just staticify (ew) the function and change that in the script and be done
[21:45:40] getDefaultQueueTypes and getQueueTypes would both need staticing..
[21:47:36] https://gerrit.wikimedia.org/r/#/c/13194/
[21:48:03] nextjobdb wasn't touched
[21:49:38] grrr
[21:50:03] The simplest fix is to replace it with array();
[21:50:05] oh I am so not opening this in gerrit
[21:50:10] I do not want 25 tabs of crap
[21:50:15] haha
[21:50:29] live hack time
[21:50:40] exciting
[21:52:18] apergos: try the running of nextJobDB.php again?
[21:52:36] yay
[21:52:44] (it gave me a db!)
[21:52:48] woo ;)
[21:52:53] Yes, the schema changes are needed. Use JobQueueGroup::singleton()->pop() to pop jobs. You can run them like how wiki.php and runJobs.php do.
[21:52:58] ?
[21:53:12] ..?
[21:54:12] just wondering if someone added the docs and if schema changes were applied
[21:54:15] now we have to wait a while to see if the enwiki job queue drops off
[21:54:22] schema changes were definitely done
[21:55:21] Do the job runners want restarting to take account of the updated code?
[21:56:40] good q
[21:56:48] * apergos looks at mw12 again
[21:56:49] IIRC they reload stuff every X
[21:56:55] (I doubt it but checking)
[21:57:54] see this is out of a bash script, it should rerun the job every time
[21:58:04] I mean reading the php script anew each time
[21:58:18] right
[21:58:45] anything in the logs?
[22:00:32] nope
[22:00:45] hmph
[22:01:37] started one by hand
[22:01:47] ie feeding it the value for db
[22:01:50] shows up in log?
[22:02:27] it was refreshLinks2 for jawikinews
[22:02:41] is it still going?
[22:02:47] no
[22:02:48] finished quickly
[22:03:01] ok, yeah
[22:03:01] that's logged
[22:03:09] Took a whole 4 seconds
[22:03:14] but no other ones?
[22:03:29] I'm just doing manually what the bash script does
[22:03:38] after it gets done with a loop of the priority types
[22:03:42] in the last 1000 lines of the log, 58 are jawiki refreshLinks2
[22:03:47] everything else is enotifNotify
[22:04:11] Reedy: do you have some hack somewhere?
[22:04:19] yeah
[22:04:39] nextJobDB.php line 103
[22:05:28] Reedy: so aawiki is special?
[22:05:56] * AaronSchulz ran nextJobDB.php a lot yesterday
[22:07:27] hmm it has to find no pending jobs at all of the priority types before doing anything else
[22:07:31] none in any db
[22:08:30] Reedy: how long was aawiki running wmf3? I can't see how this explains the queue growth yesterday
[22:09:33] https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=commitdiff;h=e033c29d75c9082cd4a4e3be4f319d58bafe5de9
[22:09:39] was changed earlier today
[22:10:23] Reedy: Jenkins seems to have hung up again (did nothing since 21:56 UTC in all 3 runners)
[22:10:51] hashar: ^^
[22:11:24] * apergos runs by hand for another random wiki (from the db value)
[22:11:39] it's still going too
[22:13:07] lots of mgwiktionary jobs
[22:13:15] yep that's the one
[22:13:24] it was the db from nextjobdb
[22:13:50] so why does it work manually, but not how it was supposed to... Or rather, why did it stop working
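
A sketch of the live hack applied at 21:50, following Reedy's "the simplest fix is to replace it with array()". The merged change (Gerrit 31138, per 22:30 below) instead restores most of the deleted function's body, so treat this as the stopgap, not the final fix.

    // maintenance/nextJobDB.php, around line 103 (1.21wmf3):
    if ( $type === false ) {
        // Was: $conds = Job::defaultQueueConditions();
        // That method was dropped in the job-queue rewrite (Gerrit 13194),
        // so without --type the script fataled. Stopgap: no conditions,
        // i.e. consider every pending job row.
        $conds = array();
    } else {
        $conds = array( 'job_cmd' => $type );
    }
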
[22:13:58] * apergos will restart the job runner over here after that, though in theory it should have no impact whatsoever
[22:14:06] yes, both of those questions
[22:16:33] restarted
[22:16:47] looks like those have finished...
[22:17:03] uh oh
[22:17:17] now I don't see it spawning anything
[22:17:17] uh oh?
[22:17:30] do you get server info in the runJobs log?
[22:17:44] yup
[22:17:44] 2012-10-31 22:16:07 mw12 mgwiktionary: htmlCacheUpdate Endrika:pron_X-SAMPA table=templatelinks start=71040 end=71984 t=5104 good
[22:17:44] if so can you see if mw12 is sending anything?
[22:17:44] or similar
[22:17:49] after mgwiktionary
[22:17:53] cause that completed
[22:18:14] ah there's one
[22:18:16] enotif
[22:18:17] meh
[22:18:32] yup, only enotif
[22:19:35] do we have enough enotif jobs queued that we will never get to the non-priority jobs?
[22:19:59] enotif is enabled everywhere
[22:20:17] well what I mean is
[22:20:28] if jobs of that form are put into the queue often enough
[22:20:35] the job runners will never run out of them
[22:20:49] so we'll never proceed to the non-priority jobs (= refreshlinks2)
[22:20:59] But it wouldn't explain why the load dropped by like 80%
[22:21:09] no
[22:21:12] you're right about that
[22:21:45] it does enotif 12 procs at a time, just like the other ones
[22:22:35] I'd expect enotif to be very quick
[22:22:44] enwiki table has no enotif jobs
[22:23:17] yeah but that's not how the loop works
[22:23:27] yeah
[22:23:40] have a look at modules/mediawiki_new/jobrunner/jobs-loop.sh.erb in puppet
[22:23:57] the loop says, if *any* db has enotif, do those.
[22:24:07] then if any db has the next type
[22:24:16] and only if we get through every priority type with no db having jobs
[22:24:21] do we move to the rest
[22:24:25] Reedy: we don't have any backport tagging system do we?
[22:24:34] AaronSchulz: nope
[22:24:40] Should we disable enotif for a while and see what happens?
[22:26:19] no idea tbh
[22:27:09] TimStarling: did you look at the job queue yesterday?
[22:27:14] no
[22:27:40] is it broken?
[22:27:51] :-D
[22:28:24] As of half past midnight yesterday, job runner load dropped to about 20% of what it was.. and has stayed there
[22:28:48] i would like to add a development guideline (aka coding convention) about primary keys, if warranted. is it a good idea for every new table created to have a primary or unique key, even if the need for it isn't immediately evident/present?
[22:28:55] since several of you are looking at it now, do folks mind if I wander off to sleep?
[22:29:06] And only jobs that were high priority have been running since
[22:29:09] I don't think I'm going to glean anything more from the script
[22:29:15] apergos: no, you are a bad person for wanting to sleep
[22:29:15] apergos: go forth and sleep. thanks so much for keeping us focused on this
[22:29:37] thanks for poking at it. see yas tomorrow (nya nya AaronSchulz :-P)
[22:29:52] AaronSchulz: want to fix that fatal properly? :p
[22:30:14] the defaultconds one?
[22:30:35] yeah, please
[22:30:45] Reedy: just merge https://gerrit.wikimedia.org/r/#/c/31138/
[22:31:24] ohh, you did already
[22:31:37] that's mostly the old code of the deleted function
[22:31:39] heh
[22:32:10] binasher: bringing you up to speed: looks doubtful that TMH was the culprit, but ^^ 31138 is probably our fix
[22:33:35] hah, looks good
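
A simplified sketch of the jobs-loop.sh priority behaviour apergos describes at 22:23–22:24. The real script is the puppet-managed modules/mediawiki_new/jobrunner/jobs-loop.sh.erb; this condensed version only illustrates the ordering, reusing the invocations quoted earlier.

    # Condensed illustration, not the puppet-managed script.
    types="sendMail enotifNotify uploadFromUrl fixDoubleRedirect MoodBarHTMLMailerJob ArticleFeedbackv5MailerJob RenderJob"
    maxtime=300
    while true; do
        ran_priority=false
        for type in $types; do
            # nextJobDB.php prints the name of some db with pending jobs of $type
            db=$(php -n MWScript.php nextJobDB.php --wiki=aawiki --type="$type")
            if [ -n "$db" ]; then
                nice -n 20 php MWScript.php runJobs.php --wiki="$db" --procs=12 --type="$type" --maxtime=$maxtime
                ran_priority=true
            fi
        done
        # Only when *no* db has *any* priority-type job left does the loop
        # fall through to the rest (refreshLinks2 etc.) — so a steady
        # trickle of enotif jobs can starve the default queue.
        if [ "$ran_priority" = false ]; then
            db=$(php -n MWScript.php nextJobDB.php --wiki=aawiki)
            if [ -n "$db" ]; then
                nice -n 20 php MWScript.php runJobs.php --wiki="$db" --procs=12 --maxtime=$maxtime
            fi
        fi
    done
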
[22:34:02] binasher: what do you think of my proposal above, at 18:28
[22:36:02] 18:28 in what time zone?
[22:36:11] 19 min ago
[22:36:28] oh, heh. i was looking far back
[22:36:43] figured you meant utc
[22:36:48] leucosticte_: channel is a bit busy, maybe you could try #wikimedia-dev or #mediawiki where it's also more on topic
[22:37:37] apache 28215 1.0 0.2 292112 27256 ? SN 22:37 0:00 php MWScript.php runJobs.php --wiki=shwiki --procs=12 --maxtime=300
[22:37:44] no type. this is good I think
[22:37:58] right. really gone now :-/
[22:38:07] AaronSchulz: yup
[22:38:11] That fixed it
[22:38:11] apergos: ^
[22:39:09] https://gdash.wikimedia.org/dashboards/jobq/
[22:39:12] * Reedy waits
[22:39:21] Nemo_bis: will do, thanks
[22:40:08] leucosticte_: yes, i would advise that, at least whenever innodb is the database storage engine
[22:40:10] http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&s=by+name&c=Jobrunners%2520pmtpa&tab=m&vn=
[22:40:17] apergos: binasher ^^ hahaa
[22:40:51] leucosticte_: i recommend taking a look at http://www.percona.com/files/presentations/WEBINAR-MySQL-Indexing-Best-Practices.pdf
[22:40:59] there we go!
[22:41:03] uuh
[22:41:16] * Nemo_bis wants 20min graphs too
[22:41:21] run job runners, run!
[22:41:27] https://gdash.wikimedia.org/dashboards/jobq/deploys
[22:41:55] So why did they all stop when they did?
[23:18:44] Hi. Are you aware of the issue with the login (every 6th edit or so, Commons tells me that I'm not logged in)?
[23:18:59] (even though I am logged in)...
[23:25:25] Well, now you are. Thanks in advance for fixing. Good night.
[23:27:44] csteipp: ^
[23:27:49] Not sure if it's related..
[23:28:36] Thanks Reedy, I hadn't seen that. Any links?
[23:35:55] Nemo_bis: ¿?
[23:41:30] gn8 folks
[23:55:35] heading to bed, have a good night
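
On leucosticte_'s primary-key question (22:28) and binasher's advice (22:40): a minimal, hypothetical table in MediaWiki's schema style, giving every new table an explicit primary key. Under InnoDB the primary key is the clustered index; without one, InnoDB falls back to a hidden 6-byte row id, which is why the advice applies especially to that engine.

    -- Hypothetical example, not an actual MediaWiki table.
    CREATE TABLE /*_*/example_widget (
        ew_id int unsigned NOT NULL AUTO_INCREMENT,
        ew_name varbinary(255) NOT NULL,
        PRIMARY KEY (ew_id)
    ) /*$wgDBTableOptions*/;
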