[00:00:28] I am around [00:00:57] Reedy: why do MW extensions have no 1.21 milestone on bugzilla? [00:07:46] probably because nobody has created one yet [00:07:55] I can do that, if it's wanted [00:11:04] * chrismcmahon cranks up IE7 just for the heck of it. annoying errors on test2 are annoying, but editing works [00:15:56] chrismcmahon: do you have a browserstack account? [00:16:21] ori-l: crossbrowsertesting, same idea I think [00:16:29] ah, cool [00:16:55] ori-l: and I have a VM with IE7 that I can control much more finely than the hosted services [00:17:29] chrismcmahon: if you had to explain "Zuul" in one short sentence, what would that be? [00:17:55] evening guys - would one of you kind souls happen to know where you would find details of the file size limits on WP please? [00:18:20] Got someone trying to upload a 241MB .ogv file, and getting told it's bigger than the server is configured to allow [00:18:24] and that's going to commons [00:18:32] andre__: hashar is the expert, but as I understand it, zuul provides a way to control secure access to the Jenkins host, which is required for two-way communication between Jenkins and gerrit [00:19:03] chrismcmahon: thanks, I'll try to boil that down into something shorter :) [00:19:17] * andre__ working on a list of upstream bugtrackers [00:19:46] BarkingFish, if it needs to be uploaded and it's bigger than the allowed size, you can make a bug and a shell user will download it into production [00:20:37] we don't want it fiddled with or anything Krenair - simply wanting to know what the largest size is that the server is config'd to allow :) [00:21:41] the guy just told me he's re-encoding the vid at a *much* reduced quality, I think it's possibly something to do with Frankenstorm :) [00:21:58] It would just be good to know what the max file size is for the future though [00:29:22] Maybe AaronSchulz can tell you, BarkingFish .. [00:32:16] 500mb [00:33:25] AaronSchulz, does that apply to all filetypes?
[00:33:43] yes, though that is configurable [00:34:23] ah. I ask because as I mentioned above, user in #wikipedia-en trying to upload an ogv file to commons got hit at 241MB [00:34:38] using UW? [00:34:51] yup [00:36:02] did he check the "chunked upload" box in his/her prefs? [00:36:18] idk, AaronSchulz - I'll check [00:40:31] AaronSchulz, sorry, I can't get a response from the guy right now - he's just pinged off the network. [00:40:37] I'll speak to him when he comes back on :) [00:40:48] Thanks for getting back to me anyhow :D [01:07:25] AaronSchulz, thanks for your patience. Just spoke with the user concerned, Hurricane_Sven, and he said that he didn't have anything checked on that failed upload, which was btw, 241MB and not 261MB :) [05:23:33] Is Hurricane Sandy affecting the eqiad datacenter? [07:47:19] Jasper_Deng: yes/no - the datacenter itself is ok for the moment, connectivity in the area is having some issues [07:54:20] oh, right, there's that.. [07:55:02] LeslieCarr: as evidenced by my frequent disconnects [07:55:12] * Jasper_Deng sees many sysadmins on when they're usually not [07:55:46] Yeah … turns out when one of the major connectivity points in the world gets flooded, the internet is not too happy :-/ [07:56:28] LeslieCarr: affecting us everywhere? [07:57:16] hard to say 100% for certain ? definitely connectivity across the ocean is having issues, and networks that have major NYC points are [07:57:43] but a network that is connecting to us in ashburn and doesn't touch nyc would be okay … except they may have overloaded links due to avoiding nyc, etc [07:58:07] chain reaction [07:58:08] so sort of ?
[07:58:21] or wait… "it's complicated" ;) [07:58:31] that sounds like a relationship in turmoil, lol [07:59:07] heh [08:08:30] en.wp is slow for me from .nl [08:30:49] I imagine that connectivity into the east coast from overseas may be a bit shaky right now (*cough*sandy*cough*) although it looks like our dc there is in fine shape itself [08:32:21] Reedy: this is just a heads up for when you're around, the job runner ganglia graphs show a sudden dropoff at midnight utc today, as far as what they are processing. when I look at the job count on a few large wikis it looks very low... is it possible some jobs aren't being queued now? or might something else have changed in mw? [08:33:46] maybe it's just a sign that everything caught up but the drop looks suspicious to me [08:47:43] What does it require to attain autoconfirmed status on lovely new Wikidata? is it the standard 4 days? [08:57:25] ah DanielK_WMDE, you might know: [08:57:25] What does it require to attain autoconfirmed status on lovely new Wikidata? is it the standard 4 days? [09:00:44] apergos: maybe someone ran the deduplication script? [09:00:46] dunno [09:00:53] looks likely, zh.wiki has only a few thousand jobs compared to four million yesterday [09:00:53] I just figure I'll point it out and he can take a peek, make sure there's nothing actually broken [09:01:01] I see [09:01:10] well however that happened it's a good thing [09:01:21] :) [09:03:22] hmm apergos you could actually be right, https://gdash.wikimedia.org/dashboards/jobq/ is suspicious [09:12:25] ok I'm looking at this [09:12:39] yeah hm [09:12:53] well we'll see what he says [10:52:15] Nikerabbit: hello [10:52:38] hi ori-l [10:53:08] just saw your e-mail. taking a look. is it already resolved or is there something i could still do?
[10:54:35] ori-l: in this case those were only annoyances, I only needed to modify InitialiseSettings.php and update one other extension file [10:55:28] I left the other changes as they were [10:56:16] bbl, have to do some shopping [10:56:31] Nikerabbit: it looks like it's deployed [10:56:33] unless i'm misreading it [10:57:22] well, i'll try to figure out what went wrong -- ttyl [10:58:24] perhaps I'm misinterpreting this: [10:58:25] nikerabbit@fenari:/home/wikipedia/common/php-1.21wmf2$ git pull [10:58:26] Updating 18e51f3..f96d2db [10:58:26] Fast-forward extensions/EventLogging | 2 +- extensions/NewUserMessage | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) [10:59:04] oh, i think i know [10:59:57] i updated core and merged, and then on fenari, i git fetch / git checkout master in the extension subdirectory, because i didn't want to pull any other changes [11:00:19] so the change got deployed, but the submodule ref wasn't updated [11:00:45] sounds plausible [11:01:43] the reason i sometimes do that is....... precisely the reason you wrote about -- git pull grabs a bunch of other people's changes [11:02:06] but maybe that's only making it worse. anyway, i don't mean to delay you [11:03:13] ori-l: of course you and others are not doing it on purpose, I just wanted to point out a problem
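The stale-submodule-ref pitfall ori-l describes above can be reproduced with throwaway local repos; this is a sketch (repo names and layout are illustrative, not the real deployment tree):

```shell
set -e
# A sketch of the failure mode: checking out a new commit *inside* a
# submodule does not update the pointer recorded in the parent repo.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.org
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.org
tmp=$(mktemp -d); cd "$tmp"

# An "extension" repo and a "core" repo embedding it as a submodule:
git init -q ext
git -C ext commit -q --allow-empty -m v1
git init -q core
git -C core -c protocol.file.allow=always submodule add -q "$tmp/ext" ext
git -C core commit -q -m 'add extension submodule'

# A new commit lands in the extension upstream:
git -C ext commit -q --allow-empty -m v2

# The deployer updates the checkout inside the submodule directly
# (the equivalent of "git fetch / git checkout master" in the subdir):
git -C core/ext fetch -q origin
git -C core/ext checkout -q FETCH_HEAD

# ...but the parent repo still records the old commit, because nothing
# ran "git add ext" and committed there -- the stale submodule ref:
recorded=$(git -C core ls-tree HEAD ext | awk '{print $3}')
checked_out=$(git -C core/ext rev-parse HEAD)
[ "$recorded" != "$checked_out" ] && echo 'submodule ref was not updated'
```

The fix is to stage and commit the submodule bump in the parent repo (`git -C core add ext && git -C core commit`) so the next `git pull` doesn't fast-forward the pointer back.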
[13:20:33] noo [13:20:46] grr [13:21:11] it's an easy query, just don't have access to the live DB myself [13:21:46] for the sake of tracking, it's better if you ask on bugzilla first anyway [13:21:59] Betacommand: hint: paste the SELECT in here instead of waiting for someone to show up first [13:22:06] but maybe someone answers here now [13:22:21] and pastebin the output as you see it [13:22:21] (here or on bugzilla linked here) [13:22:27] enwiki_p [13:22:29] select page_namespace, page_title, page_id,cl_timestamp from categorylinks left join page on cl_from = page_id where cl_to ="Candidates_for_speedy_deletion" and page_id IS NOT NULL and page_id not in (178897,454473,0,31281758,23897825) order by cl_timestamp [13:24:49] I'm getting ~87 hits, while enwiki is only showing ~30 [13:36:26] looks like it is toolserver related
There's a conflict in docs/jenkins.md [15:10:07] <^demon> (Git doesn't know how to merge it, I just tried) [15:11:54] chrismcmahon, ^demon: I remember resolving a conflict today and pushing it to gerrit [15:13:04] <^demon> I see no second patch set for that change. [15:15:36] Nemo_bis: ah, thanks! [15:18:56] ^demon: I am not sure how to do it [15:19:16] I will create a gist with the things I tried, I guess I did something wrong [15:21:31] <^demon> You need to pull or cherry pick the change into your clone. Git will complain about merge conflicts on docs/jenkins.md. After resolving them (use your diff/merge/text editor of choice), amend the existing commit and push back to gerrit. [15:22:27] ^demon: I see, I did not amend [15:22:29] will do that right now [15:41:31] I hate gerrit [15:43:21] Hint: A potential Change-Id was found, but it was not in the footer of the commit message. [15:43:54] looks like conflicts are under change-id in commit message, and gerrit does not like that [16:01:02] Reedy: you around? [16:02:08] yup [16:02:45] dunno if you saw the backread, it was a while back [16:03:05] the job runners were looking funny [16:03:10] well to be specific they were looking very idle [16:03:41] http://ganglia.wikimedia.org/latest/?r=day&cs=&ce=&m=load_one&s=by+name&c=Jobrunners+pmtpa&h=&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 [16:03:48] they dropped off exactly at midnight utc [16:04:11] so when I look at the log from runjob there are things running, just not very many [16:04:16] almost as if....
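The resolve-amend-push workflow ^demon describes can be exercised against throwaway local repos; in the real case the change would be fetched from gerrit (e.g. `git fetch origin refs/changes/69/30769/1`) rather than a local branch, but the resolve/continue/push steps are the same. File contents here are made up:

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.org
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.org
cd "$(mktemp -d)"
git init -q repo && cd repo
trunk=$(git symbolic-ref --short HEAD)

mkdir docs && printf 'old line\n' > docs/jenkins.md
git add . && git commit -qm base

# The pending change edits docs/jenkins.md...
git checkout -qb change
printf 'change line\n' > docs/jenkins.md
git commit -qam 'My change

Change-Id: I0123456789abcdef0123456789abcdef01234567'

# ...but the target branch has moved on, so picking it conflicts:
git checkout -q "$trunk"
printf 'trunk line\n' > docs/jenkins.md
git commit -qam advance

if ! git cherry-pick change 2>/dev/null; then
    # Resolve the conflict markers (here: just take the change's
    # version), stage, and continue. Keep the Change-Id as the footer
    # of the commit message, or gerrit will reject the push.
    printf 'change line\n' > docs/jenkins.md
    git add docs/jenkins.md
    git -c core.editor=true cherry-pick --continue
fi
git log -1 --format=%B | grep Change-Id
# a real workflow would finish with: git push origin HEAD:refs/for/master
```

Because the cherry-pick preserves the original commit message, the Change-Id survives and gerrit treats the push as a new patch set of the same change; zeljkof's "Change-Id not in the footer" error is what happens when leftover conflict text ends up below that line.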
[16:04:22] not much stuff is getting queued somehow [16:04:37] wondering if you might have any insight or if it's all ok [16:05:13] and when I look at the number of outstanding jobs via the api for a few big projects those were all low [16:05:19] not as if: very few jobs are being queued [16:05:43] as https://gdash.wikimedia.org/dashboards/jobq/ shows [16:05:43] Hmm [16:05:57] 1.21wmf3 was pushed, but only to test/test2/mediawikiwiki/wikidatawiki [16:06:03] So that shouldn't have made any difference.. [16:06:31] Seddon is experiencing something that looks like broken job queue [16:06:45] translation notification bot does nothing, it seems [16:08:32] runJobs looks fine, if maybe a little low on notifications [16:09:59] so to quote Nemo_bis from earlier... [16:09:59] zh.wiki has only a few thousand jobs compared to four million yesterday [16:10:14] (I didn't check that, but if so it's odd) [16:10:14] I think Asher may have truncated the table (again) [16:10:16] ahahaha [16:10:22] well that would solve the mystery then [16:10:29] thanks :-D [16:10:50] Should probably slap him for not !log'ing it ;) [16:11:41] ok but Seddon has queued his stuff today, not before 0.00 UTC [16:12:00] three separate attempts to use the notification tool [16:12:05] http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&v=506&m=enwiki_JobQueue_length [16:14:24] Reedy, I am guessing that those ever increasing graphs represent something broken and that my requests are somewhere in there :P [16:14:47] probably :D [16:15:26] also that now you should fear when they start decreasing, because people will receive three duplicate notifications :p [16:16:01] thankfully the third was one to myself as a test :P [16:16:15] There's numerous enwiki enotif jobs being run [16:16:43] apergos: Has anyone tried restarting a job runner? [16:16:48] Either the process or the actual machine?
[16:17:00] I have not [16:17:26] the boxes (I looked at one) seem fine [16:17:30] nothing abnormal [16:20:40] 20121030002257 [16:20:55] so the current enwiki jobs at the top of the table are from just after midnight [16:29:11] apergos: srv278 seems to be also doing jobrunner work... [16:30:50] very suspicious about enwiki [16:31:32] reedy@srv278:~$ ps aux | grep job [16:31:32] apache 1071 0.2 0.0 17668 1524 ? SN 06:17 1:14 /bin/bash /usr/local/bin/jobs-loop.sh -t 300 [16:33:54] that's... interesting [16:34:15] I'm guessing puppet just never stopped the runner on 278 [16:34:37] srv258 - srv280 are application servers, job runners, memcached [16:34:37] says puppet [16:34:44] o_0 [16:34:51] WHY!? [16:35:05] they aren't really [16:35:10] oh, good :p [16:35:11] it's a comment that was never updated [16:35:20] * apergos checks out the host [16:37:41] stopped [16:38:22] thanks [16:38:46] thanks for seeing it [16:39:05] any others while you're looking? [16:39:40] I didn't notice any... [16:39:43] Let me check the job [16:39:46] k [16:39:55] it stood out when the rest in that column were mw.. [16:40:24] It's done 14,508 jobs today! ;) [16:41:03] geeee [16:41:14] No other srv host in the log [16:41:17] see there's just no way we have that few jobs being produced [16:41:25] even allowing for the number of job runners [16:41:28] no, I mean srv278 alone has [16:41:31] yes [16:41:52] editing a few templates on en gives us a few hundred thousand jobs right away [16:42:08] 178000 lines in the log file.. [16:42:11] so even allowing for 16 job runners... [16:47:01] @replag [16:48:14] 178000 lines, since midnight, all projects? [16:48:17] it just seems... [16:48:27] really a low number [16:49:01] yeah.. [16:49:09] logrotate runs at 06:25 [16:49:14] reedy@fluorine:~$ wc -l runJobs.log-201210* [16:49:14] 2587402 runJobs.log-20121029 [16:49:14] 1994965 runJobs.log-20121030 [16:49:14] 4582367 total [16:49:38] so for just under 10 hours, it's maybe a 5th of what it should be?
[16:50:15] assuming that zhwiki and dedup brokenness don't inflate that number too much [16:50:25] is there a nice way we can get the sum of showJobs.php for all wikis... [16:52:05] I don't know [16:54:16] cmjohnson1: hi. Does Steve need a racktables account? [16:54:29] it's definitely queueing up refreshlinks jobs, I see em go in as recent [16:55:02] mutante: yes...he is in there but it won't grant access [16:55:11] enwiki looks to roughly have the most [16:55:59] 14261 jobs [16:56:01] piddly little number [16:56:14] indeed [16:56:21] when the job runners are fully active, most are usually 0 [16:57:35] cmjohnson1: i'll take a look [16:57:45] k..thx [16:57:45] But as we know, if there are jobs in the queues, the runners shouldn't be nearly idle :/ [16:57:54] indeed [17:01:16] I see new stuff get queued into en (not very much right now) and it matches up with the edits [17:01:31] just looking at refreshlinks2, since I understand that one [17:01:43] maybe there's some other type of job we are missing [17:01:51] !log Running sync-common on srv194 [17:02:05] Logged the message, Master [17:03:10] I don't have any other good thoughts [17:03:24] but at least if you are aware of it, in case something bizarre gets reported.... [17:06:42] !log Running sync-common on srv199 [17:06:52] was srv194 special in some way?
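The "sum of showJobs.php for all wikis" question above went unanswered; one way to do it is a loop over the dblist. This is a sketch: `all.dblist` and the `mwscript` multiversion wrapper are the real moving parts on the cluster, and a stub stands in for both here so the loop itself can run anywhere:

```shell
set -e
# Stub dblist and mwscript so the totalling loop is runnable locally.
# On the cluster the real call is: mwscript showJobs.php --wiki=<db>
tmp=$(mktemp -d)
printf 'enwiki\nzhwiki\ndewiki\n' > "$tmp/all.dblist"
mwscript() {
    case "$2" in
        --wiki=enwiki) echo 14261 ;;
        --wiki=zhwiki) echo 3000 ;;
        *) echo 0 ;;
    esac
}

total=0
while read -r wiki; do
    n=$(mwscript showJobs.php --wiki="$wiki")
    total=$((total + n))
done < "$tmp/all.dblist"
echo "total queued jobs: $total"
```

With the stub values this prints `total queued jobs: 17261`; pointed at the real wrapper and dblist it would give the cluster-wide queue depth being asked for.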
[17:06:57] Logged the message, Master [17:07:20] it is not in dsh group mediawiki-installation , only in "mediawiki-installation~" [17:08:19] while that would explain the missing sync there, no idea why srv199 was different, it is in the file as normal [17:09:07] 199 done, still broken [17:09:07] srv199.pmtpa.wmnet 404 Not Found [17:09:26] sounds like it's possibly got an out of date apache config/apache not restarted [17:09:57] restarted 17:09 [17:09:59] so that's not it [17:10:21] apache2.conf from oct 22 [17:13:18] huh it is indeed a different apache2.conf than the one on say srv301 [17:13:51] oh [17:13:54] it's running precise [17:14:29] so who the bleep knows [17:15:39] Reedy: "This script operates on all servers in the following dsh groups: apaches, image_scalers, snapshot, searchidx" btw [17:15:44] per wikitech at least [17:15:58] does not mention the mw-install group [17:16:23] but still, both servers in "apaches" [17:16:42] srv199 is in apaches yeah [17:19:01] Oct 27 06:25:07 srv199 apache2[13888]: [error] [client 10.64.0.138] File does not exist: /usr/local/apache/common/docroot/wikipedia.org/fi.wikipedia.org [17:19:01] lots of crap like this in the logs since oct 2 [17:19:17] i.e. since when it was booted up [17:20:06] !log running sync-apache on srv199 only [17:20:15] ./sync-srv199 [17:20:15] Synchronizing /home/wikipedia/conf/httpd to /usr/local/apache/conf... [17:20:19] Logged the message, Master [17:20:47] let's see what happens [17:21:11] root@srv199:/etc/apache2/wmf# grep wikidata * [17:21:11] main.conf: ServerName www.wikidata.org [17:21:17] Reedy: run the test again? [17:21:38] eh, let me restart [17:21:51] done [17:22:28] 200 OK [17:22:39] woo hoo [17:22:41] yay [17:23:00] well, all i did was make a temp. copy of sync-apache [17:23:17] called sync-srv199 and adjusted it to only run on that one server [17:23:24] hahaha [17:23:29] you seriously can't give it one hostname?
[17:23:31] baaahhhh [17:23:43] eh, no, it has hardcoded dsh groups :p [17:23:56] yeah but [17:23:56] nm [17:23:57] so not worth it... [17:24:01] heh [19:12:35] hmm, why am i getting pages that miss vector styling... [19:16:56] hmm, the entire vector site skin seems missing from that RL result... [20:16:21] is it safe to delete this page? https://meta.wikimedia.org/wiki/Steward_requests/Speedy_deletions#Bigdelete_at_en:wp [20:16:37] (I was told to inform you before I perform a bigdelete action somewhere, so here I am) ;) [20:27:55] Trijnstel: a lot of people in a meeting now... [20:33:57] jeremyb: How are you doing? [20:34:07] Any problems with the storm? [20:34:21] jeremyb: that's okay [20:34:28] please say so when you're ready :) [20:36:22] multichill: i know people with problems (mostly just electricity) and have no clue when the subway will return. but i have no problems myself [20:37:00] Ok. So you took plenty of images for Commons? ;-) [20:37:36] Bit like https://commons.wikimedia.org/wiki/File:Wikipedia_Takes_Montreal_during_hurricane_Irene_03.jpg :-) [20:43:20] multichill: i did actually go out specifically looking for cars with trees on top today. found none. (only looked for ~15 mins though) [20:44:08] More effort needed! [20:45:15] We want to see freely licensed destruction! [20:45:59] failure is free, success is proprietary [20:47:04] multichill: got some small issue with wikimania2010.pl mail from lists.wmnederland.nl; one of my MX-es is not working (stupid me) but the other one is fine; you can check the logs for marcin@wikimania2010.pl and see why secondary MX was not used [20:47:19] Reedy: see PM :) [20:47:34] I'm pretty sure I'm not root on that server saper [20:47:55] multichill: I hoped if anyone is, it's you :) [20:47:55] [21:16] Trijnstel is it safe to delete this page? 
https://meta.wikimedia.org/wiki/Steward_requests/Speedy_deletions#Bigdelete_at_en:wp [20:47:55] [21:16] Trijnstel (I was told to inform you before I perform a bigdelete action somewhere, so here I am) [20:47:55] still busy? [20:48:03] (I just want to have the confirmation to do it) [20:51:35] kaldari: [20:51:35] 7 Fatal error: Class 'ApiCentralNoticeAllocations' not found in /usr/local/apache/common-local/php-1.21wmf3/extensions/CentralNotice/special/SpecialBannerListLoader.php on line 16 [20:51:43] noticed in the fatal log [20:54:26] kaldari: I notice $wgAutoloadClasses is conditional for it [21:00:05] https://gerrit.wikimedia.org/r/30888 [21:00:19] looking [21:01:31] AaronSchulz: Reedy and I are investigating this: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Jobrunners%20pmtpa&m=load_one&r=day&s=by%20name&hc=4&mc=2&st=1351630586&g=network_report&z=large&c=Jobrunners%20pmtpa [21:01:57] apergos: ^^ Seems the job runner problems might co-incide with the TMH deployment... [21:02:33] looks like the job queue traffic slowed to a trickle right when you did the TMH stuff and when ori-l did the EventHandler stuff [21:02:52] (the traffic went way down 00:30 UTC) [21:03:01] Trijnstel: actually the meeting wasn't really relevant and I poked some people while it was still going. there's just no one around now that is both comfortable with DBs and also willing to commit to still be here in 10-20 mins [21:03:16] Reedy: will deploy that in a sec [21:03:22] thanks [21:03:38] 00:27 logmsgbot_: olivneh synchronized php-1.21wmf3/extensions/EventLogging 'Updating EventLogging on test2' [21:03:44] robla: TMH doesn't use the job queue [21:03:46] 00:28 logmsgbot_: aaron synchronized php-1.21wmf3/extensions/TimedMediaHandler 'deployed c1ac05640377f4f99cbe2a094e80d3d25d63b93d' [21:03:52] jeremyb: ok, I'll wait then... 
[21:03:53] 00:29 logmsgbot_: aaron synchronized php-1.21wmf2/extensions/TimedMediaHandler 'deployed 18e51f3b06b84d1d5fbf47272d9ebfc5008dc879' [21:04:08] kaldari: I suspect the wgAPIModules line might want moving back... Depending if you actually want the api module to appear everywhere. At least, the class always needs autoloading [21:04:09] Aaron|home: I'm pretty sure it does...at least that's what Jan told me [21:04:29] it has its own transcode queue that we don't use [21:04:52] Jan was working on enabling it last week [21:05:17] we have a couple of new machines that we'd like to put into production to use them [21:05:34] dunno if one of those changes hitched a ride with the other changes you deployed [21:05:36] might have, though [21:07:21] in any case, those jobs need a separate runner to be set up, since it uses another table [21:07:58] if that was the code that was failing, where would the errors show up? [21:09:03] Reedy: syncing file now [21:09:17] for wmf2 at least [21:09:18] kaldari: is wmf3 on nearly master too? [21:09:21] heh [21:09:47] yeah, I'll update wmf3 in a sec [21:14:32] [30-Oct-2012 21:14:23] Fatal error: require() [function.require]: Failed opening required '/usr/local/apache/common-local/php-1.21wmf2/extensions/CentralNotice/special/SpecialBannerRandom.php' (include_path='/usr/local/apache/common-local/php-1.21wmf2/extensions/OggHandler/PEAR/File_Ogg:/usr/local/apache/common-local/php-1.21wmf2:/usr/local/apache/common-local/php-1.21wmf2/lib:/usr/local/lib/php:/usr/share/php') at /usr/local/apache/common-local/php-1.21wmf2/includes/AutoLoader.php on line 1193 [21:15:22] yeah, I didn't realize the scap was still running, so the file wasn't there yet. Just ran sync-dir on the whole extension dir [21:15:25] should be fixed now [21:15:28] heh [21:15:55] are there more deploys coming or is that it?
[21:16:18] that's it for me [21:16:29] k [21:16:29] "OggHandler/PEAR" so we get a nice advertising-video in the sitenotice this year? ;/ [21:16:29] * jeremyb looks at the schedule [21:19:38] huh, did pecl-memcached happen? Aaron|home ? [21:20:50] Not yet AFAIK [21:20:53] no [21:21:35] Reedy: is asher around? [21:21:48] Aaron|home: I'm not in the office ;) [21:22:02] but I mean IRC or something [21:22:08] I haven't been on that long [21:23:09] Aaron|home: In my experience it is too early for him [21:24:15] scap finally finished [21:24:46] Are you doing wmf3? Or do you want me to [21:25:36] Aaron|home: he went offline about 60 mins ago [21:25:51] Aaron|home: i asked because your window is just ending is all ;) [21:27:00] Reedy: back, and that's interesting but weird [21:27:31] jeremyb: heh that's not even on the calendar, just the wikitech page [21:28:09] Aaron|home: err, there's 2 calendars??? where's the other? [21:28:28] I'm glad you guys are looking at it anyways [21:29:45] there is google calendar [21:30:29] jeremyb: still no one around? [21:30:43] otherwise I'll leave this channel I think [21:30:43] Aaron|home: shared? [21:30:44] I've been waiting for more than an hour now [21:30:50] the job queue has issues, it's probably not a good time to do mass deletes [21:31:00] Trijnstel: i can have someone just do it for you if you like? [21:31:07] ? [21:31:14] you mean the deletion done by a staff member? [21:31:16] Trijnstel: doesn't *need* a steward ;) [21:31:18] yeah [21:31:34] I think only stewards are allowed to do these deletions... [21:33:39] Trijnstel: up to you...
[21:34:46] Trijnstel: big deletions are not such a big problem [21:35:03] ok then; and I think it's only a few edits more than the 5000 [21:35:26] Reedy: heh, I guess asher nuked the zhwiki jobs again [21:35:35] bigdelete is currently assigned only to stewards [21:35:35] count is small again [21:36:26] I don't know how bigdelete works in detail [21:36:35] but afaik it's needed even for pages with less than 5000 revs [21:36:41] Aaron|home: Yeah, I thought I'd seen somewhere that he was going to... [21:36:44] at least I saw it in the past [21:37:15] it's about 500 MB, not such a big history [21:37:56] (https://meta.wikimedia.org/wiki/Steward_requests/Speedy_deletions#Bigdelete_at_en:wp btw) [21:38:06] correct me if I'm wrong [21:38:58] I restored it because of a misunderstanding, and now that I understand better I'd like to delete it but can't. I tried revdeleting a batch of 4,999 revisions (all of the last 5000 except the latest revision) and planned to revdelete the rest as well, but that didn't work either: "414 Request-URI Too Large nginx/1.1.19". I can't think of how to get this out of the way except by exercising bigdelete. Nyttend (talk) 20:13, 30 October 2012 (UTC) [21:38:58] * Vito facepalms [21:44:15] request uri?? [21:44:48] the revdelete parameter goes as post data... [21:45:28] in any case it's not surprising if 4999 checkboxes don't work [21:49:14] not at all [21:54:19] afk for the night, talk to folks later [21:58:07] apergos: good night [22:00:38] kaldari: [30-Oct-2012 22:00:10] Fatal error: Class 'ApiCentralNoticeAllocations' not found at /usr/local/apache/common-local/php-1.21wmf3/extensions/CentralNotice/special/SpecialBannerListLoader.php on line 16 [22:00:44] binasher: are you able to watch the DB during a bigdelete coming shortly? [22:00:58] enwiki [22:01:55] Reedy: oh yeah, didn't finish wmf3 fix [22:02:13] one sec... [22:03:57] jeremyb: um, what? [22:04:31] binasher: you know the user right bigdelete?
[22:04:47] sounds scary [22:04:48] binasher: 5k revs at once i guess is the threshold [22:05:03] what are you going to do? [22:05:13] 30 21:37:56 < Trijnstel> (https://meta.wikimedia.org/wiki/Steward_requests/Speedy_deletions#Bigdelete_at_en:wp btw) [22:07:32] jeremyb: that's fine, proceed [22:17:13] Reedy: OK, should be actually fixed on test.wiki now as well [22:18:53] binasher: should be finished now FYI. danke [22:20:02] No replag! amazing [22:21:28] Reedy, plug again all those servers! [22:32:02] Hey ready, did that request queue issue get solved? [22:32:08] Whoes ready? [22:42:33] is Ori around here somewhere? [22:43:23] Looks gone [22:43:39] he exited 1 h 20 minutes ago [22:44:58] be back in a couple min or so.... [23:01:20] ready = reedy :P [23:08:50] Aaron|home: TimStarling: should we deploy https://gerrit.wikimedia.org/r/#/c/30773/ , or should we delay the deployment of TMH to enwiki? [23:09:35] we kinda gotta do one or the other, or else we're likely going to have lots of broken thumbnails [23:11:14] where does 400mb come from? [23:11:42] what makes that the target number? [23:12:14] dunno [23:14:08] Aaron|home: would you prefer over 9000? [23:19:41] robla: fwiw, TMH might have some IE regression problems, it was not doing well in IE8/IE7 earlier today, mdale was investigating last I heard. [23:19:44] discussion moved to #wikimedia-dev if y'all are interested [23:54:25] gn8 folks