[01:26:25] !log on locke: moved fundraising logs from /a/squid/fundraising/logs to /a/squid so that they will be processed by logrotate
[01:26:31] Logged the message, Master
[01:35:42] !log on locke: moved bannerImpressions.log to archive and restarted udp2log
[01:35:45] Logged the message, Master
[01:38:20] !log on locke: compressing bannerImpressions.log
[01:38:23] Logged the message, Master
[01:40:02] Thanks TimStarling. we are hoping to have cmjohnson look at storage3 tomorrow.
[01:40:42] I sent an email to Andrew Otto, Ariel and Rob about it
[01:41:04] I don't think it's appropriate or acceptable to have an unsampled log of page views
[01:41:14] bannerImpressions.log doesn't even do what it says anyway
[01:41:24] it's not a log of banner impressions, it's a log of cold-cache page views
[01:41:45] it has a CC header which allows browser caching, so the second pageview of a session won't be recorded
[01:41:52] but it's still far too much data to be storing to disk
[01:42:12] I can't imagine what sort of analysis you would want to do on it that would require an unsampled log of pageviews stored to disk
[01:42:15] that would definitely invalidate a bunch of the fundraising tests
[01:42:33] and even if you did want an unsampled log, it's not something we can provide on locke
[01:43:00] I'm compressing the log, but I'm not sure that will be enough to get it through until tomorrow
[01:43:08] so I'm thinking about also setting a sample factor of 10
[01:43:17] we've been doing it for a while, at least since a couple of months before the fundraiser
[01:43:39] I don't think so: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Miscellaneous%20pmtpa&h=locke.wikimedia.org&v=150.003&m=disk_free&r=day&z=default&jr=&js=&st=1336348049&vl=GB&ti=Disk%20Space%20Available&z=large
[01:43:51] obviously something changed two days ago
[01:44:03] yeah, we usually rotate them off onto storage3
[01:44:13] no, the slope increased
[01:44:17] but the raid seems to have borked there and is not accessible
[01:44:24] you can see there is a daily rotation with a shallow slope
[01:44:41] and in the last two days there is a much steeper slope
[01:44:42] i believe it was a 15 minute rotation on the banner and LP impression logs
[01:48:11] yeah, I think I've seen these rotation scripts before
[01:48:33] I didn't realise the data was so redundant though
[01:49:54] I mean, if you configure CN to give a banner to 50% of people, you wouldn't be surprised when it gets an impression rate equal to 50% of page views
[01:50:17] maybe the point is just to measure page views, but there are much cheaper ways of doing that
[01:50:28] yeah, but with the ability to hide the notices, that actually varies
[01:51:34] I inherited this whole system and am certainly willing to look into more efficient ways of getting these numbers, especially if it increases the reliability of the numbers and the speed at which we can retrieve them
[01:58:15] ok, my mistake, it will measure all page views
[02:00:06] !log LocalisationUpdate failed: git pull of extensions failed
[02:00:09] Logged the message, Master
[02:01:30] looks like that's now failed 3 days in a row
[02:01:57] don't do anything drastic right now, I still have to move the logs back to where they were
[02:02:46] I don't think I have the ability to do anything drastic :-)
[02:02:47] was udp2log running as root?
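The rotate-and-compress steps logged at 01:35–01:38 above (move the current banner impression log aside, get udp2log to let go of it, compress the rotated copy) amount to something like the following sketch. The paths, the process name and the assumption that udp2log reopens its output files on SIGHUP are illustrative, not taken from the actual rotation script on locke.

```bash
#!/bin/bash
# Hedged sketch of the manual rotation described above; paths and the
# SIGHUP-reopens-files assumption are illustrative, not the real script.
set -e

LOG=/a/squid/fundraising/logs/bannerImpressions.log   # assumed location
ARCHIVE=/a/squid/fundraising/logs/archive
ts=$(date +%Y%m%d-%H%M%S)

mkdir -p "$ARCHIVE"
mv "$LOG" "$ARCHIVE/bannerImpressions.$ts.log"

# Ask the collector to reopen its output files rather than restarting it.
pkill -HUP -x udp2log

# Compress the rotated copy to claw back disk space on /a.
gzip "$ARCHIVE/bannerImpressions.$ts.log"
```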
[02:02:59] some files are owned by root, it's a bit odd
[02:03:28] no idea, but I know aper_gos was doing some things last night
[02:06:18] when I started rotating logs just now, I noticed that udp2log didn't seem to be responding to a HUP signal
[02:06:33] so maybe that was the root problem
[02:06:44] or maybe I was just doing it wrong, who knows
[02:07:03] I'll HUP it again shortly, we'll see what it does
[02:12:19] !log on locke: moved fundraising logs back where they were
[02:12:22] Logged the message, Master
[02:12:38] HUP worked just fine this time
[02:14:02] that's good
[02:14:39] TimStarling: responding now on the email. i'm also curious why the slope changed somewhere around 10:48-11:30 UTC
[02:14:52] apparently the rotation script broke
[02:15:02] see above
[02:15:09] is that the only explanation?
[02:15:39] * robla looks for where storage3 went belly up
[02:16:11] I'm compressing a log with almost a day of data in it, if the script was working it would have only 15 minutes
[02:16:24] Nagios says 05-05-2012 10:29:07 for the mysqld process which is on /a (also broken)
[02:18:37] yes, storage3 being down would break it
[02:19:21] if I change it to a 1/100 sample, then we'll have plenty of time to fix storage3 and we won't have to find hundreds of GB per day of storage space elsewhere
[02:19:58] i have no objections to that on the banner impression filter
[02:20:21] the terms of use banners are exploding those logs atm
[02:20:37] ok, should I change the filename at the same time as introducing the sampling so that it's easier to analyse?
[02:21:19] if you don't mind indicating that it's sampled in the filename, that would at least make it easy to spot
[02:21:25] ok
[02:23:54] yeah, ok. the slope just looked so oddly smooth for something that was the result of log rotation. I would have expected to see a sawtooth, but I guess if it's happening frequently enough, it'll look smooth
[02:24:45] bytes received on storage3 seems to correlate: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=MySQL%20pmtpa&h=storage3.pmtpa.wmnet&v=4482.89&m=bytes_in&r=year&z=default&jr=&js=&st=1336357197&vl=bytes%2Fsec&ti=Bytes%20Received&z=large
[02:25:25] !log on locke: introduced 1/100 sampling for banner impressions, changed filename to bannerImpressions-sampled100.log
[02:25:28] Logged the message, Master
[02:25:45] robla: indeed they do, it's also easy to spot when the ToU banners went up at 100% everywhere
[02:41:10] speaking of ToU, these jurisdiction clauses I see everywhere seem very strange to me
[02:41:56] extradition is very rare for civil cases, and certainly nobody is ever going to be extradited for posting penis images on wikipedia
[02:43:11] so by declaring the ToU to be solely under Californian jurisdiction, they seem to rule out the possibility of the rules ever being enforceable internationally
[02:43:34] assuming the person who is sued for breaching them doesn't mind not going to the US
[02:43:49] * pgehres doesn't pretend to understand why legal chooses the words they do
[02:44:53] if the ToU allowed suing someone in any jurisdiction, then the defendant could be sued in a court that they are actually required to respect
[02:56:10] TimStarling: idk if you're still busy w/ locke; i wonder if you saw the discussion about thumbnailing of [[commons:file:P.J. Proby 2007.jpg]] a week ago?
[02:56:42] I didn't
[02:56:49] some large swaths of dimension ranges produce very dark thumbs (relative to original). some sizes are normal
[02:57:01] 29 22:02:18 < jeremyb> 2039px is bad, 2040px is good
[02:57:34] https://commons.wikimedia.org/wiki/File:P.J._Proby_2007.jpg in case you want an actual URL ;)
[02:59:27] TimStarling: When you get a spare chance (if you haven't already), can you look at ED and possibly unbreak it, since people keep wanting to actually download stuff with it
[02:59:51] uhhhh, encyclopedia dramatica?
[03:00:02] or extension distributor?
[03:00:30] yes, I'm really going to get tim to unbreak encyclopedia dramatica
[03:01:28] i was just trying to think of things and that's all i had
[03:01:35] and then i thought of the second one
[03:10:13] jeremyb: probably OOM
[03:10:20] we shouldn't really allow such large thumbnails
[03:12:15] TimStarling: look at 384px
[03:18:06] very mysterious
[03:18:41] (384px is apparently some kind of default on nlwiki)
[03:20:10] there are many mysteries in life, and there are still two things that people have asked me to do today that are probably more important
[03:20:18] original report was at nlwiki helpdesk i think, Akoopal is the guy that brought it to this channel
[03:20:24] assuming it's just this image
[03:20:42] i think it may just be the one. it's been over a week since the report came in...
[05:50:08] TimStarling, indeed I can find a discussion about this specific point of the ToU, we've mostly had discussions about whether it's helpful to back our guidelines (and the laws) with a contract and what laws could we ask the users to respect.
[05:50:13] *can't
[06:25:37] Nemo_bis it really is a bug
[06:47:10] hello
[06:50:05] does anyone of you know what happened to db45?
[06:51:15] seems to have died yesterday morning
[07:02:11] nosy: dab mentioned it too
[07:02:17] binasher: ^ ?
[07:03:16] nosy: what cluster/
[07:03:19] ? *
[07:04:08] hey
[07:04:15] i'm about to go to bed, i'll pull it though
[07:04:24] and look into it in the morning if no one else has
[07:05:10] nosy: were you replicating off it?
[07:05:17] !log asher synchronized wmf-config/db.php 'db45 is down'
[07:05:21] Logged the message, Master
[07:07:34] jeremyb: yes we replicated dewiki off db45
[07:08:00] how should i answer the question on "what cluster"?
[07:08:26] do you mean - it's pmtpa
[07:08:58] but in ganglia the host is listed at the very bottom of the selection field
[07:09:41] nosy: the cluster is s5 then
[07:09:46] ah ok
[07:09:54] that's what you want to know
[07:09:56] in this case it happens that only one wiki is on the cluster but not always
[07:10:04] nosy: the dewiki master is db35
[07:10:09] nosy: run this: select * from heartbeat.heartbeat where server_id=10645;
[07:10:36] on your dewiki slave, and change master to db35.pmtpa.wmnet with the log file and position returned in that query
[07:10:46] binasher: how do i choose the correct master position?
[07:10:48] it will have to skip over a bit, but less than 1 sec of queries
[07:11:08] ok...let me see...
[07:11:18] so, what about those 1 sec of queries?
[07:11:58] they've already been applied to nosy's slave
[07:12:31] TimStarling: https://bugzilla.wikimedia.org/show_bug.cgi?id=36346 is the bug that was created
[07:12:42] ohh, rerun not skipped
[07:12:48] hopefully idempotent!
[07:12:51] so i can find the position i should use to replicate starting yesterday morning from the position in the heartbeat table?
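binasher's instructions above amount to reading the new master's binlog coordinates out of the pt-heartbeat table on the surviving dewiki slave and repointing replication at db35. A minimal sketch of that, assuming the heartbeat table carries `file` and `position` columns in the pt-heartbeat 2.x style and that replication credentials are already configured on the slave (both are assumptions, not taken from the transcript):

```bash
#!/bin/bash
# Hedged sketch of the failover described above: read db35's binlog
# coordinates from the pt-heartbeat row on the local dewiki slave, then
# repoint replication. Column names (file, position) follow the pt-heartbeat
# 2.x schema and are an assumption; credentials are whatever is already set.
set -e

coords=$(mysql -N -e \
  "SELECT file, position FROM heartbeat.heartbeat WHERE server_id = 10645;")
read -r master_file master_pos <<< "$coords"

mysql -e "
  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST = 'db35.pmtpa.wmnet',
    MASTER_LOG_FILE = '$master_file',
    MASTER_LOG_POS  = $master_pos;
  START SLAVE;"
```

As binasher notes, the coordinates in the heartbeat row trail the slave's own position slightly, so up to a second of already-applied statements gets re-run after the switch rather than skipped.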
[07:13:10] toolserver should be configured to auto skip replication errors in the case of already applied / pk conflicts
[07:13:39] nosy: yes
[07:14:01] thanks for that information i was not aware of how this works
[07:14:26] binasher: how do i get to the server_id you just gave me with the query?
[07:15:42] maybe from https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=blob;f=templates/mysql/prod.my.cnf.erb#l7 ?
[07:15:54] we generate them based on ip addresses of the server
[07:16:37] binasher: ok - thanks :) i can ask you tomorrow i hope :D
[07:17:03] it seems to be the decimal dotted ip with the 2nd octet removed and no dots
[07:17:14] db35 = 10.0.6.45 and the server id is the 1, 3, 4th octets = 10645
[07:17:25] d'oh...
[07:18:05] not sure if you can actually query our internal dns though.. do you have fenari access?
[07:18:21] you can, it's not split view i'm pretty sure
[07:18:54] jeremyb: you're right
[07:19:17] i didn't realize that
[07:19:37] ok, bed time. nosy, msg me tomorrow or email if you have more questions
[07:19:51] thanks :) and good night
[07:20:41] nosy: do you have a labs account?
[07:20:57] or svn even?
[07:23:55] yeah, it's public. i just couldn't use dig for a min because i'm half asleep
[07:23:58] $ dig +short @ns0.wikimedia.org db35.pmtpa.wmnet.
[07:24:00] 10.0.6.45
[07:28:27] jeremyb: nothing of that
[07:28:35] it's replicating again :)
[07:28:39] nosy: well you can do the last thing i pasted above
[07:29:29] jeremyb: thanks for the hint :) but what should this server_id tell me exactly?
[07:29:38] erm?
[07:29:39] can you explain the heartbeat table to me?
[07:29:50] i see they are different on the clusters
[07:29:54] you should read some of http://www.percona.com/doc/percona-toolkit/2.1/pt-heartbeat.html
[07:30:21] jeremy: do you use the percona version of mysql?
[07:30:23] runs one per slave, i guess
[07:30:43] no, it's facebook i think? but you don't need their build to run that tool
[07:31:01] that was definitely a part of the puzzle of the database stuff that was still missing in my head
[07:31:26] * 20:31 binasher: new s5 master pos - MASTER_LOG_FILE='db35-bin.000011', MASTER_LOG_POS=374074061
[07:31:33] * 22:14 binasher: running 1.19 schema migration script to get former s5, s6, s1 masters (db45, db47, db36)
[07:32:00] both from 2012-02-27. http://wikitech.wikimedia.org/index.php?title=Server_admin_log/Archive_20&diff=43935&oldid=43934 http://wikitech.wikimedia.org/index.php?title=Server_admin_log/Archive_20&diff=43952&oldid=43951
[07:32:07] jeremyb: you mean there was a database migration again?
[07:32:21] it was 2 months ago and i guess you guys never noticed
[07:32:33] oh i see :)
[07:48:49] nosy: still using trainwreck?
[07:48:59] jeremyb: yes
[07:49:15] otherwise we would need more solaris zones to run multiple mysql instances
[07:49:21] and then more space for commons
[07:49:30] err.... *headache*
[07:49:42] because the dbs usually need a copy of commons too
[07:49:44] yes
[07:49:55] i suspect it of making replication mistakes
[07:50:09] what would be your headache about it?
[07:50:26] you said solaris is all ;)
[07:50:34] not that i have first hand knowledge
[07:50:56] we will have debian on the userland servers soon but not on the db servers
[07:51:15] oh, didn't know that the plan was for only some. i thought all would move
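The server_id convention jeremyb works out above (take the host's dotted IP and concatenate octets 1, 3 and 4, dropping the second) can be sanity-checked from any host that can resolve the internal names. The resolver and hostname are taken from the transcript; the one-liner itself is an illustrative check, not how the puppet template actually computes it.

```bash
# Resolve the master's IP, then apply the convention described above:
# octets 1, 3 and 4 of 10.0.6.45 concatenate to server_id 10645.
ip=$(dig +short @ns0.wikimedia.org db35.pmtpa.wmnet.)
echo "$ip" | awk -F. '{ print $1 $3 $4 }'
```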
[07:51:26] jeremyb: i don't have first hand knowledge either so let me know
[07:51:56] anyway, i was thinking about how to do nagios checks for "is my current master also the cluster master"
[07:52:04] jeremyb: well it is a plan but let's see when we get to that point
[07:52:33] but what is your concern regarding trainwreck?
[07:53:14] oh, just it's an unknown. i know mysql's std binlog replication and not trainwreck
[07:53:17] because of the current master of the trainwreck instance?
[07:53:49] i guess there should be some way made for the upstream to send in a poison pill to stop replication at the toolserver without also stopping replication at the WMF servers
[07:54:16] why?
[07:54:35] so we look at whether we still use the current master?
[07:55:38] so, AIUI, the way it works on the WMF side is all slaves are using one master and then writes stop to that master and start on one of the old slaves and all slaves start using the new master
[07:56:31] but there's some period there with no writes and the new master position is taken then
[07:56:59] i see
[07:57:06] if you blow right past that point then you lose the chance for a perfect (guaranteed uncorrupted) handover
[07:57:28] but our problem is more or less our coordination at this point, do i see this right?
[07:58:00] so that would be an idea to remotely stop the slaves on toolserver?
[07:58:52] well, so the WMF side could send an email or other smoke signal. but what they really need is not so much the notification (you can check that on your own with the heartbeat table, etc.). what you need is to stop the slaves completely on the TS side
[07:58:55] you could also cause an error here, that's what you have in mind? like stopping the replication with a command that makes our replication stop?
[07:59:37] and then when a TS admin notices they can start replication in the new place
[07:59:38] that could be an idea
[08:00:21] so, i was thinking a DB or table (DB is better?) that's mostly exists only at the WMF and not at the TS
[08:00:34] one way we always now notice if the master we use is not reachable at all
[08:00:40] during the read only period one of the steps is to issue a write to that DB
[08:00:42] but that is not db thinking i know
[08:00:49] break the TS and not break the WMF slaves
[08:01:25] but we always have to change the master then here on sql-proxy when yours is not reachable and that is the point where i look at your admin log and stuff :D
[08:01:36] s/that's mostly exists/that's mostly empty and exists/
[08:02:01] so the way it currently works is no communication between the old master and ts at all :D
[08:02:32] if you could set a firewall rule to not send any more traffic to the ts from the old master that could do it too
[08:02:43] but i am not sure if this is fine art of technology
[08:02:52] hrmmm
[08:02:57] could be only a bofh approach
[08:03:04] heh
[08:03:24] that would work even if our trainwreck was doing strange stuff or so
[08:03:24] but what if the TS copy is not caught up yet when the switch happens?
[08:03:37] oh...
[08:03:50] but don't we always have this problem?
[08:04:00] how do you do this on your side?
[08:04:10] do you check if everything is in sync before?
[08:04:21] on WMF side? (it's not mine, i'm neutral ;P)
[08:04:24] we have a bot that could tell you about replag :D
[08:04:30] yes wmf side
[08:04:46] i assume that master switches are avoided when there's lag
[08:05:02] and also the read only time can allow the slaves to catch up
[08:05:07] you try to fix this first then?
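The "poison pill" jeremyb sketches above — a near-empty database that exists on the WMF master and WMF slaves but deliberately not on the Toolserver copy, written to during the read-only window so that the replicated statement errors out (and stops replication) only on the TS side — could look roughly like this. All names here (ts_poison.marker, the $OLD_MASTER placeholder) are illustrative; this is a sketch of the proposal, not an agreed design.

```bash
#!/bin/bash
# Hedged sketch of the proposed poison pill. Assumption: the ts_poison
# database exists on the WMF master and its slaves but not on the Toolserver
# slave, so the replicated INSERT applies cleanly inside WMF and breaks
# replication with an error on the TS side.
set -e
OLD_MASTER=db35.pmtpa.wmnet   # placeholder for whichever master is retiring

# During the read-only window of a planned switchover. Accounts with SUPER
# can still write while read_only=1, so the INSERT still reaches the binlog.
mysql -h "$OLD_MASTER" -e "SET GLOBAL read_only = 1;"
mysql -h "$OLD_MASTER" -e "
  INSERT INTO ts_poison.marker (switched_at, note)
  VALUES (NOW(), 'master moving; see server admin log for new coordinates');"
```

When the Toolserver slave reaches that statement it stops with an error instead of silently replicating past the switch point; a TS admin can then repoint it using the heartbeat query shown earlier.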
[08:05:22] so, 1) make sure no lag, 2) make read only, 3) make sure no lag again
[08:05:28] nosy: i don't understand
[08:05:46] you try to keep your slaves in sync first
[08:06:19] you make the old master read only?
[08:06:54] either make the master read only or pull the power out of the master
[08:07:09] but then you risk slaves not being at the same position
[08:07:32] i see
[08:09:15] nosy: if you guys want to try something like that i'm happy to propose it to binasher. and maybe he has some other better solution
[08:09:58] jeremyb: you mean so, 1) make sure no lag, 2) make read only, 3) make sure no lag again
[08:10:01] on our side?
[08:10:22] no
[08:10:32] i mean poison pill
[08:11:16] so, whenever a master switch happens, they'll break replication once the TS slave gets caught up to the point of the switch
[08:12:02] i'll ask my fellow admins here and tell you but i think it's a good idea
[08:12:43] it'll be interesting to see how we can stop the trainwreck
[08:12:59] what kind of error it will take, or whether we need to do some programming there
[08:13:13] it was a thing river coded
[08:13:25] right. i've no idea about that
[08:15:57] jeremyb: let's talk about that tomorrow or the day after tomorrow again
[08:16:07] what is your location/time zone?
[08:17:13] nosy: k. i'm NYC. but i don't need to be involved, i'm just the guy that proposed it. (as I said I'm neutral, I'm not making any of the decisions)
[08:18:02] jeremyb: who does make the decisions in this case? binasher?
[08:18:29] probably? i guess depends how big a change they think it is.
[08:18:45] i just know you're not always on IRC and there's the time gap so I figured i might see him before you
[08:20:20] jeremyb: thanks for sharing the idea
[08:20:43] bitte
[08:22:15] öy danke
[08:22:18] alter :D
[08:22:47] ^that's a very slang word for doode
[08:22:56] äh...dude
[08:23:31] true
[08:23:55] and probably not an easy one depending on how trainwreck works
[08:24:14] yeah, i have no clue how it works so i'm not thinking about it!
[08:24:37] i have none either but one probably has to think about it
[08:25:21] is there a list somewhere of which clusters are on which boxen? and which use trainwreck and which don't?
[08:25:46] * jeremyb copies nosy into #wikimedia-toolserver ;)
[08:25:49] well we have a config that reflects the servers and clusters
[08:25:54] wow
[08:26:11] but no user has access to this
[08:27:03] trainwreck always runs if more than one cluster is replicated/running on one server
[08:27:16] you could guess this from the config too
[08:27:44] from the user side you can only connect to the cluster instance in ts and select the hostname
[08:28:16] nosy: can you /join #wikimedia-toolserver ?
[08:30:49] jeremyb: yes but i won't decide anything without my fellow admins here :)
[13:38:01] How long does it take for changes to the global title blacklist to spread?
[13:44:01] Snowolf: As high as the replag is, at max. AFAIK
[13:49:34] ETA, replag
[13:49:40] :p
[13:52:31] Snowolf: IIRC 1h
[13:52:51] Beau_: thanks
[13:57:02] Snowolf: I have checked, I cannot find any configuration on noc.wikimedia.org regarding caching times; default value set by extension is 15 minutes.
[13:57:31] Beau_: 15 minutes seems right lemme try :)
[13:58:51] Beau_: yep, the change has spread since I asked, thanks! :)
[18:01:33] So, I heard it's time to break enwiki
[18:01:49] nnnoooooooo
[18:01:52] it's the ops meeting
[18:02:00] you guys love to do this to us dontcha? :-P
[18:02:11] That was the point
[18:02:14] We know where you all are ;)
[18:02:23] grrrrrr
[18:02:30] Reedy: we should deploy next time all the ops people go out for bubble tea
[18:02:44] you have to make sure everyone in all the timezones is gone
[18:04:44] (famous last words) this ought to be a pretty routine deployment
[18:05:40] Reedy: are you going to do the honors? (or "honours")?
[18:05:47] honou?rs
[18:05:49] AaronSchulz: Actually last week I walked into the elevator to 6 for a deploy as ops was walking down the stairs for bubble tea
[18:06:03] I will, yeah
[18:07:20] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.20wmf2
[18:07:23] Logged the message, Master
[18:07:28] Doned
[18:10:08] <^demon> cldr's got some kind of array bug.
[18:12:21] Yeah
[18:12:25] I've logged that, I think
[18:13:41] <^demon> Man, collection sucks.
[18:13:44] A couple of those collection warnings I have fixed too (not merged)
[18:13:50] not reviewed/merged
[18:13:59] Don't think I did PHP Warning: array_push() expects parameter 1 to be array, null given in /usr/local/apache/common-local/php-1.20wmf1/extensions/Collection/Collection.body.php on line 529
[18:14:11] <^demon> Ah, param 1 not 2.
[18:14:17] <^demon> That makes a whole lot more sense.
[18:14:23] <^demon> Since param 2 is a hardcoded array :p
[18:16:15] RoanKattouw, did you convert and upload those wikimania videos in the end?
[18:16:50] if conversion is a problem, you can simply upload them to archive.org with a simple curl command and they'll convert them for you in very nice quality
[18:17:22] Nemo_bis: I converted some, odder gave me description files yesterday, so I'll look at those tomorrow
[18:17:56] RoanKattouw, so you didn't kill any more servers in the process? :)
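The "simple curl command" Nemo_bis mentions at 18:16 presumably refers to archive.org's S3-like (ias3) upload API, which derives playable formats server side after the upload. A hedged sketch follows; the item identifier, filename, metadata and credential placeholders are illustrative, and the exact header set should be checked against the ias3 documentation before relying on it.

```bash
# Hedged sketch of an archive.org upload via its S3-like (ias3) API.
# IDENTIFIER, the filename and the access keys are placeholders.
ACCESS_KEY=your-ia-access-key
SECRET_KEY=your-ia-secret-key
IDENTIFIER=wikimania2011-video-example

curl --location \
  --header "authorization: LOW ${ACCESS_KEY}:${SECRET_KEY}" \
  --header "x-amz-auto-make-bucket:1" \
  --header "x-archive-meta-mediatype:movies" \
  --header "x-archive-meta-title:Wikimania 2011 talk (example)" \
  --upload-file talk.mp4 \
  "https://s3.us.archive.org/${IDENTIFIER}/talk.mp4"
```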
[18:19:04] next time the Wikimania team should probably just upload to archive.org (or send them a drive)
[18:19:18] then we can keep only the ogv on Commons, at a reasonable size
[18:19:48] Well, WM2012 is in DC
[18:19:53] Someone can give the hard drive to RobH ;)
[18:19:54] They mailed us a drive
[18:20:02] Which we mailed to Virginia
[18:20:14] Rob hooked it up, then I converted most of the video before the box crashed
[18:20:18] but conversion was a problem
[18:20:25] archive.org is way more robust
[18:20:27] No, getting the drive was a problem
[18:20:32] that too :)
[18:20:39] It didn't show up here until like January
[18:20:43] getting the videos to put on the drive even more
[18:20:56] The other problem is that no one wrote file description wikitext
[18:21:05] I told both WMIL and WMF to do that, no one did
[18:21:08] So now odder is doing it
[18:21:21] archive.org doesn't care about descriptions :p
[18:21:24] commons does
[18:21:31] yep
[18:21:41] But yeah archive.org might be a better way to go
[18:21:46] get stuff online more quickly
[18:22:04] so I hope the WM2012 team just dumps everything on archive.org and then lets busy bees upload and curate it on Commons
[18:22:29] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/GlobalBlocking.git;a=blob;f=GlobalBlocking.class.php;h=c093647e5072fda99a697c5464b1b9df32cfc56d;hb=HEAD
[18:22:54] That would be nice
[18:23:15] There's still work involved in splitting the video into logical chunks (talks) and transcoding
[18:24:12] transcoding to what?
[18:24:23] but yes, splitting is the min
[18:24:27] !log reedy synchronized php-1.20wmf1/extensions/GlobalBlocking/GlobalBlocking.class.php
[18:24:30] Logged the message, Master
[18:25:12] !log reedy synchronized php-1.20wmf2/extensions/GlobalBlocking/GlobalBlocking.class.php
[18:25:15] Logged the message, Master
[18:25:25] I'm sure I merged and pushed that before... :/
[18:27:23] ^demon: https://gerrit.wikimedia.org/r/#/c/6051/
[18:27:42] AaronSchulz:
[18:27:44] 12 PHP Warning: require_once(/home/wikipedia/common/multiversion/MWVersion.php) [function.require-once]: failed to open stream: No such file or directory in /usr/local/apache/common-local/live-1.5/MWVersion.php on line 12
[18:27:44] 12 PHP Fatal error: require_once() [function.require]: Failed opening required '/home/wikipedia/common/multiversion/MWVersion.php' (include_path='.:/usr/share/php:/usr/local/apache/common/php') in /usr/local/apache/common-local/live-1.5/MWVersion.php on line 12
[18:27:59] twich
[18:28:01] twitch
[18:28:06] is this about that gerrit change?
[18:28:32] No idea
[18:28:46] just noticed it in the logs
[18:29:29] *please* don't break the en wiki maintenance scripts
[18:29:32] I will cry. right now.
[18:29:43] so close to done with this month's en wp dump...
[18:30:50] probably testwiki or something
[18:31:13] They've gone now.. Can't be too regular
[18:31:25] ok *whew*
[18:45:39] !log reedy synchronized php-1.20wmf2/extensions/Collection/Collection.session.php 'head'
[18:45:43] Logged the message, Master
[18:46:31] !log reedy synchronized php-1.20wmf1/extensions/Collection/Collection.session.php 'head'
[18:46:34] Logged the message, Master
[18:47:53] <^demon> Reedy: I fixed the whitespace on https://gerrit.wikimedia.org/r/#/c/6571/ btw.
[18:48:33] Cheers
[22:07:14] hashar: what did you change in https://gerrit.wikimedia.org/r/#/c/6415/6?
[22:07:31] AaronSchulz: simple rebase on tip of master
[22:07:35] on top of
[22:07:52] test stopped failing
[22:07:55] to fix jenkins build
[22:14:01] !log raindrift synchronized php-1.20wmf1/extensions/PageTriage 'Syncing PageTriage to enwp, a la carte'
[22:14:04] Logged the message, Master
[22:14:57] !log raindrift synchronized php-1.20wmf2/extensions/PageTriage 'Syncing PageTriage to enwp, a la carte'
[22:15:00] Logged the message, Master
[22:16:42] !log raindrift synchronized wmf-config/InitialiseSettings.php 'enabling PageTriage on enwp'
[22:16:45] Logged the message, Master
[22:19:07] !log raindrift synchronized php-1.20wmf1/resources/startup.js 'touch'
[22:19:10] Logged the message, Master
[22:24:12] !log chmod 775 /usr/local/apache/common-local/php-1.20wmf2/extensions/PageTriage with dsh as root
[22:24:15] Logged the message, Mr. Obvious
[22:28:57] gn8 folks
[22:34:36] !log awjrichards synchronizing Wikimedia installation... : Sync'ing MobileFrontend changes per http://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2012-05-07
[22:34:39] Logged the message, Master
[22:35:55] !log awjrichards synchronizing Wikimedia installation... : Sync'ing MobileFrontend changes per http://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2012-05-07
[22:35:58] Logged the message, Master
[22:38:57] Scapping twice?
[22:38:57] :/
[22:44:34] sync done.
[22:56:54] !log awjrichards synchronized wmf-config/CommonSettings.php 'Bumping mobile resource version'
[22:56:57] Logged the message, Master
[22:59:11] RD: And it looks PageTriage-related.
[22:59:26] http://pastebin.com/5ZByA9J4
[22:59:35] who wants it?
[22:59:37] Yup that's a PT bug
[22:59:49] Look, a Roan.
[22:59:52] Was trying to block an IP
[23:00:06] I'm pinging raindrift in #wikimedia-dev
[23:00:11] Heh.
[23:00:13] So many channels.
[23:00:23] RD: You could combine all of your information into a single message.
[23:00:48] I prefer texting.
[23:00:53] !log tstarling synchronized wmf-config/InitialiseSettings.php 'wgShowExceptionDetails = false'
[23:00:56] Logged the message, Master
[23:06:59] \o/
[23:07:59] awjr: still deploying?
[23:10:17] AaronSchulz yes - i'm sorry, i'm going to have to run scap again in a minute
[23:10:28] k
[23:15:09] !log reedy synchronized php-1.20wmf2/extensions/PageTriage/includes/PageTriageUtil.php
[23:15:14] Logged the message, Master
[23:15:55] RD: give it a try now please
[23:15:56] !log reedy synchronized php-1.20wmf1/extensions/PageTriage/includes/PageTriageUtil.php
[23:15:59] Logged the message, Master
[23:18:25] Reedy: Set $wgShowExceptionDetails = true; at the bottom of LocalSettings.php to show detailed debugging information.
[23:18:41] TimStarling: ^ lol
[23:19:46] Helpful
[23:20:01] brion wrote that message when he introduced the feature
[23:20:11] you can see that he intended it for third-party installations, right?
[23:21:04] maybe we can change the message now
[23:21:21] It was just amusing that in between it being reported, me attempting a fix, and trying again the output changed
[23:25:56] !log awjrichards synchronizing Wikimedia installation... : Sync'ing MobileFrontend changes per http://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2012-05-07, take 3
[23:25:59] Logged the message, Master
[23:30:02] TimStarling: can we have wgShowExceptionDetails as true on testwiki/test2wiki?
[23:30:18] I guess
[23:31:53] !log reedy synchronized wmf-config/InitialiseSettings.php 'wgShowExceptionDetails to true for testwiki and test2wiki'
[23:31:56] Logged the message, Master
[23:32:10] actually the output with $wgShowExceptionDetails = false sucks quite a lot, doesn't it?
[23:34:32] Just a bit
[23:35:59] !log reedy synchronized php-1.20wmf2/extensions/PageTriage/includes/PageTriageUtil.php
[23:36:02] Logged the message, Master
[23:36:07] RD: fixed now
[23:37:21] Alright
[23:43:43] !log reedy synchronized php-1.20wmf1/extensions/PageTriage/
[23:43:46] Logged the message, Master
[23:44:17] !log reedy synchronized php-1.20wmf2/extensions/PageTriage/
[23:44:20] Logged the message, Master
[23:50:43] sync done.
[23:52:02] !log awjrichards synchronized wmf-config/CommonSettings.php 'bumping mobilefrontend resource version #'
[23:52:08] Logged the message, Master
[23:56:43] AaronSchulz, we're done. sorry for the delay
[23:57:00] np