[00:07:38] jeremyb: [00:07:44] LeslieCarr: [00:07:46] oops, forgot to type the content - figured it out :) [00:08:02] ooooh, works [00:08:04] what was it? [00:08:51] so apache2 wasn't set to subscribe to that file [00:09:01] so it didn't know to reload when the file changed [00:09:07] ahhh [00:09:10] so i should fix that [00:14:09] mutante: I assigned a bug to you. [00:14:25] You have two Bugzilla accounts, I guess. [00:15:00] Brooke: the ssl think? i was thinking about taking it [00:15:08] If you want it, go for it. [00:15:37] https://bugzilla.wikimedia.org/show_bug.cgi?id=31369 [00:15:41] i researched it a month or two ago and then never did it. /me digs up the research again [00:16:01] I think you can just use relative URLs? [00:16:35] how do you mean? [00:16:57] hashar made a comment that suggested that... [00:16:58] RewriteRule ^/(.*)$ //www.mediawiki.org/$1 [R=301,L] [00:16:59] would work. [00:17:18] huh [00:17:21] Can Location: headers be protocol relative? [00:17:25] i can test that [00:17:27] I assume so, but I have no idea. [00:17:30] uhhh, idk [00:17:48] but maybe apache will absolutify it anyway [00:19:00] Is absolutify a word? [00:19:08] i was assuming not [00:19:18] but you can let me know [00:19:29] I'm not sure what it means in context. [00:19:48] to make absolute [00:24:11] notpeter: you there? [00:24:38] New review: preilly; "(no comment)" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/13287 [00:30:22] New patchset: Jeremyb; "notify Service[apache2] when its conf changes" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13288 [00:30:31] LeslieCarr: ^ [00:30:54] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13288 [00:31:47] Brooke, did you read on #wikimedia the validation doubts on the feeds wrt protocol-relative URLs? [00:32:02] No. 
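[Editor's note] The fix jeremyb pushes just above in change 13288 ("notify Service[apache2] when its conf changes") is a standard Puppet pattern. A minimal sketch, with a placeholder file path (the real resource names are in the change itself):

```puppet
# Sketch only: the file path is illustrative, not the actual resource.
# Either direction of the relationship works; 13288's commit message
# suggests the notify form.
file { '/etc/apache2/conf.d/example.conf':
    source => 'puppet:///files/apache/example.conf',
    notify => Service['apache2'],   # a change to the file reloads apache
}

# Equivalent, declared from the service side instead:
service { 'apache2':
    ensure    => running,
    subscribe => File['/etc/apache2/conf.d/example.conf'],
}
```

Without one of these relationships Puppet updates the file but never tells Apache, which is exactly the "apache2 wasn't set to subscribe to that file" symptom above.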
[00:32:07] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13288 [00:32:10] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13288 [00:32:13] it wasn't just protorel [00:32:13] LeslieCarr: can you merge and push this https://gerrit.wikimedia.org/r/#/c/13287/ ? [00:32:16] cool, that should work :) [00:32:22] wow, surprisingly fast ;) [00:32:26] LeslieCarr: I can't find Asher or Peter at this time [00:32:28] preilly: you guys are still doing that ? [00:32:30] hopefully preilly's change will test mine ;) [00:32:47] seems unrelated though ;( [00:32:50] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13287 [00:32:58] Brooke, what about the broken mingle redirect? [00:33:15] is that included in that bug? I don't remember [00:33:20] Does anyone use mingle? [00:33:29] I see the scrollback you're talking about from #wikimedia. [00:33:29] apparently yes [00:33:31] File a bug? [00:33:36] describe the brokeness? where to where? [00:33:45] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13287 [00:33:48] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13287 [00:33:54] Nemo_bis: Brooke: it wasn't just protorel it was also multiple entries in the feed using the same ID [00:34:09] So file two bugs, then. [00:34:46] (well protorel was about the ID) [00:34:51] * Nemo_bis is not going to file bugs about things he doesn't understand and also needs sleep [00:35:11] Nemo_bis: Don't know what mingle issue you're talking about. [00:35:25] yeah, I'm looking for the link [00:35:40] oh, I did file it https://bugzilla.wikimedia.org/show_bug.cgi?id=34160 [00:36:05] haha [00:36:14] unfortunately i think that's not puppetized [00:36:31] LeslieCarr: thanks [00:36:33] andrew_wmf: ping? 
[00:36:53] !b 34160 | andrew_wmf [00:36:53] andrew_wmf: https://bugzilla.wikimedia.org/34160 [00:40:19] preilly: done [00:41:14] LeslieCarr: thanks again you rock! [00:53:27] Brooke: protorel most certainly isn't working. unless there's some config change to make it work [01:10:37] there's 2 labs project creations waiting if someone wants to do them [01:10:42] (/j #wikimedia-labs) [01:54:35] so, i realized for 31369 (SSL redirect) I did a bunch of unnecessary research ;( [01:54:45] s/unnecessary/irrelevant/ [01:55:57] and squid configs are still not in public VCS somehow?!! [01:56:07] * jeremyb goes to look at the beta setup [02:33:10] any bored root want to test something for me? ;) [02:58:19] New patchset: Jeremyb; "redirects.conf: mk whitespace consistent" [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13290 [03:13:40] New patchset: Jalexander; "Adjust target for WikimediaShopLink to test" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13291 [03:14:51] damnit, Jamesofur you trying to merge conflict me? ;) [03:15:10] jeremyb: well, duh, who wasn't? [03:15:10] :p [03:15:17] gerrit's being slow [03:15:26] for me [03:15:36] me as well [03:15:41] Jamesofur: do some code review for me? ;) [03:15:54] when it's back [03:16:30] jeremyb: going to be leaving the office in a minute but if it's not reviewed when I get home I will [03:16:42] (review, I can't push out on that branch) [03:16:55] sure [03:16:58] of course you can't [03:17:25] there it comes [03:18:35] did anyone just fix gerrit ? 
[03:18:37] (got paged) [03:19:00] LeslieCarr: saw it broke, saw it fixed [03:19:04] no one said a thing here [03:19:05] cool [03:19:05] yay [03:19:08] hehe [03:19:30] LeslieCarr: james and I have some patches coming if you're in a reviewing mood ;) [03:19:39] (james alexander) [03:19:44] i'm running a d&d game right now [03:19:48] hah [03:19:49] gonna check out the box real quick then go [03:20:07] well how did it page you but not notify the channel? [03:20:09] was it nagios? [03:20:24] weird, don't see anything [03:20:25] watchmouse [03:20:36] though nagios should have caught it [03:20:43] ok, off to kill my party with more evil humans [03:20:44] :) [03:21:03] * Jamesofur shakes his head sadly [03:21:08] well.. not so sadly [03:21:12] obvious followup: can we get watchmouse reporting in here? ;) [03:21:15] quite a nice CPU spike: http://ganglia.wikimedia.org/latest/?c=Miscellaneous%20eqiad&h=manganese.wikimedia.org&m=cpu_report&r=hour&s=descending&hc=4&mc=2 [03:21:33] New review: Jalexander; "(no comment)" [operations/apache-config] (master) C: 1; - https://gerrit.wikimedia.org/r/13290 [03:21:50] also load [03:22:48] yeah… doesn't look like we've seen either cpu or load get that high over the past week or so [03:23:19] it was 80legs again [03:23:39] ugh [03:23:40] I deployed the robots.txt, I guess they didn't get the message [03:23:58] ok, time to leave the office [03:24:00] * Jamesofur will be back [03:24:10] and yeah… if they keep it like that they're going to have to get blocked.. [03:27:33] New patchset: Jeremyb; "comment out redirect cfp.wikimania.wm" [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13292 [03:33:18] New patchset: Jeremyb; "bug 31369 - make redirects protorel where possible" [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13293 [03:34:11] * jeremyb wonders what 80legs was doing? something that could be done another way? 
(like from a git clone) [03:34:19] local git clone* [03:40:49] jeremyb: 80legs is a spider-as-a-service, so people can write their own custom spider jobs and run it on 80legs. so the quality of the 80legs api in combination with the quality of the dev who wrote the job determines how good or bad the spider behaves [03:41:12] New review: Jeremyb; "haven't tested much (and not in the WMF environment)." [operations/apache-config] (master) C: 0; - https://gerrit.wikimedia.org/r/13293 [03:41:32] drdee: ewwww [03:41:46] drdee: does it follow robots.txt by default? [03:42:01] i believe it does, but not 100% sure [03:42:12] because 28 03:23:39 < TimStarling> I deployed the robots.txt, I guess they didn't get the message [03:42:34] hold on [03:42:44] I'm blocking it [03:43:05] it says on their website that it supports robots.txt, but you have to wait a while for updates to propagate [03:43:12] a day is too long, our service has to be up [03:43:17] so I'm blocking it by UA [03:43:35] yes a day is way too long [03:43:45] should like every 5 mins or at least 1 hr [03:44:04] (well that depends on our expires header i guess? ;P) [03:44:09] it does say it respects robots.txt and that it should read it within 3 hours http://wiki.80legs.com/w/page/1114616/FAQ [03:45:07] we serve last-modified but not expires [03:45:09] sounds right to me [03:46:40] New patchset: Tim Starling; "Block 80legs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13294 [03:47:19] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13294 [03:47:19] New review: Tim Starling; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13294 [03:47:22] Change merged: Tim Starling; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13294 [03:53:52] New patchset: Jeremyb; "bug 31369 - make redirects protorel where possible" [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13293 [03:56:03] New review: Tim Starling; "Won't this break Squid caching? The HTTPS proxy is in front of Squid." [operations/apache-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/13293 [03:56:08] New review: Jeremyb; "PS2 is just adding the RTs from bugzilla to the commit msg" [operations/apache-config] (master) C: 0; - https://gerrit.wikimedia.org/r/13293 [03:56:59] TimStarling: so what then? set a Vary header? [03:57:49] TimStarling: how do we already fragment or not for HTTPS or not? [03:58:10] i guess the main goal of protorel is to not fragment [03:58:14] I guess we could set a vary header [03:58:46] yes, the way we do it is to use protocol relative URLs [03:58:55] that's the only reason for them as far as I'm concerned [03:59:42] TimStarling: can you tell me the prod apache version? [04:00:48] 2.2.14-5ubuntu8.9 [04:01:00] danke [04:01:07] people have been adding protocol relative URLs to external links [04:01:13] it just seems ridiculous to me [04:01:21] hah [04:01:24] why? [04:01:26] if HTTPS is a good idea then you should use it, if not, then don't [04:01:34] well sure [04:01:49] but if it's not crucial for a given external site but they do offer it? 
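[Editor's note] Tim's change 13294 blocks the crawler by User-Agent. The patch body isn't shown in the log, so as a hedged illustration only: an Apache 2.2-style UA block (2.2.14 is the version quoted above) could look like this. The placement and exact match string are assumptions; 80legs' crawler is known to identify itself as "008".

```apache
# Illustrative sketch -- the real block in change 13294 may live at a
# different layer (e.g. Squid) and match differently.
SetEnvIfNoCase User-Agent "008" bad_crawler

<Location />
    Order Allow,Deny
    Allow from all
    Deny from env=bad_crawler
</Location>
```

This refuses requests with a 403 immediately, rather than waiting the up-to-a-day the crawler takes to re-read robots.txt.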
[04:01:58] surfing WP with an http:// URL isn't an expression of a user preference [04:02:13] make a decision [04:03:42] New patchset: Jeremyb; "bug 31369 - make redirects protorel where possible" [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13293 [04:03:59] if the protocol in use is not a user preference, if it's just an accident because the search engines happen to index http:, then it shouldn't influence the decision about whether to use https for a given external link [04:04:15] protocol-relative URLs are a nasty obscure hack that breaks clients [04:04:56] which clients? [04:06:41] IE7 and 8, according to Krinkle a couple of hours ago [04:06:46] huh [04:07:17] there are other clients that are broken in more obvious ways, we see them request incorrect URLs with double slashes in the path [04:07:41] don't know what their names are [04:07:53] haha [04:08:03] and also domain in the path? [04:09:24] * jeremyb kinda wants a list of staff (or even just tech) who's coming to wikimania [04:09:35] of course i could just look in the registration system ;) [04:10:03] yes, we see log entries for things like http://en.wikipedia.org//en.wikipedia.org/wiki/Foo [04:10:13] right [04:11:18] TimStarling: Great, it turns out that stylesheets having a protocol-relative url are downloaded twice in IE7 and IE8. A fine browser bug :) [04:12:55] twice but at the right URL?! [04:12:59] Yes [04:13:13] someone should find out whether it uses the content from the first or the second fetch ;) [04:13:36] I think (not tested) either both apply (like stylesheets do), or whichever last arrives [04:13:55] huh [04:14:18] if it were the first one then it would just be extra network but not extra waiting for paint [04:14:35] but it's not much of an issue since ResourceLoader doesn't use or encourage this kind of loading anyway [04:14:55] Except for skin css fallback, we load everything through module packages that are inserted as CSS text directly, not through a url.
[04:17:03] jeremyb: TimStarling: Redirects at the apache level (e.g. wikipedia.com > org, mw.org > www.mw.org etc.) - are those cached in squid, and if so cached by request url with or without protocol? I recall something about a shared cache, but not sure if that just applies to mw-parser cache or also to things like squid > apache directly. [04:17:36] i don't think squid has a shared cache [04:17:47] anyway, this is cached by squid [04:18:12] oh, let me read the back scroll first [04:18:15] see the patch. right above the Vary header is the cache-control [04:18:21] I see a comment from tim mentioning this somewhere up there [04:18:31] \ Tim Starling; "Won't this break Squid caching? The HTTPS proxy is in front of Squid." [04:19:03] keep going ;) [04:19:15] sorry, I'm going for lunch [04:19:17] ah, we have HTTPS > squid > apache ? interesting, I didn't know that [04:19:23] TimStarling: bon appetit [04:19:40] I thought we had http apaches and https apaches and squid in front both regardless. [04:19:48] Krinkle: right. IPv6 and HTTPS are both nginx. which then hit squid and then apache [04:20:06] apaches don't even have certs [04:20:10] right [04:21:06] jeremyb: so the https proxy is just for the cert then, right? I mean the rest is shared/compatible [04:21:23] ummm, idk what shared/compatible means [04:21:49] squid has a header that indicates if it's HTTPS or not. but besides that extra header there's no difference [04:21:58] (well maybe more than one extra [04:22:00] ) [04:22:02] all the headers and response body can be forwarded without modification of any kind [04:22:06] damn xkcd.com/859/ [04:22:48] in that mediawiki apaches don't react differently. Ah, wait they do. MediaWiki does redirects as well and uses the current protocol. [04:23:41] what's an example mediawiki generated redirect?
[04:24:39] Special:MyPage or saving an edit (post-get-redirect) [04:24:53] lots of redirects going on [04:25:41] post-redirect-get* whatever, the last two belong together [04:26:12] $ curl -sv -4 https://en.wikipedia.org/wiki/m:foo 2>&1 >/dev/null | egrep -e '^< (Vary|Location):' [04:26:15] < Vary: Accept-Encoding,Cookie,X-Forwarded-Proto [04:26:17] < Location: https://meta.wikimedia.org/wiki/foo [04:26:36] looks good to me [04:27:21] (although that's not direct from a backend... but probably fine) [04:27:28] Krinkle [04:27:51] jeremyb: What do you mean by looks good? good in what way? [04:28:02] X-Forwarded-Proto is present in the Vary header [04:29:01] (Mind you, I'm an operations noob) for which node in the communication chain is that header intended (X-Forwarded-Protocol) [04:29:17] sent by MediaWiki? Or from elsewhere? For the squid? [04:29:35] by MediaWiki i hope [04:30:37] $ git grep X-Forwarded-Proto | wc -l [04:30:37] 8 [04:30:44] that's mediawiki core [04:31:19] for any caching proxy. squid and nginx and varnish everything else that might cache it [04:31:29] within WMF it's just squid i guess [04:31:39] but that's just because nothing else caches [04:33:32] I've always been confused by the Vary header. [04:34:15] So using Vary: X-Forwarded-Protocol in redirects.conf would not make the redirect un-cacheable. It tells the requestor (e.g. a proxy) that this response may only be re-used if the value of the header name(s) in Vary are also the same [04:34:22] http://tools.ietf.org/html/rfc2616#section-14.44 ;) [04:34:33] I was at http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44 [04:35:18] I'm afraid my abstraction level of http is higher than the one assumed in the context of that spec [04:35:29] heh [04:36:09] we could discuss over beer in ~12 days ;) [04:36:18] and with whiteboard! 
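[Editor's note] Tim's squid-caching concern and the Vary idea can be made concrete. A hedged sketch of what the redirects.conf side of change 13293 might look like (mod_headers syntax; the log does not show the actual patch, and whether mod_rewrite emits the protocol-relative Location verbatim or absolutifies it was still being tested above):

```apache
# Sketch only. nginx (the SSL terminator) sets X-Forwarded-Proto, so
# varying on it makes Squid cache the http and https variants of a
# redirect separately -- a 301 produced for an https request won't be
# replayed to plain-http clients.
Header append Vary X-Forwarded-Proto

RewriteEngine On
# Protocol-relative target, per the proposal earlier in the log:
RewriteRule ^/(.*)$ //www.mediawiki.org/$1 [R=301,L]
```

This matches the curl check just above: `X-Forwarded-Proto` appearing in the response's Vary header is what tells the caching layer to fragment on protocol.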
[04:36:45] almost there [04:36:47] So that also means that without a Vary header, the proxy will re-use the response as cache no matter the headers (only based on url?) [04:37:01] idk [04:37:09] what about content-language? [04:37:33] and accept-* headers? [04:37:45] The fact that we're using *reverse* proxies also makes some assumptions made in documentation for proxies a bit complicated (i.e. who the "client" is) [04:37:55] some may be there implicitly? also depends on the configuration [04:38:06] we can add headers to always vary on. idk if we have [04:38:15] jeremyb: afaik those are ignored by proxies unless the server (e.g. apache/mediawiki) responds with a Vary header. [04:38:24] Some APIs work that way, which is nice. [04:38:41] i don't follow [04:39:20] * jeremyb needs to sleep fairly soon [04:39:22] Instead of doing something like /w/api?format=json&action=foo or /api/foo.json it could just be /api/foo and based on the accept header the web application can use a format [04:39:34] sure [04:39:55] without the Vary header in such response stuff would get messed up :P [04:40:07] yes [04:40:45] but when ignoring the Accept: headers (which afaik most requests in mediawiki do, they don't care about the accept header, stuff is mostly all in the url), sending a Vary header would needlessly fragment the cache [04:41:40] brb later, and about that beer! [05:03:20] !log fixed fatal.log on fenari, socat was writing to a deleted file [05:03:27] Logged the message, Master [05:03:59] who keeps screwing up fatal.log on fenari, by attempting to rotate it but failing? [05:05:33] cp fatal.log{,.1} && cp /dev/null fatal.log; ? [05:05:54] or just send it a HUP or boot [05:06:17] * jeremyb really should sleep [05:06:19] bye! [05:06:57] yes, those are some of the ways it has been screwed up [05:08:14] !log srv266 was flooding the fatal error log, complaining about a missing file. Killed apache and ran sync-common.
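[Editor's note] The fatal.log breakage Tim describes (socat holding a file descriptor to a deleted file) is what you get from rotating by rename or recreate. Copy-then-truncate keeps the inode the writer has open. A minimal runnable sketch with throwaway paths, not the actual fenari setup:

```shell
# Placeholder path for demonstration only.
log=/tmp/fatal-demo.$$.log
printf 'old entries\n' > "$log"

cp "$log" "$log.1"   # archive the current contents
: > "$log"           # truncate in place: same inode, so a writer that
                     # keeps the fd open (as socat does) stays attached
                     # to the live file instead of a deleted one

cat "$log.1"         # the archive still holds the old entries
wc -c < "$log"       # the live log is now empty (prints 0)
rm -f "$log" "$log.1"
```

By contrast, `mv fatal.log fatal.log.1` followed by creating a fresh file leaves the long-running writer appending to the unlinked old inode, invisibly filling the disk.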
[05:08:20] Logged the message, Master [06:21:29] New review: Jalexander; "yeah, old and obsolete" [operations/apache-config] (master) C: 1; - https://gerrit.wikimedia.org/r/13292 [08:35:12] New patchset: Dereckson; "(bug 37674) Adding standard logo for na.wikipedia" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13301 [08:40:44] New patchset: Dereckson; "(bug 37674) Adding standard logo for na.wikipedia" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13301 [09:38:11] New patchset: Hashar; "varnish config for bits.beta.wmflabs.org" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13304 [09:38:48] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13304 [11:10:07] New patchset: Hashar; "static-master for bits docroot" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13316 [12:38:51] New patchset: Hashar; "(bug 37245) makes labs use bits.beta.wmflabs.org" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13322 [12:40:06] New patchset: Hashar; "(bug 37245) docroot 'static-master' for beta bits" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13316 [12:40:24] New review: Hashar; "patchset 2 add bug number" [operations/mediawiki-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/13316 [12:42:34] New patchset: Hashar; "(bug 37245) docroot 'static-master' for beta bits" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13316 [12:43:26] New review: Hashar; "Patchset 3 update topic to reflect bug number and reworded commit message." [operations/mediawiki-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/13316 [12:43:30] /clear [12:46:04] paravoid: can we possibly take care of https://gerrit.wikimedia.org/r/#/c/12178/ ? 
[12:46:15] to fix two classes conflicting on installing /etc/sudoers [12:46:33] that prevents me from running puppet on the apaches / jobrunner boxes on 'beta' [13:18:53] New patchset: Demon; "(bug 35802) Gerrit email title truncation over eager" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13325 [13:19:26] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13325 [13:19:47] hashar: looking [13:21:00] paravoid: I have installed it on psm-precise instance, seems to work [13:21:08] though I am not really sure what need to be checked [13:23:50] New review: Jens Ohlig; "(no comment)" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/8344 [13:31:50] New review: Demon; "(no comment)" [operations/mediawiki-config] (master); V: 0 C: 2; - https://gerrit.wikimedia.org/r/12990 [13:31:52] Change merged: Demon; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/12990 [13:33:56] New review: Demon; "(no comment)" [operations/mediawiki-config] (master); V: 0 C: 2; - https://gerrit.wikimedia.org/r/12583 [13:33:58] Change merged: Demon; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/12583 [13:35:17] New review: Demon; "(no comment)" [operations/mediawiki-config] (master); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13322 [13:35:19] Change merged: Demon; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13322 [13:47:01] New review: Faidon; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/12178 [13:47:04] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/12178 [13:48:20] New patchset: Ottomata; "filters.oxygen.erb - adding filter for Wikipedia Zero Grameenphone Bangladesh provider" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13327 [13:48:52] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13327 [13:53:48] New patchset: Hashar; "varnish config for bits.beta.wmflabs.org" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13304 [13:54:21] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13304 [13:54:29] New review: Hashar; "Patchset 2 is a rebase I sent by mistake." [operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/13304 [13:57:22] the good thing with the new way of doing things is that I can be confident it works [13:57:27] the bad thing is that I merge into production :-) [14:03:15] oh you merged my /etc/sudoers change \O/ [14:04:15] ohh generic::geoip::files disappeared [14:07:22] New patchset: Hashar; "disable geoip on labs for now" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13329 [14:07:55] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13329 [14:07:55] paravoid: https://gerrit.wikimedia.org/r/13329 to disable misc::geoip something, that got rewritten / replaced. I don't need geoip in labs for now. [14:08:38] New review: Faidon; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13329 [14:08:41] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13329 [14:09:01] danke :) [14:13:17] New patchset: Ottomata; "filters.emery.erb - adding Arabic Wikipedia banner page filter" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13331 [14:13:49] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13331 [14:13:55] heh [14:14:33] New review: Ottomata; "Should this be pipe 10? Asking Diederik..." 
[operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/13331 [14:15:16] !log restarting gerrit [14:15:22] Logged the message, Master [14:15:46] !log restarting apache on manganese [14:15:51] Logged the message, Master [14:20:57] hi ops! [14:21:01] this should be an easy one: [14:21:05] if anyone is around to help me out: [14:21:05] https://gerrit.wikimedia.org/r/#/c/13327/ [14:24:58] !log manganese dist-upgrade [14:25:03] Logged the message, Master [14:25:10] hey hashar [14:25:18] hello :-] [14:25:23] to get geoip on labs [14:25:25] all you have to do [14:25:25] is [14:25:27] include geoip [14:25:36] oh but it relies on private repo [14:25:38] i seeee [14:25:39] ummmm [14:25:41] are you the one that rewrote the class ? [14:25:49] thanks for doing that :-] [14:25:59] yeah the private repo probably means that it is not going to work on labs [14:26:10] we have a public private repo though [14:26:11] yeah hmmm [14:26:17] ahha, oh yeah? [14:26:24] yeah I rewrote it [14:26:40] a public private repo? [14:26:52] hmm, the geoip stuff is funny though [14:26:57] do we also have a private public repo :D [14:26:58] well hmm maybe it is private after all [14:27:24] ahey actually [14:27:47] ATTENTION!! rebooting manganese (aka gerrit) for dist-upgrade [14:27:51] so labs got its own private [14:27:52] does labs talk to sockpuppet, or ummmm wherever the volatile repo is?
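[Editor's note] hashar's experiment further down reduces to a one-line include plus a filesystem check. A hedged sketch of the manifest side; the node name is a placeholder, and whether labs instances can actually reach the volatile fileserver mount is exactly the open question in the discussion:

```puppet
# Sketch only: node name is an illustrative labs instance, not real.
node 'i-000001ab.pmtpa.wmflabs' {
    # Per the chat, the rewritten class copies the data files from
    # puppet:///volatile/GeoIP into /usr/share/GeoIP without needing
    # the private repo's credentials.
    include geoip
}
```

The verification used below is simply checking that GeoIP.dat, GeoIPCity.dat and GeoIPv6.dat appear in /usr/share/GeoIP after a puppet run.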
[14:27:53] that's the problem [14:27:58] we don't need the private repo [14:28:04] because we don't want the GeoIP creds on labs [14:28:13] but by default it doesn't need it [14:28:19] if you just include geoip [14:28:24] will need to try it out so :-] [14:28:26] it will try to copy the files from volatile [14:28:33] puppet:///volatile/GeoIP [14:28:37] i have no idea where that is [14:28:41] though, or if labs has access to that [14:28:42] neither do i [14:28:48] aye hm [14:31:31] bugzilled it https://bugzilla.wikimedia.org/show_bug.cgi?id=38027 [14:33:30] New patchset: Ottomata; "misc/statistics.pp - adding -t flag to rsync log command to preserve mod times" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13334 [14:34:13] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13334 [14:37:58] !log allocating yttrium to payments rack per rt 1227 [14:38:03] Logged the message, RobH [14:38:29] !log manganese rebooted for kernel update [14:38:31] !log dist-upgrading formey (svn/gerrit), rebooting soon [14:38:34] Logged the message, Master [14:38:40] Logged the message, Master [14:40:13] !log svn server is rebooting.brb [14:40:19] Logged the message, Master [14:40:20] heh [14:41:15] and back [14:41:58] * jeremyb wonders if the elements have CNAMEs for their atomic #? [14:42:08] hrmmmm.... [14:42:28] and for that matter whose idea was elements [14:42:42] ottomata1: why not just rsync -a ? [14:42:55] i don't want to preserve ownership [14:43:00] huh [14:43:31] -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X) [14:43:37] yeah [14:43:38] -o, --owner preserve owner (super-user only) [14:44:17] -g, --group preserve group [14:44:22] aye [14:47:27] jeremyb, do you have merge powers? [14:47:31] no [14:47:35] ayyyeee [14:47:59] ottomata: hashar: the labs puppetmaster is virt0. virt0 is in turn also puppetized pointing at sockpuppet. i think. 
you could include the geoip file on virt0 to pull from sockpuppet's volatile and then use that in labs? [14:48:01] I have 5 simple commits waiting for approval [14:48:28] yeah totally, as long as puppet://volatile is reachable and configured on labs [14:48:29] I don't have access to virt0 :D [14:48:29] it will work [14:48:37] but I will definitely try out the geoip class on some instance [14:48:40] ohohoh [14:48:48] hm, i see, more than just including it [14:48:50] oh hmmmm [14:48:56] * hashar is not an ops so got no root / fancy access [14:49:07] I just have very basic shell access to do administrative tasks ;) [14:49:09] so virt0 node includes geoip [14:49:23] and then sets up a fake labs puppet://volatile [14:49:30] ohh [14:49:31] so that other labs can access [14:49:36] yes [14:49:45] but then the fake labs need some geoip data in it isn't it ? [14:49:49] or a labsvolatile maybe [14:49:59] yeah, could make it conditional i guess [14:50:26] as currently config'd is it possible to include the file without also getting the libraries/apps to parse/use it? [14:51:02] yes [14:51:29] it can even put the data files in any spot it wants [14:51:35] so it can put it in its labsvolatile dir [14:51:42] see class geoip::data [14:52:58] perfect [14:54:45] hey RobHalsell, quick question, when will you be physically in the datacenter that houses the analytics machines? [14:55:57] drdee: as in analytics1001? [14:56:06] yes [14:57:10] drdee: he was there a day or two ago. idk about today [14:58:23] thx [15:01:03] drdee: im in the datacenter now. [15:01:40] I just saw that you closed the RT ticket regarding the cabling of the C row, so I am happy! [15:05:18] drdee: yea i gave leslie the serials yesterday [15:05:22] so its all on her now ;] [15:05:32] excellent! thanks so much! [15:05:39] once network folks have the IP allocations and such for the row then you can get to the new servers [15:15:30] New review: Jeremyb; "I think the precise upgrade is done? 
And it should be trivial to move the git repo to gerrit. Otto i..." [operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/11042 [15:29:51] ottomata: so the geoip class seems to apply cleanly on labs https://bugzilla.wikimedia.org/show_bug.cgi?id=38027 [15:30:08] ottomata: if you could add a comment there about how to verify it is cleanly installed … [15:31:20] !log boron appears to be unallocated, pulling IP allocation, rack allocation, moving to payments per 1227 [15:31:25] Logged the message, Master [15:32:25] hm [15:33:55] k, commented [15:33:59] look in /usr/share/GeoIP [15:34:03] if it has .dat files, then it worked [15:34:30] !log dns updated [15:34:36] Logged the message, Master [15:35:21] ottomata: got 3 files: GeoIPCity GeoIP GeoIPv6 (all .dat) [15:35:37] perfect! [15:35:45] you rocks! [15:37:55] huh [15:37:59] how did you get it?! [15:38:06] New patchset: Hashar; "+geoip for applicationserver::labs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13339 [15:38:33] jeremyb: include geoip [15:38:39] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13339 [15:38:55] hashar: does not compute ;) [15:38:59] will look later [15:39:36] maybe labs can reach volatile after all? [15:40:35] that might be a bug [15:40:39] or a different volatile [15:40:44] really I have no idea how that works [15:40:54] !log pulling the following servers, relocating to payments rack: payments1001-1004, boron, beryllium, lithium [15:40:58] I am just reusing puppet classes and tweak them for my own use :/ [15:41:00] Logged the message, Master [15:44:07] New patchset: Ryan Lane; "php5-memcache isn't used" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13340 [15:44:43] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13340 [15:48:41] New review: Demon; "Isn't the plan to move to using the pecl module rather than the one in MediaWiki anyway? Or does tha..." [operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/13340 [15:59:26] New review: Ryan Lane; "Eventually, yes, but it isn't used right now, so I'm removing it on virt0, so that it's consistent." [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/13340 [15:59:34] New review: Ryan Lane; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13340 [15:59:37] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13340 [16:08:04] off for today see you tomorrow [16:47:12] New patchset: Alex Monk; "(bug 38023) Add new namespaces/extensions to frrwiki." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13347 [16:48:49] New patchset: Alex Monk; "(bug 38023) Add new namespaces/extensions to frrwiki." 
[operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13347 [17:00:08] New review: Alex Monk; "(no comment)" [operations/mediawiki-config] (master) C: 1; - https://gerrit.wikimedia.org/r/13121 [17:17:10] New review: preilly; "(no comment)" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/13350 [17:18:03] New review: Pyoungmeister; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13350 [17:18:06] Change merged: Pyoungmeister; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13350 [17:28:23] New review: Lcarr; "(no comment)" [operations/apache-config] (master); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13290 [17:28:42] New review: Lcarr; "(no comment)" [operations/apache-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13290 [17:28:44] Change merged: Lcarr; [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13290 [18:01:58] New patchset: Demon; "Upping the accounts and accounts_byname caches from 1024 to 4096" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13356 [18:02:31] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13356 [18:09:24] New review: Jeremyb; "Please mention gerrit in the commit msg." [operations/puppet] (production) C: -1; - https://gerrit.wikimedia.org/r/13356 [18:18:18] heya, evan is requesting emacs on stat1, [18:18:22] i could install just for stat1 [18:18:34] anyone mind if I add it to class base::standard-packages [18:18:37] ? [18:19:49] silence means ok! [18:20:49] ewww emacs ? [18:20:56] why ? [18:20:59] hahaha [18:21:01] the operating system that lacks a good editor? [18:21:05] i dunno, he's a 'global' dev [18:21:11] ;) [18:21:17] what does globaldev mean? [18:21:39] i dunno! 
[18:21:39] one who can't print out a guide to vi ;) [18:21:45] hehe [18:21:57] i mean, i think it's unnecessary, we shouldn't be doing heavy work directly on a machine [18:22:00] maybe you are right, it is a big package [18:22:02] you can emacs on your computer all day long [18:22:06] yeah that's fine [18:22:07] <^demon> We used to install joe on all the servers because brion liked it more than vim. [18:22:09] he just wants it on stat1 [18:22:16] <^demon> When brion left, mark removed joe from everywhere :p [18:22:17] and then we beat that out of brion [18:22:20] ;) [18:22:22] haha [18:22:24] ok ok ok [18:22:27] will install for stat1 only! [18:22:46] gonna make an emacs class [18:22:48] where should it go? [18:22:49] base.pp? [18:22:50] poor joe :-( [18:23:09] class base::packages::emacs [18:23:10] ? [18:23:12] I tried getting `tree` and `colordiff` but they were rejected :-D [18:23:20] haha [18:23:22] really? [18:23:35] !log swapped bad psu out of ms1001-array3, redundant so no downtime [18:23:41] Logged the message, Master [18:23:48] emacs is probably another story though [18:24:37] jeremyb: globaldev means global development, not global developer :D [18:24:38] tree and colordiff seriously? [18:24:46] drdee: and what's that mean? [18:25:02] !log updating mediawiki-config to grab a12545d edceb4c & eee97ad [18:25:08] Logged the message, Master [18:25:58] jeremyb: ? you asked what globaldev is, and global development is one of our departments [18:26:23] ^demon: :D [18:26:35] whoops localdev: :D [18:26:44] local development is development on a localhost scale :) [18:27:00] :D [18:27:08] drdee: ok, and where does it say what that department does? is anyone else in that department? [18:27:14] New patchset: Ottomata; "Installing emacs on stat1 (per request from Global Dev)." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/13359 [18:27:17] i see jwild there [18:27:39] jeremby: there are about 30 folks in that department :D [18:27:47] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13359 [18:27:58] drdee: yeah, i'm starting to see it now [18:28:06] http://wikimediafoundation.org/wiki/Staff_and_contractors [18:28:12] go to Global Development [18:28:45] right, half my brain was ahead of you [18:30:49] !log did various tests using eval.php. Most important is $realm -> production. $cluster -> pmtpa. Syncing [18:30:55] Logged the message, Master [18:33:24] !log srv190 and srv281 got ssh timeout [18:33:29] Logged the message, Master [18:35:18] LeslieCarr: http://www.zdnet.com/blog/burnette/live-from-google-io-2012-day-2-keynote/2668 [18:35:29] !log so the nicely reviewed changes broke the enwiki stylesheets :/ reverted change :-((( [18:35:35] Logged the message, Master [18:36:07] hashar: not a great idea to sync config during the middle of a deploy ;) [18:36:33] indeed :-(( srry [18:36:41] Hmm is meta all like randomly broken style wise for anyone else? Or is chrome being a little retarded again [18:37:29] Damianz: hashar broke the site ;) [18:37:40] it should be fixed now [18:38:10] * Damianz takes the cookies off hashar [18:38:12] ohai [18:38:24] does he get the t-shirt? [18:38:32] or does it not really count? 
it was only the stylesheets [18:38:54] lets all just use the api for reading articles and no one will ever know :D [18:38:55] ohai Roan [18:39:10] New patchset: Hashar; "Revert "(bug 37245) makes labs use bits.beta.wmflabs.org"" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13362 [18:39:11] New patchset: Hashar; "Revert "detect cluster with /etc/wikimedia-realm"" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13363 [18:39:11] New patchset: Hashar; "Revert "send header from CS.php only for non CLI scripts"" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13364 [18:39:31] New review: Hashar; "(no comment)" [operations/mediawiki-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13364 [18:39:39] New review: Hashar; "(no comment)" [operations/mediawiki-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13362 [18:39:41] Change merged: Hashar; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13362 [18:39:56] New review: Hashar; "(no comment)" [operations/mediawiki-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13363 [18:39:59] Change merged: Hashar; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13364 [18:39:59] Change merged: Hashar; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/13363 [18:40:43] that is all cause of the RL ;-) [18:47:12] !log the internal change to CommonSettings.php caused a lack of stylesheet for less than a minute on most wikis. I did test on test.wikipedia.org and beta project, but there must be a logic error somewhere that mess with the prod projects. Revert changes have been sent out in gerrit and merged in master. [18:47:18] Logged the message, Master [18:56:31] r0csteady: Howdy [19:39:00] Hiya, is most everyone here a member of the WikiMedia ops team? 
[19:39:14] mostly not actually [19:39:18] some are [19:40:22] !g 2013 [19:40:22] https://gerrit.wikimedia.org/r/2013 [19:40:24] !g 2011 [19:40:25] https://gerrit.wikimedia.org/r/2011 [19:41:32] wdinkel: can we help you with something? [19:42:21] Yes, definitely. So, long story short, I'm an intern who is here at a SF company that works on WebOps stuff, and I had some just general questions about how you guys operate, and what challenges you face, and what you think would be cool to have (but don't), with regard to webops [19:42:38] A friend of mine at WikiMedia suggested I stop by this channel to chat... [19:43:02] FWIW, the company I'm with is CloudFlare, but I'm really just here because I'm curious about what goes into running a massive site like the WikiMedia network [19:43:27] Wikimedia* ;) [19:43:32] Cache, lots of cache [19:43:35] let me dig up some links for you [19:43:38] Ah, okay, sorry bout that :) [19:43:45] wdinkel: Also cool, <3 cloudflare [19:44:09] Damianz: Score! Can send you a free shirt if you give me your mailing address. [19:44:17] will@cloudflare [19:44:17] hah [19:44:26] :D [19:44:44] Goes for anyone here ... [19:44:47] I'm over the pond though :( Well right now, in the pond... it seems anyway [19:44:53] Damn rain [19:45:01] NP, we send internationally [19:45:12] wdinkel: not coming to wikimania i suppose? there will be some tech talks there [19:45:56] They keep me on lock down here, wish I could, but probably can't make it out to DC [19:46:35] I know that you guys have three main datacenters [19:46:41] Do you serve the entire globe from those three? [19:46:50] not 3 main, only 3 [19:47:04] and 1 is more caching [19:47:08] but yes, that's the case [19:47:24] You guys don't worry about pagespeed to EU, Asia, etc.? [19:47:28] Or rather [19:47:32] Don't have any problems with it [19:47:41] The caching centre is in amsterdam, that serves the EU fine [19:48:02] And I guess CN isn't much of a concern... [19:48:06] CN?
[19:48:10] China :) [19:48:16] There was a seoul caching centre at one point, but there isn't anymore [19:48:18] we peer with most anyone who's willing [19:48:52] (amsterdam and virginia are big peering hotspots. tampa not so much) [19:49:27] What are the biggest persistent issues you guys face? [19:49:32] I know that's pretty broad ... [19:49:32] Users. [19:49:55] Reedy: lol [19:50:04] I'm a user :( [19:50:09] Ryan_Lane: do you have slides handy for https://wikimania2011.wikimedia.org/wiki/Submissions/The_Site_Architecture_You_Can_Edit ? [19:50:12] oh, and PHP [19:50:17] Cause PHP sucks [19:51:09] Would you mind explaining a little bit about the issues with users? Do you guys get spammed a lot? [19:51:27] Or do you have some pretty good filters in place to automatically deal with that [19:52:39] Most of the spam is usually dealt with by other users [19:52:43] I bet you guys are one of the few sites that could pull off server-side C :) [19:53:13] Though not sure that would solve the sucking problem [19:53:25] ... depending on the nature of the suckness in question... [19:53:28] Reedy: Don't forget the poor bots :( [19:53:58] I'm tempted to call "testing in production" an issue, but I'm impressed by all the ways WMF has evolved to do that. [19:54:22] chrismcmahon: heh. Let me know when you know a way we can properly in testing ;) [19:54:33] *can test properly [19:55:06] Why do you guys choose to do everything in house instead of hosting with a CDN of any sort? [19:55:12] Reedy: I'm working on that :-) it's going a bit slowly atm [19:55:21] Beyond the cost efficiencies of just having your own racks [19:56:15] There should be an essay on this [19:56:36] Also, do you guys actively sync all of your properties' files at each datacenter, or do you have a kind of reverse proxy setup where you pull on demand from a kind of master? [19:56:36] and Reedy I wasn't joking about being impressed with the speed and flexibility of production fixes, it's a unique situation.
[19:56:52] wdinkel: properties files? [19:57:10] properties' files being just your web content [19:57:24] oh, not the java thing [19:57:30] Oh, no :) [19:57:31] mediawiki code is "pushed", servers pull most other things from puppet [19:57:39] chrismcmahon: As quickly as stuff gets fixed (software or servers falling over), it would be nice if it was tested and didn't have to be fixed ;) [19:57:57] umm, most content is not on web servers. it's in dedicated mysql DBs [19:58:25] the code and config and a couple other things is rsync'd with dsh [19:58:25] Ah, okay so that is probably actively synced [19:58:35] the db contents [19:58:38] or in puppet and handled by the puppet agent [19:58:43] replicated db clusters [19:58:50] see http://noc.wikimedia.org/dbtree/ [19:59:43] What are your thoughts on SPDY? Seems like serving wiki pages would be a sweet use case [19:59:57] Hmm hashar isn't here anymore, must remember to ask him about squid on deployment prep next time he's around. [19:59:59] https://commons.wikimedia.org/wiki/File:200908261431-Rob_Halsell-Wikimedia_Servers_and_Infrastructure.ogg [20:00:02] Errr... this isn't labs. [20:00:05] wdinkel: also https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=blob;f=wmf-config/db.php;hb=master [20:00:08] ^ slightly old, but mostly still relevant [20:00:41] Damianz: getting the labs beta cluster properly puppetized and maintained is going to be a big step toward that. [20:01:41] chrismcmahon: Indeed, I keep meaning to do something useful there but last time it was more fire fighting than productive moving forwards :( [20:02:27] Damianz: hashar has done a lot of work in the last several weeks, just this week I' [20:02:38] chrismcmahon: can you maybe figure out the status of squid sanitizing?
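The "pushed" model described above (MediaWiki code and config rsync'd out to the web servers with dsh, everything else pulled by the puppet agent or replicated at the DB layer) can be sketched roughly like this. The hostnames, target path, and the helper name are all invented for illustration; the real tooling fanned out with dsh rather than a plain loop.

```shell
# Rough dry-run sketch of a push-style config sync. Commands are echoed
# rather than executed, so running this only prints what would be done.
# Hosts and paths are placeholders, not the actual WMF layout.
push_config() {
  src="$1"; shift
  for host in "$@"; do
    echo rsync -a --delete "$src/" "$host:/usr/local/apache/common/"
  done
}
```

For example, `push_config /srv/conf mw1 mw2` prints one rsync command per target host.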
;) i'm happy to help sanitize if ya'll like [20:02:40] I'm pushing to get some real projects there [20:04:41] The last project request email that went around (pdbhandler extension) - to me that would be perfect for deployment-prep so devs don't have to maintain mw installs etc, but then it's 'staging' vs 'development' I guess. [20:04:51] jeremyb: I'm more of a front-end guy, but happy to learn. [20:05:06] Real usage would be cool, I was wondering about testing integration the other day. It would be awesome if we could spin up a clean cluster, deploy everything, run selenium tests etc against it then trash it *opsorgie* [20:05:48] Damianz: agreed [20:06:18] chrismcmahon: this is the prod squid config we're talking about. it's been packaged up and given to someone and i think even sanitized. then someone else had to verify it was really clean and i think that never happened [20:07:50] chrismcmahon: there should be some minimal prod specific stuff and mostly shared public config so that beta can use nearly the same config and changes don't get made in only one place [20:09:09] * jeremyb wonders if ryan's flying now? have a question about the way he sync'd up between the production and test branches [20:09:28] I think he is, or due to really soon. [20:09:36] We kinda lost a load of stuff when the sync happened :( [20:09:53] yeah, i'm digging to see what happened here [20:10:30] my puppet changes from january that are nominally in production but not effective [20:11:28] <^demon|away> jeremyb: Flying tomorrow, I think. He made the comment earlier that he surprised himself by being off-by-one on his day to fly home. [20:11:45] hah, right direction i guess! [20:12:26] i was off by one because my flight was canceled [20:12:36] wdinkel: we don't have anything on a CDN due to people's privacy worries. However, if you have heard of fastly - they are the CDN for wikia and basically are optimized for CDN'ing wiki content.
about SPDY, we have a bug open for it (would love to have support for that working, just nobody has had the time to do it …. hint hint https://bugzilla.wikimedia.org/buglist.cgi?title=Special%3ASearch&quicksearch=spdy&list_id=126325 ) [20:12:50] wdinkel: sorry, we were at lunch [20:13:06] Hi lcarr [20:13:09] hah. good lunch? [20:13:18] eh, it was ok [20:13:24] the company was good :) [20:13:28] the food was meh chinese [20:13:59] LeslieCarr: working on something for you to deploy ;) i submitted to gerrit in january and it was merged but some still isn't taking effect [20:14:11] digging [20:14:17] !log dns update for pc1-pc3 [20:14:22] Logged the message, RobH [20:14:29] somehow* [20:15:03] wdinkel: about latency - personally it is something i worry about a lot. We are currently making a west coast caching center which should be up and running in the next 6 months (i hope i don't eat those words) [20:15:35] wdinkel: peering helps a lot for us, and amsterdam is amazingly connected …. the EU gets great latency for logged out users [20:20:21] New patchset: RobH; "added pc1 to dhcp lease file" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13407 [20:20:53] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13407 [20:21:05] about time, i had to wait 15 seconds gerrit-wm! [20:21:20] <^demon> gerrit needs some lovin' [20:21:24] New review: RobH; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13407 [20:21:26] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13407 [20:22:37] binasher: Ok, pc1 should be ok to start the install on, you have not done one on the ciscos right? its kind of a pain in the ass [20:22:38] <^demon> RobH: If gerrit performance irks you, I'd love to see gerrit in the same datacenter as its database again.
Ryan's got a ticket open for that but no clue when it'll happen :( [20:23:16] binasher: I have found that you may want to login to the web mgmt and set it to ONLY PXE boot for the install stuff [20:23:23] then revert it back to normal after the install [20:23:24] RobH: thanks! i haven't tried installing on a cisco yet [20:23:32] as i cannot get it to take the one time pxe boot option(s) [20:23:42] ok [20:25:27] and if you get to where its pissing you off so much you wanna burn the world, lemme know and I can poke at it ;] [20:25:39] but i rather let you deal with writing the partman script ;] [20:26:20] ooh, a challenge! [20:34:19] New review: preilly; "(no comment)" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/13408 [20:34:28] notpeter: https://gerrit.wikimedia.org/r/#/c/13408/ [20:35:27] preilly: okie dokie [20:35:59] New review: Pyoungmeister; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13408 [20:36:02] Change merged: Pyoungmeister; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13408 [20:36:39] preilly: running the puppetz [20:36:45] notpeter: okay cool [20:38:06] !log ran aftv5 offload_large_feedback migrations on testwiki and en_labswikimedia [20:38:11] Logged the message, Master [20:41:20] New review: Lcarr; "(no comment)" [operations/apache-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13292 [20:41:22] Change merged: Lcarr; [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/13292 [20:56:44] New patchset: Jeremyb; "viewvc: pull I75990998f, I73f725e98 back into prod" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13409 [20:57:15] New review: Jeremyb; "I1888ff0bfe94d03b reverts this for files/svn/viewvc.conf only" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2276 [20:57:16] New review: gerrit2; "Lint check passed." 
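RobH's workaround above (set the box to PXE-only boot for the install, then revert to normal afterwards) is done through the Cisco web management UI in the conversation; as a stand-in, on IPMI-capable hardware the same toggle could be sketched with ipmitool. The host, credential flags, and helper name here are placeholders, and the command is echoed rather than run.

```shell
# Dry-run sketch of the PXE-toggle workflow, using ipmitool in place of
# the Cisco web management UI mentioned above. Remove the echo to
# actually issue the command against a management interface.
set_bootdev() {
  host="$1"; dev="$2"   # dev: pxe for the install, disk to revert
  echo ipmitool -I lanplus -H "$host" -U admin chassis bootdev "$dev"
}
```

Usage would be `set_bootdev pc1-mgmt pxe` before kicking off the installer, then `set_bootdev pc1-mgmt disk` once it finishes.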
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/13409 [21:01:18] jeremyb: I don't have slides handy cause I haven't made them yet [21:01:30] jeremyb: what's your question about syncing the branches? [21:01:34] Ryan_Lane: erm? you already presented that one [21:01:40] which one? [21:01:44] you mean for wikimania? [21:01:57] Ryan_Lane: editing the infra. yeah, haifa [21:02:15] oh, you want the old one [21:02:21] I'm giving a new presentation this year [21:02:24] Ryan_Lane: see my last commit above for the question about syncing. i guess has nothing to do with you now that i looked harder [21:02:39] (I rarely give the same talk twice) [21:02:55] Ryan_Lane: well i wanted something to give to wdinkel. but he's gone now i guess. we do have his email address [21:03:02] ah [21:03:07] Ryan_Lane: That just means we have to follow you around with a camera ;) [21:03:09] the fosdem one would be better [21:03:14] k [21:03:21] it's posted on wikitech [21:03:22] also, no video for haifa? [21:03:22] Which btw did you get video at fosdem? [21:03:26] so is the haifa one [21:03:26] no [21:03:30] no haifa video [21:03:33] which kind of sucks [21:03:38] how weird [21:03:41] yeah [21:03:43] it was recorded [21:03:46] no clue what happened to it [21:03:47] :D [21:03:59] seems to happen to all my recorded presentations [21:04:01] I really need to go though this years fosdem videos, mirrored them but not had time to watch =/ [21:04:02] the other one of yours that i looked at today i saw did have a link [21:04:06] to video [21:04:17] my keynote at the openstack conference is no where to be found either [21:04:23] ugh [21:04:30] summit? or something bigger? [21:04:37] summit+conference [21:04:40] I keynoted the conference [21:04:44] the first one [21:05:27] andrew_wmf_: ping [21:36:40] New review: Reedy; "(no comment)" [operations/mediawiki-config] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/13291 [21:37:22] binasher: have you looked at lqt.sql? 
[21:37:51] AaronSchulz: nope [21:37:52] i [21:37:59] i have a ton of stuff to review :) [21:44:46] binasher: this 'th_content LONGBLOB NOT NULL' gives me the willies ;) [21:45:03] * AaronSchulz goes back to other stuff [21:45:56] wuuuhhh? [21:52:34] Hey, I got dumped earlier and didn't get a chance to say it, but thanks so much for the links and answers earlier [21:53:07] And if any of you guys want to learn more about CloudFlare or want a shirt, drop me an email at will at cloudflare dot com. Seriously :) [22:04:18] jeremyb: hey - so you had mentioned something for me earlier... [22:19:15] folks, robla: i got gerrit-stats to visualize in limn on my local dev computer [22:19:15] these are per repo stats, but also aggregate stats by main repos (mediawiki, operations, analytics) [22:19:15] i already saw some data quirks [22:19:16] that need to be ironed out [22:19:59] w00t [22:20:58] robla: i will try to get this into labs tomorrow with dschoon so you can have a first look, i am sure you will find issues but at least it's a start :) [22:37:50] binasher: http://forums.mysql.com/read.php?123,45836,134634#msg-134634 [22:38:00] how can someone be "infinitely wrong" ;) [22:40:41] "dead" [23:01:40] AaronSchulz: hah! [23:08:22] binasher: do you have experience with SANs? [23:10:19] AaronSchulz: yeah, although its been a while [23:10:55] binasher: and with DBs over SANs? [23:11:01] running oracle on san attached solaris servers.. feels like a lifetime ago [23:11:11] like RAC? [23:11:42] no rac, just regular oracle [23:12:41] and some sybase-iq (a column store variant of sybase) [23:15:09] !log completed aft offload_large_feedback migration on enwiki [23:15:14] Logged the message, Master [23:26:27] !g 13409 | LeslieCarr [23:26:27] LeslieCarr: https://gerrit.wikimedia.org/r/13409 [23:26:38] LeslieCarr: sorry, was away. that's what i was talking about [23:27:41] ah yeah, switching to actually having our little icon ? 
[23:27:43] Reedy: Can you help me get this deployed https://gerrit.wikimedia.org/r/#/c/13406/ ? There's been half a dozen dupe reports of that regression since yesterday. [23:28:20] yeah [23:30:38] LeslieCarr: looks like it. couldn't even remember what it did at first [23:30:43] LeslieCarr: //upload.wikimedia.org/wikipedia/commons/thumb/d/d7/Buggie.svg/38px-Buggie.svg.png [23:30:45] Reedy: Thx :) [23:30:51] LeslieCarr: err, http://upload.wikimedia.org/wikipedia/commons/thumb/d/d7/Buggie.svg/38px-Buggie.svg.png rather [23:30:59] LeslieCarr: but also sslification [23:31:05] so cuuute [23:31:14] (of the link itself) [23:31:26] so question though, why is it // instead of the full path ? [23:31:36] I never saw it take effect at all though. for all i know is it's not puppetized [23:31:49] protocol relativity [23:32:01] right. in case viewvc is viewed over HTTPS [23:32:21] which does seem to be currently supported: https://svn.wikimedia.org/viewvc/mediawiki/trunk/debs/ [23:34:37] ah, though it looks like this is not submitted… ? [23:39:37] LeslieCarr: errmm? i gave 3 links in the commit msg [23:42:29] jeremyb: sorry, trying to figure out what exactly you were asking - so this was committed previously, then reverted, but even when it was committed you didn't see the change take effect ? [23:42:39] exactly [23:43:08] i asked about it at the time and then gave up. and now i looked again and it's been reverted [23:43:25] (i assume the revert was unintentional) [23:44:12] LeslieCarr [23:44:46] interesting [23:45:04] i agree! [23:45:11] well, let's try it again and see if it works? ;) [23:45:23] sure ;) [23:45:48] although > for all i know is it's not puppetized [23:45:53] we'll see [23:46:28] hehe that is true [23:46:47] let me see if i can see if that file is linked anywhere [23:46:59] also, i really really want one of those little steamed egg custard buns [23:47:10] i know it has nothing to do with this merge but i really want one [23:47:14] erm, link? 
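The protocol-relative trick discussed above (linking `//upload.wikimedia.org/...` instead of the full `http://` path, so viewvc keeps working when viewed over HTTPS) amounts to stripping the scheme from an absolute URL. A minimal sketch, with `protorel` as a made-up helper name:

```shell
# Make an absolute URL protocol-relative: drop the http:/https: scheme
# so the browser reuses whichever protocol the embedding page used.
protorel() {
  printf '%s\n' "$1" | sed -E 's|^https?:||'
}
```

For example, `protorel http://upload.wikimedia.org/wikipedia/commons/thumb/d/d7/Buggie.svg/38px-Buggie.svg.png` yields the `//upload.wikimedia.org/...` form used in the change.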
;) [23:47:30] you could just add a garbage comment to the deployed file and see if puppet clobbers you [23:47:39] http://lickmyspoon.com/wp-content/uploads/2009/06/asian-pearl-dim-sum-050.jpg [23:48:05] note: mister softee truck is ~40 ft from here with the song going [23:48:18] ooo [23:48:22] LeslieCarr: oooooh, interesting [23:48:29] (/me is sitting in a park atm) [23:48:31] i understand if you need to run and get some of that :) [23:48:41] haha [23:49:05] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/13409 [23:49:08] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/13409 [23:50:51] merging on formey now... [23:51:04] you mean sockpuppet? ;) [23:51:23] already did the sockpuppet one [23:51:28] pulled to formey [23:51:32] it updated the file ... [23:51:42] yay! [23:51:43] it works [23:51:44] isn't that puppet's job? [23:51:46] much cuter icon [23:51:48] i forced it [23:51:54] no need to wait until it decides to go :) [23:52:21] right, but let's make sure that it really is puppetized properly? [23:52:32] < jeremyb> you could just add a garbage comment to the deployed file and see if puppet clobbers you [23:52:46] oh i didn't manually pull the file, i just forced a puppet run [23:52:51] anyway, it does look live [23:52:55] ok, good ;) [23:52:56] instead of waiting for the random minute every hour the puppet runs happen [23:53:00] danke! [23:53:14] thank you for fixing it :)
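jeremyb's "garbage comment" trick above — append a marker to the deployed file, then see whether the next puppet run clobbers it — can be sketched like this. The marker text and helper names are invented for illustration.

```shell
# Sketch of the clobber test: drop a marker comment into a file that
# puppet may manage, wait for (or force) a puppet run, then re-check.
# If the marker is gone, puppet rewrote the file, i.e. it is managed.
MARKER='# puppet-clobber-test'

add_marker() { echo "$MARKER" >> "$1"; }

marker_survived() {
  # true (exit 0) while the marker is still present
  grep -q "^$MARKER" "$1"
}
```

In the exchange above, forcing a puppet run (rather than waiting for the random minute each hour) plays the role of the "next puppet run" step; `marker_survived` then tells you whether the file is actually puppetized.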