[00:05:58] !log krinkle synchronized php-1.22wmf10/extensions/VisualEditor 'I12f52719ecafd7488bb00419'
[00:06:08] Logged the message, Master
[00:07:55] !log krinkle synchronized php-1.22wmf9/extensions/VisualEditor 'I12f52719ecafd7488bb00419'
[00:08:02] Logged the message, Master
[00:09:09] Reedy: RoanKattouw_away: I noticed just now that HTTPS Everywhere isn't putting me on HTTPS for *.wikivoyage.org, know if this was fixed and just waiting for them to release or not fixed yet? Asking since you two have submitted patches before
[00:11:14] Krinkle: Yes, it was fixed ages ago
[00:11:17] Just never backported
[00:11:30] backported, or released.
[00:11:39] I'm using the chrome extension.
[00:11:39] Well, both
[00:11:52] There have been recent releases which haven't included the fix
[00:11:56] They do know the web is actually evolving over 12 months time, right :P
[00:12:07] strange
[00:12:12] When I enquired as to why, it was because it hadn't been backported to some branch
[00:15:02] It'd be a bit far in my backscroll to try and find ;)
[00:15:59] * Krinkle is creating his user page on recently created wikis
[00:16:06] https://meta.wikimedia.org/wiki/User:Krinkle/SulBase
[00:16:19] http://uk.wikivoyage.org/ is loading content from third-party domains on the main page
[00:16:25] You sound...
[00:16:28] Surprised? :p
[00:16:35] facebook.com, akamai cdn
[00:16:46] awesome
[00:17:18] * Krinkle removes
[00:19:26] omg, not just the main page. *Every* page
[00:19:57] James_F: is there some kind of page to link to this other than the wmf:Privacy policy? (for the edit summary when removing it)
[00:20:23] Krinkle: wmf:Privacy_policy works.
[00:20:39] Krinkle: You should also drop an e-mail to Jamesofur.
[00:20:43] (Cue ping.)
[00:21:18] also link to https://en.wikipedia.org/wiki/File:Cemetery_Entrance.jpg :P
[00:21:53] var exhtml = 'http://maps.wikivoyage-ev.org/w/poimap2.php?';
[00:22:13] mw.loader.load('//uk.wikipedia.org/w/index.php?title=User:AS/EditTools.js&action=raw&ctype=text/javascript'); beauuuutiful
[00:22:14] James_F: Jamesofur: Do you know anything about that domain?
[00:26:51] I know I was pinged in here but Colloquy freaked around the same time and I have no clue what it was :)
[00:27:02] New patchset: Ori.livneh; "Salt: Fix parameter used to add a key via grain-ensure" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74097
[00:27:13] Jamesofur, http://uk.wikivoyage.org/w/index.php?title=MediaWiki:Common.js&curid=513&diff=8605&oldid=8604
[00:27:23] http://uk.wikivoyage.org/w/index.php?title=MediaWiki:Common.js&diff=prev&oldid=8604
[00:27:51] Jamesofur: I removed the content loaded from facebook.net and the subsequent requests from that to the akamai cdn
[00:27:58] ^ Ryan_Lane
[00:28:22] However there is still a request made to 'http://maps.wikivoyage-ev.org/w/poimap2.php', didn't remove that yet, not sure who owns that
[00:28:25] ori-l: yeah, that happens every once in a while
[00:28:30] thanks
[00:28:36] whoever added it should be banned
[00:28:46] Ryan_Lane: no, I meant the patchset :)
[00:28:48] or at minimum their admin flag should be removed
[00:28:49] oh
[00:28:50] heh
[00:28:51] http://uk.wikivoyage.org/wiki/%D0%9A%D0%BE%D1%80%D0%B8%D1%81%D1%82%D1%83%D0%B2%D0%B0%D1%87:RLuts
[00:29:10] no bannings yet :)
[00:29:11] https://uk.wikivoyage.org/wiki/User:RLuts?uselang=en
[00:29:27] especially given that a lot of these guys are imports from another project with other rules
[00:29:30] Jamesofur: dude. at minimum that user shouldn't be an admin
[00:29:33] we warn first
[00:29:59] Jamesofur: Can you handle this further (and look into the -ev.org thing?)
[00:29:59] yup thanks Krinkle
[00:29:59] * Ryan_Lane grumbles
[00:30:01] alrighty
[00:30:10] I'll leave that up to Pb, but I'm not emergency desysopping him on my own (I am calling Pb and talking to the lawyers now)
[00:30:12] though
[00:30:36] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74097
[00:31:12] ori-l: merged. thanks :)
[00:31:33] thanks for the merge
[00:31:36] * ori-l puppetd -tv's
[00:32:07] bleh. this change for targeting grains is going to be large
[00:32:18] Krinkle, the maps domain is not ours: https://www.nic.ru/whois/?query=wikivoyage-ev.org&hint=wikivoyage-ev.org
[00:32:19] because I didn't want to have to add grains for every repo name
[00:33:05] MaxSem: https://dpaste.de/3uXqc/raw/
[00:33:26] Jamesofur: ^ looks like the maps domain is not ours either, perhaps a community member registered that. The name however suggests it is from the former wikivoyage organisation (they were a German GmbH or e.V., right?)
[00:33:47] ori-l, awesome!
[00:33:59] https://www.nic.ru/whois/?query=wikivoyage-ev.org&hint=wikivoyage-ev.org
[00:34:02] MaxSem: Thanks
[00:34:14] WV or not, our policy prohibits this site
[00:34:23] Sure
[00:35:06] Jamesofur: If you need me to help out with surgically removing anything else (e.g. the wikivoyage-ev.org link) let me know.
[00:36:07] * Jamesofur nods, thanks Krinkle
[00:38:36] Krinkle: just so that you know, we're going to send everything to the lawyers to review more closely and may remove the wikivoyage-ev link as well, but it's owned by a known quantity (the old wikivoyage org that has agreements with us and is applying for thematic org status etc) so we're not removing that right away
[00:39:46] Jamesofur: OK. Note that the domain doesn't appear to support HTTPS, so it should either get HTTPS, become an extension or perhaps be moved to labs (for the long term)
[00:39:59] * Jamesofur nods
[00:40:02] thanks, that's important
[00:40:26] my guess is we'll help them transition to labs or move to something else (hopefully other wikivoyage groups are using something else already, which will make it easier)
[00:43:37] mmm, that site is just vanilla OSM + leaflet + some POI stuff
[00:44:02] OSM could be used from toolserver (until it is moved to labs), like Wikipedia does
[00:45:45] is this for showing maps inside of the page content?
[00:45:52] or just a link to a map?
[00:46:34] don't know
[00:46:46] if it's embedding maps it definitely should not use toolserver
[00:46:58] Ryan_Lane: Wikipedia does that
[00:47:02] no it doesn't
[00:47:04] it links to them
[00:47:29] ? It displays a button (done from a gadget) and when clicked it embeds an iframe into the page with the map
[00:47:30] if it embeds, it would be a massive privacy violation
[00:47:46] lololo https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=ad81f0545ef01ea651886dddac4bef6cec930092
[00:47:56] "Linux for workgroups"
[00:48:02] New patchset: Andrew Bogott; "Import vcsrepo module from puppetlabs." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74099
[00:48:17] Ryan_Lane: As per your suggestion ^
[00:48:24] andrewbogott: awesome
[00:48:28] https://meta.wikimedia.org/wiki/WikiMiniAtlas
[00:48:43] Ryan_Lane: click the coordinates globe at the top right of https://en.wikipedia.org/wiki/White_House
[00:49:19] internal server error
[00:49:26] and it didn't embed anything, but redirected me
[00:49:27] Ryan_Lane: Not the link, the icon
[00:49:28] Ryan_Lane, it seems to support all the features we have now other than specifying an extra-long timeout. Which theoretically we don't need anymore :/
[00:49:45] -_-
[00:49:58] I don't see how this isn't a violation of the privacy policy
[00:50:09] http://cl.ly/image/2F303n1C2x01
[00:50:18] I also don't see how this is legal
[00:50:26] since TS is in the EU
[00:51:02] anyway, note that this isn't loaded from the regular toolserver area but the OSM-specific cluster (ts-users don't have access)
[00:51:04] I guess this doesn't actually link the user to the request
[00:51:27] and it's in an iframe, so I guess it doesn't fuck our security
[00:51:42] Indeed, the javascript is a wikipedia-hosted gadget.
[00:51:53] but we still should get our tileserver up and running
[00:52:01] yes, we should
[00:52:20] if we do we can actually embed the map in the page, rather than needing a gadget
[00:53:16] iirc (James_F would probably know better since I only read it retroactively once), that discussion happened a few years ago. Subsequently Toolserver decided to support it as a primary thing (e.g. they host OSM tools and Wikimedia tools; OSM isn't a regular tool operated by a toolserver user). And that was good enough until WMF would host it themselves.
[00:53:38] yeah
[00:53:41] Indeed.
[00:54:28] So only ts roots with access to these hosts would be able to change it or read access logs (they might even have disabled access logs for that server, not sure)
[00:55:47] and since TS roots have signed our NDA...
[00:56:33] well, it's not really an issue anyway, since it's an iframe and the request isn't linked to the user
[00:56:50] and it's being loaded by a gadget
[01:02:37] Krinkle: since you can tell me faster than I can figure it out in my head ;) if that Facebook script wasn't being asked for on a page, was it still loading the script on all pages?
[01:03:57] Jamesofur: It was loading the facebook script on all pages; that script from facebook (not the gadget) then created a button and inserted it in the area where the sitenotice usually is. When clicked it would load additional scripts from facebook. It is the regular "Like" javascript they promote.
[01:04:14] * Jamesofur nods, thanks
[01:04:21] so it was making http requests to facebook and their CDNs on every page view
[01:47:54] Ryan_Lane: Which part of the privacy policy bans loading remote content?
[01:48:28] I've pushed repeatedly for clarification in this area, but the privacy policy is silent on the matter, AFAIK.
[01:48:48] in this case it may not
[01:48:59] http://status.wikimedia.org has Google Analytics still.
[01:49:03] the data is held by folks who have signed NDAs
[01:49:32] but one issue is that the data is held in europe.
[01:50:09] hopefully they just don't collect logs
[01:50:47] As far as I'm aware, other than sysadmin enforcement, there's no written policy about violating user privacy with remote scripts or remote images or whatever else.
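
(For context on the snippet Krinkle removed, per his 01:03 explanation above: it followed the standard third-party "Like" button pattern, where site-wide JS fetches a remote script on every page view. A minimal sketch of that pattern — the URL and comments are illustrative, not the actual uk.wikivoyage revision:

    // Ran for every reader on every page view via MediaWiki:Common.js.
    // The HTTP request alone discloses the reader's IP address, user agent
    // and (via the referrer) the page being read -- before any click.
    mw.loader.load( 'http://connect.facebook.net/en_US/all.js' ); // illustrative URL

    // The fetched script then inserted a "Like" button where the sitenotice
    // usually sits; clicking it pulled further scripts from Facebook's
    // Akamai-hosted CDN.

This is why the mere presence of the loader, not just the button click, was the privacy concern.)
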
[01:51:19] it's definitely a privacy violation to include javascript loading google analytics into a wikimedia project's js ;)
[01:51:42] and that would surely be against the privacy policy
[01:51:48] Prove it. :-)
[01:52:17] a user that had not signed an NDA would be collecting data about users and their requests
[01:52:31] that would correlate IP addresses with users
[01:52:33] The privacy policy makes no mention of non-disclosure agreements.
[01:53:04] meh. I don't feel like getting into this. it doesn't concern me and is a waste of time
[01:53:58] Just saying it's difficult to block or ban or de-admin anyone over an unwritten policy. ;-)
[01:54:13] I'll gladly do it
[01:54:25] Elsie: I agree about the NDA. CheckUsers should sign one.
[01:54:28] but again, I don't do it, because we have people who handle it anyway
[01:54:54] But CheckUsers should already know that they are handling private user data
[01:55:01] Bsadowski1: NDAs are mostly tangential to the issue of remotely loading scripts or images or other resources.
[01:55:53] completely ignoring the privacy implications, remotely loading resources affects the security of the projects
[01:56:25] So does allowing any local admin to modify site-wide JS.
[01:56:25] Or insert raw HTML.
[01:56:28] Bsadowski1: how about OS?
[01:56:48] I'd seriously consider dropping the tools if I had to sign an NDA.
[01:57:01] https://meta.wikimedia.org/wiki/Advertisement_of_the_privacy_policy
[01:57:05] Oh, well, in that case, nevermind
[01:57:11] Elsie: yes, and if we see an admin doing something malicious they should be banned for that too
[01:57:25] https://meta.wikimedia.org/wiki/Google_Analytics
[01:57:29] https://meta.wikimedia.org/wiki/NDA
[01:57:35] importing external resources can compromise security even without the admin themselves doing anything
[01:57:39] Sure.
[01:57:50] I don't disagree with you on the privacy or security implications.
[01:57:52] it also affects the availability of the sites
[01:57:56] My point is that all of this is unwritten.
[01:58:33] if someone doesn't understand this without it being written they shouldn't have the ability to do it
[01:58:33] Elsie: excludes are interesting.
[01:59:03] * Ryan_Lane actually hates that admins can modify the site js
[01:59:05] Ryan_Lane: There are over 700 wiki communities. Good luck explaining the nuances of this to them before they promote an admin.
[01:59:43] The Wikimedia Foundation had no issue using an iframe for Jobvite or whatever.
[01:59:57] And it still regularly hosts content on YouTube, though it may use the nocookie domain.
[02:00:05] Elsie: and you assume that every person that works for WMF is equally OK with that
[02:00:09] Plus, y'know, Google Apps.
[02:00:17] google apps is used internally
[02:00:33] it isn't necessary for any of the production projects to continue to operate
[02:00:40] Google Apps surely hosts user data, though.
[02:00:43] neither is the content posted to youtube
[02:01:13] Elsie: not as far as I know
[02:01:30] Ryan_Lane: it is, and it's used in fundraising banners.
[02:01:33] Well, any e-mail that's sent to a staffer. Any Google form.
[02:01:45] odder: what's used in fundraising banners?
[02:01:47] "User data" is pretty broad.
[02:01:49] YouTube is the current preference over Commons
[02:01:59] Ryan_Lane: fundraising videos, for example.
[02:02:13] I'm not making assumptions about how the entire staff feels, BTW. I just think it's a bit silly to pretend as though it's only local admins who are a concern.
[02:02:14] I think there's been like one of those?
[02:02:24] and it's absolutely not in the banners
[02:02:26] and it's been visible to millions of people?
[02:02:38] it's /in/ the banners.
[02:02:54] odder: prove it
[02:02:54] Link to an example banner?
[02:02:58] yes
[02:03:13] because I don't believe that a banner embeds a youtube video
[02:03:37] https://en.wikipedia.org/wiki/Main_Page?banner=B12_1227_ThankYou_5pillars&forceBannerDisplay=true
[02:04:03] you have to be fucking kidding me
[02:04:08] Yes.
[02:04:40] see https://meta.wikimedia.org/wiki/Research:Donor_engagement/Thank_You_campaign#Banners for a full list (I think)
[02:05:01] https://meta.wikimedia.org/wiki/MediaWiki:FR2012/Resources/Video.js
[02:05:19] https://meta.wikimedia.org/wiki/MediaWiki:Centralnotice-template-B12_112413_Lovedart
[02:05:22] &c.
[02:05:36] https://meta.wikimedia.org/w/index.php?title=Special%3ASearch&profile=advanced&search=script+src&fulltext=Search&ns8=1&ns866=1&redirs=1&profile=advanced
[02:05:56] I would've thought it'd be possible to include Commons videos instead.
[02:06:12] They make a note that you can watch it on Commons. ;-)
[02:06:24] * Ryan_Lane groans
[02:06:33] more upsetting: the name "Lovedart"
[02:06:36] Elsie: which is so helpful!
[02:06:49] ori-l: Have you watched the Lovedart video? It's actually really cute.
[02:06:53] And well done.
[02:06:57] The YouTube stuff aside.
[02:06:59] I don't think I have, no
[02:07:13] I certainly would click on a link to Commons instead of click on the play button.
[02:07:19] clicking*
[02:07:29] ah. so..
[02:07:39] the banner itself doesn't load from youtube
[02:07:55] the image is on commons
[02:08:05] It does not play the video without you clicking it, yes.
[02:08:07] if a user clicks the video it loads youtube
[02:08:09] That's good manners.
[02:08:23] otherwise no info is sent to youtube
[02:08:23] it's still absurd
[02:08:28] https://commons.wikimedia.org/wiki/File:What%27s_a_Love_Dart%3F.webm
[02:08:32] ori-l: ^
[02:09:07] it's NYC, you jerk
[02:09:15] I can't look at that, I get too sad.
[02:09:29] I'm not sure I even noticed that much.
[02:09:57] Ryan_Lane: perhaps you can weigh in at https://meta.wikimedia.org/wiki/Talk:Wikimedia_budget#Revenue.2C_Expenses.2C_and_Staffing (second point)
[02:11:46] odder: that's not a good place to discuss that
[02:12:00] someone should bring it up on the wikimedia list
[02:12:16] well, OK, I can do that.
[02:12:23] wait a minute, please.
[02:12:57] looks like 7 PM in SF to me, so I guess that's a good time for some evening reading.
[02:13:17] I'm not even subscribed to that list :D
[02:13:30] I guess I can subscribe with a filter to archive and mark as read
[02:14:04] I think you should be able not to receive mail from that list and respond through Gmane
[02:14:13] I hate using gmane
[02:14:17] me too
[02:14:19] :-)
[02:14:23] and a filter is easy to do
[02:16:21] odder: IIRC fundraising videos used YouTube by default because many people can't view Commons videos without technical problems
[02:16:32] http://lists.wikimedia.org/pipermail/wikitech-l/2013-May/069582.html
[02:16:34] then we should fix the video problems
[02:16:39] or we shouldn't show videos
[02:16:48] RoanKattouw: that's funny; I can't view YouTube videos without technical problems :)
[02:17:00] http://lists.wikimedia.org/pipermail/wikimedia-l/2012-March/119278.html
[02:17:01] !log LocalisationUpdate completed (1.22wmf10) at Wed Jul 17 02:17:00 UTC 2013
[02:17:12] Logged the message, Master
[02:17:19] RoanKattouw: why, how would those differ from YouTube videos?
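
(The click-to-play pattern described around 02:07-02:08 — banner renders only a Commons-hosted thumbnail; nothing is requested from YouTube until the reader explicitly clicks — might look roughly like this. Element IDs, dimensions and the video ID are illustrative, not the actual FR2012/Resources/Video.js code:

    // Nothing is fetched from YouTube at page-view time; the banner shows a
    // Commons-hosted still image. Only an explicit click swaps in the iframe.
    $( '#fr-video-thumb' ).one( 'click', function () {
        $( this ).replaceWith(
            $( '<iframe>' ).attr( {
                src: '//www.youtube-nocookie.com/embed/VIDEO_ID', // hypothetical ID
                width: 640,
                height: 360,
                frameborder: 0
            } )
        );
    } );

Using jQuery's .one() ensures the handler runs once; until it does, YouTube receives no request at all, which is the basis of the "technically no privacy policy violation" argument below.)
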
[02:17:39] Ryan_Lane: I'd love better guidance in this area. :-)
[02:17:45] I'm no specialist, but I think we use HTML5 for newer browsers and some other technique for older ones?
[02:17:54] I think enforcement is nearly impossible without clearer rules about what is and isn't allowed.
[02:18:05] For example, toolserver.org seems to have a blanket exemption from the remote loading rule.
[02:18:11] odder: I'm no specialist either, but I believe there were issues with how widespread codecs for free video formats are
[02:18:23] As do other Wikimedia domains (defined as...).
[02:18:36] Let's see if I can find this discussion
[02:19:07] RoanKattouw: the real issue is that we have spent basically no resources to make video work properly
[02:19:32] I remember watching Commons videos on my sister's laptop; it looked fine on a Windows 7 machine with Chrome
[02:19:33] Nooo wait I know what it is
[02:19:43] Squid and Varnish sucking for video
[02:20:06] *That* was the problem: we can't serve videos at an it's-embedded-in-a-banner-on-every-page-view scale
[02:20:39] Does it differ when it's actually an image that only loads a video after the user clicks on it?
[02:20:43] RoanKattouw: Says who? :-)
[02:20:51] Elsie: says mark
[02:21:24] and Varnish Software, fwiw: https://www.varnish-software.com/blog/http-streaming-varnish
[02:21:26] mark publicly said Wikimedia's ops infrastructure couldn't handle Wikimedia's load?
[02:22:01] oy vey
[02:22:01] I think he said that we shouldn't change Wikimedia's load to include videos served to millions of people
[02:22:13] I found one thread, still searching for the other one
[02:22:34] Still, what Ryan_Lane said earlier; we just serve an image to them and a video only if they click on the image.
[02:22:45] Yes
[02:22:48] I wonder how many hits the Commons version got.
[02:22:56] So there is technically no privacy policy violation
[02:22:59] There's some recordImpression JS in there.
[02:23:13] So the video play stats (even for YouTube) are recorded.
[02:23:42] YouTube says 78k
[02:24:00] Would such an amount kill us?
[02:25:33] Well, technically we only link to YouTube; same situation as if we link to websites that use Google Analytics from our banners.
[02:27:39] [On a related note, the video is listed as CC-BY on YouTube and as CC-BY-SA on Commons.]
[02:29:05] I've alerted the appropriate authorities.
[02:29:46] https://meta.wikimedia.org/wiki/Legal_and_Community_Advocacy/CC-BY-SA_on_Facebook
[02:31:39] !log LocalisationUpdate completed (1.22wmf9) at Wed Jul 17 02:31:39 UTC 2013
[02:31:51] Logged the message, Master
[02:39:01] New patchset: Ryan Lane; "Use grains for deployment targets" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74108
[02:47:10] @notify Ryan_Lane
[02:47:10] I'll let you know when I see Ryan_Lane around here
[02:47:28] !log LocalisationUpdate ResourceLoader cache refresh completed at Wed Jul 17 02:47:27 UTC 2013
[02:47:38] Logged the message, Master
[03:28:00] odder: why would the license thing cause confusion? It just means both licenses are valid (I agree they should just settle on one but .. )
[03:29:22] Jamesofur: because generally people choose one of the two
[03:30:06] Jamesofur: lack of the -SA clause means you can create derivative works and release them under CC-BY-NC-ND
[03:30:41] yeah, I'd prefer the SA personally :-/
[03:30:50] which isn't exactly what Victor was after, I suppose
[03:31:56] there is an argument that it's easier for reusers to understand/follow (which I think is why YouTube pushes for it) though I don't think he was thinking about that
[03:32:04] at least specifically
[03:33:06] Jamesofur: sure, CC-BY is a great licence if you want everyone to be able to edit your work and close it afterwards against commercial usage or further derivative versions
[03:33:22] * Jamesofur nods
[03:33:26] Probably best to move this conversation to #wikimedia. :-)
[03:33:34] it's a #wikimedia channel
[03:33:58] Indeed, but people can be ornery in here.
[03:34:18] this channel is silent, I'm not hurting anyone.
[03:35:12] I didn't mean to suggest that you were hurting or interrupting anyone. :-)
[03:56:57] Ryan_Lane: http://lists.wikimedia.org/pipermail/wikimedia-l/2013-July/126988.html
[03:57:11] yeah, saw it. I had subscribed a bit aho
[03:57:11] probably not exactly what you might have expected, but that's it
[03:57:12] *ago
[03:57:20] OK
[06:27:41] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 06:27:35 UTC 2013
[06:28:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[06:28:50] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 06:28:48 UTC 2013
[06:29:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[06:30:10] RECOVERY - SSH on vanadium is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.1 (protocol 2.0)
[06:30:20] RECOVERY - RAID on vanadium is OK: OK: Active: 6, Working: 6, Failed: 0, Spare: 0
[06:30:33] RECOVERY - DPKG on vanadium is OK: All packages OK
[06:30:40] RECOVERY - Disk space on vanadium is OK: DISK OK
[06:30:45] huh
[06:33:20] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 06:33:14 UTC 2013
[06:34:00] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[06:41:59] Jul 17 06:29:48 vanadium kernel: [123661.992251] Out of memory: Kill process 1464 (redis-server) score 666 or sacrifice child
[06:41:59] Jul 17 06:29:48 vanadium kernel: [123662.000473] Killed process 1464 (redis-server) total-vm:5499088kB, anon-rss:5434408kB, file-rss:500kB
[06:47:47] hey TimStarling, got a minute?
[06:47:54] i'm trying to figure out what exactly happened here: https://dpaste.de/E1Ksp/raw/
[06:48:40] "score 666 or sacrifice child"
[06:48:41] i pushed out new code on the 15th and i guess eventlogging-consumer has a memory leak
[06:48:46] That's some pretty dark stuff you're getting into there.
[06:49:29] that's a lot of RSS
[06:49:35] Sure it's a bug, not just too many services on the box?
[06:50:06] What's supposed to be in that Redis?
[06:50:31] i think it's just there to support salt, there were probably a dozen keys set at the most
[06:50:56] ori-l, so EL is not using Redis on that box?
[06:51:06] the fact that it was picked by the oom killer doesn't mean it was doing anything wrong
[06:51:17] no
[06:51:35] ori-l, right, but how does a dozen SSH keys wind up being 5.5 GB?
[06:51:53] Or do you mean some other kind of key?
[06:51:58] Not that familiar with salt.
[06:52:01] 5.5GB on a box with 8GB of physical
[06:52:16] redis-server isn't there for salt
[06:52:58] I guess something else was using a few GB, but redis got the axe
[06:53:15] redis-server runs on tin for git-deploy reporting. it must be on vanadium for some other reason
[06:54:17] i did use it for EventLogging data, but that was probably around six months ago
[06:54:25] maybe I just never uninstalled it
[06:54:50] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 06:54:46 UTC 2013
[06:54:50] pretty good reason to be first on the chopping block
[06:55:29] Puppet says it has EL, nrpe, and solr::ttm
[06:55:32] http://ganglia.wikimedia.org/latest/graph_all_periods.php?h=vanadium.eqiad.wmnet&m=cpu_report&r=month&s=by%20name&hc=4&mc=2&st=1374044107&g=mem_report&z=large&c=Miscellaneous%20eqiad
[06:55:40] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[06:55:43] I don't know what's on there but not puppetized
[06:55:56] ganglia says there couldn't have been 5.5 GB allocated continuously for 6 months
[06:56:02] unless it was in persistent storage
[06:56:23] i.e. maybe redis was shut down for a while, and was recently started
[06:56:40] maybe, it coincides a little too neatly with me pushing out lots of new code
[06:57:28] i'll remove redis, not doing so right this moment because i see you're logged on and i don't want to generate noise
[06:57:30] root@vanadium:/etc# du -h /var/lib/redis/
[06:57:30] 2.4G /var/lib/redis/
[06:57:41] maybe it is more compact on disk than in memory?
[06:58:20] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 06:58:15 UTC 2013
[06:58:25] ori-l, if we don't know what's in there, we should make sure the data is preserved.
[06:58:29] rdbcompression yes
[06:58:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[06:59:10] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 06:59:01 UTC 2013
[06:59:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[07:00:46] so it was shut down and upon starting it tried to load the most recently generated rdb file into memory?
[07:01:36] Seems plausible.
[07:01:39] Per http://redis.io/topics/persistence :
[07:01:46] "RDB is a very compact single-file point-in-time representation of your Redis data."
[07:01:59] oh, duh.
[07:02:04] i restarted on the 15th.
[07:02:31] i figured i'd use the scheduled outage of EL to apt-get dist-upgrade
[07:02:39] that's what made it start.
[07:02:40] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 07:02:35 UTC 2013
[07:02:54] Why did it start?
[07:02:57] Is it puppetized?
[07:03:00] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[07:03:17] superm401: apt-get dist-upgrade will upgrade the package
[07:03:27] which will cause the service to start, if it is not started
[07:03:33] puppet doesn't clean up after itself unless you use it carefully and you use well-written modules
[07:03:51] it's in rc3.d
[07:03:58] right
[07:04:02] so it will start on boot
[07:04:20] yes, so that fully explains it i think
[07:04:25] Okay, make sense. It wasn't running, but still set to run on boot.
[07:04:37] "Makes sense", I mean. :)
[07:05:05] TimStarling: thanks
[07:05:37] np
[07:05:43] superm401: I had an amnesty from ops to have unpuppetized things on that machine for a while
[07:06:11] ori-l, wasn't trying to blame anyone, just Five Whys.
[07:06:37] I puppetized it post hoc, but redis probably slipped my mind because I had stopped using it some time before.
[07:06:43] yeah, just explaining.
[07:23:37] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:24:27] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.127 second response time
[07:25:17] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 07:24:58 UTC 2013
[07:25:47] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[07:28:17] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 07:28:12 UTC 2013
[07:28:27] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[07:28:37] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:29:07] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 07:28:58 UTC 2013
[07:29:17] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[07:29:28] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.128 second response time
[07:32:57] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 07:32:53 UTC 2013
[07:33:57] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[07:54:57] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 07:54:47 UTC 2013
[07:55:47] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[07:57:47] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 07:57:38 UTC 2013
[07:58:28] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[07:58:47] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 07:58:46 UTC 2013
[07:59:17] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[08:02:47] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 08:02:44 UTC 2013
[08:02:57] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[08:19:45] New patchset: Faidon; "Add an authdns module & associated role classes" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74119
[08:24:59] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 08:24:51 UTC 2013
[08:25:39] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[08:26:29] New patchset: Akosiaris; "Adding oozie and hue to cloudera fetched packages" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74121
[08:27:49] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 08:27:43 UTC 2013
[08:28:07] Change merged: Akosiaris; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74121
[08:28:29] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[08:28:59] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 08:28:54 UTC 2013
[08:29:19] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[08:32:49] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 08:32:48 UTC 2013
[08:32:59] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
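
(Summing up the causal chain pieced together above: redis had been stopped months earlier, but its compressed RDB snapshot (2.4G in /var/lib/redis) survived on disk; the apt-get dist-upgrade on the 15th restarted the packaged init script, redis reloaded the snapshot into uncompressed memory, grew to ~5.4 GB RSS on an 8 GB box, and the OOM killer fired. The redis.conf persistence directives involved look roughly like this — values are illustrative, not vanadium's actual config:

    # redis.conf persistence stanza (illustrative values)
    save 900 1              # snapshot to disk if >=1 key changed in 15 minutes
    rdbcompression yes      # compress strings inside the RDB file; this is why
                            # a 2.4G dump can expand to ~5.4G of RSS on load
    dbfilename dump.rdb
    dir /var/lib/redis      # the snapshot survives a service stop, so the next
                            # start (via dist-upgrade or the rc3.d boot link)
                            # silently reloads it into memory

So the "compact single-file representation" quoted from redis.io is exactly what made the on-disk and in-memory sizes diverge.)
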
[08:40:15] !log added oozie and hue packages to apt.wikimedia.org as per analytics team request
[08:40:26] Logged the message, Master
[08:54:59] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 08:54:52 UTC 2013
[08:55:39] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[08:57:39] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 08:57:34 UTC 2013
[08:58:29] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[08:58:59] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 08:58:52 UTC 2013
[08:59:14] New review: Hashar; "This allow ops to force push, will be used by Faidon to initialize the repository properly." [operations/dns] (refs/meta/config); V: 2 C: 2; - https://gerrit.wikimedia.org/r/74122
[08:59:14] Change merged: Hashar; [operations/dns] (refs/meta/config) - https://gerrit.wikimedia.org/r/74122
[08:59:19] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[09:02:39] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 09:02:38 UTC 2013
[09:02:59] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[09:23:40] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:24:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 2.447 second response time
[09:25:00] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 09:24:58 UTC 2013
[09:25:40] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[09:27:40] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:28:00] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 09:27:50 UTC 2013
[09:28:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[09:29:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.134 second response time
[09:29:50] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 09:29:43 UTC 2013
[09:30:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[09:32:13] New review: Akosiaris; "LGTM, just a really small proposed optimization." [operations/puppet/cdh4] (master) C: 1; - https://gerrit.wikimedia.org/r/69804
[09:32:40] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:32:50] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 09:32:49 UTC 2013
[09:33:00] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[09:33:00] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 09:32:54 UTC 2013
[09:33:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.598 second response time
[09:34:00] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[09:54:50] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 09:54:47 UTC 2013
[09:55:40] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[09:57:40] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 09:57:32 UTC 2013
[09:58:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[09:58:50] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 09:58:48 UTC 2013
[09:59:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[10:02:50] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 10:02:42 UTC 2013
[10:03:00] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[10:25:03] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 10:25:00 UTC 2013
[10:25:41] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[10:26:51] PROBLEM - Puppet freshness on manutius is CRITICAL: No successful Puppet run in the last 10 hours
[10:29:01] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 10:29:00 UTC 2013
[10:29:01] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 10:29:00 UTC 2013
[10:29:21] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[10:29:31] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[10:33:11] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 10:33:04 UTC 2013
[10:34:01] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[10:54:51] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 10:54:49 UTC 2013
[10:55:41] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[10:58:11] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 10:58:06 UTC 2013
[10:58:31] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[10:58:51] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 10:58:47 UTC 2013
[10:59:21] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[11:03:21] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 11:03:13 UTC 2013
[11:04:01] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[11:22:14] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[11:23:44] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[11:24:22] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[11:24:34] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.126 second response time
[11:26:04] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 11:25:59 UTC 2013
[11:26:44] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[11:27:54] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 11:27:53 UTC 2013
[11:28:24] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[11:29:04] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 11:28:59 UTC 2013
[11:29:24] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[11:31:18] !log updating gdnsd in apt to 1.9.0-1~precise1
[11:31:29] Logged the message, Master
[11:32:54] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 11:32:50 UTC 2013
[11:32:54] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[11:38:02] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[11:40:34] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[11:48:10] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[11:54:54] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 11:54:46 UTC 2013
[11:55:44] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[11:57:54] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 11:57:45 UTC 2013
[11:58:24] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[11:58:54] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 11:58:46 UTC 2013
[11:59:24] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[12:02:54] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 12:02:45 UTC 2013
[12:02:54] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[12:05:18] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[12:14:59] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74131
[12:16:59] New patchset: Mark Bergsma; "Revert "Split off wikidata into a separate LVS service 'text-varnish'"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74137
[12:17:10] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74137
[12:24:50] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 12:24:44 UTC 2013
[12:25:40] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[12:27:50] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 12:27:43 UTC 2013
[12:28:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[12:29:00] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 12:28:54 UTC 2013
[12:29:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[12:29:33] New patchset: Mark Bergsma; "Split off wikidata into a separate LVS service 'text-varnish'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74140
[12:30:14] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74140
[12:31:11] New patchset: Mark Bergsma; "Revert "Split off wikidata into a separate LVS service 'text-varnish'"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74142
[12:31:30] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74142
[12:34:20] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 12:34:14 UTC 2013
[12:34:50] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[12:36:37] New patchset: Mark Bergsma; "Remove the LVS service 'dns_auth'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74143
[12:37:23] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74143
[12:49:43] New patchset: Mark Bergsma; "Sort the monitor parameters for consistent ordering" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74144
[12:50:29] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74144
[12:50:29] paravoid: hi
[12:50:35] paravoid: would you review a gerrit patchset please ?
[12:54:24] average: for efficiency, paste the url + a quick summary :-]
[12:55:00] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 12:54:59 UTC 2013
[12:55:40] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[12:58:20] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 12:58:13 UTC 2013
[12:58:30] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[12:58:50] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 12:58:43 UTC 2013
[12:59:20] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[13:00:14] New patchset: coren; "Tool Labs: Exec environ needs uwsgi for vassals" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74145
[13:00:41] New review: coren; "Ze change, she is trivial-e!" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/74145
[13:01:03] Change merged: coren; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74145
[13:02:40] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 13:02:38 UTC 2013
[13:02:50] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[13:07:11] New patchset: coren; "Detabify some files (in prevision of real changes)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74147
[13:07:31] paravoid: So we did, in fact, all agree to four spaces no hard tabs right?
[13:07:53] hashar: also ^^
[13:07:57] i think so
[13:08:09] Care to +2, then, to give imprimatur? :-)
[13:08:10] I guess that is the consensus
[13:08:52] manifests/site.pp +1803, -1803
[13:08:52] * hashar has disconnected (too many lines)
[13:09:31] That's just a :retab before I do changes; don't want to mix whitespace w/ substantive changes. :-)
[13:10:54] problem is that sometimes :retab will introduce unwanted changes
[13:11:09] I did check. :-)
[13:12:53] looking
[13:14:13] site.pp is horrible
[13:14:16] that needs to die
[13:15:59] In general, you mean? I couldn't agree more.
[13:16:12] But yeah, I'm not about to do a change /that/ substantive. :-)
[13:17:41] that is the first time I read site.pp entirely
[13:17:51] that is a huge pile of mess with a ton of configuration instead of roles hehe
[13:18:21] "Huge pile of mess" sounds about right.
[13:18:23] :-)
[13:22:46] New review: Hashar; "site.pp has a few additional spaces that could used to be cleaned up." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/74147
[13:22:59] Coren: there are sometimes too many spaces
[13:23:12] like class{ ended up with class____{
[13:23:28] also vim modelines would be nice
[13:25:02] RECOVERY - Puppet freshness on cp1042 is OK: puppet ran at Wed Jul 17 13:24:53 UTC 2013
[13:25:42] PROBLEM - Puppet freshness on cp1042 is CRITICAL: No successful Puppet run in the last 10 hours
[13:26:01] New patchset: Ottomata; "Puppetizing oozie client and server" [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/69804
[13:27:21] if anyone should get the sudden urge to clean up admin.pp, please don't, I'm in the middle of it
[13:27:31] speaking of huge piles of elephant dung
[13:27:40] hmm this channel is logged. ah well
[13:28:12] RECOVERY - Puppet freshness on cp1044 is OK: puppet ran at Wed Jul 17 13:28:04 UTC 2013
[13:28:32] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours
[13:28:45] hashar: I didn't actually rewrite things that had tabs instead of spaces internally, so that, visually, nothing changed.
[13:28:52] RECOVERY - Puppet freshness on cp1041 is OK: puppet ran at Wed Jul 17 13:28:50 UTC 2013
[13:29:04] hashar: In many cases, it's for cosmetic "line things up" purposes.
[13:29:22] But yeah, you're right about the modelines. Will add 'em.
[13:29:22] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours
[13:29:48] Coren: that prevents trivial mistakes :-]
[13:29:52] (at least from me hehe)
[13:30:46] Change merged: Ottomata; [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/69804
[13:32:53] RECOVERY - Puppet freshness on cp1043 is OK: puppet ran at Wed Jul 17 13:32:46 UTC 2013
[13:32:53] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours
[13:33:10] erm, wut
[13:33:50] doesn't YouTube /really/ count video views embedded in
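
(For reference, the :retab-plus-modeline workflow Coren and hashar discuss above — convert hard tabs to four spaces, then make vim keep it that way — might look like this. A generic vim recipe, not the exact commands run for change 74147:

    :set tabstop=4 shiftwidth=4 expandtab   " four spaces, no hard tabs
    :retab                                  " rewrite existing tabs in the buffer

    # vim: set ts=4 sw=4 sts=4 et :

The last line is the modeline form; placed within the first or last five lines of a .pp file as a puppet comment, vim applies those settings automatically on open, provided modelines are enabled. As hashar notes, :retab rewrites every tab in the buffer, including tabs used for internal "line things up" alignment, which is why the result still needs a visual check before review.)
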