[01:32:34] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset -0.01775324345 secs
[01:34:56] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset -0.01705110073 secs
[02:01:36] !log LocalisationUpdate completed (1.22wmf5) at Mon Jun 3 02:01:35 UTC 2013
[02:01:45] Logged the message, Master
[02:02:19] !log LocalisationUpdate completed (1.22wmf4) at Mon Jun 3 02:02:19 UTC 2013
[02:02:29] Logged the message, Master
[02:07:36] !log LocalisationUpdate ResourceLoader cache refresh completed at Mon Jun 3 02:07:36 UTC 2013
[02:07:46] Logged the message, Master
[04:31:26] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:32:17] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.126 second response time
[04:40:27] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:41:17] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.128 second response time
[05:00:26] PROBLEM - NTP on ssl3003 is CRITICAL: NTP CRITICAL: No response from NTP server
[05:02:36] PROBLEM - NTP on ssl3002 is CRITICAL: NTP CRITICAL: No response from NTP server
[05:18:48] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[05:19:39] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[05:27:38] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:28:29] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.198 second response time
[06:21:43] PROBLEM - Host wtp1008 is DOWN: PING CRITICAL - Packet loss = 100%
[06:22:03] RECOVERY - Host wtp1008 is UP: PING OK - Packet loss = 0%, RTA = 0.22 ms
[07:53:26] PROBLEM - Puppet freshness on db1032 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:26] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:27] PROBLEM - Puppet freshness on mw1171 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:27] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:28] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:28] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:29] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[07:53:29] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[08:01:46] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset -0.004249453545 secs
[08:02:36] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset -0.008611083031 secs
[10:50:16] New patchset: Akosiaris; "Adding packages sqoop, hbase, hive, pig to apt" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66397
[10:55:28] Change abandoned: Akosiaris; "Should avoid a new branch. Maybe even update this? https://www.mediawiki.org/wiki/Git/git-review#Sub..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66397
[11:00:41] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[11:00:58] New patchset: Akosiaris; "Adding packages sqoop, hbase, hive, pig to apt" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66398
[11:02:20] Change merged: Akosiaris; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66398
[11:02:32] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.137 second response time
[11:54:16] mwscript extensions/TimedMediaHandler/maintenance/retryTranscodes.php --wiki commonswiki --error "av_interleaved_write_frame(): Invalid argument"
[11:54:21] New review: ArielGlenn; "You really want one ChangeId in your commit message, and that would be from the first changeset you ..." [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/64095
[11:54:21] that would be what j^ wants, right?
[11:54:49] paravoid: yes
[11:55:00] hi there :)
[11:55:06] I wasn't sure if you were around
[11:55:36] hi, wonders of irc ping
[12:09:39] paravoid: thanks, looks like it worked, new encodes coming in
[12:10:43] perfect
[13:04:03] New patchset: Mark Bergsma; "Increase backend_weight from 20 to 100, to improve chash distribution" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66541
[13:05:43] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66541
[13:29:35] New patchset: Mark Bergsma; "Revert "Update the ldap scripts to pep8 compliant"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66542
[13:30:09] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66542
[13:31:20] New patchset: Mark Bergsma; "Revert "Fix various noc files to be pep8 compliant"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66543
[13:31:34] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66543
[13:36:27] akosiaris: aptmethod error receiving 'http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/dists/precise-cdh4/InRelease':
[13:36:30] '404 Not Found'
[13:41:20] akosiaris: yeah i noticed that too... I think that the cloudera repo should provide one but it does not...
[13:41:25] paravoid: ^^
[13:41:34] talking to yourself? :P
[13:41:36] * akosiaris stupid me :P
[13:41:53] that's just InRelease I guess
[13:42:05] the signed inline release file
[13:42:07] akosiaris: I just merged your reprepro change on sockpuppet
[13:42:17] after I reverted the other stuff that wasn't merged there
[13:43:14] mark: thanks. I did not feel like running the puppet-merge with all the other stuff waiting to be merged too. I was trying to figure out whose it was and pester him
[13:43:23] yeah
[13:44:36] akosiaris: nice, thanks for the debianization stuff! i'm excited to try it out
[13:44:50] i was looking, afaict right now it just installs the kafka .jar(s), right?
[13:44:55] none of the bin/ scripts?
[13:45:22] ottomata: exactly
[13:45:29] that's what i am working on right now
[13:45:55] ok cool, paravoid wanted me to avoid installing that myriad of scripts in /usr/bin, so if you're working on it now
[13:46:04] could you use my single kafka.sh wrapper instead of all of those?
[13:46:13] and if you like, adapt it for any changes in 0.8?
[13:46:15] (I can work on that later too)
[13:46:37] https://gerrit.wikimedia.org/r/#/c/53170/10/debian/kafka.sh
[13:46:42] send it to me and I will see what i can do
[13:46:50] ok cool. I'll have a look into it
[13:46:53] ahh, i'll email you 3 changes we should bring over from the 0.7.2 attempt
[13:46:54] there are 2 more
[13:47:32] ottomata: thanks for reaching out to twitter btw
[13:47:59] and again, I don't mind not going through packaging/debianization if we can find other good ways
[13:51:34] aye, ja!
[13:51:42] welp, i mean, if akosiaris' stuff works, that is good enough for me
[13:52:00] he's already done the work on it thus far. we might have to revisit this issue when we get around to doing storm
[13:52:12] but at least in storm's case there's no scala/sbt to deal with
[13:52:39] thank god
[13:53:40] File "pool/main/b/bigtop-jsvc/bigtop-jsvc_1.0.10.orig.tar.gz" is already registered with different checksums!
[13:53:40] md5 expected: c09da51e99fce3e6d8415ed8888ddcc6, got: ed33f68d0478c9afbbf33073be388c09
[13:53:48] nice
[13:53:52] god damn ... what did those cloudera guys do?
[13:54:05] * akosiaris will figure it out
[13:54:06] inserted a CIA backdoor in their code?
[13:54:09] lol
[13:54:26] yeah, what a perfect example of why just downloading tarballs without verification is a horrible idea ;)
[13:55:09] ori had a similar idea for python modules, which would be to set up our own pip mirror
[13:55:19] but then, it is not that hard to package a python module.
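The reprepro error above is exactly the failure mode being joked about: the same `.orig.tar.gz` name was re-uploaded with different contents, and only the recorded checksum caught it. A minimal sketch of that guard in plain shell follows; the tarball is generated locally as a stand-in for the real Cloudera download, and the record-then-compare workflow is an illustration of the idea, not WMF's actual tooling.

```shell
# Sketch: record a digest when a tarball is first fetched, and refuse to
# re-import it later if the bytes changed under the same file name.
# (Illustrative only; the file and its contents are made up.)
set -e

tarball=example.orig.tar.gz
printf 'upstream release contents\n' > "$tarball"   # stand-in for the download

# First import: record the digest.
expected=$(md5sum "$tarball" | awk '{print $1}')

# Later download of the "same" file: recompute and compare before importing.
actual=$(md5sum "$tarball" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    result="checksum OK"
else
    result="checksum mismatch: expected $expected, got $actual"
fi
echo "$result"
```

reprepro performs this comparison itself against its database, which is why the mismatched Cloudera re-upload was rejected rather than silently accepted.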
[13:55:23] right
[13:55:38] the trouble of setting up and maintaining a pip mirror is probably more than actually packaging up those few modules that we need
[13:55:45] yup
[13:55:53] New patchset: Cmjohnson; "Adding new mac address ms-be1" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66545
[13:56:07] next thing: package up mediawiki extensions *grin*
[13:56:50] Change merged: Cmjohnson; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66545
[13:59:04] ottomata: so, zookeeper?
[13:59:07] paravoid: is it ok to assume that upstream tag versions have no correlation with debian tag versions?
[13:59:15] average: what do you mean?
[13:59:42] oh, paravoid, yeah thanks for that review
[13:59:43] paravoid: let's say upstream has 0.1, 0.2, 0.3. Is it ok that debian packages will be for example 0.4, 0.5, 0.6?
[13:59:44] upstream would have a "v1.0", debian would have "debian/1.0-1"
[13:59:46] haven't got to that one yet
[13:59:47] but
[13:59:53] yes, we can make zookeeper its own module
[13:59:54] i'm for that
[13:59:55] no
[14:00:04] (no?)
[14:00:10] that was to average :)
[14:00:11] sorry
[14:00:13] oh ha, ok
[14:00:27] yeah, i just have to check about where debian's zookeeper package keeps its config files
[14:00:41] cdh4 has a /etc//conf convention
[14:00:57] i betcha deb/ubu's has just /etc/zookeeper/
[14:01:05] but it should be doable
[14:01:06] paravoid: the problem is that whenever I do git-dch --release, the new version is like "2.0.13ubuntu1"
[14:01:25] paravoid: how can I avoid that? I am forced to do it manually with --new-version="2.0.14"
[14:01:33] which doesn't seem like a good idea
[14:03:00] that's fine
[14:03:18] the Debian package for upstream 0.1 should have a version of 0.1-1
[14:03:39] and subsequent packages for the same upstream due to packaging changes would be 0.1-2, -3, etc.
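The versioning scheme paravoid lays out above (Debian version = `<upstream>-<revision>`; packaging-only changes bump the revision, a new upstream release resets it to 1) can be sketched in a few lines of plain shell. This is only an illustration of the numbering rules, not any real tooling; the function names are made up.

```shell
# Sketch of Debian package versioning: <upstream>-<revision>.
# A packaging-only change keeps the upstream part and bumps the revision;
# a new upstream release gets revision 1 again.

next_packaging_rev() {
    # e.g. 0.1-1 -> 0.1-2 (same upstream, new packaging)
    echo "${1%-*}-$(( ${1##*-} + 1 ))"
}

new_upstream() {
    # e.g. 0.1-3 plus upstream 0.2 -> 0.2-1
    echo "$2-1"
}

v=0.1-1
v=$(next_packaging_rev "$v")   # 0.1-2
v=$(next_packaging_rev "$v")   # 0.1-3
v=$(new_upstream "$v" 0.2)     # 0.2-1
echo "$v"
```

So average's hypothetical (upstream 0.1/0.2/0.3 but Debian 0.4/0.5/0.6) is exactly what the scheme rules out: the upstream part of the Debian version always matches the upstream release it packages.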
[14:04:30] Change merged: Ottomata; [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/66337
[14:05:59] Does WMF have any guidelines on deploying java internally? I know we have a few things deployed with Java and I was wondering if we have some document with information about stuff like how it should run, where logs should go, what to do if it is crashing, etc.
[14:06:19] aahhhhhh hahah
[14:06:26] manybubbles, welcome to the land of fun!
[14:06:45] so, no?
[14:06:51] "You want to deploy java?" => "lolno"
[14:07:27] i mean, no, but i have been struggling with this issue for a few months now myself? there are no WMF guidelines so much, but afaict we try to stick to debian guidelines. /usr/share/java for .jars, etc.
[14:07:50] but now i'm curious, whatcha working on? :)
[14:08:34] me? I'll be working on search
[14:08:42] ottomata: Manybubbles is the new Hero of Search!
[14:09:11] but I also saw an email about a Suggester which looks like Java too.
[14:09:11] excellent!
[14:09:13] ah right, so solr / lucene stuff?
[14:09:19] first assignment: rewrite to non-java ;-)
[14:09:22] haha
[14:09:31] :)
[14:09:39] and not C# either hehe
[14:09:46] yeah - solr/lucene stuff
[14:10:08] and also not ruby
[14:10:09] there is a lucene reimplemented in C...
[14:10:10] aye. i think the answer you are going to get is: build a .deb that does stuff the debian way
[14:10:18] and use that to deploy :)
[14:10:38] ottomata: ;-)
[14:11:05] ottomata: sure. but there is more to it than that, fortunately and unfortunately.
[14:11:08] mark: Fun quiz. If it were rewritten in perl, would your (quite justified) hatred of JVM/.NET win over your (more disputable) loathing of perl? :-)
[14:11:08] there is already a debian package for solr in apache bigtop
[14:11:23] paravoid: so basically I have to use --new-version, right?
[14:11:26] i don't hate perl
[14:11:34] we just didn't standardize on it
[14:11:42] so if it would be rewritten in perl, something would be wrong ;)
[14:12:22] there's a Lucy module on CPAN
[14:12:30] it works (have used it)
[14:12:50] Lucy is a light-weight Lucene implementation in C
[14:13:05] average: that is what I was thinking of!
[14:13:15] manybubbles: :)
[14:13:17] I think, though, we'll probably want Solr
[14:13:29] with all the bells and whistles.
[14:14:37] and the thing with the jvm stuff is that there are a lot of extra ways you should be monitoring it and lots of fun stuff you can do when it goes sideways to generate forensic data
[14:15:41] "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." -- A. Saint-Exupery
[14:15:46] I'm gonna drop a quick quote here..
[14:15:47] :)
[14:16:04] paravoid: may I ask about the --new-version? is it possible to avoid it?
[14:16:53] paravoid: I mean, I understand that if upstream is 0.2, then the debian tag must be 0.2-\d, but can it do that automatically or does it have to be done manually with --new-version
[14:17:02] ?
[14:18:32] I don't use git-dch
[14:18:34] just dch
[14:18:52] ok
[14:18:56] thanks :)
[14:19:03] git-dch is just a tool to compile a changelog from git commits
[14:19:10] it doesn't always make sense to use it
[14:19:26] and you should certainly pick the version yourself, it just generates an example that works sometimes
[14:19:35] (but not in our case, as it appends 'ubuntu')
[14:19:55] I briefly looked at changing that ubuntu thing for package builder hosts once, but it wasn't trivial I think
[14:24:07] PROBLEM - Puppet freshness on mw1149 is CRITICAL: No successful Puppet run in the last 10 hours
[14:24:23] somebody at cloudera has really not understood debian packaging...
[14:24:36] so... i got a file... bigtop-jsvc_1.0.10.orig.tar.gz
[14:25:05] on Apr 22 it contained a single dir bigtop-jsvc-1.0.10-cdh4.2.1 and on May 28 it contained a single dir bigtop-jsvc-1.0.10-cdh4.3.0
[14:25:07] PROBLEM - Puppet freshness on mw1028 is CRITICAL: No successful Puppet run in the last 10 hours
[14:25:21] dafuq ?
[14:25:26] lol
[14:29:07] PROBLEM - Puppet freshness on palladium is CRITICAL: No successful Puppet run in the last 10 hours
[14:33:37] RECOVERY - Host ms-be1 is UP: PING OK - Packet loss = 0%, RTA = 27.00 ms
[14:33:53] hmm, akosiaris, may 28 cdh 4.3 was released, looks like they just updated their directories to match the version…not sure why they wouldn't have the tarball name change too
[14:35:47] PROBLEM - swift-container-updater on ms-be1 is CRITICAL: Timeout while attempting connection
[14:35:47] PROBLEM - swift-container-server on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:07] PROBLEM - DPKG on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:07] PROBLEM - swift-object-auditor on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:18] PROBLEM - Disk space on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:19] PROBLEM - swift-object-replicator on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:19] PROBLEM - swift-account-replicator on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:27] PROBLEM - RAID on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:37] PROBLEM - swift-object-updater on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:37] PROBLEM - swift-account-auditor on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:37] PROBLEM - swift-container-replicator on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:37] PROBLEM - SSH on ms-be1 is CRITICAL: Connection timed out
[14:36:37] PROBLEM - swift-account-server on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:38] PROBLEM - swift-object-server on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:38] PROBLEM - swift-account-reaper on ms-be1 is CRITICAL: Timeout while attempting connection
[14:36:47] PROBLEM - swift-container-auditor on ms-be1 is CRITICAL: Timeout while attempting connection
[14:37:07] PROBLEM - Puppet freshness on amssq59 is CRITICAL: No successful Puppet run in the last 10 hours
[14:37:07] PROBLEM - Puppet freshness on aluminium is CRITICAL: No successful Puppet run in the last 10 hours
[14:37:07] PROBLEM - Puppet freshness on amssq40 is CRITICAL: No successful Puppet run in the last 10 hours
[14:37:07] PROBLEM - Puppet freshness on analytics1001 is CRITICAL: No successful Puppet run in the last 10 hours
[14:37:07] PROBLEM - Puppet freshness on cp1023 is CRITICAL: No successful Puppet run in the last 10 hours
[14:39:10] PROBLEM - Puppet freshness on amssq32 is CRITICAL: No successful Puppet run in the last 10 hours
[14:39:10] PROBLEM - Puppet freshness on amssq43 is CRITICAL: No successful Puppet run in the last 10 hours
[14:39:10] PROBLEM - Puppet freshness on amssq50 is CRITICAL: No successful Puppet run in the last 10 hours
[14:39:10] PROBLEM - Puppet freshness on analytics1017 is CRITICAL: No successful Puppet run in the last 10 hours
[14:39:10] PROBLEM - Puppet freshness on analytics1018 is CRITICAL: No successful Puppet run in the last 10 hours
[14:40:41] ottomata: the .orig should only have the upstream tarball. Not their own changes inside.
[14:41:28] ottomata: hive, sqoop, pig and the rest of the debs are in apt
[14:42:56] danke! at 4.2.1?
[14:43:47] nope. 4.3
[14:44:22] do you want 4.2.1 or care to upgrade ?
[14:44:56] upgrade is cool, but i think we have to do it all at once
[14:44:58] so we'd need all of them
[14:45:31] i was getting some dependency issue when trying hive 4.3.0 without hbase 4.3.0 etc.
[14:45:50] that should be fixed now. both are at 4.3.0
[14:46:04] oh hmm, true, hm, ok,
[14:46:06] i guess that's fine
[14:46:08] hm
[14:46:20] i'd rather run all cdh packages at the same version
[14:46:22] but it is probably fine
[14:46:27] including hadoop etc.
[14:46:38] those are at 4.3.0 too at the repo
[14:46:41] oh!
[14:46:41] ok
[14:46:43] cool
[14:46:44] danke
[14:46:47] then that should be fine i think
[14:47:06] i will upgrade that stuff before I try to apply the new puppetization to a hadoop node then
[14:47:06] thank you
[14:47:11] bitte schon
[14:48:21] PROBLEM - NTP on ms-be1 is CRITICAL: NTP CRITICAL: No response from NTP server
[14:51:07] akosiaris: are the 4.2.1 hadoop .debs still in apt?
[14:51:22] ah, no they aren't
[14:51:22] hm
[14:51:26] is there a way to pass dch parameters from git-dch ?
[14:51:31] like for example --force-distribution
[14:55:51] PROBLEM - Host ms-be1 is DOWN: PING CRITICAL - Packet loss = 100%
[14:57:24] no idea :)
[15:00:30] RECOVERY - SSH on ms-be1 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.1 (protocol 2.0)
[15:00:37] * mark upgrades his Eclipse install
[15:00:45] RECOVERY - Host ms-be1 is UP: PING OK - Packet loss = 0%, RTA = 26.61 ms
[15:00:49] the whaaaat?
[15:01:19] I liked eclipse a lot when I was programming in Java
[15:01:22] is that yet another python script on our cluster?
[15:04:03] are you thinking I'm getting old again? ;)
[15:04:26] mark: Java is a kind of crummy language so it needs a good IDE.
[15:04:36] it does
[15:18:31] New patchset: Andrew Bogott; "Add a custom job that runs pep8 on each .py file" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66548
[15:18:47] hashar: there's the custom script
[15:19:01] now to figure out why I can't submit a review to jenkins-job-builder
[15:33:29] Change abandoned: Andrew Bogott; "Moved this file to the integration/jenkins repo" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66548
[15:34:32] paravoid: what does this say on your machine? git-buildpackage --version
[15:34:47] which machine? :)
[15:35:02] I use Debian stable on my desktop, Debian unstable on my build box and I have a precise chroot for wikimedia work
[15:35:46] paravoid: ok, it seems that git-buildpackage has some new stuff that I'd like to use
[15:36:17] paravoid: for example the -D switch came in april 2013
[15:37:07] paravoid: can we use the new one ?
[15:37:42] paravoid: because you probably have a different version than I have (I'm using ubuntu raring)
[15:37:44] New patchset: Ottomata; "Puppetizing analytics1020 with roles/hadoop.pp!" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553
[15:38:29] New review: Ottomata; "Faidon, I'm adding you as a reviewer mainly to get feedback on the new roles/hadoop.pp file." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553
[15:39:35] New patchset: Ottomata; "Puppetizing analytics1020 with roles/hadoop.pp!" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553
[15:45:28] @notify binasher
[15:45:29] I'll let you know when I see binasher around here
[15:47:51] what's -D?
[15:48:16] average: ^
[15:49:05] paravoid: --distribution
[15:49:17] paravoid: it's like in the changelog when you see "raring" or "precise"
[15:49:22] paravoid: that one you can set with -D
[15:49:36] is that git-buildpackage?
[15:49:36] paravoid: I need that because our packages need to be with "wikimedia" in that field
[15:49:42] paravoid: yes it is, but a newer version
[15:49:47] you know you can just vi debian/changelog, right? :)
[15:49:54] paravoid: yes but..
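The distribution field being discussed is simply the third word of the first line of `debian/changelog`. The sketch below writes a hypothetical stanza by hand (package name, version and maintainer are made up) to show where "wikimedia" would replace the "raring"/"precise"/"…ubuntu1" value that git-dch kept generating; with devscripts installed, `dch -v 2.0.14-1 -D wikimedia --force-distribution "..."` would produce an equivalent entry.

```shell
# Hand-written changelog stanza showing the distribution field that
# -D / --distribution controls. Everything here is illustrative.
mkdir -p debian
cat > debian/changelog <<'EOF'
examplepkg (2.0.14-1) wikimedia; urgency=low

  * New upstream release.

 -- Example Maintainer <ops@example.org>  Mon, 03 Jun 2013 15:49:00 +0000
EOF

# The distribution is the third field of the first line, minus the ';':
dist=$(awk 'NR==1 { gsub(/;/, ""); print $3 }' debian/changelog)
echo "$dist"
```

This is also why "just vi debian/changelog" is a complete answer: the tooling only ever edits this one text file.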
[15:50:09] or just use dch directly
[15:51:00] remember that if there were no human involvement required, we didn't need you to build the packages in the first place ;-)
[15:52:57] alright
[15:59:00] mark , paravoid should you change your mind http://garage-coding.com/releases/git-buildpackage/
[15:59:10] in the meantime I'll edit the distribution manually
[15:59:36] feel free to use whatever git-buildpackage version you want
[15:59:56] as long as it can be kept compatible with previous versions, e.g. no newer incompatible gbp.conf options
[16:00:46] oh cool :)
[16:00:47] thanks
[16:06:31] mark, did you object to the patch you reverted here https://gerrit.wikimedia.org/r/#/c/66542/ or were you just backing it out because it wasn't on sockpuppet? (If the latter, I will re-submit and merge now.)
[16:06:41] * Jeff_Green wonders about tridge:/data @ 100%
[16:06:48] just the latter
[16:06:51] you can remerge any time
[16:06:58] as long as you also merge on sockpuppet and don't block others :)
[16:08:16] RECOVERY - Puppet freshness on db46 is OK: puppet ran at Mon Jun 3 16:08:11 UTC 2013
[16:08:16] RECOVERY - Puppet freshness on lvs6 is OK: puppet ran at Mon Jun 3 16:08:11 UTC 2013
[16:08:16] RECOVERY - Puppet freshness on tarin is OK: puppet ran at Mon Jun 3 16:08:12 UTC 2013
[16:08:16] RECOVERY - Puppet freshness on mc16 is OK: puppet ran at Mon Jun 3 16:08:13 UTC 2013
[16:08:16] RECOVERY - Puppet freshness on labsdb1001 is OK: puppet ran at Mon Jun 3 16:08:13 UTC 2013
[16:08:17] RECOVERY - Puppet freshness on db55 is OK: puppet ran at Mon Jun 3 16:08:14 UTC 2013
[16:08:17] RECOVERY - Puppet freshness on db1010 is OK: puppet ran at Mon Jun 3 16:08:14 UTC 2013
[16:08:18] RECOVERY - Puppet freshness on brewster is OK: puppet ran at Mon Jun 3 16:08:15 UTC 2013
[16:08:18] RECOVERY - Puppet freshness on wtp1021 is OK: puppet ran at Mon Jun 3 16:08:15 UTC 2013
[16:08:26] RECOVERY - Puppet freshness on mw109 is OK: puppet ran at Mon Jun 3 16:08:16 UTC 2013
[16:08:26] RECOVERY - Puppet freshness on snapshot4 is OK: puppet ran at Mon Jun 3 16:08:16 UTC 2013
[16:08:26] RECOVERY - Puppet freshness on mw70 is OK: puppet ran at Mon Jun 3 16:08:17 UTC 2013
[16:08:26] RECOVERY - Puppet freshness on mw1185 is OK: puppet ran at Mon Jun 3 16:08:17 UTC 2013
[16:08:26] RECOVERY - Puppet freshness on mw40 is OK: puppet ran at Mon Jun 3 16:08:18 UTC 2013
[16:08:27] RECOVERY - Puppet freshness on mw1218 is OK: puppet ran at Mon Jun 3 16:08:18 UTC 2013
[16:08:27] RECOVERY - Puppet freshness on lvs2 is OK: puppet ran at Mon Jun 3 16:08:19 UTC 2013
[16:08:28] RECOVERY - Puppet freshness on es1009 is OK: puppet ran at Mon Jun 3 16:08:19 UTC 2013
[16:08:28] RECOVERY - Puppet freshness on mw1040 is OK: puppet ran at Mon Jun 3 16:08:20 UTC 2013
[16:08:29] RECOVERY - Puppet freshness on mw107 is OK: puppet ran at Mon Jun 3 16:08:20 UTC 2013
[16:08:29] RECOVERY - Puppet freshness on mw53 is OK: puppet ran at Mon Jun 3 16:08:21 UTC 2013
[16:08:30] RECOVERY - Puppet freshness on mw1062 is OK: puppet ran at Mon Jun 3 16:08:22 UTC 2013
[16:08:30] RECOVERY - Puppet freshness on srv296 is OK: puppet ran at Mon Jun 3 16:08:22 UTC 2013
[16:08:31] RECOVERY - Puppet freshness on mw1022 is OK: puppet ran at Mon Jun 3 16:08:23 UTC 2013
[16:08:31] RECOVERY - Puppet freshness on lvs1001 is OK: puppet ran at Mon Jun 3 16:08:23 UTC 2013
[16:08:32] RECOVERY - Puppet freshness on cp1023 is OK: puppet ran at Mon Jun 3 16:08:23 UTC 2013
[16:08:32] RECOVERY - Puppet freshness on mw1132 is OK: puppet ran at Mon Jun 3 16:08:24 UTC 2013
[16:08:33] RECOVERY - Puppet freshness on mw1001 is OK: puppet ran at Mon Jun 3 16:08:24 UTC 2013
[16:08:33] RECOVERY - Puppet freshness on sq64 is OK: puppet ran at Mon Jun 3 16:08:25 UTC 2013
[16:08:34] RECOVERY - Puppet freshness on amssq59 is OK: puppet ran at Mon Jun 3 16:08:25 UTC 2013
[16:08:34] RECOVERY - Puppet freshness on amssq40 is OK: puppet ran at Mon Jun 3 16:08:25 UTC 2013
[16:08:36] RECOVERY - Puppet freshness on mc2 is OK: puppet ran at Mon Jun 3 16:08:27 UTC 2013
[16:08:36] RECOVERY - Puppet freshness on aluminium is OK: puppet ran at Mon Jun 3 16:08:27 UTC 2013
[16:08:36] RECOVERY - Puppet freshness on sq81 is OK: puppet ran at Mon Jun 3 16:08:28 UTC 2013
[16:08:36] RECOVERY - Puppet freshness on mc1011 is OK: puppet ran at Mon Jun 3 16:08:28 UTC 2013
[16:08:37] RECOVERY - Puppet freshness on ms-fe1003 is OK: puppet ran at Mon Jun 3 16:08:29 UTC 2013
[16:08:37] RECOVERY - Puppet freshness on analytics1001 is OK: puppet ran at Mon Jun 3 16:08:30 UTC 2013
[16:08:38] RECOVERY - Puppet freshness on srv285 is OK: puppet ran at Mon Jun 3 16:08:30 UTC 2013
[16:08:38] RECOVERY - Puppet freshness on ms-be10 is OK: puppet ran at Mon Jun 3 16:08:31 UTC 2013
[16:08:39] RECOVERY - Puppet freshness on mw87 is OK: puppet ran at Mon Jun 3 16:08:32 UTC 2013
[16:08:39] RECOVERY - Puppet freshness on cp3006 is OK: puppet ran at Mon Jun 3 16:08:32 UTC 2013
[16:08:40] RECOVERY - Puppet freshness on mw1178 is OK: puppet ran at Mon Jun 3 16:08:33 UTC 2013
[16:08:40] RECOVERY - Puppet freshness on virt2 is OK: puppet ran at Mon Jun 3 16:08:33 UTC 2013
[16:08:41] RECOVERY - Puppet freshness on mw1219 is OK: puppet ran at Mon Jun 3 16:08:33 UTC 2013
[16:08:41] RECOVERY - Puppet freshness on mw1093 is OK: puppet ran at Mon Jun 3 16:08:34 UTC 2013
[16:08:42] RECOVERY - Puppet freshness on mw1072 is OK: puppet ran at Mon Jun 3 16:08:34 UTC 2013
[16:08:42] RECOVERY - Puppet freshness on mw1134 is OK: puppet ran at Mon Jun 3 16:08:35 UTC 2013
[16:08:46] RECOVERY - Puppet freshness on mw1031 is OK: puppet ran at Mon Jun 3 16:08:35 UTC 2013
[16:08:46] RECOVERY - Puppet freshness on professor is OK: puppet ran at Mon Jun 3 16:08:36 UTC 2013
[16:08:46] RECOVERY - Puppet freshness on mw111 is OK: puppet ran at Mon Jun 3 16:08:37 UTC 2013
[16:08:46] RECOVERY - Puppet freshness on db1011 is OK: puppet ran at Mon Jun 3 16:08:38 UTC 2013
[16:08:46] RECOVERY - Puppet freshness on manutius is OK: puppet ran at Mon Jun 3 16:08:39 UTC 2013
[16:08:47] RECOVERY - Puppet freshness on db29 is OK: puppet ran at Mon Jun 3 16:08:41 UTC 2013
[16:08:47] RECOVERY - Puppet freshness on search27 is OK: puppet ran at Mon Jun 3 16:08:42 UTC 2013
[16:08:48] RECOVERY - Puppet freshness on sq82 is OK: puppet ran at Mon Jun 3 16:08:43 UTC 2013
[16:08:48] RECOVERY - Puppet freshness on magnesium is OK: puppet ran at Mon Jun 3 16:08:43 UTC 2013
[16:08:49] RECOVERY - Puppet freshness on sq78 is OK: puppet ran at Mon Jun 3 16:08:43 UTC 2013
[16:08:49] RECOVERY - Puppet freshness on cp1015 is OK: puppet ran at Mon Jun 3 16:08:43 UTC 2013
[16:08:50] RECOVERY - Puppet freshness on search1011 is OK: puppet ran at Mon Jun 3 16:08:44 UTC 2013
[16:08:50] RECOVERY - Puppet freshness on ms-be1004 is OK: puppet ran at Mon Jun 3 16:08:44 UTC 2013
[16:08:51] RECOVERY - Puppet freshness on mw1139 is OK: puppet ran at Mon Jun 3 16:08:44 UTC 2013
[16:08:56] RECOVERY - Puppet freshness on db59 is OK: puppet ran at Mon Jun 3 16:08:46 UTC 2013
[16:08:56] RECOVERY - Puppet freshness on db73 is OK: puppet ran at Mon Jun 3 16:08:47 UTC 2013
[16:08:56] RECOVERY - Puppet freshness on search1006 is OK: puppet ran at Mon Jun 3 16:08:48 UTC 2013
[16:08:56] RECOVERY - Puppet freshness on amssq32 is OK: puppet ran at Mon Jun 3 16:08:48 UTC 2013
[16:08:56] RECOVERY - Puppet freshness on amssq43 is OK: puppet ran at Mon Jun 3 16:08:49 UTC 2013
[16:08:57] RECOVERY - Puppet freshness on mw1080 is OK: puppet ran at Mon Jun 3 16:08:49 UTC 2013
[16:08:57] RECOVERY - Puppet freshness on mw114 is OK: puppet ran at Mon Jun 3 16:08:50 UTC 2013
[16:08:58] RECOVERY - Puppet freshness on srv270 is OK: puppet ran at Mon Jun 3 16:08:50 UTC 2013
[16:08:58] RECOVERY - Puppet freshness on srv240 is OK: puppet ran at Mon Jun 3 16:08:51 UTC 2013
[16:08:59] RECOVERY - Puppet freshness on srv282 is OK: puppet ran at Mon Jun 3 16:08:51 UTC 2013
[16:08:59] RECOVERY - Puppet freshness on srv280 is OK: puppet ran at Mon Jun 3 16:08:52 UTC 2013
[16:09:00] RECOVERY - Puppet freshness on mw115 is OK: puppet ran at Mon Jun 3 16:08:52 UTC 2013
[16:09:00] RECOVERY - Puppet freshness on srv259 is OK: puppet ran at Mon Jun 3 16:08:53 UTC 2013
[16:09:01] RECOVERY - Puppet freshness on tmh1 is OK: puppet ran at Mon Jun 3 16:08:54 UTC 2013
[16:09:01] RECOVERY - Puppet freshness on mc8 is OK: puppet ran at Mon Jun 3 16:08:54 UTC 2013
[16:09:02] RECOVERY - Puppet freshness on sq55 is OK: puppet ran at Mon Jun 3 16:08:55 UTC 2013
[16:09:06] RECOVERY - Puppet freshness on mw1220 is OK: puppet ran at Mon Jun 3 16:08:56 UTC 2013
[16:09:06] RECOVERY - Puppet freshness on db60 is OK: puppet ran at Mon Jun 3 16:08:57 UTC 2013
[16:09:06] RECOVERY - Puppet freshness on mc1013 is OK: puppet ran at Mon Jun 3 16:08:57 UTC 2013
[16:09:06] RECOVERY - Puppet freshness on mw1203 is OK: puppet ran at Mon Jun 3 16:08:57 UTC 2013
[16:09:06] RECOVERY - Puppet freshness on mw1145 is OK: puppet ran at Mon Jun 3 16:08:58 UTC 2013
[16:09:07] RECOVERY - Puppet freshness on mw1090 is OK: puppet ran at Mon Jun 3 16:08:59 UTC 2013
[16:09:07] RECOVERY - Puppet freshness on terbium is OK: puppet ran at Mon Jun 3 16:08:59 UTC 2013
[16:09:08] RECOVERY - Puppet freshness on linne is OK: puppet ran at Mon Jun 3 16:09:00 UTC 2013
[16:09:08] RECOVERY - Puppet freshness on sq59 is OK: puppet ran at Mon Jun 3 16:09:00 UTC 2013
[16:09:09] RECOVERY - Puppet freshness on ssl1002 is OK: puppet ran at Mon Jun 3 16:09:00 UTC 2013
[16:09:09] RECOVERY - Puppet freshness on wtp1010 is OK: puppet ran at Mon Jun 3 16:09:04 UTC 2013
[16:09:10] RECOVERY - Puppet freshness on db44 is OK: puppet ran at Mon Jun 3 16:09:04 UTC 2013
[16:09:10] RECOVERY - Puppet freshness on mw1112 is OK: puppet ran at Mon Jun 3 16:09:04 UTC 2013
[16:09:11] RECOVERY - Puppet freshness on es1008 is OK: puppet ran at Mon Jun 3 16:09:04 UTC 2013
[16:09:11] RECOVERY - Puppet freshness on db1057 is OK: puppet ran at Mon Jun 3 16:09:04 UTC 2013
[16:09:12] RECOVERY - Puppet freshness on ms-be1003 is OK: puppet ran at Mon Jun 3 16:09:05 UTC 2013
[16:09:12] RECOVERY - Puppet freshness on wtp1008 is OK: puppet ran at Mon Jun 3 16:09:05 UTC 2013
[16:09:16] RECOVERY - Puppet freshness on mc1006 is OK: puppet ran at Mon Jun 3 16:09:07 UTC 2013
[16:09:16] RECOVERY - Puppet freshness on ms-be1002 is OK: puppet ran at Mon Jun 3 16:09:07 UTC 2013
[16:09:16] RECOVERY - Puppet freshness on ssl3001 is OK: puppet ran at Mon Jun 3 16:09:09 UTC 2013
[16:09:16] RECOVERY - Puppet freshness on stat1001 is OK: puppet ran at Mon Jun 3 16:09:10 UTC 2013
[16:09:16] RECOVERY - Puppet freshness on cp1039 is OK: puppet ran at Mon Jun 3 16:09:11 UTC 2013
[16:09:17] RECOVERY - Puppet freshness on mw1200 is OK: puppet ran at Mon Jun 3 16:09:11 UTC 2013
[16:09:17] RECOVERY - Puppet freshness on mw65 is OK: puppet ran at Mon Jun 3 16:09:12 UTC 2013
[16:09:18] RECOVERY - Puppet freshness on db36 is OK: puppet ran at Mon Jun 3 16:09:12 UTC 2013
[16:09:18] RECOVERY - Puppet freshness on srv268 is OK: puppet ran at Mon Jun 3 16:09:13 UTC 2013
[16:09:19] RECOVERY - Puppet freshness on sq37 is OK: puppet ran at Mon Jun 3 16:09:15 UTC 2013
[16:09:19] RECOVERY - Puppet freshness on arsenic is OK: puppet ran at Mon Jun 3 16:09:15 UTC 2013
[16:09:26] RECOVERY - Puppet freshness on mw1174 is OK: puppet ran at Mon Jun 3 16:09:16 UTC 2013
[16:09:26] RECOVERY - Puppet freshness on search35 is OK: puppet ran at Mon Jun 3 16:09:16 UTC 2013
[16:09:26] RECOVERY - Puppet freshness on mw92 is OK: puppet ran at Mon Jun 3 16:09:17 UTC 2013
[16:09:26] RECOVERY - Puppet freshness on cp1021 is OK: puppet ran at Mon Jun 3 16:09:17 UTC 2013
[16:09:26] RECOVERY - Puppet freshness on mw1082 is OK: puppet ran at Mon Jun 3 16:09:18 UTC 2013
[16:09:27] RECOVERY - Puppet freshness on chromium is OK: puppet ran at Mon Jun 3 16:09:18 UTC 2013
[16:09:27] RECOVERY - Puppet freshness on srv271 is OK: puppet ran at Mon Jun 3 16:09:19 UTC 2013
[16:09:28] RECOVERY - Puppet freshness on srv298 is OK: puppet ran at Mon Jun 3 16:09:21 UTC 2013
[16:09:28] RECOVERY - Puppet freshness on analytics1018 is OK: puppet ran at Mon Jun 3 16:09:21 UTC 2013 [16:09:29] RECOVERY - Puppet freshness on mw1158 is OK: puppet ran at Mon Jun 3 16:09:22 UTC 2013 [16:09:29] RECOVERY - Puppet freshness on db56 is OK: puppet ran at Mon Jun 3 16:09:22 UTC 2013 [16:09:30] RECOVERY - Puppet freshness on lvs4 is OK: puppet ran at Mon Jun 3 16:09:23 UTC 2013 [16:09:30] RECOVERY - Puppet freshness on analytics1017 is OK: puppet ran at Mon Jun 3 16:09:24 UTC 2013 [16:09:31] RECOVERY - Puppet freshness on mw1135 is OK: puppet ran at Mon Jun 3 16:09:24 UTC 2013 [16:09:31] RECOVERY - Puppet freshness on sq77 is OK: puppet ran at Mon Jun 3 16:09:25 UTC 2013 [16:09:32] RECOVERY - Puppet freshness on mw1121 is OK: puppet ran at Mon Jun 3 16:09:25 UTC 2013 [16:09:37] RECOVERY - Puppet freshness on mw1059 is OK: puppet ran at Mon Jun 3 16:09:26 UTC 2013 [16:09:37] RECOVERY - Puppet freshness on ms1004 is OK: puppet ran at Mon Jun 3 16:09:26 UTC 2013 [16:09:37] RECOVERY - Puppet freshness on wtp1020 is OK: puppet ran at Mon Jun 3 16:09:27 UTC 2013 [16:09:37] RECOVERY - Puppet freshness on es2 is OK: puppet ran at Mon Jun 3 16:09:27 UTC 2013 [16:09:37] RECOVERY - Puppet freshness on cp1001 is OK: puppet ran at Mon Jun 3 16:09:28 UTC 2013 [16:09:38] RECOVERY - Puppet freshness on mw1086 is OK: puppet ran at Mon Jun 3 16:09:28 UTC 2013 [16:09:38] RECOVERY - Puppet freshness on db1056 is OK: puppet ran at Mon Jun 3 16:09:29 UTC 2013 [16:09:39] RECOVERY - Puppet freshness on mc13 is OK: puppet ran at Mon Jun 3 16:09:29 UTC 2013 [16:09:39] RECOVERY - Puppet freshness on virt1005 is OK: puppet ran at Mon Jun 3 16:09:31 UTC 2013 [16:09:40] RECOVERY - Puppet freshness on mc1002 is OK: puppet ran at Mon Jun 3 16:09:33 UTC 2013 [16:09:40] RECOVERY - Puppet freshness on harmon is OK: puppet ran at Mon Jun 3 16:09:33 UTC 2013 [16:09:41] RECOVERY - Puppet freshness on mw91 is OK: puppet ran at Mon Jun 3 16:09:34 UTC 2013 [16:09:41] RECOVERY - Puppet 
freshness on amssq50 is OK: puppet ran at Mon Jun 3 16:09:34 UTC 2013 [16:09:42] RECOVERY - Puppet freshness on wtp1016 is OK: puppet ran at Mon Jun 3 16:09:34 UTC 2013 [16:09:42] RECOVERY - Puppet freshness on mw1 is OK: puppet ran at Mon Jun 3 16:09:35 UTC 2013 [16:09:46] RECOVERY - Puppet freshness on mw1026 is OK: puppet ran at Mon Jun 3 16:09:36 UTC 2013 [16:09:46] RECOVERY - Puppet freshness on sq49 is OK: puppet ran at Mon Jun 3 16:09:38 UTC 2013 [16:09:46] RECOVERY - Puppet freshness on lvs3 is OK: puppet ran at Mon Jun 3 16:09:38 UTC 2013 [16:09:47] RECOVERY - Puppet freshness on db1045 is OK: puppet ran at Mon Jun 3 16:09:40 UTC 2013 [16:10:26] RECOVERY - Puppet freshness on mc1016 is OK: puppet ran at Mon Jun 3 16:10:15 UTC 2013 [16:10:26] RECOVERY - Puppet freshness on mw1215 is OK: puppet ran at Mon Jun 3 16:10:24 UTC 2013 [16:10:26] RECOVERY - Puppet freshness on mw52 is OK: puppet ran at Mon Jun 3 16:10:25 UTC 2013 [16:10:29] mark: OK -- sorry for leaving that mess. Was derailed by post-bikeshed-trauma [16:10:36] RECOVERY - Puppet freshness on mw15 is OK: puppet ran at Mon Jun 3 16:10:26 UTC 2013 [16:10:56] RECOVERY - Puppet freshness on search1004 is OK: puppet ran at Mon Jun 3 16:10:47 UTC 2013 [16:10:56] RECOVERY - Puppet freshness on mw113 is OK: puppet ran at Mon Jun 3 16:10:48 UTC 2013 [16:11:10] RECOVERY - Puppet freshness on mw1149 is OK: puppet ran at Mon Jun 3 16:10:55 UTC 2013 [16:12:37] RECOVERY - Puppet freshness on mw1028 is OK: puppet ran at Mon Jun 3 16:12:34 UTC 2013 [16:15:57] New patchset: Andrew Bogott; "Revert "Revert "Update the ldap scripts to pep8 compliant""" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66554 [16:15:58] New patchset: Andrew Bogott; "Revert "Revert "Fix various noc files to be pep8 compliant""" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66555 [16:18:40] New patchset: Yurik; "New Saudi Telecom IP ranges per carrier request" [operations/puppet] (production) - 
https://gerrit.wikimedia.org/r/66556 [16:19:56] RECOVERY - Puppet freshness on palladium is OK: puppet ran at Mon Jun 3 16:19:55 UTC 2013 [16:20:56] RECOVERY - Puppet freshness on mw1138 is OK: puppet ran at Mon Jun 3 16:20:47 UTC 2013 [16:23:04] Change merged: Andrew Bogott; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66554 [16:23:17] Change merged: Andrew Bogott; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66555 [16:33:37] PROBLEM - Puppet freshness on mw1076 is CRITICAL: No successful Puppet run in the last 10 hours [16:33:56] RECOVERY - Puppet freshness on mw1076 is OK: puppet ran at Mon Jun 3 16:33:51 UTC 2013 [16:34:26] mark, can you give me a quick rundown of the exim refactors you're hoping to get done for RT? [16:34:52] I just want it to use the same exim template that's used for sodium (and should be for mchenry's replacement) [16:35:24] the exim.conf template [16:35:51] it would at least be nice if certain things like IP ACLs were not duplicated in lots of manual files [16:36:17] mark: ok. I'll have a look and see if I can understand how to do that. thanks. [16:36:19] the RT mail server is a pretty basic mail server config with a few RT specific routers/transports [16:36:25] let me know if I can help [16:36:30] cool [16:37:49] New patchset: Yurik; "Added new Safaricom Kenya carrier" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66557 [16:38:25] PROBLEM - Puppet freshness on mw1091 is CRITICAL: No successful Puppet run in the last 10 hours [17:00:16] !log ms-be1011 replacing disk at slot5 [17:00:25] Logged the message, Master [17:05:03] paravoid: hi [17:05:13] paravoid: would you like to review my workflow for debianization please ? 
[17:05:31] paravoid: I also have a deb if you'd like to review please [17:05:35] New patchset: Mark Bergsma; "Get rid of the default backend 'backend' for 2nd tier sites" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66562 [17:05:40] paravoid: this is for importing libdclass-dev into apt.wikimedia.org [17:06:15] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66562 [17:09:50] average: sure [17:16:02] New patchset: Mark Bergsma; "Get rid of the default backend 'backend' for 2nd tier sites" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66564 [17:17:26] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66564 [17:20:50] New patchset: Mark Bergsma; "Correct variable name" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66565 [17:21:27] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66565 [17:28:11] paravoid: Andrew is having a look at it right now, when he's done I will give you a link to it [17:31:22] paravoid, you should go ahead and check it out if you got a sec, i can only review it for obvious things that would be bad [17:31:38] https://gerrit.wikimedia.org/r/#/admin/projects/analytics/dclass [17:32:19] New review: JanZerebecki; "I think (theoretically) rewriting the request internally onto w/index.php should work. But I do not ..." [operations/apache-config] (master) C: -1; - https://gerrit.wikimedia.org/r/65443 [17:32:21] hmm, average, you can use a debian/dirs file [17:32:26] there's no review requests there? [17:32:28] instead of manually running mkdir in your debian/rules [17:32:50] yeah, average pushed directly instead of for review, next time he will push for review [17:33:01] his stuff is in the 'newpackage' branch? 
[17:33:20] yes [17:33:29] paravoid: please have a look at the newpackage branch [17:34:11] paravoid: in particular the DEBIAN.md file http://goo.gl/tucWl [17:34:35] not the easiest way to review :) [17:35:18] average: http://www.debian.org/doc/manuals/maint-guide/dother.en.html#dirs [17:35:39] please bear with me this time. Next time I promise to use git-review [17:35:54] so, you're using a weird mix of dh & traditional debhelper [17:36:04] also http://www.debian.org/doc/manuals/maint-guide/dother.en.html#install [17:36:05] you should have separate install & binary-arch targets [17:36:16] but rather use override_dh_* for the ones you actually want to override [17:36:22] "man dh" should be a good guide for that [17:36:57] also, you should use dh-autoreconf instead of manually calling libtoolize/aclocal etc. [17:37:13] use debian/install instead of the manual cp [17:37:36] do you actually use JAVA_HOME anywhere? [17:37:42] you shouldn't define DESTDIR [17:37:53] that's automatically set by debhelper [17:38:07] use a "debian" branch instead of "newpackage" [17:38:28] paravoid: yes JAVA_HOME is being used [17:38:32] you should use 2.0.14-1 as a version, not 2.0.14 [17:38:49] and you shouldn't include in the changelog all the upstream changes, just the Debian ones [17:39:57] you shouldn't modify stuff out of debian/ in your branch either [17:40:21] you have a bunch of unrelated changes [17:40:27] incl. "back_from_package" [17:40:48] yes, those are leftovers, I'll have to clean those up [17:41:32] I pretty much agree with what you wrote. I'll have to fix them and then get back with an actual gerrit patchset [17:41:49] yeah, it's easier to do such reviews with gerrit [17:42:49] (thanks :) ) [17:43:30] where's the upstream source coming from? [17:43:36] I think I've seen this code before somewhere [17:43:40] it's weather.com's isn't it? [17:44:00] https://github.com/TheWeatherChannel/dClass ? 
[17:44:10] paravoid: yes [17:44:17] that's the one [17:44:24] right [17:44:31] have you modified it in any way? [17:44:59] paravoid: yes, drdee and I added a java JNI wrapper, but then the original author added one as well [17:45:09] (there is a varnish module that uses that) [17:45:10] paravoid: personally I plan to add an XS Perl wrapper in the near future when time will allow [17:45:20] so are we going to use their wrapper then? [17:45:40] paravoid: we use ours at the moment, unless drdee wants to use the upstream one [17:45:59] I have no opinion on which of the two, but upstream sounds like a more fair bet [17:46:15] anyway, got to go in a minute [17:46:32] feel free to add me as a reviewer when you push that into gerrit [17:46:45] paravoid: yes, thank you for looking at this [17:46:50] I think you should import master as it is into gerrit, fork into a wikimedia branch with your changes [17:46:58] and have a debian branch on top of wikimedia [17:47:12] or you can just have upstream's master + debian branch and put your changes into debian/patches/ [17:47:45] I've never done debian/patches/ [17:47:50] where could I read about that ? [17:47:59] or is that a branch name ? 
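The dh layout paravoid describes above — override_dh_* targets instead of hand-written install/binary-arch rules, dh-autoreconf instead of manual libtoolize/aclocal calls — boils down to a debian/rules along these lines. This is a generic sketch, not the actual dclass packaging:

```make
#!/usr/bin/make -f
# Minimal dh-style rules: let dh run the whole build sequence and only
# override the steps that need customization.
# --with autoreconf (from the dh-autoreconf package) regenerates the
# autotools build system, replacing manual libtoolize/aclocal calls.
%:
	dh $@ --with autoreconf

# Override a single step rather than writing full install/binary-arch
# targets; DESTDIR is set by debhelper and must not be redefined here.
override_dh_auto_install:
	dh_auto_install
	# package-specific install steps would go here
```

Manual cp and mkdir calls move into debian/install (one `source destination-dir` pair per line) and debian/dirs respectively. And to the closing question: debian/patches/ is not a branch name — with the 3.0 (quilt) source format it is a directory of quilt patches, applied in the order listed in debian/patches/series.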
[17:54:21] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:21] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:21] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours [17:54:21] PROBLEM - Puppet freshness on db1032 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:21] PROBLEM - Puppet freshness on ms-be1 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:22] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:22] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:23] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:23] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:24] PROBLEM - Puppet freshness on mw1171 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:24] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:25] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:25] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:26] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours [17:54:58] New patchset: ArielGlenn; "dump wb_terms table for wikibase repos" [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/66570 [17:55:20] !log dropped old rowiki db from s3, left over from the migration to s7 4+ years ago [17:55:33] Logged the message, Master [17:57:14] Change merged: ArielGlenn; [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/66570 [17:59:18] hey, are you having a meeting? can someone PM me the hangout link please? 
[18:01:58] MaxSem, I don't think we know which meeting we're having yet [18:15:30] New patchset: Jdlrobson; "Story 767: Make photos upload to beta commons on labs" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/66584 [18:21:37] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/66584 [18:22:27] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:23:17] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.127 second response time [18:29:21] ori-l: ah, right. let me see what's going on with git-deploy [18:34:01] New patchset: Demon; "Set max commit summary lengths for Gerrit" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66665 [18:36:07] Ryan_Lane: thanks very much [18:36:25] ori-l: I think salt is being filtered from the network [18:36:36] did the network policy change recently? [18:37:06] nooooo idea. [18:38:44] New patchset: ArielGlenn; "dump all page titles in all namespaces (bug #19542)" [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/66666 [18:41:57] Change merged: ArielGlenn; [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/66666 [18:51:33] New review: Aaron Schulz; "250?" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66665 [18:52:27] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:52:41] New review: Demon; "I wanted it large enough to not annoy people unless they really messed up. I'm fine with lowering it..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66665 [18:53:18] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.162 second response time [18:57:45] New review: Ori.livneh; "By the way, I think long commit message lines are often the result of people editing commit messages..." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/66665 [18:59:37] New patchset: Demon; "Puppetize gitblit configuration" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/61036 [19:04:08] New review: Demon; "Usually it's the reverse...people editing them locally and then uploading to Gerrit. I don't think I..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66665 [19:14:12] !log aaron synchronized php-1.22wmf4/includes/WebRequest.php '7f31f0029bdc4526b41ad09968e17e10838087a6' [19:14:20] Logged the message, Master [19:15:50] !log aaron synchronized php-1.22wmf5/includes/WebRequest.php '7715ffc9e4216497edc696fa686103dcc9698e9d' [19:15:58] Logged the message, Master [19:18:13] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.22wmf5 [19:18:21] Logged the message, Master [19:19:54] New patchset: Reedy; "enwiki to 1.22wmf5" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/66868 [19:20:31] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/66868 [19:23:57] andrewbogott: I am back :D [19:24:55] PROBLEM - Disk space on mc15 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds. [19:25:47] RECOVERY - Disk space on mc15 is OK: DISK OK [19:26:37] hashar: welcome back! I'm not sure I understand your last comment "I wanted to add…" [19:28:42] hold on, wife asking for a flight quote :D [19:30:53] andrewbogott: so integration/jenkins.git was missing the pep8/pyflakes jobs [19:31:31] don't those jobs come from job-builder? [19:31:34] andrewbogott: so I added them in Jenkins and wrote a quick .pep8 file. Annnnnd pep8 1.3.3 does not support ignoring subdirectories :-( [19:31:53] Why do you need to ignore subdirs? [19:32:19] some submodules under /tools/ contains upstream python scripts that do not pass pep8 :) [19:32:31] ah. 
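On the exclusion problem hashar hit: pep8 takes comma-separated fnmatch patterns via --exclude (or a project config section), matched against path basenames, and the 1.3.x releases were limited in how these applied to nested subdirectories — which is what broke skipping the upstream scripts under tools/. A hedged sketch of the intended configuration; the section placement and patterns are assumptions, not the actual integration/jenkins.git file:

```ini
# Assumed pep8 project configuration (e.g. in setup.cfg or tox.ini);
# "tools" stands in for the submodule directory mentioned above.
[pep8]
ignore = E501
exclude = tools,.git
```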
[19:32:37] but I can figure that out later on [19:32:46] Well… should I change my update script so that it takes a list of dirs? Or a list of dirs to exclude? [19:33:09] to create the pep8 and pyflakes I made a Jenkins Job Builder configuration change : https://gerrit.wikimedia.org/r/#/c/66568/1/integration.yaml,unified [19:33:29] that reuse some templates to build the job for us [19:33:40] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66320 [19:35:09] hashar: OK, so, not directly related to my patches, right? [19:35:28] na unrelated [19:35:44] I have merely used that change to test out my changes in JJB/Zuul :D [19:36:42] !log aaron synchronized php-1.22wmf5/includes/diff/DifferenceEngine.php '22a544be01809226c1069cff90cdc174df81b7e5' [19:36:50] Logged the message, Master [19:37:03] brb [19:44:37] Change abandoned: Dzahn; "(no reason)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/59001 [19:47:42] New patchset: Ottomata; "Moving compression configs to mapred-site.xml where they belong" [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/66869 [19:47:47] New patchset: Faidon; "New Saudi Telecom IP ranges per carrier request" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66556 [19:48:07] New patchset: Faidon; "Added new Safaricom Kenya carrier" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66557 [19:48:32] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66556 [19:48:35] New patchset: Ottomata; "Moving compression configs to mapred-site.xml where they belong" [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/66869 [19:48:43] andrewbogott: so I will try out your python script on gallium and find out how well it works :-} [19:48:57] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66557 [19:48:59] andrewbogott: will deploy it between my two conf calls this 
evening, aka in roughly 40 minutes [19:49:09] Change merged: Ottomata; [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/66869 [19:49:12] I predict it will complain a lot :) [19:54:14] Change abandoned: Dzahn; "(no reason)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/62032 [19:55:21] New patchset: Dzahn; "contint: Add rewrite rules for favicon.ico to favicon.php" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/62125 [19:56:11] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/62125 [19:57:23] New review: Faidon; "I think it's a fine way to structure this, so +1 on that, but -1 on some other things that I found w..." [operations/puppet] (production) C: -1; - https://gerrit.wikimedia.org/r/66553 [19:57:43] New patchset: Ottomata; "Adding role/hadoop.pp classes" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553 [20:01:36] New patchset: Ottomata; "Adding role/hadoop.pp classes" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553 [20:02:15] alright, cool, paravoid, i fixed those few things. I also removed it from being used in site.pp on analytics1020 for now. We need to get alex to put the cdh 4.2.1 .debs back in our apt. [20:02:21] they got replaced with 4.3.0 today [20:02:36] so, if you don't mind I will merge this and apply it in labs [20:02:37] s'ok? [20:02:38] and? they don't work? [20:02:44] they do work [20:02:48] but we'd have to upgrade everything all at once [20:02:53] oh? [20:02:55] how come? [20:03:08] i mean, it would most likely be ok to run different versions of stuff at the same time [20:03:12] but i think we'd prefer not to risk it [20:03:20] i asked drdee if he minded if I upgraded [20:03:26] he told me he'd prefer I didn't right now [20:03:36] okay [20:03:49] sure, I don't mind merging [20:03:52] cool [20:03:54] merging... 
[20:04:14] I already gave it a +1 before, if it's labs that's even better :) [20:04:21] er [20:04:23] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66553 [20:04:24] you didn't fix the 4 spaces [20:04:27] ah right [20:04:29] sigh [20:04:30] k will do [20:04:44] sorry for pestering you with whitespace :) [20:05:52] naw its cool [20:05:59] why'd we end up with 4 spaces rule, do you know? [20:06:11] it was a compromise between 2 and tabs I guess :) [20:06:40] New patchset: Ottomata; "2 -> 4 spaces" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66871 [20:06:42] psh [20:06:48] Well, it's the reasonable point between "tabs are teh evil" and "2 spaces hurt my eyes!!1!" [20:07:19] ohhhhh aesthetics, we are lucky that we don't all have to agree on syntax highlighting rules [20:07:28] My own position is "pick one and stick with it ffs." :-) [20:07:34] uh huh [20:07:43] we now have tabs, 2 spaces, and 4 spaces in our manifests :) [20:07:49] yeah... [20:07:54] puppet style guide says to use 2 spaces, waaah crying [20:07:55] but whatevs [20:08:01] i'm only crying because I like 2 sapces [20:08:02] spaces [20:08:06] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66871 [20:09:08] New patchset: Ottomata; "Updating cdh4 module to latest merged change." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66872 [20:09:16] I kinda lean towards the Mark position that two spaces is annoyingly thin. I can *live* with it, but it's lean. 
but then you run out of screen real estate way faster [20:09:46] have to scroll right more [20:10:13] stupid IRC - takes forever to connect and then when I look away I get unauthenticated and can't reauthenticate [20:10:25] I can't wait to stop using this crummy irc client [20:10:25] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66872 [20:10:56] RECOVERY - Puppet freshness on mw1091 is OK: puppet ran at Mon Jun 3 20:10:47 UTC 2013 [20:11:58] so, we never assigned this week's RT duty [20:12:01] I guess I'll just do a second week in a row [20:12:15] I didn't do much triaging last week anyway [20:23:18] "Good Guy paravoid" [20:24:34] paravoid: so after upgrading ceph do you think it might be stable enough to move forward then? [20:24:45] i certainly hope so [20:24:58] well, Hope is the next best thing I guess [20:25:07] good news is all of our more serious bugs are supposedly fixed there [20:25:16] bad news is, there might be an entirely new set of bugs [20:25:43] * AaronSchulz thinks of http://academic.brooklyn.cuny.edu/english/melani/cs6/hope.html [20:26:13] $ git log --oneline v0.56.4..origin/cuttlefish |wc -l [20:26:13] 2776 [20:26:20] so... [20:27:26] !log allowing salt out of analytics subnet [20:27:33] !log dist-upgrading vanadium [20:27:34] Logged the message, Mistress of the network gear. [20:27:42] Logged the message, Mistress of the network gear. 
[20:27:52] it is getting stabler with each release [20:28:03] and I complained a bit about their release management [20:29:56] PROBLEM - DPKG on vanadium is CRITICAL: DPKG CRITICAL dpkg reports broken packages [20:30:54] AaronSchulz: I'm currently waiting for 0.61.3 to get released (.2 has a very serious bug), sage said "due in 1-2 days" yesterday [20:30:56] RECOVERY - DPKG on vanadium is OK: All packages OK [20:31:17] AaronSchulz: so I'll definitely upgrade when it's released and maybe we can turn it on again on Thursday or so [20:31:18] paravoid: so...due in 24 hours [20:31:19] New patchset: MaxSem; "Update mobile device detection" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66874 [20:31:34] paravoid: at least upgrading will be easy if nothing is using it (assuming those scripts finish) [20:31:36] it's in their qa cluster [20:32:23] I recently read they have 14 racks of equipment for QA, kind of amazing [20:32:29] New review: MaxSem; "Needs to go live with the next MobileFrontend update on Wednesday." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66874 [20:32:36] Coool, thanks paravoid, labs is running cdh4 hadoop with all that stuff now, weeee [20:32:49] paravoid: at least we know there is scale/perf testing going on [20:33:09] AaronSchulz: delete^Wpurge^Wnuke^WeraseDeletedFiles? [20:36:30] meow? [20:36:34] what of it? [20:37:02] waiting for a +2? 
[20:39:46] looks like it [20:40:03] paravoid: using the filename option will be fun for utf8 names [20:41:58] well that's why there is --filekey :) [20:56:31] New patchset: Dzahn; "turn on RewriteEngine for favicon redirects, follow-up to change 62125" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66875 [20:57:03] Krinkle: ^ yea, it's not more than that actually, the slash is actually ok [20:58:00] New patchset: Krinkle; "contint: Turn on RewriteEngine for favicon redirects" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66875 [20:58:58] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66875 [20:59:34] New review: Krinkle; "(1 comment)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66875 [20:59:57] New patchset: Krinkle; "contint: Move apache logs to readable place for localhost testing" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/61997 [21:00:30] New review: Krinkle; "Don't merge for now." [operations/puppet] (production) C: -1; - https://gerrit.wikimedia.org/r/61997 [21:01:28] Krinkle: but using "git review -d " that is what i'd want ... [21:01:44] how do the others have a meaning "outside gerrit? [21:01:58] They can be searched for [21:02:14] It's best to just use git hashes, those are easy to find and easy to use everywhere [21:02:50] They are listed on every gerrit change page and can obviously be looked up with "git show" and "git log" as well as on gerit-search, github, gitweb etc. 
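Krinkle's point — a commit hash resolves anywhere (git show, git log, gitweb, github), while a gerrit change number only means something to gerrit — can be checked with plain git. A self-contained sketch using a throwaway repository and a made-up commit message:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Identity set inline so the sketch works without global git config.
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "example: favicon rewrite fix"
hash=$(git rev-parse --short HEAD)
# The short hash can now be pasted into "git show", gitweb, github, etc.
git show -s --format='%h %s' "$hash"
```

Pasting the printed short hash into any clone of the same repository finds the same commit, which is exactly why hashes make better cross-references than change numbers.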
[21:03:28] ok [21:04:13] Krinkle: favicon issue resolved then [21:04:19] Yep :) [21:04:58] Thx again mutante [21:05:11] New patchset: ArielGlenn; "wikiretriever can now get user info for all users of a wiki" [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/66876 [21:17:49] !log upgrading php5-redis on all appservers to version bult from github master 2d0f29bdaf29b071aea29b8fb9ee4158c2b69d72 [21:17:56] Logged the message, Master [21:27:36] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:28:26] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.149 second response time [21:32:56] New patchset: Asher; "redis: default to pkg ensure present instead of to a specific version" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66877 [21:34:44] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66877 [21:38:19] PROBLEM - Host labstore3 is DOWN: PING CRITICAL - Packet loss = 100% [21:39:09] RECOVERY - Host labstore3 is UP: PING OK - Packet loss = 0%, RTA = 26.82 ms [21:43:03] New patchset: Asher; "sort options for consistency" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66878 [21:45:17] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/66878 [22:00:24] hii paravoid, are you still around? [22:00:33] yep [22:00:34] thinking about the role/hadoop.pp thing [22:00:55] would it make sense to do role/kraken/hadoop.pp, role/kraken/zookeeper.pp [22:00:56] etc.? 
[22:01:30] I think role::hadoop is fine [22:02:56] hm, right but i 'm thinking about this [22:03:01] because i'm working on zookeeper [22:03:11] and I know chad said he wants to use zookeeper for something [22:03:19] so, just a role/zookeeper.pp might not be good enough [22:03:30] he'll have different zookeeper servers (probably) [22:03:39] (we could use the same ones, but it would be nice to be able to use different ones) [22:04:01] so, at the very least i'm starting to think a role::kraken namespace might be useful [22:15:01] what are we going to use zookeeper for? [22:15:04] kafka isn't it? [22:15:16] so that would be role::kafka::zookeeper or something, won't it? [22:16:49] hmmmmm, zookeeper is used by kafka, but not only [22:17:01] chad wants to use it for solr I think [22:17:13] its more standalone than kafka [22:17:14] but you made the point that's going to be different [22:17:17] which is a good point [22:17:26] that would be role::search or something like that [22:17:34] right, but we can/will use it for things other than kafka too [22:17:42] storm uses it (i think) [22:17:43] for different roles though :) [22:18:11] hm? not really [22:18:15] we'll have 3 zookeeper servers [22:18:21] kafka and storm will both connect to them [22:18:32] oh [22:18:34] this is kraken's zookeeper [22:18:43] let's worry about that when that time comes? [22:18:59] renaming a bunch of role classes is trivial [22:19:01] yeah that's cool, so you think I should just make a role/zookeeper class for now? [22:19:16] role::zookeeper, etc. [22:19:20] for what? [22:19:33] for puppetizing the zookeeper servers [22:19:38] in kraken [22:19:43] what are these servers for? [22:20:10] right now, they will be used by kafka…i thiiiink that's all we're using them for right now [22:20:15] but really, they have nothing to do with kafka on their own [22:20:38] but I thought kafka 0.8 doesn't use zookeeper? [22:20:56] the producers don't [22:21:23] oh the consumers still do? 
[22:21:25] okay [22:21:39] yeah, consumers and brokers [22:21:49] okay [22:22:13] paravoid: ottomata: what's the recommended method for producer high availability in 0.8? [22:22:14] tbh, if we are to bikeshed, I'd prefer the namespace to remain role::analytics and move the existing role::analytics class under it [22:22:26] kraken just feels weird to me [22:22:30] i'm fine with that [22:22:39] i keep going back and forth between the two [22:22:52] should I keep them all in role/analytics.pp then? [22:22:53] but it's obviously a style/naming issue, not anything that's technically more correct or wrong [22:22:55] or subdirs? [22:22:58] yeah [22:23:12] binasher: I don't know exactly, I just remember hearing that on our conversation with Magnus [22:23:17] the librdkafka person [22:23:23] binasher, afaik, the producers talk with the brokers to find out about cluster configuration [22:23:29] and the brokers talk with zookeeper [22:23:34] we asked for 0.8 & zookeeper to be implemented and he basically asked "why both?" [22:24:05] yeah, i don't know much more than that [22:24:13] ottomata: same file / subdirs are both fine with me [22:24:35] so, it removes the zookeper dependency from the producers, but dunno about how good the HAness is compared to 0.7.2 [22:24:43] ok cool, i'll figure that out then, paravoid. [22:24:52] subdirs might be annoying, since they won't puppet autoload [22:24:54] would have to import [22:25:18] we've got import "role/*.pp" in site.pp right now [22:25:23] we'll see [22:25:27] !log replaced redis-server in precise-wikimedia with 2.6.13 [22:25:35] Logged the message, Master [22:25:35] but ok, i'll move this stuff back under role::analytics classes [22:25:37] danke [22:26:27] yeah it felt a bit weird how analytics1020 had both role::analytics & role::hadoop, I had to look up what the former is [22:29:51] New patchset: Ottomata; "Initial commit of zookeeper module." 
[operations/puppet/zookeeper] (master) - https://gerrit.wikimedia.org/r/66882 [22:30:47] also separate submodule? [22:31:54] New review: Ottomata; "Cool, ok. I've separated out zookeeper from cdh4 here." [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/65408 [22:33:48] New review: Ottomata; "Oh, to answer your question Faidon," [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/65408 [22:34:01] yes [22:34:40] we're extracting out the zookeeper puppetization from the cdh4 module so that chad can use it too. debian's package should be just fine (and probably better). [22:34:40] meh, it's confusing me but alright [22:34:58] what's confusing you? ehhe, you told me to make it its own module, didn't you? [22:35:00] but you should definitely talk with hashar about setting up jenkins for operations/puppet/* [22:35:16] he has to do it manually [22:35:27] he showed me how (i think I wrote it down) to do it [22:35:41] so I should be able to submit that change for him [22:35:44] you mean per each submodule? [22:35:47] yeah [22:35:49] bleh [22:35:52] yeah [22:37:25] ergh, perhaps I did not write it down...grrr [22:37:30] i think he put it in wikitech somewhere [22:38:29] I guess we can be more strict with puppet-lint under operations/puppet/* can't we :) [22:38:41] but anyway, let's establish our baseline [22:39:36] don't we need to actually have style guidelines first? [22:39:47] * LeslieCarr ducks [22:40:23] wasn't the lint check removed from gerrit anyways? [22:40:47] LeslieCarr: we kinda agreed on that [22:41:23] https://bugzilla.wikimedia.org/show_bug.cgi?id=48020 [22:42:27] ..but instead it was removed [22:45:22] ottomata: if you switch to Debian's zookeeper make sure to a) reprepro delete it b) remove the grep-dctrl -X -S zookeeper portion of reprepro's shell hook [22:45:44] did alex already add it? [22:45:47] yes [22:45:55] ah, yup [22:45:57] I wonder why cloudera decided to roll their own... [22:46:16] maybe historical? 
dunno how long the deb zookeeper one has been around
[22:46:31] Jan 2010
[22:46:38] hm, dunno
[22:49:35] ottomata: do we really need bigtop?
[22:50:20] alex was banging his head against the wall with cloudera changing the tarball but keeping the same name/version (which produced an md5 checksum)
[22:50:29] but it seems to me like bigtop is some hadoop testing suite
[22:50:47] doesn't seem relevant to us
[22:50:50] huh?
[22:50:59] oh
[22:51:02] i don't know
[22:51:17] i think bigtop is packaging things
[22:51:28] and trying to get them to debian and other package repos
[22:51:38] hm it says packaging too, right
[22:51:46] and I *think* that cloudera is using their tests suites or something
[22:51:48] i'm not exactly sure
[22:51:54] actually, i only learned about bigtop recently
[22:52:04] huh
[22:52:07] i'm not sure how much better it is than cdh, but it seems more generic, which is nice
[22:52:21] cdh is nice because they release things very consistently, and the community is very good
[22:52:41] we're not going to re-evaluate now, but if I was starting from scratch right now i'd consider bigtop
[22:53:01] they're actually producing debs
[22:53:03] wow
[22:53:06] ydah
[22:53:25] that's kind of amazing
[22:53:30] http://bigtop.apache.org/team-list.html
[22:53:44] its people from a bunch of dists
[22:55:07] huh, canonical
[22:55:16] james page is listed as the last uploader of zookeeper in Debian/Ubuntu
[22:55:29] whoa
[22:55:36] looking at the wiki docs for install instructions
[22:55:41] one of the steps is
[22:55:46] 1. Format the namenode: sudo /etc/init.d/hadoop-hdfs-namenode init
[22:55:50] which uh, is awesome!
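The `exec { }` puppetization of the namenode format mentioned in this exchange might look like the following sketch. Resource names, paths, and the guard file are assumptions for illustration, not taken from the actual cdh4 module:

```puppet
# Sketch of a one-time HDFS namenode format; names and paths are
# illustrative assumptions. The guard keeps it from re-running on an
# already-formatted namenode: a successful format writes a VERSION
# file into the name directory.
exec { 'hadoop-namenode-format':
    # cdh4-module style: call out to the hdfs binary directly.
    # With packaging that builds this into the init script, the
    # command could instead be '/etc/init.d/hadoop-hdfs-namenode init'.
    command => '/usr/bin/hdfs namenode -format',
    user    => 'hdfs',
    creates => '/var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/VERSION',
}
```

Either way the idempotence guard (`creates`, or `unless`/`onlyif`) is what matters, since reformatting a live namenode would destroy the filesystem metadata.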
[22:55:53] its built into the init script
[22:56:00] i puppetized doing that with an exec { }
[22:56:16] i mean, i'd still have to puppetize in the same way, but wouldn't have to call out to hdfs bin
[22:56:47] i mean, ok ok , i'm so tempted to go with bigtop instead of cdh4, but no nono, later maybe one day :)
[22:56:51] https://launchpad.net/~hadoop-ubuntu/+archive/testing is what he ended up doing
[22:57:13] ?
[22:57:25] seems old
[22:57:26] !log kaldari synchronized php-1.22wmf5/extensions/TimedMediaHandler/resources/mw.MediaWikiPlayerSupport.js 'syncing mw.MediaWikiPlayerSupport.js for TimedMediaHandler bug'
[22:57:34] Logged the message, Master
[22:57:35] oh that's the last update to zookeeper?
[22:58:14] no, the last one is in Debian experimental/Ubuntu raring
[22:58:31] but that seems to have been a past Canonical effort on bringing Hadoop to Ubuntu
[22:58:37] oh hm
[22:58:38] i see
[22:58:45] "Hadoop Ubuntu Packagers" team
[22:59:05] last activity about a year ago
[22:59:06] who knows...
[23:00:30] oh, heh, the same guy is also doing Ceph for Ubuntu apparently
[23:00:33] small world isn't it
[23:00:44] James Page?
[23:00:49] yes
[23:00:56] huh
[23:00:59] yup, small world!
[23:00:59] hah
[23:01:01] and jenkins
[23:01:16] and solr
[23:01:17] busy guy
[23:05:50] https://fosdem.org/2013/schedule/speaker/james_page/
[23:05:54] PROBLEM - NTP on ssl3002 is CRITICAL: NTP CRITICAL: No response from NTP server
[23:06:00] "Automatic OpenStack Testing on Ubuntu"
[23:06:40] ah..
even Technical Lead of the Ubuntu Server Team
[23:07:16] !log DNS update - point contacts away from broken singer
[23:07:24] Logged the message, Master
[23:10:04] PROBLEM - NTP on ssl3003 is CRITICAL: NTP CRITICAL: No response from NTP server
[23:29:50] ACKNOWLEDGEMENT - Host lanthanum is DOWN: CRITICAL - Host Unreachable (208.80.154.13) LeslieCarr host is in the commissioning process and not yet up
[23:31:15] ACKNOWLEDGEMENT - Puppet freshness on knsq16 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:15] ACKNOWLEDGEMENT - Puppet freshness on knsq17 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:15] ACKNOWLEDGEMENT - Puppet freshness on knsq18 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:15] ACKNOWLEDGEMENT - Puppet freshness on knsq19 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:15] ACKNOWLEDGEMENT - Puppet freshness on knsq20 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:16] ACKNOWLEDGEMENT - Puppet freshness on knsq21 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:16] ACKNOWLEDGEMENT - Puppet freshness on knsq22 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:17] ACKNOWLEDGEMENT - Puppet freshness on knsq23 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:17] ACKNOWLEDGEMENT - Puppet freshness on knsq24 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:18] ACKNOWLEDGEMENT - Puppet freshness on knsq26 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:18] ACKNOWLEDGEMENT - Puppet freshness on knsq27 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:19] ACKNOWLEDGEMENT - Puppet freshness on knsq28 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:19] ACKNOWLEDGEMENT - Puppet freshness on knsq29 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr all of these are decommissioned
[23:31:40] if they're decom why are they in puppet?
[23:31:43] er
[23:31:46] puppet & nagios that is
[23:33:37] why in nagios… unsure
[23:33:52] they're in decom.pp in puppet
[23:36:54] RECOVERY - Apache HTTP on mw1058 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 747 bytes in 0.060 second response time
[23:40:15] RECOVERY - Apache HTTP on mw1089 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 747 bytes in 0.056 second response time
[23:42:17] ACKNOWLEDGEMENT - Apache HTTP on mw1171 is CRITICAL: CRITICAL - Socket timeout after 10 seconds LeslieCarr mw1171 is broken rt5231
[23:42:18] ACKNOWLEDGEMENT - Puppet freshness on mw1171 is CRITICAL: No successful Puppet run in the last 10 hours LeslieCarr mw1171 is broken rt5231
[23:42:18] ACKNOWLEDGEMENT - SSH on mw1171 is CRITICAL: Server answer: LeslieCarr mw1171 is broken rt5231
[23:46:24] PROBLEM - SSH on sq51 is CRITICAL: Server answer:
[23:47:24] RECOVERY - SSH on sq51 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[23:47:34] PROBLEM - SSH on lvs1002 is CRITICAL: Server answer:
[23:48:34] RECOVERY - SSH on lvs1002 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.1 (protocol 2.0)
[23:58:27] binasher: interesting balancing deletion statement locking vs using the insert buffer for a particular index
[23:59:17] * AaronSchulz realizes he has another assignment to review
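The decom.pp arrangement mentioned in the exchange above can be sketched as follows. The variable name and hostnames are illustrative assumptions, not the file's actual contents; the point is that decommissioned hosts stay explicitly enumerated so puppet (and anything generated from it) removes them deliberately rather than having them silently vanish:

```puppet
# Illustrative sketch of a decom.pp-style list; the real file's
# variable names and entries may differ. Hosts remain enumerated
# until cleanup (cert revocation, monitoring removal) is complete,
# which is why acknowledged alerts for them can linger in nagios.
$decommissioned_servers = [
    'knsq16',
    'knsq17',
]
```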