[00:00:25] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.184 second response time
[00:51:01] PROBLEM - Puppet freshness on ms-be5 is CRITICAL: No successful Puppet run in the last 10 hours
[01:24:09] New review: Alex Monk; "PleaseStand, please remove your -1." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/71932
[01:36:46] New patchset: Alex Monk; "Enable CAPTCHA for all edits of non-confirmed users on pt.wikipedia in order to reduce editing activity" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[01:54:49] PROBLEM - Puppet freshness on mw1001 is CRITICAL: No successful Puppet run in the last 10 hours
[02:05:53] PROBLEM - Puppet freshness on db78 is CRITICAL: No successful Puppet run in the last 10 hours
[02:14:49] !log LocalisationUpdate completed (1.22wmf9) at Mon Jul 15 02:14:48 UTC 2013
[02:15:01] Logged the message, Master
[02:27:57] !log LocalisationUpdate completed (1.22wmf10) at Mon Jul 15 02:27:57 UTC 2013
[02:28:07] Logged the message, Master
[02:34:53] PROBLEM - Puppet freshness on searchidx1001 is CRITICAL: No successful Puppet run in the last 10 hours
[02:41:53] PROBLEM - Puppet freshness on rubidium is CRITICAL: No successful Puppet run in the last 10 hours
[02:42:20] !log LocalisationUpdate ResourceLoader cache refresh completed at Mon Jul 15 02:42:20 UTC 2013
[02:42:31] Logged the message, Master
[02:42:53-02:43:01] PROBLEM - Puppet freshness on ekrem, mw1007, mw1043, manganese, mw1041, mw1063, mw1197, mw1087, mw1171, search1024, solr1003, mw1210, search18, stat1, mw58, solr3, mw121, sq76, srv292 and titanium is CRITICAL: No successful Puppet run in the last 10 hours
[02:43:53-02:43:58] PROBLEM - Puppet freshness on amssq53, analytics1014, cp3009, cp1005, cp3012, db1001, db1031, db1044, db39, helium, mc1007, ms-be1006, ms10 and mw… is CRITICAL: No successful Puppet run in the last 10 hours
[03:05:45] New review: MZMcBride; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[03:23:25] New review: PiRSquared17; "Seems fine. Thanks." [operations/mediawiki-config] (master) C: 1; - https://gerrit.wikimedia.org/r/73716
[03:25:58] New review: Parent5446; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[06:52:56] PROBLEM - Puppet freshness on grosley is CRITICAL: No successful Puppet run in the last 10 hours
[07:00:56] PROBLEM - Puppet freshness on mw56 is CRITICAL: No successful Puppet run in the last 10 hours
[07:05:03] New review: Nemo bis; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[08:01:27] New patchset: Nemo bis; "Enable CAPTCHA for all edits of non-confirmed users on pt.wikipedia in order to reduce editing activity" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[08:03:29] New review: Nemo bis; "I've clarified the commit summary as regards the community notification of 2008 (to which only two u..." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982
[08:22:56] PROBLEM - Puppet freshness on manutius is CRITICAL: No successful Puppet run in the last 10 hours
[08:56:16] PROBLEM - SSH on pdf3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:57:06] RECOVERY - SSH on pdf3 is OK: SSH OK - OpenSSH_4.7p1 Debian-8ubuntu3 (protocol 2.0)
[09:21:45] New patchset: QChris; "Replicate analytics/kraken to kraken on github" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73735
[10:02:40] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:03:40] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 7.885 second response time
[10:51:02] PROBLEM - Puppet freshness on ms-be5 is CRITICAL: No successful Puppet run in the last 10 hours
[11:10:39] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[11:11:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.125 second response time
[11:55:09] PROBLEM - Puppet freshness on mw1001 is CRITICAL: No successful Puppet run in the last 10 hours
[12:06:48] PROBLEM - Puppet freshness on db78 is CRITICAL: No successful Puppet run in the last 10 hours
[12:26:48] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:27:38] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.126 second response time
[12:35:48] PROBLEM - Puppet freshness on searchidx1001 is CRITICAL: No successful Puppet run in the last 10 hours
[12:40:16] !log rebooting kaulen to pick up some upgrades, per rt 5460
[12:40:28] Logged the message, Master
[12:42:48] PROBLEM - Puppet freshness on rubidium is CRITICAL: No successful Puppet run in the last 10 hours
[12:43:48-12:43:56] PROBLEM - Puppet freshness on ekrem, manganese, mw1007, mw1041, mw1043, mw1063, mw1171, mw1087, mw1197, mw121, mw1210, mw58, search1024, search18, solr3, solr1003, sq76, srv292, stat1 and titanium is CRITICAL: No successful Puppet run in the last 10 hours
[12:44:48-12:44:58] PROBLEM - Puppet freshness on amssq53, analytics1014, cp1005, cp3009, cp3012, db1001, db1031, db1044, db39, helium, mc1007, ms-be1006, ms10, mw1032, mw43, mw124, pc1, potassium, praseodymium, rdb1002, sq54, sq58, srv255, srv273 and wtp1015 is CRITICAL: No successful Puppet run in the last 10 hours
[12:45:48-12:45:56] PROBLEM - Puppet freshness on db1022, cp1010, ms-fe1001, mw1003, mw1024, mw1033, mw1046, mw1069, mw1150, mw106, mw1189, mw1201, mw1205, mw2, mw35, mw79, mw98, rdb1003, search33, srv193 and wtp1007 is CRITICAL: No successful Puppet run in the last 10 hours
[12:46:48] PROBLEM - Puppet freshness on amslvs1, calcium, amssq48, analytics1004, db1033 and db1040 is CRITICAL: No successful Puppet run in the last 10 hours
[12:48:48-12:48:58] PROBLEM - Puppet freshness on antimony, cp1058, cp3011, dataset1001, db1002, db1014, labstore1001, labstore1, labstore3, mc1012, ms-be12, ms-fe1002, mw1027, mw1104, mw1206, mw1208, mw1211, mw42, mw75, search25, solr1, srv242, ssl1, virt11 and wtp1001 is CRITICAL: No successful Puppet run in the last 10 hours
[12:49:49] PROBLEM - Puppet freshness on amssq51, amssq56, analytics1002, analytics1022 and bast1001 is CRITICAL: No successful Puppet run in the last 10 hours
[12:51:48] PROBLEM - Puppet freshness on analytics1008, cp1038, db1058, db65 and db77 is CRITICAL: No successful Puppet run in the last 10 hours
[12:52:48-12:52:55] PROBLEM - Puppet freshness on analytics1011, analytics1019, brewster, cp1065, dataset2, db1024, db32, db51, db57, es5, mc1008, mw1138, mw26, mw33, mw36, mw64, sq50, stat1002, tarin and wtp1013 is CRITICAL: No successful Puppet run in the last 10 hours
[12:53:48-12:53:56] PROBLEM - Puppet freshness on aluminium, db1010, db46, db55, es1009, labsdb1001, mw1022, mw1040, mw1062, mw107, mw109, mw1132, mw1185, mw1218, mw40, mw53, mw70, snapshot4, sq55, sq64, sq81, srv296 and wtp1021 is CRITICAL: No successful Puppet run in the last 10 hours
[12:54:48-12:54:54] PROBLEM - Puppet freshness on amssq59, colby, cp3006, db31, lvs6, mc1011, ms-fe1003, mw1072, mw1134, mw117, mw1178, mw1219, mw87, professor, search1006, search1011, srv285 and virt2 is CRITICAL: No successful Puppet run in the last 10 hours
[12:55:48] PROBLEM - Puppet freshness on amssq32, amssq43, analytics1001, cp1015, db1011, db29 and db59 is CRITICAL: No successful Puppet run in the last 10 hours
[12:57:48] PROBLEM - Puppet freshness on cp1001, cp1039, cp1054, analytics1020 and cp3019 is CRITICAL: No successful Puppet run in the last 10 hours
[12:58:48] PROBLEM - Puppet freshness on amssq36, cp1008, cp1009, cp1064 and cp1068 is CRITICAL: No successful Puppet run in the last 10 hours
[12:59:48-12:59:54] PROBLEM - Puppet freshness on amssq41, amssq60, analytics1006, analytics1024, cp1057, db1028, db38, db68, ersch, lvs3, ms-be1005, ms-be7, mw1039, mw1177, mw1184, mw120, mw96, searchidx2 and ssl1004 is CRITICAL: No successful Puppet run in the last 10 hours
[13:00:54] PROBLEM - Puppet freshness on amssq58, cp1019, db1003, db1041 and db1043 is CRITICAL: No successful Puppet run in the last 10 hours
[13:02:48] PROBLEM - Puppet freshness on amslvs3, analytics1013, analytics1026, cp1011 and cp1004 is CRITICAL: No successful Puppet run in the last 10 hours
[13:03:48-13:03:51] PROBLEM - Puppet freshness on amssq46, amssq62, cp1055, es8, ms1002, db1053, mw102, mw6, srv241, virt5, virt7, ssl3002 and wtp1023 is CRITICAL: No successful Puppet run in the last 10 hours
[13:04:48] PROBLEM - Puppet freshness on amssq38, analytics1027, cp1052, cp1059, db1006 and db1008 is CRITICAL: No successful Puppet run in the last 10 hours
[13:05:54] PROBLEM - Puppet freshness on amssq33, amssq42, amssq45, amssq49 and cp1014 is CRITICAL: No successful Puppet run in the last 10 hours
[13:06:54] PROBLEM - Puppet freshness on amssq39, amssq61, analytics1005, analytics1025 and cp1045 is CRITICAL: No successful Puppet run in the last 10 hours
[13:09:54] PROBLEM - Puppet freshness on amssq52, analytics1012, analytics1015, cp1007 and cp1061 is CRITICAL: No successful Puppet run in the last 10 hours
[13:11:54-13:12:03] PROBLEM - Puppet freshness on cp1013, cp1050, cp3010, db1036, db1048, gadolinium, lvs5, ms-be4, ms5, mw1044, mw1101, mw1162, mw119, mw1196, mw5, mw90, mw83, search1014, search21, sq79, srv275, srv290, wtp1024 and zirconium is CRITICAL: No successful Puppet run in the last 10 hours
[13:12:54-13:13:01] PROBLEM - Puppet freshness on db1020, lvs1, ms-be1012, mw1111, mw1124, mw1125, mw1130, mw1147, mw1161, mw1214, mw1190, mw38, mw8, search36, pdf2, solr2, sq51, srv301, wtp1011 and wtp1018 is CRITICAL: No successful Puppet run in the last 10 hours
[13:13:54] PROBLEM - Puppet freshness on amssq44, cp1063, analytics1023, db45 and db43 is CRITICAL: No successful Puppet run in the last 10 hours
[13:19:18] my irc client crashes sometimes :(
[13:19:57] you have that going on, don't you
[13:21:37] I've got too many channel logs, I guess
[13:21:53] uh oh
[13:23:54-13:23:56] PROBLEM - Puppet freshness on amssq34, cp1002, cp1012, db1023, db34, es1005, mw1139, mc1005, pc1003, search1018 and srv258 is CRITICAL: No successful Puppet run in the last 10 hours
[13:24:54] PROBLEM - Puppet freshness on cp1046, db40, fenari, ms-be1, mw20 and search14 is CRITICAL: No successful Puppet run in the last 10 hours
[13:32:54] PROBLEM - Puppet freshness on erzurumi, lvs1004, lvs1005, lvs1006, virt1, virt3 and virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[14:22:58] New review: Ottomata; "We're pretty sure there is a git hook somewhere that creates the github repos on gerrit repo creatio..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/72130
[14:23:05] New patchset: Ottomata; "Adding github replication to for jmxtrans to wikimedia/puppet-jmxtrans" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/72130
[14:23:11] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/72130
[14:36:08] New patchset: Ottomata; "gerrit.pp alignment and s/"/'/g change." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73755
[14:36:55] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73755
[14:39:43] is wikitech down?
[14:40:18] seems so
[14:40:24] anyone aware?
[14:41:15] ottomata: did you do something that broke wikitech?
[14:41:23] uhhhhhh no? whaaa
[14:41:23] nice. it sure is down
[14:41:23] grrr
[14:41:28] ottomata: noticed you merged some puppets
[14:41:43] yeah, all for a different server, but I'm checking
[14:41:50] * matanya wonders how long, and why no one noticed :D
[14:41:50] I was on it a little earlier and it was fine
[14:41:50] ottomata: https://wikitech.wikimedia.org/wiki/ - 404
[14:41:50] me too
[14:41:50] so I can't of course list the instances or anything I guess
[14:41:50] cause those require.. wikitech to do so
[14:41:50] :-/
[14:41:56] heh, catch 22
[14:42:16] i was reading some stuff; after clicking a link -- poof, it was gone
[14:42:40] apergos, just ssh root@wikitech.wikimedia.org
[14:42:43] don't need instance name :p
[14:43:04] (I know nothing about wikitech setup right now)
[14:43:47] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73756
[14:44:05] 1) there's wikitech-static
[14:44:13] 2) iirc it's virt0, but ottomata's way should work too
[14:44:18] if it's running in labs I will need an instance name; I will need to ssh through the labs bastion host, I believe
[14:44:35] it's not, virt0
[14:44:40] virt0 is correct
[14:44:40] I'm already looking at wikitech-static and it says nothing of use about wikitech :-D
[14:44:40] it is virt0 per DNS
[14:44:40] "wikitech is the name of the Linode instance running this wiki."
[14:44:40] yeah right
[14:44:41] * matanya shouldn't ask such questions in the future...
[14:44:43] apergos: try labsconsole?
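Worth spelling out how the host was settled just above: the wiki's public name resolves straight to the machine serving it, so DNS answers the "which instance is it" question without a working wiki. A minimal sketch of that check (hostnames are the ones from the exchange; the output is whatever the resolver returns):

    # Resolve the public name to find the machine actually serving the wiki.
    dig +short wikitech.wikimedia.org

    # Once logged in, confirm the host's own idea of its name matches
    # the expectation (virt0, per the discussion above):
    ssh root@wikitech.wikimedia.org hostname -f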
[14:45:03] I'm now on virt0 and going to poke around some
[14:45:19] anyway, I agree it smells like a bad puppet change
[14:45:20] apergos: just steal a wikitech from appscluster
[14:46:03] looks like php isn't running? I get the index.php file
[14:46:05] https://wikitech.wikimedia.org/wiki
[14:46:07] from that
[14:46:13] ottomata: you think maybe you merged something extra on sockpuppet that was in gerrit master but not sockpuppet?
[14:46:24] it's possible; I didn't see anything when I merged some stuff recently
[14:46:57] apergos, mind if I restart apache and see what happens?
[14:47:05] go ahead
[14:47:57] same deal, getting these
[14:47:58] [Mon Jul 15 14:47:51 2013] [error] [client 107.14.223.240] File does not exist: /srv/org/wikimedia/controller/wikis/w/index.php/
[14:49:01] Coren: ↑
[14:49:16] Coren: I saw you poked about this in labs
[14:49:16] AzaToth: I know. Looking into it.
[14:49:44] It's actually completely insane; the script /is/ there, apache tries, but thinks ENOENT
[14:49:56] well, the / at the end is throwing it off, I think
[14:50:05] if you remove it, you get the php source
[14:50:20] ottomata: wouldn't that be webserver config
[14:51:03] yeah, I just tried enabling the php5 module, but that didn't work because it seems the libphp5.so doesn't exist, hmmm
[14:51:13] i think there were some puppet changes recently that had to do with php and apache module conflicts, right?
[14:51:46] hmm, dunno
[14:51:49] maybe not
[14:52:03] i could start running things to try and fix this, but I might end up breaking more stuff
[14:52:05] 2013-07-15 13:26:19 status installed libapache2-mod-php5 5.3.10-1ubuntu3.6+wmf1
[14:52:09] this is utc time
[14:52:27] 2013-07-15 13:26:19 remove libapache2-mod-php5 5.3.10-1ubuntu3.6+wmf1
[14:52:27] hm
[14:52:41] 2013-07-15 13:26:19 status half-configured libapache2-mod-php5 5.3.10-1ubuntu3.6+wmf1
[14:52:48] and some more fun things like that in dpkg.log
[14:52:51] nice!
[14:52:52] apergos: dpkg -l | grep -v ^ii
[14:53:04] and borked packages?
[14:53:15] (I wish we didn't all log in as root so we could see who did that and ask them)
[14:53:36] lots of stuff says 'rc' blah blah
[14:53:41] rc is ok
[14:54:04] rc only means the package has been removed, but the config is still there
[14:54:12] libapache2-mod-php5 stands out in the list
[14:54:13] apergos: dpkg -l | grep -v ^ii | grep -v ^rc
[14:54:21] iF?
[14:54:26] nada
[14:54:56] so "apt-cache policy libapache2-mod-php5" lists it as installed?
[14:55:00] 2013-07-15 13:25:55 startup archives unpack
[14:55:00] this is the start of it
[14:55:19] no
[14:55:27] Installed: (none)
[14:56:08] ottomata: PHP understands trailing path components as a query.
[14:56:20] apergos: you could find who went su by checking /var/log/auth
[14:57:02] (More precisely, it ignores them and allows you to do so)
[14:57:14] yes, I know, but PHP isn't loaded, so bwerp
[14:57:26] bwerp indeed.
[14:58:00] c libapache2-mod-php5 - server-side, HTML-embedded scripting language (Apache 2 module)
[14:58:23] Is wikitech returning 404 for everything for anyone else, or is it just me?
[14:58:43] Yeah, it's down.
[14:58:45] looking through the puppet log first to eliminate that
[14:59:48] anomie: https://bugzilla.wikimedia.org/show_bug.cgi?id=51368
[15:00:09] Something removed libapache2-mod-php5
[15:00:51] hah, try running mediawiki without that!
[15:01:05] Has to be puppet, but I can't find the matching ensure => anywhere yet.
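The hunt above -- a removal visible in dpkg.log, a suspected ensure => somewhere in puppet -- can be made systematic. A sketch of both steps, assuming a puppet checkout under /etc/puppet (the paths are illustrative, not the confirmed production layout):

    # dpkg records every action with a timestamp; list recent removals.
    grep -E ' (remove|purge) ' /var/log/dpkg.log | tail -n 20

    # Show the whole install/configure/remove sequence for the suspect package.
    grep libapache2-mod-php5 /var/log/dpkg.log

    # Look for the puppet resource that could have driven it; 'ensure => latest'
    # is the prime suspect named in the conversation.
    grep -rnE "ensure[[:space:]]*=>[[:space:]]*['\"]?latest" /etc/puppet/manifests /etc/puppet/modules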
[15:01:26] notice: /Stage[main]/Webserver::Apache2/Package[apache2]/ensure: ensure changed '2.2.22-1ubuntu1.3' to '2.2.22-1ubuntu1.4'
[15:01:29] that
[15:01:39] I bet we have ensure latest
[15:01:55] That shouldn't /remove/ the package.
[15:02:05] it installed that one
[15:02:28] and immediately after was the logging of installing the libapache2-mod-php5, starting to configure it, then removing it
[15:03:05] jeremyb: Many people do.
[15:03:33] The following packages have unmet dependencies:
[15:03:33]  libapache2-mod-php5 : Conflicts: libapache2-mod-php5filter but 5.3.10-1ubuntu3.6+wmf1 is installed.
[15:03:33]  libapache2-mod-php5filter : Conflicts: libapache2-mod-php5 but 5.3.10-1ubuntu3.6+wmf1 is to be installed.
[15:03:38] There's our problem.
[15:04:03] http://p.defau.lt/?GkziOzcNha4rnqoeBEInOA
[15:04:10] Why in blazes is it trying to install mod-php5filter?
[15:04:37] * Coren forcibly removes it and tries to rerun puppet
[15:05:00] New patchset: QChris; "Replicate analytics/kraken to kraken on github" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73735
[15:05:19] good luck
[15:05:40] looks like a circular dep
[15:06:38] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73735
[15:07:17] AzaToth: No, you can't have both mod-php5 and mod-php5filter installed at the same time.
[15:07:51] that's right, it's one or the other
[15:08:04] but I do wish I knew why it suddenly wanted the filter version
[15:08:28] Removing php5filter by hand and putting in php5 worked (Wikitech is back now), but I'm looking to see if puppet is going to undo this.
[15:08:42] I'm camped on tail -f puppet.log
[15:08:46] Hey, who killed my puppet?
[15:08:48] same reason ;-)
[15:08:52] nope
[15:08:59] just watching, not shooting
[15:09:07] notice: Caught TERM; calling stop
[15:09:23] wunnerful
[15:09:29] Someone killed my puppet and started a new one.
[15:09:50] Who is on pts/8 from bastion1? :-)
[15:09:53] * Coren readies the trout.
[15:10:40] I am pts/7
[15:11:20] Coren, sorry
[15:11:25] was me, I thought it was stuck.
[15:11:26] booooo
[15:11:33] do you remember the time when irc clients came built in with "slapped X with a trout"?
[15:11:39] * apergos goes back to watching the puppet log
[15:11:48] I certainly do
[15:11:57] Coren, anyway, you fixed it!
[15:12:01] Nikerabbit: they don't any more?
[15:12:06] andrewbogott: Watch your puppet run, see if it tries to replace php5 with php5filter again. :-)
[15:12:52] andrewbogott: I found the problem and fixed it; not sure we caught the /cause/ yet.
[15:13:11] this sounds familiar, lemme see if I can find a note about this...
[15:13:15] the apache upgrade switched to php5filter; not clear if that's some unhelpful default or if puppet is doing so on purpose.
[15:13:59] no complaints in the puppet output
[15:14:51] why am I tailing the puppet log when it's the dpkg log I want to watch :-D
[15:15:10] apergos: for next time, here you have: sudo perl -n -e '/(\w+ \d+ \d+:\d+:\d+) .*? pam_unix\(su(?:do)?:session\): session opened for user root by (\w+)\(uid=\d+\)/ && print "$1 - $2\n"' /var/log/auth.log
[15:15:56] thanks, but it wasn't a rogue user (nor even a mistaken user)
[15:16:11] apergos: I know, that's why I said, for the next time
[15:16:20] I was definitely mistaken.
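The one-liner handed to apergos at 15:15:10 is dense enough to deserve an annotated copy. The same command, reformatted with comments; it assumes the stock Ubuntu pam_unix line format in /var/log/auth.log:

    # Print "timestamp - username" for every root session opened via su or sudo.
    # Matches auth.log lines such as:
    #   Jul 15 13:25:40 virt0 sudo: pam_unix(sudo:session): session opened for user root by coren(uid=0)
    sudo perl -n -e '
        /(\w+ \d+ \d+:\d+:\d+) .*? pam_unix\(su(?:do)?:session\): session opened for user root by (\w+)\(uid=\d+\)/
            && print "$1 - $2\n";
    ' /var/log/auth.log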
[15:16:25] :-D
[15:16:27] i  libapache2-mod-php5 - server-side, HTML-embedded scripting language (Apache 2 module)
[15:16:33] c  libapache2-mod-php5filter - server-side, HTML-embedded scripting language (apache 2 filter module)
[15:16:35] As should be
[15:16:38] uh huh
[15:16:49] another crisis averted :-/
[15:16:50] Coren: dpkg --purge libapache2-mod-php5filter
[15:17:09] AzaToth: I was waiting to see what went on with puppet first. :-)
[15:17:17] k
[15:17:21] so far a big fat nothing
[15:17:41] (and I use aptitude; it does dependencies better IMO)
[15:17:47] ohh, some naughty person purged the php filter package :-P
[15:17:53] Coren: not always
[15:17:58] naughty?
[15:18:00] we don't use phpfilter
[15:18:04] is this still unfixed?
[15:18:06] do the backread
[15:18:07] paravoid: now you do
[15:18:09] it's fixed
[15:18:28] We really need to figure out what caused php5filter to end up installed in the first place.
[15:18:34] apergos: until the next puppet showdown
[15:18:42] ensure latest, most probably
[15:18:48] just remove ensure latest
[15:18:59] Coren: what's the pinning of libapache2-mod-php5 and libapache2-mod-php5filter?
[15:19:20] 500/100 as usual
[15:19:26] or some funky 1001?
[15:20:11] paravoid: ensure latest upgraded the apache, but it really shouldn't have installed -mod-php5filter *over* the dependency.
[15:20:27] they both provide phpapi-20100525, and possibly that's the thing which is required to be installed
[15:20:48] unless you disclose the pinning, I can't know though
[15:21:42] apergos: ?
[15:21:56] Pin: release o=Wikimedia
[15:21:56] Pin-Priority: 1001
[15:22:01] Coren: that's really beside the point
[15:22:10] 1001...
[15:22:12] the point is that a human would never approve that sequence of events
[15:22:16] AzaToth: yes, that's correct
[15:22:23] man, webserver.pp is full of ensure=>latest
[15:22:25] it's also public, in our puppet repo
[15:22:30] paravoid: ok
[15:22:30] Is there any reason I shouldn't change /all/ of them to 'present' right now?
[15:22:50] andrewbogott: please do
[15:23:39] Yeah, present is much safer regardless of anything else.
[15:24:00] this isn't the first time this has happened with apache/php
[15:24:48] New patchset: Andrew Bogott; "s/latest/present/g" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73758
[15:24:49] OK, patch written with my eyes closed, coming up...
[15:26:23] It troubles me a bit that one of those classes explicitly said 'latest version' in a comment, as though it's on purpose/necessary
[15:26:46] !log Recreating Solr index
[15:26:56] Logged the message, Master
[15:33:33] PROBLEM - SSH on gadolinium is CRITICAL: Server answer:
[15:34:33] RECOVERY - SSH on gadolinium is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.1 (protocol 2.0)
[15:39:14] bblack: hey
[15:39:20] hey
[15:39:24] bblack: vhtcpd spews some warnings across the fleet
[15:39:39] not sure if they're harmless or not
[15:40:10] well, it did on friday
[15:40:12] where can I see it? I don't see it on the first one I checked so far
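Backing up to the pinning quoted above: a pin priority over 1000 lets apt downgrade or swap packages to stay on the pinned release, which is exactly the kind of move that can walk an upgrade through a Conflicts/Provides chain and land on the filter variant. A sketch of how to inspect that interaction before letting puppet act (the preferences file path is an assumption, not the confirmed production location):

    # The pin in question (file name hypothetical):
    #   Pin: release o=Wikimedia
    #   Pin-Priority: 1001
    cat /etc/apt/preferences.d/wikimedia

    # What apt considers installed/candidate for each alternative:
    apt-cache policy libapache2-mod-php5 libapache2-mod-php5filter

    # Dry-run the upgrade to see the full package swap apt would perform,
    # before puppet's 'ensure => latest' does it unattended:
    apt-get -s dist-upgrade

This is also why the s/latest/present/ patch above is the safer default: 'present' only installs a package when it is missing, while 'latest' re-runs the dependency resolver on every new version, which is how the swap happened with nobody watching.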
[15:40:21] oh, Friday there was a lot going on; we fixed that
[15:40:26] no, after you left
[15:40:48] I was very close to reverting your change until I noticed it was happening before
[15:42:06] Jul 15 15:36:20 cp3005 vhtcpd[29152]: recv from 127.0.0.1:80 timed out after receiving 0 bytes
[15:42:09] lots of these
[15:42:19] root@cp3005:/var/log# grep -c 'timed out' /var/log/syslog
[15:42:19] 225
[15:42:28] and it was way worse before I restarted vhtcpd on that box
[15:43:02] yes, we did always get those periodically before; that was varnish just sort of hanging and not replying. vhtcpd times out and retries, and eventually varnish accepts the same request.
[15:43:19] okay
[15:43:21] but if there's a lot of them, it could be worth looking at
[15:43:37] checking 3005
[15:43:43] I was debugging an issue on cp3005 on friday, someone reporting that a page got stuck in cache and the purge was not going through
[15:44:02] there were a lot of these and an action=purge repeatedly didn't purge the page
[15:44:09] there were purges processing though
[15:44:20] I restarted vhtcpd and it fixed itself
[15:44:20] hmmmmm 2013/07/15 15:43:58 socat[2616] E getaddrinfo("cp3004.eqiad.wmnet", "(null)", {1,2,1,6}, {}): Name or service not known
[15:44:32] my normal bastion setup doesn't seem to work for these like it does for cp10[34]x.eqiad
[15:44:33] there's no cp3004.eqiad :)
[15:44:43] 3xxx is esams
[15:44:46] .esams.wikimedia.org
[15:44:50] New patchset: Ottomata; "Removing special treatment of packet-loss.log on analytics udp2log instances." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73764
[15:44:50] oh, duh
[15:44:55] * bblack needs more coffee
[15:45:00] welcome to this side of the pond
[15:45:02] ;)
[15:45:37] the timeouts seem to be much more on 3128 than 80
[15:45:49] whatever that means
[15:46:08] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73764
[15:46:11] it was friday night, I didn't really have much strength to dig deep into it, tbh :)
[15:48:36] it looks not too bad to me, I think. The rate of those is pretty small compared to the overall invalidation rate, and those esams varnishes are using a lot more CPU in general than the ones I've been looking at in eqiad (so they have a reason to be a bit slow/hangy on responses occasionally)
[15:49:00] and the queue is eventually moving all items through and whatnot in the long run, although it gets behind by a few hundred or thousand entries while waiting on those timeouts temporarily
[15:49:00] ok
[15:49:23] RECOVERY - Disk space on analytics1006 is OK: DISK OK
[15:49:31] so that issue is probably either too much lag between action=purge and the actual purge because of a large queue size, or a different issue altogether
[15:49:38] well, it looks ok from the perspective of not being a horrible bug in vhtcpd
[15:49:44] it's still weird that varnish does that though
[15:49:58] New patchset: Ottomata; "Using /var/log/udp2log/packet-loss.log in filter files for analytics udp2log instances" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73765
[15:50:05] I worry about varnishd in general, though. On cp3005 it's got a lot of CPUs locked up at 99.9%
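Before the thread moves on: the bare grep -c from 15:42 counts timeouts but hides their shape. A sketch that buckets the vhtcpd timeouts by hour and by destination port, based only on the log format in the sample line quoted at 15:42:06 (path and format are assumptions from that sample):

    # Timeouts per hour, to distinguish a steady trickle from a burst:
    grep 'vhtcpd.*timed out' /var/log/syslog \
        | awk '{ print $1, $2, substr($3, 1, 2) ":00" }' | uniq -c

    # Timeouts per destination port; the discussion suggests the backend
    # (3128) times out far more often than the frontend (80):
    grep -oE 'recv from 127\.0\.0\.1:[0-9]+' /var/log/syslog | sort | uniq -c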
on cp3005 it's got a lot of CPUs locked up at 99.9% [15:50:14] if it's doing it for simple body-less localhost traffic, it might be doing it for user traffic too [15:50:36] I dunno if that's just because it's really that busy, or some other bug going on [15:50:45] nod [15:51:04] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73765 [15:51:16] ps -efL|grep varnish|wc -l ----> 2030 [15:51:32] I mean, I don't want to get into the threads-vs-events debate, but 2030 threads on 12 cpus can't be the best way to go about things :P [15:51:44] it does serve 150MB/s :) [15:56:07] a lot of iowait/sintr, too [16:05:34] New review: Parent5446; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982 [16:09:51] New review: Nemo bis; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982 [16:10:07] New patchset: Nemo bis; "[WIP] Enable CAPTCHA for all edits of non-confirmed users on pt.wikipedia in order to reduce editing activity" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982 [16:10:35] New patchset: Ottomata; "Adding logrotate_template parameter to udp2log::instance" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73770 [16:10:36] New patchset: Ottomata; "Using logrotate_udp2log_analytics.erb for analytics udp2log instances" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73771 [16:13:52] New patchset: Ottomata; "Adding logrotate_template parameter to udp2log::instance" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73770 [16:14:41] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73770 [16:15:01] New patchset: Ottomata; "Using logrotate_udp2log_analytics.erb for analytics udp2log instances" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73771 [16:15:36] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73771 [16:18:29] New review: Alex Monk; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/69982 [16:23:03] New patchset: Ottomata; "Fixing size unit in analytics udp2log logrotate template" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73773 [16:23:31] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73773 [16:53:23] PROBLEM - Puppet freshness on grosley is CRITICAL: No successful Puppet run in the last 10 hours [17:00:11] New patchset: Ori.livneh; "Rewrite of EventLogging module" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/71927 [17:00:18] paravoid, when you created our apt module did you consider & reject the puppetlabs apt module? [17:00:36] I'm noticing that e.g. the mongodb module pulls in 'apt' but is expecting the puppetlabs apt module. Somewhat confusing. [17:01:23] PROBLEM - Puppet freshness on mw56 is CRITICAL: No successful Puppet run in the last 10 hours [17:02:44] we should just modify the mongodb one [17:03:11] is it breaking, or just being redundant? [17:03:39] ori-l: It's not currently broken in an important way (although 90% of the unit tests fail) [17:03:48] It just seems like it could be a running gag, having a name conflict like that. [17:03:56] May crop up every time we use an upstream module.
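(The varnishd inspection earlier in this exchange can be reproduced with stock tools; a sketch, assuming a single varnishd child process — pgrep -o picks the oldest matching pid:)

    ps -efL | grep '[v]arnish' | wc -l                    # thread count, as quoted (~2030)
    top -b -H -n 1 -p "$(pgrep -o varnishd)" | head -25   # per-thread CPU snapshot
    mpstat -P ALL 1 3                                     # iowait/softirq split per CPU (needs sysstat)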
[17:04:28] ori-l: Presumably the mongodb module /is/ broken in every way that depends on 'apt' but we happen not to be using those parts :( [17:05:32] imo, reusability is a lofty goal in software engineering but puppet is so deeply flawed that you're almost always better off rolling your own. faidon's apt module is very solid; we should probably just drop all the junk from the mongodb module. i can take a look sometime this week if you like. [17:06:08] I pretty much agree, it's just the stealthy nature of this problem that worries me. [17:06:32] But, sure, I'll let you rip the guts out of the mongo module :) [17:06:33] ah, i see what you're saying. upstream module looks nice -- requires apt -- "no problem, we have that" [17:06:41] * andrewbogott nods [17:07:17] it's probably adequate to just mention this as a potential pitfall in the puppet guide in wikitech [17:07:23] Makes me think we should rename 'apt' to 'wmfapt' or something to solve the namespace issues. [17:07:42] then someone might introduce the puppetlabs apt module, though [17:07:46] hey guys, i'm on vacation until the 24th, i'm gonna actually quit IRC and paging, reachable via manual mail or phone if necessary. cya later [17:07:54] andrewbogott: I don't remember :) [17:08:02] mutante: *wave* [17:08:05] mutante: have fun! [17:08:12] thanks all:) cu [17:08:13] * paravoid really needs some VAC [17:08:20] paravoid, how sad will it make you if I rename that module? [17:08:41] to what? [17:08:49] !log Fixing blog setup of themes, theme may reset to defaults for next few minutes as i tinker with it. [17:08:53] Dunno, something different -- you choose :) [17:09:00] Logged the message, RobH [17:09:02] 'apter' [17:09:11] Something that distinguishes it from the puppetlabs module of the same name [17:10:04] andrewbogott: there are a *lot* of apt puppet modules about -- i don't think the potential for confusion is as great as all that. 'wmfapt' seems a bit ugly. [17:10:05] paravoid, I'm also open to an argument that I shouldn't care about this. [17:11:02] if you think that the puppetlabs apt is better, feel free to replace it [17:11:10] having two apt modules sounds a bit wrong though [17:11:23] besides the extra cruft, they may also conflict with each other, e.g. in the management of sources.list.d [17:12:10] paravoid: I don't really want to use the upstream one, I'm just alarmed because we are currently using a module (mongodb) that thinks that 'apt' is the upstream one while using ours... [17:12:43] Renaming would make it explicit that we don't have the module that mongo is expecting. [17:13:30] …I don't really care about mongo in particular, just having custom modules with the same names as upstream modules seems like it can cause this kind of confusion on an ongoing basis. [17:14:11] hurm. any gerrit+git experts around? [17:14:17] i'm experiencing a new level of git pain [17:14:38] * apergos is curious but not an expert [17:15:13] I started with what appeared to be a clean checkout, made one change, did a git commit and git-review [17:15:14] * greg-g point Jeff_Green to ^demon [17:15:20] +s [17:15:24] jeff_Green, what's happening? [17:15:33] Errors running git rebase -i remotes/gerrit/production [17:15:33] Interactive rebase already started [17:16:00] What happens if you do git rebase --abort? [17:16:33] now apparently I'm ahead of origin/production by 4 commits [17:16:55] Jeff_Green: git pull [17:16:59] it's always 4 commits, even when I never commit more than a single change before merging...
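(On the module-name collision thread above: puppet resolves modules purely by directory name on the modulepath, so an in-house modules/apt silently satisfies — or breaks — anything written against the puppetlabs module of the same name. A hedged sketch of checking what a tree actually carries; the paths are illustrative:)

    ls modules/apt/                                    # which 'apt' is this? the name alone says nothing
    grep -rl 'include apt' manifests/ modules/ | head  # every caller just names 'apt'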
[17:17:01] Jeff_Green, is this on a labs box by chance? [17:17:07] * andrewbogott scowls at ori-l [17:17:13] nope, my usual laptop [17:17:14] Don't git pull, ever! [17:17:34] for the record, I have not git pulled since my last successful merge. lesson learned [17:17:37] :) [17:17:43] heh [17:17:47] uhhhhhh? [17:17:48] nor have I made multiple commits between merges [17:17:51] <^demon> andrewbogott: pull --ff-only is my favorite tho :) [17:17:54] ok. [17:18:18] ori-l, I'm waging a personal crusade in favor of fetch/rebase [17:18:41] I in fact always git pull :-P (on laptop) [17:19:05] <^demon> andrewbogott: `git pull --rebase` [17:19:07] <^demon> Less typing :) [17:19:12] I'm pretty well reaching the conclusion that no matter what you do, git+gerrit will stab you [17:19:21] awww [17:19:30] ^demon: Slippery slope, people use that and they start to think that 'pull' is a good idea. Splitting up the steps enforces mindfulness :) [17:19:40] andrewbogott: git config pul.rebase [17:19:40] Anyway, um, holy wars aside... [17:19:40] pull, even [17:19:43] for what I do, pull should be fine [17:19:52] Jeff_Green, here's what I'd do. [17:20:01] I *always* want to start from exactly what's at head, make one change, and commit it, and merge it [17:20:12] First, note down the id of the patch that you wrote. [17:20:25] i'm more than happy to toss the stupid 3 line change [17:20:31] pretty curious about what the 4 commits are that git log shows you have... [17:20:38] it shows 2 [17:20:44] and 2 merges? [17:20:47] Then get yourself a clean branch: git fetch origin; git checkout -b newbranch origin [17:20:55] Then cherry pick: git cherry-pick [17:21:28] apergos: Merge branch 'production', remote branch 'origin' into production (that's me trying to roll back my local production branch to match origin/production after a git stash) [17:21:38] right [17:21:41] and "decom hosts that have moved to frack puppet" -- 3 line change to one file [17:21:51] that's all, before that is other peoples stuff [17:22:19] andrewbogott: git checkout -b origin/production ? [17:22:34] so you can always git reset HEAD~4 (I guess it's 4) with a soft reset so your stuff is still in working dir [17:22:44] git checkout -b origin [17:22:45] check it with git log [17:22:51] Or maybe origin/production -- should be the same. [17:22:56] what's with all the fatals? [17:22:58] then git diff to see what you have in the working dir... [17:22:58] unless your origin is silly :) [17:23:01] what fatals? [17:23:12] http://ur1.ca/edq1f [17:23:23] andrewbogott: maybe my origin is silly :-) [17:23:52] andrewbogott: so I now show 3 branches: [17:23:57] * origin/production [17:23:58] origin [17:24:02] test [17:24:05] is that a lot (for fatals)? [17:24:12] Is 'test' the branch you just created? [17:24:24] surely it is ancient cruft [17:24:32] if it were up to me I'd purge all branches and start fresh [17:24:32] It's probably the good old test branch from back in the day [17:24:47] https://ganglia.wikimedia.org/latest/graph.php?r=week&z=xlarge&title=MediaWiki+errors&vl=errors+%2F+sec&x=0.5&n=&hreg[]=vanadium.eqiad.wmnet&mreg[]=fatal|exception&gtype=stack&glegend=show&aggregate=1&embed=1 [17:24:49] ...but that takes research I'm unmotivated to do [17:24:59] RoanKattouw: sounds right [17:25:02] Jeff_Green, it's easy, but -- we can do that later.
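(For reference, the setting this exchange keeps circling; both forms are stock git of this era:)

    git config --global pull.rebase true   # a plain `git pull` now rebases instead of merging
    # the split-step habit andrewbogott is advocating:
    git fetch origin
    git rebase origin/production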
[17:25:10] k [17:25:10] seems like more exceptions than usual but not fatals [17:25:19] oh right, blue is exception [17:25:54] * apergos guesses this is shortly going to be out of their area [17:25:57] lemme look on fluorine [17:26:04] Jeff_Green, if you are currently on a branch called 'origin/production' then you missed something. [17:26:17] ApiFormatXml::recXmlPrint? [17:26:23] git checkout -b [17:26:26] andrewbogott: I did "git checkout -b origin/production" [17:26:36] Right, you skipped an argument [17:26:43] ori-l: Ugh, that means there's a bug in an API module, feeding things that the XML printer doesn't like [17:26:48] So you made a new branch called origin/production that was identical to whatever your current branch was at the time. [17:26:58] andrewbogott: OH [17:27:24] It's git checkout -b newbranchname [base] where base defaults to "what I'm on now" [17:27:30] RoanKattouw: ahhhh, that's a nice pro tip. it's 'wbeditentity' fwiw [17:27:44] Oh, I see [17:27:50] Sounds like WikiData / WikiBase [17:28:01] yeah, all wikidatawiki [17:28:13] 2013-07-15 17:01:20 mw1200 wikidatawiki: [XXX] /w/api.php?action=wbeditentity&format=xml Exception from line 1699 of /usr/local/apache/common-local/php-1.22wmf9/includes/GlobalFunctions.php: Internal error in ApiFormatXml::recXmlPrint: (zh, ...) has integer keys without _element value. Use ApiResult::setIndexedTagName(). [17:28:15] full of these [17:28:34] andrewbogott: maybe I should delete the local "origin/production" branch because git is complaining about an ambiguous refname [17:28:57] Jeff_Green: git branch -D origin/production [17:29:01] Yeah, that's why you don't want local branches with the same name as upstream branches :) [17:29:24] Local branches with the same name as files in your repo get you the same kind of problems [17:29:47] you'd think git could warn you when you try to do it, rather than after you did it by accident [17:30:03] "hey, I see you're trying to create a local branch stupidly. cut it out" [17:30:40] Git is the ultimate unix tool, always gives you more than enough rope :( [17:30:40] That's part of why I limit myself to such a narrow vocabulary and lots of ritualistic patterns... [17:30:54] <^demon> aliases! [17:30:54] andrewbogott: yeah but it still stabs you [17:31:05] <^demon> people make typos :) [17:31:33] ^demon: I nevre make typos. [17:31:38] me neither [17:31:40] apergos: i told #wikimedia-wikidata, looking at the Wikibase git log [17:31:44] To deal with git one has to be both very careful and consistent and disciplined, and also very knowledgeable about how to clean things up [17:31:50] great [17:32:04] ori-l, ok you now have a task at the end of this page: https://wikitech.wikimedia.org/wiki/Puppet_Todo [17:32:08] ok so now after some counter-stabbing I have one local branch 'production' and I did a 'git merge origin production' and things look happy-ish again [17:32:09] have at least one tool in your toolbelt that will always get you out of trouble (for git) [17:32:13] good [17:32:15] andrewbogott: :) [17:32:19] hm git merge? really? [17:32:21] ori-l, if you run the spec tests on that module the problem is VERY obvious. [17:32:32] <^demon> RoanKattouw: Indeed, which is why I have `git ohcrap` aliased to `git reset --hard origin/master` :D [17:32:43] Jeff_Green, don't ever use merge either.
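(The ambiguous-refname fix above, spelled out — full ref paths stay unambiguous while the stray branch still exists:)

    git show-ref origin/production              # lists both refs/heads/... and refs/remotes/...
    git log -1 refs/remotes/origin/production   # explicit, never ambiguous
    git branch -D origin/production             # drop the badly named local branch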
[17:32:53] Sorry, I should write a manifesto to get this out of my system [17:33:00] :-D [17:33:01] sounds like it [17:33:13] <^demon> `git ohcrap` should be part of git-core :) [17:33:19] Given how we use Gerrit, git merge is an "if you think you need this, you're probably wrong" kind of command [17:33:19] i know you prefer to use rebase, but in this case I have no local changes so I don't see the difference [17:33:36] I'm just trying to fetch and apply any changes that were merged since my prior fetch [17:33:37] * apergos twitches [17:33:38] Oh that works [17:33:42] <^demon> RoanKattouw: s/we use/everyone uses/ [17:34:01] New patchset: Jgreen; "remove three hosts that were moved to frack puppet" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73782 [17:34:10] WOOoOooOoOoOoOo!!! [17:34:14] Yeah, gerrit defines a clear distinction between upstream and downstream modules. Merge is for upstream (== gerrit) and rebase for downstream (== humans) [17:34:18] ^demon: Some people use Gerrit with merge commits :) but yeah as soon as Gerrit is in the picture 'git merge' is "only use this if you really really know what you're doing" territory [17:34:45] Change merged: Jgreen; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73782 [17:34:49] !log depooling mw1173 to troubleshoot DIMM Failure [17:35:00] Logged the message, Master [17:35:43] Jeff_Green, the pattern that I described above (git fetch origin; git checkout -b origin) gets you onto a new clean branch that's fully up-to-date. It also leaves any existing work behind, intact, wherever you were before. [17:36:10] Anyway. [17:36:21] but your existing work is on the branch you're on, right? [17:36:23] err [17:36:35] your existing work remains at the local branch you just flipped off of [17:36:38] right? [17:36:51] topic branches ftw [17:37:02] (where topic includes bugfix or whatever) [17:37:24] Jeff_Green, right. [17:37:34] So, you get a fresh start, then you can cherry-pick or whatever. [17:37:46] good to know [17:38:00] for me i'm *always* happy to just redo the change [17:38:05] That pattern is, basically: grab what you can, abandon ship [17:38:22] because my changes are always two orders of magnitude more trivial than the git overhead to apply them [17:39:51] * andrewbogott nods [17:39:57] You can always skip the 'grab what you can' part :) [17:40:10] New patchset: Andrew Bogott; "Fix one of many failing mongodb tests." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73785 [17:40:16] so here's my q, trying to figure out what went wrong this time [17:40:31] what I did was this: I was on my local 'production' branch [17:40:50] I edited the file out of the git tree [17:41:00] had a fresh copy though [17:41:11] git fetch, git merge origin production [17:41:50] copied the edited file back into the tree [17:42:01] git commit -a and saw just that file in the commit [17:42:07] git review [17:42:16] uh oh, you lost me at git review, I don't use that [17:42:19] and there was where I started seeing issues [17:42:37] Something was either already scrambled at step one, or else got scrambled in the merge. [17:43:26] The thing about you being four commits ahead of origin… suggests that when you merged you wound up with something different from upstream somehow. [17:43:39] At which point 'git review' was doomed.
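(A minimal sketch of the "grab what you can, abandon ship" pattern described above; the branch name and <sha> are placeholders for the commit id noted down beforehand:)

    git fetch origin                            # update remote-tracking refs only
    git checkout -b my-fix origin/production    # clean branch off current upstream
    git cherry-pick <sha>                       # re-apply just the rescued commit
    git log --oneline origin/production..HEAD   # verify: exactly one commit ahead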
[17:44:23] apergos: https://bugzilla.wikimedia.org/show_bug.cgi?id=51376 ; aude looking into it [17:44:41] Jeff_Green, sorry, I see now that all I'm saying is "I don't know" [17:44:42] ok great, thanks for the heads up [17:44:52] andrewbogott: cool [17:44:58] But, going forward, start with a clean topic branch and things will go better. [17:45:24] andrewbogott: ok [17:46:35] notpeter, can I get a +1 for https://gerrit.wikimedia.org/r/#/c/73585/ ? I changed exactly one character in the live part of that patch, want to make sure that '/' was an oversight on your part and not part of some subtle scheme. [17:47:37] Jeff_Green, I will try to find time to write a guide with pictures and stuff about this, probably flooding you with suggestions on IRC is not helpful :/ [17:47:57] oddly it is [17:48:01] but both are good [17:48:19] the amusing thing is that I used git for years before dealing with gerrit, and never ran into trouble [17:48:24] now I run into it constantly [17:49:06] New review: Pyoungmeister; "looks right to me." [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/73585 [17:49:27] thx notpeter [17:50:37] New patchset: RobH; "rt 5403 qchris member of restricted for bastion access" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73787 [17:50:56] Jeff_Green, want I should merge this patch on sockpuppet? [17:51:05] just did [17:51:08] ;k [17:51:26] Change merged: Andrew Bogott; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73585 [17:51:50] New review: RobH; "all the voices in my head agree this is a great patchset" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/73787 [17:51:51] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73787 [17:52:40] Anyone here blocked any IPs recently? http://lists.wikimedia.org/pipermail/wikitech-l/2013-July/070378.html [17:54:24] TimStarling did one on the 26th of June... I wonder if that's it ("07:13 Tim: deploying squid config change to block API DoS attack") [17:54:35] !log powering down mw1173 [17:54:45] Logged the message, Master [17:54:45] Change merged: Andrew Bogott; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73785 [17:56:38] New patchset: Jgreen; "remove classes from decommissioned fundraising servers" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73789 [17:57:08] Krenair: Yesterday (Sunday 12:00ish UTC) I think apergos blocked some from the API because the API was falling over [17:57:16] 4 ips [17:57:19] no bots [17:57:30] Change merged: Jgreen; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73789 [17:58:00] and we didn't attempt to contact, we were in the middle of an outage; that's unfortunate but in such a case... [17:58:18] Yeah ... [17:58:20] Are you able to share which IPs those were? [17:58:46] probably not here [17:58:55] I suppose we might as well unblock them? [17:59:07] at this point I think that's fine, it was meant to be temporary [17:59:12] and is marked as such in the file too [17:59:24] I have a meeting in 0 minutes though [17:59:39] Would the blocks cause 403 errors? Or just reject connections? [17:59:43] as does all of ops [17:59:48] oh, ok [17:59:51] 403 probably [17:59:58] not all, just those interested :) [18:00:08] apergos: Just let me know when you've unblocked them and I can respond [18:00:15] sure [18:01:13] hashar: I assume there's no progress with zuul and debian-glue? 
[18:01:28] AzaToth: as I said, it depends on slaves being able to reach Zuul commits :-] [18:01:49] yea, and that progress hasn't moved? [18:02:12] can't read RT you know, so I don't know the status of your ticket [18:02:30] nm I can do that now actually, wrong meeting \o/ [18:03:24] AzaToth: you could be added to the cc list of the rt [18:03:41] AzaToth: I will try to get the Zuul ref fixed this week. [18:03:41] ok [18:04:14] got to catch up with ^demon to find out the best method to do it [18:04:24] basically the commits crafted by Zuul need to be fetchable from the slaves [18:04:37] so either we publish the Zuul git repo or we push the references back to Gerrit :] [18:04:39] New patchset: SuchABot; "added mapred.system.dir" [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/73791 [18:04:52] <^demon> hashar: I responded to your e-mail. My initial impression was that pushing them to gerrit isn't the best plan. [18:05:56] RECOVERY - Host mw1173 is UP: PING OK - Packet loss = 0%, RTA = 1.41 ms [18:06:09] I am on a call for the next hour, will attempt to chat/reply after, though you will probably be lunching [18:07:16] Change merged: Ottomata; [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/73791 [18:08:36] PROBLEM - Apache HTTP on mw1173 is CRITICAL: Connection refused [18:09:50] RoanKattouw: done, please explain that we were in the middle of firefighting hence no time to find and notify people for this one [18:09:58] Will do [18:10:19] was there ever a post mortem for that as an outage? maybe not [18:10:37] PROBLEM - Puppet freshness on cp1043 is CRITICAL: No successful Puppet run in the last 10 hours [18:10:37] RECOVERY - Apache HTTP on mw1173 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 747 bytes in 1.018 second response time [18:10:47] PROBLEM - Puppet freshness on cp1041 is CRITICAL: No successful Puppet run in the last 10 hours [18:11:07] PROBLEM - Puppet freshness on cp1044 is CRITICAL: No successful Puppet run in the last 10 hours [18:16:16] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: wikimedia, closed, special to 1.22wmf10 [18:16:27] Logged the message, Master [18:20:29] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: wikibooks, wikivoyage, wikiversity to 1.22wmf10 [18:20:40] Logged the message, Master [18:22:31] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: wikiquote and wiktionary to 1.22wmf10 [18:22:42] Logged the message, Master [18:23:40] PROBLEM - Puppet freshness on manutius is CRITICAL: No successful Puppet run in the last 10 hours [18:25:23] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: everything non wikipedia to 1.22wmf10 [18:25:34] Logged the message, Master [18:27:45] New patchset: Reedy; "Everthing non wikipedia to 1.22wmf10" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73793 [18:28:15] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73793 [18:28:36] apergos: There hasn't been one yet. I suppose mark and myself should collaborate on writing one [18:28:51] there's the page for incident reports [18:29:05] Mostly it's "the sun came up and I went back to sleep" [18:29:06] you could write what you know and ping us to fill in [18:29:09] Yeah [18:29:17] Will do that when I get a moment, which may not be today [18:29:22] okay [18:29:32] so the root cause of the incident is the timezone you were in ?
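(What "fetchable from the slaves" at the top of this exchange amounts to in practice — a sketch of the usual Zuul consumption pattern of this era, assuming the job is handed Zuul's standard parameters (ZUUL_URL, ZUUL_PROJECT, ZUUL_REF) and the merger's repos are published:)

    git fetch "$ZUUL_URL/$ZUUL_PROJECT" "$ZUUL_REF"   # grab the ref Zuul crafted
    git checkout FETCH_HEAD                           # test exactly that merged state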
[18:29:38] haha [18:29:43] =) [18:41:44] !log depooling mw1163 to troubleshoot DIMM error [18:41:55] Logged the message, Master [18:42:41] !log powercycling mw1163 [18:42:51] Logged the message, Master [18:45:20] PROBLEM - Host mw1163 is DOWN: PING CRITICAL - Packet loss = 100% [18:46:54] $wgParsoidSkipRatio [18:49:35] New patchset: Ottomata; "Adding spetrea to admins::restricted so he has an account on bastion hosts." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/69313 [18:49:49] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/69313 [18:55:25] New patchset: Ottomata; "Adam Baso access on stat1002. RT 5446" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73795 [18:55:50] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73795 [18:58:33] !log labstore3 system CPU rocketed from ~10% to ~60% [18:58:43] Logged the message, Master [18:59:32] hashar, is ^^ related to a 503 I just received in beta api? [18:59:40] MaxSem: might [18:59:59] I guess none of the files can be read in a timely fashion [19:00:11] so apache ends up reaching the timeout [19:00:13] New patchset: RobH; "updating icinga contacts with new opsen" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73796 [19:05:17] New review: RobH; "from hells heart i pageth thee" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/73796 [19:05:18] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73796 [19:05:47] are we still in nfs server overload land? hashar [19:06:06] apergos: yeah different issue than this morning though [19:06:27] and in -labs [19:06:57] oh a new issue. grnd [19:06:58] apergos: this morning it was mostly wait io due to a project doing a lot of file operations. Now it is system which is at 60% [19:07:00] just grand [19:07:08] ah I see it from the graph [19:07:16] maybe the nfs bug Coren talked about which hit us every 2 weeks. [19:07:34] oh every second sunday night [19:07:40] is there any cron job that uh.. ? :-D [19:08:43] RoanKattouw, re post-mortem: I think you can just flesh out my mail to ops [19:10:02] RobH: "chekov screams again"? [19:11:06] i take pride in bastardizing literature. [19:11:18] though i must really upset folks who actually like to quote things properly. [19:11:20] heh [19:36:22] robh: any reason why I shouldn't use raid1-lvm.cfg for carbon? [19:37:24] hrmm, the 10GB / is tiny. [19:38:05] but otherwise seems ok [19:38:15] wondering if I should make a new partman recipe with more [19:38:28] 10GB is so small for disks we get in today [19:38:32] carbon is dual 500s? [19:38:36] yes [19:38:37] or 250s... [19:38:38] ok [19:39:08] hrmm, i dunno [19:39:13] maybe run it by m-ark [19:39:29] ? [19:39:30] well, it's just a tftpd server at the moment [19:39:37] so you can use that partman recipe [19:39:45] but it won't be large enough when we have to put apt repo on there [19:39:46] it is lvm so we could expand [19:39:53] well, the / isn't lvm [19:39:56] the rest of disk is [19:40:07] so any lvm mount would be a directory within / [19:40:36] lookin at brewster [19:40:55] heh..... yea [19:40:58] it's not impossible to resize / but would require an unmount [19:40:59] cmjohnson1: use the raid1-lvm [19:41:13] when the apt repo gets moved there, it goes into /srv, which we can put into lvm at that time [19:41:19] so should be ok [19:41:27] ah...cool that will work then..awesome!
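(The expansion path sketched above, made concrete — the volume-group and size names here are hypothetical, the commands are stock LVM/ext4; / stays fixed while /srv is grown out of the pool:)

    lvcreate -n srv -L 100G vg0     # carve /srv out of the LVM pool when needed
    mkfs.ext4 /dev/vg0/srv
    mount /dev/vg0/srv /srv         # the apt repo would land under /srv
    lvextend -L +50G /dev/vg0/srv   # later, when the repo needs room
    resize2fs /dev/vg0/srv          # ext4 grows online, no unmount required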
thx [19:41:31] welcome [19:43:12] New patchset: Ottomata; "Updating README.md with gerrit URL" [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/73851 [19:43:47] Change merged: Ottomata; [operations/puppet/cdh4] (master) - https://gerrit.wikimedia.org/r/73851 [19:48:31] !log Shutting down EventLogging services on vanadium ahead of Iba8cc5d7b deployment. [19:48:42] Logged the message, Master [19:49:04] sniff. json2sql-db1047 RUNNING pid 16932, uptime 68 days, 21:26:08 [19:50:19] New patchset: Jforrester; "Stop the prevention of anons getting VisualEditor for enwiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73852 [19:52:46] ori-l: goodnight, compadre [19:53:00] :) [20:02:26] !log Rebooting vanadium to complete kernel upgrade to 3.2.0-49. [20:02:37] Logged the message, Master [20:06:10] * Nemo_bis wonders what TTM jobs do [20:07:00] paravoid: ready for https://gerrit.wikimedia.org/r/#/c/71927/ now [20:07:17] ok [20:07:32] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/71927 [20:09:19] ori-l: merged [20:10:03] paravoid: thanks, running puppet [20:12:25] New patchset: Ori.livneh; "vanadium: role::logging::eventlogging -> role::eventlogging" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73855 [20:12:53] paravoid: ^ my mistake :-/ [20:13:22] Jeff_Green, manifesto: https://wikitech.wikimedia.org/wiki/Help:Git_rebase [20:13:37] andrewbogott: thank you! [20:14:25] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73855 [20:14:35] done [20:14:41] thanks [20:20:22] New patchset: Ori.livneh; "Remove 'mediawiki_errors' class & related files" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73859 [20:20:23] heya ori-l got a sec again to brain bounce a puppet thang? [20:20:30] how is it going ori-l ? [20:20:50] paravoid: see above. [20:20:59] ottomata: in an hour or so? [20:21:26] aye, i'm out in about an hour, no worries, i think if I think about this enough i'll know what I want to do [20:21:28] thanks [20:21:44] k [20:23:01] PROBLEM - DPKG on labstore3 is CRITICAL: DPKG CRITICAL dpkg reports broken packages [20:24:01] RECOVERY - DPKG on labstore3 is OK: All packages OK [20:24:51] !log Disabled MediaWiki errors Ganglia module on vanadium [20:25:03] Logged the message, Master [20:25:13] paravoid: could you merge https://gerrit.wikimedia.org/r/#/c/73859/ ? [20:25:27] oh yeah, sorry [20:25:40] np [20:25:41] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73859 [20:25:55] done [20:27:41] PROBLEM - Host labstore3 is DOWN: PING CRITICAL - Packet loss = 100% [20:28:21] RECOVERY - Host labstore3 is UP: PING OK - Packet loss = 0%, RTA = 26.73 ms [20:28:25] paravoid, ubuntuish q for you [20:28:44] i want to export an env var [20:28:52] for a cdh4 piece (oozie) [20:28:58] i want any oozie client to have this env var exported [20:29:17] the included oozie shell script wrapper doesn't really read any config files [20:29:31] !log catrope Started syncing Wikimedia installation... : Updating VisualEditor to master [20:29:31] should I have puppet install a file in /etc/profile.d? [20:29:34] Logged the message, Master [20:29:36] /etc/profile.d/oozie.sh perhaps? [20:29:39] profile.d?
[20:29:40] no [20:30:03] from: http://oozie.apache.org/docs/3.1.3-incubating/DG_Examples.html [20:30:03] To avoid having to provide the -oozie option with the Oozie URL with every oozie command, set OOZIE_URL env variable to the Oozie URL in the shell environment. For example: [20:30:11] basically, i just want to puppetize OOZIE_URL [20:30:15] not sure of the best place to do that [20:31:30] the upstart script? [20:31:36] no, there's no daemon [20:31:45] this is just a client shell wrapper to submit oozie jobs [20:31:56] which users run oozie? [20:31:59] just random users? [20:32:02] yes [20:32:13] real people, if there is a regular thing we want running [20:32:17] we run it as the stats user [20:32:38] but for one offs, anybody who can run hadoop jobs can submit their own oozie workflows and coordinators [20:33:15] meh [20:33:23] maybe profile.d indeed, dunno [20:33:32] it doesn't look especially clear, but maybe it's not that bad [20:37:40] ok danke [20:38:27] ottomata: see modules/env in mediawiki-vagrant for one way to manage those [20:39:47] * RoanKattouw WTFs @ https://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&m=cpu_report&s=by+name&c=API+application+servers+pmtpa&h=&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 [20:39:57] Someone hitting the testwiki API perhaps? Didn't testwiki move to eqiad, though? [20:39:58] !log catrope Finished syncing Wikimedia installation... : Updating VisualEditor to master [20:40:08] Oh, nm, it's scap of course [20:40:08] Logged the message, Master [20:40:10] I'm an idiot [20:44:21] !log Graceful reload of Zuul to fast-forward deployment to Ia53d412b029205 [20:44:32] Logged the message, Master [20:45:04] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73852 [20:45:17] Krinkle: hey :) [20:45:41] Krinkle: do you got a timeslot this week to do the CI checkin together ? [20:45:56] Yeah [20:45:58] Krinkle: could not attend today, I had an appointment :/ [20:46:11] !log catrope synchronized wmf-config/InitialiseSettings.php 'Enable VisualEditor for all users (anons too) on enwiki' [20:46:11] ... it was today? [20:46:13] Logged the message, Master [20:46:22] Krinkle: I think? I canceled it a long time ago. [20:46:44] i remember an email about that from you but it didn't mention any date or anything [20:46:48] so I wasn't sure what it was about [20:46:54] Krinkle: oops [20:47:02] I thought it was for last week [20:47:32] hashar: it's next monday, not this monday (the scheduled one, anyway) [20:47:32] Krinkle: ah maybe last week. Anyway if you are willing to, we can get one this week. I let you pick the date. [20:47:52] tomorrow 6PM [20:48:53] Krinkle: works for me. I will be at the coworking place [20:49:03] 'the' ? [20:50:05] Krinkle: I work from time to time at a coworking place with various web peoples :) [20:50:33] nice [20:50:44] hashar: Is it in France? [20:50:50] yeah [20:51:22] Krinkle: there is a bunch of pictures on http://cantine.atlantic2.org/les-espaces/ [20:51:47] hashar: What is it like? [20:51:50] nice [20:52:01] PROBLEM - Puppet freshness on ms-be5 is CRITICAL: No successful Puppet run in the last 10 hours [20:53:32] Krinkle: sending an invite for 6pm, I will have to be out by 6:45pm but we can start 15 minutes earlier. [20:56:26] New patchset: Ori.livneh; "Fix socket ID handling in EventLogging module" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73863 [20:56:34] paravoid: ^^ [20:56:43] i think i'm nearly there [20:57:19] apergos: is still ms-be5 off?
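(The profile.d approach settled on above, as the shipped file itself would read — the host name is a placeholder and 11000 is Oozie's stock port; login shells source everything in /etc/profile.d, so interactive oozie invocations pick the URL up without needing -oozie each time:)

    # /etc/profile.d/oozie.sh -- installed by puppet as a plain file resource
    export OOZIE_URL=http://<oozie-host>:11000/oozie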
is ms-be5 still off even :) [20:57:51] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73863 [20:58:26] done [20:58:46] thanks again [20:59:37] no worries [20:59:44] sorry for being slow, in a meeting now [20:59:50] that I didn't know I was going to be in last week :) [21:00:13] but I didn't cancel since I am still around, just slow [21:13:22] RECOVERY - Host lanthanum is UP: PING WARNING - Packet loss = 80%, RTA = 0.29 ms [21:15:32] PROBLEM - Host lanthanum is DOWN: CRITICAL - Host Unreachable (208.80.154.13) [21:16:31] paravoid: http://ganglia.wikimedia.org/latest/graph.php?r=hour&z=xlarge&title=EventLogging&vl=events+%2F+sec&x=&n=&hreg[]=vanadium.eqiad.wmnet&mreg[]=%5E%28client-generated-raw%7Cserver-generated-raw%7Cvalid-events%29%24&gtype=stack&glegend=show&aggregate=1 [21:16:34] :) [21:16:53] :) [21:17:08] :-] [21:17:14] you should try 'eventloggingctl status' on vanadium, it's cool [21:17:24] hey hashar! :) [21:17:33] busy busy in conf call :-] [21:17:57] ori-l: yesterday I discovered unit testing in python. That is as much fun as phpunit [21:18:33] hashar: yes, it could be better. i like 'nose2' though [21:18:54] hashar: btw, 'vagrant run-tests' runs mediawiki's phpunit tests on the guest vm [21:19:00] as of yesterday :) [21:19:03] awesome! [21:20:47] ori-l: I think you had a script to parse the exception/fatal logs, is that just in a dream ? [21:21:16] I looked it up last week and couldn't find it. [21:21:24] New patchset: RobH; "rt5074 lanthanum moved to internal ip" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73867 [21:21:51] hashar: https://git.wikimedia.org/blob/mediawiki%2Ftools%2Ffluoride.git/78c9bfd0e9f7a4a7b5b75522b81e70c38eb9aa6c/errproc.py [21:22:03] ah yeah that one [21:22:07] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73867 [21:22:29] ori-l: I think we could play with it on beta since there is an event logging instance there now :) [21:23:01] but i basically committed the state of my working dir for the sake of having something to collab on, nothing really useful [21:24:35] d'oh, need to fix one more thing [21:29:52] * RobH stabs puppet until it dies [21:29:55] update brewster damn you. [21:40:14] New patchset: RobH; "removing lanthanum from autopart" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73868 [21:41:39] New patchset: Ori.livneh; "Drop unsupported write concern parameter from MongoDB URI" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73869 [21:41:44] ^ paravoid, i think that's the last one. [21:42:47] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73869 [21:42:54] New review: RobH; "im out of witty stuff to type today" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/73868 [21:42:55] thanks again! [21:42:55] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73868 [21:49:52] !log All EventLogging services start/running; data looks good. [21:49:53] off to bed \O/ [21:50:04] hashar: good night! [21:50:04] Logged the message, Master [21:50:20] daughter.teeth(16).grow() [21:50:21] :( [21:50:24] good job ori-l [21:50:26] going to be a lonnng night [21:50:26] hehe [21:50:30] have fun everyone [21:50:47] paravoid: thanks very very much for the help!
[21:51:50] paravoid: https://dpaste.de/fNKTr/raw/ :)) [21:52:06] woo [21:52:07] that's great [21:52:22] EL is really getting to be nice [21:52:45] maybe you need more of a project page on wikitech to advertise it better for third-parties? [21:53:07] https://wikitech.wikimedia.org/wiki/EventLogging [21:53:38] Elsie: mostly out-of-date now and horribly inadequate [21:53:57] but yeah, I need to fix that. [21:54:11] Sounds like it fits in fine with that wiki. ;-) [21:54:21] Though I saw there's now an Obsolete namespace. [21:54:43] an Obsolete namespace or an obsolete namespace? [21:55:18] https://wikitech.wikimedia.org/w/index.php?title=Special%3AAllPages&from=&to=&namespace=110 :-) [21:56:02] PROBLEM - Puppet freshness on mw1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:00:53] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73758 [22:26:24] New patchset: Ori.livneh; "Add 'fluoride' git-deploy target." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73885 [22:26:31] Ryan_Lane: ^ [22:26:41] though I didn't know where the mapping to a git repository URL is declared [22:27:06] Wait, so there's fluoride and fluorine? :) [22:27:20] the name is a pun on fluorine, since it handles mediawiki errors [22:27:27] ori-l: There isn't one. You're responsible for creating a repo in /srv/deployment/whatever [22:27:34] or a checkout [22:27:49] well, will handle; i need to consolidate some code fragments in that repo first. [22:28:00] ah, good to know. [22:28:31] i need a root to do that for me, though. [22:29:50] !log kaldari synchronized php-1.22wmf10/extensions/WikiLove/WikiLove.hooks.php 'Fixing WikiLove regression on wmf10' [22:30:02] Logged the message, Master [22:36:29] PROBLEM - Puppet freshness on searchidx1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:38:04] New review: Ori.livneh; "Needs a deploy dir in tin; I can't create one myself." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/73885 [22:43:29] PROBLEM - Puppet freshness on rubidium is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on ekrem is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on manganese is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on mw1007 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on mw1041 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on mw1043 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on mw1063 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:29] PROBLEM - Puppet freshness on mw1087 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:30] PROBLEM - Puppet freshness on mw1171 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:30] PROBLEM - Puppet freshness on mw1197 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:31] PROBLEM - Puppet freshness on mw121 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:31] PROBLEM - Puppet freshness on mw1210 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:32] PROBLEM - Puppet freshness on mw58 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:32] PROBLEM - Puppet freshness on search1024 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:33] PROBLEM - Puppet freshness on search18 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:33] PROBLEM - Puppet freshness on solr1003 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:34] PROBLEM - Puppet freshness on solr3 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:34] PROBLEM - Puppet freshness on sq76 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:35] PROBLEM - Puppet freshness on srv292 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:35] PROBLEM - Puppet freshness on stat1 is CRITICAL: No successful Puppet run in the last 10 hours [22:44:36] PROBLEM - Puppet freshness on titanium is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on analytics1014 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on amssq53 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on cp3012 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on cp1005 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on cp3009 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on helium is CRITICAL: No successful Puppet run in the last 10 hours [22:45:29] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:30] PROBLEM - Puppet freshness on mc1007 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:30] PROBLEM - Puppet freshness on db1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:31] New patchset: Andrew Bogott; "Add sysctlfile module and one use case." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/73888 [22:45:31] PROBLEM - Puppet freshness on db1044 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:31] PROBLEM - Puppet freshness on db39 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:32] PROBLEM - Puppet freshness on mw1032 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:32] PROBLEM - Puppet freshness on db1031 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:33] PROBLEM - Puppet freshness on ms10 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:33] PROBLEM - Puppet freshness on mw43 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:34] PROBLEM - Puppet freshness on mw124 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:34] PROBLEM - Puppet freshness on pc1 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:35] PROBLEM - Puppet freshness on rdb1002 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:35] PROBLEM - Puppet freshness on praseodymium is CRITICAL: No successful Puppet run in the last 10 hours [22:45:36] PROBLEM - Puppet freshness on potassium is CRITICAL: No successful Puppet run in the last 10 hours [22:45:36] PROBLEM - Puppet freshness on sq54 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:37] PROBLEM - Puppet freshness on srv255 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:37] PROBLEM - Puppet freshness on sq58 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:38] PROBLEM - Puppet freshness on wtp1015 is CRITICAL: No successful Puppet run in the last 10 hours [22:45:38] PROBLEM - Puppet freshness on srv273 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] PROBLEM - Puppet freshness on cp1010 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] PROBLEM - Puppet freshness on db1022 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] PROBLEM - Puppet freshness on ms-fe1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] PROBLEM - Puppet freshness on mw1003 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] PROBLEM - Puppet freshness on mw1024 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:29] ^demon: RoanKattouw: paravoid: Any idea what the status is of the bugzilla mail server that runs wikibugs? There's like half a dozen changes to the wikibugs bot pending (some even merged) in Gerrit, but they're all no-ops as it seems that server is not under puppet control or something. 
[22:46:29] PROBLEM - Puppet freshness on mw1033 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:30] PROBLEM - Puppet freshness on mw1046 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:30] PROBLEM - Puppet freshness on mw106 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:30] PROBLEM - Puppet freshness on mw1069 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:31] PROBLEM - Puppet freshness on mw1150 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:31] PROBLEM - Puppet freshness on mw1189 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:32] PROBLEM - Puppet freshness on mw1201 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:32] PROBLEM - Puppet freshness on mw1205 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:33] PROBLEM - Puppet freshness on mw2 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:33] PROBLEM - Puppet freshness on mw35 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:34] PROBLEM - Puppet freshness on mw79 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:34] PROBLEM - Puppet freshness on mw98 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:35] PROBLEM - Puppet freshness on rdb1003 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:35] PROBLEM - Puppet freshness on search33 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:36] PROBLEM - Puppet freshness on srv193 is CRITICAL: No successful Puppet run in the last 10 hours [22:46:36] PROBLEM - Puppet freshness on wtp1007 is CRITICAL: No successful Puppet run in the last 10 hours [22:47:29] PROBLEM - Puppet freshness on amslvs1 is CRITICAL: No successful Puppet run in the last 10 hours [22:47:29] PROBLEM - Puppet freshness on amssq48 is CRITICAL: No successful Puppet run in the last 10 hours [22:47:29] PROBLEM - Puppet freshness on analytics1004 is CRITICAL: No successful Puppet run in the last 10 hours [22:47:29] PROBLEM - Puppet freshness on calcium is CRITICAL: No successful Puppet run in the last 10 hours [22:47:29] PROBLEM - Puppet freshness on db1033 is CRITICAL: No successful Puppet run in the last 10 hours [22:47:40] New review: Andrew Bogott; "Andrew and Leslie, let me know if you approve of this in concept and I will flesh it out a bit." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73888 [22:48:14] apergos: Perhaps we can get some changes on there by other means if it that server can't be upgraded yet? All this waiting for > 6 months doesn't seem very workable either. [22:48:33] Krinkle: no idea [22:48:41] <^demon> Krinkle: It's all run from mchenry I believe. [22:48:44] mutante would be your best bet but he's on vac [22:48:47] <^demon> But beyond that, I know little. 
[22:49:29] PROBLEM - Puppet freshness on cp1058 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on antimony is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on cp3011 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on dataset1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on db1002 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on db1014 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:29] PROBLEM - Puppet freshness on labstore1 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:30] PROBLEM - Puppet freshness on labstore1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:30] PROBLEM - Puppet freshness on labstore3 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:31] PROBLEM - Puppet freshness on mc1012 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:31] PROBLEM - Puppet freshness on ms-be12 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:32] PROBLEM - Puppet freshness on ms-fe1002 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:32] PROBLEM - Puppet freshness on mw1027 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:33] PROBLEM - Puppet freshness on mw1104 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:33] PROBLEM - Puppet freshness on mw1206 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:34] PROBLEM - Puppet freshness on mw1208 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:34] PROBLEM - Puppet freshness on mw1211 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:35] PROBLEM - Puppet freshness on mw42 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:35] PROBLEM - Puppet freshness on mw75 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:36] PROBLEM - Puppet freshness on search25 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:36] PROBLEM - Puppet freshness on solr1 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:37] PROBLEM - Puppet freshness on srv242 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:37] PROBLEM - Puppet freshness on ssl1 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:38] PROBLEM - Puppet freshness on virt11 is CRITICAL: No successful Puppet run in the last 10 hours [22:49:38] PROBLEM - Puppet freshness on wtp1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:50:29] PROBLEM - Puppet freshness on amssq51 is CRITICAL: No successful Puppet run in the last 10 hours [22:50:29] PROBLEM - Puppet freshness on amssq56 is CRITICAL: No successful Puppet run in the last 10 hours [22:50:29] PROBLEM - Puppet freshness on analytics1002 is CRITICAL: No successful Puppet run in the last 10 hours [22:50:29] PROBLEM - Puppet freshness on analytics1022 is CRITICAL: No successful Puppet run in the last 10 hours [22:50:29] PROBLEM - Puppet freshness on bast1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:52:29] PROBLEM - Puppet freshness on cp1038 is CRITICAL: No successful Puppet run in the last 10 hours [22:52:29] PROBLEM - Puppet freshness on analytics1008 is CRITICAL: No successful Puppet run in the last 10 hours [22:52:29] PROBLEM - Puppet freshness on db1058 is CRITICAL: No successful Puppet run 
in the last 10 hours [22:52:29] PROBLEM - Puppet freshness on db65 is CRITICAL: No successful Puppet run in the last 10 hours [22:52:29] PROBLEM - Puppet freshness on db77 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:29] PROBLEM - Puppet freshness on aluminium is CRITICAL: No successful Puppet run in the last 10 hours [22:54:29] PROBLEM - Puppet freshness on db1010 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:29] PROBLEM - Puppet freshness on db46 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:29] PROBLEM - Puppet freshness on db55 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:29] PROBLEM - Puppet freshness on es1009 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:30] PROBLEM - Puppet freshness on labsdb1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:30] PROBLEM - Puppet freshness on mw1022 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:31] PROBLEM - Puppet freshness on mw1040 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:31] PROBLEM - Puppet freshness on mw1062 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:32] PROBLEM - Puppet freshness on mw107 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:32] PROBLEM - Puppet freshness on mw109 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:33] PROBLEM - Puppet freshness on mw1132 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:33] PROBLEM - Puppet freshness on mw1185 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:34] PROBLEM - Puppet freshness on mw1218 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:34] PROBLEM - Puppet freshness on mw40 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:35] PROBLEM - Puppet freshness on mw53 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:35] PROBLEM - Puppet freshness on mw70 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:36] PROBLEM - Puppet freshness on snapshot4 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:36] PROBLEM - Puppet freshness on sq55 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:37] PROBLEM - Puppet freshness on sq64 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:37] PROBLEM - Puppet freshness on sq81 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:38] PROBLEM - Puppet freshness on srv296 is CRITICAL: No successful Puppet run in the last 10 hours [22:54:38] PROBLEM - Puppet freshness on wtp1021 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on amssq59 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on colby is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on db31 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on cp3006 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on lvs6 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on mc1011 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:29] PROBLEM - Puppet freshness on ms-fe1003 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:30] PROBLEM - Puppet freshness on mw1072 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:30] 
PROBLEM - Puppet freshness on mw1134 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:31] PROBLEM - Puppet freshness on mw117 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:31] PROBLEM - Puppet freshness on mw1178 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:32] PROBLEM - Puppet freshness on mw1219 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:32] PROBLEM - Puppet freshness on mw87 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:33] PROBLEM - Puppet freshness on professor is CRITICAL: No successful Puppet run in the last 10 hours [22:55:33] PROBLEM - Puppet freshness on search1006 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:34] PROBLEM - Puppet freshness on search1011 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:34] PROBLEM - Puppet freshness on srv285 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:35] PROBLEM - Puppet freshness on virt2 is CRITICAL: No successful Puppet run in the last 10 hours [22:55:52] ffs [22:56:29] PROBLEM - Puppet freshness on amssq32 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on amssq43 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on analytics1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on db1011 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on cp1015 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on db59 is CRITICAL: No successful Puppet run in the last 10 hours [22:56:29] PROBLEM - Puppet freshness on db29 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:29] PROBLEM - Puppet freshness on analytics1020 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:29] PROBLEM - Puppet freshness on cp1001 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:29] PROBLEM - Puppet freshness on cp1054 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:29] PROBLEM - Puppet freshness on cp1039 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:29] PROBLEM - Puppet freshness on cp3019 is CRITICAL: No successful Puppet run in the last 10 hours [22:58:37] New patchset: GWicke; "Slightly increase the Parsoid template update dequeue rate" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73894 [23:00:29] PROBLEM - Puppet freshness on amssq41 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on amssq60 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on analytics1006 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on analytics1024 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on cp1057 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on db1028 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:29] PROBLEM - Puppet freshness on db38 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:30] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/73894 [23:00:30] PROBLEM - Puppet freshness on db68 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:30] PROBLEM - 
Puppet freshness on ersch is CRITICAL: No successful Puppet run in the last 10 hours [23:00:31] PROBLEM - Puppet freshness on lvs3 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:31] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:32] PROBLEM - Puppet freshness on ms-be7 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:32] PROBLEM - Puppet freshness on mw1039 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:33] PROBLEM - Puppet freshness on mw1177 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:33] PROBLEM - Puppet freshness on mw1184 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:34] PROBLEM - Puppet freshness on mw120 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:34] PROBLEM - Puppet freshness on mw96 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:35] PROBLEM - Puppet freshness on searchidx2 is CRITICAL: No successful Puppet run in the last 10 hours [23:00:35] PROBLEM - Puppet freshness on ssl1004 is CRITICAL: No successful Puppet run in the last 10 hours [23:01:29] PROBLEM - Puppet freshness on amssq58 is CRITICAL: No successful Puppet run in the last 10 hours [23:01:29] PROBLEM - Puppet freshness on cp1019 is CRITICAL: No successful Puppet run in the last 10 hours [23:01:29] PROBLEM - Puppet freshness on db1003 is CRITICAL: No successful Puppet run in the last 10 hours [23:01:29] PROBLEM - Puppet freshness on db1041 is CRITICAL: No successful Puppet run in the last 10 hours [23:01:29] PROBLEM - Puppet freshness on db1043 is CRITICAL: No successful Puppet run in the last 10 hours [23:02:07] !log gwicke synchronized wmf-config/CommonSettings.php 'Slightly increase Parsoid dequeue rate' [23:02:18] Logged the message, Master [23:03:29] PROBLEM - Puppet freshness on amslvs3 is CRITICAL: No successful Puppet run in the last 10 hours [23:03:29] PROBLEM - Puppet freshness on analytics1013 is CRITICAL: No successful Puppet run in the last 10 hours [23:03:29] PROBLEM - Puppet freshness on analytics1026 is CRITICAL: No successful Puppet run in the last 10 hours [23:03:29] PROBLEM - Puppet freshness on cp1004 is CRITICAL: No successful Puppet run in the last 10 hours [23:03:29] PROBLEM - Puppet freshness on cp1011 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:27] ori-l: what's fluoride? 
[23:04:29] PROBLEM - Puppet freshness on amssq46 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on amssq62 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on cp1055 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on db1053 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on es8 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on ms1002 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:29] PROBLEM - Puppet freshness on mw102 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:30] PROBLEM - Puppet freshness on mw6 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:30] PROBLEM - Puppet freshness on srv241 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:31] PROBLEM - Puppet freshness on ssl3002 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:31] PROBLEM - Puppet freshness on virt5 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:32] PROBLEM - Puppet freshness on virt7 is CRITICAL: No successful Puppet run in the last 10 hours [23:04:32] PROBLEM - Puppet freshness on wtp1023 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:26] also, let me track down what's changing the ownership of /srv/deployment [23:05:29] PROBLEM - Puppet freshness on amssq38 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:29] PROBLEM - Puppet freshness on analytics1027 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:29] PROBLEM - Puppet freshness on cp1052 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:29] PROBLEM - Puppet freshness on cp1059 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:29] PROBLEM - Puppet freshness on db1006 is CRITICAL: No successful Puppet run in the last 10 hours [23:05:36] !log csteipp synchronized php-1.22wmf10/extensions/CentralAuth 'Fix sul2 regression' [23:05:46] Logged the message, Master [23:06:46] Ryan_Lane: vanadium gets forwarded mw errors / fatals via udp from fluorine. I had a ganglia metric module in operations/puppet, but bundled into the EventLogging module for reasons of convenience and laziness [23:06:56] cool [23:06:57] I removed them as part of the refactoring of that module, so now they need a home. [23:07:22] PROBLEM - Puppet freshness on amssq33 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:22] PROBLEM - Puppet freshness on amssq39 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:22] PROBLEM - Puppet freshness on amssq42 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:22] PROBLEM - Puppet freshness on amssq45 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:22] PROBLEM - Puppet freshness on amssq49 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:22] PROBLEM - Puppet freshness on amssq54 is CRITICAL: No successful Puppet run in the last 10 hours [23:07:27] I'm going to merge that, but I want to figure out what's screwing up perms first [23:07:43] Ryan_Lane: thanks & no rush [23:08:18] RoanKattouw: I'm done. You're up whenever you're ready. [23:08:45] I need some root help to attach the node debugger to a hanging Parsoid worker on wtp1001: Please do a "kill -SIGUSR1 1325" on wtp1001 [23:09:00] i can do [23:09:21] TimStarling: no idea [23:09:25] gwicke: done. 
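Some context for the exchange above: with the legacy Node.js debugger of that era, sending SIGUSR1 to a running node process turns on its in-process debug agent (listening on port 5858 by default), and "node debug -p <pid>" attaches a command-line client to it. A minimal sketch of the workflow against the worker PID from the log; "pause" and "bt" are the debugger commands used further down to pull a backtrace out of the hung worker:

    kill -SIGUSR1 1325    # ask the worker to start its V8 debug agent
    node debug -p 1325    # attach the CLI debugger client to that pid
    # at the debug> prompt:
    #   pause   - interrupt execution wherever the process currently is
    #   bt      - print a backtrace of the paused stack

As the log shows next, the attach has to be done by a user allowed to signal the process (root here); otherwise it fails with EPERM.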
[23:09:28] RobH: thanks! [23:09:32] welcome [23:09:47] wrong channel, but still [23:09:58] RobH: meh: [23:10:01] node debug -p 1325 [23:10:03] debug> There was an internal error in Node's debugger. Please report this bug. [23:10:04] EPERM, Operation not permitted [23:10:06] Error: EPERM, Operation not permitted [23:10:22] PROBLEM - Puppet freshness on amssq52 is CRITICAL: No successful Puppet run in the last 10 hours [23:10:22] PROBLEM - Puppet freshness on analytics1012 is CRITICAL: No successful Puppet run in the last 10 hours [23:10:22] PROBLEM - Puppet freshness on analytics1015 is CRITICAL: No successful Puppet run in the last 10 hours [23:10:22] PROBLEM - Puppet freshness on cp1007 is CRITICAL: No successful Puppet run in the last 10 hours [23:10:22] PROBLEM - Puppet freshness on cp1056 is CRITICAL: No successful Puppet run in the last 10 hours [23:10:51] gwicke: i can do that fine... as root. [23:11:02] root@wtp1001:~# node debug -p 1325 [23:11:03] connecting... ok [23:11:09] and then im in debug> [23:11:22] AaronSchulz: probably about right [23:11:29] but still only a ~1.5us overhead [23:11:41] RobH: can you do a 'pause' followed by a 'bt'? [23:11:58] found it [23:12:12] New patchset: Ryan Lane; "Ensure /srv/deployment's group is wikidev" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73897 [23:12:15] RobH: http://nodejs.org/api/debugger.html [23:12:22] PROBLEM - Puppet freshness on cp1013 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on cp1050 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on cp3010 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on db1048 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on db1036 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on gadolinium is CRITICAL: No successful Puppet run in the last 10 hours [23:12:22] PROBLEM - Puppet freshness on ms-be4 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:23] PROBLEM - Puppet freshness on lvs5 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:23] PROBLEM - Puppet freshness on ms5 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:24] PROBLEM - Puppet freshness on mw1044 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:24] PROBLEM - Puppet freshness on mw1162 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:25] PROBLEM - Puppet freshness on mw1101 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:25] PROBLEM - Puppet freshness on mw119 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:27] PROBLEM - Puppet freshness on mw5 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:27] PROBLEM - Puppet freshness on mw1196 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:27] PROBLEM - Puppet freshness on mw83 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:27] PROBLEM - Puppet freshness on mw90 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:28] PROBLEM - Puppet freshness on search1014 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:28] PROBLEM - Puppet freshness on search21 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:29] PROBLEM - Puppet freshness on sq79 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:29] PROBLEM - Puppet 
freshness on srv275 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:30] PROBLEM - Puppet freshness on srv290 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:30] PROBLEM - Puppet freshness on wtp1024 is CRITICAL: No successful Puppet run in the last 10 hours [23:12:31] PROBLEM - Puppet freshness on zirconium is CRITICAL: No successful Puppet run in the last 10 hours [23:12:51] gwicke: pming you the output [23:13:06] TimStarling: it wouldn't surprise me if we were talking about bodiless functions [23:13:08] RobH: thanks.. that is very undefined ;) [23:13:19] uhh, should i restart it or anything [23:13:20] ? [23:13:22] PROBLEM - Puppet freshness on db1020 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on lvs1 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on mc1009 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on ms-be1012 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on mw1111 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on mw1124 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:22] PROBLEM - Puppet freshness on mw1125 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:23] PROBLEM - Puppet freshness on mw1130 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:23] PROBLEM - Puppet freshness on mw1147 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:24] PROBLEM - Puppet freshness on mw1161 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:24] PROBLEM - Puppet freshness on mw1190 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:25] PROBLEM - Puppet freshness on mw1214 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:25] PROBLEM - Puppet freshness on mw38 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:26] PROBLEM - Puppet freshness on mw8 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:26] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:27] PROBLEM - Puppet freshness on search36 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:27] PROBLEM - Puppet freshness on solr2 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:28] due to the pause is why i ask. [23:13:28] PROBLEM - Puppet freshness on sq51 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:28] PROBLEM - Puppet freshness on srv301 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:29] PROBLEM - Puppet freshness on wtp1011 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:29] PROBLEM - Puppet freshness on wtp1018 is CRITICAL: No successful Puppet run in the last 10 hours [23:13:32] yeah, let's not talk in this channel [23:13:39] we all hate you icinga-wm. [23:13:45] RobH: no, I was hoping for actual source locations [23:14:02] this points towards something native, possibly regexps [23:14:05] undefined~ [23:14:08] !!
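The "something native, possibly regexps" reading above is plausible: when V8 is executing inside its regexp engine, a debugger pause lands in generated native code and the backtrace can come back with undefined source positions, much like the traces being compared here. As an illustration only (not the actual Parsoid bug), a classic catastrophic-backtracking pattern that pins a Node process inside native regexp code:

    # nested quantifiers plus a forced non-match = exponential backtracking;
    # this one-liner keeps the process busy for a very long time
    node -e 'var s = new Array(30).join("a"); /^(a+)+$/.test(s + "b");'

A pause/bt taken while something like this runs would look much like the output discussed here: the process is alive, but the stack shows no useful JavaScript source locations.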
[23:14:09] heh [23:14:20] I don't hate icinga-wm, I hate our horribly broken setup that breaks every few days [23:14:22] PROBLEM - Puppet freshness on amssq44 is CRITICAL: No successful Puppet run in the last 10 hours [23:14:22] PROBLEM - Puppet freshness on analytics1023 is CRITICAL: No successful Puppet run in the last 10 hours [23:14:22] PROBLEM - Puppet freshness on cp1063 is CRITICAL: No successful Puppet run in the last 10 hours [23:14:22] PROBLEM - Puppet freshness on db1004 is CRITICAL: No successful Puppet run in the last 10 hours [23:14:22] PROBLEM - Puppet freshness on db43 is CRITICAL: No successful Puppet run in the last 10 hours [23:14:36] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73897 [23:14:43] so the question becomes 'who will become so annoyed by this first and fix it' [23:15:02] man a lot of our task lists have that as a modifier. [23:15:04] RobH: oh, we have been bitten by a bug in the V8 regexp engine repeatedly [23:15:26] oh, my 'fix this' was in reference to the icinga puppet freshness alerts, cuz it's been an issue for a while [23:15:39] ah, k [23:17:23] RobH: can you try the same on 27403? [23:20:00] New patchset: Ryan Lane; "Add 'fluoride' git-deploy target." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73885 [23:21:13] gwicke: eh? [23:21:23] oh, trace, yes [23:21:50] New review: Ryan Lane; "I've fixed the directory ownership issue, so you can create the directory yourself." [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/73885 [23:21:51] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73885 [23:21:57] gwicke: all undefined as well for -p 27403 on wtp1001 [23:22:10] RobH: similar line numbers? [23:22:11] paused and backtraced [23:22:17] line 0-9 yep [23:22:24] oh, wait, slots [23:22:31] line numbers are....... not identical, but similar [23:22:44] ok, thanks! [23:22:50] ori-l: heh.
I didn't pay enough attention to this patchset [23:22:54] gwicke: http://pastebin.com/2ihvbsYa [23:23:16] you removed the sync_hook_link for eventlogging [23:24:00] d'oh [23:24:22] PROBLEM - Puppet freshness on amssq34 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:22] PROBLEM - Puppet freshness on cp1002 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:22] PROBLEM - Puppet freshness on cp1012 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:22] PROBLEM - Puppet freshness on db1023 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:22] PROBLEM - Puppet freshness on db34 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:23] PROBLEM - Puppet freshness on es1005 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:23] PROBLEM - Puppet freshness on mc1005 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:24] PROBLEM - Puppet freshness on mw1139 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:24] PROBLEM - Puppet freshness on pc1003 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:25] PROBLEM - Puppet freshness on search1018 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:25] PROBLEM - Puppet freshness on srv258 is CRITICAL: No successful Puppet run in the last 10 hours [23:24:30] New patchset: Ryan Lane; "Add sync_hook_link back in for eventlogging" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73898 [23:24:46] New patchset: Cmjohnson; "giving carbon raid1-lvm.cf" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73899 [23:25:04] I was doing the puppet merge on sockpuppet, which has more distinct colors, and thought "this doesn't look right" [23:25:17] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73898 [23:25:22] PROBLEM - Puppet freshness on cp1046 is CRITICAL: No successful Puppet run in the last 10 hours [23:25:22] PROBLEM - Puppet freshness on db40 is CRITICAL: No successful Puppet run in the last 10 hours [23:25:22] PROBLEM - Puppet freshness on fenari is CRITICAL: No successful Puppet run in the last 10 hours [23:25:22] PROBLEM - Puppet freshness on ms-be1 is CRITICAL: No successful Puppet run in the last 10 hours [23:25:22] PROBLEM - Puppet freshness on mw20 is CRITICAL: No successful Puppet run in the last 10 hours [23:25:22] PROBLEM - Puppet freshness on search14 is CRITICAL: No successful Puppet run in the last 10 hours [23:25:51] thanks [23:26:12] yw [23:26:22] force running puppet on sockpuppet [23:26:30] then you should be good to go for the new repo [23:26:59] Change merged: Cmjohnson; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/73899 [23:27:18] Ryan_Lane: I should wait for the puppet run before creating the directory? [23:27:24] nah [23:27:26] oh, you mean I *need* to wait for it for the permission to be fixed [23:27:31] hm [23:27:34] it should be fixed [23:27:41] I haven't checked; will do now. [23:28:05] sockpuppet needs to run to update its salt info and such [23:28:15] yep, fixed. [23:28:45] RobH: could you open such a debugger session with sudo in a screen running as gwicke?
[23:28:59] then I could attach to that with screen -x [23:29:07] and try to figure out which page this is [23:29:27] gwicke: i have to remember that trick [23:30:29] ori-l: screen -x is pretty handy in general [23:32:20] yes, I didn't know you could attach to a session that is already attached elsewhere [23:33:22] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours [23:33:22] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:22] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:23] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:23] PROBLEM - Puppet freshness on mw1173 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:23] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:23] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours [23:33:24] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours [23:36:26] gwicke: uhhh, i dunno if im allowed to do that =P (me doesnt wanna get in trouble) [23:36:35] seems legit to me, but... [23:37:41] RobH: since I can deploy that code already, I don't see how entering the debugger would give me additional caps [23:38:02] oh, hrmm, true [23:38:12] i misread. [23:38:14] lemme try [23:38:43] you can probably start it as 'parsoid' too [23:38:46] !log reinstalling carbon [23:38:56] Logged the message, Master [23:40:02] heh [23:40:08] Cannot open your terminal '/dev/pts/1' - please check. [23:40:12] PROBLEM - Host carbon is DOWN: CRITICAL - Host Unreachable (208.80.154.10) [23:40:18] google 'that only happens when you su to a user then try to run screen' [23:40:23] no shit. [23:40:35] hmm [23:41:15] maybe "sudo -u gwicke bash -l" and then 'screen'? [23:41:32] RECOVERY - Host carbon is UP: PING OK - Packet loss = 0%, RTA = 0.27 ms [23:41:44] heh, got it working [23:42:17] cool, see the screen [23:45:07] so online all the folks say 'assign rights to the pts' but thats bullshit [23:45:15] as it opens your pts to everyone on box reading [23:45:57] seems script command has side effect of opening a new terminal device (which i point at dev null) and permission issue is gone (for just that one screen while active) [23:45:59] yay linux. [23:53:21] Ryan_Lane: do I need to commit some change to the repo to do the initial sync? [23:53:42] I got 'fatal: Unknown commit none/master', full log at https://dpaste.de/kvey6/raw/ [23:54:34] I've seen that error before [23:54:39] Don't remember what it meant though [23:55:19] RobH: just found out that the loop is in the tokenizer, thanks for your help!! [23:55:22] ori-l: yes [23:55:25] it can't be empty [23:55:56] for the initial sync there's a crappy bootstrapping that needs to occur [23:56:13] when I get some time to work on this again, I'm going to fix that [23:57:33] it's no big deal, but is that all? [23:57:39] or should I do anything else? [23:58:49] I believe that's it [23:58:52] made a commit, same error [23:59:16] https://dpaste.de/juQ0B/raw/ [23:59:42] hmmm [23:59:56] that's a git error...
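For reference, the screen workaround pieced together earlier in the log fits into a few commands. screen fails with "Cannot open your terminal" after su/sudo to another user because the current pseudo-terminal still belongs to the original user; wrapping the command in script pointed at /dev/null allocates a fresh pty, after which screen starts cleanly. Once the session exists, the same user can attach to it from another login with screen -x, even while it is attached elsewhere. A sketch, with the session name being made up:

    # on wtp1001, as root; 'nodedbg' is a hypothetical session name
    sudo -u gwicke script -q -c 'screen -S nodedbg' /dev/null
    # then, from gwicke's own shell on the same host:
    screen -x nodedbg

This also avoids the "assign rights to the pts" suggestion dismissed above, which would have made the original terminal readable by everyone on the box.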