[00:11:47] RECOVERY - Puppet freshness on wtp1 is OK: puppet ran at Sun Jun 9 00:11:42 UTC 2013
[00:12:27] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 00:12:20 UTC 2013
[00:12:27] PROBLEM - Puppet freshness on wtp1 is CRITICAL: No successful Puppet run in the last 10 hours
[00:12:37] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[00:15:17] RECOVERY - Puppet freshness on lardner is OK: puppet ran at Sun Jun 9 00:15:11 UTC 2013
[00:15:27] RECOVERY - Puppet freshness on tola is OK: puppet ran at Sun Jun 9 00:15:19 UTC 2013
[00:15:47] PROBLEM - Puppet freshness on lardner is CRITICAL: No successful Puppet run in the last 10 hours
[00:16:18] PROBLEM - Puppet freshness on tola is CRITICAL: No successful Puppet run in the last 10 hours
[00:16:57] RECOVERY - Puppet freshness on kuo is OK: puppet ran at Sun Jun 9 00:16:55 UTC 2013
[00:17:47] PROBLEM - Puppet freshness on kuo is CRITICAL: No successful Puppet run in the last 10 hours
[00:34:17] RECOVERY - Puppet freshness on wtp1 is OK: puppet ran at Sun Jun 9 00:34:11 UTC 2013
[00:34:27] PROBLEM - Puppet freshness on wtp1 is CRITICAL: No successful Puppet run in the last 10 hours
[00:34:37] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 00:34:28 UTC 2013
[00:34:37] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[00:35:07] RECOVERY - Puppet freshness on lardner is OK: puppet ran at Sun Jun 9 00:35:04 UTC 2013
[00:35:47] PROBLEM - Puppet freshness on lardner is CRITICAL: No successful Puppet run in the last 10 hours
[01:05:06] PROBLEM - Parsoid on wtp1012 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:06:57] PROBLEM - Parsoid on wtp1004 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:07:27] PROBLEM - Parsoid on wtp1016 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:18:47] RECOVERY - Parsoid on wtp1004 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.005 second response time
[01:21:17] RECOVERY - Parsoid on wtp1016 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.010 second response time
[01:21:47] RECOVERY - Parsoid on wtp1012 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.005 second response time
[01:31:38] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset 0.00244820118 secs
[01:46:58] PROBLEM - Memcached on mc15 is CRITICAL: Connection timed out
[01:47:48] RECOVERY - Memcached on mc15 is OK: TCP OK - 0.029 second response time on port 11211
[01:57:45] Ops, need someone to kick gerrit in the butt. It has closed Zuul's event stream once again.
[01:57:49] CI is down
[01:58:04] ValueError: No JSON object could be decoded
[01:58:04] 2013-06-09 01:55:17,217 ERROR gerrit.GerritWatcher: Exception on ssh event stream:
[01:58:04] Traceback (most recent call last):
[01:58:35] !log Zuul is halted on a broken Gerrit event stream again.
[01:58:44] Logged the message, Master
[02:01:32] !log LocalisationUpdate completed (1.22wmf5) at Sun Jun 9 02:01:31 UTC 2013
[02:01:45] Logged the message, Master
[02:02:16] !log LocalisationUpdate completed (1.22wmf6) at Sun Jun 9 02:02:15 UTC 2013
[02:02:18] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset 0.003571748734 secs
[02:02:23] Logged the message, Master
[02:07:43] !log LocalisationUpdate ResourceLoader cache refresh completed at Sun Jun 9 02:07:43 UTC 2013
[02:07:51] Logged the message, Master
[02:08:50] Reedy: Are you able to restart Gerrit?
[02:10:38] PROBLEM - DPKG on mc15 is CRITICAL: Timeout while attempting connection
[02:11:38] RECOVERY - DPKG on mc15 is OK: All packages OK
[02:42:57] marktraceur: Do you know anyone in ops who might be awake at this time?
[02:43:24] I've whois'ed pretty much anyone in this channel I know but no luck
[02:43:54] Hrm
[02:44:01] Krinkle: Whatcha need to do?
[02:44:07] bblack?
[02:44:15] Well, see wikisal
[02:44:26] > Zuul is halted on a broken Gerrit event stream again.
[02:44:29] wiki...sal?
[02:44:31] Oh, hum.
[02:44:36] Server Admin Log
[02:44:40] bit.ly/wikisal
[02:44:48] Yeah, I wouldn't know
[02:45:33] Do we not have opsen in India or southeast Asia? Seems like spreading the load would be a sane thing.
[02:45:35] RoanKattouw_away: You mentioned yesterday that Leslie was so nice to merge and deploy the apc fix, but the change is still pending in Gerrit. Did I miss something?
[02:46:18] marktraceur: Funny you say that. I assume you are looking towards Asia because of what time it is.
[02:46:25] Yup
[02:46:41] But it's getting rarer that devops people work the 9-5 hours customary to their local timezone
[02:46:43] you're awake
[02:46:44] I'm awake
[02:47:00] Reedy's awake (I think)
[02:47:22] 8PM, 3AM and 4AM respectively
[02:47:25] Mm.
[02:47:28] :P
[02:47:34] tim would be awake but it's a weekend here
[02:47:36] though it is the weekend, so..
[02:47:41] yeah
[02:47:55] Bloody WMF staff, not working weekends. :P
[02:48:03] How dare they not work themselves bloody
[02:48:38] Argh! I hate this. (unrelated)
[02:48:43] A GitHub repository MIT-licensed
[02:48:51] linking to friggin Wikipedia for the license
[02:48:57] That's just wrong in so many ways
[02:49:12] especially given that the first paragraph explains how "MIT" is ambiguous
[02:49:25] they clearly didn't even read the page they linked to
[02:49:44] and even if it weren't ambiguous, linking to Wikipedia is still wrong :P
[02:50:12] https://github.com/deftjs/DeftJS/blob/master/src/js/Deft/promise/Chain.js
[02:50:54] Krinkle: I was confused, she didn't merge that one after all
[02:51:28] RoanKattouw_away: no worries, but I suppose that means it wasn't deployed either?
[02:51:38] or did it get deployed?
[02:51:58] Yeah it's not deployed
[03:28:23] !log git.wikimedia.org is 503 Service Temporarily Unavailable
[03:28:31] Logged the message, Master
[03:29:07] RoanKattouw_away: Looks like this week just doesn't know how to end. Stuff keeps falling from the sky.
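The GerritWatcher traceback logged at 01:58 (`ValueError: No JSON object could be decoded`) is the usual symptom of Gerrit closing the `stream-events` SSH connection: the reader gets a truncated or empty line and tries to parse it as JSON. A minimal sketch of tolerant line-by-line parsing (an illustration of the failure mode, not Zuul's actual code):

```python
import json

def parse_event_stream(lines):
    """Parse Gerrit stream-events output: one JSON object per line.

    When Gerrit drops the connection mid-object, json.loads raises
    ValueError (the error GerritWatcher logged above). Instead of
    crashing, stop and return the events parsed so far so the caller
    can reconnect.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except ValueError:  # truncated line from a closed stream
            break
    return events

# Example: the second "line" is the kind of partial output a dropped
# SSH connection produces.
stream = [
    '{"type": "patchset-created", "change": {"number": "67551"}}',
    '{"type": "comment-ad',  # cut off by the broken connection
]
events = parse_event_stream(stream)
```

Here only the first, complete event survives; the truncated one is discarded and the caller would re-open the stream.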
[03:29:48] New patchset: Krinkle; "contint: Disable php-apc on gallium" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/67551
[03:31:51] PROBLEM - SSH on spence is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:39:09] New patchset: Ori.livneh; "Add 'Programs' namespace on MetaWiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/67626
[03:40:45] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:45] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:45] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:45] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:45] PROBLEM - Puppet freshness on ms-be1 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:46] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:46] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:47] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:47] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:48] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:48] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[03:40:49] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[03:54:05] PROBLEM - Parsoid on wtp1009 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:55:45] PROBLEM - Parsoid on wtp1022 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:56:06] PROBLEM - Parsoid on wtp1011 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:57:35] RECOVERY - Parsoid on wtp1022 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.179 second response time
[04:03:55] RECOVERY - Parsoid on wtp1011 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.004 second response time
[04:08:59] RECOVERY - Puppet freshness on tola is OK: puppet ran at Sun Jun 9 04:08:54 UTC 2013
[04:09:28] PROBLEM - Puppet freshness on tola is CRITICAL: No successful Puppet run in the last 10 hours
[04:09:58] RECOVERY - Puppet freshness on kuo is OK: puppet ran at Sun Jun 9 04:09:54 UTC 2013
[04:10:58] PROBLEM - Puppet freshness on kuo is CRITICAL: No successful Puppet run in the last 10 hours
[04:28:10] RECOVERY - Puppet freshness on wtp1 is OK: puppet ran at Sun Jun 9 04:28:05 UTC 2013
[04:28:28] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 04:28:19 UTC 2013
[04:28:29] PROBLEM - Puppet freshness on wtp1 is CRITICAL: No successful Puppet run in the last 10 hours
[04:29:02] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[04:29:09] RECOVERY - Puppet freshness on lardner is OK: puppet ran at Sun Jun 9 04:29:06 UTC 2013
[04:29:58] PROBLEM - Puppet freshness on lardner is CRITICAL: No successful Puppet run in the last 10 hours
[04:31:29] RECOVERY - Puppet freshness on tola is OK: puppet ran at Sun Jun 9 04:31:24 UTC 2013
[04:32:10] PROBLEM - Parsoid on wtp1022 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:32:28] PROBLEM - Puppet freshness on tola is CRITICAL: No successful Puppet run in the last 10 hours
[04:32:48] RECOVERY - Puppet freshness on kuo is OK: puppet ran at Sun Jun 9 04:32:45 UTC 2013
[04:33:00] PROBLEM - Puppet freshness on kuo is CRITICAL: No successful Puppet run in the last 10 hours
[04:51:19] PROBLEM - NTP on spence is CRITICAL: NTP CRITICAL: Offset unknown
[04:54:19] RECOVERY - NTP on spence is OK: NTP OK: Offset 0.1314362288 secs
[05:07:29] New review: Krinkle; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47307
[05:08:34] New review: Krinkle; "(1 comment)" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47307
[05:14:21] New patchset: Krinkle; "wgRC2UDPPrefix: Use hostname-".org" instead of lang.site" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47307
[05:14:47] RECOVERY - Parsoid on wtp1009 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.018 second response time
[05:15:21] New review: Krinkle; "Added votewiki and testwikidatawiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47307
[05:19:17] <^demon> !log restarted gerrit, again
[05:19:26] Logged the message, Master
[05:35:37] New review: Demon; "recheck" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/67533
[05:48:33] PROBLEM - NTP on ssl3002 is CRITICAL: NTP CRITICAL: No response from NTP server
[05:52:44] PROBLEM - NTP on ssl3003 is CRITICAL: NTP CRITICAL: No response from NTP server
[06:26:01] PROBLEM - Puppet freshness on mw1115 is CRITICAL: No successful Puppet run in the last 10 hours
[06:35:42] RECOVERY - Puppet freshness on mw1115 is OK: puppet ran at Sun Jun 9 06:35:34 UTC 2013
[06:45:05] RECOVERY - SSH on spence is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:58:16] PROBLEM - DPKG on mc15 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[06:59:06] RECOVERY - DPKG on mc15 is OK: All packages OK
[07:09:46] PROBLEM - RAID on mc15 is CRITICAL: Timeout while attempting connection
[07:10:46] RECOVERY - RAID on mc15 is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0
[07:21:08] PROBLEM - Parsoid on wtp1009 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:21:56] PROBLEM - Parsoid on wtp1006 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:46:54] New patchset: Krinkle; "contint: Disable php-apc on gallium" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/67551
[07:49:54] PROBLEM - Parsoid on wtp1001 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:52:24] PROBLEM - Parsoid on wtp1015 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:55:34] PROBLEM - Disk space on mc15 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[07:56:35] RECOVERY - Disk space on mc15 is OK: DISK OK
[08:01:35] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset -0.004470348358 secs
[08:02:24] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset -0.00704741478 secs
[08:05:44] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:06:34] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.128 second response time
[08:11:02] PROBLEM - Parsoid on wtp1007 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:23:12] PROBLEM - SSH on spence is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:53:50] PROBLEM - Disk space on mc15 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
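The mc15 checks flapping above follow the standard Nagios TCP pattern: open a connection within a timeout, report OK with the connect time (e.g. "TCP OK - 0.029 second response time on port 11211"), or CRITICAL on timeout/refusal. A minimal sketch of that pattern (an illustration only; the real check_tcp/NRPE plugins do considerably more):

```python
import socket
import time

def check_tcp(host, port, timeout=10.0):
    """Minimal TCP service check: OK with connect time, or CRITICAL
    on timeout/refusal -- mirroring the bot output in the log above."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK", time.monotonic() - start
    except OSError:  # covers socket.timeout and connection refused
        return "CRITICAL", None

# Example against a throwaway local listener (a stand-in for
# mc15:11211 -- the hostname/port here are from the log, but the
# listener is purely local).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
status, rt = check_tcp("127.0.0.1", server.getsockname()[1], timeout=2.0)
server.close()
```

The 10-second default mirrors the "Socket timeout after 10 seconds" threshold the checks in this log use.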
[08:54:41] RECOVERY - Disk space on mc15 is OK: DISK OK
[09:16:23] PROBLEM - Parsoid on wtp1002 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:16:33] PROBLEM - Parsoid on wtp1012 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:14:54] PROBLEM - Parsoid on wtp1003 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:16:54] PROBLEM - Parsoid on wtp1004 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:22:54] RECOVERY - Parsoid on wtp1004 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 1.613 second response time
[11:47:04] PROBLEM - Disk space on mc15 is CRITICAL: Timeout while attempting connection
[11:48:04] RECOVERY - Disk space on mc15 is OK: DISK OK
[12:19:43] RECOVERY - Puppet freshness on wtp1 is OK: puppet ran at Sun Jun 9 12:19:38 UTC 2013
[12:19:51] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 12:19:48 UTC 2013
[12:20:02] PROBLEM - Puppet freshness on wtp1 is CRITICAL: No successful Puppet run in the last 10 hours
[12:20:42] RECOVERY - Puppet freshness on lardner is OK: puppet ran at Sun Jun 9 12:20:34 UTC 2013
[12:20:42] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[12:21:11] PROBLEM - Puppet freshness on lardner is CRITICAL: No successful Puppet run in the last 10 hours
[12:23:31] RECOVERY - Puppet freshness on tola is OK: puppet ran at Sun Jun 9 12:23:23 UTC 2013
[12:23:51] PROBLEM - Puppet freshness on tola is CRITICAL: No successful Puppet run in the last 10 hours
[12:25:01] RECOVERY - Puppet freshness on kuo is OK: puppet ran at Sun Jun 9 12:24:59 UTC 2013
[12:25:22] PROBLEM - Puppet freshness on kuo is CRITICAL: No successful Puppet run in the last 10 hours
[12:32:24] New review: Alex Monk; "shellpolicy" [operations/mediawiki-config] (master) C: -1; - https://gerrit.wikimedia.org/r/67626
[12:34:43] PROBLEM - Parsoid on wtp1019 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:23:38] PROBLEM - Host wtp1008 is DOWN: CRITICAL - Plugin timed out after 15 seconds
[13:24:18] RECOVERY - Host wtp1008 is UP: PING OK - Packet loss = 0%, RTA = 0.30 ms
[13:24:58] PROBLEM - RAID on mc15 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[13:25:38] PROBLEM - Parsoid on wtp1014 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:25:53] RECOVERY - RAID on mc15 is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0
[13:29:38] PROBLEM - Parsoid on wtp1013 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:32:48] PROBLEM - Parsoid on wtp1017 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:40:51] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:51] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:51] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:51] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:51] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:52] PROBLEM - Puppet freshness on ms-be1 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:52] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:53] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:53] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:54] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:54] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[13:40:55] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[14:04:11] PROBLEM - NTP on spence is CRITICAL: NTP CRITICAL: Offset unknown
[14:07:16] RECOVERY - NTP on spence is OK: NTP OK: Offset 0.1051975489 secs
[14:41:46] PROBLEM - NTP on spence is CRITICAL: NTP CRITICAL: No response from NTP server
[14:42:46] RECOVERY - NTP on spence is OK: NTP OK: Offset -0.3747802973 secs
[15:05:46] PROBLEM - NTP on spence is CRITICAL: NTP CRITICAL: No response from NTP server
[15:07:22] Is stuff still broken?
[15:08:44] gitblit is giving 503s
[15:13:51] Can't even log in to the host it's on
[16:10:55] !log restarting Parsoid
[16:11:04] Logged the message, Master
[16:11:14] RECOVERY - Parsoid on wtp1014 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.013 second response time
[16:11:14] RECOVERY - Parsoid on wtp1019 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.030 second response time
[16:11:23] RECOVERY - Parsoid on wtp1001 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.007 second response time
[16:11:23] RECOVERY - Parsoid on wtp1022 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.007 second response time
[16:11:45] RECOVERY - Parsoid on wtp1009 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.002 second response time
[16:11:54] RECOVERY - Parsoid on wtp1012 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.003 second response time
[16:11:54] RECOVERY - Parsoid on wtp1015 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.009 second response time
[16:11:54] RECOVERY - Parsoid on wtp1002 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.009 second response time
[16:12:03] RECOVERY - Parsoid on wtp1007 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.006 second response time
[16:12:03] RECOVERY - Parsoid on wtp1013 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.011 second response time
[16:12:03] RECOVERY - Parsoid on wtp1017 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.019 second response time
[16:12:03] RECOVERY - Parsoid on wtp1003 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.030 second response time
[16:12:03] RECOVERY - Parsoid on wtp1006 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.005 second response time
[16:12:04] RECOVERY - Parsoid on wtp1011 is OK: HTTP OK: HTTP/1.1 200 OK - 1373 bytes in 0.027 second response time
[16:16:43] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 16:16:39 UTC 2013
[16:16:54] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[16:16:54] RECOVERY - Puppet freshness on mexia is OK: puppet ran at Sun Jun 9 16:16:48 UTC 2013
[16:17:53] PROBLEM - Puppet freshness on mexia is CRITICAL: No successful Puppet run in the last 10 hours
[16:18:03] RECOVERY - Puppet freshness on lardner is OK: puppet ran at Sun Jun 9 16:17:57 UTC 2013
[16:18:53] PROBLEM - Puppet freshness on lardner is CRITICAL: No successful Puppet run in the last 10 hours
[16:20:23] RECOVERY - Puppet freshness on wtp1 is OK: puppet ran at Sun Jun 9 16:20:18 UTC 2013
[16:20:54] RECOVERY - Puppet freshness on tola is OK: puppet ran at Sun Jun 9 16:20:49 UTC 2013
[16:20:54] PROBLEM - Puppet freshness on wtp1 is CRITICAL: No successful Puppet run in the last 10 hours
[16:21:14] PROBLEM - Puppet freshness on tola is CRITICAL: No successful Puppet run in the last 10 hours
[16:22:03] RECOVERY - Puppet freshness on kuo is OK: puppet ran at Sun Jun 9 16:21:55 UTC 2013
[16:22:13] PROBLEM - Puppet freshness on kuo is CRITICAL: No successful Puppet run in the last 10 hours
[17:03:09] PROBLEM - Packetloss_Average on analytics1003 is CRITICAL: STALE
[17:03:23] PROBLEM - Packetloss_Average on analytics1005 is CRITICAL: STALE
[17:05:37] PROBLEM - Packetloss_Average on analytics1006 is CRITICAL: STALE
[17:06:13] PROBLEM - Packetloss_Average on analytics1008 is CRITICAL: STALE
[17:06:14] PROBLEM - Packetloss_Average on analytics1004 is CRITICAL: STALE
[17:12:53] PROBLEM - Packetloss_Average on analytics1009 is CRITICAL: STALE
[17:31:30] Change abandoned: Ori.livneh; "Sarah rescinded her request." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/67626
[17:58:32] oh ori-l
[17:59:23] odder: hi
[18:01:41] dramah, ori-l, dramah
[18:02:08] did someone say dramah?!
[18:02:19] !log deploy Parsoid 836ae1e, hopefully fixing the hanging worker issue
[18:02:26] Logged the message, Master
[18:07:16] SickPanda: nope, you must have misread that ;)
[18:07:33] ARE YOU ACCUSING ME OF NOT BEING LITERATE, odder?!
[18:07:50] illiterate
[18:09:38] this looks more like an AngryPanda to me
[18:09:40] * odder runs away
[18:09:57] RAWR
[18:10:07] * odder screaming 'Never say no to panda'
[18:10:25] Nah, young panda is learning the ways of the wild. :P
[18:10:29] odder: IF YOU KEEP DOING THIS YOU WILL HAVE MADE ME FORCIBLY RETIRED FROM BEING NICE!
[18:10:39] that is like, being blocked!
[18:11:04] SickPanda, Are you actually sick or is this a new nick?
[18:11:10] oh I'm actually sick
[18:11:16] i've had a horrible cold since morning
[18:11:20] Awww
[18:11:30] Today I learnt what a 'full body sneeze' feels like
[18:11:33] I would have thought YoungPanda would have been a more suitable nick. ;)
[18:11:40] lol
[18:11:44] I'm 22 and that is way too old
[18:11:52] This really isn't the weather for a cold, dude.
[18:12:04] I'm sweating my ass off in this weather.
[18:17:04] Theo10011: I must've caught the Wikiplague, a fair number of people on staff have it
[19:05:47] odder: in the future, I'll be sure to pay the bridge toll to the troll before attempting to cross the bridge.
[19:07:31] ori-l: ah well, don't worry about this too much
[19:08:42] New review: GWicke; "Some background: This patch is needed to enable expansion reuse from Parsoid to avoid most API reque..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/67497
[19:08:55] odder: well, i won't lie: it put a serious dent in my sunday. but now i'll get a haircut and some fresh air and put it behind me.
[19:10:29] * odder hugs ori-l
[19:10:34] I assume I'm the troll there?
[19:11:05] odder: :) *hug*
[19:16:49] ori-l, you seem like a very smart person. Do you think what you were doing, or how you were doing it, was right? If I hadn't objected to it, would what you were doing have been the right thing?
[19:19:10] Theo10011, let's agree to ground rules: I'll respond to your question, and then you can respond to me if you wish, and we leave it at that, OK? I'm not interested in becoming even more depressed or angry about this.
[19:19:28] ok, but let me say something first.
[19:19:36] * SickPanda gives ori-l hugs too
[19:20:12] SickPanda: :) thanks
[19:20:17] Theo10011: OK, go ahead.
[19:20:26] ori-l: try not to catch the sickness.
[19:20:39] First of all, I was going to leave a message on your talk page. Mostly, I wasn't happy there either; a few hours after reading, I wanted to apologize and remove some of the things, but since this was getting heated, I didn't want to remove my comments and be accused of something later.
[19:20:49] To that extent, for whatever it's worth - I am sorry.
[19:21:19] Sorry for what?
[19:21:59] For my tone. I was out of line on a couple of occasions. I agree with the principle and opposed what I perceived was the wrong way to go about it - but I could have been less... of an ass.
[19:22:57] Well, thanks for saying so. Should I respond to your question above?
[19:23:01] Please go ahead, I'll stop.
[19:24:27] The short answer is "yes, I do". I was confused by the initial opposition and moved to research this and understand the relevant history and policy.
[19:24:46] My findings (possibly incorrect, in which case please correct me) were that:
[19:26:12] * odder mentions http://lists.wikimedia.org/pipermail/wikimedia-l/2006-December/072622.html under his breath
[19:26:25] Theo10011, ori-l => not sure you guys remember that? :-)
[19:27:00] (sorry, had to go AFK for a moment and still typing)
[19:27:08] Odder, I do ;) but let him finish first.
[19:27:10] ori-l, np
[19:27:13] go ahead
[19:27:20] - There is no policy stating that adding a namespace requires consensus
[19:27:46] - There is no established precedent for determining consensus for adding namespaces
[19:28:26] - That Sarah's proposal for the namespace was in all other relevant respects clearly aligned with MetaWiki's stated purpose and scope
[19:29:16] I also reasoned that adding a namespace is not conceptually all that different from adding a category, and we don't require unanimous permission for that, either. (cont'd)
[19:29:59] My conclusion was that the gravity of adding a namespace was a side-effect of the requirement of deploying a configuration change, which lends the act more gravitas than is appropriate
[19:30:34] The reason it is not managed on-wiki is that building a web-based management interface that covered all the angles would be technically challenging and no one has gotten around to it.
[19:31:28] In that sense, it's not terribly different from account creation throttles. I have some experience in handling those requests, and the requirements there are simply that there be a good faith request from a credible user.
[19:33:08] ori-l, I understand, can I respond?
[19:34:05] Sarah seemed credible; no clear reason had been articulated for *not* doing it (you stress your lack of objections in your latest reply), so I thought there was no reason not to go ahead with it. I have the requisite rights for self-merging and deploying wmf-config changes. So I set a date for deploying the change, but I did add the proviso that this was only to be done "barring a clear and decisive argument against it", which
[19:34:05] is not the same as "this is happening, like it or not".
[19:34:10] Sure, go ahead.
[19:34:30] Let me state a couple of things first.
[19:34:56] (I'll remind you that I won't respond, in the interest of putting this issue to rest. But I am listening and will take to heart your reply.)
[19:35:35] I truly feel bad that this has made you feel depressed, and I really want to apologize first of all, and re-state that I really don't want you to take anything personally (though it might seem like it was). I truly am sorry for how this transpired.
[19:36:16] This is not worth being depressed over, and I will go apologize on that page after finishing here if this hurt you.
[19:36:26] Ok, so for the discussion itself.
[19:36:41] I realized later that your account is actually older than mine.
[19:36:47] by a couple of years.
[19:37:04] It might be that you have the en.wp sense of things, about following policies to the letter.
[19:37:20] Meta is a different place; policies can be changed or ignored or made up ad hoc.
[19:37:43] We don't have policies for 90% of what goes on there
[19:37:49] but there is a wiki-way, that you know.
[19:38:05] That almost all the tech staff know, to respect the local community and their consensus.
[19:38:24] Namespaces aren't actually that common of a request, though lately they are brought up more and more.
[19:38:46] I actually would have supported the namespace after Frank came on IRC and said what his intentions were.
[19:39:21] What irked me a bit was a developer/tech staff member who is actually in charge of the bug, the patch, and the merge getting involved,
[19:39:32] then in some way undermining it.
[19:39:40] That might not have been your intent, now that I read it.
[19:39:58] You just researched it and made a judgement call, and I can understand that now better.
[19:40:45] Sarah, while being active elsewhere, is barely known on Meta. Second, last I remember she was working for the Open Knowledge Foundation, and requests like these are usually brought up by someone else.
[19:42:01] The context here is also that namespaces have been rejected when they were proposed by 2 different people. I wasn't involved in those discussions but I followed them.
[19:42:07] (Sarah is a WMF contractor [employee?].)
[19:42:22] odder, now she is. She just rejoined very recently.
[19:42:26] I learned that later BTW
[19:43:01] Frank and I had a bit of history because of the IEP. It was short-lived and a bad experience; the learning that the team took away from that was to respect the local community.
[19:43:12] I think maybe that's why he wants to do it with local support.
[19:43:54] This was all fine, until I saw your message. I have a tendency to not have strong feelings myself, but I get involved when people I like oppose something and they are just pushed out of the way.
[19:44:21] I thought that was the case here; reading now, your perspective was also different there.
[19:44:33] Tech staff are just generally uninvolved in the local consensus process.
[19:44:45] I think that's about all I have to say. (sorry it's a lot)
[19:44:56] And please, please, accept my heartfelt apology.
[19:45:22] Well, let me break my earlier commitment not to respond and say that I see your point about the blurring of the lines between advocate and implementer
[19:46:06] Thank you.
[19:46:52] From your userpage and your comments, you actually seemed like a really bright and interesting person. I thought we could even have had a few interesting conversations after seeing your page.
[19:47:15] Thanks for reaching out, and I'm sorry if my behavior was insensitive to community norms. If it was, it was unwittingly so.
[19:47:18] You can read my very first comment to you on that page, to check if that was the case, where I say something similar
[19:47:28] Now, I understand better.
[19:47:38] Please just talk to someone if you ever feel personally hurt.
[19:47:42] I'm to blame here.
[19:47:57] but you can see the argument and my reasoning now (hopefully)
[19:48:38] Yes, I do, and suggest we leave it at that. Here's to better and more interesting conversations in the future.
[19:48:50] Here!
[19:49:29] * Theo10011 hugs ori-l forcibly
[19:50:01] :) Now for that haircut!
[19:50:31] odder: that was a very nice email from brion :)
[19:51:19] SickPanda: of course it was :)
[19:51:28] I don't know any other type of email from brion
[19:51:36] has brion ever been angry, ever?
[19:53:34] [silence]
[19:53:39] heh
[19:54:46] greg-g: http://lists.wikimedia.org/pipermail/wikitech-l/2013-June/069870.html looks beautiful indeed <3
[20:06:06] New review: MZMcBride; "This is related to bug 16043." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/65570
[20:09:40] odder: thanks :)
[20:28:03] https://bugzilla.wikimedia.org/show_bug.cgi?id=20079#c7
[21:30:28] wee ^demon
[21:30:50] <^demon> Whoops, I signed on IRC?
[21:31:40] ?
[21:31:56] you fixed gitblit :-)
[21:34:07] <^demon> Yeah, I saw the e-mails.
[21:40:21] https://git.wikimedia.org/commit/mediawiki/extensions/MobileFrontend/07b140af66d63a16d06b06001c70724619040493
[21:40:32] doesn't seem to work?
[21:41:59] odder: it hasn't been merged yet, so it isn't in master
[21:42:30] odder: https://gerrit.wikimedia.org/r/#/c/67545/
[21:42:36] ori-l: does that mean you can actually see the link?
[21:42:50] I'm being redirected to https://git.wikimedia.org/repositories/
[21:43:11] yes, me too. I dug up the gerrit change in the course of investigating.
[21:44:54] oh crap, I already did https://meta.wikimedia.org/w/index.php?diff=5553961&oldid=5553805
[21:46:28] well, gerrit submissions are actually in git; AFAIK they are 'merged' on submission to a separate tree (refs/for/master or whatever). If gitblit were configured to track it, your links would have worked, presumably
[21:51:57] well, I guess we can use Gerrit for now; people should be able to click on the [view] link anyway
[22:17:35] odder: I filed a bug (49369) and CC'd you on it
[22:32:55] !log gitblit fixed as of 1h ago: ^demon> Yeah, I saw the e-mails.
[22:33:03] Logged the message, Master
[22:33:15] worth logging, and blame on the weekend disturbers :)
[22:51:05] odder: the change was in GitBlit after all -- it was the URL that was wrong: https://git.wikimedia.org/commit/mediawiki%2Fextensions%2FMobileFrontend.git/07b140af66d63a16d06b06001c70724619040493
[22:51:18] you have to URL-encode '/' in the repository name
[22:51:35] scribunto to the rescue!
[22:51:35] ah yes :)
[22:51:39] thanks ori-l
[22:51:55] np, I'll update the bug
[23:03:48] New patchset: Demon; "Set default activity duration to 1 day" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/67642
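The GitBlit fix at 22:51 comes down to one detail: the repository name is a single URL path segment, so the '/' separators inside it must be percent-encoded as %2F. The encoding step can be reproduced in Python (an illustration of the URL structure seen in the log, not the tooling actually used):

```python
from urllib.parse import quote

# GitBlit treats the repository name as one path segment, so the '/'
# separators inside "mediawiki/extensions/MobileFrontend" have to be
# percent-encoded; safe="" tells quote() to encode '/' as well.
repo = "mediawiki/extensions/MobileFrontend"
commit = "07b140af66d63a16d06b06001c70724619040493"
url = "https://git.wikimedia.org/commit/{}/{}".format(
    quote(repo + ".git", safe=""), commit
)
```

This yields the working URL ori-l posted, where the unencoded variant from 21:40 redirected to the repository index instead.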