[00:18:56] PROBLEM - Puppet freshness on magnesium is CRITICAL: Puppet has not run in the last 10 hours
[00:18:56] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[00:21:47] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:36:02] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.055 seconds
[01:08:35] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:13:23] !log restarted udp2log on locke
[01:13:33] Logged the message, Master
[01:15:29] PROBLEM - Varnish traffic logger on cp1028 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:15:29] PROBLEM - Varnish traffic logger on cp1022 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:15:38] PROBLEM - Varnish traffic logger on cp1026 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:15:38] PROBLEM - Varnish traffic logger on cp1044 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:15:47] PROBLEM - Varnish traffic logger on cp1024 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:15:56] PROBLEM - Varnish traffic logger on cp1042 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:05] PROBLEM - Varnish traffic logger on cp1027 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:14] PROBLEM - Varnish traffic logger on cp1041 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:23] PROBLEM - Varnish traffic logger on cp1021 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:23] PROBLEM - Varnish traffic logger on cp1025 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:32] PROBLEM - Varnish traffic logger on cp1043 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:16:41] PROBLEM - Varnish traffic logger on cp1023 is CRITICAL: PROCS CRITICAL: 2 processes with command name varnishncsa
[01:20:53] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 7.129 seconds
[01:21:56] PROBLEM - Puppet freshness on manganese is CRITICAL: Puppet has not run in the last 10 hours
[01:22:23] RECOVERY - Varnish traffic logger on cp1027 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:22:23] RECOVERY - Varnish traffic logger on cp1021 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:22:32] RECOVERY - Varnish traffic logger on cp1025 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:22:32] RECOVERY - Varnish traffic logger on cp1041 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:22:41] RECOVERY - Varnish traffic logger on cp1043 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:22:41] RECOVERY - Varnish traffic logger on cp1023 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:08] RECOVERY - Varnish traffic logger on cp1028 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:08] RECOVERY - Varnish traffic logger on cp1022 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:17] RECOVERY - Varnish traffic logger on cp1044 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:17] RECOVERY - Varnish traffic logger on cp1026 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:17] RECOVERY - Varnish traffic logger on cp1024 is OK: PROCS OK: 3 processes with command name varnishncsa
[01:23:35] RECOVERY - Varnish traffic logger on cp1042 is OK: PROCS OK: 3 processes with command name varnishncsa
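(The varnishncsa alerts above come from a Nagios-style process-count check. A minimal sketch of that kind of check, assuming, hypothetically, that exactly 3 varnishncsa processes per host means healthy, and using pgrep rather than the real check_procs plugin:)

    #!/usr/bin/env python3
    # Rough sketch of a process-count check like the "PROCS" alerts above.
    # Assumption: 3 varnishncsa processes is healthy; anything else is CRITICAL.
    # This is illustrative only, not the plugin actually used in production.
    import subprocess
    import sys

    EXPECTED = 3  # assumed expected process count

    def count_procs(name):
        # pgrep -c -x prints the number of processes with exactly this name.
        out = subprocess.run(["pgrep", "-c", "-x", name],
                             capture_output=True, text=True)
        return int(out.stdout.strip() or 0)

    if __name__ == "__main__":
        n = count_procs("varnishncsa")
        if n == EXPECTED:
            print(f"PROCS OK: {n} processes with command name varnishncsa")
            sys.exit(0)
        print(f"PROCS CRITICAL: {n} processes with command name varnishncsa")
        sys.exit(2)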
[01:40:28] PROBLEM - MySQL Slave Delay on db1025 is CRITICAL: CRIT replication delay 232 seconds
[01:40:46] PROBLEM - MySQL Slave Delay on storage3 is CRITICAL: CRIT replication delay 249 seconds
[01:46:28] RECOVERY - MySQL Slave Delay on db1025 is OK: OK replication delay 8 seconds
[01:46:46] RECOVERY - MySQL Slave Delay on storage3 is OK: OK replication delay 11 seconds
[01:55:01] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:09:07] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 6.189 seconds
[03:23:31] PROBLEM - Puppet freshness on mw8 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:25] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:25] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:25] PROBLEM - Puppet freshness on ms-be1009 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:25] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:25] PROBLEM - Puppet freshness on singer is CRITICAL: Puppet has not run in the last 10 hours
[04:48:26] PROBLEM - Puppet freshness on virt1001 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:26] PROBLEM - Puppet freshness on virt1002 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:27] PROBLEM - Puppet freshness on virt1003 is CRITICAL: Puppet has not run in the last 10 hours
[04:48:27] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours
[04:49:28] PROBLEM - MySQL Replication Heartbeat on db1020 is CRITICAL: CRIT replication delay 181 seconds
[04:50:13] PROBLEM - MySQL Slave Delay on db1020 is CRITICAL: CRIT replication delay 194 seconds
[04:55:37] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 181 seconds
[04:55:46] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 182 seconds
[05:00:07] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds
[05:00:16] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds
[05:03:25] RECOVERY - MySQL Replication Heartbeat on db1020 is OK: OK replication delay 0 seconds
[05:03:52] RECOVERY - MySQL Slave Delay on db1020 is OK: OK replication delay 1 seconds
[05:43:01] PROBLEM - check_all_memcacheds on spence is CRITICAL: MEMCACHED CRITICAL - Could not connect: 10.0.8.17:11000 (Connection timed out)
[05:43:10] PROBLEM - Memcached on srv267 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:44:49] RECOVERY - Memcached on srv267 is OK: TCP OK - 0.003 second response time on port 11000
[05:46:01] RECOVERY - check_all_memcacheds on spence is OK: MEMCACHED OK - All memcacheds are online
[05:57:16] PROBLEM - check_all_memcacheds on spence is CRITICAL: MEMCACHED CRITICAL - Could not connect: 10.0.11.29:11000 (timeout) 10.0.8.9:11000 (Connection timed out) 10.0.2.201:11000 (timeout)
[05:57:52] PROBLEM - Apache HTTP on srv259 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:59:04] PROBLEM - SSH on srv259 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:00:25] RECOVERY - SSH on srv259 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:02:04] RECOVERY - check_all_memcacheds on spence is OK: MEMCACHED OK - All memcacheds are online
[06:02:31] RECOVERY - Apache HTTP on srv259 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.023 second response time
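(The check_all_memcacheds alerts above amount to TCP connectivity tests against a list of memcached host:port pairs. A rough sketch under that assumption; the server list here is hypothetical, and the real one would come from the MediaWiki memcached configuration:)

    #!/usr/bin/env python3
    # Illustrative sketch, not the real check_all_memcacheds plugin: attempt a
    # short TCP connection to every memcached in a (hypothetical) pool and
    # report any that cannot be reached, mirroring the output format above.
    import socket

    MEMCACHEDS = ["10.0.8.17:11000", "10.0.8.22:11000"]  # assumed pool

    def unreachable(servers, timeout=2.0):
        down = []
        for server in servers:
            host, port = server.rsplit(":", 1)
            try:
                socket.create_connection((host, int(port)), timeout=timeout).close()
            except OSError as exc:
                down.append(f"{server} ({exc})")
        return down

    if __name__ == "__main__":
        failed = unreachable(MEMCACHEDS)
        if failed:
            print("MEMCACHED CRITICAL - Could not connect: " + " ".join(failed))
        else:
            print("MEMCACHED OK - All memcacheds are online")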
[06:11:22] PROBLEM - check_all_memcacheds on spence is CRITICAL: MEMCACHED CRITICAL - Could not connect: 10.0.8.36:11000 (timeout) 10.0.8.23:11000 (timeout) 10.0.8.25:11000 (timeout)
[06:13:01] PROBLEM - SSH on srv273 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:15:52] PROBLEM - SSH on srv262 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:16:28] PROBLEM - Apache HTTP on srv262 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:16:55] PROBLEM - Apache HTTP on srv273 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:17:31] RECOVERY - SSH on srv273 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:18:16] RECOVERY - Apache HTTP on srv273 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.237 second response time
[06:20:58] RECOVERY - Apache HTTP on srv262 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.023 second response time
[06:21:52] RECOVERY - SSH on srv262 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:23:58] PROBLEM - Memcached on srv262 is CRITICAL: Connection refused
[06:33:34] PROBLEM - SSH on srv260 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:34:46] PROBLEM - Apache HTTP on srv260 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:37:37] RECOVERY - Memcached on srv262 is OK: TCP OK - 0.008 second response time on port 11000
[06:38:13] PROBLEM - Apache HTTP on srv275 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:39:34] RECOVERY - Apache HTTP on srv275 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.027 second response time
[06:39:52] RECOVERY - SSH on srv260 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:40:19] RECOVERY - Apache HTTP on srv260 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.030 second response time
[06:41:40] PROBLEM - SSH on srv272 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:42:07] PROBLEM - Memcached on srv260 is CRITICAL: Connection refused
[06:42:07] PROBLEM - Apache HTTP on srv272 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:44:04] PROBLEM - Puppet freshness on zhen is CRITICAL: Puppet has not run in the last 10 hours
[06:45:07] RECOVERY - Apache HTTP on srv272 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.728 second response time
[06:46:28] RECOVERY - SSH on srv272 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[06:52:55] RECOVERY - Memcached on srv260 is OK: TCP OK - 0.003 second response time on port 11000
[06:55:46] RECOVERY - check_all_memcacheds on spence is OK: MEMCACHED OK - All memcacheds are online
[07:35:04] PROBLEM - Puppet freshness on oxygen is CRITICAL: Puppet has not run in the last 10 hours
[08:04:10] PROBLEM - Host search32 is DOWN: PING CRITICAL - Packet loss = 100%
[08:05:40] RECOVERY - Host search32 is UP: PING OK - Packet loss = 0%, RTA = 1.40 ms
[08:47:04] PROBLEM - Puppet freshness on ms-be1007 is CRITICAL: Puppet has not run in the last 10 hours
[08:47:04] PROBLEM - Puppet freshness on ms-be1010 is CRITICAL: Puppet has not run in the last 10 hours
[08:47:04] PROBLEM - Puppet freshness on ms-be1011 is CRITICAL: Puppet has not run in the last 10 hours
[08:53:04] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[09:32:40] PROBLEM - check_all_memcacheds on spence is CRITICAL: MEMCACHED CRITICAL - Could not connect: 10.0.8.26:11000 (Connection timed out) 10.0.8.29:11000 (timeout) 10.0.8.32:11000 (timeout)
[09:35:40] RECOVERY - check_all_memcacheds on spence is OK: MEMCACHED OK - All memcacheds are online
[09:45:07] PROBLEM - check_all_memcacheds on spence is CRITICAL: MEMCACHED CRITICAL - Could not connect: 10.0.8.17:11000 (Connection timed out) 10.0.8.22:11000 (timeout) 10.0.8.8:11000 (timeout) 10.0.8.9:11000 (timeout) 10.0.8.13:11000 (Connection timed out)
[09:48:16] PROBLEM - SSH on srv267 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:48:25] PROBLEM - Apache HTTP on srv267 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:49:37] RECOVERY - check_all_memcacheds on spence is OK: MEMCACHED OK - All memcacheds are online
[09:49:37] RECOVERY - SSH on srv267 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[09:49:46] RECOVERY - Apache HTTP on srv267 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.185 second response time
[10:20:04] PROBLEM - Puppet freshness on magnesium is CRITICAL: Puppet has not run in the last 10 hours
[10:20:04] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[11:23:04] PROBLEM - Puppet freshness on manganese is CRITICAL: Puppet has not run in the last 10 hours
[12:20:49] PROBLEM - Host search32 is DOWN: PING CRITICAL - Packet loss = 100%
[12:23:31] RECOVERY - Host search32 is UP: PING OK - Packet loss = 0%, RTA = 0.36 ms
[12:24:34] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[12:29:58] PROBLEM - Puppet freshness on spence is CRITICAL: Puppet has not run in the last 10 hours
[12:32:22] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[12:45:16] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[12:54:52] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:01:10] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:08:58] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:13:46] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:16:55] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:19:36] hrmmmmmm
[13:19:44] BGP is a problem?
[13:20:00] mark is very idle
[13:20:04] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:23:13] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:25:01] PROBLEM - Puppet freshness on mw8 is CRITICAL: Puppet has not run in the last 10 hours
[13:37:19] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[13:54:34] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[14:13:22] mark is very idle because he is trying to catch a flight here
[14:26:58] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[14:30:17] heh
[14:30:57] apergos: but I still want to know if that BGP CRITICAL is a real problem or not ;)
[14:31:36] me too, but so far I haven't seen any other symptoms
[14:31:58] still kinda early for leslie
[14:32:02] apergos: are you going too?
[14:32:09] going?
[14:32:12] SF
[14:32:15] I'm already here, it's been a week
[14:32:21] ohhh, hah!
[14:32:26] so, it's early for you too
[14:33:07] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[14:33:27] yes, very
[14:33:58] today my task will be to find a pharmacy that is open, a source of herbal tea and a microwave in this hotel, and a place with soup
[14:34:04] that is open on a sunday
[14:34:54] microwave?? not tap with water that's already hot? is the town really that dead?
[14:35:30] * jeremyb usually just has an herbal tea supply already in his bag almost all the time ;)
[14:39:49] well I would have had to bring the little wire thing you put the leaves in
[14:40:05] plus the leaves are in a jar...
[14:40:07] :-/
[14:40:51] oh. mine are already individually bagged and then the bags are individually wrapped in something that seems like it could be water/airtight
[14:41:23] I see
[14:41:42] (and then i just stick a couple of those in a really small tupperware)
[14:41:51] same tea that I use to make iced tea
[14:50:04] PROBLEM - Puppet freshness on ms-be1009 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:04] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:04] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:04] PROBLEM - Puppet freshness on singer is CRITICAL: Puppet has not run in the last 10 hours
[14:50:04] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:05] PROBLEM - Puppet freshness on virt1001 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:05] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:06] PROBLEM - Puppet freshness on virt1003 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:06] PROBLEM - Puppet freshness on virt1002 is CRITICAL: Puppet has not run in the last 10 hours
[15:02:40] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[15:30:34] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[15:40:10] PROBLEM - ps1-d1-sdtpa-infeed-load-tower-A-phase-Y on ps1-d1-sdtpa is CRITICAL: ps1-d1-sdtpa-infeed-load-tower-A-phase-Y CRITICAL - *2600*
[15:41:40] RECOVERY - ps1-d1-sdtpa-infeed-load-tower-A-phase-Y on ps1-d1-sdtpa is OK: ps1-d1-sdtpa-infeed-load-tower-A-phase-Y OK - 2388
[15:52:28] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[15:55:37] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:00:16] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:13:28] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:23:04] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:25:37] mark is very idle because it's WEEKEND
[16:25:56] and would be idle because he would be flying now if he hadn't missed his flight
[16:27:21] aw
[16:29:04] ouch
[16:29:16] mark: so, do we worry about those or not?
[16:30:39] no, not really
[16:30:52] there's something weird going on with that switch / access router, but i haven't had time to figure out what yet
[16:31:00] but it's been going on for a few days, I think it'll be ok
[16:37:01] PROBLEM - Puppet freshness on analytics1011 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:01] PROBLEM - Puppet freshness on analytics1012 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:01] PROBLEM - Puppet freshness on analytics1013 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:01] PROBLEM - Puppet freshness on analytics1015 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:01] PROBLEM - Puppet freshness on analytics1014 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:02] PROBLEM - Puppet freshness on analytics1018 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:02] PROBLEM - Puppet freshness on analytics1017 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:03] PROBLEM - Puppet freshness on analytics1016 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:03] PROBLEM - Puppet freshness on analytics1021 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:04] PROBLEM - Puppet freshness on analytics1020 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:04] PROBLEM - Puppet freshness on analytics1019 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:05] PROBLEM - Puppet freshness on analytics1022 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:05] PROBLEM - Puppet freshness on analytics1023 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:06] PROBLEM - Puppet freshness on analytics1025 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:06] PROBLEM - Puppet freshness on analytics1026 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:07] PROBLEM - Puppet freshness on analytics1024 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:07] PROBLEM - Puppet freshness on analytics1027 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:08] PROBLEM - Puppet freshness on es1007 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:08] PROBLEM - Puppet freshness on es1008 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:09] PROBLEM - Puppet freshness on es1010 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:09] PROBLEM - Puppet freshness on es1009 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:10] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:37:30] mark: ok. fwiw, looks like it only started alerting on that in here at 12:24:34 UTC. so 4 hrs and change ago
[16:37:56] (vs. "a few days")
[16:39:13] good luck with the next flight
[16:39:18] hope they don't screw it up
[16:39:32] I am sick btw so I will not be very online today
[16:39:48] apergos: got tea?
[16:39:55] I drank some
[16:39:59] it didn't do much
[16:40:08] good... errr, kinda
[16:40:10] I have some cold/flu syrup which I will take in a bit
[16:40:16] I am trying to eat a little first
[16:43:28] PROBLEM - BGP status on csw2-esams is CRITICAL: (Service Check Timed Out)
[16:45:07] PROBLEM - Puppet freshness on zhen is CRITICAL: Puppet has not run in the last 10 hours
[17:36:07] PROBLEM - Puppet freshness on oxygen is CRITICAL: Puppet has not run in the last 10 hours
[17:48:34] RECOVERY - BGP status on csw2-esams is OK: OK: host 91.198.174.244, sessions up: 4, down: 0, shutdown: 0
[18:48:07] PROBLEM - Puppet freshness on ms-be1007 is CRITICAL: Puppet has not run in the last 10 hours
[18:48:07] PROBLEM - Puppet freshness on ms-be1010 is CRITICAL: Puppet has not run in the last 10 hours
[18:48:07] PROBLEM - Puppet freshness on ms-be1011 is CRITICAL: Puppet has not run in the last 10 hours
[18:53:58] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[20:21:07] PROBLEM - Puppet freshness on magnesium is CRITICAL: Puppet has not run in the last 10 hours
[20:21:07] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[21:08:03] New patchset: Jeremyb; "bug 40122 - disable GlobalBlocking on fishbowl, private wikis" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/23286
[21:24:07] PROBLEM - Puppet freshness on manganese is CRITICAL: Puppet has not run in the last 10 hours
[21:52:19] New patchset: saper; "(bug 40123) Unlock wikimania2010wiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/23287
[22:31:01] PROBLEM - Puppet freshness on spence is CRITICAL: Puppet has not run in the last 10 hours
[23:26:04] PROBLEM - Puppet freshness on mw8 is CRITICAL: Puppet has not run in the last 10 hours
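(Most of the noise in this log is the recurring "Puppet freshness" alert, which fires when a host's last Puppet run is older than 10 hours. A minimal sketch of that staleness test, assuming a conventional agent last-run state file; the path and mechanism are assumptions, not necessarily the production check:)

    #!/usr/bin/env python3
    # Sketch of the idea behind the "Puppet freshness" alerts above: compare
    # the age of the agent's last-run record against a 10-hour threshold.
    import os
    import sys
    import time

    LAST_RUN = "/var/lib/puppet/state/last_run_summary.yaml"  # assumed path
    MAX_AGE = 10 * 3600  # 10 hours, as in the alerts above

    if __name__ == "__main__":
        try:
            age = time.time() - os.path.getmtime(LAST_RUN)
        except OSError:
            age = float("inf")  # state file missing: treat as never run
        if age > MAX_AGE:
            print("CRITICAL: Puppet has not run in the last 10 hours")
            sys.exit(2)
        print("OK: Puppet ran %.0f minutes ago" % (age / 60))
        sys.exit(0)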