[02:24:59] PROBLEM - Misc_Db_Lag on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1865s
[02:28:49] PROBLEM - MySQL replication status on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 2095s
[02:40:19] RECOVERY - MySQL replication status on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 0s
[02:47:39] RECOVERY - Misc_Db_Lag on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 3s
[03:41:35] RECOVERY - DPKG on db57 is OK: All packages OK
[03:41:35] RECOVERY - DPKG on db56 is OK: All packages OK
[03:41:45] RECOVERY - DPKG on db58 is OK: All packages OK
[03:42:15] RECOVERY - DPKG on db55 is OK: All packages OK
[04:19:28] RECOVERY - Disk space on es1004 is OK: DISK OK
[04:21:08] RECOVERY - MySQL disk space on es1004 is OK: DISK OK
[04:42:32] PROBLEM - MySQL slave status on es1004 is CRITICAL: CRITICAL: Slave running: expected Yes, got No
[05:28:12] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours
[05:32:12] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours
[07:05:44] PROBLEM - Squid on brewster is CRITICAL: Connection refused
[07:25:16] PROBLEM - LVS Lucene on search-pool2.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[07:29:06] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours
[07:36:26] RECOVERY - LVS Lucene on search-pool2.svc.pmtpa.wmnet is OK: TCP OK - 8.992 second response time on port 8123
[07:46:16] PROBLEM - Lucene on search6 is CRITICAL: Connection timed out
[08:10:36] PROBLEM - LVS Lucene on search-pool2.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[08:41:59] !log restarted lsearchd on search6
[08:42:01] Logged the message, Master
[08:44:14] RECOVERY - LVS Lucene on search-pool2.svc.pmtpa.wmnet is OK: TCP OK - 0.004 second response time on port 8123
[08:47:14] RECOVERY - Lucene on search6 is OK: TCP OK - 0.003 second response time on port 8123
[09:50:27] PROBLEM - Disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 430871 MB (3% inode=99%):
[09:55:37] PROBLEM - MySQL disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 392858 MB (3% inode=99%):
[09:59:27] PROBLEM - Router interfaces on cr1-sdtpa is CRITICAL: CRITICAL: host 208.80.152.196, interfaces up: 76, down: 1, dormant: 0, excluded: 0, unused: 0; xe-0/0/1: down - Core: cr1-eqiad:xe-5/2/1 (FPL/GBLX, CV71026) [10Gbps wave]
[10:06:37] PROBLEM - Router interfaces on cr1-eqiad is CRITICAL: CRITICAL: host 208.80.154.196, interfaces up: 86, down: 1, dormant: 0, excluded: 0, unused: 0; xe-5/2/1: down - Core: cr1-sdtpa:xe-0/0/1 (Level3/FPL, CV71026) {#2008} [10Gbps wave]
[10:18:45] RECOVERY - Router interfaces on cr1-eqiad is OK: OK: host 208.80.154.196, interfaces up: 88, down: 0, dormant: 0, excluded: 0, unused: 0
[10:22:45] RECOVERY - Router interfaces on cr1-sdtpa is OK: OK: host 208.80.152.196, interfaces up: 78, down: 0, dormant: 0, excluded: 0, unused: 0
[11:05:35] RECOVERY - MySQL slave status on es1004 is OK: OK:
[15:38:45] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours
[15:42:55] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours
[16:34:26] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[17:39:37] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours
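
The replication alerts above (on storage3 and es1004) report the Seconds_Behind_Master column of MySQL's SHOW SLAVE STATUS, mapped onto Nagios exit codes. Below is a minimal sketch of such a check, assuming a pymysql connection and illustrative warn/crit thresholds; the actual plugin behind these alerts is not shown in the log, and the credentials and threshold values here are placeholders.

    # Sketch of a replication-lag check in the Nagios plugin style.
    # Connection details and thresholds are hypothetical, not the production config.
    import sys
    import pymysql

    WARN_S, CRIT_S = 300, 600  # illustrative lag thresholds, in seconds

    conn = pymysql.connect(host="127.0.0.1", user="nagios", password="example",
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone() or {}
    finally:
        conn.close()

    lag = row.get("Seconds_Behind_Master")
    if lag is None:
        # NULL lag means the slave threads are not running,
        # matching the "Slave running: expected Yes, got No" alert above.
        print("CRITICAL: Slave running: expected Yes, got No")
        sys.exit(2)
    if lag >= CRIT_S:
        print("CRITICAL - Seconds_Behind_Master : %ds" % lag)
        sys.exit(2)
    if lag >= WARN_S:
        print("WARNING - Seconds_Behind_Master : %ds" % lag)
        sys.exit(1)
    print("OK - Seconds_Behind_Master : %ds" % lag)
    sys.exit(0)

The exit codes follow the standard Nagios plugin convention (0 = OK, 1 = WARNING, 2 = CRITICAL), which is what turns a lag reading into the PROBLEM/RECOVERY transitions seen in the log.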