[01:58:23] PROBLEM - Misc_Db_Lag on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 842s
[02:05:43] PROBLEM - ps1-d2-sdtpa-infeed-load-tower-A-phase-Z on ps1-d2-sdtpa is CRITICAL: ps1-d2-sdtpa-infeed-load-tower-A-phase-Z CRITICAL - *2413*
[02:12:53] PROBLEM - MySQL replication status on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1711s
[02:32:25] RECOVERY - Misc_Db_Lag on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 0s
[02:36:05] RECOVERY - MySQL replication status on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 6s
[02:58:44] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[04:39:57] PROBLEM - MySQL slave status on es1004 is CRITICAL: CRITICAL: Slave running: expected Yes, got No
[06:13:49] PROBLEM - Disk space on hume is CRITICAL: DISK CRITICAL - free space: /a/static/uncompressed 24279 MB (2% inode=99%):
[07:01:07] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[08:11:39] PROBLEM - Puppet freshness on singer is CRITICAL: Puppet has not run in the last 10 hours
[08:26:11] PROBLEM - Puppet freshness on es1002 is CRITICAL: Puppet has not run in the last 10 hours
[09:23:19] PROBLEM - mobile traffic loggers on cp1044 is CRITICAL: PROCS CRITICAL: 6 processes with args varnishncsa
[09:23:19] PROBLEM - mobile traffic loggers on cp1041 is CRITICAL: PROCS CRITICAL: 7 processes with args varnishncsa
[09:23:19] PROBLEM - mobile traffic loggers on cp1043 is CRITICAL: PROCS CRITICAL: 5 processes with args varnishncsa
[09:33:09] RECOVERY - mobile traffic loggers on cp1041 is OK: PROCS OK: 2 processes with args varnishncsa
[09:33:09] RECOVERY - mobile traffic loggers on cp1044 is OK: PROCS OK: 1 process with args varnishncsa
[09:33:29] RECOVERY - mobile traffic loggers on cp1043 is OK: PROCS OK: 1 process with args varnishncsa
[09:48:49] RECOVERY - MySQL slave status on es1004 is OK: OK:
[10:16:37] PROBLEM - mobile traffic loggers on cp1044 is CRITICAL: PROCS CRITICAL: 7 processes with args varnishncsa
[10:16:37] PROBLEM - mobile traffic loggers on cp1041 is CRITICAL: PROCS CRITICAL: 7 processes with args varnishncsa
[10:16:37] PROBLEM - mobile traffic loggers on cp1043 is CRITICAL: PROCS CRITICAL: 6 processes with args varnishncsa
[10:26:17] RECOVERY - mobile traffic loggers on cp1044 is OK: PROCS OK: 3 processes with args varnishncsa
[10:26:17] RECOVERY - mobile traffic loggers on cp1041 is OK: PROCS OK: 4 processes with args varnishncsa
[10:26:17] RECOVERY - mobile traffic loggers on cp1043 is OK: PROCS OK: 2 processes with args varnishncsa
[10:52:38] PROBLEM - mobile traffic loggers on cp1043 is CRITICAL: PROCS CRITICAL: 6 processes with args varnishncsa
[11:02:38] RECOVERY - mobile traffic loggers on cp1043 is OK: PROCS OK: 1 process with args varnishncsa
[12:37:57] hi can someone help me troubleshoot my svn access
[13:07:49] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[13:38:39] any ops ?
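The Misc_Db_Lag and MySQL slave status alerts earlier in this log report whatever SHOW SLAVE STATUS says about the replica: Seconds_Behind_Master for lag, and Slave_IO_Running / Slave_SQL_Running for the es1004-style "Slave running: expected Yes, got No" case. The actual plugin and thresholds behind these checks are not visible here; the sketch below only illustrates the idea, assuming a local mysql client with credentials in ~/.my.cnf and hypothetical 60s/600s warning/critical thresholds.

```python
# Minimal sketch of a Nagios-style replication check -- not the plugin used
# in this log. Assumes `mysql` on PATH with credentials in ~/.my.cnf;
# the thresholds are hypothetical.
import subprocess
import sys

WARN_LAG = 60    # seconds (hypothetical)
CRIT_LAG = 600   # seconds (hypothetical)

def slave_status():
    """Return SHOW SLAVE STATUS as a dict of field -> value."""
    out = subprocess.check_output(
        ["mysql", "-e", "SHOW SLAVE STATUS\\G"], text=True
    )
    fields = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def main():
    st = slave_status()
    # Replication threads must both be running, otherwise lag is meaningless.
    if st.get("Slave_IO_Running") != "Yes" or st.get("Slave_SQL_Running") != "Yes":
        print("CRITICAL: Slave running: expected Yes, got No")
        return 2
    lag = st.get("Seconds_Behind_Master")
    if lag in (None, "NULL"):
        print("CRITICAL: Seconds_Behind_Master is NULL")
        return 2
    lag = int(lag)
    if lag >= CRIT_LAG:
        print(f"CRITICAL - Seconds_Behind_Master : {lag}s")
        return 2
    if lag >= WARN_LAG:
        print(f"WARNING - Seconds_Behind_Master : {lag}s")
        return 1
    print(f"OK - Seconds_Behind_Master : {lag}s")
    return 0

if __name__ == "__main__":
    sys.exit(main())  # exit codes follow the Nagios OK/WARNING/CRITICAL convention
```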
[15:07:05] PROBLEM - Host amslvs3 is DOWN: PING CRITICAL - Packet loss = 100%
[15:30:25] PROBLEM - BGP status on csw2-esams is CRITICAL: CRITICAL: host 91.198.174.244, sessions up: 3, down: 1, shutdown: 0BRPeering with AS64600 not established - BR
[17:10:54] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[18:00:50] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 198 MB (2% inode=60%): /var/lib/ureadahead/debugfs 198 MB (2% inode=60%):
[18:20:10] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 198 MB (2% inode=60%): /var/lib/ureadahead/debugfs 198 MB (2% inode=60%):
[18:21:00] PROBLEM - Puppet freshness on singer is CRITICAL: Puppet has not run in the last 10 hours
[18:29:50] RECOVERY - Disk space on srv223 is OK: DISK OK
[18:35:00] PROBLEM - Puppet freshness on es1002 is CRITICAL: Puppet has not run in the last 10 hours
[22:12:19] New patchset: Lcarr; "adding in accept all from localhost to logging fw" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1713
[22:12:24] anyone around that can do a code review for me ?
[22:12:33] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1713
[22:19:02] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1713
[22:19:03] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1713
[22:24:38] New patchset: Lcarr; "fixing accept localhost" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1714
[22:24:51] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1714
[22:24:58] New review: Lcarr; "subnet masks are fun!" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1714
[22:24:58] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1714
[22:28:58] New patchset: Lcarr; "Revert "fixing accept localhost"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1715
[22:29:12] New patchset: Lcarr; "Revert "adding in accept all from localhost to logging fw"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1716
[22:29:27] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1715
[22:29:27] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1716
[22:29:33] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1715
[22:29:34] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1715
[22:30:12] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1716
[22:30:13] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1716
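The quick merge-and-revert cycle above (changes 1713-1716), together with the "subnet masks are fun!" review comment, suggests the localhost accept rule carried the wrong prefix length. The actual puppet rule is not shown in the log, so the following is only an illustrative sketch using hypothetical addresses and Python's stdlib ipaddress module: 127.0.0.1/32 matches only the loopback host itself, while the same address written with a /8 either fails validation or quietly widens to the entire 127.0.0.0/8 block.

```python
# Hedged illustration only -- the real firewall rule is not in the log.
# Shows why mixing up a host address and its prefix length bites.
import ipaddress

# /32 matches exactly one address: the loopback host itself.
only_localhost = ipaddress.ip_network("127.0.0.1/32")
print(ipaddress.ip_address("127.0.0.1") in only_localhost)   # True
print(ipaddress.ip_address("127.0.0.2") in only_localhost)   # False

# Writing a host address with a shorter prefix is rejected by default...
try:
    ipaddress.ip_network("127.0.0.1/8")
except ValueError as err:
    print(err)  # "127.0.0.1/8 has host bits set"

# ...and if the check is relaxed, the rule silently becomes the whole /8.
whole_loopback_block = ipaddress.ip_network("127.0.0.1/8", strict=False)
print(whole_loopback_block)                                             # 127.0.0.0/8
print(ipaddress.ip_address("127.255.255.255") in whole_loopback_block)  # True
```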
[23:17:25] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[23:52:45] anyone fancy a pipermail discrepancy investigation?
[23:54:41] http://lists.wikimedia.org/pipermail/wikimania-l/2011-December/003247.html is truncated. effeietsanders and i both received an accurate copy
[23:55:07] gzip archive version is truncated as well
[23:55:42] you can see effeietsanders quoted the message here: http://lists.wikimedia.org/pipermail/wikimania-l/2011-December/003249.html
[23:55:59] so you can see what should be in the archive
[23:56:46] is there any particular or designated person that cares about pipermail/mailman breakages?
[23:58:19] i can file a bug or provide an accurate copy of the message or both
[23:58:22] hexmode: ^^^
[23:59:16] jeremyb: it is my vacation, but i'll take a peek ;)
[23:59:40] jeremyb: peeked. Does the next line start with "From"?
[23:59:51] errmmmmm, i should've checked that
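hexmode's closing question, whether the next line of the raw message starts with "From", points at the usual cause of truncated pipermail archives: the raw list archive is an mbox file, where a line beginning with "From " marks the start of a new message, so an unescaped "From " inside a message body makes the archiver cut the message short at that point. A minimal sketch for spotting such lines, assuming the archive has been fetched as a local mbox file (the filename below is hypothetical):

```python
# Minimal sketch: list every line in an mbox file that begins with "From ".
# Real envelope lines look like "From someone@example.org Sat Dec 31 ...";
# anything else starting with "From " should have been escaped as ">From ".
def bare_from_lines(mbox_path):
    """Yield (line_number, line) for every line that starts with 'From '."""
    with open(mbox_path, "r", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if line.startswith("From "):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    # Hypothetical filename -- substitute the downloaded archive.
    for lineno, line in bare_from_lines("wikimania-l-2011-December.txt"):
        print(f"{lineno}: {line}")
```

A flagged line that is not a real envelope line is the likely truncation point; Mailman 2 ships bin/cleanarch for escaping stray "From " lines in the stored mbox and bin/arch for rebuilding the pipermail pages from it.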