[00:02:55] RECOVERY - Lucene on search1015 is OK: TCP OK - 0.027 second response time on port 8123
[00:16:55] Is there any easy way for somebody to see how many mailing list members are actually receiving mail (and don't have delivery disabled)? The list is way too large to scroll
[00:17:29] members receiving mail?
[00:17:38] Mailman is ancient.
[00:17:47] I doubt there is any way to see that.
[00:18:04] RD: copy paste said list into excel?
[00:19:29] How can I view the list without scrolling multiple pages?
[00:20:12] I figured, Theo10011 - I'm wondering if there's a way an admin can dig it out of somewhere lol
[00:23:44] RD: you can send "who password" to -request
[00:24:11] RD: *and* you can set the number of entries listed per page in the web UI
[00:24:19] Theo10011: know any nice, modern equivalent?
[00:24:21] Oh really?
[00:24:42] I didn't know how. The list has some 34,000 members and I want to see how many are actually getting the mail.
[00:24:56] Over 9000?
[00:26:02] Prob. not open-source. GNU Mailman is from 1998-99 if I recall.
[00:26:36] saper: Tell me more about -request?
[00:26:59] Not searchable, the archives are all text based, and a pain to search through.
[00:27:03] RD: yourlistname-request@xxxx
[00:27:28] Theo10011: Mailman is still getting worked on :p
[00:27:34] and is open-source
[00:27:40] Theo10011: I know the problems with mailman, looking for a replacement :) using mostly search (not very good but still) from gmane.org
[00:28:21] I heard Dadamail and Sympa are open-source. Never tried or seen either.
[00:28:34] I wish WMF would develop something.
[00:28:43] The only replacement I know of for Mailman is Sympa but its interface sucks
[00:28:44] It can't be too much work to get one of those, can it?
[00:29:17] Theo10011: Why do you want fancy interfaces for the mailing list... it's a mailing list after all
[00:29:24] oh I can imagine the list of feature requests, like, facebook integration :)
[00:29:27] heh
[00:29:41] mailman's email interface is broken
[00:29:43] p858snake|l, it would help with archives. It is a pain to look through them.
[00:29:48] saper: Like this post on facebook!
[00:30:00] heh
[00:30:10] the command I use most is "set authenticate \n set delivery off"
[00:30:51] Theo10011: somehow I like it most when mailman instances deliver an mbox file as a hidden feature
[00:31:00] The archives are text based; if they were at least HTML, it would make going through large discussions slightly less painful.
[00:31:33] but I noticed recently, as I get older I am sticking to the text terminal even more, strange
[00:31:48] heh you are getting used to it.
[00:31:50] oh there's pipermail!
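(Editor's sketch, not from the channel: the "who password" command and the web UI discussed above don't report per-member delivery status, but an admin with shell access to the Mailman 2.x host could get RD's count through Mailman's own Python API, which is what bin/list_members uses internally. The list name below is a placeholder, and the script assumes it is run via bin/withlist on the list server.)

    # count_enabled.py -- sketch: count members whose mail delivery is still enabled.
    # Assumed invocation on the Mailman 2.x host (LISTNAME is a placeholder):
    #   bin/withlist -r count_enabled LISTNAME
    from Mailman import MemberAdaptor

    def count_enabled(mlist):
        # withlist passes the locked/unlocked MailList object in as mlist.
        members = mlist.getMembers()          # all regular + digest members
        enabled = 0
        for addr in members:
            # getDeliveryStatus() returns ENABLED, or BYUSER/BYADMIN/BYBOUNCE/UNKNOWN
            if mlist.getDeliveryStatus(addr) == MemberAdaptor.ENABLED:
                enabled += 1
        print('%d members total, %d receiving mail, %d with delivery disabled'
              % (len(members), enabled, len(members) - enabled))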
[00:32:22] I like gmane's interface
[00:35:56] Damn, the emailed list does not show if delivery is enabled
[00:36:58] ouch sorry
[00:37:18] That was a very large email though lol :)
[01:49:19] PROBLEM - Disk space on stafford is CRITICAL: DISK CRITICAL - free space: /var/lib/puppet 758 MB (3% inode=92%):
[01:55:37] RECOVERY - Disk space on stafford is OK: DISK OK
[01:55:37] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[01:57:43] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[02:06:43] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[02:06:43] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[02:15:00] !ops get your mother please
[02:18:36] !log LocalisationUpdate completed (1.19) at Fri Mar 23 02:18:35 UTC 2012
[02:18:39] Logged the message, Master
[03:10:31] is Dispenser's math on https://jira.toolserver.org/browse/MNT-1225 that replag won't resolve for 48 days possible?
[03:20:58] RECOVERY - Puppet freshness on linne is OK: puppet ran at Fri Mar 23 03:20:52 UTC 2012
[03:44:04] PROBLEM - SSH on amslvs1 is CRITICAL: Server answer:
[03:46:10] RECOVERY - SSH on amslvs1 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[04:41:47] !rt 1333 | Reedy
[04:47:25] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[04:49:22] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[05:13:41] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[05:52:23] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[05:56:08] poor puppy
[06:39:25] !log tstarling synchronized php-1.19/includes/filerepo/file/LocalFile.php 'r114442'
[06:39:29] Logged the message, Master
[07:37:54] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 199 MB (2% inode=61%): /var/lib/ureadahead/debugfs 199 MB (2% inode=61%):
[07:50:30] RECOVERY - Disk space on srv223 is OK: DISK OK
[09:49:13] PROBLEM - MySQL Slave Delay on db24 is CRITICAL: CRIT replication delay 182 seconds
[09:53:43] RECOVERY - MySQL Slave Delay on db24 is OK: OK replication delay 0 seconds
[10:19:39] PROBLEM - udp2log processes on locke is CRITICAL: CRITICAL: filters absent: /a/squid/urjc.awk,
[10:30:09] RECOVERY - udp2log processes on locke is OK: OK: all filters present
[10:36:27] PROBLEM - udp2log processes on locke is CRITICAL: CRITICAL: filters absent: /a/squid/urjc.awk,
[10:38:24] RECOVERY - udp2log processes on locke is OK: OK: all filters present
[10:44:42] PROBLEM - udp2log processes on locke is CRITICAL: CRITICAL: filters absent: /a/squid/urjc.awk,
[10:46:48] RECOVERY - udp2log processes on locke is OK: OK: all filters present
[11:57:38] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[11:59:17] PROBLEM - swift-container-auditor on ms-be1 is CRITICAL: PROCS CRITICAL: 0 processes with regex args ^/usr/bin/python /usr/bin/swift-container-auditor
[11:59:35] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[12:03:29] RECOVERY - swift-container-auditor on ms-be1 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-auditor
[12:09:39] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[12:09:39] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
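(Editor's sketch for the 03:10 question about the replag estimate in MNT-1225: a "won't resolve for N days" figure is just the current lag divided by the net rate at which the slave reduces that lag. The numbers below are purely hypothetical and are not taken from the ticket; they only show the shape of the arithmetic.)

    # Sketch of the catch-up arithmetic behind a "replag won't resolve for N days"
    # estimate. All figures are made up for illustration, not from MNT-1225.
    SECONDS_PER_DAY = 86400

    def days_to_catch_up(lag_seconds, apply_rate):
        # apply_rate = seconds of master binlog the slave replays per wall-clock
        # second; anything above 1.0 is the net rate at which the lag shrinks.
        net_gain = apply_rate - 1.0
        if net_gain <= 0:
            return float('inf')  # slave never catches up
        return lag_seconds / net_gain / SECONDS_PER_DAY

    # e.g. 12 days behind, replaying 1.25 s of binlog per wall-clock second
    # -> 12 / 0.25 = 48 days to catch up
    print(days_to_catch_up(12 * SECONDS_PER_DAY, 1.25))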
[13:37:24] PROBLEM - MySQL Slave Delay on db1035 is CRITICAL: CRIT replication delay 189 seconds
[13:37:51] PROBLEM - MySQL Replication Heartbeat on db1035 is CRITICAL: CRIT replication delay 201 seconds
[13:41:00] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 276 MB (3% inode=61%): /var/lib/ureadahead/debugfs 276 MB (3% inode=61%):
[13:45:57] RECOVERY - MySQL Slave Delay on db1035 is OK: OK replication delay 0 seconds
[13:46:24] RECOVERY - MySQL Replication Heartbeat on db1035 is OK: OK replication delay 0 seconds
[13:48:57] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 197 MB (2% inode=61%): /var/lib/ureadahead/debugfs 197 MB (2% inode=61%):
[13:55:42] RECOVERY - Disk space on srv221 is OK: DISK OK
[13:57:30] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[14:01:24] RECOVERY - Disk space on srv222 is OK: DISK OK
[14:01:42] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[15:14:47] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[15:18:59] RECOVERY - RAID on db1020 is OK: OK: State is Optimal, checked 2 logical device(s)
[15:19:08] RECOVERY - DPKG on db1020 is OK: All packages OK
[15:19:08] RECOVERY - MySQL Slave Delay on db1020 is OK: OK replication delay seconds
[15:19:26] RECOVERY - MySQL Slave Running on db1020 is OK: OK replication
[15:19:26] RECOVERY - Disk space on db1020 is OK: DISK OK
[15:19:26] RECOVERY - MySQL Replication Heartbeat on db1020 is OK: OK replication delay seconds
[15:19:44] RECOVERY - MySQL disk space on db1020 is OK: DISK OK
[15:19:53] RECOVERY - Full LVS Snapshot on db1020 is OK: OK no full LVM snapshot volumes
[15:20:20] RECOVERY - MySQL Idle Transactions on db1020 is OK: OK longest blocking idle transaction sleeps for seconds
[15:20:29] RECOVERY - SSH on db1020 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[15:20:38] RECOVERY - MySQL Recent Restart on db1020 is OK: OK seconds since restart
[15:25:35] PROBLEM - Host db1020 is DOWN: PING CRITICAL - Packet loss = 100%
[15:26:56] RECOVERY - Host db1020 is UP: PING OK - Packet loss = 0%, RTA = 26.43 ms
[15:38:47] RECOVERY - Host cp1017 is UP: PING OK - Packet loss = 0%, RTA = 27.32 ms
[15:43:04] PROBLEM - RAID on cp1017 is CRITICAL: Connection refused by host
[15:43:58] PROBLEM - Disk space on cp1017 is CRITICAL: Connection refused by host
[15:44:25] PROBLEM - Frontend Squid HTTP on cp1017 is CRITICAL: Connection refused
[15:45:28] PROBLEM - SSH on cp1017 is CRITICAL: Connection refused
[15:45:28] PROBLEM - Backend Squid HTTP on cp1017 is CRITICAL: Connection refused
[15:45:46] PROBLEM - DPKG on cp1017 is CRITICAL: Connection refused by host
[15:49:04] PROBLEM - Host magnesium is DOWN: PING CRITICAL - Packet loss = 100%
[15:53:52] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[15:54:29] Hello
[15:54:37] Is wmflabs currently down?
[15:56:08] no?
[15:59:34] RECOVERY - Host magnesium is UP: PING OK - Packet loss = 0%, RTA = 26.91 ms
[16:07:49] PROBLEM - NTP on cp1017 is CRITICAL: NTP CRITICAL: No response from NTP server
[16:37:49] RECOVERY - RAID on cp1017 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0
[16:38:16] RECOVERY - DPKG on cp1017 is OK: All packages OK
[16:38:16] RECOVERY - Disk space on cp1017 is OK: DISK OK
[16:38:25] RECOVERY - SSH on cp1017 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[16:40:31] RECOVERY - Lucene on search1001 is OK: TCP OK - 0.027 second response time on port 8123
[16:41:25] RECOVERY - NTP on cp1017 is OK: NTP OK: Offset 0.04408991337 secs
[16:45:28] RECOVERY - Host cp1019 is UP: PING OK - Packet loss = 0%, RTA = 26.49 ms
[16:49:49] PROBLEM - Disk space on cp1019 is CRITICAL: Connection refused by host
[16:50:16] PROBLEM - Frontend Squid HTTP on cp1019 is CRITICAL: Connection refused
[16:50:52] PROBLEM - RAID on cp1019 is CRITICAL: Connection refused by host
[16:51:28] PROBLEM - SSH on cp1019 is CRITICAL: Connection refused
[16:51:28] PROBLEM - Backend Squid HTTP on cp1019 is CRITICAL: Connection refused
[16:51:37] PROBLEM - DPKG on cp1019 is CRITICAL: Connection refused by host
[16:57:41] PROBLEM - MySQL Slave Delay on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[16:58:08] RECOVERY - Frontend Squid HTTP on cp1017 is OK: HTTP OK HTTP/1.0 200 OK - 27535 bytes in 0.162 seconds
[16:59:02] PROBLEM - MySQL Idle Transactions on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[16:59:11] RECOVERY - Backend Squid HTTP on cp1017 is OK: HTTP OK HTTP/1.0 200 OK - 27399 bytes in 0.162 seconds
[16:59:29] PROBLEM - MySQL Recent Restart on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[16:59:38] PROBLEM - MySQL Replication Heartbeat on db1047 is CRITICAL: CRIT replication delay 424 seconds
[17:00:23] PROBLEM - MySQL Slave Running on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[17:00:41] PROBLEM - Host cp1019 is DOWN: PING CRITICAL - Packet loss = 100%
[17:26:20] RECOVERY - RAID on cp1019 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0
[17:26:29] RECOVERY - Host cp1019 is UP: PING OK - Packet loss = 0%, RTA = 26.43 ms
[17:27:23] RECOVERY - SSH on cp1019 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[17:27:41] RECOVERY - DPKG on cp1019 is OK: All packages OK
[17:28:08] RECOVERY - Disk space on cp1019 is OK: DISK OK
[17:35:38] PROBLEM - Packetloss_Average on emery is CRITICAL: CRITICAL: packet_loss_average is 22.8082502521 (gt 8.0)
[17:42:54] I got two mails from mw13.pmtpa.wmnet with missing translations (Subject: <enotif_subject>); another mail from mw40.pmtpa.wmnet was ok.
[17:55:44] RECOVERY - Frontend Squid HTTP on cp1019 is OK: HTTP OK HTTP/1.0 200 OK - 27535 bytes in 2.824 seconds
[17:56:29] RECOVERY - Backend Squid HTTP on cp1019 is OK: HTTP OK HTTP/1.0 200 OK - 27399 bytes in 2.300 seconds
[18:04:06] !log asher synchronized wmf-config/db.php 'returning db32, pulling db52 for migration'
[18:04:09] Logged the message, Master
[18:17:03] RECOVERY - Packetloss_Average on emery is OK: OK: packet_loss_average is 2.98938149123
[18:19:09] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[18:20:30] PROBLEM - Host cp1019 is DOWN: PING CRITICAL - Packet loss = 100%
[18:21:06] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[19:54:19] RECOVERY - MySQL Recent Restart on db1047 is OK: OK 7948912 seconds since restart
[19:54:19] RECOVERY - MySQL Slave Running on db1047 is OK: OK replication Slave_IO_Running: Yes Slave_SQL_Running: Yes Last_Error:
[20:12:01] PROBLEM - Router interfaces on mr1-eqiad is CRITICAL: CRITICAL: Device does not support ifTable - try without -I option
[20:13:58] RECOVERY - Router interfaces on mr1-eqiad is OK: OK: host 10.65.0.1, interfaces up: 32, down: 0, dormant: 0, excluded: 0, unused: 0
[21:58:57] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[22:00:54] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[22:07:16] nighty~ o/
[22:10:57] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[22:10:57] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[22:48:59] RECOVERY - MySQL Slave Delay on db1047 is OK: OK replication delay seconds
[22:48:59] RECOVERY - MySQL Replication Heartbeat on db1047 is OK: OK replication delay seconds
[22:49:26] RECOVERY - MySQL Idle Transactions on db1047 is OK: OK longest blocking idle transaction sleeps for 0 seconds
[22:55:08] PROBLEM - MySQL Slave Delay on db1047 is CRITICAL: CRIT replication delay 21069 seconds
[22:55:17] PROBLEM - MySQL Replication Heartbeat on db1047 is CRITICAL: CRIT replication delay 21054 seconds
[23:05:25] !log reedy synchronized wmf-config/InitialiseSettings.php 'wmgAutopromoteOnce empty arrays'
[23:05:28] Logged the message, Master
[23:07:52] !log reedy synchronized wmf-config/CommonSettings.php 'wgAutopromoteOnce'
[23:07:55] Logged the message, Master
[23:08:47] !log reedy synchronized wmf-config/CommonSettings.php 'wgAutopromoteOnce'
[23:08:51] Logged the message, Master
[23:09:07] !log reedy synchronized wmf-config/InitialiseSettings.php 'wmgAutopromoteOnce'
[23:09:10] Logged the message, Master
[23:13:22] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34005 - Change uploader flag configuration on Russian Wikipedia'
[23:13:25] Logged the message, Master
[23:13:46] gn8 folks
[23:15:31] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34005 - Change uploader flag configuration on Russian Wikipedia'
[23:15:35] Logged the message, Master
[23:30:23] PROBLEM - Router interfaces on mr1-eqiad is CRITICAL: CRITICAL: No response from remote host 10.65.0.1 for 1.3.6.1.2.1.2.2.1.8 with snmp version 2
[23:32:20] RECOVERY - Router interfaces on mr1-eqiad is OK: OK: host 10.65.0.1, interfaces up: 32, down: 0, dormant: 0, excluded: 0, unused: 0
[23:49:13] !log reedy synchronized php-1.19/extensions/MoodBar/ 'r114466'
[23:49:16] Logged the message, Master