[00:23:11] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:24:24] !log asher synchronized wmf-config/db.php 'pullin db32 for revision alter'
[00:24:27] Logged the message, Master
[00:25:08] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 6.337 seconds
[00:33:59] PROBLEM - MySQL Slave Delay on db32 is CRITICAL: CRIT replication delay 348 seconds
[00:34:26] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 376 seconds
[00:59:20] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:03:23] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 2.889 seconds
[01:14:10] !log Re-enabled the donations queue consumer in Jenkins
[01:14:13] Logged the message, Master
[01:35:42] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 69 MB (0% inode=61%): /var/lib/ureadahead/debugfs 69 MB (0% inode=61%):
[01:39:09] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:45:18] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 0.034 seconds
[01:46:30] RECOVERY - Disk space on srv219 is OK: DISK OK
[02:00:24] Does anyone have root access to the Toolserver here?
[02:01:28] * matthewrbowker doesn't see DaB. or noxy :(
[02:01:30] *nosy
[02:01:55] I seem to have used up all my database connections (30) and killing everything hasn't helped
[02:03:05] That's not good :(
[02:09:19] Dispenser: I have no problem using your tools. I've been trying reflinks, is there another one that connects to the database more?
[02:09:40] r/more/
[02:09:47] https://toolserver.org/~dispenser/view/Dab_solver
[02:10:39] Hmmm... darn :(
[02:11:06] while true; do mytop -h sql-s1; done
[02:11:18] that's been running for 10 minutes now
[02:12:43] Dispenser: You do know about the database issue on s1, right? (before I assume stupidly ;) )
[02:13:23] Yes, I also noticed my custom query killer was rather slow in killing
[02:15:44] Interesting...
[02:15:46] Hmmm... interesting, "ps -u dispenser" shows an "rmytop" still running
[02:16:49] As well as 2x screen-4 and 3x bash
[02:17:24] Not running for more than a second before dying and restarting
[02:17:47] !log LocalisationUpdate completed (1.19) at Thu Mar 22 02:17:47 UTC 2012
[02:17:51] Logged the message, Master
[02:17:53] and displaying 'rmytop: User 'dispenser' has exceeded the 'max_user_connections' resource (current value: 15)' twenty times a second
[02:17:55] For which, the bash?
[02:18:09] ah
[02:19:03] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:19:48] RECOVERY - check_job_queue on spence is OK: JOBQUEUE OK - all job queues below 10,000
[02:23:17] any idea how long the alter table causing replag will take?
[02:24:01] nm: On Toolserver?
[02:24:15] yes, enwiki
[02:25:12] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 0.025 seconds
[02:25:28] nm: Hmmm... They said upwards of a day: https://jira.toolserver.org/browse/MNT-1225
[02:25:36] tyvm
[02:25:43] np
[02:32:25] Dispenser: I'm trying to remember how to see all active connections, BTW. I haven't left XD
[02:36:29] 0.008 sec/hash * 524,000,000 history revisions = 4,192,000 seconds or 48 days
[02:37:17] err.. 48 hours
[02:40:01] Wow... hopefully that should finish up soon then :/
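As a quick check of the runtime estimate quoted just above, using the per-hash time and revision count exactly as stated in the log (neither figure is verified here):

```bash
# 0.008 s/hash * 524,000,000 revisions = 4,192,000 seconds of hashing in total
echo $((4192000 / 3600))   # ~1164 hours
echo $((4192000 / 86400))  # ~48 days
```

With those inputs the job works out to roughly 48.5 days, so the original "48 days" figure is the one the arithmetic supports; the "err.. 48 hours" correction would only hold if the per-hash time or the revision count were much smaller than quoted.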
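For the "how to see all active connections" question and the max_user_connections errors above, a minimal sketch of the usual approach, assuming a standard MySQL client with credentials in ~/.my.cnf; the host sql-s1 and the user name dispenser are taken from the log, so substitute your own:

```bash
# List your current connections on the server (Id is the first column of the output).
mysql -h sql-s1 -e "SHOW PROCESSLIST" | awk '$2 == "dispenser"'

# The flood of errors in the log came from a shell loop that reconnected as fast as
# mytop could fail, so the lasting fix is on the login host, not inside MySQL:
ps -u dispenser -o pid,ppid,etime,cmd     # find the while-loop bash, screen and rmytop
pkill -u dispenser -f 'mytop -h sql-s1'   # stop the mytop invocations
# then kill the parent bash (or quit the screen session) so nothing respawns them.
```

Once max_user_connections is exhausted, even the SHOW PROCESSLIST connection may be refused until the runaway loop has been stopped, which matches what was seen here.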
[02:43:45] have there been any changes in the Patrol functionality in the past four days or so?
[03:04:12] RECOVERY - Puppet freshness on ms2 is OK: puppet ran at Thu Mar 22 03:03:51 UTC 2012
[03:34:21] PROBLEM - Disk space on stafford is CRITICAL: DISK CRITICAL - free space: /var/lib/puppet 758 MB (3% inode=92%):
[04:11:27] anyone around?
[04:11:38] I'm having difficulty purging the server cache for a particular page
[04:11:56] you can see the problem at http://en.wikipedia.org/wiki/User:Prodego/Sandbox
[04:12:01] there should not be a space at the end
[04:15:41] Why do you think that syntax is legal?
[04:16:11] Links can't contain newlines.
[04:27:44] RECOVERY - Disk space on stafford is OK: DISK OK
[05:53:14] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[05:55:20] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[06:03:26] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[06:03:26] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[06:29:32] PROBLEM - Disk space on ms1002 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[06:30:08] PROBLEM - RAID on ms1002 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[06:31:56] PROBLEM - DPKG on ms1002 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[06:47:06] PROBLEM - Disk space on search1015 is CRITICAL: DISK CRITICAL - free space: /a 3220 MB (2% inode=99%):
[07:58:07] PROBLEM - Disk space on srv224 is CRITICAL: DISK CRITICAL - free space: / 235 MB (3% inode=61%): /var/lib/ureadahead/debugfs 235 MB (3% inode=61%):
[07:58:07] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 255 MB (3% inode=61%): /var/lib/ureadahead/debugfs 255 MB (3% inode=61%):
[07:58:16] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 19 MB (0% inode=61%): /var/lib/ureadahead/debugfs 19 MB (0% inode=61%):
[08:02:19] PROBLEM - Disk space on srv220 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[08:02:19] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 251 MB (3% inode=61%): /var/lib/ureadahead/debugfs 251 MB (3% inode=61%):
[08:04:34] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 4 MB (0% inode=61%): /var/lib/ureadahead/debugfs 4 MB (0% inode=61%):
[08:10:34] RECOVERY - Disk space on srv224 is OK: DISK OK
[08:10:43] RECOVERY - Disk space on srv219 is OK: DISK OK
[08:10:52] RECOVERY - Disk space on srv222 is OK: DISK OK
[08:14:55] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 215 MB (3% inode=61%): /var/lib/ureadahead/debugfs 215 MB (3% inode=61%):
[08:15:13] RECOVERY - Disk space on srv221 is OK: DISK OK
[08:17:01] RECOVERY - Disk space on srv220 is OK: DISK OK
[08:17:01] RECOVERY - Disk space on srv223 is OK: DISK OK
[08:42:09] PROBLEM - Puppet freshness on linne is CRITICAL: Puppet has not run in the last 10 hours
[09:10:12] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[09:36:09] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[09:49:13] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[14:33:46] !log reedy synchronizing Wikimedia installation... :
[14:33:50] Logged the message, Master
[14:46:34] sync done.
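On the cache-purge question from 04:11 above (the User:Prodego/Sandbox page): a minimal sketch of asking MediaWiki to re-render a page, using the generic action=purge mechanisms rather than anything Wikimedia-specific. As noted in the log, a purge only helps if the parser would actually produce different output; broken wikitext such as a link containing a newline has to be fixed in the page itself.

```bash
# Purge via index.php: drops the cached HTML and re-parses the page.
curl -s -X POST 'http://en.wikipedia.org/w/index.php?title=User:Prodego/Sandbox&action=purge' >/dev/null

# Or via the web API's purge module.
curl -s -X POST 'http://en.wikipedia.org/w/api.php?action=purge&titles=User:Prodego/Sandbox&format=json'
```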
[15:12:26] PROBLEM - Host search1015 is DOWN: PING CRITICAL - Packet loss = 100%
[15:13:20] PROBLEM - Host search1016 is DOWN: PING CRITICAL - Packet loss = 100%
[15:28:29] Does anyone know what that legal code thing on Commons' upload wizard is? It links to $2....
[15:44:54] RECOVERY - Host search1015 is UP: PING OK - Packet loss = 0%, RTA = 26.85 ms
[15:48:30] RECOVERY - Host search1016 is UP: PING OK - Packet loss = 0%, RTA = 26.44 ms
[15:54:21] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[15:56:27] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[16:05:27] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[16:05:27] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[16:37:24] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 275 MB (3% inode=61%): /var/lib/ureadahead/debugfs 275 MB (3% inode=61%):
[16:51:29] RECOVERY - Disk space on srv221 is OK: DISK OK
[18:08:14] !log preilly synchronized php-1.19/extensions/ZeroRatedMobileAccess/ZeroRatedMobileAccess.body.php 'changes for zero needed for carrier testing header of landing page only for mswiki'
[18:08:18] Logged the message, Master
[18:27:38] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 212 MB (2% inode=61%): /var/lib/ureadahead/debugfs 212 MB (2% inode=61%):
[18:37:50] PROBLEM - Disk space on srv224 is CRITICAL: DISK CRITICAL - free space: / 27 MB (0% inode=61%): /var/lib/ureadahead/debugfs 27 MB (0% inode=61%):
[18:38:08] PROBLEM - Disk space on srv220 is CRITICAL: DISK CRITICAL - free space: / 203 MB (2% inode=61%): /var/lib/ureadahead/debugfs 203 MB (2% inode=61%):
[18:38:08] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 263 MB (3% inode=61%): /var/lib/ureadahead/debugfs 263 MB (3% inode=61%):
[18:40:14] RECOVERY - Disk space on srv222 is OK: DISK OK
[18:40:14] RECOVERY - Disk space on srv220 is OK: DISK OK
[18:43:50] PROBLEM - Puppet freshness on linne is CRITICAL: Puppet has not run in the last 10 hours
[18:46:14] PROBLEM - Disk space on srv224 is CRITICAL: DISK CRITICAL - free space: / 227 MB (3% inode=61%): /var/lib/ureadahead/debugfs 227 MB (3% inode=61%):
[18:48:38] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 163 MB (2% inode=61%): /var/lib/ureadahead/debugfs 163 MB (2% inode=61%):
[18:50:26] RECOVERY - Disk space on srv224 is OK: DISK OK
[18:50:44] RECOVERY - Disk space on srv219 is OK: DISK OK
[19:11:53] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[19:37:36] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[19:47:29] !log rebooting ms1002, had stuck rsyncs, and kswapds at 100% cpu, weirdness like "ls /export/upload/wikipedia/am/0/00" hanging.
[19:47:33] Logged the message, Master
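A sketch of the kind of checks that lead to a conclusion like the !log entry above (stuck rsyncs, kswapd spinning, a hanging ls); only standard Linux tools are used, and the path is simply the one quoted in the log:

```bash
# Is kswapd really pegged at 100% CPU?
top -b -n 1 | grep -i kswapd

# Processes stuck in uninterruptible sleep (state D), which is what a hanging "ls" suggests.
ps axo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

# Memory pressure and writeback backlog that can keep kswapd spinning.
grep -E 'MemFree|Dirty|Writeback' /proc/meminfo
```

If the stuck processes ignore signals and all point at the same filesystem, a reboot like the one logged here is often the only way out.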
[19:49:18] PROBLEM - Host ms1002 is DOWN: PING CRITICAL - Packet loss = 100%
[19:50:21] RECOVERY - Disk space on ms1002 is OK: DISK OK
[19:50:30] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[19:50:30] RECOVERY - Host ms1002 is UP: PING OK - Packet loss = 0%, RTA = 26.43 ms
[19:50:48] RECOVERY - RAID on ms1002 is OK: OK: State is Optimal, checked 2 logical device(s)
[19:51:42] RECOVERY - DPKG on ms1002 is OK: All packages OK
[19:58:54] RECOVERY - Puppet freshness on ms1002 is OK: puppet ran at Thu Mar 22 19:58:20 UTC 2012
[19:59:12] RECOVERY - Host magnesium is UP: PING WARNING - Packet loss = 37%, RTA = 65.15 ms
[20:26:12] PROBLEM - Host search1015 is DOWN: PING CRITICAL - Packet loss = 100%
[20:29:21] RECOVERY - Puppet freshness on magnesium is OK: puppet ran at Thu Mar 22 20:29:10 UTC 2012
[20:31:00] FYI: I got the following message as I tried to open several versions of an article at the same time: http://p.defau.lt/?kcqil8RKZ4abwtgJDhUMsg
[20:34:51] Hi, the thumbnail is not generated and purge does not work for me: http://commons.wikimedia.org/wiki/File:Sphynx_kitten_from_Belarus_-_March_2012_-_HD.ogv
[20:38:48] RECOVERY - Host search1015 is UP: PING OK - Packet loss = 0%, RTA = 26.41 ms
[20:43:09] PROBLEM - SSH on search1015 is CRITICAL: Connection refused
[20:43:18] PROBLEM - DPKG on search1015 is CRITICAL: Connection refused by host
[20:44:57] PROBLEM - RAID on search1015 is CRITICAL: Connection refused by host
[20:50:22] PROBLEM - MySQL Replication Heartbeat on db1007 is CRITICAL: CRIT replication delay 326 seconds
[20:50:31] PROBLEM - Lucene on search1015 is CRITICAL: Connection refused
[20:50:40] PROBLEM - MySQL Slave Delay on db1007 is CRITICAL: CRIT replication delay 345 seconds
[20:56:49] RECOVERY - MySQL Replication Heartbeat on db1007 is OK: OK replication delay 0 seconds
[20:57:07] RECOVERY - MySQL Slave Delay on db1007 is OK: OK replication delay 0 seconds
[20:59:04] PROBLEM - Host search1016 is DOWN: PING CRITICAL - Packet loss = 100%
[20:59:58] RECOVERY - SSH on search1015 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[21:05:13] RECOVERY - Host search1016 is UP: PING OK - Packet loss = 0%, RTA = 26.41 ms
[21:10:10] PROBLEM - Disk space on search1016 is CRITICAL: Connection refused by host
[21:10:37] PROBLEM - RAID on search1016 is CRITICAL: Connection refused by host
[21:10:55] PROBLEM - SSH on search1016 is CRITICAL: Connection refused
[21:11:22] PROBLEM - DPKG on search1016 is CRITICAL: Connection refused by host
[21:17:58] PROBLEM - Lucene on search1016 is CRITICAL: Connection refused
[21:28:37] PROBLEM - NTP on search1015 is CRITICAL: NTP CRITICAL: No response from NTP server
[21:29:58] RECOVERY - SSH on search1016 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[21:34:55] PROBLEM - NTP on search1016 is CRITICAL: NTP CRITICAL: Offset unknown
[21:36:16] PROBLEM - Host search1015 is DOWN: PING CRITICAL - Packet loss = 100%
[21:37:37] RECOVERY - Disk space on search1016 is OK: DISK OK
[21:37:55] RECOVERY - RAID on search1016 is OK: OK: Active: 6, Working: 6, Failed: 0, Spare: 0
[21:38:04] RECOVERY - Host search1015 is UP: PING OK - Packet loss = 0%, RTA = 26.58 ms
[21:38:31] RECOVERY - DPKG on search1016 is OK: All packages OK
[21:41:13] RECOVERY - NTP on search1016 is OK: NTP OK: Offset 0.08279848099 secs
[21:42:43] PROBLEM - Host search1015 is DOWN: PING CRITICAL - Packet loss = 100%
[22:14:31] RECOVERY - Lucene on search1016 is OK: TCP OK - 0.027 second response time on port 8123
[22:19:49] RECOVERY - Host search1015 is UP: PING OK - Packet loss = 0%, RTA = 26.55 ms
[22:25:04] PROBLEM - SSH on search1015 is CRITICAL: Connection refused
[22:34:49] PROBLEM - Host search1015 is DOWN: PING CRITICAL - Packet loss = 100%
[22:44:24] gn8 folks
[23:01:13] RECOVERY - Host db1020 is UP: PING OK - Packet loss = 0%, RTA = 26.54 ms
[23:05:34] PROBLEM - MySQL Replication Heartbeat on db1020 is CRITICAL: Connection refused by host
[23:05:43] PROBLEM - SSH on db1020 is CRITICAL: Connection refused
[23:06:02] PROBLEM - DPKG on db1020 is CRITICAL: Connection refused by host
[23:06:12] PROBLEM - mysqld processes on db1020 is CRITICAL: Connection refused by host
[23:06:12] PROBLEM - MySQL Slave Delay on db1020 is CRITICAL: Connection refused by host
[23:06:28] PROBLEM - Disk space on db1020 is CRITICAL: Connection refused by host
[23:06:29] PROBLEM - MySQL Slave Running on db1020 is CRITICAL: Connection refused by host
[23:06:55] PROBLEM - Full LVS Snapshot on db1020 is CRITICAL: Connection refused by host
[23:07:13] PROBLEM - MySQL disk space on db1020 is CRITICAL: Connection refused by host
[23:07:13] PROBLEM - MySQL Idle Transactions on db1020 is CRITICAL: Connection refused by host
[23:07:31] PROBLEM - RAID on db1020 is CRITICAL: Connection refused by host
[23:07:31] PROBLEM - MySQL Recent Restart on db1020 is CRITICAL: Connection refused by host
[23:13:13] RECOVERY - Host search1015 is UP: PING OK - Packet loss = 0%, RTA = 26.41 ms
[23:30:55] !log reedy synchronized php/cache/interwiki.cdb 'Updating interwiki cache'
[23:30:59] Logged the message, Master
[23:32:25] PROBLEM - NTP on db1020 is CRITICAL: NTP CRITICAL: No response from NTP server
[23:38:52] RECOVERY - SSH on search1015 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[23:49:52] RECOVERY - Disk space on search1015 is OK: DISK OK
[23:50:46] PROBLEM - Host search-pool4.svc.eqiad.wmnet is DOWN: CRITICAL - Network Unreachable (10.2.2.14)
[23:50:55] RECOVERY - RAID on search1015 is OK: OK: Active: 6, Working: 6, Failed: 0, Spare: 0
[23:51:04] PROBLEM - Host search-pool4.svc.pmtpa.wmnet is DOWN: CRITICAL - Network Unreachable (10.2.1.14)
[23:51:13] PROBLEM - Host search-pool1.svc.eqiad.wmnet is DOWN: PING CRITICAL - Packet loss = 100%
[23:51:22] PROBLEM - Host search-prefix.svc.eqiad.wmnet is DOWN: CRITICAL - Network Unreachable (10.2.2.15)
[23:51:40] RECOVERY - DPKG on search1015 is OK: All packages OK
[23:51:49] PROBLEM - Host search-pool2.svc.eqiad.wmnet is DOWN: PING CRITICAL - Packet loss = 100%
[23:51:58] PROBLEM - Host search-prefix.svc.pmtpa.wmnet is DOWN: CRITICAL - Network Unreachable (10.2.1.15)
[23:52:07] PROBLEM - Host search-pool3.svc.eqiad.wmnet is DOWN: PING CRITICAL - Packet loss = 100%