[00:06:49] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[00:08:55] !log awjrichards synchronized php/extensions/MobileFrontend/MobileFrontend.body.php 'r114506'
[00:11:01] RECOVERY - LVS HTTP on m.wikimedia.org is OK: HTTP OK HTTP/1.1 200 OK - 0.106 second response time
[00:25:52] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:29:55] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 5.939 seconds
[00:35:50] !log awjrichards synchronized php/extensions/MobileFrontend/stylesheets/beta_common.css 'r114508'
[00:36:09] !log awjrichards synchronized php/extensions/MobileFrontend/templates/LeaveFeedbackTemplate.php 'r114508'
[00:36:36] !log awjrichards synchronized php/extensions/MobileFrontend/templates/MobileFrontendTemplate.php 'r114507'
[00:37:30] !log awjrichards synchronized wmf-config/CommonSettings.php 'Bumping MobileFrontend resource version #'
[00:41:58] !log awjrichards synchronized php/extensions/MobileFrontend/stylesheets/beta_common.css 'r114509'
[00:42:17] good night~
[00:42:50] !log awjrichards synchronized wmf-config/CommonSettings.php 'Bmping resource version for MobileFrontend'
[00:48:04] PROBLEM - NTP on sq34 is CRITICAL: NTP CRITICAL: No response from NTP server
[00:55:36] !log awjrichards synchronized php/extensions/MobileFrontend/templates/LeaveFeedbackTemplate.php 'r114508'
[00:56:51] !log awjrichards synchronized php/extensions/MobileFrontend/MobileFrontend.body.php 'r114507'
[01:04:54] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:11:12] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 6.645 seconds
[01:46:54] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:53:03] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 4.113 seconds
[02:17:30] PROBLEM - mysqld processes on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:17:48] PROBLEM - Full LVS Snapshot on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:17:52] !log LocalisationUpdate completed (1.19) at Tue Mar 27 02:17:52 UTC 2012
[02:17:57] PROBLEM - MySQL Slave Running on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:18:24] PROBLEM - MySQL Recent Restart on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:18:24] PROBLEM - MySQL Idle Transactions on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:18:33] PROBLEM - RAID on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:19:27] RECOVERY - mysqld processes on db1047 is OK: PROCS OK: 1 process with command name mysqld
[02:19:45] RECOVERY - Full LVS Snapshot on db1047 is OK: OK no full LVM snapshot volumes
[02:19:54] RECOVERY - MySQL Slave Running on db1047 is OK: OK replication Slave_IO_Running: Yes Slave_SQL_Running: Yes Last_Error:
[02:20:21] RECOVERY - MySQL Recent Restart on db1047 is OK: OK 271886 seconds since restart
[02:20:21] RECOVERY - MySQL Idle Transactions on db1047 is OK: OK longest blocking idle transaction sleeps for 0 seconds
[02:20:30] RECOVERY - RAID on db1047 is OK: OK: State is Optimal, checked 2 logical device(s)
[02:25:36] PROBLEM - MySQL Replication Heartbeat on db1047 is CRITICAL: CRIT replication delay 313 seconds
[02:25:36] PROBLEM - MySQL Slave Delay on db1047 is CRITICAL: CRIT replication delay 313 seconds
[02:28:36] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:34:45] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 0.034 seconds
[02:36:06] RECOVERY - MySQL Slave Delay on db1047 is OK: OK replication delay 0 seconds
[02:36:06] RECOVERY - MySQL Replication Heartbeat on db1047 is OK: OK replication delay 0 seconds
[02:41:03] RECOVERY - Puppet freshness on brewster is OK: puppet ran at Tue Mar 27 02:40:53 UTC 2012
[03:06:46] [[Tech]]; MZMcBride; /* Global things */ +reply; https://meta.wikimedia.org/w/index.php?diff=3596534&oldid=3595436&rcid=3201829
[03:09:42] PROBLEM - MySQL Slave Delay on db1047 is CRITICAL: CRIT replication delay 317 seconds
[03:09:51] PROBLEM - MySQL Replication Heartbeat on db1047 is CRITICAL: CRIT replication delay 325 seconds
[03:19:18] PROBLEM - MySQL Recent Restart on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:19:27] PROBLEM - RAID on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:19:36] PROBLEM - MySQL Idle Transactions on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:20:30] PROBLEM - DPKG on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:20:39] PROBLEM - mysqld processes on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:20:57] PROBLEM - MySQL Slave Running on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:21:06] PROBLEM - Full LVS Snapshot on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[03:23:03] RECOVERY - MySQL Slave Running on db1047 is OK: OK replication Slave_IO_Running: Yes Slave_SQL_Running: Yes Last_Error:
[03:23:21] RECOVERY - MySQL Recent Restart on db1047 is OK: OK 275672 seconds since restart
[03:23:30] RECOVERY - RAID on db1047 is OK: OK: State is Optimal, checked 2 logical device(s)
[03:25:18] RECOVERY - Full LVS Snapshot on db1047 is OK: OK no full LVM snapshot volumes
[03:26:57] RECOVERY - mysqld processes on db1047 is OK: PROCS OK: 1 process with command name mysqld
[03:27:51] RECOVERY - MySQL Idle Transactions on db1047 is OK: OK longest blocking idle transaction sleeps for 0 seconds
[03:28:45] RECOVERY - DPKG on db1047 is OK: All packages OK
[04:02:11] RECOVERY - MySQL Replication Heartbeat on db1047 is OK: OK replication delay 0 seconds
[04:02:38] RECOVERY - MySQL Slave Delay on db1047 is OK: OK replication delay 1 seconds
[04:27:14] PROBLEM - Puppet freshness on search15 is CRITICAL: Puppet has not run in the last 10 hours
[04:53:22] PROBLEM - Disk space on search1022 is CRITICAL: DISK CRITICAL - free space: /a 3596 MB (3% inode=99%):
[04:55:46] PROBLEM - Disk space on search1021 is CRITICAL: DISK CRITICAL - free space: /a 3593 MB (3% inode=99%):
[05:10:28] PROBLEM - Disk space on search1021 is CRITICAL: DISK CRITICAL - free space: /a 4301 MB (3% inode=99%):
[05:47:34] PROBLEM - Puppet freshness on search6 is CRITICAL: Puppet has not run in the last 10 hours
[05:47:34] PROBLEM - Apache HTTP on srv278 is CRITICAL: Connection refused
[05:48:28] PROBLEM - Puppet freshness on search1016 is CRITICAL: Puppet has not run in the last 10 hours
[05:59:52] PROBLEM - Disk space on search1022 is CRITICAL: DISK CRITICAL - free space: /a 4297 MB (3% inode=99%):
[06:02:34] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[06:02:34] PROBLEM - Puppet freshness on search1006 is CRITICAL: Puppet has not run in the last 10 hours
[06:08:52] RECOVERY - Apache HTTP on srv278 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.045 second response time
[06:13:31] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[06:13:31] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[06:17:07] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 188 MB (2% inode=61%): /var/lib/ureadahead/debugfs 188 MB (2% inode=61%):
[06:21:28] RECOVERY - Disk space on srv223 is OK: DISK OK
[06:25:41] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[06:25:41] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[06:27:38] PROBLEM - Disk space on search1022 is CRITICAL: DISK CRITICAL - free space: /a 3585 MB (3% inode=99%):
[06:38:17] PROBLEM - Host emery is DOWN: CRITICAL - Host Unreachable (208.80.152.184)
[06:47:16] PROBLEM - Host kaulen is DOWN: CRITICAL - Host Unreachable (208.80.152.149)
[06:50:52] RECOVERY - Host kaulen is UP: PING OK - Packet loss = 0%, RTA = 0.79 ms
[07:00:01] RECOVERY - Host emery is UP: PING OK - Packet loss = 0%, RTA = 0.92 ms
[07:04:40] PROBLEM - SSH on emery is CRITICAL: Connection refused
[07:04:58] PROBLEM - DPKG on emery is CRITICAL: Connection refused by host
[07:05:07] PROBLEM - udp2log log age on emery is CRITICAL: Connection refused by host
[07:05:25] PROBLEM - udp2log processes on emery is CRITICAL: Connection refused by host
[07:05:25] PROBLEM - Disk space on emery is CRITICAL: Connection refused by host
[07:06:19] PROBLEM - RAID on emery is CRITICAL: Connection refused by host
[07:21:46] RECOVERY - udp2log log age on emery is OK: OK: all log files active
[07:22:04] RECOVERY - Disk space on emery is OK: DISK OK
[07:22:04] RECOVERY - udp2log processes on emery is OK: OK: all filters present
[07:23:31] RECOVERY - RAID on emery is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0
[07:23:58] RECOVERY - SSH on emery is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[07:24:16] RECOVERY - DPKG on emery is OK: All packages OK
[07:52:28] PROBLEM - Host cp1017 is DOWN: PING CRITICAL - Packet loss = 100%
[07:53:49] RECOVERY - Host cp1017 is UP: PING OK - Packet loss = 0%, RTA = 26.42 ms
[07:59:56] !log archived old server admin logs since the old page was too long for my connection to download :-/
[08:05:13] PROBLEM - MySQL Replication Heartbeat on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:05:31] PROBLEM - MySQL Slave Delay on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:07:55] PROBLEM - Full LVS Snapshot on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:07:55] PROBLEM - DPKG on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:08:40] PROBLEM - MySQL Idle Transactions on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:08:49] PROBLEM - MySQL Recent Restart on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:08:49] PROBLEM - RAID on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:09:43] PROBLEM - mysqld processes on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:09:52] PROBLEM - MySQL Slave Running on db1047 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:14:49] RECOVERY - MySQL Idle Transactions on db1047 is OK: OK longest blocking idle transaction sleeps for 0 seconds
[08:14:58] RECOVERY - MySQL Recent Restart on db1047 is OK: OK 293163 seconds since restart
[08:14:58] RECOVERY - RAID on db1047 is OK: OK: State is Optimal, checked 2 logical device(s)
[08:15:03] ok note folks that logging is broken right now, I'm looking into it
[08:15:34] RECOVERY - MySQL Replication Heartbeat on db1047 is OK: OK replication delay 0 seconds
[08:15:36] ok looks like it's back
[08:15:52] RECOVERY - MySQL Slave Delay on db1047 is OK: OK replication delay 0 seconds
[08:15:52] RECOVERY - mysqld processes on db1047 is OK: PROCS OK: 1 process with command name mysqld
[08:16:01] RECOVERY - MySQL Slave Running on db1047 is OK: OK replication Slave_IO_Running: Yes Slave_SQL_Running: Yes Last_Error:
[08:16:10] RECOVERY - Full LVS Snapshot on db1047 is OK: OK no full LVM snapshot volumes
[08:16:10] RECOVERY - DPKG on db1047 is OK: All packages OK
[08:41:14] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[08:43:11] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[09:10:11] PROBLEM - Puppet freshness on sq34 is CRITICAL: Puppet has not run in the last 10 hours
[09:32:39] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[09:32:39] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[09:37:00] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 199 MB (2% inode=61%): /var/lib/ureadahead/debugfs 199 MB (2% inode=61%):
[09:43:18] PROBLEM - Disk space on srv224 is CRITICAL: DISK CRITICAL - free space: / 215 MB (3% inode=62%): /var/lib/ureadahead/debugfs 215 MB (3% inode=62%):
[09:51:33] RECOVERY - Disk space on srv224 is OK: DISK OK
[09:51:42] RECOVERY - Disk space on srv223 is OK: DISK OK
[09:51:51] RECOVERY - Disk space on srv222 is OK: DISK OK
[09:51:51] RECOVERY - Disk space on srv219 is OK: DISK OK
[10:01:00] PROBLEM - Host cp1017 is DOWN: PING CRITICAL - Packet loss = 100%
[10:08:21] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[11:40:08] * schoolcraftT slaps sDrewth upside da head with a hairy goldfish
[11:41:35] fuck off todd
[11:54:27] do we know if *anyone* is using the external editor interface?
[11:55:03] the one for article text, i mean, not for uploaded files
[11:57:31] good luck finding out. mw is used by third parties who may not be subscribed to any list
[11:57:57] I tried using an external editor through ff at one point but it was too slow, painfully slow in fact, so I gave it up
[11:58:02] otherwise I would do it in a heartbeat
[12:05:21] PROBLEM - Auth DNS on ns0.wikimedia.org is CRITICAL: CRITICAL - Plugin timed out while executing system call
[12:07:18] RECOVERY - Auth DNS on ns0.wikimedia.org is OK: DNS OK: 0.036 seconds response time. www.wikipedia.org returns 208.80.154.225
[12:09:42] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 98 MB (1% inode=61%): /var/lib/ureadahead/debugfs 98 MB (1% inode=61%):
[12:09:42] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 252 MB (3% inode=61%): /var/lib/ureadahead/debugfs 252 MB (3% inode=61%):
[12:20:12] RECOVERY - Disk space on srv219 is OK: DISK OK
[12:26:39] RECOVERY - Disk space on srv223 is OK: DISK OK
[13:46:16] PROBLEM - BGP status on cr2-eqiad is CRITICAL: CRITICAL: host 208.80.154.197, sessions up: 10, down: 1, shutdown: 0; Peering with AS1257 not established - The + flag cannot be used with the sub-query features described below.
[14:02:09] PROBLEM - MySQL Replication Heartbeat on db16 is CRITICAL: CRIT replication delay 185 seconds
[14:02:27] PROBLEM - MySQL Slave Delay on db16 is CRITICAL: CRIT replication delay 187 seconds
[14:16:06] PROBLEM - swift-container-auditor on ms-be1 is CRITICAL: PROCS CRITICAL: 0 processes with regex args ^/usr/bin/python /usr/bin/swift-container-auditor
[14:29:09] PROBLEM - Puppet freshness on search15 is CRITICAL: Puppet has not run in the last 10 hours
[14:37:15] RECOVERY - swift-container-auditor on ms-be1 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-container-auditor
[14:40:40] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35516 - Add Skin: namespace to MW.org'
[14:40:41] Logged the message, Master
[14:57:30] RECOVERY - Puppet freshness on search1016 is OK: puppet ran at Tue Mar 27 14:57:17 UTC 2012
[14:58:24] RECOVERY - Puppet freshness on search1006 is OK: puppet ran at Tue Mar 27 14:58:03 UTC 2012
[15:11:53] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 32825 - Favicon for siwiki'
[15:11:54] Logged the message, Master
[15:19:02] RECOVERY - MySQL Slave Delay on db16 is OK: OK replication delay 28 seconds
[15:20:32] RECOVERY - MySQL Replication Heartbeat on db16 is OK: OK replication delay 21 seconds
[15:24:05] how do i do a link to pages like "Manual:What is MediaWiki?" from other projects? the "?" is causing problems...
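(Editor's note: the answer arrives in the next message. A minimal illustration, using a hypothetical target URL: in a raw URL an unencoded "?" starts the query string, so everything after it is dropped from the page title; percent-encoding it as %3F keeps it part of the title.

    broken:  https://www.mediawiki.org/wiki/Manual:What_is_MediaWiki?
    encoded: https://www.mediawiki.org/wiki/Manual:What_is_MediaWiki%3F

Ordinary interwiki wikilinks such as [[mw:Manual:What is MediaWiki?]] should not need this, since "?" is a legal character in page titles; only hand-built URLs do.)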
[15:25:27] dammit, must encode url
[15:32:50] RECOVERY - BGP status on cr2-eqiad is OK: OK: host 208.80.154.197, sessions up: 10, down: 0, shutdown: 1
[15:45:44] PROBLEM - BGP status on cr2-eqiad is CRITICAL: CRITICAL: host 208.80.154.197, sessions up: 10, down: 1, shutdown: 0; Peering with AS1257 not established - The + flag cannot be used with the sub-query features described below.
[15:47:19] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35161 - Incubator configuration updates'
[15:47:21] Logged the message, Master
[15:47:50] RECOVERY - BGP status on cr2-eqiad is OK: OK: host 208.80.154.197, sessions up: 11, down: 0, shutdown: 0
[15:49:29] PROBLEM - Puppet freshness on search6 is CRITICAL: Puppet has not run in the last 10 hours
[16:04:29] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[16:06:54] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34527 - Create a Arbcom namespace on Russian Wikipedia'
[16:06:56] Logged the message, Master
[16:08:47] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34527 - Create a Arbcom namespace on Russian Wikipedia'
[16:08:49] Logged the message, Master
[16:15:26] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[16:15:26] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[16:26:53] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[16:26:53] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[16:32:36] !log reedy synchronized wmf-config/InitialiseSettings.php 'prep work for new wikis'
[16:32:37] Logged the message, Master
[16:48:58] !log reedy ran sync-common-all
[16:49:00] Logged the message, Master
[16:56:06] !log reedy synchronized wmf-config/InitialiseSettings.php 'Config for lezwiki'
[16:56:08] Logged the message, Master
[16:58:51] !log reedy synchronized wmf-config/InitialiseSettings.php 'Config for lezwiki'
[16:58:53] Logged the message, Master
[17:00:19] !log reedy synchronized wmf-config/InitialiseSettings.php 'Config for lezwiki'
[17:00:21] Logged the message, Master
[17:10:41] RECOVERY - Puppet freshness on search15 is OK: puppet ran at Tue Mar 27 17:10:11 UTC 2012
[17:15:38] RECOVERY - Puppet freshness on search6 is OK: puppet ran at Tue Mar 27 17:15:37 UTC 2012
[17:28:23] PROBLEM - Disk space on srv220 is CRITICAL: DISK CRITICAL - free space: / 279 MB (3% inode=61%): /var/lib/ureadahead/debugfs 279 MB (3% inode=61%):
[17:36:23] does anyone here know if the XFF project is still alive?
[17:36:47] RECOVERY - Disk space on srv220 is OK: DISK OK
[18:05:04] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35290 - Create Slovenian Wikiversity'
[18:05:06] Logged the message, Master
[18:05:43] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35290 - Create Slovenian Wikiversity'
[18:05:45] Logged the message, Master
[18:10:48] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34351 - Create Wikisource in Belarusian'
[18:10:50] Logged the message, Master
[18:14:41] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34351 - Create Wikisource in Belarusian'
[18:14:43] Logged the message, Master
[18:16:41] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 34351 - Create Wikisource in Belarusian'
[18:16:43] Logged the message, Master
[18:22:14] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35138 - Create Gujarati Wikisource'
[18:22:15] Logged the message, Master
[18:26:01] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35138 - Create Gujarati Wikisource'
[18:26:03] Logged the message, Master
[18:39:24] Reedy: Could you double-check the creation of lez.wikipedia? When I autocreated my account earlier, no notification was pushed to irc.wikimedia.org/#central, whereas the autocreation on the other 3 wikis did appear there
[18:39:48] maybe a non-issue, but just in case :)
[18:55:15] Krinkle-away: I've no idea what you would check for that
[18:55:38] Your account is listed in the logs...
[18:57:23] I'm not overly worried
[19:00:32] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 35138 - Create Gujarati Wikisource'
[19:00:34] Logged the message, Master
[19:06:17] PROBLEM - Packetloss_Average on locke is CRITICAL: CRITICAL: packet_loss_average is 10.4674729464 (gt 8.0)
[19:11:23] !log reedy synchronized wmf-config/InitialiseSettings.php 'Remove ruwiki arbcom talk from namespaceprotection'
[19:11:23] Logged the message, Master
[19:12:08] PROBLEM - Puppet freshness on sq34 is CRITICAL: Puppet has not run in the last 10 hours
[19:12:44] RECOVERY - Packetloss_Average on locke is OK: OK: packet_loss_average is 2.00242292035
[19:20:01] !log reedy synchronized wmf-config/InitialiseSettings.php 'Fix lezwiki namespace'
[19:20:02] Logged the message, Master
[19:23:15] Reedy: ok, np
[19:27:59] bye bye nagios notifications :-]
[19:28:17] i would say we're sad to see you go but i doubt many people are
[19:28:33] we will now be able to use this channel again
[19:41:54] hashar: We should just put all the Nagios bots in #wikimedia-nagios. I think we have at least three (WMF, Toolserver & labs)
[19:42:45] multichill|2: will have to ask operations team
[19:42:54] they seem to like having their bot in #wikimedia-operations
[19:44:52] I don't enjoy it, I just thought that it is more obvious
[19:45:15] anyway, that is something you can easily change
[19:45:24] yep
[20:07:18] multichill: Not _another_ channel please
[20:07:44] Divide and conquer!
[20:07:49] besides I think it makes more sense to put bots in channels based on audience rather than type of bot
[20:08:06] who would want to be notified of things in all 3? I don't know anyone who is in ops of all three
[20:08:16] well, I know a few in both labs and wmf-main, but still..
[20:19:56] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 33789 - Enable botadmin usergroup on ml.wikipedia'
[20:19:57] Logged the message, Master
[20:48:19] can I ask the git question of the day?
[20:48:19] just did git commit --amend -a
[20:48:19] now running git-review
[20:48:19] I get: error refusing to lose untracked file at 'Makefile'
[20:48:19] how do I fix this?
(see the editor's note at the end of this log for one possible fix)
[21:16:26] woosters: https://www.mediawiki.org/w/index.php?title=Talk:Wikimedia_Engineering/2012-13_Goals&curid=83973&diff=516486&oldid=516303
[21:17:33] LeslieCarr: mark: https://www.mediawiki.org/wiki/Talk:Wikimedia_Engineering/2012-13_Goals -- is IPv6 something to add to ops's goals for the next fiscal year?
[21:18:57] i'd say sure, have ipv6 enabled for whitelisted resolvers as a next FY goal
[21:20:45] LeslieCarr: I shouldn't be the one to add it to that page.... also I figure if it's a goal it should have people/time allotted for it
[21:21:03] hah, people and time…
[21:21:17] that's a good one ;)
[21:21:35] this is the hard work of management, right? engineering tradeoffs. what is actually something worth prioritizing and what do we have to leave till the next year
[21:22:35] yeah
[21:22:38] and $
[21:22:53] i really want more caching centers for lower latency but each one is $
[21:23:12] anyone here got a million sitting around to help run one? :)
[21:24:00] LeslieCarr: should we do it like a university? "you get to name the caching center after yourself" (kidding)
[21:24:06] hehehe
[21:24:39] i'd let them make up the short name that we call it :) their name could be in dns records of random servers for all time!
[21:24:51] haha
[21:24:58] it's the least exciting sponsorship naming right ever :(
[21:25:57] Moar ipv6! ;-)
[22:04:19] [[Tech]]; 24.13.55.67; /* tazxcvbnmklop090sasdefvbnmkhyswdfwhsz.lf'jkpohgjkbgvflkjdhn.lidfjgjhoijgrkjbnfiksjhcjdsdocd cspc[]cddcmvfvd */ new section; https://meta.wikimedia.org/w/index.php?diff=3598831&oldid=3596534&rcid=3203184
[22:04:27] [[Tech]]; Mathonius; Reverted changes by [[Special:Contributions/24.13.55.67|24.13.55.67]] ([[User talk:24.13.55.67|talk]]) to last version by MZMcBride; https://meta.wikimedia.org/w/index.php?diff=3598833&oldid=3598831&rcid=3203185
[22:44:02] Lesbian Wikipedia
[22:44:42] very much
[22:44:54] No more nagios in here? Thank the good Lord.
[22:48:24] <^demon> I'm not getting enough nagios warnings :(
[22:48:41] join wikimedia-operations and you'll get all the ones you missed
[22:48:59] <^demon> I'm there. I'm just so used to seeing them twice that I feel like I'm missing out
[23:12:36] !log tstarling synchronized php-1.19/cache/trusted-xff.cdb
[23:12:38] Logged the message, Master
[23:19:13] good night folks
[23:19:28] good night
[23:20:14] hey AaronSchulz we've got a swift cluster in labs!!
[23:20:41] sounds good
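(Editor's note: the git question at 20:48 went unanswered in the log. One common cause of "refusing to lose untracked file" is that git-review rebases the change onto the remote branch, and that rebase would overwrite an untracked file in the working tree. A minimal sketch of one way out, assuming the stray Makefile is genuinely untracked and not meant to be committed:

    # set the untracked Makefile (and any other local changes) aside,
    # retry the submission, then restore the stashed files
    git stash --include-untracked
    git-review
    git stash pop

If the Makefile is disposable, deleting it before rerunning git-review clears the error as well.)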