[00:23:05] New review: Reedy; "Patch Set 1: Code-Review+2" [operations/mediawiki-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49489 [00:23:30] PROBLEM - Host mw1085 is DOWN: PING CRITICAL - Packet loss = 100% [00:23:46] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49489 [00:25:27] RECOVERY - Host mw1085 is UP: PING OK - Packet loss = 0%, RTA = 26.49 ms [01:36:11] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [01:37:32] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [02:07:59] PROBLEM - Puppet freshness on srv246 is CRITICAL: Puppet has not run in the last 10 hours [02:08:26] RECOVERY - MySQL disk space on neon is OK: DISK OK [02:08:35] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [02:29:48] !log LocalisationUpdate completed (1.21wmf9) at Mon Feb 18 02:29:47 UTC 2013 [02:29:53] Logged the message, Master [03:38:54] New patchset: Tim Starling; "Disable the LuaSandbox profiler" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49611 [03:39:46] New review: Tim Starling; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49611 [03:39:47] Change merged: Tim Starling; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49611 [03:40:45] !log tstarling synchronized wmf-config/CommonSettings.php [03:40:50] Logged the message, Master [04:09:35] PROBLEM - MySQL Slave Running on db1047 is CRITICAL: CRIT replication Slave_IO_Running: Yes Slave_SQL_Running: No Last_Error: Error BIGINT UNSIGNED value is out of range in (enwiki.article_f [04:09:35] PROBLEM - MySQL Slave Running on db1043 is CRITICAL: CRIT replication Slave_IO_Running: Yes Slave_SQL_Running: No Last_Error: Error BIGINT UNSIGNED value is out of range in (enwiki.article_f [04:09:44] PROBLEM - MySQL Slave Running on db59 is CRITICAL: CRIT replication Slave_IO_Running: Yes Slave_SQL_Running: No Last_Error: Error BIGINT UNSIGNED value is out of range in (enwiki.article_f [04:16:02] PROBLEM - Puppet freshness on db62 is CRITICAL: Puppet has not run in the last 10 hours [04:28:47] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [04:30:08] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [04:40:02] PROBLEM - Puppet freshness on mc1006 is CRITICAL: Puppet has not run in the last 10 hours [04:42:08] PROBLEM - Puppet freshness on snapshot4 is CRITICAL: Puppet has not run in the last 10 hours [04:47:05] PROBLEM - Puppet freshness on amslvs1 is CRITICAL: Puppet has not run in the last 10 hours [04:55:02] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours [05:12:08] PROBLEM - Puppet freshness on labstore2 is CRITICAL: Puppet has not run in the last 10 hours [05:12:44] RECOVERY - Puppet freshness on palladium is OK: puppet ran at Mon Feb 18 05:12:36 UTC 2013 [05:14:41] RECOVERY - Puppet freshness on sq85 is OK: puppet ran at Mon Feb 18 05:14:35 UTC 2013 [05:15:19] New patchset: Tim Starling; "Take db1043 out of rotation due to lag" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49613 [05:16:13] New review: Tim Starling; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49613 [05:16:14] Change merged: Tim Starling; [operations/mediawiki-config] (master) - 
https://gerrit.wikimedia.org/r/49613 [05:16:47] RECOVERY - Puppet freshness on db1026 is OK: puppet ran at Mon Feb 18 05:16:40 UTC 2013 [05:16:50] !log tstarling synchronized wmf-config/db-eqiad.php [05:16:52] Logged the message, Master [05:23:32] RECOVERY - Puppet freshness on kaulen is OK: puppet ran at Mon Feb 18 05:23:17 UTC 2013 [05:24:08] PROBLEM - Puppet freshness on sq41 is CRITICAL: Puppet has not run in the last 10 hours [05:27:08] RECOVERY - MySQL Slave Running on db1043 is OK: OK replication Slave_IO_Running: Yes Slave_SQL_Running: Yes Last_Error: [05:28:11] RECOVERY - MySQL disk space on neon is OK: DISK OK [05:28:38] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [05:29:14] RECOVERY - Puppet freshness on mc1003 is OK: puppet ran at Mon Feb 18 05:28:42 UTC 2013 [05:30:17] PROBLEM - MySQL Slave Delay on db1043 is CRITICAL: CRIT replication delay 3192 seconds [05:31:11] RECOVERY - Puppet freshness on knsq23 is OK: puppet ran at Mon Feb 18 05:30:55 UTC 2013 [05:38:53] RECOVERY - MySQL Slave Delay on db1043 is OK: OK replication delay 0 seconds [05:44:11] New patchset: Tim Starling; "Revert "Take db1043 out of rotation due to lag"" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49616 [05:45:10] New review: Tim Starling; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49616 [05:45:11] Change merged: Tim Starling; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49616 [05:45:43] !log tstarling synchronized wmf-config/db-eqiad.php [05:45:44] Logged the message, Master [06:07:14] PROBLEM - Puppet freshness on virt0 is CRITICAL: Puppet has not run in the last 10 hours [06:14:53] PROBLEM - LVS Lucene on search-pool4.svc.eqiad.wmnet is CRITICAL: Connection timed out [06:16:23] RECOVERY - LVS Lucene on search-pool4.svc.eqiad.wmnet is OK: TCP OK - 0.028 second response time on port 8123 [06:57:55] PROBLEM - SSH on amslvs1 is CRITICAL: Server answer: [07:01:20] PROBLEM - LVS Lucene on search-pool4.svc.eqiad.wmnet is CRITICAL: Connection timed out [07:03:34] apergos, ^^ [07:03:47] I see it [07:04:19] search in pool 4 doesn't work [07:04:38] RECOVERY - LVS Lucene on search-pool4.svc.eqiad.wmnet is OK: TCP OK - 0.027 second response time on port 8123 [07:05:14] RECOVERY - SSH on amslvs1 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1 (protocol 2.0) [07:14:32] still broken? 
[07:15:20] (I tried a randm search on a couple wikis that should be in that pool, seem ok) [07:17:14] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [07:17:41] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [07:40:02] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [07:41:41] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.110 seconds [07:48:08] RECOVERY - MySQL disk space on neon is OK: DISK OK [07:48:35] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [08:11:41] PROBLEM - Puppet freshness on knsq28 is CRITICAL: Puppet has not run in the last 10 hours [08:33:26] PROBLEM - LVS Lucene on search-pool4.svc.eqiad.wmnet is CRITICAL: Connection timed out [08:35:59] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [08:37:38] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.047 seconds [08:38:59] RECOVERY - LVS Lucene on search-pool4.svc.eqiad.wmnet is OK: TCP OK - 9.023 second response time on port 8123 [09:25:14] New review: Silke Meyer; "Patch Set 4: Code-Review+1" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/48979 [09:33:02] New patchset: Platonides; "(bug 29079) Add a simple change to puppet" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49621 [09:41:42] Platonides: I don't think ops have that kind of humor :-} [09:43:39] New review: Nikerabbit; "Patch Set 1: Code-Review-1" [operations/puppet] (production) C: -1; - https://gerrit.wikimedia.org/r/49621 [09:47:36] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 193 seconds [09:48:12] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 200 seconds [09:54:57] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds [09:55:24] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds [10:13:15] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [10:13:42] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [10:23:30] New review: Dzahn; "Patch Set 1: Code-Review+2" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/49220 [10:23:50] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49220 [10:27:48] New review: Dzahn; "Patch Set 1: Code-Review-2" [operations/puppet] (production) C: -2; - https://gerrit.wikimedia.org/r/49621 [10:44:55] RECOVERY - MySQL disk space on neon is OK: DISK OK [10:45:04] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [11:15:59] New patchset: Hashar; "beta: memcached on the two apaches boxes" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49261 [11:17:00] New review: Hashar; "Patch Set 2: Code-Review+2" [operations/mediawiki-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49261 [11:17:13] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49261 [11:35:28] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 212 seconds [11:35:55] PROBLEM - MySQL Slave Delay on db32 is CRITICAL: CRIT replication delay 230 seconds [11:39:37] RECOVERY - MySQL Slave Delay on db32 is OK: OK replication delay 4 seconds [11:40:58] RECOVERY - MySQL 
Replication Heartbeat on db32 is OK: OK replication delay 0 seconds [11:55:09] uff, no hashar [11:55:21] do we have an equivalent of https://bugzilla.wikimedia.org/show_bug.cgi?id=36994 for the main cluster? [11:55:53] TimStarling "complained" that his I/O graphs have ben removed, not long ago :) [12:05:39] New review: Dzahn; "Patch Set 1: Code-Review+2" [operations/apache-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49197 [12:07:39] New review: Dzahn; "Patch Set 1: Verified+2" [operations/apache-config] (master); V: 2 - https://gerrit.wikimedia.org/r/49197 [12:07:39] Change merged: Dzahn; [operations/apache-config] (master) - https://gerrit.wikimedia.org/r/49197 [12:09:01] PROBLEM - Puppet freshness on srv246 is CRITICAL: Puppet has not run in the last 10 hours [12:13:48] !log mw1041 - down - CPU Machine Chk error on mgmt - removing from dsh groups (RT-4517) [12:13:50] Logged the message, Master [12:22:46] !log mw1045 - eth0 NO-CARRIER - check cable (RT-4545) [12:22:47] Logged the message, Master [12:24:42] !log gracefulling eqiad Apaches via dsh for redirect.conf change [12:24:43] Logged the message, Master [12:27:45] dzahn is doing a graceful restart of all apaches [12:28:25] !log dzahn gracefulled all apaches [12:28:26] Logged the message, Master [12:30:06] !log adding wikiartpedia domains to DNS / authdns-update [12:30:08] Logged the message, Master [12:32:07] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [12:33:10] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [12:33:49] !log wikiartpedia.(biz|co|info|me|mobi|net) activated - redirect to wikipedia.org (RT-4240) [12:33:50] Logged the message, Master [12:34:04] PROBLEM - MySQL Slave Delay on db32 is CRITICAL: CRIT replication delay 181 seconds [12:34:58] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 185 seconds [12:36:46] RECOVERY - MySQL Replication Heartbeat on db32 is OK: OK replication delay 0 seconds [12:37:40] RECOVERY - MySQL Slave Delay on db32 is OK: OK replication delay 0 seconds [12:48:20] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [12:49:22] RECOVERY - MySQL disk space on neon is OK: DISK OK [13:23:43] RECOVERY - Puppet freshness on mc1006 is OK: puppet ran at Mon Feb 18 13:23:12 UTC 2013 [13:27:16] RECOVERY - Puppet freshness on snapshot4 is OK: puppet ran at Mon Feb 18 13:27:03 UTC 2013 [13:31:40] RECOVERY - Puppet freshness on amslvs1 is OK: puppet ran at Mon Feb 18 13:31:17 UTC 2013 [13:32:07] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 191 seconds [13:32:25] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 201 seconds [13:44:43] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 181 seconds [13:45:02] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 184 seconds [13:58:10] New patchset: Hashar; "extract squid redirector to its own class" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49652 [13:59:12] New patchset: Hashar; "extract squid redirector to its own class" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49652 [13:59:58] lo mark :-] [14:00:05] got a squid change for you :-} [14:00:14] PROBLEM - LVS Lucene on search-pool4.svc.eqiad.wmnet is CRITICAL: Connection timed out [14:01:41] RECOVERY - LVS Lucene on search-pool4.svc.eqiad.wmnet is OK: TCP OK - 0.027 second response time on port 8123 [14:03:22] ok 
[14:03:52] mark: is it ok to make public the list of mobile user agent we use to redirect? [14:04:02] just wondering [14:04:06] the beta squid does not use puppet [14:04:12] New review: Mark Bergsma; "Patch Set 2: Code-Review+2" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/49652 [14:04:25] i wouldn't know why not [14:04:27] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49652 [14:04:27] well it does use puppet but not to maintain the squid conf :-] [14:11:36] New review: Mark Bergsma; "Patch Set 5: Code-Review+2" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/47067 [14:11:52] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47067 [14:13:14] \O/ [14:15:19] time for patchset 25? ;-) [14:16:50] PROBLEM - Puppet freshness on db62 is CRITICAL: Puppet has not run in the last 10 hours [14:18:18] morrrrrrrning! [14:19:22] meester mark, do you have a sec to comment on my hdfs group ownership question? [14:21:10] yes :) [14:22:50] Ha, thanks :) [14:23:04] one follow up q [14:23:35] should I add us real people to the stats group via puppet on analytics1010, or just do it manually since we will wipe and repuppetize all this soon anyway? [14:24:13] if you can do it via puppet easily then I don't see why not [14:24:24] mk, great, yeah should be easy [14:24:39] i'll just do it in the role::analytics class then and make it happen on all the nodes there [14:25:22] New patchset: Hashar; "beta: disable wikidata on the beta cluster" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49656 [14:27:44] New review: Hashar; "Patch Set 1: Code-Review+2" [operations/mediawiki-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49656 [14:27:58] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49656 [14:29:40] pulling that in prod [14:34:29] New patchset: Hashar; "beta: actually set wmgUseWikibaseClient to false" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49657 [14:37:13] New review: Hashar; "Patch Set 1: Code-Review+2" [operations/mediawiki-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49657 [14:37:17] yeah I know [14:37:18] typo [14:37:26] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49657 [14:46:53] New patchset: Ottomata; "Adding Analytics team members to 'stats' group." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49663 [14:47:49] mark, this should do just what we talked about, wouldn't mind just merging it but perhaps you should review it real quick? [14:48:49] New patchset: Ottomata; "Adding Analytics team members to 'stats' group." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49663 [14:55:29] New review: Mark Bergsma; "Patch Set 2: Code-Review+1" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/49663 [14:55:50] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours [14:57:03] danke [14:57:16] New review: Ottomata; "Patch Set 2: Verified+2 Code-Review+2" [operations/puppet] (production); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49663 [14:57:30] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49663 [15:01:28] New patchset: Ottomata; "Removing include of ldap and nss on analytics nodes." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/49665 [15:10:31] New patchset: Hashar; "adapt squid redirector for the beta context" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:11:56] mark: turns out the squid redirector conf has to be adapted for beta as well :-D https://gerrit.wikimedia.org/r/#/c/49667/ [15:12:03] mark: .org --> beta.wmflabs.org [15:12:07] that is an endless task [15:12:50] added you as a revivewer [15:13:24] PROBLEM - Puppet freshness on labstore2 is CRITICAL: Puppet has not run in the last 10 hours [15:15:42] must not have a beginning trailing dot [15:15:48] and you give it two beginning dots? :) [15:17:36] New review: Mark Bergsma; "Patch Set 1:" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:19:07] New review: Hashar; "Patch Set 6:" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47742 [15:20:28] New patchset: Hashar; "contint::website regroups apache + basic files" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47742 [15:20:51] New review: Ottomata; "Patch Set 1: Verified+2 Code-Review+2" [operations/puppet] (production); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49665 [15:20:57] Change merged: Ottomata; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49665 [15:21:31] mark: ggrrr [15:21:39] mark: I changed my mind while writing the code doh [15:25:24] PROBLEM - Puppet freshness on sq41 is CRITICAL: Puppet has not run in the last 10 hours [15:25:56] New review: Hashar; "Patch Set 1:" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:26:15] New patchset: Hashar; "adapt squid redirector for the beta context" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:26:34] mark: updated the redirector change https://gerrit.wikimedia.org/r/49667 :-D [15:26:39] btw, I love your reply on ops list [15:26:43] [15:26:46] [15:26:51] mark: "yes" [15:27:07] that is a great way to approve things :-] [15:27:39] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [15:27:53] New review: Mark Bergsma; "Patch Set 2: Code-Review+2" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/49667 [15:28:03] yes [15:28:03] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:28:17] New review: Mark Bergsma; "Patch Set 2:" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49667 [15:29:18] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.035 seconds [15:29:22] ROFL [15:29:45] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds [15:30:12] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds [15:31:27] mark: seems to redirect properly. Thanks!!! 
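The "Adding Analytics team members to 'stats' group." change merged above (gerrit 49663) follows a common pattern: manage the supplementary group membership from the role class, as ottomata and mark agreed. A minimal, generic sketch is below — the class name, user names and resource layout are placeholders, not the actual role::analytics code in operations/puppet:

```puppet
# Hypothetical sketch only; 'alice' and 'bob' stand in for the real accounts.
class role::analytics::stats_members {
  group { 'stats':
    ensure => present,
  }

  # membership => minimum appends the listed group(s) without removing
  # any other groups the accounts already belong to.
  user { ['alice', 'bob']:
    ensure     => present,
    groups     => ['stats'],
    membership => minimum,
    require    => Group['stats'],
  }
}
```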
[15:41:18] err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at ')' at /etc/puppet/manifests/misc/fundraising.pp:207 on node i-0000034b.pmtpa.wmflabs [15:41:18] sniff [15:59:07] lucid issue :-D [15:59:10] wave [15:59:12] see you later [16:04:07] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds [16:08:27] PROBLEM - Puppet freshness on virt0 is CRITICAL: Puppet has not run in the last 10 hours [16:09:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 4.437 seconds [16:22:27] LeslieCarr: mutante: want to comment on RT 2675 when you have a spare minute? [16:28:00] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 183 seconds [16:28:27] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 195 seconds [16:29:48] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 7 seconds [16:30:15] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 10 seconds [16:30:51] New patchset: Ottomata; "Rsyncing slow-parse logs from fluorine to dumps.wikimedia.org." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49678 [16:35:12] PROBLEM - MySQL Slave Delay on db32 is CRITICAL: CRIT replication delay 183 seconds [16:37:00] RECOVERY - MySQL Slave Delay on db32 is OK: OK replication delay 1 seconds [16:47:45] New patchset: Alex Monk; "(bug 45113) Set cswiktionary favicon to the same as enwiktionary" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49681 [16:48:15] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [16:48:42] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [16:51:24] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 191 seconds [16:51:42] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 199 seconds [16:52:27] New patchset: Alex Monk; "(bug 45124) Allow wikidatawiki sysops to add/remove confirmed status" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49682 [17:00:46] New patchset: Reedy; "Add new symlinks" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49684 [17:01:06] New review: Reedy; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49684 [17:01:07] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49684 [17:01:36] !log reedy synchronized live-1.5/ [17:01:39] Logged the message, Master [17:02:09] !log reedy synchronized docroot [17:02:10] Logged the message, Master [17:02:12] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [17:02:30] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [17:03:29] !log reedy synchronized wmf-config/ [17:03:31] Logged the message, Master [17:19:36] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [17:20:51] New patchset: Alex Monk; "(bug 44587) Trwiki FlaggedRevs autopromotion config" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49685 [17:20:57] RECOVERY - MySQL disk space on neon is OK: DISK OK [17:23:57] RECOVERY - Puppet freshness on knsq28 is OK: puppet ran at Mon Feb 18 17:23:41 UTC 2013 [17:26:56] matanya: hi [17:27:08] matanya: you're talking about the nagios 
check?
[17:27:32] yes, I did
[17:28:06] !log reedy synchronized php-1.21wmf10 'Initial sync out'
[17:28:07] Logged the message, Master
[17:28:35] matanya: i looked at it a little. seemed to mostly be a copy of code from some other place. where is it being copied from?
[17:29:04] the it is listed in the code itself
[17:29:20] and also stated to be on free license
[17:29:47] I use it in our own prod servers for quite a long time
[17:30:12] ok, but we need to know where it came from. so we can e.g. check to see if there's a newer version at that place
[17:30:19] * jeremyb_ opens the changeset again
[17:32:27] jeremyb_: check linux stats is from : http://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/check_linux_stats/details
[17:33:14] check openmanage is from : check_openmanage
[17:33:26] *folk.uio.no/trondham/software/check_openmanage.html
[17:33:55] I wrote the other two
[17:42:19] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: test2wiki to 1.21wmf10
[17:42:20] Logged the message, Master
[17:44:48] matanya: so where is the canonical copy for the ones you wrote? will gerrit be that copy?
[17:44:58] yes
[17:45:43] this is gerrit 47514 i see
[17:46:04] matanya: there's some trailing whitespace to be fixed
[17:46:04] I just added https://linux.dell.com/repo/community/deb/OMSA_7.1/ into a script there, after checking it works
[17:46:10] (i.e. the stuff in red)
[17:46:15] yes, I see
[17:49:07] jeremyb_: I guess it can be avoided by downloading right from the source
[17:49:18] or using the deb, if ops prefer
[17:49:20] check-openmanage_3.7.9-1_all.deb
[17:49:22] sure
[17:50:06] so modifying the script to install all might be a better approach
[17:50:13] anyway, it's definitely a LeslieCarr thing
[17:50:26] matanya: was there a particular place we wanted to use these checks?
[17:50:43] on our servers :)
[17:51:24] some ops were complaining nagios/icinga sucks and need more hardware checks, so I handled it
[17:53:37] !log reedy Started syncing Wikimedia installation... : Build 1.21wmf10 message cache
[17:53:38] Logged the message, Master
[17:54:17] !log reedy Finished syncing Wikimedia installation... : Build 1.21wmf10 message cache
[17:54:19] Logged the message, Master
[17:55:59] ok, i'm out jeremyb_
[17:56:14] thank you, I'll try to get LeslieCarr look at it :)
[17:57:49] New patchset: Pyoungmeister; "giving yuvi panda access to stat1" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49087
[17:58:45] New review: Pyoungmeister; "Patch Set 2: Code-Review+2" [operations/puppet] (production) C: 2; - https://gerrit.wikimedia.org/r/49087
[17:58:55] Change merged: Pyoungmeister; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49087
[18:02:06] hmm, puppet q for someone (mark maybe?)
[18:02:16] i'm writing a define that has a bunch of default parameters
[18:02:30] I want some of the defaults to be configurable, but to take defaults from the main module class
[18:02:35] (this is for puppetization of limn)
[18:02:36] so
[18:03:05] is there a way to use predefined defaults from main required class scope in a define?
[18:03:07] e.g.
[18:03:08] $data_directory = "${limn::data_directory}/${name}",
[18:03:31] so far, it looks like not.
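The question ottomata is raising here — define parameters whose defaults come from the main module class — and the workaround ori-l suggests a few lines further down (default the parameter to undef, then fall back to the class value) can be sketched as follows. This is generic example code, not the actual Limn module; the class, paths and resource are made up. Because Puppet variables are immutable, as jeremyb_ points out below, the resolved value needs its own variable name:

```puppet
# Generic sketch of "define defaults taken from the module class".
class limn (
  $data_directory = '/var/lib/limn',
) {
  # module-wide defaults live here
}

define limn::instance (
  $data_directory = undef,
) {
  include limn

  # Variables cannot be reassigned, so the fallback gets a new name.
  if $data_directory {
    $real_data_directory = $data_directory
  } else {
    $real_data_directory = "${limn::data_directory}/${name}"
  }

  file { $real_data_directory:
    ensure => directory,
  }
}
```

With that in place, limn::instance { 'reportcard': } would use /var/lib/limn/reportcard, while a caller can still pass data_directory explicitly to override the module default.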
[18:03:41] ${limn::data_directory} is rendered as undef in my template
[18:06:10] PROBLEM - Puppet freshness on sq80 is CRITICAL: Puppet has not run in the last 10 hours
[18:10:49] New review: Ori.livneh; "Patch Set 1: Code-Review+1" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/49678
[18:23:50] ottomata: I don't know puppet all that well, but perhaps you can leave the default values in the function signature as 'undef', and then use an if () to replace them with main module class's values
[18:25:12] ottomata: something like this maybe: http://projects.puppetlabs.com/issues/6621#note-3
[18:25:14] ottomata: ori-l: puppet vars are immutable btw. once a value is set then it's that value forever
[18:25:29] ya, i've done that before, just so annoying
[18:25:30] right
[18:25:32] jeremyb_: well, you can use a different name
[18:25:35] i've done that by using a diff name
[18:25:39] just so inelegant
[18:25:46] i think i'm just going to hardcode the default strings into the parameter default
[18:26:06] in this case at least, if someone wants to change them its not that hard to change in both places
[18:26:09] have you looked at all at hiera?
[18:26:09] its a simple module
[18:26:10] i think so too, but my hunch is that you might be able to get what you want using a different compositional paradigm
[18:26:15] barely, it looks cool
[18:26:17] you could use a yaml backend
[18:26:30] but i have no specific suggestions, so this is a mostly useless hunch :)
[18:27:50] hiera looks cool fo sho, but i'm not going to try to get ops to start using that for limn puppetization :p
[18:28:49] ergh, i need fooOOood. gonna grab some, be back on in 30 mins or so
[18:31:22] * jeremyb_ just ate :)
[18:36:10] PROBLEM - Puppet freshness on knsq26 is CRITICAL: Puppet has not run in the last 10 hours
[18:36:28] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 183 seconds
[18:37:13] PROBLEM - Puppet freshness on sq79 is CRITICAL: Puppet has not run in the last 10 hours
[18:38:16] RECOVERY - MySQL Replication Heartbeat on db32 is OK: OK replication delay 0 seconds
[18:38:16] PROBLEM - Puppet freshness on amssq43 is CRITICAL: Puppet has not run in the last 10 hours
[18:41:17] 2 scaps this time
[18:42:56] Reedy: first as tragedy, then as farce
[18:43:31] disconnected ssh agent just went lolno
[18:43:47] but it had the desired effect, cache rebuilt at least
[18:43:49] just needs to push
[18:45:44] !log reedy Started syncing Wikimedia installation... : Build 1.21wmf10 message cache, and push it this time
[18:45:46] Logged the message, Master
[18:52:23] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 196 seconds
[18:52:32] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 199 seconds
[18:57:57] New patchset: Hashar; "beta: basic role to get mysql packages installed" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49703
[19:03:45] !log reedy Finished syncing Wikimedia installation...
: Build 1.21wmf10 message cache, and push it this time [19:03:47] Logged the message, Master [19:11:07] New patchset: Legoktm; "(bug 45083) Enable AbuseFilter IRC notifications on Wikidata" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49704 [19:15:56] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [19:16:05] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [19:19:23] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: testwiki and mediawikiwiki to 1.21wmf10 [19:19:25] Logged the message, Master [19:20:53] New review: Vogone; "Patch Set 1: Code-Review+1" [operations/mediawiki-config] (master) C: 1; - https://gerrit.wikimedia.org/r/49682 [19:25:13] [27ec6071] 2013-02-18 19:24:55: Fatal exception of type MWException [19:25:26] On mw.org getting diffs. [19:28:48] Coren: Yeah [19:30:48] book where are all our ops! :-D [19:32:07] apergos: [19:32:08] snapshot1002: rsync: write failed on "/apache/common-local/php-1.21wmf10/cache/l10n/l10n_cache-ace.cdb": No space left on device (28) [19:34:09] oh noes [19:34:14] New patchset: Legoktm; "(bug 45083) Enable AbuseFilter IRC notifications on Wikidata" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49704 [19:34:18] Oh [19:34:23] Hmm [19:34:30] Did I kill the wmf8 localisation cache? [19:34:40] * Reedy waits for his terminal to return [19:35:09] do we need php-1.21wmf6/ ? [19:35:37] I see it ovr there (snapshot1002) [19:37:43] Reedy: [19:38:00] Not really. But I'm just killing the wmf8 l10n cache now [19:38:05] Should give more space back [19:38:28] /dev/sda1 9.9G 8.1G 1.4G 86% / [19:38:58] ok [19:39:23] great [19:44:26] !log reedy synchronized php-1.21wmf10/cache/l10n/ [19:44:28] Logged the message, Master [19:48:35] so whatever the nice sync script is that creates common-local, could you run that for mw1165? [19:48:44] then it will need a fulll sync... [19:49:27] Needs running as root initially [19:49:35] on mw1185 /apache links to /usr/local/apache [19:49:49] Running sync-common as root should be enough on mw1185 [19:49:51] I really have no idea where stuff is supposed to be now [19:50:02] Oh [19:50:02] ok lemme trry that [19:50:07] You fixed the permissions [19:50:16] reedy@mw1185:~$ sync-common [19:50:17] Copying to mw1185 from 10.0.5.8... [19:50:35] I did? [19:50:36] I did not [19:51:20] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 208 seconds [19:51:29] PROBLEM - Packetloss_Average on oxygen is CRITICAL: CRITICAL: packet_loss_average is 8.53024395349 (gt 8.0) [19:51:38] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 216 seconds [19:51:45] ok lemme know what I need to run on 1165 [19:52:25] apergos - getting paged [19:52:28] oh [19:52:37] yeah I just saw it [19:52:44] I misread 85 as 65 [19:52:52] reedy@mw1165:~$ sync-common [19:52:53] install: cannot change owner and permissions of `/usr/local/apache/common-local': No such file or directory [19:52:53] Unable to create common-local, please re-run this script as root. [19:53:03] So yeah, please run sync-common as root on mw1165 :) [19:53:15] ok well lemme see about db53, if [19:53:30] actually I wonder if there is any ops person awake in sf that can look at it [19:53:44] it's noon there or so, right? [19:53:50] yeah but it's a holiday [19:53:50] s2 snapshot host in pmtpa.. [19:54:06] Is anything actively using it? 
[19:54:34] If not, it probably can just be left to see if it catches up [19:54:40] running sync-common [19:54:45] s2 in pmtpa [19:54:56] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [19:54:58] oh [19:54:59] ok good [19:55:14] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [19:55:15] now what was the page that I got and yet it didn't show up here? [19:55:41] bits.wikimedia.org forbidden [19:55:42] there's a watchmouse bits alert come through on email [19:55:46] nimsoft [19:55:51] weird [19:56:01] I wonder if it was l10n cache related.. [19:56:08] ah maybe [19:56:22] well let's (I''m doing that sync) geet this last server back to normal and see [19:56:50] this has turned into a very long day :-D [19:57:04] might have to take wed off in trade (I guess otday was a holiday but meh, worked) [19:58:28] !log reedy synchronized php-1.21wmf10/extensions/WikimediaMessages [19:58:30] Logged the message, Master [19:58:57] !log reedy synchronized php-1.21wmf10/extensions/DataValues [19:58:59] Logged the message, Master [19:59:25] !log reedy synchronized php-1.21wmf10/extensions/Diff [19:59:26] yay [19:59:26] Logged the message, Master [19:59:32] deploy! [19:59:57] !log reedy synchronized php-1.21wmf10/extensions/Wikibase [19:59:58] Logged the message, Master [20:00:00] :D [20:04:57] New patchset: Hashar; "role::cache::upload now uses varnish on labs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49708 [20:07:33] i suppose we wait for wikidatawiki to wmf10 now? [20:07:42] Reedy: ^ [20:07:42] once it's done synching [20:08:12] yeah [20:08:13] New patchset: Hashar; "beta: basic role to get mysql packages installed" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49703 [20:08:27] New patchset: Hashar; "role::cache::upload now uses varnish on labs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49708 [20:08:51] you might want to resync 1165 at the end (given it's still going) [20:09:19] heh [20:14:00] so, some localisation is missing on wmf10 :/ [20:14:03] wikibase-nolanglinks [20:14:21] no big deal on test2wiki [20:15:26] !log reedy Started syncing Wikimedia installation... : Rebuild l10n cache for wikidata related extensions [20:15:27] Logged the message, Master [20:15:32] yay! [20:15:38] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 188 seconds [20:16:23] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 200 seconds [20:16:59] PROBLEM - MySQL disk space on neon is CRITICAL: Connection refused by host [20:17:26] PROBLEM - ircecho_service_running on neon is CRITICAL: Connection refused by host [20:21:11] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds [20:21:47] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds [20:21:59] mw1170: @ERROR: access denied to common from mw1170.eqiad.wmnet (10.64.32.40) [20:22:41] mw1182: @ERROR: access denied to common from mw1182.eqiad.wmnet (10.64.32.52) [20:22:41] mw1182: rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9] [20:22:41] mw1182: Copying to mw1182 from mw10.pmtpa.wmnet...failed [20:24:10] I didn't watch the last scap run.. 
So I wonder if that's why we had the missing l10n cache from some apaches [20:25:25] oooh, maybe it's random apaches but https://test2.wikipedia.org/wiki/0.6726935175950355 looks better [20:25:39] compared to https://test2.wikipedia.org/wiki/0.05981862042577668 [20:25:41] not so great [20:26:16] I'll run sync-dir on the l10n cache when htis has finished [20:26:32] ok [20:27:43] out of there for some sleep [20:27:45] * hashar waves [20:31:50] !log reedy Finished syncing Wikimedia installation... : Rebuild l10n cache for wikidata related extensions [20:31:51] Logged the message, Master [20:32:03] yay [20:32:21] I don't trust it though [20:32:31] I think that might've been the l10n error problem before [20:32:58] test2 looks better [20:33:28] sync-dir is taking a while [20:33:32] which suggests it is copying data [20:33:36] * aude nods [20:34:00] poor nfs1 [20:34:00] http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&m=cpu_report&s=by+name&c=Miscellaneous+pmtpa&h=nfs1.pmtpa.wmnet&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 [20:35:01] yeow [20:35:26] ok well I'll wait [20:36:24] Might be worth running sync-common using dsh on the apaches to make sure they're all actually in sync [20:36:31] The scap errors do look rather suspicious [20:37:12] !log reedy synchronized php-1.21wmf10/cache/l10n/ 'Scap is a liar' [20:37:14] Logged the message, Master [20:37:21] aude: sooooo [20:37:40] hmmmm [20:38:11] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: wikidatawiki to 1.21wmf10 [20:38:12] Logged the message, Master [20:38:13] snapshot1002: rsync: write failed on "/usr/local/apache/common-local/wikiversions.cdb": No space left on device (28) [20:38:13] snapshot1002: rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.9] [20:38:14] Again? :/ [20:38:42] oh grrrrr [20:39:15] :( [20:40:54] I'm going to wind up installing this with a different partition setup later, this is unacceptable [20:40:56] anyways... [20:41:06] can I lose wmf6 and 7? [20:41:37] Reedy: [20:41:52] yup, and 8 [20:41:52] of course if you resync and they are there, I'll just get them again [20:41:57] or will I? [20:42:02] Likely [20:42:09] mkdir and stop mwdeploy writing? [20:42:11] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 192 seconds [20:42:17] *sigh* [20:42:29] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 195 seconds [20:44:36] New patchset: Ottomata; "Adding puppet Limn module." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49710 [20:45:50] well I think it will run now, try it [20:47:10] er when the rest of the sync finishes [20:48:07] New patchset: Ottomata; "Adding puppet Limn module." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/49710 [20:52:32] New review: Ottomata; "Patch Set 2:" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49710 [20:53:17] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [20:54:11] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [21:01:23] RECOVERY - Packetloss_Average on oxygen is OK: OK: packet_loss_average is 3.80601430769 [21:03:11] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 186 seconds [21:03:47] RECOVERY - MySQL disk space on neon is OK: DISK OK [21:04:05] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 206 seconds [21:04:23] RECOVERY - ircecho_service_running on neon is OK: PROCS OK: 2 processes with args ircecho [21:09:56] RECOVERY - Puppet freshness on srv246 is OK: puppet ran at Mon Feb 18 21:09:40 UTC 2013 [21:19:28] New patchset: Reedy; "mediawikiwiki, wikidatawiki, testwiki and test2wiki to 1.21wmf10" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49775 [21:20:32] New review: Reedy; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49775 [21:20:33] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49775 [21:22:14] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 23 seconds [21:22:59] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [22:04:14] PROBLEM - Puppet freshness on mw48 is CRITICAL: Puppet has not run in the last 10 hours [22:16:21] New patchset: Hoo man; "Kill CONTENT_MODEL_WIKIBASE_QUERY for WB_NS_QUERY on wikidatawiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49784 [22:17:25] New review: Reedy; "Patch Set 1: Code-Review+2" [operations/mediawiki-config] (master) C: 2; - https://gerrit.wikimedia.org/r/49784 [22:17:26] what time is the scribunto deployment planned? [22:17:39] Change merged: jenkins-bot; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49784 [22:20:09] !log reedy synchronized wmf-config/CommonSettings.php [22:20:11] Logged the message, Master [22:21:24] Danny_B: 90 minutes or so I think [22:23:23] oki, hopefully i won't fall sleep uncommonly today [22:23:47] who will be taking care? [22:26:45] Change abandoned: Platonides; "54 minutes, it wasn't a bad time." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/49621 [22:36:10] me [22:37:08] !log tstarling synchronized php-1.21wmf10/extensions/Scribunto [22:37:09] Logged the message, Master [22:37:47] !log tstarling synchronized php-1.21wmf9/extensions/Scribunto [22:37:48] Logged the message, Master [22:40:21] so…its live? [22:42:16] no [22:42:20] !log tstarling Started syncing Wikimedia installation... : [22:42:21] Logged the message, Master [22:42:36] the deployment window is from 23:00, it's only 22:42 [22:45:41] !log tstarling Finished syncing Wikimedia installation... : [22:45:42] Logged the message, Master [22:49:40] hosts allow = 10.0.0.0/16 10.64.0.0/22 10.64.16.0/24 208.80.152.0/24 [22:49:47] ah ok [22:49:51] * TimStarling works out who to blame for that [22:50:12] $ host mw1174 [22:50:12] mw1174.eqiad.wmnet has address 10.64.32.44 [22:50:17] not in any of those ranges [22:52:51] MaxSem: is this still needed? 
https://de.wikipedia.org/w/index.php?title=MediaWiki:Articlefeedbackv5-found-percent&action=history [22:53:45] no idea - probably not [23:08:26] PROBLEM - Host mw1085 is DOWN: PING CRITICAL - Packet loss = 100% [23:09:02] RECOVERY - Host mw1085 is UP: PING OK - Packet loss = 0%, RTA = 26.59 ms [23:10:35] Nemo_bis: to the AFT-Message: http://en.wikipedia.org/wiki/Special:ArticleFeedbackv5/Golden-crowned_Sparrow?uselang=de seems ok, today L10N-Update apparently worked [23:11:01] MaxSem: ^ [23:11:11] whee [23:11:18] !log tstarling Started syncing Wikimedia installation... : [23:11:19] Logged the message, Master [23:12:13] but maybe i', hitting the right servers again ;-) [23:14:51] !log tstarling Finished syncing Wikimedia installation... : [23:14:52] Logged the message, Master [23:16:00] snapshot1002: rsync: recv_generator: mkdir "/usr/local/apache/common-local/php-1.21wmf7/bin" failed: Permission denied (13) [23:16:15] yeah, that's a good way to fix a server-side permission denied error [23:16:19] run it as root on the client! [23:21:24] hey TimStarling, if you have a second, could you take a look at https://gerrit.wikimedia.org/r/#/c/49678/ [23:21:51] he's only deploying a new templating system i guess [23:22:17] maybe after this deployment is done [23:24:48] !log fixed rsyncd.conf on scap proxies to allow connections from new apaches [23:24:49] Logged the message, Master [23:25:25] !log on snapshot1002 -- fixed incorrect owner on some MW files, killed broken apt-get and upgraded wikimedia-task-appserver [23:25:26] Logged the message, Master [23:33:26] * cswiktionary, cswikisource, cswikiquote, cswikibooks, cswikinews [23:33:59] something tells me a certain familiar user is involved with this [23:34:22] Heh. [23:34:26] whose surname is a single letter [23:35:26] It isn't clear how the wikis in the first batch were selected. [23:36:08] it seems to be enwiki plus anyone who wanted their favourite wiki included was accepted [23:36:27] New review: Demon; "Patch Set 1: Code-Review+1" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/49196 [23:37:49] https://meta.wikimedia.org/wiki/Lua is pretty spectacularly unhelpful in terms of explaining this to the community. [23:38:25] New review: Techman224; "Patch Set 1: Code-Review+1" [operations/mediawiki-config] (master) C: 1; - https://gerrit.wikimedia.org/r/49682 [23:44:25] New patchset: Tim Starling; "Enable Scribunto on 14 wikis" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49793 [23:44:54] New review: Tim Starling; "Patch Set 1: Verified+2 Code-Review+2" [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/49793 [23:44:55] Change merged: Tim Starling; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/49793 [23:44:56] quick, grab all the cool names in the module namespace! [23:45:12] TimStarling- Are we deploying Scribunto master, or the old version that was tagged in wmf9? [23:45:33] master, I updated it [23:45:34] Heh, all the CS projects except cswiki? [23:45:39] so including the language library [23:45:56] lol, I had missed that [23:46:02] maybe he's banned there [23:47:02] !log tstarling synchronized wmf-config/InitialiseSettings.php [23:47:02] Logged the message, Master [23:47:03] TimStarling- Maybe I'm looking in the wrong place, but /home/wikipedia/common/php-1.21wmf9/extensions/Scribunto doesn't seem to be master [23:47:38] we're at 10, no? [23:47:45] Only on test wikis [23:47:57] anomie: On which server? 
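TimStarling's find above — the rsync daemon's hosts allow list on the scap proxies stops short of the new apaches' addresses (mw1174 is 10.64.32.44, outside all four listed ranges) — would also explain the earlier "@ERROR: access denied to common from mw1170/mw1182" failures during scap. His "!log fixed rsyncd.conf on scap proxies to allow connections from new apaches" doesn't show the actual edit, so the fragment below is only a guess at its shape: the module name "common" comes from the error messages, while the path and the added 10.64.32.0/22 range (which does contain 10.64.32.44) are assumptions.

```
# Hypothetical rsyncd.conf fragment on a scap proxy -- not the deployed file.
[common]
    path        = /apache/common-local
    hosts allow = 10.0.0.0/16 10.64.0.0/22 10.64.16.0/24 10.64.32.0/22 208.80.152.0/24
```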
[23:48:04] Susan- fenari
[23:49:46] TimStarling- I see it's live on enwiki, but missing mw.ustring, mw.language, and such.
[23:49:57] anomie: yeah, I guess I forgot to run git submodule update
[23:51:16] !log tstarling synchronized php-1.21wmf10/extensions/Scribunto
[23:51:18] Logged the message, Master
[23:51:40] !log tstarling synchronized php-1.21wmf9/extensions/Scribunto
[23:51:41] Logged the message, Master
[23:51:59] better now?
[23:52:09] Yes, showing up now
[23:52:56] http://cs.wikipedia.org/wiki/Wikipedista:Danny_B.
[23:53:06] This user has been deleted at the request of its owner.
[23:54:31] he's not blocked though, he was only blocked for 2 days in May 2012
[23:55:04] nice. hopefully the cs.* folks don't mind getting Lua
[23:55:16] New review: Reedy; "Patch Set 2: Code-Review-1" [operations/mediawiki-config] (master) C: -1; - https://gerrit.wikimedia.org/r/49704
[23:55:31] how are things looking?
[23:56:24] fine
[23:56:29] Reedy: does that syntactically make a difference?