[00:03:25] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:19:19] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.019 seconds
[00:39:43] RECOVERY - SSH on ms1002 is OK: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0)
[00:52:28] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:53:49] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours
[01:01:46] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[01:01:46] PROBLEM - Puppet freshness on analytics1007 is CRITICAL: Puppet has not run in the last 10 hours
[01:01:46] PROBLEM - Puppet freshness on ms-be1007 is CRITICAL: Puppet has not run in the last 10 hours
[01:01:46] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[01:05:04] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.458 seconds
[01:14:58] PROBLEM - MySQL Slave Delay on db1025 is CRITICAL: CRIT replication delay 289 seconds
[01:18:43] RECOVERY - MySQL Slave Delay on db1025 is OK: OK replication delay 29 seconds
[01:38:13] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:40:46] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[01:54:07] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 3.712 seconds
[02:24:43] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[02:24:43] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours
[02:26:16] !log LocalisationUpdate completed (1.21wmf6) at Sun Dec 30 02:26:16 UTC 2012
[02:26:27] Logged the message, Master
[02:26:49] PROBLEM - Puppet freshness on mw1157 is CRITICAL: Puppet has not run in the last 10 hours
[02:29:22] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:38:13] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.026 seconds
[02:46:01] RECOVERY - Puppet freshness on mw1157 is OK: puppet ran at Sun Dec 30 02:45:47 UTC 2012
[02:50:31] PROBLEM - Memcached on virt0 is CRITICAL: Connection refused
[02:53:14] RECOVERY - Puppet freshness on sockpuppet is OK: puppet ran at Sun Dec 30 02:53:02 UTC 2012
[02:53:50] hrmmm, memcache again
[02:55:55] RECOVERY - Puppet freshness on neon is OK: puppet ran at Sun Dec 30 02:55:34 UTC 2012
[02:55:55] RECOVERY - Puppet freshness on tin is OK: puppet ran at Sun Dec 30 02:55:52 UTC 2012
[03:16:37] RECOVERY - Memcached on virt0 is OK: TCP OK - 0.020 second response time on port 11000
[03:46:12] PROBLEM - Puppet freshness on db1047 is CRITICAL: Puppet has not run in the last 10 hours
[03:46:12] PROBLEM - Puppet freshness on ms-fe1003 is CRITICAL: Puppet has not run in the last 10 hours
[03:46:12] PROBLEM - Puppet freshness on ms-be1010 is CRITICAL: Puppet has not run in the last 10 hours
[03:46:12] PROBLEM - Puppet freshness on ms-fe1004 is CRITICAL: Puppet has not run in the last 10 hours
[03:46:12] PROBLEM - Puppet freshness on sq48 is CRITICAL: Puppet has not run in the last 10 hours
[03:46:13] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[04:52:54] PROBLEM - Puppet freshness on solr2 is CRITICAL: Puppet has not run in the last 10 hours
[04:54:51] PROBLEM - Puppet freshness on vanadium is CRITICAL: Puppet has not run in the last 10 hours
[05:04:54] PROBLEM - Puppet freshness on solr1003 is CRITICAL: Puppet has not run in the last 10 hours
[05:04:54] PROBLEM - Puppet freshness on solr3 is CRITICAL: Puppet has not run in the last 10 hours
[05:05:57] PROBLEM - Puppet freshness on solr1001 is CRITICAL: Puppet has not run in the last 10 hours
[05:52:54] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[06:06:15] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[06:08:14] apergos: search14 ^^
[06:25:45] RECOVERY - Lucene on search14 is OK: TCP OK - 3.009 second response time on port 8123
[06:42:20] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[06:45:47] RECOVERY - Lucene on search14 is OK: TCP OK - 9.001 second response time on port 8123
[06:48:20] PROBLEM - Puppet freshness on analytics1001 is CRITICAL: Puppet has not run in the last 10 hours
[06:57:20] PROBLEM - Puppet freshness on ssl3001 is CRITICAL: Puppet has not run in the last 10 hours
[07:00:20] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[07:27:11] RECOVERY - Lucene on search14 is OK: TCP OK - 8.995 second response time on port 8123
[07:33:41] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[07:38:20] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[08:06:50] RECOVERY - Lucene on search14 is OK: TCP OK - 0.008 second response time on port 8123
[08:17:47] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[08:19:53] PROBLEM - Puppet freshness on stat1 is CRITICAL: Puppet has not run in the last 10 hours
[08:24:50] RECOVERY - Lucene on search14 is OK: TCP OK - 8.995 second response time on port 8123
[08:35:47] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[09:01:38] RECOVERY - Lucene on search14 is OK: TCP OK - 2.998 second response time on port 8123
[09:02:37] not that it returns any result anyway
[09:05:32] PROBLEM - Puppet freshness on silver is CRITICAL: Puppet has not run in the last 10 hours
[09:05:32] PROBLEM - Puppet freshness on zhen is CRITICAL: Puppet has not run in the last 10 hours
[09:05:48] !log Search reported broken with no results at all returned on en.wikt, (en|ru).source etc. "Lucene on search14 is CRITICAL" since 3h ago.
[09:05:54] For the records...
[09:05:59] Logged the message, Master
[09:12:44] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[09:16:02] RECOVERY - Lucene on search14 is OK: TCP OK - 0.004 second response time on port 8123
[09:23:22] this server has network spikes, though CPU load looks light regardless
[09:26:59] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[09:27:08] mmm, but load averages are high
[09:27:26] we're being DoSed, lol
[09:29:24] Nemo_bis, but search14 doesn't index these wikis
[09:29:42] it's enwiki.nspart1.sub1.hl and eswiki
[09:29:59] apergos or paravoid, maybe you're around?
[09:35:53] MaxSem: my two sentences didn't imply causation :p
[09:36:59] Can't problems on a lucene server spread elsewhere? Of course I've no idea.
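(Editor's note on the flapping "Lucene on search14" alerts above: the check is a plain TCP probe of the lucene-search daemon, so an OK result only means the port accepted a connection, which is consistent with search still returning no results while the check reports OK. Below is a minimal, illustrative sketch of that kind of probe; the host and port are taken from the alert text, the 10-second timeout is an assumption, and the real Nagios plugin is not this script.)

<?php
// Illustrative only: a bare TCP connect to the lucene-search port, timing the
// handshake the way the alert text reports it. No query is sent, which is why
// the check can flip back to OK while search itself still returns nothing.
$host = 'search14';   // host named in the alerts
$port = 8123;         // port shown in the "TCP OK ... on port 8123" recoveries
$timeout = 10;        // assumed timeout; the real check's value is not shown here

$start = microtime( true );
$sock = @fsockopen( $host, $port, $errno, $errstr, $timeout );
if ( $sock !== false ) {
	printf( "TCP OK - %.3f second response time on port %d\n", microtime( true ) - $start, $port );
	fclose( $sock );
} else {
	echo "Connection timed out\n";
}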
[09:38:39] well LVS Lucene is unhappy too
[10:08:14] RECOVERY - Lucene on search14 is OK: TCP OK - 0.003 second response time on port 8123
[10:19:11] PROBLEM - Lucene on search14 is CRITICAL: Connection timed out
[10:27:08] PROBLEM - MySQL Replication Heartbeat on db1035 is CRITICAL: CRIT replication delay 203 seconds
[10:28:11] PROBLEM - MySQL Slave Delay on db1035 is CRITICAL: CRIT replication delay 244 seconds
[10:33:17] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 189 seconds
[10:33:35] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 188 seconds
[10:38:09] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds
[10:38:09] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds
[10:55:06] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours
[11:00:57] RECOVERY - Lucene on search14 is OK: TCP OK - 0.003 second response time on port 8123
[11:01:15] RECOVERY - MySQL Slave Delay on db1035 is OK: OK replication delay 0 seconds
[11:01:16] RECOVERY - MySQL Replication Heartbeat on db1035 is OK: OK replication delay 0 seconds
[11:03:03] PROBLEM - Puppet freshness on analytics1007 is CRITICAL: Puppet has not run in the last 10 hours
[11:03:03] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[11:03:03] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[11:03:03] PROBLEM - Puppet freshness on ms-be1007 is CRITICAL: Puppet has not run in the last 10 hours
[11:52:45] PROBLEM - Puppet freshness on mw55 is CRITICAL: Puppet has not run in the last 10 hours
[12:25:45] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[12:25:45] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours
[12:36:42] PROBLEM - Puppet freshness on cp1028 is CRITICAL: Puppet has not run in the last 10 hours
[13:30:42] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[13:41:21] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.027 seconds
[13:47:04] PROBLEM - Puppet freshness on db1047 is CRITICAL: Puppet has not run in the last 10 hours
[13:47:04] PROBLEM - Puppet freshness on ms-be1010 is CRITICAL: Puppet has not run in the last 10 hours
[13:47:04] PROBLEM - Puppet freshness on ms-fe1003 is CRITICAL: Puppet has not run in the last 10 hours
[13:47:04] PROBLEM - Puppet freshness on sq48 is CRITICAL: Puppet has not run in the last 10 hours
[13:47:04] PROBLEM - Puppet freshness on ms-fe1004 is CRITICAL: Puppet has not run in the last 10 hours
[13:47:04] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[14:13:19] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:23:58] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 6.674 seconds
[14:25:01] RECOVERY - swift-account-reaper on ms-be3 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-reaper
[14:30:16] PROBLEM - swift-account-reaper on ms-be3 is CRITICAL: PROCS CRITICAL: 0 processes with regex args ^/usr/bin/python /usr/bin/swift-account-reaper
[14:40:10] PROBLEM - Puppet freshness on neon is CRITICAL: Puppet has not run in the last 10 hours
[14:42:43] apergos: ping?
[14:43:15] yes?
[14:43:25] hi
[14:43:50] Thehelpfulone wants something from us but I'm unsure on if/how I can do it
[14:44:27] is there a ticket?
[14:44:42] don't think so
[14:44:55] can you summarize then?
[14:45:21] apergos, sure, I'd like to add an email address to a user account on enwiki because the user lost the password for the account
[14:45:44] X! is the main user account, and he's lost the password for his bot account, SoxBot, and didn't set an email address for it
[14:46:32] this was done for an admin account a few days ago by Reedy, I don't know if there was a ticket for that?
[14:46:37] PROBLEM - MySQL Slave Delay on db1007 is CRITICAL: CRIT replication delay 182 seconds
[14:47:55] hmmmm
[14:47:57] I see
[14:48:03] lemme check rt
[14:48:16] PROBLEM - MySQL Replication Heartbeat on db1007 is CRITICAL: CRIT replication delay 229 seconds
[14:49:55] RECOVERY - MySQL Replication Heartbeat on db1007 is OK: OK replication delay 0 seconds
[14:49:57] I don't offhand see anything in there (which makes sense, reedy doesn't do rt)
[14:50:04] RECOVERY - MySQL Slave Delay on db1007 is OK: OK replication delay 0 seconds
[14:50:14] so our problem would be, how does the user verify that it's really his bot account
[14:50:47] what's the main user account? I assume this is en wiki?
[14:51:38] oh, X! , I see
[14:51:55] how did the password get lost? i.e. compromise, the bot has been inactive, or what?
[14:52:30] hmm only two edits to the bot page and both of them ... yesterday?
[14:53:32] and what about this redirect to User:Yetanotherbot?
[14:53:53] Thehelpfulone:
[14:54:07] PROBLEM - Puppet freshness on solr2 is CRITICAL: Puppet has not run in the last 10 hours
[14:54:20] apergos, I think he wants to get back access to the old account
[14:54:37] and the password got lost because when he retired ~1 year ago he gave another user his code from the toolserver
[14:54:42] but that had his password in it
[14:54:54] once he was informed he had to change it
[14:55:01] correctly
[14:55:01] but he forgot what it was :)
[14:55:06] I see
[14:56:04] PROBLEM - Puppet freshness on vanadium is CRITICAL: Puppet has not run in the last 10 hours
[14:56:17] ok, well I see old revisions of the bot page that say X! owned it, and they created the page so that seems legit
[14:57:33] ah used to be User:Soxred93
[14:57:45] he did request it in #wikimedia-tech on Friday evening too as "Yetanotherx"
[14:57:49] yeah, enwiki admin and crat
[14:58:38] ok I give the thumbs up for this
[14:59:22] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:00:42] hmm now there are a few subsidiary fields in the user table having to do with email
[15:00:50] let's see if there's a maintenance script for this
[15:00:55] okay
[15:01:13] I'm not sure which email X! is using so if you just use the one from his main account that should be good
[15:01:25] i.e. I'm not sure just stuffing something into user_email is enough
[15:01:31] well I would ask which one the user wants
[15:04:46] nope don't see a script
[15:05:47] can't see anything in SAL to see what was done
[15:06:07] PROBLEM - Puppet freshness on solr3 is CRITICAL: Puppet has not run in the last 10 hours
[15:06:07] PROBLEM - Puppet freshness on solr1003 is CRITICAL: Puppet has not run in the last 10 hours
[15:06:27] likely not logged
[15:06:52] yeah, tim logged it when he did it on Aug 21
[15:07:10] PROBLEM - Puppet freshness on solr1001 is CRITICAL: Puppet has not run in the last 10 hours
[15:10:01] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 6.364 seconds
[15:10:04] is the user on irc now?
[15:10:21] there is a changePassword script that could be used
[15:10:30] they would need to change the pwd immediately
[15:11:32] Thehelpfulone:
[15:11:51] they are on IRC, but they're AFK because they're based in the US
[15:11:56] ah
[15:12:10] ok, well if I'm on this evening when they are around, you can point them to me
[15:12:19] sure thanks
[15:12:32] otherwise they might try in their evening or see if someone else is around who can take care of it
[15:12:43] anyone who can run a maintenance script on the db will do
[15:13:04] ok
[15:32:04] PROBLEM - Puppet freshness on tin is CRITICAL: Puppet has not run in the last 10 hours
[15:40:01] !log reedy synchronized php-1.21wmf6/extensions/ParserFunctions
[15:40:12] Logged the message, Master
[15:45:52] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:54:07] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[16:00:07] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.033 seconds
[16:00:07] PROBLEM - Puppet freshness on sockpuppet is CRITICAL: Puppet has not run in the last 10 hours
[16:33:52] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:46:19] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.022 seconds
[16:49:10] PROBLEM - Puppet freshness on analytics1001 is CRITICAL: Puppet has not run in the last 10 hours
[16:50:40] PROBLEM - MySQL Replication Heartbeat on db1007 is CRITICAL: CRIT replication delay 188 seconds
[16:50:58] PROBLEM - MySQL Slave Delay on db1007 is CRITICAL: CRIT replication delay 196 seconds
[16:58:10] PROBLEM - Puppet freshness on ssl3001 is CRITICAL: Puppet has not run in the last 10 hours
[17:03:16] PROBLEM - MySQL Slave Delay on db1007 is CRITICAL: CRIT replication delay 189 seconds
[17:03:16] PROBLEM - MySQL Replication Heartbeat on db1007 is CRITICAL: CRIT replication delay 189 seconds
[17:12:25] RECOVERY - MySQL Slave Delay on db1007 is OK: OK replication delay 28 seconds
[17:13:46] RECOVERY - MySQL Replication Heartbeat on db1007 is OK: OK replication delay 0 seconds
[17:14:04] PROBLEM - Host srv278 is DOWN: PING CRITICAL - Packet loss = 100%
[17:16:10] RECOVERY - Host srv278 is UP: PING OK - Packet loss = 0%, RTA = 0.57 ms
[17:19:46] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:20:40] PROBLEM - Apache HTTP on srv278 is CRITICAL: Connection refused
[17:34:10] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.040 seconds
[17:34:16] 55
[17:34:25] * Nemo_bis fail
[17:35:04] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
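(Editor's note on the account-recovery discussion above, 14:45-15:13: the concern is that the user table carries several related columns beyond user_email (user_email_authenticated and the token fields), so writing user_email alone may not be enough, and at the time no ready-made maintenance script for attaching an email address was found, only the changePassword script for passwords. Below is a rough sketch of the kind of throwaway maintenance script someone with shell access might write instead. The script name, class name, account name, and address are placeholders; this is not a script that exists in the tree, just one plausible shape for the procedure under MediaWiki 1.21-era APIs.)

<?php
// setBotEmail.php -- hypothetical one-off sketch, not an existing maintenance script.
// Going through the User object keeps the related columns (user_email,
// user_email_authenticated, user_email_token, user_email_token_expires)
// consistent, instead of writing user_email directly with SQL.
require_once __DIR__ . '/Maintenance.php';

class SetBotEmail extends Maintenance {
	public function execute() {
		$user = User::newFromName( 'SoxBot' );      // placeholder account name
		if ( !$user || !$user->getId() ) {
			$this->error( "No such user\n", true );
		}
		$user->setEmail( 'owner@example.org' );     // placeholder address
		$user->confirmEmail();                      // marks the address as authenticated
		$user->saveSettings();
		$this->output( "Email set; the owner can now use Special:PasswordReset\n" );
	}
}

$maintClass = 'SetBotEmail';
require_once RUN_MAINTENANCE_IF_MAIN;

(The changePassword route mentioned at 15:10 works the same way in spirit, but sets a temporary password that the owner would need to change immediately, rather than attaching an address and letting Special:PasswordReset do the rest.)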
[17:57:43] RECOVERY - Apache HTTP on srv278 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.078 second response time
[18:08:04] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:21:07] PROBLEM - Puppet freshness on stat1 is CRITICAL: Puppet has not run in the last 10 hours
[18:22:37] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.074 seconds
[18:56:13] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:07:10] PROBLEM - Puppet freshness on silver is CRITICAL: Puppet has not run in the last 10 hours
[19:07:10] PROBLEM - Puppet freshness on zhen is CRITICAL: Puppet has not run in the last 10 hours
[19:08:40] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.026 seconds
[19:42:26] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:56:40] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.038 seconds
[20:27:32] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:33:37] New patchset: Reedy; "Bug 43525 - n:cs: site settings" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/41558
[20:34:32] New patchset: Reedy; "Bug 43525 - n:cs: site settings" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/41558
[20:34:56] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/41558
[20:35:23] !log reedy synchronized wmf-config/InitialiseSettings.php
[20:35:32] Logged the message, Master
[20:39:59] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 4.854 seconds
[20:56:11] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours
[21:04:08] PROBLEM - Puppet freshness on analytics1007 is CRITICAL: Puppet has not run in the last 10 hours
[21:04:08] PROBLEM - Puppet freshness on ms-be1007 is CRITICAL: Puppet has not run in the last 10 hours
[21:04:08] PROBLEM - Puppet freshness on ms-be1005 is CRITICAL: Puppet has not run in the last 10 hours
[21:04:08] PROBLEM - Puppet freshness on ms-be1006 is CRITICAL: Puppet has not run in the last 10 hours
[21:14:02] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:29:09] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.022 seconds
[21:54:03] PROBLEM - Puppet freshness on mw55 is CRITICAL: Puppet has not run in the last 10 hours
[22:00:57] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:09:48] PROBLEM - Host ms-be1004 is DOWN: PING CRITICAL - Packet loss = 100%
[22:11:36] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 3.513 seconds
[22:27:03] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours
[22:27:03] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours
[22:37:12] huh, bad error msg from gerrit. i load a gerrit page and then log in to gerrit in a different tab and then try to do something in the original tab (in this case go from the changeset overview page to view a diff of an individual patch set on that change) and it complains that i'm no longer logged in. of course i'm *newly* logged in not no longer
[22:37:17] anyway, refresh fixed it
[22:38:09] PROBLEM - Puppet freshness on cp1028 is CRITICAL: Puppet has not run in the last 10 hours
[22:47:00] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:53:51] New patchset: Tim Starling; "Bug 43466: make https canonical for uzwiki" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/41561
[22:55:32] Change merged: Tim Starling; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/41561
[22:56:24] nice
[22:56:55] !log tstarling synchronized wmf-config/InitialiseSettings.php
[22:57:03] Logged the message, Master
[22:59:27] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.027 seconds
[23:31:51] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:32:08] TimStarling, around?
[23:32:19] yes
[23:32:45] great, I need a password reset or email reset on a user account - a quick explanation:
[23:33:42] User:X! retired ~1 year ago and gave the code to his bot, User:SoxBot to another user - but in that code was the password for his bot, so he had to reset the password. He came back recently but forgot what he reset the password to and didn't set an email address.
[23:33:56] X! is an enwiki admin and former crat
[23:34:42] he's on IRC right now, just asking him to join this channel
[23:35:52] Thehelpfulone: Yes?
[23:36:25] so you want it set to your gmail account?
[23:36:40] Yetanotherx, ^^ that's for your bot
[23:37:45] Yes, I would.
[23:39:39] check your inbox
[23:39:48] Thanks, TimStarling :)
[23:39:55] no problem
[23:40:09] TimStarling, thanks, also is this documented somewhere? apergos tried to do it earlier but apparently there's a number of email fields and he couldn't find a script for it
[23:41:11] All right, password is reset, now I just need the global account unlocked. I can find a steward for that, though. Thanks, TimStarling
[23:41:34] I don't think there is any documentation
[23:42:30] I don't really want a documented procedure on en.wp, that would increase the rate of shell requests by a factor of 10
[23:42:42] heh sure, I meant more wikitech
[23:46:06] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 336 bytes in 0.020 seconds
[23:48:03] PROBLEM - Puppet freshness on db1047 is CRITICAL: Puppet has not run in the last 10 hours
[23:48:03] PROBLEM - Puppet freshness on ms-fe1003 is CRITICAL: Puppet has not run in the last 10 hours
[23:48:03] PROBLEM - Puppet freshness on ms-be1010 is CRITICAL: Puppet has not run in the last 10 hours
[23:48:03] PROBLEM - Puppet freshness on sq48 is CRITICAL: Puppet has not run in the last 10 hours
[23:48:03] PROBLEM - Puppet freshness on zinc is CRITICAL: Puppet has not run in the last 10 hours
[23:48:04] PROBLEM - Puppet freshness on ms-fe1004 is CRITICAL: Puppet has not run in the last 10 hours