[00:00:05] cool
[00:00:12] so, when you stop the volume
[00:00:18] you'll need to restart glusterd
[00:00:21] same with when you start it
[00:00:25] it seems to be broken
[00:08:33] hey guys, I don't know if I'm in the right place; I was just wondering how to put an image I've uploaded to Commons as the main photo of an article
[00:10:02] you probably want to ask in #wikimedia-commons
[00:10:07] or in #wikipedia
[00:10:11] okay, thanks!
[00:10:53] Ryan_Lane: Wait, I can't stop and restart a couple of volumes and /then/ restart glusterd? It needs one restart per action?
[00:11:59] andrewbogott: once you do one, it becomes unresponsive
[00:12:05] wow. OK
[00:12:06] it won't take any more commands
[00:12:07] yeah
[00:12:13] it's broken, for sure
[00:30:59] @labs-project-info swift2
[00:30:59] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 51 seconds ago
[00:31:09] @labs-project-info swift3
[00:31:09] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 1 seconds ago
[00:31:34] @labs-project-info ryanlandsucks
[00:31:34] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 1 seconds ago
[00:43:52] Ryan_Lane: The partial failures look like this: http://dpaste.org/OpWKa/
[00:44:11] I haven't yet tested to see if a stop/start fixes it.
[00:45:06] * Ryan_Lane nods
[00:47:40] Ryan_Lane: Should I be restarting glusterd on /every/ node or just the one where I'm issuing the start/stop?
[00:48:13] just the one
[00:49:02] hm. done, but still unresponsive
[00:49:33] Ah, here we go; I was just impatient
[00:52:43] petan: https://labsconsole.wikimedia.org/w/index.php?title=User_talk:Jeremyb&diff=next&oldid=5535
[00:52:50] * jeremyb runs away
[01:07:34] PROBLEM Total processes is now: WARNING on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS WARNING: 176 processes
[01:07:59] @labs-project-info wm-review
[01:07:59] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 22 seconds ago
[01:08:20] @labs-project-info wmreview
[01:08:20] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 43 seconds ago
[01:12:32] RECOVERY Total processes is now: OK on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS OK: 97 processes
[01:22:05] @labs-project-info testproject-mike
[01:22:06] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 57 seconds ago
[01:24:03] How do I determine access to a named machine like "vanadium"? I ping and ssh to various bastion hosts and screw around with vanadium.{eqiad,wmflabs,wmfnet,wikimedia.org,some.other.domain.system} until I make a connection, but surely there's a better way.
[01:25:54] spagewmf: what do you mean?
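Putting Ryan_Lane's workaround into commands: a minimal sketch, assuming a volume named "home" and Ubuntu-style service management (neither is named in the log; on Ubuntu the init script is typically glusterfs-server rather than glusterd):

    # glusterd reportedly stops responding after each volume stop/start,
    # so restart it before issuing the next command -- and only on the
    # node where the command was issued ("just the one", per above).
    gluster volume stop home
    sudo service glusterfs-server restart    # "glusterd" on RPM-based distros
    gluster volume start home
    sudo service glusterfs-server restart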
[01:26:07] if it's labs, it's pmtpa.wmflabs
[01:26:15] at least until we get an eqiad zone
[01:26:30] if it's production, it's .wmnet
[01:26:41] ok, I get what you mean :D
[01:26:56] because we also have systems with public IPs that are either wmflabs.org or wikimedia.org
[01:27:02] you just kind of know
[01:27:05] spagewmf: you can add it all as "search" in /etc/resolv.conf
[01:27:13] that's doable too, yes
[01:29:34] Ryan_Lane, OK, so this one is vanadium.eqiad.wmnet and seems available without going through a bastion. But I don't understand the rules, and every time someone mentions a machine I don't use every day I futz around.
[01:30:34] spagewmf: "host vanadium" on fenari tells me
[01:31:07] that it is eqiad.wmnet, and that would mean you DO need a bastion, because it is internal IP only
[01:31:35] public IP = wikimedia.org, private IP = dc.wmnet
[01:31:57] mutante, that's gold, Jerry. Gold!
[01:32:19] though it doesn't work for wmflabs hosts
[01:33:24] I'm not sure where the page with that tip belongs. Anyway, I updated the infobox on https://wikitech.wikimedia.org/view/Vanadium with server_nodename.
[01:34:04] spagewmf: on labs: if public IP then hostname.wmflabs.org
[01:34:57] and for private IP, .wmflabs where production has .wmnet
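Taken together, mutante's rules and Ryan_Lane's resolv.conf tip come out to something like this (a sketch; the exact search list is an assumption, not quoted from the log):

    # /etc/resolv.conf -- let bare hostnames resolve against the likely domains
    search pmtpa.wmflabs eqiad.wmnet wikimedia.org wmflabs.org

    # or just ask DNS, as mutante did from fenari:
    host vanadium    # answer in eqiad.wmnet => private IP, so a bastion is needed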
[01:39:14] mutante, great stuff, but it needs to be on a wiki page. I guess wikitech, though its https://wikitech.wikimedia.org/view/Server_roles page seems out-of-date.
[01:42:19] @labslproject-info puppet-cleanup
[01:42:34] @labs-project-info puppet-cleanup
[01:42:34] The project Puppet-cleanup has 3 instances and 6 members, description: {unknown}
[01:44:12] spagewmf: you know, this is actually auto-generated: http://doc.wikimedia.org/puppet/
[01:44:24] so it will not need human updates when stuff is changed in puppet
[01:44:43] and it can also tell you that vanadium is in eqiad.wmnet and so on
[01:45:02] since it is looking at site.pp in the puppet repo
[01:45:47] and yeah, there have been different opinions on which templates to still use and update on wikitech...
[01:46:53] andrewbogott: migration going well?
[01:46:58] more excellence; add it to the "Finding out about a server" page (teach a man to fish and he won't pester you on IRC).
[01:47:22] yes, I notice emery isn't mentioned on wikitech. Hmm.
[01:47:23] Ryan_Lane: Yep, slow but sure
[01:47:25] cool
[01:47:33] what % copied?
[01:48:02] exit
[01:48:20] Um… 70% maybe
[01:48:54] ok
[01:48:57] I only ask because I'm meeting someone for dinner at 7:30
[01:49:14] 7:30 seems realistic.
[01:51:16] I'll be back online after I get done with dinner, but it may not be till like 9:30
[01:53:58] @labs-project-info cloudadmins
[01:53:58] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 48 seconds ago
[01:55:51] @labs-project-info allnovausersstatic
[01:55:51] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 41 seconds ago
[01:56:10] @labs-project-info testproject
[01:56:10] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 1 seconds ago
[01:56:19] no need to apologize, wm-bot
[01:58:02] Ryan_Lane: It's a race to see if I can migrate every other project in the time it takes Bots to rsync
[01:58:13] :D
[02:00:32] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 150 processes
[02:26:25] @labs-project-info andrewteststhebot
[02:26:25] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 50 seconds ago
[02:37:52] I wrote up mutante's explanation on https://wikitech.wikimedia.org/view/Category:Servers which links to https://wikitech.wikimedia.org/view/How_to_access_a_server , hope this helps.
[02:37:53] RECOVERY Free ram is now: OK on wikistream-1 i-0000016e.pmtpa.wmflabs output: OK: 25% free memory
[02:37:53] RECOVERY Free ram is now: OK on sube i-000003d0.pmtpa.wmflabs output: OK: 40% free memory
[02:45:52] PROBLEM Free ram is now: WARNING on wikistream-1 i-0000016e.pmtpa.wmflabs output: Warning: 11% free memory
[04:31:43] RECOVERY Current Load is now: OK on parsoid-roundtrip7-8core i-000004f9.pmtpa.wmflabs output: OK - load average: 4.30, 4.45, 4.96
[06:17:43] RECOVERY Total processes is now: OK on nova-precise1 i-00000236.pmtpa.wmflabs output: PROCS OK: 146 processes
[06:28:33] PROBLEM Total processes is now: WARNING on vumi-metrics i-000004ba.pmtpa.wmflabs output: PROCS WARNING: 151 processes
[06:30:33] PROBLEM Total processes is now: WARNING on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS WARNING: 155 processes
[06:38:43] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[06:39:06] @instance-info nova-precise1
[06:39:11] -_-
[06:39:14] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.10.4.60 source code licensed under GPL and located at https://github.com/benapetr/wikimedia-bot
[06:39:17] @commands
[06:39:17] Commands: there is too many commands to display on one line, see http://meta.wikimedia.org/wiki/wm-bot for a list of commands and help
[06:39:48] @labs-instance-info nova-precise1
[06:39:54] rawr
[06:40:02] @labs-instance nova-precise1
[06:40:19] !resource openstack
[06:40:20] https://labsconsole.wikimedia.org/wiki/Nova_Resource:openstack
[06:43:32] RECOVERY Total processes is now: OK on vumi-metrics i-000004ba.pmtpa.wmflabs output: PROCS OK: 147 processes
[06:45:14] something is definitely wrong with testlabs.
[06:45:20] oh?
[06:45:46] on testlabs-abogott I didn't have a homedir before /or/ after a reboot.
[06:45:54] The 'before' part is puzzling
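A quick way to check what andrewbogott is describing here, assuming /home on a Labs instance is a GlusterFS mount (the rest of the conversation suggests it is):

    mount | grep -i gluster    # is the gluster share mounted at all?
    df -hT /home               # which filesystem actually backs /home right now?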
[06:47:15] hm
[06:47:18] The gluster volume looks right. Not getting mounter though.
[06:47:25] *mounted
[06:47:47] yeah
[06:47:50] lemme check something
[06:48:14] y'know, this other testlabs instance (puppetdoc2) is totally fine
[06:48:25] it is?
[06:48:41] I can't ss to it
[06:48:45] I'm rebooting to see if it mounts gluster
[06:48:45] ssh
[06:48:50] ah
[06:49:57] Yep, came up, rw, looks fine.
[06:50:04] So I rescind my concerns about testlabs.
[06:50:09] It's just that one instance for some reason.
[06:50:17] (and maybe nova-precise1?)
[06:50:28] that's weird
[06:50:39] yeah
[06:50:56] this was rebooted?
[06:51:17] testlabs-abogott was, yes.
[06:51:50] mount just doesn't know about /home at all
[06:52:08] is nova-precise still messed up?
[06:52:13] gluster is failing
[06:52:14] *nova-precise1
[06:52:26] *sigh*
[06:52:32] PROBLEM dpkg-check is now: CRITICAL on testing-arky i-0000033b.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages
[06:53:03] well, gluster is failing on this one specific instance
[06:53:08] I dunno about nova-precise1
[06:53:15] nova-precise1 is having kernel issues
[06:53:29] it's unrelated
[06:53:52] RECOVERY dpkg-check is now: OK on integration-androidsdk i-000004c8.pmtpa.wmflabs output: All packages OK
[06:55:22] PROBLEM dpkg-check is now: CRITICAL on mw1-21beta-lucid i-00000416.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages
[06:55:32] RECOVERY Total processes is now: OK on parsoid-spof i-000004d6.pmtpa.wmflabs output: PROCS OK: 150 processes
[06:58:38] ah
[06:58:49] andrewbogott: this instance isn't in the auth.allow list
[06:59:16] is testlabs-abogott brand new?
[06:59:37] Not today new
[06:59:41] we turned off the glustermanager script
[06:59:42] PROBLEM dpkg-check is now: CRITICAL on conventionextension-trial i-000003bf.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages
[06:59:47] so it's not handling the auth.allow list
[06:59:59] 10/24, says labsconsole
[07:00:08] hm
[07:00:12] I wonder why it isn't listed
[07:00:28] Oh, I guess we should turn glustermanager back on though, huh?
[07:00:46] yep
[07:00:53] I'll get that
[07:01:01] lemme de-hack the script first
[07:01:03] bleh
[07:01:05] it's back in
[07:01:12] puppet restarted itself
[07:03:46] ah ha
[07:03:58] this instance doesn't have an ldap entry at all
[07:04:02] which doesn't make any sense
[07:04:32] I haven't used it in a while. So if you're not curious we can just delete it.
[07:04:48] well, I do wonder how it doesn't have one...
[07:04:53] was it deleted at some point
[07:04:53] Is it possible that I tried to delete it already, and it survived in nova but died in ldap?
[07:04:54] ?
[07:04:55] ah
[07:04:57] could be
[07:05:16] that would definitely cause this issue
[07:05:32] it's reassuring that this is due to a hosts.allow problem ;)
[07:05:47] yep, nothing to do with today's changes.
[07:05:50] yeah
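The auth.allow list they're tracking down is per-volume gluster metadata, normally maintained by the glustermanager script mentioned above. A sketch of inspecting and amending it by hand (the volume name and address range are made up for illustration):

    gluster volume info home | grep -i auth.allow    # which client IPs may mount the volume
    # note: "volume set" replaces the whole list rather than appending to it
    gluster volume set home auth.allow 10.4.0.*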
[07:05:52] PROBLEM dpkg-check is now: CRITICAL on newchanges-bot i-00000419.pmtpa.wmflabs output: DPKG CRITICAL dpkg reports broken packages
[07:06:12] So I've seen at least a couple of instances reboot and work properly.
[07:06:18] we should reboot bastion
[07:06:21] and bastion-restricted
[07:06:28] (bastion first)
[07:06:39] Yeah, good idea. That'll kick me off IRC
[07:06:44] heh
[07:06:50] ok. I'm going to reboot bastion1
[07:07:06] !log bastion rebooting bastion.wmflabs.org
[07:07:08] Logged the message, Master
[07:08:21] bastion is up
[07:08:24] and working with gluster
[07:08:52] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[07:09:18] !log rebooting bastion-restricted.wmflabs.org
[07:09:18] rebooting is not a valid project.
[07:09:23] !log bastion rebooting bastion-restricted.wmflabs.org
[07:09:24] Logged the message, Master
[07:10:33] RECOVERY Current Users is now: OK on bots-1 i-000000a9.pmtpa.wmflabs output: USERS OK - 0 users currently logged in
[07:10:53] RECOVERY dpkg-check is now: OK on newchanges-bot i-00000419.pmtpa.wmflabs output: All packages OK
[07:14:23] welcome back
[07:14:35] seems it's working well :)
[07:14:42] yep
[07:15:05] Not much for us to do now but wait for complaints
[07:15:43] yep
[07:15:57] and to fix the keys, but that's for tomorrow :)
[07:38:53] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[08:08:54] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[08:39:43] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[09:11:12] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[09:41:13] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[10:11:13] PROBLEM host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[10:19:52] PROBLEM Current Load is now: WARNING on parsoid-roundtrip7-8core i-000004f9.pmtpa.wmflabs output: WARNING - load average: 5.17, 5.33, 5.08
[10:24:44] RECOVERY Current Load is now: OK on parsoid-roundtrip7-8core i-000004f9.pmtpa.wmflabs output: OK - load average: 4.75, 4.89, 4.95
[10:24:58] !log wikidata-dev wikidata-dev-9 Looking for a memory problem. Tried to increase memory to 512 MB in /etc/php5/apache2/php.ini (nts: http://www.mediawiki.org/wiki/Manual:Errors_and_symptoms)
[10:24:59] Logged the message, Master
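The php.ini change being !logged presumably looks like the standard PHP memory tweak; a sketch (the directive name is an assumption, since the log only says "increase memory to 512 MB"):

    ; /etc/php5/apache2/php.ini -- per-request memory ceiling for mod_php
    memory_limit = 512M

    # then reload Apache so the new limit takes effect:
    sudo service apache2 restart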
[10:26:55] @labs-resolve 36
[10:26:55] I don't know this instance - aren't you are looking for: I-00000236 (nova-precise1), I-00000336 (tutopuppet), I-00000362 (syslogcol-ac), I-00000369 (robh-spl), I-00000536 (home-migrate-lucid),
[10:27:33] ACKNOWLEDGEMENT host: i-00000236.pmtpa.wmflabs is DOWN address: i-00000236.pmtpa.wmflabs CRITICAL - Host Unreachable (i-00000236.pmtpa.wmflabs)
[10:36:29] @labs-info i-000000af
[10:36:29] [Name i-000000af doesn't exist but resolves to I-000000af] I-000000af is Nova Instance with name: bots-sql2, host: virt6, IP: 10.4.0.41 of type: s1.small, with number of CPUs: 1, RAM of this size: 1024M, member of project: bots, size of storage: 90 and with image ID: lucid-server-cloudimg-amd64.img
[12:46:54] PROBLEM Free ram is now: WARNING on aggregator1 i-0000010c.pmtpa.wmflabs output: Warning: 19% free memory
[12:51:53] RECOVERY Free ram is now: OK on aggregator1 i-0000010c.pmtpa.wmflabs output: OK: 20% free memory
[13:32:52] PROBLEM Free ram is now: WARNING on aggregator1 i-0000010c.pmtpa.wmflabs output: Warning: 19% free memory
[14:08:59] @labs-info aggregator1
[14:08:59] [Name aggregator1 doesn't exist but resolves to I-0000010c] I-0000010c is Nova Instance with name: aggregator1, host: virt6, IP: 10.4.0.79 of type: m1.medium, with number of CPUs: 2, RAM of this size: 4096M, member of project: ganglia, size of storage: 50 and with image ID: lucid-server-cloudimg-amd64.img
[16:55:20] @labs-resolve i-000004db.pmtpa.wmflabs
[16:55:20] The i-000004db.pmtpa.wmflabs resolves to instance I-000004db with a fancy name parsoid-roundtrip5-8core and IP 10.4.0.125
[17:23:13] @labs-info i-000000fd
[17:23:14] [Name i-000000fd doesn't exist but resolves to I-000000fd] I-000000fd is Nova Instance with name: patchtest2, host: virt6, IP: 10.4.0.74 of type: m1.small, with number of CPUs: 1, RAM of this size: 2048M, member of project: patchtest, size of storage: 30 and with image ID: lucid-server-cloudimg-amd64.img
[17:45:31] 12/18/2012 - 17:45:31 - Creating a home directory for kelson at /export/keys/kelson
[17:45:34] PROBLEM Total processes is now: WARNING on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS WARNING: 159 processes
[17:46:54] PROBLEM Current Load is now: WARNING on bots-sql1 i-000000b5.pmtpa.wmflabs output: WARNING - load average: 8.44, 13.04, 8.25
[17:50:32] RECOVERY Total processes is now: OK on bots-salebot i-00000457.pmtpa.wmflabs output: PROCS OK: 114 processes
[17:50:42] 12/18/2012 - 17:50:41 - Updating keys for kelson at /export/keys/kelson
[17:51:35] So… are homedirs working OK for everyone?
[18:01:54] RECOVERY Current Load is now: OK on bots-sql1 i-000000b5.pmtpa.wmflabs output: OK - load average: 0.70, 1.74, 4.66
[19:20:30] Hi
[19:21:16] I wanted to reset my password, but I can't. I'm able to log in with the tmp one, but after giving the new password, I always get the following error:
[19:21:19] There was either an authentication database error or you are not allowed to update your external account.
[19:21:44] So, I'm stuck. I cannot log in :(
[19:25:32] Kelson: So, to confirm -- each time you reset it your temp password works?
[19:27:35] andrewbogott: I can't reset it many times within 24 hours
[19:27:51] Oh, I see.
[19:28:11] hm. weird
[19:28:15] People have complained of this problem before, I'm trying to remember what it was...
[19:28:26] of course 85% of the time it turns out to be capslock or something like that :(
[19:28:40] andrewbogott: I'm stuck on the "Change password" page
[19:29:50] andrewbogott: the only error you can make there is not typing exactly the same new password twice... and then you get a different error
[19:34:23] Kelson: OK, I don't know much, but let me do the one thing that sometimes works when logins break...
[19:38:05] Kelson, do you know whether or not you enabled two-factor auth?
[19:39:16] andrewbogott: I don't know what it is... so I guess "no"
[19:39:31] so when resetting your password you're leaving the 'token' field empty
[19:39:38] I am presuming you see the token field
[19:39:44] yes, I see it
[19:40:23] andrewbogott: I do not know what it is or how to use it
[19:40:37] Leaving it empty is the right choice.
[19:40:44] Just, if you typed something in there, that might interfere.
[19:40:44] andrewbogott: when giving my new password I leave it empty
[19:40:52] OK, that's right.
[19:41:11] So… I don't think I'm going to be much help :( Ryan_Lane may be of more use if/when he exits his meeting.
[19:41:49] andrewbogott: ok :( Thank you for at least trying.
[19:42:10] Kelson: which form are you using?
[19:42:18] is this the mail-me-a-password form?
[19:42:48] Ryan_Lane: https://labsconsole.wikimedia.org/wiki/Special:PasswordReset
[19:43:03] Hi, how do I find out who the project owners for a project are? I only find a list of members. I'm reading https://labsconsole.wikimedia.org/wiki/Help:Move_your_bot_to_Labs
[19:43:08] let me see if it works for me
[19:43:22] danmichaelo: for bots it's Damianz and petan
[19:43:36] Ryan_Lane: I received an email with subject "Account details on Labs" containing a temp. password
[19:43:38] I need to add that to the project page
[19:43:46] Kelson: I'm testing it using my account
[19:44:27] thx, but how would you find out without asking here?
[19:44:40] I'm not sure it's possible...
[19:44:45] any progress in moving ~?
[19:44:50] giftpflanze: done
[19:45:01] giftpflanze: instances need to be rebooted for it to take effect on them
[19:45:26] oh
[19:45:39] I guess it's not done automatically?
[19:45:57] to reboot all instances?
[19:46:04] we've decided against that
[19:46:10] well
[19:46:13] Kelson: hm. it worked for me...
[19:46:13] how to reboot bots-4?
[19:46:21] Ryan_Lane: ok, and what about the webtools project?
[19:46:22] sudo reboot ;)
[19:46:51] may I reboot it?
[19:47:25] danmichaelo: I think Platonides
[19:48:20] Ryan_Lane: I have forwarded the labs msg to you at rlane@wikimedia.org
[19:49:16] cool
[19:49:17] lemme try
[19:50:12] hm. I wonder if the two-factor stuff is breaking this
[19:50:43] 12/18/2012 - 19:50:42 - Updating keys for laner at /export/keys/laner
[19:55:29] Kelson: how was your account created?
[19:55:32] 12/18/2012 - 19:55:32 - Creating a home directory for danmichaelo at /export/keys/danmichaelo
[19:56:11] giftpflanze: I encourage you to subscribe to https://lists.wikimedia.org/mailman/listinfo/labs-l
[19:56:22] Kelson: I don't see an account for you in ldap
[19:56:34] Ryan_Lane: The account is about 6 months old...
[19:56:42] ahhh
[19:56:46] your cn is: Emmanuel Engelhart
[19:56:53] Ryan_Lane: not sure, but you may have done it.
[19:57:01] did you want that as your username, or Kelson?
[19:57:08] Ryan_Lane: Kelson
[19:57:54] Kelson: ok, let me rename your cn
[19:58:27] Ryan_Lane: then today I: (1) reset the password, (2) changed my email address to kelson@kiwix.org, (3) tried to log in to gerrit, unsuccessfully, (4) logged out from labs, (5) was then able neither to log back in nor to reset the password
[19:58:27] Bah, you're going to change his cn and then the password form is going to work and we're never going to know why
[19:58:49] Kelson: it works now
[19:58:56] <^demon> Ryan_Lane: Speaking of renames... paravoid was going to attempt a full rename (MW, LDAP & Gerrit). Dunno if he actually did that yet.
[19:59:12] <^demon> Provided he doesn't make Stuff Explode, we should finally script that up.
[20:00:09] andrewbogott: I know what the problem was
[20:00:25] andrewbogott: the password form will likely find the user by uid if the cn wasn't found
[20:00:32] 12/18/2012 - 20:00:31 - Updating keys for danmichaelo at /export/keys/danmichaelo
[20:00:32] 12/18/2012 - 20:00:31 - Updating keys for kelson at /export/keys/kelson
[20:00:35] but it uses cn for auth
[20:00:39] so it failed
[20:00:48] it's technically a bug
[20:00:50] Oh, weird.
[20:01:05] Ryan_Lane: so, I should go through "Forgotten your login details?" again
[20:01:08] it shouldn't even attempt to find the user by uid
[20:01:22] Kelson: yep, have it send you a new request
[20:01:27] I used your old temporary password
[20:01:40] That's not related to the stuff I did with uid vs. shell name, is it?
[20:01:52] andrewbogott: I don't think so
[20:03:00] Ryan_Lane: works! thx!
[20:03:06] great. yw
[20:03:26] * andrewbogott would never have figured that out
[20:03:35] I turned on the ldap debug logs
[20:04:17] Ryan_Lane: The logs live on virt0, or elsewhere?
[20:04:30] well, I meant the ones for mediawiki
[20:04:33] it's in the config file
[20:04:35] commented out
[20:04:59] Ah, ok.
[20:05:40] 12/18/2012 - 20:05:40 - Updating keys for kelson at /export/keys/kelson
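The bug Ryan_Lane describes (the form falls back to finding the account by uid, but then authenticates against cn) is easy to see from the shell; a sketch with a made-up base DN, since the real one isn't in the log:

    # Before the rename: the entry is found by uid, but its cn is still
    # "Emmanuel Engelhart", so a cn-based bind as "Kelson" fails.
    ldapsearch -x -b "ou=people,dc=example,dc=org" "(uid=kelson)" cn
    ldapsearch -x -b "ou=people,dc=example,dc=org" "(cn=Kelson)"   # no result until the cn is renamed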
[20:09:01] !log bots rebooted bots-4
[20:09:03] Logged the message, Master
[20:17:39] !resource ganglia
[20:17:39] https://labsconsole.wikimedia.org/wiki/Nova_Resource:ganglia
[20:22:52] RECOVERY Free ram is now: OK on aggregator1 i-0000010c.pmtpa.wmflabs output: OK: 1181% free memory
[20:50:31] 12/18/2012 - 20:50:30 - Updating keys for mwang at /export/keys/mwang
[21:41:33] Anyone know of a seriously good speech-to-text system?
[21:46:42] Damianz: I'm afraid they are all non-free if they are good
[21:46:48] like Dragon Naturally Speaking
[21:46:52] and something from IBM
[21:47:40] http://en.wikipedia.org/wiki/Category:Speech_recognition_software
[21:48:03] Sucks... was tempted to play with the Google speech API; thinking my interesting Asterisk idea might be harder than I'm willing to make the effort for
[21:51:03] speech to text, or text to speech?
[21:52:46] Damianz: http://en.wikipedia.org/wiki/List_of_speech_recognition_software#Open_Source
[21:53:20] http://shout-toolkit.sourceforge.net/
[21:53:48] Interesting
[21:53:52] And speech -> text
[21:54:21] so far this should have all been speech -> text
[21:54:42] text to speech is like: festival
[21:54:46] or "flite", which is festival lite
[21:54:54] apt-cache show flite
[21:55:11] apt-cache show festival
[21:55:37] apt-cache search festvox
[21:56:34] flite is easier to set up, basically just install and pipe anything into | flite
[21:57:00] just don't expect amazing quality without installing other, better voices besides the default one
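mutante's flite suggestion, spelled out; the package name comes from the apt-cache commands above, the sample text is made up, and piping into flite is taken on faith from the description here:

    sudo apt-get install flite          # "festival lite", small text-to-speech engine
    echo "bots-4 is back up" | flite    # pipe anything in, as described above
    flite -t "or pass the text as an argument"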
[22:30:32] 12/18/2012 - 22:30:31 - Updating keys for laner at /export/keys/laner
[22:32:49] olaph: howdy
[22:32:53] @search mediawiki
[22:32:53] Results (Found 13): morebots, labs-home-wm, labs-nagios-wm, labs-morebots, gerrit-wm, extension, revision, info, bots, labs-project, openstack-manager, wl, deployment-prep,
[22:33:03] @search wiki
[22:33:03] Results (Found 82): morebots, labs-home-wm, labs-nagios-wm, labs-morebots, gerrit-wm, wiki, labs, extension, wm-bot, gerrit, revision, bz, instancelist, instance-json, amend, credentials, sal, info, sudo, access, blueprint-dns, bots, rt, pxe, group, pathconflict, terminology, etherpad, osm-bug, manage-projects, rights, new-labsuser, quilt, labs-project, openstack-manager, wikitech, load, load-all, wl, docs, ssh, documentation, start, link, socks-proxy, magic, labsconf, resource, account-questions, deployment-prep, security, project-discuss, git, port-forwarding, report, db, instance, bot, bug, pl, projects, accountreq, bastion, puppetmaster::self, git-puppet, addresses, initial-login, gerritsearch, deployment-beta-docs-1, sudo-policies, forwarding, labsconsole, sudo-policy, puppetmasterself, search, gitweb, htmllogs, mobile-cache, botsdocs, mail, labswiki, requests,
[22:33:11] Ryan_Lane: heyhey!
[22:33:29] too many docs. heh
[22:33:48] https://labsconsole.wikimedia.org/wiki/Help:Single_Node_MediaWiki
[22:34:08] !single-node-mediawiki is https://labsconsole.wikimedia.org/wiki/Help:Single_Node_MediaWiki
[22:34:08] Key was added
[22:34:22] !single-node-mediawiki | olaph
[22:34:22] olaph: https://labsconsole.wikimedia.org/wiki/Help:Single_Node_MediaWiki
[22:34:28] that'll give you a mediawiki installation
[22:34:32] !access | olaph
[22:34:32] olaph: https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances
[22:34:46] that'll give you access to your instance, after creating it
[22:34:49] !instances
[22:34:50] need help? -> https://labsconsole.wikimedia.org/wiki/Help:Instances want to manage? -> https://labsconsole.wikimedia.org/wiki/Special:NovaInstance want resources? use !resource
[22:34:54] !security
[22:34:54] https://labsconsole.wikimedia.org/wiki/Help:Security_Groups
[22:35:04] lots of initial docs, sorry ;)
[22:35:04] 12/18/2012 - 22:35:04 - Creating a project directory for openstack-wiki
[22:36:44] :) no worries! many thanks
[22:37:16] let me know if you have any issues
[22:38:11] Hmm, to order pizza or not
[22:38:29] Damianz: order 80
[22:38:40] Nah, not that hungry
[22:38:51] fine fine. 75
[22:39:02] It is 2-for-Tuesday though; shame we have Chinese for lunch tomorrow
[22:39:24] we need frontend people for labs. I want to steal webplatform's people: http://docs.webplatform.org/wiki/Main_Page
[22:39:43] You mean so we can make it look less shit?
[22:39:45] I guess it makes sense that their wiki would look nice, since it's a wiki about html/js/css
[22:40:13] yes. labs looks terrible
[22:40:15] I don't really like webplatform tbh... it's so far into smw that I get lost thinking it's mediawiki
[22:40:23] :D
[22:40:23] silly tags and things
[22:40:35] yeah, it's *very* smw centric
[22:40:52] * Damianz waits until smw push and update and BREAK ALL THE THINGS
[22:41:09] thinking of breaking all the things
[22:41:12] I need to deploy to labsconsole
[22:41:36] hm. seems it's been merged already
[22:41:41] Hmm
[22:41:52] just need to deploy then
[22:42:20] wrong button?
[22:42:22] yep
[22:42:29] q is right next to tab
[22:42:40] heh
[22:43:03] Only like 2 weeks left of the year; I should start figuring out CDN stuff for next year... you use Fastly, right?
[22:43:50] weird. I don't see his change anymore
[22:44:07] duh
[22:44:11] it isn't one of my changes.
[22:44:19] looking in the wrong location
[22:44:55] now to see if it breaks all the things
[22:46:25] I really wish saltstack was packaged for Windows nicely... though nothing is packaged for Windows nicely, so I'll forgive them
[22:48:03] yeah
[22:48:07] talk to them about it
[22:48:11] they are really responsive
[22:51:41] I've not really got a massive use for it atm, but will in the next few months... basically my current plan involves saltstack being the management backbone of our new CDN platform (since Akamai costs 3 eyeballs, 2 arms and a leg).
[22:58:35] nice
[23:04:01] Hopefully it will be; still waiting on vendors to drop cross-connects in some of the pops though >.>
[23:10:18] have you seen reactors yet?
[23:10:20] <3
[23:10:33] oh, right, I was going to upgrade salt today
[23:12:09] Yeah - I like the idea for some things, but not others... depends on the usage though, I guess
[23:12:51] * Ryan_Lane nods
[23:15:31] It would be interesting to see how a salt master behaves on 2 nodes with drbd replicating its data files, hmm... tempting to hold out until it supports real clustering, as I'll need multi-site stuff; can probably justify spending time working on that soon though.
[23:16:30] why not use syndic?
[23:16:57] it has clustering features...
[23:17:17] arrrrghhhhh
[23:17:25] seems I didn't upgrade libzmq on the salt master
[23:17:29] *masters
[23:24:21] Well, syndic would give me distribution over the pops but not master redundancy - ideally I could say /prefer/ this one, else go to any of these, and have it act as a true cluster... since everything will be on dedicated links cross-pop anyway; doing localisation properly is kinda hard though
[23:26:38] * Ryan_Lane nods
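For the syndic setup Ryan_Lane suggests, the usual arrangement (sketched from memory for the Salt of that era, so treat every detail as an assumption) is a master-of-masters plus a salt-syndic daemon on each lower-level master:

    # /etc/salt/master on the top-level master (hostname is made up):
    #     order_masters: True
    # /etc/salt/master on each pop-local master:
    #     syndic_master: salt-top.example.org
    sudo salt-syndic -d    # runs alongside the local salt-master
    # minions keep pointing at their pop-local master; jobs published on the
    # top-level master fan out to every pop through the syndics.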