[01:46:50] nice, looks like replag's finally going down
[03:00:27] Hi, I'm trying to run a local instance of MediaWiki and have it use the replicas (enwiki) via ssh tunnel. I've got the DB connection working, but the MW install script says "There are MediaWiki tables in this database. To upgrade them to MediaWiki 1.22alpha, click Continue." I don't want to hose anything. Any advice?
[03:03:14] hm, im not sure that will work
[03:03:24] since mediawiki would want write access
[03:04:23] Ah OK
[03:04:27] AndyRussG: Install it in a blank database
[03:04:36] Then change the database to the Wikipedia replication
[03:04:40] in the config file
[03:04:48] that still wont work either.
[03:04:50] LocalSettings.php I think
[03:04:55] lazyktm: Why wouldn't it?
[03:05:08] because you can't write to the replicas?
[03:05:14] Why do you have to?
[03:05:37] well, im not really sure what AndyRussG is trying to do
[03:05:38] Also, AndyRussG: You'll need to match the MediaWiki version to the replica's version
[03:05:50] So make sure you download the right version, not the latest version
[03:06:10] enwiki is on 1.22wmf21
[03:06:26] lazyktm: Pretty sure he just wants to create a mirror
[03:06:30] In my particular case I don't have to write anything. I'm just trying to set up a dev environment for working on a maintenance script that checks a part of the DB
[03:06:34] ah
[03:06:35] ok
[03:06:36] Specifically, the EP tables
[03:07:17] I guess I could import those tables into a local MW database
[03:07:18] AndyRussG: So do what I described. 1) Install the same version as the replication, 2) Install to a blank database, 3) Switch to the replication database in LocalSettings.php
[03:08:08] OK that sounds cool
[03:09:21] I should find 1.22wmf21 in the git repo?
[03:10:01] It looks like mediawiki.org only has up to 1.21.2
[03:10:46] yes
[03:10:51] clone mediawiki/core
[03:11:06] Done that
[03:11:40] $ git checkout wmf/1.22wmf21
[03:11:54] ^^
[03:12:10] Got it
[03:12:25] Heads up: 1.22wmf22 is supposed to release on Thursday
[03:12:46] OK
[03:12:57] It's just a little script, shouldn't take that long
[03:13:01] You might want to set up a ....f drawing a blank
[03:13:08] That thing that runs code automagically
[03:13:20] help me out lazyktm...having a brain fart
[03:13:24] huh?
[03:13:26] cronjob?
[03:13:28] yes
[03:13:35] :)
[03:13:37] You might want to set up a cron to update mediawiki weekly
[03:14:01] https://www.mediawiki.org/wiki/MediaWiki_1.22/Roadmap
[03:14:07] That would be smart if it were a bigger job
[03:14:15] lazyktm and TParis, thank you so much for your help, I do have another question in fact
[03:14:24] sure
[03:14:25] Which is (ta-ta):
[03:15:11] What if I'd like to, in addition to having this as a runnable maintenance script, have a tool with a Web interface on tools, that runs the script against the current replicas and spits out a report?
[03:16:07] We have all kinds of tools like that.
[03:16:11] You can just stick a php script in your public_html
[03:16:18] There is nothing wrong with them. But you dont need your own replica to do it.
[03:16:42] Right
[03:17:21] I mean, it would connect to the replicas and only read info, not write it
[03:17:41] That's fine, lots of us do that
[03:18:14] I guess I'd just have whatever bits and pieces of MW the script needs in my tool's public directory?
[03:18:35] You don't need anything from MediaWiki
[03:18:41] I mean, it would seem a bit much to clone a whole MW install in the tool's directory.
[03:18:43] Just bounce queries against the replica
[03:18:55] The script itself will be using some stuff in MW
[03:18:56] You can skip MediaWiki altogether
[03:19:00] ic
[03:19:07] Well there are some PHP classes that can help
[03:19:14] botclasses.php for instance can be helpful
[03:19:20] Peachy
[03:19:26] is another php class
[03:19:41] What do you need to pull from MediaWiki itself?
[03:19:44] There is an API
[03:19:52] https://en.wikipedia.org/wiki/Wikipedia:MAKEBOT#PHP and https://en.wikipedia.org/wiki/Wikipedia:PHP_bot_functions
[03:19:54] You could just hit the API at your install
[03:20:20] I just don't understand why you need to hit the interface.
[03:20:33] AndyRussG: this is for the EP extension right?
[03:20:38] Yessssss
[03:20:46] TParis: the Education Program doesn't have an API ;-)
[03:21:23] Basically it's a maintenance script to check that some redundant data in some of the tables is correct
[03:21:49] Since I haven't written it yet, I'm not sure what exactly it'll have, but I was hoping to reuse bits and pieces of the extension itself
[03:22:28] AndyRussG: its probably easiest to write it as a MW maintenance script, then turn it into a web tool once thats done
[03:22:35] Such as some subclasses of \ORMTable
[03:22:46] Yeah that's exactly the plan
[03:23:02] AndyRussG: If I were you, I'd see what you can reutilize out of just the EP program's extension itself
[03:23:14] Exactly
[03:23:16] Instead of the core system
[03:23:31] But the EP relies on classes from the core system
[03:23:41] And global variables, etc etc
[03:23:42] Does it? I haven't opened it up.
[03:24:14] yeah, thats how most extensions are written
[03:24:17] Well I'd look through the functions and see what's reusable w/o the core. But if you have to...
[03:24:22] It's all ORMTable
[03:24:22] http://www.mediawiki.org/wiki/Manual:ORMTable
[03:24:26] Hmm
[03:25:24] That looks like an extensible database...
[03:25:40] ?
[03:26:07] a database in a database...
[03:26:25] Anywho, it's bed time
[03:26:34] Gnight
[03:26:41] Thanks a ton, TParis
[03:26:52] Bye
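The "stick a php script in your public_html and just bounce queries against the replica" approach TParis describes above (03:16-03:19) needs nothing from MediaWiki at all. A minimal sketch, assuming the usual Tool Labs conventions; the tool name, replica host, and query are illustrative placeholders, not details taken from the log:

    <?php
    // Read-only report sketch for a tool's public_html on Tool Labs.
    // Assumptions (not from the log): credentials live in the tool's
    // replica.my.cnf under /data/project/mytool/, and the enwiki replica
    // is reachable as enwiki.labsdb, serving the enwiki_p views.
    header('Content-Type: text/plain; charset=utf-8');

    $cnf = parse_ini_file('/data/project/mytool/replica.my.cnf');
    $db  = new mysqli('enwiki.labsdb', $cnf['user'], $cnf['password'], 'enwiki_p');
    if ($db->connect_error) {
        die('Could not connect to the replica: ' . $db->connect_error);
    }

    // Placeholder query: list a handful of main-namespace page titles.
    $res = $db->query('SELECT page_title FROM page WHERE page_namespace = 0 LIMIT 5');
    while ($row = $res->fetch_assoc()) {
        echo $row['page_title'], "\n";
    }
    $db->close();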
[03:28:14] So I think I have my answer, lazyktm: just put in my tool's directory as much (or as little) of a MW install as the maintenance script needs, no?
[03:28:25] yeah
[03:28:34] you probably need to set up a simple LocalSettings.php
[03:28:36] K, got it
[03:28:44] with dbname, dbuser
[03:29:09] ...as appropriate for accessing the replicas
[03:29:25] yeah
[03:29:31] K
[03:29:47] Thanks so much, lazyktm, really appreciate it
[03:29:56] np
[03:31:15] So I'll just put my nose to the grindstone and give it a few more turns before hitting the hay
[03:31:33] :D
[03:31:49] See ya, thanks again
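A minimal sketch of the LocalSettings.php switch lazyktm and TParis describe above (03:04-03:08 and 03:28-03:29): install the matching wmf/1.22wmf21 checkout against a blank local database first, then point the database settings at the replica reached through the SSH tunnel. Every value below is a placeholder, not something taken from the log:

    <?php
    // LocalSettings.php fragment (sketch only). Run the normal installer
    // against a blank local database first, then swap in settings like
    // these. Host, port, database name and credentials are placeholders.
    $wgDBtype     = 'mysql';
    $wgDBserver   = '127.0.0.1:4711'; // local end of the SSH tunnel, e.g.
                                      // ssh -L 4711:<replica-host>:3306 <labs-bastion>
                                      // (host:port works with the classic mysql
                                      // driver; adjust if your setup differs)
    $wgDBname     = 'enwiki_p';       // the replicated, read-only database
    $wgDBuser     = 'u1234';          // credentials from your replica.my.cnf
    $wgDBpassword = 'secret';

    // The replica is read-only anyway, so make that explicit; a read-only
    // maintenance script is unaffected.
    $wgReadOnly = 'Running against the read-only enwiki replica';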
[05:49:00] Coren|Away: Ryan_Lane: petan: Do you have root on tools to fix this please? https://bugzilla.wikimedia.org/show_bug.cgi?id=55686
[05:49:02] it's been a week..
[06:09:20] YuviPanda: andrewbogott_afk
[06:29:13] Krinkle: this is not just about being root, it's about fixing the script that makes homes which is on a different server
[06:29:20] but I can fix selected homes
[06:29:56] petan: I understand, but at the moment I just want to get my stuff done :-) Please do if it's not too much trouble.
[06:30:07] do what
[06:30:09] cvn's
[06:30:10] ?
[06:30:24] sure, maybe the other ones listed on the bug as well, but I haven't heard from them.
[06:31:09] done
[06:31:46] ok I will do that later, in fact there could be a cron job that does that until it's fixed on nfs
[06:32:04] petan: while you're here, do you know whether there's any plans around HTTPS support for labs projects outside tools? e.g. some kind of wildcard certificate that can be enabled.
[06:32:22] not that I know but you can always set up https yourself
[06:32:25] like on beta
[06:32:30] hashar did that
[06:32:38] Well, I mean with a proper certificate, not a self-signed one.
[06:32:43] hm...
[06:32:50] not that I know of :/
[06:33:56] I'm migrating tools from toolserver, most of 'em go into tools.wmflabs which now has https so it works properly when using JSON-P APIs from such tools. The CVN has its own service at cvn.wmflabs.org/api.php but it's http-only (and self-signed doesn't work because users wouldn't get the interface to accept it when used from javascript, it'll just be blocked)
[06:34:38] so I'm going to be proxying it through tools.wmflabs.org/cvn for the time being. At the moment we just need to get away from toolserver because we've already migrated all the bots and databases (the api entrypoint on toolserver.org is outdated, using old data)
[06:34:52] anyway, thx, I can make stuff work now and care about details later for a change.
[06:35:43] Krinkle: i know and some projects like acc.wmflabs.org have requested a certificate
[06:35:46] know*
[06:35:57] i think there's a *.wmflabs cert
[06:36:08] https://bugzilla.wikimedia.org/show_bug.cgi?id=53175
[06:36:15] I heard about acc, https://bugzilla.wikimedia.org/show_bug.cgi?id=53175#c4
[06:36:19] https://github.com/countervandalism/infrastructure/issues/1
[06:36:45] ah
[06:36:57] Haven't seen the latest reply yet
[06:36:58] great
[06:48:44] lazyktm: Filed a separate bug so that it doesn't become part of the acc.wmflabs bug
[06:48:47] https://bugzilla.wikimedia.org/show_bug.cgi?id=55957
[06:48:57] cool
[06:49:24] Krinkle: can you help me out by guiding me through the new cvn methods?
[06:49:42] i don't know how to start/stop bots and the like
[06:49:48] matanya: #countervandalism
[09:55:18] Hello, i'm a French student working on the project http://en.wikipedia.org/wiki/User:FatJagm/WanderWiki, and i'm encountering issues while trying to create a database on tool labs, so i'm looking for help
[09:55:40] what issues?
[09:56:46] The file replicata.my.cnf doesn't appear in my repository and i can't find a way to force it to appear
[10:10:56] Any ideas?
[10:18:17] I didn't understand the project specifics yet. I don't see its source code or repository.
[10:24:07] We haven't uploaded the source code yet since we're still in the alpha version. The corresponding tool account is wanderwiki if that is what you are looking for.
[13:50:21] petan/Coren: https://commons.wikimedia.org/w/index.php?title=Special:AbuseLog&wpSearchUser=10.4.0.115
[13:50:25] this is the labs ip
[13:50:28] is flooding AbuseFilter
[13:50:36] what
[13:50:43] yes
[13:51:08] well, first of all that abuse filter kind of sucks :P but that bot should be logged in...
[13:51:28] yes :P
[13:51:29] maybe User:Heb would tell you what's up
[13:51:34] notified
[13:51:40] but he does not respond.
[13:54:35] Krinkle|detached: metrics.wmflabs.org has proper https
[13:54:41] Krinkle|detached: you can too, if you want
[13:54:57] Krinkle|detached: there's a dynamichttp proxy service that can get you https outside of toollabs
[13:55:16] Krinkle|detached: there's a patch from andrewbogott_afk that lets you hook into that directly from wikitech, but for now the procedure to get a domain to https is 'ping Yuvi'
[13:56:16] you'll lose your public IP, though
[14:51:09] YuviPanda: Will http and https have the same IP (presumably the IP of your proxy)
[14:51:11] If so, go for it
[14:51:14] Krinkle: yes
[14:51:16] same IP
[14:51:18] cvn.wmflabs -> cvn-apache2
[14:51:29] moment
[14:51:41] anytime :)
[14:51:59] Krinkle: what project?
[14:52:53] hmm, shouldn't matter, actually
[14:54:12] Krinkle: actually, what project? :D needed
[14:55:58] cvn-apache2, cvn, cvn.wmflabs.org
[14:56:00] YuviPanda:
[14:56:06] Krinkle: ok
[14:56:08] doing now
[14:57:44] hey andrewbogott
[14:57:48] patch mergeable yet?
[14:58:12] YuviPanda: which?
[14:58:17] andrewbogott: dynamicproxy
[14:58:24] or yuviproxy, whichever you want to call it
[14:58:57] Oh -- it works, but… I think we want the proxy and API to be properly packaged and deployed before exposing it.
[14:59:12] I should have time to help with that this week.
[14:59:13] andrewbogott: oh, gah
[14:59:13] ok
[14:59:15] nice
[14:59:55] Let's see… the proxy itself requires that special nginx package, right? Is that the only nonstandard package?
[15:00:44] andrewbogott: yup
[15:00:49] andrewbogott: but the proxy is all puppetized, no?
[15:00:54] proxy-dammit is fully puppetized
[15:01:15] oh, great -- including the nginx package? I don't remember how that got handled.
[15:01:38] Krinkle: http://cvn.wmflabs.org/
[15:01:45] andrewbogott: i think i used labsdebrepo
[15:02:07] Ah, right. Ok, that's probably fine.
[15:02:34] Krinkle: okay, it's still using the old ip address. can you remove the cvn.wmflabs.org hostname from the instance now?
[15:02:36] And then the API… it had a tangle of pip/venv dependencies, right?
[15:02:43] andrewbogott: hmm, yep
[15:03:18] If you're not working on it currently, I can have a look at adapting it to use Oslo. (Not sure how realistic that is, but it's worth a try.)
[15:03:22] YuviPanda: Done, removed cvn hostname from 208.80.153.131
[15:03:36] Krinkle: should work once DNS propagates, since curl -H "Host: cvn.wmflabs.org" metrics.wmflabs.org works
[15:03:54] should work for https too
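The check at 15:03 generalizes: a name-based proxy mapping can be exercised before (or without) DNS changes by forcing the Host header, or by pinning the name to the proxy's address for HTTPS. The hostnames below come from the log; the IP address is a placeholder:

    # HTTP: ask the proxy for the site by name without touching DNS
    curl -sI -H "Host: cvn.wmflabs.org" http://metrics.wmflabs.org/

    # HTTPS: pin the hostname to the proxy's address so SNI and the
    # certificate check still see the right name (placeholder IP)
    curl -sI --resolve cvn.wmflabs.org:443:203.0.113.10 https://cvn.wmflabs.org/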
[15:04:45] andrewbogott: hmm, I don't know how realistic that is either
[15:04:56] andrewbogott: honestly, I still want to deploy this with git deploy, rather than as a deb
[15:05:12] andrewbogott: I don't understand what advantage packaging it as a deb gets us, while there are several disadvantages
[15:05:27] YuviPanda: That wouldn't help with dependencies would it?
[15:05:37] andrewbogott: well, with git deploy, we can just use virtualenv :P
[15:05:46] similar to how things are done with nodejs
[15:06:25] um… that sounds messy and scary. Have you already debated ryan about this?
[15:06:30] no
[15:06:43] andrewbogott: what npm does is messy and scary? :P
[15:07:03] You can talk to him about it. Mostly I think using debian & puppet is the right way because that's how ~everything else already works.
[15:07:29] andrewbogott: right, but honestly, if the only advantage is that 'it is the way everything else already is!', I don't find that very compelling
[15:07:36] since this isn't something else that's from apt, it's a python web app
[15:07:41] and pretending otherwise...
[15:08:05] We have lots of custom python tools that we package and deploy via apt.
[15:08:27] still not getting why that is better
[15:08:40] I see it as more work for me (or for someone!)
[15:08:47] and eventually me, since maintenance and all that
[15:09:55] I understand the need to puppetize everything
[15:10:00] doesn't imply debianize everything
[15:10:59] YuviPanda: There are some simple (e.g. single-file) python tools that just live directly in puppet, and are deployed as files rather than via packages.
[15:11:14] That would be OK, although we'd still want the dependencies handled by puppet/apt
[15:12:16] andrewbogott: sure, but in this case that means: 1. using an older version of python-redis (that has API changes) 2. Finding an alternative for sqlalchemy (which, when I checked, isn't in debian)
[15:12:40] (1) is not a significant change, 2. is. And I'm simply trying to understand what is the benefit at all of spending time making those changes
[15:12:42] and I still don't.
[15:12:59] if fetching the dependencies is a problem, we can trivially bundle them
[15:13:08] (which is what the node services are moving to)
[15:14:08] fetching can be a problem because pip does no signature validation, and I see how that is scary. But bundling ought to 'fix' that
[15:16:04] I know that OS uses sqlalchemy, lemme see how they do it
[15:16:34] ok
[15:19:25] there seems to be a package called 'python-sqlalchemy'
[15:20:04] production systems have:
[15:20:05] ii python-sqlalchemy 0.7.8-1ubuntu1~cloud0 SQL toolkit and Object Relational Mapper for Python
[15:20:05] ii python-sqlalchemy-ext 0.7.8-1ubuntu1~cloud0 SQL toolkit and Object Relational Mapper for Python - C extension
[15:20:43] * andrewbogott needs breakfast
[15:22:43] andrewbogott: +
[15:22:46] n
[15:22:49] gah, cat
[15:22:59] andrewbogott: i mean, i didn't see it when i was checking :|
[15:23:55] That makes this easier, right?
[15:25:13] andrewbogott: a bit, but still: 1. version of flask is super old (0.8), 2. has no version of flask-sqlalchemy
[15:25:43] andrewbogott: also, it would indeed be nice if there is a document or something explaining *why* we have to use those. from my POV, it just looks like extra hoops to jump through.
[15:30:04] If there are any admins active, could we have php yaml installed on tools? (http://www.php.net/manual/en/yaml.setup.php)
[15:30:37] YuviPanda: Any tips on debugging a 504 "gateway time-out" response for http://scholarship-alpha.instance-proxy.wmflabs.org/ ?
[15:30:40] YuviPanda: The short answer to 'why puppet and deb' is that using a small number of common, well-understood tools makes it easier to maintain things. If a problem lands in the lap of someone who's never heard of it, it's nice for them to know where to look.
[15:31:34] bd808: usually means your backend exceeded 60s timeout
[15:31:36] usually
[15:32:04] YuviPanda: The backend is currently serving a 100% static html doc via apache :(
[15:32:11] bd808: eugh, no idea :(
[15:32:15] i can look when i'm back from dinner
[15:32:16] brb
[15:32:42] andrewbogott: i'll probably talk to ryan about this later, but it feels fairly discouraging to finish a piece of code and then have to wait for a month or so because of 'debianizing', even after most of it is in puppet
[15:32:51] (brb)
[16:09:12] * bd808 makes a note to bug labs folks about brokenness with the new wikimania-support project
[16:37:58] back!
[16:38:06] bd808: can you do a curl internally and see if it works?
[16:38:16] bd808: if it doesn't, then instance. if it does, then proxy
[16:41:23] YuviPanda: curl internally works fine. I think there is some network issue(s) related to that new project
[16:41:28] oh
[16:41:30] uh oh
[16:42:47] YuviPanda: Adding NFS roles didn't work at first. Finally got that to work on one instance with help from Coren (I think). Today a new instance has the same NFS problem
[16:44:03] I've got a conf call in a few minutes but I'll find someone to bug about the whole thing later
[16:45:02] bd808: ok
[16:45:07] bd808: sorry i couldn't be of more help
[16:45:29] YuviPanda: No worries. :)
[16:49:44] @Ryan_Lane: do you think we could have php yaml installed on tools? (http://www.php.net/manual/en/yaml.setup.php)
[16:50:11] danmichaelo: isn't it already installed?
[16:50:47] not when I checked yesterday, unless I did something wrong
[16:50:48] danmichaelo: tools setup is more a Coren question than a Ryan_Lane question
[16:50:53] * anomie wonders what danmichaelo is going to use yaml for
[16:51:09] just like to have config files in yaml
[16:51:23] could probably use json instead
[16:51:30] * bd808 should really release a new version of that extension
[16:51:38] hehe :P
[16:51:45] bd808: there's no php5-yaml?
[16:52:03] YuviPanda: Nope. I never found a maintainer for Debian
[16:52:07] i usually install it with "pecl install yaml"
[16:52:20] hehe
[16:52:53] I also really have no interest in maintaining it anymore, which I'm sure shows in the releases
[16:53:05] :D
[17:09:24] danmichaelo: definitely poke Coren|Away, I don't know how to deal with php extensions that aren't on apt
[17:13:57] cool, thx, but upon re-thinking I might actually just go for json instead to keep things simple :)
[17:14:11] heh
[17:14:11] ok
[17:16:09] Coren|Away: https://bugzilla.wikimedia.org/show_bug.cgi?id=52630
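For the config-file question above (16:49-17:14): the PECL yaml extension provides yaml_parse_file() when it is installed, while JSON needs nothing beyond core PHP, which is why falling back to (or simply standardizing on) JSON keeps a tool portable. A small sketch; the config path is a placeholder:

    <?php
    // Load tool configuration, preferring YAML when the PECL yaml
    // extension is available and falling back to JSON otherwise.
    // The /data/project/mytool path is a placeholder.
    function load_config($basename) {
        if (extension_loaded('yaml') && is_readable($basename . '.yaml')) {
            return yaml_parse_file($basename . '.yaml');
        }
        return json_decode(file_get_contents($basename . '.json'), true);
    }

    $config = load_config('/data/project/mytool/config');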
[17:45:24] andrewbogott, Coren, Ryan_Lane: Do any of you have time/energy to look into weirdness with instances in my new wikimania-support project? Coren fixed role::labsnfs::client for the scholarship-alpha instance on Friday, but now that instance seems to have network reachability problems with instance-proxy.wmflabs.org.
[17:50:24] Any new instances I create in the project seem to also be unable/unwilling to mount /home via NFS
[19:20:27] bd808: I don't have much experience using NFS storage, but I can look. Presuming my internet doesn't crap out again.
[19:20:50] bd808: I'll probably add myself as a project admin to test, do you mind?
[19:21:16] andrewbogott: Nope. Have at it. No secrets in that project :)
[19:22:38] andrewbogott: More worrisome at the moment than the NFS issues is the apparent routing problem between scholarship-alpha and YuviPanda's proxy. I'd like to look at the website without ssh tunnelling.
[19:22:48] bd808: Two issues, right? One having to do with network access, and one about home dir mounts?
[19:23:04] Yes. Possibly but not necessarily related
[19:23:05] bd808: instance-proxy is just the regular proxy
[19:23:10] not the new one
[19:23:12] probably not related
[19:23:42] bd808: So, the first thing I see is that you don't have web ports open in your firewall.
[19:23:52] So all instances should be unreachable webwise.
[19:23:59] Is that likely the problem?
[19:24:01] andrewbogott: well that would be a problem :)
[19:24:14] Are you attached to the existing instances or can you make new ones?
[19:24:21] andrewbogott: what puppet role manages the firewall?
[19:24:53] bd808: It's configurable via the web interface, but I need to know the answer to ^ before I can advise about the best approach.
[19:25:01] andrewbogott: I'm not attached. "Important" stuff is on NFS
[19:25:09] bd808: what's the name of the instance?
[19:25:18] OK. In that case you should create a new security group, called (presumably) 'web'
[19:25:18] YuviPanda: scholarship-alpha
[19:25:24] that opens the firewall for web access.
[19:25:28] Lemme find you some docs
[19:25:53] bd808: can't curl it from the tools project, probably a firewall
[19:25:57] bd808: https://wikitech.wikimedia.org/wiki/Help:Security_Groups
[19:26:29] So, you'll create a new 'web' group with rules to allow web access, then create new instances and specify that they're in that group (well, both 'web' and 'default') on creation.
[19:26:47] We can't add security groups to instances after they exist… it's dumb but a limitation of the underlying software.
[19:26:49] andrewbogott: Awesome. Labs n00bishness was high on my list of possible root causes
[19:27:09] * andrewbogott will look at the nfs thing, meanwhile
[19:34:26] andrewbogott: Adding a port 80 rule to the default security group fixed my http problem.
[19:34:49] cool
[19:36:33] !log wikimania-support Added 0.0.0.0/0:80 rule to default security group
[19:36:36] Logged the message, Master
[19:37:38] !log wikimania-support Created "web" security group for :80 & :443 access
[19:37:40] Logged the message, Master
[19:44:15] bd808: I also can't make nfs homedirs work. I don't really know what's supposed to happen so probably we need to wait for Coren|Away for that.
[19:44:23] Or you could just use gluster :)
[19:46:44] andrewbogott: Ok. Coren did something to make it work for scholarship-alpha on Friday. For now that will work just fine.
[19:46:58] I'll be interested in hearing what he did...
[19:47:39] andrewbogott: This is what I know from irc logs: https://github.com/bd808/wmf-kanban/issues/24#issuecomment-26738100
[19:48:19] hm
[21:12:02] enwiki replag is rising…again.
[21:12:04] Coren|Away: ^
[21:15:08] hey labs :)
[21:15:40] i did a quick search and don't see anyone talking about bastion
[21:15:58] but it appears down as neither me nor DarTar can get to anything through it
[21:16:21] (via ssh or the public addresses and hostnames)
[21:16:49] yes ottomata, we think bastion's dead
[21:17:09] which bastion?
[21:17:59] bastion.wmflabs.org
[21:18:21] you know there's a variety of bastions?
[21:18:28] that one is also not working for me
[21:18:30] yes
[21:18:50] Ryan_Lane: is DNS messed up?
[21:19:05] for those?
[21:19:11] ah. crap
[21:19:20] I restarted opendj on virt1000 and forgot to restart pdns
[21:19:23] labs dns it looks like?
[21:19:24] ah
[21:19:33] it doesn't handle ldap failures well
[21:19:38] restarted
[21:19:48] ahh resolves now
[21:20:19] milimetric: it might be cached as bad in your browser
[21:20:21] it is in mine
[21:20:36] yep
[21:20:53] stupid crappy pdns implementation
[21:21:27] restarted it on virt0 as well
[21:22:07] Ryan_Lane: thanks, the dashboard instance is back
[21:22:11] which had broken it again
[21:22:15] stupid stupid pdns
[21:22:16] thanks Ryan_Lane :)
[21:22:30] yw
[21:22:39] when I'm back out that way we'll take pdns out back and go office space on it
[21:23:41] Ryan_Lane: moving to bblack's dns? :)
[21:23:48] eventually, yes
[21:24:07] we're going to switch our DNS code to use designate (and openstack project)
[21:24:19] and use gdnsd as the backend
[21:24:58] s/and/an/ i guess
[21:25:28] i see https://wiki.openstack.org/wiki/Designate
[21:26:06] ok, have to run to ta3m :)
[22:07:01] is there a php-specific guide for tools? i.e. i'm migrating someone else's tool (not familiar with php), the .html file in public_html seems to load from outside, but it can't call any of the .php files. do i have to run a php interpreter on the grid (qsub), and if so why is html served without apache being sent to the grid?
[22:11:17] notconfusing: apache should run .php files
[22:20:34] valhallasw, does that mean i have to jsub apache?
[22:23:35] no
[22:23:58] you should just open the php files from your browser
[22:24:12] if that doesn't work, check ~/php_error.log
[22:38:29] Hey guys, I am in a wikitech instance and it seems that an instance I am using manages the Salt stack config and blocks me from actually creating my own
[22:38:46] renoirb: ah. yeah... so
[22:38:47] :)
[22:38:52] so :/
[22:39:01] you can install salt-master
[22:39:07] then, you can point your instances to it
[22:39:14] Uhm. Let's say I want to practice redeploying webplatform in there.
[22:39:23] yes, that's what I am talking about and doing.
[22:39:42] but when I reboot the instance the /etc/salt/minion gets rewritten.
[22:42:44] yp
[22:42:45] *yep
[22:42:46] one sec
[22:42:51] oh, thanks :)
[22:44:38] no, I get an "Internal Error" 500 error, but i dont have a ~/php_error.log file in my homedir
[22:44:48] or in the tool's homedir rather
[22:45:30] renoirb: so, I'm going to "Manage Puppet Groups"
[22:45:40] and I'm going to add some classes and variables to the project
[22:45:42] ok.
[22:46:12] Can I do something about it myself?
[22:46:29] you can, but you'd need to know which variables and classes to add :)
[22:47:44] renoirb: actually, do you want to test the new deployment system?
[22:47:47] what do permissions need to be of .php files owned by my tool?
[22:48:00] i have -rw-rw-r-- 1 mattsenate local-doi-bot 8251 Jul 12 10:09 doibot.php
[22:48:19] renoirb: I just added two puppet variables: salt_master_finger_override and salt_master_override
[22:48:31] salt_master_override is the hostname of your salt master
[22:48:42] Ok, how can I use these?
[22:48:43] salt_master_finger_override is the fingerprint of your server
[22:48:59] on the "Manage Instances" page
[22:49:04] click "configure" on the instance
[22:49:17] then look for the "salt" section
[22:49:19] going there
[22:49:39] They jumped in my face, thanks!
[22:49:54] on your salt master, run: salt-key -F
[22:50:02] master.pub
[22:50:20] ^^ that's the key's fingerprint you want to add to salt_master_finger_override
[22:50:26] oh, I did not know about salt-key -F, thanks!
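For reference, the two puppet variables above control what ends up in the minion config that puppet keeps rewriting. On the instance, /etc/salt/minion would need roughly the following, with the hostname of your own salt master and the master.pub fingerprint printed by salt-key -F substituted in; both values here are placeholders:

    # /etc/salt/minion (sketch) - point the minion at your own salt master.
    # salt_master_override supplies the hostname; salt_master_finger_override
    # supplies the master.pub fingerprint reported by "salt-key -F".
    master: salt-master.myproject.wmflabs
    master_finger: 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99'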
[22:52:24] so, if you want to use the new git-based deploy, we can set that up
[22:52:30] Thank you very much for this. I'm going to build the infra there.
[22:52:41] (practice deployment, I should say)
[22:52:59] yeah, I'd say set things up like they are, and we can look at adding the new deployment stuff
[22:53:07] did you split out the private and public data from the salt repo?
[22:54:00] Uhm. I have to migrate to the new provider, but I can spend time to remove hardcoded values and go forward with the git-based deployment at the same time
[22:54:16] well, let's start off with the current rsync method
[22:54:24] it's better to handle one thing at a time
[22:54:28] not finished, i needed a separate infra first, which is where I am at now.
[22:54:42] k
[22:54:48] after the test infra is up and everything is working normally we can handle that
[22:54:58] that's what I was thinking
[22:55:41] I already have a few git repositories on our jay.w3.org server. I am the only one who can push/pull from it but at least it is outside of WPD's current cloud provider
[22:56:56] * Ryan_Lane nods
[22:57:05] it would be nice to put our salt config on github as a public repo
[22:57:12] assuming all the private data is split out
[22:57:16] yes, I can't wait to do this.
[22:57:27] also, it's not amazingly safe to put the private data in wikimedia labs either ;)
[22:57:33] hahaha
[22:57:41] so splitting the private data away is likely a good idea before putting it on labs
[22:58:43] Sure. But I do not have infra at home to run it in my living room. Gotta use somewhere :]
[22:59:04] Actually I could… but it diverts me from the priority.
[23:00:50] well, just know that non-webplatform people have access
[23:01:22] you can remove them from the project, but wikimedia operations engineers and some developers have full access
[23:02:28] actually, I've removed many of the people from the webplatform project
[23:02:38] see the list of people with access at: https://wikitech.wikimedia.org/wiki/Special:NovaProject
[23:02:56] notice that all of those people have full sudo permissions, via: https://wikitech.wikimedia.org/wiki/Special:NovaSudoer
[23:04:05] I've allocated a public IP address for the project
[23:05:04] ok
[23:05:59] I've also added a *.webplatform.wmflabs.org DNS address to it
[23:06:17] thanks
[23:06:19] yw
[23:08:42] so, the virtualhosts will need to use webplatform.wmflabs.org; like docs.webplatform.wmflabs.org. the salt config may not be set up to support this
[23:09:03] good point, ill work on that
[23:09:23] I'd probably make a pillar hash to support it: {'docs': 'docs.webplatform.org', 'www': 'www.webplatform.org'}
[23:09:33] then from the virtualhosts, use pillar['docs']
[23:10:06] then you can override the pillar in labs: {'docs': 'docs.webplatform.wmflabs.org', 'www': 'www.webplatform.wmflabs.org'}
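Written out, the pillar override Ryan sketches at 23:09-23:10 is just a small mapping that the labs copy of the pillar replaces with the wmflabs.org names; templates then read the value instead of hardcoding a domain. The file layout below is illustrative, not taken from the webplatform salt repo:

    # pillar/hostnames.sls - production names (illustrative layout)
    docs: docs.webplatform.org
    www: www.webplatform.org

    # pillar/hostnames-labs.sls - the override used on labs
    docs: docs.webplatform.wmflabs.org
    www: www.webplatform.wmflabs.org

A Jinja-rendered virtualhost template would then use something like ServerName {{ pillar['docs'] }} rather than a literal domain.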
[23:21:34] how can i tell if i am a projectadmin on a tool?
[23:23:14] @project foo
[23:23:24] ehm.. there was a bot command
[23:23:27] !help
[23:23:28] !documentation for labs !wm-bot for bot
[23:23:33] !wm-bot
[23:23:33] http://meta.wikimedia.org/wiki/WM-Bot
[23:24:04] @labs-project-users toollabs
[23:24:04] I don't know this project, sorry, try browsing the list by hand, but I can guarantee there is no such project matching this name unless it has been created less than 55 seconds ago
[23:29:14] !log wikimania-support Functional site running on scholarship-alpha
[23:29:17] Logged the message, Master
[23:37:48] @labs-project-users tools
[23:37:48] Following users are in this project (displaying 19 of 288 total): Novaadmin, Ryan Lane, Coren, Addshore, Legoktm, Tim Landscheidt, Petrb, Darkdadaah, Wizardist, Fox Wilson, Jan, Ceradon, Dcoetzee, Krinkle, Jeremyb, Jmo, UA31, Andrew Bogott, DGideas,
[23:37:52] heh
[23:37:58] ping fest!
[23:38:01] >.>
[23:39:48] hehe, i needed that just for project admins
[23:39:53] but maybe it didnt exist
[23:40:03] @labs-project-admins tools