[15:25:52] Is there any way that I can limit users I add to my project to only one instance of all the ones I have?
[15:30:49] Do I just disable the account on the individual instances I don't want them on?
[15:43:00] AmandaNP: there really is not a way to do that in Cloud VPS. The user authentication and authorization system is shared globally and based on the same LDAP directory that is used for all Wikimedia Developer account services (Cloud VPS, Wikitech, Gerrit, Phabricator, ...).
[15:44:42] AmandaNP: to avoid XY problem issues (https://en.wikipedia.org/wiki/XY_problem), maybe you could explain a bit more what you are hoping to achieve with per-instance user restrictions? There may be some other ways to accomplish your main goal.
[15:45:16] !XY is https://en.wikipedia.org/wiki/XY_problem
[15:45:16] Key was added
[15:46:18] Fair enough. I have users that want access to my project just to test changes to the software as they code. The problem is, I have CU-level data in the databases there and I'm not sure I want to also grant them access to make changes to the live server.
[15:46:54] a DevOps conference taking place within the game of Animal Crossing and being livestreamed on Twitch. https://www.twitch.tv/oncallmemaybe so fascinating
[15:50:13] AmandaNP: hmmm... yeah, that's going to be difficult I think. I assume this is about the account-creation-assistance project?
[15:57:14] bd808: actually UTRS
[16:00:16] AmandaNP: It is possible to use sudoers configuration at the project level to limit who can impersonate which users. The tools and deployment-prep projects use this capability to give different rights to project admins vs project members.
[16:01:52] The problem is the database passwords exist in files that the application has to be able to read, but not these new users
[16:02:08] application meaning web app (www-data)
[16:02:33] There is some more complicated magic in the tools project that actually does disable ssh access to some instances for anyone who is not a project admin. I need to look, but I do not think we have made that into a generally usable system.
[16:03:36] AmandaNP: in theory you can handle that with file-level permissions, but that is also a thing that can be broken accidentally for sure.
[16:04:29] Ya. I mean, at worst I guess I could request another project, but I don't want to take up more resources than I have to.
[16:08:37] AmandaNP: another direction you could go is making some local dev/test environment that would give everyone a reasonably good way to do it without access to the live deploy
[16:09:11] MediaWiki-Vagrant has quite a few roles for things like that
[16:10:01] I'm not sure I follow
[16:11:19] You said above that the goal was to let folk test changes to the software. One way to do that is making it easy to have a fully functional local development environment.
[16:12:31] https://www.mediawiki.org/wiki/MediaWiki-Vagrant is a virtual machine management tool that is used by some folk to make development environments for MediaWiki and other Wikimedia related FOSS projects
[16:14:31] * AmandaNP looks
[16:15:42] MediaWiki-Vagrant (MWV) includes Puppet code to automate setup of things like databases, wikis, and support software. These are organized as "roles" which can set up complex systems. The "striker" role, for example, sets up two MediaWiki wikis, Phabricator, some OpenStack components, an LDAP server, a database server, etc.
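A minimal sketch of the role workflow being described, assuming the `vagrant roles` plugin commands that MediaWiki-Vagrant provides; "striker" is just the example role named above:

```
# Inside a MediaWiki-Vagrant checkout: discover available roles,
# enable one, and re-run Puppet provisioning to build out its services.
vagrant roles list
vagrant roles enable striker
vagrant provision
```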
[16:18:41] * bd808 browses https://github.com/UTRS/utrs
[16:46:13] bd808: wow, as if I didn't look far enough to realize my software already has a Vagrant version called Homestead...
[16:46:27] "The Laravel framework has a few system requirements. All of these requirements are satisfied by the Laravel Homestead virtual machine, so it's highly recommended that you use Homestead as your local Laravel development environment."
[16:50:31] !log wikistream Added BryanDavis (self) as project admin (T236551)
[16:50:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[16:50:34] T236551: "wikistream" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236551
[17:26:09] !log redirects Added proxy for wikistream.wmflabs.org (T236551)
[17:26:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Redirects/SAL
[17:26:12] T236551: "wikistream" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236551
[17:31:05] !log wikistream Shutdown ws-web.wikistream.eqiad.wmflabs (T236551)
[17:31:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[18:57:55] petan: I'm auditing existing wmflabs.org proxies and huggle-wl.wmflabs.org keeps showing up as an outlier. It seems to work, though; can you refresh my memory about that one? It looks like it's a CNAME for a different domain which is itself a proxy, or something like that?
[20:11:59] !log wikistream Deleted ws-web.wikistream.eqiad.wmflabs (T236551)
[20:12:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[20:12:01] T236551: "wikistream" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236551
[20:31:09] !log wikistream Deleting project; service migrated to Toolforge (T236551)
[20:31:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[20:31:12] T236551: "wikistream" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236551
[20:48:49] Petscan.wmflabs.org has been down for weeks now (https://bitbucket.org/magnusmanske/petscan/issues/167/petscan-is-down); is it possible for someone from wmcs to restart its service?
[20:49:09] the command is listed at https://wikitech.wikimedia.org/wiki/Nova_Resource:Petscan
[20:50:31] andrewbogott: ping, you're listed as co-maintainer :)
[20:51:02] Framawiki: looking. That service is a bit of a pain to restart
[20:52:20] the screen and subprocess are running. Looks like I have to get more creative to restart it
[21:03:39] !log petscan Killed apparently hung process on petscan4.petscan.eqiad.wmflabs (T251567)
[21:03:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Petscan/SAL
[21:03:42] T251567: PetScan problem: 504 Gateway Time-out - https://phabricator.wikimedia.org/T251567
[21:04:33] !log petscan Started a screen as magnus and then ~magnus/petscan/run.sh inside it (T251567)
[21:04:36] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Petscan/SAL
[21:04:48] Framawiki: I think I got the fragile beast back up and running
[21:05:25] thanks bd808! I missed the ping because I was briefly out of the house
[21:05:26] it was great!
[21:05:51] \o/ thanks bd808 !
[21:06:01] It would be really cool if Magnus learned how to write systemd units...
[21:06:38] this is an ugly "startup script": `for (( ; ; )); do sudo /home/magnus/petscan_rs/target/release/petscan_rs; done`
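A minimal sketch of the systemd unit being suggested, using the binary path from the loop above; the unit name and restart policy are assumptions:

```
# Write an illustrative unit file, then start it. Like the `sudo` in the
# loop above, the service runs as root here by default.
sudo tee /etc/systemd/system/petscan.service >/dev/null <<'EOF'
[Unit]
Description=PetScan query service
After=network.target

[Service]
ExecStart=/home/magnus/petscan_rs/target/release/petscan_rs
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now petscan
```

With `Restart=always`, systemd takes over the job of the infinite `for` loop and also restarts the process across reboots, without needing a screen session.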
[21:32:19] !log tools.totoazero deployed 863d1b9 hotarticles.py and others
[21:32:21] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.totoazero/SAL
[23:03:36] hi all, sorry for bothering you again, but I'm trying to connect to a cloud instance and getting erratic behavior similar to what I saw in the past. I was able to log in once, but now I'm being rejected. ssh -J dsaez@primary.bastion.wmflabs.org covid-data.wmf-research-tools.eqiad.wmflabs
[23:03:44] dsaez@primary.bastion.wmflabs.org: Permission denied (publickey).
[23:04:05] covid-data, eh?
[23:04:11] I can ssh to the primary bastion, but the jump to my instance is failing ...
[23:04:12] yep
[23:04:16] hare, yes
[23:05:12] sounds interesting
[23:05:26] check covid-data.wmflabs.org
[23:12:12] It seems to be hanging at
[23:12:12] debug1: Connecting to covid-data.wmflabs.org [172.16.0.164] port 22.
[23:15:35] Reedy, now it's working... I had a similar situation a couple of weeks ago; not sure why it's not stable ..
[23:16:14] Is it under high load?
[23:17:54] could be... do you think it's just the server not responding?
[23:18:59] it seems something like that
[23:19:10] hi
[23:19:18] But it could be the underlying host
[23:19:58] got you...
[23:20:15] Last login: Thu Apr 9 07:10:43 2020
[23:20:15] root@covid-data:~#
[23:20:19] WFM
[23:20:33] yep, now it's working
[23:20:44] it's not under ridiculous load
[23:21:35] I've also changed to KDE recently and had a few problems managing keys, but the weird thing is that it was working, then stopped, and now it's working again
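A minimal sketch of moving the jump-host flags above into `~/.ssh/config`, so the bastion does not have to be spelled out on every invocation; the host patterns, username, and key path are assumptions to adapt:

```
# Route Cloud VPS instances through the bastion automatically.
# Username and key path are illustrative; substitute your own.
cat >> ~/.ssh/config <<'EOF'
Host *.wmflabs *.eqiad.wmflabs
    User dsaez
    ProxyJump dsaez@primary.bastion.wmflabs.org
    IdentityFile ~/.ssh/id_ed25519
EOF
# Now equivalent to the `ssh -J` invocation above:
ssh covid-data.wmf-research-tools.eqiad.wmflabs
```

Running the connection with `ssh -v` would also show whether a publickey failure happens at the bastion hop or at the instance itself.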