[09:11:35] anyone around who can edit this form? https://phabricator.wikimedia.org/maniphest/task/edit/form/8/
[09:12:05] I cannot
[09:12:37] kormat: if needed, you can always become phab admin and then remove yourself as admin once you are done
[09:12:50] > Allow members of any project: acl*phabricator, acl*security_team
[09:13:00] for an SRE form, this seems.. suboptimal
[09:13:05] marostegui: oh?
[09:16:31] kormat: I am looking for the command, I thought I had it in my notes
[09:16:45] did you store your notes on etherpad?
[09:16:53] kormat: however, I would talk to Andre first, just to double check
[09:17:03] kormat: Notepad
[09:17:17] super sophisticated :)
[09:20:05] kormat: sudo /srv/phab/phabricator/bin/user empower --user yourusername would be the way, but double check with Andre I think
[09:23:28] kormat: I think I can edit
[09:23:34] ah no wait
[09:24:34] the UI is not clear to me if I'm editing the form or a new task that will be created
[09:24:40] *from the UI
[09:25:44] You must be able to configure an application in order to manage its forms.
[09:26:51] yep, I can't
[10:35:06] <_joe_> @all I disabled puppet on quite a few hosts as we're transitioning the last remaining restbase clients to use TLS
[10:35:33] <_joe_> I will re-enable it during the day, I need to ensure no surprises happen anywhere
[10:44:58] ack
[11:16:39] I've got a few restbase healthy and ready to pool, anyone have a sec for a +1? https://gerrit.wikimedia.org/r/c/operations/puppet/+/632497
[11:17:34] sure
[11:17:44] godog: thanks!
[11:17:54] np hnowlan !
[11:19:06] <_joe_> hnowlan: please note that when you run puppet there
[11:19:11] <_joe_> you will need to reenable
[11:19:18] <_joe_> but it will restart restbase
[11:19:38] <_joe_> it's safe, I just verified that's the case
[11:24:19] _joe_: would you rather I hold off in light of your work?
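[Editor's sketch: the disable/re-enable flow _joe_ describes above maps onto the standard Puppet agent CLI. The exact commands and the disable message below are illustrative, not taken from the log:]

```shell
# Disable the agent with a reason so colleagues can see why it is off
sudo puppet agent --disable 'transitioning restbase clients to TLS'

# ...roll out the change...

# Re-enable and trigger a run; per _joe_, this run will also restart restbase
sudo puppet agent --enable
sudo puppet agent --test
```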
[11:24:40] it's low-priority
[11:42:52] <_joe_> hnowlan: no, go on
[11:43:19] <_joe_> puppet is safe to run, it just needs to be coordinated as it will cause a restbase rolling restart
[11:43:49] <_joe_> so lemme do it in a few mins so you have the all-clear
[11:53:02] I actually have to run afk for lunch so don't rush anything on my part, I'll get to it later on
[16:52:09] what's best practice to create a group in deployment-prep in puppet? I see we don't include the admin class on hosts there, couldn't see a pattern used elsewhere. Currently considering "don't" as the easiest route
[16:54:02] volans: is ops-monitoring-bot@wikimedia.org actively used?
[16:57:06] mutante: there is a phab user that is used with that name, I don't recall if the email is used per se or needed for the account
[16:57:56] volans: ah, i see. probably for email confirmation. gotcha. thx
[16:58:05] not changing anything about it for now
[16:58:20] maybe it can keep working without it :)
[16:59:27] it's alright. just one thing I removed was catchpoint aliases
[16:59:35] k
[17:05:16] hnowlan: Puppet's built-in "group" resource can be used to create local groups on the instances where it is applied. Puppet will fail if the group in question actually exists in the LDAP directory that backs NSS in Cloud VPS instances and the resource does not match the state of the LDAP directory. This happens because Puppet on the instances does not have the rights to modify the LDAP directory.
[17:13:02] oh
[17:13:15] bd808: hnowlan: I think we had the issue with the mwdeploy user which is defined in puppet AND in ldap
[17:13:29] and went to use something such as: User { 'mwdeploy': provider => ldap }
[17:13:35] iirc
[17:14:36] I think mwdeploy had the uid mismatch problem at some point back in the day (Puppet and LDAP not agreeing on the uid)
[17:15:09] I think the issue was that sometimes LDAP would not respond, which caused the user to not be considered existent.
Puppet would then create a local user with the next available uid
[17:15:17] or
[17:15:27] * bd808 tries to find the wikitech page where we hashed out the id numbers
[17:15:31] maybe the uid is now reserved in puppet and matches the one from ldap
[17:15:49] hashar: yeah, that was a big problem when the LDAP service was crashing a couple times a day
[17:16:33] we are both showing how long we have been piddling with this stuff hashar because the LDAP service has been solid for >1.5 years :)
[17:16:51] ldap has mwdeploy = 603, while prod seems to use 498
[17:17:06] ahh good to know about LDAP!
[17:17:19] I remember it was a struggle at some point :-\
[17:17:19] https://wikitech.wikimedia.org/wiki/UID -- that's the id mapping page I was remembering
[17:18:43] hashar: slapd still has a slow memory leak that makes it die, but we have added an extra layer of read-only replicas to it that keep Cloud instances from noticing the restarts of the servers "most" of the time
[17:21:28] bd808: sounds like a good countermeasure ;)
[17:21:49] as for hnowlan's question, sorry I derailed the conversation :D I don't know of a good practice to create a group
[17:22:36] but I guess it might be created in LDAP with a fixed gid or have puppet create it on instances but then you might have gid mismatch between hosts
[17:22:46] "I see you like band-aids, so I put a band-aid on your band-aid." -- Xzibit
[19:52:48] Why the heck did I get a notification for https://phabricator.wikimedia.org/T264918 ?
[19:52:59] i must be getting them for access requests, darn it.
[19:53:28] yup.
[20:04:50] robh: they made that tag a subscriber of the ticket
[20:05:02] which is an odd way to do it i guess
[20:05:33] let me move it to the actual "Tags: "
[20:06:28] added project, removed subscriber.
that likely changes notification details
[20:07:23] ahh yeah, i see
[20:07:27] that happens sometimes, very odd
[20:09:26] i kind of expected that it's not even working :)
[20:11:26] laptop battery critically low and away from an outlet.. oops. bbiaw
[20:18:22] I just merged a change to the cookbooks repo, do changes automatically take effect or do I need to do something to actually "deploy"?
[20:18:40] e.g. when I do a `sudo cookbook ...` does it automatically fetch the latest
[20:18:41] RUN PUPPET
[20:18:48] ack
[20:18:50] sorry for the caps
[20:18:54] NO WORRIES
[20:19:00] run puppet on the cumin host or wait for next run
[20:19:00] :P
[20:19:08] got it
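[Editor's sketch: per the exchange above, the cookbooks checkout on the cumin host is kept up to date by puppet, so a merged change takes effect after the next agent run there. The wrapper name and the ~30-minute default run interval below are assumptions, not stated in the log:]

```shell
# On the cumin host: force an agent run instead of waiting for the
# next scheduled one (run-puppet-agent is assumed to be the local
# wrapper; plain `sudo puppet agent --test` would also work)
sudo run-puppet-agent

# Subsequent invocations then pick up the freshly merged cookbook code
sudo cookbook --help
```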