[03:06:41] !log tools.wikibugs Updated channels.yaml to: 60862cd3720d630dd7874645ee8d9e3cc8a0fd68 Send "Performance Team (Radar)" to #wikimedia-perf-bots
[03:06:45] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[06:43:05] !log mailman shutoff mailman-01 the unpuppetized node, not needed anymore. Will delete later.
[06:43:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Mailman/SAL
[09:19:29] !log admin restarted wmcs-backup on cloudvirt1024 as it failed due to an image being removed while running (T276892)
[09:19:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:19:33] T276892: wmcs-backups: handle VMs that are deleted between backups start and that VM backup - https://phabricator.wikimedia.org/T276892
[10:11:00] arturo: what does LGTM mean? legitimate? the comment is a bit cryptic to me
[10:11:19] gifti: sorry, it means `looks good to me`.
[10:11:37] ah, thx
[10:24:04] gifti: +1, LGTM, etc are convoluted terms/concepts we use when reviewing stuff on engineering projects
[10:39:51] !k8splay increased floating ip quota by 1 (T277706)
[10:39:52] T277706: Request increased quota for k8splay Cloud VPS project - https://phabricator.wikimedia.org/T277706
[10:39:58] !log k8splay increased floating ip quota by 1 (T277706)
[10:40:06] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:K8splay/SAL
[11:01:47] !log dwl Upgraded quota to 45 cores, 160GB cinder, 182GB ram (T277681)
[11:01:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Dwl/SAL
[11:01:52] T277681: Request increased quota for dwl Cloud VPS project - https://phabricator.wikimedia.org/T277681
[11:10:41] !log tools starting VM tools-docker-registry-04 which was stopped probably since 2021-03-09 due to hypervisor draining
[11:10:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:20:27] !log tools created 80G cinder volume tools-docker-registry-data (T278303)
[11:20:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:20:32] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[11:23:51] !log toolsbeta created 2G cinder volume `toolsbeta-docker-registry-data` (T278303)
[11:23:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:24:15] !log iiab Increase cinder volume quota to 200G (T277758)
[11:24:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Iiab/SAL
[11:24:18] T277758: Request increased quota for iiab Cloud VPS project - https://phabricator.wikimedia.org/T277758
[11:34:02] !log toolsbeta attached cinder volume `toolsbeta-docker-registry-data` as /dev/vdb on toolsbeta-docker-registry-01
[11:34:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:41:48] !log toolsbeta created VM toolsbeta-docker-registry-02 as Debian buster (T278303)
[11:41:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:41:53] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[11:46:45] !log tools attach cinder volume `tools-docker-registry-data` to VM `tools-docker-registry-03` to format it and pre-populate it with registry data (T278303)
[11:46:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:09:13] !log tools detach cinder volume `tools-docker-registry-data` (T278303)
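
The cinder volume steps logged above (create the 80G volume, attach it to the old registry VM as /dev/vdb, format and pre-populate it, then detach) are normally done through Horizon or the OpenStack CLI. A minimal sketch with python-openstackclient, using the names from the SAL entries; the filesystem and mount point on the VM are illustrative assumptions, not taken from the log:

    # create the 80G volume and attach it to the old registry VM as /dev/vdb
    openstack volume create --size 80 tools-docker-registry-data
    openstack server add volume tools-docker-registry-03 tools-docker-registry-data --device /dev/vdb
    # on the VM itself: format and mount before copying the registry data over
    # (ext4 and /srv/registry are assumptions)
    sudo mkfs.ext4 /dev/vdb
    sudo mount /dev/vdb /srv/registry
    # detach again once the data is in place
    openstack server remove volume tools-docker-registry-03 tools-docker-registry-data
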
[12:09:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:09:17] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[12:11:02] !log tools created VM `tools-docker-registry-06` as Debian Buster (T278303)
[12:11:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:17:12] !log toolsbeta attach the `toolsbeta-docker-registry-data` volume to the `toolsbeta-docker-registry-02` VM
[12:17:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[12:32:05] !log tools bump cinder storage quota from 80G to 400G (without quota request task)
[12:32:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:32:57] !log tools snapshot cinder volume `tools-docker-registry-data` into `tools-docker-registry-data-stretch-migration` (T278303)
[12:33:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:33:01] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[12:33:55] !log tools attach cinder volume `tools-docker-registry-data` to VM `tools-docker-registry-05` (T278303)
[12:33:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:38:21] !log tools associate floating IP 185.15.56.67 with `tools-docker-registry-05` and refresh FQDN docker-registry.tools.wmflabs.org accordingly (T278303)
[12:38:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:38:25] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[12:43:01] mmm
[12:43:12] why does the new docker registry appear to be empty
[12:45:43] ok, fixed, I was using the wrong directory ^^U
[12:46:38] !log tools shutoff the old stretch VMs `tools-docker-registry-03` and `tools-docker-registry-04` (T278303)
[12:46:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:46:43] T278303: Toolforge: migrate docker registry to Debian Buster - https://phabricator.wikimedia.org/T278303
[13:32:25] !log openstack deleting labs-bootstrapvz-jessie — Jessie is long-since deprecated and we no longer have any jessie VMs on cloud-vps
[13:32:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Openstack/SAL
[13:37:47] andrewbogott: a few deployment-prep Jessies are shutdown but not deleted yet
[13:38:44] Majavah: Any reason to not delete them yet?
[13:39:50] andrewbogott: mainly waiting to see if anything would break, I was planning to delete by end of this week but if you need them gone I don't have anything against it
[13:41:06] Majavah: nope, you can delete at your leisure
[13:41:23] We can clean up puppet code in the meantime, that won't prevent us from starting them up again.
[13:43:18] I doubt those will be needed but as the remaining ones were the most complicated ones to handle I'll keep them until Friday or so
[13:44:13] that's reasonable :)
[17:25:32] bd808: I'm trying to set a DNS record for lists.wmcloud.org to test DKIM, and don't see how to do it in horizon, Amir1 said I should ask you for assistance
[17:26:44] it's because the project is mailman, lists is not in its zone
[17:26:56] the domain already exists?
[17:27:14] oh, did I hack that into existence for you Amir1?
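
The registry cutover logged above (snapshot the data volume, attach it to the new Buster VM, move the floating IP, retire the old stretch VMs) maps roughly onto the following OpenStack CLI calls. This is a sketch only, reusing the names from the SAL entries; the matching DNS refresh for docker-registry.tools.wmflabs.org is done separately in Horizon/designate:

    openstack volume snapshot create --volume tools-docker-registry-data tools-docker-registry-data-stretch-migration
    openstack server add volume tools-docker-registry-05 tools-docker-registry-data
    openstack server add floating ip tools-docker-registry-05 185.15.56.67
    # once the new registry answers on the floating IP, stop the old stretch VMs
    openstack server stop tools-docker-registry-03 tools-docker-registry-04
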
[17:27:17] it does, so all we need is to delegate the domain to your project
[17:27:38] yeah, we promise to burn it in fire once we are done
[17:28:05] arturo: yeah but we need to add dkim record, not A or MX
[17:28:29] arturo: +1 from me for delegating the zone to Amir1's project if you are in a place to do that easily
[17:28:59] Amir1: the DKIM record is just a TXT record, our cloud can handle that
[17:29:14] we would need a phabricator task with the request, for the paper trail, etc
[17:29:36] sure, which project tags
[17:30:15] `cloud-services-team (Kanban)` should work, since this is not a standard quota request
[17:30:31] * legoktm creates
[17:30:45] Thanks legoktm and arturo
[17:30:53] bd808: ack, thanks!
[17:31:06] hello everyone, I hope I don't interrupt the conversation. But I need to ask about a puppet error when trying to sign a cert for a new instance I'm creating.
[17:31:34] thesocialdev: go ahead and ask
[17:31:50] https://www.irccloud.com/pastebin/fyVl4buD/
[17:32:02] arturo: T278358
[17:32:03] T278358: Please add DNS DKIM record for lists.wmcloud.org - https://phabricator.wikimedia.org/T278358
[17:32:19] thesocialdev: did you run the puppet agent in the maps-master01 VM yet?
[17:32:38] yes
[17:32:43] https://www.irccloud.com/pastebin/vjvZnlLm/
[17:32:53] Oops, I have to recreate the cert sign now
[17:33:13] legoktm: I'll be off after the meeting I'm on, but perhaps andrew can delegate the domain right away
[17:33:23] that puppet cert dance is annoying and easy to forget a step of :)
[17:34:04] https://phabricator.wikimedia.org/diffusion/CLIP/browse/master/maps-experiments/maps-master01.maps-experiments.eqiad1.wikimedia.cloud.yaml$13 why is that pointing to a deployment-prep puppetmaster?
[17:34:41] thesocialdev: sorry :-S
[17:35:34] Majavah: that's my fault, I just copy and paste what I'm used to do
[17:35:55] But I created another instance in the maps-experiments project and it wasn't a problem
[17:37:06] What's the right puppet master for maps-experiments?
[17:37:16] please use the normal puppetmasters instead or create a project-specific master if that's needed, but using deployment-prep resources on another project makes it a pain for deployment-prep maintenance as we need to get other projects' maintainers to change hiera keys
[17:37:31] just don't set that hiera key at all and it will use cloud-wide puppetmasters
[17:38:38] both instances there are pointing at the deployment-prep puppetmasters -- https://openstack-browser.toolforge.org/project/maps-experiments
[17:38:53] which is fine I suppose, but also kind of strange
[17:39:55] is the cloud-wide puppetmaster the default one that's created with the instance? `puppetmaster.cloudinfra.wmflabs.org`
[17:40:07] thesocialdev: yes
[17:40:50] or just don't set the hiera value at all?
[17:41:21] All instances start life pointed at the shared puppetmasters. Some projects need to have local secrets or rapid Puppet module development so they create their own project-local puppetmasters where they can mess with things
[17:42:39] https://wikitech.wikimedia.org/wiki/Help:Standalone_puppetmaster is the help page for setting up a local puppetmaster if your project actually needs that
[17:42:55] side note, why do so many deployment-prep instances have puppetmaster in their instance-specific hiera when it's set up project-wide? just makes changing it more annoying :/
[17:43:48] thanks bd808 I'll look into that, for the time being how can I sign the cert for maps-master01 at the main puppetmaster?
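
For the DKIM request discussed above (T278358): once lists.wmcloud.org is delegated to the requesting project, the record itself is an ordinary TXT recordset in designate, creatable from the OpenStack CLI. A sketch only; the selector name and key value here are placeholders, not taken from the log:

    openstack recordset create lists.wmcloud.org. mail._domainkey.lists.wmcloud.org. \
        --type TXT \
        --record '"v=DKIM1; k=rsa; p=<base64-public-key>"'
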
[17:44:10] thesocialdev: it will autosigh
[17:44:14] *autosign
[17:45:04] Majavah: possibly at some point there was a second puppetmaster? though I think the real answer is because you haven't fixed it yet ;)
[17:45:25] maybe there's an issue then, because I went on the quest for setting up the puppet master because I was hitting the following error
[17:45:30] https://www.irccloud.com/pastebin/8ObWC1td/
[17:45:51] And now that I rolled back the puppet master config it's happening again
[17:46:05] thesocialdev: run `rm -rf /var/lib/puppet/ssl`
[17:46:13] and then `run-puppet-agent`
[17:46:46] legoktm: the current one is on buster so it has some lifetime remaining, so I'll leave that to the next person and hope it's not going to be me
[17:47:00] https://www.irccloud.com/pastebin/1HfNPLKo/
[17:47:54] thesocialdev: what's the result of `grep master /etc/puppet/puppet.conf`
[17:48:18] `server = puppetmaster.cloudinfra.wmflabs.org`
[17:48:30] thesocialdev: I revoked the old cert. Try the run again please?
[17:49:17] bd808: awesome, now it's alive
[17:49:21] w00t
[17:50:10] I have to fix a lookup that I was doing for a deployment-prep hieradata, but other than that looks good!
[17:50:12] Thank you all
[19:32:11] !log tools.lexeme-forms deployed 99257d861c (Portuguese adjectives)
[19:32:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lexeme-forms/SAL
[19:38:31] !log tools.lexeme-forms deployed ea6928faaa (clarify Norwegian Bokmål adjectives)
[19:38:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lexeme-forms/SAL
[20:19:08] !log tools.refill-api running `kubectl delete pods refill-api-6c78d8cdd-d6kfq` T278211
[20:19:12] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.refill-api/SAL
[20:19:13] T278211: Refill tool stuck "waiting for an available worker" - https://phabricator.wikimedia.org/T278211
[22:44:46] !log tools.lexeme-forms deployed ffa45a58b1 (minifix)
[22:44:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lexeme-forms/SAL
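
For reference, the "puppet cert dance" that resolved the maps-master01 issue earlier in the log is roughly the following. The agent-side commands are the ones given in channel; the master-side cleanup is an assumption about how the old cert was revoked and depends on the Puppet version in use:

    # on the puppetmaster: drop/revoke the instance's stale certificate
    # (Puppet 5 syntax; on Puppet 6+ this would be `puppetserver ca clean --certname ...`)
    sudo puppet cert clean maps-master01.maps-experiments.eqiad1.wikimedia.cloud
    # on the instance: throw away the local SSL state and re-run the agent,
    # which submits a fresh CSR that the shared puppetmaster autosigns
    sudo rm -rf /var/lib/puppet/ssl
    sudo run-puppet-agent
    # sanity check which master the agent is pointed at
    grep master /etc/puppet/puppet.conf
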