[00:02:47] bd808 sudo puppet cert generate $(hostname -f) was the command that would fix it :)
[00:02:56] but appears hostname -f is broken under buster
[00:03:00] paladox, mutante: https://wikitech.wikimedia.org/wiki/Puppetmaster exists now :)
[00:03:06] (it doesn't show the full domain)
[00:03:08] it appears we were just missing the "puppet cert generate"
[00:03:11] bd808 awesome! thanks!
[00:03:18] yay
[00:03:24] yeah, hostname -f is messed up in the latest buster. I made a ticket about that somewhere
[00:03:37] ^ that is what i was trying to figure out earlier :)
[00:03:42] heh, ok. cool
[00:03:48] because "hostname -A" works!
[00:03:49] T240899
[00:03:49] T240899: `hostname -f` not showing FQDN on instances based on debian-10.0-buster base image - https://phabricator.wikimedia.org/T240899
[00:03:58] so what does it even mean that "ALL FQDNS" contain it
[00:04:02] but "the" FQDN does not
[00:04:15] and yea.. ok ..subscribing :)
[00:04:23] i asked #debian :p
[00:04:56] hostname -A | awk '{print $1}'
[00:04:58] works
[00:05:42] yea and there is just one.. so you don't need the awk
[00:06:10] the bug seems to be that it is "a" FQDN but not "the" FQDN
[00:06:58] sounds like "it could have multiple ones and this is just one of them"
[00:11:29] !log tools Rebuilding all stretch-ssd Docker images to pick up busybox
[00:11:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[00:15:21] bd808: in /etc/hosts there is "127.0.1.1" AND "127.0.0.1" https://phabricator.wikimedia.org/T240899#5772850
[00:15:24] bd808 we somehow got hostname -f working now
[00:15:33] commenting out the first one fixes the issue
[00:15:40] paladox: i changed /etc/hosts
[00:15:45] heh
[00:16:27] /etc/cloud/templates/hosts.debian.tmpl
[00:16:36] it literally is "one of multiple names". i guess "127.0.1.1" is a typo somewhere else?
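The `hostname -f` diagnosis discussed above can be sketched as follows. The /etc/hosts lines in the comment are an illustration of what the cloud-init template renders (the host and project names are made up, not taken from the log):

```shell
# On an affected buster instance, /etc/hosts contains something like:
#
#   127.0.1.1 instance-01.devtools.eqiad.wmflabs instance-01
#   127.0.0.1 localhost
#
# and the 127.0.1.1 line (rendered by cloud-init's hosts.debian.tmpl) is
# what confuses the FQDN lookup; commenting it out fixed the issue.

hostname -f     # "the" FQDN -- returns only the short name on affected hosts
hostname -A     # "all" FQDNs -- still shows the full name (here exactly one)

# Workaround from the discussion: take the first field of `hostname -A`.
hostname -A | awk '{print $1}'
```

As noted in the log, the awk is only needed defensively: `hostname -A` can list multiple names, and on these instances there happens to be just one.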
[00:16:48] only 127.0.0.1 is assigned on lo
[00:17:44] mutante it's done in /etc/cloud/templates/hosts.debian.tmpl
[00:17:53] i think that's an upstream thing
[00:17:58] since i've seen it on other hosts
[00:20:15] paladox: ACK, i grepped the puppet repo and it's only in one place ..in a test as example for invalid input
[00:20:39] sounds like openstack upstream..yea
[00:21:25] could ping jzerebecki but he might say first check the latest version
[00:22:09] it's definitely in the latest version too :P
[00:25:22] pinged upstream :)
[00:30:28] !log devtools set puppetmaster: puppetmaster-1001.devtools.eqiad.wmflabs in hiera
[00:30:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[00:30:59] paladox: but not for all instances?
[00:31:25] i can set it in project prefix
[00:31:27] * paladox does
[00:31:32] no, don't
[00:31:35] ok
[00:31:44] that would only work if we change our host names
[00:32:15] oh!
[00:32:25] ok
[00:32:32] well.. it could work
[00:32:37] but only if you make a new prefix
[00:32:42] that includes the "stage" part
[00:37:21] this is upstream he says: https://github.com/canonical/cloud-init/blob/master/templates/hosts.debian.tmpl
[00:47:12] !log devtools puppet cert generate puppetmaster-1001.devtools.eqiad.wmflabs
[00:47:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[02:29:31] * AntiComposite switched a tool from using the API to using the DB and it now runs in almost half the time
[02:30:16] Of course, that's still 18 minutes, but unless I started parallelizing that's not really going to drop any further
[03:04:00] !log tools Really rebuilding all {jessie,stretch,buster}-sssd images. Last time I forgot to actually update the git clone.
[03:04:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[03:35:01] !log tools.ldap switched over to new k8s cluster (`kubectl config use-context toolforge`)
[03:35:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.ldap/SAL
[07:24:30] !log tools.phpinfo switched over to new k8s cluster (`kubectl config use-context toolforge`)
[07:24:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.phpinfo/SAL
[07:36:44] !log tools.ldap switched to python3.7 webservice
[07:36:45] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.ldap/SAL
[07:51:29] !log tools.apt-browser switched over to python3.7 and new k8s cluster (`kubectl config use-context toolforge`)
[07:51:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.apt-browser/SAL
[10:29:03] !log etytree increase CPU quota to 18 cores T241716
[10:29:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Etytree/SAL
[10:29:05] T241716: Request increased quota for etytreee Cloud VPS project - https://phabricator.wikimedia.org/T241716
[10:30:09] !log etytree increase CPU quota to 18 cores T241740 *
[10:30:10] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Etytree/SAL
[10:30:10] T241740: Request increased quota for eytree Cloud VPS project - https://phabricator.wikimedia.org/T241740
[11:21:49] !log tools upload k8s.gcr.io/cadvisor:v0.30.2 docker image to the docker registry as docker-registry.tools.wmflabs.org/cadvisor:0.30.2 for T237643
[11:21:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:21:52] T237643: toolforge: new k8s: figure out metrics / observability - https://phabricator.wikimedia.org/T237643
[11:27:01] !log toolsbeta [new k8s] cadvisor is running in the metrics namespace now (T237643)
[11:27:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:27:04] T237643: toolforge: new k8s: figure out metrics / observability - https://phabricator.wikimedia.org/T237643
[11:51:02] !log tools [new k8s] deploy cadvisor as in https://gerrit.wikimedia.org/r/c/operations/puppet/+/561654 (T237643)
[11:51:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:51:06] T237643: toolforge: new k8s: figure out metrics / observability - https://phabricator.wikimedia.org/T237643
[14:31:26] Hi everyone! I want to add some content to our tools' database on cloud, but I am not sure how to do that. I can connect to the database through sql tools but I am not sure how to do that through e.g. a python script. Any documentation or other hints I could use? Thanks :)
[15:30:25] frimelle_: one way is to use legoktm's 'toolforge' library. It has some nice helper methods -- https://wikitech.wikimedia.org/wiki/User:Legoktm/toolforge_library
[15:31:06] Ah, so it's not only for the cloned databases but can also be used for a tool's database? Great!
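The question above about reaching a tool's own database from a Python script can be sketched without the helper library as well: read the standard `~/replica.my.cnf` credential file and hand the result to a MySQL driver. Everything here is a hedged illustration, not the canonical recipe: the ToolsDB host name and the `mydb` database suffix are assumptions, and `pymysql` is just one driver that works on Toolforge; legoktm's 'toolforge' library wraps the same steps.

```python
# Sketch: build MySQL connection arguments for a tool's user database from
# the Toolforge credential file. The default host name and the "mydb" suffix
# below are illustrative assumptions, not fixed values.
import configparser


def connection_kwargs(cnf_path, host="tools.db.svc.eqiad.wmflabs", db_suffix="mydb"):
    """Parse a replica.my.cnf-style file and return pymysql.connect() kwargs."""
    parser = configparser.ConfigParser()
    parser.read(cnf_path)
    # Values in replica.my.cnf are typically quoted; strip the quote characters.
    user = parser["client"]["user"].strip("'\"")
    password = parser["client"]["password"].strip("'\"")
    return {
        "host": host,
        "user": user,
        "password": password,
        # Tool-owned databases are named <credential user>__<suffix>
        "database": f"{user}__{db_suffix}",
        "charset": "utf8mb4",
    }


# A script on a Toolforge host could then connect with pymysql:
#   import os, pymysql
#   conn = pymysql.connect(**connection_kwargs(os.path.expanduser("~/replica.my.cnf")))
#   with conn.cursor() as cur:
#       cur.execute("SELECT 1")
```

The same credential file also works for the wiki replica databases; only the host and database name change, which is exactly the distinction frimelle_ was asking about.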
[16:48:53] !log tools updated the ValidatingWebhookConfiguration for the ingress admission controller to the working settings
[16:48:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[17:46:49] !log toolsbeta stashed uncommitted changes on the puppetmaster because they seem to be things that are already merged
[17:46:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[18:08:01] !log tools.bridgebot Migrating to new kubernetes cluster
[18:08:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.bridgebot/SAL
[18:18:53] !log tools.bridgebot Now running on the new k8s cluster -- https://tools.wmflabs.org/k8s-status/namespaces/tool-bridgebot/
[18:18:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.bridgebot/SAL
[18:32:23] !log tools.docker-registry Migrating to new kubernetes cluster
[18:32:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.docker-registry/SAL
[18:34:54] !log tools.gmt Migrating to new kubernetes cluster
[18:34:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gmt/SAL
[18:37:53] !log tools.hatjitsu Migrating to new kubernetes cluster
[18:37:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.hatjitsu/SAL
[19:02:44] does cloud VPS 'support' getting a second IP on an instance? i don't mean floating (public) IP, i see the forms for that, i just mean a second private IP because my puppet role wants to use a server and a service IP.
[19:03:18] i could just add one with "ip" of course but i wouldn't do that without requesting it somehow from the pool so i can't cause conflicts
[19:07:43] mutante: that sounds like a great question for jeh
[19:07:54] Off the top of my head I don't know how to do it
[19:08:17] bd808: ack, thanks!
[19:09:13] !log tools.keystone-browser Migrated to new Kubernetes cluster and replaced lighttpd container with a bare Ingress
[19:09:15] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.keystone-browser/SAL
[19:13:21] !log tools.my-first-flask-oauth-tool Migrating to new kubernetes cluster
[19:13:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.my-first-flask-oauth-tool/SAL
[19:15:57] !log tools.my-first-flask-tool Migrating to new kubernetes cluster
[19:15:58] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.my-first-flask-tool/SAL
[19:16:03] (P.S. actually the puppet role would execute IP to add a secondary one that i have to provide in Hiera and i will go with localhost or something for now)
[19:36:45] !log openstack create private flavor m1.small-ceph for testing IO limits T225320
[19:36:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Openstack/SAL
[19:36:48] T225320: Ceph Proof of Concept Build and Testing - https://phabricator.wikimedia.org/T225320
[20:32:22] !log tools.mysql-php-session-test Migrating to new kubernetes cluster
[20:32:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.mysql-php-session-test/SAL
[21:15:41] * jeh changed his IRC setup and missed the message earlier
[21:16:09] mutante The easiest way is to add a second interface to the VM, but we don't have that enabled in horizon
[21:16:45] mutante I can create a second interface and assign it to the VM using the backend CLI tools if you'd like
[21:17:46] jeh: ah! that would be great:) the project is "devtools" and i would like one each on phabricator-stage-1001 and phabricator-prod-1001
[21:18:19] i was going to hack around it but that is the best solution if not too hard
[21:19:04] it's easy on my end, you'll need to update /etc/network/interfaces for the new device though
[21:20:07] if i configure the right IP in Hiera.. then puppet would actually try to add it
[21:20:20] even better :)
[21:20:27] in production it is separate for v4 and v6 though
[21:20:51] but i think we can ignore v6
[21:21:02] the code is like "if one of them is set ..then"
[21:22:01] for reference I ran these commands
[21:22:06] `openstack port create --network 7425e328-560c-4f00-8e99-706f3fb90bb4 --project devtools phabricator-stage-1001-eth1`
[21:22:15] `nova interface-attach --port-id 4f7da2fe-f0bb-45de-be74-a086a618666b 367e980a-5088-43f6-a64b-2094f05f6f11`
[21:22:38] which reserved the address 172.16.0.189, and attached it to the phabricator-stage-1001 VM
[21:23:00] want to take a quick look before I do the other instance?
[21:23:16] yes, just a moment
[21:24:15] !log devtools add secondary interface to phabricator-stage-1001
[21:24:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[21:24:38] my puppet is disabled for other reasons.. will take me a few minutes.. re-enabling
[21:24:46] thanks! brb
[21:32:31] !log devtools configure 172.16.0.189 as "vcs" address v4 for phabricator-stage-1001
[21:32:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[21:32:56] !log tools.phab-ban Migrating to new kubernetes cluster
[21:32:57] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.phab-ban/SAL
[21:36:40] !log tools.phabulous Migrating to new kubernetes cluster
[21:36:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.phabulous/SAL
[21:38:54] !log tools.precise-tools Migrating to new kubernetes cluster
[21:38:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.precise-tools/SAL
[21:43:15] jeh: i need to first fix other unrelated puppet issues and missing Hiera stuff.. to actually see puppet do that.. but i can manually assign it with the exact same command puppet would run
[21:43:32] because i can see that is just "ip addr add ..."
[21:43:47] the difference is only it is added as an alias on the first interface
[21:43:47] !log tools.tool-db-usage Migrating to new kubernetes cluster
[21:43:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.tool-db-usage/SAL
[21:44:16] awesome, you can do that or dhcp the eth1 interface. either way will work
[21:45:08] * jeh starts work on phabricator-prod-1001's port
[21:46:26] jeh: it works for me .. i just ran "sudo ip addr add 172.16.0.189/32 dev eth1"
[21:46:36] though puppet will try to use eth0
[21:47:04] can you also just do eth0 ?
[21:47:43] neutron has it reserved, so you can use it on either interface.
[21:47:57] ok, cool, just making sure that was just a label
[21:48:00] thanks again
[21:48:26] phabricator-prod-1001-eth1 address is 172.16.0.198
[21:48:34] :)
[21:48:35] you're welcome, happy to help :)
[21:49:49] mutante I might need to adjust the allowed addresses on the eth0 port for that address, if you can't pass traffic on that IP once it's on eth0 let me know
[21:50:07] !log devtools add secondary interface to phabricator-prod-1001
[21:50:09] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[21:50:16] jeh: ok!
[21:50:34] !log devtools assigned 172.16.0.198/32 on eth0 on phabricator-prod-1001
[21:50:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[21:51:06] i forgot the word "dev" at first and the error message you get then is "Error: either "local" is duplicate, or "eth0" is a garbage. ;)
[21:51:51] i'll get back to you if there are any issues (or just use eth1 and make that configurable)
[22:15:01] !log tools.toolviews Migrating to new kubernetes cluster
[22:15:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.toolviews/SAL
[22:18:23] !log tools.trusty-tools Migrating to new kubernetes cluster
[22:18:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.trusty-tools/SAL
[22:20:19] !log tools.versions Migrating to new kubernetes cluster
[22:20:21] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.versions/SAL
[22:22:28] !log tools.static Migrating to new kubernetes cluster
[22:22:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.static/SAL
[22:24:38] !log tools.openstack-browser Migrating to new kubernetes cluster
[22:24:38] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.openstack-browser/SAL
[22:29:03] !log tools.openstack-browser-dev Migrating to new kubernetes cluster
[22:29:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.openstack-browser-dev/SAL
[22:37:14] !log devtools - sudo vi /srv/deployment/phabricator/deployment-cache/.config on both phabricator instances to fix deployment server (remove deployment-tin (!))
[22:37:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[23:33:33] !log tools.csp-report Moving to new kubernetes cluster and py37
[23:33:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.csp-report/SAL
[23:49:39] Hi, it seems make-instance-vg is broken on buster.
[23:49:42] we're getting:
[23:49:53] Error: /Stage[main]/Labs_lvm/Exec[create-volume-group]/returns: change from 'notrun' to ['0'] failed: '/usr/local/sbin/make-instance-vg '/dev/vda'' returned 1 instead of one of [0]
[23:52:06] /usr/local/sbin/make-instance-vg /dev/vda
[23:52:06] /usr/local/sbin/make-instance-vg: failed to create new partition
[23:59:51] paladox: file a bug and ping A.ndrew on it