[02:36:15] !log tools removing jhedden from sudo roots
[02:36:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[04:34:58] !log tools add own root key to project hiera on horizon T278390
[04:35:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[04:35:02] T278390: Toolforge root for Majavah - https://phabricator.wikimedia.org/T278390
[09:07:19] There appears to be an issue with the https certificate of toolforge.org
[09:09:03] @IVeertje what kind of issue?
[09:09:08] Is the expired wmflabs SSL Cert on people's radar?
[09:10:00] let me work on that!
[09:10:05] :-) I just noticed
[09:10:23] Thanks :)
[09:24:53] !log project-proxy rebase & resolve merge conflicts in labs/private.git
[09:24:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL
[09:34:06] !log project-proxy systemctl restart acme-chief (to work around a bug)
[09:34:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL
[10:52:51] o/ I was following https://wikitech.wikimedia.org/wiki/Help:Adding_Disk_Space_to_Cloud_VPS_instances#Cinder:_Attachable_Block_Storage_for_Cloud_VPS however, apparently it failed with "Could not find any available weighted backend." in the volume logs, which means I can't attach it to my VM. How should I go about debugging this? Volume is at: https://horizon.wikimedia.org/project/volumes/39de18fe-a768-47df-a015-9f3278c399fd/
[10:53:32] that one is new for me
[10:54:11] tarrow: what project?
[10:54:19] wikidata-dev
[10:54:55] * tarrow knows nothing about adding storage to his VMs though and is just following the guide blindly
[10:55:49] tarrow: could you please open a phab task?
[10:56:01] arturo: `VOLUME_VOLUME_001_003` is the event id
[10:56:07] arturo: sure!
[10:56:25] thanks
[10:56:48] dcaro: is there any chance that error means the ceph pool is full?
[10:56:51] I see the volume in rbd
[10:57:12] they are not yet
[11:01:02] tarrow: do you mind if I ssh to the vm to debug?
[11:01:43] dcaro: be my guest :)
[11:02:00] it's not attached to any VM yet though
[11:02:16] ack
[11:02:26] but the VM we wanna attach it to is wb-product-testing.wikidata-dev.eqiad.wmflabs
[11:03:17] dcaro / arturo: https://phabricator.wikimedia.org/T282109 is the phab
[11:04:40] thanks
[11:05:51] "Error connecting to ceph cluster." looking
[11:06:47] cloudcontrol1004 was the one failing, looking
[11:08:49] cheers
[11:14:19] !log admin restarting cinder-volume on the eqiad control nodes to refresh the ceph libraries (T282109)
[11:14:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:14:22] T282109: Failed to create volume on cloud-vps project wikidata-dev - https://phabricator.wikimedia.org/T282109
[11:15:25] dcaro: is there anything "I" should have done differently?
[11:16:17] tarrow: no, we did an upgrade of ceph lately, and those processes did not pick up the new libraries, can you try again now? (you can remove the failed volume and use the same name if you prefer)
[11:16:38] awesome, lemme try
[11:16:55] deleting it now
[11:19:24] seems to create fine this time
[11:19:33] just following the rest of the guide
[11:20:55] dcaro: looks like it worked to me :)
[11:21:01] awesome :), let me know if you find any other issues
[11:21:42] dcaro: want me to assign you to the ticket and resolve it? Or shall I leave it to you?
[11:21:56] tarrow: I'll get it, I'll add some notes too
[11:22:17] awesome! Thanks for your help; enjoy the rest of the day :)
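A minimal sketch of the delete-and-recreate recovery dcaro suggests above, using the OpenStack CLI instead of Horizon (the chat did these steps through the Horizon UI). This assumes you have OpenStack CLI credentials for the project; the volume name, size, and server name are illustrative:

```bash
# A failed schedule usually leaves the volume stuck in an "error" status:
openstack volume show my-volume -f value -c status

# Delete the failed volume and recreate it under the same name:
openstack volume delete my-volume
openstack volume create --size 20 my-volume

# Once the status reports "available", attach it to the instance:
openstack server add volume wb-product-testing my-volume
```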
[14:37:45] !log tools clear error state from tools-sgeexec-0913
[14:37:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:43:05] !log tools clear error states from all currently erroring exec nodes
[14:43:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:31:34] !log admin about to migrate the CloudVPS network to the cloudgw architecture T270704
[15:31:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[15:31:37] T270704: cloud: introduce new edge network architecture for eqiad1 and codfw1dev - https://phabricator.wikimedia.org/T270704
[15:55:27] no worries. :) Folks like Majavah and I are here to answer questions while others fix the things that are causing questions to be asked
[17:48:58] !log quarry cleared out tmp files created by quarry web service that had filled the disk with find T282171
[17:49:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Quarry/SAL
[17:49:02] T282171: quarry-web-01 out of disk space - https://phabricator.wikimedia.org/T282171
[17:54:26] Anyone who is active as a tool maintainer or code contributor, but not active in editing on a wiki: there is a call for discussion of Board election criteria for technical contributors at T281977 that you may want to participate in.
[17:54:27] T281977: Reconsidering of eligibility criteria for technical contributions in elections - https://phabricator.wikimedia.org/T281977
[17:57:09] !log quarry restarting web service to remove banner for wikireplicas upgrade
[17:57:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Quarry/SAL
[19:32:54] toolforge question: is it possible to have a php webservice and a node.js webservice active at the same time (on the same tool account)?
[19:38:58] sd0001: not practically
[19:39:37] Majavah: is there an impractical way? :)
[19:42:47] yes, writing kubernetes deployment+service+ingress objects by hand, which I would not recommend unless you're familiar with k8s, and it is definitely not supported
[19:43:25] (not supported = not our fault if things break)
[19:44:29] I think that would require more quota allocated to that tool too, but that's doable
[19:46:12] Majavah: ah, in that case let me leave it at that; I have never really worked with k8s
[19:48:07] kubernetes lets you do all kinds of awesome stuff, but there's a reason why we write tooling around it instead of forcing people to deal with it directly :P
[19:48:24] sd0001: can I ask if there is an XY problem at play here? What sort of single tool are you imagining with both php and nodejs runtime requirements?
[19:49:49] often I have heard this asked and then found the reason was wanting a react/vue/whatever SPA with a php api backend. That can very much be done by pre-compiling the SPA and serving the static assets via the php webservice, which also serves the api
[19:51:10] It is also very possible to create 2 or more tools which work together somehow to deliver value to users. foo-api for backend + foo for frontend sort of thing
[19:52:05] :(
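Purely as an illustration of the "objects by hand" route Majavah describes above (and explicitly does not recommend), here is a rough sketch of what it would involve. Everything in it is an assumption: the image, names, port, and URL path are invented, the Ingress schema shown is the networking.k8s.io/v1 one and depends on the cluster's kubernetes version, and Toolforge's admission policies may well reject parts of this as written:

```bash
# Hypothetical second service for the same tool; none of these names are real.
kubectl create deployment mytool-node --image=some-registry/node-web-image
kubectl expose deployment mytool-node --port=8000 --target-port=8000

# Route a URL path on the tool's hostname to the new service:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mytool-node
spec:
  rules:
  - host: mytool.toolforge.org
    http:
      paths:
      - path: /node
        pathType: Prefix
        backend:
          service:
            name: mytool-node
            port:
              number: 8000
EOF
```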
[20:45:57] how does one run a cronjob with a python script with a venv?
[20:48:30] do you do the "python -m venv" thing from within a k8s shell?
[20:57:42] Use webservice shell to create it
[20:58:13] and then "source .../bin/activate" from a wrapper?
[20:58:21] Just use full path
[20:58:41] the ... is just where the venv is
[20:58:49] So /data/project/yourrool/venv/bin/pyton3 whatever.ph
[20:58:54] oh i see
[20:58:54] Yep
[20:58:55] k
[20:59:20] But with Python spelt right :)
[21:15:41] ok well that's working :-)
[21:23:20] inductiveload: the `source .../activate` thing should work as well, but it also requires a shell session that often isn't needed. Using the /full/path/to/venv/bin/python3 trick removes the need for a modified $PATH in your environment (which is what sourcing activate mostly does).
[21:32:21] i actually do have a (small) wrapper to generate the arguments for the python script
[21:32:35] but calling direct worked too :-)
[21:32:46] how can I get the pod for a given cronjob?
[21:35:41] actually I just want to easily get the `kubectl logs` output for the last run of the cronjob
[21:41:17] inductiveload: https://stackoverflow.com/questions/53647683/how-to-get-logs-of-jobs-created-by-a-cronjob describes one way
[21:42:18] mrrrgh ok, fine, so there's no one-liner here
[21:43:47] There are some work-in-progress plans for helpers to make working with jobs easier -- https://wikitech.wikimedia.org/wiki/Wikimedia_Cloud_Services_team/EnhancementProposals/Toolforge_jobs -- but no implementation yet
[21:44:19] it's fine, it's already pretty cool
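A minimal sketch of the full-path venv pattern described in the [20:57] thread above. The tool name, paths, and script name are hypothetical; the point is that invoking the venv's interpreter directly removes any need to source activate:

```bash
# One-time setup, run from a "webservice shell" session so the venv is built
# against the same image the job will later run on:
python3 -m venv /data/project/mytool/venv
/data/project/mytool/venv/bin/pip install -r /data/project/mytool/requirements.txt

# The cronjob's command then calls the venv's python by full path;
# no wrapper script or "source .../bin/activate" step is needed:
/data/project/mytool/venv/bin/python3 /data/project/mytool/script.py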
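And a sketch of one way to pull the logs of a cronjob's last run, along the lines of the StackOverflow answer linked above. It assumes a cronjob named "mycron" (hypothetical) and relies on kubernetes naming the spawned jobs <cronjob-name>-<timestamp>:

```bash
# Pick out the most recently created job belonging to the cronjob:
latest=$(kubectl get jobs -o name --sort-by=.metadata.creationTimestamp | grep '/mycron-' | tail -n 1)

# kubectl logs accepts a job reference and resolves it to the job's pod:
kubectl logs "$latest"
```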