[07:41:36] Hi, everybody, would it be possible to set up a (web) proxy through the Horizon interface to a VPS instance and use `corkscrew` on said VPS so that it is possible to connect to the machine with SSH over the web proxy? Currently I can SSH into the VPS via bastion and I don't really need this other setup, it is more of a "what if" question. I actually tried but I couldn't get it to work, but I don't know if this depends on something that I am missing (I have never used `corkscrew` before) or if there is something else that blocks this setup.
[07:51:00] CristianCantoro: I'm confused, what exactly are you trying to accomplish?
[08:06:38] Zppix: I'm trying to SSH into the VPS but going through the web proxy instead of bastion; as I said before, it is just a "theoretical" question...
[08:06:58] CristianCantoro: no, you have to go through bastion
[08:07:09] AFAIK
[08:07:56] I was wondering if there was some limitation or setup put in place that prevents this, or if it is just a matter of configuring things in the right way
[08:11:17] you can't ssh to the web proxies at all
[08:11:25] they're web proxies, not bastions
[09:09:39] !log toolsbeta Playing around with cookbooks by adding/removing etcd nodes, etcd might misbehave from time to time (T274497)
[09:09:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[09:09:44] T274497: [toolforge] Automate addition/removal of etcd node - https://phabricator.wikimedia.org/T274497
[09:33:37] !log admin [codfw1dev] rebooting cloudcontrol2001-dev for kernel upgrade (T275753)
[09:33:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:41:49] !log admin [codfw1dev] rebooting cloudcontrol2003-dev for kernel upgrade (T275753)
[09:41:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:43:49] !log admin [codfw1dev] rebooting cloudcephosd2001-dev for kernel upgrade (T275753)
[09:43:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:44:01] !log admin [codfw1dev] rebooting cloudbackup[2001-2002].codfw.wmnet for kernel upgrade (T275753)
[09:44:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:45:38] !log admin [codfw1dev] rebooting cloudcontrol2004-dev for kernel upgrade (T275753)
[09:45:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:51:01] !log admin [codfw1dev] rebooting cloudservices2002-dev for kernel upgrade (T275753)
[09:51:06] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:53:00] !log admin [codfw1dev] rebooting cloudservices2003-dev for kernel upgrade (T275753)
[09:53:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:59:52] !log admin [codfw1dev] rebooting cloudweb2001-dev for kernel upgrade (T275753)
[09:59:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:01:23] !log admin [codfw1dev] rebooting cloudvirt200X-dev for kernel upgrade (T275753)
[10:01:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:05:43] !log admin [codfw1dev] rebooting cloudcephosd2002-dev for kernel upgrade (T275753)
[10:05:46] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:11:25] !log admin [codfw1dev] rebooting cloudcephosd2003-dev for kernel upgrade (T275753)
[10:11:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:11:32] !log admin [codfw1dev] manually creating /boot/grub/ on cloudvirt2003-dev to allow update-grub2 to run (so it can reboot into a new kernel) (T275753)
[10:11:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:24:09] !log admin [codfw1dev] purge old kernel packages on cloudvirt2003-dev to force boot into a new kernel (T275753)
[10:24:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
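A sketch of the manual GRUB repair described in the two entries above. The exact commands are an assumption (the log only says the directory was recreated and old kernels purged), but these are the standard Debian steps:

```
# Assumption: /boot/grub/ was missing, so update-grub2 had nowhere to
# write grub.cfg and the host kept booting the old kernel.
sudo mkdir -p /boot/grub
sudo update-grub2                      # wrapper for grub-mkconfig -o /boot/grub/grub.cfg
sudo apt-get purge <old-kernel-pkgs>   # drop old entries so the new kernel is the default
sudo reboot
```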
[10:28:39] !log tools.quickcategories deployed e7654cf4b3 (link PagePile batch creation from index)
[10:28:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.quickcategories/SAL
[10:29:53] !log admin [codfw1dev] rebooting cloudcephmon2001-dev for kernel upgrade (T275753)
[10:29:58] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:38:47] !log admin [codfw1dev] rebooting cloudcephmon2002-dev for kernel upgrade (T275753)
[10:38:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:43:23] !log admin [codfw1dev] rebooting cloudcephmon2003-dev for kernel upgrade (T275753)
[10:43:27] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:45:19] !log admin rebooting cloudvirt1039 into a new kernel (T275753) --- spare
[10:45:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:59:17] !log admin [eqiad] rebooting cloudcephmon1001 for kernel upgrade (T275753)
[10:59:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:05:43] !log admin [eqiad] rebooting cloudcephmon1002 for kernel upgrade (T275753)
[11:05:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:11:57] !log admin [eqiad] rebooting cloudcephmon1003 for kernel upgrade (T275753)
[11:12:00] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:16:50] !log admin [eqiad] rebooting cloudcephosd1001 for kernel upgrade (T275753)
[11:16:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:26:52] !log admin [eqiad] rebooting cloudcephosd1002 for kernel upgrade (T275753)
[11:27:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:30:30] !log admin rebooting cloudcontrol1005 for kernel upgrade (T2
[11:30:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:30:33] T2: Get salt logs into logstash - https://phabricator.wikimedia.org/T2
[11:32:39] !log admin [eqiad] rebooting cloudcephosd1003 for kernel upgrade (T275753)
[11:32:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:39:45] !log toolsbeta created puppet prefix 'toolsbeta-bastion' to hold new configuration for buster-based bastions (T275865)
[11:39:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:39:51] T275865: Toolforge: migrate bastions to Debian Buster - https://phabricator.wikimedia.org/T275865
[11:41:41] !log admin [eqiad] rebooting cloudcephosd1004 for kernel upgrade (T275753)
[11:41:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:42:43] !log admin rebooting cloudcontrol1004 for kernel upgrade (T275753)
[11:42:46] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:46:26] !log toolsbeta `openstack server create --os-project-id toolsbeta --image debian-10.0-buster --flavor g2.cores2.ram4.disk40 --network lan-flat-cloudinstances2b --property description='buster bastion test' toolsbeta-bastion-05` (T275865)
[11:46:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[11:46:30] T275865: Toolforge: migrate bastions to Debian Buster - https://phabricator.wikimedia.org/T275865
[12:00:56] !log admin rebooting cloudcontrol1003 for kernel upgrade (T275753)
[12:00:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[12:07:22] !log admin [eqiad] rebooting cloudcephosd1005 for kernel upgrade (T275753)
[12:07:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:38:44] !log admin [eqiad] rebooting cloudcephosd1006 for kernel upgrade (T275753)
[13:38:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:45:40] !log admin [eqiad] rebooting cloudcephosd1007 for kernel upgrade (T275753)
[13:45:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:51:36] !log admin [eqiad] rebooting cloudcephosd1008 for kernel upgrade (T275753)
[13:51:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:54:20] !log admin [eqiad] downtimed alert1001 Ceph OSDs down alert until 18:00 GMT+1 as that is not under the host being rebooted (T275753)
[13:54:23] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:17:21] !log admin [eqiad] rebooting cloudcephosd1009 for kernel upgrade (T275753)
[14:17:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:25:20] !log admin [eqiad] rebooting cloudcephosd1010 for kernel upgrade (T275753)
[14:25:23] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:31:49] !log admin [eqiad] rebooting cloudcephosd1011 for kernel upgrade (T275753)
[14:31:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:38:14] !log admin [eqiad] rebooting cloudcephosd1012 for kernel upgrade (T275753)
[14:38:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:44:26] !log admin [eqiad] rebooting cloudcephosd1013 for kernel upgrade (T275753)
[14:44:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:51:02] !log admin [eqiad] rebooting cloudcephosd1014 for kernel upgrade (T275753)
[14:51:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:58:35] !log admin [eqiad] rebooting cloudcephosd1015 (last osd \o/) for kernel upgrade (T275753)
[14:58:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[15:06:28] Majavah: web proxies just pass traffic, don't they? The idea was to proxy the traffic to port 22. Of course the problem then is to have SSH over HTTP, but that's why I was mentioning corkscrew
[15:08:12] but, as I said, it was just a "theoretical" question; if nobody thought of this before, probably there is some reason why it cannot be done...
[15:12:00] ChanServ: the web proxy is literally a WEB proxy, it only speaks https
[15:12:49] andrewbogott: you mispinged
[15:13:04] you're right, sorry
[15:13:10] I guess that you might be able to do some kind of ssh over https (you'll probably need a custom client and server, corkscrew might work)
[15:13:14] CristianCantoro: ^^ (re webproxy)
[15:13:27] never tried though xd
[15:13:54] andrewbogott, dcaro: yes, you can tunnel SSH over HTTPS :-)
[15:14:09] you'll probably have to start the https ssh tunneler on port 443 or similar (22 is taken for regular ssh)
[15:14:52] dcaro: I have also never tried, I was just wondering if it was possible :-)
[15:16:03] what problem are you trying to solve? bypassing the bastion sounds like it could have some security implications
[15:17:17] Majavah: no specific problem, it was just curiosity whether it could be done 😅
[15:17:28] I didn't want to bother, sorry
[15:18:24] there is no such thing as a stupid question
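For the record, the tunneling idea discussed here is real: `corkscrew` wraps an SSH connection in an HTTP CONNECT request through a proxy. A minimal client-side sketch, with hypothetical host names, assuming a proxy that permits CONNECT to port 22 (the Cloud VPS web proxies do not, which is why this cannot work against them):

```
# ~/.ssh/config -- all host names here are hypothetical examples.
Host vps-via-proxy
    HostName vps.example.org
    # corkscrew <proxy-host> <proxy-port> <target-host> <target-port>
    ProxyCommand corkscrew proxy.example.org 8080 %h %p
```

`ssh vps-via-proxy` then asks the proxy for `CONNECT vps.example.org:22`; a proxy that only forwards HTTPS to web backends will refuse it, matching what was said above.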
[15:35:51] !log toolsbeta removed toolsbeta-test-k8s-etcd-9 with depool from kubeadmin/etcd (T274497)
[15:35:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[15:35:56] T274497: [toolforge] Automate addition/removal of etcd node - https://phabricator.wikimedia.org/T274497
[15:51:56] Hey all! I'm very new here so sorry if I ask stupid questions. I just got approved for my membership, but when I visited https://toolsadmin.wikimedia.org/tools/create I got a Bad Request. Refreshing didn't do anything. I'm pretty sure I'm logged in, and I can see the approval message in the Alerts section, but I can't seem to create a tool account. Anyone has any ideas?
[15:52:11] leranjun: try logging out and in again
[15:52:40] Oh yeah that actually worked!
[15:53:00] '=D  how could I not think of that lol
[15:53:58] this is https://phabricator.wikimedia.org/T144943 fwiw
[15:55:29] I see. Thanks for the prompt reply!
[15:56:01] happy to help, in reply to your first sentence: there are no stupid questions
[15:56:19] :') (y)
[19:44:20] Hi all, I have a question about kube config for new tools: I've just created a new tool, and no ~/.kube/config file exists for the tool user, so any `webservice` command fails. I'm just seeing a null config (clusters: null, contexts: null, current-context: "") with `kubectl config view`. Is there documentation on configuring kube for a particular tool/user, or just a sample ~/.kube/config? I'm not seeing these steps in any of the how-tos or quickstarts. Thanks for any help!
[19:45:12] suriname0: how long ago did you create it?
[19:45:27] A few hours ago
[19:45:47] suriname0: there is a background service that should create the needed credentials. But if that has not happened in hours, that process is likely stuck.
[19:46:07] thanks, that's useful! I'll check again tomorrow
[19:46:15] I'll check on the service and see if it's an obvious fix.
[19:47:00] suriname0: what is the tool name?
[19:47:10] tools.ores-inspect
[19:49:11] * bd808 tries to remember where this runs in the modern cluster :)
[19:49:30] maintain-kubeusers namespace afaik
[19:51:17] `maintain-kubeusers-7f7b44754c-sffzd 0/1 CrashLoopBackOff 172 2d22h`
[19:51:26] so yeah, it needs help :)
[19:52:00] ouch
[19:52:54] it seems to be thrashing on setting up ores-inspect. still looking
[20:01:39] !log tools Deleted csr in strange state for tool-ores-inspect
[20:01:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[20:02:23] suriname0: "Provisioned creds for user ores-inspect" -- I think you should be all set now
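A sketch of the diagnosis above, using standard kubectl commands; the fix (deleting a stuck certificate signing request) is taken from the !log entry, but the exact CSR name is an assumption:

```
# On the Toolforge Kubernetes cluster, as a cluster admin:
kubectl -n maintain-kubeusers get pods        # showed 0/1 CrashLoopBackOff
kubectl -n maintain-kubeusers logs maintain-kubeusers-7f7b44754c-sffzd
# maintain-kubeusers provisions tool credentials via certificate signing
# requests; one for tool-ores-inspect was stuck, so it was deleted:
kubectl get csr
kubectl delete csr tool-ores-inspect          # exact CSR name is a guess
```

Once the pod stops crash-looping, it provisions the missing ~/.kube/config for the tool, which is the "Provisioned creds" message quoted above.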
[20:17:01] bd808: mind if I pm you regarding the private task?
[20:17:31] Zppix: "the private task"?
[20:17:36] T275594
[20:17:47] i meant to replace that in my message with the task number before i sent my msg :P
[20:18:29] I'm not sure what you would tell me in chat that you can't tell everyone on the task
[20:18:47] bd808: i just wanted to bounce some ideas off and didn't want to clutter the task, if they weren't possible
[20:23:07] but ill just put them on the task then
[20:32:02] I'm going to reboot all the bastions in a minute. If everyone has an ssh session in progress you'd best close it out now.
[20:32:10] s/everyone/anyone/
[20:39:09] !log toolsbeta rebooting all hosts
[20:39:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[20:46:05] !log cloudinfra rebooting all hosts
[20:46:09] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Cloudinfra/SAL
[20:56:14] did you start rebooting the real toolforge yet?
[20:57:27] Majavah: topic seems to say yes
[20:58:02] Zppix: I don't see a SAL entry for that, but many tools-based things just went down
[20:58:37] Majavah: last SAL entry is for cloudinfra "rebooting all hosts"
[20:59:05] Zppix: cloudinfra != tools
[20:59:24] but if infra is down you might not be able to reach tools
[21:09:20] andrewbogott: qstat on tools-sgebastion-08 says "error: commlib error: got select error (Connection refused)" and then "error: unable to send message to qmaster using port 6444 on host "tools-sgegrid-master.tools.eqiad.wmflabs": got send error"
[21:10:18] expected, maintenance is ongoing Wurgl
[21:10:27] okay
[21:10:57] Wurgl: see the recent cloud-announce email if you haven't already
[21:13:47] folks are looking into the grid failure. the reboots went wider than planned (possible bug) and there is some fallout.
[21:13:57] Majavah: got the grid back up
[21:14:08] tools was an unexpected casualty just now
[21:14:43] I'm going to be recovering grid queues now as I go since several need help
[21:17:13] grid exec nodes are proving tricky to recover.
[21:17:19] I'll get it back up.
[21:21:29] qstat works again, no more mails from cron-daemon :) Thanks
[21:21:49] !log tools hard rebooting tools-sgeexec-0952.tools.eqiad.wmflabs
[21:21:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:22:11] Wurgl: don't be surprised if there's a few more. The cluster isn't fully healthy yet :)
[21:22:43] my continuous jobs are stuck somewhere still
[21:23:06] bstorm: could you ping me when you think it's stable?
[21:23:13] Sure
[21:23:20] There's a lot stuck right now :(
[21:27:50] !log tools hard rebooting tools-sgeexec-0947
[21:27:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:40:11] !log tools.stewardbots restarted stuck job stewardbot, sulwatcher seems to be doing fine
[21:40:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[21:40:59] no need to ping me, got everything important back up
[21:52:38] Ok :)
[21:52:41] I was about to
[21:52:50] The grid is pretty stable now
[21:52:59] I'll look for stale-stuck jobs in a bit
[21:53:05] thanks!
[22:04:45] !log tools cleaned up grid jobs 1230666,1908277,1908299,2441500,2441513
[22:04:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[22:06:36] What does state "Rr" from qstat mean?
[22:06:51] r = running. Okay. but R?
[22:07:03] Wurgl: Restarted running
[22:09:55] in my head I always pretended that it stood for "Really running"
[22:23:44] thx … man page does not explain it (or I need new glasses …)
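Since the man page is indeed terse here, the common (Son of) Grid Engine qstat state letters, from memory rather than from the Toolforge docs, so worth double-checking:

```
r    running
R    restarted (the job was restarted, e.g. after its exec node failed)
s/S  suspended (by a user / because the whole queue is suspended)
t    transferring to an exec node
qw   queued, waiting to be scheduled
Eqw  in an error state while queued (details via: qstat -j <jobid>)
```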
[22:47:03] Hi, I'm getting started with Cloud VPS & Toolforge, and a question came up. I'm using Toolforge (lighttpd+PHP) to publicly share some scientific data supporting wiki-curators. And I'm using Cloud VPS to periodically run one of our tools to generate a data summary. What is the best practice for enabling my Cloud VPS instance to upload the data summary to my tool on Toolforge? (I can manually create an SSH key on my Cloud VPS instance and add that here: https://toolsadmin.wikimedia.org/profile/settings/ssh-keys/ But that seems to go against the recommendation to always be ready to relaunch a Cloud VPS instance via Puppet.) Thanks!
[22:48:29] ariutta: maybe put it on a webserver where it's generated and wget it there it's needed ?
[22:48:33] where
[22:48:48] it's all public data, right
[22:50:34] ideally, use puppet to create a systemd timer on your cloud VPS that runs a command which dumps it into the webserver document root. if you puppetize you can always recreate cloud VPS instances simply by adding your role to a new instance
[22:51:17] ariutta: instance-to-instance ssh/scp/sftp is not recommended. Securely handling the keys is very challenging. mutante's suggestion of a web endpoint is a good one. Another possibility would be setting up an rsync server on your data generation instance and then fetching from it as your tool account.
[22:51:32] happy to show code examples if interested
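In the spirit of that offer, a sketch of the timer-plus-fetch pattern suggested above. The unit and script names are made up for illustration, and in practice the units would be deployed by a Puppet role rather than written by hand:

```
# /etc/systemd/system/generate-summary.service  (on the Cloud VPS instance)
[Unit]
Description=Dump the data summary into the webserver document root

[Service]
Type=oneshot
# hypothetical generator script; writes into the web root it serves from
ExecStart=/usr/local/bin/generate-summary --out /var/www/html/summary.json

# /etc/systemd/system/generate-summary.timer
[Unit]
Description=Run generate-summary daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

The Toolforge side can then pull the published file from a scheduled job with something like `wget -N https://myproject.wmcloud.org/summary.json` (proxy name hypothetical), or, per the rsync alternative, fetch from an rsync daemon on the generating instance, with no instance-to-instance ssh keys involved.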
[22:55:13] !log clouddb-services rebooting clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 before (hopefully) many people are using them
[22:55:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Clouddb-services/SAL
[22:56:37] Yes, it's all public. I was hoping that my Toolforge tool could be the webserver, and I tried to get my "summarizer" code running on Toolforge. But my "summarizer" code is kind of complicated academic software, so it was easier to get it running on Cloud VPS, not Toolforge. I'd appreciate any code examples showing how to get my summary data onto a webserver. (I was hoping to run my webserver on Toolforge instead of Cloud VPS.)
[22:57:33] ariutta: or move everything into the cloud VPS project together?
[22:57:49] or have a webserver on both
[22:58:15] you could apply role(simplelamp2) to get one the quick way, btw
[22:58:30] for the cloud VPS side
[23:01:06] Cool, I hadn't heard of that! Toolforge looked nice for security and ease of use. Is role(simplelamp2) a good option if maintaining a LAMP stack isn't really our "core competency"? I'm worried about falling behind on security updates, etc.
[23:02:59] setting up a webserver in Toolforge should be just putting stuff in ~/public_html of your tool
[23:03:02] ariutta: it installs an apache, php, memcached and mysql
[23:03:15] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Web
[23:03:59] ariutta: yea, it's kind of made for that.. to get a simple LAMP stack on cloud VPS without doing it manually. you can still manually manage the actual website config if you tell it so
[23:04:56] how large is this scientific data going to be? megabytes or gigabytes?
[23:05:03] re: upgrades, there is
[23:05:08] https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS/Admin/Managing_package_upgrades
[23:07:22] legoktm: I got the webserver going on Toolforge. The challenge was uploading a summary generated on Cloud VPS. The size of the data might get into the low gigabytes, totaled up over the lifetime of the project: https://wikipathways-data.toolforge.org/
[23:09:17] Toolforge /data/project uses NFS for storage which uhhh isn't the greatest for serving large files. We now have https://wikitech.wikimedia.org/wiki/Help:Adding_Disk_Space_to_Cloud_VPS_instances#Cinder for VPS projects which is better
[23:12:20] mutante: that's pretty cool. I could have one Cloud VPS instance generate the summaries and another one host the data.
[23:13:01] legoktm: oh, I meant the total. The largest individual file currently appears to be 57MB.
[23:13:30] oh, I think that's fine then
[23:14:52] Yeah, the total data is pretty small, nothing like sequencing data
[23:17:53] is freenode rejecting logins from the cloud servers?
[23:18:05] billinghurst: shouldn't be, why?
[23:18:07] coibot is joining and leaving
[23:18:24] try giving it a good restart?
[23:18:27] * COIBot (~root@nat.openstack.eqiad1.wikimediacloud.org) has joined
[23:18:27] * COIBot has quit (Remote host closed the connection)
[23:18:32] I did
[23:18:58] it had left, and now it cannot maintain a grip
[23:19:05] no code change
[23:19:27] maybe I need Beetstra to restart the instance
[23:19:45] that is beyond my knowledge set
[23:20:10] :/ the log messages in ~/coibot.err are pretty useless
[23:20:42] liwa3 is not having issues
[23:20:47] !log clouddb-services rebooting clouddb-wikilabels-02 for patches
[23:20:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Clouddb-services/SAL
[23:21:13] bd808, can you please restart the instance, that is usually Beetstra's approach when it goes quirky
[23:21:26] the kick-it-harder solution
[23:21:32] I don't know what "the instance" means?
[23:21:43] coibot itself
[23:21:47] the grid engine job for that bot?
[23:22:01] if that is what it is called
[23:23:24] * billinghurst is simple muggins who restarts the scripts and kills lines and starts scripts when it chokes on its own vomit
[23:23:49] the coibot tool does not seem to have any grid jobs running at all
[23:24:02] nor any kubernetes containers
[23:24:33] And https://wikitech.wikimedia.org/wiki/Tool:Coibot is a redlink, so I don't know how it is supposed to be started
[23:25:17] it's not being blocked by freenode, I'm able to telnet to freenode from toolforge fine
[23:25:28] quoting m:user talk:Beetstra => @Billinghurst: you did not tell me that that also did not solve your problem. I have applied my ultimate motto: "If violence did not solve your problem, you did not use enough" ... I have restarted the whole instance, and started the script clean. --Dirk Beetstra T C (en: U, T) 18:56, 19 January 2021 (UTC)
[23:25:58] bd808: maybe it's on VPS, since it starts with ~root@
[23:26:52] bash history on tools.coibot has not been added to since mid-2019
[23:27:09] oh, it is on its own, it got moved
[23:27:15] there's a linkwatcher VPS project
[23:27:36] that is its separate friend
[23:27:52] https://openstack-browser.toolforge.org/server/coibot.linkwatcher.eqiad1.wikimedia.cloud
[23:28:38] I ssh to both coibot and liwa3
[23:28:55] holy hell. 4 cores and 8G of ram for an irc bot?
[23:29:11] not just an ircbot
[23:29:15] it's an onwiki bot as well
[23:30:21] it is one of our conflict of interest and spambot defences
[23:30:55] billinghurst: you are a project admin for that Cloud VPS project, so you should be able to restart the instance using Horizon or by running `sudo reboot` on the instance itself.
[23:31:32] okay, doing sudo reboot
[23:31:40] There is no documentation at https://wikitech.wikimedia.org/wiki/Nova_Resource:Linkwatcher either, so I'm going to slowly back away. I have no way to learn how this is supposed to work.
[23:32:24] I am good on the internals, it is the macro reboot that is new
[23:37:07] bd808, thx for the direction
[23:37:40] billinghurst: write some system admin procedure docs for this project pretty please. :)
[23:39:29] can you point me to some best examples? Doc writing is not my best attribute, and I am just arms and legs here
[23:39:48] it would probably also be a good idea to clean up some/all of tools.coibot and add a README there saying that it is all on another Cloud VPS project now
[23:40:09] okay
[23:40:11] billinghurst: the most basic info would be something like https://wikitech.wikimedia.org/wiki/Tool:Stashbot#Maintenance
[23:40:37] just where to go and what things to type is helpful
[23:41:19] and when you say clean up tools.coibot, are you meaning the "become coibot" component, ror something else?
[23:41:26] *or
[23:42:27] ugh, still joins and quits in IRC
[23:43:35] part of the issue could be it doesn't seem to be authing
[23:44:30] AUTHing being ?
[23:44:43] * billinghurst is arms and legs
[23:44:47] to nickserv
[23:44:53] okay
[23:48:35] Zppix, there are both a sub authenticate and a sub hyperauthenticate part in the script, so I guess they are failing for some reason
[23:53:40] okay, dropped that back on Beetstra, thanks for the direction