[00:14:55] sudo bash script create smack toolforge
[08:07:14] !log CVN matanya set flags +AV on NovakWatchmen in #cvn-wp-hr
[08:07:16] matanya: Unknown project "CVN"
[08:07:26] I thought so
[08:13:20] !log cvn matanya set flags +AV on NovakWatchmen in #cvn-wp-hr
[08:13:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Cvn/SAL
[16:11:16] legoktm just got `ConnectionRefusedError: [Errno 111] Connection refused` for core again
[16:11:23] (for codesearch)
[17:01:43] hello
[17:01:52] i need some help with kubernetes
[17:02:26] in deployment.yaml, i tell it to use python3.7
[17:03:01] but it seems to be using 3.5 instead, and my code doesn't really run well under 3.5
[17:03:37] this is my deployment.yaml: https://hastebin.com/bibunabaku.yaml
[17:04:05] and this is the sh file I use to manage my Kubernetes setup: https://hastebin.com/quporiniha.sh
[17:04:41] running `exec python3` on the Kubernetes setup returns: Python 3.5.3
[17:07:04] Evrifaessa: first question is whether you have reapplied the deployment.yaml to your namespace since you changed it to ask for a python 3.7 container? Sometimes folks get confused and think that Kubernetes reads the file directly from their $HOME.
[17:08:06] bd808: it's my first time trying to use k8s, and i really don't know how to use it
[17:08:09] Looking at https://k8s-status.toolforge.org/namespaces/tool-trwikisignaturebot/pods/trwikisignaturebot.bot-7c8b8b7b5c-8mzv2/ it appears that your container is running python3.7
[17:08:41] executing "python3 ./trwikisignaturebot/pywikibot-core/pwb.py signbot" gets it running with 3.5.3
[17:11:11] Evrifaessa: how are you testing that? Do you mean that you end up with python 3.5 when you change things in your management script, or when you get an interactive shell some other way?
[17:11:48] when directly executing "python3" using the management script, i can see that it runs 3.5.3
[17:12:12] The only place we have Python 3.7 is inside the Kubernetes cluster. python3 on the Toolforge bastion nodes would be 3.5
[17:12:41] so to get 3.7 you need to first do something like `webservice --backend=kubernetes python3.7 shell`
[17:13:03] see this: https://hastebin.com/apovurugex.sh
[17:13:18] i use this like "./manager.sh run"
[17:13:26] and it runs the script for me
[17:13:51] Evrifaessa: yes, but that is intended to happen *inside* the Kubernetes pod
[17:13:58] oh
[17:14:10] that seems correct..
[17:14:19] that is the `command: [ "/data/project/trwikisignaturebot/trwikisignaturebot.sh", "run" ]` part of the deployment.yaml
[17:14:25] but how do i get this to run inside the k8s pod?
[17:16:36] Evrifaessa: let's back up a step or two. What are you hoping to accomplish with your tool? This deployment.yaml you have is copied from one used to make a python bot that runs continuously. I have a hunch that you really want to run a pywikibot script periodically (like once a day or something)
[17:17:22] nope, my bot is planned to run continuously and check recent changes to take actions
[17:18:07] it will run continuously in the background, check whether users signed their messages on talk pages with ~~~~, and sign for them if they forgot to
[17:18:26] it was running perfectly on AWS until amazon tried to charge me $500 for the redis server
[17:18:35] fun!
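A minimal recap of the version mismatch above, as shell commands (a sketch; the exact 3.5.x/3.7.x versions depend on the images in use):

    python3 --version                                 # on a Toolforge bastion: Python 3.5.x
    webservice --backend=kubernetes python3.7 shell   # open an interactive shell in a python3.7 pod
    python3 --version                                 # inside that pod: Python 3.7.x
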
[17:19:05] So I just used my superpowers to "become" your trwikisignaturebot tool
[17:19:12] okay
[17:19:32] Once I have done that, `kubectl get pods` shows the Kubernetes "pods" that are running
[17:19:52] a pod is roughly a Docker container, if you are familiar with Docker
[17:20:09] (there is more to it than that, but it's an ok way to start thinking about pods)
[17:20:10] no, i never used k8s, docker or other things like that
[17:20:33] i just like running my code the basic way, lol
[17:22:05] Evrifaessa: we are all learning all the time. :) Another way to think of a pod is like an AWS instance. Again, in reality there are differences, but roughly they are similar. A pod is a place where your code can run that is isolated from other code running on the same underlying computer.
[17:22:19] So I can see that your pod is indeed running.
[17:22:32] and ... now it is not.
[17:22:53] let me see if I can remember the kubectl command that might tell us why it stopped
[17:23:43] your deployment object is gone now too?
[17:24:08] uh, are you talking about deployment.yaml?
[17:24:31] it exists in the main directory of the tool
[17:24:39] I'm talking about the state of the objects that are defined in deployment.yaml inside the Kubernetes cluster
[17:25:30] umm... how do i check if they are gone? and what do you mean by "gone"?
[17:25:41] Evrifaessa: maybe I confused myself earlier. After you created the deployment.yaml file, did you then load it into the Kubernetes cluster?
[17:25:58] something like `kubectl create --validate=true -f $HOME/deployment.yaml`
[17:26:03] nope
[17:26:08] i never used the kubectl command
[17:26:12] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Kubernetes#Kubernetes_continuous_jobs
[17:26:12] i just used the management script
[17:26:28] on the toolforge bastion node
[17:26:31] i guess
[17:27:02] *nod* the `start` action of the management script does the kubectl command
[17:27:17] and the `stop` action removes the deployment objects
[17:27:37] i did that, but then used stop and removed it
[17:28:25] i first used the start action
[17:28:28] and then tried using run
[17:28:32] The $KUBECTL line under "start" in your script has the wrong path to the deployment.yaml, I think
[17:28:47] ah, no it does not
[17:29:12] it just looks weird because you copied a script that was set up to live in $HOME/bin rather than $HOME
[17:29:42] in the example here: https://wikitech.wikimedia.org/wiki/Help:Toolforge/Kubernetes#Kubernetes_continuous_jobs
[17:29:45] it was running in bin
[17:29:46] The TOOL_DIR line in the script walks back out of your $HOME because of that
[17:29:58] yeah, I wrote the example :)
[17:30:00] ooo
[17:30:23] but i just tried getting it running in the main directory instead
[17:30:35] by removing all instances of /bin from the code
[17:30:50] Change the TOOL_DIR line to `TOOL_DIR=$(cd $(dirname $0) && pwd -P)`
[17:31:14] okay
[17:31:22] removing the `/..` part in the `cd` command will keep it in your $HOME
[17:31:42] now done
[17:31:50] but really you could change that line to `TOOL_DIR=$HOME` as well
[17:32:28] should I?
[17:32:43] Evrifaessa: after doing your last edit, you also need to remove the extra "trwikisignaturebot" values in the other commands
[17:32:59] your TOOL_DIR looks fine now
[17:33:25] done
[17:34:33] should I now try running it?
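A sketch of what the management script's `start` and `stop` actions boil down to, per the exchange above (the deployment name is inferred from the pod name earlier in the log and may differ):

    kubectl create --validate=true -f $HOME/deployment.yaml  # "start": load the objects into the cluster
    kubectl get pods                                         # the Deployment then creates a pod
    kubectl delete deployment trwikisignaturebot.bot         # "stop": remove the Deployment and its pod
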
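And the two TOOL_DIR variants suggested above, side by side, for a wrapper script that lives directly in the tool's $HOME rather than in $HOME/bin:

    TOOL_DIR=$(cd $(dirname $0) && pwd -P)   # resolve to the directory the script lives in
    # or, equivalently for a script stored directly in $HOME:
    TOOL_DIR=$HOME
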
[17:35:31] Evrifaessa: so using that particular wrapper script, `trwikisignaturebot.sh start` should create the Deployment object, which will then create the Pod that your code runs inside of, and then it will run `trwikisignaturebot.sh run` inside of that pod
[17:35:51] you can use `trwikisignaturebot.sh tail` to see anything the bot writes to stderr and stdout
[17:36:13] and `trwikisignaturebot.sh attach` to get an interactive shell inside the same Pod that is running your bot
[17:36:23] by the way
[17:36:26] how can I use pip
[17:36:30] inside k8s?
[17:36:51] yes, and also inside of a virtualenv
[17:36:55] Attaching to pod...
[17:36:55] error: unable to upgrade connection: container not found ("bot")
[17:37:01] attach returned this
[17:37:23] `kubectl get po` shows that your pod is failing to start
[17:37:30] yeah
[17:37:31] No module named 'setuptools'
[17:37:34] i need to use pip
[17:37:38] and install my needed modules
[17:38:20] Evrifaessa: yes. the wrapper script does not handle the initial venv creation
[17:38:53] so the first thing you need is an interactive python3.7 shell: `webservice --backend=kubernetes python3.7 shell`
[17:39:22] The wrapper script expects to find the venv at $HOME/venv-k8s-py37
[17:39:26] Defaulting container name to interactive.
[17:39:26] Use 'kubectl describe pod/interactive -n tool-trwikisignaturebot' to see all of the containers in this pod.
[17:39:26] If you don't see a command prompt, try pressing enter.
[17:39:43] hit enter if you don't have a prompt yet
[17:39:54] I can't hit enter
[17:40:04] it immediately terminates the command
[17:40:13] idk how to explain this
[17:40:14] but uh
[17:40:16] tools.trwikisignaturebot@interactive:~$
[17:40:31] it gets out of the command before giving me a chance to hit enter
[17:40:38] that prompt means you are now inside the pod
[17:40:53] the "@interactive" is the pod's name
[17:40:58] \o/
[17:40:59] so you are in!
[17:41:06] tools.trwikisignaturebot@interactive:~$ python3
[17:41:06] Python 3.7.3 (default, Jul 25 2020, 13:03:44)
[17:41:07] wow
[17:41:09] python 3.7.3
[17:41:10] finally
[17:41:24] tools.trwikisignaturebot@interactive:~$ python3 -m pip
[17:41:35] /usr/bin/python3: No module named pip
[17:41:37] now `python3 -m venv $HOME/venv-k8s-py37` is needed to make the initial virtual environment
[17:41:53] okay
[17:42:02] and then `source $HOME/venv-k8s-py37/bin/activate` to "enter" the venv
[17:42:16] i'm inside the venv
[17:42:17] then you should finally be able to do `pip3 ...` things
[17:42:27] cool!
[17:42:44] is my Kubernetes pod going to use this to get the modules?
[17:42:54] so what i install here will affect my k8s, right?
[17:43:29] yes, because the wrapper script also enters the venv before it does the commands in the "run" section
[17:43:39] awesome
[17:44:00] tysm :^)
[17:44:18] And I now see how the tutorial part I wrote about this assumes you will figure all of this out on your own :/
[17:44:31] heheh
[17:44:43] but in the end, i managed to get it running with your help
[17:44:48] oh wait..
[17:44:52] "just read the code" is maybe not the best help document ;)
[17:45:10] don't we have tr_TR.utf8 in k8s?
[17:45:18] as a locale
[17:45:25] probably not, no
[17:45:29] oh
[17:45:37] i recently saw a phabricator task for this
[17:45:50] https://phabricator.wikimedia.org/T263339
[17:46:29] we were talking about that in the team meeting today. There is a proposed patch to fix the gird engine nodes to have a lot more locales. We have not applied it yet to either the gord or the Kubernetes containers
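The wrapper-script actions covered in this exchange, collected as a usage example (run from the tool account's $HOME):

    ./trwikisignaturebot.sh start    # create the Deployment; it spawns the pod and runs the "run" action inside it
    ./trwikisignaturebot.sh tail     # follow what the bot writes to stdout/stderr
    ./trwikisignaturebot.sh attach   # interactive shell inside the pod running the bot
    ./trwikisignaturebot.sh stop     # remove the Deployment objects
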
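And the one-time virtualenv bootstrap from the same exchange, in order (everything after the `webservice` command runs inside the python3.7 pod; the final `pip3 install` line is an illustrative placeholder for whatever the bot actually needs):

    webservice --backend=kubernetes python3.7 shell   # from the bastion: open a python3.7 pod
    python3 -m venv $HOME/venv-k8s-py37               # create the venv where the wrapper script expects it
    source $HOME/venv-k8s-py37/bin/activate           # "enter" the venv
    pip3 install -U pip setuptools                    # avoids failures like "No module named 'setuptools'"
    pip3 install pywikibot                            # illustrative: install the bot's dependencies
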
[17:46:45] but we are aware, and I think working on it
[17:46:55] *gird engine
[17:46:58] alrighty
[17:46:59] heh
[17:47:10] I can't type "grid" apparently
[17:47:14] hahah
[17:47:34] btw i'll have to use redis for my script
[17:47:41] hope k8s supports redis
[17:47:53] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Redis_for_Toolforge
[17:48:17] alright
[17:48:18] tysm
[17:48:20] byee :)
[17:48:27] the "strange" thing you need to do when using the redis in Toolforge is to prefix your keys
[17:48:41] oh
[17:48:51] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Redis_for_Toolforge#Security
[17:49:26] so the server is located at tools-redis.svc.eqiad.wmflabs:6379, i guess
[17:49:38] prefixing keys should not be a big problem for me
[17:49:52] correct on the host and port
[17:50:39] the prefix can be hard coded into your script. it doesn't really need to be secret, but it should be random to make it harder for your tool and other tools to collide on using the same key
[17:51:03] got it
[17:51:34] good luck! And thanks for helping me figure out more things to document :)
[17:51:45] thank you for your help :))
[20:12:16] !log phlogiston removing project per #T263134
[20:12:19] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Phlogiston/SAL
[20:12:19] T263134: Delete Phlogiston instances - https://phabricator.wikimedia.org/T263134
[21:38:30] !log tools ran an 'apt clean' across the fleet to get ahead of the new locale install
[21:38:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
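A sketch of the key-prefix advice from the redis discussion above, using the host and port confirmed there (redis-cli is not mentioned in the log; the prefix value and key name are illustrative, and generating the prefix once with something like `openssl rand -hex 16` and hard-coding it matches the "random but not secret" guidance):

    PREFIX="2b61b37a9f"   # illustrative hard-coded random prefix, unique to this tool
    redis-cli -h tools-redis.svc.eqiad.wmflabs -p 6379 SET "${PREFIX}:last-seen-rc" "12345678"
    redis-cli -h tools-redis.svc.eqiad.wmflabs -p 6379 GET "${PREFIX}:last-seen-rc"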