[02:08:13] !log mediawiki-vagrant Deleted mwv-builder-02 after copying wiki content to new mwv-builder-03 Buster instance (T236530)
[02:08:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Mediawiki-vagrant/SAL
[02:08:17] T236530: "mediawiki-vagrant" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236530
[02:51:42] * ST47 waves bd808
[02:51:51] Was there anything else you needed from me regarding my wikitech/developer account issues? (if you're around)
[02:53:09] ST47: I think I know what you want. I will try to make the changes during my work day tomorrow (starting in ~12 hours)
[02:53:49] messing about in the LDAP directory is not a great thing to do on the weekend :)
[02:54:09] okay!
[02:55:15] Sounds good to me. I appreciate you explaining it to me. There are so many accounts :P
[02:57:45] well... there are 2 :)
[03:34:25] With the OAuth consumer extension, if I request a specific grant that is admin-only (like viewdeleted), does that mean that users who don't have that permission won't be able to complete the oauth flow, or just that the resulting keys won't have that permission?
[03:35:05] !log wikimania-support Replaced scholarships-02 with scholarships-03 running Debian Buster (T236579)
[03:35:08] they just won't be able to use it
[03:35:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikimania-support/SAL
[03:35:09] T236579: "wikimania-support" Cloud VPS project jessie deprecation - https://phabricator.wikimedia.org/T236579
[03:35:19] the permission, I mean
[03:35:39] right. Okay, thanks
[03:35:52] there is no mechanism for limiting OAuth apps to specific user groups currently
[03:37:11] probably not much point to it either, since the authorization process is initiated by the app, so there isn't really a way to tell that the user won't be able to use it until they have gone through with the authorization
[11:47:17] !log tools upload image `nginx-ingress-controller` v0.25.1 (0439eb3e11f1) to docker registry (T236249)
[11:47:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:47:22] T236249: Toolforge: new k8s: upload internal docker images to our registry - https://phabricator.wikimedia.org/T236249
[11:58:22] !log tools upload image `calico/kube-controllers` v3.8.0 (df5ff96cd966) to docker registry (T236249)
[11:58:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:58:26] T236249: Toolforge: new k8s: upload internal docker images to our registry - https://phabricator.wikimedia.org/T236249
[12:01:11] !log tools upload image `calico/cni` v3.8.0 (539ca36a4c13) to docker registry (T236249)
[12:01:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:03:24] !log tools upload image `calico/pod2daemon-flexvol` v3.8.0 (f68c8f870a03) to docker registry (T236249)
[12:03:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:03:29] T236249: Toolforge: new k8s: upload internal docker images to our registry - https://phabricator.wikimedia.org/T236249
[12:04:51] !log tools upload image `calico/node` v3.8.0 (cd3efa20ff37) to docker registry (T236249)
[12:04:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:18:57] !log tools upload image `kube-scheduler` v1.15.1 (b0b3c4c404da) to docker registry (T236249)
[12:19:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:19:02] T236249: Toolforge: new k8s: upload internal docker images to our registry - https://phabricator.wikimedia.org/T236249
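The uploads above amount to mirroring public images into an internal registry. A minimal sketch of one such copy, assuming the standard docker CLI; the internal registry hostname and the upstream source here are illustrative guesses, since the log entries don't name them:

```
# Fetch the upstream image, retag it for the internal registry, push it.
# "docker-registry.example.wmflabs.org" is a placeholder hostname.
docker pull k8s.gcr.io/kube-scheduler:v1.15.1
docker tag k8s.gcr.io/kube-scheduler:v1.15.1 \
  docker-registry.example.wmflabs.org/kube-scheduler:v1.15.1
docker push docker-registry.example.wmflabs.org/kube-scheduler:v1.15.1
```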
[12:20:15] !log tools upload image `kube-proxy` v1.15.1 (89a062da739d) to docker registry (T236249)
[12:20:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:22:00] !log tools upload image `kube-controller-manager` v1.15.1 (d75082f1d121) to docker registry (T236249)
[12:22:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:23:25] !log tools upload image `kube-apiserver` v1.15.1 (68c3eb07bfc3) to docker registry (T236249)
[12:23:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:24:59] !log tools upload image `coredns` v1.3.1 (eb516548c180) to docker registry (T236249)
[12:25:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:25:04] T236249: Toolforge: new k8s: upload internal docker images to our registry - https://phabricator.wikimedia.org/T236249
[14:34:31] !log tools icinga downtime toolschecker for 1h (T235627)
[14:34:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:34:35] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[14:42:08] !log tools deleted `role::toollabs::proxy` from the `tools-proxy` puppet profile (T235627)
[14:42:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:42:13] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[14:43:03] !log tools adding `role::wmcs::toolforge::proxy` to the `tools-proxy` puppet prefix (T235627)
[14:43:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:45:20] !log tools created VMs tools-proxy-05 and tools-proxy-06 (T235627)
[14:45:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:57:29] !log tools drained tools-worker-1031.tools.eqiad.wmflabs to clean up disk space
[14:57:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:58:55] !log tools added `webproxy` security group to tools-proxy-05 and tools-proxy-06 (T235627)
[14:58:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[14:59:00] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[15:00:02] Hi, tools-sgebastion-08 (tools-dev) is currently unresponsive (fork can't create processes)
[15:00:18] Is this known?
[15:03:39] !help
[15:03:39] jem_: If you don't get a response in 15-30 minutes, please create a phabricator task -- https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?projects=wmcs-team
[15:04:09] jem_: there is a pretty aggressive limit on what each user can run on the bastion
[15:04:28] Ah
[15:04:51] But I don't understand...
[15:05:12] depending on what you are doing, the job grid or a Kubernetes shell are probably better options
[15:05:39] I just opened a temporary screen session
[15:06:04] As the Wikimedia Spain server is down for the moment
[15:06:21] jem_: you are running 3 copies of "./ircbot.php"
[15:06:37] which is likely using up all of your process quota
[15:06:45] Yes, it's just one bot with three tasks
[15:06:50] Hum
[15:06:52] bots should not be run on the bastion directly.
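For context, moving a bot like this off the bastion and onto the job grid is a one-liner from the tool account's shell; a sketch assuming the standard Toolforge `jsub`/`jstart` wrappers, with an illustrative job name and script path:

```
# Start the bot as a self-restarting job on the grid instead of the bastion.
jstart -N ircbot php "$HOME/ircbot.php"

# Inspect running jobs, or stop the job later.
qstat
jstop ircbot
```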
[15:07:19] jem_: https://wikitech.wikimedia.org/wiki/Help:Toolforge/Grid
[15:07:21] Ok, this was just an emergency solution
[15:08:04] I'll migrate it to another server, no problem
[15:08:18] it is fine to run your bot in toolforge for sure. Just not on the tiny little bastion servers where everyone has to share
[15:08:20] Could you kill ircbot.php then?
[15:08:58] I understand, and in fact I use the WM-ES server for everything possible
[15:09:06] jem_: should I kill the entire screen session, or just the php processes inside it?
[15:09:16] I just can't access the DBs from there, of course
[15:09:21] better just the php
[15:09:36] So I can close the rest gracefully
[15:10:17] And sorry for that, lesson learned
[15:10:57] jem_: you don't need to use WM-ES servers for everything possible. You are welcome to run your bots in Toolforge. But not on the bastions :-P
[15:11:00] !log tools Killed ircbot.php processes started by jem on tools-sgebastion-08 per request on irc
[15:11:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:12:04] arturo: Do you mean I should use a virtual cloud machine?
[15:12:12] Thanks, bd808
[15:12:16] jem_: read the url about the grid
[15:12:22] jem_: you should use the job grid -- https://wikitech.wikimedia.org/wiki/Help:Toolforge/Grid
[15:12:35] Ok
[15:12:50] Sorry, I'm walking right now :)
[15:13:13] So just writing here is complicated
[15:13:47] jem_: :) no worries. Read up on the job grid and come back to ask questions when you have them
[15:14:43] !log tools refresh hiera to use tools-proxy-05 as active proxy T235627
[15:14:46] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:14:47] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[15:15:31] bd808: Thanks again, anyway I'm still getting the fork errors when I log in
[15:16:09] Could you kill the irssi process, for example?
[15:16:36] Or tell me how many processes I have over the limit
[15:16:47] !log tools tools-proxy-05 now has the 185.15.56.5 floating IP as active proxy T235627
[15:16:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:20:31] jem_: there are a *lot* of /bin/bash processes owned by you on tools-sgebastion-08
[15:20:51] * bd808 counts 11
[15:21:19] Yes, my fault
[15:21:35] I actually don't see an irssi process for you
[15:22:16] That's strange...
[15:22:26] it looks like you have a bunch of outbound ssh sessions running?
[15:22:55] `pstree -clapu jem` is a nice way to see what is associated with your user
[15:23:02] Yes, I try to use just one main screen and connections from there
[15:23:29] Yes, but I can't do anything in the shell :)
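The `pstree` hint above generalizes. A few stock commands for checking your own footprint on a shared host; nothing Toolforge-specific is assumed here, and note the bastion's actual per-user limit may be enforced by other means than the shell's `ulimit`:

```
# Everything running under a given user, with PIDs and full command lines.
pstree -clapu jem

# Count your own processes and compare against the shell's process limit.
ps -u "$USER" --no-headers | wc -l
ulimit -u
```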
[15:24:13] Please just kill everything
[15:24:30] I think I can recover more or less
[15:24:59] heh.
[15:25:42] So the irssi was there :)
[15:26:09] I guess so :)
[15:26:41] Ok, thanks again, now I'll rebuild it from another server
[15:26:44] !log tools Killed all processes owned by jem on tools-sgebastion-08
[15:26:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:27:10] (not wmf nor wm-es)
[15:53:00] !log tools.bd808-test2 Restarting grid based webservice to test new toolforge proxy registration
[15:53:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.bd808-test2/SAL
[15:54:29] !log tools shutting down tools-proxy-03 T235627
[15:54:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[15:54:32] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[15:54:43] !log tools.bd808-test Restarting k8s based webservice to test new toolforge proxy registration
[15:54:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.bd808-test/SAL
[15:54:50] and that worked too
[15:54:56] so things are looking good to me
[15:54:57] !log tools tools-proxy-05 now has the 185.15.56.11 floating IP as active proxy. The old one, 185.15.56.6, has been freed T235627
[15:55:00] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[16:06:04] !log tools delete VM instance `tools-test-proxy-01` and the puppet prefix `tools-test-proxy`
[16:06:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[16:25:14] !log puppet-diffs syncing puppet facts from tools-puppetmaster-01
[16:25:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Puppet-diffs/SAL
[16:29:25] !log wikimania-support Rebooting scholarships-03 to ensure that the MediaWiki-Vagrant managed LXC container starts on instance boot as expected
[16:29:26] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikimania-support/SAL
[16:29:47] and it did :)
[16:30:01] \o/
[16:31:36] I think that only leaves one known issue with Buster and Cloud VPS (T236487)
[16:31:37] T236487: geoipupdate missing on buster on Cloud VPS - https://phabricator.wikimedia.org/T236487
[19:08:05] mobrovac, Krenair, what is the best way to get https://gerrit.wikimedia.org/r/c/operations/puppet/+/545702 onto the beta cluster parsoid servers?
[19:10:40] actually, not sure if that script applies to beta.
[19:15:45] Hey. Have the nginx or nodejs settings changed in the last day? My websockets suddenly began failing (502).
[19:20:51] !log tools.faebot Admins killed youtube-dl process running on tools-sgebastion-07
[19:20:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.faebot/SAL
[19:22:00] Iluvatar_: the tools front proxy was moved a few hours ago to a new instance. That would have interrupted things that were happening at the same time.
[19:22:23] Iluvatar_: are you not able to do something right now that was working previously?
[19:24:06] Yes, after a long wait with no response, the websocket returns a 502 error. It's nodejs.
[19:24:49] The source code of my nodejs server has not changed.
[19:25:25] Iluvatar_: have you tried restarting the webservice already to see if that changes anything?
[19:25:37] Iluvatar: what's the tool name and URL?
[19:26:04] Yes, of course. I had tried that.
[19:27:01] !log tools.lziad Admins killed `node dist/index.js` process running on tools-sgebastion-07. Please use the job grid or kubernetes instead
[19:27:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lziad/SAL
[19:28:17] toolname swviewer; url: wss://tools.wmflabs.org/iluvatarbot/:9030
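A websocket failure like this can be reproduced from a shell by attempting the HTTP upgrade handshake by hand, which matches the curl testing described below. A sketch against the reported URL; the key is the RFC 6455 sample value, and it is an assumption that this https path maps to the websocket endpoint. A healthy backend answers `101 Switching Protocols`, while here the proxy was returning 502 and the backend a 426:

```
# Hand-rolled websocket upgrade request; expect "HTTP/1.1 101" on success.
curl -i -N \
  -H 'Connection: Upgrade' \
  -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' \
  -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' \
  https://tools.wmflabs.org/iluvatarbot/
```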
[19:47:09] Something happened at ~12:10 (the last activity in the logs). Or over the next few hours.
[19:47:41] 12 o'clock in which timezone?
[19:49:42] UTC
[20:02:37] Iluvatar_: https://tools.wmflabs.org/iluvatarbot/ is returning a page that says "Upgrade required". I am confused that you said the tool is swviewer but the url you gave is for iluvatarbot
[20:05:31] One tool cannot run two kubernetes containers at the same time. So there are two tools: swviewer (php, connects to the wss) and iluvatarbot (nodejs, the wss server)
[20:06:38] and the wss side, which is the iluvatarbot tool, is the part that is working differently than expected?
[20:09:02] Yes, the wss side (IluvatarBot) gives the 502 error. https://imgur.com/a/jWuV9EG :)
[20:19:35] Iluvatar_: 502 is a "bad gateway" response, which if it is coming from the front proxy means that it could not reach your tool's backend web service. Let me see if I can verify that there are errors in the logs of the front proxy.
[20:21:54] Iluvatar_: the front proxy's error log has a lot of lines related to iluvatarbot saying "upstream prematurely closed connection while reading response header from upstream". "Upstream" from the point of view of the front proxy is your nodejs webservice.
[20:23:27] An error in my server's source code? Hmm... it was last changed in June.
[20:24:45] Iluvatar_: Trying to hit the proxied url directly from the front proxy instance using curl, I get an HTTP 426 "upgrade required" response, but it does not include an "Upgrade: ..." header telling my client what protocol to switch to
[20:25:55] We do have a newer version of nginx as the proxy since around 15:30 UTC today.
[20:26:37] I wonder if the older nginx treated the 426 response that is missing the expected "Upgrade: ..." header differently?
[20:30:49] bd808, in the new proxy server, I can see the `proxy_set_header Upgrade $http_upgrade;` setting there..
[20:31:09] yeah, the nginx config looks correct to me
[22:50:50] !log tools.admin Live hacked tool-admin-web/src/Tools.php for front proxy change
[22:50:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.admin/SAL
[22:55:54] !log openstack run labs-ip-alias-dump on cloudservices1003 and cloudservices1004 T235627
[22:55:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Openstack/SAL
[22:56:00] T235627: Toolforge: upgrade main proxy servers to Debian Buster - https://phabricator.wikimedia.org/T235627
[22:58:37] !log tools.admin Updated to 28b15c5 (Rely on split-horizon DNS to find active proxy server)
[22:58:38] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.admin/SAL
[22:59:21] jeh: ^^ that should be future-proof as long as we fix the dns aliaser stuff :)
[22:59:44] sounds good :)
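For reference, the websocket pass-through that the 20:30 exchange above was checking usually looks like the stanza below. Only the `proxy_set_header Upgrade $http_upgrade;` line is quoted from the discussion; the rest is the conventional companion configuration, and the upstream name is a placeholder:

```
location / {
    proxy_pass http://tool_backend;          # placeholder upstream
    proxy_http_version 1.1;                  # required for Upgrade to work
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # keep idle sockets open longer
}
```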