[06:01:50] !log wm-bot Restarted bot completely, two instances refused to reconnect.
[06:02:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wm-bot/SAL
[11:31:55] !log cloudinfra cleanup local changes in ops/puppet git repo in cloud-puppetmaster-03
[11:31:57] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Cloudinfra/SAL
[12:23:34] I can't access https://whois-referral.toolforge.org/gateway.py?lookup=true&ip=90.38.14.114 nor https://ipcheck.toolforge.org/index.php?ip=90.38.14.114 - is it just a coincidence, or a sign of an issue?
[12:23:53] this is the message https://usercontent.irccloud-cdn.com/file/JPsDiceG/image.png
[12:26:39] Urbanecm: I also saw it a couple of times today. It seems to be intermittent
[12:26:42] I don't have more info
[12:27:35] arturo: thanks. Is there sth I can do?
[12:27:58] perhaps open a phab task so we don't forget to check what's going on with the ingress
[12:28:08] (k8s ingress)
[12:55:36] greg-g: Who should I contact to ask about pushing Blubber-generated Docker images to the WMF repo?
[12:56:41] Also wondering how we differentiate a patch build from a minor/major build that should be released.
[12:57:08] I.e. we might not want to deploy on every successful build in Jenkins.
[13:03:19] greg-g: I'm away, but feel free to ping me at karl.wettin@wikimedia.se instead.
[14:09:38] what's the best way to serve large files from Toolforge? all options appear kind of slow (not more than 1MB/s download speed)
[14:10:39] !log iiab adding Sam Reed (reedy) as a user so he can investigate an rsync issue
[14:10:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Iiab/SAL
[14:22:08] kalle: marxarelli or longma or generally the #wikimedia-releng channel
[14:30:06] greg-g: oh, I'm in the wrong channel haha sorry
[14:30:15] Didn't see that.
[14:38:17] @op
[14:38:35] I am running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 2.8.1.0 [libirc v. 1.0.3] my source code is licensed under GPL and located at https://github.com/benapetr/wikimedia-bot I will be very happy if you fix my bugs or implement new features
[14:38:35] @help
[14:40:40] greg-g: There we go, I wasn't identified and thus was thrown out of the channel. Really thought I asked in there before :) Thanks!
[14:45:25] !log admin icinga downtime every cloud* lab* host for 60 minutes for keystone maintenance
[14:45:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:50:16] Reedy: should be working now; try logging in again?
[14:51:51] andrewbogott: horizon works for me
[14:52:42] yeah, for me too
[14:54:29] thanks!
[15:50:46] hi cloud folks, toolforge question - is there a way to take over an existing project if the only maintainer has been inactive for a long time (and isn't responding to messages)?
[15:52:20] GenNotability: I think this page is what you're looking for https://wikitech.wikimedia.org/wiki/Help:Toolforge/Abandoned_tool_policy
[15:52:56] andrewbogott: yup, that's exactly what I'm looking for
[15:52:57] thank you!
[16:05:31] andrewbogott: i can't see project members in Horizon per the email you just sent out (I get a red box saying "Error: Unable to list members." when I go to Access -> Project Members on any of my projects). I'm nearly 100% certain that I could see the project members prior. suggestion?
[16:06:04] isaacj: I will look at that; in the meantime you can use https://openstack-browser.toolforge.org/
[16:06:25] :thumbs up:
[16:24:16] bennofs: the download "rate limit" you are seeing is likely actually caused by the NFS read rate limiting that each base server attached to the Toolforge NFS share has.
[16:24:56] files can't serve out faster than they can be read from disk basically
[16:57:17] bd808: does that read limit also apply to /data/scratch?
[16:57:26] bennofs: yes
[16:58:43] All NFS client connections originating from Cloud VPS and Toolforge are subject to network traffic based rate limiting
[16:59:14] andrewbogott: my VM remains inaccessible to me
[16:59:43] $ ssh cyberbot-exec-iabot-01.cyberbot.wikimedia.cloud
[16:59:43] channel 0: open failed: administratively prohibited: open failed
[16:59:43] stdio forwarding failed
[16:59:45] kex_exchange_identification: Connection closed by remote host
[16:59:58] bstorm: ^
[17:00:57] bd808: ^
[17:01:08] Cyberpower678: your host name is cyberbot-exec-iabot-01.cyberbot.eqiad1.wikimedia.cloud
[17:01:13] Cyberpower678: ssh cyberbot-exec-iabot-01.cyberbot.eqiad1.wikimedia.cloud
[17:01:22] Ah. Let me try that.
[17:02:05] cyberpower678@cyberbot-exec-iabot-01.cyberbot.eqiad1.wikimedia.cloud: Permission denied (publickey).
[17:02:06] :-(
[17:02:24] This major login restructure is a major headache. :-(
[17:02:57] ssh cyberbot-exec-iabot-01.cyberbot.eqiad1.wikimedia.cloud works for me
[17:04:13] But it's not working over proxy jump for some reason. :/
[17:05:06] Cyberpower678: I'm going to take a look at it because it is showing an old puppet commit
[17:05:08] Last puppet commit: (3aea8111d8) Bstorm - tools-grid: Install correct version of php-igbinary
[17:05:24] At least on console
[17:05:36] It may or may not have puppet doing its thing quite right
[17:05:50] Nah
[17:05:54] It's up to date now
[17:05:56] Last puppet commit: (13f99dc27d) Andrew Bogott - openldap: increase query size limit
[17:07:09] Oct 6 17:04:46 cyberbot-exec-iabot-01 sshd[17005]: Accepted publickey for cyberpower678 from 172.16.1.136
[17:07:15] That looks like login worked?
[17:07:26] bstorm: different machine. That's WinSCP
[17:07:31] It just tunnels through
[17:07:34] Ah ok, so this is ssh config
[17:07:40] I see
[17:07:55] Hrm
[17:08:15] But the proxy jump gave me a different fingerprint than the actual machine is supposed to give me. So it's jumping somewhere it shouldn't.
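[Editor's note: the ProxyJump path being debugged here (eventually traced to an identities file not being passed) usually comes down to the client-side OpenSSH config. A minimal sketch for reference; the instance name is from the log, while the bastion host name, user, and key path are placeholder assumptions, not the user's actual config:]

```
# ~/.ssh/config -- illustrative sketch only.
# Bastion host name and key path are assumptions.
Host bastion.wmcloud.org
    User cyberpower678
    IdentityFile ~/.ssh/id_ed25519

Host *.wikimedia.cloud
    User cyberpower678
    ProxyJump bastion.wmcloud.org
    # If IdentityFile is set only for the bastion, the key may never be
    # offered to the final host, which shows up as
    # "Permission denied (publickey)" even though the bastion hop works.
    IdentityFile ~/.ssh/id_ed25519
```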
[17:08:28] That fingerprint may have been for the bastion
[17:08:36] I doubt it.
[17:09:03] You should see a failed login somewhere for cyberpower678
[17:10:12] Bastion is SHA256:s+xuLo91PcVIFcFdxPQC7IXgJ2nYxaXcqa7bKE7/ufA
[17:11:06] I see you connecting to the bastion and immediately closing the session
[17:11:27] looking around a bit
[17:12:38] The login path is working fine for me
[17:12:56] What's your config snippet look like right now?
[17:13:07] Ah. It's fixed now.
[17:13:19] My identities file wasn't getting passed.
[17:13:25] Ah ok, that'll do it
[17:13:48] I wasn't finding quite the access denial I was looking for, but the log was very busy
[18:40:38] !log tools draining and cordoning tools-k8s-worker-52 and tools-k8s-worker-38 for ceph migration
[18:40:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[18:51:32] !log tools uncordoned tools-k8s-worker-52
[18:51:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[19:04:45] !log tools uncordoned tools-k8s-worker-38
[19:04:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:29:53] !log admin draining cloudvirt1013 for upgrade to 10G networking
[21:29:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[21:30:09] !log admin moved cloudvirt1013 out of the 'ceph' aggregate and into the 'maintenance' aggregate for T243414
[21:30:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[21:30:11] T243414: relocate/reimage cloudvirt1013 with 10G interfaces - https://phabricator.wikimedia.org/T243414
[22:09:33] !help Hello, I've closed T223777 and T246593 as resolved? Is it okay?
[22:09:33] If you don't get a response in 15-30 minutes, please create a phabricator task -- https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?projects=wmcs-kanban
[22:09:34] T223777: Add ca_ES.UTF-8 locale to Toolforge hosts - https://phabricator.wikimedia.org/T223777
[22:09:34] T246593: Add eu_ES.utf8 locale to Toolforge - https://phabricator.wikimedia.org/T246593
[22:09:50] Kizule: definitely! That's helpful.
[22:10:11] bstorm: Should T263339 be closed as well?
[22:10:11] T263339: Add tr_TR.utf8 locale to ToolForge - https://phabricator.wikimedia.org/T263339
[22:10:28] those are both fixed for the grid, but not yet for kubernetes containers
[22:10:51] Ah yeah, that's why tr_TR.utf8 was still open
[22:11:08] bd808: Yeah, okay. I'll leave the task for tr_TR.utf8 open.
[22:11:29] Thanks bstorm.
[22:11:55] Do you have a plan to fix it for the kubernetes containers as well?
[22:11:57] I cannot say if the others are in the kubernetes containers yet (likely not), but they will be if we close that one.
[22:12:01] I think we should just add the "all locales" package to the containers. it's probably a bit bloated, but easier than all the "oops, we also need..." tasks
[22:12:15] It'll speed up the build at least :)
[22:12:37] Yeah, adding "all locales" would be easier, instead of having to create a task for each locale.
[22:13:06] Per T263339#6481311 it won't take much space.
[22:13:06] If my 'git fetch' for puppet ever finishes I'll put up a patch for adding that to the containers
[22:15:00] bstorm: Ok.
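[Editor's note: the "all locales" package discussed above is Debian's locales-all, which ships every precompiled glibc locale. A minimal sketch of what such a container change could look like; the base image and build layout are assumptions for illustration, not the actual Toolforge image definition:]

```
# Illustrative Dockerfile sketch only -- base image and layout are assumed.
FROM debian:buster

# locales-all provides every precompiled locale (ca_ES.UTF-8, eu_ES.utf8,
# tr_TR.utf8, ...), avoiding a per-locale locale-gen step and the
# one-task-per-locale requests above, at the cost of some image size.
RUN apt-get update \
    && apt-get install -y --no-install-recommends locales-all \
    && rm -rf /var/lib/apt/lists/*
```

[Inside such a container, `locale -a` would then list the requested locales without further per-locale tasks.]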