[03:19:55] zhuyifei1999_: does your video2commons tool do GPU-based encoding, or CPU-based? Would you do GPU-based if that were an option?
[03:20:52] It is currently CPU based. I'm not sure ffmpeg supports GPU based (and afaik GPU isn't an option on wmcloud)
[03:21:53] don't remember WMCS having GPU products, boy would that have been cool
[04:11:22] !help Hi, I get an error when I try to use scp to upload a file from my laptop to an instance
[04:11:23] Zoranzoki21: If you don't get a response in 15-30 minutes, please create a phabricator task -- https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?projects=wmcs-team
[04:11:26] channel 0: open failed: administratively prohibited: open failed
[04:11:28] stdio forwarding failed
[04:11:29] kex_exchange_identification: Connection closed by remote host
[04:11:31] lost connection
[04:11:53] I use this: scp ~/Downloads/Matična_ploča.pdf zoranzoki21@srwiki.dev-srwiki.dev-eqiad.wmflabs:/tmp
[04:13:21] Zoranzoki21: is this something you've done successfully in the past? Or a new action you're trying to work out?
[04:15:16] A new action which I want to do
[04:15:22] (to learn :) )
[04:15:28] ok. And you're using a mac?
[04:15:31] kizule@kizule-laptop:~/Downloads$ scp Matična_ploča.pdf zoranzoki21@srwiki-dev.srwiki.dev-eqiad.wmflabs:/tmp -- same error
[04:15:36] andrewbogott: Linux
[04:15:45] have you set up a proxy command already? Does ssh work?
[04:15:50] ssh works
[04:16:36] ok, that's a good sign. Let me dig a bit
[04:17:04] um, well, wait -- srwiki-dev.srwiki.dev-eqiad.wmflabs looks like a typo
[04:17:12] dev-eqiad.wmflabs isn't a valid domain
[04:17:14] Yes, I know
[04:17:20] I tried with the correct one
[04:17:24] The same error shows
[04:17:47] But I copy-pasted the wrong line from the terminal :/
[04:17:48] ok, show me the corrected command please?
[04:18:03] kizule@kizule-laptop:~/Downloads$ scp Matična_ploča.pdf zoranzoki21@srwiki-dev.srwiki.dev-eqiad.wmflabs:/tmp
[04:18:33] that still looks invalid to me
[04:18:37] your domain should be eqiad.wmflabs
[04:19:20] kizule@kizule-laptop:~/Downloads$ scp Matična_ploča.pdf zoranzoki21@srwiki-dev.srwiki.dev.eqiad.wmflabs:/tmp
[04:19:27] This?
[04:19:32] ok, let's back up
[04:19:35] what is your instance name?
[04:19:39] and what is your project name?
[04:19:53] (If ssh is working you must have had the domain right at some point)
[04:20:07] The username with which I connect is zoranzoki21
[04:20:14] And ssh works for me
[04:20:28] as it should :D
[04:20:33] show me a working ssh command?
[04:21:05] zoranzoki21@srwiki-dev:/tmp$ ssh srwiki-dev.srwiki-dev.eqiad.wmflabs
[04:21:16] huh wrong terminal
[04:21:34] Bjt
[04:21:41] *But same command :)
[04:21:47] so if you use the same domain for scp as for ssh you might have better luck
[04:21:52] or at least that's a good place to start
[04:22:24] I use the same domain for scp as for ssh, I don't know another :D
[04:22:38] so far all of the scp commands you have pasted use a different domain
[04:22:59] Which different one?
[04:23:05] for example srwiki-dev.srwiki.dev.eqiad.wmflabs has five components
[04:23:13] whereas the one for ssh has four
[04:24:19] for scp I use zoranzoki21@ because the username on my pc is kizule. For ssh, User zoranzoki21 is defined in my config
[04:25:01] when I say 'domain' I am referring to the fully-qualified hostname. Your host has the name srwiki-dev.srwiki.dev.eqiad.wmflabs
[04:25:06] that is the name you need to use for scp
[04:25:17] um, wait, sorry, now I pasted the wrong thing :)
[04:25:34] you need to use srwiki-dev.srwiki-dev.eqiad.wmflabs
[04:25:41] which is different from srwiki-dev.srwiki.dev.eqiad.wmflabs
[04:25:42] see?
[04:26:46] :D works
[04:26:58] cool!
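The errors above ("administratively prohibited", "stdio forwarding failed") are what OpenSSH prints when the bastion refuses to forward a connection to a hostname it does not recognize, so a typo in the instance FQDN breaks scp even while ssh to the correctly spelled name keeps working. A minimal sketch of the kind of ~/.ssh/config stanza that makes ssh and scp behave identically -- the bastion hostname here is an assumption; keep whatever your working ssh setup already uses:

    # ~/.ssh/config -- one stanza covers every Cloud VPS instance FQDN
    Host *.eqiad.wmflabs
        User zoranzoki21                       # shell username on the instance
        # Hop through a bastion; 'bastion.wmflabs.org' is illustrative
        ProxyCommand ssh -W %h:%p bastion.wmflabs.org

scp reads the same config as ssh, so with User set in the stanza no user@ prefix is needed and scp takes exactly the hostname that ssh takes:

    scp ~/Downloads/Matična_ploča.pdf srwiki-dev.srwiki-dev.eqiad.wmflabs:/tmp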
[04:29:33] Where can I see logs on the instance?
[04:30:45] In /var/log there are many files
[04:32:12] it depends on what log you want to see
[04:32:39] All ok, I found what I need: https://prnt.sc/q94jpw
[04:33:02] Can I delete these?
[04:33:22] I mean the files
[04:33:49] Because there are archives of old files which I think I won't need; they only take up space
[04:35:01] Also, why does this show every time I log in or run sudo -i puppet agent --test --verbose
[04:35:03] (/Stage[main]/Profile::Ldap::Client::Labs/Notify[LDAP client stack]/message) defined 'message' as 'The LDAP client stack for this host is: sssd/sudo'
[04:36:47] Zoranzoki21: they all do that, it's for admins to track some work in progress
[04:36:50] you can ignore it
[04:36:58] Zoranzoki21: deleting old log files is generally fine
[04:37:06] Yes, I mean the old ones
[04:37:23] if it's an open logfile that's currently being written to, better to do 'truncate --size 0 <file>' so whoever's writing to it doesn't get confused
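To expand on that advice: rotated archives (for example syslog.2.gz) are closed files and can simply be removed, but a logfile a daemon still has open should be truncated in place. Deleting an open file only unlinks its name; the writer keeps the inode, so the disk space is not freed until the process exits and new output goes to a file you can no longer see. A sketch with illustrative paths:

    # rotated archives are closed files; removing them is safe
    sudo rm /var/log/syslog.2.gz /var/log/syslog.3.gz

    # an open logfile keeps its inode and the writer's file handle;
    # empty it in place instead of deleting it
    sudo truncate --size 0 /var/log/syslog

truncate --size 0 shrinks the file to zero bytes while leaving the writer's file descriptor valid, which is the "doesn't get confused" behavior mentioned above.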
[04:38:24] All is ok now, thank you very much for the help! I'm going to eat...
[04:38:49] bon appétit
[04:49:52] :D ty
[08:03:37] thanks andrewbogott :-) challenging help request
[15:01:18] Technical Advice IRC meeting starting in 60 minutes in channel #wikimedia-tech, hosts: @CFisch_WMDE & @James_F - all questions welcome, more info: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[15:15:41] !log tools.integraality Deploy latest from Git master: 257618f1 (T240312)
[15:15:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.integraality/SAL
[15:50:57] Technical Advice IRC meeting starting in 10 minutes in channel #wikimedia-tech, hosts: @Lucas_WMDE - all questions welcome, more info: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[17:24:49] !log tools deleted and/or truncated a bunch of logfiles on tools-worker-1031
[17:24:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[17:35:22] I think there's an issue with whatever process creates replica.my.cnf - I created a tool a few days ago (multicompare), and it still has no replica.my.cnf. I created one today, and it didn't get one either - however, today's was very recent (catcompare)
[18:10:21] SQL: thanks for the ping on that. It does look like our "maintain-dbusers" service has broken itself. I will restart it, and then try to figure out why that did not page the admin team
[18:12:19] bstorm_: ^ I actually need to run, but if you have time or want to delegate to somebody else... the last log line from it before I restarted the process on labstore1004 was `Dec 01 22:45:45 labstore1004 /usr/local/sbin/maintain-dbusers[12624]: Could not connect to labsdb1009.eqiad.wmnet due to (2003, "Can't connect to MySQL server on 'labsdb1009.eqiad.wmnet' ([Errno -2] Name or service not known)"). Skipping.`
[18:13:07] Ahhh, ok thanks
[18:13:30] !log tools Restarted maintain-dbusers on labstore1004. Process had not logged any account creations since 2019-12-01T22:45:45.
[18:13:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[18:14:09] I'm not surprised it broke, but I thought we had that service monitored now.
[18:14:11] jeh: if you have a moment to look, I'm in an impromptu meeting. Makes me wonder what's up there.
[18:14:27] sure, I can take a look
[18:14:28] There was DB maintenance... that may have left it broken and be no biggie?
[18:14:30] Thanks
[18:15:03] yeah, I think it got sad when a db was offline, but the fact it got stuck and didn't either restart itself or page is more worrying
[18:18:53] SQL: confirmed that multicompare and catcompare both have their replica.my.cnf after the service restart. Thanks again for the ping.
[19:26:07] As for the restart, we may have set it intentionally to not restart more than x times. That can be fixed. As for paging, I see that it did page. However, it did not stay listed as failed because it may not have a specific alert for it, just systemd
[19:26:31] It was in the context of masses of services failing for network issues, I see.
[19:26:41] So when most everything recovered, we didn't check that one.
[19:38:15] T240496
[19:38:16] T240496: Ensure maintain-dbusers is monitored independently of just systemd - https://phabricator.wikimedia.org/T240496
[20:54:37] bstorm_: maybe it should check for a proc instead of systemd state?
[20:54:50] jeh: tyvm, sorry, I ended up going out
[20:59:37] Zppix, nah, it'll work fine once it's actually monitored I suspect :) It's just not monitored right for some reason
[20:59:48] weird
[21:00:57] !log openstack schedule icinga downtime until Dec 20th 2019 on cloudvirt1022 for ceph testing T239918
[21:00:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Openstack/SAL
[21:00:59] T239918: Deploy Ceph Nautilus on Buster - https://phabricator.wikimedia.org/T239918
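On the restart-limit point: systemd can be told to restart a failed service automatically, and separately how many failures within a window it will tolerate before giving up. A hypothetical drop-in for a unit like maintain-dbusers.service -- the directives are standard systemd ones, but the values and the idea that the deployed unit uses them are assumptions:

    # hypothetical: /etc/systemd/system/maintain-dbusers.service.d/restart.conf
    [Unit]
    StartLimitIntervalSec=600   # count failures over a 10-minute window...
    StartLimitBurst=5           # ...and stop retrying after 5 of them

    [Service]
    Restart=on-failure          # restart whenever the process exits uncleanly
    RestartSec=30               # pause between attempts

Note that Restart= only fires when the process actually exits; a process that stays alive but stops doing work, as happened here, looks healthy both to systemd and to a bare process check, which is why T240496 asks for monitoring independent of systemd state.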