[00:24:19] Hello. Does anyone know if you can run the tee command, or its equivalent, in bash mode to save output? [00:25:45] krenair@tools-sgebastion-07:~$ echo 'hello world' | tee hello_world.txt [00:25:45] hello world [00:25:45] krenair@tools-sgebastion-07:~$ cat hello_world.txt [00:25:45] hello world [00:25:47] yes [00:26:49] Although I believe interactive programs are discouraged, for some reason [00:27:32] because everyone running everything on one machine is a Bad Idea [00:28:31] so people send long-running stuff off to non-bastion hosts for execution [00:29:00] krenair: if I run cat output.txt I get no such file [00:29:31] gonna have to show the commands leading up to this [00:30:23] sorry I want to run a bash txt file [00:30:37] ? [00:30:51] It's just a series of sql Xwiki "select..." > X.txt [00:31:10] It's quite a lot so I want to save the output to see if there are any errors [00:31:33] so you pipe stuff through tee but then the file you give tee does not get created? [00:32:22] I put tee at first but it doesn't like that [00:33:21] please just paste the command, describe what you expect to happen and what actually happens [00:34:37] one sec [00:36:50] https://imgur.com/a/Ki5yer5 [00:37:18] i want to run tee to save the output of the sql commands [00:37:34] save the output you see in the shell as a file [00:38:04] so, if there are errors after running thousands of sql selects i can see which one raised an exception [00:38:07] I don't see where you're trying to use tee [00:38:22] at the beginning of the text file [00:38:43] it's behind the shell in the image [00:38:58] oh on notepad behind putty [00:39:00] behind the command prompt [00:39:05] yes sir [00:40:12] please run 'cat test.txt' so I can see the actual contents of the actual file [00:41:56] https://imgur.com/a/Hez35kg [00:43:04] I tried what you suggested with the echo [00:43:15] okay this is not what you want to do at all [00:43:57] No. I want to save the output that is printed in putting.
It shows where exceptions were raised [00:44:07] in putty* [00:44:34] is the idea that the first sql command has output copied to one file, the second has output copied to another file, and the output from both goes to output.txt? [00:45:21] yes [00:46:24] I suggest you get rid of the echo and tee lines from your current test.txt [00:46:33] ok [00:46:52] change the '>' characters after the sql commands to be '| tee' [00:47:18] then when you run 'source test.txt', do 'source test.txt | tee output.txt' [00:48:01] there are other ways to do this, but let's use tee for the sake of example [00:51:00] one sec [00:54:18] krenair: that saved the results of the second command to the output. see second image https://imgur.com/a/Hez35kg [00:55:00] I saved the txt, putty and results in that image, one behind the other. [00:55:34] second image? [00:55:42] actually it saved the results of the first command [00:55:51] yes, scroll down to the second image [00:56:21] there is no second image [00:56:55] ok I reposted https://imgur.com/a/tSyQfE1 [00:58:33] it doesn't look like you hit enter in putty [00:59:21] I cleared it and then wrote the command so it would be clearer [00:59:28] but I will say that your zhwiki SQL command looks invalid [00:59:42] and that it will likely output to stderr instead of stdout [00:59:45] and that you should see an error [00:59:51] Yes, I want it to raise an error [01:00:00] and you want the error to go to the file? [01:00:26] so I can see whether the error it raises is saved in the output [01:00:52] put this before the pipe characters after the sql commands: 2>&1 [01:01:58] that should redirect all stderr to stdout [01:02:13] sql viwiki_p "select * from logging where log_title = 'Doctor_Strange:_Phù_thủy_tối_thượng'" 2>&1 tee Doctor_Strange_film%_vi.txt; [01:02:15] like that?
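(Editor's aside: krenair's suggestion above — replace each '>' with '| tee' inside the script, then run the whole script through a second tee — can be sketched like this. The echo commands are stand-ins for the real `sql ... "select ..."` commands so the sketch runs anywhere with bash; test.txt and output.txt are the file names from the discussion.)

```shell
# test.txt: tee copies each command's stdout to its own file AND passes it
# through (the echo lines stand in for the sql commands discussed above)
cat > test.txt <<'EOF'
echo "results of first query"  | tee first.txt
echo "results of second query" | tee second.txt
EOF

# the outer tee then collects everything the script printed into output.txt
source test.txt | tee output.txt
```

Because tee passes its input through unchanged, first.txt and second.txt each end up holding one command's output, while output.txt holds the combined output of both.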
[01:02:39] no [01:02:46] you dropped the pipe character [01:03:01] sql viwiki_p "select * from logging where log_title = 'Doctor_Strange:_Phù_thủy_tối_thượng'" 2>&1 | tee Doctor_Strange_film%_vi.txt; [01:03:09] ok let me try [01:06:23] thank you that is more like it [01:06:27] https://imgur.com/a/Abf2f2l [01:06:39] but, it's saving everything to that one file [01:06:56] isn't that what you want? [01:07:02] is there a way to just save what's printed in the putty screen? [01:07:15] ... but that's what it does [01:09:33] That can work, thank you. But, is there a way to save the output of the putty screen by itself as a file without the results from each command? [01:10:00] I can do this in mysql workbench through mysqld in the command prompt [01:10:30] I do not understand what you are trying to do [01:10:54] I thought you wanted the results of those commands saved [01:11:06] what other output is there? [01:11:44] the putty output is the third file [01:12:06] so run the test.txt file [01:12:24] 1st sql command saved into 1st txt file [01:12:26] so you just want output.txt, and do not want the individual files that come out of the sql commands [01:12:38] 2nd sql command saved into 2nd txt file [01:13:01] all output printed into the putty window saved into 3rd txt file [01:13:10] yes sir [01:13:40] well just remove the '2>&1 | tee ...' parts from inside the file [01:14:24] so just sql zhwiki_p "select * from logging where log_title = '奇異博士_(電影)" > Doctor_Strange_film%_zh.txt; [01:15:09] no [01:15:40] you remove all piping and output redirection stuff from the end of the sql lines [01:15:43] not just replace it [01:16:25] I am sorry. I don't follow [01:17:37] If I removed the piping and output redirection, won't that save everything to one file?
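(Editor's aside: the `2>&1` fix suggested at [01:00:52] works because a pipe only carries stdout; error messages from the sql wrapper go to stderr and would bypass tee entirely. A minimal runnable sketch — echo stands in for a command that writes to both streams, and the capture file names here are made up for the demo:)

```shell
# without 2>&1 the error line goes straight to the terminal, not into the file
{ echo "simulated query result"; echo "simulated error" >&2; } | tee without_redirect.txt

# with 2>&1 (placed before the pipe) stderr is merged into stdout first,
# so tee captures both the results and any error messages
{ echo "simulated query result"; echo "simulated error" >&2; } 2>&1 | tee with_redirect.txt
```

After running this, without_redirect.txt contains only the result line, while with_redirect.txt contains both lines.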
[01:18:00] Yes [01:18:06] That's what you were trying to do [01:18:30] No, three separate files [01:19:12] I feel like we're going round in circles here [01:20:05] anyway it's past 2AM for me, I need to be up in 6 hours [01:20:09] maybe someone else can help [01:20:52] I'm sorry. But, isn't there a way to save ONLY what's printed on the putty screen? [01:25:17] Krenair: thank you for the help. Sleep well. I'm sorry if I didn't explain it clearly [04:52:35] Yes, hello. [04:52:40] legoktm: yt? [04:56:59] The column ipblocks.ipb_by_text has gone missing from the English Wikipedia database replica as of March 2019. [04:57:16] Does anyone know where it went? [04:57:48] https://www.mediawiki.org/wiki/Manual:Ipblocks_table doesn't mention it going away and says MediaWiki 1.33 has the field. And yet. [04:58:24] Related to the other day, unrelated to ipblocks, https://en.wikipedia.org/wiki/Wikipedia:Database_reports/Potentially_untagged_misspellings got updated btw. [05:06:42] https://tools.wmflabs.org/admin/tools is getting truncated? [05:14:01] Okay, I filed https://phabricator.wikimedia.org/T225046 about that. [05:26:38] !help [05:26:39] Marybelle: If you don't get a response in 15-30 minutes, please create a phabricator task -- https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?projects=wmcs-team [05:26:52] I already filed two. [05:26:56] What's a third. [05:32:15] https://phabricator.wikimedia.org/T225048 [08:56:52] !log integration move integration-slave-docker-1059 and integration-slave-docker-1058 to cloudvirt1028 (T223971) [08:56:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Integration/SAL [08:56:56] T223971: Old cloudvirt (with Intel Xeon) are twice slower than new ones (Intel Sky Lake) - https://phabricator.wikimedia.org/T223971 [10:21:34] hi, what's the process for reviewing toolforge registrations these days?
https://toolsadmin.wikimedia.org/tools/membership/status/524 has been stuck for a while [10:24:18] tgr: approved [10:26:31] thanks! [11:38:30] !log toolsbeta delete instances arturo-sgeexec-sssd-test-2, arturo-sgeexec-sssd-test-1, arturo-bastion-sssd-test, unused [11:38:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL [11:42:09] !log toolsbeta create VM `toolsbeta-k8s-master-arturo-3` for T215531 (so I have 3 master nodes in this k8s deployment) [11:42:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL [11:42:11] T215531: Deploy upgraded Kubernetes to toolsbeta - https://phabricator.wikimedia.org/T215531 [12:32:21] !log toolsbeta drop puppet prefix `toolsbeta-k8s-master-arturo` and create `toolsbeta-arturo-k8s-master` since there is also `toolsbeta-k8s-master` which get applied to my VMs T215531 [12:32:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL [12:32:24] T215531: Deploy upgraded Kubernetes to toolsbeta - https://phabricator.wikimedia.org/T215531 [12:33:23] !log toolsbeta drop VM instances toolsbeta-k8s-master-arturo-[1-3] and create toolsbeta-arturo-k8s-master-[1-3] T215531 [12:33:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL [12:40:42] !log toolsbeta rebase git repos in toolsbeta-puppetmaster-02. 
There were some rebase problems in labs/private that required me to re-create one of the [local] patches (puppetdb secrets) by hand [12:40:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL [14:01:13] Technical Advice IRC meeting starting in 60 minutes in channel #wikimedia-tech, hosts: @CFisch_WMDE & @bd808 - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting [14:50:59] Technical Advice IRC meeting starting in 10 minutes in channel #wikimedia-tech, hosts: @CFisch_WMDE & @bd808 - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting [15:03:30] !log tools.wikiloves Deploy latest from Git master: 5317bb7, aabd6ec (T224862) [15:03:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikiloves/SAL [18:16:41] !log tools depooling and moving tools-sgeexec-0921 and tools-sgeexec-0929 [18:16:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL [18:24:45] !log maps moving maps-puppetmaster to cloudvirt1029 [18:24:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Maps/SAL [18:30:02] zhuyifei1999_: I'd like to move commonsarchive-prod to a new host; can you advise about when it might be safe to do so? [18:30:40] should be safe anytime [18:30:50] it's not like it connects to other instances [18:31:04] cool, I'll do it now then. Thanks! [18:32:00] ;) [18:33:20] !log tools repooled tools-sgeexec-0921 and tools-sgeexec-0929 [18:33:23] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL [18:57:18] Hydriz: sometime in the next few days I'd like to move incubator-mw, dumps-4 and dumps-5. Are there better/worse times to move them? Or steps that need taking before they go down? [19:12:15] bearloga: mind if I move discovery-testing-01 to a new host? It'll be down for a few minutes during the copy.
[19:34:11] !log deployment-prep moving deployment-imagescaler03 to cloudvirt1029 [19:34:15] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/SAL [19:50:14] !log tools.wikibugs Updated channels.yaml to: 445187388f45f0a9dd7d038cdfe41a5990e373a8 Add #wikimedia-codehealth channel [19:50:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL [19:51:55] T225130 [19:51:56] T225130: Whitelist wikibugs for #wikimedia-codehealth - https://phabricator.wikimedia.org/T225130 [20:02:03] !log tools.stashbot Restarted bot to pick up config change adding #wikimedia-codehealth [20:02:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stashbot/SAL [20:02:36] kostajh: ^^ done. The bot should show up in the channel in the next couple of minutes [20:02:45] bd808: thanks! [20:02:55] * bd808 needs to make that something that can happen without a restart [20:16:40] Anyone here who is involved in the actor table normalization? Is that "new" table already fully functional? [20:16:52] I am trying to migrate a tool to the new schema: basically from "SELECT ... FROM revision_userindex WHERE rev_user=..." to "SELECT ... FROM revision JOIN actor ON rev_actor=actor_id WHERE actor_user=..." [20:17:21] However, performance is really *baaaad*; that bad in fact, that the server reports "SQLSTATE[HY000]: General error: 2006 MySQL server has gone away" [20:19:33] mys_721tx: we've got a patch queued up that may help -- https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/514548/ -- probably not going to be live today though [20:19:42] Can we make changes to proposed O auth consumers [20:19:43] https://meta.wikimedia.org/wiki/Special:OAuthConsumerRegistration/list [20:20:07] I mean changing the OAuth "callback URL" [20:20:29] gyan: there are a few fields you can edit after submitting, but not many. 
It's ok to put in a new request though and just increment the version number [20:20:53] There are so many consumers. [20:20:53] callback cannot be edited, so if you need to change that it's a new request [20:21:04] Can we delete old and expired consumers? [20:21:22] admins can disable them [20:21:44] there is no self-serve off switch though as far as I know [20:22:39] looks like today there are 874 approved consumers and only 72 disabled ones [20:22:45] MisterSynergy: you can probably get more help with a full query or an EXPLAIN [20:23:06] thnx, will have a look [20:23:40] I'm going to guess that part of MisterSynergy's problem is that the actor view requires 8 correlated sub-queries. That's what the patch I pointed to is trying to help with [20:23:44] https://tools.wmflabs.org/sql-optimizer looks helpful for doing the EXPLAIN [20:24:14] +1 sql-optimizer isn't perfect but it's the easiest way to try out an EXPLAIN [20:26:08] @bryan if we select "Allow consumer to specify a callback in requests and use "callback" URL above as a required prefix", the app is not working the way it should be. [20:27:15] Please ignore the above sentence. [20:27:40] bd808: Thanks for the update! Please let me know once it's clear to proceed. [20:30:12] now I have an EXPLAIN. shall I post a link here? It gives me five tips, none of them useful (for me) [20:31:06] there are indeed lots of subqueries on tables which I do not query [20:33:07] MisterSynergy: share and some of us can look. There might be an easy fix...
maybe :) [20:33:20] https://tools.wmflabs.org/sql-optimizer?use=dewiki_p&sql=SELECT+MIN%28rev_timestamp%29+AS+first_edit%2C+MAX%28rev_timestamp%29+AS+last_edit+FROM+revision+JOIN+actor+ON+revision.rev_actor%3Dactor.actor_id+WHERE+actor_user%3D1546577 [20:34:07] I think a lot of replica users could deepen their understanding by viewing https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/refs/heads/production/modules/profile/templates/labs/db/views/maintain-views.yaml [20:34:55] maybe even an explanation on the sql-optimizer page, 'Bear in mind that access to the raw tables is not provided, your queries on _p databases are executed on top of the views defined here' [20:37:01] MisterSynergy: the 2 table scans for the revision table are going to be a part of the cause for slowness. I wonder what happens if you look up the actor_id first and then just search revision using that? [20:37:41] MIN()/MAX() queries on giant tables are always problematic [20:38:07] (except when the field being min/max'ed is indexed) [20:38:24] well it worked before the migration was started :-) [20:39:08] do you mean that I do something like "SELECT ... FROM revision WHERE rev_actor=(SELECT actor_id FROM actor WHERE actor_user=...)" instead? [20:39:30] that would avoid the join, but it is not really quicker [20:39:32] MisterSynergy: I actually mean 2 disconnected sql queries [20:40:16] select actor_id from actor where actor_user=1546577; save the value, and then select ... using that value [20:40:54] trying... [20:41:03] does not look good [20:41:21] the actor lookup took 0.01s for me. revision scan still running [20:44:01] I did this with the rev_user field in the previous schema, typically for 5...15 user IDs one after another. It was quick enough for a webservice (order of 1 to 10 seconds per request, depending on edit count of the involved user IDs) [20:44:36] MisterSynergy: this is fast...
and I think right -- select min(revactor_timestamp), max(revactor_timestamp) from revision_actor_temp where revactor_actor = 1546577; [20:45:09] okay, how temporary is the revision_actor_temp table? [20:45:11] revactor_actor is actually the actor_user value [20:45:26] good question [20:45:38] https://www.mediawiki.org/wiki/Manual:Revision_actor_temp_table is not helpful ;-) [20:45:44] bstorm_: do you know when the revision_actor_temp is scheduled to die? [20:45:57] sadly no [20:46:11] MisterSynergy: it's a custom view that only exists in the Wiki Replica databases [20:46:12] anomie might? [20:47:08] no it's not [20:47:19] it's from MediaWiki Core [20:48:01] https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/core/+/refs/heads/master/maintenance/tables.sql#460 [20:48:02] oh, you're right [20:48:46] custom view would be my first assumption too FWIW [20:48:50] I was looking at revision_userindex's definition, not revision_actor_temp [20:49:35] sounds like it IS a temporary table for the next months or a year or so, right? [20:49:39] * bd808 would benefit from better docs on the view layer like everyone else [20:49:56] maybe if we had like [20:49:58] bd808: T215466, has no timeframe because it has several blockers. [20:49:59] T215466: Remove revision_comment_temp and revision_actor_temp - https://phabricator.wikimedia.org/T215466 [20:50:01] somewhere you put in a database name [20:50:04] you get a list of views [20:50:06] and it's annotated [20:50:14] this view is a fullview of the underlying table, documented at x [20:50:22] this view is a partial view, documented at y [20:50:40] ? [20:51:11] I could use it for my tool in the meantime, but what do I do when it finally goes away?
I'm going to have the same problem then, or will the revision table be able to provide the same performance (for queries such as mine)? [20:51:42] bd808: Also, `select min(rev_timestamp), max(rev_timestamp) from revision_userindex where rev_actor = 1546577;` should be just as fast now without directly referring to revision_actor_temp. [20:52:55] Oh, right. Because revision_userindex is set up to use the same indexes you get with revision_actor_temp [20:53:11] * bd808 does not hold this data model in his head yet [20:55:03] MisterSynergy: I think anomie's suggestion of using revision_userindex now is the right one. Long term... hopefully we can keep that working or at least have a fairly simple migration path to whatever is 'better' [20:58:02] MisterSynergy: in case you are feeling lost here, you are not alone. I've been following all the changes, but only casually and it's confusing for me too. [20:58:43] it's already quite helpful [20:59:21] I am trying this solution with an extra query for the actor_id and then the modified query on revision_userindex [21:01:21] Have to say that the database structure increasingly requires MySQL expert knowledge. As a hobby programmer with mere self-taught SQL skills I am definitely out of my comfort zone here :-) [21:02:25] the replicas have to follow what MediaWiki does [21:02:43] and MediaWiki is certainly under no obligation to have its schema be easy to understand or perform well in our case [21:04:02] yeah of course, I do not question the necessity to make MediaWiki fit for the future [21:04:42] it just becomes more complicated to contribute with tools and tech as a volunteer Wikimedian [21:05:02] that is to some extent inevitable [21:05:30] I know, and I am in fact a strong advocate for that exact direction [21:05:49] :/ that's the really sad part. someday™ we may be able to provide more documentation and tooling to make things easier [21:06:10] there are a lot of non-tech aspects of Wikimedia projects with the same problem.
More complex wikitext, "lua modules", and so on [21:06:47] heh, lua modules [21:07:10] I think generally without that the wikitext complexity would be far worse [21:07:16] Wiki[mp]edia work because of all of the volunteers, but as things grow we tend towards more and more complicated knowledge needed to keep things working smoothly [21:07:48] use of Wikidata in Wikipedia is (or: would be) also quite a complication for many editors [21:08:16] The Technical Engagement team cares a lot about these problems, but we can only work on a few things at a time mostly because of staffing [21:09:01] I think we are going to try to find more ways to invite volunteers to help us with keeping up on documentation and other fixes, but it's going to take some time to build that community [21:09:32] * bd808 wishes for more money and time [21:11:56] Sorry for coming in between. I am getting a "JWT didn't validate" error after authorizing my app. Is this because the app is not authorized? [21:12:04] At least my tool works again with the discussed solution :-) [21:12:09] I mean approved [21:12:33] gyan: it should work for your user account without being authorized [21:13:19] "JWT didn't validate" sounds like an implementation problem (I think I've seen it before) [21:15:35] gyan: is that an error coming back from the OAuth server, or something that your code is saying? [21:16:18] yes my code. Some implementation error. But the strange thing is that it was working before [21:16:52] Let me figure it out, else I will ping here. [21:18:08] @Bryan Hope your wishes will come true.
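(Editor's aside, returning to the replica-query thread above: the pattern bd808 and anomie converge on — resolve actor_user to actor_id once, then run MIN/MAX against a table indexed on the actor column — can be illustrated with a toy sqlite3 database. The table and column names follow the discussion; the real queries run against the Wiki Replicas via the `sql` wrapper, so this only sketches the query shape, not the production schema.)

```shell
# tiny stand-in schema: actor maps user ids to actor ids; revision carries
# an index on (rev_actor, rev_timestamp), like revision_userindex provides
sqlite3 demo.db <<'EOF'
CREATE TABLE actor (actor_id INTEGER PRIMARY KEY, actor_user INTEGER);
CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_actor INTEGER, rev_timestamp TEXT);
CREATE INDEX rev_actor_timestamp ON revision (rev_actor, rev_timestamp);
INSERT INTO actor VALUES (42, 1546577);
INSERT INTO revision VALUES (1, 42, '20190101000000');
INSERT INTO revision VALUES (2, 42, '20190601000000');
EOF

# query 1: resolve the user id to an actor id (a cheap indexed lookup)
actor_id=$(sqlite3 demo.db "SELECT actor_id FROM actor WHERE actor_user = 1546577;")

# query 2: MIN/MAX read the two ends of the (rev_actor, rev_timestamp)
# index, so no full scan of the revision table is needed
sqlite3 demo.db "SELECT MIN(rev_timestamp), MAX(rev_timestamp)
                 FROM revision WHERE rev_actor = $actor_id;"
```

The second query prints `20190101000000|20190601000000`: because the timestamp is the trailing column of the index used to find the rows, the MIN/MAX "except when the field being min/max'ed is indexed" caveat from the discussion applies and the query stays fast.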
[22:14:08] !log project-proxy Per jeh's investigation, added cloudservices1004 IP to match cloudservices1003 rule in 'proxy' security group rules for port 5668 [22:14:10] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL [22:36:44] !log project-proxy Updating 'proxy' security group rules for port 5668 to remove decommissioned IPs - 208.80.154.136 silver, 208.80.155.117 labs-ns0, 208.80.152.32 virt0 (!), 208.80.153.48 labtestservices2001, 208.80.154.92 labcontrol1001 [22:36:45] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL [22:44:40] !log project-proxy Updating 'proxy' security group rules for port 5668 to remove decommissioned IP - 208.80.154.147 californium T189921 [22:44:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL [22:44:43] T189921: decom californium - https://phabricator.wikimedia.org/T189921 [22:50:23] !log project-proxy Added cloudcontrol1004 IP to match cloudcontrol1003 rule in 'proxy' security group rules for port 5668 T225168 [22:50:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Project-proxy/SAL [22:50:31] T225168: New OpenStack control-plane nodes can't talk to novaproxy - https://phabricator.wikimedia.org/T225168 [23:59:08] bd808: you probably want to have a look at T225170