[00:02:09] zhuyifei1999_: it's kind of exciting to see y'all work out how to do this :)
[00:02:17] lol
[01:44:23] !log tools.tb-dev Redacted a password (may or may not be the database password before the replica.my.cnf regenerated) in .mysql_history, to 'T179599 REDACTED'
[01:44:26] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.tb-dev/SAL
[01:44:26] T179599: Adoption of tb-dev Red Link Recovery tools - https://phabricator.wikimedia.org/T179599
[02:13:55] bd808: for https://wikitech.wikimedia.org/wiki/Help:Toolforge/Abandoned_tool_policy#Adoption #2, is the committee supposed to send/draft the message or should the requester do it?
[03:39:04] zhuyifei1999_: good question. I don't have an answer.
[03:39:22] uh ok
[03:39:23] I think either way could work
[03:39:52] I know I made up the rules, but that doesn't actually make me the source of truth ;)
[03:40:41] I think the committee should clarify the parts that are ambiguous
[03:41:20] verifying that the email was sent would be easier, I suppose, if the committee did it
[03:41:39] the talk page posts are public, so that can be verified either way
[03:44:02] hmm
[03:45:27] I think this falls under "The committee is granted leeway to determine its own internal policies and procedures, but these must be documented on Wikitech and may be subject to alteration by the Wikimedia Foundation for technical, privacy, or legal reasons."
[03:48:50] k
[03:55:17] I changed my site CSS at wikitech to make the background of Help namespace pages blue, like the Manual namespace pages on mediawiki.org. I'm kind of liking it.
[03:58:51] * zhuyifei1999_ got so used to the mw.o background changing that I already forgot about it
[08:38:03] kees said that he'll do it when he wakes up
[08:38:26] I'm still pretty confused about what to do with the requestToken error
[09:37:52] Hi, just in case someone can / wants to help bring up our test Discourse instance in wmflabs... https://phabricator.wikimedia.org/T179649
[09:38:18] (Or just shed some light)
[12:32:58] (Draft1) Paladox: Gerrit: Re add certificate for its-phabricator temporary [labs/private] - https://gerrit.wikimedia.org/r/388430
[12:33:00] (PS2) Paladox: Gerrit: Re add certificate for its-phabricator temporary [labs/private] - https://gerrit.wikimedia.org/r/388430
[12:34:41] (CR) Dzahn: [V: +2 C: +2] Gerrit: Re add certificate for its-phabricator temporary [labs/private] - https://gerrit.wikimedia.org/r/388430 (owner: Paladox)
[13:11:16] (PS1) BBlack: add dummy GS-2017 keys [labs/private] - https://gerrit.wikimedia.org/r/388440
[13:11:32] (CR) BBlack: [V: +2 C: +2] add dummy GS-2017 keys [labs/private] - https://gerrit.wikimedia.org/r/388440 (owner: BBlack)
[14:48:13] https://blog.famzah.net/2010/06/11/openssh-ciphers-performance-benchmark/ Tested it and... I'm significantly bandwidth-limited
[15:16:26] Dispenser: you are using a Raspberry Pi as the server on your end, correct?
[15:18:42] if so, that's a pretty CPU-constrained environment, so I would think that disabling compression at least might help
[15:19:05] I'm not sure about cipher selection, but there are probably some that are a little better/worse too
[15:39:52] Today's test was connecting to tools-dev.wmflabs.org with a file made by `head -c 67000000 /dev/urandom > ~/test-file`
[15:43:24] But the Raspberry Pi uses a mobile SoC, and we know they're faster than an Xbox 360. In seriousness, disabling or weakening encryption is a palatable option when you want a 70 GB Kiwix dump of enwiki to not take all week to transfer over a 100 Mbit/s LAN.
[15:57:54] Dispenser: if ~ means your home directory on bastion, please use somewhere else, like /tmp
[15:58:07] or it'll eat all the bastion's NFS IO
[16:00:16] My bandwidth is kind of low, like 0.8 MB/s up and 3 MB/s down. So not eating many IOPS
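A side note on the benchmark setup above: here is a minimal sketch of how a transfer like this could be timed across ciphers and compression settings. It is an illustration, not something actually run in this log; the cipher list is an example (run `ssh -Q cipher` to see what your OpenSSH build supports), and the paths follow the /tmp advice given just above rather than the NFS home directory.

    import subprocess
    import time

    # Test file as generated above, but kept in /tmp instead of an NFS home:
    #   head -c 67000000 /dev/urandom > /tmp/test-file
    SRC = "/tmp/test-file"
    DEST = "tools-dev.wmflabs.org:/tmp/"  # host from the discussion above

    # Example ciphers only; availability varies by OpenSSH version.
    CIPHERS = ["aes128-ctr", "aes256-ctr", "chacha20-poly1305@openssh.com"]

    for cipher in CIPHERS:
        for compression in ("yes", "no"):
            start = time.monotonic()
            # scp's -c picks the cipher; -o passes any ssh_config option.
            subprocess.run(
                ["scp", "-c", cipher, "-o", f"Compression={compression}",
                 SRC, DEST],
                check=True,
            )
            elapsed = time.monotonic() - start
            print(f"{cipher} Compression={compression}: {elapsed:.1f}s")

Disabling compression mainly helps when the CPU rather than the link is the bottleneck, which is exactly the Raspberry Pi situation bd808 describes.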
[16:22:59] bd808: correction regarding yesterday: it fails at $accessToken, not $requestToken
[16:46:40] !log testlabs Running stress-ng test on labvirt1015stresstest* vms for T171473
[16:46:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Testlabs/SAL
[16:46:43] T171473: labvirt1015 crashes - https://phabricator.wikimedia.org/T171473
[17:26:43] I wanted to give mw vagrant a second try (after I gave up on it earlier this year), but unfortunately it's still horribly slow for me (min 10s page load time). How might I find out what exactly is killing its performance? I don't have much of a clue where to start; it might be anything (maybe CPU, maybe networking, maybe disk IO)...
[17:31:52] eddiegp: what OS is your host computer?
[17:32:06] Arch Linux
[17:32:29] ok. and are you trying to run it with LXC as the container or VirtualBox?
[17:32:36] VirtualBox
[17:32:43] * bd808 assumes we are talking about mediawiki-vagrant
[17:32:59] And once again your assumptions are right :)
[17:33:10] Switching to LXC *should* be much faster. I've never tried to get it running on Arch though
[17:33:30] VirtualBox is not very good on Linux hosts in my experience
[17:33:41] it's much better tuned for Windows and OS X
[17:34:04] Hmm, okay, I've got something to test in that case ;)
[17:34:27] eddiegp: https://github.com/wikimedia/mediawiki-vagrant/blob/master/support/README-lxc.md would be the place to start looking for LXC tips
[17:34:35] Yeah, I've not really thought about changing the provider yet (just followed the wiki page, which suggests VirtualBox).
[17:35:45] bd808: Thanks! I'll have a try and maybe come back when running into some issues ;)
[17:37:03] cool. There are usually folks in #mediawiki and #wikimedia-tech with mediawiki-vagrant knowledge too
[20:29:33] (CR) BryanDavis: [C: +2] crontab: make tempfile use utf-8 encoding [labs/toollabs] - https://gerrit.wikimedia.org/r/383770 (https://phabricator.wikimedia.org/T156174) (owner: Zhuyifei1999)
[20:30:36] (Merged) jenkins-bot: crontab: make tempfile use utf-8 encoding [labs/toollabs] - https://gerrit.wikimedia.org/r/383770 (https://phabricator.wikimedia.org/T156174) (owner: Zhuyifei1999)
[21:19:31] !log tools Deployed misctools 1.26 (T156174)
[21:19:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:19:35] T156174: Rewrite /usr/local/bin/crontab in python; fix bugs - https://phabricator.wikimedia.org/T156174
[21:31:09] bd808: Can we get documentation on HOW we should rewrite scripts? Like https://commons.wikimedia.org/wiki/User:Dispenser/GIF_check
[21:31:39] Or how to store the results of a database query into a table?
[21:32:51] Dispenser: related, I filed T179628 to see if we can get an OK to allow temporary tables
[21:32:53] T179628: Consider granting `CREATE TEMPORARY TABLES` to labsdbuser - https://phabricator.wikimedia.org/T179628
[21:34:00] I think you are going to have to use something more advanced than mysql scripts to work with multiple dbs
[21:34:04] I think I can actually do a FROM (SELECT * FROM ) ...
[21:37:02] How's that low-level NoSQL interface for MySQL? (I once tried it and concluded, only after fully implementing it, that the startup times were killing performance, not the graph-DB nature of my operations)
[21:37:28] I've never used it
[21:40:01] just looking a bit at this script, I think there are a couple of ways you could do it. Not sure which is better without knowing the number of rows in each chunk of results
[21:40:52] you can make your gif_size table by selecting from a replica and then putting the results into a table on tools-db, for sure
[21:42:32] But you need a file in between
[21:42:40] then you would either find the rows from that table that fit your WHERE criteria *or* start from the categorylinks side
[21:43:24] a file, or maybe better, just streaming things in memory from one cursor to another
[21:43:42] like I said, you are going to need something more advanced than the mysql shell
[21:43:56] php, python, perl, tcl, ... whatever
[21:44:05] you need business logic
[21:44:46] this script makes the database server do all the work, but with split dbs some of that has to move into your script
[21:45:56] Kinda defeats the point of SQL
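For reference, a minimal sketch of the cursor-to-cursor streaming bd808 describes above, in Python with pymysql. The hostnames, the user-database name, and the gif_size table schema are all assumptions for illustration; the replica.my.cnf credentials file is the Toolforge convention (the same file mentioned at the top of this log).

    import os

    import pymysql
    import pymysql.cursors

    # Both connections read credentials from the tool's replica.my.cnf.
    # Hostnames are assumptions; check the current Toolforge docs.
    replica = pymysql.connect(
        host="commonswiki.labsdb",
        database="commonswiki_p",
        read_default_file=os.path.expanduser("~/replica.my.cnf"),
    )
    toolsdb = pymysql.connect(
        host="tools.labsdb",
        database="s12345__gifcheck",  # hypothetical user database
        read_default_file=os.path.expanduser("~/replica.my.cnf"),
    )

    BATCH = 1000
    # SSCursor streams rows from the server instead of buffering the whole
    # result set in client memory, so no intermediate file is needed.
    with replica.cursor(pymysql.cursors.SSCursor) as src, \
            toolsdb.cursor() as dst:
        src.execute(
            "SELECT img_name, img_size FROM image "
            "WHERE img_minor_mime = 'gif'"
        )
        while True:
            rows = src.fetchmany(BATCH)
            if not rows:
                break
            # gif_size is a hypothetical table created beforehand on tools-db.
            dst.executemany(
                "INSERT INTO gif_size (name, size) VALUES (%s, %s)", rows
            )
    toolsdb.commit()

The fetchmany/executemany batching keeps memory flat no matter how many rows the replica returns; this is the "business logic moving into your script" trade-off bd808 mentions.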
[21:48:02] with this particular example you may be able to get away with a sub-select
[21:48:27] it's going to be pretty brutal on the server I think, but this looks like it was always pretty brutal
[21:48:29] What about those FEDERATED tables?
[21:49:10] J.amie mentioned that somewhere as a possibility, yeah. I don't know when we may get time from him to look into it more though
[21:50:00] One of the problems we are facing is that there is a small amount of DBA time to go around, and it's not our turn at the front of the line right now
[21:51:49] I've rewritten https://commons.wikimedia.org/wiki/User:Dispenser/Wrong_Extension using sub-selects. Let's see how long this'll take. Previously cron ran it weekly in October for 41 min, 41 min, 41 min, and 42 min.
[21:57:03] There is a fair chance that it will be comparable. Behind the scenes mysql may figure out that the subselect is large and actually use a temp table anyway
[21:59:21] yeah, that's a materialized subquery
[22:02:12] SELECT ... FROM tablea, (SELECT ... FROM ... WHERE ...) temptableb <= forces a temp table, iirc
[22:04:49] Without zhuyifei's hack: https://tools.wmflabs.org/tools-info/optimizer.py select_type:SIMPLE key:none, Ref:None, rows:41307252, extra:Using where; Using filesort
[22:05:39] optimizer was broken for complex queries last time I tried it
[22:06:12] (creates syntax errors in sql)
[22:06:46] also, for `show explain for` there should be a bit of a wait before the whole query plan is available
[22:07:03] I don't need the page table; it's only there to minimize stored table size (since 40 million * 4-byte INT < 40 million * 255-byte string)
[22:13:30] "107 rows in set (13 min 23.63 sec)" So only a 3.23x improvement, half of the old DB's 25 minutes posted on 7 July 2014
[22:16:06] "only"
[22:17:25] lol
[22:20:40] zhuyifei1999_: I'll try to get the puppet patches up next week to switch to oge-crontab from the perl script :)
[22:21:00] ok thanks
[23:28:15] bd808: hi, I'm wondering what happens if I get this error
[23:28:16] NFS requires a host-only network to be created.
[23:28:18] please?
[23:28:23] with mediawiki-vagrant
[23:28:57] The only way I've figured out to fix that is rebooting the host computer
[23:29:12] ah thanks
[23:29:14] it's some weird bug in vagrant
[23:29:24] I chased it for like 2 days and gave up :)
[23:29:31] heh thanks :)
[23:35:22] !unicorn
[23:35:22] 🦄
[23:37:32] lol
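A closing footnote on the derived-table trick from [22:02:12] above: wrapping the inner query as a derived table in the FROM clause is what the chat calls forcing a temp table; the server can materialize the subquery before the outer query joins against it. A hedged sketch with made-up table and category names, continuing the pymysql style of the earlier example:

    # Hypothetical rewrite in the spirit of [22:02:12]; every name here is
    # illustrative, not taken from the actual Wrong_Extension report.
    query = """
        SELECT p.page_title
        FROM page AS p
        JOIN (
            SELECT cl_from
            FROM categorylinks
            WHERE cl_to = 'Some_category'  -- placeholder category
        ) AS materialized ON materialized.cl_from = p.page_id
    """
    with replica.cursor() as cur:
        cur.execute(query)
        for (page_title,) in cur:
            # page_title comes back as bytes on the replicas (varbinary).
            print(page_title.decode("utf-8"))

Whether the server actually materializes the derived table can be checked with EXPLAIN: a materialized subquery shows up with select_type DERIVED, in the same kind of query-plan output pasted at [22:04:49].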