[00:00:52] on the bastion or as a k8s/grid job?
[00:01:02] bastion
[00:01:14] I can't submit jobs as myself
[00:01:29] If I do it as the tool it's recursive
[00:08:04] Betacommand: you can submit a grid job as your ssh user
[00:08:29] when did that change? didn't think that was possible
[00:08:35] the kill on the bastion is probably the resource limits for user shells
[00:08:53] grid has been possible as a user as long as I've been around
[00:09:01] we don't talk about it much :)
[00:09:41] but yeah, I test grid stuff with jsub as bd808 all the time
[00:09:59] I know when I migrated it wasn't possible
[00:10:38] could be. I'm sure you were moving over from toolserver before I discovered tool labs
[00:10:50] Yeah, I'm a legacy user
[00:11:09] coming up on 15 years
[00:11:27] nice! I'm getting really close to 7 :)
[01:30:42] alright, who's doing something NFS-heavy on tools-sgebastion-07?
[01:31:58] ls is really slow, https://grafana-labs.wikimedia.org/d/QSE7tV-Wk/toolforge-bastions?orgId=1 shows high NFS response time and 1m load through the roof
[01:41:06] load average: 0.50, 1.30, 1.63
[01:42:13] there are quite a few long-running scripts
[01:51:22] yeah, it's back to normal now
[09:46:21] mutante: regarding your email in the ops@ thread, see T255787
[09:46:39] stashbot: ?
[09:47:18] T255787: Reconcile and/or understand differences between cloud-vps and prod hiera lookups - https://phabricator.wikimedia.org/T255787
[09:47:18] See https://wikitech.wikimedia.org/wiki/Tool:Stashbot for help.
[09:48:44] alright
[10:00:44] !log paws enabled `paws.wmflabs.org` and `*.paws.wmflabs.org` as valid ingress domains (acme-chief TLS cert, haproxy, etc) (T195217)
[10:01:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[10:01:06] T195217: Simplify ingress methods for PAWS - https://phabricator.wikimedia.org/T195217
[15:50:53] Hey folks - hopefully going to set up a low-pri bot task on Toolforge soon(tm), which will need to parse every page in articlespace on enwiki (as Elasticsearch wouldn't cut it for the regex I need to use). I'm planning on doing this by just taking the DB dump list of articles for enwiki and iterating through them - two questions: 1) is there a better way of doing this that I don't see, and 2) any suggestions for a reasonable rate limit for the
[15:50:55] ensuing API:Parse calls :) cheers!
[15:56:09] Naypta: We don't replicate text to cloud
[15:56:22] So yeah, you'd need to be using dumps
[15:56:26] Or parse
[15:56:50] yeah, I'm planning on using the dump article list, because it's not critical that I get every single article name, but for each article I do get, I need the latest revision, so parse seems sensible
[15:57:53] Naypta: I'm not sure if there is a "better" way to iterate across the entire article space on any given wiki. For rate limiting, I would suggest that you at minimum use &maxlag=10 and respect the backoff messages you will receive
[15:59:24] sure thing! not necessary to use any separate client rate limiting beyond maxlag though?
[15:59:55] We do have a copy of the cirrus data that you could get access to, which may let you run more expensive regex queries than the on-wiki exposed version does.
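A minimal sketch of the approach discussed above (iterate a dump-derived title list and call action=parse for each page), assuming Python with the requests library; the file name, User-Agent string, and prop=wikitext choice are placeholders rather than anything from this log. It sends maxlag=10 on every request and backs off whenever the API answers with the maxlag error, as suggested at 15:57:53 (the API normally also returns a Retry-After header with those errors).

import time

import requests

API = "https://en.wikipedia.org/w/api.php"
# Placeholder User-Agent: Toolforge policy asks for a descriptive UA with contact info.
HEADERS = {"User-Agent": "example-regex-bot/0.1 (toolforge; https://example.invalid/contact)"}


def parse_page(session, title):
    """Fetch the parsed wikitext of one page, backing off on maxlag errors."""
    params = {
        "action": "parse",
        "page": title,
        "prop": "wikitext",   # assumption: the regex runs over raw wikitext
        "maxlag": 10,         # per the advice above: at minimum maxlag=10
        "format": "json",
        "formatversion": 2,
    }
    while True:
        resp = session.get(API, params=params, headers=HEADERS)
        data = resp.json()
        if data.get("error", {}).get("code") == "maxlag":
            # Replication lag too high: wait as instructed, then retry.
            time.sleep(int(resp.headers.get("Retry-After", 5)))
            continue
        return data


def iter_titles(path):
    """Yield titles from a dump-derived list, assumed to be one title per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            title = line.strip()
            if title:
                yield title


if __name__ == "__main__":
    with requests.Session() as session:
        for title in iter_titles("enwiki-article-titles.txt"):  # hypothetical local file
            result = parse_page(session, title)
            wikitext = result.get("parse", {}).get("wikitext", "")
            # ... run the regex over `wikitext` here ...

Note that a lower maxlag value makes the bot defer whenever replica lag exceeds that smaller threshold, i.e. it yields more readily, which fits a low-priority task like the one described above.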
[15:59:58] it's really not an urgent operation, so maxlag lower than 10 won't be an issue anyhow
[16:00:40] https://wikitech.wikimedia.org/wiki/CloudElastic
[16:01:07] ooh, that might be useful - but at the same time, it needs to run that regex on the article text of every article in mainspace, so I'm still not sure even with that whether it would work reliably
[16:01:38] I could maybe plaintext refine for articles containing tags, but that's... pretty well every article anyway, so the point is sort of moot
[16:05:15] The fine folks in the Wikimedia Search team might be able to help you decide if it would be possible. ebernhardson (probably not online today) is likely the most knowledgeable individual. The team hangs out in the #wikimedia-discovery IRC channel.
[16:15:56] ta very much bd808 - will have a chat with them :)
[16:22:20] what's killing poor wikibugs?
[16:23:44] bd808L me
[16:23:51] oops. Replace L with :
[16:24:01] Mass-editing ~600 tasks, intentionally not silently.
[16:24:02] you're so mean andre__ :)
[16:24:12] bd808: That's my job.
[16:24:20] fair point!
[16:24:50] See T228575 for context
[16:24:51] T228575: Decrease number of open tickets with assignee field set for more than two years (aka cookie licking) (March-June 2020 edition) - https://phabricator.wikimedia.org/T228575
[16:32:41] getting rid of cookie licking sounds good for corona, though ^^
[16:36:09] andre__: RIP my inbox too :)
[16:42:27] should be rather easy to filter? See also https://www.mediawiki.org/wiki/Phabricator/Help/Managing_mail
[16:44:00] well, these are tasks that I am/was legitimately following for some reason. Just a lot of really stale ones, because I've been hanging out here for a while now. I will survive :)
[16:49:51] https://bash.toolforge.org/quip/2rt8zXIBLkHzneNNRMC4 :P
[16:55:02] bd808: search for that timeframe and that @aklapper bot as sender, mass-delete, done?
[16:55:56] Majavah: Ah, right, that Bash thingy, so historians can get a better idea of my character. Thanks! :)
[20:41:22] Quick question if anyone's about who knows - when the Toolforge "latest" db dumps are updated, what happens? (Essentially, if I'm reading from those files over a long period of time using a tokenised scanner, is there a risk the file content is going to change while I'm reading from them, so do I need to copy to a temp dir?)
[20:53:41] is the command to move to the new domain as simple as stopping and then typing: webservice --canonical --backend=gridengine [type] start [..]
[21:01:36] if you don't have anything that is host-dependent in your webservice, yes
[21:02:38] (if you did, the command would still work)
[21:35:32] looks like my cgi-bin got clobbered: it's not running the scripts, it's just displaying the source file
[21:41:13] nvm, fixed it, I had an old entry in the .conf file I missed
[22:55:05] Naypta: afaict, the NFS server is rsync'ed into: a new file will be created with a new inode, then the new file will be moved onto the old file. When you open the old file before the move, you have a reference to the old inode, and the move will not break the reference
[22:55:32] zhuyifei1999_: brilliant, that's exactly what I was hoping would be the case :) cheers!
[22:55:47] :)
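A small local illustration of the file-replacement behaviour zhuyifei1999_ describes at 22:55:05: a reader that opened the old file keeps a reference to the old inode, so renaming a new file over the same path does not change what the already-open handle reads. This is plain POSIX rename semantics demonstrated on a local filesystem with made-up file names; NFS client caching can add its own wrinkles on top.

import os

# Create the "old" dump file.
with open("dump.txt", "w", encoding="utf-8") as f:
    f.write("old contents\n")

# Open it for reading; this handle now points at the old inode.
reader = open("dump.txt", encoding="utf-8")

# Simulate the rsync pattern: write a new file, then move it over the old path.
with open("dump.txt.new", "w", encoding="utf-8") as f:
    f.write("new contents\n")
os.replace("dump.txt.new", "dump.txt")  # atomic rename over the old name

# The already-open handle still sees the old inode's data.
print(reader.read())  # prints "old contents"
reader.close()

# A fresh open sees the replacement.
with open("dump.txt", encoding="utf-8") as f:
    print(f.read())  # prints "new contents"

So for the long-running tokenised scan asked about at 20:41:22, a file opened before the dump refresh keeps reading the data it started with; only re-opening the path picks up the new file.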