[09:08:16] Hi, is puppet failing expected?
[09:08:31] I just got "PROBLEM - puppet on puppet-paladox.git.eqiad.wmflabs is UNKNOWN: UNKNOWN: Failed to check. Reason is: failed_to_parse_summary_file" at around 7am
[09:08:38] And it's also failing for other hosts too
[09:09:36] I'm currently mobile so not sure what the error is
[09:19:45] Maybe Andrewbogott or arturo, you may know ^?
[09:21:27] paladox: looks like it's broken everywhere; I'll investigate
[09:25:28] should be fixed now. Thanks for noticing.
[11:14:34] Thanks!
[13:45:31] arturo: hey! Do you know anything about rate limiting on the dump servers?
[13:45:49] T222349 has been open for quite some time and I'm wondering who I should ping to get things moving
[13:45:55] T222349: Do not rate limit dumps from internal network - https://phabricator.wikimedia.org/T222349
[15:08:40] gehel: he's on a holiday today. Let me take a peek
[15:09:11] bstorm_: thanks! he was the one who did not look to be "away" from an IRC point of view :)
[15:09:38] :) possibly sneakin' around
[15:09:51] I wouldn't put that past him :)
[15:10:20] bstorm_: I'm happy to give you more context, but the short version is: we're downloading dumps from external mirrors to get better download rates than from our own DC, which does sound somewhat crazy
[15:10:54] I'm not sure if there is a good reason to rate limit internally, and if there is not, I'm not really sure how to make that distinction
[15:11:55] I thought we're supposed to admonish people who pop into chat on holidays? ;) https://twitter.com/eraserbones/status/1200466869498466304
[15:12:06] Huh. Interesting.
[15:12:41] Lucas_WMDE: I will if it isn't just a bouncer config 😁
[15:12:49] But it might be
[15:14:09] gehel: I'll dig into the config a bit. There are some reasons, but it depends on which network boundaries are being crossed, etc. Out of curiosity, what rate are you getting externally, as I poke around?
[15:15:07] bstorm_: 30M/s (not amazing, but still 15x faster than the 2M/s internally :)
[15:15:24] Thx :)
[15:16:07] Hrm, is that doing rsync or a plain HTTPS download?
[15:16:17] just HTTP downloads
[15:16:40] The rate limit in the config looks universal
[15:16:40] didn't try rsync (do we even expose rsync on the dump servers?)
[15:17:26] I'd probably stick to http... the rsync configs are around mirroring
[15:17:40] and tbh, this is not really blocking, since we have a workaround (external mirrors), but it does cause some confusion and it just seems weird
[15:18:09] meaning: don't drop everything you're doing to work on that, but I'm making sure that it is somewhere in your todo list
[15:18:34] Oh, wait, if it's external mirrors, then it's another server I don't have control over that is getting the better rate
[15:18:38] So that makes sense
[15:19:05] I wanted to at least get a solid context around it before moving on with anything else :)
[15:19:13] yep, I'm not expecting you to improve external mirrors
[15:19:25] ;)
[15:20:27] Yeah, but if external mirrors are better, that's legit. These servers aren't terribly great. They have amazing amounts of storage and 10G network, but they get pegged really easily by NFS and some things like that. However, they do have some older settings from when they had smaller servers and network connections
[15:21:03] Wasn't seeing a lot of room for improvement on the memory limits. Will have to check carefully on this one when I get a chance, though. Thanks for the context, etc.
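(For reference, a minimal sketch of how the download-rate figures quoted above could be reproduced. The URL, chunk size, and byte limit are placeholders, not the actual method or file gehel was measuring; this just times a plain HTTP fetch the same way.)

```python
#!/usr/bin/env python3
"""Rough HTTP download-rate probe.

A sketch for reproducing the kind of numbers quoted above (30 MB/s from an
external mirror vs 2 MB/s from the dump servers internally). The URL below
is a placeholder; substitute whatever file you are actually benchmarking.
"""
import time
import urllib.request

URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-all-titles.gz"  # placeholder
CHUNK = 1 << 20          # read 1 MiB at a time
LIMIT = 200 * (1 << 20)  # stop after ~200 MiB so the probe stays short


def measure(url: str) -> float:
    """Return the observed download rate in MiB/s for the first LIMIT bytes."""
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while received < LIMIT:
            chunk = resp.read(CHUNK)
            if not chunk:
                break  # file was smaller than LIMIT
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received / (1 << 20)) / elapsed


if __name__ == "__main__":
    print(f"{measure(URL):.1f} MiB/s")
```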
[15:21:50] if there are good reasons to use external mirrors instead of our own dump servers, I'm good with it! Just want to make sure that the limits aren't there just for "historical reasons" or because we did not take the internal use case into account
[15:21:58] thanks a lot for looking into that!
[15:22:13] Last bandwidth limit raise was 5 years ago, from 265k to 2048k... definitely time to re-evaluate 😁
[15:22:36] 64K should be enough for everyone?
[15:23:39] I'll do some checks. Could be :) This limit likely predates upgrades, though.
[15:26:07] thanks!
[19:00:11] bd808: so if an instance in Cloud VPS gets on the jessie list but is a false positive because it was upgraded in place, so the image name is old but it's actually running something newer, is it better to just say "you can strike these off the list" and show they are not jessie, or would you really like to see fresh reinstalls from newer images, even if it means users have to ask for temporary quota changes?
[19:02:50] well, i think in this case it's neither and we'll ask for a single new unified project, and in return multiple old ones can be removed
[19:35:40] was VisualEditor intentionally removed from Wikitech?
[19:49:34] musikanimal: Yes.
[19:50:44] bummer :( Is there a task for this? I tried searching
[19:50:46] Hopefully fixed soon(TM)
[19:51:02] okay, all I wanted to hear! ty :)
[19:51:09] musikanimal: It'll be fixed once Parsoid is moved into MediaWiki core.
[19:51:15] musikanimal: Probably before March.
[19:51:52] Unfortunately Parsoid-in-PHP-but-not-on-the-appserver isn't a model that works with how wikitech is deployed.
[19:53:28] oh okay, I was going to ask if the SUL wikis would be affected too. Guess not, which is good :)
[19:59:12] wikitech would have to add the mediawiki roles like wtp does when "$use_php" is true
[19:59:22] i guess
[20:00:14] We'll just wait for the code to land inside MW, and everything will Just Work™.
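(On the jessie-list false positives discussed at 19:00 above: a hedged sketch of how one might check what an instance is actually running instead of trusting the image name it was created from. The files read are standard Debian locations; this is an illustration, not the audit tooling actually used for the list.)

```python
#!/usr/bin/env python3
"""Report the Debian release an instance is actually running.

Instances upgraded in place keep their original image name, so the image
metadata can still say "jessie" while the OS is newer. Reading
/etc/os-release on the host itself avoids that false positive.
"""
import pathlib


def os_release() -> dict:
    """Parse /etc/os-release into a dict of KEY -> value."""
    info = {}
    for line in pathlib.Path("/etc/os-release").read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            info[key] = value.strip().strip('"')
    return info


if __name__ == "__main__":
    info = os_release()
    print(info.get("PRETTY_NAME", "unknown"))
    # Jessie's os-release has no VERSION_CODENAME, so match on VERSION_ID/VERSION instead.
    if info.get("VERSION_ID") == "8" or "jessie" in info.get("VERSION", ""):
        print("actually still jessie: needs an upgrade or rebuild")
    else:
        print("not jessie: can be struck off the list")
```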