[00:00:14] HTTP/1.1 frames are sent in what we call "HTTP/2"
[00:01:07] even without the ats-tls/h2 termination in front, you'll always end up with CGI seeing HTTP/1.1
[00:01:43] I suppose one could e.g. launch a custom H2-capable server in Node.js or through user-land libraries with a long-running PHP process, but it's not anything MW would need to worry about today.
[00:10:36] looks like the CGI standard does allow other server protocol versions, so I guess it's possible. Maybe a raw nginx with TLS, HTTP/2 and FPM would expose it that way? I'm not sure. Maybe there are other layers that currently would, or possibly have to, change it down to 1.1
[00:40:02] I was about to ask what the proxies actually use (probably 1.1)
[00:40:55] it prints "HTTP/2.0" for my local wiki with https/http2/mod_http2
[00:41:55] ack, so it is a possibility, I didn't realise that. But relatively unlikely in practice I guess, given people either don't use HTTP/2 yet, or use some kind of proxy or CDN and thus get 1.1 internally.
[00:42:30] Krinkle: also, for Apache, mod_http2 requires PHP-FPM, so fastcgi_finish_request() will exist in that case
[00:42:39] right
[00:43:18] * AaronSchulz was reading about people wondering why SERVER_PROTOCOL was 1.1 on Cloudflare (due to the proxies using 1.1)
[00:43:29] I think given that expensive updates are queueable (e.g. LinksUpdate), and that job running already uses a socket as a fallback that afaik is pretty good at being non-blocking all around, it's really just about post-send deferred updates on a server without FPM support.
[00:44:05] running those synchronously might be the simplest path forward for everyone involved.
[00:44:13] is that what we did before 1.35?
[00:48:56] 1.23 added the internal curl request, and 1.34 added the header magic stuff
[00:49:32] before 1.23, synchronous was the only option
[00:51:11] right, that's for job execution, which works non-blocking without FPM because it's a separate request that uses allow_abort(false) but the caller lets go of it, or something.
[00:51:24] but post-send we were presumably always sync without FPM/HHVM
[00:51:40] the internal HTTP request was turned off by default in 1.29 afaik
[00:55:15] do we use HTTP/2 push? I see that for wiki.png in devtools... huh
[00:58:24] no, that's preload
[00:58:27] we don't use HTTP/2 push
[00:58:41] and the feature has been deprecated in the spec / will not be a thing in the future for any web apps, I expect
[01:00:20] maybe nginx is using the `Link: preload` response header to synthesise a push?
[01:00:29] that would be wasteful and unfortunate
[01:01:06] there were talks amongst CDNs at some point to do that, but afaik it didn't go beyond experimentation, at least not by default. Cloudflare ended up using it as an extended attribute, I think
[01:01:19] something like `Link: <>; rel=preload; … push` or something like that
[01:01:51] so what did (or was intended to) change in 1.35 wrt jobs and deferred updates?
[04:37:42] git diff 23eaa5aa95c2196b22a70d216c25f66d463b09c5..2d7fe2d6c82861679d79902be6e4588abe9c0b87 -- includes/MediaWiki.php
[09:32:22] I believe that's an option in nginx and other servers, turning preloads into pushes automatically
[09:32:30] might be an unfortunate default that AaronSchulz ran into
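A minimal sketch of the situation discussed above (not MediaWiki's actual code): what PHP can observe via SERVER_PROTOCOL, and how post-send work can avoid blocking the client under PHP-FPM but runs synchronously without it. `runPostSendUpdates()` and the callback are hypothetical names:

```php
<?php
// What the CGI/FastCGI layer reports: "HTTP/2.0" under Apache mod_http2 or
// nginx h2 + FPM, but "HTTP/1.1" behind a 1.1-speaking proxy or CDN.
$protocol = $_SERVER['SERVER_PROTOCOL'] ?? 'HTTP/1.1';

// Hypothetical helper: run post-send deferred updates without blocking the
// client where possible.
function runPostSendUpdates( array $updates ): void {
	if ( function_exists( 'fastcgi_finish_request' ) ) {
		// PHP-FPM: flush the response to the web server first, so the
		// updates below don't delay the client.
		fastcgi_finish_request();
	} else {
		// No FPM (e.g. mod_php or plain CGI): the updates run synchronously,
		// before the client sees the end of the response, as discussed above.
		ignore_user_abort( true );
	}
	foreach ( $updates as $update ) {
		$update();
	}
}

runPostSendUpdates( [
	static function () use ( $protocol ) {
		error_log( "post-send work ran; SERVER_PROTOCOL was $protocol" );
	},
] );
```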
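For reference, a preload hint is just a response header that the client may act on; some front-ends can be configured to upgrade it to a push. A sketch of emitting one from PHP — the image path is made up, and `http2_push_preload` is, if I recall correctly, the nginx directive that does this conversion:

```php
<?php
// Emit a preload hint. On its own this only suggests an early fetch to the
// browser; a front-end like nginx with `http2_push_preload on;` (or a CDN
// doing the same) may additionally synthesise an HTTP/2 push from it, which
// would explain wiki.png showing up as pushed in devtools above.
header( 'Link: </w/wiki.png>; rel=preload; as=image' );
```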
[21:17:33] what is the recommended way of adding temporary logging code to an extension? do it in master, cherry-pick to the deployment branch, then revert both? or work exclusively in the deployment branch?
[21:25:44] i'll ask on -tech
[21:52:31] ori: wfDebugLog, AdhocDebug, wmf-branch only.
[21:52:49] unless you need it for beta or for multiple weeks
[21:58:32] perfect, thanks
[21:58:36] AdhocDebug is just what I needed
[22:00:41] AdHocDebug*, for any fellow greppers reading along
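A sketch of the pattern recommended above: temporary logging via wfDebugLog() to the AdHocDebug channel, in a change applied to the wmf deployment branch only. It assumes the code runs inside MediaWiki (wfDebugLog() is a MediaWiki global function); the surrounding extension code and message text are made up for illustration:

```php
<?php
// Somewhere in the extension code under investigation. Everything except
// wfDebugLog() and the 'AdHocDebug' channel name is illustrative.
$suspectValue = 42; // hypothetical value being debugged
wfDebugLog(
	'AdHocDebug',
	'MyExtension: reached the slow path, suspectValue=' . $suspectValue
);
```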