[00:00:33] I don't get that header back from the server
[00:01:14] is it just giving a protocol error if a server sends a Proxy-Connection HTTP header over HTTP/2.0?
[00:02:25] what would the CLI be to test that? from the pastebin, I seem to be sending only the following:
[00:02:26] * Using Stream ID: 1 (easy handle 0x7fd1b700ee00)
[00:02:26] } [5 bytes data]
[00:02:27] > HEAD /w/load.php?lang=en&modules=ext.cite.styles%7Cext.uls.interlanguage%7Cext.visualEditor.desktopArticleTarget.noscript%7Cext.wikimediaBadges%7Cjquery.makeCollapsible.styles%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.page.gallery.styles%7Cmediawiki.skinning.interface%7Cmediawiki.toc.styles%7Cskins.vector.styles%7Cwikibase.client.init&only=styles&skin=vector HTTP/2
[00:02:27] > Host: en.wikipedia.org
[00:02:27] > user-agent: curl/7.67.0
[00:02:27] > accept: */*
[00:03:42] I think it has to do with response headers, not sending request headers
[00:04:05] maybe compile nghttp2 with the --with-debug flag, or look at Wireshark, etc.
[00:07:33] no wait, Proxy-Connection is a request header, isn't it?
[00:13:25] protocol exchange forcing HTTP/1.1: https://pastebin.com/KLCCDCSE
[00:13:32] temp20191117, I need to go to sleep now, not that I'm proving particularly helpful anyway. I recommend making a ticket on phabricator.wikimedia.org (login can be done through normal Wikimedia SUL accounts), adding the #traffic project, and mentioning that Stack Overflow page. Maybe also see what other info you can dig up by looking at it in Wireshark.
[00:14:09] I have no such account; I created this one as a throwaway
[00:14:10] hmph, that does appear to be a Proxy-Connection HTTP/1.1 response header there
[00:14:18] do you have an account to log into Wikipedia?
[00:14:49] no; I just browse and edit anonymously
[00:23:39] temp20191117, you're not behind a MITM proxy, are you?
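[editor's note] One way to answer the "what would the CLI be to test that?" question at 00:02:25 — a minimal sketch using curl's real --http2 and --http1.1 switches to compare what each protocol returns; the load.php URL is trimmed to a shorter form of the one in the log for readability:

```shell
# Send a HEAD request (-I, silent with -s) forcing each protocol in turn,
# then look for the header in question. grep -i is important because
# HTTP/2 puts header names on the wire in lowercase.
URL='https://en.wikipedia.org/w/load.php?lang=en&only=styles&skin=vector'

curl -sI --http2 "$URL" | grep -i '^proxy-connection' \
    || echo 'no Proxy-Connection header over HTTP/2'

curl -sI --http1.1 "$URL" | grep -i '^proxy-connection' \
    || echo 'no Proxy-Connection header over HTTP/1.1'
```

If the header only appears in the HTTP/1.1 output, that matches the "response header" theory discussed below.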
[00:24:01] nope, just a simple Asus router
[00:25:21] see, I don't get that Proxy-Connection header you get
[00:28:18] with HTTP/2, headers tend to be all lower-case, whereas with HTTP/1.1 they're Mixed-Case; not sure if that makes a difference, though it bit many Let's Encrypt clients when LE switched CDNs
[00:30:24] especially LE clients written in (bash) shell that used curl to do stuff
[00:30:57] https://github.com/lukas2511/dehydrated/issues/559
[00:34:39] that's why in my HTTP/2 command output (https://pastebin.com/ihpY38Hk) it says "name: [proxy-connection]" and not "name: [Proxy-Connection]". Not sure if that's important. Also a reminder that I first noticed this with the Safari web browser, though cURL shows it's not restricted to just that
[00:37:48] probably not
[00:39:37] well, I have no account on Phab, and it seems no one else besides you, Krenair, is digging into this, so I'm not sure what's next
[00:44:13] guess I'll make one
[00:44:43] okay
[00:44:48] (a ticket)
[00:46:07] do you need any more output from my side of things, or is the various cURL stuff sufficient?
[00:46:35] also, another SO question that explicitly mentions HTTP/2 in relation to NSPOSIXErrorDomain:100: https://stackoverflow.com/questions/50920112/
[00:48:44] it may have something to do with the Upgrade header, per this weblog post: https://megamorf.gitlab.io/2019/08/27/safari-nsposixerrordomain-100-error-with-nginx-and-apache.html
[00:49:28] "Given that all major browsers do not support HTTP/2 without TLS anyway and that no Upgrade header is allowed for HTTP/2 over TLS the solution here is to remove the header from the nginx response to the client, so implement one of the following options:"
[00:49:46] I don't know
[00:50:06] perhaps something to put in the Phab notes as a possible lead?
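[editor's note] The header-case point at 00:28:18 can be demonstrated without any network traffic. This is a sketch of the failure mode that bit shell-based Let's Encrypt clients such as dehydrated: HTTP header field names are case-insensitive per the spec, but a case-sensitive grep written against HTTP/1.1 Mixed-Case output silently misses the lowercase form HTTP/2 uses.

```shell
# Simulated HTTP/2-style response line: the name arrives in lowercase.
header='proxy-connection: keep-alive'

# Fragile: a case-sensitive match written against HTTP/1.1 output misses it.
echo "$header" | grep -q '^Proxy-Connection:' \
    && echo 'matched (case-sensitive)' \
    || echo 'missed (case-sensitive)'

# Robust: header names are case-insensitive, so always match with -i.
echo "$header" | grep -qi '^Proxy-Connection:' \
    && echo 'matched (case-insensitive)'
```

Running this prints "missed (case-sensitive)" followed by "matched (case-insensitive)".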
[00:50:07] the people I'd want to look at this are Foundation staff who won't be around for a good few hours at least
[00:50:29] they might want more info, in which case me opening the ticket on your behalf might be problematic if I can't contact you
[00:50:35] I've opened https://phabricator.wikimedia.org/T238509
[00:50:40] okay
[00:51:17] do they hang out here sometimes? I'm in the Eastern TZ, so I may be able to drop back
[00:51:48] #wikimedia-traffic most likely
[00:51:55] k
[00:52:56] Brandon Black runs that stuff, and I think he's in that TZ or close to it
[00:53:05] k
[00:54:28] there may be some EU-based people interested who might get involved sooner. we'll see
[00:55:32] Brandon's profile page has an e-mail address
[00:58:37] could be related to the Varnish -> ATS migration, I guess
[00:59:49] well, things always used to work, and then at some point I noticed that they sometimes didn't; I don't recall the exact time though
[01:00:09] can you add the weblog post to the ticket as a note? https://megamorf.gitlab.io/2019/08/27/safari-nsposixerrordomain-100-error-with-nginx-and-apache.html
[01:02:38] done
[01:03:09] cool, thanks. it shows up on refresh
[01:03:26] also saw the bot post it to -traffic
[01:05:11] actually, this SO question mentions very similar things (Safari, cURL): https://serverfault.com/questions/937253/https-doesnt-work-with-safari
[01:05:33] though it talks about the "Upgrade" header and not "Proxy-Connection"
[01:06:33] regardless, thanks for your help, Krenair: I think that's as far as we can take things at the moment until more people wake up / look at things
[01:07:05] thanks for pointing out the issue
[14:16:26] Is there a CLI I can use to pre-check my code for lint errors before it is submitted?
[14:17:16] Like for JS I use npm... is there anything similar for PHP?
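[editor's note] On the PHP lint question at 14:16:26: the PHP CLI ships a built-in syntax checker, `php -l`, and a git pre-commit hook can run it on staged files automatically. A minimal, hypothetical hook sketch (install as .git/hooks/pre-commit and make it executable); heavier linters like PHP_CodeSniffer would slot into the same loop:

```shell
#!/bin/sh
# Minimal git pre-commit hook: run `php -l` on every staged .php file
# and abort the commit if any file fails the syntax check.
status=0
for f in $(git diff --cached --name-only --diff-filter=ACM | grep '\.php$'); do
    # php -l prints "No syntax errors detected" or a parse error; we only
    # care about the exit status here.
    php -l "$f" >/dev/null 2>&1 || { echo "Lint error in $f"; status=1; }
done
exit $status
```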
[14:17:26] *eslint
[14:18:19] Also, something that runs automatically on commit would be great
[15:16:49] Is there a parameter that submits a filled special page directly? I'm looking for one on Special:TemplateSandbox
[15:19:53] I would be very surprised if this existed as a general parameter for any special page
[15:20:12] imagine if you think you're following an article link on a talk page, and suddenly you've *submitted* Special:Block
[15:20:42] though for Special:TemplateSandbox that's not really a concern, so that page could potentially have its own custom parameter to the same effect
[15:21:35] One sec, doesn't ->setMethod( 'GET' ) do exactly that?
[15:26:13] Lucas_WMDE: yeah, I would also be concerned about DoS; some special pages might be expensive
[15:31:15] on the more general problem: users are testing modules in the main namespace on ptwiki. I think that is bad.
[15:31:42] I was looking at using Special:TemplateSandbox, but apparently that does not work with mobile.
[15:32:34] anyone have another suggestion? they are testing templates designed for the main namespace, and mobile is an important part of the discussion
[16:18:28] hii
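[editor's note] On the ->setMethod( 'GET' ) point at 15:21:35: a GET form serializes its fields into the query string, so a "prefilled" special page is just a URL whose parameters percent-encode the field values. A shell sketch of building such a link; FIELDNAME is a deliberate placeholder, since the real parameter names depend on the form definition and are not given in the log:

```shell
# Page we want to prefill into the (hypothetical) form field.
page='Module:Example/sandbox'

# Percent-encode the characters that matter here (':' and '/') for use
# in a query string. A full encoder would cover more characters.
encoded=$(printf '%s' "$page" | sed 's/:/%3A/g; s,/,%2F,g')

echo "https://pt.wikipedia.org/wiki/Special:TemplateSandbox?FIELDNAME=${encoded}"
```

Printing: https://pt.wikipedia.org/wiki/Special:TemplateSandbox?FIELDNAME=Module%3AExample%2Fsandbox — which also illustrates the DoS concern raised at 15:26:13: with GET, anyone can link directly to a filled (and, if the page auto-renders, expensive) form.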