[02:19:12] anyone know what's causing the RSS feed error here? https://www.mediawiki.org/wiki/HHVM#Current_work
[02:27:57] * Carmela peeks.
[02:29:03] SudoKing: Looks like the RSS extension doesn't like hitting https://bugzilla.wikimedia.org?
[02:29:42] Probably worth filing a bug in Bugzilla.
[02:46:56] SudoKing: hmm, I think I broke that.
[02:47:37] https://gerrit.wikimedia.org/r/#/c/152917/ probably.
[02:49:47] SudoKing: yeah, url-downloader.wikimedia.org can't connect to bugzilla.wm.o
[07:04:10] hello all
[07:05:11] does the PCRE lib used by wikipedia-mailing list support filtering by character class (unicode script)?
[08:09:08] hey all, anyone around who can help me with upload errors on Commons? The error is either that the file is corrupt, or https://commons.wikimedia.org/wiki/Commons:FAQ#What_does_the_upload_error_This_file_contains_HTML_or_script_code_that_may_be_erroneously_interpreted_by_a_web_browser._mean.3F
[08:57:09] :o https://meta.wikimedia.org/wiki/Mailing_lists/List_info
[09:10:13] Wow, that went fast.
[09:10:18] (With translations)
[10:11:27] déjà vu? O_o
[12:09:22] regarding my problem yesterday: it's not my ISP, it's wikimedia enforcing ssl, which is not complying with http... possibly a major bug in the future...
[12:10:42] could you elaborate what "not complying" means exactly?
[12:13:30] had/have a similar problem with the flickr upload feature on commons: it won't upload most images from flickr that i tried when i use commons over http, but it worked with all of them when i switched to https...
[12:14:44] it's an API-related problem...
[12:16:18] warpath: as was pointed out yesterday, it's a proxy on your end that's borking your http connections
[12:17:43] it's not, the error came from your end, 198 IP range...
[12:18:18] warpath: http://pastebin.com/MR2nR14i was what you posted yesterday
[12:18:32] i have been banging my head over this for the last 24 hours... my ISP is shit, but it doesn't block any sites or IPs, i talked to the overall sales manager...
[12:18:43] warpath: it's not about blocking sites
[12:19:06] warpath: your proxy thinks the connection is dead and gives up, even though it's not dead
[12:19:15] switching to https takes the proxy out of the picture
[12:19:19] * warpath has no proxy....
[12:19:45] your ISP proxies http connections, as evidenced by http://pastebin.com/MR2nR14i
[12:20:29] google "WebProxy/1.0 Pre-Alpha": all you get as hits is my 2 pasteys
[12:23:07] so?
[12:23:10] i added an image and did a preview via my account on http, it crashed (page went blank); did it on another account with https, it worked normally...
[12:23:41] yes, because your ISP sends a 504 without content --> blank page
[12:25:56] the errors appear under "Response Headers" in the web console...
[12:37:28] warpath: yes. Response from your ISP's proxy, in this case.
[12:38:20] warpath: your computer --> ISP proxy --> WMF reverse nginx proxy --> WMF application servers
[12:38:47] my ISP has 3 IPs (27.123.130.59, 10.142.144.24 and 183.81.133.150); is any of them a proxy?
[12:38:49] warpath: you can get a response from the app servers (which is what you want), the nginx proxy (if the app servers are down) or from your ISP proxy (if it thinks the connection to the WMF is dead)
[12:44:28] warpath: the first is certainly a proxy (it's a Vodafone Fiji IP that returns Google), the second is network-internal, the third is not accessible as a proxy from there, but that doesn't mean it's not a proxy.
[12:45:11] * warpath is in Fiji, fyi...
[12:45:58] so the 3rd one, possibly...
[12:46:08] * warpath will contact the idiot sales manager...
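On the 07:05 question about Unicode script classes: PCRE does support script properties via \p{...} when built with Unicode property support. A minimal sketch of the same idea in Python, using the third-party `regex` module (the stdlib `re` module lacks script properties); the sample string is made up for illustration:

```python
# pip install regex  (third-party; stdlib `re` has no \p{Script=...} support)
import regex

# PCRE-style script property: match runs of Cyrillic letters.
# In PCRE itself this would be /\p{Cyrillic}+/ with UTF/UCP enabled.
cyrillic_run = regex.compile(r"\p{Script=Cyrillic}+")

sample = "Wikipedia is Википедия, свободная энциклопедия"
print(cyrillic_run.findall(sample))
# ['Википедия', 'свободная', 'энциклопедия']
```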
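And on the 12:09-12:46 proxy debugging: one way to confirm an intercepting ISP proxy on plain http is to compare response headers over http and https, since the proxy can only insert itself into the unencrypted connection. A rough sketch with the `requests` library; which headers a given proxy adds (Via, or Server strings like "WebProxy/1.0 Pre-Alpha") varies, so the header names below are just likely suspects:

```python
# pip install requests
import requests

for url in ("http://commons.wikimedia.org/",
            "https://commons.wikimedia.org/"):
    # Don't follow redirects: we want the headers of the first hop.
    r = requests.get(url, timeout=30, allow_redirects=False)
    print(url, "->", r.status_code)
    # An intercepting proxy often betrays itself in these headers;
    # over https they should come from the WMF edge instead.
    for header in ("Server", "Via", "X-Cache"):
        print("   ", header, "=", r.headers.get(header))
```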
[16:17:24] Jeff_Green: after all, WMF mail servers are not designated as permitted senders for, e.g., gmail.com
[16:18:27] hey
[16:18:46] right
[16:18:54] SPF checks only the envelope sender though
[16:19:18] afaik wiki mail previously always went out with wiki@wikimedia.org as the envelope sender
[16:19:27] true
[16:19:34] or at least as a wikimedia.org address
[16:20:12] so maybe it's your email client seeing that legit/SPF-authorized envelope sender which doesn't match the From: line in the email header?
[16:20:31] Jeff_Green: hmm, no, don't think so. A test email I sent two days ago actually used envelope-from
[16:21:05] can you bounce it to me so I can check it out? jgreen@wikimedia.org
[16:21:25] the return-path is wiki@wikimedia.org, though
[16:21:44] valhallasw`cloud: that's what we are editing
[16:22:13] valhallasw`cloud: return-path reflects the original envelope sender
[16:22:21] Ohhhh. Okay.
[16:22:38] that's an email header that's added by some MTA along the way (probably the first one that saw the message)
[16:23:46] (I haven't actually seen the warning message in WMF context myself -- it was reported by someone else on the yahoo DMARC bug)
[16:23:58] ok
[16:24:19] I've been looking at DMARC a bit this week, I'm a little puzzled by yahoo
[16:24:33] i need to research what's happening there
[16:25:04] as far as I understand, DMARC means yahoo signs the e-mail, which is then checked with a public key in a DNS record
[16:26:08] right
[16:26:20] the bug had to do with messages originating at yahoo?
[16:26:51] there are two bugs. One on yahoo messages to mailing lists (yahoo signs it, mailing list adds headers, receiver checks, signature fails and mail bounces)
[16:27:13] and a second one on Special:SendEmail (MW sends email, receiver checks, finds no signature, rejects/bounces mail)
[16:28:04] and a third one where Yahoo users don't receive emails due to some unclear bulk mail policy
[16:28:13] ok
[16:28:23] first one is https://bugzilla.wikimedia.org/show_bug.cgi?id=64818
[16:28:28] second one is https://bugzilla.wikimedia.org/show_bug.cgi?id=64795
[16:28:36] third one is https://bugzilla.wikimedia.org/show_bug.cgi?id=56414
[16:28:49] ok thanks, I'll read up on it
[16:29:44] the short answer overall, though: none of the VERP changes should really affect any of this
[16:30:17] we're changing the envelope sender from typically wiki@wikimedia.org to wiki-{long string}@wikimedia.org but otherwise treating it just like before
[16:30:39] I see.
[16:31:47] Will bounces be reported? (to the original sender and/or the intended receiver)
[16:32:42] no, same as before
[16:32:58] it's conceivable we could add some kind of metric down the road, I guess though?
[16:33:29] tonythomas: is there any notification upon unsubscription at this point?
[16:34:00] Jeff_Green: we haven't started processing bounces yet
[16:34:02] therefore, no
[16:34:20] but when we do?
[16:34:28] we will need some incoming-mail exim changes for that
[16:34:57] once we get a new router ready in exim4.conf
[16:35:33] Jeff_Green: oh no -- you asked about notifications on unsubscribing the user?
[16:35:38] I got the question wrong
[16:35:41] of course, yes
[16:35:52] by email? :-P
[16:36:03] Jeff_Green: no, of course -- that won't work
[16:36:11] j/k
[16:36:19] we have wfDebugLog logging every unsubscription
[16:36:35] ya
[16:36:59] so from the user's side, what would they see when they get unsubscribed?
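To make the 16:18-16:30 envelope-sender point concrete: SPF is evaluated against the SMTP MAIL FROM (the envelope sender, later surfaced as the Return-Path: header), not against the From: header a mail client displays. A minimal sketch with Python's smtplib; the addresses, the VERP-style local part, and the localhost MTA are all made-up placeholders illustrating the wiki-{long string}@wikimedia.org shape described above:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
# The From: header is what the recipient's client displays (and what
# DMARC alignment looks at)...
msg["From"] = "Example User <user@example.org>"
msg["To"] = "recipient@example.net"
msg["Subject"] = "Envelope sender vs From: header"
msg.set_content("SPF only checks the envelope sender.")

# ...while the envelope sender is supplied separately at SMTP time;
# this VERP-style address is a hypothetical example of the scheme.
envelope_from = "wiki-deadbeefcafe@wikimedia.org"

with smtplib.SMTP("localhost") as smtp:  # placeholder MTA
    smtp.send_message(msg, from_addr=envelope_from)
# Receiving MTAs record envelope_from as Return-Path:, which is why SPF
# can pass while the From: domain still differs.
```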
[16:37:14] Jeff_Green: they will see the red box back in Special:Preferences
[16:37:25] saying that their email is not confirmed
[16:37:30] ah ok
[16:37:32] thx
[16:37:43] ha :)
[16:38:12] and to mention -> we send the whole email to the wiki-admins in case our regex functions fail to parse it
[16:38:31] * wiki-admins listed in BounceHandler.php
[16:38:37] yep
[16:45:00] "Exception thrown by ext.centralNotice.bannerController" load.php:161
[16:45:00] "URIError: malformed URI sequence" URIError: malformed URI sequence
[18:49:49] where can i see the full source of wmf-config/flaggedrevs.php?
[18:50:10] argh, lags
[18:50:48] Cladis: http://noc.wikimedia.org/conf/flaggedrevs.php.txt
[18:52:22] thanks
[22:27:27] https://ru.wikipedia.org/wiki/MediaWiki:Gadget-wikilinker.js people try to run it on a non-wikipedia and expect it to give local links if they exist
[22:27:44] it's hard for me to understand why it should work the way they expect
[22:31:38] RoanKattouw: yo
[22:31:43] Hey
[22:31:47] you do js?
[22:31:52] see above please :)
[22:32:26] they say 'it worked before but no longer does' and I'm surprised - i didn't use it personally and i don't get a feeling of understanding its logic
[22:32:44] importScript('MediaWiki:Stemmer.js')
[22:32:50] So that would presumably have to be there
[22:33:15] Other than that I don't see anything wiki-specific
[22:33:15] yes, it exists
[22:34:08] i think the stemmer does some language/encoding things
[22:37:15] like it doesn't do detection of page existence on a wiki
[22:39:14] and when i try to read it, i see it does processText, which calls loadXMLDoc, but i don't understand where it goes from there
[22:40:00] oh dear :| it apparently does stateChanged, which does eval()
[22:56:17] what is responsible for LuceneSearch? i thought an extension, but i don't see it at Special:Version even though its api works for wikipedia
[22:56:21] anyone alive here? ...
[22:56:39] sure
[22:57:00] i think no one really knows how lucene search works; that's one of the reasons it's being axed
[22:57:10] Svetlana: depends which part of it, there is role/lucene.pp in puppet which sets up server stuff
[22:58:21] also http://noc.wikimedia.org/conf/lucene-production.php.txt
[22:59:28] http://git.wikimedia.org/blob/operations%2Fpuppet.git/e75a2b05c5dce2c1f3978608ffe4e08c655ce499/manifests%2Frole%2Flucene.pp
[23:01:05] So has the commit that is referenced by wmf17 in vendor not been pushed?
[23:01:30] fatal: reference is not a tree: d79ed843e09bc7da7991c66f9f81d26ccac81083
[23:01:37] Unable to checkout 'd79ed843e09bc7da7991c66f9f81d26ccac81083' in submodule path '../vendor'
[23:01:51] someone asked about that a few hours ago
[23:03:19] * Negative24 is checking who pushed the submodule pointer to vendor
[23:04:54] MatmaRex: where did they already axe/remove it?
[23:05:38] Svetlana: it's being replaced by CirrusSearch, i'm not sure off-hand what the status is
[23:05:41] !e CirrusSearch
[23:05:54] ah, i keep forgetting the bot is broken in this channel. https://www.mediawiki.org/wiki/Extension:CirrusSearch
[23:06:08] MatmaRex: apparently i need to find its exact current status, as a gadget relies on it
[23:06:09] CirrusSearch is the default on several wikis now. I wonder where the deployment schedule is
[23:06:28] exact current status of which one, and where exactly?
[23:06:29] There were some problems.
[23:06:45] Like dildos showing up when you're searching for toothbrushes.
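An aside on the wikilinker discussion above (22:27-22:40): the piece it's said to be missing, "detection of page existence on a wiki", is a plain MediaWiki API query, with no need to eval() anything. A rough Python sketch of the idea (the gadget itself is JS; the endpoint and titles here are illustrative):

```python
# pip install requests
import requests

API = "https://ru.wikinews.org/w/api.php"  # example: check the local wiki

def page_exists(title: str) -> bool:
    """Ask the MediaWiki API whether a page exists on this wiki."""
    r = requests.get(API, params={
        "action": "query",
        "titles": title,
        "format": "json",
    }, timeout=30)
    pages = r.json()["query"]["pages"]
    # Missing pages come back with a negative page id and a "missing" key.
    return all("missing" not in page for page in pages.values())

print(page_exists("Заглавная страница"))        # main page: True
print(page_exists("Some never-existing title")) # False
```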
[23:07:18] i think they actually figured that one out :D commons has some funny pictures
[23:07:57] a search for 'toothbrush' on commons no longer yields a woman dildoing herself with one on the first page of results
[23:08:05] (i think.)
[23:09:39] Still here.
[23:10:21] Svetlana, https://lists.wikimedia.org/pipermail/wikitech-l/2014-July/077375.html
[23:10:40] ^ deployment dates for new search
[23:14:03] Reedy introduced the vendor submodule when creating the branch. That's usual, just the regular package linking done in a wmf branch.
[23:14:18] but the commit hasn't been made public
[23:17:25] Reedy: ^
[23:20:02] What?
[23:20:17] andre__: ok. strange. it appears to work on ru.wp but gives an http timeout on ru.wn
[23:20:24] There are 2 vendor submodules
[23:20:27] using 'ruwiki' is ambiguous :-(
[23:20:40] The old https://github.com/wikimedia/mediawiki-core-vendor
[23:20:43] actually no, it's a specific db name
[23:20:51] And the newer https://github.com/wikimedia/mediawiki-vendor/branches
[23:21:02] The older one doesn't have the wmf17 branch, but wmf17 shouldn't be pointing at it
[23:21:12] I updated the wmf16 submodule to point at the new one too
[23:22:58] Reedy: vendor points at https://gerrit.wikimedia.org/r/p/mediawiki/vendor.git
[23:23:06] Yes
[23:23:15] The github mirrors are just easier to link to
[23:23:21] gitblit is awful
[23:23:28] and tree d79ed8, which reports as not public
[23:23:44] https://github.com/wikimedia/mediawiki-vendor/commit/d79ed843e09bc7da7991c66f9f81d26ccac81083
[23:23:48] it looks like the old search gave nice results for guessing how to wikilink a thing ('take the shortest of the first 3 results and it's ok to go'), but i don't appear to be able to convince cirrus search to give results from another wiki (e.g. ru.wn wants to guess that a word is a wikilink to a local article if it exists, and to a wikipedia article otherwise, while allowing for flexibility - not an exact match)
[23:23:50] It's there on github, so it's in the source
[23:23:55] as it's been mirrored/replicated there
[23:24:02] Negative24: git fetch --all?
[23:24:24] Negative24: try $ git submodule sync
[23:24:57] Svetlana: cirrus on ru.wn doesn't know that wp exists.
[23:25:35] Reedy: no go with git fetch --all, trying git submodule sync...
[23:26:05] try the fetch --all after the sync
[23:26:20] or git submodule update --init vendor
[23:27:48] git submodule sync worked, but how come?
[23:31:42] because the url of the git repository changed
[23:31:47] tl;dr: submodules are silly
[23:32:16] legoktm: Yeah. So many commands just to do basic tasks
[23:32:31] I never knew about the sync option
[23:35:35] Do wmf configs disable the mw_config interface?
[23:49:32] uh, 2 questions: 1) why is it on ru.wp but not on ru.wn; 2) why does cirrus search with the 'nearmatch' thing return empty results for 'Америки' where 'Америка' exists (and it does find it for an 'Америка' query)?
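For the 23:49 'nearmatch' question, the behaviour can be poked at directly through the search API's srwhat=nearmatch mode. A quick sketch, using the query strings from the question and assuming ru.wikipedia as the endpoint:

```python
# pip install requests
import requests

API = "https://ru.wikipedia.org/w/api.php"

for term in ("Америка", "Америки"):
    r = requests.get(API, params={
        "action": "query",
        "list": "search",
        "srsearch": term,
        "srwhat": "nearmatch",  # title near-match instead of full-text
        "format": "json",
    }, timeout=30)
    hits = r.json()["query"]["search"]
    print(term, "->", [hit["title"] for hit in hits])
# Comparing the two result lists shows whether nearmatch applies any
# inflection/stemming; per the question above, apparently it does not.
```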