[02:17:26] ^d: Someone just posted to my talk page asking if we should better advertise http:// in JavaScript being removed. [02:17:41] Some browsers won't load insecure resources over HTTPS by default. [02:19:22] <^d> :\ [02:20:41] https://meta.wikimedia.org/wiki/User_talk:MZMcBride#HTTPS_switch [02:22:09] <^d> A watchlist notice isn't a bad idea. [02:25:46] Yeah. [02:25:58] Perhaps e-mail the tech ambassadors recommending they set one up? [02:26:15] ^d: that's a custom feature per-wiki, not something available everywhere, right? [02:26:29] <^d> Right. [02:26:37] <^d> Elsie: I should probably join that list. [02:30:38] hah, yes you should! [02:34:51] oh geez [02:49:30] Elsie, some browsers? I think it's Chrome and Firefox, so I would imagine most browsers [02:51:42] oh, this is user specific/defined js? bah, notice is fine, can't fix it all for them before hand. [02:52:19] What about gadgets? [02:52:36] well what about gadgets? [02:52:36] Gadgets are pretty key functionality on a lot of wikis [02:52:56] "sure wish we had some testing on those" [02:52:57] ;) [02:53:20] Gadgets which make non-HTTPS requests while browsing in HTTPS are already broken [02:53:27] ^d: btw, since we have the code live on test2 for testing, can we turn it off on beta cluster so we can get the browser tests working again? :) [02:53:33] people writing them should already know this [02:53:38] Krenair: technically, yes, but...... [02:53:49] but what? [02:54:06] people are not always in the know and there's a lot of copy/paste going on [02:57:25] I guess we'll likely see more people effected by badly written/outdated gadgets/site-wide code, but this really is not anything new [02:58:04] It isn't, but we can try to mitigate better rather than worse [02:58:16] * Aaron|home is still a little worried about bots [02:58:19] I'm writing a note to -ambassadors right now about this [02:58:20] provide resources, "here's why stuff might have just broken and what you can do to fix it" [02:58:23] thank you greg-g [02:58:48] Aaron|home: if you could, it would be great if you could follow up Amir's wikitech-ambassadors email with some specifics about what people should do -- Amir just said "watch out!" without real steps [02:59:07] Aaron|home: wanna write up a quick (like, seriously, 2-3 sentences) blurb on "what to do if you're a bot developer/maintainer?" that I can put on the HTTPS metawiki page? [02:59:13] heh [02:59:21] I hadn't even considered bots... [02:59:26] yeah... [02:59:50] is there anything ELSE on the "oh wait what about foo" list? [02:59:52] greg-g: well either they can switch the https preference off as soon as it appears for the bot accounts or make sure their code does requests against https [03:00:01] supposedly pwb has been patched, but some users are running into a weird bug http://lists.wikimedia.org/pipermail/pywikipedia-l/2013-August/008214.html [03:00:25] those would be the two choices afaik [03:00:44] fundraising is already https. slow connections, we've already said we're ok with that. localisation (in terms of language) is, I assume, something we do not have to worry about. Some people have super terrible browsers that do not support HTTPS at all - oh well. [03:01:00] for bots that just do GETs and follow 302s, they might not have to do anything though [03:01:13] is our CA like spoofed or blocked or banned anywhere? :/ [03:01:17] but no bot will probably handle the 302s on POSTs [03:01:22] * sumanah may be getting SSL details wrong [03:01:31] localisation? I don't think SSL causes issues with that. 
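An aside on the bot worry raised above (the [02:58] to [03:01] exchange): bots that only do GETs and follow 302s will mostly keep working after the switch, but API writes are POSTs, and most HTTP client libraries either refuse to follow a redirect on a POST or replay it as a GET, so a redirected write can fail silently. The following is a minimal sketch of the two options mentioned, assuming the Python requests library; it is illustrative only, not pywikibot's actual code.

    import requests

    API_HTTP = "http://en.wikipedia.org/w/api.php"    # legacy plain-HTTP endpoint
    API_HTTPS = "https://en.wikipedia.org/w/api.php"  # what bot code should target

    def redirect_status_codes(session):
        """Probe a harmless GET against the http:// endpoint and report any
        redirects that were followed; a non-empty list means POSTs sent to
        the same URL are at risk of not surviving the hop to HTTPS."""
        resp = session.get(API_HTTP, params={"action": "query",
                                             "meta": "siteinfo",
                                             "format": "json"}, timeout=30)
        return [r.status_code for r in resp.history]

    def api_post(session, data):
        """The code-side fix: send write requests straight over HTTPS so no
        redirect is involved at all. (The other option discussed above is
        switching the HTTPS preference off for the bot account.)"""
        return session.post(API_HTTPS, data=dict(data, format="json"),
                            timeout=30).json()

    # Example:
    #   s = requests.Session()
    #   print(redirect_status_codes(s))   # e.g. [301] once http:// redirects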
unless you're talking about the central notice or something [03:01:46] Aaron|home: i guess that depends on whatever the standard library does [03:02:04] Krenair: right, I don't think it's something we have to worry about -- it's just part of my standard "have we forgotten about" litany [03:02:31] a11y, performance, analytics, i18n, mobile, etc. etc. [03:02:58] sumanah, well, is basically blocked everywhere except places where it's whitelisted. Which is almost everywhere, luckily. That's kind of how they work. Users can 'block' (remove) CAs if they want [03:02:59] legoktm: yeah, like webob will follow 302s on GETs normally unless you set a custom handler for that status [03:03:22] Krenair: sorry, did you forget a noun near the start of that? :) [03:04:21] sorry that was in reply to your comment about the possibility of the certificate authority getting spoofed/blocked/banned [03:04:26] so, analytics. I know we have stats *about* HTTPS login. Will HTTPS-only mess with our actual intake of any data in a way that will screw up important charts? I doubt it [03:04:49] "well, is basically blocked everywhere" - what is the noun there? the CA? [03:05:05] hmmm [03:07:48] Aaron|home: so pywikibot follows 302s for GET and HEAD requests, but not for POST [03:07:59] You have a list of trusted CAs. If any given CA is not on said list, you don't trust the certificate. I don't really consider that 'blocking' a CA. [03:08:21] CURLOPT_FOLLOWLOCATION seems false by default (for any PHP bots) [03:08:30] legoktm: yep, makes sense [03:08:43] legoktm: I wonder how it handles 307s on POST? [03:09:04] well [03:09:05] redirectable_response = ((response.status == 303) or [03:09:05] (response.status in [300, 301, 302, 307] and [03:09:05] in theory that would use a prompt per spec but that wouldn't make sense for that library [03:09:05] method in ["GET", "HEAD"])) [03:09:18] Krenair: got it. [03:09:33] so I don't think it would handle it at all [03:09:41] is there a search we could do to identify gadgets on WMF wikis that are primed to have trouble when we throw the HTTPS switch? [03:10:22] iirc, hoo ran a script a year or two ago that fixed most mixed content issues [03:11:42] https://encrypted.google.com/search?hl=en&q=%22http%22%20site:wikipedia.org%20intitle:gadget%20intitle:mediawiki [03:11:46] Lots of noise. [03:12:11] yeah, can't make much sense of that [03:12:13] yeah. [03:12:24] thanks for trying :/ [03:12:30] It's all "Retrieved from" text. [03:12:54] I would try searching each site individually via the API [03:13:23] Good luck. [03:17:09] Elsie: Krenair Aaron|home legoktm : patches welcome to this help section: https://meta.wikimedia.org/wiki/HTTPS#Help.21_My_code_is_broken.21 [03:18:41] i added a small note about pywikibot [03:18:56] ok [03:21:05] thanks legoktm [03:21:15] indeed [03:26:57] greg-g: in the blog post, do we want to mention that people can (after logging in) turn off HTTPS in their prefs? [03:28:07] sumanah: sure, probably good :) [03:28:21] Can't believe I forgot.. is it not on the HTTPS metaiwki page either? [03:28:22] * greg-g looks [03:28:46] doesn't appear so [03:35:43] greg-g, "Help! My code is broken!" part - gadget users generally can't change anything [03:35:54] unless they happen to be admins or interface editors [03:37:07] Krenair: s/user/author/ ? [03:37:10] yeah [03:37:14] * greg-g nods [03:37:32] done [03:37:44] "simply modifying any hardcoded urls from "http://..." to "//..." should fix the issue (this is called using "protocol relative urls")." 
doesn't feel like it's been written for gadget authors, but meh [03:39:23] Krenair: how would it feel if it were written for gadget authors? [03:39:32] (including ones whose native language is not English) [03:39:35] I am sincere [03:40:28] sumanah: added a "Disabling" section to blog and HTTPS page [03:40:39] Thank you greg-g [03:41:06] Well it would explain what the problem likely is, how to check that it is the issue they actually have, and they should easily be able to figure out how to fix it from that [03:41:59] targeted for a programmer rather than a non-technical admin [03:42:17] but, it really doesn't matter that much I don't think [03:42:36] Krenair: well, I don't want to make that page too big, but is what I sent to -ambassadors and wikibots-l sufficiently close to what you want? [03:43:55] http://lists.wikimedia.org/pipermail/wikitech-ambassadors/2013-August/000351.html [03:44:13] http:// to //? [03:44:19] I'm not sure everyone agrees with that. [03:45:15] Elsie: does protocol relative not work in some browsers/etc? [03:45:27] https://bugzilla.wikimedia.org/show_bug.cgi?id=52253 [03:45:31] Yes, basically. [03:46:30] Elsie: from the list in comment 0, I'm not worried [03:46:54] greg-g, yeah it's fine as far as I'm concerned [03:47:37] greg-g: The comment mentions that it's a long tail. [03:47:41] I'm not sure what the advantage to HTTP is. [03:47:53] Err, protocol-relative, I mean. [03:47:58] Elsie: and we don't support browser under .... I forget... 2% usage? [03:48:00] Words are hard. [03:48:06] As Elsie mentioned, Tim might disagree [03:48:11] greg-g: That percentage always moves. ;-) [03:48:15] yeah, right? [03:48:16] :) [03:50:27] I believe it's about ~1% usage - that's our threshold for caring about browsers [03:51:15] https://www.mediawiki.org/wiki/Compatibility#Browser [03:51:24] * greg-g edits to make it say 50% [03:51:45] greg-g: "browser compatibility? no, we only care about Bowser compatibility" [03:52:15] * sumanah gets a mushroom, grows to 3x her normal size [03:52:25] * greg-g throws a leader shell [03:52:30] * sumanah investigates Bowser integration [03:52:44] omg wikimedia foundation is not on internet.org?!!? [03:52:46] * sumanah stops making Super Mario Bros. references [03:53:53] greg-g: The point was more that HTTPS would always work, while protocol-relative might not. So the question became why to recommend protocol-relative, particularly as we're migrating toward always having HTTPS. [03:54:17] well, unless the person has crappy https connectivity... [03:54:35] see: users in africa [03:54:42] or the phillipines [03:54:44] Erik says if we think about those people, we're appeasing the Chinese. ;-) [03:54:44] or thailand [03:54:50] So we're good. [03:54:52] heh [03:55:00] ok, /me goes for real now [03:55:30] night [03:55:32] me too. [04:27:03] haha " TLS: Since a few of you seem to care about https..." [08:13:51] Reedy: hi *ping* [08:14:58] Reedy: ? [08:19:38] @notify Reedy [08:19:38] This user is now online in #wikimedia-dev. I'll let you know when they show some activity (talk, etc.) [08:36:04] Is there a Wikipedia API endpoind that can tell me how many incoming (internal) links and how many (internal) outgoing links a certain article has? 
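Returning to the question at [03:09] to [03:13] about spotting gadgets that are primed to break: instead of a noisy Google site: search, each wiki can be asked directly through its own API for pages in the MediaWiki namespace with the Gadget- prefix, and their source checked for hardcoded "http://" URLs (the ones a protocol-relative "//" would fix). This is a rough sketch assuming the requests library; continuation handling is omitted for brevity, and site-wide pages such as MediaWiki:Common.js would need the same check.

    import requests

    def suspicious_gadgets(wiki="en.wikipedia.org"):
        """Yield (title, hit count) for Gadget-* pages in the MediaWiki
        namespace whose source hardcodes "http://" somewhere, i.e. the pages
        most likely to cause mixed-content breakage under HTTPS."""
        api = "https://%s/w/api.php" % wiki
        params = {
            "action": "query", "format": "json",
            "generator": "allpages",
            "gapnamespace": 8,        # the MediaWiki: namespace
            "gapprefix": "Gadget-",
            "gaplimit": "max",
            "prop": "revisions", "rvprop": "content",
        }
        data = requests.get(api, params=params, timeout=30).json()
        for page in data.get("query", {}).get("pages", {}).values():
            text = page.get("revisions", [{}])[0].get("*", "")
            hits = text.count("http://")
            if hits:
                yield page["title"], hits

    # Example: for title, hits in suspicious_gadgets("de.wikipedia.org"):
    #              print(hits, title)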
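On the link-count question just above: as the replies that follow explain, the API has no count-only field, so the options are to page through list=backlinks and prop=links and count client-side, or to work from the pagelinks dump for bulk data. A minimal client-side sketch, assuming the requests library and the API's current "continue"-style continuation:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def count_incoming(title):
        """Count internal links *to* a page via list=backlinks."""
        params = {"action": "query", "format": "json", "formatversion": 2,
                  "list": "backlinks", "bltitle": title,
                  "blnamespace": 0, "bllimit": "max"}
        total, cont = 0, {}
        while True:
            data = requests.get(API, params={**params, **cont}, timeout=30).json()
            total += len(data["query"]["backlinks"])
            if "continue" not in data:
                return total
            cont = data["continue"]

    def count_outgoing(title):
        """Count internal links *from* a page via prop=links."""
        params = {"action": "query", "format": "json", "formatversion": 2,
                  "prop": "links", "titles": title,
                  "plnamespace": 0, "pllimit": "max"}
        total, cont = 0, {}
        while True:
            data = requests.get(API, params={**params, **cont}, timeout=30).json()
            total += len(data["query"]["pages"][0].get("links", []))
            if "continue" not in data:
                return total
            cont = data["continue"]

With the per-request cap of 500 items (5000 with a bot flag, as noted further down), a heavily linked article still needs many requests, which is why the dump files suggested below are the better route for bulk work.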
[08:42:41] cff: https://en.wikibooks.org/wiki/Special:ApiSandbox#action=query&list=iwbacklinks&format=json&iwbllimit=10&titles=Main [08:42:46] er [08:42:59] cff: https://en.wikibooks.org/wiki/Special:ApiSandbox#action=query&list=iwbacklinks&format=json&iwbllimit=10&titles=Main%20Page even [08:50:42] gry: That doesn't give me the count [08:50:54] gry: it makes me iterate over all the links and count them [08:51:00] gry: I don't want that [08:51:49] I know about that API endpoint, that's what I'm using now, I'm looking for something that can give me those two counts for each article [08:52:08] instead of me having to iterate over them and count [08:53:34] I get connection reset by peer when I do it programatically... and I only do HTTP requests sequentially [08:54:01] I don't understand why. I have proper HTTP User-Agent, and I use the max lag parameter [08:57:17] On top of that, I don't need the other info about incoming and outgoing links... I only need to find out how many of those two types of links each article has [09:03:49] oh, oops. count array length client-side? [09:04:00] ok [09:07:53] gry: sure, but I need to download for each article a few KB just to do that [09:09:12] cff: no, there isn't one [09:09:35] cff: you have to get thelist and use its length [09:09:59] cff: or you could parse the data dumps if you want to avoid network traffic [09:10:37] cff: http://dumps.wikimedia.org/enwiki/20130805/ , look for "Wiki page-to-page link records." [09:11:05] (and you're probably going to need "Name/value pairs for pages." as well to parse that data) [09:16:06] MatmaRex: thanks for the suggestion, I could try that yeah, since working with Wikipedia API its a PITA given that it gives Connection reset by peer, even though it shouldn't since I'm respecting the guidelines [09:16:46] cff: hmm? oh what kind of queries? [09:19:01] I first run an HTTP request with {'generator':'allpages', 'prop':'coordinates', 'colimit':500, 'gaplimit':'max', 'coprop':'globe|country|region'}, 'allpages', 'gapcontinue') then from the response of that for each article that has coordinates, I make 3 more HTTP requests, each one happens one after another with 1 second sleep in between [09:19:46] 1 for getting the language links, 1 for links (outgoing links) and 1 for backlinks (outgoing links) [09:21:25] should I increase the sleep time between the requests? [09:21:30] hmm, that doesn't look like expensive operation [09:21:46] i'm not on the server ops team, but probably not [09:22:12] (so in other words, you want information about all links on all pages with coordinates?) [09:23:01] Yes, for all pages that have a main coordinate, give me all the language links, incoming links and outgoing links [09:24:18] I've tried to also combine coordinates|langlinks|links in a single request got the same reset by peer [09:24:21] i.e. 
{'generator':'allpages', 'prop':'coordinates|langlinks|links', 'colimit':500, 'lllimit':500, 'pllimit': 500, 'llurl':'true', 'gaplimit':'max', 'coprop':'globe|country|region', 'plnamespace':0}, 'allpages', 'gapcontinue') [09:24:42] for the backlinks I need to do another HTTP request to get them [09:24:50] cause you can't combine backlinks with the other props [09:25:42] i dunno, that should work :( [09:25:53] need to fiddle with it I guess [09:38:45] I basically get: [09:38:51] requests.exceptions.ConnectionError: HTTPConnectionPool(host='de.wikipedia.org', port=80): Max retries exceeded with url: /w/api.php?lllimit=500&action=query&prop=langlinks&llurl=true&titles=118000&maxlag=5&format=json (Caused by : [Errno 104] Connection reset by peer) [09:41:34] cff: is it following redirects? [09:41:39] it might be the https redirect stuff [09:42:16] although that should only be relevant if you're logged in... [09:45:02] valhallasw: yes, Requests Python library will automatically perform location redirection while using the GET HTTP verb [09:45:28] valhallasw: should I disable them? [09:45:33] valhallasw: I'm not authed [09:46:00] cff: hmm, then there should not be any redirection going on [09:46:00] what does [09:46:01] curl "http://de.wikipedia.org/w/api.php?lllimit=500&action=query&prop=langlinks&llurl=true&titles=118000&maxlag=5&format=json" [09:46:03] give you? [09:46:18] {"query":{"pages":{"2505095":{"pageid":2505095,"ns":0,"title":"118000"}}}} [09:47:00] that would imply it's the requests library doing something strange... [09:48:08] cff: do you have access to a packet sniffer? that might give some info. Otherwise, you could try running the code in pdb, and trying to debug from the stack trace. [09:48:12] (pdb = python debugger) [09:52:55] valhallasw: thx, I'll try to debug it and see what happens behind the scenes [09:55:32] cff: are you setting an explicit user agent? [09:55:33] might be that [09:55:35] (or not) [09:57:10] YuviPanda: yes, as I said above, I do [09:57:18] sorry, didn't read back enough [09:57:20] YuviPanda: that should give a 503, not a connection reset.. [09:57:21] it's a custom User-Agen [10:07:02] petan: bug in wp-bot, he display all duble [11:13:11] Steinsplitter: huh? [11:13:39] wm bot schow evry edit two times [11:14:17] Steinsplitter where [11:14:25] #wikimedia-commons-admin [11:22:03] I'm interested in getting read-only access to MediaWiki API where can I get it from and what are the requirements? or Can I use just use my username & login from Wikipedia as login? [11:22:14] as explained in https://www.mediawiki.org/wiki/API:Login [11:23:34] guillom: can items still be added to tech news 34? [11:24:01] maybe I should add https://bugzilla.wikimedia.org/show_bug.cgi?id=31816 to 35 instead, this one is already quite packed [11:24:21] cff: the api is entirely public. [11:24:58] cff: just visit /w/api.php on whatever wiki you want to query [11:25:02] e.g. 
https://en.wikipedia.org/w/api.php [11:25:15] and yes, it just uses the regular wiki accounts for logging in [11:43:05] cff, most of the API doesn't really require you to log in [11:48:14] Yes, I know but it gives me more results per request [11:48:21] If I'm logged in [11:48:48] which would mean a decrease in time required to get what I want [11:48:52] you can controlthe number of results using the XXlimit parameters [11:49:04] i suggest using "xxlimit=max" [11:49:06] yeah, but there is a max limit of 500 for non logged in users [11:49:16] for non-bots, actually [11:49:24] (and non-admins) [11:49:25] yes [11:49:45] the limit for bots/admins is 5000 [11:50:01] (the limits are also lower in some modules, often 50/500 instead) [12:05:23] Nemo_bis: yes, #34 isn't done. I'm hoping to finish it tonight. [12:17:00] guillom: ah, I was just about to create 35 [12:17:00] Nemo_bis: The publication rhythm has been disrupted by odder's wikibreak :/ I'm trying to get it running again. [12:18:47] yeah [12:18:47] guillom: why was the part on maintenance reports removed? [12:18:50] Can someone explain what the practical effect of this is? 18:33, 19 August 2013 (UTC) <-- I don't understand the question, isn't a functioning special page (half a dozen, actually) a practical effect? [12:18:52] Nemo_bis: It should still be there; I rewrote it a bit, but it was important, so it's still there, unless someone else removed it. [12:20:09] ah, no link to the bug [12:21:38] Ah, feel free to add it. If I removed it, I didn't do it on purpose. [12:24:19] guillom: also, any reason not to use int:? I'd like the bulletin to use the same translation as MediaWiki, so that users recognise it [12:32:42] What can I do in case of a Squid error when using the max lag parameter? https://www.mediawiki.org/wiki/Manual:Maxlag_parameter ? [12:33:20] Retry after a certain period of time? [12:33:53] I got {'error': {'code': 'maxlag', 'info': 'Waiting for 10.64.16.11: 6 seconds lagged'}, 'servedby': 'mw1134'} [12:35:46] cff: that's not a squid error; that's a lagged database. Wait for a while (>= maxlag), then try again [12:36:09] or set maxlag to something more aggressive (60 or so) if you're doing testing [12:36:30] ah, currently its set to 5 [12:37:03] which is saying 'if there is more than 5 seconds lag, please tell me to back off' [12:42:03] yeah, not what I want, I'm going to increase it as you said [12:43:10] I have timeouts at 30 seconds [12:43:27] *connection timeouts [12:48:50] !bzqs index.php &appendtext= [12:48:50] https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=index.php+%26appendtext%3d [12:50:44] Doesn't seem to be a ticket in... [12:50:55] * T13|needsCoffee goes to file one... [13:12:26] How does &wpIgnoreBlankSummary work? [13:12:37] On index.php? [13:13:04] I've tried setting it to =1 =true =yes and none seem to work. [13:28:17] T13: all of them should [13:28:23] T13: what are you expecting it to do? [13:28:32] it overrides the preference option that prevents saving without summary [13:29:15] Create a link --> //en.wikipedia.org/w/index.php?title=Wikipedia_talk:AutoWikiBrowser/CheckPage&action=edit&section=new&preload=Wikipedia_talk:AutoWikiBrowser/CheckPage/PermissionRequest&summary=Requesting%20%5B%5BWP%3AAWB%7CAWB%5D%5D%20permissions&wpIgnoreBlankSummary=1&wpMinoredit=1&wpWatchthis=1 [13:29:40] It should allow a blank section title? [13:30:41] Reminder: You have not provided a subject/headline for this comment. If you click "Save page" again, your edit will be saved without one.
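A note on the maxlag exchange above ([12:33] to [12:43]): the response is not a Squid problem but the API telling the client that database replication lag exceeds the requested maxlag, and the polite reaction is to sleep and retry (the response usually also carries a Retry-After header; the manual page linked above documents this). A small retry wrapper, as a sketch assuming the requests library and the JSON error shape quoted above:

    import time
    import requests

    API = "https://de.wikipedia.org/w/api.php"

    def query_with_maxlag(params, maxlag=5, attempts=10):
        """GET an API query with maxlag set, backing off and retrying
        whenever the server reports {'error': {'code': 'maxlag', ...}}."""
        params = dict(params, format="json", maxlag=maxlag)
        for _ in range(attempts):
            resp = requests.get(API, params=params, timeout=30)
            data = resp.json()
            if data.get("error", {}).get("code") != "maxlag":
                return data
            # Prefer the server's hint, fall back to the maxlag value itself.
            wait = int(resp.headers.get("Retry-After", maxlag))
            time.sleep(wait)
        raise RuntimeError("replication lag did not recover after %d tries" % attempts)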
[13:30:49] Skip ^ that [13:33:40] T13: it probably should [13:33:48] it doesn't? [13:34:07] as in, you can't add a section with blank heading if you have that pref on? [13:34:23] Correct. [13:34:59] I would be even happier if having that or a similar pref on would make section textbox go away. :p [13:35:29] well, then that's probably a bug. :P [13:39:51] !newbug [13:41:19] !newbug is You can start a new Bugzilla report at https://bugzilla.wikimedia.org/enter_bug.cgi [13:41:19] Key was added [13:42:22] MatmaRex or andre__ is index.php consider API under MediaWiki? [13:43:15] General/unknown? [13:43:34] Also andre__ - I was told you know stuff about the Bugzilla API. [13:43:44] T13: The primary entry points for MediaWiki are api.php and index.php. [13:44:01] So we can safely say that index.php is not considered part of API. [13:44:24] Most of the other components (outside of "API") apply to index.php. [13:44:38] Except JavaScript, which is usually load.php. [13:45:09] So, what is communicating directly to index.php via address bar/link considered? [13:45:33] I don't understand the question. [13:46:06] If you don't understand the Bugzilla components, just use "General/Unknown" and don't worry about it. [13:46:15] Someone else will fix it. [13:46:35] when in the move to HTTPS happening? [13:46:41] *is [13:46:55] malafaya: August 21, allegedly. [13:46:59] Though it'll likely be delayed. [13:47:06] today? lol [13:47:18] "Today" doesn't exist on the Internet. [13:49:21] in UTC, it's been more than half of 21st already :) [13:49:47] It's true. [13:49:51] i was looking for a more specific time within Aug 21 ;) [13:50:20] It's early in San Francisco. [13:50:33] 6:50? [13:51:07] Yes. [13:51:39] !b 53152 |MatmaRex [13:51:39] MatmaRex: https://bugzilla.wikimedia.org/53152 [13:52:07] seems like it would be easier to just give him the link [13:52:52] T13 uses IRC via flip-phone. [13:53:09] T13: General/Unknown is perfect for that bug. [13:53:30] I usually use General/Unknown because I'm lazy and it's a long drop-down menu. [13:53:50] T13|needsCoffee: is using IRC on AndChat on Samsung Axiom... [14:10:55] T13: I'm not sure we're going to be added more parameters to index.php anytime soon. [14:11:01] s/added/adding/ [14:11:51] Low priority enhancement... [14:12:30] Possibly wontfix. [14:12:49] oopss.. Updated priority on 53152 instead of 53153 by accident. [14:13:01] fixed. [14:13:02] Adding parameters to index.php in order to enhance on-wiki hacks is... sub-ideal. [14:16:27] There was a discussion about the deprecatedness of actions as opposed to special pages on wikitech-l a while ago [14:20:03] interesting, the machine actually serving glusterfs data for my instance varies every few days, it was labstore1 then 2 now 1 again [14:21:51] weird, nothing about glusterfs on wikitech, an anticipated damnatio memoriae? [14:40:03] T13: I know basics of the Bugzilla API, yes. Haven't played much with it though. [14:43:33] there is a bug that if you edit a section via mobile that it happens that the abusefilter thinks that all other sections of the page are removed? [14:44:13] while it actually wasn't emptied [14:46:15] Romaine: Yeah, it's logged [15:00:45] apergos, parent5446: hello [15:00:53] hello [15:01:28] Hey, start w/o me. I need to switch clients. 
Be back in a few seconds [15:01:34] see ya [15:02:28] so, today, i was making sure that diff dumps worked under all circumstaces; and i found out that normal dumps don't [15:02:41] ohhh [15:02:50] specifically, it doesn't work when a page is deleted and then undeleted [15:03:06] not too rare an occurrence unfortunately [15:03:28] because the undeleted page has new page id, but the same revision ids as the old page [15:03:37] yep [15:03:48] and when i delete the old page, i also delete all its revisions [15:04:09] which means that the revisions of the new page suddenly don't exist [15:04:11] right. the new page would just act like any other new page [15:04:18] oh, you delete the... heh [15:04:19] I see [15:04:54] reference counts or keep track of which action is last (yuck)? [15:05:21] so what i think i need to do is to track all revision ids that i encountered and if i try to delete one of those, don't actually delete it [15:05:41] for a full dump that's a lot of revisions [15:06:56] yeah; though i will use vector for that, which means 1 bit per revision; that means less than 100 MB of memory for enwiki [15:08:13] if a page is deleted, undeteled and then deleted again, will you be ok? [15:08:25] (yeah we have that, revert wars are sometimes nuts) [15:09:11] if that all happens between two dumps, then i will never see the undeleted page, so it's as if the page was just deleted [15:09:28] and if it happens during multiple dump runs, that should be okay too [15:09:35] great [15:09:41] yay for only seeing the public view! [15:12:34] hey we've figured out the big ve issues, https for china, and we're doing world peace in a minute parent5446 [15:12:37] ( :-P ) [15:12:56] Goddammit I always leave during the good parts [15:13:08] ya snooze ya lose [15:13:30] Did I miss the entire meeting? [15:13:49] so now i will fix this and then i think you will be finally able to try diff dumps [15:15:01] cool [15:15:20] well that depends on whether svick has anything else to bring up [15:15:53] there was a glitch with normal dumps where an undelete wuld follow a delete, that's what's being worked out now [15:16:25] OK [15:16:34] i can't think of anything else [15:16:55] me neither [15:17:07] I might go get food while it's still light outside, how novel! [15:17:31] Lol see you guys tomorrow [15:18:06] see you [15:18:31] Elsie: How do I get a valid token to perform that action via api.php? [15:18:51] http://en.wikipedia.org/w/api.php?title=Wikipedia_talk:AutoWikiBrowser/CheckPage&action=edit&section=2&appendtext=*+{{AWBUser|{{REVISIONUSER}}}}+~~~~~&summary=Requesting%20[[WP%3AAWB|AWB]]%20permissions&token=%2B\ returns an error... [15:20:14] T13: see http://www.mediawiki.org/wiki/API:Edit#Token [15:23:00] https//en.wikipedia.org/w/api.php?title=Wikipedia_talk:AutoWikiBrowser/CheckPage&action=edit&section=2&appendtext=%2A+%7B%7BAWBUser%7C%7B%7BREVISIONUSER%7D%7D%7D%7D+%7E%7E%7E%7E%7E&summary=Requesting%20%5B%5BWP%3AAWB%7CAWB%5D%5D%20permissions&token=6195efc3b45ada8cc529a95f5a0ef1f7%2B%5C [15:23:19] [15:23:34] hrmm. So... Assuming this can't be done from addressbar...
[15:23:41] in "a link" [15:24:13] T13: yeah, it can't [16:31:50] why does https://fr.wikipedia.org/wiki/Sp%C3%A9cial:Pages_li%C3%A9es/Cat%C3%A9gorie:Wikip%C3%A9dia:Outil_de_retour_des_lecteurs not work [16:43:36] @link [[fr:Sp%C3%A9cial:Pages_li%C3%A9es/Cat%C3%A9gorie:Wikip%C3%A9dia:Outil_de_retour_des_lecteurs]] [16:43:36] https://fr.wikipedia.org/wiki/Sp%25C3%25A9cial:Pages_li%25C3%25A9es/Cat%25C3%25A9gorie:Wikip%25C3%25A9dia:Outil_de_retour_des_lecteurs [16:49:00] ? [17:29:27] YuviPanda|away: ping [17:37:58] so we are delaying the HTTPS rollout - robla or greg-g mind if I change the date in https://meta.wikimedia.org/wiki/Https to (sometime in Aug 2013)? [17:44:19] sumanah: yes....we *just* agreed to a new time [17:45:13] sumanah: I'm composing mail to wikitech-l and wikitech-ambassadors. please do update the page [17:45:22] August 28, 1pm PDT [17:46:28] ok, thanks robla [17:46:28] what's the new time? [17:46:28] I'll update it right now [17:46:55] new time: August 28, 1pm PDT [17:51:14] thank you [17:51:54] hey dMaggot. I'm about to head off to try to fix a minor medical issue (ear pain!) [17:52:13] YuviPanda|away: that's ok, I think I figured my way out [17:52:55] dMaggot: okay! The new patchset has issues, but I guess you know that already :) [17:53:10] YuviPanda|away: yep, but can't figure it out [17:53:18] YuviPanda|away: do you see something that pops up? [17:53:35] dMaggot: take a shot at it, perhaps ask marktraceur. I'm going to head off the computer now before my ears actually pop off [17:53:36] sorry [17:53:49] YuviPanda|away: k, will take a look [17:54:01] dMaggot: 17:50:07 PHP Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING, expecting ')' in UploadWizard.config.php on line 148 [17:54:08] from https://integration.wikimedia.org/ci/job/mwext-UploadWizard-lint/625/console [17:55:36] YuviPanda|away: yep, figured that out, I'll fix it right away [17:56:07] dMaggot: :) I'm off now, though. [17:57:41] ok, https://meta.wikimedia.org/wiki/HTTPS#August_28.2C_2013_-_Secure_log-in.2C_with_secure_browsing_and_editing_for_logged_in_users is now updated [17:57:49] sumanah: email sent [17:57:56] yup, good email robla [18:30:18] robla: if you are writing that email to wikitech-ambassadors, then I will wait for that; otherwise I think I should fwd the wikitech-l email to the wikimedia-l list [18:30:59] argh...sorry, forwarded [18:31:12] I thought I had already addressed it to wikitech-ambassadors [18:31:52] robla: oh no prob, I understand. [18:34:18] ok robla forwarded, thank you http://lists.wikimedia.org/pipermail/wikimedia-l/2013-August/127673.html [18:40:48] robla: We've gotta find a way to increase time between testing --> deployment (in general). If there were a week between the two, there'd be more time to announce to everywhere (and skip the announce --> delay --> re-announce cycles). [18:42:42] <^d> We could go back to deploying every 6 months or so. [18:43:51] Elsie: I'm not sure this particular case is one that the cautionary tale applies to. A lot of the requirements really didn't come up until after we were obviously serious about enabling the feature [18:44:36] even the licenseis wrong [18:44:44] bah, wrong channel. [18:48:46] ^d: :-) [18:49:17] It just seems that certain deployments always feel rushed. Most don't. Overall deployment time is much, much better. [18:49:32] And review time is improving, I think. [18:50:47] robla: I think some of the requirements not coming up was part of the problem. 
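Going back to the incremental-dumps problem discussed around [15:02] to [15:09]: the proposed fix is to remember every revision id already written to the dump, so that deleting a page does not drop revisions that an undeleted copy of the page still references, and a plain bit vector keeps that affordable (1 bit per revision id, well under 100 MB for enwiki-scale id ranges, as noted above). The following is a toy Python sketch of that bookkeeping, not the actual dump code:

    class SeenRevisions:
        """A 1-bit-per-id record of revision ids already emitted to the dump."""

        def __init__(self, max_rev_id):
            self._bits = bytearray((max_rev_id >> 3) + 1)

        def mark(self, rev_id):
            self._bits[rev_id >> 3] |= 1 << (rev_id & 7)

        def seen(self, rev_id):
            return bool(self._bits[rev_id >> 3] & (1 << (rev_id & 7)))

    def revisions_safe_to_delete(rev_ids, seen):
        """When a page is deleted, keep any revision that was already
        recorded as belonging to some (possibly undeleted) page; only
        revisions nobody has seen are actually removed."""
        return [r for r in rev_ids if not seen.seen(r)]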
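For the edit-token question above ([15:18] to [15:24]): edits cannot be fired from a bare address-bar link because a valid CSRF token is tied to the session and has to be fetched first, then sent in a POST. A sketch of the round trip, assuming the requests library and today's meta=tokens module (the exact token-fetching module has changed across MediaWiki versions, so treat the parameter names as illustrative):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def append_to_section(session, title, section, text, summary):
        """Fetch a CSRF token for the current session, then POST the edit.
        The session should already be logged in for the edit to be
        attributed to the right account."""
        token = session.get(API, params={
            "action": "query", "meta": "tokens", "type": "csrf",
            "format": "json", "formatversion": 2,
        }, timeout=30).json()["query"]["tokens"]["csrftoken"]

        return session.post(API, data={
            "action": "edit", "title": title, "section": section,
            "appendtext": text, "summary": summary,
            "token": token, "format": "json",
        }, timeout=30).json()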
:-) [18:51:00] Though a longer testing period may not have caught them either, you're right. [18:51:45] we definitely should have gotten this out to test2 much sooner than we did [19:39:21] hi Ocaasi, good to see you :-) [20:45:06] Heh, okay [20:45:11] Where should I file the ticket? [20:45:23] Oops, wrong tab :P [20:49:07] File a ticket for what FastLizard4 ? [20:49:17] Wrong channel, like I said :P [21:07:32] how often is special:wantedcategories refreshed? on pl.wp? [21:35:27] gn8 folks [21:37:00] You know; that message about "technical maitenance" should be changed to reflect what is really happening [21:37:37] Lirodon: I agree [21:37:45] it's better than nothing though [21:39:50] like: "On August 28 at 8:00pm UTC, we will be making changes to make browsing Wikipedia more secure, find out more"? [21:53:01] Lirodon: greg-g will be working on that in about an hour, I think - sorry for the wait [21:53:18] * greg-g nods [21:53:35] greg-g, I was about to add it as a proposed campaign on meta's calendar thing [22:00:00] Lirodon: done [22:01:20] thank you greg-g [22:03:00] thank James_F :) [22:27:55] greg-g: Do you want me to make an actual banner about HTTPS? [22:28:19] greg-g: Note: "Make" does not necessarily mean "make and enable". :-) [22:28:30] James_F: sure, but let's not turn it on until tomorrow, I want to get tim's code in and available on a test wiki before we announce again :) [22:28:34] * greg-g nods [22:28:48] * James_F nods. [22:28:50] OK. [22:28:54] thankya [22:40:11] greg-g: Sorry for the delay; am hunting for previous examples for asking for translations. [22:43:46] * greg-g nods