[00:02:42] "WMF should audit the emails of the user and the children involved, and see whether its policies (weak as they are) have been violated." [00:03:10] I was under the impression the WMF cannot inspect the contents of emails sent through Special:EmailUser? [00:03:45] "I demand that the WMF give me ponies" [00:05:51] odder: link? [00:08:06] https://en.wikipedia.org/wiki/User_talk:Kiefer.Wolfowitz#Regarding_Salvio.27s_statement_on_the_page. and scroll down a bit [00:34:43] i don't believe we have the technical ability to look at emails odder [00:35:09] odder: also, if these things are happening, please encourage parents to contact the police/fbi/authorities in their jurisdiction [00:37:28] LeslieCarr: odder != Kiefer.Wolfowitz [00:38:01] I'm just your random Joe who found the comment on a Wikipedia talk page... [00:40:15] oh [00:40:20] i thought you were going to be responding [00:42:34] LeslieCarr: I don't think this needs responding, en.wp guys are quite capable of dealing with this nonsense [00:42:41] cool [00:42:42] woot [05:36:32] hm "decommission our current secondary data-center location in Tampa, Florida" https://wikimediafoundation.org/wiki/2013-2014_Annual_Plan_Questions_and_Answers#As_a_reader_of_the_Wikimedia_projects.2C_how_will_my_experience_change_as_a_result_of_this_plan.3F [05:38:03] why is that a 'hm'? [05:38:42] I'm confused by the word "decommission" [06:00:57] Nemo_bis: what is confusing about it? [06:02:57] TimStarling: does it mean closure? [06:03:04] yes [06:03:58] that's it, first time I hear about it [06:04:32] if we exclude a joke by TimStarling some months ago on how to save money [06:04:34] it's been a goal for a while [06:04:54] {{cn}} [06:04:56] tampa is not exactly the centre of the internet [06:04:58] tampa's transit situation is problematic [06:05:10] peering is not like in ashburn [06:05:16] aka what tim just said [06:05:17] :) [06:05:34] also, hurricanes have been a periodic concern [06:05:38] in terms of RTT and thus end user experience, there's no point in having it [06:05:51] I don't think hurricanes are really a serious concern [06:06:02] well they are at least something people think about [06:06:11] (i've seen discussions) [06:07:02] I talked to DC techs about it [06:07:33] https://wikitech.wikimedia.org/wiki/Hurricanes [06:08:27] there are weather events that can cause loss of connectivity, or even loss of our own capital assets [06:08:39] but tampa is not really vulnerable to those [06:08:51] you could have flash flooding in Ashburn, you know [06:09:22] and if natural disasters were really a major factor in location choice, do you think we'd have ulsfo? [06:09:40] i don't know so much about ashburn i think [06:09:47] earthquakes you mean? [06:09:52] yes [06:10:11] well first of all ulsfo is not primary [06:11:54] * jeremyb has now read the hurricanes log :) [06:11:56] * TimStarling waits for the second of all [06:13:40] you know that in the last couple of years, both NYC and Brisbane have been subject to DC outages due to inundation [06:14:00] and in both cases, the root cause was inundation of the basements, where they kept the generators [06:14:15] i have personally had significant first hand experience with 33 whitehall. but not since the hurricane [06:14:25] no, that wasn't the cause in NYC [06:14:32] the generators were on the roof [06:14:33] but the generators in tampa are well above the plausible flood level [06:14:56] it's the fuel pumps that failed. 
the pumps were in the basement [06:15:03] generators don't do so good with no fuel [06:16:31] in queensland we also had cable cuts due to flash floods [06:16:50] but you can have flash flooding pretty much anywhere you have rainfall [06:17:11] second is almost everywhere has some sort of problem. so you have copies in 2 or 3 different places and make sure that they're not too close to eachother and not vulnerable to the same sorts of problems. and ideally go for a place that at least has problems on the less frequent side (or more copies even... we have some degree of offsite stuff at your.org (sp?) and other places) [06:18:05] you know I used to think that a desert would be good place to put a DC [06:18:24] until huge areas of desert in australia were flooded, and nobody cared because nobody lived there [06:18:55] Didn't someone once suggest we have like a million datacenters for that reason? [06:19:00] and got ridiculed for it? [06:19:45] I wouldn't be surprised [06:20:19] cities at least have lots of people protecting them, wilderness not so much [06:21:18] Jasper_Deng: that's another can of worms... [06:22:21] anyway, natural disasters are not so scary for websites [06:22:29] it's http://meta.wikimedia.org/wiki/Foundation_wiki_feedback/Archive/2012#WikiMedia_Innovation [06:22:38] there are lots of really bad things that could happen to wikimedia that are more likely than natural disasters [06:22:38] anyway, istr more recent discussion about hurricanes than the page you linked. so at least time is definitely spent talking about it. whether it also means extra work for the techs I can't say [06:22:54] (foggy memory) [06:23:21] if it's such a minor concern then great [06:23:44] but at least e.g. you could have a tech with trouble getting to or from the DC [06:24:01] because of a storm [06:24:03] yes [06:24:22] the best policy is to set things up so that you can go for a week or two without anyone onsite [06:25:01] then your techs can evacuate rather than camp out in the server room eating MREs [06:25:58] or you can just proactively failover before the hurricane and leave another as the primary [06:26:15] anyway, transit sucks so we still shouldn't stay forever [06:28:05] and there's not much choice in datacentre providers and the quality is not as high as elsewhere [06:31:32] and maybe it's worth having a primary and secondary at different company's facilities? just in case there's some dispute or other problem (labor strike??!) [06:31:48] tampa and ashburn weren't always the same company but there was a merger so now they are [06:32:03] I thought the WMF owned its datacenters [06:32:08] no [06:32:10] in their entirety [06:32:14] no we sure don't [06:32:26] (well at least sdtpa. not sure about powermedium) [06:32:26] that would mean a much bigger budget than we have [06:34:16] i don't have a good feel for the differences between sdtpa and pmtpa but i think neither is self-sufficient. we need both up in order to have an operational tampa [06:34:23] * jeremyb welcomes corrections [06:34:38] also, about to reattempt sleep [06:34:40] both being up is still not enough for us to use as a fallback [06:34:43] good luck and good night [06:34:58] don't follow [06:35:50] our tampa facilities are not sufficient for us to fall back to in case of something wrong in ashburn [06:36:53] i wonder when https://en.wikipedia.org/wiki/Wikipedia:FAQ/Technical#How_about_the_connection.3F was last updated [06:37:28] maybe 5 years [06:37:48] apergos: but that's a new thing i assume? not an inherent flaw in tampa? 
i.e. tampa could be made to be sufficient? [06:38:16] (but maybe would require more capital) [06:38:29] TimStarling: oh, wow, it has yaseo!! [06:38:48] tampa cannot be made to be sufficient, really [06:38:51] it's not a new thing [06:39:08] it feels like not long ago that tampa was primary [06:39:19] last updated june 2013... and yet so out of date [06:39:26] er july :-D [06:39:58] well it feels like not long ago that ashburn wasn't even decided on [06:40:14] > As of late August 2006 [06:40:18] for DB size [06:40:30] :-D [06:40:31] nice [06:41:06] > History of Wikipedia Hardware [06:41:10] has a section "Phase IIIc: Feb 2004 to Present " [06:41:24] I think the ops team really really wants to be out of tampa, so they changed the minimum d/c requirements slightly so as to make Tampa not qualify [06:41:55] and obviously if you want two DCs which are duplicates of each other, when you buy hardware for one, you also have to buy it in the other [06:42:02] which wasn't done [06:42:26] but that was partly so that it could be sent straight to the new place instead of installed and then moved. right? [06:42:36] even before we switched the main site to eqiad, there were lots of misc servers there that weren't duplicated in tampa [06:43:21] jeremyb: have you found a citation? :) [06:43:26] Nemo_bis: for? [06:43:30] (though sleeping would be better) [06:43:36] for Tampa decommissioning [06:43:44] Nemo_bis: yuvi's cue? :) [06:44:00] heh [06:44:07] uhhhh, i could maybe find one. but i don't feel too motivated to do so. [06:44:13] he lives in a pseudorandomly generated TZ [06:44:23] I guessed so [06:44:28] I doubt one exists [06:44:36] i'm sure it does [06:45:35] if you ever find one, please add to https://meta.wikimedia.org/wiki/Wikimedia_servers or talk, even just a link I'll refactor :) [06:46:45] well nacht [06:47:04] cite this channel [06:47:20] what is this, wikipedia? [06:48:07] don't tell me we have citation authority criteria for meta now [06:48:07] Nemo_bis: last bit i can find about Tampa: https://blog.wikimedia.org/2013/01/19/wikimedia-sites-move-to-primary-data-center-in-ashburn-virginia/ [06:48:23] Also, I love what Tim just said. [06:50:08] Nemo_bis: http://wikimedia.7.x6.nabble.com/Fwd-Engineering-Product-Goals-for-2013-14-td5006400.html "we're planning to ramp down the Tampa data-center" [06:50:10] from erik [06:55:05] jorgeluis: yes, I remember about it; unless I and http://www.thefreedictionary.com/ramp+down are mistake, that doesn't mean close [06:55:12] *mistaken [06:56:11] ramp down is vague but can imply closure [06:56:33] yes, can; doesn't say so though [06:58:54] Nemo_bis: You seem very anti-closure ;-) [06:59:08] or are you just pro-documenting everything? [07:00:47] pro-documenting :) [07:01:20] or rather anti- me not knowing about such plans [07:01:47] heh [07:02:27] not because of myself but because it means something in decision-making or communication chain is broken [07:04:34] Nemo_bis is a part of the checks and balances of the WMF! [07:04:37] :) [07:04:48] Goodnight. [07:11:00] not quite what I meant :P [11:44:40] what happened earlier? my bot got a readonly: The wiki is currently in read-only mode [12:49:09] saper: what makes you think https://commons.wikimedia.org/wiki/File:AAR214-KSFO-Crash.ogg is PD-ineligible? [12:59:27] I wonder if it would fall under the PD exception for public speeches in Italy [13:02:40] odder: tech question, is there a way to find out from all my uploads which are used on atleast one wikipedia? 
[13:03:37] matanya: there are ts tools i believe [13:03:53] any example p858snake|l_ ? [13:04:17] not off the top of my head, they are no doubt documented somewhere on commons [13:05:18] matanya: one hack I can think of is to categorize your uploads and then use magnus's tool [13:06:18] thanks odder hacks are easy for me, i'm asking for a non-techie user. he can point and click at best. [13:09:20] Xe could go through my contribs and view each file and see the where used list if point and click is best they can do. . [13:09:47] T13|needsCoffee: oh please. they*. [13:10:00] we already have a perfectly good gender-neutral pronoun. :P [13:10:21] ... [13:11:22] * T13|needsCoffee is stressed about baby mama going under knife to have gall bladder cut out in 4 hrs and still has no coffee... [13:24:08] matanya: http://tools.wmflabs.org/glamtools/glamorous.php [13:24:52] uh, it seems the Spanish love my uploads [13:25:08] ooo [13:25:14] :0 [13:25:22] ah, I uploaded es.wikt logo :P [13:25:46] I didn't know GLAMourous can check a user's uploads [13:27:30] odder: it's been ages, you're #OLD I guess [13:28:19] Zh.wikipedia likes one of mine... [13:28:28] Chinese? [13:28:41] what an evil tool, my most popular images are so sad [13:28:52] Pre-WinXP logo.. [13:29:00] * odder created 120 logos and they're not that much used :-( [13:30:58] odder: probably because you didn't hijack a 2005 upload like me [13:31:15] lol... [13:36:19] weird fr.wiki translating food articles from English [13:47:32] Nemo_bis: [13:47:43] Total image usages 1521522 [13:47:48] This seems unlikely? [13:50:06] !help [13:50:06] !(stalk|ignore|unstalk|unignore|list|join|part|quit) [13:50:06] There are a lot of topics you could be asking about. Besides, this bot is mostly for experienced users to quickly answer common questions. Please just ask your question and wait patiently, as the best person to answer your question may be away for a few minutes or longer. If you're looking for help pages, we moved that to !helpfor. [13:51:02] There's a command to ask the bot which file defines a given class, and it's something really simple and obvious and I can't remember it. [13:51:14] !helpfor [13:51:14] http://www.mediawiki.org/wiki/Help:$1 [13:52:18] "The Visual Editor is on schedule to roll out by July. " [13:52:34] so basically it was rushed because it was in the plan? [13:55:46] odder: no it seems correct [13:56:41] Nemo_bis: does this mean my images are used over 1,5 million times? [13:56:50] odder: yes, lots of boring flags [14:09:39] TheWoozle: usually class name is the same as file name [14:09:55] TheWoozle: and when it's not, check out Autoloader.php for a huge map [14:11:07] https://www.google.com/cse?cx=010768530259486146519%3Atwowe4zclqy might also magically find the answer [14:11:15] *plug plug* [14:20:07] Thanks, MatmaRex. I found it in doc.wikimedia.org [14:22:48] Nemo_bis: what dark trickery is this?!?! (Looks rather useful... I may have to create one for my wikis.) 
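(For reference: the data GLAMorous aggregates can also be pulled straight from the Commons API, by combining a list of a user's uploads with the GlobalUsage extension's usage information. A minimal sketch in C++ with libcurl follows; the username is just a placeholder taken from the asker's IRC nick, and the exact parameter names and limits — generator=allimages, gaiuser, prop=globalusage, gulimit — are assumptions to verify against the live API sandbox, not a confirmed recipe.)

```cpp
// Sketch only: ask the Commons API which of a user's uploads are used on other
// wikis (the same data GLAMorous aggregates). Parameter names are assumptions
// from memory of the core API and the GlobalUsage extension; check them against
// the API sandbox before relying on this. Build with: g++ usage.cpp -lcurl
#include <curl/curl.h>
#include <cstdio>
#include <string>

static size_t collect(char *ptr, size_t size, size_t nmemb, void *userdata) {
    // Append each chunk of the HTTP response body to a std::string.
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    // Hypothetical query: list the uploads of user "Matanya" and attach global
    // usage info to each file. Paging (continue=) is left out of the sketch.
    const std::string url =
        "https://commons.wikimedia.org/w/api.php?action=query&format=json"
        "&generator=allimages&gaisort=timestamp&gaiuser=Matanya&gailimit=50"
        "&prop=globalusage&gulimit=500";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "usage-sketch/0.1");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();

    if (rc != CURLE_OK)
        return 1;
    // Raw JSON is printed as-is; a file is "used on at least one wiki" if its
    // "globalusage" array in the response is non-empty.
    std::printf("%s\n", body.c_str());
    return 0;
}
```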
[14:23:49] TheWoozle: just indexing some selected sources https://www.mediawiki.org/wiki/Wikimedia_technical_search [14:24:12] nothing you couldn't do yourself with some site: queries or a searchbar addon on firefox ;) [14:25:08] on second thought, I'll use your line as testimonial [14:25:25] you can't stop me because this channel is publicly logged, mwahahaha [14:28:38] * TheWoozle curses in regex [14:41:20] By [14:41:23] end of June 2014, opt-in experimental real-time collaboration and chat will be deployed to [14:41:26] production, leading to full build-out in the default mode in 2014-15. [14:41:28] chat? [14:43:00] what more do you need than IRC? [14:43:46] odder: where... was that? [14:43:59] (I think they were referring to EtherEditor extension, which went nowhere) [14:44:46] YuviPanda: Annual Plan [14:44:51] oh [14:44:54] I think they were referring to a built-in chat. [14:45:17] we should have that someday, I hope [14:46:25] YuviPanda: More likely VE [14:46:34] marktraceur: realtime Chat? [14:46:38] Since it's way closer to workable RTCE than EtherEditor [14:49:02] marktraceur: true, true. [14:49:55] YuviPanda: Also I totally know I owe you CR....I think I'm going to stay home today and do that in the near future. [14:50:04] marktraceur: <3 [14:50:08] I spent last night wondering why I was awake [14:50:22] {{siiiick}} [14:50:42] bbiab [16:09:15] * apergos eyes the clock [16:09:28] and also the caps in that hostname, yeow [16:11:48] parent5446: you visiting NYU today? [16:12:06] sumanah: yep, Stevens and NYU have a cross-registration program, so I'm taking a class here. [16:12:45] parent5446: ah! one of the days you're in Manhattan you should ping me and (if I am in NYC and can go to Manhattan) we could have coffee together [16:13:04] Yeah that'd be cool. I have class Monday through Friday until noon. [16:15:31] apergos, parent5446: hello [16:15:43] parent5446: for the whole summer? [16:15:56] Yes, until late August. [16:16:02] hello [16:16:45] apergos, svick: hey [16:17:42] i did the benchmarks today, and i learned some interesting things: C++ streams are quite slow and seeking, even to the current position, which should be very fast, is also quite slow [16:17:52] ;) [16:17:59] mmm [16:18:02] that's not too good [16:18:13] I'm used only to seek in C so... [16:18:53] and when i'm saying "slow", i mean it used too much CPU [16:19:38] so i tried switching to fopen/fwrite and eliminated all seeking, which was fast enough [16:21:02] so just appending to the end of the file or so? [16:21:03] Yeah, I'm guessing fstream just isn't optimized for that use case. The usual C functions should be fine. [16:22:05] with that, i tried adding 2.5 M "revisions" (actually header + random data) to a file with 25M revisions (taking 1.5 GB and 15 GB, resp.) 
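(A minimal sketch of the append path just described — plain C stdio called from C++, with the file opened in append mode so nothing ever seeks. The 16-byte record header and the payload contents are invented for illustration only and are not the project's actual incremental-dump format.)

```cpp
// Sketch only: append "revision" records to an existing dump file with
// fopen/fwrite, avoiding seeks entirely, as discussed above.
#include <cstdio>
#include <cstdint>
#include <vector>

struct RevisionHeader {
    uint32_t revId;
    uint32_t pageId;
    uint64_t payloadLength;
};

bool appendRevision(std::FILE *f, const RevisionHeader &hdr,
                    const std::vector<uint8_t> &payload) {
    // Appending never seeks: "ab" mode places every write at end-of-file, so
    // the cost is proportional to the new data, not to the ~15 GB already
    // sitting in the file.
    if (std::fwrite(&hdr, sizeof hdr, 1, f) != 1)
        return false;
    return std::fwrite(payload.data(), 1, payload.size(), f) == payload.size();
}

int main() {
    std::FILE *f = std::fopen("incremental.dump", "ab");  // append, binary
    if (!f)
        return 1;
    // ~600 bytes of fake payload, roughly the per-revision size assumed above.
    std::vector<uint8_t> fakePayload(600, 'x');
    RevisionHeader hdr{42, 7, static_cast<uint64_t>(fakePayload.size())};
    bool ok = appendRevision(f, hdr, fakePayload);
    std::fclose(f);
    return ok ? 0 : 1;
}
```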
[16:23:06] and with Randall's proposal (copying into a new file), it took ~6 minutes, while just appending to the end of the file (which is mostly what my original proposal would do) took ~12s [16:23:51] (this assumes retrieving revisions from DB and compression takes 0 time, which is far from realistic, but that's not what i wanted to measure here) [16:23:54] save your code and numbers and throw em up someplace btw [16:24:04] even if it's in some little "tests" subdir someplace [16:24:38] yeah, i have the code in a branch on my computer now [16:24:46] whatsuuup [16:24:58] uh huh, let's run the numbers, [16:25:55] let's say 150k new revisions for en wiki a day [16:26:23] we're at 550 million + revs total (ok some of those are deleted but whatever) [16:26:59] so for a month we are looking at 3 million or so new revs [16:27:14] but the difference is that the old file will have that 550 million in it already [16:27:29] (I'm getting these numbers from the adds/changes dumps of course) [16:28:01] i assumed 600 bytes of compressed data per revision; it looks like it's much less in reality [16:28:23] ah [16:29:23] I can give some real world figures for that too but only after the command runs :-) [16:31:26] i think we don't need exact numbers here; i think it means that Randall's proposal would slow it down by about an hour, which i think is acceptable [16:31:51] an hour for the grand total is peanuts, no question [16:32:41] but i think what you said yesterday makes sense: that it would force us to use a certain kind of compression (delta), which is not great [16:33:18] i mean, the forcing is not great, it would be nice to have options [16:34:14] I'm assuming we'd be using an LZMA-based delta compression if we did that. [16:34:27] do you have other thoughts about what compression algorithm would be best? [16:35:08] That was directed at svick I'm assuming? [16:35:17] yeah, i think something like LZMA makes sense [16:35:17] yep [16:36:17] we'd be compressing chunks of the data so recovery (restart after an interruption) would be possible if we knew what had been processed so far [16:36:48] as opposed to eg 7z as we use it now where you really can't recover, you're going to have to write the entire file from scratch [16:37:59] 14 908 853 742 bytes (uncompressed) for ... I'll tell you how many revisions in a couple minutes [16:38:36] the point is that block formatted algs like bz2 won't offer any special advantage here which is good [16:38:40] cause it is a hog [16:39:59] if we want delta compression, i think we won't be able to use the code from some library unmodified, but i think the modifications should be doable [16:40:56] ok (but this is a good thing to ask around about, someone may have a library in mind) [16:41:13] as long as it has low enough level routines [16:41:58] Maybe look at the 7zip SDK and start from there. Open source, C++, ... [16:42:27] mm might be worth checking to see which algorithms do well on the sort of revision text we have [16:42:55] meaning, looking at average revision length, and how much duplication is present in text content, neighboring revisions, blah blah [16:43:31] parent5446: yeah, i plan to do something like that, but i won't actually get to this part for some time (i have it planned after the mid term) [16:44:00] Yeah I figured. That's going to be one of the more difficult parts.
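(To make the chunked-compression idea above concrete — compressing batches of revisions independently so that a run can resume at a chunk boundary instead of rewriting the whole file — here is a minimal sketch using liblzma's one-shot buffer encoder. This is a generic illustration under assumed names, not the project's chosen library or on-disk format, and it attempts no delta encoding against previous revisions.)

```cpp
// Sketch only: compress one batch ("chunk") of revision text as a
// self-contained .xz block with liblzma. Each chunk can be written and later
// decompressed independently, which is what makes restart-after-interruption
// possible, unlike one monolithic 7z stream. Build with: g++ chunk.cpp -llzma
#include <lzma.h>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Compress `chunk` into the xz container format; returns an empty vector on error.
static std::vector<uint8_t> compressChunk(const std::string &chunk) {
    std::vector<uint8_t> out(lzma_stream_buffer_bound(chunk.size()));
    size_t outPos = 0;
    lzma_ret ret = lzma_easy_buffer_encode(
        LZMA_PRESET_DEFAULT, LZMA_CHECK_CRC64, nullptr,
        reinterpret_cast<const uint8_t *>(chunk.data()), chunk.size(),
        out.data(), &outPos, out.size());
    if (ret != LZMA_OK)
        return {};
    out.resize(outPos);
    return out;
}

int main() {
    // Pretend these are consecutive revisions of one page; adjacent revisions
    // share most of their text, which is why grouping them into one
    // compression chunk pays off even without explicit delta encoding.
    std::string chunk;
    for (int i = 0; i < 100; ++i)
        chunk += "== Example article text, revision " + std::to_string(i) + " ==\n";

    std::vector<uint8_t> compressed = compressChunk(chunk);
    std::printf("%zu bytes -> %zu bytes\n", chunk.size(), compressed.size());
    return compressed.empty() ? 1 : 0;
}
```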
[16:44:48] it will [16:45:45] yeah, i would like to try different algorithms and things like that, depending on time (or after summer), if there's no time, i think just compressing bunch of revisions together will suffice [16:46:28] yep [16:47:03] well it's also ok to ask other folks on the list(s) if they are interested in crunching some numbers or if they know of some results already [16:47:32] maybe someone will emerge from their summer vacation stupor with some useful info :-D [16:47:45] 1 045 075 revisions [16:47:56] this was file enwiki-20130604-pages-meta-history22.xml-p018069218p018225000.bz2 fyi [16:49:00] compressed size; 688 245 853 [16:49:06] (bz2) [16:49:24] you think someone already tested various compression algorithms on revisions and didn't tell anyone about it? [16:49:32] 162 237 254 (7z) [16:49:47] Damn that's small [16:50:01] I think people benchmark compression algorithms and then the posts wind up some other plac [16:50:02] e [16:50:18] sometimes information leaks back to us and most of the time probably not [16:50:28] one SMS worth of data per revision? yeah, that's really small [16:50:53] not necessarily on revision text but maybe enough to get a general idea of suitability for us, to knock out some of the outliers [16:51:06] yes, 7z over a large pile of data is very efficient [16:51:11] not very fast but very efficient [16:51:43] I wonder if we provide only something around the size of the bz2s if we will have complaints [16:51:53] I suppose a lot of people use the 7zs because they are so much more compact [16:54:33] AFAIK, LZMA is so good on big files because it has big dictionary; but if we use it to delta compress a single revision based on the previous revision (or two), the dictionary won't be able to get that big [16:55:47] nope [16:55:58] cursed size/speed tradeoffs! [16:56:40] Take a look at this article [16:56:41] http://cis.poly.edu/suel/papers/delta.pdf [16:57:01] Describes some of the properties of delta encoding, so it might help [16:57:18] thanks, i will read it [16:59:13] Anyway, it's about time I catch my train. Is there anything else? [16:59:41] apergos, you said that others might have already done some tests, and you were right; there is even a prize for best compression of "compressed size of first 10^8 bytes of enwiki-20060303-pages-articles.xml" [16:59:47] :-D [17:00:26] * apergos looks forward to hearing more about this later... [17:00:37] though that's not history dump [17:00:55] based on your benchmarks form yesterday, how do you plan to proceed now? [17:01:17] i think i will stayy [17:01:30] i think i will stay with the format i proposed [17:01:40] because it doesn't force delta compression [17:02:19] all right [17:03:24] other comments/questions/etc? [17:03:31] stuff you need from us? [17:04:08] nope [17:04:21] ok, I'm good too [17:04:35] parent5446? [17:04:41] Nope, all good here. [17:04:45] catch that train! [17:04:53] ;) Thanks, see you both tomorrow. [17:04:57] see yas [17:04:57] now, i will work on saving page and revision data to the dump [17:05:08] cool [17:05:16] see you [17:05:23] see ya, happy coding as always [17:05:35] thanks [17:33:58] is the RSS feed for the watchlist broken or is it just me? [17:35:12] when I click on the link I get "Server Not Responding [17:35:12] We can't get to the feed you requested right now. It may be too slow or unreachable." [17:37:06] Hello! 
Today some problems arose on recent changes on hr.wikipedia, Firefox on Linux & Windows (different PC's, different users): [17:37:09] http://bits.wikimedia.org/hr.wikipedia.org/load.php?debug=false&lang=hr&modules=jquery%2Cmediawiki%2CSpinner%7Cjquery.triggerQueueCallback%2CloadingSpinner%2CmwEmbedUtil%7Cmw.MwEmbedSupport&only=scripts&skin=monobook&version=20130620T163512Z:106 [17:37:17] (monobook) [17:37:32] http://bits.wikimedia.org/hr.wikipedia.org/load.php?debug=false&lang=hr&modules=jquery%2Cmediawiki%2CSpinner%7Cjquery.triggerQueueCallback%2CloadingSpinner%2CmwEmbedUtil%7Cmw.MwEmbedSupport&only=scripts&skin=vector&version=20130620T163512Z:106 [17:37:36] (vector) [17:37:50] Spinner is buggy? [17:38:01] or Callback? [17:38:59] lncabh: works for me! [17:48:31] stemdA: could be related to https://bugzilla.wikimedia.org/show_bug.cgi?id=49935 [17:50:36] the-wub: defenitely seems similar [17:51:11] yep [18:09:53] domas: :-S [18:10:10] maybe it just does not work from mobile devices [18:13:07] lncabh: how much RAM do you have? [18:13:18] assuming such a question makes any sense for a mobile device, which I have no idea [22:48:51] greg-g: https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#New_Single_User_Login_system.2C_login_success_page_going_away [22:49:02] also csteipp [22:51:04] StevenW: thanks much! [22:51:08] np [22:51:22] I'm fucking excited man. Goodbye stupid icons, hello sane UX. [22:51:52] YAY [23:03:29] death to the scary icons! long live the new SUL! [23:05:50] meh [23:05:59] death to unrelated little cogs, long live sane menus [23:06:03] sane UX would be single-sign-on from various providers [23:06:05] just sayin [23:06:06] :) [23:08:25] "log in with your facebook account"? [23:08:30] that too! [23:08:39] I'm sure our community would really love that [23:08:53] how would it affect the community? [23:09:04] how about: "select the PRISM collaborator you would like to use for authentication:" [23:09:06] * Google [23:09:08] * Facebook [23:09:11] * Microsoft [23:09:14] * PalTalk [23:11:05] how does that apply? [23:11:06] domas: why doesn't facebook provide a log in with google link, then? [23:11:19] actually, yahoo provides log in with facebook! [23:11:30] yeah, because yahoo is as good as dead [23:11:42] because both are distinct authentication providers, is wikipedia authentication provider? [23:11:52] StevenW: ... [23:11:53] StevenW: https://meta.wikimedia.org/w/index.php?title=Tech/News/2013/29&diff=5640874&oldid=5640759 ain't no simple English [23:12:10] domas: it'll be providing openid and oauth soon, so yes [23:12:42] I don't think we'll be providing authentication for sites completely outside our control just yet [23:12:45] domas: and how does it matter that they are distinct providers? the idea is that people want single sign on [23:12:57] TimStarling: we won't, I'm just answering domas's question [23:13:16] csteipp doesn't want to have people hacking our accounts just to get facebook access [23:13:19] TimStarling: lastpass? :) [23:13:30] TimStarling: how would they get facebook access? 
[23:13:35] oh right, sorry [23:13:51] I forgot facebook doesn't have "sane UX" [23:13:54] facebook does have an openid option hidden somewhere deep in their preferences [23:14:07] it no longer works, as far as I can tell [23:15:01] TimStarling: *shrug*, I couldn't use wikipedia as any sort of identity provider because my account merge was not finalized [23:15:14] there was some german user with same username that blocked it [23:15:15] domas: it will be soon enough :) [23:16:57] well, I guess allowing community to edit javascript is safer [23:17:16] hehe [23:17:33] TimStarling: *shrug*, it _is_ used as identity provider around [23:17:36] as few others [23:17:45] TimStarling: I understand that wikimedia is top5 website [23:17:50] and you have to take things seriously [23:19:41] well, we are in the middle of a project to implement OAuth, including lots of core changes, that is taking things seriously isn't it? [23:19:56] odder: please edit boldly. [23:20:02] and we are changing the way CentralAuth works substantially [23:20:11] yup [23:20:30] middle of a project is always serious [23:20:34] then community can appretiate that [23:21:17] StevenW: I will, just letting you know [23:21:31] TimStarling: I see login form uses http and not https [23:21:32] StevenW: it really helps not to have to look things up, I appreciate your involvement [23:21:43] I guess I can click on 'use secure connection' [23:21:45] that is UX [23:22:41] Hi all. I want to understand why date fields in Russian shows in "F j Y" format. [23:22:42] domas if https://gerrit.wikimedia.org/r/#/c/68937/ gets merged and deployed we won't have to make users click through to use secure connection [23:22:52] It's a translation issue? [23:23:11] StevenW: "if" [23:23:13] "in future" [23:23:17] putnik: it's a per-language settings, yes, but it's not modifiable at translatewiki etc [23:23:19] login.wikimedia.org is https only [23:23:21] setting* [23:23:43] brb from a bus [23:24:12] MatmaRex, ok, how to change it? Because it realy awful. [23:24:28] putnik: also, what's "F j Y"? i don't have the table in my head :) [23:24:38] June 13 2001 [23:24:40] let me pastebin the relevent code somewhere [23:25:33] putnik: http://pastebin.com/ychVH0Xi these are the formats defined for Russian [23:25:43] they're probably settable in Preferences [23:26:29] putnik: https://ru.wikipedia.org/wiki/Служебная:Настройки#mw-prefsection-datetime for example [23:28:17] MatmaRex, omg! Sorry, wrong channel =) [23:28:34] putnik: generally the way to change those is by filing a bug :) [23:28:35] But maybe you can help too. I spoke about Wikidata. [23:28:37] oh. hm. [23:28:58] yeah, the language settings apply to all projects, i think [23:29:26] hm [23:29:52] or maybe they don't. apparently WD uses the English formats, but with translated text [23:30:03] MatmaRex, yes, it's work for me, but I think that https://www.wikidata.org/wiki/Q191782?uselang=ru should shows data in default Russian format, not in default project format. [23:31:38] putnik: all i can recommend for now is filing a bug :) i don't know why it does that [23:34:41] ohi;) [23:35:53] will wikipedia have two-factor auth for users? [23:36:04] asking for password reset on IRC does not count [23:37:44] no [23:37:58] how would that help with password resets? [23:38:03] except for wikitech.wikimedia.org [23:38:08] thanks to Ryan_Lane [23:38:15] Ryan_Lane: I just point out second factor [23:38:24] that could be used [23:38:47] domas: ah, you mean second factor just for password resets? 
isn't that what the email address is for? [23:39:07] logins, I was just pointing out that IRC is a valuable channel for wikipedia access resolutions [23:39:10] :) [23:39:24] as far as I know, we don't reset passwords for people [23:39:32] I guess it changed! [23:39:55] we have no way of verifying it's the actual user [23:40:35] until it is some Community Member [23:40:54] I know [23:40:55] :) [23:41:51] domas has probably read and/or experienced https://wikitech.wikimedia.org/wiki/Password_reset [23:42:05] we don't have a massive personal information network to use to spy on.. err I meant to verify our users [23:42:37] * Ryan_Lane nods [23:42:39] ;-) [23:43:40] well, for someone who don't handle personal information a lot, you have certainly lots of opinion about government compliance and security and what not :) [23:43:48] heeeheee [23:45:06] I guess there is external authentication right now [23:45:12] except that instead of OAuth it uses SMTP [23:45:55] domas: well, I did work for the government with a clearance before this. [23:46:27] and? [23:46:32] and facebook collects absurd amounts of information about users and does a piss poor job of limiting access to it [23:47:30] wikipedia collects absurd amounts of information about users and puts logs in publicly accessible directories [23:47:36] I didn't actually mention anything about government compliance or security. I mentioned something about collecting information on users. [23:48:19] and, uhm, sure facebook knows that I like bieber and google knows that I was searching for WoW encounter information [23:48:34] domas: and it knows all of your friends, and everywhere you check-i [23:48:36] in [23:49:11] square knows where I buy coffee [23:49:18] and people are tagged in pictures, which have geolocation info [23:49:54] yes, and google knows even more than facebook [23:50:03] I'm not saying what they are doing is good either [23:50:15] yeah, but I have all that data for my own good [23:50:38] and so does facebook, and all its advertisers and all the app owners [23:50:49] and thanks to prism, so does the government [23:51:10] now you're speculating [23:51:17] advertisers don't have access to my data [23:51:24] app owners don't have access to my data [23:51:31] not _all_ of it [23:51:34] and if government wants to have my data it has to go via a judge [23:51:35] but quite a bit of it [23:51:50] and as I am currently living in US, FISA cannot target me anyway [23:51:57] and according to prism it doesn't require a judge [23:51:57] \o/ [23:52:09] prism leaks* [23:52:21] speculations about leaks, you mean? [23:52:30] and similarly, the leaks infer that it can easily target citizens too [23:52:34] http://news.netcraft.com/archives/2013/06/25/ssl-intercepted-today-decrypted-tomorrow.html [23:52:41] I don't know anything about those [23:53:10] I just know that companies are acting as identity providers [23:53:45] and none of them allow login from others, either [23:53:54] so, I'm not sure what your original point was [23:54:37] my original point was that whenever you discuss UX, you talk about shape of a button instead of shape of a workflow [23:54:48] who's you? [23:54:58] "community" [23:55:02] :) [23:55:18] except that the ux improvements are being done by the foundation for the most part [23:55:23] Not always. When I said "sane UX" earlier I was referring to the new login workflow. 
[23:55:41] I don't want something - "community does not want" [23:55:55] thats an easy way to approach everything [23:56:26] if you discuss why people are not creating accounts or using them [23:56:31] maybe the answer is right there [23:56:45] I personally think it's a bad UX decision to allow login from other sites via the methods available right now. I == myself, not the community [23:56:54] how many active monthly accounts are there on wikimedia sites? [23:57:04] Ryan_Lane: because? [23:57:36] is that worse for the user who wants that? [23:57:41] is it worse for the user who does not want that? [23:57:49] ooooh, 101 is a packing lot \o/ [23:58:01] it's a complication that isn't likely to increase participation [23:58:26] our biggest contribution issue right now isn't how hard it is to create an account or to login [23:59:17] would real/verified identities change participation in any way? [23:59:42] and how are they real or verified? because they are linked against a facebook account? [23:59:53] would social sharing of activities change participation?