[00:39:21] any db guys around? [00:41:46] jackmcbarn: hey [00:42:03] legoktm: there you are [00:42:09] did you look at the api thing yet? [00:42:18] * legoktm goes to do that [09:57:52] Hey all. [09:58:17] I sent an e-mail to translators-l@ on Saturday, and it still hasn't reached the list. [09:58:21] Any ideas what might be going on? [11:00:32] apergos: hello [11:01:35] hello svick [11:01:53] wonder if there's a parent around for our last gsoc meeting (but not our last ever meeting :-) ) [11:02:25] pinged [11:02:33] ok, i'll wait [11:03:12] he'll show up in a sec [11:05:22] parent5446: hello [11:05:42] Hey [11:06:17] so, documentation is on a good way, it will be done by the GSOC deadline (today 19 UTC) [11:07:24] yay! [11:08:18] parent5446: i belive you wanted to have a look at the code when that's done [11:08:42] Yep, and a few others as well if I remember. [11:10:07] after the official end of the program, your time is of course your own, but there will be things like code cleanup, changes from testing and features that didn't get done... how should we handle that? [11:11:11] Well GitHub conveniently has inline file comments for commits. That could come in handy. [11:11:40] parent5446: doesn't gerrit have that too? [11:12:16] Yes but Gerrit views things as patches, and if I'm correct in GitHub you can just browse through all files. [11:12:26] I don't want people to have to go to github to look at previous comments etc [11:13:04] parent5446: you can't just leave comments on files on github, but. you can only leave comments on diffs for each commit. [11:13:10] parent5446: so not *that* different from gerrit [11:13:28] Oh, well nevermind then. [11:13:59] apergos: i probably won't work much this week, but i'll try to be responsive to bugs/questions/whatever; after that, i plan to continue working, probably starting with compressing comments [11:14:29] this is very good, my evil plan is working :-) [11:15:29] some of the times you are working on code, you might hang out in here [11:15:29] yeah there's wikimedia-dev I guess but meh, this is kinda the community/code interaction space [11:16:07] ok [11:17:01] maybe other folks will have questions for you as we get closer to production use [11:17:31] also you might see questions from people in here that you can easily answer (there are a lot of beginner questions peoople ahve about mw) and when you're feeling in the mood to answer [11:17:37] Not sure if it's on-topic, an anon user (xtalmath) is seeing equation render errors on enwiki. [11:17:59] on various pages or on a specific page? [11:18:02] yeah I see just "Failed to parse (unknown error): \begin{pmatrix} c t' \\ x' \end{pmatrix} = \begin{pmatrix} \cosh \varphi & - \sinh \varphi \\ - \sinh \varphi & \cosh \varphi \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \mathbf \Lambda (\varphi) \mathbf v" [11:18:09] https://en.wikipedia.org/wiki/Rapidity [11:18:43] actually can someone else look at that, or me later, cause meeting right now [11:18:45] I tried to purge the page and it worked for me, but the user is still affected. [11:19:18] svick: since this is the last gsoc meeting, do you have questions for me, comments, things you want to bring up? 
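For reference, the page purge tried above can also be scripted rather than done through the browser. A minimal sketch using Python's requests, assuming only the stock action=purge API module; the title is simply the one from the report above.

```python
# Minimal sketch: purge a page's cached HTML via the action API, the same
# operation as the manual purge mentioned above. Assumes the standard
# action=purge module; a plain POST purge normally needs no token or
# login, though that is worth double-checking on a given wiki.
import requests

API = "https://en.wikipedia.org/w/api.php"

def purge(title):
    r = requests.post(API, data={
        "action": "purge",
        "titles": title,
        "format": "json",
    })
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    print(purge("Rapidity"))
```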
[11:19:47] For anyone interested, xtalmath in in #wikipedia [11:20:19] apergos: i probably won't be on IRC much [11:20:39] apergos: i have few tiny things [11:21:00] sure [11:21:57] i think i noticed that you use .dd extension for incremental dumps; i've been using .id for incremental dumps and .dd for diff dumps, and the magic numbers in the files are that way too (MWID and MWDD) [11:22:34] we'll probably redo the options and all that tbh [11:22:51] what do you have in mind? [11:24:13] I haven't actually run any incremental dumps yet [11:24:13] just converting (full) xml dumps to the new format [11:24:35] options usually with long or short form, with single or double hyphens, standard getopt style [11:24:49] right [11:25:42] when you talk about incremental vs diff dumps, well either they are full dumps or they are incremental (diff) [11:26:17] these are details of course, functionality is much more important [11:26:49] well, the whole project is called “incremental dumps”, so that's what i've calling the normal dumps in the new format (because they can be updated incrementally) [11:27:01] yes, I got that :-) [11:27:11] but incremental dumps really are the dumps that are applied to fulls :-D [11:27:30] just like incremental backups etc [11:28:03] 00(Sorry for the interruption, but the problem I mentioned earlier was solved) [11:28:03] and indeed that is the central point of the project is to produce those, which you do [11:28:35] then we need some new names :-) (before we go to production and people start using it, not necessarily right now) [11:28:43] “dumps in new format” is not very catchy [11:29:37] ah in case there was any doubt in your mind, which there should not be, you are of course getting 'pass' or 'success' or whatever gsoc needs in order to consier that you completed well etc [11:30:24] yep, we wil want to rename em [11:30:24] and we absolutely do *not* want to ask people on the list for names [11:30:24] because we will never get done with that discussion (*cough*bikeshedding*cough*) [11:30:35] *consider [11:31:29] I nominate dumpfile.bikeshed and dumpfile.ibikeshed [11:31:35] apergos: right, thanks, i wanted to ask that just to make sure, i know that if there was some problem, you would tell me earlier than few hours before the end :-) [11:32:24] so the old ones were "xml dumps", these could be called binary or random access or the name of your girlfriend (I mean possible, right? people do any kind of weird thing) [11:32:40] I would have told you much much earlier, so that whatever it was could be fixed [11:32:44] we like success [11:32:59] right [11:33:38] * apergos offhandedly stabs parent5446  [11:33:49] when we're doing that ranimg, we also probably want to change the name of the executable, i noticed that idumps.com is some kind of porn site [11:33:56] hahahaha [11:33:59] XD [11:34:32] I would go there but not really feeling it right now... [11:35:11] mwbindump (not too huge, prolly not porn) but there are lots of possibilities [11:36:11] that sounds reasonable [11:38:11] if you're creative you'll find some phrase that happens to have your gf initials and it will be mw[initials]dump :-P [11:38:12] anyways... [11:38:46] what else gsoc-wise? [11:39:01] you already made sure your t-shirt details are in etc? [11:40:01] if you're mostly not irc, should I be sending email, or leave questions on your mw user talk page (or one of your dump sub-pages)? [11:40:26] yeah, the details seem to okay [11:41:35] apergos: all ok with wikipedia [11:41:41] tested. 
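Since the .id/.dd extensions are easy to confuse, the magic numbers mentioned above (MWID for incremental dumps, MWDD for diff dumps) are the more reliable way to tell the two formats apart. A rough sketch, assuming the four-byte magic sits at the very start of the file:

```python
# Rough sketch: classify a dump file by its magic number instead of its
# file extension. Assumes the four-byte magic (MWID / MWDD, as described
# above) is the first thing in the file.
import sys

MAGIC = {
    b"MWID": "incremental dump",
    b"MWDD": "diff dump",
}

def dump_kind(path):
    with open(path, "rb") as f:
        return MAGIC.get(f.read(4), "unknown format")

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(f"{name}: {dump_kind(name)}")
```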
[11:42:39] apergos: you have do some stress testing ? [11:42:40] apergos: whatever works for you: jabber, email or talk pages (i'll make sure i'm watching them and have email sending set up); and i can join IRC if there is some discussion, but yeah, i won't be there most of the time [11:42:51] for testing [11:43:49] cortexA9: meeting now, sorry [11:44:01] ok apergos sorry. [11:44:15] svick: I probably won't do jabber too much, I'm on there all the time but stuff like this is better done where other people can see it later [11:44:32] apergos: sure, ok [11:44:32] it's totally fine to nag me there or whatever if you need something though [11:45:04] ok [11:48:19] thanks a lot for mentoring me, it has been a fun summer (and the parts that weren't fun are the fault of C++, not yours :-)) [11:48:24] :-D [11:48:48] well I am excited that we have a new contributor (you) and shiny new dumps which are really going to change the way people use them [11:49:32] my secret goal is still to be able to run incrementals on all projects once a day [11:49:37] how nice it would be.... [11:49:58] we shall see! [11:50:16] yeah, we'll see, i think we could get close to that [11:50:29] let's do it! [11:53:52] so, if there isn't anything else, see you when there's something to talk about [11:55:00] See you! [11:55:09] can someone time https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns for me? [11:55:09] I'm good and thanks for all the hard work! [11:55:09] and see how lng that takes? [11:55:14] not sure if it is my shitty connection or shitty code [11:55:20] see you on the lists and on the wiki! [11:56:17] 31 loosely counted seconds (not with a stop watch) [11:56:27] aaughhhh [11:56:31] * YuviPanda runs to find a profiler [11:56:37] yep [11:57:04] apergos: https://test.wikipedia.org/w/api.php/?action=query&list=allcampaigns? [11:57:30] < 1 sec [11:57:34] hmm, since that is not even remotely close to as bad, I'm going to guess it is a stupid algorithm somewhere. I don't even have any algorithms [11:57:51] :-D [11:57:57] query someplace? [11:58:04] they all are just directly indexed [11:58:12] parser calls, perhaps [11:58:15] I mean query scans all rows and test is small and commons is huge [11:58:22] something like that [11:58:33] apergos: sure, but I am pretty sure that none of the queries actually scan all tables [11:58:36] I don't know what allcampaings is anyways [11:58:44] or allcampaigns either [11:58:50] apergos: at least not according to EXPLAIN [11:59:09] apergos: it's the API for UploadCampaigns, which on desktop provides things like http://commons.wikimedia.org/wiki/Campaign:wlm-nl [12:00:03] you checked these on commoos (the explains) right? [12:00:22] (nice!) [12:00:31] apergos: i checked those on the local mw install. would that actually make a dfiferent? [12:00:34] *difference? [12:00:46] in the sense of how mysql optimizes things? [12:01:34] so I am still very much a mysql n00b but maybe it would tell you something different based on sizes of indexes and things, how it would have to handle the query ... maybe? [12:01:40] perhaps [12:01:45] i've access to the analytics slaves [12:08:36] apergos: do you have access to production dbs? [12:08:42] apergos: can you run an EXPLAIN or two for me? [12:09:19] I do and I can [12:09:25] sec [12:10:04] apergos: sweet. pastebinning, moment [12:10:23] apergos: https://dpaste.de/GkfP8/ [12:11:32] apergos: and https://dpaste.de/Z7wdd/ [12:13:54] https://dpaste.de/Qf1C9/ [12:15:08] rows 1616? 
that's nothing [12:15:27] also seems to be indexing fine [12:15:43] all primary keys chosen [12:15:48] yeah [12:15:52] yeah sure not anything in here [12:16:06] probably all the wikitext parsing [12:16:08] then [12:16:13] T305 BinaryPig - Scalable Malware Analytics in Hadoop [12:16:16] I... should cache those. [12:16:17] T501 Lessons from Surviving a 300Gbps Denial of Service AttackT305 BinaryPig - Scalable Malware Analytics in Hadoop [12:16:17] how much parsing are you doing? [12:16:30] what are those from parent5446 ? [12:17:00] apergos: on average, about 3-5 small parses (a couple of templates, at best) for every campaign [12:17:07] apergos: and commons has ~140 campaigns now, many with 0 parses. [12:17:20] apergos: i'm *not* caching the parses (I'm caching the sql results tho) [12:18:03] apergos: is our parser really *that* slow that sql queries over the network is faster?! :O [12:18:10] so can you ask if you less campaigns and see how long that takes? [12:18:18] s/if/for/ [12:18:30] like, try one, then ten, then 50 [12:18:31] ? [12:18:37] yeah, doing [12:19:00] https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns&uclimit=1 is instant [12:19:09] https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns&uclimit=10 is a couple of seconds [12:19:24] https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns&uclimit=50 is default, ~10s [12:19:36] https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns&uclimit=500 takes fucking forever. [12:19:48] * YuviPanda times them [12:20:54] profile on the 50 one might be interesting [12:21:22] 500 (actually only ~140 exist) returns in 17s. [12:21:26] 50 in 7s [12:21:29] 5 in 0.7s [12:21:42] and this is from inside the cluster, so network not a big factor [12:21:54] network would not be the factor here. just no way [12:21:56] apergos: hmm, so I guess I go in, add a lot more profiling calls, and then wait for the next deploy? [12:22:12] mmmmmm [12:22:16] indeed, with differing sizes. still, I have a healthy mistrust of my network :P [12:22:41] can you get the relevant page titles and imprt em to your local instanc? [12:22:50] then do profiling there? [12:22:55] apergos: probably, but i'll also need to import the associated templates, etc. [12:22:58] instead of having to sit around and wait I mean [12:23:00] which'll be a paint o hunt down [12:23:04] *pain to [12:23:09] well not if you [12:23:13] export all the templates :-P [12:23:29] but typically via special:export [12:23:35] you can just check 'templates too please' [12:23:41] and boom it gives them to you [12:24:04] apergos: won't work. Campaign pages are JSON stored under a Campaign: namespace, so I highly doubt export will pick those up automatically [12:25:12] apergos: or I could just be a good boy and cache the result of the parses, but that's not even remotely close to being 'simple', because of invalidation, etc [12:25:21] wonder how many templates they have [12:25:39] not too much, actually. Just WLM stuff right now [12:25:44] I could probably make a list in about 30 mins [12:26:11] I also have no idea how to properly cache and invalidate those, though. [12:26:17] ideally would've loved to use Redis for this... 
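The timings above are easy to reproduce with a short script instead of loosely counted seconds; a sketch using Python's requests, stepping through the same uclimit values tried in the log (the endpoint and parameter names are taken from the URLs quoted above):

```python
# Sketch: time the allcampaigns API at increasing uclimit values, as was
# done by hand above. Endpoint and parameters come from the URLs in the
# log; timings naturally include network latency when run off-cluster.
import time
import requests

API = "https://commons.wikimedia.org/w/api.php"

for limit in (1, 10, 50, 500):
    start = time.time()
    r = requests.get(API, params={
        "action": "query",
        "list": "allcampaigns",
        "uclimit": limit,
        "format": "json",
    })
    print(f"uclimit={limit}: {time.time() - start:.1f}s (HTTP {r.status_code})")
```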
[12:28:00] and when the templates change this would need to be refreshed somehow, and I doubt that linksupdate, etc will consider these as normal pages [12:28:01] grr [12:28:07] mmm [12:28:39] 17 s [12:28:49] very slow [12:30:46] cortexA9: indeed [12:35:21] apergos: hmm, there seems to be no way for me to just export all pages from Campaign: namespace? [12:35:21] :( [12:35:23] * YuviPanda does hacks [12:36:11] from the namespace? mm [12:36:21] typically it's by category I guess [12:36:28] yeah [12:37:29] gah [12:38:15] apergos: maybe the load balancing have some problems ? [12:38:45] mm, bash oneliners! [12:39:49] so many time 17 s. [12:42:10] YuviPanda: ping [12:42:17] dMaggot: pong [12:42:24] YuviPanda: https://gerrit.wikimedia.org/r/#/c/85154/ [12:42:35] dMaggot: i saw your patch to UW. Been travelling (just landed in India a few hours ago). [12:42:53] dMaggot: will take a look at it in a bit. will make sure to merge it before the cutoff for deployment (Thursday this week) [12:43:54] YuviPanda: ok, just make sure it is deployed at the same time EventLogging's change is deployed (see the linked review in that review) [12:44:35] templates on commons: 130511 [12:44:36] dMaggot: ah, yeah. [12:44:43] apergos: not much :P [12:45:04] apergos: imported all campaigns to localhost, curl now takes 1.1s for all 140 of them [12:45:20] seriously [12:45:25] 1.1 secods? [12:45:35] yeah [12:45:37] locally [12:45:40] but if it's not expanding the tempaltes [12:45:43] yeah [12:45:44] because you don't have them [12:45:49] indeed [12:46:10] let me see if I can grep my way through one set of templates [12:48:11] * apergos is gonna get vagrant set up to be able to have a vanilla installation of X wiki of your choice with all tempaltes, project namespace and mediawiki: pages in there already [12:48:20] such an obvious thing to have [12:48:57] apergos: do you already use vagrant? [12:49:07] apergos: also, 'all templates' might be a bit much :P [12:49:08] I have used it [12:49:12] it's not that much really [12:49:19] apergos: but in general, 'sync all templates' might not be a bad iea [12:49:20] *idea [12:49:22] I've done this for en and it's fine [12:49:49] having a vanilla intance into which you can now import any other page (ok there are all the extensions that need to be set up, so that's a pain) [12:49:49] oh wah [12:49:55] oh? [12:50:08] as in, I didn't expect to be able to do this for en locally [12:50:19] ah [12:50:30] however, i might be thinking with the 'omg my laptop has only 10G of free space now!' mindset that i've had for the last 1.5 years [12:50:33] I have done fun little en experiments [12:50:55] http://meta.wikimedia.org/wiki/Data_dumps/Import_examples#Import_into_an_empty_wiki_of_a_subset_of_en_wikipedia_on_Linux_with_MySQL [12:51:04] apergos: I don't think Special:Export can handle that, tho [12:51:25] this is all via scripts + the api [12:51:33] anyways that's a bit of a sidetrack for you now [12:51:42] true [12:51:48] but vagrant role for that sounds nice :D [12:51:53] role::wikimedia::commons [12:51:57] only if they can be kept in sync [12:52:56] apergos: theoretically speaking, I can get the same effect by grabbing all the Template pages and importing them in arbitrary order... [12:54:34] yes you can [12:54:56] * YuviPanda files away idea for tool that does daily 'Template only!' dumps on toollabs [12:55:13] hey YuviPanda [12:55:23] hello sumanah! [12:55:43] YuviPanda: you saw Tim's email about RFC review Tuesday 24 September at 22:00 UTC? [12:55:50] I did! [12:55:54] ok! 
just wanted to check [12:56:09] sumanah: I'm not participating any RFCs *right now*, but I'll probably turn up anyway - jet lag willing... [12:56:16] right, that's what I figured [12:56:51] it's slightly at Silly O'Clock here, and as Daniel mentioned probably Europe too. [13:00:55] don't do another set of dumps, we already get adds/changes for all wikis every day [13:01:01] no deletions but who cares for this use case [13:01:31] apergos: templates only? [13:01:37] all [13:01:42] but so what, just grab the templates out [13:01:43] apergos: yeah, so that's large, no? [13:01:46] ah, hmm [13:01:49] for daily changes? no [13:01:54] hmm [13:01:55] once you have the base [13:01:58] anyways: [13:02:11] "Incremental Dumps", that's where the future is [13:02:14] :D [13:02:16] ok [13:02:24] and gsoc just concluded for that, hope to get em into production by end of the year [13:02:27] also, I'm in need of a way to mass null edit pages... [13:02:32] no you aren't [13:02:45] * apergos growls [13:02:49] apergos: :P [13:03:08] apergos: found a bug in Campaign: code where the campaign doesn't get picked up if it is only imported [13:03:20] apergos: apparently article creation hooks aren't run if it is imported via Special:Import [13:03:29] apergos: so I now need to edit them to trigger the hooks [13:03:47] how many pages [13:04:02] apergos: 140 [13:04:16] oh. when you said mass null edit I thought you really meant mass null edit [13:04:20] whew [13:04:32] apergos: hehe, it is 95% of all pages on this wiki.... [13:04:32] :P [13:06:38] when your wiki si teeny tiny :-P [13:07:27] apergos: :D [13:08:05] Krinkle|detached once you reattach yourself please take a look at https://www.mediawiki.org/wiki/Requests_for_comment/URL_shortening_system_for_Wikimedia_sites_to_support_QR_codes [13:08:16] so bet there is a maintenance script [13:08:17] checking [13:08:36] * ToAruShiroiNeko bets 10$ [13:10:04] edit.php [13:10:14] input from stdin [13:10:30] ewww [13:10:41] heh [13:10:46] whiner [13:11:35] apergos: oh, edit.php in mainteanance. I thought you meant EditPage.php [13:11:37] hence the ewww [13:13:01] ah no [13:13:24] apergos: what [13:13:41] so for each title in some list you would have to get the wikitext, shovel it into stdin of that, pass the args... [13:14:06] oh :) [13:14:29] apergos: yeah, just found that out when i xarg'd the wrong things into it :P [13:14:49] touch.py of pywikipediabot too [13:15:19] i'd have to set that thing up [13:15:52] doing [13:16:33] do you need to update the "touched" timestamp in db ? [13:16:58] page_touched in page. This timestamp is updated whenever the page changes in a way requiring it to be re-rendered, invalidating caches [13:17:12] mutante: there seems to be a bug where the article create hooks aren't run if I use Special:Import [13:17:34] mutante: i'm testing something in my dev wiki, and imported all Campaign: pages (~140) and the hook not running is problematic [13:17:35] ah, i didn't listen to all, just saw "null edit" [13:17:50] mutante: so I'll need to just edit them to trigger appropriate hook [13:18:45] mutante: heh, i'm not going to flood the job queue :) [13:19:21] ehm.. what apergos said :) [13:19:21] :-D [13:21:27] so the other thing is that then there will be a bunch of stuff in objcache table in your local install I bet, [13:21:27] which means that you might get very fast response time but [13:21:27] apergos: Empty set (0.00 sec) [13:21:27] if you clear that out maybe it will behave more like prod [13:21:27] after the null edits? 
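The mass null edit discussed above does not have to be 140 browser tabs; a sketch of the same idea with pywikibot, assuming a working pywikibot configuration pointed at the local wiki. The namespace id below is a placeholder for whatever the Campaign: namespace is registered as locally; touch() performs a null edit, which is the same workaround being set up here for the hooks that Special:Import skipped. The older touch.py script mentioned above does essentially the same thing.

```python
# Sketch: null-edit every page in one namespace so the save/update hooks
# run again (Special:Import skips them, as noted above). Assumes pywikibot
# is already configured for the target wiki; CAMPAIGN_NS is a placeholder
# for the Campaign: namespace id on that wiki.
import pywikibot

CAMPAIGN_NS = 460  # placeholder: use the namespace id configured locally

site = pywikibot.Site()
for page in site.allpages(namespace=CAMPAIGN_NS):
    page.touch()  # null edit: re-runs the save hooks without changing text
    print("touched:", page.title())
```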
[13:21:27] apergos: also, 1.1s was for only 4 pages, not 140. [13:21:27] apergos: nope, still setting up pywikipediabot :P [13:21:27] ok [13:21:27] check after [13:21:27] you wanna not have that cruft lying around [13:22:01] ok [13:26:03] if you just wanted to time 50 pages you probably would have had them edited by now by hand :-P [13:26:11] i know [13:26:11] grr [13:26:15] my network's slow [13:26:43] just saying... [13:34:16] apergos: grr, I can't seem to get this to work [13:34:48] ok, maybe you want to try pywikipedia then [13:35:05] just ser up your user config file to point to your local installation with your creds [13:35:47] yeah, doign that now [13:35:52] but honestly maybe you just want to opn 15 tabs [13:35:58] 140! [13:35:59] click edit on them all [13:36:01] save them all [13:36:04] do the next 15 [13:36:08] be done in 5 mins tops [13:36:50] fiiiineeee [13:37:48] it will just take me half an hour to write this script... (two hours later) well another 10 minutes and it will be done [13:37:58] hehe :D [13:38:13] apergos: hmm, for some reason, clicking edit takes.... so far 30s and still loading [13:38:20] I blame wikieditor + ACE [13:38:39] (three hours later) just 2 more minutes and I know it will be working... [13:40:35] ugh codeeditor [13:40:41] but ace shouldn't be that heavy [13:41:33] apergos: save also takes a good 30s [13:41:34] oh wait [13:41:37] i turned off my cache [13:41:38] grr [13:41:39] that's why [13:41:41] :-D [13:42:47] apergos: wooo, 4s even on super-small loads [13:43:11] goooood [13:46:58] apergos: 29 parse calls for 18 campaigns [13:47:53] mm hmm [14:46:09] helderwiki: I assume you read [[translatewiki:FAQ]] etc.? [14:46:31] hmm... probably a long time ago... heh [14:46:41] There you are helderwiki [14:46:56] Hi! :-) [14:47:53] So I have some ideas for displaying security bugs as such. [14:48:05] Resolution of bugs will be tougher. [14:48:31] Hm... [14:48:33] Talking about the enwp gadget btw [14:49:29] I'm particularly interested in the resolution actually (since it is a lot more common) [14:49:35] Security bugs return an error, so I should be able to do those by adding a .fail() to the current query. [14:49:59] Nemo_bis: I think this is the answer I was looking for: https://translatewiki.net/w/i.php?title=FAQ#LocalisationUpdate [14:51:40] Resolved bugs may require a page html scrape for the resolution. [14:52:01] Technical_13: ah, I was just going to ask about this (what was the problem with it) [14:52:38] Just that I'm not good at it yet. :p [14:53:53] do you have a sample URL/query which returns the resolution? [14:54:20] (I know a little of regex if that helps ;-) ) [15:07:57] helderwiki: good, can you close your own thread please? :) [15:16:27] someone knows what mark stats are? https://en.wikipedia.org/wiki/User_talk:Petrb#Tool_Labs_Homepage_.2F_Wiki [15:16:42] mark's request stats [15:16:50] they don't exist anymore [15:17:19] in that case I will remove them from main page... [17:39:43] Are virtual namespaces cheap? [17:41:25] Or are they expensive on the system? [17:41:47] PHP namespaces? MediaWiki namespaces? [17:42:04] MediaWiki [17:43:17] they're not expensive, no [17:43:18] what [eco]system? servers or projects as a whole considering users too? :) [17:43:32] we just send Nemo_bis a bill [17:43:33] Thanks ori-l [17:43:48] I'll make sure to forward mine too. 
[17:43:59] Technical_13: Nemo_bis thinks they add cognitive overhead to users and make interfaces cluttered [17:44:24] it's not a completely unfair criticism -- there are aspects of the interface that could be re-touched to be friendlier to namespaces [17:44:33] but you probably want to talk w/him about it [17:44:41] noo [17:44:58] namespaces are cheap; we should use more [17:45:17] there are about 64 namespaces I'd like to add to Meta ;) [17:45:30] On DDOwiki.com we have an established pseudo-namespace, and I was thinking it would be more effective to change it to a virtual namespace [17:45:50] pseudo-namespaces are soft social enforcement [17:46:02] Yes. [17:46:09] actual namespaces are stronger software enforcement [17:46:42] so it depends how much you want to sclerotise your wiki organisation design decision [17:46:43] And there is a constant inflow of new pages [17:48:01] Nemo_bis: sclerotize is a nice word, didn't know it before [17:48:16] So I was thinking it's better to add the vn which is automatic and case insensitive than try to keep up with creating new pn redirects. [17:48:38] sorry, * 72 namespace I'd like to add to Meta [17:48:51] ori-l: ;) it's more common in Italian usage I think [17:50:39] what's the name of the feature that auto-logs in when you visit a new SUL domain you somehow weren't already logged in on? (while logged into SUL) [17:50:52] csteipp: who do i bug about problems with that feature? [17:51:37] jeremyb, is it the "i'm being sent to abusefilter" bug? [17:53:00] YuviPanda: https://commons.wikimedia.org/w/api.php/?action=query&list=allcampaigns gave me a 504 timeout [17:53:34] YuviPanda: can't you just cache the parser output in $wgMemc and then have a hook on page save that just invalidates the cache? [17:54:55] quiddity: no, i think not [17:55:38] jeremyb, ok. Possibly the same Component - CentralAuth, in case that helps. [17:55:51] * jeremyb had to run away to deal with 3% battery left but now i'm back... [17:57:05] hah, guest :) [17:58:42] csteipp: so, i'm repeatedly getting the toastie that says i'm logged in now and that i should reload and my username shows up at the top. and i reload and i'm not logged in and then i get the toastie and i get the username at the top and i reload and i'm not logged in [19:24:16] csteipp: ping? [19:25:02] jeremyb: Sorry, didn't get your first ping [19:25:34] csteipp: np, you had naming problems :) [19:26:25] jeremyb: Any chance you can grab your cookies on the request that you're having problems with? [19:26:45] I've heard reports of this, but never able to get a capture of the headers [19:26:46] csteipp: maybe... [19:26:52] it's a phone [19:27:12] hmm... mobile safari? [19:27:18] android [19:27:36] i suppose i can start a proxy and tell it to use the proxy [19:28:40] also, apergos, sorry for the midair collision. didn't see your msg til much later [19:28:47] jeremyb: That would be helpful, but I know it's a pain [19:29:45] so it's definitely still happening [19:36:36] csteipp: but a proxy would have to MITM to get the cookies... [19:37:19] (some cookies are HTTP and some are HTTPS. and i guess some are marked to disallow access from JS) [19:37:31] let's see what else i can do [19:47:26] csteipp: so how do i transfer cookies to you? [19:48:04] jeremyb: pm? [19:48:20] or just the names are fine too [19:48:48] I'm mostly wanting to know do you have a centralauth cookie [19:54:31] csteipp: so, i'm logged in fine i think with HTTPS. 
it's just HTTP that's not logged in [19:54:36] i have cookies for both [19:54:46] one min [19:55:04] jeremyb: that's expected. [19:55:31] csteipp: why isn't it forcing https? [19:55:34] If you're logged in with https, those cookies wont be sent on https calls. [19:55:51] and if it's not going to force https then why does it repeatedly say i've been logged in? [19:55:57] it should send me a 30x [19:56:09] jeremyb: So after you login, you're visiting http? [19:56:39] yes, unfortuntely (AFAIK) there's no HTTPS everywhere available here [19:57:15] some domains (e.g. enwiki) are enforcing the HTTPS force. some are not [19:57:20] So yeah, MediaWiki should send you to https after login, and all links are https [19:57:34] did you see the links i sent? [19:58:17] Ah, looking... [19:59:22] i have another pair of images with values but idk how to transmit them to you. i guess plaintext email is fine (it's my phone account not main account so at least with it's current perms compromise ain't a big deal) [19:59:38] idk why these things aren't copy/pasteable [20:01:43] jeremyb: Values won't help much at this point, so don't worry about it [20:02:05] I'm not sure why you don't have a forceHTTPS cookie for meta.wm.o [20:02:32] So before this, you logged into meta over https, right? [20:02:48] And then you got this cookie list when you were viewing a page over http? [20:04:40] idk. i had logged in somewhere and i found some other place didn't have me logged in (i think first i tried enwiki and then visited commons). but my commons account didn't exist yet (wasn't autocreated yet) so i don't remember how i fixed that. then later (most recently) i used loginwiki [20:04:48] not sure if i ever logged into meta at all yet [20:04:51] csteipp [20:07:23] Hmm... well, we're pushing out a fix for the autocreate bug on Wed [20:08:31] ok [20:08:36] so hopefully that won't get in the way in the future. [20:08:44] well what's the name of the toastie bubble feature? [20:09:07] And so now when you're refreshing the http (right?) page, you're getting the login sliding in? [20:09:13] Or are you on https? [20:09:51] well if i explicitly go to https then i'm logged in from the beginning. if i go first to http then i'm logged in later. i guess that's what sliding means [20:12:11] Ah, that kinda makes sense... [20:13:03] (but if i then click "edit" or refresh I'm still not logged in) [20:13:35] Oh, I bet your browser doesn't allow an http site to clober a cookie set by the https site... [20:14:03] is there a cookie i should delete? [20:14:47] If you want to use http logged in, you should go to your preference, and check that box, then relogin [20:15:23] ewwww, no [20:15:33] Or you can set a cookie named "forceHTTPS" for .meta.wikimedia.org... then you should always be redirected [20:15:33] i mean so that it can be clobbered [20:16:02] Actually, you really don't want that to happen [20:16:12] (clobering cookies, that is) [20:16:21] Otherwise your session cookie will go over https [20:16:25] s/https/http/ [20:16:46] So really the issue is that you don't have the forceHTTPS cookie [20:16:52] right. i'm just saying maybe i can delete whatever http is trying to set and failing [20:16:56] right, but why not? [20:21:47] That's what I wish I knew :) [20:22:41] anomie: ping ^ [20:23:17] * anomie looks [20:23:24] ok. why do we have both ${db}_Session and mediaWiki.user.sessionId ? [20:23:47] mediaWiki.user.sessionId? [20:23:56] mediaWiki.user.sessionId is something not actually related to sessions [20:23:56] Where is that? 
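Capturing the headers csteipp asked for does not strictly need a MITM proxy; any HTTP client that reuses the browser's cookies can show which cookies a given request tries to set. A sketch with Python's requests against the setCookies step of the autologin flow: the URL shape is the one that appears later in the log, the cookie values are placeholders to be copied out of the phone's browser, and this deliberately skips the earlier steps of the handshake, so it only demonstrates how to read the Set-Cookie headers (for example whether forceHTTPS is among them), not the full login.

```python
# Sketch: hit the Special:CentralAutoLogin/setCookies step directly and
# list which cookies the server tries to set, e.g. whether forceHTTPS is
# among them. Cookie values are placeholders; copy your own from the
# browser so the request is recognised as the same session.
import requests

URL = ("https://meta.wikimedia.org/wiki/Special:CentralAutoLogin/setCookies"
       "?type=script&proto=https")

cookies = {
    "centralauth_User": "YOUR_USERNAME",    # placeholder
    "centralauth_Token": "YOUR_TOKEN",      # placeholder
    "centralauth_Session": "YOUR_SESSION",  # placeholder
}

r = requests.get(URL, cookies=cookies, allow_redirects=False)
print("status:", r.status_code)
print("cookies set on this response:", sorted(c.name for c in r.cookies))
print("forceHTTPS present:", any(c.name == "forceHTTPS" for c in r.cookies))
```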
[20:24:14] * csteipp is recovering from minor heart attack.... [20:24:15] hah, good naming! [20:24:24] csteipp: mediaWiki.user.sessionId() in JS generates that [20:24:25] csteipp: see the links :) [20:24:29] "Get an automatically generated random ID (stored in a session cookie)" [20:24:41] mediawiki.user* [20:24:52] (or is it mediaWiki? whatever) [20:25:29] some sneaky extensions which track you use it [20:25:56] does WMF own an AED? [20:27:38] https://en.wikipedia.org/wiki/Wikipedia:WTF%3F_OMG!_TMD_TLA._ARG! [20:28:13] jeremyb: I don't think so [20:29:02] well good thing chris is in recovery so fast [20:30:02] jeremyb: No forceHTTPS cookie is why it's not redirecting you to https, of course. The question is why that cookie isn't set for meta. I don't suppose you can grab the response headers for the query to Special:CentralAutoLogin/setCookies on meta that should be happening just before you get the sliding thing? [20:30:22] sure [20:30:37] now that i figured out how to get data off this phone i can do just about anything :P [20:39:38] hey, can some tech person look at this? https://commons.wikimedia.org/wiki/Commons:Village_pump#Upload_wizard_broken.3F [20:40:19] YuviPanda: marktraceur ^ [20:40:26] Looks like it's broken in wmf18 (again) [20:40:37] Ah christ, we probably need to backport that fix for EL [20:41:27] Reedy: https://gerrit.wikimedia.org/r/85008 fixes that [20:41:41] i.e. updating to master in wmf18 should do the trick [20:41:53] Let's do that now then [20:42:34] anomie: there's a lot of Special:CentralAutoLogin/setCookies for different domains [20:43:20] jeremyb: The interesting one is the one for meta, since that's the one you're saying you're not having forceHTTPS for. Isn't it? [20:44:58] anomie: i don't see one for meta? [20:46:02] darkweasel: Thanks for the ping, it's going out now [20:46:22] jeremyb: When you get the slidey thing, there should be one that has type=script, and should be for the domain you're currently on. That's the one that actually *does* the slidey thing, so if you get the slidey thing it must be there somewhere. The rest should have type=1x1, and be for other domains. [20:46:49] start? ? [20:47:48] marktraceur: thank you :) [20:47:49] ok, i guess i was looking at the wrong column [20:47:58] jeremyb: It starts out with start, then redirects to checkLoggedIn (on login.wikimedia.org), then createSession (back on the local wiki), then validateSession (on login again), then setCookies (on the local wiki) [20:49:53] fun [20:56:08] is this a good IRC channel to ask for multi-lingual template support? [20:57:13] Biosthmors: maybe #mediawiki-i18n is better? not sure [20:57:28] thx [21:03:21] well it doesn't look like there's much going on over there. https://en.wikipedia.org/wiki/Talk:Malaria#Reference_style towards the end of the thread talks about the Swahili Wikipedia not accepting a translation of the English article. [21:03:45] something with the references not working, because of some template? 
[21:04:25] this is the latest diff: https://en.wikipedia.org/w/index.php?title=Talk:Malaria&curid=100965&diff=574235478&oldid=574227031 [21:04:41] Set-Cookie:centralauth_Session=xyz; path=/; domain=meta.wikimedia.org; secure; httponly Set-Cookie:metawikiSession=xyz; path=/; secure; HttpOnly Set-Cookie:centralauth_User=Jeremyb-phone; expires=Wed, 23-Oct-2013 21:02:39 GMT; path=/; domain=meta.wikimedia.org; secure; httponly Set-Cookie:centralauth_Token=xyz; expires=Wed, 23-Oct-2013 21:02:39 GMT; path=/; domain=meta.wikimedia.org; secure; httponly [21:05:08] anomie: that's response headers from URL:https://meta.wikimedia.org/wiki/Special:CentralAutoLogin/setCookies?type=script&returnto=User%3AJeremyb&returntoquery=&proto=https [21:05:27] (xyz == redaction) [21:05:31] csteipp: ^ [21:05:35] have to run shortly [21:08:49] there are issues with sessions? [21:08:58] jeremyb: Hmm. You didn't go and disable your "Always use a secure connection when logged in" preference on meta, did you? Otherwise I'll have to try to look at it tomorrow. [21:34:23] anomie: no, i didn't [21:34:27] on the train now [21:35:27] anomie: but even if i had it should still log me in. (just would be HTTP not HTTPS) right? [21:35:44] jeremyb: True. Hmm. [21:43:19] jeremyb: Ah, I think I see the bug. Thanks. [23:29:54] Who's a wikitech-l mod? [23:30:54] ^demon: Is there a queued e-mail to wikitech-l? [23:40:48] <^demon> Maybe? Dunno. [23:41:16] <^demon> "There are no pending requests." [23:43:04] Elsie: wikitech will silently reject e-mail from addresses that are not subscribed to it, if that's what you're experiencing [23:43:08] (i learned it the hard way) [23:43:46] <^demon> No it won't. [23:43:57] <^demon> Oh, if you're not subscribed. nvm. [23:44:28] Oh, lame. [23:44:35] The Ace folks copied the list on their reply. [23:44:38] Perhaps I'll forward it. [23:45:11] I used the Redirect option. [23:45:26] ^demon: Thanks for checking. I didn't know messages were silently discarded. [23:46:07] <^demon> Yeah, wikitech-l gets tons of non-member spam. It's been set that way for awhile :\ [23:48:21] I'm not sure my redirect worked. [23:48:56] Elsie: it did [23:49:02] http://lists.wikimedia.org/pipermail/wikitech-l/2013-September/071965.html [23:49:05] Look at that! [23:49:11] Did that just come in? [23:49:12] funny mail headings. [23:49:25] <^demon> E-mail is hard, yo [23:49:26] Resent-Date, Resent-From. never seen those before. [23:49:37] Outlook has an obscure option called "Redirect". [23:49:41] Resent-To, too.