[01:27:41] Why was deleted contributions given its own special page? Could it have not just been part of Special:Contributions's search options and off by default?
[04:08:04] Krenair: It makes more sense if you know the database backend.
[04:08:13] There's a revision table and an archive table.
[04:08:39] Same reason Special:Contributions only shows edits (and somewhat page moves) instead of showing all of a user's contributions.
[04:08:44] Moves, deletes, protections, etc.
[04:09:12] If you were going to re-implement it, you'd really just get rid of the archive table (moving between tables is annoying and prone to issues).
[04:09:20] And you'd make Special:Contributions a lot smarter.
[04:09:27] Hindsight is 20/20, &c.
[11:25:52] I get the following error when trying to set up Parsoid according to the instructions:
[11:25:52] Error: Cannot find module 'jsdom/level2/core'
[11:27:20] vvv: old jsdom perhaps?
[11:27:37] jsdom@0.2.10 active installed
[11:28:02] That's what my packaged npm on latest Ubuntu installed
[11:28:03] hmm. what's node -e 'console.log(require.resolve("jsdom"))' ?
[11:28:34] also, what's your "node -v" ?
[11:28:43] Eh: /home/vvv/.node_libraries/jsdom@0.2.10/index.js
[11:28:43] undefined
[11:29:17] "node -v" is undefined?
[11:29:36] It's v0.4.9
[11:29:47] ok, trying to repro on ubuntu fresh install here. a sec
[11:32:57] (still installing g++...)
[11:34:48] repro'd
[11:35:17] So I'm not alone, huh?
[11:35:37] yeah. looks like a glitch in the html5 package for node 0.4
[11:35:42] it appears to make a node 0.6 specific require
[11:35:44] let me try a quickfix
[11:38:20] oooohhh, somebody trying Parsoid ;)
[11:38:52] $ npm i html5@0.3.5
[11:38:52] npm ERR! html5@0.3.5 not compatible with your version of node
[11:38:52] npm ERR! Requires: node@>= 0.4.7
[11:38:52] npm ERR! You have: node@v0.4.9
[11:38:53] npm not ok
[11:38:56] er, wow.
[11:39:20] *vvv wonders what OS gwicke uses
[11:39:28] vvv: Debian unstable
[11:39:40] but I have a more recent node install
[11:39:54] Self-compiled?
[11:40:14] yes- but it worked with the packaged one too
[11:40:36] 0.4.12 in Debian
[11:41:06] vvv: you could try 'npm install npm' first
[11:41:19] Sounds recursive
[11:41:21] (the ERR! above was what I did get with "npm install npm")
[11:41:28] I had a few similar issues that were fixed by updating npm
[11:41:41] oh
[11:42:26] gwicke: well, it started out with one guy complaining about the Wikipedia article on regex being full of "useless" math. I gave him an illustration of how knowledge of CS may be useful for people to understand why you can't parse HTML/wikitext with regexes
[11:43:25] And then he asked me whether there is a proper solution to parse wikitext, and I said "uh, not yet"
[11:43:36] And decided to check Parsoid out
[11:43:39] trying
[11:43:40] curl http://npmjs.org/install.sh | sudo sh
[11:44:24] vvv: wikitext is not fully context-free and in any case needs a lot of look-ahead..
[11:44:40] I know :)
[11:45:07] that followed by "npm i html5" seems to fix things.
[11:45:08] 184 total passed tests, 494 total failures
[11:45:26] hrm..
[11:45:51] so it seems, sadly, that ubuntu oneiric's packaged npm can't update itself
[11:46:35] Looks like my npm install npm was luckier
[11:46:36] 248 total passed tests, 430 total failures
[11:46:46] ho. great!
[11:47:19] 249 total passed tests, 429 total failures # after "svn up"
[11:47:26] *vvv is going to compile node.js from source
[11:47:40] ;)
[11:47:43] *au goes update Parsoid page about "npm install npm"
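A minimal diagnostic sketch (not part of the log) of the checks run above, for anyone hitting the same error: print the running Node version and which jsdom the module loader resolves, mirroring the "node -v" and require.resolve() commands. In this session, v0.4.9 with jsdom@0.2.10 produced the failure, and "npm install npm" followed by "npm i html5" fixed it.

    // Report the Node version and which jsdom installation is picked up,
    // as done interactively above with "node -v" and require.resolve().
    console.log('node version:', process.version);
    try {
        console.log('jsdom resolves to:', require.resolve('jsdom'));
    } catch (e) {
        console.log('jsdom not found:', e.message);
    }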
[11:49:23] au: just added a sentence..
[11:50:15] yay conflict
[11:50:51] I love the -oid suffix
[11:50:55] au: sorry about that..
[11:50:59] Paranoid, Android...
[11:51:00] np
[11:51:07] we end up having two identical sentences :)
[11:51:15] it won't hurt to remind folks twice I guess
[11:51:27] scare them all away
[11:51:32] bwahaha
[11:51:47] vvv: ..or Factoid
[11:52:12] Those are usually in tabloids
[11:52:31] anybody around who can create a mw git repo for me?
[11:52:34] :)
[11:53:32] *au loves Monoids, ftr :)
[11:53:56] hello
[11:53:57] the first candidate name was Groupoid
[11:54:06] but then Amgine suggested Parsoid instead
[11:54:21] "You have a problem and you decide to use Haskell. Now you have a Monoid in the category of problems." -- http://goo.gl/GVyLp
[11:54:25] Why groupoid?
[11:55:01] indeed. what was group-y about the parser?
[11:55:08] Ahaha
[11:55:18] npm configure script is cool
[11:55:20] vvv@pheleia:~/dev/npm$ ./configure
[11:55:20] vvv@pheleia:~/dev/npm$ ./configure --help
[11:55:20] ./configure --param=value ...
[11:55:27] mostly because of 'Category in which every morphism is invertible'
[11:55:47] is demon the only person who can create git repos at this time?
[11:55:57] ooh, a proper mathy name
[11:56:49] Parsoid seems far more search-engine-optimization friendly though. :)
[11:57:51] yes, surprisingly
[11:59:03] the typeclassopedia is quite nice for practical people who would like to know a bit of relevant category stuff, like me
[11:59:23] http://www.haskell.org/haskellwiki/Typeclassopedia
[11:59:46] yup. geheimdienst++ for mediawikizing it
[12:00:39] Okay
[12:00:44] I updated node and npm
[12:00:48] Same result :(
[12:01:20] 248 tests passed?
[12:01:30] Yes...
[12:01:31] that's where we are :)
[12:01:48] some of those tests depend on the time or the day of the week..
[12:01:55] and your locale
[12:01:55] maybe it's a good idea to display it as a progress bar after all
[12:02:07] Oh wait
[12:02:13] So it's supposed to be so?
[12:02:17] we don't rig a consistent time yet in the test harness
[12:02:28] vvv: yup
[12:02:48] *au pictures it as something like http://visionmedia.github.com/mocha/images/reporter-landing-fail.png
[12:03:36] Okay
[12:03:48] au: nice plane character ;)
[12:03:56] ✈ ✈ ✈
[12:04:24] is there a crashed plane character too?
[12:05:44] in unicode? probably not :)
[12:06:50] maybe somewhere in the lol script unicode plane
[12:07:14] ✈⃕
[12:07:36] (that's 20D5 Combining Clockwise Arrow Above)
[12:07:59] ;)
[12:08:35] I am cleaning up the transform setup in mediawiki.parser.js a bit
[12:10:31] great! looking forward to the commit(s)
[12:11:18] au: comment preservation seems to be a bit weak in the coffee -> js conversion, but even more so in js2coffee
[12:11:20] *au still hasn't heard back about the oDesk/contract setup things
[12:11:47] coffee -> js preserves ### ... ### comments only
[12:11:55] yes, I wonder why
[12:12:29] au: the odesk stuff also took a while when I started
[12:12:36] re js2coffee, that seems to be due to a narcissus issue
[12:12:57] ah ok. I'll not worry about it then :)
[12:13:12] there are other people involved, and I believe the person handling odesk at the foundation recently changed
[12:13:25] k
[12:14:44] also, nobody seems to like odesk..
[12:15:21] I do like it :-D
[12:15:43] more exactly, my accountant likes its reports
[12:15:58] *gwicke believes that a 10% cut is a bit too much for a simple web interface
[12:16:50] maybe they have a discount
[12:17:03] I hope so
[12:17:18] oh- and they take another 3% or so in the currency conversion
[12:17:20] or that might be a subject to raise
[12:18:02] with the number of European contractors, it might be worth creating a European company
[12:18:13] gwicke: I use direct wire transfers
[12:18:59] gwicke: cost me the Odesk 30$ fee + my bank 25€ fee but then I get inter-banking rates
[12:19:37] ODesk's rate does not seem very competitive when you transfer large amounts
[12:19:43] hashar: thanks for the tip- have to check that out
[12:19:47] (I mean more than a few thousand dollars)
[12:19:52] ask your bank
[12:21:22] the aim is to compare: Odesk: 2$ fee + some rate versus Bank: 30$ ODesk Fee + Bank Fee + better rate
[12:21:54] hashar: yep- the break-even will be quite low
[12:24:14] next step
[12:24:21] I need to move to Germany
[12:28:11] hashar: for currency stability reasons? ;)
[12:28:36] yup and cheap real estate
[12:29:00] my city is facing a real estate bubble much like the rest of France :-(
[12:29:09] that differs a lot here
[12:29:32] mind you, I can not buy the flat I am currently renting !
[12:29:52] which town?
[12:30:05] Nantes in Brittany, west coast
[12:30:12] ah- nice!
[12:30:16] I think that it is the sixth biggest city in France
[12:30:28] not far from Quiberon
[12:30:33] :-D
[12:31:19] was there only once for a sailing event, but have nice memories of it
[12:31:55] Quiberon bay is probably the most beautiful place to sail
[12:32:16] in France at least
[12:32:42] yes, I liked it a lot
[12:33:10] we were in the ecole nationale de voile or something like it
[12:33:51] much nicer than our olympic sailing center in Kiel
[12:43:42] Hi! I am currently looking at some user agent strings in requests to the Wikimedia servers, and I came across the following two that I cannot place, but which seem to be internal:
[12:43:59] Wikimedia OpenSearch to Apple Dictionary bridge
[12:44:14] AppleDictionaryService
[12:44:21] Can anybody tell me what they are?
[12:45:13] *gwicke has no idea about those
[12:45:23] Andre_Engels: the Wikipedia tab from /Applications/Dictionary.app would be my guess.
[13:03:48] au: I guess we'll have to replicate most of the stuff in MW's sanitizer: https://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/includes/Sanitizer.php?view=markup
[13:04:20] plus the added fun of making all this reversible..
[13:05:57] ah- but you already checked that, judging from checkCss ;)
[13:06:00] aye :)
[13:06:19] reversible, as in we need to preserve the (potentially insane) original style in data- attributes?
[13:06:41] that is still a bit open to debate
[13:07:16] I would not mind removing unsafe stuff from the source if the element is part of an edited dom subtree
[13:07:38] agreed. esp. if it'd always be removed by the original .php renderer anyway
[13:08:26] it might add noise to round-trip testing though
[13:08:48] maybe we can sanitize the original source too, and only then compare with the serialization
[13:08:48] *nod*
[13:09:00] (nod to sanitizing both ends)
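A hedged sketch (not from the log) of the kind of style filtering being discussed, loosely following the checkCss convention mentioned above. The real rules in MediaWiki's Sanitizer.php are far more extensive; the regex and return value here are only illustrative.

    // Illustrative checkCss-style filter: reject style attribute values
    // that can smuggle in script, e.g. expression() or javascript: URLs.
    // The full Sanitizer.php implements many more rules than this.
    function checkCss(value) {
        if (/expression\s*\(|javascript\s*:|@import/i.test(value)) {
            return '/* insecure input */';
        }
        return value;
    }
    // checkCss('color: red')                  -> 'color: red'
    // checkCss('width: expression(alert(1))') -> '/* insecure input */'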
[13:11:00] another fortunate difference is that we get attribs in decoded form, so there's no need to call require('jsdom/browser/htmlencoding').HTMLDecode ourselves (i.e. no need to replicate Sanitizer::decodeCharReferences)
[13:11:48] there is also decoding done in the tokenizer
[13:13:30] the longer I work on this the more I get the impression that selective serialization of edited DOM parts is the only possible way to go to minimize dirty diffs
[13:13:47] it is still untested though
[13:14:50] is it mostly because of whitespace, or were there other sources of dirty diffs as well?
[13:15:19] balancing tags also introduces artifacts
[13:15:23] text changes in a paragraph could still force us to serialize the entire paragraph
[13:15:32] so it might not be as local as we'd like
[13:15:49] aye.
[13:17:06] still bad for round-trip testing
[13:17:07] (afk food, bbiab)
[13:17:17] k
[13:17:22] I wonder if we can test roundtrip at the html side
[13:17:43] i.e. compare w->h and w->h->w->h
[13:17:49] instead of comparing w and w->h->w
[13:17:54] hmm- maybe, yes
[13:18:23] definitely something to try
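A small sketch (not from the log) of the HTML-side round-trip check proposed just above, assuming hypothetical parse() (wikitext to HTML) and serialize() (HTML to wikitext) functions standing in for the converter's real entry points.

    // Compare w->h against w->h->w->h instead of w against w->h->w:
    // sanitization and tag balancing then affect both sides equally.
    // parse() and serialize() are hypothetical stand-ins.
    function htmlRoundTripOk(wikitext) {
        var html1 = parse(wikitext);           // w -> h
        var html2 = parse(serialize(html1));   // w -> h -> w -> h
        return html1 === html2;
    }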
[14:07:25] au: I think we actually need to move extension processing before template expansion, to avoid expanding templates in extension contents
[14:07:55] ok. do we anticipate extensions from expanded templates?
[14:09:26] afaik that is not supported in the PHP parser either
[14:09:35] ah. that massively simplifies things then.
[14:09:44] will check though
[14:14:10] extensions are a bit shaky in the tokenizer as well- I tried to keep the tokenizer configuration-independent, but that also means that it cannot change the parse mode when encountering extension tags
[14:14:46] we could treat any non-html tag as a potential extension tag in the tokenizer
[14:15:20] extensions are pre-registered in the parser
[14:15:30] it could be checked when finding a <...>
[14:16:05] would still be nice to keep the tokenizer configuration independent if possible
[14:16:34] but it might as well turn out to be too hard
[14:17:38] the differences between extension tags and non-extension tags concern the precedence between things like nowiki or html comments and the close tag
[14:18:21] switching to comment or nowiki parse mode would miss the end tag
[14:18:35] I think <nowiki> was just like an extension tag for the preprocessor
[14:19:05] it can be implemented that way, yes
[14:21:17] url protocols in auto-linked external links are another potential configuration problem
[14:36:39] extensions can only return html fortunately
[14:38:37] Platonides: a side effect of tokenizing the contents of extension tags is that both tokens and the plain text input can be made available to the extension
[14:39:12] for most that might be wasted effort, but for some it actually simplifies the implementation
[14:40:08] I think there's a bug asking for that
[14:40:25] you need two different types of tags
[14:40:29] one abstract one
[14:40:45] and another for those whose first action will be to call recursiveTagParse()
[14:41:56] the data structure returned to the extension would be different in the tokenizer though
[14:46:43] but in general the idea behind keeping the tokenizer configuration-independent is to avoid repeated wikitext tokenization completely
[15:18:43] I wonder how common non-html / non-extension tags actually are
[15:20:24] Like, non-existent tags?
[15:20:57] yes, htmlish tags that are neither html nor extension tags
[15:21:50] ?
[15:22:21] if those are rare enough in html comments and a few other constructs, we could just terminate those constructs on encountering a closing 'martian'
[15:22:26] vvv: yes
[15:22:39] I thought those just get escaped
[15:22:54] they are, and that would not change
[15:23:45] I am just thinking about a way to make sure potential extension end tags take precedence over other constructs that would hide them
[15:24:09] message key pairs
[17:33:44] it might not be as easy as that, you might have to do some processing on the parameters
[17:33:57] or even a log-action <--> raw-message pair
[17:34:00] but we are only talking about a handful of logs right now
[17:34:48] // move, move_redir
[17:34:48] 'move/*' => 'MoveLogFormatter',
[17:34:49] // delete, restore, revision, event
[17:34:49] 'delete/*' => 'DeleteLogFormatter',
[17:34:49] 'suppress/revision' => 'DeleteLogFormatter',
[17:34:51] 'suppress/event' => 'DeleteLogFormatter',
[17:34:53] 'suppress/delete' => 'DeleteLogFormatter',
[17:34:56] 'patrol/patrol' => 'PatrolLogFormatter',
[17:35:36] that's by my count 10 log types
[17:36:06] so the only question is who does it?
[17:36:39] you know where the entry point in the new log-system for this extension would be, right?
[17:36:48] I know neither the old nor the new log system
[17:37:01] LogEntry::publish
[17:37:16] but it should be fairly simple once pointed out :)
[17:37:27] you need to create two rc objects, and add a hook where you can change the log text
[17:37:57] take that back, no need to create more rc objects, just adding a hook should suffice
[17:39:58] Nikerabbit: if you can write the initial skeleton and check it in, then I think we've got a lot of folks who can probably muddle through the backwards-compat part
[17:40:35] robla: if that takes less than 20 minutes I can do that before i18ndeploy
[17:43:57] Nikerabbit: To complete the bot-internals example: This is the way they do move-action:
[17:43:58] generateRegex("MediaWiki:1movedto2_redir", 2, ref moveredirRegex, false);
[17:44:08] Match mrm = ((Project)Program.prjlist[rce.project]).rmoveredirRegex.Match(rce.comment);
[17:44:17] rce.movedTo = Project.translateNamespace(rce.project, mrm.Groups["item2"].Captures[0].Value);
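For readers who don't know C#, here is a hedged JavaScript rendition (not from the log) of what that bot code does: build a regex from the localized MediaWiki:1movedto2_redir message and capture the move target from an RC comment. The message text used below is illustrative, not necessarily any wiki's exact wording.

    // Turn a MediaWiki message with $1/$2 placeholders into a regex.
    function buildMoveRegex(message) {
        // Escape regex metacharacters in the literal message text...
        var escaped = message.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
        // ...then turn the escaped $1/$2 placeholders into capture groups.
        return new RegExp(escaped.replace('\\$1', '(.+?)').replace('\\$2', '(.+?)'));
    }
    // Illustrative message text; real wikis localize this.
    var re = buildMoveRegex('moved [[$1]] to [[$2]] over a redirect');
    var m = re.exec('moved [[Foo]] to [[Bar]] over a redirect');
    // m[1] === 'Foo', m[2] === 'Bar' (the "item2" group in the C# version)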
[17:44:37] *robla installs mw 1.18 to see what the old code path looked like
[17:45:53] I was planning something like this: http://translatewiki.net/static/llogs.txt < robla, Krinkle
[17:46:05] and you need the messages too
[17:46:05] that's from the raw C# source of one of the bots I could find
[17:46:49] Nikerabbit: Cool, that looks straightforward
[17:47:06] (untested)
[17:48:19] nice....looks good in theory
[17:52:31] https://bugzilla.wikimedia.org/show_bug.cgi?id=34508#c16
[17:53:38] robla: which wikis have 1.19 already, btw?
[17:54:16] https://www.mediawiki.org/wiki/MediaWiki_1.19/Roadmap#Deployment_schedule
[17:54:27] I probably have to merge & deploy to both 1.18 and 1.19 separately, unless you will hate me doing deployments mid migration
[17:54:36] (hewikisource, frwikisource, eowiki, betawikiversity, enwikiquote, enwikibooks) Done; (mediawikiwiki, strategywiki, usabilitywiki, simplewiki, simplewiktionary, metawiki) Done
[17:54:47] yup
[17:56:09] it all depends on what all you're deploying
[17:56:52] robla: small fixes to Narayam/WebFonts and enabling them on a few projects
[17:58:05] Nikerabbit: those two have already gotten a fair amount of production use already, right?
[17:58:56] robla: relatively, we are slowly increasing the number of languages supported
[17:59:07] sounds like it should be fine then
[18:00:27] one thing I'd ask is to make sure you also deploy to beta.wmflabs
[18:00:51] robla: is there some special way to do that?
[18:01:12] petan and Reedy know the details on that
[18:02:01] you don't need to do that before you deploy to whatever 1.18 wikis you do, but just make sure you do that well in advance of when we deploy 1.19 to those wikis
[18:02:50] I can do if they tell me how
[18:03:04] and if you don't get around to that, it's not the end of the world. for 1.20+, it'll be a bigger deal
[18:03:35] in any case I will deploy both 1.18 and 1.19 (otherwise they would go backwards when they shift to 1.19)
[18:04:21] Nikerabbit: an alternative to deploying to betalabs is to just deploy to test2
[18:05:09] *robla doesn't know if Narayam can be meaningfully deployed to test2
[18:05:20] it's prolly not enabled there
[18:06:03] basically, I'm just looking to make sure we aren't trying to debug really basic 1.19 compatibility problems when we do our rollout
[18:07:17] well, both WebFonts and Narayam are enabled in twn, which runs 1.20alpha
[18:07:57] I suppose that's probably close enough :)
[18:08:18] and I'm talking about stuff like https://www.mediawiki.org/wiki/Special:Code/MediaWiki/111945 :)
[18:17:15] robla: You said "I think that's probably going to be the case"
[18:17:38] Krinkle: meaning "no more 1.19 wikis until we get this sorted out"
[18:17:41] robla: was that re: my "suggest that no wikis get 1.19 until this is fixed though"
[18:17:42] ok
[18:17:44] good :)
[18:18:20] I was just looking at Roadmap#Deployment_schedule and saw commons was up for 2morrow, which would break commons upload patrol
[18:23:27] perhaps we have time to fix it tomorrow morning?
[18:32:53] could LogPage.php from 1.18 just be copied and adapted into the new LegacyLogging extension, or is it easier to start from scratch?
[18:35:54] robla: easier to just copy relevant lines of code if any
[18:43:28] robla: which bugs do you think we need to block on before doing any more deployments? just https://bugzilla.wikimedia.org/show_bug.cgi?id=34508#c16 ?
[18:44:46] *robla looks at our list
[18:47:23] saibo has put a timer on COM:VP for 1.19 deployment
[18:47:40] I really like how he has prepped the page to collect comments and stuff
[18:50:05] hexmode: I'm not sure how to organize this, but I think most of the high/highest ones are commons blockers
[18:50:37] k, let me see if the list I can organize makes sense to you
[18:51:37] we should probably just update http://etherpad.wikimedia.org/119triage
[18:51:59] hexmode: ^
[18:52:21] robla: makes a ton of sense
[18:59:11] hexmode: feel free to delete and start fresh on that page. just keep the query
[18:59:32] robla: k
[19:12:20] #34503 -- I thought they didn't use that on commons
[19:13:27] !b 34503
[19:13:27] --elephant-- https://bugzilla.wikimedia.org/show_bug.cgi?id=34503
[19:14:35] hexmode: I don't know for sure
[19:14:54] do you know for sure that it's not used on commons?
[19:15:02] I was trying to ask saibo... let me see if someone else in -commons can help
[19:15:13] this is flaggedrevs, right?
[19:16:10] I don't think that has anything to do with FlaggedRevs
[19:16:41] also, why is 34510 not a commons blocker?
[19:16:45] !b 34510
[19:16:45] --elephant-- https://bugzilla.wikimedia.org/show_bug.cgi?id=34510
[19:18:36] robla: I was thinking most stuff on commons is file pages/uploading, not editing, but I'm not gonna push it :)
[19:22:01] ugh... this is not pretty, but I'm glad we're doing this now instead of after commons
[19:23:31] bbiaf
[20:29:05] robla: saibo had a look at the etherpad, just fyi. Are we ready to adjust the deployment schedule?
[20:30:57] hexmode: we still have time to fix the problems
[20:31:14] ...but we will adjust if we can't get through the list
[20:32:29] robla: should I try to find people to fix individual bugs? Seems like this is a time to ask individual devs to step in
[20:33:56] so....I think the mess of Javascript stuff is going to be some combination of RoanKattouw_away and Krinkle. Tim-away has done some recent spelunking as well
[20:34:14] the logging and permissions stuff, yes, it's time to find individuals
[20:34:17] Krinkle: you're here?
[20:34:20] yes
[20:34:26] k, will do
[20:34:37] logging and permissions stuff
[20:37:25] https://bugzilla.wikimedia.org/show_bug.cgi?id=34538 <- interesting work by Tim
[20:40:30] Tim-away: Yeah, some modules assumed that position:bottom means after (or far enough into) document ready by default
[20:40:52] I've never liked the names 'top' and 'bottom' since they describe implementation rather than purpose
[20:41:15] which is no longer correct when asyncing from head
[20:42:43] that's some of the stuff we've gotta fix before deploying.
[20:43:12] (any further, that is)
[20:43:48] robla: I've moved r34538 from 1.19wmf1 to 1.19.0 though, does not affect deployment afaik
[20:43:57] whoa!
[20:44:08] we don't have experimental enabled for the bottom queue, right?
[20:44:18] yeah, we do
[20:44:22] what?
[20:44:39] oh
[20:44:43] okay, that's new to me
[20:44:55] RoanKattouw_away turned it on for logged in users in an attempt to fix some other issues we were having
[20:45:04] then why is it a configurable false by default if WMF is going to use it in 1.19 already (I thought it was for 1.20)
[20:45:32] oh, I see. This way the race conditions are more likely to happen.
[20:45:36] to find more bad code
[20:45:41] nice
[20:45:59] we're not doing it to try to find bad code
[20:46:14] what does it fix?
[20:46:18] Roan was grasping at straws to fix problems we were seeing...
[20:46:33] *robla looks it up
[20:47:23] the very last thing we're trying to do is root out bad user scripts, any more than we're trying to root out bad IRC log parsers ;-)
[20:48:11] well, this will as a side effect do that.
[20:48:18] however user scripts are unlikely to be affected
[20:48:44] since user scripts are mostly written from before ResourceLoader, and before ResourceLoader everything was loaded from the top and everything required a document-ready wrapper
[20:49:11] no race conditions. before 1.17 no user script could access the dom without a document-ready wrapper
[20:49:31] but bugs like 34538 will be found more easily this way
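For context, a sketch (not from the log) of the pre-ResourceLoader pattern being described: scripts were loaded in the page head, so anything touching the DOM had to wait for document ready. The selector used below is just an example.

    // Classic pre-1.17 user script shape: all DOM work deferred until
    // document ready, so top-loaded scripts never raced the parser.
    jQuery(document).ready(function () {
        // Safe to touch the DOM here; '#p-cactions ul' is illustrative.
        jQuery('#p-cactions ul').append('<li>my tab</li>');
    });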
[20:52:50] *hexmode digs into RecentChanges to see who he can chase down
[20:58:40] !branchpoints
[20:58:40] --elephant-- I don't know anything about "branchpoints".
[20:58:43] !branchpoint
[20:58:43] --elephant-- I don't know anything about "branchpoint".
[21:07:08] Krinkle: I think I've traced back to the original decision. Roan was working on the original incarnation of https://bugzilla.wikimedia.org/show_bug.cgi?id=34409
[21:07:54] robla: ah, yeah. that makes sense
[21:08:05] he then got looking at it, and suggested in this channel to turn it on http://prototype.wikimedia.org/logs/%23wikimedia-dev/20120215.txt
[21:08:06] although it does come back to an earlier point: dependencies
[21:08:11] 'user.options' is a module
[21:08:17] modules using it should declare it as a dependency
[21:08:34] at 23:18:38 is when he suggested it
[21:08:44] with experimentalLoading=true, they will load before the first module so it's practically impossible for it not to be loaded
[21:08:56] but that's more a side effect
[21:09:34] he only fully explained it to Trevor
[21:09:46] !r 111695
[21:09:46] --elephant-- http://www.mediawiki.org/wiki/Special:Code/MediaWiki/111695
[21:09:48] that looks odd
[21:09:54] interesting but odd
[21:10:47] RoanKattouw_away: are you around?
[21:11:54] Yeah
[21:11:59] the end result is good, great!
[21:12:06] well....
[21:12:07] RoanKattouw: Perhaps just remove it from mediawiki.js then?
[21:12:15] Ah, remove what?
[21:12:15] makes no sense to leave that skeleton to me
[21:12:20] Oh
[21:12:21] Well
[21:12:27] It's needed for user.tokens and user.options
[21:12:36] yes, so those should load after mw.user, right?
[21:12:44] they can be loaded by default for sure
[21:12:50] (and are now with experimental=true)
[21:13:10] but now mediawiki.user.js depends on mw.user.options
[21:13:11] No, those load really early
[21:13:16] Krinkle: we didn't see an improvement on the watchlist bug until we got 111695 deployed correctly
[21:13:24] *robla looks up watchlist bug number
[21:13:25] mw.user.{options,tokens} load before almost anything else
[21:13:42] !b 34469
[21:13:42] --elephant-- https://bugzilla.wikimedia.org/show_bug.cgi?id=34469
[21:13:53] robla: the watchlist bug will be fixed if experimental=true, even without r111695 I think
[21:14:14] RoanKattouw: Hm..
[21:14:15] I really doubt it, since that's the state things were in all day Thurs/Fri
[21:15:13] I believe you, but it doesn't make sense
[21:15:40] RoanKattouw: If user is loaded by default and options/tokens on top before .load(), how can it not be loaded when e.g. a watchlist uses it?
[21:16:26] That's exactly why that bug is fixed now
[21:16:34] Because I made options&tokens load on top and NOT depend on mw.user
[21:16:59] that's what you did earlier or with r111695?
[21:17:09] earlier=experimental async loading
[21:17:27] experimental async loading came before 111695 was properly deployed
[21:17:42] Yes
[21:17:42] But I deployed it incorrectly at first
[21:18:32] so experimental loading puts the user.options embed before .user is loaded
[21:18:40] yeah, that's a problem.
[21:18:50] a bunch of us spent the better part of Friday afternoon beating our heads against the wall trying to isolate 34469
[21:19:20] *^demon mails some ibuprofen to the office
[21:19:21] ...before realizing that r111695 hadn't made it all of the way out
[21:19:35] ^demon: much appreciated
[21:20:34] RoanKattouw: So a few weeks ago when we just implemented experimentalAsyncLoading, user.options was failing all over?
[21:20:49] it would have to have spit out user.options.get undefined errors
[21:20:54] No, it's not related
[21:21:12] I thought I'd fixed user.options but then people were using it in un-RL-ified Gadgets
[21:21:20] well, async loading moves that queue to the top.
[21:21:23] oh, I see.
[21:21:39] So I got sick of that kind of crap and just made it top-load with no dependencies so it would always be available to everything
[21:21:50] yeah
[21:22:09] both when the dependency on user.options is missing as well as when the dependency on mw.user is missing
[21:22:41] aye
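The dependency point made above ('user.options' is a module; modules using it should declare it as a dependency) in code form: a minimal sketch using the standard ResourceLoader client API. The option name queried is just an example.

    // Declare the dependency instead of assuming load order: the callback
    // only runs once the 'user.options' module has actually loaded.
    mw.loader.using('user.options', function () {
        var skin = mw.user.options.get('skin'); // example option name
        console.log('skin preference:', skin);
    });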
[21:21:39] So I got sick of that kind of crap and just made it top-load with no dependencies so it would always be available to everything [21:21:50] yeah [21:22:09] both when the dependency on user.options is missing as well as when dependency on mw.user is missing [21:22:41] ay [21:22:43] e [21:26:02] hexmode: it is quieter there [21:26:05] RoanKattouw: have you seen https://bugzilla.wikimedia.org/show_bug.cgi?id=34538 yet? [21:26:19] hexmode: what do you want to test? The irc.wikimedia.org channels ? Is the fix applied? [21:26:41] hashar: it isn't applied, I'd like to test before applying [21:26:46] Hah [21:26:49] hashar: ideas? [21:26:53] I thought I'd fixed something similar recently [21:27:21] I *did* [21:27:33] hexmode: where is the fix ? :-D [21:28:31] hashar: http://pastebin.com/Vnv1jgTR [21:28:33] there is a fix written for the irc.wikimedia.org bug? [21:28:43] Krinkle: one of them [21:28:48] the easier one [21:29:00] bug #? [21:29:18] https://bugzilla.wikimedia.org/show_bug.cgi?id=34495 [21:29:21] mediawiki-cvs: 2,290 unread; wikibugs: 12,000 unread [21:29:23] wee [21:29:49] timotijhof@gmail: 0 unread; ttijhof@wmf: 1 unread; krinklemail@gmail: 102 unread; [21:30:34] hashar: did you get that http://pastebin.com/Vnv1jgTR ? [21:30:48] hexmode: yup [21:31:05] k [21:31:09] hexmode: looks like that is fixing bug 34495 [21:31:24] hashar: right [21:31:37] is there a way to test, though? [21:31:49] I have NO idea how the IRC stuff works [21:32:11] so, we have an area to write tests for :) [21:32:16] if the PatrolLog.php write the text to a log file and then we have a daemon reading from the file [21:32:17] then [21:32:28] hashar: I'll commit now, then [21:32:36] we could push the change to 1.19wmf1, svn update the cluster without syncing it [21:32:38] It just writes to a UDP socket, you could use netcat to listen at the other end or somethig [21:32:54] thus test.wikipedia.org will have the change, and we can test patrolling there [21:33:27] does test.wikipedia not go to irc.wikimedia.org main stream? [21:33:39] irc://irc.wikimedia.org/#test.wikipedia [21:33:56] RoanKattouw: I'll try netcat first [21:34:00] just to make sure [21:35:06] Looks like: [21:35:06] [[Talk:Main Page]] http://test.wikipedia.org/w/index.php?diff=125808&oldid=117998 * Hashar * (+19) [21:35:17] which is https://test.wikipedia.org/w/index.php?title=Talk:Main_Page&diff=125808&oldid=117998 [21:35:48] hexmode: if the change is only on fenari we should be fine :-D [21:35:58] of course that needs some patrol rights [21:36:39] hashar: committed ... now to test [21:36:41] :) [21:39:21] back [21:39:56] is there a simple wiki irc room? [21:40:05] *hexmode tries #wikimedia-simple [21:40:16] #simple.wikipedia [21:40:23] (which should be #en-x-simple.wikipedia.org [21:40:31] we have a bug about renaming simple to en-x-simple :D [21:42:08] hashar: that will go automatically [21:42:21] if the rename happens in mw, the irc reporting will also go there [21:42:29] #$lang.$site [21:43:26] great! [21:43:33] not going to happen anytime soon though [21:44:14] is anyone looking at https://bugzilla.wikimedia.org/show_bug.cgi?id=34503 yet? [21:44:28] I'm about to see if I can find someone to fix it [21:47:57] sounds like an important issue