[00:00:09] here Europe yes
[00:00:10] FORK THE PROJECT!!!1
[00:01:04] Yuvi|NoPower: does that mean battery? do you have a generator?
[00:01:12] battery, jeremyb
[00:01:15] battery + 3G
[00:01:23] ^d: i repro'd
[00:02:06] <^d> Oh?
[00:03:43] only got it to happen that once
[00:03:52] but i didn't script it or anything
[00:06:42] <^d> Wonder what apache you hit.
[00:08:46] gah, he's gone
[00:08:49] was mw1022
[00:09:27] now mw1164
[00:10:31] I had mw1172
[00:34:47] so, this is a problem. maybe i'll reopen the bug
[00:34:50] * jeremyb digs it up
[00:34:57] manybubbles: are you there?
[00:35:15] I am indeed!
[00:35:44] let me read
[00:35:49] manybubbles: see the last 50 mins in here
[00:38:17] manybubbles: https://bugzilla.wikimedia.org/42423
[00:38:27] just caught up
[00:40:05] btw, how does one determine if a given wiki is cirrus or not these days?
[00:42:35] besides asking [greg-g, chad, nik] :)
[00:42:40] checking
[00:42:53] the answer is test2wiki and mediawiki.org
[00:43:05] mediawiki.org only has it as an option - it isn't the default
[00:43:13] right, but is there a dblist or localsettings or what?
[00:43:13] we're moving slower than I'd like but we're moving....
[00:43:30] s/local/initialise/
[00:45:13] jeremyb: it is InitialiseSettings.php - wmgUseCirrus
[00:45:37] ah, cool
[00:45:39] jeremyb: and wmgUseCirrusAsAlternative
[00:46:02] and beta's across the board cirrus i guess
[00:48:33] jeremyb: master branch, too
[00:53:18] brb
[01:00:29] jeremyb: So I'm not really sure what to say about it.
[01:01:01] jeremyb: i don't see any kind of spike in the statistics but I'm also able to reproduce from time to time
[01:02:16] manybubbles: where are the stats?
[01:02:37] jeremyb: commons is served out of these two machines: http://ganglia.wikimedia.org/latest/?c=Search%20eqiad&h=search1019.eqiad.wmnet&m=network_report&r=2hr&s=by%20name&hc=4&mc=2 and http://ganglia.wikimedia.org/latest/?c=Search%20eqiad&h=search1020.eqiad.wmnet&m=network_report&r=2hr&s=by%20name&hc=4&mc=2
[01:02:42] manybubbles: there's no way for end users to know which search backend the timeout came from?
[01:02:56] in order to detect or rule out a pattern
[01:03:20] jeremyb: not really. I had to dig through mediawiki-config and then through puppet. and the requests are being routed through lvs as well.
[01:04:00] no, i mean to know whether it was 1019 or 1020 for a given timeout
[01:05:53] i wonder if pybal does any logging for when a host is failing/passing checks
[01:08:37] jeremyb: no way I know of
[01:14:14] jeremyb: the only errors I see on there seem to be related to it getting requests for stuff it doesn't have - not commonswiki
[01:15:44] or not: 2013-09-04 00:02:34,673 [Thread-8] WARN org.wikimedia.lsearch.frontend.HttpMonitor - Thread[Thread-13472828,5,main] is waiting for 12736 ms on /search/commonswiki/Upload%20size?namespaces=4&offset=0&limit=20&version=2.1&iwlimit=10&searchall=0
[01:16:05] woot :)
[01:18:01] jeremyb: well, it is something.
[01:18:08] no stack
[01:19:02] manybubbles: what log was that? fluorine?
[01:19:26] search1019:/a/search/log/log
[01:19:32] I'm not sure about fluorine
[01:20:12] jeremyb: stupid permissions. I'm not able to access fluorine. I've got an RT ticket open on that....
[01:20:49] manybubbles: i know. remember restricted to mortals :)
[01:21:29] jeremyb: ahk - it looks like we implement our own http server on top of sockets here. because why not. I hadn't quite realized
[01:21:49] hah!
[01:25:23] jeremyb: it looks like certain requests like to time out
[01:29:36] jeremyb: so you can learn something from this: https://gist.github.com/nik9000/f9281d4c67052faca89c
[01:30:05] jeremyb: it really _is_ timing out. I'm not sure why.
[01:30:32] also, I'm not sure if those numbers (in seconds) are a real measure
[01:30:44] because the count may not start when the request is received
[01:30:59] can i have a couple of complete sample lines so i know what i'm looking at?
[01:31:08] (i.e. before they are cut)
[01:34:30] jeremyb: sure!
[01:35:09] https://gist.github.com/nik9000/96ff32e49f102f58f253
[01:35:41] jeremyb: so there isn't a queue - if more than the allowed number of threads are hot the request is thrown on the floor
[01:36:50] oh, haha, that's a way to "round" :)
[01:37:20] does cirrus have the same problem?
[01:37:26] jeremyb: simplest rounding ever
[01:38:06] jeremyb: not really. Elasticsearch runs in netty which isn't _the_ standard java http server but it isn't uncommon
[01:38:25] jeremyb: and no, it doesn't throw requests on the floor. btw, I can't find any instance of that in the logs
[01:47:28] manybubbles: you think i should reopen that bug?
[01:48:13] jeremyb: I think it might be worth filing a new one about timeouts
[01:48:25] jeremyb: I'm pretty sure these really are timeouts
[01:48:54] not crashing or whatever
[01:49:01] and I'm not sure we'll actually fix them
[01:49:35] yeah, but at least it's something to point people at. maybe they'll volunteer to be cirrus guinea pigs :P
[01:49:45] yeah!
[01:50:30] we could also count the timeouts and maybe even raise the timeout value
[01:50:48] ok, I think I might go up to bed now
[01:50:54] night
[02:45:57] is there someone here who can help me with a javascript issue in a personal page on a wiki for me?
[04:33:47] Elsie: we already have a more specific report about https://bugzilla.wikimedia.org/show_bug.cgi?id=3507 iirc, for user:m with that special m
[04:46:59] Nemo_bis: I think he got himself renamed.
[04:47:05] But nobody else has.
[04:47:22] And I'm not sure cleanupUsernames.php catches the case I listed.
[04:47:23] Dunno.
[04:48:24] Elsie: yes but there is a difference between old invalid usernames and mistakes in capitalisation which MW still makes
[04:51:06] lol I love abcTajpu: ∰
[05:11:31] but it's not http://www.decodeunicode.org/u+0271
[05:13:41] uh yes it is
[09:35:37] ori-l: http://www.hastac.org/blogs/wadewitz/2013/09/03/struggle-over-gender-wikipedia-case-chelsea-manning contains a link https://en.wikipedia.org/w/index.php?title=Special:UserLogin&type=signup&campaign=loginCTA ; how much can campaigns be skewed by external links from places other than the designed ones?
[10:41:13] anybody around who can help me with creating a bot please? i'm from Wikivoyage
[14:02:23] https://tr.wikipedia.org/wiki/%C3%96zel:EngelListesi?wpTarget=&wpOptions%5B%5D=userblocks&wpOptions%5B%5D=rangeblocks&limit=500
[14:02:32] is it worthwhile to ask for a sortable option for this list?
[14:02:46] I'd like to sort the blocks by the date of expiration
[14:03:03] I am particularly interested in blocks that exceed a decade
[14:04:12] or hide indef blocks while at it
[15:06:03] apergos: hi
[15:06:07] hey
[15:07:05] hey parent5446
[15:07:08] right on time
[15:08:07] Yep, hey.
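For reference, the lsearchd behaviour discussed above around 01:35 — no request queue, requests dropped when every worker thread is busy, and wait times logged in milliseconds then "rounded" by integer division — could look roughly like the sketch below. lsearchd itself is Java; this is only an illustrative C++ rendering, and the NoQueuePool name and its structure are invented for the example.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// A fixed pool size and no queue: if every worker is busy, the request
// is simply refused ("thrown on the floor") instead of being queued.
// The pool must outlive its detached workers.
class NoQueuePool {
public:
    explicit NoQueuePool(int maxThreads) : maxThreads_(maxThreads) {}

    // Returns false when the request was dropped.
    bool tryDispatch(std::function<void()> request) {
        if (busy_.fetch_add(1) >= maxThreads_) {
            busy_.fetch_sub(1);
            return false;
        }
        std::thread([this, request]() {
            auto start = std::chrono::steady_clock::now();
            request();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - start).count();
            // "simplest rounding ever": integer division truncates
            // 12736 ms to 12 s in the log line.
            std::cerr << "request took " << ms / 1000 << " s\n";
            busy_.fetch_sub(1);
        }).detach();
        return true;
    }

private:
    const int maxThreads_;
    std::atomic<int> busy_{0};
};

int main() {
    NoQueuePool pool(2);
    for (int i = 0; i < 4; ++i) {
        bool accepted = pool.tryDispatch([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        });
        std::cout << "request " << i << (accepted ? " accepted\n" : " dropped\n");
    }
    std::this_thread::sleep_for(std::chrono::seconds(1)); // let workers finish
}
```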
[15:08:53] so, the size of LZMA using groups from yesterday (2.3 MB) was wrong, it's actually 3.3 MB
[15:09:07] I see the correction
[15:09:24] not quite as impressive
[15:09:50] I want to see this on a larger dataset though
[15:09:58] not en wp but something bigger
[15:10:25] yeah, it turns out i had a bug there that meant i didn't actually save any of the texts at all, it was all just metadata and indexes
[15:10:31] :-D
[15:10:34] wooopsie
[15:10:45] Lol that's convenient for reducing size.
[15:10:59] yeah :-)
[15:11:07] http://dumps.wikimedia.org/trwiki/20130831/ trwiki might be a nice test (or something around that size)
[15:11:09] Nemo_bis: it's a question for spagewmf really
[15:11:29] not for daily work but for looking at compression numbers
[15:11:30] ok, i will try that
[15:12:43] assuming more data won't change the results, then zdelta and LZMA with groups are comparable, with LZMA being slightly better
[15:13:26] but with LZMA, the problem is updating
[15:13:55] one option is recompressing the last group for each page (when adding a new revision), but that recompression could be slow
[15:14:30] yep
[15:14:31] another option is to always use a new group, but that would probably mean lots of small groups
[15:14:41] which means size increase
[15:14:48] yeah
[15:15:03] running some numbers would be useful
[15:15:08] otherwise it's just guesses
[15:15:42] It seems like zdelta is becoming a better and better option.
[15:15:52] Hopefully the tests can make this more evident or not
[15:16:09] the tests will make the parameters clear, for sure
[15:16:27] right, this would also depend on how often the dump would be done; running the dump daily would mean very small groups, every 14 days would be much better
[15:16:28] how big the tradeoff is between size and speed, that is
[15:16:39] well we don't do every 14 days now
[15:16:47] we shoot for every 8-9 days for small wikis
[15:17:01] I'd like every 10 for the 'big' wikis (about to move some stuff around to try to make that happen
[15:17:02] )
[15:17:22] and then once a month for en unless I can convince folks with money that twice a month is good
[15:17:26] and we can buy some more storage
[15:17:36] twice a month for en, seems like luxury...
[15:17:52] anyways that's the existing dumps, if it turned out we could go more often
[15:18:02] then I would say, why not? they get done on a rolling basis
[15:18:19] so if it turned out that with incrementals we could do every 3 days, I'd go ahead and do every 3
[15:18:35] I'm skeptical though because we have the xml conversions to run in addition
[15:18:39] need numbers...
[15:19:11] can you imagine if we could do dailies for everything?
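A rough sketch of the two LZMA update strategies discussed above (recompress the last group of a page versus always start a new group). This is not the actual incremental-dumps code: the group-size threshold and the lzmaCompress/lzmaDecompress placeholders are assumptions standing in for whatever the project really uses.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Placeholders standing in for real LZMA calls (liblzma/xz in practice):
// here they just join and split on a separator so the sketch compiles.
std::string lzmaCompress(const std::vector<std::string>& texts) {
    std::string blob;
    for (const auto& t : texts) { blob += t; blob += '\x1f'; }
    return blob;
}
std::vector<std::string> lzmaDecompress(const std::string& blob) {
    std::vector<std::string> texts;
    std::string cur;
    for (char c : blob) {
        if (c == '\x1f') { texts.push_back(cur); cur.clear(); }
        else cur += c;
    }
    return texts;
}

struct Group {
    std::string compressed;        // compressed concatenation of revisions
    std::size_t revisionCount = 0;
};

struct PageTexts {
    std::vector<Group> groups;
};

// Strategy A: recompress the last group. Better ratio (revisions of one
// page compress well together), but every update pays decompression plus
// recompression of the whole group.
void addRevisionRecompress(PageTexts& page, const std::string& text,
                           std::size_t maxGroupSize) {
    if (page.groups.empty() ||
        page.groups.back().compressed.size() >= maxGroupSize)
        page.groups.emplace_back();
    Group& g = page.groups.back();
    std::vector<std::string> texts = g.revisionCount
        ? lzmaDecompress(g.compressed) : std::vector<std::string>{};
    texts.push_back(text);
    g.compressed = lzmaCompress(texts);
    g.revisionCount = texts.size();
}

// Strategy B: never touch old groups; each update starts a new one.
// Cheap to apply, but frequent dump runs mean many tiny groups and a
// worse overall ratio.
void addRevisionNewGroup(PageTexts& page, const std::string& text) {
    Group g;
    g.compressed = lzmaCompress({text});
    g.revisionCount = 1;
    page.groups.push_back(g);
}

int main() {
    PageTexts page;
    addRevisionRecompress(page, "revision 1 text", 64 * 1024);
    addRevisionRecompress(page, "revision 2 text", 64 * 1024);
    addRevisionNewGroup(page, "revision 3 text");
    return page.groups.size() == 2 ? 0 : 1;
}
```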
[15:19:17] dang that would be sweeeeet
[15:19:33] right, but like i'm saying, with LZMA and no recompression, running more often will most likely mean bigger size
[15:19:48] yes, it will
[15:20:15] but I can't really plan frequency based around compression efficiency etc
[15:20:34] right
[15:20:58] it's just one more thing to consider when choosing which compression to use
[15:21:35] yep
[15:22:13] i have also written some code to break down the dump based on what takes how much space (i have added it to the page)
[15:22:31] yeah, that's very useful
[15:22:36] and that makes it clear that we should also focus on compressing metadata and indexes
[15:22:44] I'm particularly interested how those ratios will come out for a bigger wiki
[15:23:10] but that's orthogonal to compressing text, so i will continue working on text first
[15:23:23] right
[15:24:55] in a little less than three weeks is code freeze time
[15:25:01] so we should take that into account
[15:25:30] yeah
[15:26:56] think you can get all compression testing done by the end of the week so that next week you know what algorithms to use and what groupings?
[15:27:19] yeah, i think so
[15:27:36] ok, that is a good timeframe I think
[15:28:49] but after i know what to use, i have to make it work completely (mostly with updates), which it doesn't now
[15:29:09] right
[15:29:32] so, my plan now is to test: 1. current dumps 2. larger wiki 3. updates with LZMA
[15:29:51] ok
[15:31:48] what else do you have on your mind as we get near the endgame?
[15:32:08] A big red button that says "Do Not Press"
[15:32:20] +1
[15:32:46] I want one that says 'Press. Right Here. What Are You Waiting For?'
[15:32:54] it would be rather large...
[15:33:06] you were talking about some code review before, i assume that would be after the end of GSoC? (assuming there is someone to do it)
[15:33:15] speaking of which
[15:33:36] Reedy, who has c++ chops and... er... some free time to review a chunk o code?
[15:34:19] Speaking of which, GoingNative (the C++ conference) starts today. Might be interesting to watch the keynote.
[15:34:51] yay
[15:35:02] apergos, I have
[15:35:02] oops wrong chan
[15:35:33] yay retracted
[15:36:17] Hey, there is no retracting yays in this channel. It's against WMF policy.
[15:36:22] MaxSem: would you want to have a look at the incremental dumps stuff svick has been working on? Too early for specific nitpicks but if you have general 'we do it this way' or 'this is more efficient' sort of comments at this stage, that would be awesome
[15:36:37] sure
[15:36:53] sometime over the next week or two if that works for you
[15:36:59] I ain't practiced it for 1.5 years though:)
[15:36:59] Reedy: check pm
[15:37:20] like falling off^Wriding a bicycle, you never forget.. right?
[15:38:02] apergos, next week is all-staff, week after that will be busy too so after I return from SF or before that
[15:38:17] ok, that's fine
[15:38:33] you can look now if you want (bear in mind the code will be changing)
[15:38:44] but if that's too soon it can wait til after gsoc
[15:39:09] MaxSem: i'm trying to use C++11, not sure how well you know that, since you haven't used C++ recently
[15:39:23] just let me know so I can keep track of our timetable
[15:39:35] I used it when it was 0x;)
[15:39:44] hipster
[15:39:54] ok
[15:40:01] so where's the code?
[15:40:03] ori-l: thanks, but aren't "campaigns" and other URL-based tracking events a widespread eventlogging technique?
[15:40:25] operations/dumps/incremental in gerrit
[15:40:35] branch gsoc
[15:40:40] Nemo_bis: they use eventlogging, yes, but add a layer of abstraction on top
[15:41:25] the code is relatively comment-free so that's a thing...
[15:41:45] That just makes it even more of a challenge.
[15:42:12] yeah, i thought about adding comments near the end, not sure if there will be time for that now
[15:42:14] ori-l: so there may be something smarter than "consider whoever registers from an URL with this parameter as coming from feature X"?
[15:42:16] (you should save a couple days for that at/near the end)
[15:42:31] well if not then I know what you will be doing first after gsoc :-D
[15:42:35] Nemo_bis: I doubt it, but maybe
[15:43:13] right
[15:43:17] holy crap, gitblit is useless crap if it can't show you only master tree
[15:43:29] ori-l: but in any case it would not be eventlogging usage's business?
[15:43:31] use github
[15:43:34] oh now
[15:43:45] and yes I am in fact an open source bigot
[15:44:23] Nemo_bis: i just mean that you'd get a better answer; sorry, i'm not usually up at this hour and a bit groggy
[15:44:38] svick: leaving stuff for after is fine, just don't leave them for other folks to do
[15:44:58] grrr
[15:45:05] i.e. don't take a permanent vacation after
[15:45:15] * MaxSem bites svick for using "smart" pointers
[15:45:15] ori-l: sure, you've been more than helpful enough; sorry but I'm not able to remember your bio-TZ ^^
[15:45:36] i'm not usually able to remember it either, so it's ok :)
[15:46:03] apergos: ok, i know writing documentation for someone else's code is nigh impossible
[15:46:16] well it can be done but it's a drag
[15:46:30] and it's also good for you to get in the habit (for any coder)
[15:47:27] MaxSem: why? i try to use them only where necessary (but that's a lot of places), and i think it's much better than having to remember to delete things
[15:48:10] apergos: right
[15:48:33] * apergos cannot weigh in; they are used to malloc/free
[15:49:14] You're using the C++11 smart pointers right? Not auto_ptr.
[15:49:17] C++ makes it harder to shoot yourself in the foot, but when you do you blow your entire leg away. in such a case, smart pointers are gatling guns autoaiming at your feet
[15:49:46] parent5446: yes
[15:49:51] call me an old fart;)
[15:50:06] I can't for twould be a lie :-D
[15:50:17] I'll have to do some code review myself at some point. Need to see more of what's going down internally.
[15:53:29] MaxSem: I immediately see what you mean XD
[15:54:26] At this point, though, functionality is a bit more important than some trivial code quality issues. Sure smart pointers can be dangerous, but for the time being things work out.
[15:54:41] MaxSem: well, i didn't notice that so far and smart pointers seem like a really good idea to me; but this is my first big project in C++, so i certainly may not have enough experience to judge it properly
[15:56:02] I support using smart pointers, but only in certain places. For example, in main.cpp, you have the createWriter() function, which is basically a constructor that uses the parameters from the CLI. For a function like that, it'd be better to return by value rather than allocating heap memory and making a pointer.
[15:56:39] But like I said, don't worry about that for now. Testing and compression and all that are more important atm.
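For context on the smart-pointer exchange above: the C++11 alternative to manual new/delete being defended here is std::unique_ptr. A minimal illustration, not code from the dumps repo — Revision and parseRevision are made up for the example:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Revision {
    std::string text;
};

// With std::unique_ptr, ownership is explicit and the delete happens
// automatically when the owner goes out of scope; no early return or
// exception path can forget to free the Revision.
std::unique_ptr<Revision> parseRevision(const std::string& xml) {
    std::unique_ptr<Revision> rev(new Revision()); // std::make_unique is C++14
    rev->text = xml;   // stand-in for real parsing
    return rev;        // moved out; no copy, no manual delete anywhere
}

int main() {
    std::vector<std::unique_ptr<Revision>> revisions;
    revisions.push_back(parseRevision("<revision>...</revision>"));
    std::cout << revisions.size() << " revision(s) parsed\n";
    // Every Revision is freed automatically when the vector is destroyed.
}
```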
[15:57:46] parent5446: i can't return by value there, because i have several classes that inherit from IDumpWriter and i don't know which one will be returned
[15:58:16] ...so you can just return an IDumpWriter object.
[15:58:56] It's basically a factory function.
[15:59:51] i can't, IDumpWriter is an abstract class and you can't return that by value
[16:00:32] Ah, shows how much I know. Didn't realize IDumpWriter was abstract (although in hindsight it should have been obvious).
[16:00:47] I have another meeting in about 0 minutes
[16:00:52] I will be half following along here
[16:01:12] and even if it wasn't, inheritance doesn't work properly when you return by value
[16:01:14] I'll have to give better commentary once I've examined more of the codebase.
[16:01:41] ahhh, 5 minutes of talking about pointers in C++ and I already see gibs flying around:P
[16:02:17] parent5446: ok, you could consider waiting until i write the comments (which won't be right away), but that's up to you
[16:02:36] Mhm I'll probably do that.
[16:03:00] OK well I don't have anything else.
[16:03:06] apergos?
[16:03:13] nope
[16:03:20] me neither
[16:03:36] svick, I'll wait for comments then:)
[16:03:50] MaxSem: ok
[16:04:12] see you tomorrow
[16:04:14] See you both tomorrow
[16:04:28] and a short big-picture explanation of what is used how would be great;)
[16:05:15] harrr harr harrr
[16:05:47] does everyone see what's not true in std::cout << "reading dump: idumps r[ead] dump.id output.xml\n"; ?:P
[16:06:34] Didn't look closely, but if I were to guess, the full "read" doesn't work?
[16:07:09] lol
[16:07:32] Anyway, I'm off. Got to actually get ready for the day.
[16:07:47] true iostreams programs don't use \n:P
[16:07:55] i think full "read" should work
[16:08:48] MaxSem: std::endl is much more verbose and \n works on both platforms (Windows and Linux)
[16:14:22] but it's not truistic!
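The point the two settle on above — a factory has to hand back a pointer because IDumpWriter is abstract, and returning a derived writer by value through the base type would slice it — in sketch form. IDumpWriter and createWriter() are names taken from the chat; the member functions and the concrete writer classes here are invented for illustration:

```cpp
#include <iostream>
#include <memory>
#include <string>

// Abstract interface: cannot be returned by value at all.
class IDumpWriter {
public:
    virtual ~IDumpWriter() = default;
    virtual void writeRevision(const std::string& text) = 0;
};

// Hypothetical concrete writers; the real repo has its own set.
class XmlDumpWriter : public IDumpWriter {
public:
    void writeRevision(const std::string& text) override {
        std::cout << "<text>" << text << "</text>\n"; // '\n', not std::endl:
                                                      // same newline either
                                                      // platform, no flush
    }
};

class BinaryDumpWriter : public IDumpWriter {
public:
    void writeRevision(const std::string& text) override {
        std::cout.write(text.data(), text.size());
    }
};

// Factory as discussed: which concrete type is chosen depends on CLI
// options, so the return type has to be the interface. Returning an
// XmlDumpWriter by value as an IDumpWriter would slice off the derived
// part (and is impossible anyway while IDumpWriter is abstract), hence
// the unique_ptr.
std::unique_ptr<IDumpWriter> createWriter(const std::string& format) {
    if (format == "xml")
        return std::unique_ptr<IDumpWriter>(new XmlDumpWriter());
    return std::unique_ptr<IDumpWriter>(new BinaryDumpWriter());
}

int main() {
    auto writer = createWriter("xml");
    writer->writeRevision("hello");
}
```

The '\n' in the output line is also the behaviour defended at 16:08: text-mode streams translate it to the platform newline, while std::endl additionally forces a flush.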
[19:16:34] https://meta.wikimedia.org/wiki/Privacy_policy/BannerTestA#What_This_Privacy_Policy_Does_.26_Doesn.E2.80.99t_Cover then expand the "More on what this privacy policy doesn't cover" [19:17:02] there we go, i see it if i expand that section on https://meta.wikimedia.org/wiki/Privacy_policy [19:17:15] yeah i'm pretty sure that's wrong unless somebody slipped it in [19:17:57] ok, i'll leave a note on the talk page [19:18:05] (legoktm: AngrySplitter *ggg* evry week problems with deleting/moving fiels) [19:18:14] o.O [19:18:58] this is a isuisse, for example it is not possible to "speedy takedown" copyright violations... this is a legal isuisse :/ [19:19:18] for exampel :P [19:20:06] brion: https://meta.wikimedia.org/wiki/Talk:Privacy_policy#Blog_not_hosted_by_WordPress.3F [19:20:11] legoktm: ok so the rumor is that we *plan* to move the blog to hosting on Wordpress's server….? [19:20:14] but it's not presently done that way [19:20:17] so i'll add a note to the note :D [19:20:20] o.O [19:20:27] why would we want to do that? [19:21:11] wordpress --> pay the traffic xD [19:21:33] possibly because our ops people don't want to have to maintain a single wordpress instance and keep it tuned for the occasional high-traffic post [19:23:31] i guess, but i'd think people would value the privacy of wordpress not tracking us a bit more [19:24:59] /w/win 5 [19:26:05] legoktm: hey i'm just glad it's not a Facebook page [19:26:16] hahah [19:26:28] brion: just you wait... [19:27:03] https://www.facebook.com/wikipedia [19:28:01] https://www.facebook.com/wikimedia is even better ;) [19:28:24] ^^ [19:29:57] hah [19:40:58] Facebook used to have networks. Wikimedia had one which was limited to people with @wikimedia.org address [19:41:04] the feature apparently disappeared [19:41:46] Who has these mailaddresses? Who can get it? [19:42:12] a while ago, anyone with cluster access could get one by hacking a conf fil [19:42:12] e [19:42:24] nowadays, only staff / contractors. [19:43:21] heh hacking config should be quite fun [19:44:11] Are there any other wmf's addreses'? [19:46:15] well there's the OTRS addresses like info@wikipedia.org and stuf [19:46:18] stuff* [19:46:32] Base-w: yea, there are aliases but they are kept in a private repo to not make it so spammy and expose people's private addresses [19:47:42] the public ones should be on wikitech and then you have all the lists on lists.wikimedia.org [20:03:54] the privacy policy draft talks about a number of things in the future with present tense [20:04:40] heh i havent read it yet [20:06:14] mutante, that was not what i asked) sure i know about otrs and mailing lists but i mean emails for people to get [21:15:18] yet another API question: When I look at the recreate triples at the Edit API, if setting recreate suppresses errors when a page doesn't exist, then what is the difference between setting none of them, and setting nocreate? [21:22:59] also, the parameter undo has the information Revision ID to undo. Overrides text, prependtext and appendtext, but I don't see the latter two parameters documented there [21:51:35] Er, hey, quick question: With the current state of VipsScaler, can we now scale PNG images up to the upload limit? [21:52:19] I've been asked to talk about it for a WMF blogpost, but I want to give accurate information [21:55:22] ...Wait, did I say that in the Tech report? 
Maybe I did
[21:55:24] MartijnH: so fixit :)
[21:57:11] No, I didn't
[21:58:54] AdamCuerden: http://www.gossamer-threads.com/lists/wiki/wikitech/257435
[21:59:05] oh wait
[21:59:11] that's like 2 years old
[22:00:01] AdamCuerden: https://bugzilla.wikimedia.org/show_bug.cgi?id=51370 might have some info
[22:01:39] I'm going with yes, as this displays
[22:01:40] http://i.huffpost.com/gen/1325379/original.jpg
[22:01:44] ..No, not that
[22:01:59] That was something really weird a friend linked me, that scares me
[22:02:00] https://commons.wikimedia.org/wiki/File:Gustave_Dor%C3%A9_-_Miguel_de_Cervantes_-_Don_Quixote_-_Part_1_-_Chapter_1_-_Plate_1_%22A_world_of_disorderly_notions,_picked_out_of_his_books,_crowded_into_his_imagination%22.png
[22:02:12] 99.15 MB.
[22:02:29] Upload limit's 100 meg last I checked
[22:03:22] took a little while but it made a thumbnail \o/
[22:03:37] course this thumb really should be a jpeg
[22:03:40] AdamCuerden: yes, but shell users can upload server side to exceed that
[22:03:42] we gotta add more smarts to the system :)
[22:04:01] I thought it's higher than 100M assuming you use chunked uploads
[22:04:24] though chunked uploads sometimes fail reassembling large files :(
[22:04:37] \o/
[22:04:39] I mean :(
[22:05:03] Ryan_Lane: heh
[22:05:59] http://firefogg.org/dev/chunk_post.html
[22:06:24] it's linked to on API:Upload
[22:08:17] Hmm
[22:08:30] Heh. We need a chunked upload upload tool
[22:08:53] https://www.mediawiki.org/wiki/API:Upload#Uploading_from_URL
[22:08:53] However, do chunked uploads apply to PNGs, or just video?
[22:09:20] If they're just video, it doesn't really matter
[22:09:47] PNGs over 100MB?
[22:10:14] aye
[22:10:22] sounds like if we $wgAllowCopyUploads and give a user upload_by_url user right too..
[22:10:26] they can upload "from URL"
[22:10:32] kind of like FXPing
[22:10:45] Basically, trying to figure out if PNGs are completely fixed by VipsScaler, or only mostly fixed
[22:11:19] i'm not sure but i would guess chunked uploads apply to anything over 100MB but file type doesn't matter, as long as it's an allowed file type
[22:11:42] right.
[22:12:09] I'll have to look into that. It could be really useful for uploading TIFFs from the Library of Congress directly
[22:15:54] If anyone wants to do a good deed (and reduce the MW core footprint), feel free to review https://gerrit.wikimedia.org/r/#/c/74096/. It's been sitting in gerrit since July :P
[22:26:21] csteipp: would it be bad if we set XFO: SAMEORIGIN on all the wikis?
[22:27:49] <^d> kaldari: Search BZ for that. I know we've discussed it before.
[22:28:22] kaldari: it increases the attack surface a bit, so I'd rather not, unless there's a good reason for it
[22:28:59] fair enough
[22:34:19] https://github.com/earwig/mwparserfromhell/ claims it's "outrageously powerful" :p
[22:45:18] mutante: it is :)
[23:01:03] good night folks
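On the chunked-upload question above: chunking is a feature of action=upload itself, so it applies to any allowed file type, not only video, which is why a >100 MB PNG can be sent the same way. A rough sketch of the request sequence as described on API:Upload at the time — the parameter names are from memory and should be checked against the live documentation; the file name, sizes, and token values are placeholders:

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Each API request is shown only as its POST parameters; actually sending
// it (multipart/form-data to /w/api.php) is left out so nothing here
// depends on a particular HTTP library.
using Params = std::vector<std::pair<std::string, std::string>>;

int main() {
    const std::string filename = "Big_scan.png";   // hypothetical file
    const std::string filesize = "104857600";      // 100 MiB

    // 1. First chunk: stash it; the response returns a filekey.
    Params first = {
        {"action", "upload"}, {"stash", "1"}, {"filename", filename},
        {"filesize", filesize}, {"offset", "0"},
        {"chunk", "<first 1 MiB of bytes>"}, {"token", "<edit token>"},
    };

    // 2. Each later chunk repeats the call with the filekey from step 1
    //    and an updated offset, until offset + chunk length reaches filesize.
    Params next = {
        {"action", "upload"}, {"stash", "1"}, {"filename", filename},
        {"filesize", filesize}, {"offset", "1048576"},
        {"filekey", "<filekey from previous response>"},
        {"chunk", "<next 1 MiB of bytes>"}, {"token", "<edit token>"},
    };

    // 3. Final call: no chunk, just commit the stashed file.
    Params commit = {
        {"action", "upload"}, {"filename", filename},
        {"filekey", "<filekey>"}, {"comment", "chunked upload test"},
        {"token", "<edit token>"},
    };

    for (const Params* step : {&first, &next, &commit})
        for (const auto& p : *step)
            std::cout << p.first << "=" << p.second << "\n";
}
```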