[06:05:09] Aha
[06:05:14] nohup: cannot run command `restart.sh': No such file or directory
[06:05:35] nohup ./restart.sh &
[06:05:53] seems to work in the right directory
[06:05:56] whew
[06:06:01] well that was painful
[06:06:03] hey wanna tweak the docs? :-D
[06:06:06] !help
[06:06:06] !(stalk|ignore|unstalk|unignore|list|join|part|quit)
[06:06:06] There are a lot of topics you could be asking about. Besides, this bot is mostly for experienced users to quickly answer common questions. Please just ask your question and wait patiently, as the best person to answer your question may be away for a few minutes or longer. If you're looking for help pages, we moved that to !helpfor.
[06:06:09] * Reedy glares at Elsie
[06:06:14] !wm-bot
[06:06:14] Hello, I'm wm-bot. The database for this channel is published at http://bots.wmflabs.org/~wm-bot/db/%23mediawiki.htm More about WM-Bot: https://meta.wikimedia.org/wiki/wm-bot
[06:06:46] uh oh
[06:06:49] It seems to be mute
[06:06:55] @restart
[06:06:56] Permission denied
[06:07:23] this is hilariously HAL-like
[06:07:23] !help
[06:07:23] !(stalk|ignore|unstalk|unignore|list|join|part|quit)
[06:07:23] There are a lot of topics you could be asking about. Besides, this bot is mostly for experienced users to quickly answer common questions. Please just ask your question and wait patiently, as the best person to answer your question may be away for a few minutes or longer. If you're looking for help pages, we moved that to !helpfor.
[06:07:27] apparently just very lagged
[06:07:32] all right then
[06:07:34] you're not a trusted user
[06:07:43] !wm-bot
[06:07:43] Hello, I'm wm-bot. The database for this channel is published at http://bots.wmflabs.org/~wm-bot/db/%23mediawiki.htm More about WM-Bot: https://meta.wikimedia.org/wiki/wm-bot
[06:07:58] 12913 wmib 20 0 278m 84m 8356 S 99.5 4.2 0:18.60 mono
[06:07:59] lol
[06:09:04] all righty then
[06:10:00] better
[06:20:19] you wanna close the bug (you did the work)?
[06:22:15] heh
[06:22:16] yeah
[06:24:28] Good job Reedy..
[06:28:45] !omgevilbug | Reedy
[06:28:45] Reedy: !omgevilbug
[08:04:22] just wanting to get some eyes on [[Wikipedia:VisualEditor/Feedback#Edit_section_links_gone_AWOL]]
[09:55:15] Hello, does anyone here know how I contact the owner(s) of wRaelBot?
[09:55:59] Hello, does anyone here know how I contact the owner(s) of wRaelBot/get a wRaelbot?
[09:56:33] hi akoopal
[09:56:41] I asked the same, Larsnl :P
[09:57:31] lol, hadn't joined yet when you asked it
[13:56:09] oo.
[13:56:23] You guys are gonna launch VE on all the other wikis in three days?
[13:56:36] gut luck
[13:57:05] huh?
[13:57:09] where was this announced?
[13:58:20] raargh nooo
[13:59:21] MatmaRex: yeah, where was this announced?
[13:59:38] I shouldn't probably trust things posted on the internet that easily.
[13:59:41] well i'd sure like to know
[14:00:04] or maybe I should just learn to read properly.
[14:00:13] It's "the week of July 15", I'm sorry.
[14:00:31] well, i haven't heard this being announced either :)
[14:00:33] james is currently netsplitted as Guest72452
[14:00:46] and it's still apparently 7 am in SF
[14:01:31] MatmaRex: https://en.wikipedia.org/wiki/Wikipedia:VisualEditor#Timetable
[14:01:54] and https://en.wikipedia.org/wiki/Wikipedia:VisualEditor/Feedback#What_happens_in_3_days_.3F
[14:02:13] wtf
[14:02:18] and this is announced on en.wp?
[14:02:19] which basically says: "They are going to have a meeting today, wait until 17:00 PST for more info."
[14:02:32] and not the wikis which it actually applies to?
[14:02:50] Well, the timetable is on enwiki.
[14:03:16] I'm assuming dewiki, frwiki and itwiki have been informed by their respective community liaisons.
[14:04:03] also https://www.mediawiki.org/wiki/VisualEditor/status#2013-06-13_.28MW_1.22wmf7.29 is not updated anymore
[14:04:24] @seen guillom
[14:04:24] odder: Last time I saw guillom they were talking in the channel, they are still in the channel #mediawiki-visualeditor at 7/5/2013 2:03:18 PM (00:01:06.5631870 ago)
[14:07:35] @seen wm-bot
[14:07:35] hashar: I have never seen wm-bot
[14:07:40] ahh
[14:07:53] * hashar hands a mirror to wm-bot
[15:27:14] Hi.
[15:27:22] load.php, line 342, calls http://meta.wikimedia.org/w/index.php?title=User:Pathoschild/Scripts/Regex_menu_framework.js by HTTP even though I am using HTTPS.
[15:27:36] So Firefox blocks it. Even HTTPS Everywhere doesn't help.
[15:28:43] On enwiki
[15:28:46] AVRS: on what wiki?
[15:29:00] AVRS: also, this is some local script, so ask local admins why it does that
[15:29:11] (or check your whatever.js)
[15:29:14] MatmaRex: from load.php
[15:29:33] everything goes through load.php
[15:29:39] including your whatever.js
[15:29:41] MartijnH said "that's resourceloader I think"
[15:29:49] ok, I'll try disabling that
[15:30:03] just check if you're not explicitly loading that via HTTP somewhere
[15:31:02] Indeed, thanks, it depends on my monobook.js.
[16:12:26] I forget, where are the log files on terbium?
[16:12:42] mlitn: ^
[16:17:40] kaldari: ssh fluorine, /a/mw-log/
[16:17:52] oh yeah, wrong server :)
[16:21:19] apergos, parent5446: hello
[16:22:16] i'm working on saving, specifically on indexes (because i need those first); i hope to have something to commit tonight
[16:22:30] Hey
[16:22:51] hello
[16:23:05] what will the index have?
[16:23:56] right now, mapping between page id and offset in the file where the page object is
[16:24:16] and i will also have another index for tracking free space
[16:24:43] Mhm, you said you were using btree indexes right?
[16:25:07] yeah, that's the plan; i think those make the most sense for saving in a file
[16:25:16] you're already writing a rudimentary binary format?
[16:25:28] yes
[16:25:32] great
[16:25:43] Sounds good. We're basically making a database.
[16:26:00] yeah, something like that
[16:26:27] which will handle about three queries but it will do them really fast... we hope
[16:26:39] Mhm
[16:27:47] whatever happens, writing out plain xml in order (as the dumps are now), uncompressed, has simply got to be no slower than someone uncompressing and reading them now.. if we can meet that goal then everything else is golden
[16:28:53] what are you passing as input at the moment? uncompressed stub?
[16:29:28] hmm, that's a pretty high goal, because i will have to do more than just decompress some text
[16:31:29] I know
[16:31:52] if it's significantly slower we'll have complaints
[16:31:55] pretty much, i will use pages-meta-history (i don't need stub, i can ignore the text at first) for testing first and the output of dumpBackup later
[16:32:01] ok
[16:32:39] Mhm
[16:32:53] right, i think it shouldn't be that much slower, but i will keep that in mind
[16:32:58] yep
[16:34:47] parent5446: yeah?
[16:35:33] Yeah I think so as well. We'll be fine.
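To make the index design discussed around 16:23-16:26 a bit more concrete: below is a toy JavaScript sketch of the two lookups being described, a page-id-to-file-offset map and a free-space list. It is only an illustration; the actual project keeps these as B-trees inside a binary dump file and is not written in JavaScript, and every identifier here (DumpIndex, recordPage, allocate, and so on) is invented for the example.

```javascript
// Toy in-memory stand-ins for the two indexes discussed above.
function DumpIndex() {
    this.pageOffsets = new Map(); // page id -> byte offset of the page object in the file
    this.freeSpace = [];          // { offset, length } holes left by deleted or rewritten objects
}

// Remember where a page object was written.
DumpIndex.prototype.recordPage = function ( pageId, offset ) {
    this.pageOffsets.set( pageId, offset );
};

// Look up where to read a page object from (undefined if the page isn't in the dump).
DumpIndex.prototype.findPage = function ( pageId ) {
    return this.pageOffsets.get( pageId );
};

// Note a hole in the file so later writes can reuse it.
DumpIndex.prototype.addFreeSpace = function ( offset, length ) {
    this.freeSpace.push( { offset: offset, length: length } );
};

// First-fit allocation from the free list; null means "append at the end of the file".
// A real implementation would also split oversized holes instead of consuming them whole.
DumpIndex.prototype.allocate = function ( length ) {
    for ( var i = 0; i < this.freeSpace.length; i++ ) {
        if ( this.freeSpace[ i ].length >= length ) {
            return this.freeSpace.splice( i, 1 )[ 0 ].offset;
        }
    }
    return null;
};
```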
[16:35:51] I'm interested in hearing how you will determine which revisions/pages to ask for from dumpBackup.php
[16:36:10] again I don't expect it to be solved now, I'm just curious if you have thought about it
[16:37:49] that's one thing i wanted to ask about your incremental dumps: why do you have the phase 1 that just records revision id? why not just find the last revision id that was before now - 12 hours
[16:38:41] because it takes no time for me to get the current max rev id
[16:38:55] select max
[16:39:22] but finding the largest revid older than 12 hours... meh, now we have to ask the db to work a little
[16:39:46] there's no requirement for you to look at only revs that have been sitting there for half a day
[16:39:56] there should be an index on rev_timestamp so that shouldn't take long either, but okay
[16:40:25] i think it makes sense to give editors time to delete some really bad revisions/pages
[16:40:28] because the adds/changes run daily, it makes me a bit more hesitant, I wanted to give folks a little time
[16:40:34] right, to delete stuff that's problematic
[16:40:41] yeah
[16:41:13] but the thing is that you can't *just* get the revisions from day a to day b and fill in the gaps
[16:41:26] you have to account for the stuff that's been deleted or oversighted
[16:41:28] yeah, i know
[16:41:47] and pages that have been moved around... so I was wondering what your thoughts were on it, if you have any yet
[16:42:30] Good thing the pageid doesn't change on page moves. ;)
[16:42:38] no kidding
[16:43:16] my plan is to get the new added revisions, process those; then look at the relevant entries in the log and, for all pages that were mentioned in the log, compare their revision lists in the dump and in the database and then change the dump so that they are the same
[16:44:11] this applies to revision un-/deletion; if i can find out in the log that an action was a normal page un-/deletion, it will make things simpler for those cases
[16:44:34] with oversight you are going to have a harder time
[16:44:45] because there things like the page title may be hidden
[16:45:33] How do dumps usually handle hidden titles? I'm guessing there's an option to apply revdel info?
[16:45:42] so first off
[16:45:57] we get stubs which are 'give me page metadata for every page that's public'
[16:46:00] you mean that normal users can't even find that something was oversighted? i think in that case the dump script will need admin-level access
[16:46:24] then we find in the last dump run or ask the db for the content
[16:46:49] if it's been oversighted in the meantime we'll get "deleted" in a bunch of tags, iirc
[16:46:59] and those get written right into the dumps
[16:47:22] user names, titles, comments, and text and maybe some other things can all have "deleted" in the xml tag as an attribute
[16:47:36] wouldn't it make more sense just not to write such a page to the dump?
[16:47:41] the stub is already there
[16:47:52] right
[16:47:53] so the best we can do then is to note it for the use in the content file
[16:47:53] Well if you're backing up a wiki, it's destructive to be missing pages.
[16:48:01] *user
[16:48:30] this avoids things like 'we have to track page moves' and 'we have to track undeletes and redeletes and undeletes again'
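For readers following along, here is a rough sketch of the reconciliation idea described at 16:43:16 (for each page touched in the log since the last run, compare its revision list in the dump with the database and patch the dump to match). It is purely illustrative: the real incremental-dumps code is not JavaScript, and `dump`, `db` and all their methods are hypothetical stand-ins.

```javascript
// Sketch only: bring one page's revisions in the dump in line with what is
// currently public in the database.
function reconcilePage( dump, db, pageId ) {
    var inDump = new Set( dump.revisionIds( pageId ) );      // hypothetical accessor
    var inDb = new Set( db.publicRevisionIds( pageId ) );    // hypothetical accessor

    // Revisions still in the dump but no longer public: deleted or oversighted.
    inDump.forEach( function ( revId ) {
        if ( !inDb.has( revId ) ) {
            dump.removeRevision( pageId, revId );
        }
    } );

    // Revisions public in the database but missing from the dump: newly added
    // or restored, so fetch and write them.
    inDb.forEach( function ( revId ) {
        if ( !inDump.has( revId ) ) {
            dump.addRevision( pageId, db.fetchRevision( pageId, revId ) );
        }
    } );
}
```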
[16:48:47] it also means we don't rely on the logs... sometimes the logs and the content aren't always in sync
[16:48:50] sad but true
[16:49:21] so having something that can check in a more rigorous way once in a while might be a good idea (if we know that there's been some problems)
[16:50:13] yayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy http://infodisiac.com/blog/2013/07/new-edit-and-revert-stats/
[16:50:30] should that check every piece of metadata that can be oversighted as well?
[16:51:28] presumably if you try to retrieve a revision
[16:51:36] with all the various other bits
[16:51:47] some things may have placeholders
[16:52:06] I'm not suggesting we periodically get all revisions anew from the db, that would be a disaster
[16:52:29] right, that shouldn't be necessary to do this
[16:52:34] the other thing about the stubs is we do regenerate them every time
[16:52:37] from scratch
[16:52:46] so all the metadata is accounted for
[16:53:04] if something is hidden etc in between it and the page content getting written, the next stub will have it right
[16:53:17] so that's something to bear in mind
[16:53:46] processing the page content is the big headache and the big time sink, even if you wind up doing a two pass 'get all metadata' first
[16:54:22] right
[16:55:44] well anyways, just some thoughts, I will be interested to see how you address it when you get there
[16:55:44] i wonder, would it be possible to stop replication on the db server dumps use while the dump is running? that way, we wouldn't have to worry about “in the meantime” much; that's probably not feasible, right? and in any case, it's not something we need to solve now
[16:55:55] well ugh
[16:56:11] some of our queries are already long and we get complaints
[16:56:24] what I would love though (don't expect it in the short term)
[16:56:28] Probably not a good idea.
[16:56:56] is to have a server that we can run a pile of queries on so we have consistent dumps
[16:57:00] from start to end
[16:57:13] but in order to do that we would have to
[16:57:20] be able to complete any given dump in... how long
[16:57:23] a day?
[16:57:34] if we were able to generate en wp in a day
[16:57:47] well I'll owe someone a very good bottle of whisky or the drin of their choice
[16:57:50] *drink
[16:58:17] Yeah that would be quite the feat.
[16:58:22] this though is a discussion for Asher (wanted: dump consistency. how do we get it?)
[16:58:52] it takes about 10 days now, right?
[16:59:04] something like that
[16:59:23] in theory I can cut that down by a little when we move to the host in eqiad
[16:59:31] more cores and more memory
[16:59:52] 4 days or so is 7z recompression
[17:01:05] the tables get dumped from scratch (as they should), that will only grow over time but so far we can manage them
[17:01:35] abstracts are a drag and we should find a workaround but compared to history they are low priority
[17:02:18] i thought about that a bit too, but the other tables would be harder to make into incremental dumps, since there is no way of finding out what changed, i think
[17:04:43] it would be tough
[17:04:49] and it's definitely not a priority
[17:05:00] that's not the main issue for folks
[17:05:06] yeah
[17:05:38] after this project is done and deployed and everyone's happy at the end of September
[17:05:45] * apergos <-- optimist
[17:05:50] ;)
[17:06:06] there's lots of time for all the other things if you are still interested in dumps and haven't jumped on some other part of the MediaWiki platform
[17:06:45] how do the abstracts work? at first sight, not very well, the second page in there (Autism) has | ICD9 = 299.00, that's not intentional, is it?
[17:07:16] yeah it is, in that the abstracts are supposed to grab the first piece of text (I forget exactly the algorithm)
[17:07:20] it's very mechanistic
[17:07:50] sure wish we knew how to find out who uses them
[17:08:14] s/who/if someone/
[17:08:32] don't you have any statistics?
[17:09:02] what would they look like? we have lots of folks that download the whole mess
[17:09:12] right
[17:09:16] how can we tell amongst all those IPs who is actually doing something with those files?
[17:09:55] we could put out some calls on the research-l and a couple other lists, or ask on the blog but
[17:10:19] really unless we simply "broke" them for a while we probably would never hear from the bulk of the users, assuming there is a bulk
[17:10:43] yeah, i get that
[17:11:06] (which we might consider doing at some point, with a notice... i.e. maybe gzip them and see who notices, etc)
[17:12:42] so...
[17:13:09] any questions for me or parent5446 about any of this, or the code you're working on, or anything else?
[17:13:18] comments, complaints... :-)
[17:13:51] no, nothing else
[17:13:59] I had a quick question: I know this was mentioned somewhere, but what repo are we working on?
[17:14:14] operations/dumps/incremental
[17:14:24] OK, thanks.
[17:14:46] the gsoc branch
[17:15:08] Got it. I'll add it to my notifications list.
[17:15:31] if neither of you have anything else, see you on monday
[17:15:50] OK see you on Monday.
[17:16:07] only it's prolly good on sunday night or mon morning to write a short status on your wiki page for folks following along
[17:16:13] just a couple of lines
[17:16:38] I'll be here on and off through the weekend if you need anything or just want to bounce ideas around
[17:16:51] otherwise, have a great one!
[17:17:20] ok, thanks
[17:23:48] MatmaRex: what calls jQuery.expr.filters.hidden() ? https://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29&diff=562996477&oldid=562995036
[17:26:12] Nemo_bis: i think it corresponds to $(...).is(':hidden') calls
[17:26:16] or the ':hidden' selector in general
[17:26:23] which means that just about everything calls it
[17:26:38] heh
[17:26:44] Nemo_bis: is that the ULS thing?
[17:27:16] Nemo_bis: as i said, it fires blocking AJAX API requests, which means that any code that might be run in parallel will have to wait for the requests
[17:27:16] apparently
[17:27:18] (basically)
[17:27:41] which is precisely why this should not be done
[17:27:42] I just wanted to check if that was ULS code
[17:28:02] sorry, i'm knee-deep in VisualEditor right now
[17:28:06] ULS fires ajax with async: false?!
[17:28:16] yep
[17:28:23] * YuviPanda bleaurghs a little
[17:28:32] my reaction was the same as yours when i discovered it
[17:28:33] but it does
[17:28:36] and it does it on page load
[17:28:48] there's a bug for this whose number i don't remember off-hand
[17:28:59] i'm trying to figure out https://bugzilla.wikimedia.org/show_bug.cgi?id=50385 now
[17:28:59] that's... very bad.
[17:29:07] * YuviPanda looks in bz
[17:30:02] I linked it from that section
[17:30:32] * Nemo_bis currently has everything but terminal unusable because of profiling that bug
[17:31:52] ah, perhaps it doesn't play well with articles full of images
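For readers not familiar with the `async: false` problem discussed around 17:27-17:28, this is roughly what it looks like. It is a minimal sketch, not ULS's actual code, and the API query parameters below are made up for the example.

```javascript
// Anti-pattern being discussed: a synchronous request on page load stalls all
// other JavaScript (and rendering) until the server answers.
$.ajax( {
    url: mw.util.wikiScript( 'api' ),
    data: { action: 'query', meta: 'userinfo', format: 'json' },
    async: false // the whole page waits here
} );

// Non-blocking version: the rest of the page keeps running and the response is
// handled in a callback whenever it arrives.
$.ajax( {
    url: mw.util.wikiScript( 'api' ),
    data: { action: 'query', meta: 'userinfo', format: 'json' }
} ).done( function ( data ) {
    // use the response here
} );
```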
[17:33:17] YuviPanda: found it? https://bugzilla.wikimedia.org/show_bug.cgi?id=49935
[17:33:59] didn't find it, thanks Nemo_bis
[17:40:38] MatmaRex: Wait, ULS does *blocking* AJAX request on *page load*?
[17:40:41] * RoanKattouw headdesks
[17:40:46] *requests
[17:41:06] yes it does
[17:41:12] request, actually. just one.
[17:47:43] MatmaRex: Did you see how ULS loads like all of its JS in the top queue? The top queue is ~100KB after gzip now
[17:48:22] i haven't seen, but i can imagine
[17:48:45] the code in general is painful, eh
[17:48:58] * Nemo_bis always amused at how it takes an en.wiki deploy to notice all that sort of stuff
[17:48:59] RoanKattouw: you know that funny little dialog with the list of all languages it supports?
[17:49:12] Nemo_bis: i have complained about it for quite some time
[17:49:26] but i don't have the time to rewrite the thing myself, and they didn't feel like doing it either
[17:49:35] RoanKattouw: you know why it is/was lazyloaded?
[17:49:38] well, yes, I mean "really notice" as a group/community of devs
[17:49:39] No?
[17:49:41] because it took like ten seconds to render
[17:49:56] Ouch
[17:50:00] Also, how does anything take 10s to render
[17:50:00] so it was made to lazyload so it's less noticeable
[17:50:04] i don't even
[17:50:05] well
[17:50:08] it has like 300 items
[17:50:08] Initializing VisualEditor on [[San Francisco]] is faster
[17:50:19] So? [[San Francisco]] has thousands
[17:50:20] and creating each one fired a bunch of $() selects
[17:50:28] and reflows
[17:50:31] after every single one
[17:50:32] Nice
[17:50:48] Oh, and so that user claiming :hidden is where most of the time is spent? I wouldn't discount that too quickly
[17:50:54] i rewrote it a little and it takes a second or two
[17:51:02] If they're doing something like $( 'body' ).find( ':hidden' ) ....
[17:51:03] but i'm not sure if it's live anywhere yet
[17:51:14] Or just $( ':hidden' ) or $( 'div:hidden' ) or whatever
[17:51:18] the entire ULS is pseudo-upstream libraries on github
[17:51:18] MatmaRex: it was just deployed iirc
[17:51:27] which are synced manually sometimes
[17:51:32] Yeah they would have to be
[17:51:34] by copying over the files
[17:51:42] We don't allow people to deploy code from github directly
[17:51:55] Also, I wish this code wasn't on github to begin with
[17:52:30] Maybe if it was in Gerrit it would've been reviewed properly and we wouldn't have code in production with terrifying bugs like these
[17:53:03] MatmaRex: https://gerrit.wikimedia.org/r/#/c/71973/
[17:53:26] Nemo_bis: ah, nice
[17:53:45] ah it will be in the next deploy though
[17:54:41] RoanKattouw: AFAIK we have terrible JS CR backlogs in gerrit too
[17:54:52] Yeah :(
[17:55:06] It's true that we don't have the JS CR capacity we need
[17:56:02] so teams have to do their best learning JS themselves
[17:56:27] (as happens with other kinds of skills of course)
[17:56:33] RoanKattouw: should be able to move them now, since we can have contributions to Gerrit via GitHub easily now (there's a bot)
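A minimal sketch of the kind of rewrite described around 17:49-17:50: build the whole language list detached from the document and attach it once, so the browser reflows once instead of once per item. This is not the actual ULS patch; `languages` and the markup below are invented for the illustration.

```javascript
// Sketch only: render ~300 entries with a single reflow.
// `languages` is assumed to be an array of { code, name } objects.
function renderLanguageList( $container, languages ) {
    var $list = $( '<ul>' );

    $.each( languages, function ( i, lang ) {
        // Appending to a detached element does not touch the live DOM yet.
        $list.append(
            $( '<li>' )
                .attr( 'data-code', lang.code )
                .text( lang.name )
        );
    } );

    // Single attach: one reflow for the whole list instead of one per item.
    $container.append( $list );
}
```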
[18:38:36] I am not getting VisualEditor for section edit links anymore. Was that intentional?
[18:38:51] StevenW: no, there's a bug for that
[18:39:05] that's what you get deploying before a US holiday
[18:39:49] StevenW: Fix is in Gerrit and was just +2ed a second ago
[18:40:09] Awesome :)
[18:50:33] hey AzaToth, thanks for the review; I expected my crap to need serious revision after trying to actually make it work with debian build stuff anyways
[18:54:20] no probs
[18:54:51] apergos: could quickly make a scons of it ツ
[18:56:12] with luck there will be a new version up tomorrow sometime (would like it to be today but while it's usable, in order to address a couple issues I need to check some more crap about the debian build system and I'm about out of energy now, 10pm)
[18:57:20] apergos: mwbzutils should be its own repo imo
[18:57:50] well I have thought about that but otoh we need 2 of those for the dumps, they are called right from the scripts
[18:57:55] so I have put it off some more
[18:58:13] they don't exist for any other reason (yet ;-) )
[18:58:28] better to make it clean from the beginning
[18:59:19] ok, noted what I need to start with tomorrow and closed all the tabs
[18:59:19] oof!
[19:15:05] When should I expect to start seeing mobile improvements?
[19:16:09] T13|lunch: can you be more specific?
[19:17:46] There are multiple components that load horribly. Such as navigational boxes and wikiproject/noticeboard banners.
[19:18:21] Page sections don't load correctly either.
[19:18:33] T13|lunch: a) you might want to ask in #wikimedia-mobile b) but maybe not at pacific lunchtime
[19:18:48] T13|lunch: define 'horribly'
[19:18:49] c) styles for boxes, banners etc are something we're still working out
[19:19:01] d) what do you mean by page sections don't load correctly?
[19:19:02] is site JS even loaded on mobile?
[19:19:07] I'm likely going to start creating tickets for everything...
[19:19:08] brion you rascal
[19:19:29] MatmaRex: there seems to be some.
[19:19:42] T13|lunch: please do, but make sure they're actionable if possible :)
[19:19:43] MatmaRex: iirc common.js etc isn't
[19:19:44] loaded
[19:20:03] MatmaRex: there might be mobile-specific mobile.css or something like that. (unsure)
[19:20:07] if they're bugs like "banner/page/template X looks like shit" then the best place to fix it is on the banner/page/template
[19:20:20] Common.js isn't, but there is some loaded.
[19:20:35] where new constructs are necessary, that's a good target for bugzilla
[19:20:54] such as the ongoing discussion of moving things from inline styles to distinct style blocks that can ship with a template
[19:21:51] The section thing is if you have h2-abc h3-fgh h2-zxc then zxc won't show up unless you expand abc
[19:23:51] T13|lunch: file that as a bug with a link to a sample page
[19:24:02] An example is enwp.org/Regular_haircut where you can only see references etc if you expand tapers.
[19:24:34] !newbug
[19:24:38] T13|lunch: i can't reproduce that in firefox
[19:24:43] http://en.m.wikipedia.org/wiki/Regular_haircut
[19:24:46] tap on "References"
[19:24:51] and it expands as expected
[19:25:23] References doesn't even show up unless I expand tapers
[19:25:41] can't reproduce it in iOS 6.1 Safari either
[19:25:49] References shows up in the list as i expect
[19:25:52] Galaxy S3 with Firefox
[19:25:54] T13|lunch: are you using any of the beta settings?
[19:26:02] I may be.
[19:26:13] please check :)
[19:28:46] T13|lunch: can't repro in beta or alpha mode either....
[19:28:51] which version of firefox/android is this?
[19:29:38] Yeah, I tried all three and it isn't doing it for me today either (was last night) and beta was off.
[19:30:17] whee
[19:30:26] :D
[19:30:48] brion: I take it you're on the mobile team?
[19:30:59] yes :D
[19:31:05] i'm mostly working on apps though
[19:31:48] Any plans for making desktop mode for mobile more friendly?
[19:32:13] T13|lunch: it's possible that something didn't propagate correctly from these div fiddlings: http://en.wikipedia.org/w/index.php?title=Regular_haircut&diff=562534503&oldid=562494232
[19:32:18] if you can find a way to repro it regularly on a test page do file :D
[19:32:42] well there's some plans to kinda merge desktop and mobile :) but we'll see in the meantime
[19:33:33] Cool. That diff was something else I was fixing.
[19:33:50] I'm glad it helped that too.
[19:37:05] lunch…… back later :D
[19:40:49] apergos: http://paste.debian.net/14622/
[19:45:07] nice
[19:45:26] (I've already gone a different route of course)
[19:46:39] should look into (somedayyyy) builds on non-linux platforms
[19:48:26] don't have any to test on though
[21:26:01] * Elsie beats Reedy.
[22:35:22] MatmaRex: thanks for the review on https://gerrit.wikimedia.org/r/#/c/60952/
[22:35:35] It seems to be up and working on enwiki Beta Labs.
[22:37:31] StevenW: my pleasure
[22:40:07] anyone in here a meta-admin?
[22:40:18] any jquery gurus around?
[22:40:23] mwalker: i am, via staff rights.
[22:40:42] is this something i can do as a staffer, or do we need community-voted peoples?
[22:41:07] I assume you can do it as a staffer... I need this interface message page deleted http://meta.wikimedia.org/w/index.php?title=Special%3AAllMessages&prefix=centralnotice-banner-autolink&filter=all&lang=en&limit=50
[22:41:18] sorry -- http://meta.wikimedia.org/wiki/MediaWiki:Centralnotice-banner-autolink
[22:41:32] jorm: I could possibly help you, but I think you will get more response at #wikimedia-dev ツ
[22:41:42] point.
[22:41:51] whatcha messing with, jorm?
[22:42:05] i've got a table, with rows, as a jquery dom element.
[22:42:11] they have an attr in the tr
[22:42:15] i want to sort them by that.
[22:42:56] kaldari: stabilized yet?
[22:43:56] T13|store: stable as I'll ever be :)
[22:44:35] There was a question about echo notifications for reviewing new pages the other day.
[22:45:29] Wondering if you might know if the reviewer added themself to Echo-blacklist would that prevent notifications for reviewing from being sent from his actions?
[22:45:51] What's the best way to file a design bug that is relevant to multiple extensions?
[23:00:37] MediaWiki -> General/Unknown; 'design' keyword?
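One possible answer to jorm's jQuery question from 22:42 (sorting table rows by an attribute set on each tr), offered as a sketch only; the `data-weight` attribute name and the numeric comparison are assumptions about what the attribute holds, not something stated in the log.

```javascript
// Detach the rows, sort them by the given attribute, then re-append them.
// Detaching (rather than removing) keeps any bound event handlers intact.
function sortRowsByAttr( $table, attrName ) {
    var rows = $table.find( 'tbody > tr' ).detach().get();

    rows.sort( function ( a, b ) {
        return ( Number( $( a ).attr( attrName ) ) || 0 ) -
            ( Number( $( b ).attr( attrName ) ) || 0 );
    } );

    $table.find( 'tbody' ).append( rows );
}

// Example usage (hypothetical table and attribute):
// sortRowsByAttr( $( '#mytable' ), 'data-weight' );
```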