[00:30:41] DarTar just got a bunch of MWExceptions on metawiki that prevented anything from loading; I grepped the logs on fluorine and saw this:
[00:30:50] 2013-02-08 00:23:56 mw1084 metawiki: [13ac8829] /wiki/Schema:OpenTask Exception from line 281 of /usr/local/apache/common-local/php-1.21wmf9/includes/context/RequestContext.php: Recursion detected
[00:31:00] Clearing cookies fixed it
[00:31:07] Has anyone run into this before?
[00:31:50] The line being referenced is at the top of 'getLanguage()'.
[00:32:59] ori-l: Really?
[00:33:05] There's a fucktonne of them in the error logs
[00:33:19] It's CentralNotice related seemingly
[00:33:23] FR think it's E3 though ;)
[00:34:05] Well, maybe, but we haven't touched anything that is plausibly connected to languages or user preferences, AFAIK.
[00:34:39] I do like the 42 line stack traces
[00:34:59] Including CentralNotice, CentralAuth and NewUserMessage
[00:35:40] Yeah.
[00:36:19] all these totally amazingly complex extensions
[00:36:52] I think it has something to do with an anonymous user not being initialized correctly somehow
[00:36:56] it seems to all trace back to Niklas
[00:37:08] and c014e6b0
[00:37:11] Colonel Mustard in the Kitchen
[00:37:29] Unhelpfully, they're all in India
[00:37:49] https://gerrit.wikimedia.org/r/#/c/44227/ for those playing along at home
[00:38:42] I vote to revert it then
[00:38:49] +1.
[00:38:59] ULS being slightly broken is better than a lot of things being a lot broken
[00:39:10] is it breaking anything besides centralnotice?
[00:39:16] "Had to add recursion guard too."
[00:39:30] that is never a good commit msg
[00:39:36] mwalker: DarTar could not load any page on metawiki as a result.
[00:39:51] mwalker: All the recursion detected exceptions have a Special:RandomBanner call
[00:40:10] Though, most of the errors look to be on metawiki (atm)
[00:40:33] wait; dartar couldn't load a page on metawiki because of centralnotice throwing an error?!
[00:40:48] These look to be separate calls..
[00:41:08] they should be... BannerRandom is an AJAX call
[00:41:47] mwalker: internal error and a blank page with the following error message: [13ac8829] 2013-02-08 00:23:56: Fatal exception of type MWException
[00:41:52] Any page?
[00:42:02] yeah any page from meta
[00:42:20] http + https
[00:42:43] ok -- so that's more insidious -- the CentralNotice path will happen after the page load -- so you wouldn't get a banner, but it shouldn't stop you from getting a page
[00:42:56] were you logged in?
[00:43:17] I think so, but I removed all my cookies so I can't reproduce it any more
[00:43:24] kk
[00:43:40] https://gerrit.wikimedia.org/r/#/c/48054/
[00:43:42] i has revert.
[00:44:26] well -- hang on -- instead of throwing an exception -- why don't we just return a sane default -- like english
[00:45:06] "English". "Sane"
[00:45:23] ya ya ya :p ok! arabic :p
[00:45:56] It's a shame we don't still have Klingon
[00:45:57] but seriously -- I don't think it's safe to return null
[00:47:23] we should return wgContLang actually
[00:47:41] because the user at this point apparently doesn't have a language set or something
[00:47:56] so; default it to the wiki language
[00:57:46] Reedy https://gerrit.wikimedia.org/r/#/c/48055/
[00:57:50] pgehres: ^
[01:14:41] TimStarling: Hey. Have you any opinion on https://gerrit.wikimedia.org/r/#/c/48055/ as a fix to https://bugzilla.wikimedia.org/show_bug.cgi?id=44754 ?
[01:24:33] gn8 folks
[01:24:36] Thanks again!
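The fallback mwalker and Reedy converge on above (return the wiki's content language rather than throwing) might look roughly like the following; this is a minimal sketch of the idea, not the actual Gerrit change 48055, and the function name is invented for illustration:

    <?php
    /**
     * Sketch only: resolve a user's interface language, but if resolution
     * re-enters itself (the "Recursion detected" case in the traces above),
     * fall back to the wiki's content language instead of throwing.
     */
    function getUserLanguageOrContentLanguage( User $user ) {
        global $wgContLang;
        static $resolving = false;

        if ( $resolving ) {
            // Some hook asked for the user language while we were already
            // computing it; the content language is the "sane default"
            // suggested above.
            return $wgContLang;
        }

        $resolving = true;
        $code = $user->getOption( 'language' ) ?: $wgContLang->getCode();
        $lang = Language::factory( $code );
        $resolving = false;

        return $lang;
    }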
[01:25:37] Reedy: I am looking at it
[01:30:36] Wikimedia seems down from Germany
[01:31:06] really ?
[01:31:06] LeslieCarr: ditto
[01:31:06] shit
[01:31:07] --- wikimedia-lb.esams.wikimedia.org ping statistics ---
[01:31:07] 9 packets transmitted, 0 received, 100% packet loss, time 7999ms
[01:31:18] 502 bad gateway from nginx on https
[01:31:28] ah yeah, ok i know what i did
[01:31:28] sorry
[01:31:40] who pulled the plug out?
[01:31:46] * LeslieCarr raises hand
[01:32:31] LeslieCarr: Fine again, isn't it?
[01:32:34] LeslieCarr, taking the site down again? still not feeling part of the team? ;-)
[01:32:52] hehe
[01:32:56] should be fine again
[01:33:12] though there will be a small outage as bgp propagates route changes
[01:34:12] LeslieCarr: When?
[01:34:18] about now
[01:34:24] :D ok
[01:34:44] i thought you were all supposed to be in bed ?
[01:34:47] the entire subcontinent
[01:35:12] * hoo got a spare day tomorrow... :)
[01:35:52] Reedy: for a start, autocreating a user on metawiki and then sending a message to them just because they viewed a notice on another wiki seems kind of ridiculous
[01:37:35] could we maybe just disable NewUserMessage on metawiki for now?
[01:37:40] hoo: how's it look now ?
[01:37:55] borked again?
[01:38:03] (router is rebooting, so curious)
[01:38:09] well may or may not be borked
[01:38:10] down
[01:39:17] ... still down ...
[01:39:31] ... and up again :)
[01:40:01] "Error: ERR_CANNOT_FORWARD, errno (11) Resource temporarily unavailable at Fri, 08 Feb 2013 01:39:37 GMT " ... but the network's reachable
[01:40:11] now 502 Bad Gateway
[01:40:23] now working again :)
[01:40:27] and back
[01:40:39] cool
[01:40:50] we need some better failover tools, but we're working on it
[01:42:03] TimStarling: Seems like a more elegant fix
[01:52:04] TimStarling: Any chance you could fix the permissions on /home/wikipedia/common/docroot/noc/conf/ please?
[01:52:11] For some reason they're now root:root and 777
[01:52:37] But there's no write on the directory for anyone else
[01:53:15] Ahm, 777?
[01:53:18] That's write for everyone
[01:53:29] On the files, yes
[01:53:31] not on the directory
[01:53:38] drwxr-xr-x 3 root root 8192 Jan 23 04:22 .
[01:53:43] Oh
[01:53:55] RoanKattouw: Feel free to fix it though please :p
[01:54:33] everything in docroot/noc seems to be in a similar root:root mess
[01:58:04] Looks like that should be owned by group wikidev and have g+w, right?
[01:58:13] Yeah, please
[01:58:13] I mean, it's the docroot, wikidevs should be allowed to write
[01:58:20] It did at some point in the not too distant past
[01:59:47] so TimStarling, Reedy; I guess I'm missing what Tim's fix is -- we have to serve banners from metawiki because that's where the translations are -- so how do we get centralauth to not create a new user object for them?
[01:59:50] Reedy: Should be fixed now
[02:00:12] I was fixing it also
[02:00:15] mwalker|eyedr: Stop NewUserMessage attempting to leave talkpage messages on meta when the user isn't even visiting the wiki
[02:00:21] well, stopping the extension completely..
[02:00:38] I did: chmod -R a-w noc && chgrp -R wikidev noc && chmod -R ug+w noc
[02:00:47] +1 for killing... That extension has caused headaches for ages
[02:00:53] Reedy: ah -- but does that explain DarTar's issue?
[02:01:15] site's not dead, i'm outta here :)
[02:02:15] mwalker: Not sure. But it seemingly can't be replicated now...
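For reference, disabling an extension on a single wiki in WMF-style configuration is just a conditional include; a sketch, where wmgUseNewUserMessage is an assumed name for the switch rather than a quote from the real InitialiseSettings.php:

    // InitialiseSettings.php (sketch; the variable name is an assumption)
    'wmgUseNewUserMessage' => array(
        'default' => true,
        // stop auto-created accounts on meta getting talk page messages
        'metawiki' => false,
    ),

    // CommonSettings.php
    if ( $wmgUseNewUserMessage ) {
        require_once "$IP/extensions/NewUserMessage/NewUserMessage.php";
    }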
[02:03:32] "find -not -group wikidev" still gives plenty of responses
[02:03:46] Reedy: ok; so my entry point into all this foo is getting an internationalized message -- is this a fix that needs to occur way up there; or is this more down in CentralAuth?
[02:04:02] because somehow we're going to have to tell NewUserMessage what's going on
[02:05:02] Well, we do log if an account is "created automatically"
[02:05:08] https://meta.wikimedia.org/wiki/Special:Log
[02:05:50] 2013-02-08 02:05:27 mw1061 enwikinews: [243656e1] /wiki/Special:AutoLogin?token=4fa41945dcbb346ec379767cf965574d Exception from line 281 of /usr/local/apache/common-local/php-1.21wmf9/includes/context/RequestContext.php: Recursion detected
[02:06:13] backtrace?
[02:06:27] ah, it's enwikinews, so not the same backtrace I guess
[02:06:44] http://p.defau.lt/?u4_7MOq27uJ8l2xlB5DlbQ
[02:07:01] There's one that involves AbuseFilter and not NewUserMessage
[02:07:41] ok -- so putting aside the silly behaviour of NewUserMessage for a moment -- all this seems to come down to the User object not having a language defined
[02:09:30] right, so it *is* newusermessage on enwikinews
[02:09:36] Certainly the newusermessage config change on metawiki has stopped spamming the exception logs on fluorine so much
[02:09:45] And on Commons seemingly
[02:10:02] so we have time to fix newusermessage properly now
[02:10:03] First stack trace in that pastebin above
[02:10:59] mwalker: no, the problem is that to find out the language, you need the user object, and to make a user object, you need the language
[02:11:16] ah
[02:11:23] so my fix is still relevant then
[02:11:46] yes, but it will convert an exception into a breakage of unknown scale
[02:11:59] at least we know about it at the moment and can fix it
[02:12:04] that's what exceptions are for
[02:12:06] fair point
[02:14:28] so that last paste has both AbuseFilter and NewUserMessage as culprits
[02:15:49] NewUserMessage is easily fixed, using the content language is the right thing to do in an AddNewAccount hook
[02:28:12] TimStarling: Add to that a TitleBlacklist one
[02:28:38] http://p.defau.lt/?4BVmEO8qgI9yrq4_lpZavg
[02:29:27] NewUserMessage would seem to be the most common
[02:31:59] so was RequestContext recently changed to throw this exception?
[02:32:06] yeah
[02:32:11] Niklas made a change
[02:32:26] https://gerrit.wikimedia.org/r/#/c/44227/
[02:32:26] ok, well we can revert that and log it instead
[02:32:52] Though, that looks to be before 1.21wmf8 too..
[02:33:51] oh, no. wmf8 was 16th Jan
[02:35:24] that patch looks like it would convert a segfault to an exception
[02:35:46] it's equivalent otherwise, isn't it?
[02:36:15] it just adds another parameter to UserGetLanguageObject, and he probably added the recursion guard because he hit the segfault during testing
[02:37:10] "Had to add.." would suggest he had problems
[02:37:44] Though, if it was causing a load of segfaults on the cluster, presumably someone might've noticed/brought it up before
[02:39:23] ok, well how about we have mwalker's patch, except with something noisier than wfDebug()
[02:39:46] like trigger_error(..., E_USER_WARNING);
[04:31:18] TimStarling: fwiw I don't remember seeing any segfaults when I did that
[04:45:44] maybe it hit a recursion guard at some other place in the loop
[08:08:34] Susan: do we still have those horrible links to #wikipedia in the wikimedia error page?
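Tim's NewUserMessage suggestion above (02:15:49) — build the welcome text in the content language from the AddNewAccount hook — would look something like this sketch; the message keys are placeholders, not the extension's real ones:

    $wgHooks['AddNewAccount'][] = function ( User $user, $byEmail ) {
        // inContentLanguage() never needs the (possibly half-initialised)
        // user language, so it cannot trigger the recursion seen above.
        $subject = wfMessage( 'example-welcome-subject' )
            ->inContentLanguage()->text();
        $text = wfMessage( 'example-welcome-text', $user->getName() )
            ->inContentLanguage()->text();

        // ...post $subject / $text to the new user's talk page, as
        // NewUserMessage does...
        return true;
    };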
[08:12:23] Let me deploy a bug real quick and we'll check
[08:16:06] not if it's a 502 :p
[08:26:48] challenge accepted
[08:28:33] :o
[09:15:39] Nemo_bis: Sorry, my client restarted and I missed your message. Yes, it's still there.
[09:15:57] Nemo_bis: https://en.wikipedia.org/wiki/&
[09:21:04] aww
[09:23:07] Please try again in a few minutes.

[09:23:11] That's a bug.
[09:23:46] And the copyright notice is pretty silly.
[09:23:48] Lol
[09:23:54] " Request: GET http://en.wikipedia.org/wiki/&, from 208.80.154.134 via cp1019.eqiad.wmnet (squid/2.7.STABLE9) to ()
[09:23:55] Error: ERR_ACCESS_DENIED, errno [No Error] at Fri, 08 Feb 2013 09:23:38 GMT
[09:23:55] "
[09:23:58] argh
[09:24:09] What there is copyrighted?
[09:24:55]
[09:24:58] what
[09:27:00] Susan: https://github.com/hmason/gitmarks_hm/blob/master/content/21c9b3cd2f1c93aaa8f73a7dc5ba67ac
[09:27:10] Has that always been there?
[09:27:13] That number thing
[09:27:39] I think so.
[09:55:41] Susan: "To what end?": proving that Wikipedia is unreliable, of course. A conspiracy by Britannica (or Baidu crackers).
[09:58:14] Wikipedia is pretty unreliable.
[10:00:44] the problem is that so is everything else
[10:01:46] Yes, that's the part critics usually forget. :)
[10:20:26] & in urls is blocked
[10:20:32] due to broken clients
[10:25:14] Platonides: it's just our standard example to easily get a wikimedia error when one misses it
[11:38:40] can someone explain to me how to read this table? https://noc.wikimedia.org/cgi-bin/report.py?db=1.21wmf8&sort=real&limit=5000
[11:39:09] or is there a doc for it?
[11:39:28] i understand count, but i have trouble with the other columns
[15:44:51] I'd love to retest some more older AFT(v5) bug reports, but it seems there have been quite a few design iterations.
[15:44:51] So I look at http://www.mediawiki.org/wiki/Article_feedback/Version_5/Technical_Design#Query_string_options
[15:44:51] Unfortunately I have no idea which value means what (I guess I'm too stupid to find it on some other wikipage).
[15:44:57] And I don't see at all how the URL parameter aftv5_link=foo influences anything in the rendering of AFT in an en.wp article.
[15:45:06] And I have no idea which values are on by default on test.wikipedia where there's also some AFT stuff to test.
[15:45:12] This is probably all very well documented among tons of other stuff, just not for somebody impatient like me who wants to quickly find and quickly test.
[15:50:17] (specifically wondering how to get "See All Comments" button shown in http://bug-attachment.wikimedia.org/attachment.cgi?id=10724 of bug 37475)
[15:52:57] andre__: I suspect your time would be better invested on other components
[15:53:03] categorytree-collapse-bullet: Parse error at position 0 in input: [−] seen that?
[15:53:14] https://pl.wikipedia.org/wiki/Kategoria:Podstawczaki here for example
[15:53:17] saper: it's filed
[15:53:23] Nemo_bis: got bug nr?
[15:53:34] can't remember
[15:54:10] https://bugzilla.wikimedia.org/show_bug.cgi?id=44459
[15:54:17] chrismcmahon: ^ do you have any tips for andre__ ?
[15:54:49] Nemo_bis: yes and no. I try to take a look at certain areas from time to time. Because we have enough of them. :)
[15:54:58] (plus AFTv5 is alive and kicking on de.wp and fr.wp)
[15:55:08] yeah
[15:55:23] andre__: yes but they'll use a very different setup from what I see
[15:55:45] anyway AFT is the second biggest component of our bugzilla currently, not much hopes around ther
[15:55:47] that's why I even wonder *where* testing makes sense and where it's not a setup from a few months ago.
[15:55:48] e
[15:55:55] andre__: I have a feeling those should all be abandoned in some way. also, the production version of AFTv5 is enabled for all pages on http://en.wikipedia.beta.wmflabs.org
[15:56:02] because there's test.wp, en.wp, and probably more instances.
[15:56:16] except that beta labs is spinning for me...
[15:56:25] chrismcmahon: does it also follow the correct configs as de.wiki?
[15:57:00] * Nemo_bis got a fatal at last attempt to test a bug there
[15:57:02] so I should give it a try over there at wmflabs.org, thanks. Still wondering which URL parameter means what, which ones are completely abandoned and will never ever see the light (=>WONTFIXing bug reports about them).
[15:57:08] Nemo_bis: I doubt it. I don't think we have any non-English AFTv5 set up. We have a next-release AFT on the ee-prototype host right now, andre__
[15:57:25] chrismcmahon: "we" as in "on labs"?
[15:57:43] andre__: yeah, extra URL parameters added manually should probably be ignored at this point
[15:58:17] Nemo_bis: 'we' as in everybody. We don't any non-English AFTv5 anywhere yet. We're doing that right now.
[15:58:23] don't have
[15:58:33] That means that *a lot* of bug reports are obsolete. Still I'd highly highly appreciate guidelines *which* ones are. now.
[15:59:04] andre__: can we talk about that next week? today is just impossible
[15:59:12] Ah, I read something by Fabrice as "it's already running on de.wiki"
[15:59:34] chrismcmahon: oh sure, no hurries. I just needed to share a slight level of frustration somewhere
[15:59:49] comrade. I sympathize.
[15:59:56] andre__: the main problem with AFT is that docs are on en.wiki
[16:00:12] chrismcmahon: still, AFTv5 is deployed on de.wp. But maybe we interpret "deployment" differently right now :)
[16:00:14] anyway
[16:00:19] thanks for listening.
[16:17:42] hi
[16:18:17] how can i get a list of all commons files that have a use in some wiki?
[16:19:03] (except doing 10kk+ requests for every global file :))
[16:54:58] Base-w: toolserver?
[17:34:41] jeremyb_: what do you mean?
[17:34:51] is it possible at toolserver?
[17:43:19] Long response time for all servers
[17:44:32] hi vssun where are you?
[17:44:52] From India
[17:45:02] vssun: how long has this been happening for you?
[17:45:23] I was trying to login for last 20 mins
[17:45:55] experiencing this problem for last 20mins
[17:46:12] ok, are you using HTTPS?
[17:46:37] I tried both. but the result was same
[17:46:54] evil Susan stealing me words
[17:47:31] Nemo_bis: Did you mean me or Susan?
[17:47:50] Susan :)
[19:02:32] Hey AaronSchulz, have a minute to talk about https://gerrit.wikimedia.org/r/#/c/39171/ ?
[19:04:38] csteipp: so the first problem is atomically renaming the row on each wiki (on the 7 dbs)?
[19:04:53] Yep
[19:05:22] And how to do that without holding open a db connection for each, updating, and then committing them all. Or that might be a good way?
[19:05:41] But it seemed like there should be a better way.
[19:05:56] remind me, why are we pushing for this feature?
[19:06:05] Stewards really want it.
[19:06:16] Some people just want another name
[19:06:32] But I guess there's some vandal renaming too
[19:06:49] Sadly, renames happen.
[19:07:12] csteipp: I mean it's nice to have renames, I'm just curious why it needs to be done soon
[19:07:17] <^demon> People should pick better usernames ;-)
[19:07:23] anyway, the second problem sounds like it could use BagOStuff::merge()
[19:08:19] So E2 or 3 wants global profiles, which means everyone needs a global username, which means all the remaining conflicting usernames need to be cleaned up...
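The BagOStuff::merge() idea for the second problem — one shared key holding the set of wikis that still have to apply the rename, updated via a check-and-set callback — might be sketched like this. The key name, values and callback shape are illustrative rather than CentralAuth code, and the exact merge() callback signature should be checked against the MediaWiki branch in use:

    $oldName = 'Old name';   // user being renamed (example value)
    $wikiId  = wfWikiID();   // this wiki's id
    $cache   = wfGetMainCache();
    // Hypothetical key tracking which wikis still have to process the rename.
    $key = 'centralauth:rename-pending:' . md5( $oldName );

    $cache->merge( $key, function ( $cache, $key, $pending ) use ( $wikiId ) {
        // $pending is whatever is currently stored (false on a cache miss);
        // drop this wiki from the set now that its local rename is done.
        $pending = is_array( $pending ) ? $pending : array();
        return array_values( array_diff( $pending, array( $wikiId ) ) );
    }, 86400 );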
[19:08:24] the first one could get seven connections at once and do a bunch of begin/commits (the chance of all the connections succeeding and a subset of trxs failing to commit is very low)
[19:08:46] as long as this is not super frequent, that would work
[19:08:53] though yes, it kind of sucks
[19:09:09] one could use permanent locks + queue is another option
[19:09:15] *as another
[19:09:24] so the updates would "eventually" happen
[19:09:25] So it would only need 7 connections? I thought transactions were per DB? But could be wrong..
[19:09:44] I think they are
[19:10:18] They seem to be http://stackoverflow.com/questions/2239810/multiple-database-and-transactions
[19:10:28] probably in Postgres I bet, since DBs have a lot of logical separation...I'll check for mysql
[19:10:35] "permanent locks + queue"...? As in lock the local account, and have a queue to process, and some sort of retry if it fails?
[19:10:49] yeah
[19:12:34] it seems like multi-db trx might need XA...if it's internal that's fine (like the internal binlog+db engine xa updates) but it seems it is not
[19:12:46] manually doing XA transactions won't work since that XA interface in mysql is a piece of shit
[19:12:52] and you'd need manual trx managers anyway
[19:13:01] so yeah, doing it all at once won't work
[19:13:37] ^demon: 2PC actually works in postgres though :)
[19:13:51] ^demon: don't you just love PG
[19:13:54] * AaronSchulz trolls
[19:14:14] * ^demon stabs
[19:14:28] <^demon> I was so frustrated when I was working on the installer.
[19:14:28] So is there a good way to locally lock an account?
[19:14:34] mysql tends to have two types of features
[19:14:45] <^demon> I got angry and almost ripped out pg support during that.
[19:14:58] the basic ones lots of people use and ones that are documented as being there but don't work half the time
[19:15:12] ^demon: ;)
[19:15:29] at least PG doesn't half-ass stuff so much
[19:15:54] ^demon: I think the way we use PG makes it trickier too
[19:16:17] <^demon> Well duh, because our pg code is a flaming pile of horse shit.
[19:16:23] though PG can be...verbose...sometimes regardless
[19:16:26] Y'know, the one-time cost of abstracting the DB layout to no longer store *_user_text would probably be much less than the cost of updating all of these rows in perpetuity.
[19:16:44] Susan: I've complained about that for years
[19:16:51] :-)
[19:16:53] we can keep the fields for IPs and for historicity but normalize
[19:16:57] Susan: That probably won't happen in this century... though I'd love it
[19:17:03] that's why I think this CA thing is kind of misguided
[19:17:23] AaronSchulz: I'd give everything a user id, even IPs
[19:17:37] well yeah, you could take it farther I suppose
[19:17:48] <^demon> AaronSchulz: Few too many words. You meant "that's why I think CA is misguided"
[19:18:06] You could give IPs a user ID or just null the _user_text field.
[19:18:15] Either way, it seems like a better approach.
[19:18:17] though I don't know about IP ids since they are more ephemeral in terms of the person using it
[19:18:28] Susan: I would give everything a user id and then throw away the user_text fields
[19:18:37] well, I guess it depends on how you use the IP id
[19:18:41] it could work fine
[19:18:49] That would still allow a lot of "fancy" db queries without too much additional if logic
[19:18:56] hoo: I created the UserCache class for this idea
[19:19:05] you just batch-load the name...eventually it could use memcached
[19:19:32] AaronSchulz: Well, that's only a part solution...
take a look at the linker code for rollbacks for example
[19:19:41] it might also be nice to have CA reference users by ID and then have a central place with names (and we could use memcached), then you'd have like one UPDATE and some cache invalidations
[19:19:51] I recently overhauled that... and it more or less has to use user_text (for the sake of performance)
[19:19:55] anyway, all of this will never happen soon, so I should go back to coding ;)
[19:20:36] * hoo stops dreaming of MediaWiki 2.0... a sane wiki software :P
[19:20:40] gwicke: "sometimes you have to stop digging"
[19:20:43] AaronSchulz, before you do... best way to lock a local account? Assuming there is no CA account attached to it?
[19:21:07] you'd probably have to implement it! :)
[19:21:15] put on your mining hat :)
[19:21:30] <^demon> hoo: MediaWiki 2.0 will be written in Java :)
[19:22:13] ^demon: There's a point where jokes just are too painful to be funny :P
[19:32:09] Anybody want to confirm & deploy localization updates for ZeroRatedMobileAccess? https://gerrit.wikimedia.org/r/#/c/48152/ & https://gerrit.wikimedia.org/r/#/c/48153/ for wmf9/wmf8
[19:52:12] Hey AaronSchulz, last question (i hope) :)
[19:52:32] So... assuming we go for lock + queue
[19:53:12] If one job fails, and we have to roll everything back (manually, since we don't have transactions), do you think it would be appropriate to have another CentralAuth table to keep track of the work?
[19:54:37] I don't like storing data in the db, then deleting it when the job is done.. but I'm not seeing another way to keep it around. Unless we keep another BagOStuff I guess?
[19:54:46] if you can't apply an update then you may not be able to roll back either
[19:55:26] the point is to check if you can rename and then lock, if you could do that everywhere, then proceed with the updates (or otherwise reverse the locks)
[19:55:49] at that point it's just a matter of retrying
[19:55:56] heh, it's like promise and commit steps
[19:56:17] gwicke: so we area reinventing two-phase commit I guess
[19:56:26] *we are
[19:56:47] and the jobs are like transaction managers for limbo transactions
[19:56:52] * AaronSchulz thinks this is all terrible
[19:56:54] ;)
[19:57:11] csteipp: would it be possible to defer this feature until some CA rewrite or is it urgent?
[19:57:51] Stewards have said it's one of their top projects.. I guess they spend a lot of time on each of these now, doing it by hand
[19:58:03] And E2/3 has about a 6 month horizon.
[20:27:18] csteipp: what feature is this? (requiring lock and queue)?
[20:29:06] saper, CentralAuth global renaming
[20:29:53] https://bugzilla.wikimedia.org/show_bug.cgi?id=14862
[20:31:17] I was wondering as I recently saw AaronSchulz patches to implement advisory locks on PostgreSQL
[20:32:36] omg this is messy
[20:32:53] what is messy?
[20:33:11] the concept - that one needs to do this by walking on local databases
[20:33:46] we're also chucking jobs into the queues of multiple other wikis (potentially hundreds)
[20:34:32] we're also using global memcached keys to store an array of wiki ids which have yet to process the rename job
[20:35:36] we're also updating the user table on each wiki
[20:36:07] I had this nightmare when doing systems integration with service provisioning
[20:36:30] I was rolling back on 3 different DB engines and trying to undo entries in the directory
[20:36:53] but rolling back logistics orders was fun
[20:37:19] It's completely horrible
[20:37:28] But I just don't have a better solution.
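The lock + queue shape the thread settles on — verify the target name and lock the accounts everywhere, then let per-wiki jobs apply the rename with normal JobQueue retries — could be sketched as below. The job type and parameters are invented for illustration, not the eventual CentralAuth implementation:

    // Example values; in practice these come from the rename request.
    $oldName = 'Old name';
    $newName = 'New name';
    $wikisToRename = array( 'enwiki', 'metawiki', 'commonswiki' );

    // After the checks and locks above, queue a local rename job on each
    // wiki and let JobQueue retries handle transient failures.
    foreach ( $wikisToRename as $wikiId ) {
        $job = Job::factory(
            'localUserRename', // hypothetical job type registered elsewhere
            Title::newMainPage(),
            array( 'from' => $oldName, 'to' => $newName )
        );
        JobQueueGroup::singleton( $wikiId )->push( $job );
    }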
[20:39:34] I see
[20:39:52] thanks on working on this :/
[20:39:59] for working
[20:41:08] Krenair, csteipp: I realize it's a larger project, but was eliminating the _user_text fields explored as an option?
[20:41:20] The whole system is simply not scalable. And it's only going to get worse as Wikipedia ages.
[20:41:30] Wikimedia, rather.
[20:47:11] Krenair, csteipp: https://bugzilla.wikimedia.org/show_bug.cgi?id=31863 Hmmm.
[21:28:30] Susan, that would make the rename process faster, yes, so the chance of a race condition would be smaller, but not zero. We still have a multi-database locking issue-- making sure that CentralAuth's ideas of usernames and attached accounts is the same as the local wikis..
[21:37:17] csteipp, Susan: of course, the ideal solution is to dump CentralAuth longer-term too. :-)
[21:37:28] In favor of what?
[21:37:39] Susan: Shared DB like we recommend for everyone else?
[21:37:52] I'm not sure I understand the difference.
[21:38:02] CentralAuth seems like a shared DB to me.
[21:38:22] Susan: CentralAuth is local users + cruft to sort-of share it a bit, kinda.
[21:39:27] Susan: "Full" CentralAuth (that we're talking about moving to) would mean that it's still "local users + cruft to sort-of share it a bit, kinda", but now things assume that all local users are the same local user everywhere (and, barring serious logic errors or race conditions causing unexpectedness, they are).
[21:40:04] Talking about moving to --> is there a bug or RFC?
[21:40:38] Susan: Getting rid of the cruft so that the "users" table is a single table whether you're on enwiki or dewiktionary or zhwikibooks or ... would be useful, but would require rewriting every revision ever to use the new global user IDs.
[21:41:00] Right.
[21:41:34] Susan: No bug AFAICR, but who knows?
[21:41:52] So on the very long horizon, then.
[21:41:56] Susan: Obviously before doing this we'd need to have a serious talk with everyone about implications, but it would simplify things a lot.
[21:42:01] Yeah, long-term issue.
[21:56:08] JS guru needed please
[21:56:30] at pt.wiktionary, the Main Page tabs are "disabled" if you use Monobook
[21:56:33] James_F, how would that work with currently unattached accounts?
[21:57:02] malafaya: Taking a look
[21:57:53] thanks, hoo
[21:58:00] (are you hooman?)
[21:58:13] malafaya: Yes... I don't see the exact problem
[21:58:26] The "página principal" tab?
[21:58:29] Works for me
[21:58:34] i'm not sure if there's a difference when using IE or FF
[21:58:42] that one works, but it's the only one
[21:58:49] try "discussão"
[21:59:25] malafaya: Does pt.wiktionary hide the h1 for the main page?
[21:59:32] Susan, yes
[21:59:35] There was an issue with tabs if you hide the h1.
[21:59:39] Yeah, Meta-Wiki hit this issue.
[21:59:59] ah, was a solution found? where can i read about it?
[22:00:01] Works with firefox
[22:00:09] Susan: Which browser?
[22:00:21] All of them? Dunno.
[22:00:30] with Vector, everything works fine
[22:00:31] malafaya: body.page-Main_Page.action-view #jump-to-nav {
[22:00:39] https://meta.wikimedia.org/wiki/MediaWiki:Common.css
[22:00:43] Copy those rules to pt.wiktionary.
[22:00:55] They include action=view limitations and some other enhancements.
[22:01:28] Copying that one works
[22:01:37] you need to switch the page name
[22:01:41] but still worksforme
[22:01:41] all of them or just #jump-to-nav?
[22:01:56] Let's look.
[22:02:15] .page-Wikcionário_Página_principal h1.firstHeading {
[22:02:17] https://pt.wiktionary.org/wiki/MediaWiki:Common.css
[22:02:21] You see that section?
[22:02:46] yes
[22:03:23] http://p.defau.lt/?GJmoduDXtRILhgJAm_JPJQ
[22:03:27] You want that code.
[22:04:31] * malafaya edits Common.css
[22:05:15] Susan, it worked!
[22:05:20] Thank you so much
[22:05:26] No problem.