[04:17:55] oops, did something just break?
[04:18:15] I'm seeing a huge bold 'Language select' text in the sidebar on Commons
[04:18:59] Are you logged in?
[04:19:37] Logged out.
[04:20:02] That's expected, then. (I think)
[04:20:23] No, it seems like some CSS does not apply to that particular
[04:22:36] http://imgur.com/ZiGm5eM
[04:22:43] Seems fine for me.
[04:29:18] http://imgur.com/AfY36xI
[04:29:23] Does not seem fine for me.
[04:30:15] what machine/browser?
[04:30:33] (i have FF22 on Mac OS X)
[04:30:42] Chrome on a Linux
[04:31:43] just checked it out on chrome on my machine and still looks fine
[04:32:01] but i don't doubt your issue :) just ruling out who is affected
[05:09:16] odder: Does the issue occur in other browsers?
[05:09:35] Has anyone else reported a similar issue?
[05:12:38] Elsie: I can test it for you on a Firefox 3.4.5
[05:14:42] 3.0 rather, sorry.
[08:06:48] did I dream a sentence in some report along the lines of "we evaluated ceph. we'll stay with swift"?
[08:07:05] this commit seems to confirm there may be some truth to it https://git.wikimedia.org/blobdiff/operations%2Fmediawiki-config.git/ac2aa6def523fbeb22d5fbdc2f3edbf173222729/wmf-config%2Ffilebackend.php
[08:09:05] ah, http://markmail.org/message/dbmmbph2oelyjl6d
[08:30:35] Why can't I use iwbacklinks as a prop, when I can use langlinks and iwlinks?
[08:30:41] https://en.wikipedia.org/w/api.php
[08:34:00] Nemo_bis: you dreamed it, we have not made a decision about ceph vs swift yet
[08:34:59] (in spite of what the report says)
[08:42:33] apergos: yes, the report confirmed no decision was taken yet
[08:42:46] I didn't dream it, I misremembered :)
[08:45:11] ok
[08:49:39] How can I return all the links from the current page to only internal pages (no external links)? Using prop=links? iwlinks doesn't return all the internal links...
[08:52:29] What is plnamespace=0?
[08:53:54] only main/articles?
[09:36:44] odder: what was the pl.wiki feedback tool again?
[09:41:09] https://pl.wikipedia.org/wiki/Wikipedia:Zg%C5%82o%C5%9B_b%C5%82%C4%85d_w_artykule
[09:46:09] hi andre__
[09:46:39] hi Nemo_bis
[09:46:49] if you happened to know what components the multimedia team is in bugfixing mode for, I'd love to hear it :) https://www.mediawiki.org/wiki/Talk:Multimedia
[09:47:13] (the page asks for triaging, but there are about a dozen components it may be talking about)
[09:48:32] Nemo_bis: definitely UploadWizard and TMH
[09:48:57] at the least
[09:55:28] define "bugfixing mode" :)
[09:55:38] maintenance? not dead yet? actively developed?
[10:01:17] not my words
[10:04:27] dunno then
[10:04:59] but your question :P
[10:14:50] what a nice enotif summary http://p.defau.lt/?f0JEVvfr6N_pFWaSwN_FEw
[11:24:14] https://www.mediawiki.org/wiki/Wikimedia_MediaWiki_Core_Team wee, simplicity FTW
[11:38:20] odder: for the wiki syntax of the page you mean?
[11:39:13] Nemo_bis: no, I meant Wikimedia MediaWiki KimiWedia
[11:40:12] odder: you can rename it to [[WMF MediaWiki core team]] if you aim for simplicity
[11:43:30] Can I get the outgoing links as well as incoming links for each page generated by generator=allpages of the Wikipedia EN API?
[11:45:45] Why is there no prop=backlinks?
[11:53:26] I can't get the outgoing and incoming links of a page in one go, can I?
[11:53:39] I have to make another HTTP request for the incoming links
[14:03:44] petan: is huggle's main tracker github or bugzilla?
[14:03:55] bugzilla
[14:04:19] ah, is there some warning I didn't see?
[14:04:24] can you move https://github.com/huggle/huggle/issues/3 to bugzilla
[14:04:45] (I can't lose tickets there)
[14:04:49] *close
[15:14:08] today wikimedia pages load very slow O_O
[15:17:28] *https
[15:21:02] Steinsplitter: Example link?
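[Editor's sketch for the API questions raised earlier in the log (08:49–08:53 and 11:43–11:53): prop=links with plnamespace=0 restricts the returned links to the main (article) namespace, and incoming links do need a separate request via the list=backlinks module, since there was no prop equivalent at the time. A minimal Python sketch, assuming the requests library and the English Wikipedia endpoint; continuation handling is omitted for brevity.]

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def outgoing_article_links(title):
    """Internal links from `title`, restricted to the main namespace (plnamespace=0)."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "links",
        "plnamespace": 0,   # 0 = main/article namespace only
        "pllimit": "max",
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    return [link["title"] for link in page.get("links", [])]

def incoming_links(title):
    """Incoming links come from list=backlinks, a list module, so it is a second request."""
    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": title,
        "bllimit": "max",
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    return [link["title"] for link in data["query"]["backlinks"]]

print(outgoing_article_links("Albert Einstein")[:5])
print(incoming_links("Albert Einstein")[:5])
```

[As the log notes, there is no single-request way to get both directions here: outgoing links are a page prop, incoming links are a list query.]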
[15:21:58] http://commons.wikimedia.org/wiki/Special:Watchlist
[15:22:04] ah, now it's a little better
[15:24:32] Elsie: https://commons.wikimedia.org/wiki/Commons:OTRS/Noticeboard
[15:24:34] bah. slow
[15:43:22] can someone have a look at the internal cause of
[15:43:24] Request: POST http://zh.wikipedia.org/w/index.php?title=%E6%88%91%E7%84%A1%E6%B3%95%E6%88%80%E6%84%9B%E7%9A%84%E7%90%86%E7%94%B1&action=submit, from 10.64.0.127 via cp1019.eqiad.wmnet (squid/2.7.STABLE9) to 10.2.2.1 (10.2.2.1)
[15:43:24] Error: ERR_ZERO_SIZE_OBJECT, errno [No Error] at Wed, 17 Jul 2013 15:42:13 GMT
[15:44:23] it's triggered by putting -{H|=>zh-hans:SOMETHING;}-[[Category:Test]][[A]] on a page on zhwiki
[15:44:59] but I can't reproduce it locally
[15:47:58] Reedy?
[15:55:08] Err
[15:56:00] So, if squid is getting nothing, presumably the webserver has failed epically and died
[15:57:58] Nothing obvious in the fatal logs for that title
[15:58:05] 我無法戀愛的理
[15:58:06] 我無法戀愛的理由
[15:59:52] Elsie: what about the slow SSL connection?
[15:59:57] zh.wikipedia.org disables the user page link in the top nav?
[16:00:10] Steinsplitter: Dunno. It seems fine for me.
[16:00:31] Elsie: ask other users in #wikipedia-de, or ask Christoph Jackel (WMDE)
[16:00:40] maybe problems in Europe?
[16:00:51] Reedy: no reason at all for the web server's death?
[16:00:54] Certainly possible.
[16:01:18] I can reproduce liangent's error.
[16:01:21] gj Elsie
[16:01:28] liangent: Please file a bug in Bugzilla if there isn't one already.
[16:02:09] There's a few segfaults in the apache logs
[16:02:22] Interestingly, -{H|=>zh-hans:SOMETHING;}-[[Category:Test]] (without the [[A]]) saved fine.
[16:03:24] Reedy: possibly in the fss module I guess
[16:03:33] Steinsplitter: How many user reports of slowness are there currently?
[16:03:38] Steinsplitter: Is it only slow over HTTPS?
[16:03:46] There's nothing in the apache syslogs relating to that apache IP either
[16:03:50] yep, only https.
[16:03:52] Elsie: Same or different apache on your error?
[16:03:59] Elsie: -{H|=>zh-hans:SOMETHING;}-[[A]] works too iirc
[16:04:32] Request: POST http://zh.wikipedia.org/w/index.php?title=User:MZMcBride&action=submit, from 10.64.0.134 via cp1011.eqiad.wmnet (squid/2.7.STABLE9) to 10.2.2.1 (10.2.2.1)
[16:04:35] Error: ERR_ZERO_SIZE_OBJECT, errno [No Error] at Wed, 17 Jul 2013 16:04:24 GMT
[16:05:35] First IP is squid
[16:05:50] Second is..
[16:06:06] LVS?
[16:06:26] liangent: Can you reproduce it on testwiki?
[16:07:23] Reedy: nope. but you may have to set $wgLanguageCode='zh' to reproduce it
[16:07:57] Lack of global user preferences is annoying.
[16:08:24] Reedy: Trivial to reproduce on zh.wiktionary.org.
[16:08:46] Elsie: It was more about having it served from a known server so we could look at its logs
[16:08:56] rather than from rand()
[16:09:21] Change test.wikipedia.org's $wgLanguageCode to 'zh'. ;-)
[16:11:12] done
[16:11:37] Now, where did testwiki move to..
[16:11:49] mw1017
[16:12:09] Reedy: Request: POST http://test.wikipedia.org/w/index.php?title=Main_Page&action=submit, from 10.64.0.133 via cp1007.eqiad.wmnet (squid/2.7.STABLE9) to 10.64.0.47 (10.64.0.47)
[16:12:09] Error: ERR_ZERO_SIZE_OBJECT, errno [No Error] at Wed, 17 Jul 2013 16:11:55 GMT
[16:12:14] eedy@mw1017:~$ cd /var/log/apache2/
[16:12:14] -bash: cd: /var/log/apache2/: Permission denied
[16:12:16] God damn it
[16:12:32] So it's something to do with $wgLanguageCode.
[16:12:36] And presumably the parser.
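[Editor's sketch: one way to narrow "something to do with $wgLanguageCode ... and presumably the parser" down without saving pages is to throw the offending wikitext at action=parse on an affected wiki. The crash above was actually observed on save (action=submit), so this assumes the parse path alone exercises the same language-converter code; zh.wiktionary.org is used only because Elsie mentioned the bug reproduces there.]

```python
import requests

# The wikitext liangent reported as triggering ERR_ZERO_SIZE_OBJECT on zhwiki.
BAD = "-{H|=>zh-hans:SOMETHING;}-[[Category:Test]][[A]]"
API = "https://zh.wiktionary.org/w/api.php"  # any wiki with $wgLanguageCode = 'zh'

def parse_probe(text):
    """POST the snippet to action=parse and report whether a usable response came back."""
    resp = requests.post(API, data={
        "action": "parse",
        "text": text,
        "title": "Sandbox",   # hypothetical title, only used for link context
        "format": "json",
    })
    # A backend that segfaults mid-request typically surfaces here as an
    # empty body or a 5xx status rather than a normal JSON document.
    return resp.status_code, len(resp.content)

print(parse_probe(BAD))
print(parse_probe("[[Category:Test]][[A]]"))  # control case without the -{H|...}- rule
```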
[16:16:32] Jul 17 16:11:56 10.64.0.47 apache2[32304]: [notice] child pid 18621 exit signal Segmentation fault (11)
[16:16:32] Jul 17 16:12:15 10.64.0.47 apache2[32304]: [notice] child pid 18635 exit signal Segmentation fault (11)
[16:16:37] liangent: ^ Looks very suspicious
[16:17:16] I pinged parentxxxx
[16:17:30] apergos: hello, ok
[16:17:49] Reedy: then the next step?
[16:17:56] is it possible to gdb apache?
[16:18:12] We have wmerrors
[16:18:36] what's that?
[16:19:13] different error handling
[16:19:15] window manager errors
[16:19:18] * hoo has no idea
[16:19:32] tbh, I can't say I really know how to debug apache segfaults
[16:19:44] and I don't think I've enough permissions to do so
[16:19:49] Is it reproducible?
[16:19:50] Step 1: file a bug.
[16:19:53] Step 2: wait for Tim or Roan.
[16:20:06] hoo: Yes.
[16:20:18] Oh.. Err
[16:20:18] https://www.mediawiki.org/wiki/User:Reedy/MWRegexSegfault
[16:20:18] hoo: On Wikimedia wikis with $wgLanguageCode = 'zh'.
[16:20:48] Elsie: Do I have to do anything besides that to see segfaults? :D
[16:21:05] hoo: Add the string "-{H|=>zh-hans:SOMETHING;}-[[Category:Test]][[A]]" to a page and try to save the page.
[16:21:27] hoo: You should be able to trivially reproduce on test.wikipedia.org currently.
[16:21:32] As it's been temporarily set to 'zh'.
[16:21:57] Elsie: But I can't do any debugging on testwiki :P
[16:22:01] yup, needs a root
[16:22:01] (gdb) attach 19347
[16:22:02] Attaching to process 19347
[16:22:02] Could not attach to process. If your uid matches the uid of the target
[16:22:02] process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
[16:22:02] again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
[16:22:04] ptrace: Operation not permitted.
[16:22:15] liangent: File a bug. CC Tim
[16:22:35] Reedy: i can gdb for you
[16:22:52] akosiaris: Who are you? :-)
[16:23:10] Opsen!
[16:23:13] /hacker
[16:23:18] Choose whichever you want ;)
[16:23:41] Elsie: choose opsen for now
[16:23:50] > Alexandros Kosiaris
[16:23:51] Elsie: WORKSFORME -.-
[16:23:52] Nice to meet you.
[16:23:57] hoo: Prove it.
[16:24:07] hoo: liangent said it was a WFM locally too
[16:24:09] gimme some info to work with guys
[16:24:16] Reedy: I still doubt it's related to fss
[16:24:25] Svick: still see no parentxxxx so let's start
[16:24:27] anyone interested in installing it locally and trying?
[16:24:33] akosiaris: Go to . Click "edit source".
[16:24:47] akosiaris: If you want to gdb an apache process on mw1017
[16:24:50] Add the string "-{H|=>zh-hans:SOMETHING;}-[[Category:Test]][[A]]" and press save page.
[16:24:55] It should error immediately.
[16:25:03] Then reproduce it enough so it faults on that thread and we can backtrace
[16:25:12] *we can get a
[16:25:25] apergos: ok, i can now save revisions (it's in gerrit); i have encountered an interesting problem with saving comments
[16:25:32] I saw the commit :-)
[16:25:44] I lurk in a channel that notifies for gerrit commits...
[16:25:51] let's hear the problem
[16:26:19] Meh, I'm filing the bug.
[16:26:21] works on wmf10 too -.-
[16:26:39] hoo: On a Wikimedia wiki using $wgLanguageCode='zh'?
[16:26:48] Elsie: No, locally
[16:26:54] Locally is irrelevant. :P
[16:26:55] comment is at most 255 bytes, so i want to save its length as a single byte; the problem is, when a comment has an invalid UTF-8 sequence, that's replaced by a special replacement character
[16:27:01] hoo: are you familiar with php extensions?
[16:27:15] if so, can you install https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/php/FastStringSearch to your apache and try again?
[16:27:23] for example, in the first revision of this page http://ten.wikipedia.org/w/index.php?title=File:Kefalonia_wikipedia_10_presentation_1.JPG&action=history
[16:27:55] liangent: I think there's a deb in our apt repo
[16:27:57] #0 0x00007f6230670be4 in _zval_ptr_dtor () from /usr/lib/apache2/modules/libphp5.so
[16:27:57] #1 0x00007f6228760082 in _php_fss_close (rsrc=) at /root/fw-ports/php5-fss/php5-fss-0.0.1/fss.c:339
[16:27:57] #2 0x00007f623068efee in ?? () from /usr/lib/apache2/modules/libphp5.so
[16:27:57] #3 0x00007f623068cd71 in zend_hash_del_key_or_index () from /usr/lib/apache2/modules/libphp5.so
[16:27:57] #4 0x00007f623068f107 in _zend_list_delete () from /usr/lib/apache2/modules/libphp5.so
[16:27:58] yeap
[16:28:13] liangent: ^^ Looks like it is FSS related
[16:28:16] i mean, the comment is 255 bytes in the DB, but it's something like 257 bytes in the dump
[16:28:25] Reedy: Nobody's using debian :P
[16:28:29] I am
[16:28:30] * hoo hides
[16:28:35] so in those cases, i remove the special character at the end and save that
[16:28:40] ah, it was truncated at 255 bytes in the middle of a multibyte character, of course
[16:28:46] yes, that's legit
[16:28:48] yeah, exactly
[16:28:50] damn ... kicked out.. lol
[16:29:04] Welcome back. :-)
[16:29:11] Can you pastebin the full gdb output, akosiaris?
[16:29:12] figures, I know the person who did this presentation :-D
[16:29:29] Elsie: yeah i will... gimme a sec
[16:29:36] it's possible that for old revisions you might encounter invalid utf8 though
[16:29:47] both in comment and in text, you should allow for that
[16:30:15] i don't do anything with the UTF text, i just treat it as a sequence of bytes
[16:30:19] not via truncation but because some character snuck into the middle
[16:30:22] good
[16:30:38] Elsie: http://pastebin.com/5DrzRyMr
[16:31:13] but if mediawiki does something similar to another comment with 255 characters and invalid UTF-8 in the middle, i'm not sure what to do with that
[16:31:14] do mark your format with a version, because one day the title length or some other thing will change and then...
[16:31:37] yeah, i already do that
[16:32:27] this will be a matter of scanning the old comments
[16:32:33] liangent: Going to open a bug?
[16:32:34] because we are well behaved about them these days
[16:32:37] Reedy: I just did.
[16:32:38] https://bugzilla.wikimedia.org/show_bug.cgi?id=51551
[16:32:57] cheers
[16:33:11] I'll revert my debug hack to testwiki unless anyone has any objections
[16:33:53] if something like that happens, my code throws an exception and crashes, so we would learn about it as soon as that happened
[16:35:07] you could try looking at fr, nl, el for those, see if anything cropped up
[16:35:24] the question will be what you decide to do in that case, the revision is what it is
[16:35:53] I know that revision text with non utf8 in it is produced as is
[16:36:11] that shouldn't be a problem for me
[16:36:12] I would expect the same for the comment, and that what you saw is an artifact of the truncation
[16:36:33] worst case you use two bytes for the length :-/
[16:37:40] it's just a comment; wouldn't it be okay to truncate those bad comments too?
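[Editor's sketch: the 255-vs-257-byte discrepancy Svick describes falls out of how replacement works. A comment hard-truncated at 255 bytes can end mid-character, and decoding that with a replacement character swaps the dangling fragment for the three-byte U+FFFD, so the dumped comment comes out slightly longer than the DB field. A minimal Python illustration of the effect, not the dump code itself.]

```python
# Illustration only: a 255-byte comment that was hard-truncated mid-character.
comment = ("x" * 254 + "é").encode("utf-8")   # 256 bytes: 254 ASCII + a 2-byte "é"
truncated = comment[:255]                     # DB-style truncation cuts "é" in half
assert len(truncated) == 255

# Decoding with a replacement character swaps the dangling byte for U+FFFD,
# which re-encodes as 3 bytes, so the round trip yields 257 bytes.
repaired = truncated.decode("utf-8", errors="replace").encode("utf-8")
print(len(repaired))                          # 257

# Dropping the dangling fragment instead keeps only the valid 254-byte prefix.
stripped = truncated.decode("utf-8", errors="ignore").encode("utf-8")
print(len(stripped))                          # 254
```

[Whether to keep the replacement character or drop the fragment is exactly the judgement call discussed next.]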
[16:38:31] no
[16:38:35] ok
[16:39:08] you could put a replacement char in there that's one byte, if you could find something appropriate I guess
[16:39:25] but best to just keep it
[16:40:04] someone later might want or be able to correct those, we shouldn't undermine that
[16:41:30] so what about the invalid character at the end? if i truncate that and somebody wanted to correct them based on a dump, they wouldn't be able to do that either
[16:43:12] well this is not correctable, it's been truncated in the db
[16:43:18] so it's lost data
[16:43:54] but if someone snuck in an iso 8859-x char in the middle of a comment then that could be converted to utf8
[16:43:57] Is there anyone available that can comment on the current status % of the Oauth project in the #wikimedia-office discussion?
[16:44:20] eg if I look at fr wp I might suspect that all those are iso-8859-1 and try to make utf8 out of it successfully
[16:44:35] perfect timing..
[16:44:40] csteipp: I was just looking for you..
[16:45:00] csteipp: Can you comment on the current status % of the Oauth project in the #wikimedia-office discussion?
[16:45:24] if you want to know what mediawiki does with such a thing..
[16:45:38] the thing to do is to overwrite some comment in your db with one that has such a character in the middle, then
[16:45:41] dump it :-)
[16:46:11] Special:Export ought not to munge it any differently than dumpBackup
[16:46:52] Technical_13: I can if you let me know what context you're talking about.
[16:48:50] hmm, that should be simple to do, i'll try that
[16:49:03] lemme know what you find :-)
[16:52:06] what else have you got going on?
[16:52:46] now i'm going to make sure i can read what i saved and then create the makefile
[16:52:52] cool
[16:53:18] make dumps
[16:53:56] ah, note that it is possible for two revisions to have the same text id (I dunno if this is going to matter for your layout, but it happens)
[16:54:17] make: *** No rule to make target `dumps'. Stop.
[16:54:43] why does the current stub dump even contain text id?
[16:54:55] because the page content dumps do not
[16:55:25] we use the text id from the stub to request the contents
[16:55:35] Svick: the purpose of the stub dumps is to pull everything out of the page+revision tables as quickly as possible, then a second run pulls the text via text id
[16:55:41] without text id, the stub dumps would be pointless
[16:55:54] liangent: you always find the weirdest bugs :D
[16:56:18] for special:export format, there are no stubs, so the text id and page content are listed together
[16:56:31] in the two phase dumps, they are written just in the stubs, where we will make use of them
[16:56:53] whether we should have written them into the content dups to, to save users some headache, is another matter...
[16:56:57] *too
[16:56:59] *dumps
[16:57:34] well in theory they're an internal implementation detail that means nothing if you're not doing the second dump pass :)
[16:57:40] they shouldn't be exposed at all probably
[16:57:44] in regular dumps
[16:57:49] I think they absolutely should be exposed
[16:57:53] along with page ids and rev ids
[16:58:05] someone who wants to create a faithful mirror would preserve those
[16:58:05] they're like inode numbers on a unix filesystem
[16:58:09] your .tar.gz doesn't need em
[16:58:10] so you think it makes sense to keep doing it in two phases, even with incremental dumps?
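[Editor's sketch to make the two-phase flow above concrete: a stub dump carries page and revision metadata plus a text id per revision (as an attribute on an empty <text/> element), and a second pass resolves those ids to content; duplicate text ids simply mean identical text. A rough Python sketch of the first pass, assuming a local stub file named stubs.xml (hypothetical) and the standard export XML layout.]

```python
import xml.etree.ElementTree as ET

def collect_text_ids(stub_path):
    """First pass over a stub dump: map revision id -> text id.

    Stub dumps carry an empty <text id="..." bytes="..."/> per revision;
    a second pass resolves those ids to the actual content.
    """
    rev_to_text = {}
    stack = []          # element names from the root down to the current element
    rev_id = None
    for event, elem in ET.iterparse(stub_path, events=("start", "end")):
        tag = elem.tag.rsplit("}", 1)[-1]       # drop any XML namespace prefix
        if event == "start":
            stack.append(tag)
            continue
        stack.pop()
        if tag == "id" and stack[-2:] == ["page", "revision"]:
            rev_id = int(elem.text)             # revision id, not the page or contributor id
        elif tag == "text" and stack[-2:] == ["page", "revision"]:
            rev_to_text[rev_id] = int(elem.get("id"))
        elif tag == "page":
            elem.clear()                        # keep memory flat on large dumps
    return rev_to_text

print(len(collect_text_ids("stubs.xml")))       # stubs.xml is a placeholder path
```

[The second pass would then batch those text ids against the text table or external storage and write the content dump; that part is omitted here.]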
[16:58:22] no, but your db might want them
[16:58:55] that depends on you, if you use the existing stubs for the second phase you don't have to worry about deleted revs etc
[16:59:09] but you can't use it just as is obviously (it's not incremental)
[16:59:30] right
[16:59:55] there are probably plenty of folks who want just the metadata, in that sense providing something equivalent to stubs makes sense
[17:00:17] the dup text ids do mean that the text content is identical btw, it's not same text id and different content
[17:00:17] mmmm, metadata :)
[17:00:33] * apergos piles some on a plate and slides it down the bar to brion
[17:01:22] * brion NOM NOM NOM
[17:01:28] whether you do it in two passes is another matter; you can create two files in one pass if that works better for you
[17:01:40] or 6
[17:01:53] (current articles, current all, history x 2 )
[17:02:23] yeah, i think that makes more sense, because it means going to the DB only once for everything
[17:02:32] well *cough*
[17:02:46] so you go once to get the user/rev/blah blah info out of the db
[17:03:08] but then you look at the so-called text contents and it's a pointer to an external cluster
[17:03:23] on another server, so (for us) it's a separate call anyways
[17:03:24] thanks Reedy for more enotifs :D
[17:03:52] separate db connection, separate db query
[17:04:55] right
[17:05:25] that's us. other folks may have installations with all their text in the local db
[17:05:44] but if they do, it's likely not to be as ginormous... so however it worked for us would be good enough for them :-P
[17:06:00] Nemo_bis: it's from user reports on zhwiki
[17:06:16] though users just say "hey I can't edit this page"
[17:06:26] :)
[17:07:02] more than you probably want to know about it: https://wikitech.wikimedia.org/wiki/External_storage
[17:07:24] but this, on the other hand, is very interesting if not quite current: https://wikitech.wikimedia.org/wiki/Text_storage_data
[17:07:37] knowing the format we store that stuff in is going to be useful to you later when optimizing for our setup
[17:09:34] yeah, i briefly looked at the code that handles that before; i noticed that the revisions are compressed together in groups and when one revision from a group is accessed, the others stay in cache
[17:09:37] brion: you should definitely chime in with thoughts, suggestions, hints...
[17:10:40] diff history blob or some such, though I need to check what actually is happening with the current config
[17:12:54] ES is all crazy nowadays, i don't even understand it all ;)
[17:14:34] did anyone ever? :-D
[17:15:23] btw brion, usually at this time of day (the last hour) we're on here chatting about the new dumps, so please feel free to drop by and add some words of wisdom
[17:15:52] Svick: anything else on your plate that we should talk about today?
[17:16:17] nothing else, i think
[17:16:22] ok
[17:16:42] I'm eager to be a guinea pig for linux builds :-)
[17:16:59] right, that should be soon
[17:17:06] :-)
[17:17:33] ok, I'm gonna get going; tonight I actually have to run around to some stuff, but in case anything crops up I'll check in later
[17:18:11] ok, see you tomorrow
[17:18:16] see ya
[17:18:50] ^d: on https://gerrit.wikimedia.org/r/#/admin/projects/pywikibot/compat the "anonymous http" link is broken
[17:19:27] <^d> Hrm
[17:19:59] <^d> legoktm: For me too. Looking...
[17:20:01] i'm getting the same, oddly
[17:20:03] for another repo
[17:20:04] error: RPC failed; result=22, HTTP code = 503
[17:20:16] <^d> Really?
[17:20:17] <^d> Dammit.
[17:20:21] <^d> I thought we'd solved that.
[17:21:00] <^d> Hmmm, I can't ssh to manganese.
[17:21:09] <^d> Oh there we go.
[17:21:24] apergos: spiff, i'll poke my head in from time to time :)
[17:21:31] the link on git.wikimedia.org still works though
[17:23:50] <^d> Yeah, git.wm.o is a different box.
[17:25:19] ah, ok
[17:25:44] <^d> qchris: Yo, so we're hitting those 503 errors & timeouts again, nothing in the error logs.
[17:26:02] <^d> Memory & CPU usage are where they normally are.
[17:28:22] ^d: Hi. ganglia shows high load.
[17:28:40] ^d: and a really high % of wait
[17:28:57] <^d> I was just looking at top, hadn't checked ganglia.
[17:29:56] load ~3, and ~20% wait
[17:31:01] lots of waiting uploads :-/
[17:31:43] <^d> I did a jstack dump, the hanging clones are definitely waiting on the disk.
[17:32:06] Was just about to suggest that :-)
[17:32:34] <^d> https://gerrit.wikimedia.org/jstack.out
[17:32:48] We could kill the waiting tasks, but I'd rather restart gerrit for now.
[17:33:29] <^d> I wonder if this is related to our disk problem.
[17:33:33] <^d> (Which we're replacing tomorrow)
[17:33:42] We're having disk problems :-/
[17:33:46] That sounds related :-)
[17:33:54] <^d> One of the disks in the raid array failed.
[17:34:03] <^d> We're replacing it tomorrow morning first thing.
[17:35:26] Does the server (not gerrit) log show disk problems as well in the last few hours?
[17:38:48] <^d> Let me check. And let's take this to #-operations. We may move ahead with swapping the disk today.
[17:40:48] ok
[18:40:18] Elsie: oh no, it happened again :-(
[18:41:45] http://imgur.com/Kl0VtV7 Elsie
[18:42:00] !log payments cluster updated from 2a9169765b94e0 to 855aa0f8c0bbc5
[18:42:11] Logged the message, Master
[18:44:36] apparently someone noticed it before me: https://en.wikivoyage.org/wiki/Wikivoyage:Travellers'_pub#Related_sites_title_appearing_large
[18:47:00] so few :( http://web.archive.org/web/*/http://etherpad.wikimedia.org/*
[18:47:26] odder: yes, we have the same problem on it.quote
[18:47:38] and presumably on it.wiki as soon as it gets updated
[18:48:14] I was told to try this but it didn't really help https://it.wikiquote.org/w/index.php?title=MediaWiki:Monobook.js&diff=prev&oldid=579455
[20:11:22] Nemo_bis: is Commons over HTTPS very slow today?
[20:15:18] unzo
[20:27:09] We have 640 unique links, internal or external, to etherpad.wikimedia.org from Wikimedia wikis. Sounds horribly low.
[20:30:08] Nemo_bis: sounds horribly high, IMO. should be 0
[20:41:54] YuviPanda: sure, but realistically there are thousands of pads and pretending they don't exist by hiding them better doesn't improve things
[20:42:36] hmm, maybe I should write a monkey script that randomly goes and bumps those off. we should've also called it ephemeral pad, maybe :)
[20:42:36] How do you know there are thousands of pads?
[20:44:46] I don't "know" it, I guess
[20:46:08] Right.
[21:01:25] Nemo_bis: does "unzo" mean yes?
[21:06:37] I don't know
[21:07:36] ah, *learned a new word* https is really slow.
[22:42:47] csteipp, AaronSchulz: it seems that ?action=edit is preserved on the post-authentication redirect, but ?veaction=edit is not
[22:43:00] Is that SUL2's fault, or VE's ;)
[22:44:37] speaking of sul2, what's the current state of whether it should be possible to make a new account that's not global?
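[Editor's sketch on the ?veaction=edit issue just above: MediaWiki's own login flow carries the caller back via returnto and returntoquery parameters, and the reported symptom is that only ?action= survived the cross-wiki hop while ?veaction= was dropped. The snippet below is a generic illustration of preserving the whole original query through such a round trip; it is not the CentralAuth/SUL2 implementation.]

```python
from urllib.parse import urlencode

def login_url_preserving_query(wiki_base, title, query_params):
    """Build a Special:UserLogin link that carries the *whole* original query.

    `query_params` is whatever was on the URL the user started from,
    e.g. {"veaction": "edit"}; it is packed into returntoquery so the
    post-login redirect can restore it verbatim.
    """
    login_query = urlencode({
        "title": "Special:UserLogin",
        "returnto": title,
        "returntoquery": urlencode(query_params),
    })
    return f"{wiki_base}/w/index.php?{login_query}"

print(login_url_preserving_query("https://en.wikipedia.org",
                                 "Sandbox", {"veaction": "edit"}))
```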
[22:45:28] csteipp: StevenW pretty sure that's because ?action is preserved, but no other parameters are [22:45:48] must be sul2's fault [22:45:58] err I don't know why I pinged you there csteipp, sorry [23:05:44] Prodego: because StevenW pinged him? :) [23:06:31] maybe... let's go with that [23:07:47] Hi, we're reunning into some bugs on fr.wp [23:07:51] -e [23:09:09] https://fr.wikipedia.org/wiki/Le_Corsaire_noir [23:09:25] Some of us have a "__DISAMBIG__" on top of the page [23:10:04] I don't see it... [23:10:11] purge the page? [23:10:50] yep, it worked alright [23:11:06] but I can't remember ever seeing that bug
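[Editor's sketch: the "purge the page" fix that resolved the fr.wikipedia report can also be done through the API's action=purge module, which asks MediaWiki to drop its cached rendering and re-render the page. A small sketch, assuming an anonymous purge is accepted; some configurations require a POST or an extra confirmation.]

```python
import requests

def purge(api_url, title):
    """Ask MediaWiki to discard the cached rendering of `title` and rebuild it."""
    resp = requests.post(api_url, data={
        "action": "purge",
        "titles": title,
        "format": "json",
    })
    return resp.json()

print(purge("https://fr.wikipedia.org/w/api.php", "Le Corsaire noir"))
```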