[01:31:36] I don't know if this is the right channel but I have been unable to do revision deletion on Commons for hours now. I keep getting a "entire web request took longer than 60 seconds and timed out" PHP fatal error. Is this the right place to report such things?
[01:33:12] probably not - complete inability to revdelete things should be reported in phabricator as a security issue
[01:35:20] Mkay. I asked around and other projects seem to have no trouble doing it. Commons logs show that revdel hasn't been performed for about 12 hours now so I don't know if it is just me or not.
[01:38:32] Any specific tags besides security, Krenair?
[01:39:15] mediawiki-revision-deletion
[01:39:35] performance
[01:40:11] would be a good start
[01:40:30] Done
[10:33:18] If I undelete a number of pages at once, I get the following error: [XMrEMQpAMDwAAGzj1h4AAAAW] Caught exception of type Wikimedia\Rdbms\DBQueryError
[10:33:33] (the first bit varies, of course)
[10:34:13] From looking through phabricator, I think maybe it's related to T197464, though it seems similar to the resolved T176101 and T207419
[10:34:13] T207419: sql error: Error: 1048 Column 'fa_description_id' cannot be null - https://phabricator.wikimedia.org/T207419
[10:34:14] T176101: Cannot delete File:MKC,S.jpg on zhwiki due to DBQueryError - https://phabricator.wikimedia.org/T176101
[10:34:14] T197464: Fatal error when submitting edit and deletion/undeletion on Commons from "Error: 1205 Lock wait timeout exceeded" (WikiPage::lockAndGetLatest) - https://phabricator.wikimedia.org/T197464
[10:34:52] Am I on the right track at least?
[13:09:46] This is all internal_api_error_DBQueryError stuff
[13:16:11] McJill: hmm?
[13:20:24] In reference to my above comments
[13:20:48] Oh, you weren't in. >If I undelete a number of pages at once, I get the following error: [XMrEMQpAMDwAAGzj1h4AAAAW] Caught exception of type Wikimedia\Rdbms\DBQueryError
[13:21:07] >From looking through phabricator, I think maybe it's related to T197464, though it seems similar to the resolved…
[13:21:07] T197464: Fatal error when submitting edit and deletion/undeletion on Commons from "Error: 1205 Lock wait timeout exceeded" (WikiPage::lockAndGetLatest) - https://phabricator.wikimedia.org/T197464
[13:21:33] etc etc
[13:21:33] Query: INSERT IGNORE INTO `page` (page_namespace,page_title,page_restrictions,page_is_redirect,page_is_new,page_random,page_touched,page_latest,page_len,page_id) VALUES ('2','Amorymeltzer/sandbox/5','','0','1','0.206380669357','20190502101929','0','0','60635647')
[13:21:33] Function: WikiPage::insertOn
[13:22:56] McJill: have you tried it a second time? I had the same thing with a page move on enwiki and I tried it a second time and it worked
[13:26:12] If that was directed at me Reedy, sorry, not sure what you mean. Zppix: Yes, repeated attempts get there, but this isn't one page, it's dozens of pages with a fail rate somewhere between 20%-80% even after two attempts each
[13:26:21] McJill: It's the error you're getting
[13:26:27] So you can work out if it's a dupe of the tasks you suggested
[13:31:53] Ah okay, thanks! I'm no SQL expert, but won't Quarry kick that back?
[15:05:20] I didn't say to run the query yourself....
[17:26:09] fair enough
[18:44:34] tools-sgebastion-07 is pretty slow, is NFS load too high?
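The "try it again" advice for the undeletion failures earlier in this log amounts to a loop over the action API's undelete module with a back-off on the transient internal_api_error_DBQueryError responses. The following is a rough sketch only, not the tool anyone in the channel was using; the API URL, page titles, and an already-authenticated session stored in cookies.txt are all assumptions.

```php
<?php
// Sketch: batch-undelete with a simple retry, as suggested in the log above.
// Assumes an already-authenticated session whose cookies live in cookies.txt
// (e.g. from a prior action=login with a bot password). Illustration only.

$api = 'https://commons.wikimedia.org/w/api.php';            // placeholder wiki
$pages = ['User:Example/sandbox/1', 'User:Example/sandbox/2']; // placeholder titles

function apiCall(string $api, array $params, bool $post = false): array {
    $params['format'] = 'json';
    $ch = curl_init();
    if ($post) {
        curl_setopt($ch, CURLOPT_URL, $api);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    } else {
        curl_setopt($ch, CURLOPT_URL, $api . '?' . http_build_query($params));
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt'); // assumed existing session
    curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');
    $body = curl_exec($ch);
    curl_close($ch);
    if ($body === false) {
        return [];
    }
    return json_decode($body, true) ?? [];
}

// Fetch a CSRF token for the undelete POSTs.
$tokenData = apiCall($api, ['action' => 'query', 'meta' => 'tokens', 'type' => 'csrf']);
$token = $tokenData['query']['tokens']['csrftoken'] ?? '';

foreach ($pages as $title) {
    for ($attempt = 1; $attempt <= 3; $attempt++) {
        $result = apiCall($api, [
            'action' => 'undelete',
            'title'  => $title,
            'reason' => 'Batch undeletion',
            'token'  => $token,
        ], true);
        $code = $result['error']['code'] ?? null;
        if ($code === null) {
            break; // undeletion succeeded
        }
        if ($code !== 'internal_api_error_DBQueryError') {
            fwrite(STDERR, "$title: $code\n"); // not the transient error from the log
            break;
        }
        sleep(5 * $attempt); // back off before retrying the lock-wait timeout
    }
}
```

Retrying like this only papers over the lock-wait timeouts; the underlying database contention is what the linked Phabricator tasks are about.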
[19:36:11] Is there a channel for old-timers to answer questions about ancient history from 2008?
[19:36:47] this is one of the older channels
[19:36:57] I'm wondering why we stopped providing static HTML dumps
[19:37:06] It seems like it's be really compute-intensive
[19:37:58] it is
[19:38:28] there is a plan to provide html dumps (parsed and expanded wikitext) from restbase
[19:38:32] but that's not ready yet
[19:39:43] there's a couple open tickets actually, I just can't get to them
[19:39:51] huh!
[19:39:51] not enough cycles in the day/week/month/year
[19:39:55] I'll look at it
[19:40:04] there's some code even
[19:40:20] s/it's be/it'd be/
[19:40:37] probably the gerrit patch is on one of the tickets
[19:41:00] https://phabricator.wikimedia.org/T133547
[19:41:08] it would only be for current revisions, mind you
[19:42:09] Right, I hear you
[19:42:12] I see https://github.com/wikimedia/htmldumper too
[19:43:42] yeah, there were some issues with that approach
[19:44:04] but the idea (use the restbase api) is fine at least as long as there is restbase
[19:44:53] Sure
[19:45:21] currently in the dumps repo is html_dumps.py as a type of 'misc dump'
[19:45:41] (see also: incr_dumps.py for adds-changes dumps)
[19:45:52] but it's not working completely yet
[19:46:04] so that's what we'd want to fix up
[19:46:35] deleted pages need to go away in new dumps
[19:46:56] and they need to be able to be broken into pieces if they get too large (wikidata, enwiki)
[19:47:09] both of those were issues with the sqlite approach
[19:48:30] This ticket is amazing
[19:48:32] https://phabricator.wikimedia.org/T17017
[19:50:44] 2008, before i was even here
[19:50:56] but yes the dumps produced were one directory per article
[19:51:08] so you can see how problematic that would be today
[19:54:01] and note that there are two sorts of html dumps people are talking about
[19:54:14] one is 'here's the expanded wikitext as html with all the templates expanded etc'
[19:54:28] the other is 'here's the full on page with some skin or other and etc'
[19:54:38] i only ever was working on the first one
[20:02:00] does anyone know if there are incompatibilities between a responsive skin and user scripts?
[20:02:32] (responsive on mobile)
[20:06:46] You can simulate the behaviour by making your browser window very small
[20:07:16] Some elements vanish when the window is small, some elements get fewer columns
[22:13:58] (heading out but) Thanks Reedy
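The "use the restbase api" idea discussed above boils down to pulling the parsed HTML of each page's current revision from the public page/html REST endpoint and writing one file per page, rather than one directory per article as in the 2008-era dumps from T17017. A minimal sketch, assuming the public endpoint, a hand-picked title list, and a local output directory (all placeholders):

```php
<?php
// Sketch of the RESTBase-based approach mentioned above: dump the rendered
// HTML of current revisions via the public page/html endpoint.
// Wiki, title list and output directory are placeholders.

$restBase = 'https://en.wikipedia.org/api/rest_v1/page/html/';
$titles = ['Main_Page', 'Wikipedia']; // a real dump would enumerate all pages
$outDir = './html-dump';

if (!is_dir($outDir)) {
    mkdir($outDir, 0755, true);
}

// Identify the script to the API; placeholder User-Agent string.
$ctx = stream_context_create([
    'http' => ['header' => "User-Agent: html-dump-sketch/0.1\r\n"],
]);

foreach ($titles as $title) {
    // rawurlencode() also encodes slashes in subpage titles.
    $url = $restBase . rawurlencode($title);
    $html = file_get_contents($url, false, $ctx);
    if ($html === false) {
        fwrite(STDERR, "Failed to fetch $title\n");
        continue;
    }
    // One file per page, with slashes made filesystem-safe.
    file_put_contents($outDir . '/' . str_replace('/', '%2F', $title) . '.html', $html);
}
```

A real dump run would also have to drop deleted pages and split very large wikis (wikidata, enwiki) into pieces, the two problems called out above with the earlier sqlite approach.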
[23:41:11] The manual at https://www.mediawiki.org/wiki/Manual:$wgMaxCredits mentions a "significant" (in italics) performance impact on large wikis
[23:41:33] Is this really true if caching is done properly, e.g. for Wikipedias?
[23:42:14] It's not cached properly, currently, presumably
[23:42:14] Imho it shouldn't matter if your cache is properly set up, but I haven't had personal experience ToBeFree
[23:42:26] "Note that this will require 2-3 extra database hits for every single page view"
[23:42:33] That doesn't sound cached
[23:42:41] it's about the German Wikipedia, which is currently considering adding an Xtools link for attribution to the footer
[23:43:15] I personally mentioned that I'd have this variable set to -1 instead and wonder if I'm suggesting something impossible
[23:43:57] ( discussion about adding an Xtools link is at https://de.wikipedia.org/wiki/Wikipedia:Meinungsbilder/Link_auf_Autorenstatistik_bei_jedem_Artikel )
[23:44:14] 'wgMaxCredits' => [
[23:44:14] 'default' => 0,
[23:44:14] 'testwiki' => -1, // T130820
[23:44:14] 'wikivoyage' => 10,
[23:44:14] ],
[23:44:15] T130820: Enable action=credits on test and or beta - https://phabricator.wikimedia.org/T130820
[23:44:33] Reedy: not even by Varnish for unregistered users?
[23:44:33] oh, interesting
[23:45:15] Varnish/similar cache might help
[23:45:24] But that doesn't stop the many queries for logged in users etc
[23:46:45] * ToBeFree nods
[23:47:36] a "database hit", I wonder, can mean anything from a complicated dynamic "retrieve all authors" query that takes a minute to finish, to "get this from cache"
[23:48:13] I guess you're doing a DISTINCT query against revisions... Potentially checking thousands of rows
[23:48:21] oh god
[23:48:32] should I open a ticket about adding a cache for this?
[23:49:16] Might have a quick look to see if there is one
[23:49:36] There's probably a few improvements of various sorts that can be done
[23:49:42] Just someone with enough interest to work on it
[23:52:57] This may be the existing task https://phabricator.wikimedia.org/T49722
[23:53:46] https://phabricator.wikimedia.org/T49723
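For comparison with the production config pasted above, this is roughly what the -1 setting under discussion would look like in a single wiki's LocalSettings.php. It is a sketch of the documented values, not a recommendation for Wikimedia production:

```php
<?php
// LocalSettings.php sketch for the setting discussed above; an illustration
// of the documented values, not the Wikimedia production configuration.

// $wgMaxCredits controls the author credits shown in the page footer:
//   0  = disabled (the 'default' in the production snippet above)
//   10 = show up to ten authors (the 'wikivoyage' value above)
//   -1 = show all authors, which is where the manual's "2-3 extra database
//        hits for every single page view" warning applies
$wgMaxCredits = -1;
```

As noted in the discussion, a front-end cache such as Varnish only absorbs that cost for anonymous page views; logged-in views would still pay for the author lookup on every request.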