[00:13:56] Can somebody tell me if the editprotected user right allows for page moves that are fully, or move, protected? [00:14:39] yes [00:14:46] You sure? [00:14:48] AFAIK it bypasses any page protection [00:15:00] To edit it though. [00:15:11] It won't bypass cascade though. [00:16:32] ... Pretty sure it should... [00:18:16] Krenair, should what? [00:18:39] bypass cascade, or move? [00:18:44] editprotected should be able to bypass cascade... [00:18:52] No it shouldn't [00:19:23] Krenair, https://www.mediawiki.org/wiki/Manual:User_rights/id#Daftar_hak [00:21:28] what is the transition team? [00:24:40] Betacommand, http://lists.wikimedia.org/pipermail/wikimedia-l/2013-March/124851.html [00:37:38] TimStarling: Any objections if I move interwiki.cdb (and trusted-xff.cdb at the same time) to wmf-config somewhat like was going to happen for git deploy? The format is pretty stable, and it gets updated relatively infrequently.. [00:38:11] no objections [00:53:44] TimStarling, can you tell me exactly what the editprotected right does? [01:54:11] Cyberpower678: lets you edit protected pages that you could unprotect yourself [01:54:41] TimStarling, huh? [01:55:04] Can you move protected pages? [01:56:08] yes [01:56:15] editprotected doesn't affect that [01:56:36] Cascade protection? [01:56:39] https://www.mediawiki.org/wiki/Manual:User_rights/id#Daftar_hak [01:57:59] editprotected doesn't apply to pages with cascade protection [01:58:17] That's good. [01:58:21] Thanks. [02:01:46] Anyone know where the doxygen config is? [02:01:59] maintenance folder [02:02:10] maintenance/Doxyfile [02:02:51] thanks [02:18:27] Sorry! This site is experiencing technical difficulties. [02:18:28] Try waiting a few minutes and reloading.
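The editprotected semantics worked out in the exchange above (it bypasses ordinary edit protection, does not affect move protection, and does not apply under cascade protection) can be condensed into a toy model. This is not MediaWiki's actual permission code — the function and field names are illustrative only:

```python
# Toy model of the behaviour described in the discussion above; NOT
# MediaWiki's real permission-checking code, just its logic as the
# participants describe it.

def can_edit(user_rights, page):
    """Can a user with these rights edit the page?"""
    if page["cascade_protected"]:
        # editprotected does not apply to cascade-protected pages
        return False
    if "edit" in page["protections"]:
        # ordinary edit protection is bypassed by editprotected
        return "editprotected" in user_rights
    return True

def can_move(user_rights, page):
    """Move protection is a separate restriction; per the discussion,
    editprotected doesn't affect it."""
    return "move" not in page["protections"]
```
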
[02:18:28] (Cannot contact the database server: Unknown error (10.64.16.6)) [02:19:10] (Cannot contact the database server: Unknown error (10.64.16.6)) [02:19:12] OH GOD [02:19:14] PANIC [02:19:15] EVERYONE [02:19:33] notpeter: ^ [02:19:36] "(Cannot contact the database server: Unknown error (10.64.16.6))" on https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/Newsroom/Suggestions [02:19:42] db1017.eqiad.wmnet [02:19:47] TimStarling: ^ [02:19:55] The most appropriate response is anarchy [02:19:56] Wheee [02:19:57] All in favor? [02:20:00] enwiki master [02:20:11] looking [02:20:13] it's back [02:20:14] thank god [02:20:40] yup, fine now [02:20:48] looks to be doing a lot of sleeping [02:21:27] that's normal [02:28:53] maybe we should deploy that PoolCounter thing now [02:36:00] was anyone else just logged out of gerrit and unable to log back in? [02:36:33] I can't view any change, it just says "session expired" [02:37:01] I am not having that problem [02:37:25] TimStarling: do you have multiple tabs open? [02:37:39] i've had that issue before, Can't remember what I did [02:37:44] deleting my cookies fixed it [02:38:29] I always have multiple tabs open [03:01:41] Error connecting to 10.64.16.6: User 'wikiadmin' has exceeded the 'max_user_connections' resource (current value: 80) [03:01:51] wtf? [03:05:20] [0303][tstarling@fenari:~]$ dsh -g job-runners ps -C php | wc -l [03:05:20] 265 [03:05:56] [0304][tstarling@fenari:~]$ dsh -g job-runners ps -C php -o args | grep enwiki | wc -l [03:05:56] 45 [03:08:31] TimStarling: i added that back to limit the number of job runners that can concurrently work on enwiki [03:08:53] well, can we limit it in some way that doesn't flood the error log? [03:09:23] won't the job runners just go into a tight infinite loop trying to connect to enwiki? 
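One answer to the tight-reconnect-loop worry just raised is the idea floated later in this log: cap the number of concurrent runners per wiki, and when a wiki's pool is full, move on to a different wiki instead of hammering the same DB master. A rough sketch of that scheduling decision — hypothetical code, not the actual jobs-loop.sh logic:

```python
# Sketch of the "pick a different wiki when the pool is full" idea;
# capping concurrent runners per wiki keeps a burst of work on one wiki
# from exhausting that wiki's max_user_connections budget.

from collections import Counter

def pick_wiki(pending, running, max_per_wiki=5):
    """Return the next wiki to start a runner for, or None if every
    pending wiki is already at its concurrency limit (in which case the
    caller should sleep/back off instead of spinning on reconnects)."""
    counts = Counter(running)
    for wiki in pending:
        if counts[wiki] < max_per_wiki:
            return wiki
    return None
```
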
[03:10:03] well, tight finite loop anyway [03:13:04] in the one minute from 7:56:00 to 7:56:59, we got 2149 of these errors [03:13:25] TimStarling: could the job runner move on to the next wiki on err 1226 [03:13:45] I don't think so [03:13:49] it's not like wikiadmin is just for job runners [03:14:46] there are a lot of maintenance scripts that can't tolerate a connection error so easily [03:16:10] suggestion? [03:18:21] there are hackish things we can do to improve the current situation, and there are less hackish things we can do [03:19:31] have you seen https://gerrit.wikimedia.org/r/#/c/54884/3/MWSearch_body.php,unified ? [03:20:02] we can wrap things in PoolCounter now in just 5 lines of code [03:22:21] ooh. that would be far less hackish [03:23:16] that would even be helpful with the redis jobqueue, where workers will be able to pull jobs off the queue much faster [03:25:22] but there is no way for jobs-loop.sh to even tell if a run failed or not, as it is currently written [03:25:44] maybe we should make a rule to never write anything in shell script [03:27:28] jobs-loop.sh was pretty simple when I wrote the first version of it [03:27:31] but look at it now [03:27:42] I would never have written the current thing in shell script if I was starting from scratch [03:28:30] bash arrays! [03:28:53] Lolcode! [03:29:45] the problem is, as it is now written, it is heading down a path where a decreasing proportion of our developers can look at it and retain their sanity [03:30:32] it's starting to look like mediawiki templates? 
:) [03:30:53] and +1 on the no shell scripts rule [03:34:00] anyway, with PoolCounter around the main part of runJobs.php, it could be configured to wait for a while before giving up [03:34:05] which would make it not be a tight loop [03:34:12] it would just waste 75% of the process count [03:39:48] i suppose it would be nice to implement PoolCounter in a runjobs.sh replacement, where once the pool was full, it would pick a different wiki for new processes [03:43:20] yes [03:44:04] currently it's configured to use up to 320 processes on eqiad alone [03:45:43] 192 in the default queue and 128 in the prioritised queue [03:46:34] I wonder if that figure was selected with any knowledge of the DB constraints [03:46:56] i'm guessing no :) [03:47:39] are we generally happy with poolcounter as is? [03:47:47] my idea for an immediate fix is to try to find a process count which will allow the job queue to run quickly, but won't take the servers down if all processes use the same DB server [03:48:06] i'm thinking vs. e.g. zookeeper. which is now getting some use. with solr [03:48:23] because I think it's pretty normal for work to be concentrated in a single wiki from time to time [03:48:38] jeremyb_: I am happy with it [03:48:48] i don't know very much about it [03:50:33] https://www.mediawiki.org/w/index.php?title=Extension:PoolCounter&diff=667344&oldid=502330 [03:50:38] first edit in over a year :) [03:51:08] https://gerrit.wikimedia.org/r/#/c/27437/ [03:52:24] that's notpeter increasing the process count from 5 to 12 [03:52:42] i.e. from 80 to 192 cluster-wide [03:53:28] https://gerrit.wikimedia.org/r/#/c/50877/ [03:53:40] and that's Aaron adding the other 128 [03:54:11] does anything that's not mediawiki currently use poolcounter? [03:54:17] jeremyb_: no [03:54:33] and lack of activity implies that it is perfect, right? [03:55:49] on the page i edited? no, just something i noticed. 
no implication at all [03:56:52] i think the increase from 5 to 12 in october is around the time i first added a max_user_connections to wikiadmin in tampa in response to issues [03:57:06] i should have just caught that change and reverted [03:58:58] so you think it could be set to, say, dprioprocs=5 and iprioprocs=2? [03:59:18] so, my guess from reading on poolcounter a little just now is that it's kind of similar to memcache in terms of not having automated failover when a node dies, etc. (not like mysql where there's master/slave and load balancing in php) [04:00:24] which job types are in the prioritized queue? [04:00:34] if a server dies, the client will just use a different server [04:00:37] nothing bad will happen [04:01:25] ipriotypes="AssembleUploadChunks PublishStashedFile" [04:01:27] TimStarling: how many poolcounter servers are we running? [04:01:51] that is iprioprocs [04:02:14] ok, dprioprocs=5 and iprioprocs=2 sounds good [04:02:29] jeremyb_: it's a puppetized service, can find out by looking at site.pp [04:02:30] jeremyb_: 2 [04:03:19] if both die, nothing bad will happen then either [04:03:31] unless there is a sudden load spike, in which case the cluster will be unprotected [04:03:37] Ryan_Lane: i was starting with the mediawiki conf... [04:03:43] * Ryan_Lane nods [04:07:48] TimStarling: i increased wikiadmin'@'10.64.%' max connections to 200 [04:08:38] !log increased 'wikiadmin'@'10.64.%' max connections to 200 on s1 [04:08:44] Logged the message, Master [04:10:51] so what else then? rewrite jobs-loop $sometime and make it use poolcounter? [04:11:51] * jeremyb_ wonders if that should get a ticket [04:28:10] jeremyb_: do you know how I could modify CheckUser to allow the checkuser-summary message to accept a $1 parameter? [04:28:38] binasher: what was performance issue you saw with job runners? 
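For reference, the `!log` entry above ("increased 'wikiadmin'@'10.64.%' max connections to 200 on s1") corresponds to something like the following statement. The exact command used is an assumption; this is the MySQL 5.x `GRANT ... WITH MAX_USER_CONNECTIONS` form:

```sql
-- Hypothetical reconstruction of the change !logged above: raise the
-- per-account connection cap for wikiadmin on the s1 master.
GRANT USAGE ON *.* TO 'wikiadmin'@'10.64.%'
    WITH MAX_USER_CONNECTIONS 200;
```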
[04:29:11] * Jasper_Deng found no documentation about how things like $1 and $2 are implemented by the program [04:29:12] in the past we have seen problems with heavy write traffic from LinksUpdate, was that it? [04:29:55] hrmmmm [04:29:58] error: index file .git/objects/pack/pack-e80f065ef28685f0f9dc89b5854140e2d654d6f2.idx is too small [04:30:06] * jeremyb_ has a lot of that locally [04:30:14] * jeremyb_ figures out what/how to fix [04:32:07] Jasper_Deng: fwiw, in general you want to ask in #mediawiki and ask the wind not an individaul [04:32:09] individual [04:32:11] * [04:32:16] TimStarling: that was a problem, the other day looked like the rate at which refreshLinks jobs were being inserted [04:32:29] and if you must ask a specific person then remember to make sure the person is not half asleep! :) [04:35:52] binasher: interesting [04:36:31] surprising, as the resolution of refreshLinks2 into refreshLinks is supposed to take slave lag into account [04:37:18] did you talk to AaronSchulz about it? [04:39:42] TimStarling: he was looking at it with me at the time, and pushed out https://gerrit.wikimedia.org/r/#/c/56572/ [04:40:38] do you know if that fixed it? [04:43:23] filed https://bugzilla.wikimedia.org/show_bug.cgi?id=46770 for rewriting jobs-loop.sh [04:44:40] there isn't a limit to the number of refreshLinks jobs that may come out of a individual refreshLinks2 job, is there? [04:46:55] 56572 won't stop a million refreshLinks jobs from being queued up quickly if the right template is edited, but will stop additional refreshLinks2 running [04:47:38] ohhhhh, he just tested it, didn't actually do it yet :( [04:47:56] (^demon and gerrit repo gc) [04:48:09] no wonder it's so slow [04:48:11] definitely helpful, just not sure if it was enough [04:52:18] TimStarling: your commit msg says 250 but asher's !log says 200? 
[04:53:15] jeremyb_: the commit i see says 112 [04:53:17] binasher: if there are more than 500 pages to do, the refreshLinks2 job will split into jobs that do 500 each [04:54:17] job_title: Citation/CS1 job_params: a:3:{s:5:"table";s:13:"templatelinks";s:16:"rootJobSignature";s:40:"0dc5f2c6b71d89875eac251b0ffe9434084f05d2";s:16:"rootJobTimestamp";s:14:"20130329161556";} [04:54:20] so say if there are 4M pages on a wiki, the limit will be 8000 new refreshLinks2 jobs inserted per original refreshLinks2 job [04:54:26] job_params used to have id ranges [04:54:35] but it doesn't any more? [04:54:57] job_cmd: refreshLinks2 job_title: Navbox job_params: a:3:{s:5:"table";s:13:"templatelinks";s:16:"rootJobSignature";s:40:"1aacc3e394f1deb5a0f901b3e4bff862d5746006";s:16:"rootJobTimestamp";s:14:"20130326024511";} [04:55:29] > so that the total number of processes (112) will now fall under the newly-raised wikiadmin process limit of 250. [04:55:33] it used to do the partitioning during the edit request [04:55:54] but for heavily-used templates, partitioning was taking too long [04:56:21] so it was changed to insert a single refreshLinks2 job with no start or end specified [04:56:32] then the partitioning is done by the job queue [04:57:14] the partitioning process involves counting the number of links [04:57:36] if it is more than 500, the job is split up into subjobs that do 500 each [04:57:48] i.e. more refreshLinks2 jobs, but now with start and end specified [04:58:31] ah, ok [04:58:34] if it is less than 500, or if start and end are specified (i.e. 
it is a previously created batch of 500 titles), then the job is split up into refreshLinks jobs, each of which does an individual title [05:00:07] and after Aaron's recent changes, refreshLinks2 jobs generally are not run if there are refreshLinks jobs in the queue [05:00:28] so that limits the insert rate somewhat [05:00:31] the rate of inserts with 500 rows per statement seemed to contribute to the replag [05:01:06] we can just reduce $wgUpdateRowsPerJob [05:01:20] every time I said 500, that is actually $wgUpdateRowsPerJob which is currently set to 500 [05:02:37] mmm, there is also $wgUpdateRowsPerQuery [05:03:12] binasher: you see it in my quote? [05:03:23] ah, maybe it was 100 rows per statement [05:04:10] // Insert the job rows in chunks to avoid slave lag... [05:04:10] foreach ( array_chunk( $rows, 50 ) as $rowBatch ) { [05:04:11] $dbw->insert( 'job', $rowBatch, __METHOD__ ); [05:04:11] } [05:04:17] note no wfWaitForSlaves() [05:04:45] this sounds like a ponzi scheme [05:04:47] :-) [05:05:08] anyway, we can just switch to redis [05:06:01] the code is pretty much ready, we can switch to it and stop worrying about bandaging all the burst pipes in JobQueueDB [05:06:24] very much prefer that idea :) [05:06:37] * jeremyb_ runs off [05:17:55] TimStarling: so https://gerrit.wikimedia.org/r/#/c/57030/ is temporary? 
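The refreshLinks2 partitioning scheme Tim walks through above can be condensed into a sketch. The names are illustrative (the real logic lives in MediaWiki's job classes), and 500 plays the role of `$wgUpdateRowsPerJob`:

```python
# Sketch of the partitioning described above: a refreshLinks2 job with
# no start/end counts the affected titles; if there are more than
# $wgUpdateRowsPerJob of them, it re-partitions itself into ranged
# refreshLinks2 batches, otherwise it fans out into one refreshLinks
# job per individual title.

ROWS_PER_JOB = 500  # plays the role of $wgUpdateRowsPerJob

def run_refresh_links2(titles, start=None, end=None, rows_per_job=ROWS_PER_JOB):
    """Return the follow-up jobs one refreshLinks2 job would insert."""
    if start is None and end is None and len(titles) > rows_per_job:
        # Too many titles: re-partition into ranged refreshLinks2 batches.
        batches = [titles[i:i + rows_per_job]
                   for i in range(0, len(titles), rows_per_job)]
        return [("refreshLinks2", batch[0], batch[-1]) for batch in batches]
    # Small enough, or already a ranged batch: one refreshLinks per title.
    return [("refreshLinks", t) for t in titles]
```
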
[05:18:37] yes [05:18:44] but I'm not sure what the permanent solution would look like [05:19:33] it's hard to use all available CPU when you don't know when the job queue is going to morph into a DDoS attack against some master DB server [05:20:13] maybe some kind of rate limiting is needed [05:20:17] TimStarling: maybe you can increase dprioprocs to 7 [05:20:35] the iprioprocs almost never happen anyway atm [05:20:44] when the do happen more, we will be on redis already [05:21:17] *when they [05:22:09] maybe [05:22:40] if you did that, I'd +1 [05:23:53] also those will usually be on commons and enwiki [05:25:07] [22:00] TimStarling so that limits the insert rate somewhat [05:25:11] "somewhat"? https://gdash.wikimedia.org/dashboards/jobq/ :) [05:26:08] yeah, it limits it in the long term, maybe not in the short term [05:26:40] those 1200k/min were outage level [05:31:23] TimStarling: "wikiadmin conncetions" [05:31:30] was that Max or you? [05:34:25] who's max? [05:34:32] me [05:34:44] I didn't fetch his commit, I don't know what he changed [05:35:12] he fixed the typo [05:35:18] i.e. PS4 was an amendment of PS2 [05:41:39] * Susan smiles at closedmouth. [05:41:42] TimStarling: are you merging that today? [05:41:58] harder [05:42:12] I guess so [05:44:31] speaking of dberrors, maybe someone can look at https://gerrit.wikimedia.org/r/55931 & https://gerrit.wikimedia.org/r/55948 [05:59:30] is !tech a stalkword? (don't kill me) [06:00:39] AFAIK no [06:01:55] unless some particularly masochistic "tech" person has decided to stalk it [06:03:09] "person" [06:04:42] natural or legal [09:19:22] Hi Nemo_bis, why did you remove the "shell" keyword on https://bugzilla.wikimedia.org/show_bug.cgi?id=45764 and some others? [09:21:58] andre__: shell is only for bugs ready for shell [09:22:29] a site request can either have shell if ready, shellpolicy if consensus has to be found etc., nothing if not ready (e.g.
request to run a maintenance script that doesn't yet exist) [09:22:48] or at least that's how I was told in the last few years [09:22:58] Nemo_bis, that's not clear from https://bugzilla.wikimedia.org/describekeywords.cgi [09:23:08] unsurprising :D [09:23:19] hehehe [09:23:23] another aspect of https://bugzilla.wikimedia.org/show_bug.cgi?id=45539 then [09:23:41] that bug is rather useless IMHO [09:24:05] The description of the keyword is not incorrect [09:24:14] feel free to add a comment there to explain why [09:24:43] A bug which can't directly be acted upon does not "Require[s] someone with wikidev group" [09:25:00] makes sense. [09:25:13] maybe "Requires " should be replaced with "To be acted upon by" [09:27:20] * Nemo_bis hopes this doesn't require a bug filed [09:35:03] * Nemo_bis filed it anyway https://bugzilla.wikimedia.org/show_bug.cgi?id=46781 [11:56:41] Is it possible to know when Special:Wanted categories is going to be updated next time? [12:01:22] LA2, on which site? [12:01:31] LA2: on which project? [12:04:34] da.wiktionary [12:05:39] it was last updated on April 1, 10 am, http://da.wiktionary.org/wiki/Speciel:%C3%98nskede_kategorier [12:05:57] should I wait a day or a week for the next update? [12:22:36] LA2: a week if you're lucky [12:22:43] IIRC [16:18:18] andre__: one sec, network drop [16:18:24] greg-g, yeah :-/ [17:56:43] Anyone know much about client-side caching: https://bugzilla.wikimedia.org/show_bug.cgi?id=46801 [18:07:22] greg-g- I intend to take advantage of your lightning deployment window this evening, to backport https://gerrit.wikimedia.org/r/#/c/57067/ (fixes a regression in wmf12) [18:07:41] anomie: sounds great [18:07:46] thanks for the heads up [18:08:14] ^demon: before I forget to ask, is there a bug or something for the problem you mentioned on wikitech with MediaWiki repo taking some 3 GB on gerrit host? [18:08:34] <^demon> No, but I've got a solution. 
[18:08:38] I hear it may have to do with git not handling binary diffs well (or at all) [18:08:40] <^demon> Just need to schedule a low-traffic time. [18:08:41] Ah, wonderful then [18:09:11] * Nemo_bis was sure of not being able to contribute any useful info but tried nevertheless [19:19:33] does anyone know of a program similar to levelUp but on a local wiki level? [19:22:32] Alchimista: similar in what way? [19:23:48] On "such as +2 privileges on a repo": it could be an extension for a single wiki. Is this "local wiki level"? [19:24:40] And there's the non-coders situation evaluation coming https://www.mediawiki.org/wiki/Talk:Mentorship_programs/LevelUp#footer [19:26:21] Nemo_bis: not that complicated. Simply a program to help people coming into the programming stuff, bots, gadgets [20:30:58] AaronSchulz: who do you think should be the person to review this work if not you nor Tim: https://gerrit.wikimedia.org/r/#/c/56193/ (re the mysql/db migration for php 5.5 issue in !b 45288) [20:31:31] * greg-g is trying to figure out how to not overload you [20:33:13] I can't think of anyone, unless Chad is interested [20:33:29] I don't see why it can't be tim or myself though [20:34:08] AaronSchulz: just felt like I've been pushing a lot of stuff to you lately [20:46:50] greg-g: Is the owner not likely to work on it? [20:50:26] Reedy: I assume parent will work on it, but as for reviewing/merging.... [20:55:32] DarTar: any increase in editing numbers since they switched the citation templates to use Lua? [21:27:01] kaldari: why should there be?
it only means that editors will add more such templates making the pages harder to edit, doesn't it [21:27:42] possibly, although I imagine there may be a bit of a bump in the meantime [21:28:41] The rendering times for most featured articles, for example, have been cut in half [21:55:08] kaldari: that would affect only editors hitting preview [21:55:25] most newbies don't use that, in fact when de.wiki forced them to there was a drop in activity (2007) [21:55:31] or editors wanting to make several edits in a row [21:56:51] For copyediting long articles, I typically edit one section at a time. If each edit takes a minute to save and render I usually lose interest after 3 or 4 sections and go read a book instead :P [21:57:43] it's easy to get distracted while waiting 30-60 seconds for a page to refresh [21:58:43] I imagine people who use Facebook would go check Facebook and then probably get sucked into lolcat videos [22:00:41] kaldari: it would be interesting to know if there's a difference in use of edit section links among new and old editors [22:00:57] on en.wiki the edit section links are sooooooo far away [22:06:53] indeed [22:07:47] fyi for folks not on operations the issues with upload.wikimedia.org are due to bandwidth issues and being fixed [22:08:15] thanks Leslie [22:34:26] !log payments cluster updated from 12bc36c238 to 0ab980b8b [22:34:32] Logged the message, Master [22:51:33] Reedy: Would love to get your comments at https://bugzilla.wikimedia.org/show_bug.cgi?id=36316 [22:51:42] regarding userOption default changes [23:04:45] greg-g- Is anyone else using the lightning deploy window, or just me? [23:05:30] you're it [23:05:33] anomie [23:13:57] kaldari: Could just do one big insert from the users table to set all users to the current default and sync the code.. [23:14:12] I think it would've been Roan/Usability who did anything like this before..
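The "one big insert from the users table" Reedy suggests above might look roughly like this. The table and column names follow MediaWiki's user_properties schema, but the statement itself is a hypothetical sketch, and 'vector' stands in for whatever the old default skin happens to be:

```sql
-- Pin every existing user to the old default skin before the site-wide
-- default changes; INSERT IGNORE leaves users who already chose a skin
-- untouched, since (up_user, up_property) is the primary key.
INSERT IGNORE INTO user_properties (up_user, up_property, up_value)
SELECT user_id, 'skin', 'vector'
FROM user;
```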
[23:15:57] 'dontswitchmeover-desc' => 'Preference for users to specify whether they want to preserve their skin setting when the default skin is changed.', [23:15:57] 'dontswitchmeover-pref' => 'Do not allow my skin to be changed when the default skin changes', [23:16:46] kaldari: http://svn.mediawiki.org/viewvc/mediawiki/trunk/extensions/DontSwitchMeOver/ was the "method" last time it would seem :/ [23:19:32] kaldari: I'd see what RoanKattouw_away remembers [23:31:34] Reedy: thanks [23:33:34] w00t, 502 errors. anyone? [23:34:15] Where? [23:34:15] greg-g, anomie|away-ish: when you're done, i was hoping to sync a couple of small config changes [23:34:25] ..or not [23:34:25] I'm getting it on a link. [23:34:28] just a random link [23:34:39] Huge spam in operations [23:35:22] ori-l- I was done 25 minutes ago [23:36:02] kk [23:36:04] thanks [23:46:52] Hey Nemo_bis... [23:47:09] I want to clean up all these watchlist default bugs so we can actually make sense of them... [23:48:24] it seems to me that what we want to do is set pages that users create or edit to appear on new users' watchlists by default, but not change the settings for existing users (who may or may not have explicitly chosen other settings). [23:49:24] would you mind if I combined bug 36316 and bug 45020 into 1 bug that effectively says this?