[00:00:49] I have
[00:00:55] today I'm asking more specific questions
[00:01:47] i think i saw that exact question before
[00:01:53] Possible
[00:02:03] do you have any idea as to how to resolve that issue?
[00:02:54] yes...
[00:03:28] 1) identify the page element using something like the DOM inspector and then choose a selector for it
[00:03:46] 2) set css for that selector to 1.2em or 0.8em or something like that
[00:03:54] tweak as needed
[00:03:58] 3) profit!
[00:04:18] i feel no motivation to help any more than that at this point
[00:04:19] I'm sorry, you're assuming a skill set not in evidence
[00:04:24] "DOM inspector"?
[00:04:30] choose a selector? How?
[00:04:46] errr, no. actually i'm not
[00:05:01] you're assuming I know how to do this. I'm pointing out that I don't.
[00:05:04] i was kinda assuming your response would be something like that. which is why i didn't offer help earlier
[00:05:28] Yes. Can you offer a *more detailed* solution?
[00:06:18] What are you trying to resize?
[00:06:41] pick any of dom inspector, firebug, or any other tool which allows you to click on part of the page to select the corresponding thing in said tool
[00:07:07] pick a selector for the thing it highlighted or maybe one of its parents
[00:07:16] do 2 and 3 from above
[00:08:04] maybe see https://en.wikipedia.org/wiki/User:Jeremyb/vector.css for inspiration
[00:08:39] i didn't pull those selectors out of thin air...
[01:40:07] [[Tech]]; DragonflySixtyseven; update; https://meta.wikimedia.org/w/index.php?diff=5442089&oldid=5427550&rcid=4130599
[01:45:44] [[Tech]]; DragonflySixtyseven; ffs; https://meta.wikimedia.org/w/index.php?diff=5442093&oldid=5442089&rcid=4130603
[02:53:20] Hi. There is a bigdelete request for pt.wikipedia. I wonder if I can do it now.
[02:53:46] this is the page: https://pt.wikipedia.org/wiki/Usu%C3%A1rio:GRS73/Arquivo/Agosto/2008
[02:56:58] vvv ^
[03:02:49] Teles: If you file it in bugzilla, someone will do it
[03:03:34] um
[03:03:40] wasn't it supposed to be done by stewards?
[03:03:42] Teles: can't you do it yourself?
[03:04:37] I can, but I need approval from some op to check if I won't break anything
[03:05:25] ok, im not really sure who you'd get that from...
[03:07:06] I'm almost sure that vvv is one of them, but looks like he's not around
[03:08:38] You'd want someone like Tim, not Victor.
[03:08:43] I guess.
[03:09:30] I thought the bigdelete limit was 10,000. Is it actually 5,000?
[03:10:01] yeah. A local admin already tried to delete it, but couldn't.
[03:11:33] Teles: You have my blessing to delete the page, in accordance with local policy.
[03:12:56] somebody wants to separate some revisions. It will be restored afterwards
[03:13:51] It won't be an enjoyable exercise to load Special:Undelete with over 5,000 revisions.
[03:13:54] But it's fine.
[03:15:04] http://toolserver.org/~vvv/revcounter.php?wiki=ptwiki_p&title=Usu%C3%A1rio%3AGRS73%2FArquivo%2FAgosto%2F2008
[03:15:41] the advice stewards received is that they have to stay in contact with somebody from the operations team
[03:16:12] https://pt.wikipedia.org/w/index.php?title=Usuário:GRS73/Arquivo/Agosto/2008&action=info
[03:16:29] There's no advice there. :-)
[03:16:37] Teles: i'm sure TimStarling will yell out in the unlikely case you actually break something
[03:21:39] well, we were told to do it only after approval
[03:25:27] you can do it
[03:25:55] Teles: ^
[03:26:15] ah, good. Thanks :)
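For readers following along: the "bigdelete" threshold discussed above is a count of revisions, and the revcounter link at [03:15:04] is one way to check it. A rough, illustrative alternative is to count revisions through the MediaWiki API. The endpoint and page title below come from the log; Python with the requests library, and the exact parameter set, are my own assumptions rather than what the stewards actually ran.

    # Sketch: count a page's revisions via the API, to see whether deleting it
    # would trip the bigdelete threshold (~5,000 revisions, per the discussion above).
    # Assumes the third-party `requests` library.
    import requests

    API = "https://pt.wikipedia.org/w/api.php"
    TITLE = "Usuário:GRS73/Arquivo/Agosto/2008"

    def count_revisions(title):
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "ids",    # ids only; we never need the revision text
            "rvlimit": "max",   # up to 500 revisions per request for normal accounts
            "format": "json",
        }
        total = 0
        while True:
            data = requests.get(API, params=params).json()
            page = next(iter(data["query"]["pages"].values()))
            total += len(page.get("revisions", []))
            if "continue" not in data:
                return total
            params.update(data["continue"])  # carry rvcontinue into the next request

    print(count_revisions(TITLE))

Anything well over the limit has to go to someone with the bigdelete right, which is what the exchange above is about.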
[03:26:31] don't give up
[03:26:42] we believe in you
[03:30:06] :D
[03:31:57] I wonder if I have to follow this same procedure in case I want to restore it
[03:32:29] possibly, if you're doing all >5k at once
[03:32:35] (same sort of thing, db writing)
[03:37:11] restoring is probably not as well-tested as deleting
[03:37:59] I think part of the reason we make it hard to delete articles with big histories is to smooth out some of that community fickleness
[03:38:27] and make sure such articles are only deleted when they really really need to be deleted
[03:39:27] TimStarling: Who says the community doesn't want me to delete the main page? >.>
[03:47:23] if a deletion of this kind of page can cause server problems, it seems logical that restoring it may have the same effect
[03:47:58] Bigdelete exists because moving rows between tables is annoying and could potentially lock up the sites for an hour.
[03:48:14] When someone deletes the English Wikipedia's sandbox or whatever.
[03:48:20] I thought the limit was higher.
[03:48:27] Though I guess 5000 is as sensible as any.
[03:48:36] Special:Undelete is still not paginated, as I recall.
[03:48:46] lemme try deleting the sandbox and see what happens...
[03:49:31]
[03:50:08] You've won Wikipedia.
[03:50:09] https://bugzilla.wikimedia.org/show_bug.cgi?id=7996
[03:50:24] The point being that at some limit, your browser won't want to load all those checkboxes.
[03:54:00] The bugs in this area are scattered and weird. I feel like I'm missing a few.
[12:04:55] any lilypond expert here?
[12:32:54] Hi, I was looking for a citation searcher. Is there one?
[12:37:18] Qcoder00: This channel is for help and discussion in regards to the technical backend for the WMF cluster. Your question may be better suited for #wikipedia-en or variant channels
[13:20:27] is it normal for a request of a PNG thumb to take 10 s?
[13:20:51] yes
[13:28:26] if it's REALLY BIG the first time maybe
[13:28:33] but…. it really shouldn't most of the time :)
[13:30:50] Nemo_bis: what brion said, also I'm looking at graphs and I don't see any pattern of increased response times on the upload infrastructure
[13:31:01] Nemo_bis: but if you do see more of these, do let us know
[13:31:33] and try to get the headers...
[13:36:58] brion, paravoid, doesn't seem big to me https://commons.wikimedia.org/wiki/File:Christian_distribution.png
[13:37:22] yeah that's not too huge
[13:37:24] http://p.defau.lt/?vOg3d3tHYLIsyt_O6U3QLw
[13:37:46] it also happened to me with the original of an SVG yesterday
[13:37:57] so I'm a bit confused
[13:38:10] i can get 801 and 802px versions rendered fairly quickly… might be a fluke with one server or a one-time slowness fetching from the storage
[13:38:19] well not one-time but intermittent :)
[13:39:43] * jeremyb_ introduces Dragonfly6-7 to
[13:41:12] Nemo_bis: yours was a cache miss all the way down the stack...
[13:41:42] well if it had to generate, that's to be expected :)
[13:43:53] still it doesn't purge the 800px thumb :(
[13:45:19] well it's also 0.04 secs according to wget. so nothing to complain about
[13:45:35] Nemo_bis: what did you expect to be purging?
[13:48:29] the thumb
[13:48:55] with which action?
[13:49:30] action=purge
[13:50:26] errr, on what?
[13:50:34] full URL maybe...
[13:50:47] ...?
the image above
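Since the conversation turns here to how exactly the purge was issued: for reference, the same thing can be requested through the API's purge module. The file title is the one from [13:36:58]; Python/requests and the POST form are illustrative assumptions — per the lines that follow, Nemo_bis was actually using the on-wiki purge link rather than the API.

    # Sketch: purge a file description page via the MediaWiki API. Purging a file
    # page should also invalidate its cached thumbnails, which is exactly the
    # behaviour being debugged in this thread. Assumes the `requests` library.
    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    resp = requests.post(API, data={
        "action": "purge",
        "titles": "File:Christian distribution.png",
        "format": "json",
    })
    print(resp.json())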
[13:51:09] aka https://bugzilla.wikimedia.org/show_bug.cgi?id=47825
[13:51:31] that URL doesn't have action=purge in it
[13:51:38] please just give the full URL
[13:55:05] * jeremyb_ waits for Nemo_bis
[13:59:14] jeremyb_: sorry, I don't understand the question
[13:59:49] Nemo_bis: give me the steps you take to do a purge
[14:05:19] 1) Click the "purge" button https://commons.wikimedia.org/w/index.php?title=File%3AChristian+distribution.png&action=purge
[14:05:41] [yes I'm too lazy to use the API and I would have said so if I had used that]
[14:06:13] doesn't matter if you're too lazy to use the API. you didn't say what you did use :(
[14:06:16] until now
[14:07:00] is there a particular part of the map that changed to compare?
[14:09:03] the refs
[14:09:17] aha
[14:09:57] history is your friend ;)
[14:12:41] apparently dysprosium is the one of the guilty
[14:12:53] dysprosium is in eqiad
[14:12:56] not europe
[14:14:06] paravoid: ^
[14:14:24] s/is the/is/
[14:14:33] I don't understand a thing :)
[14:14:46] paravoid: an eqiad varnish is maybe not purging
[14:14:56] dysprosium?
[14:15:19] that's the one :)
[14:15:25] ill have a look
[14:15:57] * jeremyb_ looks up that element...
[14:16:33] i see purges being processed on dysprosium just fine
[14:17:21] well then it's one of those in varnish but not in swift?
[14:17:28] iirc that's been a past reason to not purge?
[14:17:40] i don't understand?
[14:17:55] isn't the list of thumbs to purge built using swift?
[14:18:09] so if a thumb is in varnish but not swift somehow then you can't purge it?
[14:18:50] at least that's how i thought things worked and i think i've heard of that scenario being an issue in the past
[14:19:27] anyway, the point is https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Christian_distribution.png/800px-Christian_distribution.png is out of date
[14:19:32] some text on the bottom has changed
[14:20:02] mark: ^
[14:20:45] yes but it should have been purged when the thumb got deleted
[14:21:32] right
[14:22:22] 4/44/Christian_distribution.png/120px-Christian_distribution.png
[14:22:23] 4/44/Christian_distribution.png/350px-Christian_distribution.png
[14:22:25] that's what swift has
[14:23:36] paravoid: now what does swift have?
[14:24:02] + 75px + 800px, why?
[14:24:18] because i loaded the 800px with a bogus query string
[14:24:50] and now i repurged from the wiki
[14:24:59] and *now* it's up to date
[14:25:03] Nemo_bis: confirm?
[14:51:16] jeremyb_: yes, was created just now for me http://p.defau.lt/?YPupBKXw1tDrPMNkYpU_fQ
[14:51:33] notice the 11 s wait
[16:41:13] Nemo_bis: i was more interested in the content of the response... it's the right image?
[16:44:52] [[Tech]]; DragonflySixtyseven; /* Font sizes and css */ new section; https://meta.wikimedia.org/w/index.php?diff=5443762&oldid=5442093&rcid=4132615
[16:58:05] jeremyb_: yes it is, it's enough to check Age:
[16:58:37] Nemo_bis: errr, maybe. maybe not. better to open it and look if the text changed or not
[16:59:35] jeremyb_: I did, but it's enough to check Age
[17:00:07] * jeremyb_ grumbles
[17:15:01] Reedy: Can you let me know when you guys cut the wmf3 deployment branch?
[17:18:00] AaronSchulz: ^
[17:22:58] kaldari: Already have
[17:23:12] rats :P
[17:24:15] Why?
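An aside on the header check Nemo_bis mentions at [16:58:05]: the Age response header says how long the object has been sitting in a cache, which is how he judged whether the thumbnail had been re-rendered. A small sketch, assuming Python/requests; the thumbnail URL is the one from [14:19:27], and X-Cache is assumed (not guaranteed) to be exposed by the frontend caches.

    # Sketch: inspect cache-related headers on a thumbnail URL. A small Age means
    # the object was (re)fetched recently; a large Age means an old cache entry.
    import requests

    THUMB = ("https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/"
             "Christian_distribution.png/800px-Christian_distribution.png")

    head = requests.head(THUMB)
    print("Age:", head.headers.get("Age"))                    # seconds spent in cache
    print("Last-Modified:", head.headers.get("Last-Modified"))
    print("X-Cache:", head.headers.get("X-Cache"))            # hit/miss info, if present

    # A throwaway query string (as tried at [14:24:18]) bypasses the cached entry
    # for the canonical URL and forces a fresh fetch of the same object.
    fresh = requests.head(THUMB + "?nocache=1")
    print("Fresh Age:", fresh.headers.get("Age"))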
[17:24:25] 97 minutes ago apparently
[17:25:43] you guys start early :)
[17:26:04] I'd usually have started it about 3 hours ago
[17:26:20] Since Chad tidied up the repos, the building and initial cloning are much, much quicker
[17:27:00] was just hoping to get a change out on the train, but I'll ask greg about using the lightnight deploy window
[17:27:13] er lightning
[17:27:19] Greg's out today
[17:27:26] You can push it in now, or just use part of our deployment window
[17:27:36] Chances are we don't need much more than the first 30 minutes or so
[17:27:53] OK, I'd love to do that if possible
[17:28:14] I'll ping you in a little bit
[17:33:18] Sure
[17:33:24] kaldari: Does it require a scap?
[17:33:30] no
[17:33:37] Fair enough
[17:33:46] Just was going to say I'll need to run scap again for wikidata stuff
[17:33:47] it's just a single file
[19:04:41] Reedy: I'm ready to do our mini-deployment if the rest of your window is open
[19:05:00] Nearly
[19:05:10] 2nd scap seems to have fixed the l10n cache issues
[19:05:34] when did the 2nd scap start?
[19:05:46] is it still running?
[19:12:24] kaldari: Should be all good now
[19:12:39] Thanks
[19:19:21] Reedy_: I'm all done now. Thanks!
[19:22:02] superm401, ori-l: Are you guys OK with E2 deploying GettingStarted tomorrow along with all the other Echo-related stuff?
[19:22:11] kaldari, yep, thank you.
[19:22:28] StevenW is as well.
[19:22:32] ditto
[19:40:56] Reedy_, https://bugzilla.wikimedia.org/show_bug.cgi?id=44899#c7 namespaceDupes.php?
[19:47:13] [[Tech]]; DragonflySixtyseven; /* Font sizes and css */; https://meta.wikimedia.org/w/index.php?diff=5444074&oldid=5443762&rcid=4133100
[19:55:23] Krenair: Sort of. Having empty pages in place doesn't help
[19:56:07] [[Tech]]; DragonflySixtyseven; fix; https://meta.wikimedia.org/w/index.php?diff=5444105&oldid=5444074&rcid=4133141
[20:31:59] someone moved a page from userspace into module namespace, but now the fancy highlighting doesn't show up
[20:32:01] https://www.wikidata.org/wiki/Module:Label
[20:32:05] anomie: ^
[20:32:52] legoktm: Hmm.
[20:33:39] we both edited after the move and that didn't work either
[20:33:48] should I try a delete/undelete?
[20:33:58] I'm guessing this is a contenthandler thing?
[20:35:46] legoktm: Yeah. wikidatawiki is the only one with $wgContentHandlerUseDB true, so it brings its content format along with it instead of determining it from the namespace and page title. The question is how to change that from the UI.
[20:35:52] I think the content handler is determined by namespace and title of the page
[20:36:09] <^demon> I'm looking at the entry in `page` for that wiki.
[20:36:10] not by any explicit content model/format info in the db
[20:36:14] <^demon> s/wiki/article/
[20:36:14] what happens if i delete and undelete the page?
[20:36:23] oh is this on wikidatawiki? meh
[20:36:29] apergos: we're special :P
[20:36:31] <^demon> page_content_model is 'wikitext' for that page.
[20:36:31] then there, yeah
[20:36:46] so wanna bet move doesn't update that
[20:37:00] <^demon> Yep, that'd be my guess.
[20:37:19] legoktm: You should be able to change it with an API edit, explicitly setting the content model and format. I don't see anywhere in the UI to do it though.
[20:37:46] hmm ok, let me try with apisandbox
[20:38:27] um
[20:38:32] what format should i set it to, anomie?
[20:38:41] legoktm: Set contentmodel=Scribunto, and I'm not sure for contentformat.
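A minimal sketch of the kind of API edit anomie describes, for anyone who needs to repeat the fix: re-save the page with an explicit content model and format. contentmodel=Scribunto comes straight from the log; "text/plain" as the Scribunto serialization, the meta=tokens flow, and Python/requests are my assumptions — legoktm actually did this through ApiSandbox. Note that, per the exchange below, appendtext/prependtext is not enough; the full text has to be re-submitted.

    # Sketch: fix a page's content model after a cross-namespace move, by making a
    # normal edit that explicitly sets contentmodel/contentformat. Assumes an
    # authenticated `requests` session with the right permissions.
    import requests

    API = "https://www.wikidata.org/w/api.php"
    session = requests.Session()
    # ... log the session in here (e.g. via action=login or OAuth) ...

    # Fetch a CSRF token for the edit.
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]

    module_source = "..."  # the module's current source, fetched beforehand

    resp = session.post(API, data={
        "action": "edit",
        "title": "Module:Label",
        "text": module_source,              # full text; append/prepend won't do it
        "contentmodel": "Scribunto",
        "contentformat": "text/plain",      # assumption: the usual Scribunto format
        "summary": "Set content model after move",
        "token": token,
        "format": "json",
    })
    print(resp.json())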
[20:39:09] let's see what happens if i leave contentformat blank :P
[20:39:20] Bad Things :-P
[20:39:21] Is the Toolserver down?
[20:39:52] "code": "internal_api_error_MWException",
[20:39:52] "info": "Exception Caught: Format CONTENT_FORMAT_TEXT is not supported for content model wikitext",
[20:40:58] <^demon> anomie: What's ns10?
[20:41:21] ^demon: ??
[20:41:24] https://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Module:String&format=jsonfm&rvprop=content <-- says "CONTENT_FORMAT_TEXT"
[20:41:35] ^demon: usually template
[20:41:36] ^demon: 10 is template
[20:41:38] * MatmaRex has no context again
[20:41:40] <^demon> Template, ok.
[20:44:09] legoktm: Worked for me. Were you trying to use appendtext or something?
[20:44:16] yeah
[20:44:19] prependtext
[20:44:45] guess that won't work
[20:45:13] what a lovely diff https://www.wikidata.org/w/index.php?title=Module%3ALabel&diff=32673944&oldid=32669933 :P
[20:45:57] thanks everyone :)
[20:46:22] Speaking of which, that "CONTENT_FORMAT_TEXT" is a bug. I just wonder if fixing it is going to confuse wikidatawiki.
[20:46:38] <^demon> anomie: Yeah, something's up with page content models and moving. Got a couple of pages in NS_TEMPLATE that have page_content_model = 'Scribunto';
[20:47:29] ^demon: ContentHandler when $wgContentHandlerUseDB is true will happily take the old content model along when a page is moved. And you can set it semi-arbitrarily with API edits.
[20:48:02] I did file a bug a while back about how you can import pages onto Wikidata into ns0 with the wrong contentmodel
[20:48:13] CONTENT_FORMAT_TEXT yeah that's correct, I had to hunt to find an existing module in the wikidata dump
[20:48:23] <^demon> Yeah, but should you be allowed to set things semi-arbitrarily like that? It just seems odd that you could mark a page in NS_CATEGORY as Scribunto or wikibase-item.
[20:48:59] well either you have whatever that config var is on, or off
[20:49:12] if it's on, you can't arbitrarily mark (or at least it will ignore stupidity like that)
[20:49:20] if it's off then...
[20:49:20] <^demon> There's that :p
[20:49:27] you takes yer chances
[20:49:47] <^demon> So, can we rename the flag to $wgAllowUsersToSetSillyContentModelsThatMakeNoSense? ;-)
[20:50:05] well
[20:50:22] we need to strike a balance between descriptive variable names and usable ones dontcha think
[20:50:25] :-P
[21:11:08] AaronSchulz: Hey aaron, we ran into a strange db error when running our maintenance script for Echo. We sort of know the cause of it, but it doesn't make sense. Any chance I could run it by you or someone else?
[21:43:12] AaronSchulz: I think this is the problem we ran into: https://bugzilla.wikimedia.org/show_bug.cgi?id=47848
[21:43:30] which is a bug that is likely causing other problems as well
[21:48:58] Is Jenkins down?
[21:49:11] It's been over an hour since CR +2 on https://gerrit.wikimedia.org/r/#/c/61464/
[21:49:33] it's testing it *really* thoroughly
[21:50:25] ^ hashar
[21:50:53] are there tests for that extension? ;D
[21:50:54] superm401: i explicitly added 'jenkins-bot' as a reviewer; maybe that'll help
[21:51:20] it's supposed to pick it up automatically, but maybe it lost its handle on the change stream
[21:51:49] yeah there are tests and it should react to +2 hmm
[21:52:26] No unit tests unfortunately (there are browser ones), but that hasn't stopped Jenkins in the past (it still does merge and lint check)
[21:52:50] poor jenkins is too busy
[22:13:08] superm401: fixed :-]
[22:13:18] superm401: you might have to +2
[22:13:37] Thanks, what was it?
[22:13:52] ^ hashar
[22:14:13] superm401: the jenkins web service was locked down due to too many connections
[22:14:19] I have restarted the prosy
[22:14:20] proxy
[22:16:01] Thanks
[23:28:21] Say i run 'git review' and it lists something like 20 patches as opposed to just one, what did i do wrong? screenshot: http://i.imgur.com/GWQ50P7.png
[23:30:16] ebernhardson: that's a pure black screenshot, but you either merged something you didn't intend to, or git-review got confused about what is on master
[23:30:43] ebernhardson: the ultimate solution to this is to remember the git sha1 hash of the change, then run this:
[23:31:06] ebernhardson: git checkout master; git reset --hard gerrit/master; git pull; git cherry-pick <sha1>
[23:31:45] (you might have to replace "gerrit/master" with "origin/master", depending on how your repo is set up)
[23:32:10] actually make that: git checkout master && git reset --hard gerrit/master && git pull && git cherry-pick <sha1>
[23:32:53] (and you might need to `git reset --hard` first if git complains.)
[23:33:19] this basically discards all of your changes and resets your master branch to pristine state, then applies the patch.
[23:34:36] MatmaRex: heh, doh :) shoulda looked at my screenshot first
[23:34:58] figured it out though, the problem is i have an origin and a gerrit as remotes (even though they point to the same place)
[23:35:06] and i merged from origin, instead of gerrit
[23:35:35] ebernhardson: `git remote rm origin`. you'll save yourself some headaches
[23:35:56] MatmaRex: excellent, doing that now
[23:36:06] (this kills the remote, of course.)
[23:36:23] yup
[23:42:23] ebernhardson, MatmaRex: An easier workaround is 'git fetch gerrit'
[23:43:30] what i posted is the *ultimate* solution. ;)
[23:43:55] good for when you have no idea what happened to your repo
[23:44:02] i have actually been using hard resets to fix my review problems in the last few days, although i was manually finding the right one in git log
[23:44:05] and you just want it to work already
[23:44:15] a hard reset against gerrit/master is a wonderful idea, why didn't i think of that :)
[23:44:30] `git reset --hard gerrit/master` has become my best friend when working with gerrit
[23:45:14] (i forget or don't bother to `checkout -b` way too often)
[23:53:29] thx StevenW, had meant to reply singly, but hit the wrong reply-to button, then had an ohnosecond
[23:54:16] sDrewth: no problem
[23:58:46] has anything been changed overnight that would have an impact on English Wikisource? I am seeing my old toolbar set lose its ability to have additional buttons