[00:08:26] anyone else want to look into https://www.mediawiki.org/wiki/Special:Nuke/Ppsg-office ? (those were mostly or all nuked and so according to Jasper_Deng_away they should therefore not show up on the nuke list) [00:08:59] i checked one of them; when previewing a link to it, it shows a redlink [00:09:17] it's been over a day i think [00:15:31] hrmmm, why is only enwiki graphed for job queue? [00:17:16] http://ganglia.wikimedia.org/latest/?c=Miscellaneous%20pmtpa&h=spence.wikimedia.org&m=cpu_report&r=hour&s=descending&hc=4&mc=2#mg_no_group_div [01:46:22] jeremyb: there are serious issues with the job queue [01:46:37] jeremyb: as you can probably see [01:46:54] jeremyb: there are two bugs related to this [03:33:29] Seddon: what are the bugs? i could tell maybe there was something wrong but that wasn't related to my comment [03:34:36] wow, it's been broken over a week? [03:35:32] http://ganglia.wikimedia.org/latest/?r=month&cs=11%2F20%2F2012+18%3A42&ce=12%2F6%2F2012+3%3A23&c=Miscellaneous+pmtpa&h=spence.wikimedia.org&tab=m&vn=&mc=2&z=medium&metric_group=ALLGROUPS#mg_no_group_div [03:35:58] http://ganglia.wikimedia.org/latest/?r=custom&cs=11%2F22%2F2012+11%3A57&ce=12%2F1%2F2012+8%3A20&c=Miscellaneous+pmtpa&h=spence.wikimedia.org&tab=m&vn=&mc=2&z=medium&metric_group=ALLGROUPS#mg_no_group_div [03:36:30] anyway, my comment stands. why only enwiki? [03:43:13] Thehelpfulone or any other bz admin: ping [03:43:37] please update the link at the top of all pages to be HTTPS. ("file a bug report") [04:27:46] jeremyb: I have no idea.... I can only assume the details for other wikis are on another server? [04:28:00] nope [07:25:45] jeremyb: because nobody bothered to add graphs for everything [07:26:32] binasher> Nemo_bis: they're in puppet/files/graphite/gdash-dashboards but i never actually made a puppet rule to install them..
[07:26:41] binasher> Nemo_bis: see the oddly named puppet class nagios::ganglia::monitor::enwiki in puppet/manifests/nagios.pp [09:15:00] Hello. I am looking to get the local Wikipedia set up with a readable font. dv.wikipedia.org. Is changing the ISO font possible? Replacing it with a custom ttf. [09:20:51] CRC43: https://www.mediawiki.org/wiki/Extension:WebFonts#Supported_languages [09:24:40] Nemo_bis: Dhivehi (alt:Dhivehi) is not listed. yet http://dv.wikipedia.org/ has a minimal setup already configured. Just a really lousy ttf that is characteristic of the old language. [09:25:25] CRC43: the section below has instructions on how to add other fonts [09:26:17] Oh thank you. looked at the table and thought that was the end of the page. [09:49:04] Nemo_bis: Does this extension require SSH access to the installation? or am I missing something? [09:54:09] CRC43: I don't understand, you mentioned Wikipedia [09:54:16] you only have to file a bug to add the font [09:54:36] and then to enable it on the wiki, open a discussion and then request it on bugzilla when there's consensus [09:57:31] If I block my account, what happens to automated edits that are in the queue? [10:06:43] Seddon: what queue? [10:06:52] jobqueue [10:08:03] am I going to somehow break the system, or will it do the intended thing of simply not permitting the edits to occur, or because they are already in the queue, will they simply happen whether we want them to or not? [10:08:15] edits don't get queued in the JQ afaik, it's other automated stuff, such as page reparsing from template edits [10:09:01] p858snake|l: they are automated, it's notifications sent out by the notification extension [10:09:41] Seddon: afair edits are no longer made on your behalf, so a block won't work [10:10:03] you can block the bot though :) [10:10:03] saper: they definitely show up in my contributions [10:10:15] oh bollocks the bot...
[10:10:23] ok I need a global block [10:10:32] Seddon: that's probably another story :) but I'll check. Last time I looked at the code it was pretty simplistic. [10:10:58] but that was a long time ago (just before it was introduced) [10:11:04] saper: edits and email on meta are made through my user account, the rest go through a bot [10:11:32] yes, local is local [10:15:29] Seddon: jhs is already asking this on #mediawiki-i18n [10:59:18] !log job queue snapshot for bug 42614: en.wiki 380k still rising steadily, Commons 85k down from 120+, fr.wiki 4M+, pl.wiki 18k, nl.wiki 8k, other tops 2k at most [10:59:27] Logged the message, Master [11:00:28] Nemo_bis: I am gonna go get some breakfast, I'll be back in a bit [11:28:20] hello, anybody here? unfortunately the display problems on pl.wiki i reported ~12 hrs ago are still happening for some people :( [11:28:32] (freshest reports are from 3 hrs ago) [13:54:02] jeremyb, done, thanks [14:21:12] Thehelpfulone: thanks [14:21:53] np :) [14:24:03] jeremyb: have you seen the ganglia rule for the jobqueue graph? [14:24:21] i saw the paste. i didn't look at it thought [14:24:23] though* [14:24:36] ok [15:10:33] Question for Gerrit experts. [15:10:59] I followed this procedure to create a change with a dependency: http://www.mediawiki.org/wiki/Git/Workflow#Create_a_dependency [15:11:11] finally, I pushed with git push gerrit HEAD:refs/for/master [15:11:25] If I want to specify a topic, what should I use instead? [15:12:09] <^demon> HEAD:refs/for/master/myawesometopic [15:13:34] Thanks. [15:38:11] mutante, hashar: Any more thoughts on librsvg on gallium?
Because I could, you know, just remove the tests and forget about the whole thing :P [15:38:26] "moving on" I think they call it :P [15:38:46] Jarry1250: oh yeah sorry [15:38:57] Jarry1250: librsvg got sorted out by paravoid [15:38:59] iirc [15:39:29] gallium:~$ rsvg-convert --version [15:39:30] rsvg-convert version 2.36.1 (Wikimedia) [15:39:32] Jarry1250: ^^^ [15:39:39] Oh, amazing stuff. [15:39:41] I can't remember if there was a bug for it [15:39:55] Jarry1250: also we no longer trigger tests on patch submission [15:40:10] Jarry1250: so we need to test it on our local machine before +2ing it (which will trigger the tests) [15:41:00] hashar: For translate or just generally? Parser tests take like a gazillion years to run on my PC :( [15:43:00] I suppose I keep the option to manually trigger them? [15:44:49] Okay, it's disabled generally I see. [15:45:26] And actually I'm not sure how I would trigger a manual test... [15:45:57] I don't know if that's possible [15:53:27] hashar: Also, I'm going to have no way of knowing whether I've fixed the parser tests themselves until someone +2s, because I have rsvg-convert set up properly on my local PC, that was never the problem :( (but I agree that's an anomaly) [15:53:59] yeah that is inconvenient :-] [15:54:19] Jarry1250: for parser tests you can use /tests/parserTests.php [15:54:25] it should be way faster [15:54:40] hashar: It is. But it also runs different code paths [15:54:41] I will remove the PHPUnit version hopefully by the end of the year [15:54:44] yeah [15:55:12] Does Jenkins run parserTests.php on +2 or the PHPUnit version? [15:55:28] <^demon> We should remove the code duplication. [15:55:38] <^demon> But the ability to run parser tests from phpunit is a good idea. [15:55:58] ^demon: Agreed, it confused the hell out of me for like 5 hours [15:56:13] But it's not very simple, because you've got two separate classes. [15:56:30] *divergent [15:56:35] <^demon> It's stupid, and I've been mad about that since we did it.
[15:57:18] But also you have the issue raised in September about the fact that PHPUnit is slower because it actually does unit testing [15:57:29] i.e. it calls setUp and tearDown after every test [15:58:16] So you have the conundrum of whether to import that back into parserTests.php or not AFAICT [16:00:37] How do you make phpunit only run parser tests btw? [16:00:42] I can't recall the syntax [16:00:55] ^demon: we don't really need PHPUnit there, just need it to record test results and generate JUnit output [16:01:00] that will be good enough :-] [16:01:33] I tried to optimize the NewParserTest.php file and eventually gave up. Too much clutter in it. Probably going to be easier to add JUnit support [16:01:48] <^demon> I like having it in phpunit. [16:02:03] <^demon> I like being able to just do `make phpunit` and it runs all tests. I don't want to have to run a second set of tests. [16:02:38] <^demon> Also, the only reason we're still supporting parserTests.php is because it was there first. If we'd done it all in phpunit to begin with we wouldn't be having this discussion. [16:02:44] bah, can't add reviewers :( [16:02:47] the change is a draft: https://gerrit.wikimedia.org/r/#/c/35805/ [16:03:04] <^demon> My opinion has always been to deprecate parserTests.php and improve the performance of phpunit. [16:03:26] I will make sure to reach out to you whenever I start working on that :-] [16:03:39] maybe I should rewrite the PHPUnit integration from scratch [16:03:52] One thing, incidentally, is that it uploads the images for every test [16:03:57] (PHPUnit that is) [16:04:07] <^demon> See, that's stuff we can improve on :) [16:04:35] <^demon> Like originally, when we dropped databases and re-created them on every test... but we sorta fixed that so it only truncates tables between tests. [16:07:03] anyway I am out [16:07:13] see you tomorrow [16:07:22] <^demon> Goodnight hashar.
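For reference, the two test entry points being compared above can be invoked roughly like this (a sketch: paths are as in MediaWiki core of that era; the exact parser suite file name under tests/phpunit/ is an assumption based on the discussion):

```shell
# the standalone runner hashar recommends (the faster one)
php tests/parserTests.php

# running only the parser tests through PHPUnit, by pointing the
# runner at the parser suite file (file name assumed)
php tests/phpunit/phpunit.php tests/phpunit/includes/parser/NewParserTest.php
```

Pointing phpunit.php at a single file answers the "how do you make phpunit only run parser tests" question without a --filter incantation.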
[16:10:23] andre__: https://www.mediawiki.org/w/index.php?title=Talk:Community_metrics&diff=0&oldid=613750 [16:10:52] Nemo_bis, I don't see an advantage in that [16:11:01] just one step to make it more heavy-weight [16:11:22] just a fyi [16:11:48] maybe you also have a better suggestion on how to list/count bugfixers [16:12:17] usually the person who closed the bug is called such, but this is not perfect either [16:15:54] Dereckson: you're wrong [16:16:06] check better [16:16:26] Nemo_bis: for RESOLVED FIXED these are statistics that should really really be gathered in gerrit. For the rest it could be done for "triagers" in Bugzilla, I'd say [16:16:33] I'll comment. Thanks for the heads-up :) [16:21:40] hi [16:22:06] please deploy this localisation update: https://gerrit.wikimedia.org/r/#/c/37072 [16:57:25] Nemo_bis, andre__ > I see two different bug-resolver statistics. [16:57:48] (1) There is the Community metrics page with a top 10 starting with nobody/wikimedia-bugs [16:57:54] This one indeed takes the bug assignee [16:58:04] (2) The weekly mail about Bugzilla statistics [16:58:22] This one seems to use the resolver field. [16:59:06] of course it does [16:59:11] I said it just above [17:00:47] ? [17:01:53] ] ssh review gerrit ls-projects | grep irc [17:01:53] operations/debs/ircd-ratbox [17:02:28] That won't ease https://bugzilla.wikimedia.org/show_bug.cgi?id=14652 followup. [17:02:52] Platonides: bip [17:03:07] Platonides: do you still need https://bugzilla.wikimedia.org/show_bug.cgi?id=14652 ? [17:44:07] matthiasmullie: can you deploy this localisation update? https://gerrit.wikimedia.org/r/#/c/37072 [18:38:48] <^demon> aude, DanielK_WMDE: I'm running rebuildEntityPerPage.php on wikidatawiki right now. [18:51:39] ^demon: cool... i don't think it has any status output, does it? [18:51:49] can you tell me how fast the entity_per_page table is growing? [18:51:53] <^demon> Nope, I was just thinking that would've been nice.
[18:51:55] <^demon> I'll check [18:52:05] it should have ~350k entries in the end [18:52:11] (one for each item page) [18:52:22] yea, the script is a bit basic [18:52:51] should this fail, this *can* be done with a single sql query, actually. with a lot of arcane knowledge and install-specific assumptions [18:53:04] <^demon> It's just past 100k now. [18:53:23] excellent [18:53:29] so it'll be done in less than an hour. [18:53:46] <^demon> Maybe just under 1000 rows per second? Shouldn't take too long to finish, yeah. [18:53:53] ^demon: can you do a sanity check? the entry for item id 1234 should point to the page with the title Q1234 [18:54:49] <^demon> Yep, looks good. [18:54:53] cool, thanks [18:55:08] did you look at the pruning script already? that's independent, it can be done before the rebuild is finished [18:55:25] <^demon> Not yet, I'm going to grab lunch now (still haven't done that) [18:55:29] * DanielK_WMDE is running back and forth between his desk and the kitchen [18:55:43] yea, gotta prioritize :) [18:56:29] A bunch of us are now in #wikimedia-office to talk about https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/2012-12-06 - come join and watch the live video stream [19:01:59] about to run scap! [19:14:43] DanielK_WMDE: ^demon how's the script doing? [19:15:02] <^demon> ~220k entries. [19:15:05] <^demon> Should be done within the hour. [19:15:08] * ^demon goes back to lunch [19:15:47] yay [19:29:33] hey all, anyone online that knows how i might integrate async uploading of an xml file in an extension i'm working on? [19:29:59] you should try #mediawiki [19:30:13] jeremyb: k, thanks [19:38:31] who do i bribe to have a small config change reviewed? :( https://gerrit.wikimedia.org/r/#/c/36197/ [19:42:54] <^demon> MatmaRex: Done. [19:43:17] how much did it cost? [19:43:21] ^demon: yay! [19:43:36] <^demon> DanielK_WMDE: Not you, MatmaRex :p [19:43:54] oh, why did i not see that? wishful reading, i guess [19:44:03] <^demon> We're at 371166.
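The sanity check DanielK_WMDE asks for (item id 1234 should map to the page titled Q1234) could look roughly like this. A sketch only: `sql` is assumed to be the usual Wikimedia shell wrapper around the mysql client, and the column names are assumptions based on the Wikibase wb_entity_per_page schema:

```shell
sql wikidatawiki <<'SQL'
-- the entry for item id 1234 should point at the page titled 'Q1234'
SELECT epp_entity_id, page_title
FROM wb_entity_per_page
JOIN page ON page_id = epp_page_id
WHERE epp_entity_id = 1234;
SQL
```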
[19:44:14] ^demon: thanks! [19:44:20] ok. that's about done, i suppose [19:44:41] ^demon: when is it going to be deployed? [19:44:46] <^demon> I just did. [19:45:32] oh. [19:45:36] that's lovely then, thanks a lot! [19:45:40] ^demon: oh wow, we are nearly at 500k now. the site is growing too fast! [19:45:45] i'm off now, minding the kids [19:45:47] bbl [19:55:19] Reedy: Hi! Can you take another look at this change, please, and tell me whether the way chosen to increase the memory limit is good or not? [19:55:21] This fixes a problem that affects Wikisources and that has been waiting for 15 days. https://gerrit.wikimedia.org/r/#/c/36632/ [19:57:41] Change was only submitted 3 and a half days ago ;) [19:57:52] It seems slightly hacky.. But we do provide the functionality to do it like that... [20:01:45] Reedy: Thanks! Is it possible to deploy the patch now? [20:02:44] ...so we don't have to wait for the deployment of wmf6 on Wednesday. [20:03:33] I can in a little bit [20:03:38] You can make the backport yourself, you know ;) [20:03:45] Is it normal for scap to take over an hour for a single wmf version now? [20:04:09] Press enter a few times? [20:04:36] that seemed to help [20:05:27] Reedy: How? I didn't know that a non-maintainer can do that. [20:05:43] Anyone can make a change and submit it if they have a gerrit account [20:06:04] Just needs someone with the correct rights to approve/merge it, and subsequently deploy it [20:07:08] kaldari: You might even still be waiting to type yes for hume... [20:07:30] I did that [20:07:39] seems to be stalled again though [20:08:07] there it goes [20:08:30] <^demon> aude: rebuildEntityPerPage finished ok.
[20:08:38] hey everyone, the https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 -related issues are still happening :( [20:08:47] Tpt: Same way as random people can propose puppet changes [20:08:59] if i were you, i'd reboot some servers :( [20:09:23] Rebooting random servers isn't helpful [20:11:25] Reedy: I got over a dozen of these errors: mw7: Copying to mw7...failed [20:11:43] should I worry about that? [20:12:33] well, about 2 dozen now and counting [20:13:13] Not knowing what's actually failed isn't helpful.. :/ [20:14:07] a couple of different failures... [20:14:18] mw37: rsync: send_files failed to open "/php-1.21wmf5/maintenance/.mwsql_history" (in common): Permission denied (13) [20:14:27] mw37: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [generator=3.0.9] [20:14:45] actually it looks like pretty much all of them are reporting failures [20:15:10] reedy@mw7:~$ /usr/bin/sync-common [20:15:10] Copying to mw7...rsync: send_files failed to open "/php-1.21wmf5/maintenance/.mwsql_history" (in common): Permission denied (13) [20:15:10] rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [generator=3.0.9] [20:15:12] Yeah.. [20:16:19] reedy@fenari:/home/wikipedia/common$ ls -al php-1.21wmf5/maintenance/.mwsql_history [20:16:19] -rw--w---- 1 aaron wikidev 138 2012-12-05 21:49 php-1.21wmf5/maintenance/.mwsql_history [20:16:21] ffs AaronSchulz [20:16:45] wikidev has no read? seriously? [20:16:49] heh [20:16:56] <^demon> Well, you have write ;-) [20:17:03] FWIW, it looks like the scap is actually working [20:17:16] i.e. the wikis are getting updated :) [20:17:47] ^demon: yay! [20:20:49] <^demon> I've got the change in for puppet for the prune script. [20:20:55] <^demon> Just need to track down an opsen.
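The rsync failure above comes down to a file mode: `-rw--w----` (0620) gives the wikidev group write but not read, so rsync's sender gets EACCES when it tries to open the file. A minimal local reproduction and the obvious fix (the 0664 target mode is an assumption about the deploy tree's convention, not something stated in the log):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
touch .mwsql_history
chmod 0620 .mwsql_history          # the mode from the log: owner rw, group w only
stat -c '%A' .mwsql_history        # -> -rw--w----
chmod 0664 .mwsql_history          # restore group/other read so rsync can copy it
stat -c '%A' .mwsql_history        # -> -rw-rw-r--
```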
[20:22:10] so, i'm thinking that test2 should show the interwiki links now if there is an associated item on wikidata [20:22:16] when we purge a page, but i'm not seeing anything [20:22:26] * aude wonder if we're missing something..... [20:23:00] DanielK_WMDE: the script is done [20:23:12] is somebody hunting down https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 or is everyone just too fed up with it? [20:26:14] MatmaRex: I sent an email to Ops, but haven't heard anything [20:26:33] I'm surprised no one seems interested in this bug [20:27:59] i'm not surprised anymore :/ the immediate help i received from you and others here yesterday was just too odd ;) [20:28:12] heh [20:28:14] anyway, it's still happening, but apparently mostly on monobook [20:28:28] (or at least that's the reports i'm getting) [20:28:28] ^demon: DanielK_WMDE which section is wikidatawiki on? s3? [20:28:37] <^demon> Yes. [20:28:50] i've advised everyone to try clearing their cookies per https://bugzilla.wikimedia.org/show_bug.cgi?id=42452#c28 [20:29:01] ah, ok [20:29:04] that's the default [20:32:06] * aude thinks https://test2.wikipedia.org/wiki/New_York_City?action=purge should show links [20:33:23] * MatmaRex thinks https://pl.wikipedia.org/ should finally stop sending old CSS [20:36:08] AaronSchulz: what would be the best way to clean up https://gerrit.wikimedia.org/r/#/c/36340/ ? Is it possible to just re-open the changesets that were merged before and that should be merged again? [20:37:39] valhallasw: you could make a patch that reverts the revert and has a dependent commit with a fix if the fix is small (or just have them all in one commit) [20:40:10] AaronSchulz: that would mean: git revert , remove all the new LST stuff and commit, and git review those two changes? [20:40:52] wait what are you trying to do? 
[20:41:10] I thought you wanted to restore that stuff [20:41:57] AaronSchulz: I'm leaving the new LST parser for now, but there were some independent improvements by others [20:42:17] ^demon: having wikidata on s3 is probably not a good idea. it just grew by 100000 pages in two or three days. [20:42:23] improved parser tests, splitting the code into more files, etc [20:42:23] bot-driven imports, sure. but still [20:42:45] valhallasw: would any of those changes merge cleanly by themselves? [20:42:58] <^demon> DanielK_WMDE: That's a question for binasher, I guess. [20:43:09] sure [20:43:17] so, what's next... [20:43:23] AaronSchulz: I'm not really sure. They would need to be rebased in any case. [20:43:30] setting up the prune script and then enabling wb_changes, I suppose. [20:43:38] <^demon> DanielK_WMDE: I've got a patch in for the prune cron, just need an opsen to merge. [20:43:46] ^demon: if you want to do that now, i'll stick in front of the screen. [20:43:57] ic [20:44:06] AaronSchulz: it may be easier just to do the revert, manually back out the LST parser changes and re-commit that [20:44:34] valhallasw: you might try cherry-picking the good changes back and resolving conflicts [20:44:39] ^demon: the cron job should be unproblematic. once the table is enabled again, we need to keep an eye on its size. [20:44:46] let me know when it happens [20:44:57] AaronSchulz: that's a good idea [20:45:09] i'll be online for another two or three hours. but not working, so perhaps afk or watching a movie [20:45:20] DanielK_WMDE: i agree, re: wikidata on s3 [20:46:05] binasher: ok.
doesn't make a difference to us, but you may want to keep an eye on the site and activity on wikidatawiki, and move it if appropriate [20:46:24] if we build an entirely new shard, we can cleanly migrate it off of s3 with minimal write downtime… but migrating it to an existing shard will require downtime [20:46:37] the bigger it is, the more write downtime there'd be [20:46:48] so sooner than later would be good [20:47:13] <^demon> Do we have the available db boxes for a new shard? No downtime sounds nice. [20:47:16] no [20:47:37] <^demon> Fair enough. Where would you want to put it? [20:47:58] i'd put it on s5 with dewiki [20:48:21] sounds reasonable [20:48:52] binasher: wikidata is atypical, in that I expect it to grow quite large in terms of number of pages and links, but with rather few authors, and not a terribly large number of edits [20:48:58] DanielK_WMDE: can you think of some reason that links wouldn't show up yet on test2? [20:49:02] e.g. with a page purge? [20:49:08] what are we missing? [20:49:30] aude: hm... have you tried edit preview? [20:49:36] they should work [20:49:52] * aude tries again [20:50:02] nothing [20:50:03] <^demon> binasher: Sounds fine to me. When do you think we could do this? Like you said, the sooner the better. [20:50:20] aude: yea, confirmed [20:50:21] https://test2.wikipedia.org/wiki/Riesi for example [20:50:27] a new item on wikidata [20:50:38] DanielK_WMDE: i wouldn't be surprised if wikidata warrants its own shard in the future, though sharing s5 with dewiki (the only db on s5) should cover a lot of growth [20:50:40] i tried our old trusty Helium. no dice [20:50:55] what's involved with making wikidata read-only? [20:51:24] binasher: just the usual. at least in theory. we discovered a few places where modifications could be made even in read-only mode. we are fixing that now [20:51:38] read only seems to be just on the software side [20:51:51] * aude thought the database enforced it somehow, but that's not the case [20:52:04] hm... i feel blind...
i have no idea how to debug test2 [20:52:12] me too [20:52:28] is it providing 'enwiki' as its global identifier? [20:52:36] it is in the setting, yes [20:52:51] we have readOnlyBySection for setting entire shards to read-only.. i'm not actually sure how to do it for an individual project [20:53:25] <^demon> We've also got the MediaWiki $wgReadOnly, but it's not strictly enforced in 100% of places... mainly just user-facing things. [20:54:56] readOnlyBySection just sets $wgReadOnly [20:55:10] ^demon: test2 is not picking up language links as expected. it's probably a minor oversight, but hard to debug without any access... [20:55:12] the real check is in mysql ;) [20:55:15] not sure how to proceed [20:55:28] * DanielK_WMDE regrets not putting in more debug statements [20:55:28] nothing in the error logs? [20:55:58] it works perfectly fine on my test instance, when i add new items and don't run pollForChanges [20:56:09] the links do appear when i create a page or purge on my client [20:56:19] DanielK_WMDE: would it be ok if wikidata is read-only for an hour or two on Monday? [20:56:55] binasher: is that to do schema updates? [20:57:13] no, to migrate shards [20:57:24] maybe the schema updates should happen first [20:57:29] hmmm.... the schema updates don't require read only? [20:57:52] i think it's okay as long as we don't make a regular habit of this [20:58:09] people understand the project is new and all [20:58:23] schema updates won't require read only once the schema is sane [20:58:30] right [20:58:30] binasher: we plan a regular deploy on monday (that would be wmf6 i guess). and schema updates. and shard migration... seems a bit much for one day. but please talk to denny about it, it's his decision.
[20:58:44] yeah, that does seem like a lot [20:58:45] i think the regular deploy does not require read only [20:58:55] it'll be a rather minor update [20:58:59] and i'd rather not migrate shards until after the schema is fixed [20:59:02] <^demon> DanielK_WMDE: Here's the full debug from my request to [[Riesi]]: http://p.defau.lt/?jz9B8W4iaZaAbkwQD6qVWw [20:59:29] ^demon: thanks! i'll look. ideally, i'd want to see what happens during an edit preview [20:59:37] i don't see wikibase there [20:59:46] ^demon: we may have to add in more debug statements to see what's happening. [20:59:57] because we have hardly any wfDebug >_< [21:00:33] <^demon> Hmm, it shows up on some pages [21:00:38] ^demon: you wouldn't see anything if you are hitting the parser cache. the interesting bit happens just after parse. hence, edit preview [21:00:39] really? [21:00:44] <^demon> https://test2.wikipedia.org/wiki/Main_Page is showing it for me. [21:01:28] ^demon: that page has languagelinks in the wikitext [21:01:39] the question is - will they show if there are none in the wikitext? [21:01:41] they should [21:01:42] * aude would like to not be subject to pending changes [21:01:44] well, not on the main page [21:01:48] * aude removed a bunch [21:01:52] that's probably not linked on wikidata (or maybe it is?!) [21:01:55] but it's stuck in pending changes [21:02:02] DanielK_WMDE: probably is [21:02:09] aude: sorry, what is stuck where? [21:02:24] http://www.wikidata.org/wiki/Q5296 [21:02:31] DanielK_WMDE: my edits [21:02:43] oh, pending changes [21:02:45] shouldn't matter though as i can see pending edits [21:02:49] have we tested interaction with that? [21:02:53] should just work, but... [21:03:05] aude: edit preview should always work. [21:03:06] we have that on our test client [21:03:13] DanielK_WMDE: but it doesn't [21:03:34] ^demon, aude: i'm not doing another night shift today. i'll poke at it tomorrow, probably by adding a bunch of debug statements and staring at the log.
[21:03:43] unless someone wants to spend the evening doing that [21:03:52] ok [21:04:04] * aude is expecting a guest soon [21:04:16] won't really be around tomorrow, but might pop in irc [21:04:18] <^demon> DanielK_WMDE: I'll start early tomorrow so we can finish debugging this. [21:04:51] ^demon: great, thank you. when would that be, would you say? [21:05:26] * DanielK_WMDE was planning to take the day off tomorrow. well, some of it anyway [21:05:35] <^demon> Sometime around noon, UTC. [21:05:39] <^demon> (7ish my time) [21:05:56] ^demon: sounds excellent! i'll be there, adding debug statements ;) [21:06:10] i should figure out how that log channel stuff works. [21:06:11] tomorrow [21:11:43] AaronSchulz: I cherry-picked the commits, but now gerrit is complaining that the relevant changesets are already closed. I'll try the revert-the-revert-and-do-a-more-specific-revert method. [21:24:20] AaronSchulz: https://gerrit.wikimedia.org/r/37297 [21:24:37] and https://gerrit.wikimedia.org/r/37298 [21:24:46] is this the (a) correct way? [21:33:03] Isarra: you really shouldn't be IRC'ing as root [21:42:58] valhallasw: you can cherry-pick and change the commit id [21:43:55] <^demon> aude, DanielK_WMDE: Cron for pruning should be live momentarily. [21:44:12] ^demon: sounds great [21:44:23] well, the gerrit commit ID of course [21:45:47] AaronSchulz: I was somewhat too lazy to do that for a dozen changes [21:48:52] was it really 12? [21:49:11] I think cherry-picking preserves authorship properly [21:49:17] <^demon> aude: I made a mistake, we're fixing it now :) [21:49:21] <^demon> I set it up to run every 15m. [21:49:46] AaronSchulz: Let me check what git blame does. I would expect it to understand git reverts [21:50:19] but you're right, it doesn't [21:50:54] 13.7 PB/s?
O_o http://ganglia.wikimedia.org/latest/?r=week&cs=&ce=&m=cpu_report&s=by+name&c=Virtualization+cluster+pmtpa&h=&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 [21:50:56] there were 7 commits that I merged back (some completely, some partially) [21:56:00] kaldari: a guy just confirmed that clearing his cookies fixed https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 for him [21:56:16] interesting [21:56:22] (at least i think he did, i suggested clearing both his cache again and then cookies if it fails, and he said that "it worked") [21:56:38] it would certainly be interesting. [21:57:31] how many servers are handling these calls? how hard would it be to try to send this load.php request to all of them and see which ones return correct results? [21:57:54] There's a script for that [21:57:56] well, sort of [21:59:27] valhallasw: I probably broke the authorship with that revert :) [22:00:19] though git blame would still be fine by doing "blame previous" or whatever [22:00:31] AaronSchulz: ok, no clue why gerrit thinks I'm on that branch, but this is the cherry-picked version: https://gerrit.wikimedia.org/r/#/q/status:open+project:mediawiki/extensions/LabeledSectionTransclusion,n,z [22:01:23] MatmaRex: If you hear of anyone who is technically competent seeing the bug, see if you can get them to copy out the X-Cache HTTP header for the bits request that pulls the CSS. [22:01:53] valhallasw: I am root. [22:01:55] hey LeslieCarr thanks for pointing to http://www.meetup.com/SF-Bay-Area-Large-Scale-Production-Engineering/events/92123162/ -- hope 1 or 2 people from WMF can go and tell people we're hiring [22:02:09] <^demon> aude: Script just ran, worked fine. Like I said, we've got it on 15m intervals. [22:02:15] it should say something like "niobium miss (0)" or "strontium hit" [22:02:19] LeslieCarr: I just sent a note to Sangeeta to suggest that she go. [22:02:39] cool [22:02:45] MatmaRex: I have no idea [22:02:52] LeslieCarr: Do you know?
[22:02:58] kaldari: aye aye [22:03:24] kaldari: haven't been following the conversation and sort of busy right now :-/ [22:03:35] LeslieCarr: i.e. how many servers actually serve bits.wikimedia.org content [22:03:40] ^demon: [22:03:41] ok [22:04:02] kaldari: unfortunately it looks like the only people who are now reporting this are the kind of people who send e-mails with BMP attachments to show me the screenshots :) [22:04:11] who are still* [22:04:15] valhallasw: seems ok [22:04:37] were all of those previously reverted, even https://gerrit.wikimedia.org/r/#/c/35388/3 ? [22:04:37] LeslieCarr: and is there a list of them somewhere so we can check them all for bug 42452 [22:05:01] kaldari: look at all the collections that are called bits here - http://ganglia.wikimedia.org/latest/ [22:05:16] valhallasw: I should have told you to do cherry-pick -x, too late I guess ;) [22:05:57] AaronSchulz: no, that one hasn't been merged at all - it's also a different branch [22:05:59] I guess it's just the ones with Original-Change-Id [22:06:04] MatmaRex: Looks like there are only 4 bits cache servers [22:06:17] hm. [22:06:18] I'm surprised gerrit didn't whine about Change-Id not being the last line [22:06:20] at least in eqiad [22:06:29] probably more in Europe [22:06:32] AaronSchulz: -x wouldn't have helped, as it doesn't add the cherry-picked-from line if there are merge conflicts; I kept the original change-id as Original-Change-Id, which should help [22:06:39] 6 in esams [22:06:42] 4 in tampa [22:06:49] valhallasw: really? it doesn't? huh [22:06:59] AaronSchulz: that's what the docs said - I haven't tested it [22:07:18] OK, so about 14 total. That should be testable [22:07:28] When recording the commit, append a line that says "(cherry picked from commit ...)" (...) This is done only for cherry picks without conflicts. [22:08:08] Reedy: Is there a way that I can request content from the esams or tampa servers from here?
[22:08:34] https://gerrit.wikimedia.org/r/#/c/37308/ has a misleading summary [22:08:40] (I am clearly not the right person to troubleshoot this bug :P) [22:08:46] They've all got external IP addresses [22:08:53] it's just adding a test but it sounds like it's adding functionality [22:09:04] so you can use sq68.wikimedia.org from anywhere [22:09:11] Thanks! [22:09:15] meh [22:10:03] AaronSchulz: just a sec. [22:10:10] MatmaRex: One problem though is that there are potentially hundreds of different URLs that could be cached, since each person has different gadgets installed. [22:10:11] too late [22:10:35] MatmaRex: And ResourceLoader appends them into 1 request [22:10:54] AaronSchulz: heh, OK [22:12:03] kaldari: yeah, i understand [22:13:03] kaldari: tbh, clearing the bits varnish caches one by one (with a delay in between) would probably be fine [22:13:06] but if we can find out what bits URL they are using, it wouldn't be too hard [22:13:13] it doesn't usually take long for them to repopulate [22:13:18] just obviously don't do them all at once [22:13:38] Reedy: that would be a good thing for us to resort to at this point [22:14:08] Reedy: who can I ask to do that? [22:14:59] Anyone in ops I guess.. [22:15:12] Presumably how the mobile varnish cache is purged should be sufficient [22:16:02] <^demon> aude: Please remind me--how often did we want to run the polling script again? [22:19:03] ori-l: Are you using your E3 deployment window today? [22:20:37] ^demon: i'd like it as often as possible but we should discuss with daniel before turning it on [22:21:05] * aude would really like it continuous [22:21:39] as otherwise it means *some* delay in edits showing up in the client, but we can live with a short lag [22:22:09] As long as we make sure it doesn't start doing a new run before the last one has finished...
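The per-server X-Cache check kaldari asks for can be done against a specific cache host, using the sq68.wikimedia.org example from above. A sketch: the load.php URL is illustrative, and the Host header is what steers the request to the bits frontend on that box:

```shell
curl -s -o /dev/null -D - \
  -H 'Host: bits.wikimedia.org' \
  'http://sq68.wikimedia.org/pl.wikipedia.org/load.php?modules=site&only=styles' \
  | grep -i '^X-Cache'
```

Repeating this across the ~14 bits caches would show which one is serving the stale CSS.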
[22:22:33] No response from ori-l, not seeing him run any processes on fenari --> I'm gonna deploy two cherry-picks real quick
[22:26:16] <^demon> aude: And we can go ahead and cut on the changes table now? The comment says to wait for the polling to be ready.
[22:26:59] (done)
[22:27:40] <^demon> RoanKattouw: Pet peeve...I wish people would stop git pulling in the wmf branches :(
[22:28:01] <^demon> Nobody uses --ff-only :(
[22:28:15] Oh, right
[22:28:28] But --ff-only would fail if there are security patches
[22:28:44] I occasionally fix up a situation with lots of merge commits by running git pull --rebase
[22:28:44] <^demon> Then you just fetch & rebase.
[22:28:45] ^demon: the table can be turned on
[22:28:54] the pruning is what it needs
[22:29:25] AaronSchulz: great, thanks!
[22:29:47] Now at least I can blame Reedy when not everything has been merged back in yet ;-)
[22:30:04] <^demon> RoanKattouw, Reedy: I can haz review? https://gerrit.wikimedia.org/r/#/c/37324/
[22:30:07] but seriously - thanks for merging all those changes.
[22:30:08] I'm still confused what you were doing...
[22:30:56] <^demon> RoanKattouw: git up is aliased on /h/w/c/ to `git pull --ff-only` ;-)
[22:31:01] Yay
[22:31:28] <^demon> mediawiki-config should *almost never* have stuff committed that's not in gerrit.
[22:32:13] ^demon: This was wmf5
[22:32:19] It doesn't have security patches now
[22:32:25] <^demon> Yeah I know.
[22:32:28] But the wmf* branches have security patches from time to time
[22:32:32] <^demon> I fixed it, it was wildly out of sync with origin.
[22:32:41] ^demon: BTW can I haz review? https://gerrit.wikimedia.org/r/37327
[22:33:32] <^demon> Merged.
[22:33:39] RoanKattouw: can I haz a review for https://gerrit.wikimedia.org/r/#/c/37291/1 ?
[22:33:50] <^demon> RoanKattouw: I'll deploy both if you merge mine :p
[22:35:14] or ^demon
[22:35:24] ^demon: Yours has a Cr comment from Reedy :)
[22:35:31] <^demon> Yeah, he's nitpicking.
[22:35:39] Reedy nitpicks?
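[Editor's note: the `git pull --ff-only` versus fetch-and-rebase distinction discussed above can be demonstrated end to end in a scratch repository. This is a sketch, not Wikimedia's actual deployment setup: the paths, file names, and commit messages are invented, the local commit merely stands in for a security patch on a wmf branch, and it assumes `git` is on PATH.]

```python
import os
import subprocess
import tempfile

def git(cwd, *args):
    """Run git in cwd; return (exit_code, combined stdout+stderr)."""
    proc = subprocess.run(
        ["git", "-c", "user.email=dev@example.org", "-c", "user.name=dev", *args],
        cwd=cwd, capture_output=True, text=True,
    )
    return proc.returncode, proc.stdout + proc.stderr

base = tempfile.mkdtemp()
origin = os.path.join(base, "origin")
clone = os.path.join(base, "clone")
os.mkdir(origin)

# A scratch "origin" with one commit on a branch named main.
git(origin, "init")
git(origin, "symbolic-ref", "HEAD", "refs/heads/main")
with open(os.path.join(origin, "a.txt"), "w") as f:
    f.write("one\n")
git(origin, "add", "a.txt")
git(origin, "commit", "-m", "initial")
git(base, "clone", origin, clone)

# Histories diverge: one new commit upstream, one local commit in the
# clone (standing in for a locally applied security patch).
with open(os.path.join(origin, "b.txt"), "w") as f:
    f.write("upstream\n")
git(origin, "add", "b.txt")
git(origin, "commit", "-m", "upstream change")
with open(os.path.join(clone, "c.txt"), "w") as f:
    f.write("local\n")
git(clone, "add", "c.txt")
git(clone, "commit", "-m", "local patch")

# `git pull --ff-only` refuses to fast-forward over diverged history,
# so it fails cleanly instead of creating a merge commit.
ff_code, _ = git(clone, "pull", "--ff-only")

# fetch & rebase instead replays the local patch on top of origin/main.
git(clone, "fetch", "origin")
rebase_code, _ = git(clone, "rebase", "origin/main")
```

[This is why `--ff-only` as a default is safe: it never silently manufactures merge commits, and the failure is the cue to rebase the local patches.]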
[22:35:40] The comment is a lie.
[22:38:54] <^demon> Fine, comment removed.
[22:38:56] <^demon> PS2.
[22:39:40] ^demon: Merged
[22:41:09] <^demon> Deployed commonsettings.
[22:41:46] Thanks
[22:41:59] nowai
[22:42:11] <^demon> wai
[22:42:11] 35 rows in set (0.00 sec)
[22:42:13] wheee
[22:42:19] stuff in wb_changes :D
[22:42:57] <^demon> 65...66
[22:43:00] <^demon> Yay for prune script.
[22:43:11] 76!
[22:43:22] 81 rows in set (0.00 sec)
[22:43:58] 108
[22:43:58] <^demon> Ok, I think we all agree the table is being populated ;-)
[22:44:20] So I herd u liek JSON
[22:44:34] <^demon> Reedy: fyi, log for polling script is in /var/log/wikidata on hume
[22:44:49] <^demon> s/polling/pruning/
[22:45:12] So, does test2wiki do anything useful now? :p
[22:45:48] ^demon: \o/
[22:45:49] Reedy: the links aren't showing up :(
[22:45:58] doesn't require the polling, if we edit / purge a page
[22:47:03] I've no idea how the client is even supposed to work...
[22:47:13] <^demon> Daniel said he's gonna lob some wfDebug in tomorrow morning so we can figure it out.
[22:49:20] Hmm
[22:49:26] no idea how to do that with test2
[22:49:44] aude: seems wikidatawiki is starting to get numerous deadlock errors..
[22:49:45] Thu Dec 6 21:50:03 UTC 2012 srv257 wikidatawiki Wikibase\TermSqlCache::saveTermsOfEntity 10.0.6.44 1213 Deadlock found when trying to get lock; try restarting transaction (10.0.6.44) INSERT INTO `wb_terms` (term_language,term_type,term_text,term_entity_id,term_entity_type) VALUES ('eo','label','Kalvinana preÄ
[22:50:11] errr
[22:50:29] only, like since we turned on wb_changes?
[22:50:34] * aude can't see how it's related
[22:51:14] no, all day
[22:51:21] 139
[22:51:29] when we run the schema changes, i believe we are dropping the uniqueness constraints and that might help
[22:51:32] ok
[22:52:50] Logged for reference anyway
[22:52:54] updated bugzilla
[22:53:26] https://bugzilla.wikimedia.org/42547
[22:53:31] i think yours is a duplicate
[22:53:48] Yup
[22:56:10] Susan: to answer your question on bug 41729, I am personally agnostic. I think it's probably less disruptive to just change Vector. But others are pushing for changes to the way parser outputs section edit links overall.
[22:56:55] StevenW: Yeah, right after I posted that comment, I noticed your mail to design-l from November said that it would be Vector-only.
[22:57:23] I'm not sure about others, but I usually use my mouse with my right hand. Putting the section edit links on the other side of the screen seems a bit strange to me.
[22:57:58] I think the justification is not mouse position, but read/scan patterns in LTR languages.
[22:58:16] Right...
[22:58:43] and that it is often hard to associate the right link with the right header when they're separated that much distance.
[22:58:46] Susan: I don't understand this point
[22:58:55] would you also prefer the sidebar being to the right?
[22:59:05] The scrollbar is on the right.
[22:59:16] you didn't answer
[22:59:25] I'm not sure what you're asking.
[22:59:33] MediaWiki's sidebar
[22:59:36] Oh.
[22:59:47] Sorry, I thought you meant scrollbar.
[22:59:56] No, most of the sidebar is completely useless.
[23:00:06] Heh.
[23:00:20] I have a bunch of sections (including the logo) hidden.
[23:00:28] Many think the same of section links.
[23:00:28] I don't need to go to the Current events portal or whatever.
[23:00:41] This is not really relevant.
[23:00:56] In terms of usability, it seems like they'd be better where they are currently.
[23:01:07] StevenW: http://piramido.wmflabs.org/wiki/Hipster_ipsum
[23:01:08] What?
[23:01:15] In terms of usability, it seems like they'd be better where they are currently.
[23:01:21] they=?
[23:01:26] The section edit links.
[23:01:38] "Currently" for me is next to the header.
[23:01:44] On which wiki?
[23:01:51] All Italian wikis.
[23:01:57] Since 2006 or so I think.
[23:02:01] Perhaps 2007.
[23:02:30] * Susan shrugs.
[23:02:37] If I find it too annoying, I'll just override it.
[23:02:58] I understand the arguments for making the links more discoverable.
[23:03:15] But for the users who have already discovered them, I'm not sure moving them improves much.
[23:03:19] You go girl.
[23:03:31] Reedy: ok, made the edit link show up https://test2.wikipedia.org/wiki/Dred_Scott
[23:03:44] How many times did you purge it? ;)
[23:03:45] It improves that the cursor has to move around less.
[23:03:45] it's still not pulling in the external links from the repo, though
[23:03:52] it's a new page
[23:04:04] But yes, old editors are almost not affected either way perhaps.
[23:06:29] aude: Looks like it's mostly there.. It's found the matching id etc
[23:07:43] So putting links directly next to the headers makes them more discoverable?
[23:08:24] That's the claim.
[23:08:28] And adding an icon.
[23:08:42] Probably a pencil icon, shaped like the goddess Agora.
[23:08:43] ok, with the edit link, i can tell it's checking wikidata
[23:08:58] the edit link will be there if a corresponding wikidata item exists
[23:09:04] otherwise, no edit link
[23:09:37] My experience has been the opposite - unless you're reading the entire article it's very easy to not even notice they're there when they're next to the header because they just look like more header.
[23:09:39] * aude made a test2wiki page (no edit link), made the wikidata item (edit link), deleted the item (no edit link), restored the item (edit link)
[23:09:53] And when you want to edit a section, you're not looking at the header, but at the section...
[23:09:56] sure we can debug it tomorrow
[23:10:01] heh
[23:10:05] it's not like it's urgent
[23:10:07] just irritating
[23:10:15] no, but seems the db connection is working
[23:10:25] at least checking items per site table
[23:10:46] language links requires accessing the entity data, which involves different tables
[23:10:49] Susan: Icon with 'edit' text?
[23:11:06] I'll get a screenshot.
[23:11:07] I was taught that having both icon and text is the spawn of evil.
[23:11:22] anyway, should sleep soon
[23:11:30] Or maybe it was just redundant. I dunno.
[23:11:32] Isarra: https://upload.wikimedia.org/wikipedia/mediawiki/4/4a/Screenshot-SectionEditLinks-Vector.png
[23:11:41] https://www.mediawiki.org/wiki/Extension:Vector/SectionEditLinks
[23:11:46] Is that a wikia wiki?
[23:11:55] Wait, nevermind, they don't do vector.
[23:11:57] Isarra: when you're looking at the section you should double click. ;)
[23:12:03] * Nemo_bis never used that feature
[23:12:30] There is no way users would find that when they actually want it without being told.
[23:17:34] Isarra: what do you think about the way headings look on pl.wiki?
[23:17:39] random article: https://pl.wikipedia.org/wiki/Beyond_Final_Fantasy
[23:18:28] Easy to miss.
[23:18:43] huh, really?
[23:18:44] Heh, the top link is in Polish while the others are in English.
[23:18:58] JavaScript insertion.
[23:18:58] They blend in already being next to it, and making them small only further diminishes their importance.
[23:19:35] Since they do still have the [] it's not as bad as it could be, though.
[23:19:56] Isarra: i feel that way about the default ones
[23:20:07] moved aside, away from the article content
[23:20:23] And so they should be, since they are not more content.
[23:20:25] Susan: hm, i didn't notice that before - lazy coding :P
[23:20:42] Susan: (also it's been that way since at least 2008, i think, before the mediaWiki object)
[23:21:01] Isarra: i think we wanted to make them discoverable?
[23:21:25] someone should make some stats real quick, about the percentage of section edits on pl.wiki and en.wiki
[23:21:38] among IPs, new users and established users
[23:23:33] Aye, it would definitely be something to look into.
[23:25:37] Isarra: can you poke someone about this? i have to get some sleep, and wake up in five hours now. good night.
[23:30:45] I wouldn't know whom to poke.
[23:40:08] "I was taught that having both icon and text is the spawn of evil." Isarra, where did you learn that?
[23:43:50] I would prefer not to say.
[23:46:13] Regardless of where, however, redundancy of that nature just defeats the purpose - if the icon is an adequate indicator then there should be no need for text, and if it isn't, then it shouldn't be there at all.
[23:49:51] Except that text and icons together are far and away the most memorable and understandable ui choice
[23:50:50] which is part of why the A/B test of the new section edit links was so successful.
[23:51:29] Even in very icon-heavy interfaces like the new Gmail, text explainers are present
[23:51:58] Hey AaronSchulz. Is there an easy way to see if a row in the text table belongs to a revision that has been deleted/suppressed?
[23:52:48] JOIN on revision and archive (rev_text_id/ar_text_id) and check rev/ar_deleted?
[23:54:08] brion: damn you conflicts :)
[23:54:44] Cool, thanks. Although I'm guessing since rev_text_id isn't indexed on the revision table, asher will slap me if I try that...
[23:55:47] true it's not I don't think
[23:56:29] so it would have to be scanned in batches or something if done
[23:59:00] chrismcmahon: can you look at 2012120610012691 ?
[23:59:20] jeremyb: no, I don't think I have access to OTRS
[23:59:42] orly...
[23:59:51] csteipp: how are you coming across the text row?
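[Editor's note: the scan-in-batches idea discussed above — walking the revision table by primary key because rev_text_id has no index — can be sketched with a toy schema. This uses SQLite and invented rows purely for illustration; the real MediaWiki tables are MySQL, rev_deleted is a bitfield, and the archive table (ar_text_id/ar_deleted) would get an identical second pass.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy stand-in for MediaWiki's revision table; rev_text_id is deliberately
# left unindexed, mirroring the situation described in the discussion.
conn.execute(
    "CREATE TABLE revision ("
    " rev_id INTEGER PRIMARY KEY,"
    " rev_text_id INTEGER NOT NULL,"
    " rev_deleted INTEGER NOT NULL DEFAULT 0)"
)
conn.executemany(
    "INSERT INTO revision VALUES (?, ?, ?)",
    [(1, 101, 0), (2, 102, 1), (3, 103, 0), (4, 104, 4), (5, 105, 0)],
)

def deleted_text_ids(conn, batch_size=2):
    """Walk revision in primary-key order, a bounded batch per query,
    collecting text ids of revisions with any rev_deleted bit set."""
    found, last_seen = [], 0
    while True:
        rows = conn.execute(
            "SELECT rev_id, rev_text_id FROM revision"
            " WHERE rev_id > ? AND rev_deleted != 0"
            " ORDER BY rev_id LIMIT ?",
            (last_seen, batch_size),
        ).fetchall()
        if not rows:
            break
        found.extend(text_id for _, text_id in rows)
        last_seen = rows[-1][0]  # resume after the last row seen
    return found
```

[Paginating by the indexed primary key is what keeps each individual query cheap even though the filter column has no index: every batch is a bounded range scan rather than one giant full-table pass inside a single statement.]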