[00:12:48] sorry if this has been asked already, but is WP lagging?... [00:13:11] domas, it doesn't just look like it's my ISP now - others are reporting lag on enwp too [00:13:43] I can't get anything to work.... [00:19:10] Anybody around to pick this up please? [00:19:30] pick what up? [00:20:20] concerning what I spoke about earlier, the issues with wikipedia having probs, paravoid - I'm not the only one noticing it now. Quite a lot of us are, it seems it's not just my ISP. [00:20:32] who is quite a lot? [00:20:43] where is this being discussed? [00:20:51] Shearonink, Moe_Epsilon, Myself still, Frood... [00:21:01] #wikipedia-en, paravoid [00:21:12] are you all located in Europe? [00:21:18] no [00:21:27] what exactly are you experiencing? [00:21:52] database lag is at 1636 seconds [00:22:05] some pages just slow loading, others failing to load at all, paravoid [00:22:11] yes [00:22:30] I can get to Special:NewPages and the New User log, but not to my own contributions, I can't edit commons either right now. [00:22:47] uploads are just hanging like saddam [00:23:57] TimStarling: Around? ^ [00:23:58] https://gdash.wikimedia.org/dashboards/totalphp/ [00:24:03] okay, there's definitely something wrong. [00:24:33] I'm looking [00:25:18] db12 is acting up [00:25:41] yay [00:25:42] it's a million years old and runs half the site, what do you expect? 
[00:25:44] it's not just me [00:25:54] had a load spike a few moments ago [00:25:59] paravoid, I must admit I did get some mild relief earlier from restarting and reconnecting my wifi, but that looked to only have been a temporary problem, I managed to get a proper traceroute through after I did that :) [00:26:17] So you and domas were right in some respects :) [00:26:52] BarkingFish: you were reporting network problems in Europe, this is very far from that :) [00:27:01] lots of PopulateRevisionSha1 [00:27:10] maybe I'll kill -STOP that for now [00:27:25] yeah, because at the time I was the only one seeing it, and I'm in Europe. I didn't realise others would see it as well, that weren't in Europe [00:28:16] !log on hume: stopped populateRevisionSha1.php with kill -STOP due to excessive (800s) lag on db12 [00:28:26] Logged the message, Master [00:28:51] ok, give it 10 minutes now [00:29:11] hmm, maybe less than that [00:29:20] I guess we caught it after whatever caused the problem stopped running [00:29:39] yeah [00:29:41] strange that populateRevisionSha1 didn't back off by itself though [00:29:49] the load was already down when I ssh'ed in [00:30:00] http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&tab=ch&vn=&hreg[]=db12 [00:30:38] it started at around 23:48 [00:30:47] and lasted until 00:20 [00:31:47] it's caught up now [00:32:12] !log on hume: kill -CONT [00:32:21] Logged the message, Master [00:32:42] what did you run to see that?
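For the record, the back-off the channel expected from populateRevisionSha1 is the standard maintenance-script pattern: work in small batches and sleep while slave lag is above a threshold. A minimal sketch of that pattern (function names, thresholds and the `get_lag` callable are all illustrative, not MediaWiki's actual wfWaitForSlaves() code):

```python
import time

MAX_LAG = 5        # seconds of slave lag tolerated before we pause
BATCH_SIZE = 1000  # rows processed between lag checks

def wait_for_slaves(get_lag, max_lag=MAX_LAG, poll=2.0, sleeper=time.sleep):
    """Block until replication lag drops below max_lag.

    get_lag is any callable returning current lag in seconds (e.g. the
    Seconds_Behind_Master of the most lagged slave). Returns how many
    times we had to sleep, which is handy for logging and testing.
    """
    waits = 0
    while get_lag() > max_lag:
        sleeper(poll)
        waits += 1
    return waits

def populate_in_batches(rows, process_batch, get_lag, batch_size=BATCH_SIZE):
    """Process rows in small batches, pausing whenever slaves fall behind."""
    for start in range(0, len(rows), batch_size):
        wait_for_slaves(get_lag)
        process_batch(rows[start:start + batch_size])
```

Had something like the lag check run between batches here, the kill -STOP would not have been needed.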
[00:33:03] mysql -h db12 -e 'show processlist' | grep 'system' [00:33:18] but I'm old fashioned [00:33:21] ah, heh [00:33:34] I was wondering if we had anything more sophisticated than that [00:34:20] you can use mysql -h db12 -e 'show slave status\G' | grep Seconds_Behind_Master [00:34:45] in MySQL 4.0 that wasn't available [00:35:09] if you want to know about more than one server, there's mwscript lag.php --wiki=enwiki [00:35:49] and I think there's some graph on the toolserver somewhere that shows all servers [00:37:11] Looks like enwp is back to normal, everything loading now [00:37:54] yeah, it's been okay for a while [00:37:57] https://noc.wikimedia.org/dbtree/ has a graph for it [00:38:03] graph? tree thing [00:38:26] I know, but this isn't updated realtime [00:38:35] graph in the graph theoretic sense? [00:38:47] e.g. it still says lag: 994 for db12 [00:38:59] ganglia is wrong as well [00:39:02] Blame Ganglia then [00:39:02] Ganglia derived db data recent as of: Mon Jul 30 0:38:18 GMT 2012 [00:39:55] that's strange [00:42:26] uh oh [00:42:44] TimStarling: spiking again [00:43:08] AFTv5 [00:43:23] yes [00:43:31] no idea what /that/ is, but it's definitely full of that [00:43:41] it's non-essential [00:44:31] !log tstarling synchronized wmf-config/InitialiseSettings.php 'disabled ArticleFeedback since it caused an overload on db12 and general site slowness' [00:44:39] Logged the message, Master [00:44:55] is it a job? [00:45:32] !log killed ArticleFeedback queries on db12 [00:45:40] Logged the message, Master [00:45:45] no [00:45:57] AFTv5 is the new version of the article feedback tool, I believe, paravoid - it's being tested at the moment - I think Ironholds ran an Office hours about it a day or two back [00:46:00] user=wikiuser [00:46:13] for job runners, user=wikiadmin [00:46:32] paravoid: it's the box at the bottom of wikipedia pages [00:46:43] that asks readers for feedback about articles [00:47:01] feedback! yay! 
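The \G form of the statement prints one `Field: value` pair per line, which is why grepping for Seconds_Behind_Master works; the same parse in code is a few lines. A small sketch, assuming MySQL's standard field name and its NULL rendering for a stopped slave:

```python
def slave_lag(show_slave_status_output):
    """Extract Seconds_Behind_Master from `SHOW SLAVE STATUS\\G` output.

    Returns the lag as an int, or None when replication is stopped
    (MySQL prints the literal string NULL in that case).
    """
    for line in show_slave_status_output.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip() == "Seconds_Behind_Master":
            value = value.strip()
            return None if value == "NULL" else int(value)
    return None
```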
[00:47:01] strange that it is still going [00:47:25] maybe the AFT queries were an effect of increased load [00:47:40] load is fluctuating, right now it's increasing again [00:47:46] right, separate global [00:48:02] WP just got slow for me again [00:48:11] my watchlist won't come up [00:48:17] Shearonink: we know, we're on it. [00:48:25] I know you're on it [00:48:27] :D [00:48:47] paravoid: did you know about this tool? http://noc.wikimedia.org/dbtree/ [00:49:00] !log tstarling synchronized wmf-config/InitialiseSettings.php 'disabled properly' [00:49:08] Logged the message, Master [00:49:15] !log killed all AFTv5 queries another few times [00:49:24] Logged the message, Master [00:49:43] Ryan_Lane: I do, yes [00:49:51] * Ryan_Lane nods [00:50:07] seems to have worked now [00:50:21] load going down [00:50:30] and fast. [00:50:49] yeah well it's a 1 minute average, so it goes down over the course of 1 minute [00:51:06] it doesn't take long to kill all queries [00:51:13] yes, I know how load avg works :) [00:51:45] I wasn't born yesterday :P [00:52:00] it was like three days ago, right? :) [00:52:19] that's still plenty of time to learn what load avg is :) [00:52:55] !log tstarling synchronized wmf-config/InitialiseSettings.php 'docs' [00:53:04] Logged the message, Master [00:53:21] I wonder why we don't have mytop installed in db servers.
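On the "1 minute average" point: Linux computes loadavg as an exponentially damped moving average, sampling the run queue roughly every 5 seconds with decay factor exp(-5/60) for the 1-minute figure, which is why the number keeps sliding for a while after the offending queries are all dead. A sketch of that recurrence (constants match the kernel's 1-minute EMA, but this is a model, not kernel code):

```python
import math

SAMPLE_S = 5.0                       # kernel samples the run queue every ~5 s
DECAY_1M = math.exp(-SAMPLE_S / 60)  # damping factor for the 1-minute average

def next_loadavg(current, active_tasks, decay=DECAY_1M):
    """One step of the exponentially damped load average."""
    return current * decay + active_tasks * (1 - decay)

def settle(load, active_tasks, seconds):
    """Evolve the 1-minute average over `seconds` of constant activity."""
    for _ in range(int(seconds / SAMPLE_S)):
        load = next_loadavg(load, active_tasks)
    return load
```

Starting from a load of 30 with nothing left running, one minute of samples leaves roughly 30 * e^-1, about 11: the value only falls to about a third of itself per minute, the slow slide the channel watched.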
[00:54:09] not sure what the point of it is [00:57:05] but it can be installed if you like it [00:57:29] I don't mind [00:58:22] I don't often log in to db servers during an emergency, I configured them all to allow root to connect from fenari [00:58:28] mysql connection is faster than ssh login [00:58:33] fenari doesn't have mytop either :) [00:59:22] does now [01:00:00] !log installed mytop on fenari at faidon's request [01:00:13] Logged the message, Master [01:00:23] hahahaha [01:00:51] all these Bots calling people Master [01:01:37] it has some special names for some people [01:02:05] I wonder if that should've been in quotes [01:02:37] TimStarling, I wonder if I tried to use the bot, whether it would call me what others do... "dickhead" :) [01:02:57] BarkingFish: I can add that message, if you'd like [01:03:01] :) [01:03:29] Ryan_Lane, feel free. I never use the logbot, I never have any need to. [01:03:45] heh. nah. I have a feeling not everyone would appreciate that ;) [01:04:20] now if it was true artificial intelligence, Ryan_Lane - whenever someone told it to log a message, it'd tell you to bow down and respect your metal overlords :) [01:04:38] heh [01:04:45] you should see domas's message [01:05:29] dunno if you saw Steve Walling's comedy presentation on youtube, but he has a slide of someone bowing down before a huge metal robot statue with the heading "Robot Overlords. We welcome them!" [01:07:48] heh. didn't see that [01:08:34] Ryan_Lane, http://www.youtube.com/watch?v=UEkF5o6KPNI [01:08:40] ty [01:09:08] Steve at the Ignite Portland comedy club, doing his famous piece on Why Wikipedians are the weirdest people on earth :) I lol'd. [01:13:16] right guys, I'm out. Thanks for your help tonight :) [01:20:02] uh oh [01:20:19] I thought AFT5 was supposed to be a properly built thing, unlike AFT4 [01:23:09] domas: late to the party.
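mytop is essentially SHOW PROCESSLIST in a loop; the emergency routine from earlier ("killed all AFTv5 queries another few times") is the same data filtered and fed to KILL by hand. A hypothetical sketch of that filter over rows shaped like (id, user, time_s, info) tuples; the row shape, user names and threshold here are illustrative, not WMF's actual tooling:

```python
def queries_to_kill(processlist, pattern, user="wikiuser", min_time_s=10):
    """Return thread ids of long-running queries whose SQL matches pattern.

    processlist rows are (id, user, time_s, info) tuples, where info is
    the statement text (None for idle threads). Feeding each returned id
    to `KILL <id>` is the manual equivalent of what was done on db12.
    """
    ids = []
    for thread_id, row_user, time_s, info in processlist:
        if row_user == user and info and pattern in info and time_s >= min_time_s:
            ids.append(thread_id)
    return ids
```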
[01:23:11] tsk tsk tsk [01:24:13] timstarling: I just use ~/bin/slavestatus :) [01:24:31] ~/bin/proddbs | pmysql "SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE user='system user' AND STATE NOT LIKE '%master%'" 2> /dev/null | cut -f 1,7-10 [01:24:55] I didn't get a SHOW FULL PROCESSLIST unfortunately [01:25:02] I'm trying to reconstruct the query now from the code [01:25:14] I did, but I closed the window moments ago :( [01:25:48] regarding sha1 population, it should be done with direct data loads into slaves [01:29:49] or with faker! [01:29:49] It's all done bar enwiki.. [01:31:09] the query was probably something like [01:31:15] select * from aft_article_feedback LEFT JOIN aft_article_answer as rating ON rating.aa_feedback_id = af_id AND rating.aa_field_id IN (1,16) LEFT JOIN aft_article_answer as comment ON comment.aa_feedback_id = af_id AND comment.aa_field_id IN (2,4,6,17) where af_user_id=4635 and af_user_ip is null ; [01:32:08] that's my user ID, there were many user IDs in the comments in show processlist [01:33:26] no key on af_user_id, so it scans the whole aft_article_feedback table [01:33:58] 200k rows [01:34:03] I'll send that by email [01:34:07] hahaha [01:34:18] CREATE INDEX /*i*/af_user_id_user_ip_created ON /*_*/aft_article_feedback (af_user_id, af_user_ip, af_created); [01:34:59] no af_created that I can see above [01:35:12] Looks like enwiki doesn't match the schema file [01:36:41] Not overly surprising.. [01:39:26] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/ArticleFeedbackv5.git;a=commit;f=sql/ArticleFeedbackv5.sql;h=b88abc763e9d0b1742869cb2c617f592ff91d452 [01:40:52] Updated Jun 25, 2012 9:39 PM [02:23:05] !log LocalisationUpdate completed (1.20wmf8) at Mon Jul 30 02:23:05 UTC 2012 [02:23:17] Logged the message, Master [02:45:58] !log LocalisationUpdate completed (1.20wmf7) at Mon Jul 30 02:45:58 UTC 2012 [02:46:09] Logged the message, Master [02:54:38] Are you guys still having any grief please?
Things going through or using the API seem to be taking a while, and timing out occasionally. [09:50:02] heyas [09:50:26] how hard is it to get an extension to the abusefilters? [10:19:17] apergos: typo: "going o be" https://apergos.wordpress.com/2012/07/30/media-mirrors-om-nom-nom/ [10:20:28] not any more :-D [10:20:34] hmm people find these posts fast [10:20:57] apergos: yes, even though the planet feeds are broken! [10:21:02] only because it's you [10:21:05] aww [10:21:26] so what is wrong with the feeds, I don't know how to fix it... someone was telling me that I could maybe change some setting in the blog but I don't see any [10:21:43] it just lets me have the rss feed or not, there's nothing else I could see [10:22:52] Media mirrors, om nom nom [10:22:57] [10:23:03] from http://en.planet.wikimedia.org/atom.xml [10:23:07] RSS 2.0 works [10:23:19] do I have to do something for the atom feed to work? [10:23:35] that means that the aggregator (at least Firefox) will load the planet instead of the post when you try to open it [10:23:38] no idea! [10:23:42] mutante may know [10:23:49] or iAlex [10:24:23] ok [10:24:27] I just wondered: is it sensible to extend the abuse filters so they accept (externally defined) sets? [10:25:05] externally as in? [10:25:33] apergos: should that be filed as a bug? [10:25:44] eptalon: as a workaround for global abusefilter you mean? [10:25:58] eptalon: but that would give problems with logging I'd say [10:26:06] Nemo_bis: !autoconfirmed | subject in subject_set | ... [10:26:20] oh [10:26:25] looks bad [10:26:39] and much like you have the "list of contributors" to an article, you could define this set. [10:26:49] Nemo_bis: I have no idea [10:27:10] I barely know about the planet stuff: i.e. someone told me where to ask to be included and that's about it [10:27:19] which would mean you no longer need to touch that filter, and simply adapt the set. [10:28:46] it would probably also improve performance.
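Stepping back to the root cause of the night's outage: with no key on af_user_id, every AFTv5 feedback lookup scanned the whole 200k-row aft_article_feedback table. The planner difference the proposed CREATE INDEX makes is easy to reproduce in any engine; a sqlite sketch (stdlib, in-memory, table trimmed to the columns that matter; the index name is shortened from the one quoted above):

```python
import sqlite3

def plan(cur, sql):
    """Return sqlite's EXPLAIN QUERY PLAN detail text for a statement."""
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute(
    "CREATE TABLE aft_article_feedback ("
    " af_id INTEGER PRIMARY KEY, af_user_id INTEGER, af_user_ip TEXT)"
)

QUERY = ("SELECT * FROM aft_article_feedback"
         " WHERE af_user_id = 4635 AND af_user_ip IS NULL")

# Without the index: full table scan, every row read for every lookup.
before = plan(cur, QUERY)

cur.execute("CREATE INDEX af_user_id_user_ip"
            " ON aft_article_feedback (af_user_id, af_user_ip)")

# With the index: a seek that touches only the matching rows.
after = plan(cur, QUERY)
```

Under concurrent per-user lookups, the difference between those two plans is exactly the difference between db12 idling and db12 melting.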
[10:30:03] Nemo_bis: in what way would it give problems with logging? [10:31:18] eptalon: you said yourself "you no longer need to touch that filter" [10:31:33] yes, you adapt the set. [10:32:05] And adapting the set could be logged the same way as changing the filter [14:12:09] Hi all. https://bugzilla.wikimedia.org/token.cgi told me for shlomif@iglu.org.il (which I'd like to change but cannot log in using my password) that «A token for changing your password has been emailed to you. Follow the instructions in that email to change your password.» - can anyone help? [14:16:16] hi rindolf, obvious question but did you check your spam folder? [14:17:56] Thehelpfulone: let me see. [14:18:50] For the record, it's not there. [14:22:26] rindolf, sure, I believe the subject should be "Bugzilla Change Password Request" and the email will be coming from bugzilla-daemon[at]wikimedia.org [14:23:27] Thehelpfulone: got it now, thanks. [14:23:41] no problem :) [15:34:55] heyas [15:58:40] !log reedy synchronized wmf-config/ [15:58:47] Logged the message, Master [16:28:19] !log reedy synchronized wmf-config/InitialiseSettings.php 'Enable ShortUrl on tawiki, hiwiki and orwiki' [16:28:27] Logged the message, Master [16:32:27] !log reedy synchronized wmf-config/InitialiseSettings.php 'Enable shorturl on tawikis' [16:32:35] Logged the message, Master [16:48:57] hello Platonides [16:49:04] http://lists.wikimedia.org/pipermail/wikitech-l/2012-July/062116.html -> the it user is me ;) [16:49:22] I didn't want to steal the username from all the others [16:54:59] Hi, I think I just discovered that my ISP blocks certain pages of Wikipedia. Does anyone use Hutchinson Three 3G and willing to try and confirm it for me?
[16:54:59] Mobile Broadband provider in the UK and seem to be blocking http://en.wikipedia.org/wiki/Lock_picking [16:55:04] from #wikipedia-en [16:55:28] !log reedy synchronized wmf-config/ [16:55:36] Logged the message, Master [16:57:22] If they can browse everything else, it seems pretty likely [17:02:25] !log reedy synchronized wmf-config/CommonSettings.php 'Remove duplicate routing code' [17:02:33] Logged the message, Master [17:12:18] censors are so kind, constantly suggesting us interesting topics [17:13:46] Krenair, point him to https://en.wikipedia.org/wiki/Lock_picking :P [17:14:45] oh yeah. was in #wikipedia, not -en [17:15:07] I just replied him there [17:37:40] What's the page for scheduling database schema updates on the cluster? [17:39:39] kaldari: https://wikitech.wikimedia.org/view/Software_deployments ?? [17:39:49] or just email private-l ? [17:40:16] found it: http://wikitech.wikimedia.org/view/Schema_changes [17:53:27] Krenair: There might be an "adult filter" [17:57:27] Reedy: do you have a few spare cycles to take a look at the current state of gerrit.wikimedia.org:29418/mediawiki/extensions/GeoData.git ? [17:59:08] invisibleLeslieC: ha ha [18:00:27] Reedy: time to deploy enwiki? 
[18:04:38] Yeah [18:06:43] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.20wmf8 [18:06:51] Logged the message, Master [18:08:18] Office hours with analytics team right now (supposedly) at #wikimedia-analytics [18:08:33] or not [18:08:47] no, in an hour :) [18:10:46] Fatal error: require() [function.require]: Failed opening required '/usr/local/apache/common-local/php-1.20wmf8/includes/WebStart.php' (include_path='.:/usr/share/php: [18:10:46] /usr/local/apache/common/php') in /usr/local/apache/common-local/php-1.20wmf8/index.php on line 54 [18:11:55] Rargh [18:11:58] More srv281 noise [18:14:08] Reedy: it'll soon be over [18:14:14] (thanks to notpeter) [18:14:17] * Reedy rejoices [18:15:04] yeah in an hour [18:15:11] I hope to sit in for it, depends on our meeting [18:19:12] h [18:21:56] robla: all looks fine [18:22:22] yay, we didn't ruin Wikipedia today :) [18:25:56] Warning: mt_rand() [function.mt-rand]: max(-1) is smaller than min(0) in /usr/local/apache/common-local/php-1.20wmf8/extensions/ConfirmEdit/FancyCaptcha.class.php on line 135 [18:26:50] $n = mt_rand( 0, $this->countFiles( $directory ) - 1 ); [18:27:22] lol [18:28:35] * Reedy BZ's it [18:41:06] guys I'm really stuck here... I've been trying to find the answer for a week... I still can't get search results to present links to the section as opposed to the entire article.. How does Wikipedia do this? I apologize Tim + Reedy I know you guys have expressed frustration at me for being a dumb noob... I am really trying to learn here I've got Lucene 2.1 up + running mediawiki 1.19.1 but I just can't get those kinds of search results [18:41:06] ...
I tried so many settings from the NOC config [18:43:28] example you search 'Test Concept' it gives you the following result: Stalking horse (section Related concepts) [18:43:44] but w/ my wiki it just gives 'Stalking Horse' blah blah blah text no link to the section [19:06:48] no-one is willing to help out a noobie eh :( [19:14:58] j0tev: your question is very difficult (compared to the average) :) [19:15:39] if #mediawiki doesn't help probably mediawiki.org is your only hope (maybe with a mailing list reminder) [19:16:57] j0tev are you talking about http://www.techwyse.com/blog/search-engine-optimization/how-to-enable-google-section-links/ ? Sounds like Google's decision is... unpredictable [19:17:57] spagewmf oh wow is Wikipedia really using Google's snippets? I thought it was their own [19:19:09] spage, basically, if you go to Wikipedia and you do a search, let's say, for the term 'test concept' as in my question [19:19:43] you will notice that all of the == heading == code is replaced with (Section: link_to_heading) [19:20:20] example, 'Stalking horse (which links to the page) and then (section Related concepts) which links to the section. [19:20:28] I do not believe Google is responsible for this. [19:21:18] No, that's our own stuff [19:21:22] It's the MWSearch extension I believe [19:21:23] From what I have found, it appears that Lucene may have this capability but I have as of yet been unable to figure out how to enable it... [19:21:27] There's some Lucene integration going on there [19:22:23] RoanKattouw, I'm using MWSearch + Lucene 2.1 - is there a particular parameter that needs to be enabled... I've got my config files as close to those of the NOC as I can for a small, single-wiki site.
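An aside on the FancyCaptcha warning pasted a little earlier: `mt_rand( 0, $this->countFiles( $directory ) - 1 )` with zero captcha files becomes mt_rand(0, -1), hence "max(-1) is smaller than min(0)". It is the classic empty-collection off-by-one, and Python's RNG fails the same way; a sketch of the guard (the function name and None return are illustrative, not ConfirmEdit's actual fix):

```python
import random

def pick_index(count):
    """Pick a random index into a collection of `count` items.

    Guard the empty case instead of letting the RNG see an inverted
    range: randint(0, -1) raises in Python just as mt_rand(0, -1)
    warns in PHP. The caller must handle "no files available".
    """
    if count <= 0:
        return None
    return random.randint(0, count - 1)
```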
[19:22:30] !log reedy synchronized php-1.20wmf8/extensions/UploadWizard/ [19:22:33] I don't know [19:22:38] This is where my knowledge ends [19:22:39] Logged the message, Master [19:22:43] You might search for the word 'snippe' [19:22:45] *snippet [19:22:50] Or try to contact Robert Stojnic [19:23:09] I would love to ask rainman [19:23:11] :) [19:23:16] if I could find a way to get hold of him [19:26:03] !log reedy synchronized . [19:26:11] Logged the message, Master [19:51:10] Guys, I have subscribed to mediawiki-l and tried to send a message to it - and it's saying that I need to subscribe still when I send a message. [19:51:15] I clicked the confirmation link already [19:51:25] in fact it's confirmed because when I click it again it says it's invalid. So it has been confirmed. [19:52:51] nm, looks like the confirm link is broken [19:52:57] I replied instead, that seems to work. [20:28:59] !log reedy synchronized php-1.20wmf7/extensions/Collection [20:29:06] Logged the message, Master [20:30:07] !log reedy synchronized php-1.20wmf8/extensions/Collection [20:30:16] Logged the message, Master [20:35:35] Reedy: what's Collection 1.6.1? some live hack? big trouble? [20:37:54] Just some bugfixes for noise in the logs [20:38:03] It was 1.6.1 before too ;) [20:38:50] Warning: in_array() expects parameter 2 to be array, null given in /usr/local/apache/common-local/php-1.20wmf7/extensions/Collection/Collection.suggest.php on line 587 [20:39:57] Reedy, have you seen https://bugzilla.wikimedia.org/show_bug.cgi?id=38864 ? [20:40:08] JS version of Collection does not match PHP version [20:40:35] This sounds... stupid [20:41:01] I don't see the error [20:41:30] # Extension version.
If you update it, please also update 'requiredVersion' [20:41:30] # in js/collection.js [20:41:40] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;a=commit;f=Collection.php;h=458dea6fb89fb08029d43d2daf88b7c7fbcb34c4 [20:41:59] Blame Siebrand [20:42:17] That's 1.6.1 [20:42:21] var requiredVersion = '1.6.1'; [20:42:49] Yeah but I see: [20:42:50] js/collection.js:var requiredVersion = '1.6.1'; [20:42:56] Collection.php:$wgCollectionVersion = "1.6.1"; [20:43:05] Which are the same [20:43:15] But when checking the JS file I was served, it's only 1.6 [20:43:22] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;a=commit;f=js/collection.js;h=12209e204dc72be758c54ccf621e8cd21837d216 [20:44:52] If it's still around now, it's just caching [20:46:01] !log reedy synchronized wmf-config/InitialiseSettings.php 'touch' [20:46:09] Logged the message, Master [20:46:33] Reedy: can't see change from 1.6 to 1.6.1 in git [20:46:47] Ah, now it's fine. [20:46:51] I guess Chrome cached it. [20:47:00] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;a=blobdiff;f=Collection.php;h=67df42e5881e7bdaf42d20f7c52f9a38982e5c61;hp=03db0f92c83577911ef943f233e2a745d5e15005;hb=458dea6fb89fb08029d43d2daf88b7c7fbcb34c4;hpb=c1d891722a1e2d31de89698309b06479bff6e6f8 [20:47:03] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;a=blobdiff;f=js/collection.js;h=2ff614a13bc5e73ae0cf2f6edcb2f9c8b5528732;hp=b56d66637b58f255cb3843457f32318d67b66e2e;hb=12209e204dc72be758c54ccf621e8cd21837d216;hpb=413af3c12db4ee24576a9b86559badd8922cf004 [20:49:42] pity it was reviewed okay [20:53:10] Reedy: now let me explain my problem with 1.6.1; I went to https://pl.wikipedia.org/wiki/Specjalna:Wersja which lists Collection extension as "1.6.1" and links to 83d3b26 (https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;h=83d3b26887a487b6d7737cad841d75d60f44be29). 
I go to that commit, click "log" and there is nothing about 1.6.1 there (nor is it in the recent blob) [20:53:34] something not synchronized? [20:55:27] Check them all, they're showing the same info [20:55:31] It's to do with the .git folders etc [20:55:37] "same" as in from the same period [20:55:41] none of them are right [20:56:26] It would look like it's the revision at which it was branched [20:56:30] Reedy: I don't think I understand what you wrote before (returned after a week offline) [21:03:35] Which part? [21:07:08] --exclude=.svn --exclude=.git [21:13:46] saper: anyway. it depends on what versions of the .git folders etc are deployed [21:14:34] Reedy: makes me worry... [[Special:Version]] is where my troubleshooting starts :( [21:15:16] It would've been a similar state with SVN, AFAIK [21:15:37] based on what is set to exclude etc [21:15:56] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git [21:16:02] Look at the branches at the bottom [21:16:09] wmf/1.20wmf8 and wmf/1.20wmf7 [21:18:19] saper: syncing around the .git folders adds a lot of size to stuff [21:18:39] Reedy: Which is why we should exclude .git/objects only [21:18:40] many histories.. [21:18:46] The rest is lightweight [21:18:49] I was trying to find it [21:18:52] The scripts are vague [21:19:03] some excluding .svn and some .git etc [21:22:07] gn8 folks [21:23:29] saper: feel free to open a bug ;) [21:43:11] Reedy: come on, not yet another WONTFIX'ed Special:Version bug from me [21:44:16] How many have you got? [21:44:23] It's a "config" issue I suppose [21:44:43] Presumably some needed file is only transferred by some scripts but not others (such as sync-dir) [21:44:58] Certainly, running sync-file wouldn't move .git files [21:45:51] https://bugzilla.wikimedia.org/show_bug.cgi?id=34796 [21:45:54] it's INVALID, sorry [21:47:31] oh there is https://bugzilla.wikimedia.org/show_bug.cgi?id=36271 [21:48:06] !log reedy synchronized php-1.20wmf7/extensions/Collection/.git 'Just for saper..'
[21:48:13] Logged the message, Master [21:48:19] RoanKattouw: srv281: rsync: recv_generator: mkdir "/apache/common-local/php-1.20wmf7/extensions/Collection/.git/objects/64" failed: No space left on device (28) [21:48:23] You were saying? [21:49:02] There you go saper, https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Collection.git;h=da8781b4ed115b392a4fb2004c499a21ec385ba8 [21:49:18] Reedy: thanks, looks better now [21:49:32] Of course, that's only one extension in one version of WMF... [21:49:41] how was the current sync different? [21:50:05] you have synced local ".git" as well? [21:50:22] I only synced that [21:51:20] do I get the problem right: 1) we only sync working copies of software 2) we don't sync contents of .git subdirectories 3) special:version uses .git to get the current commit 4) therefore special:version is always broken? ;) [21:52:31] It's not always broken [21:52:37] It was right at the point of initial deployment [21:53:45] well... [21:54:30] ddsh -cM -g mediawiki-installation -o -oSetupTimeout=30 -F30 -- "sudo -u mwdeploy rsync -l 10.0.5.8::common/$DIR $DESTDIR" [21:54:36] suggests we sync everything [21:54:52] there's exclusions for stuff without home [21:55:10] err, with [21:56:07] Reedy: Yeah srv281 is broken, I put in an RT ticket [21:56:18] I wasn't meaning that [21:56:21] Basically it didn't have /usr/local/apache moved onto a larger partition [21:56:25] I know [21:56:31] Oh [21:56:41] I knew it was, paravoid reinstalled it once for me a week or so ago [21:56:43] You're pointing out that .git/objects is causing the disk space overflow ) [21:56:45] * :) [21:56:57] it's going to get fixed soon [21:56:59] Reedy: "everything"?
I thought rsync is non-recursive by default (without -r or -a) [21:57:10] I was pointing out it was syncing the objects dir [21:57:14] I think I've been read about srv281 about twice a day since it was broken [21:57:21] xD [21:57:24] I've been reading/hearing complaints [21:57:34] We can blame Chris for testing it [21:57:57] Yeah we need to make it not sync objects [21:58:15] saper: vs ddsh -cM -g mediawiki-installation -o -oSetupTimeout=30 -F30 -- "sudo -u mwdeploy rsync -a --delete --exclude=.svn --exclude=.git --exclude=cache/l10n --no-perms 10.0.5.8::common/$DIR/ $DESTDIR" [21:58:25] which is explicitly excluding copying .git/svn dirs around [21:58:42] Yeah that's in sync-file isn't it [21:58:47] But scap doesn't have that exclude *sigh* [21:59:04] sync-common-file only does it conditionally [21:59:13] which, given general impossibility to somehow encode git revision in the code, leaves us in the void? [21:59:33] As shown, it works.. some of the time. [21:59:51] Reedy: should I include a tracking bug "Make sure Special:Version is up to date with what we are running" ? :) [22:00:19] s,include,file, [22:00:20] then scap-2 has [22:00:20] rsync -a --delete --exclude=**/.svn/lock --no-perms 10.0.5.8::common/ /usr/local/apache/common-local [22:04:51] Having it work consistently wouldn't be a bad thing (for directory syncs at least) [22:06:33] saper: as always, the sync scripts are in version control [22:06:35] Submit a patch! [22:12:19] Reedy: if I only knew how the whole syncing works [22:12:54] A bunch of scripts, and that's about that [22:12:54] Reedy: you can't outsource the problem to non-ops people having no clue :( [22:13:16] It all goes down to rsync calls [22:13:33] whether run via ddsh, or via remote script runs that then run rsync [22:34:55] Do the WMF servers use any particular OS? [22:35:36] Ubuntu [22:36:00] * Isarra dies. [22:36:09] guys, I got an answer back from Robert on my previous question. [22:36:15] What's wrong with Ubuntu, Isarra? 
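The underlying mess in this exchange is that each sync script carries its own copy of the rsync exclude list: scap had none, sync-file excluded all of .git, sync-common-file only sometimes. A hypothetical refactor along the lines suggested in channel, sharing one exclusion list and excluding only .git/objects so Special:Version can still read HEAD and the refs; the helper name and exclusion set are illustrative, not the actual deployment scripts:

```python
# One shared exclusion list: keep .git/HEAD and refs (Special:Version
# reads them) but never ship the heavyweight object store or svn metadata.
EXCLUDES = (".svn", ".git/objects", "cache/l10n")

def rsync_args(src, dest, excludes=EXCLUDES):
    """Build one canonical rsync argv for every sync script to share."""
    args = ["rsync", "-a", "--delete", "--no-perms"]
    args += ["--exclude=%s" % e for e in excludes]
    return args + [src, dest]
```

Every caller (scap, sync-file, sync-dir) building its command line through one function like this would have made the "works some of the time" behaviour impossible by construction.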
[22:36:16] * Damianz shoots Isarra's corpse [22:36:26] In case it helps you guys out, basically what he said was, Lucene-Search 2.1 should have the ability to parse wiki titles of its own accord. [22:36:33] this is what the $wgLuceneSearchVersion = 2.1 is for [22:36:38] Many things, though it's not really any worse than any other distro. [22:36:51] I guess I was just expecting something more... sexy? I dunno. [22:37:03] Not sexy... meh, I forget the word. [22:37:09] Thanks for your suggestions anyways guys. Cheers [22:37:44] Krenair: <3 Arch Linux [22:45:13] Isarra: Ubuntu is very sexy. >:-( [22:45:48] Ubuntu is ubuntu. [22:46:03] And that's a good thing! [22:48:40] Okay. [22:48:50] ubuntu ain't so sexy on < 1 GB of ram [22:50:57] our servers couldn't survive on <1gig of ram [22:51:46] it would be like trying to run Crysis on a netbook amirite [22:53:08] You deserve to be slapped for trying it [22:53:24] It's not like ram is expensive these days [22:54:43] ram is so cheap nowadays, we should just run everything on ramdisks [22:55:56] oh man [22:55:59] fusion i/o cards [22:56:01] that would be so sweet [22:56:30] for what? [22:56:34] everything :) [22:56:50] well in reality databases and any intensive disk i/o stuff [22:57:00] but if we're in pony fairy land, for everything! [22:57:25] but will it run Crysis? [22:58:15] ok, i knew that joke was bad. Also very old, and very overused. [22:58:29] cheers guys [23:04:42] The crysis joke is old [23:04:48] I've been able to run it fine for years [23:20:39] Reedy: after playing Crysis 2, I realized how slow the first one ran on my lappy [23:21:01] of course the biggest lack of replay value comes from the instant death physics bugs [23:22:03] heh