[00:00:23] since I was fetching 6 repos on every node
[00:01:37] the other possibility is that salt somehow killed the fetch process mid-fetch
[00:01:45] just reading about HTTP settings in the git-config manpage
[00:02:16] but I'm doubtful about salt killing the processes. I'm using a returner. it should run until it's finished
[00:02:30] there's something to be said for svn's policy of ignoring SIGTERM isn't there?
[00:02:36] heh
[00:02:56] well, the biggest problem is that git won't resume botched downloads
[00:03:01] and here I was thinking that git didn't ignore SIGTERM because it was just atomic and awesome
[00:03:12] unfortunately not so much
[00:03:20] its checkout stage works well
[00:03:27] but its fetch stage is terrible
[00:03:49] and knowing it'll screw up like this makes it scary
[00:04:17] because you need to run: git fetch; git fetch --tags; git submodule foreach git fetch; git submodule foreach git fetch --tags
[00:04:23] it's a lot of places for it to corrupt itself
[00:06:55] I did a manual test of using bittorrent for .git. looks positive so far
[00:07:21] ^demon and I also wrote a design doc for the changes http://etherpad.wmflabs.org/pad/p/git-deploy-bittorrent
[00:08:06] there is git fetch --recover
[00:08:16] hm. let me try that
[00:08:25] sorry git http-fetch --recover
[00:08:27] I don't see this as an option
[00:08:27] ah
[00:08:33] "Verify that everything reachable from target is fetched. Used after an earlier fetch is interrupted."
[00:08:46] ah
[00:08:57] let's see if it works
[00:10:23] that needs to be run on every commit-id
[00:12:37] and needs the url as well
[00:21:00] all the docs I've found online (including ones from linus) were: "delete the objects and refetch"
[00:22:10] alas, when it gets into certain states it seems that even a git fsck will only list one corrupt object at a time, so I'd have to parse the error message to find the object and delete it, then run fsck again
[00:22:28] and different problems have different errors, so I'd have to parse them all and hope they don't change between versions
[00:22:38] so, yeah, bittorrent fetch stage. heh
[00:23:29] I'm just trying to find the relevant git source
[00:24:04] would it have been an object request or a pack request?
[00:24:41] could have been either
[00:25:01] root@mw1111:/srv/deployment/mediawiki/l10n-slot1# git fetch
[00:25:01] error: corrupt loose object '6cf84af6b49eb06d6a26c26ebc4a6f7518636fc5'
[00:25:01] fatal: loose object 6cf84af6b49eb06d6a26c26ebc4a6f7518636fc5 (stored in .git/objects/6c/f84af6b49eb06d6a26c26ebc4a6f7518636fc5) is corrupt
[00:25:01] error: http://tin.eqiad.wmnet/mediawiki/l10n-slot1/.git did not send all necessary objects
[00:25:10] there's one that's currently screwed up
[00:26:49] last time I checked, bittorrent wasn't network-aware
[00:27:08] does it need to be? we're deploying inside of a single network
[00:27:27] I don't know about you, but I'd like to have syncs that take less than 10 minutes
[00:28:00] so I'd rather use a solution which has some kind of network awareness as a possible future extension at least
[00:28:09] ah, I see what you mean. peers should pull data from peers in the same network
[00:28:27] yes, or in the same rack or row
[00:29:29] we could do it the same way we were planning with the fetch stage of git fetch
[00:29:44] deploy to the other deployment host
[00:29:54] then to a "rack node" per rack
[00:29:58] then to the nodes in the rack
[00:30:03] and bittorrent doesn't have a differential feature does it?
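A minimal sketch of the recovery loop described around [00:22:10] above: run git fsck, scrape corrupt loose-object ids out of its output, delete them, refetch (including the full tag/submodule sequence from [00:04:17]), and repeat until fsck comes back clean. The repo path is a placeholder taken from the error at [00:25:01], and the parsing only covers the simplest "corrupt" error format; as noted above, real fsck errors vary between failure modes and git versions.

```python
#!/usr/bin/env python
# Sketch only -- not the actual deploy tooling discussed in this log.
import os
import re
import subprocess

REPO = "/srv/deployment/mediawiki/l10n-slot1"   # example path from the log
SHA1 = re.compile(r"\b[0-9a-f]{40}\b")

def corrupt_objects(repo):
    """Collect 40-hex ids mentioned on fsck lines that report corruption."""
    p = subprocess.run(["git", "fsck", "--full"], cwd=repo,
                       capture_output=True, text=True)
    ids = set()
    for line in (p.stdout + p.stderr).splitlines():
        if "corrupt" in line:
            ids.update(SHA1.findall(line))
    return ids

def refetch(repo):
    """The full fetch sequence listed at 00:04:17: branches, tags, submodules."""
    for cmd in (["git", "fetch"],
                ["git", "fetch", "--tags"],
                ["git", "submodule", "foreach", "git", "fetch"],
                ["git", "submodule", "foreach", "git", "fetch", "--tags"]):
        subprocess.run(cmd, cwd=repo, check=False)

for attempt in range(20):            # give up eventually instead of looping forever
    bad = corrupt_objects(REPO)
    if not bad:
        break                        # fsck found nothing corrupt
    for sha1 in bad:
        loose = os.path.join(REPO, ".git", "objects", sha1[:2], sha1[2:])
        if os.path.exists(loose):
            os.remove(loose)         # "delete the objects and refetch"
    refetch(REPO)
```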
[00:30:23] uh oh
[00:30:33] unless you're going to use git format-patch | tar -c? ;)
[00:30:42] heh
[00:30:54] if we're transferring .git folders it's only going to transfer the new objects
[00:31:18] hmm
[00:31:20] has this been done before somewhere?
[00:31:32] probably not
[00:31:35] you know rsync --whole-file will only transfer new objects also
[00:31:49] most large deploys just use bittorrent by itself
[00:32:09] yes, we could use rsync too
[00:32:16] ma rk and I discussed that earlier
[00:32:40] rsync could be much much faster than the way we have it set up currently
[00:32:55] it could, yes
[00:33:13] it's an alternative to using git fetch
[00:33:33] or do you mean using it rather than using git altogether?
[00:34:01] well you'd still need to checkout
[00:34:36] git checkout appears to be pretty nice at first impressions
[00:34:40] no matter what we switch to, I'd like to fetch to a location other than the current working directory
[00:34:43] and check which servers have the last ref fetched
[00:34:45] but it has overhead
[00:34:54] the checkout itself is actually very quick
[00:35:00] the fetch is the problematic stafe
[00:35:02] )stage
[00:35:04] ugh
[00:35:08] *stage
[00:36:56] so, alternatives: rsync that writes to a cache location during the fetch and rsyncs to the working directory on the checkout stafe
[00:37:03] I can't type stage today
[00:37:08] or, bittorrent that does the same
[00:37:27] or bittorrent that fetches the .git dir and does a git checkout like we currently do
[00:37:53] why can't you rsync the .git dir and do git checkout?
[00:38:02] we can also do that
[00:39:00] I was hoping to make it less centralized
[00:39:03] Ryan_Lane: http://topbt.cse.ohio-state.edu/ lol my uni ;)
[00:40:40] * AaronSchulz looks for something less experimental
[00:41:06] we can easily enough have a seed per datacenter
[00:41:20] on each deployment host
[00:41:43] each of them would have a torrent that points the peers to the same datacenter
[00:43:14] I think any more fine-grained awareness is likely to make performance worse
[00:43:32] because it really ends up meaning you have less peers
[00:43:36] downloads my serialize by chaining
[00:43:39] *might
[00:43:45] the peering itself is fairly short-lived
[00:44:35] the part that takes the most amount of time is the l10n cache
[00:45:12] and I have a feeling a lot of that time is due to saturation of the deployment hosts' link
[00:45:41] mmm, bottleneck
[00:46:30] the only big problem with limiting peers to seeds in the same datacenter is that you need to transfer the data to the other seed first
[00:46:34] that's going to waste some time
[00:46:46] yes, serializing
[00:46:51] yep
[00:46:59] we could just fix the git bug
[00:47:13] yep. could do that too
[00:47:29] that doesn't solve our l10n problem, though
[00:48:03] in fact, l10n is worse inside of git than it was when we just used rsync
[00:57:01] we could generate that cache locally on demand
[00:57:16] but then we would have to push out LU CDBs instead of LC
[00:58:37] on demand? how so?
[00:59:02] with manualRecache off
[00:59:05] didn't that suck?
[00:59:24] did it suck as much as putting the files in git?
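A minimal sketch of the first alternative listed at [00:36:56] above: an rsync "fetch" that only writes to a local cache directory, and a separate "checkout" that syncs the cache into the live working directory. The host name, rsyncd module and paths are made up for illustration, not the actual deploy setup.

```python
#!/usr/bin/env python
# Sketch of the two-stage rsync idea; everything named here is a placeholder.
import subprocess
import sys

SOURCE = "rsync://tin.eqiad.wmnet/deployment/slot0/"   # hypothetical rsyncd module
CACHE  = "/srv/deployment/.cache/slot0/"               # staging area, not served
LIVE   = "/srv/deployment/slot0/"                      # what the appservers use

def fetch():
    # Stage 1: a killed or partial transfer only ever dirties the cache,
    # so rerunning the fetch is always safe and picks up where it left off.
    subprocess.run(["rsync", "-a", "--delete", SOURCE, CACHE], check=True)

def checkout():
    # Stage 2: purely local copy, so it is quick and the window during which
    # the live tree is inconsistent stays small.
    subprocess.run(["rsync", "-a", "--delete", CACHE, LIVE], check=True)

if __name__ == "__main__":
    {"fetch": fetch, "checkout": checkout}[sys.argv[1]]()
```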
[00:59:32] unless it is changed to fallback to the last cache or something
[00:59:51] well that would be useless for version changes, nvm
[01:00:12] TimStarling: ah, right, some of us were discussing alternatives to the cdbs the other day
[01:00:22] decided it wasn't really worth talking about without you here
[01:00:28] we could use PoolCounter, it only takes a few seconds to generate each file
[01:00:38] yeah, I was thinking of PC
[01:00:46] I wasn't sure how long it took though
[01:00:57] it would waste a lot of cpu/power to do it this way, right?
[01:00:59] can test
[01:01:09] maybe I'm remembering it worse than it was
[01:01:15] but I have a meeting now
[01:02:29] python /home/laner/murder_client.py peer http://deployment-bastion.pmtpa.wmflabs/mediawiki/slot0/.git.torrent slot0/.git 127.0.0.1
[01:02:31] git checkout slot0-20130115-211910
[01:02:34] git reset --hard
[01:02:40] git submodule update --init
[01:03:01] perfect copy.
[01:03:10] that could also replace clone
[01:03:45] and as an added benefit it fetches all submodules as well
[01:04:47] * AaronSchulz wonders if a LCStore_JSON class would help
[01:05:05] Ryan_Lane: we already have abstraction to use things other than cdb
[01:05:17] * Ryan_Lane nods
[01:05:53] we need to rewrite git submodule foreach
[01:06:11] that's the slowest part of checkout, because it does it serially
[01:06:42] I remember some git mailing list discussion about making that parallelizable
[01:06:48] that would be ideal
[01:06:52] that was one of the first things I wondered about when I started using git
[01:07:19] if we rsync'd or bittorrented the .git dir, a checkout just moves files around
[01:20:34] Ryan_Lane: maybe just json as a transport format makes more sense, we need the files to be indexed
[01:21:01] * AaronSchulz wonders if the json->cdb would happen on sync or on demand
[01:21:15] I say before we start going down this road, let's see if we can make the deployment fast enough that it doesn't matter
[01:21:52] if we switch to bt and it only takes a couple minutes to sync all the files, then problem solved
[01:21:57] if it still takes 10 minutes....
[01:21:58] heh
[01:24:00] hmm, if MW wrote JSON but created the cdb on the fly for reads if it wasn't there, then the deploy code would not have to do anything special
[01:24:11] true
[01:24:17] deploys would just ignore the cdb files
[01:24:30] how much would that slow things down? initial lookup would be unindexed
[01:24:30] (if any were on the deploy host)
[01:25:00] well it would traverse the whole json for the requested language and build the cdb
[01:25:10] it could do that in a streaming i/o way I'd hope
[01:25:20] I can't see that being very slow
[01:25:35] it can use PoolCounter in the worst case
[01:25:35] ah, ok. I thought you meant add entries to a cdb as they were accessed from the json
[01:26:03] oh, no, heh
[01:26:26] http://php.net/manual/en/function.dba-nextkey.php ;)
[01:27:11] though writes would have to be either batched in memory (lame) or in the cdb file first
[01:27:58] hmm, some code would need changing to avoid doing foreach() { set some key }
[01:28:06] rebuilding the json each time would suck
[01:28:48] * AaronSchulz wonders why the LCStore class hierarchy is in the same file as LocalisationCache
[01:28:54] separate files folks
[01:29:44] hmm, I see the caller does startWrite() ... finishWrite()
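The manual test quoted at [01:02:29]-[01:02:40] above, wrapped into one script for readability: pull the whole .git directory over bittorrent with murder_client.py, then do a normal git checkout against it. The torrent URL, directory and tag are the placeholder values from that test, not real production settings.

```python
#!/usr/bin/env python
# Sketch wrapping the command sequence from the log; values are from the labs test.
import subprocess

TORRENT = ("http://deployment-bastion.pmtpa.wmflabs/"
           "mediawiki/slot0/.git.torrent")
WORKDIR = "slot0"
TAG     = "slot0-20130115-211910"

def run(cmd, cwd=None):
    print("$ " + " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# Fetch stage: transfer the .git directory over bittorrent instead of using
# git's own fetch, which the discussion above found corruption-prone.
run(["python", "/home/laner/murder_client.py", "peer",
     TORRENT, WORKDIR + "/.git", "127.0.0.1"])

# Checkout stage: plain git against the transferred object store; as noted
# above this also brings in all submodules.
run(["git", "checkout", TAG], cwd=WORKDIR)
run(["git", "reset", "--hard"], cwd=WORKDIR)
run(["git", "submodule", "update", "--init"], cwd=WORKDIR)
```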
[01:30:09] the last one would be the obvious point to actually make the json
[01:30:51] though really it's all in memory anyway, so... it would make more sense to just change that code
[01:32:02] Ryan_Lane: actually JSON could be written as it goes as long as we don't have to worry about duplicate keys in maps
[01:32:08] that would be a lot simpler
[01:32:26] * AaronSchulz is looking at LC::recache()
[01:32:43] I'd hope we don't have duplicate keys
[01:32:57] heh, yeah, we shouldn't
[01:33:14] especially since it comes from php arrays that are already automatically de-duplicated
[01:33:18] indeed
[01:33:24] and it would make no sense anyway
[01:34:18] so yeah, all the reads and writes could be done in a streaming way
[01:34:50] cool
[01:35:16] do we know just how well json would diff here? I'd imagine pretty well
[01:35:28] probably very well
[01:35:44] it would also compress well, and git should be doing that
[02:19:47] Ryan_Lane: so that would work, though roan is kind of convincing me to make it transport only
[02:20:37] converting the json => cdb in a post-fetch hook and moving them in (possibly via a symlink trick on the l10n cache dir) on a post-checkout hook
[02:21:37] ok, time to go home
[07:00:18] I'm going to submit a patch to tweak Bugzilla's interface a tad. If there's a no-brainer improvement you'd like to make to the landing page copy or design but are too intimidated by Git/Gerrit, ping me.
[07:09:56] New patchset: Stefan.petrea; "Fixes for countryreports and device deployment" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[07:51:55] New patchset: Stefan.petrea; "Fixes for countryreports and device deployment" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[07:52:47] hello
[08:00:36] * petan huggles hashar
[08:41:50] New patchset: Stefan.petrea; "Fixes for countryreports and device deployment" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[08:43:52] New patchset: Stefan.petrea; "Fixes for countryreports and device deployment" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[08:48:40] New patchset: Hashar; "mediawiki doc generation on ref-update" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/44384
[08:48:59] New review: Hashar; "Replaced by https://gerrit.wikimedia.org/r/44384 . Will most probably abandon this change." [integration/zuul-config] (master); V: 0 C: -2; - https://gerrit.wikimedia.org/r/39207
[08:49:13] New review: Hashar; "replace https://gerrit.wikimedia.org/r/#/c/39207/" [integration/zuul-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/44384
[09:06:40] New patchset: Stefan.petrea; "Fixes for countryreports and device deployment (ready-for-merge)" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[09:39:17] hey hashar, I committed https://gerrit.wikimedia.org/r/#/c/44278/ yesterday
[09:45:59] ahh
[09:46:04] MaxSem: morning :-]
[09:46:11] sorry debugging some Zuul / Git / Jenkins issue
[09:46:26] MaxSem: andrew created an instance yesterday
[09:46:41] did he? I thought it was me:P
[09:46:50] ah
[09:46:51] maybe you
[09:46:58] something like deployment-varnish-t
[09:47:07] need to setup the DNS entry + give it a public IP
[09:48:43] oh yeah we have stuff both in InitialiseSettings.php and as common settings
[09:48:44] grbmbml
[09:50:20] MaxSem: have you ever connected to the beta project? deployment-bastion ?
[09:50:56] hehe, how did I set up that instance if I didn't?;)
[09:51:08] just checking :-]
[09:51:12] anyway
[09:51:13] https://gerrit.wikimedia.org/r/#/c/44278/1/wmf-config/InitialiseSettings-labs.php,unified
[09:51:22] that stuff is for configuration we want to override in labs
[09:51:31] InitialiseSettings.php is loaded first
[09:51:42] then InitialiseSettings-labs.php is loaded to override production settings
[09:51:51] ouch
[09:51:52] got it
[09:51:58] so most of the additions there are duplicates :-D
[09:51:58] will do the tweaks
[09:52:11] (yeah that is really misleading)
[09:52:13] by the way, how good is French here: http://www.youtube.com/watch?v=eTd3zco8Q-s ?
[09:52:22] you can check them by going to deployment-bastion and running mwscript eval.php --wiki=enwiki
[09:52:29] then using var_dump on the variables
[09:52:36] beta gives me > var_dump( $wmgMobileFrontendLogo );
[09:52:36] string(76) "//upload.wikimedia.org/wikipedia/commons/8/84/W_logo_for_Mobile_Frontend.gif"
[09:55:37] oh yeah
[09:55:41] grmblbl I hate our conf
[09:57:21] hashar, so what can you say about pronunciation in that video?:)
[09:59:01] and I added another comment about the mobile-labs.php
[09:59:11] I think you are the first one to use the new per realm feature :-]
[09:59:19] * hashar looks at the youtube video
[10:00:02] MaxSem: the pronunciation is a bit awkward. Typical to lyrics people I guess.
[10:00:11] and the singer has a weird accent
[10:00:29] ah
[10:00:35] well, the tenor is Swedish and soprano is American:]
[10:00:46] I was about to say it sounds like some Symphonic Metal
[10:00:59] it is
[10:01:09] they're the founders of this style
[10:01:15] the swedish accent explains the weird french :-]
[10:01:18] but it is acceptable
[10:01:29] I actually understand what they are singing hehe
[10:01:42] so cliché though, they are drinking red wine!
[10:03:08] "Les Fleurs Du Mal" is some poetry
[10:03:29] by Baudelaire iirc
[10:03:51] probably one of the best-known french poets
[10:04:35] he translated the Edgar Allan Poe novels. That also means Poe novels are really popular in France, maybe more than in UK.
[10:04:48] or US
[10:04:54] can't remember where Poe lived
[10:05:25] MaxSem: I left some notes on https://gerrit.wikimedia.org/r/44278
[10:05:32] not sure how to best handle the labs / production settings
[10:05:35] cool, thanks
[10:07:29] one day we will have to refactor all of that stuff
[10:08:23] * hashar listens to http://www.youtube.com/watch?v=xjlgUx7_aN0
[10:08:27] fight fire with fire!!
[10:13:33] hashar, try http://www.youtube.com/watch?v=y3SRll7wCH0
[10:14:04] lol
[10:14:53] I bought Angra - Temple of Shadows , http://www.youtube.com/watch?v=-x2EeVaDKA8
[10:14:57] sounds a bit nicer to me :-
[10:15:13] the full album can be listened straight (that is the link above)
[10:32:50] hashar, — mon semblable, — mon frère: poe was american, of course! :P
[10:33:56] ahh
[10:34:09] ori-l: that is one of my favorite french authors :-]
[10:34:14] (thanks to Baudelaire hahaha)
[10:39:22] hashar: there is a well-known anthology of modern french poetry that accompanied me constantly through my teens -- http://www.amazon.com/Anchor-Anthology-French-Poetry-Translation/dp/0385498888
[10:39:57] I should probably read french poetry one day
[10:40:17] ori-l: I am pretty sure I have read nothing from that list :/
[10:41:21] i don't read very much poetry these days but i regret it
[10:41:29] wikipoems-l! we should create it heh
[10:45:42] we have a poem wiki somewhere
[10:45:50] ori-l: brion actually wrote a poem extension for mediawiki
[10:46:40] or maybe it was not brion
[10:46:43] gmmhmhm
[10:46:48] ah got it http://www.mediawiki.org/wiki/Extension:Poem
[10:58:16] oh, neat
[11:04:46] * ori-l passes out. good night.
[12:05:37] Deneme
[12:51:14] Change merged: Erik Zachte; [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/44382
[13:26:02] nice: a rev_comment has an actual newline in it
[13:26:24] I wonder how the heck that got in there (edit is from a bot but you would expect mw to catch that)
[13:27:07] from enwiki, select rev_comment from revision where rev_id = 530117026 (after tee so you can od -c it and see for yourself)....
[14:56:47] hi. can we get https://gerrit.wikimedia.org/r/#/c/44401/ deployed to wikidata if there is time? it deals with a bug in the JS in production for anon users. ^demon , Reedy ? (being a pest, sorry :) )
[14:59:35] <^demon> Denny_WMDE1: On it.
[15:01:29] ^demon: thx!
[15:01:50] <^demon> Something's up with fenari, will let you know when I'm done.
[15:02:00] ok, thx
[15:03:52] <^demon> Ok, done.
[15:11:06] ^demon: hi :-] Can you update a repo config for me please? Check Require Change-Id at https://gerrit.wikimedia.org/r/#/admin/projects/operations/debs/python-voluptuous :)
[15:16:29] <^demon> Done.
[15:16:32] danke
[15:16:38] <^demon> (You can set this if you create a project from the command line, btw)
[15:16:43] <^demon> You can set all those options.
[15:16:56] ahh
[15:17:05] good to know for the next repo!
[15:21:42] ^demon: thanks for the deploy
[15:22:35] <^demon> Yw.
[18:56:12] mwalker: yeah
[18:56:33] Hey AaronSchulz, any chance I could get a review of https://gerrit.wikimedia.org/r/#/c/42367/ ?
[19:55:43] Warming up for https://www.mediawiki.org/wiki/Meetings/2013-01-17
[19:57:57] I can haz a hangout link?
[19:59:51] We are still setting up things...
[20:01:48] MaxSem: http://www.youtube.com/watch?v=RFC2A_zTQ3s&feature=youtu.be
[20:02:08] awjr_techchat, that's last time
[20:02:24] that explains a lot
[20:02:32] please ping me when the youtube link is up
[20:02:36] * siebrand grins at awjr_techchat
[20:02:40] :p
[20:02:44] and as I'm a deployer I should ideally be able to ask questions;)
[20:02:59] i figured we could always ask in here
[20:03:04] but, i agree
[20:03:22] heh....I clicked on that link and said..."who is that? that looks like me!"
[20:03:30] heh
[20:04:07] Hello All, we're about to start a Tech Chat. The event will be streamed here: http://youtu.be/isq-jid4ujQ
[20:04:31] and if you'd like to JOIN, not just watch, ping me here
[20:04:32] thanks
[20:04:49] cndiv- I'd like to join
[20:04:59] https://www.mediawiki.org/wiki/Meetings/2013-01-17 is starting now!
[20:05:17] cndiv - join plz
[20:05:19] don't be shy about joining...we rarely have more than we can handle joining
[20:05:21] cndiv, ping
[20:06:23] I will be taking questions from IRC to Ryan / Chris, if there are any
[20:07:05] mmm problems
[20:07:14] MaxSem: Took a look at the Solr stuff. Looks nice. Where did you get stuck last year?
[20:07:19] trying to bring Ryan back
[20:07:37] multichill, stuck at what?
[20:07:54] I'm pretty sure you gave it a try last year
[20:08:03] arriving...
[20:08:10] multichill, I didn't
[20:08:12] Is cndiv responding to our pings?
[20:08:20] we had no time for it
[20:08:21] anomie i think he's fixing ryan's set up atm
[20:08:24] Is someone talking?
[20:08:32] ryan got disconnected briefly
[20:08:33] 1 sec
[20:08:35] should be back now
[20:08:47] trying to share presentation
[20:09:00] back!
[20:09:12] Hmm, ok, let's see if we can hijack some labs instance which already has Solr on it ;-)
[20:09:54] JOIN link is https://plus.google.com/hangouts/_/47b3ba58aebed6ec135af762f09b2ca1d7040e92#
[20:10:09] MaxSem and anomie
[20:10:12] multichill, it's not hard to set up - just use the existing solr puppet module
[20:10:16] cndiv, thanks
[20:11:14] I love the cloud as long as it is Cloud <-> some managed stuff <-> me ;-)
[20:11:27] thanks cndiv
[20:13:58] dude! deploy-info! that's awesome
[20:13:58] * sumanah watches https://www.youtube.com/watch?feature=player_embedded&v=isq-jid4ujQ#!
[20:14:30] what does killing nfs do for test.wikimedia.org?
[20:14:54] ok awjr_techchat
[20:15:19] !
[20:15:19] ok, you just typed an exclamation mark with no meaning in the channel, good job. If you want to see a list of all keys, check !botbrain
[20:15:25] bah
[20:16:01] mobile team uses test extensively
[20:16:42] well, it was nice they got input from everyone else on that decision...
[20:17:08] MobileFrontend will be on betalabs, but that might not be ready for one month +
[20:17:11] test becomes much, much more like test2 in the new system
[20:17:30] nice plug for beta :)
[20:17:41] awjr_techchat: is there anything in particular about the NFS mount aspect of test that you need?
[20:17:51] robla's point is important - we can set up whatever test wikis we want, no? there's nothing that prevents us from deploying to a test2-style test wiki to check for fatals in production, correct?
[20:18:21] robla no, not in regards to NFS in particular, but we put a lot of effort into getting test configured in a way to make it possible for us to stage MobileFrontend (test2 will not currently work for that)
[20:19:03] robla i expect once we have things sorted on betalabs this won't be a big deal, but im concerned about the time in between test going away and us having everything set up on beta labs in a way that fully mimics prod
[20:19:18] demo time!
[20:20:57] as long as we have something usable in the interim (between test going away and MF being fully set up on betalabs), i dont think it will be a problem for us
[20:21:45] but there are currently quite a lot of things about MF that we can't test locally or in our simple testing environments on labs because of peculiarities in how production is set up, so it's critical for us to have /some/ kind of staging environment that mimics production as close as possible before we deploy
[20:22:04] What's that fatal in the message?
[20:23:10] I'm here
[20:23:53] qgil_: please ask the speaker to repeat questions from local people, we do not hear them
[20:24:03] ok
[20:24:08] does the detailed report tell you roughly how long til they are done?
[20:24:11] qgil_: ^
[20:28:28] lulz
[20:28:51] murder!
[20:29:38] "I'm considering bittorrent right now" says Ryan
[20:29:47] yep
[20:29:53] that is actually kinda awesome
[20:30:02] yeah
[20:30:09] Twitter's murder: https://github.com/lg/murder
[20:30:16] a murder is a set of crows
[20:30:34] and you know Ryan is just crowing over this achievement of installing "murder" into our sys :-)
[20:30:59] ohho!
[20:31:39] how far away is the future, qgil_?
[20:32:10] ok
[20:32:24] that is, the 'future' ryan is referring to for these new features (including new target for using git-deploy in prod)
[20:32:30] qgil_: ^
[20:32:43] Facebook also uses BitTorrent for deploys, or did, just for the record.
[20:33:09] Also see http://engineering.twitter.com/2010/07/murder-fast-datacenter-code-deploys.html
[20:33:19] Includes a link to a Vimeo video.
[20:33:32] question: What happens if someone tries to commit something locally without doing "git deploy start"?
[20:34:23] qgil_: also, is NFS gone right now/is test.wikipedia.org still usable for staging?
[20:34:51] git deploy start; git deploy --force sync
[20:35:31] wait awjr_techchat
[20:35:39] np :) thanks qgil_
[20:37:34] more questions?
[20:38:34] qgil_: there's been situations where resource loader has not picked up a change -- the resolution is to touch the file and then sync the file -- is there a similar workaround available in git-deploy?
[20:38:52] ok mwalker
[20:39:28] one thing to note: tin isn't a bastion host, so you'll need to go to bast1001, then tin
[20:40:01] qgil_: ^ that's not a question, but I think it's relevant to S's question
[20:43:07] I don't think so.
[20:43:29] I believe GitHub has you use the git protocol for authenticated repos.
[20:43:46] Who was that who just asked about comparison with Capistrano?
[20:43:53] Capistrano: https://github.com/capistrano/capistrano
[20:43:54] Subbu
[20:44:31] Actually, they recommend it as Git Read-Only. But it's not bare.
[20:44:43] Eloquence, thanks. yes, i've used it for rails deployment .. so was curious. i haven't investigated the details.
[20:44:50] <^demon> superm401: git:// protocol is read-only, since it can't do authentication.
[20:45:02] --init not -init
[20:45:12] ^demon, which should be fine for deployments, right?
[20:45:20] The web servers aren't doing commits, just fetching.
[20:46:24] live testing
[20:46:26] FATAL FATAL FATAL <----- watching the screencast
[20:46:47] <^demon> superm401: Potentially. I thought about that--but we kind of moved away from relying on native git fetch. If it fails, you can end up with corrupted objects.
[20:47:00] <^demon> That's why we started talking of torrenting the .git dirs around, instead.
[20:47:10] ^demon, right but someone said maybe it was just the HTTP transport that corrupted.
[20:47:23] Also, did you guys try smart HTTP, or just dumb (git now has both)?
[20:47:33] <^demon> I'm not sure, tbh.
[20:47:44] questions still?
[20:47:59] ^demon, to review later, http://git-scm.com/2010/03/04/smart-http.html
[20:50:15] can't there be a simple command that just goes back to previous deployed state, quickly?
[20:50:15] q?
[20:51:20] sure do like all the pointing at beta. MF and E3 extensions will be great additions.
[20:51:33] spagewmf, chrismcmahon is your beta point man :)
[20:51:46] so we need two beta instances, one running wmf7 and one running wmf8 (?)
[20:52:19] or we assume deploying our code will work on both, as we do now.
[20:52:29] :)
[20:52:30] we're going to have to change some policies about what gets deployed there I think
[20:52:35] <^demon> Can someone ask Ryan to get on IRC?
[20:52:50] Thanks to the presenters
[20:52:57] good night
[20:52:57] best of luck with the switchover
[20:53:03] thanks everyone! im excited about this :)
[20:53:16] ^demon: what's up?
[20:53:19] spagewmf- beta is a whole cluster, it has hosts like http://en.wikipedia.beta.wmflabs.org/ and so on
[20:53:23] Yes, thanks!
[20:53:45] <^demon> robla: I was going to ask him about what superm401 suggested, trying git with smart http.
[20:53:46] <^demon> But it's no rush.
[20:53:51] anomie, what hosts on that cluster are running wmf8?
[20:54:02] Or git://
[20:54:11] <^demon> Granted, it might not even solve the problem we had.
[20:54:28] spagewmf- I'm not sure offhand. It depends on what's configured in wikiversions-labs.dat
[20:54:43] anomie thanks, I'll take a look
[20:54:52] I suppose "WE ARE WAITING FOR THE HANGOUT URL" is no longer true?
[20:56:46] Can I make a change, deploy it to beta cluster from this special deploy-bastion for beta labs (seems confusingly named), and *NOT* have it on the table for deploying to the live cluster? It seems like beta is using the same repository branches
[21:27:18] siebrand: i have a question for you
[21:27:50] why are access keys (such as 'accesskey-ca-edit') internationalized but not localized?
[21:28:26] in software at least, I presume there are some cases out in the wild where these are overridden in the MediaWiki namespace
[21:28:28] @MediaWikiMeet was closed
[21:37:36] TrevorParscal: what a mess that would be for little gain
[21:38:13] why would you localize accesskeys?
[21:38:38] these are great for working on wikis in foreign languages
[21:40:48] then why internationalize them?
[21:41:50] TrevorParscal: customisation != l10n
[21:41:53] it's a glaring design mistake if you have a comment in an i18n file that says "do not translate or duplicate this message to other languages" (as you do in the MessagesEn.php file)
[21:42:41] not everything needs to be translated
[21:42:56] I'd say accesskeys are one such thing
[21:43:05] if it shouldn't be localized, it shouldn't be internationalized
[21:43:05] no?
[21:43:10] Nikerabbit: You think it's appropriate to force every de-language wiki (say) to locally-override ctrl+b into ctrl+f?
[21:43:36] James_F: yes if they want to do that
[21:43:50] James_F: do they actually do that in germany?
[21:43:53] Nikerabbit: ...
[21:44:08] James_F: i've never seen translated hotkeys in Polish software
[21:44:10] but if there's a sensible default for a different language, why not build that into the software?
[21:44:17] Nikerabbit: What is so special about keyboard shortcuts that they don't get the internation... what Trevor said. :-)
[21:45:04] I would be pretty annoyed if they were different in every wiki I use
[21:45:09] plus we would get tons of conflicts
[21:46:13] that's why they shouldn't be part of the i18n system
[21:46:29] because any wiki can override them and invoke said pain on you
[21:46:41] TrevorParscal: wikis like this customisation
[21:46:48] they use it in rare cases
[21:46:53] if they are to be configurable, it should be based on user lang and be consistent
[21:47:06] for instance on it.wiki v is used for featured articles (vetrina) and diff is -
[21:47:22] Nikerabbit: To put it the other way around, why the hell are you forced to use enwiki (and English language cultural imperialism) shortcuts on plwiki, fiwiktionary and so on unless they customise?
[21:47:24] in my example, not all wikis have a "vetrina"
[21:48:06] Also, this is what happens if one doesn't have proper handling: https://translatewiki.net/wiki/Thread:Translating_talk:MediaWiki/Kiwix_access_key_translations
[21:48:15] James_F: they suck for all languages anyway
[21:48:26] 'x' for random page...
[21:48:54] Nikerabbit: that's because obsessive special:random clicking kills you
[21:49:21] Nikerabbit: They work really quite well for English - Find is F; Copy is C, Cut is X ('Cu' sound), Go is G, Edit is E.
[21:49:35] Nemo_bis: There's a tension between local wiki customisation and confusing users, of course.
[21:50:32] the reason I ask is because we are trying to decide what to do with the various keyboard shortcuts in VisualEditor
[21:50:42] * Nemo_bis figured
[21:50:43] Nikerabbit: But yes, they suck for non-English speakers.
[21:51:13] James_F: yes but the answer to the original question is simple, they are messages because that's where local configs are stored...
[21:51:27] we are making the labels language and platform localizable, but the actual shortcut keys (which are currently all en-centric) may need to be localized as well
[21:51:30] are they actual shortcuts like ctrl+s or accesskeys?
[21:51:43] but then I see that mw doesn't do this, so I'm not sure where to go from here
[21:52:23] they are keyboard shortcuts, like ctrl+b for bold
[21:53:37] then don't localise, it's like ctrl+c which is not localised anywhere
[21:54:24] doesn't that seem bad?
[21:54:41] imagine trying to tell someone on an arabic keyboard layout to press ctrl+b
[21:54:54] we internationalize the label
[21:55:07] but doesn't that make it likely that the key will make no sense at all?
[21:55:34] you have to ask Amir about that
[21:56:00] TrevorParscal: CodeEditor (?) had such problems too, I remember; are you addressing them systematically?
[21:56:03] Nikerabbit: Ctrl+C certainly is localised in some OSes/Browsers on a language/locale basis.
[21:56:19] Nikerabbit: I've been bitten by it, for instance. :p)
[21:56:53] https://bugzilla.wikimedia.org/show_bug.cgi?id=39649
[21:57:17] James_F: not seen any
[21:57:20] (not really shortcuts, maybe there was something about them too)
[21:57:23] not in windows as far as I know
[21:57:45] Nikerabbit: I'm not sure praying in aid Microsoft's design choices is perfect. ;-)
[21:58:02] Nikerabbit: But you guys are the expert.
[21:59:50] TrevorParscal: i've never seen keyboard shortcuts translated in polish-language software
[21:59:59] James_F: ^
[22:00:05] (if that means anything)
[22:00:09] MatmaRex: Sure.
[23:20:50] http://wikitech.wikimedia.org/view/Wikibugs needs to be updated
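A conceptual sketch (not MediaWiki's actual code) of the point argued in the access-key exchange above: keys like 'accesskey-ca-edit' live in the message system not so they can be translated, but so the same lookup that serves i18n also lets an individual wiki override them through its MediaWiki namespace. The wiki name and override value below are illustrative placeholders; 'e' and 'h' are the stock English defaults.

```python
# Simplified resolution order: local wiki override first, otherwise the
# untranslated software default shared by every language.
DEFAULTS = {
    "accesskey-ca-edit": "e",
    "accesskey-ca-history": "h",
}

WIKI_OVERRIDES = {                 # hypothetical customisation, e.g. a wiki editing
    "examplewiki": {               # its MediaWiki:Accesskey-ca-history page
        "accesskey-ca-history": "-",
    },
}

def access_key(key, wiki):
    """Local override wins; otherwise everyone gets the same default."""
    return WIKI_OVERRIDES.get(wiki, {}).get(key, DEFAULTS[key])

print(access_key("accesskey-ca-edit", "examplewiki"))     # 'e' -- default kept
print(access_key("accesskey-ca-history", "examplewiki"))  # '-' -- local customisation
```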