[08:04:49] pwd
[08:04:53] pwd
[08:04:56] wat the
[08:04:58] ........
[08:05:57] hello there
[08:06:00] anybody who
[08:45:58] New patchset: Hashar; "(bug 36167) Testswarm unable to open db file" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/5600
[08:47:25] New review: Hashar; "(no comment)" [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/5600
[08:48:06] New review: Hashar; "Would surely need to be polished overtime." [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/5440
[08:48:09] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/5600
[08:48:10] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/5440
[08:48:22] Nikerabbit: thanks for the Testswarm bug report :-]
[10:27:29] New patchset: Hashar; "fix chmod on directory" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/5614
[10:28:01] New review: Hashar; "(no comment)" [integration/jenkins] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/5614
[10:28:03] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/5614
[10:30:55] * TimStarling is finding that whatever language he is meant to be writing in, it comes out as an arbitrary mixture of Lua, PHP and JS
[17:27:56] Reedy: around?
[18:06:31] getting lots of 503 errors with gerrit
[18:07:29] Yeah it's restarting
[18:07:44] ^demon https://gerrit.wikimedia.org/r/#change,5033 just needs approving
[18:07:50] Ryan_Lane ok
[18:07:52] Ryan_Lane this is going to bounce gerrit when it goes through
[18:08:01] oh i see
[18:08:08] yep :)
[18:08:24] * awjr waits patiently
[18:11:42] seems the config was actually bad
[18:11:45] OK it's still down
[18:12:03] Ryan_Lane: Could you bring Gerrit back up ASAP? We're supposed to be deploying to English Wikipedia
[18:12:31] There was an 11am deployment scheduled so not everyone was all that happy when you restarted Gerrit at like 10:57
[18:12:53] then people shouldn't ask me to review and push it right before one
[18:12:54] you ask, I do
[18:13:08] so roan I'll say what I said to rob la in another channel: all ops people are in our regularly scheduled meeting, 1 hour likely
[18:13:17] True, someone from the same team asked you to
[18:13:19] as far as deployments go
[18:13:25] apergos: I know, and that makes it even worse
[18:13:44] also, you guys pushed in broken config.
[18:13:47] soooo. yeah
[18:13:49] Yeah
[18:13:52] don't get all pissy with me
[18:13:54] it's back up
[18:13:58] Yay, thanks
[18:14:09] ^demon asked you to do it, that's bad internal communication, fair
[18:15:31] I can either do things when you guys ping me, or I can check all the calendars every single time ;)
[18:15:32] so what I want folks to know is that if there is prep work before the deployment this is a great time to do it because if something goes sour and you need us you're going to be pulling folks out of our bi-monthly update (not your fault but just so you know)
[18:16:25] ^demon: you should have tested this in labs ;)
[18:16:47] <^demon> Yeah yeah...
[18:16:54] apergos: I don't think we foresee needing ops, other than keeping Gerrit up
[18:17:01] I've reverted the change in gerrit
[18:17:06] resubmit when it's fixed
[18:17:10] <^demon> Ok.
[18:17:39] ok, hopefully not
[18:19:41] Ryan_Lane: sorry, I didn't realize ^demon pinged you
[18:20:49] robla: no worries
[18:21:29] <^demon> robla: I suppose I should keep track of our own schedule before asking Ryan to break gerrit. This is all my fault :)
[18:21:35] :D
[18:23:20] no prob. anyway, Aaron is working out the newbie patch now, and then we should be ready
[18:24:10] Ryan_Lane: I have a couple quick questions about the bot group on Labs if you have a second
[18:24:25] kaldari: let's talk about it in -labs
[18:24:29] sure
[18:27:20] hi hexmode!
[18:27:29] hey
[18:27:36] ready to watch English Wikipedia for comments, complaints, and compliments? :)
[18:28:26] * hexmode sighs
[18:29:06] sumanah: I just bumped a bug to highest and added you and robla as cc on it. Is that ok for notification instead of a separate email?
[18:29:34] hexmode: no, please do send a separate email as well to ensure that we see a separate notification outside the bugzilla mail.
[18:29:51] I agree with Sumana
[18:29:58] what's the bug hexmode?
[18:30:16] (IRC should be sufficient since we're already here)
[18:30:31] hrm.... https://bugzilla.wikimedia.org/show_bug.cgi?id=36174
[18:30:38] maybe I didn't send it yet
[18:55:07] Looks like the latest MediaWiki update on EN WP broke some important gadgets.
[18:55:11] Namely Twinkle
[18:55:31] https://en.wikipedia.org/wiki/Wikipedia_talk:Twinkle#Twinkle_not_working
[18:58:34] * robla looks
[18:59:24] RoanKattouw_away: ^
[19:00:18] Let me enable it and see what happens
[19:01:32] "$ is undefined", that's fairly clear-cut, fixing
[19:02:20] hi sumanah
[19:02:28] desperately waiting for my result
[19:02:31] hi drecodeam
[19:02:42] drecodeam: look at blog.wikimedia.org as of about 2 seconds ago.
[19:04:27] drecodeam: congratulations.
[19:05:06] Hmm, automagic $ aliasing seems to have broken
[19:05:08] I wonder wh
[19:05:09] y
[19:05:13] thanks a lot sumanah
[19:05:23] give me a moment..i am happy rejoicing !
[19:05:45] drecodeam: it was nice meeting you a few days ago as well :)
[19:05:56] YuviPanda: ya man, totally !
[19:05:59] https://blog.wikimedia.org/2012/04/23/wmf-selects-9-students-for-gsoc/
[19:07:11] Found it, committing fix
[19:07:39] DarTar: ping when you're ready
[19:08:10] kaldari: i got selected for GSoC !
[19:08:13] Fix submitted: https://gerrit.wikimedia.org/r/5628
[19:08:23] It's totally trivial but I would like not to self-review it
[19:09:33] Hmm
[19:09:36] Oh hey Twinkle works now
[19:09:54] rsterbin: bringing Aaron in the channel so we can review the bug
[19:10:02] ok
[19:10:28] Congratulations to Ankur Anand (drecodeam), Harry Burt, Akshay Chugh, Ashish Dubey, Suhas HS, Nischay Nahata, Aaron Pramana, Robin Pepermans (SPQRobin), and Platonides for being accepted to GSoC
[19:10:48] \o/
[19:11:03] so first off, when was the deployment that fixed the noedit event count?
[19:11:15] friday
[19:11:28] do you remember the time roughly? RoanKattouw ?
[19:11:35] hey halfak
[19:11:38] DarTar: Server admin log should tell you
[19:11:46] Hi-o
[19:12:08] if you have a link handy, I don't remember the rev number off the top of my head
[19:12:10] DarTar: April 19 between 21:00 and 21:15 UTC
[19:12:15] sweet :)
[19:12:42] Thanks Roan :)
[19:13:31] so rsterbin, if I understand correctly, with the other bug we *do* have users overbucketed
[19:13:38] that's correct
[19:14:18] so that means that we need to hold off on the stage 3 analysis (at least the part of it dealing with volume)
[19:14:30] Are we exactly 2x overbucketed?
[19:15:11] no
[19:15:41] About how many more users should have been bucketed into #1?
[19:15:50] hang on...
[19:18:54] if the fix is available do you guys think we could push it today? https://gerrit.wikimedia.org/r/#q,5496,n,z
[19:19:04] halfak - the init events are ~1500 off
[19:19:21] that's up to Roan
[19:19:44] Looking
[19:21:17] I see. Thanks.
[19:21:21] rsterbin: is there any chance that the overbucketing bug may have affected stage 2 as well?
[19:22:12] i don't think it would have, because the buckets themselves were changed
[19:22:23] but maybe you should look at the init events anyway
[19:22:44] I will, but what I recollect from my post-deployment checks is that the init events were kosher
[19:22:54] I have approved your change, but I am going to wait a few hours before I deploy it
[19:22:58] I can dig up my report and perform a few more checks
[19:23:00] They're doing a core MW upgrade on enwiki right now
[19:23:10] but it's good to know that stage 2 is unlikely to be affected
[19:23:23] RoanKattouw: sounds good
[19:23:29] ping me when it goes live
[19:23:48] ok
[19:23:55] DarTar: anything else?
[19:24:44] nope not really, I think halfak and I know what to focus on and we'll wait for the fix for the second set of questions that we cannot address now
[19:25:15] rsterbin: Thanks. Hasta.
[19:25:24] no problem
[19:25:26] thanks, ttyl
[19:29:13] most everything on VPT seems to be about diff colors, which is probably a good sign: http://en.wikipedia.org/wiki/Wikipedia:VPT#1.20wmf1_deployment_complete
[19:30:28] Hah
[19:30:45] Digging through the mailing list archives I just discovered that apparently robla was the org admin for GSoC 2010
[19:30:55] Or at least he announced the accepted students
[19:31:14] yup, that was part of my initial gig here
[19:31:32] I can't find any pre-2009 announcements of accepted students
[19:31:49] Like, one written by Brion (the 2009 one was written by me)
[19:32:05] sounds like snipe hunting to me
[19:32:25] Hah!
[19:32:52] Only in 2012 and 2009 did someone think to tag the GSoC announce post with 'gsoc'
[19:33:22] ptptptpth...tagging
[19:34:13] Yeah it's pretty inconsistent overall
[19:35:01] <^demon> re: diff colors...you're never going to please everyone.
[19:35:19] <^demon> You're *always* going to find someone who says "I like the old way better"
[19:35:31] That's true
[19:36:04] For example, if you make the wikitext light green, you will almost certainly find those people
[19:37:06] Because no one else seems to care but enwiki does, I am going to fix https://bugzilla.wikimedia.org/show_bug.cgi?id=36113
[19:37:32] Well
[19:39:43] Hmm, it seems to be fixed in master
[19:39:52] Daniel_WMDE_: you seriously don't see how you could use a queue for this?
[19:40:09] you're pushing out a bunch of requests that say "do this" to clients
[19:40:30] rather than doing that you push it into a queue, and the clients poll the queue
[19:40:31] Ryan_Lane: i see various ways to use queues for this. the question is which one of them works efficiently
[19:40:41] we have the job queue right now
[19:40:44] it sucks, but it works
[19:41:11] pushing to the API is a *really* bad idea
[19:41:14] Ryan_Lane: so your suggestion would be to go with a polling scheme. ok.
[19:41:28] why is it a bad idea?
[19:41:40] what happens when the push fails for some reason?
[19:42:03] polling is more reliable
[19:42:16] push/pull is obviously best
[19:42:26] which is why the pubsubhubbub model is good
[19:42:41] yes, i know. that was pretty much my own argument against the push scheme. polling is more reliable
[19:42:49] but also way slower and has way more overhead
[19:43:01] but, pubsub is push based?
[19:43:06] also, if we are updating external clients, push is way less efficient if it's direct to the api
[19:43:10] pubsub is push/pull
[19:43:33] yes. same as the scheme i proposed on the list
[19:43:37] it's polling with a push to notify clients
[19:43:46] nothing keeps the clients from pulling. they just don't need to poll.
[19:43:52] (that isn't totally accurate, but it's close enough)
[19:44:09] now i'm confused.
[19:44:15] in-cluster we should just poll, though
[19:44:40] poll the database? or the api?
[19:44:43] out-of-cluster we should do pubsub
[19:44:52] write into a queue, and poll the queue
[19:44:54] yes, pubsub would be nice for 3rd parties
[19:45:08] (the queue right now is the job queue)
[19:45:17] if the job queue is too slow, we should replace it with something saner
[19:45:22] <^demon> Poll using the database like we do for FileRepo. There's no reason to make an HTTP request when you're already inside the cluster.
[19:45:29] exactly
[19:45:58] ok, db polling was also my first thought. jeroen convinced me of the push scheme.
[19:46:12] would be nice to have your arguments on the list, for posterity :)
[19:46:28] <^demon> I already said on-list to do it like FileRepo ;-)
[19:46:40] just wanted to talk to you about it first :)
[19:46:51] I agree that an abstraction layer would be good
[19:47:04] Ryan_Lane: i'm not sure how a client would poll the job queue, though. afaik it can't really. the job queue consists of things to run. and the client doesn't have direct access to wikidata's queue.
[19:47:18] the clients would be the job runners
[19:47:20] just like now
[19:47:32] ^demon: i wasn't aware that the file-repo did periodic polling... does it?
[19:47:34] the job runners run the jobs to clear the caches
[19:47:53] <^demon> Daniel_WMDE_: Well when the locally cached copies run out, it'll re-poll. Or on a forced purge.
[19:48:17] heh. I guess a lot of people don't use the job queue the way we do :)
[19:48:34] ^demon: yea, but i'd really try to avoid this. because waiting > 1 minute to see your changes sucks. and polling everything every minute also sucks. right?
[19:48:50] no
[19:49:01] polling in this way actually works
[19:49:14] the polling itself isn't a problem because there are always jobs on the queue
[19:49:14] Daniel_WMDE_: do I understand correctly that your problem is isomorphic to the problem of transwiki transclusion?
[19:49:32] this can't be instant. we don't have the resources to handle it
[19:49:40] vvv: with regards to caching, yes
[19:50:01] Ryan_Lane: about the job queue... the clients would be the job runners. ok. how do the jobs get into their queue?
[19:50:13] we have a daemon that runs the jobs
[19:50:19] i must be missing something here, i really don't see how the job queue can be used.
[19:50:22] it's a cluster of nodes
[19:50:25] why not?
[19:50:40] this is for cache invalidation, right?
[19:51:01] <^demon> What if you had the repo push to the client job queues?
[19:51:05] when a template changes, it figures out all pages that need to be invalidated, and sticks them into the queue as jobs
[19:51:08] Ryan_Lane: cache invalidation and getting the new data
[19:51:26] <^demon> So it still wouldn't be instant updates, but the clients would still get a job to update (when it gets to it)
[19:51:28] the job runners go through and purge the cache for those pages
[19:51:59] we can't handle instant updates
[19:52:19] ^demon: can do that. that's essentially a push. the only difference is that the re-rendering is queued.
[19:52:20] WTF, I'm getting 503s from bits
[19:52:43] Daniel_WMDE_: and that's how template modification works ;)
[19:52:53] Ryan_Lane: i know
[19:53:04] is this different in some fundamental way that I'm missing?
[19:53:16] <^demon> Well, the transwiki aspect of it.
[19:53:25] Ryan_Lane: and that's perfectly fine. i'm not opposed to using the job queue. it's even in the proposal (for the repo side). rerendering on the client can be queued, sure.
[19:53:40] ^demon: so we push into a bunch of queues, rather than just one :)
[19:53:48] Ryan_Lane: but that doesn't say anything about when and how stuff goes from the repo to the clients.
[19:54:12] Ryan_Lane: push how? directly into the client databases?
[19:54:17] that's kind of scary.
[19:54:21] well, we could have a central queue
[19:54:27] so... push to the client's job queue via the api?
[19:54:32] that's exactly what we proposed :)
[19:54:45] not via the api
[19:54:53] we want this to be asynchronous
[19:55:03] it scales better and is more reliable
[19:55:08] Ryan_Lane: the rendering, yes. the posting to the queue?...
[19:55:13] same
[19:55:32] writing to another database is no more asynchronous than an http request
[19:55:53] it goes through less hops
[19:56:17] and is *way* more likely to succeed
[19:56:19] well, ok. so direct database access is quicker. granted.
[19:56:54] but if we do shared databases, then I think all clients reading from a central db is much nicer than the repo writing to hundreds of client dbs.
[19:56:59] though that would mean polling, of course
[19:57:13] well, the job queue is polled anyway. so no difference there.
[19:57:18] exactly
[19:57:25] this is in-cluster, of course
[19:57:39] subhub for external is definitely the right answer, though :)
[19:57:48] which is why an abstraction class for this would be nice
[19:57:49] Ryan_Lane: ok. so you would prefer all client wikis to poll a central queue (aka database table) for updates at regular intervals?
[19:57:55] that way it can send to both at the same time
[19:58:06] it would be especially nice if this stuff could be pipelined
[19:58:16] (I've been working in openstack code for far too long now, I can tell)
[19:58:39] Daniel_WMDE_: direct to the database may not be a good idea
[19:58:43] make an abstraction for that too
[19:58:50] what if we want to switch to a queue?
[19:58:59] the db *is* the queue
[19:59:00] see how the job running stuff is done
[19:59:10] well, it is now
[19:59:16] what if we want to use rabbit, or redis?
[19:59:18] but yes, sure, in the code there will be a "ClientNotification" class thingy
[19:59:29] cool. we can just subclass it if we want to change it
[19:59:39] of course. that's not the issue here
[19:59:42] * Ryan_Lane nods
[19:59:52] the issue i want to decide is: pushing or polling? http or direct database?
[19:59:59] in-cluster, polling
[20:00:06] direct to the queue
[20:00:30] out-cluster, pubhub
[20:00:52] the 3rd party interface will have to wait, wikidata is on a tight schedule
[20:00:58] nice gsoc project for next year, though :)
[20:01:25] yeah :)
[20:01:48] right. hm... ok.
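The "out-cluster, pubhub" option Ryan_Lane settles on here is the push-to-notify, pull-to-fetch pattern at the heart of PubSubHubbub: the notification carries no data, it just tells the subscriber to go pull, so a lost ping only delays an update until the next ping or poll. A minimal sketch in Python follows; the endpoint, field names and helpers are invented for illustration and are not part of MediaWiki, Wikidata or any real hub API.

    import json
    import urllib.request

    FEED_URL = "https://repo.example.org/changes-feed"  # hypothetical feed endpoint
    last_seen_id = 0                                    # would be persisted for real

    def fetch_changes_since(change_id):
        # Pull: ask the feed for everything newer than what we have already seen.
        with urllib.request.urlopen(f"{FEED_URL}?since={change_id}") as resp:
            return json.load(resp)  # e.g. [{"id": 7, "item": "Q42", "property": "P17"}, ...]

    def on_notification(_ping=None):
        # Push: the hub only says "something changed"; the subscriber pulls the details.
        global last_seen_id
        for change in fetch_changes_since(last_seen_id):
            apply_change(change)
            last_seen_id = max(last_seen_id, change["id"])

    def apply_change(change):
        # Application-specific: queue cache invalidation, update a local copy, etc.
        print("would invalidate pages that use", change["item"])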
[20:02:27] I'll put my thoughts together in the mailing list post
[20:02:28] i'm now in the odd position that you convinced me of the way i originally proposed, so now i have to argue against the stuff i proposed on the list on the team's behalf :)
[20:02:36] maybe brion or tim will totally disagree with me :)
[20:02:36] Ryan_Lane: thanks :)
[20:03:03] yw
[20:03:06] well, i'd be happy if you could get their input, no matter what it is :)
[20:06:13] Ryan_Lane: hm... there's one issue i have with using a true queue: it means that the repo has to know all clients, and address them individually, with highly redundant data.
[20:06:56] Ryan_Lane: if the clients are polling a central thingy, that shouldn't be a queue you pop stuff off. it should be a reduced version of recentchanges
[20:07:20] a change-id, timestamp, and the id of the item that was changed.
[20:07:58] kept around for days or weeks. so clients can easily re-sync. and there's no redundant data. and the repo doesn't have to know anything about the clients.
[20:08:03] much nicer that was
[20:08:06] *way
[20:08:32] Daniel_WMDE_, where will you be storing the data? Was it going to be on wikipages?
[20:08:56] maybe storing them on top of eg. git would be a solution for a distribution system
[20:09:10] if the way it's stored is not too important
[20:09:39] Platonides: no, it will be wikipages, for a lot of good reasons. we can put a secondary copy of the data anywhere though, and plan to, for efficient queries
[20:09:40] Daniel_WMDE_: even in a push system you'd need to know that
[20:10:00] Ryan_Lane: yes, i know. it's one advantage of the polling approach that we don't need that.
[20:10:25] we could use a queue that doesn't get entries removed, but works like a feed
[20:10:56] that's more difficult in our current solution
[20:10:57] yes. which is pretty much what rc is. which is what i proposed above... or tried to
[20:11:04] I don't think the job queue can handle that right now
[20:11:41] the queue can be the revisions attached to a page
[20:11:42] no, the job queue can't. but RC can.
[20:11:57] Platonides: no, it needs to be global
[20:12:35] basically, what do we need? the client needs to be able to answer a query like this:
[20:12:52] "tell me all the items that changed since x, so i can see which of them are relevant to me"
[20:12:56] RC can do that
[20:13:37] hm, you're right
[20:14:25] in fact with an infinite recentchanges, operations resemble things like recentchangeslinked a lot
[20:17:15] kind of, yea
[20:17:25] though it's cross-cluster, so no joins.
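Daniel's "reduced version of recentchanges" boils down to an append-only table of (change id, timestamp, changed item) that is kept around long enough for any client to re-sync by asking for everything since X. A small sketch, using SQLite purely for illustration; the table and column names are invented and are not the Wikibase schema.

    import sqlite3
    import time

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE change_feed (
        change_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        change_time INTEGER NOT NULL,   -- unix timestamp of the edit
        item_id     TEXT NOT NULL       -- e.g. 'Q42'
    )""")

    def record_change(item_id):
        # Repo side: append one row per edit; rows are never popped, only expired after weeks.
        db.execute("INSERT INTO change_feed (change_time, item_id) VALUES (?, ?)",
                   (int(time.time()), item_id))
        db.commit()

    def changes_since(change_id):
        # Client side: "tell me all the items that changed since x".
        return db.execute(
            "SELECT change_id, change_time, item_id FROM change_feed WHERE change_id > ?",
            (change_id,)).fetchall()

    record_change("Q42")
    record_change("Q64")
    print(changes_since(0))   # [(1, <ts>, 'Q42'), (2, <ts>, 'Q64')]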
[20:18:53] heh
[20:18:56] ok....
[20:18:56] so
[20:19:02] binasher and I have been discussing it
[20:19:12] Daniel_WMDE_: why do you want to do such complicated things :D
[20:19:31] it makes us have to think too much
[20:19:57] since this really needs to be handled like a feed, let's just handle it as a feed
[20:19:58] hehe :)
[20:20:04] yes
[20:20:12] let's use pubhub for internal and external
[20:20:26] we can push into a global feed
[20:20:36] then the individual wikis will get a notification via pubhub
[20:20:51] they'll then push invalidations into their local queues
[20:21:15] ugh. that's a *lot* of http traffic we are talking about here...
[20:21:23] we're *really* good at that :)
[20:21:58] well. ok. then we are pretty much back to the proposal i sent to the list.
[20:22:07] only that the push is done via pubhub, not the api
[20:22:11] yeah
[20:22:46] so, I was thinking that the only thing that should get stuck into the queue is something like "this property changed"
[20:22:55] the local wikis would determine what that means for their own invalidations
[20:23:01] and the normal job queue would handle it
[20:23:37] it should be fairly efficient overall
[20:24:19] s/queue/feed/
[20:24:40] lemme reword all of that, actually
[20:24:51] so, I was thinking that the only thing that should get stuck into the feed is something like "this property changed"
[20:24:58] the local wikis would determine what that means for their own invalidations
[20:25:08] and the normal local job queues would handle it
[20:26:06] yes, i agree. though it would be "this property on that item"
[20:26:12] right
[20:26:19] an item being "something described by a wikipedia page"
[20:26:24] a triple :)
[20:26:58] though we only really care about the first two items of the triple in this situation, right?
[20:27:13] it doesn't matter what the value changed to, just that it changed
[20:27:33] though, it could be interesting to throw all three in, then clients could watch the stream for value updates too :D
[20:27:35] yes. even the subject alone would probably be sufficient, at least for now.
[20:27:49] but knowing which property changed is nice
[20:27:50] s/stream/feed/
[20:27:54] and probably with some version numbers
[20:27:57] yeah
[20:28:04] ok now
[20:28:15] ok, writing a post to the list
[20:28:18] so... now the question is: do we cache the relevant data on the client?
[20:28:26] which client?
[20:28:32] so it can be accessed quickly when the page needs to be rerendered for other reasons?
[20:28:35] the client wiki
[20:28:38] ok
[20:28:41] the one using the data
[20:28:44] yes, there needs to be some local caching
[20:28:45] probably a good idea
[20:29:06] if it's going to ask upstream every time, memcache would be enough
[20:29:09] ok, now we are 98% back to the original proposal on the list :)
[20:29:22] if it's going to keep a full copy, in a db
[20:29:36] i was going to try to avoid asking upstream
[20:30:21] waiting for an http request to another system while trying to save a page may get annoying
[20:30:33] and we are talking about the majority of wikipedia edits
[20:30:45] so i prefer a full local cache
[20:31:11] with http based push and some periodic refreshing
[20:31:45] bah. stupid wifi
[20:32:06] a full local cache is good because third parties would also use it ;0
[20:32:08] err ;)
[20:33:27] ok. back in a bit. lunch.
[20:33:54] i'm about to log off for the day.
[20:34:10] let's continue on the list :)
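To make the client side of the design they converge on concrete: on a "property P of item Q changed" notification, a client wiki would refresh its local copy of the item once, look up which of its own pages use that item, and push cheap re-render jobs onto its normal job queue, so saving a page never has to wait on an HTTP request upstream. The sketch below is illustrative only; the names are hypothetical and do not correspond to actual MediaWiki or Wikibase classes.

    from collections import deque

    local_job_queue = deque()        # stand-in for the wiki's job queue
    local_item_cache = {}            # stand-in for the full local copy of item data
    pages_using_item = {             # stand-in for a usage/links index
        "Q42": ["Douglas Adams", "The Hitchhiker's Guide to the Galaxy"],
    }

    def handle_change(item_id, prop_id, fetch_item):
        """React to a 'property prop_id on item item_id changed' notification."""
        # Refresh the local copy once, up front, instead of on every page save.
        local_item_cache[item_id] = fetch_item(item_id)
        # Queue a re-render for every local page that uses the item.
        for title in pages_using_item.get(item_id, []):
            local_job_queue.append(("refreshLinks", title))

    handle_change("Q42", "P17", fetch_item=lambda q: {"P17": "new value"})
    print(list(local_job_queue))     # [('refreshLinks', 'Douglas Adams'), ...]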
[20:35:29] wow
[20:36:02] Who would be the right person to contact for https://bugzilla.wikimedia.org/show_bug.cgi?id=22521 ?
[20:47:46] Hi, is there a way I can recover my password on my original account? The situation is described at https://en.wikipedia.org/wiki/Wikipedia:CHUU#Hurricanefan24_.E2.86.92_Hurricanefan25
[21:22:52] hey RoanKattouw, do you think rsterbin's fix could be pushed to production by midnight UTC, i.e. in 2 hrs 30 from now?
[21:23:24] (so we can use midnight on Apr 24 as a cutoff)
[21:23:58] I'll push it now
[21:27:54] sweet
[21:28:14] Ryan_Lane: Found it: https://gerrit-documentation.googlecode.com/svn/Documentation/2.3/access-control.html
[21:28:24] DarTar: Deployed, should start taking effect over the next 10 mins
[21:28:42] excellent
[21:30:24] cool
[21:52:42] RoanKattouw: i'd like to switch MobileFrontend back to being deployed from 'master' instead of the 'wmf/1.20wmf1' branch we set up last week - is that straightforward to do?
[21:53:29] Yes, that is fairly straightforward to do
[21:53:43] In fact you can follow the docs for deploying from master and it'll just work
[21:53:52] despite the fact that you deployed from wmf/1.20wmf1 before
[21:53:57] oh fancy
[21:54:24] RoanKattouw that would be: http://wikitech.wikimedia.org/view/How_to_deploy_code#Case_1c:_extension_update ?
[21:54:41] Yes
[21:54:51] oh wait that looks like it's still using the wmf/1.20wmf1 branch?
[21:55:24] oh right the wmf/1.20wmf1 branch for core
[21:57:13] Yah
[22:06:37] <^demon> RoanKattouw: I just read what spearce said to you about Project Owners. That's depressing that this is hardcoded :\
[22:06:45] Yeah well
[22:06:49] Did you read my proposed solution?
[22:07:48] <^demon> Does it mean we're going to have to set things for each project and we can't just inherit?
[22:07:51] <^demon> Or did I misunderstand?
[22:08:20] <^demon> And on this note, I totally want them to fix http://code.google.com/p/gerrit/issues/detail?id=1197 now
[22:09:33] more gerrit problems, now with permissions?
[22:10:11] <^demon> Not problems. Opportunities to let our creative side shine :)
[22:10:18] what's the issue?
[22:11:14] <^demon> Project Owners have uber-powers over refs/meta/config, and that is hardcoded :(
[22:11:44] I saw you have a gitorious labs project
[22:12:00] <^demon> Yeah, that was before I found out it was ruby.
[22:12:01] Is that active? what were your goals?
[22:12:10] oops
[22:12:17] <^demon> No, it was never active. I never even got it set up and running.
[22:12:35] I wanted to make a gitolite setup
[22:13:07] <^demon> You can steal my gitorious project for that if you'd like. I can add you.
[22:13:46] I was going to use the gareth one
[22:13:49] Hmm
[22:14:07] ^demon: Does the mediawiki group even need to own the mediawiki project? It's already got the +2 and submit rights etc separately enumerated
[22:14:19] Separate from the Project Owners meta-group even
[22:14:34] <^demon> Probably not, no :)
[22:14:47] So AFAICT if I revoke ownership of the mediawiki project for the mediawiki group, nothing would break other than that they can't edit the ACLs anymore
[22:14:47] <^demon> I think I did that so 'mediawiki' gets permissions anyone grants to owners on projects.
[22:14:52] <^demon> But since owners SUCK ;-)
[22:14:55] (which is what I want to accomplish)
[22:15:06] OK well
[22:15:11] This is actually quite convenient
[22:15:21] Thank you, Echo Of A Past Chad
[22:15:39] <^demon> So what's the deal with extensions? They can manage their own acl still?
[22:16:01] Yes, so they would be able to mess with their wmf/* branches if they were determined enough
[22:16:09] But there's the submodules
[22:16:14] And those are in core, so there's a barrier there
[22:16:27] <^demon> Right, I'm not concerned about deployment.
[22:16:33] <^demon> I'm concerned about people fucking up their acls.
[22:16:38] Of course in general an extension owner messing with deployment branches is Really Not OK, but we'll have to enforce that socially
[22:16:41] Ryan_Lane: is now a good time to talk for 5 min about a plan for Suhas between now & May 21? I can come over to your cube
[22:16:52] sure
[22:16:57] I'm just writing a blog post
[22:16:57] wheeee
[22:16:59] yay!
[22:17:07] <^demon> Platonides: I added you to the project. I hadn't made any instances yet, so feel free to go wild.
[22:17:22] on my blog, not wikimedia's :D
[22:17:48] I don't want to fill wikimedia's blog up with the tech minutiae of individual changes in Labs
[22:19:33] Platonides: Create ALL of the instances!
[22:19:50] <^demon> One does not simply create instances.
[22:20:07] ^demon, ok
[22:20:33] I already had to deal with an instance which didn't allow anyone to log in :P
[22:21:29] good night
[22:22:20] ^demon: you know this git-review issue that's been discussed on wikitech-l recently?
[22:22:35] is it just my imagination, or is it a regression in the latest version of git-review?
[22:22:52] Isn't it supposed to be a feature?
[22:23:04] Oh the git fetch gerrit thing?
[22:23:08] I'd swear that's a regression
[22:23:08] (annoying one at that)
[22:23:14] It should be easily fixable
[22:23:23] Let me try fixing it, actually
[22:23:25] <^demon> That's what I've said but Antoine says no.
[22:23:33] I can't imagine how it could be intended
[22:23:47] <^demon> Neither can I. And it *definitely* started exploding after we all updated git-review.
[22:24:04] I don't even know how it can tell the difference between a commit from one origin or another, if they both have the same hash
[22:24:26] <^demon> That's exactly what I said, but Antoine claims they're different.
[22:24:38] <^demon> git shouldn't care what you call your remotes if they're all referring to the same history.
[22:25:09] In practice git doesn't work that way
[22:25:30] When you run git pull, it updates what remotes/origin/master points to, but not what remotes/gerrit/master points to
[22:25:37] Unless you explicitly git fetch gerrit
[22:25:52] This is stupid because origin and gerrit point to the same URL but whatever
[22:26:05] I think I can sort of see why this is broken
[22:26:38] but shouldn't git-review run git fetch gerrit itself?
[22:26:58] it's meant to be submitting changes against the current gerrit repository
[22:26:58] Yes
[22:27:07] Some code paths run git remote update gerrit, which is equivalent
[22:27:12] But it looks like there's a code path where that doesn't happen
[22:27:18] Or, rather, doesn't happen early enough
[22:28:28] <^demon> RoanKattouw: Back to permissions...I notice All-Projects grants Read to Project Owners. I wonder what happens if you revoke that.
[22:28:39] That's strange
[22:28:49] Don't we also grant Read to Anonymous Users though?
[22:29:15] <^demon> Not on refs/meta/config
[22:29:52] <^demon> You'd have to explicitly set a DENY on Reading refs/meta/config.
[22:30:04] <^demon> I'm just curious if it Breaks Shit.
[22:30:08] * RoanKattouw doesn't follow
[22:30:20] Whatever
[22:30:32] I'm sure this doesn't break essential things such as +2 and submit
[22:30:49] <^demon> I'll break gerrit-dev later and find out :)
[22:43:45] TimStarling: https://review.openstack.org/6741
[22:44:00] Also, their fancy new --list feature was broken in a very basic way, I fixed that in https://review.openstack.org/6740
[22:44:26] excellent, thanks
[22:44:35] oh hi TimStarling - when you get a moment, I'd like to know what you think is the next step for Score. Do you need GrafZahl to make those performance fixes?
[22:45:26] I can do it next friday if he doesn't want to do it
[22:46:00] TimStarling: ok, I'll ensure GrafZahl knows that it's on him to decide whether to do it now or let you do it :)
[22:46:03] not sure he knows that.