[00:04:49] <^demon|away> sumanah: Sorry was running a few minutes late. [00:04:54] hi ^demon|away [00:04:59] got a few minutes to do some code review? [00:05:10] <^demon|away> Yeah let me call in. [00:05:33] ^demon|away: call's over [00:05:39] <^demon|away> Ah ok [00:05:55] ^demon|away: (Tim had a hard stop at 8pm our time) [00:06:31] <^demon|away> Which ticket #'s do you want me to look at? [00:06:45] will pm [02:02:27] is tfinc still in the office? [02:02:33] guess not. [13:13:43] folks from enwp asked me how to edit moodbar feedback items, no clue why they asked me, but does someone know? [13:14:11] related to http://en.wikipedia.org/wiki/Special:FeedbackDashboard/5853 [13:18:32] jorm: around? [13:18:40] you should probably know that, since you are the author :) [13:35:35] It's a bit early for jorm. [13:40:46] petan|w: You can't edit feedback items once submitted [13:43:04] RoanKattouw, I think they have a valid point, though; edits, edit summaries, and usernames can be hidden if they contain copyvios, attacks or libel; Feedback items should provide this feature too. [13:45:06] Oh you can *hide* them, yes [13:45:36] Sysops should have a hide link [13:45:43] I believe that's deployed already [13:46:08] I don't think so from what I've seen in source code [13:46:59] I reviewed the code that implements the hiding feature [13:47:07] I am quite certain that the feedback dashboard has administrative hiding [13:47:16] I am not 100% sure it's deployed yet, but there's a deployment tonight [13:49:49] (link to here) [13:49:51] (hide feedback) [13:50:03] Is what I saw after I +sysopped myself and refreshed the feedback dashboard [13:50:10] okay [13:51:05] Weird, I don't see it with the staff bit. [13:51:15] Anyway.
[13:53:35] No, staff don't have that right apparently [13:53:48] Up until recently, sysadmins didn't have the block right [13:55:54] I guess I am just used to having every possible right everywhere since until recently I was a steward =) [13:58:19] Ah [13:58:33] Yeah that'll do it [16:44:52] It's a known thing that the moodbar right doesn't pass to +staff (neither, by the way, does +reviewer, i believe) [18:10:35] jorm: around? [18:10:54] yes. [18:11:04] I was thinking of splitting moodbar-admin into moodbar-hide and moodbar-unhide so that it would be more customizable [18:11:28] as I started on wikitech-l, there was a question if it was possible to give that permission on enwp to more than just sysops [18:11:49] i don't think splitting it into two groups is wise. [18:11:53] permission bloat and all. [18:11:57] True [18:12:00] But '-admin' is a bad name [18:12:04] yes. [18:12:14] i agree. i think "feedback-moderator" is probably best. [18:12:24] .. [18:12:29] How about just feedback-hide [18:12:34] and have it imply unhide too [18:12:34] right, I meant for instance some might want to give permission to hide it only to, let's say, rollbackers, while unhide should be left only to sysops [18:12:34] <^demon> +1 to Roan [18:12:41] <^demon> hide implies unhide. [18:12:41] Rights should be an action [18:12:48] <^demon> Just like watch implies unwatch. [18:13:46] i'm leery of granting rights that are one-way. [18:14:20] to be honest, I am too, but I know how the folks from enwp like stuff complicated :) [18:14:40] <^demon> feedback-hide would be fine, and it would imply unhide. [18:15:30] but, renaming it is also a good idea... [18:16:14] one of the goals is to DECOMPLICATE things. [18:16:21] ok [18:17:00] <^demon> I don't understand what's complicated here. [18:18:33] two user rights versus one.
[18:18:58] probably yes [18:19:26] <^demon> Roan and I both said just making it one would be best :) [18:19:35] <^demon> But a nice action-specific name and not moderator|admin [18:19:42] when it comes to that, is there some reason why certain rights on testwiki are grantable by crats but not revocable? [18:19:56] I thought it was some mistake in config, but maybe not, especially translateadmin etc. [18:20:00] that's a normal permissions model, actually. [18:20:06] you can grant rights you own, but not remove them. [18:20:21] no [18:20:38] <^demon> That's totally configurable, which groups a group can add/remove. [18:20:42] I mean I can grant translateadmin even if I am not that, but can't revoke it [18:20:55] on test.wikipedia [18:21:07] <^demon> Yes, this is similar to how it's used on enwiki too. [18:21:12] <^demon> (or at least was) [18:21:14] aha [18:21:32] <^demon> You can say "bcrats can grant +sysop and +foobar, but can only revoke foobar" [18:21:51] <^demon> Or "sysops can grant other people +sysop, but can't revoke it" [18:21:55] <^demon> Totally configurable. [18:21:56] that makes sense [18:22:09] <^demon> Or anonymous users can grant people +bcrat ;-) [18:22:13] I mean you can grant +sysop + foobar +crat but only revoke crat [18:22:14] <^demon> If you were crazy. [18:22:17] that doesn't make sense [18:22:32] uh wrong example [18:22:38] <^demon> I'm saying you configure it by group :) [18:22:40] <^demon> * is a group [18:23:04] I just didn't know why that translateadmin can be only granted, not revoked, that's all :) [18:23:21] <^demon> It'd be in InitialiseSettings. [18:23:41] the reason? [18:23:46] I didn't see it there [18:24:03] <^demon> Somebody configured it that way :) [18:24:10] <^demon> It's testwiki, so it's no big deal. [18:24:14] sure [18:37:17] IIRC there was some bug causing IE to crash in 1.18, correct? [18:37:32] At some point there was one yeah [18:37:40] do you remember the root cause?
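[Editor's note] The grant/revoke model ^demon describes above is configured per group in MediaWiki via the $wgAddGroups / $wgRemoveGroups settings, so granting and revoking are independent decisions. A minimal Python sketch of the idea; the group names and table contents are hypothetical, for illustration only:

```python
# Sketch of the permissions model from the discussion: which groups an actor
# may add or remove is configured independently, so "bcrats can grant +foobar
# but can only revoke foobar" is expressible. In MediaWiki this is done with
# $wgAddGroups / $wgRemoveGroups; these dicts are illustrative stand-ins.
ADD_GROUPS = {
    "bureaucrat": {"sysop", "translateadmin"},  # may grant both of these...
}
REMOVE_GROUPS = {
    "bureaucrat": {"translateadmin"},           # ...but may only revoke this one
}

def can_change(actor_groups, target_group, adding):
    """Return True if any of the actor's groups permits the change."""
    table = ADD_GROUPS if adding else REMOVE_GROUPS
    return any(target_group in table.get(g, set()) for g in actor_groups)
```

The asymmetry between the two tables is exactly the "grantable by crats but not revocable" behaviour asked about for translateadmin on testwiki.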
[18:38:26] we have a form crashing IE and I'm hoping that it is the same/similar problem [18:38:43] pgehres: 1s [18:38:51] I think I can find you one [18:39:29] pgehres: http://bugzilla.wikimedia.org/31673 # not crashing [18:40:25] hexmode: hmm, this is a 1.17 wiki and, literally, IE7 and IE8 crash and close [18:40:27] *hexmode pasted "pgehres: list of bugs" at http://paste2.org/p/1754694 [18:40:37] hrm... 1.17 [18:40:57] didn't hear 1.17 [18:41:14] yeah??? [18:41:22] we still run a few of those around here (wmf) [18:42:00] ^demon: will you be around for the next 80 min or so? [18:42:15] pgehres: https://bugzilla.wikimedia.org/show_bug.cgi?id=31424 1.18 IE crash [18:42:26] <^demon> I can be, although I don't know how much help I'll be. I know diddly other than mysql and sqlite. [18:42:54] hexmode: awesome thanks [18:43:07] will check the jquery version [18:43:44] pgehres: what is the URL for this wiki? [18:44:08] test-payments.tesla.usability.wikimedia.org [18:44:29] pgehres: excellent... was looking for a WMF wiki to test regressions on [18:44:49] it's pretty hacked up [18:44:54] it's the dev ground for payments [18:45:05] thanks ^demon I think that'd be helpful [18:45:24] wish Tim could be around, but that's basically nearly impossible to schedule [18:46:40] hexmode: it's jQuery 1.4.2, so not the dreaded 1.6.2 [18:46:45] hmm [18:48:42] ^demon: guillom is publishing this tomorrow: http://www.mediawiki.org/wiki/Wikimedia_engineering_report/2011/October ... should we put any sort of call for the unknown committers in there? [18:49:31] <^demon> Not a bad idea. I'll make a second etherpad and put just the unknowns in it. [18:53:08] <^demon> robla: I made http://etherpad.wikimedia.org/Unknown-committers [18:55:44] great! guillom, I see you're in that pad already. do you want me to craft some wording for the report, or are you already on it?
[18:55:59] Heh, Mystery Committers [18:56:12] robla, if you can just write a sentence about it, I'd appreciate it; otherwise I'll do it [18:56:23] *robla goes to scribble [18:56:34] ^demon: I somewhat know rfrisar [18:56:48] I would've been able to give you their full name if I had still had my pre-2008 e-mail archive [18:56:49] <^demon> Then fill it in :) [18:56:55] did so [18:57:21] <^demon> Ah, I've got him. [18:57:34] <^demon> wikitech-l from '09 [18:57:38] Right [18:57:40] I was gonna say [18:57:46] yay [18:57:47] wikitech-l archives should turn up most of these people [18:57:48] yeah, I was thinking of searching in the wikitech archive [18:57:57] I probably have looxix's email somewhere [18:58:10] or I can find it in the archives of wikifr-l [18:58:26] and I'm pretty sure kelson42 is Emmanuel Engelhart [19:00:06] Yes [19:00:08] Would have to be [19:00:18] OK, who all is here for the non-MySQL database bug triage? Etherpad: http://etherpad.wikimedia.org/database-bug-triage please speak up [19:00:27] *MaxSem [19:00:36] <^demon> Added a few more that I have names but no e-mail addresses. [19:01:08] sumanah: Hi, I am here [19:01:12] Ah, I can dig up shaihulud as well [19:01:19] Greg__, hexmode ^demon Nikerabbit, MaxSem [19:01:47] ^demon, do you need the name, the email, or both? [19:02:17] <^demon> Preferably both. In the worst case scenario of people we just absolutely can't find, I'll fall back on username + dummy e-mail address. [19:02:23] <^demon> But really we want proper attribution. [19:02:32] ok [19:02:39] I figure we will start off by validating & prioritizing some Postgres bugs, since G_SabinoMullane is here [19:02:43] this is better. was in the wrong room :( [19:02:51] sumanah: I'm here but have to leave in 15min to pick up my daughter [19:02:53] blobaugh: where's DJ? do you know? [19:02:55] ok, hexmode [19:02:57] blobaugh: welcome! [19:02:58] I just can't promise the email addresses will work [19:03:17] hi brian_swan [19:03:20] OK, I'm here. 
[19:03:31] Are you happy now, Sumanah? [19:03:33] brian_swan: please open http://etherpad.wikimedia.org/database-bug-triage to follow along [19:03:33] ARE YOU HAPPY NOW? [19:03:40] Sheesh. [19:03:46] sumanah: brian_swan is not here yet, i joined him. this is his first time IRCing [19:03:53] oh! welcome to IRC, brian_swan [19:04:08] I have the bug triage open, too. [19:04:13] I figure we will start off by validating & prioritizing some Postgres bugs, and then move on to SQL Server, Oracle, & SQLite [19:04:14] Although I haven't looked at the tickets at all. [19:04:24] Etherpad line 21: https://bugzilla.wikimedia.org/show_bug.cgi?id=30787 PostgreSQL 9.0 default 'mediawiki' schema causes failure, use 'public' instead. [19:04:31] I'm going to give everyone a minute to read it [19:04:58] hi djbauch -- we are starting off by validating & prioritizing some Postgres bugs, and then will move on to SQL Server, Oracle, & SQLite. first is https://bugzilla.wikimedia.org/show_bug.cgi?id=30787 PostgreSQL 9.0 default 'mediawiki' schema causes failure, use 'public' instead. [19:05:14] I don't think there's a way to set a default schema, other than search_path. [19:05:54] (yes, the Etherpad scrollbar doesn't work. Sorry. Try clicking the lefthand column of row numbers; then you can use pgup/pgdn or up/down arrow keys.) [19:06:41] Not sure what the issue is here. Nobody else has complained about this. [19:06:51] G_SabinoMullane and others: do you agree with alester? [19:07:01] The user they use should already have the search_path changed. [19:07:17] G_SabinoMullane: I'd love to see whether anyone can reproduce this error; if not, I'm happy to close it. [19:07:21] It would be nice to have some more work done on the multi-schema idea. but the basics are pretty sound. 
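[Editor's note] As noted above, Postgres resolves unqualified table names via search_path, so the practical fix for bug 30787 is to give the wiki's role a default search_path. A Python sketch (illustration only) of generating the statement an installer would issue; the quoting helper is a simplified stand-in, not a complete SQL-injection defense:

```python
# Sketch: build the ALTER ROLE statement that sets a role's default
# search_path, so unqualified table names resolve to the wiki's schema.
# ALTER ROLE ... SET search_path is standard PostgreSQL; the helper names
# here are hypothetical.
def quote_ident(name):
    """Double-quote a Postgres identifier, escaping embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

def search_path_sql(user, schema):
    return 'ALTER ROLE {} SET search_path = {}, public'.format(
        quote_ident(user), quote_ident(schema))
```

Running the generated statement once means the wiki's connection no longer depends on LocalSettings.php pointing at a role that happens to have the right search_path.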
[19:07:46] I think it's reasonable to say "get it reproduced by $SUNSET_DATE or we close it" [19:07:53] I'm going to guess that they are connecting with the wrong Postgres user in their LocalSettings.php [19:07:54] But I don't know the MW style of bug handling. [19:08:23] Oh, and hi Greg! [19:08:50] hi [19:08:53] alester: G_SabinoMullane: I take it that neither of you is volunteering to try to reproduce this. [19:09:08] sumanah: You take correctly. :-) [19:09:22] Oh, I can reproduce - just use the wrong user :) [19:09:27] hahaha [19:09:35] The installer should be doing an ALTER USER to set the default search_path [19:09:37] In which case I shall note in the bug "please try as Greg suggests -- can you still repro?" [19:09:51] and also note the "are you on the wrong user" [19:09:55] But without knowing the exact steps they did, it's impossible to know if it's a true bug or user error. [19:09:59] <^demon> "ALTER ROLE $safeuser SET search_path = $safeschema, public" [19:10:21] ok [19:10:24] <^demon> Line 469 of PostgresInstaller [19:10:26] I shall reply to the reporter [19:10:27] and move on [19:10:32] One down [19:10:35] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=20475 SpecialExport producing corrupt output (PostgreSQL errors) [19:10:37] N-1 to go [19:10:50] /j djbauch is your sql stuff ready for prime time except for the - thing? [19:10:56] alester: 9 to go, just fyi [19:11:21] ^demon, I've added a few names and addresses; it's probably all I can do. robla, I'll add a sentence in the report and a link to the etherpad [19:11:55] sumanah: are we working through the list of bugs before we talk about the idea section? [19:12:00] guillom: I added something. feel free to wordsmith [19:12:09] <^demon> guillom: Thank you though! I'm finding everyone knows like 1 or 2 names, which helps. [19:12:20] robla, ah, edit conflict! Thanks [19:12:29] yes, blobaugh [19:12:30] The MSSQL stuff is almost there.
(in response to Ben) I expect to be able to resolve the strangeness and then it would be ready to go. [19:12:49] I think OverlordQ summed it up well. Yet more fun with the loadbalancer [19:12:59] robla, looks great. Thanks. [19:13:10] One of the unfortunate aspects of pg_query() is that it doesn't require a first parameter of a connection if it's been called before. [19:13:28] It lets you call pg_query( $conn, $sql ) the first time, and then pg_query( $sql ) thereafter. [19:13:46] <^demon> What silly behavior. [19:13:47] and I'm guessing that somewhere before 1.17.0 which I'm looking at now, there were pg_query() calls that relied on that behavior incorrectly. [19:14:00] *^demon adds that to the list of reasons he dislikes pg. [19:14:04] ^demon: PHP is all about silly dangerous shortcuts. [19:14:22] <^demon> php's so stupid. [19:14:49] Praise the Lord and pass the ammunition. [19:14:54] *blobaugh glares at ^demon [19:15:29] mysql_query does something similar (but not exactly similar, of course, because after all, this is PHP) [19:15:31] Anyway, I'm looking in DatabasePostgres.php and I don't see anywhere it's calling pg_query() without a connection object. [19:16:00] It's pg_connect() that's the issue, no? [19:16:07] But I don't have the 1.16-svn source in front of me. [19:16:22] G_SabinoMullane: mysql_query takes it as an optional second parameter, it does not change the expected param types between calls [19:16:27] ok, so, what is the next step here? can one of you quickly try to repro with 1.17.0 right now? [19:16:43] Sounds like a job for someone who knows LoadBalancer well [19:16:43] Not me, but yes. [19:16:44] or can I ask you to do so sometime soon and report back to Bugzilla?
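[Editor's note] For readers unfamiliar with the pg_query() behaviour being complained about: PHP lets you omit the connection argument, in which case the last-used connection is silently reused. A tiny Python stand-in (not MediaWiki or PHP code) that mimics the hazard:

```python
# Illustration of why an optional "reuse the last connection" first argument
# is dangerous: once any call has supplied a connection, a later call that
# omits it can run a query against the wrong database without any error.
# This stand-in mimics pg_query($conn, $sql) / pg_query($sql).
_last_conn = None

def query(arg1, arg2=None):
    """Return (connection, sql) the way pg_query would resolve them."""
    global _last_conn
    if arg2 is None:
        conn, sql = _last_conn, arg1   # silently falls back to the last connection
    else:
        conn, sql = arg1, arg2
        _last_conn = conn
    return (conn, sql)
```

In code juggling multiple connections (exactly what LoadBalancer does), the fallback path is the one that "relied on that behavior incorrectly".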
[19:16:45] ok, I'm off :) [19:16:52] G_SabinoMullane: not necessarily [19:17:12] it should be easy to test, make some jobs and make sure they are run after a call to Special:Export [19:17:45] Really, anyone should be able to test this against 1.17.0 [19:17:52] not just me or GSM [19:18:05] *hexmode steps out for a few [19:18:26] anyone have Postgres installed? :) [19:18:53] Postgres? Never heard of it! [19:19:06] I do, and my company wiki is on it. [19:19:10] I should probably install it at some point and test how my extension works (or not) with it [19:19:24] Can I just go to Special:Export? [19:19:33] so, alester, since you already have postgres installed, can I ask you to test for repro and get back to us, maybe later this week? [19:19:47] alester: you should also have non-zero jobqueue [19:19:59] I don't know about "jobqueue" [19:20:00] I've exported pages just fine before (but never tried to specifically duplicate this bug) [19:20:06] Nikerabbit, I have: Notice: unserialize() [function.unserialize]: Error at offset 0 of 53 bytes in D:\Projects\MediaWiki\includes\objectcache\SqlBagOStuff.php on line 381 [19:20:19] MaxSem: context? [19:20:30] who has PG :P [19:20:30] Wow, is that a "D:" drive. Flashbacks! : [19:20:34] testing it now [19:20:46] MaxSem: but for that error? :) [19:21:14] alester: if you have some commonly used template, you could edit that just before [19:21:15] I just now ran an export on all the pages in [[Category:Development]] on my company wiki and it ran fine w/o errors. [19:22:01] I wouldn't be surprised if it is really fixed, there have been many changes since 2009 [19:22:04] alester: ok, I'm ok with closing the bug [19:22:08] I'd make this a "see if you can reproduce with 1.17.0" [19:22:16] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=22579 PostgreSQL Schema option ignored [19:22:17] or close it.
[19:22:19] alester: ok [19:22:36] 22579 is the same as the first ticket we looked at [19:23:20] Does not seem the same, but does claim to be fixed. [19:23:26] ok, so I will tell the user to please check that s/he used the correct Postgres user [19:23:38] Unfortunately nobody left a revision [19:23:43] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=32118 test special pages SQL queries against all supported databases [19:24:18] so this is a feature request and I hope that I could get some of you to pitch in on this [19:24:19] throw some test data at it, then call internally via the API? [19:24:31] MaxSem: for bug 32118? (just checking) [19:24:36] yup [19:25:29] without having the test know how to access the db itself you could do that. insert via api and retrieve via api, compare [19:25:41] djbauch: blobaugh: alester: brian_swan: G_SabinoMullane: perhaps you'd be willing to write a few unit tests? ^demon can help you learn your way around our unit testing framework [19:25:48] Not me. [19:25:54] alester: :P [19:26:01] I don't know how you guys do testing. [19:26:09] alester: we are happy to show you! [19:26:13] just write the test and turn it in ;) [19:26:17] "just" [19:26:20] http://www.mediawiki.org/wiki/Testing_portal [19:26:27] *alester puts fingers in ears [19:26:36] o/` la la la la I can't hear you o/` [19:26:54] alester: https://www.mediawiki.org/wiki/Manual:Unit_testing & https://www.mediawiki.org/wiki/Fixing_broken_tests & ^demon, the human manual :-) [19:27:08] alester: earlier in the North American day, hashar is around on IRC (he lives in France) [19:27:09] I have no tuits on my plate for that. [19:27:11] ok. [19:27:14] anyone else? [19:27:23] djbauch? [19:27:44] sumanah: i can work with djbauch to make sure the MSSQL driver is fully tested [19:27:53] blobaugh: cool, maybe design some tests? [19:28:06] vvv, do you ever work with non-MySQL databases? wanna help write some tests? 
[19:28:23] sumanah: I don't have any experience with them :( [19:28:24] Yeah, I can work with Ben on all the required MS SQL tests. [19:28:25] are there any docs about SQL gotchas that are not supported on some databases? [19:28:27] (btw, ^demon, is it fine for us to upload to Commons the recording of your "how to write unit tests" talk from New Orleans?) [19:28:59] ok, djbauch, blobaugh, I will put you down for that, then -- thanks! [19:29:19] <^demon> sumanah: If you must :\ [19:29:29] sumanah: is there no specification for tests for dbs right now? [19:29:40] ^demon: I don't HAVE to but I think some folks would find it helpful. So if you don't mind. [19:29:48] Nikerabbit: I don't think so. [19:29:56] blobaugh: I don't know. I can follow up with you separately to help develop one? [19:30:01] blobaugh, first, you have to get normal tests to work on your backend [19:30:04] <^demon> sumanah: It was just audio, right? [19:30:07] yes ^demon [19:30:12] <^demon> Then fine :) [19:30:28] ^demon, are you shy? =) [19:30:32] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=28172 wfGetDB called when it shouldn't be [19:30:37] WORKSFORME [19:30:45] djbauch: you wrote in the etherpad: "I see this happen when an install fails to complete for some reason (on SQL Server in my case) and then tries to resume. Restarting the install from scratch after fixing whatever caused the failure works around the problem that the load balancer has been turned off and never turned back on." [19:30:50] <^demon> MaxSem: Not in front of people, I just hate the sound of my own voice recorded :) [19:30:55] never was able to repro it outside of one page [19:30:58] Nikerabbit: We tend to break those into methods, e.g. implicitGroupBy() [19:31:26] lost that cryptic skill long ago [19:31:49] brrr [19:32:01] that was intended for a different bug [19:32:02] alester: can you repro this?
[19:32:15] I dug into 28172 quite a bit, but finally gave up after swimming in the sea of LoadBalancer objects and DB calls. [19:32:20] MaxSem: so your "never was able to repro it outside of one page" -- bug 28172? [19:32:22] G_SabinoMullane: but one thing is the actual sql files [19:32:24] <^demon> I've never managed to repro it after I fixed it for mysql/sqlite. [19:32:42] G_SabinoMullane: I got the impression that I need to duplicate the files for each type [19:32:44] Would be nice if all the LB stuff was documented somewhere [19:32:49] sumanah, no:P it was prepared for the next bug:P [19:32:49] No, I can't. [19:32:53] and I have no LB [19:32:54] aha :-) [19:33:11] *MaxSem needs to sleep [19:33:16] Yes, my comment on the etherpad was for the next bug. Special pages are working OK for me now on SQL Server. Even the GROUP BY stuff [19:33:52] MaxSem: no sleep for you [19:33:55] djbauch: I move your comment to bug 22010 then [19:34:02] *blobaugh hands back the unit tests [19:34:06] I think Chad H or Tim need to take a look at 28172 [19:34:21] OverlordQ and I are stumped :) [19:35:06] <^demon> G_SabinoMullane: I have looked at 28172. [19:35:12] <^demon> And have never been able to repro it. [19:35:13] thanks djbauch sorry for misunderstanding [19:35:21] "next" is sadly ambiguous in these meetings :/ [19:35:42] djbauch: "Special pages are working OK for me now on SQL Server. Even the GROUP BY stuff" is re which bug? 28172? [19:36:15] *hexmode is back [19:36:31] so, G_SabinoMullane, you & OverlordQ can reliably reproduce 28172, right? [19:36:57] ^demon: Odd. I can dupe quite easily [19:37:07] Well I've not tried lately, but yes [19:37:25] G_SabinoMullane: if you could try against HEAD sometime this week or next, that'd be great -- could I ask you to do that & report back on the bug? [19:37:37] <^demon> It shouldn't differ by dbms, since that code is shared.
[19:37:50] hmm, and Overlord's repro was like 3 days ago [19:37:51] <^demon> And I haven't been able to replicate on mysql or sqlite. [19:37:51] I think the key is here: #2 /var/www/thedarkcitadel.com/w/includes/User.php(2858): wfGetDB(-1) [19:38:00] <^demon> Yes I know that. [19:38:06] Meant for 32118 (Special Pages) [19:38:10] <^demon> After the db has been initialized the LB is re-enabled. [19:38:23] <^demon> And then User can do its thing. [19:39:52] ok, so, perhaps we can take this discussion back to the bug & I can ask Tim to poke his head in when he has a moment [19:39:55] on to the next item? [19:40:20] ^demon: Why is the LB being disabled there? [19:40:54] e.g. to prevent you from creating a user when you shouldn't :) [19:41:05] <^demon> The LB code automatically does things like connect to the database and such. [19:41:10] =when DB is not ready yet [19:41:15] any SQLite users around? [19:41:23] <^demon> Right, it's to keep you from breaking things accidentally. [19:41:25] Work emergency, biab [19:41:54] *MaxSem coughs quietly [19:41:58] well, let's move on to non-Postgres stuff [19:42:10] MaxSem: you use SQLite ever? https://bugzilla.wikimedia.org/show_bug.cgi?id=31696 update.php fails with SQLite [19:42:44] <^demon> He's our sqlite maintainer. [19:42:45] other than maintaining its support, you mean? :P [19:43:02] MaxSem: um duh, sorry. [19:43:20] I would love to see a repro for this bug [19:43:37] looks like the connection is getting closed somewhere [19:44:44] anyone other than MaxSem want to try to repro this? [19:45:17] I may try to reproduce since I probably face similar / same issues with MSSQL [19:45:38] ok, djbauch, can I ask you to do that offline, and we'll move on? [19:45:50] OK [19:45:51] back [19:45:57] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=28512 SQLite installation via CLI fails to expand ~paths [19:46:21] again with the SQLite. Whadday tryin' to do to us?
:-) [19:47:01] alester: well, this is a "databases that aren't MySQL" triage and I figure sometimes the MySQLisms will hit multiple DBs at once... I asked ahead of time what bugs people wanted to cover! [19:47:15] I know, I am just kiddin' [19:47:20] That's what I do. :-) [19:47:24] oh ok [19:47:26] *bashful* [19:47:27] <^demon> Rather than trying to jump through hoops to figure out where the path is, why not just error out when a ~ path is given and say "give a full path please" [19:47:29] I see for bug 28512 that hexmode reported it in April [19:47:32] <^demon> ^ bug 28512 [19:47:34] That's some hacky code there. Why are we not just disallowing ~ ? [19:47:37] *dopey* [19:47:41] *hexmode checks [19:47:52] <^demon> G_SabinoMullane: Exactly. It's easier just to say "don't do that" than anything else. [19:47:58] gah. [19:48:08] i svn upped and now my localhost is throwing sql errors. [19:48:17] Message? [19:48:38] from within function "Revision::fetchFromConds". Database returned error "1054: Unknown column 'rev_sha1' in 'field list' (127.0.0.1)". [19:48:42] 28512: low-priority, someone should fix this eventually, installer issue, move on? [19:48:42] query hidden. [19:48:47] <^demon> jorm: Run update.php [19:48:50] Oh [19:48:52] Run update.php [19:48:52] how do i do that? [19:48:54] <^demon> Like you always should after svn up :) [19:48:58] php maintenance/update.php [19:49:06] ^demon: Oh come on, no one does that :) [19:49:14] *^demon does [19:49:18] I only run update.php if my wiki breaks [19:49:26] ok, I'm moving on [19:49:37] https://bugzilla.wikimedia.org/show_bug.cgi?id=28281 Differentiate between MySQL and MySQL forks (ie MariaDB) [19:49:54] <^demon> RoanKattouw: I've got a script called 'svnup' that does lots of fun things for my working copies :) [19:50:09] Chad says: " Low priority enhancement -- someone will get to this eventually? Not that big a deal." agreed.
Is this in any way *easy* (for newbs) or something the MariaDB people might want to fix themselves? [19:50:09] I have update-repos but that just svn ups various paths [19:50:28] as PR [19:50:53] I invited MariaDB's Colin Charles here but sounds like he didn't make it. [19:50:59] <^demon> I dunno what MariaDB would need to do. [19:51:02] *Platonides has mwup which is basically 'cd && svn up | color' [19:51:04] sumanah: that looks like an issue on the MariaDB side? [19:51:11] <^demon> Presumably we'd like to have some way in php to differentiate the two. [19:51:22] the only way to fix that is to contribute the patch to them [19:51:25] blobaugh: I mean, MariaDB people would donate a bit of time to fix it in MediaWiki. [19:51:31] <^demon> Again... [19:51:37] <^demon> I'm not sure what MariaDB needs to do? [19:51:58] <^demon> We assume it's mysql because we're using mysql_* functions [19:52:07] ^demon: it looks like just change the link on the Special:Version page [19:52:09] getSoftwareLink() currently returns a static string [19:52:19] (DatabaseMysql class) [19:52:28] <^demon> Platonides: Right. Because we assumed DatabaseMysql would only ever refer to mysql. [19:52:43] the work is to somehow differentiate between the two [19:52:49] so to fix it a new class would need to be made? DatabaseMariaDb? [19:53:03] <^demon> Platonides: But other than a software link, do we need to differentiate at all? [19:53:16] <^demon> ie: Would the subclass be anything other than overriding getSoftwareLink()?
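[Editor's note] A sketch of the detection half of bug 28281: MariaDB's server version string contains "MariaDB" (e.g. "5.5.30-MariaDB" from SELECT VERSION()), so getSoftwareLink(), or a hypothetical DatabaseMariaDb subclass, could branch on that. Python here is illustration only:

```python
# Distinguish MySQL from a fork by inspecting the server version string,
# which is what a differentiating getSoftwareLink() would have to do.
# The version strings in the test are representative examples.
def software_name(version_string):
    """Return the product name implied by a SELECT VERSION() result."""
    if "mariadb" in version_string.lower():
        return "MariaDB"
    return "MySQL"
```

As the discussion concludes, the subclass would likely override nothing but the software link, since MariaDB is otherwise wire-compatible with the mysql_* functions.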
[19:53:17] Not in the code, yet [19:53:18] make mysql family servers add a command 'SELECT UPSTREAM_LINK();' :) [19:53:24] ha [19:53:28] I don't think so [19:53:30] tell you what [19:53:36] I will write to Colin and ask whether they care [19:53:38] and we can move on [19:53:40] ^demon: afaik you are correct [19:53:51] if we switched to it, perhaps we would begin optimizing some edge cases [19:53:53] next: https://bugzilla.wikimedia.org/show_bug.cgi?id=26273 Database layer should automagically add GROUP BY columns on backends that need them [19:54:04] I thought MariaDB was meant to be super transparent with regards to working MySQL code? [19:54:05] but it's similar enough to mysql that I don't think we would make any change in the class [19:54:15] <^demon> For that, we need to file upstream bugs so all the other DBMSs act like Mysql. [19:54:21] <^demon> ^ bug 26273. [19:55:49] There is a related bug to 26273 but I don't know where it is offhand. Tim and I made some hand-waving solutions for this. [19:55:49] man. it is taking FOREVER to rebuild the ar localization cache. [19:55:53] <^demon> sumanah: I was kidding :) [19:55:57] hexmode: love your comment on this one [19:56:07] I know ^demon :) [19:56:08] heh [19:56:09] "You know what would be really nice for this bug? Some examples!" [19:56:18] Basically, we need to have MW gather the list of columns from the tables and create the GROUP BY on the fly. [19:56:33] <^demon> jorm: The annoying thing is if you re-run update.php 30 seconds after you finish it'll rebuild all the L10n caches again :) [19:56:51] jorm: are you using multiple threads? [19:56:54] it's been doing this for 5 minutes. just the one language. [19:56:56] no. [19:57:08] but my hard drive is dying on this thing, so who knows. 
[19:57:26] maybe that would only crash your comp then :o [19:57:32] it shouldn't take more than 10 seconds per language [19:57:36] re https://bugzilla.wikimedia.org/show_bug.cgi?id=26273 -- is this a longterm project for some interested developer, then? [19:57:59] like, GSoC level? or intractable? or what? [19:58:27] G_SabinoMullane: why not have a var that lists, per table, the correct group by? [19:59:16] not sure if it is worth full GSoC, but looks like non-trivial amount of work indeed [19:59:40] This bug (26273) used to rear its ugly head a lot. Not so much any more. It was just one special page the last time I checked. Staying away from GROUP BY 1,2,3, etc. and sticking to the more stringent rules enforced by SQL SERVER (e.g., making sure to name the fields by name rather than by alias) has helped a lot. [19:59:42] blobaugh: Because schema changes (new columns) would break it [20:00:36] G_SabinoMullane: could this be a global that gets updated on schema changes manually? [20:00:54] djbauch: do you think it would be fairly easy to generate a list of the places in MediaWiki that name fields by alias? [20:01:18] <^demon> blobaugh: We can barely keep people updating all the proper places when they do a schema change anyway...a global they'd have to keep in check too would be impossible. [20:01:35] ^demon: fair enough [20:02:01] I'm ok with spending like 10 more minutes together talking about non-MySQL database support, and as long as people want in #mediawiki [20:02:13] sorry that this is running over the hour I mentioned [20:02:37] I kept the # of bugs to about 10 to try to keep the bug portion short; I guess hexmode can run a triage faster than I can [20:02:50] blobaugh: That seems like a lot of work compared to simply reading the db cols directly [20:02:55] I am still interested in determining how the unit tests are going to be run. Is WMF running each db server or should the db module dev be responsible?
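[Editor's note] The "read the db cols directly" approach favoured above for bug 26273 can be sketched against SQLite, which exposes a table's columns via PRAGMA table_info. The table and column names below are illustrative, and the identifier interpolation is not injection-safe; this is a sketch of the idea, not MediaWiki's implementation:

```python
# Sketch of auto-generating an explicit GROUP BY: fetch the table's column
# list at runtime and group on every selected non-aggregate column, instead
# of relying on MySQL's permissive implicit grouping.
import sqlite3

def group_by_all_columns(conn, table):
    """Return a comma-separated list of the table's columns (row[1] = name)."""
    cols = [row[1] for row in conn.execute(
        "PRAGMA table_info({})".format(table))]
    return ", ".join(cols)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (page_id INTEGER, page_title TEXT)")
conn.execute("INSERT INTO page VALUES (1, 'Main'), (1, 'Main')")
sql = ("SELECT page_id, page_title, COUNT(*) FROM page "
       "GROUP BY " + group_by_all_columns(conn, "page"))
```

Caching the column list, as suggested in the discussion, avoids the extra metadata query per statement, at the cost of invalidation on schema changes.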
[20:03:17] G_SabinoMullane: maybe, but it would make the system much faster than needing another db call [20:03:23] I think wmf _should_ provide test runners [20:03:33] Sumanah: Yes, there were only two places that caused a problem that I found. One special page and one obscure place. [20:03:42] those labs VMs that supposedly are available for everything [20:03:58] Ryan_Lane: ^^ [20:04:05] Platonides: ok, so how do we get things like Oracle going? Is there a good free version? [20:04:23] I can help get licenses for SQL Server, but that is only one db [20:04:42] Hey, before we get going on that, in the larger "how to improve non-MySQL db support" discussion, let's address djbauch's MSSQL encoding issue? http://mediawikiworker.blogspot.com/2011/10/struggling-with-inexplicable-issue.html [20:05:17] blobaugh, free edition of MSSQL is not enough for tests? [20:05:24] I think this problem must be of my own creation? I just don't see it yet. I thought maybe somebody may have seen something similar. [20:05:55] MaxSem: I am trying to determine that now, but if MS is willing to donate SQL Server why say no? [20:06:30] Is it a good idea to try to stick to NVARCHAR rather than VARCHAR columns for just about everything? [20:06:57] <^demon> Most everything in mysql is varchar. [20:07:02] blobaugh, there's a free version which could be used [20:07:32] <^demon> Same with Oracle. [20:07:40] alolita: You can't hear me? [20:07:41] blobaugh: No, it would only be called once, then cached. [20:07:54] G_SabinoMullane: oh, good [20:08:28] alolita: Amir and I can hear each other [20:08:29] sumanah, the question would be: how was that cached version generated? [20:08:36] At least, according to Tim. Now it is just a SMOP for someone.... [20:08:39] alolita: And we can hear you, but you can't hear us [20:08:53] Platonides: you mean re djbauch's problem?
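[Editor's note] Background on the NVARCHAR question above (and the encoding mismatch that surfaces later in the discussion): SQL Server NVARCHAR columns store UCS-2/UTF-16 code units, while MediaWiki hands the driver UTF-8 bytes. The two are not interchangeable, which is one way the kind of corruption described in the linked blog post can arise. A quick, purely illustrative Python check:

```python
# UTF-8 and UTF-16/UCS-2 encode the same text as different byte sequences,
# so storing one encoding in a column interpreted as the other garbles data.
s = "héllo"                      # one non-ASCII character

utf8 = s.encode("utf-8")         # 6 bytes: 'é' takes two bytes in UTF-8
utf16 = s.encode("utf-16-le")    # 10 bytes: 2 bytes per character (UCS-2 for BMP)

# Reading UTF-8 bytes as if they were UTF-16 produces mojibake:
garbled = utf8.decode("utf-16-le", errors="replace")
```

This is why the choice between NVARCHAR and a binary/VARCHAR column holding raw UTF-8 bytes has to be made deliberately rather than mixed.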
[20:08:56] maybe mssql corrupted the gzipped blob containing the cached page [20:09:06] this one http://mediawikiworker.blogspot.com/2011/10/struggling-with-inexplicable-issue.html [20:09:13] Yes, and I tended to leave most things that way, but I see that the other person who has done work on MSSQL made much heavier use of NVARCHAR than I did. With data stored in UTF-8, it seems like the column should be NVARCHAR? I think VARCHAR BINARY is like NVARCHAR no? [20:09:15] Platonides: yes, that is djbauch's issue [20:10:10] brian_swan: btw, are you here with us? [20:10:12] Platonides: I have turned off the gzipping of the object cache because I don't get good results when it's turned on. Maybe that's a symptom of the same problem? [20:10:16] alester: do you have any thoughts on the above? [20:10:25] no [20:10:35] I shudder in fear for how you guys do cross-DB stuff [20:10:36] Yes. [20:11:09] <^demon> alester: We just make it work for mysql && sqlite and hope for the best. [20:11:21] Yeah, that's kinda sad. :-) [20:11:23] I would think nvarchar should be the default [20:11:23] BTW, I expect to get this solved through sheer persistence [20:11:27] But I don't have a better solution. :-) [20:11:33] especially with an all-volunteer army. [20:11:51] brian_swan: http://thread.gmane.org/gmane.science.linguistics.wikipedia.technical/56187 might also be useful for you to read. [20:12:15] No, NVARCHAR is not the same as VARCHAR BINARY [20:12:18] Platonides: The response I got was that SQL Express will probably work for unit tests, but that I need to talk to another guy to make sure. brian_swan's coworker actually [20:13:39] alolita: It also allows uploading large files but that's tangential [20:13:46] It would probably be better to just get a full license though. Especially since that is what people will run it on. SQL Express is not designed for production [20:14:40] djbauch: VARCHAR BINARY is used to store raw bytes, e.g. other db's BLOB or BYTEA.
Very few tables need that in MediaWiki, generally just the caching stuff. [20:15:17] djbauch: " BTW, I expect to get this solved through sheer persistence" re the issue you blogged about? [20:15:42] yes [20:15:54] <^demon> G_SabinoMullane: We do use varbinary for our timestamps in mysql ;-) [20:17:28] ^demon: Yeah, I know. You just can't seem to get timestamps right :) [20:17:39] ok, we're now officially done with all the bugs we wanted to address, and so the "Ideas:" starting line 153 of the Etherpad http://etherpad.wikimedia.org/database-bug-triage are all up for discussion [20:17:51] <^demon> G_SabinoMullane: iirc, the reason is largely historical. [20:17:57] including "* Design that meta-schema idea that has been kicked around such that we have no more tables.sql anymore (at least not as the canonical source)" [20:18:16] <^demon> Max and I started doing that in new-installer. [20:18:21] <^demon> And it was actually working not half bad. [20:18:45] <^demon> But we postponed it so it wouldn't delay 1.17 further. [20:18:56] djbauch: you know NVARCHAR is ucs-2, right? [20:19:10] <^demon> If someone wanted to pick that up again, we stashed it in some branch. [20:19:32] djbauch: http://stackoverflow.com/questions/144283/what-is-the-difference-between-varchar-and-nvarchar [20:19:42] I guess that's what I need to address (UCS-2 vs UTF-8). [20:19:44] ^demon: Name? [20:20:03] <^demon> abstract-schema? [20:20:13] k [20:20:20] <^demon> http://svn.wikimedia.org/viewvc/mediawiki/branches/abstract-schema/ [20:20:52] <^demon> in includes/db/ you'll find Schema and some related classes. [20:21:33] On a related note, I wrote something that uses tables.sql from the updater to add new tables and columns, rather than creating all those annoying "patch.sql" files. [20:21:52] Someday I will clean it up and publish it; but I should look over the abstract-schema first.
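The UCS-2 vs UTF-8 mismatch djbauch identifies above is easy to demonstrate. A small Python sketch, where UTF-16-LE stands in for SQL Server's UCS-2 (the two match for characters in the Basic Multilingual Plane):

```python
# MySQL-style VARCHAR columns in MediaWiki hold UTF-8 bytes, while SQL
# Server's NVARCHAR stores UCS-2 (two bytes per BMP character). The same
# string therefore has different byte representations, so moving data
# between the two without converting produces mojibake, not an error.
text = "Köln"
utf8 = text.encode("utf-8")      # 5 bytes: 'ö' takes two, the rest one each
ucs2 = text.encode("utf-16-le")  # 8 bytes: every BMP character takes two

# Decoding with the wrong codec silently garbles rather than failing:
garbled = utf8.decode("latin-1")
```

This is why a cached blob written with one encoding assumption and read back with another can come out corrupted even though no individual step raises an error.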
[20:22:18] djbauch: that is what i was thinking the case was before, that encoding difference. i just was not sure where it existed [20:22:23] G_SabinoMullane: is it in a git or mercurial repo somewhere, where someone can use the messy version as inspiration? [20:22:30] or bazaar or whatever [20:22:33] brian_swan: see ya later [20:23:26] sumanah: No. I was gonna create a branch for it at one point, but couldn't get branching to work. [20:24:10] G_SabinoMullane: we'll be switching to git soon so that should be easier, if you like DVCSes -- if you need any help with SVN branch I know we'd be happy to give tips/help [20:24:10] ^demon, I don't like it [20:24:38] that's not... readable [20:24:41] <^demon> Platonides: I'm not married to it, we can scrap it entirely and go another route if we'd like. [20:24:46] sumanah: Is there a page that tells how to make them? ISTR there may have been permission problems. [20:24:56] sorry, I've gotta go [20:25:01] bye! [20:25:02] thanks MaxSem [20:25:10] Platonides: Agreed: the one big array is pretty ugly. Sorry, ^demon :) [20:25:13] <^demon> G_SabinoMullane: You should be able to make branches. [20:25:17] extdist 3577 0.0 0.0 105116 3936 ? S 20:00 0:00 /usr/bin/svn cleanup trunk/extensions [20:25:18] extdist 10079 0.0 0.0 104964 3748 ? S 20:19 0:00 /usr/bin/svn cleanup branches/REL1_16/extensions [20:25:23] I guess it's an hourly cronjob? [20:25:25] I was thinking of more of a flat-file like thing that we parse [20:25:26] if it read the SQL schema and created the structure on the fly, maybe it could do something [20:25:34] Oops wrong channel [20:25:53] I have my work cut out for me. Will go now. Thanks Sumanah [20:26:00] <^demon> G_SabinoMullane: yaml? [20:26:09] thanks djbauch I think we will wrap up soon [20:26:20] ^demon: Maybe. But perhaps even simpler. 
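The meta-schema idea under discussion -- one abstract table description that emits per-backend DDL, so tables.sql stops being the canonical source -- might look roughly like this. The dictionary layout and abstract type names here are invented for illustration; they are not the actual format of the abstract-schema branch:

```python
# Hypothetical abstract schema: one description, per-backend type maps.
SCHEMA = {
    "user": [
        ("user_id", "int_autoinc"),
        ("user_name", "string"),
    ],
}

TYPE_MAP = {
    "mysql":    {"int_autoinc": "INT AUTO_INCREMENT PRIMARY KEY",
                 "string": "VARBINARY(255)"},
    "mssql":    {"int_autoinc": "INT IDENTITY PRIMARY KEY",
                 "string": "NVARCHAR(255)"},
    "postgres": {"int_autoinc": "SERIAL PRIMARY KEY",
                 "string": "TEXT"},
}

def create_table_sql(table, backend):
    """Render one abstract table definition as backend-specific DDL."""
    cols = ",\n  ".join(
        "%s %s" % (name, TYPE_MAP[backend][abstract_type])
        for name, abstract_type in SCHEMA[table]
    )
    return "CREATE TABLE %s (\n  %s\n);" % (table, cols)

sql = create_table_sql("user", "mssql")
```

The updater could diff this structure against the live schema to add new tables and columns, which is essentially what G_SabinoMullane describes doing from tables.sql instead of hand-written patch.sql files.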
[20:26:39] we don't currently support windows in labs [20:26:48] I'll have to look closer at the differences between dbs [20:26:59] if it can run in ubuntu, yes, it's a good place to host the db [20:27:13] <^demon> So oracle and pg we can do in labs. [20:27:18] <^demon> Mssql not so much. [20:27:43] I suppose nobody ever tried to run MSSQL in wine [20:27:49] heh [20:27:57] I seriously doubt that would work [20:28:21] at some point we may support windows [20:28:22] it's probably too much glued to windows internals [20:28:33] I would believe that [20:28:45] we have to support VNC before we can support windows [20:28:52] btw, I found strange you didn't answer my labs rant [20:28:54] If we're done with bug triage I'm going to go. [20:29:00] ok, so, yeah [20:29:02] Platonides: I didn't read the entire backscroll [20:29:04] what was the rant? [20:29:19] we are done with bug triage, alester and now are talking in general of how to improve MediaWiki support for databases [20:29:22] Ryan_Lane, not here, in wikitech-l [20:29:29] let me find out the date... [20:29:39] there's a rant about labs there? I read wikitech-l [20:29:39] It sounds like you have the Pg stuff well in hand, yes? [20:29:43] If I can help on Pg, let me know. [20:29:48] maybe it was in a thread I ignored [20:30:10] alester: I wouldn't say we have it well in hand [20:30:19] ok, i'll stick around then. :-) [20:30:31] Ryan_Lane: SQL Azure is pretty much the same as SQL Server [20:30:38] alester: we're lacking in people who will help & in time from those people [20:30:55] what's SQL azure? [20:31:05] heh mssql under wine ;) [20:31:07] the sql that runs in microsoft's cloud architecture? [20:31:10] Ryan_Lane: Microsoft's "cloud" version of SQL Server [20:31:14] yep [20:31:14] meh [20:31:29] so you'll have internet latency (bad) but don't have to install and run sql server (good) ;) [20:31:45] trye [20:31:46] *true [20:31:48] brion: thats about right [20:32:00] from a testing POV would it be sufficient? 
[20:32:02] 25/10/2011 [20:32:07] however you guys need to set it up I can help get what you need [20:32:14] Platonides: what's the subject? [20:32:16] should do, but we should do a practical check on latency between eqiad and there [20:32:18] I can search then? [20:32:21] "MediaWiki unit testing and non-MySQL databases" [20:32:25] Ryan_Lane: as long as time doesn't matter, yes it would be fine for testing i think [20:32:32] *Ryan_Lane nods [20:32:32] I think I was answering you [20:32:51] I got a bit annoyed :P [20:33:11] Ryan_Lane: do you have space in your rack for a dedicated Windows machine if your labs will not support Windows? [20:33:21] alester: we need help fixing bugs, writing unit tests, doing the meta-schema, testing our tarballs [20:33:21] alester: https://bugzilla.wikimedia.org/show_bug.cgi?id=384 is a tracking bug for postgres issues in MediaWiki [20:33:31] we really don't *want* to support windows [20:33:36] [another possibility is running a dedicated phpunit test vm in MS's cloud as well, so the latency is smaller. but that probably is more work to maintain] [20:33:50] no one on the ops team will want to [20:34:21] I'd say let's just hit the database there directly [20:34:22] Ryan_Lane: right, but it isn't really much different than having installs of other dbms running that you do not use [20:34:25] cloud ftw :) [20:34:37] sumanah: You're going to force me to create a Mediawiki to-do list aren't you. [20:34:40] ok, if that is how you want to attack it let me see what i can do [20:34:40] *^demon hasn't touched mssql in ~4 years and doesn't want to change that :) [20:34:42] the latency will suck, but we can at least get tests [20:34:50] blobaugh: no. 
we have to keep the OS secure [20:34:53] and virus free [20:34:55] :-) alester [20:35:04] heh [20:35:06] someone has to configure the OS, and the database [20:35:11] it's not the same, at all [20:35:24] well certainly let's try it with the MW on linux in our VMs and the SQL in azure cloud and see how it goes [20:35:33] if it's too slow we can retool from there one way or another [20:35:35] yep [20:35:39] that sounds like a plan [20:35:48] and it's the easiest thing we can set up, so best entry point [20:35:57] Ryan_Lane: if it is behind your firewall with only internal access it would not get a virus, however the cloud approach first will work until a local server is required [20:36:08] I'd be more than happy for microsoft to manage the windows vms for us [20:36:18] if they offer that, I'll be glad to add windows support to labs [20:36:20] that might be a possibility [20:36:20] RoanKattouw: could you look at https://www.mediawiki.org/wiki/Special:Code/MediaWiki/101670 and see if you think it is worth the effort to merge for tarball? If not, no worries. [20:36:30] http://stackoverflow.com/questions/144283/what-is-the-difference-between-varchar-and-nvarchar [20:36:34] oops [20:37:03] hexmode: Yeah that one can go in. It's zero extra trouble and it fixes a strict warning [20:37:22] do we know someone at Microsoft? [20:37:30] Platonides: in the channel! [20:37:33] Ryan_Lane: brion i need to get some lunch before my next meeting. i will see what i can setup for you and ping you back [20:37:38] <^demon> hexmode: Already merged. [20:37:40] Platonides: MS employees are yucky [20:37:41] <^demon> Took 20 seconds. [20:37:42] *Ryan_Lane nods [20:37:44] awesome thanks ! [20:37:49] sounds great. thanks [20:37:50] np [20:37:50] ^demon: tyvm [20:38:06] ok, I need to head off soon, it's been more than half an hour since I thought we'd be done :-) blobaugh can I ask you to be the lead on this (testing non-MySQL databases in the Wikimedia Labs infrastructure)? 
[20:38:06] (and additional infrastructures as necessary) [20:38:07] ok this page indicates azure has DCs in san antonio, chicago, dublin, amsterdam, singapore, and hong kong -> http://cloudinteropelements.cloudapp.net/Choosing-the-Data-Center-Location-with-Windows-Azure.aspx [20:38:23] san antonio is probably best for us, but we should check chicago also (from eqiad) [20:38:23] sumanah: yes [20:38:45] ok all. im out. [20:38:49] sumanah: thanks for organizing this [20:39:01] sumanah: thanks for running it :) [20:41:55] another open issue that I'd like to get a bit of closure on [20:41:56] the meta-schema idea [20:41:56] I stopped copying and pasting IRC notes into etherpad when the conversation got a bit tangled [20:41:58] it sounds like we want this to happen [20:41:59] G_SabinoMullane: are you interested in working on it, or helping me to get Josh Berkus to respond to my email so we can get volunteers to do it? [20:42:00] alester: ^ [20:42:12] looking [20:42:23] I am also open to anyone else wanting to work on it or help me gather volunteers to work on it [20:42:30] Which "it"? [20:42:47] I don't know what "it" is from the backscroll. [20:43:42] the abstract schema, I guess [20:43:52] although I'd base that on existing SQL [20:44:25] That sounds like something too big for me to leap on. [20:45:25] alester: the meta-schema [20:45:26] alester: ^demon & Max worked on http://svn.wikimedia.org/viewvc/mediawiki/branches/abstract-schema/ [20:45:27] there's lag on my end, sorry [20:45:28] ok. No one is taking it on, then [20:45:30] I declare this meeting over and I will write up the notes/results and send them out [20:45:30] thanks all [20:45:32] thanks ^demon, blobaugh, djbauch, alester, G_SabinoMullane, Platonides, Nikerabbit, hexmode, brian_swan, brion, Ryan_Lane [20:45:52] I think an abstract schema is a fantastic idea and I wish you moral support! :-) [20:46:17] wow, you kept talking for 45 mins extra?
[20:46:33] "You are in a twisty little maze of reserved words, all alike." [20:46:41] Nikerabbit, wiki people don't know how to stop talking when we're excited. this is hardly news ;) [20:47:22] brion: wiki people also tend to get some actual work done collectively [20:47:27] haha [20:47:29] we try ;) [20:48:17] Nikerabbit: I miscalculated how long it would take to triage 10 bugs, so then the other discussion had to spill over past the 1hr mark [20:48:17] Nikerabbit: discussion & alignment is actual work [20:48:17] Nikerabbit: never think it isn't [20:48:18] hi TrevorParscal, 2 quick questions. 1) ok to fwd that note of yours re citations to the parser/visual editor list? [20:48:33] sumanah: sure [20:48:46] <^demon> brion: We also have RfCs on line endings :) [20:48:48] speaking of which, we've got gwicke over and are gonna do some quick chatting [20:48:49] abstract schema sounds very useful [20:48:51] TrevorParscal: and 2nd, more of a comment: I shall fwd to you 3 ideas for diagrams for the AOSA book. [20:48:54] brion, TrevorParscal: so- I was wondering about the serializers already in WikiDom and how and if to integrate those with the parser [20:49:10] sumanah: yes [20:49:22] changing the parser output to wikidom would be a good opportunity to switch serializers [20:49:27] Platonides: responded to your rant :) [20:49:28] sumanah: hola \o [20:49:29] so in theory it may make sense to use wikidom's serializers and drop the separate one from the parser end [20:49:34] Platonides: do you not have a labs account yet? [20:49:34] exactly :D [20:49:36] if not, let's fix that [20:49:55] gwicke: so, I have a few serializers in JS that need to be updated a bit but are generally useful [20:50:00] Ryan_Lane, doesn't seem to [20:50:01] things become more clear when you can actually see what's going on [20:50:04] hi troubled!
we just finished the database triage and I need to write up notes from http://etherpad.wikimedia.org/database-bug-triage [20:50:06] I'm not in Special:Listusers [20:50:09] *Ryan_Lane nods [20:50:16] ok, then the question becomes how to do this best [20:50:22] I have to link your SVN and Labs accounts [20:50:26] I was in the old labs testing, though [20:50:28] well, we need our code to be in the same place I think [20:50:35] yeah. that was testing only [20:50:36] sumanah: bah, i missed! sorry, meant to catch this one but was busy helping track down a bug with someone :( [20:50:46] OK, i'm goin'. [20:50:51] and that means we should probably look a few minutes ahead in time and see that we need to create an extension [20:51:17] sumanah: Please feel free to drag me into stuff like this in the future. I am glad to help however I can, and who knows, maybe tuits will free up. [20:51:17] no problem troubled. hashar you may want to know that blobaugh and djbauch are interested in helping write unit tests to improve how well MediaWiki works on non-MySQL databases [20:51:21] we can continue to do our stand-alone work there, and we are eventually going to have a special page anyways [20:51:21] sure alester [20:51:24] at some point all svn users will be links with labs automaticlaly [20:51:28] *linked [20:51:29] Plus this was kinda fun. [20:51:31] *automatically [20:51:40] *nod* [20:51:47] TrevorParscal: nod too [20:51:55] alester: https://www.mediawiki.org/wiki/Bug_management/Triage to know more about future such meetings.
[20:51:56] TrevorParscal, how about we make extensions/VisualEditor [20:52:08] sumanah: I really need to have the testing mailing list setup :D [20:52:15] you can move the wikidom stuff into there, with its standalone test pages initially [20:52:20] and then we can do a couple steps [20:52:28] hashar: I think if you just send to the dev list and label [test] in the subject line that is good enough [20:52:29] 1) a special: page with a standalone editor widget [20:52:45] sumanah: good idea! [20:52:46] 2) an editor mode that loads the editor onto a page (but just initially blank) [20:52:57] should be very easy [20:52:59] and from there out we can start hooking up parsing bits and whatnot [20:53:07] ok [20:53:13] moving some of those out of ParserPlayground, which we won't need to keep long-run [20:53:22] for me, the most important thing would be to have the other modules in fixed locations for command-line testing [20:53:30] at least initially [20:53:40] For the sake of all that is holy, please use svn copy / svn rename [20:53:50] Don't lose history [20:54:03] or [testers] [20:54:05] ParserPlayground will also break when the serializers are switched [20:54:10] yes good point RoanKattouw ! 
[20:54:53] gwicke, we can let it die except the parser itself and the tests wrapping it; probably move those in with VisualEditor [20:55:10] *TrevorParscal nominates RoanKattouw to do the copying [20:55:17] *nod* [20:55:39] RoanKattouw: we want to create a new extension called VisualEditor [20:55:45] in there we will have a modules folder [20:56:00] in the modules folder we will have what's currently in lib/hype/* [20:56:24] we will also have a VisualEditor/tests/ folder [20:56:43] and in there we can put what's currently in wikidom/tests/hype/* [20:57:13] and we will finally have a VisualEditor/demo folder, where wikidom/demos/hype* stuff can go [20:57:23] do the copy, then I will check it out and fix all the linkage [20:57:26] please :) [20:57:27] \o/ [20:57:45] i think you can actually do that from one command-line [20:57:48] in other words, we only need to copy */hype/* stuff [20:57:53] svn cp svn+ssh://...... svn+ssh://..... [20:58:02] gotta make the directory first tho [20:58:50] once the hype stuff is there, I can move the parser in there tomorrow [20:58:51] I think you can do svn cp instead of a normal cp, if you have that checked out [20:59:06] ok i've just created a stub extensions/VisualEditor directory [20:59:07] *TrevorParscal doesn't want the liability of performing these SVN commands himself [20:59:14] hehe ok i'll do it ;) [20:59:31] should the parser end up in the modules directory as well? [20:59:32] <^demon> TrevorParscal: wuss ;-) [20:59:48] gwicke, yes but we'll want to rearrange files i think [20:59:52] they're a bit haphazard [21:00:03] yes, more consistent naming prefix [21:00:24] ^demon: at least I own it [21:01:13] TrevorParscal, ok how's this look https://www.mediawiki.org/wiki/Special:Code/MediaWiki/101684 [21:01:52] oh.
I forgot to mention, I made a new channel for labs #wikimedia-labs [21:02:19] there's a bot in there that'll tell you when ssh keys are being updated, and when home directories and projects are being created [21:02:20] and my channel list gets longer :) [21:02:37] brion: spiff so far [21:03:00] great! [21:03:03] brion: Ryan_Lane: Krinkle's communication plan gets longer, too [21:03:21] brion: do you want to tweak the names, or should I have a go at it tomorrow? [21:03:28] lemme see if i can do a clean initial copy of the ParserPlayground actual parser bits (without the ui) [21:03:47] sumanah: eh? [21:03:56] have some uncommitted changes right now, but can move those over as well [21:04:24] Ryan_Lane: http://www.mediawiki.org/wiki/User:Krinkle/Communication [21:04:38] he sent a note to wikitech-l about it on the 16th of Oct [21:04:48] ah [21:04:55] heh [21:05:21] troubled: hey, if you want to get more into MediaWiki development, check out the Wikimedia Labs project -- a dev env that we host for you [21:05:41] troubled: https://labsconsole.wikimedia.org/ [21:05:51] sumanah: cool thx [21:06:25] sorry if you spoke earlier, flakey dsl and ip change kill my irc until I notice prolonged silence :) [21:06:29] troubled: it's in closed beta right now, but just ask Ryan_Lane and he'll make an account for you [21:06:30] Ryan just linked my account, but I still see it quire useless... [21:06:37] *quite [21:06:37] no prob troubled [21:06:44] it is for mediawiki devs right now [21:06:47] ok lessee... i think i can leave out renderer, serializer... the nodetree is separate (could copy later if desired) [21:06:59] if you want to add architecture to production, though, it's quite useful [21:07:06] and i think can live without the hashmap too, that's mainly used for the tree view behavior [21:07:37] sumanah: well im definitely interested in how the "pros" setup and dev for it. the setup I was working on was an extremely complicated 2-repo environment with live and dev on the same box.
lots of permutations of things to go wrong (wrong file edits/commits etc) [21:07:44] If i need to know the puppet incantation for something before you can test it... :S [21:07:53] Ryan_Lane: troubled runs MediaWiki and is working on sphinx search support [21:08:08] Platonides: you can write and test the puppet manifests yourself [21:08:09] in labs [21:08:20] gwicke, ok https://www.mediawiki.org/wiki/Special:Code/MediaWiki/101685 should be the main core bits [21:08:37] I can give you a project, you can create instances, puppetize something, add it to the test repo, merge it, then run puppet on the instance you created [21:08:44] oh wait the peg definition file too :) [21:08:56] Ryan_Lane, I don't have any idea how to write a puppet manifest [21:09:09] it's not too hard. gimme a sec [21:09:13] I can blindly make some change to the php code [21:09:14] brion: I think the grammar is missing [21:09:19] Platonides: http://docs.puppetlabs.com/guides/language_guide.html [21:09:27] gwicke, yeah I just added it in the next commit :D [21:09:34] but asking me to write a puppet manifest to make something.. [21:09:40] https://labsconsole.wikimedia.org/wiki/Main_Page#Checking_out_the_puppet_repositories [21:09:48] brion: np ;) [21:09:53] you don't have to. it's just something you can do in labs right now :) [21:09:55] <^demon> Platonides: puppet manifests are easy :) [21:10:06] Ryan_Lane, you mean the crazy git section? ;) [21:10:17] Yes, puppet manifests are easy [21:10:19] what's so crazy about it? :) [21:10:27] it's not the best documentation in the world, but it works [21:10:28] Well [21:10:36] I have no idea what I'm doing when I write puppet stuff [21:10:39] and I keep asking people to make it better, but no one edits it ;) [21:10:43] But I just copy from other places [21:11:01] I noticed the other week/month that you guys release the puppet stuff. very neat.
I was reading the article and noticed similarities in what I was doing for sphinx (per-language config stanzas got long), where we used templates to gen them, although I used m4; forgot what you guys used, but nice to see the concept is common :) [21:11:02] brion: ok, I'll try to wire up the new serializer tomorrow [21:11:02] <^demon> RoanKattouw +1 [21:11:10] <^demon> You copy+paste from other areas of puppet. [21:11:16] gwicke, awesome [21:11:18] <^demon> And if it's wrong, mark or Ryan will tell you [21:11:22] <^demon> Yay CR \o/ [21:11:32] yep [21:11:34] i'll be online around 8am my time for a checkin [21:11:35] troubled: and then there are the juju project folks who are even one level more abstract than puppet in terms of what's automated [21:11:39] *brion tries to remember how math works [21:11:53] brion: 1+1=2 [21:11:54] brion, TrevorParscal: thanks! [21:12:04] sumanah: I can imagine :) I was gonna play with cfengine myself, but you guys got me torn since the puppet release [21:12:12] \o/ [21:12:22] troubled: anyway, go ahead & ask Ryan_Lane for an account if you want one to play with [21:12:33] bye! [21:12:36] Ryan_Lane: heya :) [21:13:22] troubled: do you have an svn account right now? [21:13:34] Ryan_Lane: no i do not. im rather new around these parts [21:13:38] ah [21:13:53] we haven't worked out the access policy so well right now :) [21:14:01] troubled: what were you going to work on in labs? [21:14:23] Ryan_Lane: sphinxsearch is the extension that im tinkering with atm [21:14:26] ah. cool [21:14:45] so. it isn't quite ready for mediawiki development, but it can be used to set up sphinx search architecture [21:15:07] Ryan_Lane, suppose I wanted to run parsertests there [21:15:08] how would a puppet manifest do that ?
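The template-generated per-language config stanzas troubled describes (he used m4; puppet templates serve the same purpose) come down to expanding one stanza once per language. A minimal Python equivalent, with invented sphinx-style stanza contents and paths:

```python
from string import Template

# One stanza template, expanded per language -- the same idea as
# generating long per-language sphinx config sections with m4 instead
# of maintaining each one by hand. Index names and paths are made up.
STANZA = Template(
    "index wiki_${lang} {\n"
    "    source = wiki_${lang}_src\n"
    "    path   = /var/lib/sphinx/wiki_${lang}\n"
    "}\n"
)

def render_config(languages):
    """Concatenate one rendered stanza per language code."""
    return "\n".join(STANZA.substitute(lang=lang) for lang in languages)

config = render_config(["en", "de", "fr"])
```

Adding a language then means appending one code to a list rather than copy-editing a multi-line block, which is why the approach keeps recurring across m4, puppet, and plain scripting.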
[21:15:39] well, puppet would install scripts, daemons, crons, etc [21:15:43] brion, I think I can move the other stuff over [21:15:51] excellent [21:15:55] troubled: I'm going to contact you in pm [21:16:00] ok [21:16:13] so I would need to create a cron programmed for now+5' then install it through puppet? How ugly [21:16:41] you only need to puppetize things you want to move to production [21:16:51] or things you want to live between virtual machines [21:17:09] for instance, if you wanted a larger virtual machine, and you wanted all your changes to still be there [21:18:24] though you could have both instances up, and manually copy everything across too [21:18:38] You don't do /everything/ in puppet [21:18:57] Puppet contains manifests that say stuff like "every Apache server should have a cron job that does X every day at 2:00" [21:19:02] well, what I'm being told is 'with this you can do puppet' [21:19:25] e.g. https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=commitdiff;h=7cfd59b0b13e052fd4dee6f57800ba6ddebd3856 [21:20:01] Platonides: OK so basically what puppet does is you give it manifests/recipes for "here's how to set up an Apache server, here's how to set up a Squid server, etc." [21:20:31] And changes in the manifests are applied too [21:20:35] with this, right now, you can do puppet [21:20:43] So that way you can use our puppet stuff to build a clone of our cluster [21:20:46] with labs, in the future, you'll be able to do mediawiki development [21:20:51] Then change something in your clone, push to git [21:21:02] Then we can review the change, then merge it into our production branch [21:21:13] And then puppet will apply it to our production machines automatically [21:21:15] I understand that, Roan [21:21:17] OK [21:21:21] What don't you understand then?
[21:21:40] I proposed a simple task above [21:21:40] 22:18:52 Ryan_Lane, suppose I wanted to run parsertests there [21:22:15] I mentioned it isn't ready for mediawiki development ;) [21:22:29] in the future tests will be run automatically for you [21:22:31] so, the wmflabs still don't do anything? [21:22:46] I'm using it right now [21:22:50] for devops stuff [21:22:58] one of its intended uses [21:23:02] Oh? [21:23:08] Is it fully functional for that purpose? [21:23:09] If so, I want it [21:23:10] yeah [21:23:18] So I can actually test my puppet changes, you know [21:23:31] new services work [21:23:39] current services are more difficult [21:23:42] but most still work [21:23:52] Probably because most are not puppetized completely? [21:24:06] and because the puppet manifests need slight changes for labs [21:24:07] If you give me an account I'll scratch my own itch and see if I can puppetize dsh on fenari [21:24:14] Oh, hrm [21:24:31] that should be easy enough [21:24:38] Yeah [21:24:56] Assert presence of the package in the right circumstances [21:24:57] let me get past the influx of communications I just got and I'll work with you on that ;) [21:25:01] And manage the config file [21:25:03] Sure [21:25:05] Well [21:25:07] Tomorrow then [21:25:08] I'm off to bed [21:25:14] ok. works for me [21:25:40] Roan if you document your steps, maybe I can learn something [21:26:00] Platonides: git revisions good enough? [21:26:09] You'll be able to view the diffs of what I did [21:26:14] and why do you need an account?
:s [21:26:19] Oh, to test it [21:26:20] Right [21:26:27] I'd have to document the steps for that [21:26:36] But I expect it's just something like running puppet as root [21:26:50] I'll need to make you a project for this [21:26:55] so that you can have root [21:26:59] OK, do whatever you have to do [21:27:03] and can make your instances [21:27:06] Drop me an e-mail when you're done [21:27:10] will do [21:27:19] Also, the ircecho-on-hardy issue [21:27:24] What's up with that? [21:27:47] ircecho? [21:27:55] Yeah [21:28:01] what's that? [21:28:02] It's what the nagios and logmsgbot bots run on [21:28:22] It essentially tail -f 's a file and spits it out over IRC [21:28:22] ah, a dummy bot which just forwards messages to irc? [21:28:25] Yes [21:28:30] what's the problem with that? [21:28:38] Ryan_Lane: Is the ircecho-on-hardy issue fixable or should I bring the old version of ircecho back? [21:28:50] Platonides: The ircecho package is broken in hardy, cause it's written for lucid [21:29:04] Our current install of ircecho on fenari pulls libraries out of /home/kate :D [21:29:14] RoanKattouw: I'm not sure if it is or not [21:29:23] well, that doesn't look like "written for lucid" [21:29:24] So I figured I'd set it up nicely, and found out that ircecho had already nicely been packaged by Ryan [21:29:30] the old version of python-pyinotify is missing features [21:29:42] Platonides: No but the version that Ryan packaged is written for a newer version of Python or some library thereof [21:29:57] why would a irc bot need pyinotify? [21:30:03] Ryan_Lane: Why did we need to have that file support thing anyway? What's wrong with tail -f file | ircecho ? [21:30:10] is it reading from the filesystem?? 
[21:30:13] Yes [21:30:17] Nagios writes to a file [21:30:25] ircecho used to only be able to read from a pipe [21:30:33] RoanKattouw: because ircecho can now read multiple files, and write to multiple channels [21:30:33] So Ryan extended it to also be able to read from a file [21:30:34] tee -f ? [21:30:38] Ah, right [21:30:42] er... tail -f [21:31:00] and pyinotify is efficient :) [21:31:09] i only read when a file is updated [21:31:12] a blocked pipe is, too [21:31:15] Ryan_Lane: Anyway, let's get the ircecho issue fixed quickly. logmsgbot has been broken for like 48h now [21:31:18] I don't spin and wait, eating cpu [21:31:21] where is that program? [21:31:22] *Ryan_Lane nods [21:31:30] Platonides: /trunk/debs/ircecho IIRC [21:32:08] oh, just one file [21:32:18] I don't like to write half-assed applications, so I do it properly ;) [21:32:30] I like it :) [21:33:18] it'll re-open the file if it is re-created, for instance [21:33:30] in case the log file is in a directory that gets log-rotated [21:33:53] that watch_transient_file [21:34:04] yeah [21:34:10] that function seems more advanced than the normal inotify(2) :) [21:34:21] it's a standard function, I think [21:34:43] maybe it only exists in the pyinotify implementation, though [21:34:55] and some magic behind the scenes [21:35:04] I think it will be done by watching the directory [21:35:06] the alternative is to watch the directory [21:35:15] heh [21:35:50] Ryan_Lane, did y'all have a chance to peek at status of http://rt.wikimedia.org/Ticket/Display.html?id=1614 ? i noticed some asking about it earlier [21:35:55] so who works on the lucene stuff around here anyways? [21:35:58] lemme see [21:36:10] troubled, rainmann [21:36:13] troubled: a volunteer, rainmain-sr [21:36:23] aye yes, that rings a bell, thx [21:36:27] ganglia, nagios, and ... stats apparently are all throwing a nagios auth prompt on ssl [21:36:43] ah.
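The rotation-safe behavior Ryan describes for ircecho (re-open the log file if it is re-created under the same path, as happens with log rotation) can be approximated without pyinotify by watching the inode. The real ircecho uses pyinotify's watch_transient_file; this polling class is only a simplified sketch of the same idea, with invented names:

```python
import os
import tempfile

class TransientTail:
    """Follow a file by path, re-opening it when its inode changes.

    Polling sketch of what ircecho gets from pyinotify: if the log is
    rotated (old file renamed away, new one created at the same path),
    we notice the inode change and read the new file from its start.
    """
    def __init__(self, path):
        self.path = path
        self.fh = open(path)
        self.inode = os.fstat(self.fh.fileno()).st_ino

    def poll(self):
        try:
            if os.stat(self.path).st_ino != self.inode:
                # Path now points at a different file: re-open it.
                self.fh.close()
                self.fh = open(self.path)
                self.inode = os.fstat(self.fh.fileno()).st_ino
        except FileNotFoundError:
            pass  # mid-rotation; keep the old handle for now
        return self.fh.read()  # whatever appeared since the last poll

# Demonstrate survival across a simulated rotation.
tmpdir = tempfile.mkdtemp()
log_path = os.path.join(tmpdir, "test.log")
with open(log_path, "w") as f:
    f.write("first line\n")
tail = TransientTail(log_path)
first = tail.poll()
os.replace(log_path, log_path + ".1")  # rotate the log away
with open(log_path, "w") as f:
    f.write("after rotation\n")
second = tail.poll()
```

Unlike this polling loop, the inotify-based version sleeps until the kernel reports a change, which is the "don't spin and wait, eating cpu" point made above.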
right [21:37:16] apache config on half of our systems is fucked [21:37:20] hehe [21:37:21] been looking to compare approaches to indexing with the sphinx stuff I was messing with. the sphinx extension seems like it needed some work in a few places from our tests. although I must say our setup is a little unusual compared to a typical wiki [21:37:31] I'm not sure who has been setting it up, but it's killing me [21:38:10] *Ryan then finds that the logs point to Ryan_Lane* [21:38:16] heh [21:38:21] xD [21:38:32] heh. that's possible, but unlikely [21:43:48] incidentally i noticed there's an http: image being loaded on https://rt.wikimedia.org/ which throws up a big ugly dialog box for me on every page: https://rt.wikimedia.org/Ticket/Display.html?id=1883 [21:43:52] might be nice if we can kill that :D [21:45:32] i think it's a logo image [21:45:43] it only seems to show up in the '3.5' and '3.4 compat' themes, but it's all stretched [21:45:49] in the default web2 theme it's not even visible [21:46:05] themes? :) [21:47:23] themes are selectable in your preferences in rt [21:47:37] ah. I wonder how much of a pain it's going to be to fix that [21:47:54] hopefully just finding where that url is and changing it to https or blanking it ;) [21:49:09] heh [21:49:18] or to a protocol-relative [21:49:35] ahhhh ok [21:49:38] it's one we loaded [21:50:04] I don't have admin privileges in RT [21:50:09] we'll probably need mark to fix that [21:50:27] it's apparently really easy to fuck up RT, so we limit admin changes to one person [21:53:01] Ryan_Lane: brion back from lunch. i am working on getting you SQL Azure right now. i need to write up a proposal. it should be a pretty quick response though [21:53:16] that's great. thanks [21:55:31] Ryan_Lane: brion back from lunch. i am working on getting you SQL Azure right now. i need to write up a proposal. it should be a pretty quick response though [21:55:39] what, grr [21:55:42] sorry about that [21:56:15] awesome, thanks blobaugh ! 
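The mixed-content fix discussed above is to point the hard-coded http: logo at https, or make it protocol-relative so the browser reuses the scheme of the page itself. A minimal sketch of the rewrite (the URL shown is illustrative, not the actual RT logo path):

```python
def protocol_relative(url):
    """Strip the scheme from an absolute URL, producing a
    protocol-relative one ("//host/path") that inherits the
    page's own scheme, avoiding mixed-content warnings."""
    for scheme in ("http:", "https:"):
        if url.startswith(scheme):
            return url[len(scheme):]
    return url
```

For example, protocol_relative("http://rt.wikimedia.org/logo.png") yields "//rt.wikimedia.org/logo.png", which loads over https on an https page.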
[21:56:18] Ryan_Lane: Fix that openstack bug? [21:56:25] Ryan_Lane, hehe fair enough :D [21:56:32] johnduhart: yep [21:56:35] wooo [21:56:41] it was something I screwed up when I made network changes [21:56:55] thankfully this release of cactus so far is stable :) [21:56:57] err [21:57:00] of openstack nova [21:57:06] (it's the cactus release. heh) [21:57:18] hmm, that etherpad page with the log, doesn't seem to like me very much. It keeps asking to reconnect :| [21:57:29] are you using https? [21:57:37] if so, don't [21:57:38] ah yes I am (extension) [21:57:43] good to know, thx [21:57:51] etherpad seems to have issues when using https [21:57:54] uh [21:57:58] I couldn't figure out how to make it work with https [21:58:08] I hate that application [21:58:14] It says I don't have Nova credentials on labsconsole is that normal [21:58:20] that's a mediawiki bug [21:58:21] Log out and back in [21:58:23] log out, and log back in [21:58:32] I need to track that down one of these days :( [21:58:37] ok [21:58:38] lol [21:58:45] I tracked down three session bugs already [21:59:06] mediawiki handles sessions poorly [21:59:14] http://etherpad.wikimedia.org/ep/pad/export/database-bug-triage/latest?format=pdf still doesn't seem to work though, gives unknown failure #3 [21:59:29] tried hard refresh too [21:59:44] I didn't even realize it had that feature. heh [21:59:55] neither did the dev who made it apparently ;) [21:59:59] :D [22:00:23] html seems to work though, guessing missing convert tools or something [22:00:34] maybe the pdf support is missing [22:00:36] aye, txt works too [22:01:04] go figure I head straight for the one thing that doesn't work. I seem to have a knack for that :) [22:01:49] :D [22:07:20] jorm: what do you think about a text box for reason when hiding items in moodbar? [22:07:45] i think it's probably needed. [22:07:50] if you wanted I could help to implement that [22:07:58] oh? [22:08:09] i haven't had a chance to go into it and do design work so far.
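The export URL pattern probed above follows a fixed shape; here is a small helper reconstructing it, assuming the path layout from the URL in the log (the pad name and formats are taken from the conversation; whether a given format works depends on converter tools being installed server-side, as suspected above):

```python
def export_url(base, pad, fmt="txt"):
    """Build an Etherpad (classic) export URL for a pad.

    In the log above, 'txt' and 'html' worked while 'pdf'
    returned "unknown failure #3", likely missing converters.
    """
    return "%s/ep/pad/export/%s/latest?format=%s" % (base, pad, fmt)
```

For example, export_url("http://etherpad.wikimedia.org", "database-bug-triage", "pdf") reproduces the failing URL quoted above.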
[22:08:15] andrew had one originally but we pulled it. [22:08:41] ah [22:11:13] ok, and am I allowed to eventually touch its source code in trunk? I am not sure if others can work on extensions of other people... [22:12:41] yes you can, it's polite to contact the person if they are active [22:12:54] or discuss changes on wikitech-l if they are large [22:13:08] that's what I just tried to do :) [22:13:10] (work on extensions of other people) [22:17:17] it's more complicated when dealing with code that is deployed to WMF sites, though. [22:18:09] have you ever done a lot of mediawiki coding, petan? [22:19:04] lot of, surely not... [22:19:28] okay, I would branch it rather than touching trunk... if I were to change something [22:21:33] ew svn branches [22:22:05] Ryan_Lane: Sorry to bug you again but I can't log into pad1, trying my wiki password but no dice [22:22:21] ssh doesn't work with password auth [22:22:37] johnduhart: you must forward your agent when connecting to bastion [22:22:49] you are using an ssh agent, right? [22:23:27] crap [22:23:33] something is wrong :) [22:24:07] how did this public IP get disassociated from bastion? [22:24:44] I'm on bastion already [22:24:48] oh [22:24:52] yeah. it isn't letting me in either [22:24:59] gimme a sec [22:25:34] it didn't build correctly, it seems [22:25:43] lemme see what happens when I force a puppet run [22:27:04] I didn't find anything about connecting from bastion [22:27:20] well, there weren't even docs saying that you could connect to it [22:27:22] stupid corrupted nscd [22:27:39] Platonides: https://labsconsole.wikimedia.org/wiki/Special:NovaInstance [22:27:48] you can connect to anything in that list [22:28:00] I wonder if you need sysadmin rights for that list [22:28:23] probably only to create instances [22:28:35] no [22:28:40] I had seen it in my rant [22:28:51] "the public ips are private" [22:28:58] well, if it's really broken, someone will just revert it.
[22:29:09] Platonides: and I responded in the rant [22:29:19] bastion is a bastion host [22:29:21] heh, I didn't see it [22:29:27] from bastion you can log into any other node [22:29:38] to see that, you need to be logged in and go to https://labsconsole.wikimedia.org/wiki/Special:NovaAddress [22:30:03] bullshit [22:30:03] bastionhttps://labsconsole.wikimedia.org/wiki/Nova_Resource:I-0000005arunningm1.small10.4.0.17208.80.153.194 [22:30:03] ??? default [22:30:03] novaami-0000001d2011-10-25T21:54:08Z [22:30:10] ugh. bad formatting [22:30:20] anyway. bastion on Special:NovaInstance shows a public IP [22:30:33] yes, and it's on dns, which is good [22:30:46] the rest of the hosts can be connected to *from* bastion [22:30:55] which means you need to forward your ssh agent [22:31:42] the rest are private ips [22:31:49] they aren't meant to be accessed from the outside world [22:31:57] we don't have enough public IPs to support that [22:32:14] and it's more consistent with how things work in our production environment [22:32:37] !log rebooting pad1.pmtpa.wmflabs [22:32:37] --elephant-- Wrong channel! [22:32:41] heh [22:33:13] johnduhart: it's working now [22:33:29] nscd cache got corrupted somehow [22:33:38] nice, thanks [22:33:42] yw [22:33:54] you can reboot your instance via the interface as well, btw [22:34:29] there's no rescue console just yet. I hope to have that working at some point in the future [22:43:14] Platonides: ;) [22:43:24] just to verify you can't do anything there [22:43:27] you don't have sudo on there [22:43:41] you would have sudo on a project that was given to you [22:44:02] changes to the clone of the cluster need to go through code review [22:45:04] the clone of the cluster? 
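The access pattern described above (only bastion has a public IP; instances with private 10.x addresses are reached from bastion with your ssh agent forwarded, since password auth is off) can be captured in a client-side ssh_config fragment. This is a hedged sketch, not official labs documentation: the host names are illustrative, and the ProxyCommand line is just a convenience alternative to manually hopping through bastion.

```ssh_config
# Hypothetical ~/.ssh/config fragment; host names are illustrative.
Host bastion
    HostName bastion.wmflabs.org   # assumed public DNS name for bastion
    ForwardAgent yes               # required: instances accept only key auth

# Optional: hop through bastion transparently (OpenSSH 5.4+)
Host *.pmtpa.wmflabs
    ProxyCommand ssh -W %h:%p bastion
```

With agent forwarding in place, `ssh bastion` followed by `ssh pad1` (or `ssh pad1.pmtpa.wmflabs` directly, with the ProxyCommand) reuses your local key for the inner hop.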
[22:45:20] the testlabs project is meant to be a clone of the production cluster [22:45:29] in general it's supposed to be in sync with it [22:45:45] changes are supposed to happen there, then be pushed to the production cluster [22:45:58] new architecture is supposed to be built in new projects [22:46:10] then moved to the testlabs project, then to production [22:47:12] are names supposed to be descriptive? [22:47:22] of projects? [22:47:26] or instances? [22:48:28] well, how would you know what is each instance ? [22:48:46] ah. you mean the name that shows up in the prompt [22:48:55] I'm going to work on fixing that soon [22:49:06] the hostnames need to be the instance names for uniqueness [22:49:34] I'm going to put something in that makes the prompt use the display name of the instance, rather than the instance id [22:50:04] the instance names aren't descriptive either [22:51:45] labs-mw1 isn't descriptive? [22:51:53] some mediawiki install? [22:52:06] oh. heh. I guess it doesn't outside of our ops team [22:52:20] it's a mediawiki application server [22:52:26] an apache? [22:52:31] basically [22:52:37] we don't use apache, or srv anymore [22:52:44] we use mw, because it's where mediawiki runs [22:52:50] in the future it could be hip hop [22:52:52] or nginx [22:52:59] I didn't know you removed the apache term [22:53:03] the mw was easy [22:53:09] cp is caching proxy [22:53:11] I have no idea what ceph stands for [22:53:15] ough [22:53:16] we are moving from squid to varnish [22:53:25] ceph is a clustered filesystem [22:53:26] cp is the command to copy files! xD [22:53:36] I'm working on puppetize ceph right now [22:53:42] *puppetizing [22:53:45] heh :) [22:53:59] it's not a "west wind", then [22:54:02] db is database. that one is easy though :) [22:54:15] and mc memcached [22:54:20] yep [22:54:28] nfs is clear, too [22:54:30] if you click on the instance id link [22:54:42] are they all supposed to be running a copy of the cluster configuration? 
[22:54:42] it'll show you what puppet classes and variables are being used [22:54:51] that's the end goal, yes [22:55:10] Ryan_Lane: Am I going to have to create an apt package for etherpad-lite? [22:55:19] johnduhart: that would be ideal [22:55:22] if one doesn't exist [22:55:45] I wouldn't start there though :) [22:55:46] I don't understand why I can't log into labs-mw2 but I can at labs-mw1, given that both list the same security groups (default, web) [22:55:55] i'd probably save that till the last step [22:56:11] if you can't then something is broken [22:56:47] yeah. labs-mw2 is broken [22:57:01] gonna delete and recreate it :) [22:57:03] oh, nice [22:57:32] "you should be able to do it, unless you can't or it's broken" :P [22:57:52] well, I recently added nfs home directories everywhere [22:57:59] and some of the instances broke when I did that [22:58:30] for things that are properly puppetized, I just delete them and recreate them [22:58:36] they come back working perfectly ;) [22:59:51] where are things? [23:00:18] shouldn't mw1 have mediawiki somewhere? [23:00:41] s/things/instances/ [23:00:50] or services, depending on how you want to look at it [23:00:56] mediawiki is deployed via scripts [23:01:03] the deployment scripts aren't on labs yet [23:01:35] this is why I mentioned it isn't ready for mediawiki development [23:01:43] no way of installing mediawiki ;) [23:01:49] do we have a platform meeting? my google calendar integration is broken [23:02:00] robla? [23:02:29] well, mw instances seem to have subversion and php [23:02:39] so it might be possible [23:02:52] yes, because the puppet classes do that [23:03:05] but we also need the squids and varnish boxes up for this to work too [23:03:18] and we need lvs to work [23:03:24] I'm currently stuck on lvs [23:03:29] it doesn't seem to have httpd, though [23:03:48] why are squids a prerequisite for apaches?
[23:03:52] they are at the backend [23:04:36] apache lives behind squid [23:04:45] our apaches aren't directly accessible [23:05:05] ah. crap. new instances are broken [23:05:19] I need to make autofs restart on changes to /etc/default/autofs
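The autofs fix mentioned in the last line is a standard Puppet pattern: have the service subscribe to its configuration file so a change to the file triggers a restart. A minimal sketch, assuming a module layout and source path that are purely illustrative:

```puppet
# Hypothetical sketch: restart autofs whenever its defaults file changes.
file { '/etc/default/autofs':
    ensure => file,
    source => 'puppet:///modules/autofs/autofs.default',  # illustrative path
}

service { 'autofs':
    ensure    => running,
    subscribe => File['/etc/default/autofs'],  # restart on file change
}
```

The subscribe metaparameter both orders the service after the file and refreshes (restarts) it when the file resource changes, which is exactly the "restart on changes to /etc/default/autofs" behavior being asked for.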