[00:07:04] jeremyb: I was hoping for someone from ops to comment. Pasting self-referential OTRS replies isn't really very helpful.
[00:21:47] Elsie: errr, did you read it?
[00:24:27] Elsie: in particular http://lists.wikimedia.org/pipermail/wikitech-l/2013-August/071086.html and https://wikitech.wikimedia.org/wiki/Projects#Switch_.22text.22_to_Varnish
[00:24:42] grrrrr, this starbucks wifi *sucks*
[00:26:20] jeremyb: What does the "text" cluster have to do with high-level redirects?
[00:27:50] Elsie: the redirects in question are *currently* handled by the squid text cluster
[00:28:07] I thought they were handled by Apache.
[00:28:10] (and are backed by apache under that)
[00:28:14] they are
[00:28:45] This seems kind of ridiculous.
[00:28:48] Erik's comment is from 2011.
[00:28:48] squid and varnish are both used for reverse proxy + caching (at least within wikimedia)
[00:28:52] how so?
[00:29:08] It's coming up on two years and we're still silently redirecting users.
[00:29:09] varnish could be used as a backend webserver but that's not how we use it
[00:29:42] When a user enters https://wikipedia.org, they end up at http://www.wikipedia.org.
[00:30:02] yes, i quoted something that says about the same thing in my last comment there
[00:30:02] This is a high priority issue that shouldn't wait for whatever this next migration is.
[00:30:15] did you look at my last 2 links in this channel?
[00:30:20] In December 2012, there was a comment about "after the eqiad migration."
[00:30:40] jeremyb: I read your comment and the links earlier today.
[00:30:50] I know how to read.
[00:30:52] And click.
[00:30:55] I'm multi-talented.
[00:30:57] including the very last 2 links in this channel?
[00:31:25] The very last 2 links in this channel are https://wikipedia.org and http://www.wikipedia.org.
[00:31:37] And stop being so goddamn patronizing. I already said I read the fucking links.
[00:32:46] Anyway, it's been nearly two years and we're still sending users from HTTPS to HTTP.
[00:33:10] * Technical_13 takes note not to ask Elsie if the links were read.
[00:33:25] And when users ask, we give them a canned reply about how they can install a browser extension and that "we're working on it."
[00:33:38] That doesn't really seem acceptable.
[00:33:39] uhuh
[00:33:40] canned?
[00:33:47] https://en.wiktionary.org/wiki/canned#English
[00:33:47] i wrote that from scratch myself
[00:33:57] lol
[00:34:08] i think actually he's the only person i can remember writing in about it
[00:34:18] jeremyb, you can can?
[00:34:22] You mean besides the people commenting on the bug?
[00:34:33] And the ... um... six duplicate bugs?
[00:34:46] i was referring to OTRS
[00:34:53] And yet.
[00:35:24] anyway, i think you're going out of your way to make it difficult to converse about this. and i'm on crappy wifi
[00:35:34] so i guess maybe i'll just ignore you :)
[00:35:56] If only.
[00:37:18] (e.g. i said "my last 2 links" and the comment you chose to actually respond to was a more recent+less carefully worded instance where I forgot "my". you could have figured out what i meant. I still don't know if you read them.)
[00:37:46] jeremyb: I read your comment and the links earlier today.
[00:37:51] So ambiguous.
[00:37:54] * Technical_13 grabs jeremyb and starts to can-can
[00:41:10] Anyone else want to join jeremyb and I in a https://en.wikipedia.org/wiki/Can-can
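A minimal way to observe the redirect behaviour complained about above is to request the bare HTTPS domain and check where the redirect chain ends. This is only an illustrative check, written in Python 2 urllib2 (the same library that comes up later in this log); it is not part of any proposed fix, and the final URL is simply whatever the servers returned at the time.

    import urllib2

    # urlopen follows redirects automatically; geturl() shows where we ended up.
    resp = urllib2.urlopen("https://wikipedia.org/")
    print resp.geturl()   # reported above as landing on http://www.wikipedia.org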
[01:08:11] Hello everybody!
[01:08:18] Question: I'm hitting Wikipedia's API and I'm getting, sometimes, 403 Forbidden
[01:08:28] do you know with whom I may talk to resolve this? It doesn't seem a matter of hit rate, or the queries done, or the time of the day... it's very strange
[01:08:49] btw, I'm already providing a sensible user-agent, and even an "accept" header
[01:08:57] thanks!
[01:09:41] facundobatista: what's your UA string?
[01:10:04] facundobatista: example query/URL?
[01:10:15] jeremyb, 1', gathering info
[01:10:30] facundobatista: also, lots of people are in transit right now. may have to be patient for an answer
[01:10:48] Technical_13: oh, you came back. maybe facundobatista wants to cancan?
[01:10:58] jeremyb, yes, don't worry, I may repeat the question in other times, searching for people in other timezones :)
[01:11:21] lol sure, I don't care who joins us jeremyb
[01:11:24] :p
[01:11:30] for the UA, I provided something that is not a browser, clearly a bot, and with a contact; it's: "Ubuntu One Wikipedia Scope (u1di@canonical.com)"
[01:11:56] btw, hola jeremyb and Technical_13 :)
[01:13:27] facundobatista: are you at DebConf?
[01:13:37] jeremyb, Technical_13, an example of the query/URL is: 'http://en.wikipedia.org/w/api.php?action=opensearch&limit=10&format=xml&search=somequery'
[01:13:53] (the search always changes, the rest doesn't)
[01:13:57] jeremyb, nop! I wish :)
[01:14:53] jeremyb, have a couple of friends at debconf, though :p
[01:26:05] facundobatista: know allison randal?
[01:26:30] facundobatista, 100% sure that the user agent is getting sent?
[01:27:14] the API FAQ mentions "Also, it could mean that you're passing & in the query string of a GET request: Wikimedia blocks all such requests, use POST for them instead."
[01:27:26] I don't entirely understand that, but maybe that's causing it
[01:27:38] Krenair, yes! it's three lines of code, the URL creation, generating the Request with the proper headers, and calling urllib2.urlopen()
[01:28:24] Krenair, nop, the query strings are pretty normal ones (and, also, it's not repeatable... I hit the same search again and it finishes correctly)
[01:29:00] So that example URL you provided sometimes gets 403 Forbidden but not always?
[01:29:11] facundobatista: ewwww, not using requests?
[01:29:15] (the lib by that name)
[01:29:28] jeremyb: ಠ_ಠ
[01:29:43] facundobatista: have you tried reproducing with curl on CLI?
[01:30:31] jeremyb, nop :p
[01:30:58] Krenair, for example: 'http://en.wikipedia.org/w/api.php?action=opensearch&limit=10&format=xml&search=mo'
[01:31:32] (that "mo" is the real search issued, yesterday at 2:56 p.m
[01:31:35] )
[01:31:58] facundobatista: https://en.wikipedia.org/w/index.php?diff=568293740&oldid=561909719 https://en.wikipedia.org/w/index.php?diff=568293735&oldid=508134916
[01:32:34] facundobatista: are you saving / inspecting the body on 403?
[01:34:07] jeremyb, I'm in the middle of the branch about that
[01:34:21] (to store the body of the 403 in the OOPS I'm saving)
[01:34:38] jeremyb, regarding Allison, don't know her, does she still work for Canonical?
[01:34:55] facundobatista: idk. did 2 years ago (when i met her at DebConf)
[01:35:00] ah, right
[01:35:44] facundobatista: idk about 403 but at least for 500 we sometimes emit a name of the backend that was used. also see Via, etc. headers. maybe there's some pattern of what gives 403 or not
[01:35:45] jeremyb, I've worked at Canonical since 2008, but in Online Services; never talked to her :/
[01:36:33] facundobatista: will you be idling here? or else we can just use the email address from your UA string, right?
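For context, a sketch of the request pattern facundobatista describes above (build the URL, create a Request with the proper headers, call urllib2.urlopen()), extended with the 403 handling he says he is adding. The UA string and the example URL are the ones quoted in the conversation; the exception handling and the prints are assumptions about how the status, headers and body might be captured (in his case they would go into the OOPS report rather than stdout).

    import urllib2

    url = ("http://en.wikipedia.org/w/api.php"
           "?action=opensearch&limit=10&format=xml&search=mo")
    req = urllib2.Request(url, headers={
        "User-Agent": "Ubuntu One Wikipedia Scope (u1di@canonical.com)",
        "Accept": "application/xml",
    })
    try:
        body = urllib2.urlopen(req).read()
    except urllib2.HTTPError as e:
        # On a 403, keep the status, headers and body around for later inspection
        # (e.g. the Via etc. headers jeremyb mentions, to look for a backend pattern).
        print e.code
        print e.info()
        print e.read()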
[01:36:50] jeremyb, ok, my plan is to save the body and the headers of the HTTP response... then I'll come back with more info
[01:37:27] facundobatista: save for the valid responses too if you can
[01:37:34] jeremyb, I'll be here, or at least in Freenode if I'm online (#pyar and #ubuntuone, most surely); feel free to contact me at facundo@canonical.com, also
[01:37:45] k
[01:37:56] i wonder wtf just happened to the starbucks music
[01:38:18] kinda sounded like the sound you get when you plug or unplug an electric guitar
[01:38:28] good night
[01:38:29] jeremyb, thanks for the help!
[01:38:50] facundobatista: sure, but didn't really do much yet!
[01:39:15] wow, and now something else came on way too loud. i think it's someone's ipod
[01:39:19] welp, i'm leaving anyway
[05:08:30] Reedy: Could I get you to pastebin the result of "show tables;" on enwiki?
[05:08:54] From what I understand, there are a few tables lingering in various DBs that should be killed.
[05:10:13] E.g., povwatch_log.
[05:11:12] I think we can probably get rid of user_old as well.
[05:11:23] Seeing that it has about 300,000 rows, dating it to about 2005.
[05:12:50] I'm not even sure what the "hashs" table is.
[05:13:59] 12 aft_ tables... good grief.
[06:15:36] paravoid: you know about artur's crazy asm @ fastly?
[06:15:47] you have a reply on 31369 :)
[06:15:57] asm?
[06:16:03] assembly
[06:16:04] oh
[06:16:06] yeah I've heard
[06:22:03] paravoid: http://en.wikipedia.org/wiki/Garum
[06:22:17] I happened to be reading that (don't ask) and noticed something missing
[06:22:57] what is?
[06:23:08] oh, http://en.wikipedia.org/wiki/File:RomanFishFarm.jpg
[06:23:08] * Aaron|home feels like he keeps running into vandalism, test edits, and oddities while reading pages
[06:23:45] missing from the db too though
[06:24:01] https://commons.wikimedia.org/wiki/Special:GlobalUsage?target=RomanFishFarm.jpg
[06:24:15] > 2013-08-08T02:08:26 INeverCry (talk | contribs) deleted page File:RomanFishFarm.jpg (Copyright violation, see Commons:Licensing)
[06:24:52] ohh, I just manually refreshed File:RomanFishFarm.jpg and now it is indeed shown as missing in the db
[06:25:12] maybe paravoid has some comments on https://wikitech.wikimedia.org/wiki/Automated_hardware_testing
[06:25:32] stale cache
[06:26:58] hah, is there something wrong with wikitech setup or is the extension really designed so that i get the dynamic popup confirming my edit went through on every page load of the just edited page? (even after i've seen the confirmation 5x already)
[06:27:57] bbl
[06:28:31] jeremyb: not just wikitech :)
[06:28:55] Aaron|home: uhuh :)
[06:28:58] at first I thought it was a ve bug where you get that when not even using it, but I guess it's intentional?
[06:29:13] at least if it wasn't I'd imagine it would have been removed by now...maybe
[06:29:56] well the UI has existed like that since before VE. IIRC
[06:30:11] idk if it's been doing it on every page load forever though
[06:30:28] i imagine the determinism makes it easier to test...
[06:31:52] * Aaron|home reads http://jugshopsf.com/menu/
[09:48:18] hmm autocreation on viwikivoyage doesn't seem to be working
[11:50:18] hmpf, I forgot that all the pre-migration diff links to wikitech are also broken https://bugzilla.wikimedia.org/show_bug.cgi?id=21117#c27
[11:51:04] poor wikitech wiki, so unloved and mistreated
[12:53:11] QueenOfFrance: Can you file a bug, please?
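The stale-cache exchange earlier in this hour (the deleted File:RomanFishFarm.jpg still showing as present until Aaron|home manually refreshed the page) can also be forced through the API's action=purge module rather than from the browser. A rough sketch, assuming Python 2 and an anonymous purge; the title is the one from the conversation and the User-Agent is a placeholder.

    import urllib
    import urllib2

    params = urllib.urlencode({
        "action": "purge",
        "titles": "File:RomanFishFarm.jpg",   # the page refreshed above
        "forcelinkupdate": "1",
        "format": "json",
    })
    req = urllib2.Request("https://en.wikipedia.org/w/api.php", params,
                          {"User-Agent": "purge-example/0.1 (placeholder contact)"})
    print urllib2.urlopen(req).read()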
[15:03:16] What are all those endless "waiting for X.beta" etc. ... after I login on beta.wmflabs.org?
[15:03:28] the new centralauth?
[15:14:20] Nemo_bis: yes, but beta has a different issue also that is causing a lot of 503s. manybubbles and I have been poking at it some.
[15:14:57] I've pretty much given up on it until I can get help from someone better
[15:15:29] ah, yes, got several 503s just now
[15:17:48] it seems I was lucky when I managed to login
[15:42:57] I don't know if this is the right channel, but I'm unable to reset the password to my SUL account on any project
[15:44:11] this is a fine channel for that and I don't have the answer
[15:44:32] you have email registered with the account?
[15:44:46] Yeah, I get the reset emails and everything
[15:45:04] and what happens then?
[15:45:06] I use the temp password it gives me, set a new password and then I can't log in with the password I set
[15:46:15] what username?
[15:46:18] Wagner
[15:47:45] I've tried resetting the password on meta, frwiki, wikispecies
[15:48:08] ok lemme see if I can turn up anything in the logs at least
[15:49:27] um, around what time?
[15:49:51] like, a half hour ago, just now, a few hours...?
[15:49:56] (these logs are huge)
[15:49:57] an hour ago
[15:50:00] ok
[15:50:02] 16:19 BST
[15:58:09] haven't forgotten you, tried some smaller logs first and now looking at the largest one
[16:03:54] I see you in here
[16:04:26] Have any idea what's up?
[16:04:32] but it does this
[16:04:33] Wagner (temp)
[16:04:36] which is weird
[16:04:43] That's a new account I made
[16:05:11] oh :-D
[16:05:13] I was going to use it to leave a message on somebody's talk page before realising this channel existed
[16:05:40] so I see Set global password for 'Wagner'
[16:05:42] Elsie: likely not for a week, crazy busy :)
[16:05:47] before that
[16:16:33] apergos: hello
[16:16:38] hey
[16:16:54] do we have a parent today or is he on break?
[16:17:40] i don't see him here and i don't know anything more; he was supposed to show up yesterday too, but didn't
[16:19:08] I pinged him, we'll give two minutes and then bam it's on
[16:19:13] ok
[16:22:04] time's up I guess
[16:22:12] so how are things going? I saw the latest commits
[16:22:20] right; i think updating from MediaWiki should work now (including progress reporting)
[16:22:29] if you want to try it, the command line looks something like this:
[16:22:39] ./idumps u php "/var/www/maintenance/dumpBackup.php --full --stub" /var/www/maintenance/fetchText.php sc sc.id sh sh.id pc pc.id ph ph.id
[16:22:59] this assumes you have php in PATH, MediaWiki is installed in /var/www and you want to create (or update) all 4 kinds of dumps at the same time
[16:23:34] what are sc sh pc and ph?
[16:24:00] and I will definitely try it but I'll have to tweak stuff, I have a few zillion installations so...
[16:24:22] that's the same as it was before, it describes what kind of dump is created; e.g. sc is stub-current
[16:24:30] ah, that's the abbrevs
[16:24:32] ok
[16:24:53] can I ask why you added in the at() stuff?
[16:24:54] sure, you don't have to try it today
[16:25:36] this evening or tomorrow morning, depending on time. new toy, of course I want to play with it :-)
[16:26:58] it was just refactoring to get better errors in the case of index out of bounds errors: you get a specific exception instead of segfault
[16:27:14] Mystaceus: I am officially stumped, I didn't see anything obvious; if you can't stick around for a while, you could bugzilla it and drop the link in the channel here
[16:27:21] and I will poke someone when they show up
[16:28:32] apergos: I can probably stick around a while, besides I don't even know where the bugzilla is.
[16:29:16] apergos: so, now i have started working on diff dumps
[16:30:27] Mystaceus: bugzilla.wikimedia.org, but if you hang out a couple hours people will start being around
[16:30:34] svick: yay!
[16:31:26] timezones && travel (bad IRC connection from intercontinental flights!)
[16:32:01] i have an idea about how the format of that should look, so now i'll implement it and then write a spec
[16:32:12] ok, looking forward to seeing that
[16:32:34] be sure to publicize that on the list as soon as possible, even if not polished and perfect
[16:32:50] to let people weigh in, although I see we haven't had a lot of nibbles yet
[16:33:44] yeah, people were much more interested in the first email thread than in the last one
[16:34:43] and yet these later ones are the ones that are going to determine their future :-D
[16:34:46] ah well
[16:35:25] so anything you want to chat about with regard to the format or how it's going to work?
[16:35:31] or do you prefer to hack at it some first?
[16:37:00] well, there is one thing: how to represent the list of revision ids of a page in the diff dump
[16:37:32] the problem is, the list doesn't have to be sorted and i have to handle deletions and additions in the middle
[16:38:10] should it not be ordered by rev id ascending (for example)?
[16:38:51] i don't think so, i think some kind of cross-wiki importing creates old revisions with high ids
[16:39:17] yes it could, I mean would you not intend to sort it by ascending?
[16:39:32] maybe I don't understand clearly the issue
[16:40:25] you are going to have revisions that either are 'we don't want these now' or 'these are new'
[16:40:44] anything else doesn't go in the diff dump I suppose
[16:40:45] i mean, i can't just say "add revision id 123 to page id 456" i have to say "add revision id 123 to page id 456 at position 50"
[16:41:08] so that's what I don't understand: why 'at position 50'?
[16:41:46] because the cross-wiki import can create a revision in the middle, no? i should probably look more closely at how exactly that works
[16:42:00] you don't get them back in some particular order
[16:42:02] https://bugzilla.wikimedia.org/show_bug.cgi?id=27112
[16:42:34] you can get one 'in the middle', sure
[16:42:36] or it may not be cross-wiki import, it may be undeletion of some revisions (and i don't mean just unhiding the text)
[16:42:41] right
[16:43:06] can't you leave where to insert to the script that does the insert? I mean if it is going to insert by increasing rev id then it does that
[16:44:37] but then revisions would be sorted by rev id in the XML and i think that would be wrong
[16:44:49] by revid per page
[16:44:52] (though it would be certainly simpler for me)
[16:44:54] right
[16:44:55] I think that's what we have now
[16:45:15] how do you want them?
[16:46:48] how does MediaWiki UI show them? by timestamp? but i think the best option would be to keep them in the same order as they are now in current XML dumps, so i'll look at the code
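A toy illustration of the problem being discussed: if the per-page revision list is not kept in one fixed order, a diff dump has to record where an id is inserted, not just that it was added. The numbers are made up and this is not the project's actual format, just the two forms contrasted above.

    # hypothetical revision ids for one page, in the (unsorted) order the dump stores them
    revisions = [1001, 1002, 2500, 1003]

    # positional form: "add revision id 1500 to this page at position 2"
    revisions.insert(2, 1500)

    # the simpler form only works if one ordering is fixed once and for all,
    # e.g. ascending rev id:
    import bisect
    ordered = sorted(revisions)
    bisect.insort(ordered, 1750)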
[16:47:04] so the current order is by chance
[16:47:08] that's what I'm saying
[16:47:20] oh, ok
[16:47:24] I have had dumps where the order switches from one dump to the next
[16:47:36] it mostly follows timestamp which is mostly revid ascending
[16:47:48] but that's due to how stuff got stuffed in there with inserts
[16:48:03] it needs to be fixed by choosing something (see that bug report)
[16:48:31] and as long as any library reader and any converter to xml does the right thing (what mw should be doing)
[16:48:40] then the internal storage doesn't matter so much
[16:48:46] well except perhaps for speed of conversion
[16:54:04] facundobatista: any more logs/etc.?
[16:54:35] jeremyb, not yet, probably tomorrow
[16:54:52] apergos: ok, then i won't worry about the internal order (at least for now)
[16:55:08] just choose one and stick with it
[16:55:18] but be prepared to switch it if that bug gets resolved
[16:55:44] about the at() changes?
[16:55:47] ok
[16:55:56] what was the thinking there?
[16:56:37] apergos: 13 16:26:58 < svick> it was just refactoring to get better errors in the case of index out of bounds errors: you get a specific exception instead of segfault
[16:56:40] no?
[16:56:44] oh
[16:56:49] didn't see that and looked too
[16:57:13] well, if you do vector[wrongIndex], you get a segfault if you're lucky; if you do vector.at(wrongIndex) you are guaranteed to get an exception
[16:57:24] I am not concerned (yet) but if performance turns out to be an issue I might be (though benchmarking may tell me not to worry then either)
[16:57:34] that's all
[16:57:45] certainly for development it's fine
[16:58:26] i doubt bounds checking will cause measurable slowdown; but yeah, if it turns out it does, i will optimize it then
[17:00:52] i think that's it for today, see you tomorrow
[17:04:13] okey dokey, have a good rest of the day and talk to you tomorrow
[17:06:28] thanks, you too, bye
[17:12:12] Mystaceus: I am told that anomie might be a good person to ask
[17:12:38] apergos: I assume anomie isn't on yet?
[17:12:52] I guess not (I dunno if they frequent in here or not, not sure
[17:12:53] )
[17:13:24] either here or wikimedia-dev
[17:13:44] I gotta get going but I'll swing by later to see if anybody found anything
[17:13:45] tah!
[17:13:53] alright
[17:18:29] Mystaceus: anomie is on vacation this week
[17:18:42] oh
[17:23:55] I wonder who else could help me with that issue then
[17:33:41] Oh
[17:33:44] I see the problem
[17:33:57] Looks like my account has actually been "locked"
[17:34:22] I thought I would've at least been notified about this
[17:34:22] Naughty naughty..
[17:35:23] apergos: Problem solved.
[17:35:38] Mystaceus: there's #wikimedia-stewards
[17:36:08] jeremyb: What would that be for? To appeal the lock?
[17:37:00] or to help figure out that you were locked faster than you figured it out yourself :)
[17:37:35] Mystaceus: but you seem to be indef blocked in quite a few places
[17:37:43] (locally)
[17:37:45] Yeah, I know that
[17:37:50] so a lock would be justified maybe
[17:37:52] (good-editor-turned-vandal or potentially compromised account) is on my SUL though
[17:38:18] I don't expect to lose my blocks on any of the projects I'm locally blocked on
[17:39:15] idk what "on my SUL" means
[17:39:28] anyway, -> -stewards. this is not a technical issue
[17:39:34] single unified login, or whatever
[17:40:15] Yeah, I'm in stewards. I initially thought it was a technical error, as I couldn't change the password (which makes sense now)
[17:40:48] (moved)
[18:47:03] chrismcmahon: heya, so, how's the beta cluster doing? Are things back to normalish now?
[18:54:56] greg-g: not really. we're a bit stumped
[18:55:57] greg-g: I have restarted varnish, apaches, memcacheds without much effect
[18:57:15] chrismcmahon: i could take a look in an hour or two maybe. has someone written about what's been tried and what the symptoms are?
[18:57:18] is there a ticket?
[18:57:27] first: lunch!
[18:57:34] aka breakfast
[18:58:15] jeremyb: https://gerrit.wikimedia.org/r/#/c/78968/ more eyeballs much appreciated
[18:58:29] greg-g: ^^
[18:59:48] chrismcmahon: what does that have to do with beta cluster/varnish/apache/etc.?
[19:00:23] jeremyb: bad copy on my part https://bugzilla.wikimedia.org/show_bug.cgi?id=52776
[19:00:42] aha
[19:01:09] jeremyb: just very slow and lots of 503s for most pages
[19:15:23] Anyone know who is familiar with the EditPage code? specifically dealing with edit conflicts
[19:19:50] chrismcmalunch: meh lunch, oh well, I was gonna ask what the fatal messages looked like
[19:21:17] apergos: i'm not seeing anything showing up in the fatals log...
[19:21:24] (when i make a 503)
[19:21:40] ok cause it was in the bug report without specifics
[19:22:37] yeah
[19:22:44] maybe i'm looking in the wrong place
[19:23:22] you wanna look on /data/project/logs or something like that, from one of the deployment instances
[19:23:28] deployment-prep I mean
[19:23:40] and you need to give it a minute for the entries to show up, it's not to the second
[19:24:36] well i think it's been more than a few mins
[19:25:06] anyway, bbiab
[19:27:18] k
[19:30:34] apergos: haven't seen anything particularly suspicious in the fatals log or errors log either
[19:31:01] ok
[19:31:11] I'll look at that tomorrow and see if I can figure out anything
[19:31:32] (it's 10:30 pm here so winding down for today)
[20:00:13] <^d> chrismcmalunch: Fyi, we're going to deploy to test2 at some point before mw.org. Haven't decided when, will pick one of the lightning deploy windows soon.
[20:00:43] ^d: thanks, having new stuff on test2 in advance is helpful
[20:22:41] it seems we're never able to figure out anything about beta's fatals
[20:23:12] weee
[20:23:24] * greg-g actually really grumbles annoyedly
[20:54:12] greg-g: were you at wikimania?
[20:54:45] Danny_B: do you by chance know if the issue of ThOrg and User Group hosted wikis has come up?
[20:55:12] i don't have any clue, sorry
[20:57:38] anyone else know if the same approach to wikis for Chapters will be applied to ThOrgs and User Groups?
[21:00:59] varnent: so... you're asking about a technical decision or something else?
[21:02:29] jeremyb: well - I suppose I'm not 100% sure - my understanding is that Chapters get WMF hosted private and public wikis - correct?
[21:02:43] and basically they can use BZ to do so
[21:02:56] varnent: no, I was home :/
[21:03:17] jeremyb: trout me if I'm wrong
[21:03:28] greg-g: that explains it - I was thinking "WTF are the odds two Gregs running around the same dev room didn't bump into each other"
[21:03:40] varnent: I know, right? :)
[21:04:11] jeremyb: so the question is - that would presumably apply to ThOrgs given their nature - but what about User Groups - would that be case by case or uniform? - basically anything recognized by WMF Board gets a wiki if requested and approved?
[21:04:30] varnent: i guess there might be 1 or 2 chapters with a WMF hosted private wiki. but that's very rare. most WMF hosted chapter stuff is public wikis
[21:04:55] varnent: idk... i would ask erik
[21:05:05] jeremyb: okay - well same question then - public wikis for ThOrgs and such
[21:05:23] varnent: 13 21:04:54 < jeremyb> varnent: idk... i would ask erik
[21:05:25] :)
[21:05:26] okay - I'll send him an email
[21:05:36] he's on vacation for a bit, just fyi
[21:05:48] he might have an auto-responder going, dunno if he does that
[21:06:01] yeah - wtf - like half the staff in my hotel were either ending or starting a vacation :)
[21:06:05] you could try geoff maybe?
[21:06:25] it's like they had a flight to China paid for and wanted to make use of it! ;)
[21:06:47] there are some staff on the AffCom list that may know - but not really any Ops folks - I'll ponder that
[21:07:07] (I did that when I was invited to talk at Berlin Open Access conf 9 in Beijing. Took the bulk of my vacation time ending up being in China for a full month)
[21:07:14] greg-g: yeah - if that was a work trip I would have done the same - but the HK trip was the vaca for me :)
[21:07:22] yeah
[21:07:33] varnent: you could just file a bug and CC erik and see what happens
[21:08:05] the problem with the type of work I do is you get inherent guilt when you are out of the office for more than 72 hours - so vacations beyond a week are exceptionally rare
[21:09:05] jeremyb: I think that's what I'll do once I get some feedback from AffCom - it may not be something we want to ask anyway - my hunch based on this conversation is there isn't a set policy yet - so more pondering could be done before some official request is made
[21:10:29] varnent: Reedy may know
[21:10:48] whether there's a policy
[21:11:20] Reedy: ping
[21:11:41] bbl
[21:19:16] varnent: Re_edy is either still on a plane back from HK or recovering right now (he flew back this "morning" (pacific timezone))
[21:20:12] gotcha
[21:20:25] greg-g: I don't suppose you know if there's a policy :)
[21:20:33] * greg-g reads up more
[21:21:06] ah, just making the chapter specific private/public wikis for their use?
[21:21:09] I have no idea ;)
[21:21:43] I searched on-wiki and didn't see anything - going to check a couple other places - lol - you'd think I'd know being on AffCom - but I basically just know what's done - not if it's written down
[21:21:53] * greg-g nods
[22:43:03] gn8 folks