[02:31:23] anyone awake? getting reports account creation is completely broken in #wikipedia-en-help
[02:33:03] Account creation completely broken.
[02:34:18] .
[02:34:18] [22:31] * tos2 is now known as tos
[02:34:18] [22:31] <+Cyberpower678> Ok. Here's the diagnosis. Any attempt to create the account returns a "Fatal exception of type MWException". The account and SUL get created, but email and password get left out. Essentially, the user can't log in with the password set or provided by email. Nor can they confirm email and use it to reset the password.
[02:35:03] Thereby leaving a corrupted account.
[02:35:38] ping
[02:35:41] ping
[02:35:41] ping
[02:35:42] ping
[02:35:43] ping
[02:35:49] Emergency
[02:35:55] Cyberpower678: what
[02:36:02] spamming 'ping' does nothing
[02:36:10] other than poke me thru our anti-spamming bot
[02:36:11] Look up.
[02:36:22] AaronSchulz, andre__, QueenOfFrance:
[02:36:28] so?
[02:36:30] It happens
[02:36:30] ahhh ok
[02:36:33] And I'm not a dev
[02:36:41] and spamming 'ping' doesn't ping devs lol
[02:36:59] File a bugzilla, imo.
[02:37:11] https://www.mediawiki.org/wiki/How_to_report_a_bug
[02:37:16] nope, my method would eventually get one...
[02:37:51] andre__, I believe this may be affecting hundreds of users at the moment.
[02:37:59] so don't talk
[02:38:01] file a bugzilla
[02:38:05] and then talk :)
[02:38:10] Cyberpower678, without steps to reproduce I can't comment.
[02:38:12] I was going to do one in the morning when I got a computer, but was thinking this may be critical
[02:38:19] en.wikipedia.org?
[02:38:30] .
[02:38:30] [22:31] * tos2 is now known as tos
[02:38:30] [22:31] <+Cyberpower678> Ok. Here's the diagnosis. Any attempt to create the account returns a "Fatal exception of type MWException". The account and SUL get created, but email and password get left out. Essentially, the user can't log in with the password set or provided by email. Nor can they confirm email and use it to reset the password.
[02:38:42] andre__, just create an account.
[02:39:00] if this seriously affects all accounts, someone should turn off global account creation...
[02:39:08] otherwise we will be taking up usernames with no passwords.
[02:39:12] Cyberpower678 just file a bug regardless
[02:39:18] take credit for such a nice find
[02:39:23] :D
[02:39:35] be a master, a bugzilla master!
[02:39:49] Hmm, I get "[1903eff7] 2013-06-18 02:39:00: Fatal exception of type MWException" after creating an account on en.wikipedia.org
[02:39:49] !bugzilla
[02:39:49] https://bugzilla.wikimedia.org/$1
[02:39:49] I'll try to submit it but it will be tough from droid...
[02:39:59] Technical_13, I'll do it.
[02:40:09] It's such an incredible find. :p
[02:40:10] kk
[02:40:34] idc either way who does it as long as it is done.
[02:40:35] andre__, and you'll find you can't use that account.
[02:40:47] cc me in it c678
[02:41:04] confirming. sigh.
[02:41:38] andre__, nor can you reset the password despite the email being set up during creation.
[02:45:26] Technical_13, what's your bugzilla username.
[02:45:27] ?
[02:45:41] hmm, the last code deployment window was already 7 hours ago
[02:45:47] technical_13
[02:46:44] @yahoo.com if it needs that part...
[02:47:02] andre__: is there a switch someone can flip to disable account creation temporarily?
[02:47:27] because whatever you just created is now taken on all Wikimedia projects indefinitely, with no viable method of logging in short of devs changing the password manually.
[02:48:17] Charmlet, I don't know, sorry
[02:48:22] hmm.
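(For context on the "switch": on a stock MediaWiki install this is an ordinary permission key in LocalSettings.php; whether Wikimedia's cluster configuration exposes the same knob, and who may flip it, is an assumption here.)

    # LocalSettings.php -- turn off account creation for anonymous visitors
    $wgGroupPermissions['*']['createaccount'] = false;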
[02:48:35] if there is, can someone flip it?
[02:49:23] Filed bugzilla
[02:49:35] !bugzilla 49727
[02:49:36] https://bugzilla.wikimedia.org/49727
[02:51:08] Technical_13, you just downgraded the bug.
[02:51:24] pretty good c678, other than it's not an unprioritized blocker.
[02:51:45] it's an immediate critical
[02:51:56] I upgraded it actually.
[02:52:08] Can't you do immediate blocker?
[02:52:33] blocker only means it prevents fixing other bugs...
[02:52:42] Oh.
[02:52:49] critical is actually higher.
[02:52:53] :)
[02:53:40] you don't have editbugs?
[02:54:08] Technical_13: most new users don't
[02:54:22] is it unconfirmed or new? I didn't check that.
[02:56:02] okay. it is new and swalling confirmed it.
[02:56:21] Thanks for filing it!
[02:57:42] Technical_13: Thanks for filing the bug! I came to this channel because I was interested in how the bug reporting worked.
[02:58:05] c678 filed it...
[02:58:26] I just guided him into doing it, maybe...
[02:58:39] Matthew is investigating.
[02:58:47] Technical_13, no. I just don't trust droids. :P
[02:58:52] Oh, well, thanks to you both!
[02:58:58] Teahouse_Guest93, if you have any bug reporting questions I'm probably the one to ask :)
[02:59:12] again, thanks for making noise and filing a ticket. Really appreciated!
[02:59:15] andre__, nuh uh
[02:59:16] :p
[02:59:38] can someone try to create an account on another site and see if unification works? might be a workaround for now.
[02:59:42] andre__ No questions. Just being a spectator
[03:04:30] Teahouse_Guest93, account creation works on other wikis. Try creating your account on meta.wikimedia.org
[03:05:04] Teahouse_Guest93, wait
[03:06:37] Teahouse_Guest93, you there?
[03:07:04] Cyberpower678: I am waiting. I'm a little gun-shy now ;)
[03:08:21] unification working from meta?
[03:13:32] Teahouse_Guest93: don't worry, we don't bite here
[03:14:46] Teahouse_Guest93, Jasper_Deng is right. We don't bite. We chomp. :D
[03:14:58] hey
[03:15:09] * Jasper_Deng takes away Cyberpower678's jaws
[03:15:19] time to retest account creation, see last two entries on the top of https://wikitech.wikimedia.org/wiki/Server_admin_log
[03:16:04] * Cyberpower678 gets his spares, which are bigger. :p
[03:32:20] Thanks for all of the help and the work, everyone. It's been fascinating watching the account creation issue going from my problem to a bug to seeing the bug report updated with patches. I am new to the editing part of Wikipedia (hence the account creation) and this has been an interesting experience. Everyone has been so helpful and kind. So, really, thanks again!
[03:33:34] glad to have entertained...
[03:36:21] T13|Sleeper: I wasn't entertained exactly, just curious and interested. I was considerably frustrated when I couldn't get account creation to work.
[03:37:05] Teahouse_Guest93: Wikimedia does remarkably well for a low-budget organization. I like their efficiency a lot. However, sometimes there are snags
[03:39:20] Jasper_Deng: Totally understandable.
[03:41:31] well, it's also always a question of who's around and awake...
[03:41:40] but thanks :)
[03:44:42] it's a matter of someone who knows what channel to poke...
[03:44:48] :p
[03:47:20] so I asked in the #wikimedia-operations channel (though that's definitely the wrong channel) and tried in the staff channel to check who's still in the office (though it's 8:30 PM in that timezone)
[03:47:32] my next step would have been to send an email to the engineering mailing list.
[03:51:39] andre__: it seems Roan's on it
[05:33:22] I'm trying to figure out why https://en.wikipedia.org/w/index.php?title=Wikipedia:Today%27s_featured_article/Article&action=edit doesn't work
[05:33:29] Is it the template? Or the #time parser function?
[08:49:29] meh, again https://pt.wikipedia.org/wiki/Wikip%C3%A9dia:Esplanada/propostas#Vota.C3.A7.C3.A3o_sobre_o_CAPTCHA_e_carta_destinada_ao_Bugzilla
[08:56:42] DanielK_WMDE: hey daniel, do you happen to have a moment to answer a couple of questions regarding content handler?
[09:10:50] anyone happen to know - with content handler, is it possible to upload a file and have it placed into a content handler namespace page, or does the content handler namespace page need to be created as a new page with text only?
[09:17:02] dan-nl: hey
[09:17:43] DanielK_WMDE: hey, with content handler, is it possible to upload a file and have it placed into a content handler namespace page, or does the content handler namespace page need to be created as a new page with text only?
[09:17:44] dan-nl: Content can contain any kind of content; in theory this includes binary blobs, though there *might* be issues with the database backend. If that is the case, use base64 or something like that
[09:18:34] dan-nl: as for "uploading": if you use mediawiki's upload mechanism, mediawiki will treat your upload as a media file, not page content. media files also have handlers and revisions, but they are handled separately and quite differently from page content.
[09:18:47] for one thing, they are served raw from the file system to the client.
[09:19:53] if you want to be able to create wiki pages with your custom content model by uploading files, you'll have to either abuse Special:Import (needs client side pre-processing of the files) or create a new Special:XXX page for handling the upload and creating a page from it
[09:20:09] DanielK_WMDE: yes, i was trying to use upload from web request to send an xml file to a content handler namespace page, but that didn't work
[09:20:38] not sure what you mean by "upload from web request"
[09:20:51] DanielK_WMDE: i see. that's what i was afraid of … was able to read the xml as a string and use the api to create a content namespace page
[09:21:30] DanielK_WMDE: $this->_UploadBase = UploadBase::createFromRequest( $WebRequest );
[09:21:33] don't use mediawiki's media upload feature. it's for media files. they are treated as raw blobs that sit on the file system and can be served directly.
[09:21:49] DanielK_WMDE: right, that's what i was discovering
[09:22:48] dan-nl: there is a conceptually blurry but practically fundamental distinction between page content and media upload. the reasons are mostly historical, but it's not going to go away soon.
[09:22:51] DanielK_WMDE: off-hand do you know how i would be able to use the mediawiki framework to create a special page that would handle the upload?
[09:23:49] dan-nl: extensions can define special pages by extending the SpecialPage class and registering the new class. the special page has code to generate an html form, and handle post requests to that form.
[09:23:52] DanielK_WMDE: k, no problem with that, just need to know how to "best" create the page using a $_FILES upload … was able to do it by reading file_get_contents, but i don't think that's the best way to deal with it
[09:24:21] You should be able to do file uploads using a file field in the form.
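(An illustrative, untested sketch of the special-page approach DanielK_WMDE describes: read the uploaded file through MediaWiki's WebRequest wrapper and save its text as ordinary page content. The name SpecialXmlUpload, the 'xmlfile'/'pagename' form fields, and the constants NS_XML_DATA and CONTENT_MODEL_XML are invented for the example.)

    class SpecialXmlUpload extends SpecialPage {
        public function __construct() {
            parent::__construct( 'XmlUpload' );
        }

        public function execute( $par ) {
            $this->setHeaders();
            $request = $this->getRequest();

            if ( $request->wasPosted() ) {
                // Roughly what Special:Import does: read the uploaded file
                // straight from the PHP upload ($_FILES under the hood).
                $xml = file_get_contents( $request->getFileTempname( 'xmlfile' ) );

                // Pre-process / validate the XML string here, then save it
                // as page content in the custom namespace.
                $title = Title::makeTitleSafe( NS_XML_DATA, $request->getText( 'pagename' ) );
                $content = ContentHandler::makeContent( $xml, $title, CONTENT_MODEL_XML );
                WikiPage::factory( $title )->doEditContent( $content, 'Created from uploaded XML file' );
                return;
            }
            // ...otherwise render the upload form (enctype="multipart/form-data")...
        }
    }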
[09:24:31] DanielK_WMDE: right, already have the form for it, just need to know what part of the mediawiki framework i can use to process the upload so that a page is created from the xml file ...
[09:24:45] $_FILES and file_get_contents is more or less what you need to do, yes
[09:24:56] though it may be that mediawiki provides some nice wrappers for that
[09:25:02] look how SpecialImport handles uploads
[09:25:07] do i place the file into the content handler page or do i copy the text content of the file and place it in the page?
[09:25:43] the text. i don't understand what "place the file into the content handler page" would mean
[09:25:58] DanielK_WMDE: k, will look at Special:Import
[09:25:59] you are trying to create a page with text in it. so put text into it. no matter where it comes from.
[09:26:30] DanielK_WMDE: right, that's the distinction i need to sort out - i have to pre-process the file before placing it into the page
[09:27:01] DanielK_WMDE: my only concern then has to do with large xml files … may run into memory and timeout issues
[09:27:47] DanielK_WMDE: then after the content handler page exists, i need to read it as a string and process it as i need, so convert the string back into an xml document or node … ?
[09:30:01] dan-nl: you have to decide whether the "native" representation of your content is going to be a DOM or an XML string. The latter is easier to implement, the former is nicer for manipulation.
[09:30:39] dan-nl: in your ContentHandler, you would put the XML parsing/serializing if you go the DOM route. Otherwise, you don'
[09:31:09] ...you don't need to do much there. If you use a text based content representation, just derive from TextContent resp. TextContentHandler, like i did in my example
[09:31:25] dan-nl: if you need to do any processing after uploading, just do it after uploading, before creating the page
[09:34:42] DanielK_WMDE: right, thanks, that clears up things for the moment … seems i need to decide whether to store the xml as serialised in the page or as text and then move on from there ...
[09:35:44] dan-nl: well, it will always be *stored* as a string. the question is how you want to represent it in your application logic, that is, what your Content object is going to contain
[09:35:54] that largely depends on what operations you want to allow on your content.
[09:36:04] free hand xml editing? go the text route.
[09:36:23] high level manipulation of parts of the XML structure? use a DOM
[09:36:34] DanielK_WMDE: right … will have a think about that and decide. i think it's more about just reading in the xml rather than editing it ...
[09:37:12] rendering of the logical structure, rather than tags, will also be easier using a DOM
[09:37:17] DanielK_WMDE: thanks for the DataPages example, that worked just fine. my only concern is that it seems it should become part of the core rather than a separate extension for xml … otherwise there may be several implementations for xml
[09:37:26] if you are fine with looking at (highlighted?) XML code, use a text based model
[09:38:19] dan-nl: XML is a serialization format, not a data model. The logical data model is what you are representing with your XML.
[09:38:22] DanielK_WMDE: k, looks like i'll need to play around with a few ideas, pick one and possibly change later on ...
[09:38:46] We could have a generic DOMContent implementation that could be re-used, yes. But that's about it.
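(A matching sketch of the text-based route Daniel recommends: derive from TextContent/TextContentHandler and let MediaWiki store the XML as a string. CONTENT_FORMAT_XML does exist in core's Defines.php; CONTENT_MODEL_XML and the class names are again invented for illustration.)

    class XmlContent extends TextContent {
        public function __construct( $text ) {
            parent::__construct( $text, CONTENT_MODEL_XML );
        }
        // Schema-specific behaviour (validation, search text, rendering the
        // logical structure) would go here; none of it is generic to XML.
    }

    class XmlContentHandler extends TextContentHandler {
        public function __construct( $modelId = CONTENT_MODEL_XML ) {
            parent::__construct( $modelId, array( CONTENT_FORMAT_XML ) );
        }

        public function makeEmptyContent() {
            return new XmlContent( '' );
        }
    }

    // Registered the usual way:
    $wgContentHandlers[CONTENT_MODEL_XML] = 'XmlContentHandler';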
[09:39:05] DanielK_WMDE: i see
[09:39:47] All the other logic will be specific to the actual schema, not generic to XML. For example, what a "section" is, or how to get "text for the search index", or what a "pre safe transform" would do - all that depends on your application.
[09:40:00] it's nothing we could decide up front from "XML" in general.
[09:40:52] DanielK_WMDE: right, i was just thinking about the highlighting and whether or not the content handler would pre-parse the xml and make sure it's valid, etc, but i guess that is best to leave up to each individual implementation, although i do see some code re-use possible ...
[09:42:21] DanielK_WMDE: thanks for your time here … will try a few things out and may have some more questions later ...
[09:44:08] aww, no stats for pt.wiki when one needs them most... dump stuck behind 6 big wikis http://www.infodisiac.com/cgi-bin/WikimediaDownload.pl
[09:51:14] Nemo_bis: you mean like this? http://dumps.wikimedia.org/ptwiki/20130529/
[10:02:44] se4598: yes, it's the oldest, next in the queue
[13:02:12] was enotifminoredits killed? O_o
[13:18:04] did the email outage break enotifs for edits around 23:27 CEST yesterday?
[15:03:18] Filed as https://bugzilla.wikimedia.org/show_bug.cgi?id=49749 .
[16:08:21] Did the account creation things from last night get fixed?
[16:14:10] Charmlet: yes
[16:14:20] oh good.
[16:22:53] Yep...
[17:00:00] is this the channel for Labs or is there another channel?
[17:00:52] fale: #wikimedia-labs
[17:01:00] thanks Nemo_bis :)
[17:01:01] and ciao
[17:01:10] ciao Nemo_bis :)
[18:13:17] Several users are reporting issues with confirmation links being invalid on the English Wikipedia.
[18:14:10] James_F: ^
[18:14:32] lfaraone: Users who can log in to their wiki account?
[18:15:01] James_F: "I received a confirmation email, but the link sends me to a Wikipedia page that says that the confirmation code is invalid."
[18:15:12] James_F: "I received an activation link in an email from Wikipedia, but when I click on the link, I am directed to a page that tells me my link is expired."
[18:15:43] lfaraone: Are the users reporting this on-wiki? By e-mail? IRC?
[18:15:49] James_F: OTRS.
[18:16:00] lfaraone: Hmm. Probably is the breakage we did last night. :-(
[18:16:15] lfaraone: About 8000 accounts' creation got completely broken, unfortunately.
[18:16:19] Yes, sounds like
[18:16:30] James_F: ah, okay. yeah, other people report "fatal errors"
[18:16:41] Yeah, so
[18:16:49] About 8k people got a "fatal error" when creating their account
[18:16:50] lfaraone: Yeah, that would be it. :-(
[18:16:56] The accounts don't have passwords set
[18:17:00] I see. I only saw 5 tickets about it, fascinating.
[18:17:09] lfaraone: Those accounts are essentially permanently inaccessible. We'll have to delete them from the DB.
[18:17:15] Most people didn't provide an email address, only about 100ish out of 8k did
[18:17:27] RoanKattouw: I can tell this story; you go write code. :-)
[18:17:31] James_F: ah, okay. Should I tell them to wait for that, or just to create a new account in the interim?
[18:17:33] We did store the email address, but not the confirmation tokens, so the confirmation links don't work
[18:17:46] Password reset also doesn't work because that requires the email address to be confirmed first ;(
[18:17:54] RoanKattouw: but I confirmed the e-mails
[18:17:58] for those 160
[18:18:01] Oh OK
[18:18:08] They might have reported this before you did that
[18:18:11] I did not generate a password reset e-mail, but they should be able to
[18:18:23] So if they try again, then theoretically they should be able to confirm their email and then change their password
[18:19:11] two users claimed they got confirmation links that were invalid, but now if *I* try and reset their passwords I'm told we don't have an address on file for them
[18:19:57] lfaraone: Yes, that's because their existing e-mail is unconfirmed.
[18:20:08] oh, I thought ori-l confirmed them.
[18:21:06] ori-l: Did you manually set them as confirmed, or did you just re-trigger the initial please-confirm e-mail?
[18:22:17] James_F: the former
[18:22:39] let me pastebin the code
[18:25:07] sorry, my server disappeared.
[18:26:11] RoanKattouw, James_F: this is what I ran: https://gist.github.com/atdt/5807946
[18:26:40] (that's probably a stupid way to run a script on multiple wikis, but I forgot how we usually do it)
[18:27:09] ori-l: Ah, but you didn't trigger a password-reset e-mail?
[18:27:24] nope. I mentioned that in my e-mail
[18:27:50] ori-l: Right-o.
[18:28:38] lfaraone: OK, if they were one of the 160 that ori set, they should be able to reset their password; the other ~7800 are just lost forever. :-(
[18:29:24] though we might be able to release the user name so it can be re-used
[18:29:40] I didn't do that last night because I wasn't sure how to filter out accounts created by SUL
[18:30:43] I saw 'Okeyes (WMF)' with blank e-mail and password for kabwiki and confirmed with Oliver that he had logged on to kabwiki for the first time the previous day, but then when I asked him to try and log out and log in *on kabwiki* it worked fine.
[18:37:25] James_F: I've informed the users, thanks.
[18:45:26] about to run scap
[18:45:46] *brace for impact*
[19:20:46] From #wikipedia:
[19:20:47] <DocPlatypus> http://en.wikipedia.org/wiki/List_of_state_highways_in_Texas -- "Sorry, the servers are overloaded at the moment. Too many users are trying to view this page. Please wait a while before you try to access this page again. Timeout waiting for the lock"
[19:20:47] <DocPlatypus> page came up fine before I logged in. after logging in, I got this error.
[19:20:47] <DocPlatypus> delete all wikipedia.org cookies, page comes up fine again.
[19:20:59] log back in, problem recurs.
[19:21:35] I've never heard of Wikimedia servers doing that
[19:21:52] are you sure it isn't something like perhaps a web proxy messing with his connection?
[19:22:00] wonder what lock it's waiting for
[19:22:04] it's happening to me as well, Jasper_Deng
[19:22:30] yes! finally! someone else has this problem!
[19:22:50] LOL
[19:22:55] Krenair: what happens if you clear all wikipedia.org cookies?
[19:24:07] and now I get "Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon." etc, same as before
[19:25:02] I can browse random articles while logged in without issue, or at least, I have not hit another article that does this
[19:25:22] Maybe cache-related? When you log in you will no longer hit the squids
[19:25:31] isn't that the poolcounter whatever anti-Michael Jackson downtime?
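(The gist linked above is the actual code ori-l ran; purely as a hedged reconstruction of the idea -- mark a stored address as confirmed so password reset becomes possible -- it would have amounted to something like the following against MediaWiki's User API of the time, with $affectedNames standing in hypothetically for the ~160 account names:)

    foreach ( $affectedNames as $name ) {
        $user = User::newFromName( $name );
        if ( $user && $user->getId() && $user->getEmail() ) {
            $user->confirmEmail();   // sets email_authenticated, so Special:PasswordReset will accept the address
            $user->saveSettings();
        }
    }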
[19:25:49] And the PoolCounter class seems related to caching stuff
[19:25:54] probably memcached though
[19:26:00] Nemo_bis, yes.
[19:26:12] "Request: GET http://en.wikipedia.org/wiki/List_of_state_highways_in_Texas, from 98.195.26.149 via cp1002.eqiad.wmnet (squid/2.7.STABLE9) to 10.64.0.130 (10.64.0.130) Error: ERR_READ_TIMEOUT, errno [No Error] at Tue, 18 Jun 2013 19:23:37 GMT"
[19:26:16] if that helps
[19:26:32] "Given enough requests and the item expiring fast (non-cacheable, lots of edits...) that single work can end up unfairly using most (all) of the cpu of the pool. This is also known as the 'Michael Jackson effect', since this effect triggered on the English Wikipedia on the day Michael Jackson died: the biographical article got hit with several edits per minute and hundreds of read hits."
[19:27:08] this is not an article I'd expect to get that many hits though
[19:27:29] could be template-related?
[19:27:36] could be
[19:28:14] willing to spend a good half hour helping you guys figure this out
[19:29:29] this may be a new desperate attempt by Obama to attract attention to decaying USA infrastructure compared to Africa, Asia and the rest of the world
[19:29:40] article loads for me, here is the NewPP limit report:
[19:29:43] Preprocessor visited node count: 743545/1000000
[19:29:43] Preprocessor generated node count: 83287/1500000
[19:29:43] Post-expand include size: 854113/2048000 bytes
[19:29:43] Template argument size: 267956/2048000 bytes
[19:29:43] Highest expansion depth: 22/40
[19:29:44] Expensive parser function count: 471/500
[19:29:44] Lua time usage: 0.789s
[19:29:45] Lua memory usage: 1.36 MB
[19:29:49] "ok, let's use PRISM for something useful"
[19:29:51] se4598_2: are you logged in?
[19:29:57] yes
[19:30:02] Thanks Obama.
[19:30:13] hmm
[19:30:16] Expensive parser function count <-- that's high
[19:30:28] se4598: please don't flood
[19:30:50] use pastebin if you have more than a few lines
[19:31:02] Jasper_Deng: sorry, thought it wouldn't be so many lines
[19:32:11] * Nemo_bis shrugs https://en.wikipedia.org/w/index.php?title=Special:RecentChangesLinked&days=30&from=&namespace=10&target=List+of+state+highways+in+Texas
[19:33:00] Why is db59 lagged by over 2 days?
[19:34:28] http://ganglia.wikimedia.org/latest/graph_all_periods.php?me=Wikimedia&m=cpu_report&r=hour&s=by%20name&hc=4&mc=2&g=cpu_report&z=large also the 'wait' on that top left graph...
[19:34:45] * Krenair doesn't even understand these numbers but they don't look good
[19:34:49] A number of Tampa DBs are lagged badly
[19:36:40] Filing a ticket
[19:36:45] db63 has a problem too
[19:37:05] *db64
[19:37:11] That one has a hardware problem
[19:37:16] alex@alex:~$ python -c 'print float(1277471)/60/60/24'
[19:37:16] 14.7855439815
[19:37:24] yep, that one is over 14 days, heh
[19:38:13] I wonder what will happen now with search being in Tampa :)
[19:38:29] search is in Tampa?
[19:54:26] Temporarily
[19:54:35] Some hardware movements going on in EQIAD
[20:11:29] Where can I find the Bugzilla API documentation for jsonrpc.cgi?
[20:12:29] try googling "bugzilla api"
[20:25:47] Tried to search the API documentation for "closed bug status" and it returned an error. :(
[20:29:13] I started a bug report and then got out of range for wifi. :(
[21:37:32] Krenair: because db59 had hardware issues; then when it was brought back there was a problem with replication, not noticed for some days. Now it is catching up.
[22:39:49] wiki's slow
[22:40:52] ....
[22:40:57] that's it?
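(The PoolCounter mechanism quoted above is configured per work type through $wgPoolCounterConf; a representative sketch, assuming the PoolCounter extension and with illustrative values rather than Wikimedia's production settings. A request that waits longer than 'timeout' for the lock is plausibly what produces the "Timeout waiting for the lock" error seen earlier.)

    $wgPoolCounterConf = array(
        'ArticleView' => array(
            'class'    => 'PoolCounter_Client',
            'timeout'  => 15,  // seconds to wait for the lock
            'workers'  => 2,   // concurrent parses allowed per article
            'maxqueue' => 50,  // refuse outright once this many are queued
        ),
    );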
[22:41:05] doesn't seem slow to me, so ignoring :)
[22:41:09] internet doesn't work
[22:41:42] i click and nothing happens
[22:57:21] anyone have any idea what I should be looking at that creates the {{done}} message when renaming a user on enwiki? (my thought is, could it make similar messages for other actions, such as protect?) :)
[22:58:06] addshore: http://en.wikipedia.org/wiki/Template:Done
[22:58:40] if it doesn't need other templates you can just copy/paste over
[22:59:02] addshore: What do you mean by "{{done}} message"?
[22:59:08] Do you have a screenshot?
[23:00:17] taking one now
[23:00:34] oh actually *goes to find someone to rename*
[23:02:08] Elsie: http://grab.by/nFye
[23:03:19] Well, that just looks broken. ;-)
[23:03:33] Oh, that's copy-paste code?
[23:03:38] Interesting.
[23:03:39] it has compelling ascii-art feel
[23:03:45] a compelling, even.
[23:03:59] oh, you want that green box; the actual {{done}} thing is unrelated then
[23:04:04] 8=========D ~~~~
[23:04:38] https://en.wikipedia.org/w/api.php?action=query&meta=allmessages
[23:04:41] (Large page.)
[23:04:55] Renameusersuccess is the relevant MediaWiki page.
[23:05:17] https://en.wikipedia.org/w/index.php?title=MediaWiki:Renameusersuccess
[23:05:22] well, what I would love is copy-and-paste code for protections to paste onto WP:RFPP :P
[23:05:56] Don't protections return you back to the page?
[23:06:11] I'm not sure there's an equivalent success MediaWiki message.
[23:06:25] *goes to find a page to protect*
[23:06:27] You could do this with JS, I imagine.
[23:07:19] Elsie: They do return you to the page.
[23:07:28] mhhhm, that's what I was thinking, but I'm a major JS failure :D
[23:08:07] * Technical_13 hides.
[23:15:11] Technical_13: you like JavaScript? :D
[23:15:33] Learning it, yep.
[23:16:27] Why?
[23:17:02] want to try and make me the above-described script? :P
[23:18:59] Wasn't watching and have a short log on this droid. Leave me a note on one of my talk pages and I'll let you know what I can do.
[23:20:16] kk :>
[23:20:40] I watch w: mw: and m: the most, I think.
[23:32:20] {done} :)
[23:44:49] Oh
[23:46:00] Yeah, I think you're looking for something similar to what I have planned for the help me template.
[23:53:42] hey, who can approve shell access to labs?