[00:53:02] TimStarling: Have you seen https://wiki.php.net/rfc/optimizerplus ? [00:53:12] "Integrating Zend Optimizer+ into the PHP distribution" [00:53:39] interesting [00:54:43] yeah, it's very good that they are finally releasing and integrating that code [00:55:01] the architecture of the zend engine has always been pretty broken due to the need to support zend optimizer [01:51:35] I'd like to tag logged events in server-side code w/the git SHA of the caller, if available. I think I can get it by calling debug_backtrace and looking for the __FILE__ key of the array representing the caller frame. But before I embark on this adventure, I'd like to know if anyone thinks this would have ruinous implications on performance. [01:52:05] The logging function could plausibly be called up to five times in a single MediaWiki-handled request [01:53:29] What do you mean with the "git SHA of the caller"? [01:53:29] I'll profile it, etc., but maybe this is so stupidly awful that it shouldn't even be attempted? [01:53:57] well, the caller can be presumed to exist either in core or another extension than the logging function, which lives in EventLogging (which wouldn't log any events itself) [01:54:34] OK, so you'd find out which repository the caller is in, and figure out the git SHA that that repo is at? [01:54:54] so more precisely: the git SHA of the HEAD commit of the repository in which the calling code resides [01:54:56] right [01:55:04] Right, OK [01:55:28] I don't *think* that's a problem. I'll point out that there are functions for looking at git SHA1's in SpecialVersion [01:55:37] Or called from SpecialVersion.php rather [01:56:30] right, spagewmf was pointing me toward includes/GitInfo.php earlier. [01:57:39] It seems like 5.4 would make that a bit less expensive by allowing you to specify how many frames you want to get back from debug_backtrace. [01:58:19] OK, if it's not totally insane I'd like to give it a shot. Thanks Roan. 
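The approach being discussed (the real code would be PHP, using debug_backtrace() -- with PHP 5.4's frame-limit argument -- plus MediaWiki's includes/GitInfo.php) can be sketched in Python: given the caller's source file, walk up the directory tree to the repository root and read the commit that HEAD points at. The walk-up logic and the function name are illustrative assumptions, not MediaWiki's actual implementation.

```python
import os

def head_sha(path):
    """Return the HEAD commit SHA1 of the git repository containing `path`,
    or None if `path` is not inside a repository. (Sketch only: does not
    handle packed refs or other less common .git layouts.)"""
    d = os.path.abspath(path)
    while True:
        head_file = os.path.join(d, ".git", "HEAD")
        if os.path.exists(head_file):
            with open(head_file) as f:
                ref = f.read().strip()
            if ref.startswith("ref: "):  # symbolic ref, e.g. a branch
                with open(os.path.join(d, ".git", ref[len("ref: "):])) as f:
                    return f.read().strip()
            return ref  # detached HEAD: the file holds a raw SHA1
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root
            return None
        d = parent
```

In the PHP version, the starting path would come from the caller frame's file entry returned by debug_backtrace(), and the resulting SHA1 would be attached to each logged event.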
[01:58:21] I was in wm-tech puzzling out what happened to the git SHA1's for extensions on production wikis [01:58:59] Oh, yes, there is that [01:59:06] .git may not be well-synced on the cluster [01:59:06] Reedy and I think the .git stuff that gets sync'd has changed [02:01:21] I'll just default to sha1(mt_rand()) and let our poor analysts scratch their heads [02:03:48] ori-l: We all know you can just replace the mt_rand() with 7 [02:03:56] navigator.webCam.capture() and then "enhance" the reflection off the user's eyeballs to see what she is looking at. I saw it on TV once. [12:36:27] <^demon> saper: qchris found the problem :D [12:37:12] So it did the trick? [12:37:15] Great! [12:37:30] what was it? [12:37:41] <^demon> Two copies of mysql-connector residing in ./lib. [12:37:43] An outdated MySQL connector [12:37:56] ooooooooaahhhhhhhhh [12:38:07] And I tried both variants locally. Both worked :-( [12:38:22] no wonder I had trouble reproducing :( [12:38:32] so it returned 0 for number of updated rows [12:38:38] <^demon> Right. [12:38:58] And I started testing gwtorm :) at least learned something. [12:39:36] <^demon> Anyway, I sent an e-mail to repo-discuss saying "Yeah we fixed it." [12:39:54] <^demon> Having gerrit handle the "multiple versions of the same library in ./lib" case might be nice, if it's not hard. [12:39:55] Wonderful. [12:40:25] Gerrit (Java?) does the right thing. It picks one and uses it. [12:40:32] However in our case it was the older one. [12:43:19] ^demon: IIRC this was the last blocker for the gerrit update. So I can start on the project-move-plugin? [12:43:33] it's the classloader [12:43:48] did it end up in the unpacked temp folder with gerrit classes? [12:44:49] <^demon> qchris: Yeah, go back to plugins :) [12:44:53] saper: I did not check. However when logging the MySQL SQL statements, the handshake was done using the older connector. [12:45:03] ^demon: Ok.
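The failure mode above -- two copies of a library in ./lib, with the loader silently preferring the older one -- is not specific to Java classloaders. A Python analogy (the module name and version strings are invented for illustration): whichever copy sits first on the search path shadows the other, with no warning.

```python
import os
import sys
import tempfile

# Create two directories, each holding a different "version" of the
# same module -- like two mysql-connector jars sitting in ./lib.
libdir_old = tempfile.mkdtemp()
libdir_new = tempfile.mkdtemp()
with open(os.path.join(libdir_old, "connector.py"), "w") as f:
    f.write("VERSION = 'old'\n")
with open(os.path.join(libdir_new, "connector.py"), "w") as f:
    f.write("VERSION = 'new'\n")

# Both copies are importable, but the first match on the path wins silently.
sys.path.insert(0, libdir_new)
sys.path.insert(0, libdir_old)  # the outdated copy now shadows the new one

import connector
print(connector.VERSION)  # 'old' -- the stale copy, as happened with gerrit
```

The fix in both worlds is the same: make sure only one copy of the library is on the load path, or have the loader refuse to start when it finds duplicates.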
[12:45:32] great [12:45:59] gerrit unpacks all stuff again and uses a custom classloader, but we should get rid of that [12:46:12] but that's low priority and for gerrit (not for us) [12:46:58] ^demon: remember when I implied that ^J at the end of some email headers might be a local installation issue? :) [12:47:11] <^demon> e-mail headers? No. [12:47:19] <^demon> I remember something about line endings. [12:47:50] yes, in the headers [12:48:34] Subject: =?UTF-8?Q?[Gerrit]_bug_32504:_Import_of_=22MoreBugUrl=22_extension_-_change_(wikimedia...modifications[master])=0A?= [12:49:20] does not happen in gerrit mail from openstack for example, or my local install [12:49:49] annoyance, but may break stuff [12:50:00] <^demon> How would we work around it? [12:50:11] I am not sure where it coms from? [12:50:13] comes from [12:50:16] the code seems fine [12:51:15] it's in the velocity templates gerrit-server/src/main/resources/com/google/gerrit/server/mail/ChangeSubject.vm: [12:51:25] <^demon> I hate those templates. [12:51:31] <^demon> They annoy me :p [12:55:18] is email getting out of gerrit-dev? [12:55:18] <^demon> No, shouldn't be. [12:55:18] <^demon> iirc, I set sendemail.enable = false [13:15:04] <^demon> I wonder when stable-2.5 is going to get merged to master again. [13:15:26] <^demon> If that happens soon (as in, the next couple of days), that would probably be a good build to deploy. [13:15:29] <^demon> So we get all the minor 2.5.x fixes as well. [13:16:01] <^demon> 2.5.2 release notes were merged, which makes me think a release is soon. [14:09:56] !! [14:10:12] yay thought police is over [14:15:32] eh? [14:16:33] Wikipedia: Are Wikipedia software development engineers of the caliber that could work as SDEs at Google / Facebook / Amazon / etc? [14:16:34] oh men [14:16:43] are we THAT good? 
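The ^J saper is chasing is visible in the Subject line quoted above: the Q-encoded word ends in =0A, which decodes to a literal newline. Python's stdlib can confirm this from the header text alone:

```python
from email.header import decode_header

# The Subject header quoted above; "=0A" at the end of the Q-encoded
# word decodes to a raw newline -- the stray ^J.
raw = ("=?UTF-8?Q?[Gerrit]_bug_32504:_Import_of_=22MoreBugUrl=22_extension"
       "_-_change_(wikimedia...modifications[master])=0A?=")
decoded_bytes, charset = decode_header(raw)[0]
subject = decoded_bytes.decode(charset)
print(repr(subject))  # the decoded subject ends with '\n'
```

A bare newline inside a decoded header value is exactly the sort of thing that some mail software tolerates and other software chokes on, which is why it "may break stuff" even though it looks like a mere annoyance.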
http://www.quora.com/Wikipedia/Are-Wikipedia-software-development-engineers-of-the-caliber-that-could-work-as-SDEs-at-Google-Facebook-Amazon-etc [14:18:13] <^demon> I really wish I could read quora without registering. [14:28:03] ^demon: seems like I don't need to [14:29:18] you prolly already have a cookie [14:29:28] it will let me read the first answer it says [14:29:56] yeah it fuzzes out the rest of them [14:31:22] ohh [14:31:27] only the first comment can be read [14:31:37] I guess that is why I never go to quora [14:31:52] I can't read Leslie / Roan replies :( [14:32:01] yep [14:33:12] + I am not going to log in with a Facebook account :D [14:36:20] abbb [14:36:23] never ending story [14:36:32] https://m.mediawiki.org/ gives out the wrong cert, *.wikipedia.org [14:40:00] is quora still invite-only outside the USA? [14:48:04] no idea [14:52:39] doh [14:53:20] 4040 bugs against MediaWiki [14:53:32] Nemo_bis: we are never going to get that done :( [14:56:27] heh [14:56:40] Wikimedia>General is still the worst [14:56:44] well, after LQT of course [15:00:32] LQT is a fuoriclasse / hors classe (i.e. in a class of its own?) [15:01:02] <^demon> bug 9685 should probably go somewhere else. [15:01:07] <^demon> GeoData? [15:01:08] <^demon> Or something. [15:01:19] <^demon> Just not Wikimedia / General [15:02:35] Allow WYSIWYG editing https://bugzilla.wikimedia.org/show_bug.cgi?id=5398 [15:02:36] *sigh* [15:02:48] "Wikimedia>General" means "nobody accepts this as responsibility (yet)" [15:03:18] <^demon> I use it also for "Wikimedia-specific bugs that don't belong in one of the other components" [15:03:32] so that WYSIWYG editing bug, we know it is going to be fixed with the visual editor [15:03:41] thus should I just mark it as fixed?
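On the wrong-certificate report for https://m.mediawiki.org/: a *.wikipedia.org certificate can never validate m.mediawiki.org, because a wildcard only stands in for a single label under the same parent domain. A simplified sketch of the matching rule (real validators implement RFC 6125 with more restrictions, e.g. the wildcard must be the left-most label):

```python
def cert_name_matches(pattern: str, hostname: str) -> bool:
    """Simplified certificate-name match: a '*' label matches exactly one
    hostname label; every other label must match case-insensitively."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # '*' never spans multiple labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(cert_name_matches("*.wikipedia.org", "en.wikipedia.org"))   # True
print(cert_name_matches("*.wikipedia.org", "m.mediawiki.org"))    # False
print(cert_name_matches("*.wikipedia.org", "m.en.wikipedia.org")) # False
```

So the browser warning here is correct behaviour: the mobile mediawiki.org hostname simply was not covered by the certificate being served.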
:-D [15:04:08] I don't think there is any point in keeping that bug around [15:05:24] isn't there a visualeditor tracking bug or something [15:05:36] I am closing it [15:11:07] well [15:11:15] some stuff to do + swimming pool [15:11:20] will be around later tonight [15:19:54] 1 [15:19:56] err [15:20:21] ^demon: i, uh, found a silly bug in the code you merged for me yesterday :/ https://gerrit.wikimedia.org/r/#/c/47083/1 [15:20:30] could you approve the fix? this is just too silly :P [16:42:14] Raymond_: or would you? [16:42:16] DanielK_WMDE: done [16:42:16] <^demon> Oh, whoops. [16:42:16] <^demon> Was just looking. [16:42:17] ^demon, Raymond_: thanks (and sorry)! [16:42:17] <^demon> Np. [16:42:29] this is a test of logbot... [18:58:08] j^: "This wiki does not accept filenames that end in the extension ".JPG"." [18:58:17] * AaronSchulz doesn't recall getting that before in testing [18:58:28] * AaronSchulz checks his config [18:59:06] 'jpg' is in $wgFileExtensions [18:59:20] Weird. IIRC we try to normalize the filename before running it through config checks. [19:01:11] brion: https://gerrit.wikimedia.org/r/#/c/47094/1 easy review :) [19:01:52] <^demon> Just +2'd. [19:03:01] * AaronSchulz finishes whoring out for reviews for now [19:05:24] whee [19:05:31] ^demon wins this round [19:06:07] <^demon> The student has eclipsed the master ;-) [19:07:37] :D [19:47:03] hi [20:00:04] <^demon> hashar: Hey, I was wondering something. [20:00:18] hold on hold on :-) [20:00:28] trying to find a cheap flight with Doreen [20:00:38] <^demon> No rush. [20:12:23] ok I got a deal [20:12:34] will enjoy a layover in the beautiful airport of Minneapolis \O/ [20:12:46] <^demon> Never flown through there. [20:15:10] I never landed elsewhere than in SFO [20:15:39] ^demon: so what were you wondering? [20:15:45] <^demon> So yeah, my question was re: something I saw in jenkins. Under the "You have data stored" message on the manage jenkins page, there's another notice in red. 
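AaronSchulz's ".JPG" rejection above is the classic case-normalization bug: if the configured list holds only lowercase entries, the check must lower-case the file's extension before comparing. A hypothetical sketch (the allowed set mirrors a typical $wgFileExtensions, not the actual config, and the helper name is invented):

```python
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif", "svg"}  # assumed config

def extension_allowed(filename: str) -> bool:
    """Accept 'Photo.JPG' as well as 'photo.jpg' by normalizing case
    before checking against the lowercase allow-list."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in ALLOWED_EXTENSIONS

print(extension_allowed("Photo.JPG"))  # True once the case is normalized
print(extension_allowed("shell.php"))  # False
```

If the normalization step is skipped (or runs after the config check instead of before), "JPG" fails the membership test even though 'jpg' is in $wgFileExtensions, which matches the error message seen here.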
[20:15:46] hashar: Be aware that you will be passing through US immigration and customs at MSP, not SFO [20:16:04] Which means lines and hassle in MSP, picking up your luggage (and then handing it off again) in MSP, and a quicker exit at SFO [20:16:22] RoanKattouw: ahh good to know. Cause there is like a 5 hour layover in MSP [20:16:35] (notice how Roan managed to remember that the Minneapolis airport code is MSP) [20:16:48] <^demon> Roan knows all airport codes. [20:16:58] each time i came to SF it took me a good half hour just to get off the 747 [20:17:08] <^demon> RoanKattouw: Quick. TVL? [20:17:12] and then roughly 2 hours to pass through customs / pick up luggage / get on the BART :( [20:17:29] ^demon: Quicker than spending 45 mins in an immigration line [20:17:43] !lookup airportcode BVA [20:18:04] <^demon> RoanKattouw: No, I was seeing if you knew what airport TVL was :p [20:18:04] (tip: 3rd Paris airport) [20:18:25] Oh, heh [20:18:27] No I don't [20:18:31] anyway, I wanted to get the AMS -> SFO flight, but it is overpriced on the weekend I have to fly :( [20:18:31] I didn't know about BVA either [20:18:42] KL605 is usually overpriced [20:18:46] <^demon> Tahoe. [20:18:47] <^demon> (Not Reno-Tahoe, which is RNO) [20:19:04] Wow, TVL is in my home state [20:19:07] the airport is small but can take a Boeing 747 (not sure about the A380) [20:19:10] That's somewhat embarrassing [20:19:38] also BVA is used by low-cost companies for some European flights. You can take a bus from Paris to BVA then get your Ryanair / EasyJet flight there for a very cheap price. [20:19:51] hashar: Is BVA the one that's almost in Belgium? [20:20:04] that is also known as the 3rd Paris airport.
Whenever CDG / ORY are saturated or in fog, planes might end up being routed to BVA [20:20:21] RoanKattouw: na, BVA is roughly 75km away from Paris [20:20:32] <^demon> hashar: So yeah, that third message on the manage jenkins page? [20:20:37] <^demon> Is that something important? [20:20:38] Belgium is maybe 150km / 200km farther to the north [20:20:42] Right, OK [20:20:56] <^demon> The one that starts "Because of" [20:21:05] ^demon: ah looking. That is the "manage jenkins" page ? [20:21:11] <^demon> Yes. [20:21:29] * hashar wonders if he should start an Airport Code Foundation with Roan. [20:21:32] haha [20:22:14] <^demon> hashar: It sounds kind of...bad. [20:22:18] ^demon: I don't know what to do with it. Jenkins had a security issue that would potentially compromise the API keys in a very specific scenario (aka when using jenkins slaves) [20:22:34] ^demon: the message offers to regenerate all API keys, which are used by Zuul. [20:22:47] so rekeying might break the Jenkins - Zuul connection. [20:23:08] the API key is in the secret puppet repo, so I would need to sync with someone from ops to update it in case it got changed. [20:23:11] so hmm [20:23:15] <^demon> Hmm :\ [20:23:17] I did not bother to update it yet [20:23:36] <^demon> Probably worth scheduling some downtime for. [20:23:38] I should have applied for an Ops position hehe [20:23:42] yeah [20:24:05] I usually don't properly schedule my ops interventions, besides with one of the European ops [20:24:32] we end up doing them during the European morning [20:24:39] I should probably schedule that one properly. [20:26:02] <^demon> Yeah, I see the bit about only affecting setups w/ slaves, so we're probably ok for now. [20:26:09] <^demon> But yeah, should be done eventually :) [20:26:14] <^demon> Ok, thanks for clarifying. [20:26:31] should probably have dropped you an email earlier when I got Jenkins upgraded [20:26:32] sorry [20:26:56] <^demon> Oh, we've pencilled in the 11th for the gerrit upgrade.
What needs doing with zuul so it doesn't flip out again? [20:27:30] ahh [20:27:44] yeah so I wanted to upgrade my Gerrit instance to 2.6 [20:27:46] and test out zuul there [20:28:05] then eventually started to look for a war file and an upgrade process, got lazy and moved on to something else. [20:28:14] I guess I can get my zuul in labs to connect to your 2.6 install. [20:28:30] <^demon> Ok, so I haven't picked which exact build we're using yet. [20:29:02] <^demon> The build from today at least contains everything we need. [20:29:07] <^demon> A couple of things went in as of yesterday. [20:29:29] <^demon> But yeah, you can grab builds from https://integration.mediawiki.org/nightly/gerrit/wmf/ [20:29:47] <^demon> Any of those are ones I've pinned as being worth using. Newer dates are generally best. [20:30:58] rekeying is https://bugzilla.wikimedia.org/show_bug.cgi?id=44592 [20:31:03] will poke ops next week [20:31:21] ^demon: then put the gerrit.war in place and restart? [20:31:53] <^demon> Grab it, run `java -jar gerrit.war init -d /var/lib/gerrit2/review_site/` [20:31:55] <^demon> Then restart. [20:32:06] <^demon> Rather, stop, run init, then start. [20:32:07] <^demon> :) [20:32:32] hmm [20:32:42] that sounds too simple [20:32:47] <^demon> :) [20:32:49] * hashar tries [20:33:08] * hashar asks wife to choose one file from https://integration.mediawiki.org/nightly/gerrit/wmf/ [20:34:47] ahh [20:35:41] <^demon> Ah, I didn't promote today's build. [20:35:44] <^demon> Wait one moment. [20:36:12] <^demon> gerrit-336eb70b51fe2328d4dd21fef3c78ba11e32758d.war is a good build. [20:36:35] 0x0f 0x1f [20:36:58] (ok, tried to be creative by using the hex code for NOP on x86) [20:38:26] starting [20:38:29] with init script [20:38:56] [2013-02-01 20:38:28,729] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review fatal: No names found, cannot describe anything.
ready [20:38:58] ahha [20:38:59] fatal [20:39:01] but ready [20:39:22] ah that is the version number [20:39:26] the interface shows the same [20:39:30] http://integration.wmflabs.org/gerrit/#/q/status:open,n,z [20:39:49] <^demon> Wow, weird. [20:39:56] <^demon> That's not happened before. [20:39:57] might be my setup [20:40:09] integration-jenkins2 [20:40:50] 2013-02-01 20:40:40,703 INFO zuul.Jenkins: Build 91fd7ae8bc5043c59c596202835a5177 #45 started, url: http://blblbllb:8080/ci/job/mediawiki-core-lint/45/ [20:43:10] <^demon> What's the failure? I don't have that hostname. [20:43:56] <^demon> Does Zuul use SSH or JSON-RPC to talk to Gerrit? [20:44:13] ssh gerrit stream-events [20:44:30] <^demon> And then ssh gerrit review for the other stuff? [20:44:31] it maintains a connection permanently and attempts to reconnect (sometimes) [20:44:46] yeah the rest is ssh too, such as adding comments [20:44:55] <^demon> Ok good, no json-rpc. [20:44:58] is the JSON-RPC a new thing? [20:44:58] <^demon> I was afraid for a moment. [20:45:17] <^demon> No, JSON-RPC is being removed very rapidly in favor of the new public stable restful api. [20:45:25] <^demon> So I was afraid something was using it. [20:46:02] and git-review uses ssh too [20:46:05] <^demon> Yep. [20:46:21] do you know the https://integration.mediawiki.org/zuul/status page ? [20:46:33] (that gives the list of jobs in Zuul pipelines) [20:46:45] the openstack guy made a json service out of it [20:46:49] and built a nice web page http://zuul.openstack.org [20:47:13] <^demon> Ooh, that'd be nice. [20:47:14] he even uses a python module to write stats to graphite and generate neat graphs [20:47:34] I will set up something like this for us [20:47:43] whenever I migrate the integration docroot out of puppet [20:48:05] (I want us to be able to update integration.wm.org by ourselves without having to beg for a merge + merge on sockpuppet + puppetd ) [20:48:15] less work on the ops side [20:49:51] i dont see that happening.
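The Zuul-Gerrit link described above boils down to a long-lived `ssh <host> gerrit stream-events` pipe that emits one JSON object per line. A minimal consumer sketch (the sample event below is synthetic, shaped like Gerrit's patchset-created events rather than copied from a real stream):

```python
import json

def parse_events(lines):
    """Yield (event_type, event) pairs from a gerrit stream-events feed,
    where each non-empty line is one JSON-encoded event."""
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines in the stream
        event = json.loads(line)
        yield event.get("type"), event

# Synthetic sample event:
sample = ['{"type": "patchset-created", "change": {"number": "47116"}}']
for etype, event in parse_events(sample):
    print(etype, event["change"]["number"])  # patchset-created 47116
```

In the real setup the input would be the stdout of the persistent ssh process, which is why Zuul also needs the reconnect logic mentioned above: if the ssh connection drops, the event feed silently stops until it is re-established.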
[20:49:59] the entire point is nothing is live on the cluster without ops merging. [20:50:18] (the sockpuppet manual merge step is intentional!) [20:50:50] but meh, i just make the donuts. [20:51:09] (should have phrased it "why would that happen", not that i dont see it happening, i have no idea what goes on around here ;) [20:55:10] RobH: yeah I was not complaining about the process :-D [20:55:24] just that historically the html pages for integration.mediawiki.org have been in puppet [20:55:37] and that does not make real sense, that just adds some manual burden on your shoulders :-D [20:55:39] thats ok, my comment was a token ops comment since all my fellow opsen are traveling ;] [20:55:58] ahh [20:56:02] are they all sent to FOSDEM ? [20:56:20] well, most are, or are traveling to something else unrelated but close in dates [20:56:34] that explains the silence today [20:56:35] so [20:56:44] im pretty sure im the only ops person in the office (other than ct) [20:56:48] lets deploy my python rewrite of mediawiki ;D [20:56:59] you have access to the DC, that should cover us [20:57:06] its not yet 5pm here, we should wait for 3 more hours [20:57:09] push at 4pm pst [20:57:11] ;-] [20:57:27] if its worth doing, its worth doing at quitting time \o/ [20:57:52] * hashar notices it is 10pm already and still has some mails to unpile [21:01:54] or [21:01:59] I should rewrite wikibugs in python [21:24:37] ^demon: is the Gerrit upgrade in the engineering calendar yet ? [21:24:59] <^demon> Rob's trying to finish cat herding to make sure we've got a good time. [21:28:09] I guess in the morning for ya? [21:28:17] <^demon> Na, we'll do it late. [21:28:21] ahh [21:28:35] so you might have to reload / restart zuul :-D [21:30:23] hashar: you're here? [21:30:28] w00t! [21:30:30] yes I am [21:30:34] did you get my email? [21:31:04] yup [21:31:09] hexmode: I think I replied to you [21:31:21] so you get a node.js project for Jenkins [21:31:26] what is the project ?
:-] [21:31:35] hashar: and then I replied to you, and you didn't get that, I think [21:31:39] :P [21:31:53] ok, I will dig it out and resend [21:31:58] indeed [21:32:04] I did not get the 2nd one [21:32:08] ok [21:32:12] while I'm here [21:32:15] send it to the .fr one please :-D [21:32:25] anyone know about renameuser? ^demon? [21:32:52] <^demon> Not really :\ [21:33:07] <^demon> hashar: Where are the zuul docs? [21:33:20] ^demon: Reedy knows, though, right? [21:33:35] <^demon> Not sure. What's up? [21:33:50] ^demon: I left a note on wikitech but the doc is on mw.org [21:33:52] wikitech : https://wikitech.wikimedia.org/view/Zuul [21:33:55] so ops can find it [21:34:00] doc https://www.mediawiki.org/wiki/Continuous_integration/Zuul [21:34:09] short story: logs are /var/log/zuul/zuul.log [21:34:19] restart is /etc/init.d/zuul restart [21:34:21] plain and simple [21:34:22] ^demon: just have some people who want to use it. I know there were problems in the past -- at least there seemed to be from bz -- and I wanted to find out what the current status was [21:34:23] (usually) [21:35:10] <^demon> hexmode: I think it works fine for most people. I think it's just on large setups like WMF where it causes problems. [21:35:12] <^demon> But YMMV. [21:35:26] !google define YMMV [21:35:27] https://www.google.com/search?q=define+YMMV [21:36:16] ok [21:36:35] now lets go find out what "Your mileage may vary" might mean [21:37:23] hashar: it means "This is what I think happens, but I can't guarantee you'll see the same thing." [21:37:39] ahh [21:37:45] like "works for me" on a bug report [21:37:56] similar, yes [21:39:53] so hmm [21:40:05] hexmode: have you resent the email? [21:40:15] hashar: sorry [21:40:23] 3 different conversations [21:40:28] I'm on it! [21:40:43] oh ok [21:40:53] going to bed soon so you might want to prioritize me haha [21:41:21] hashar: ok, so it looks like I lost it or didn't send it [21:41:35] hashar: are you going to be up for, say, 15min?
[21:41:45] yeah [21:41:55] Ok, let me bang this out again [21:50:43] hashar: sent [21:51:05] New patchset: Hashar; "jobs for operations-debs-adminbot" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/47116 [21:53:04] New patchset: Hashar; "triggers for ops/debs/adminbot" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/47117 [21:53:23] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/47117 [21:53:28] hexmode: deploying that ^^^^^ [21:55:50] hexmode: got it [21:56:24] ahh [21:57:01] so no zuul :-D [21:57:05] you probably don't want to use that [21:57:15] that is like using Big Bertha to kill a mouse :-] [22:08:02] hexmode: reply sent [22:13:04] New patchset: Hashar; "jobs for operations-debs-adminbot" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/47116 [22:13:33] New review: Hashar; "jobs generated in production." [integration/jenkins-job-builder-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/47116 [22:14:28] New patchset: Hashar; "pyflakes for ops/debs/adminbot" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/47130 [22:14:41] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/47130 [22:40:58] heading to bed! [22:41:02] see you later everyone [22:41:11] hexmode: feel free to mail me whenever you want :-] [23:05:44] Should we have images where an old version has a newer timestamp than the current version? [23:08:38] AaronSchulz- You mess with images. ^ [23:15:07] Where is Http:get documented ? I see it used in MWSearch/MWSearch_body.php [23:15:23] Http::get I should say [23:16:14] Looked under extensions/Solarium but didn't see the get() method.
[23:20:26] xyzram: It's in MediaWiki code [23:20:30] includes/HttpFunctions.php IIRC [23:20:31] xyzram- Not sure where it's documented, but look in includes/HttpFunctions.php in core [23:21:06] when in doubt, read the source ;) [23:22:22] Ah, I see it now thanks; missed it in the voluminous grep output. [23:43:03] anomie: you mean last-modified or oi_timestamp? [23:43:41] AaronSchulz- oi_timestamp, newer than img_timestamp [23:45:00] I can't thing of a good reason it should, might be caused by selective delete+restore or something [23:45:03] *think [23:45:31] Selective delete+restore is how I managed to do it [23:45:51] (delete, restore old version, restore new version) [23:46:28] jdlrobson: with regard to 'carriers' and central notice -- I'm writing up the mingle card for it now -- does it make sense that mobilefrontend takes care of the ip -> carrier mapping and simply passes it along to CN via the GET string or via cookie? or is there another method we should be considering? [23:47:13] um i'm not too familiar with the code.. preilly or brion would be best people to check with. I suspect that would be the way it works though [23:48:02] mwalker: central notice doesn't do anything with carriers that i know of? [23:48:20] the ZeroRatedBanner does use the carrier [23:48:31] as detected by Varnish proxies from IP range and inserted into HTTP headers [23:49:26] anomie: I think that is known then [23:49:45] not sure why the behavior was never changed