[13:55:28] * drdee feels like cacophonix, aiii yayaaaayyaaaahhyaaaaa,
[13:57:18] AIIII YAAYYYAAAAAYYHAAAAAHHHHHAAAAAAA
[13:57:26] lol
[13:59:16] for those folks who don't know cacophonix: http://static.tvtropes.org/pmwiki/pub/images/Assurancetourix_9992.jpg
[14:03:13] average: what's the status of 701?
[14:03:25] ottomata: what's the status of 1093?
[14:03:35] qchris: what's the status of 1079?
[14:03:47] milimetric: what's the status of 1111 and 1120?
[14:04:13] can we get some kind of fancy IRC room mingle # to url translation?
[14:04:14] :)
[14:04:14] (i think we need an irc bot that converts mingle card numbers to hyperlinks)
[14:04:19] :)
[14:04:20] yES!
[14:04:21] drdee: Still in coding and testing :-(
[14:04:27] still in coding and testing
[14:04:30] k
[14:04:31] k
[14:04:33] also - : (
[14:04:34] um, drdee, i think done? i'm not working on it anymore, it did what I wanted it to do
[14:04:39] kool
[14:04:39] maybe it needs demo?
[14:04:41] or maybe
[14:04:46] yeah i guess
[14:04:46] demo?
[14:04:51] yeah i think so
[14:05:32] is there a python irc bot that we can modify?
[14:05:53] milimetric and 1111?
[14:06:03] i put it in coding and testing
[14:06:06] k
[14:06:10] but haven't heard from EZ
[14:06:11] you got all the data?
[14:06:14] mmmmmmm
[14:06:15] he said 8 hours last night
[14:06:24] and I'm assuming something else went wrong
[14:06:45] average, around
[14:06:46] ?
[14:07:05] this painful process alone should be reason enough to work on moving some of that logic to Hadoop
[14:07:11] aight
[14:07:21] ottomata: any news about testing openjdk 7?
[14:07:33] yeah, well, the hive stuff I did yesterday was kinda to test that I guess?
[14:07:35] and that worked fine
[14:07:43] so, i'm waiting for this giant terasort that toby and I started to finish
[14:07:47] sweet
[14:07:48] it looks like that will be another day or two
[14:07:56] and then I want to run the smaller easier benchmarks and save the results
[14:08:01] let me put that job on the wall
[14:08:11] then I can install openjdk 7 in prod
[14:08:34] i'm going to try to grab leslie to do hue/oozie/hive today or tomorrow (I think she doesn't know it yet :) )
[14:08:36] what was the title of that mingle card?
[14:08:37] so you can put that on the wall too
[14:08:41] uhhhh dunno?
[14:08:48] you created it with tnegrin!
[14:08:53] hmmm
[14:09:05] if you create the card you own it :D
[14:09:12] i didn't create it!
[14:09:15] toby did
[14:09:17] :)
[14:09:29] rules are different for me
[14:09:36] I create it - u own it
[14:09:38] ;)
[14:09:42] :D
[14:09:51] tnegrin: remember the title of the card?
[14:10:00] ha - no
[14:10:06] can you look in the history
[14:10:09] sure
[14:10:12] i will dig it up
[14:10:13] np
[14:10:45] 1114
[14:10:51] https://mingle.corp.wikimedia.org/projects/analytics/cards/1114
[14:10:55] Hadoop: Run and retain results of Hadoop benchmarks
[14:11:25] yeah, and drdee, can you think of a good way to save those results? i guess on the kraken wiki pages?
[14:11:28] hm
[14:11:40] google doc?
[14:11:49] it's tabular data
[14:12:20] and if we want to calculate deltas in performance then a google spreadsheet is useful
[14:12:26] just have it as a public link
[14:12:29] and link from a wiki
[14:12:31] ok cool
[14:12:33] i like it, danke
[14:12:33] my 2 cents
[14:12:42] good i like that better too
[14:16:01] milimetric: what do we do with ez, wait or poke?
[14:16:53] I don't think there's any reason to poke
[14:17:01] he's working on it full time, I'm sure
[14:17:25] yes but we don't know what to expect
[14:17:51] well, the way I approach it is that I expect there to be no data unless he sends me data
[14:18:06] I've been pleasantly surprised 100% of the time so far.
[14:18:50] but you're welcome to ask, I'm just saying I've never seen much reason to ask
[14:19:29] i know he is working on it
[14:19:35] but we don't know the nature of the delays
[14:22:04] drdee, I'm actually a little blocked on the things we prioritized yesterday (openjdk 7, hue/hive/oozie)
[14:22:05] which of these (or something else) should I work on?
[14:22:05] - Camus timestamp bucketing using kafka message key
[14:22:05] (I'm thinking we shouldn't worry about this right now)
[14:22:05] - Hive + JSON + Camus + partitioning
[14:22:06] - Look into remote DC kafka/zookeeper options (probably have to work with mark on this)
[14:22:28] the latter we will talk about tomorrow
[14:22:37] i think Hive + JSON + Camus + partitioning
[14:22:50] that's important
[14:24:18] are you now doing https://mingle.corp.wikimedia.org/projects/analytics/cards/734 ?
[14:24:41] ottomata: ^^
[14:25:02] want to do that with leslie this week, maybe today? i just sent her an email
[14:25:14] oh we aren't migrating the oozie database, right?
[14:25:16] on that card?
[14:25:44] i was referring to https://gerrit.wikimedia.org/r/82611
[14:25:50] i thought it was mingle card 734
[14:26:07] yeah it is
[14:26:12] where is the oozie database running right now?
[14:26:19] it isn't, we blasted it
[14:26:42] ok
[14:26:46] but then i guess it needs to be reinstalled?
[14:26:54] yes, but not 'migrated' :)
[14:27:00] puppet should do everything for us automatically
[14:27:47] sorry, not 100% following
[14:28:02] it's ok, there is just something that is in scope on that card that is no longer relevant
[14:28:04] i will take it out
[14:28:11] ok
[14:28:39] ping average
[14:30:36] milimetric: do you want to demo 1094?
[14:31:48] milimetric: what about 1122?
[14:32:38] 1122 isn't on the board
[14:32:41] so I forgot about it :(
[14:32:56] but I did work on it for 30 minutes on Friday
[14:33:05] that's due tomorrow too, isn't it
[14:33:06] my bad
[14:33:09] yes
[14:34:01] drdee, do you want me to demo varnishkafka stuff tomorrow?
[14:34:03] or another time?
[14:34:10] I'm feeling very anti-demo and wanting to skip meetings right now :)
[14:34:20] but someone can demo 1094, sure
[14:34:32] who can demo 1094?
[14:34:39] anyone, it's just a map
[14:38:07] ok, I
[14:38:07] I'll demo it
[14:38:07] ty
[14:38:07] ottomata: you mean today?
[14:38:07] or are you referring to the meeting with mark?
[14:38:26] i mean today
[14:38:37] basically the demo i gave you guys last thurs
[14:38:42] s'ok if not
[14:40:15] is there a way we can demo it in a less technical way that will be understood by our audience?
[14:40:36] (i would like to show it off somehow)
[14:42:19] hm, not really
[14:42:31] it's mostly varnishkafka and camus and failover stuff
[14:42:35] maybe wait then
[14:42:47] we can show this off a little more when it is in prod-ish
[14:42:49] instead of labs
[14:42:58] and once we get hive stuff figured out
[14:54:02] so guys it seems that only dan & stefan have stuff to demo:
[14:54:11] - 1094 (https geomap)
[14:54:15] - new features for wikimetrics
[14:54:20] are we all cool with that?
[14:54:31] average, ottomata, qchris, milimetric: ^^
[14:54:48] sure
[14:54:50] Yes, of course
[14:55:05] thanks!
[15:29:12] woke up
[15:29:47] milimetric: do you think we can attempt a deploy ?
[15:30:04] yes, give me like 15 minutes
[15:30:09] ok
[15:30:16] the problem is it'll need customer sign-off anyway before we can showcase it in the sprint demo
[15:30:32] so today, just get ready to demo the stuff we showed last Thursday
[15:30:47] make sure you have cohorts, etc., run through a few reports
[15:45:48] ok average
[15:45:51] my time's up :)
[15:45:58] let's try to deploy
[15:46:20] hangouting
[15:47:06] average: https://plus.google.com/hangouts/_/1bd391e378f5a4c2d717fe23918d6c63cba5e1c6
[15:50:16] (Abandoned) Milimetric: Made a small change for Christian to demo gingle #for 1112.3 [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/81942 (owner: Diederik)
[15:50:46] drdee, I just moved #734 to done
[15:50:58] you are the man!
[16:13:21] (PS7) Stefan.petrea: Implemented Survivor metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/81421
[16:24:19] (PS8) Stefan.petrea: Implemented Survivor metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/81421
[17:03:43] (PS1) Milimetric: work in progress on timeseries [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/82636
[17:04:04] milimetric: it's time to start using gingle :)
[17:04:15] oh yea?
[17:04:21] you tested it and it's ready?
[17:04:21] why not :)
[17:04:31] i did test it
[17:04:34] so how do we handle the authentication piece?
[17:04:41] just leave it open for now?
[17:04:55] ohhh right
[17:05:16] GINGLE!?
[17:05:17] one option is to make it https and to just give every gingle user a passkey
[17:05:21] that they configure
[17:05:23] i was thinking to enable basicauth in apache
[17:05:35] YES OTTOMATA!
[17:05:38] GINGLE
[17:05:46] yeah, basicauth is no good without https though
[17:05:54] gingle bells :)
[17:06:01] not sure if you can do https in labs
[17:06:08] yeah, otto set it up a bunch of times
[17:06:10] xmas is close
[17:06:19] haha :)
[17:06:31] https is enabled for wikimetrics, drdee
[17:06:33] ottomata can we do https non labs?
[17:06:37] that's true
[17:06:49] on labs?
[17:06:51] or non labs?
[17:06:53] :p
[17:08:13] non -> on
[17:08:56] om -> nom
[17:11:04] haha
[17:11:07] yes ssl anywhere!
[17:11:15] ottomata: Say (hypothetically) if zero logs jumped from 2.0M yesterday to 2.2M today, and the newly added carriers total up for only 5K, would that make you say "Yes, qchris, that's because we did X"? (For some known X)
[17:11:19] just not official cert anywhere
[17:11:29] no chris, that would not make me say that
[17:11:31] :(
[17:11:41] That's the wrong answer :-(((
[17:13:48] ottomata, could you enable basicauth and https on the apache instance on limn0 for gingle.wmflabs.org ?
[17:15:51] you puppetizing this thang or you just want a little 's' slapped on your site?
[17:15:53] (PS2) Milimetric: work in progress on timeseries [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/82636
[17:17:07] just slap it !
[17:21:07] drdee: did we capture the need to make sure the MaxMind database is up to date?
[17:22:29] the maxmind db is already updated automatically, i think as a user acceptance criterion for future hadoop jobs we need to add that we are able to quickly replicate an analysis from an arbitrary point in time using an arbitrary maxmind db
[17:23:12] I'm not sure I understand -- did we just make that fix?
[17:24:14] how did it happen if the database is automatically updated?
[17:25:11] on stat1002 the database is automatically updated through puppet
[17:25:34] on kraken that did not happen, but now that kraken is puppetized it will also happen
[17:25:54] not true
[17:25:59] ok
[17:26:11] the geoip.dat files are updated at /usr/share/GeoIP by puppet automatically
[17:26:19] so why did it not happen on kraken?
[17:26:19] I don't need a solution, but I will create a bug
[17:26:20] putting them into HDFS is not puppetized
[17:26:33] ok that's the part
[17:26:34] also it seems like Erik Z's are also out of data
[17:26:40] date
[17:31:55] i think putting them into HDFS is the same issue as jar hell
[17:32:08] if we figure out jar hell, we can figure out .dat (and other) versioned files in hdfs
[17:32:35] i think they are slightly distinct
[17:32:40] tnegrin: created https://mingle.corp.wikimedia.org/projects/analytics/cards/1131?version=1
[17:33:01] updating the card now
[17:33:02] ok -- how do I like this to the bug I created
[17:33:06] link this
[17:35:54] tnegrin: you just wait and it happens automagically
[17:36:44] I'd like to link it to the HDFS card you just created -- should I do that manually?
[17:37:55] it will get imported into mingle automatically
[17:38:15] but you can add the link to 1131
[17:50:21] tnegrin: check https://bugzilla.wikimedia.org/show_bug.cgi?id=53762, it now contains a link to card 1132 which was imported into mingle
[17:50:32] i fixed the linking, that was a small bug
[17:55:52] cool -- I get this message when I try to access the card: Either the resource you requested does not exist or you do not have access rights to that resource
[17:56:31] i get that too
[17:56:44] what card?
[17:57:12] wait
[17:57:26] sorry wrong link
[17:57:30] https://bugzilla.wikimedia.org/show_bug.cgi?id=53762
[17:57:45] i will fix it after the demo
[17:57:52] but check https://mingle.corp.wikimedia.org/projects/analytics/cards/1132
[17:57:57] that's the imported card
[17:58:25] and they cross-reference each other
[17:58:55] ok -- cool
[18:26:01] J-Mo1: yt?
[18:26:23] yep DarTar
[18:26:26] what's up?
[18:27:13] do you want to stick around to chat about the single-user ranges with milimetric and drdee? (and is it appropriate to use the sprint planning slot for a quick check-in on this?)
[18:27:29] sure. same hangout?
[18:27:44] drdee ?
[18:27:44] I've got 30 min 'til my next meeting DarTar
[18:27:56] sprint planning first
[18:28:14] alright, so let's schedule something for another time
[18:28:31] kk. next Mon - Wed maybe?
[18:29:19] wfm, I sent a reply with a few thoughts just a minute before the showcase
[18:29:28] DarTar: I read card 1058 on mingle
[18:29:37] DarTar: is the new s parameter not survival_days ?
[18:29:49] DarTar: IMHO it's already part of card 701
[18:30:00] if it is that's awesome :)
[18:30:05] let me look it up, one sec
[18:30:14] DarTar: please do :)
[18:30:15] actually, 1 secmin
[18:30:30] tnegrin: sprint planning
[18:30:58] he's talking to robla, 5 mins maybe?
[18:31:47] k
[18:37:54] are we planning?
[18:38:22] tnegrin: Yes. In the showcase hangout.
[18:39:16] tnegrin: https://plus.google.com/hangouts/_/945f5ffe0887dfb9a331140259996f9a7c0f54db
[19:10:59] DarTar: 1058 says "dev status dropped"
[19:11:23] DarTar: so it's included in 701 right ?
[19:19:41] average: not yet
[19:19:54] but i think we should just make the 'NOW' parameter configurable by default
[19:20:21] by offering a date selector
[19:20:31] that way you offer both options
[19:22:07] average: updated 701
[19:23:32] * average reading 701
[19:25:14] makes sense?
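On the GeoIP thread above: the acceptance criterion drdee describes — replaying an analysis against the MaxMind database as it existed at an arbitrary point in time — amounts to keeping dated copies of the .dat files in HDFS, the same shape of problem as the "jar hell" ottomata mentions. A minimal sketch, assuming the puppet-managed files under /usr/share/GeoIP; the HDFS destination path and the file name are illustrative, nothing here was settled in the conversation:

    import datetime
    import subprocess

    # Copy the puppet-updated MaxMind database into a dated HDFS directory,
    # so a job can pin the GeoIP version it ran against.
    today = datetime.date.today().strftime("%Y-%m-%d")
    dest = "/wmf/geoip/%s" % today  # illustrative HDFS path
    subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", dest])
    subprocess.check_call(["hdfs", "dfs", "-put", "-f",
                           "/usr/share/GeoIP/GeoIP.dat", dest])  # file name assumed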
[19:27:10] can't diff with previous, but re-reading the card
[19:27:28] it's just the top table that's new
[19:27:45] ottomata, milimetric: requests library does not accept self-signed certificates
[19:27:50] requests.exceptions.SSLError: [Errno 1] _ssl.c:499: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
[19:28:29] s parameter appears only in top table
[19:28:44] ah, sux
[19:28:48] solved it!
[19:28:52] verify=False
[19:28:59] s is effectively survival_days isn't it ?
[19:29:10] yes
[19:29:45] in that case we already have it
[19:29:59] i think so too
[19:30:11] the default behavior was that you could not specify s
[19:30:13] i think so average, s seems to just be the "survival days"
[19:30:20] it was just 'NOW'
[19:30:40] oh, wait no
[19:30:43] i'm wrong
[19:30:46] milimetric: ?
[19:30:52] s is the time after the survival days to check for survival
[19:31:03] but that contradicts the definition
[19:31:06] so T = survival days
[19:31:12] S = new span parameter
[19:31:22] right now, behavior is:
[19:31:35] if edit exists after T, the editor survived
[19:31:50] or if T is not specified, if edit exists after END_DATE, editor survived
[19:31:59] I think S, and DarTar can correct us if we're wrong, is saying:
[19:32:12] if edit exists after T but before T + S, the editor survived
[19:32:21] do you guys think I'm understanding that correctly?
[19:32:27] right now we are computing [start_date, end_date], [registration_date, end_date], [start_date, start_date + survival_days], [registration_date, registration_date + survival_days]
[19:32:38] * average is reading what milimetric wrote
[19:33:59] [T, T+S] is captured by [start_date, start_date + survival_days]
[19:34:03] which we implement
[19:36:14] doesn't end_date become S?
[19:36:32] in that case why do you need T?
[19:36:48] so is S a date or a number ?
[19:37:04] i think it could be both
[19:37:37] both uses are useful
[19:37:47] something far in the future is easier to enter as a date
[19:37:50] if it's a number, we're calling it survival_days. if it's a date we're calling it end_date . (this is in the current implementation)
[19:38:05] while something nearby like next week then entering 7 might be easier
[19:38:21] that sounds good to me
[19:39:37] the addition of S is superfluous IMHO, I hope DarTar will confirm this. otherwise we'll have to talk to understand what it is
[19:46:31] i agree with average
[19:46:42] average, can you write this in an email and send it to DarTar and cc dan and me
[19:46:42] ?
[19:48:19] drdee: yes
[19:48:50] ty
[19:58:29] ottomata: yt?
[19:58:55] yup heya
[19:59:26] one last patch? :)
[19:59:42] i tested this one by disabling puppet and making the changes locally
[19:59:42] https://gerrit.wikimedia.org/r/#/c/82648/
[20:00:22] ori-l: one last beer?
[20:00:31] one for the road
[20:00:53] i always have two beers:
[20:01:00] the first and the last
[20:01:45] ottomata: <3 thanks
[20:01:49] yup!
[20:04:27] hey ottomata
[20:04:40] hyeaaa
[20:04:47] can we private chat?
[20:04:59] sure
[20:06:12] drdee: email sent
[20:06:16] ty
[20:11:07] hi tnegrin, did you want to talk?
[20:12:32] ok drdee, mostly good news about hive partitions (I think)
[20:12:39] i still don't have a full understanding of how all this hive stuff works
[20:12:44] but!
[20:12:56] shooot
[20:12:56] we can use partitions with the 2013/08/22/08 hierarchy
[20:13:04] and with the raw data
[20:13:07] like how i suggested?
[20:13:09] but, we have to create each partition manually
[20:13:17] script?
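For reference, average's verify=False fix above in context — a minimal sketch, assuming the gingle endpoint behind the self-signed labs certificate and the basicauth-plus-passkey scheme discussed earlier; the URL path and credentials here are made up:

    import requests

    # The labs cert is self-signed, so verification fails with the SSLError
    # quoted above; verify=False skips the check. Reasonable only because we
    # operate the endpoint ourselves -- don't do this against third-party APIs.
    resp = requests.get("https://gingle.wmflabs.org/",    # host from the log; path assumed
                        auth=("gingle-user", "passkey"),  # basicauth, per the earlier discussion
                        verify=False)
    resp.raise_for_status()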
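Milimetric's reading of the S parameter, written out as code — a sketch only, with illustrative names; the real wikimetrics implementation expresses this through the start_date/end_date/survival_days intervals average lists above:

    from datetime import timedelta

    def survived(edit_times, registration, T, S=None):
        # An editor survives if an edit exists after registration + T days,
        # and -- when the span S is given -- before registration + T + S days.
        window_start = registration + timedelta(days=T)
        window_end = window_start + timedelta(days=S) if S is not None else None
        return any(t > window_start and (window_end is None or t < window_end)
                   for t in edit_times)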
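A sketch of what such a script could look like — registering one external-table partition per imported hour through the hive CLI; the table name and HDFS base path are illustrative, nothing in the conversation fixes them:

    import subprocess

    def add_partition(table, base_path, year, month, day, hour):
        # Point one partition of the external table at one hour of raw Camus
        # output, following the 2013/08/22/08 hierarchy mentioned above.
        location = "%s/%04d/%02d/%02d/%02d" % (base_path, year, month, day, hour)
        stmt = ("ALTER TABLE %s ADD IF NOT EXISTS PARTITION "
                "(year=%d, month=%d, day=%d, hour=%d) LOCATION '%s';"
                % (table, year, month, day, hour, location))
        subprocess.check_call(["hive", "-e", stmt])

    add_partition("webrequest", "/wmf/raw/webrequest", 2013, 8, 22, 8)

Chained after each Camus run, or triggered by Oozie on the addition of a new directory as discussed just below, this keeps the partition list in step with imports.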
[20:13:20] specifying the location for each hour
[20:13:23] yeah, i think that would be fine
[20:13:37] maybe if we use oozie to schedule camus import
[20:13:44] we can chain a partition creation after success
[20:13:58] oo, or even just a regular oozie job
[20:14:02] on addition of new directory
[20:14:05] create hive partition
[20:14:08] aight
[20:14:16] as far as I can tell
[20:14:20] with an external table
[20:14:27] hive will pick up any data added to a partition at query time
[20:14:31] oo i should test that :)
[20:14:41] it sounds reasonable
[20:17:54] yes! it works
[20:17:56] coooooOOOOOl
[20:18:02] woooooot
[20:33:24] average: sorry, got caught up in other meetings, looking it up now
[20:35:07] average, drdee: do you have a sec to talk about survival? Otherwise I'll follow up by mail
[20:51:26] DarTar hangout ?
[20:51:28] drdee: ?
[20:51:32] drdee: want to join ?
[20:51:46] give me 10 minutes
[20:51:50] talking to qchris
[20:52:03] ok
[20:57:59] drdee, average: sorry, I've got something else coming up, I'll send a note by mail when I'm done
[21:10:05] blargh
[21:11:06] (PS9) Stefan.petrea: Implemented Survivor metric [analytics/wikimetrics] - https://gerrit.wikimedia.org/r/81421
[21:12:02] milimetric: let's merge this as is, and start the transition to timestamps
[21:12:09] milimetric: what do you think ?
[21:12:27] did you fix the start_date / end_date overwrite problem?
[21:12:29] yes
[21:12:35] test it in the web?
[21:12:43] doing that now
[21:12:47] and then test it against mediawiki as well
[21:12:52] if that all works, then I'm ok to merge
[21:13:19] the problem is... the mediawiki test won't be 100% complete I don't think
[21:13:21] how would I test against mediawiki ?
[21:13:32] oh, just change the config file to point to mediawiki
[21:13:41] and create the database using the script in the scripts folder
[21:13:54] I'm not sure if that works perfectly but it should be close
[21:14:31] the problem is, the columns will be created by SQLAlchemy and it won't be exactly the timestamp column that production has
[21:14:45] it will probably just create actual DATETIME types
[21:14:47] grrrr
[21:14:59] no, we have to fix this and test it in a production-like environment before deploying
[21:15:00] sorry
[21:15:06] yeah, I will test now in --mode web, and then we can discuss merging so we can go forward with the timestamp transition
[21:15:36] we will probably have to do some refactoring on many files but it will work out ok
[21:17:45] blargh
[21:18:24] yeah, average, that works. We can merge and we just won't deploy until we can test against the prod schema
[21:18:50] milimetric: do we have the prod schema in our git repo ?
[21:20:55] no we don't average, but you can add it to the design folder
[21:21:04] as a mysql workbench file
[21:21:11] ok
[21:21:28] and you can reverse-engineer it after SQLAlchemy creates it
[21:21:52] all that SQLAlchemy needs is a blank database that it can connect to with the commented-out mysql string from db_config.yaml
[21:22:14] if you have that and you run the tests, they'll create the schema according to the mappings
[21:22:29] do that, then reverse engineer it from MySQL Workbench
[21:22:54] and before you reverse engineer it, you could manually change the datetime columns to that varbinary(14)
[21:27:08] yeah, that's a good idea
[21:31:10] hey Snaps_, quick q
[21:31:15] to disable setting a kafka key
[21:31:16] yep
[21:31:20] format.key.type = null
[21:31:20] ?
[21:31:37] you just comment out the #format.key.type = .. line
[21:31:46] and format.key
[21:31:53] ah, k cool
[21:34:56] ok, and Snaps_
[21:34:57] log.data.copy = true
[21:35:00] should that be false by default?
[21:35:46] it's safer with it set to true, and that is also the required setting for offline files.
[21:35:56] but with live logs false probably works, and is faster
[21:36:17] so, I think it should default to true to be safe.
[21:38:52] offline files? (I don't know varnish very well)
[21:38:55] ok
[21:39:54] online = shared memory varnish log files that are being actively written to by varnishd. offline = a regular file containing varnish log entries.
[21:40:10] s,are,is,
[21:41:14] huh, didn't know offline was a thing
[21:42:10] produced by running 'varnishlog -w offline.file' on an active varnish box (varnishlog will read the online shared memory log file and write the contents to a regular file on disk)
[21:50:23] ah ok nice, so that's especially good for testing varnishkafka?
[21:51:16] It must be a hardware problem
[21:51:30] yeah. when varnishkafka crashed during Mark's tests a couple of weeks ago he handed me an offline which I could reproduce the issue with locally. Very convenient
[21:51:37] offline file
[21:52:16] why?
[21:52:16] The third party API is not responding
[21:53:31] why is the third party API not responding?
[21:53:32] The code is compiling
[21:54:03] uhmmmm
[21:54:09] milimetric do you know who test_ is ?
[21:54:25] no
[21:54:32] hi test_, who are you :)
[21:54:35] why is he here?
[21:54:36] I usually get a notification when that happens
[21:55:28] I'm sorry test_, not sure what you're asking about
[21:55:36] may we ask who you are and what you're interested in?
[21:56:50] test_: Why are you trolling us?
[21:56:50] Well done, you found my easter egg!
[21:57:04] I did? Why? How? What did I do?
[21:57:05] I thought he knew the context of what I was talking about
[21:57:26] Stupid 'why' bot :-P
[21:57:27] You must be missing some of the dependencies
[21:57:54] why do you insult qchris?
[21:57:55] Our hardware is too slow to cope with demand
[22:00:23] test_: who are you ?
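To collect the varnishkafka settings from the exchange above in one place — a sketch of the relevant config fragment; the option names are the ones Snaps_ cites, and the values after '=' are placeholders:

    # Disable the Kafka message key by commenting out BOTH lines
    # (there is no 'format.key.type = null'; omission is the off switch):
    #format.key.type = ...
    #format.key = ...

    # true is the safe default, and required when reading offline files
    # (regular files produced with 'varnishlog -w offline.file');
    # false probably works, and is faster, against the live shared-memory log:
    log.data.copy = true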
[22:00:36] lol
[22:00:41] drdee is messing with all of us, average
[22:00:59] test_ literally responds to the word "why"
[22:01:00] You must be missing some of the dependencies
[22:01:09] and yes, I am missing ALL of the dependencies
[22:01:09] why
[22:01:10] We didn't have enough time to peer review the final changes
[22:01:17] why
[22:01:18] I couldn't find any examples of how that can be done anywhere online
[22:01:23] oh boy
[22:01:30] why
[22:01:31] Our code quality is no worse than anyone else in the industry
[22:01:46] i agree
[22:02:07] second that, but wonder why test_ is so defensive about it
[22:02:07] I thought he knew the context of what I was talking about
[22:02:13] oh ok
[22:02:21] test_: why
[22:02:22] There was too little data to bother with the extra functionality at the time
[22:03:11] abcwhydef
[22:03:11] You can't use that version on your system
[22:03:20] you found the right place, we talk about big data
[22:04:02] _why the lucky stiff
[22:04:03] I thought I finished that
[22:07:21] why
[22:07:22] It's a character encoding issue
[22:07:54] test_ is like a toddler
[22:07:58] 1234
[22:08:39] 1234
[22:09:21] I think test_ is a reverse toddler
[22:09:32] usually toddlers ask the word that shall not be named
[22:10:11] {0}
[22:10:21] {1}
[22:10:56] 1213
[22:12:12] 4321
[22:14:32] 1112
[22:14:32] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1112
[22:14:46] ottomata, milimetric: ^^
[22:14:57] we have a minglifying ircbot
[22:15:21] average, qchris: ^^
[22:16:40] that's interesting
[22:20:06] lol
[22:20:15] ircingle?
[22:20:22] 1112
[22:20:22] milimetric hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1112
[22:20:40] hey guys check out 112
[22:20:40] milimetric hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/112
[22:20:51] hey guys, yesterday I drank 200 beers
[22:20:51] milimetric hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/200
[22:20:53] lol
[22:20:55] fail
[22:25:19] i was thinking of prefixing the id like
[22:25:20] M1234
[22:25:20] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1234
[22:26:49] 1234`ls`
[22:26:50] Snaps_ hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1234
[22:26:55] damn, so close
[22:27:55] test_: I want to tell you a shellcode
[22:28:03] but I won't
[22:29:26] Snaps_: did you try to do an interpolation of sorts ?
[22:30:10] 123 567
[22:30:10] average hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/123
[22:30:10] average hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/567
[22:30:17] (PS1) JGonera: Story 1124: Add Edits monthly graphs [analytics/limn-mobile-data] - https://gerrit.wikimedia.org/r/82759
[22:30:18] grrrit-wm hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1
[22:30:18] grrrit-wm hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/1124
[22:30:18] grrrit-wm hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/8275
[22:30:18] grrrit-wm hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/9
[22:30:27] hah ! ^^
[22:30:30] :D
[22:30:36] well, that's easily fixable
[22:31:21] test_ met grrrit-wm, they just had their first conversation
[22:54:04] https://gerrit.wikimedia.org/r/82759
[22:54:04] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['https://gerrit.wikimedia.org/r/82759']
[22:55:53] https://gerrit.wikimedia.org/r/82759
[22:55:53] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['https://gerrit.wikimedia.org/r/82759']
[22:56:35] https://gerrit.wikimedia.org/r/82759
[22:56:35] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['https://gerrit.wikimedia.org/r/82759']
[22:56:40] dear lord
[22:56:40] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['dear']
[22:56:40] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['lord']
[22:56:45] HAHAA
[22:56:45] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['HAHAA']
[22:57:28] welp
[22:58:43] test
[22:58:47] 1234
[22:58:51] https://gerrit.wikimedia.org/r/82759
[22:59:33] https://gerrit.wikimedia.org/r/82759
[23:00:13] ok
[23:00:31] 124
[23:00:31] drdee hopes that someone will have a look at https://mingle.corp.wikimedia.org/projects/analytics/['124']
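The failure mode on display at the end — the bot linkifying every token, including gerrit change numbers and the words 'dear' and 'lord' — is the argument for drdee's M-prefix idea from earlier: anchor the pattern on an explicit prefix instead of matching bare numbers. A minimal sketch of such a message handler (names are illustrative; the real gingle bot surely looks different):

    import re

    MINGLE_CARD = "https://mingle.corp.wikimedia.org/projects/analytics/cards/%s"
    # Only M-prefixed ids on word boundaries (drdee's M1234 proposal), so bare
    # numbers ("200 beers", "112") and digits inside gerrit URLs don't trigger.
    CARD_RE = re.compile(r"\bM(\d+)\b")

    def card_announcements(nick, message):
        """Return one announcement per Mingle card id mentioned in a message."""
        return ["%s hopes that someone will have a look at %s"
                % (nick, MINGLE_CARD % card_id)
                for card_id in CARD_RE.findall(message)]

    # e.g. card_announcements("drdee", "see M1112 and https://gerrit.wikimedia.org/r/82759")
    # -> announces only card 1112; the gerrit change number is ignored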