[09:23:37] !ping
[09:23:38] Pong.
[10:28:22] apergos: Do you happen to know who runs http://commons.prototype.wikimedia.org/wiki/Special:Version these days?
[10:28:52] no idea whatsoever
[10:29:17] I thought prototype was dead, shows what I know
[10:30:25] We're using it to test the uploadwizard, but it's not up to date
[10:30:36] (we = people of Wiki Loves Monuments)
[10:30:53] i think that has been replaced by labs?
[10:33:08] apergos: Do you know how hard it is to get the version in sync with production Commons?
[10:33:56] mutante: Linkie to the Commons instance?
[10:35:25] multichill: i may be wrong, i just found it is on "tesla" and this one http://wikitech.wikimedia.org/view/Tesla
[10:36:28] so vmware, needs the vmware windows client
[10:36:54] don't quote me, but i think it's likely an open task to migrate these to labs
[10:37:27] (the non-windows ones?) hmmm
[10:37:53] You don't need a client, you can use one, but there is also a web-based version (Java I guess)
[10:39:23] vsphere != vmware player though
[10:39:45] i also don't have the answer to who manages them
[10:40:13] I know. I run vsphere, vcenter, vcloud, veverything :P
[10:40:15] but.. Ryan wrote the doc page :)
[10:41:34] multichill: oooh! "Subject: Fwd: [Engineering] All tesla VMs to be shutdown in one month"
[10:41:37] (including prototype)
[10:42:11] 23 Feb. so it's like they are not supposed to be alive anymore :p
[10:43:51] multichill: it would be good to reply to that thread saying you are still using it. quote: "On March 26th I'll be shutting down all tesla VMs."
[10:45:56] LOL, where did you find that mutante?
[10:46:43] engineering mailing list
[10:48:18] Oh, boy, the first hit for labs is http://en.labs.wikimedia.org/wiki/Main_Page
[10:48:22] And that's a broken link :{
[10:49:09] multichill: hmm, yea, that could be better. it is labsconsole.wikimedia.org
[13:40:57] Vito_away: (continuing from #wikipedia-it )
[13:41:19] imho the Priority and Severity fields are completely redundant and only cost maintenance effort
[13:41:32] Milestones are pretty much the main thing right now
[13:41:44] milestones and assignee. Who and when, basically.
[13:41:58] yep
[13:42:09] well, the fields are definitely redundant
[13:42:38] Don't assume anything when someone changes priority, I'd say ignore it
[13:43:44] honestly, over the past months (and actually years) I've had the feeling WMF is not steering the development process in the right direction
[13:45:06] I know a part of the community thinks that, and it is a slowly spreading "mindset" that is mostly triggered through village pumps. I think the last few months of wmf actions are not in that direction anymore. But we're not communicating enough in the right places. so the community just keeps spreading their "itches" with each other.
[13:45:14] It's how communities work. I've seen it happen in other places.
[13:45:38] May not be intentional but a big part of it is just spreading rumors and bad stuff about things.
[13:45:49] or misunderstandings that nobody is there to clear up
[13:47:05] Since I work for wmf and see things from a slightly different perspective I've seen that direction change a bit (over the last 12-14 months), and it's in pretty good shape right now imho.
[13:47:05] but I can understand that from your perspective it may not feel like that direction has changed at all.
[13:47:36] I feel it too whenever I'm in "user" mode, so to speak (patrolling edits, uploads, (un)block/(un)protect requests etc. on nlwiki and commons)
[13:51:43] Krinkle: so are you seeing the new problem with mediawiki?
[13:51:55] what new problem?
[13:52:04] it's too xrumer-friendly :D
[13:52:23] "mediawiki" as software?
[13:52:43] yep
[13:53:03] Depends. I don't know the XRumer software (other than reading about it).
[13:53:28] well, I'm kidding when saying "xrumer-friendly" but honestly I see *too* many spambots
[13:53:52] I do know that if you open up a website (in general) for edits by anonymous users and/or open registrations and edits for all registered users, then you will need protection to not end up being a dumping ground for every bot on the planet
[13:53:58] that has nothing to do with mediawiki imho
[13:54:15] WordPress recommends installing Akismet (which requires a key to work; it is not enabled by default).
[13:54:27] And similarly there are various extensions out there for MediaWiki.
[13:54:39] Although that is a lot harder to set up than in WordPress
[13:54:56] Vito_away: on Wikimedia wikis as well?
[13:55:10] (I'm not saying that makes it right, just curious)
[13:55:24] oh yes, I'm dealing with Wikimedia's projects
[13:55:43] we're locking dozens of accounts a day
[13:57:03] And you believe those are bots that weren't manually set to attack one specific wiki, but mass things that got through automatically without human intervention?
[14:03:16] hashar: How are we doing on continuous integration? What's your plan this week?
[14:04:31] it has been on hold for 3 weeks
[14:04:41] and probably going to be on hold till end of june
[14:04:49] I am focusing on building up the beta cluster
[14:04:51] on labs
[14:04:54] :-D
[14:05:03] do you have any information about testswarm 1.0 ?
[14:05:31] ::rolleyes::
[14:06:00] hashar: okay, cool. No problem, just checking up on the schedule so I don't have any wrong expectations
[14:06:34] There are a few things I'd like to get in 1.0. Things that make the database incompatible (I'd like to avoid having to require a complete re-install again for testswarm 1.1)
[14:07:02] well we might end up installing testswarm on a labs instance
[14:07:04] but nothing absolutely necessary though.
[14:07:05] TestSwarm 1.0-alpha is in a deployable state.
[14:07:08] I hope so
[14:07:11] Jenkins in general that is
[14:07:19] Why just TestSwarm ?
[14:07:46] The backend of TestSwarm is pretty lightweight. Whether that is in production or labs doesn't make much difference.
[14:09:17] hashar: I'd really like to get working on the browserstack stuff and the integration into jenkins so that builds can (ultimately) fail if testswarm fails
[14:09:38] but right now I can't do that because I don't know any puppet stuff and don't have access to it either
[14:09:41] such as updating user agents
[14:09:49] or whatever
[14:09:52] just small stuff
[14:10:33] I got it working with jquery (for which the build process is a lot simpler btw, but good jenkins practice)
[14:10:45] they are now completely on browserstack/testswarm/jenkins.
[14:11:03] wtf
[14:11:07] Project jQuery UI build #448: SUCCESS in 18 min: http://swarm.jquery.org:8080/job/jQuery%20UI/448/
[14:11:17] http://swarm.jquery.org:8080/job/jQuery%20UI/448/console
[14:11:24] http://swarm.jquery.org/job/123
[14:14:02] Krinkle: sorry but I was a bit afk
[14:14:07] well, it's xrumer
[14:14:15] it's an automatic process
[14:14:16] hashar:
[14:14:17] (Krinkle re-pastes his 14:07-14:11 messages above for hashar)
[14:14:25] crashed somehow sorry
[14:14:38] (end of paste)
[14:15:03] looking
[14:15:35] looks neat!
[14:15:54] well I wrote some puppet classes
[14:16:05] Krinkle: so installing testswarm on labs should be pretty easy
[14:16:21] the real blocker is updating the Debian package
[14:16:40] as for the database migration, I guess we can just throw away all the previous results
[14:17:04] hashar: and node.js
[14:17:05] yes, throw away indeed
[14:19:08] node.js ?
[14:19:12] what is it needed for ?
[14:19:21] do you mean that tests are being run under node.js ?
[14:29:00] hashar: no, that would be nice too, but that's not what I meant
[14:29:10] node is needed for node-testswarm, node-browserstack and testswarm-browserstack
[14:29:22] Can be run from the cli
[14:29:33] I don't see why we would want to reinvent that whole library in another language
[14:29:51] I contributed a fair bit to it as well
[14:30:45] hashar: personally I think we could even do your build steps in node (with "grunt.js" instead of "ant" or something like that). grunt is very nice.
[14:30:48] but we don't have to do that per se.
[14:31:12] I don't understand why node.js is needed
[14:31:26] that is certainly going to be a blocker from the ops team
[14:33:14] hashar: I already answered the why, so what do you not understand?
[14:33:21] is it going to be a blocker on labs?
[14:34:38] browsers run tests client side then submit their results to the swarm
[14:34:46] that swarm somehow magically updates jenkins
[14:34:57] Right
[14:34:58] and I don't see why node.js is needed
[14:35:06] swarm is not going to update jenkins
[14:35:08] anyway, I just don't have time to speak about that today sorry
[14:35:11] jenkins is polling swarm
[14:35:14] :-(
[14:35:29] although that polling is trivial to write
[14:35:59] what we need node for is the browserstack integration that keeps running in the background to start virtual machines in browserstack, keeps them running, points them to join the swarm, handles tokens etc. all that.
[14:36:19] which is already written and ready out-of-the-box in node-browserstack and testswarm-browserstack
[14:36:31] node is easy to install. I don't see why that would be a problem.
[14:37:04] just a social issue
[14:37:05] :P
[14:37:10] it's not like we don't have people that can maintain the scripts. it's just javascript.
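
[Editor's note: for readers unfamiliar with the setup Krinkle describes above, here is a minimal sketch, in node.js, of that kind of background integration: poll the swarm for browsers that have pending runs but no connected clients, and ask BrowserStack to start matching VMs that join the swarm. The swarmstate API action matches TestSwarm 1.0 as far as I know, but the browserstack-client helper module, its createWorker signature, the join URL, and the response shape are illustrative assumptions, not the actual node-browserstack / testswarm-browserstack API.]

    // keep-swarm-populated.js - a sketch only; the real logic lives in the
    // node-browserstack and testswarm-browserstack packages mentioned above.
    var http = require('http');

    // Hypothetical wrapper around the BrowserStack REST API (assumption).
    var BrowserStackClient = require('./browserstack-client');

    var SWARM_URL = 'http://swarm.example.org';          // assumed TestSwarm root
    var JOIN_URL = SWARM_URL + '/run/browserstack_bot';  // assumed swarm join page

    var client = new BrowserStackClient({ user: 'USER', key: 'KEY' });

    function poll() {
        // TestSwarm's swarmstate API reports, per user agent, how many runs
        // are pending and how many clients are online.
        http.get(SWARM_URL + '/api.php?action=swarmstate&format=json', function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () {
                var state = JSON.parse(body);
                // Response shape assumed here:
                // { userAgents: { "<ua>": { pendingRuns: N, onlineClients: N } } }
                Object.keys(state.userAgents).forEach(function (ua) {
                    var info = state.userAgents[ua];
                    // Start a VM only if work is queued and nobody is running it.
                    if (info.pendingRuns > 0 && info.onlineClients === 0) {
                        client.createWorker({ browser: ua, url: JOIN_URL });
                    }
                });
            });
        });
    }

    setInterval(poll, 60 * 1000); // re-check the swarm once a minute
    poll();

[The point of keeping this in node, as Krinkle argues above, is that the polling loop, the BrowserStack token handling, and the swarm join logic already exist as JavaScript libraries, so nothing has to be reinvented in another language.]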
[14:37:36] even easier to maintain than regular browser-oriented js since it doesn't require browsers.
[14:37:44] hashar: anyway, not today.
[14:39:17] I understand now :-D
[14:39:40] we can surely set that up in labs
[14:39:44] even with node.js
[14:42:29] alrighty
[14:42:40] hashar: so is integration going to move into a labs project entirely ?
[14:43:43] I could do that, starting by moving the current set-up (which is in puppet already so that should be doable. Just setting up a labs project, public ip/subdomain, setting up the instance with puppet and fixing anything that was hardcoded for integration.mw.o)
[14:44:07] and finish up by redirecting integration.mw.o to integration.wmflabs.org
[14:45:49] I have no idea yet
[14:46:00] I am not sure what will be moved to labs or not
[14:46:16] for now, labs is too unstable to run all of the continuous integration stuff on top of it
[14:47:45] hashar: but you do want to move testswarm to it?
[14:47:50] (before the rest is moved)
[14:48:28] then we might need the old fetcher again, since we now use jenkins to update, build and publish the static install
[14:48:42] I'd rather not
[14:48:54] but maybe you have a plan for that. I don't know
[14:49:59] I have no plan at all :-D
[14:50:26] I spent April on Jenkins / tests / git / gerrit
[14:50:30] May on beta labs
[14:50:37] k
[14:51:04] I am pretty sure I will want Jenkins to set up the MediaWiki instances
[14:51:11] for testswarm to run tests against
[14:51:15] right
[14:51:28] it is actually doing it in a hacky way which needs to be rewritten
[14:52:00] aka at the end of the PHPUnit tests, files are copied to some public place and a lot of ugly rewriting happens
[14:52:36] I need an ant target to install MediaWiki from scratch to the public directory AND using a MySQL backend
[14:52:45] so that needs some ant magic ;-D
[14:53:24] (pst, or grunt magic)
[15:00:10] yeah, it would be nice if we could re-install cleanly.
[17:42:49] Nikerabbit: in https://gerrit.wikimedia.org/r/#/c/6636/4/includes/HTMLForm.php you suggested using '===' instead of '==' in a number of places - why?
[17:49:55] Mornings
[18:02:24] awjr: == and != make my heart hurt :)
[18:04:17] RoanKattouw: https://gerrit.wikimedia.org/r/#/c/7302/
[18:23:35] RoanKattouw: please let me know when that change is live
[18:23:42] RoanKattouw: and thanks for your help earlier
[18:23:53] preilly: I'll hold off because platform has a window until 1pm
[18:24:24] RoanKattouw: okay
[18:28:03] I think we're done
[18:35:48] robla: FYI I just made a last-minute addition to the deployment calendar, deploying a PageTriage bug fix at 1pm
[18:36:10] sounds fine... thanks for the heads up
[18:36:11] RoanKattouw: https://gerrit.wikimedia.org/r/#/c/7970/ is ready for review. i know you're going to be away tomorrow and wednesday, and it's big-ish, so i figured an early heads-up would be useful
[18:36:40] RoanKattouw: also, Fabrice asked me to remind you about the bucketing change from friday
[18:36:48] Thanks
[18:36:50] Ah yes
[18:36:57] I'll lump that in with what I'm doing at 1pm
[18:37:45] thanks
[19:39:32] RoanKattouw_away: I recall you having a nice shell command that generates an aggregated report of error logs on the cluster (from when I was in the office last summer)
[19:39:42] Do you think you could put some junk on https://www.mediawiki.org/wiki/ERRHUNT/PHP ?
[19:39:51] (or ./Apache)
[19:42:48] fatalmonitor
[19:43:09] Krinkle: most of the current ones are either awaiting review, or have bugs logged for them
[19:43:28] Most likely by me or Hashar
[19:44:49] https://gerrit.wikimedia.org/r/#/c/6574/
[19:44:56] https://gerrit.wikimedia.org/r/#/c/7124/
[19:45:33] https://bugzilla.wikimedia.org/show_bug.cgi?id=36992 (I saw this locally)
[19:45:41] https://bugzilla.wikimedia.org/show_bug.cgi?id=36911
[19:46:00] https://bugzilla.wikimedia.org/show_bug.cgi?id=36781
[19:46:16] https://bugzilla.wikimedia.org/show_bug.cgi?id=35866
[19:46:28] https://bugzilla.wikimedia.org/show_bug.cgi?id=36262
[19:46:34] https://bugzilla.wikimedia.org/show_bug.cgi?id=36326
[19:46:38] <^demon> !paste | Reedy
[19:46:39] https://bugzilla.wikimedia.org/show_bug.cgi?id=36328
[19:46:58] lols
[19:47:03] That's all the obvious ones I can see
[19:48:04] ok, nice. but it would still be nice to have it dumped every now and then (not including any private info)
[19:48:24] volunteers have shown interest in it a few times in the past. it is a cool thing apparently
[19:49:44] directly viewing the php errors is low-hanging fruit
[19:49:51] Indeed
[19:49:54] exactly
[19:49:55] unless there's a load of them :p
[19:50:17] they're full of useless stuff though
[19:50:56] 5.7-54M a day when tar.gz'd
[19:51:56] <^demon> A lot of the useless stuff can be fixed.
[19:52:01] RoanKattouw_away: can you get https://gerrit.wikimedia.org/r/#/c/7302/ out soon?
[19:52:02] <^demon> Or otherwise shut up.
[19:52:20] mmm
[19:52:40] fatalmonitor does a reasonable job of filtering out a lot of the noise
[19:52:40] Reedy: "fatalmonitor" is some kind of alias or bash script that is set up on fenari ?
[19:53:07] could you paste a bit of junk as a start on that mw page?
[19:53:19] http://noc.wikimedia.org/~reedy/fatalmonitor
[19:53:24] O_O
[19:53:42] reedy@fenari://home/wikipedia/syslog$ du --si apache.log
[19:53:42] 46M apache.log
[19:53:46] oh, right. because noc is fenari
[19:53:48] There's no point putting it onto a wiki page
[19:53:58] sure, a link is fine too
[19:54:11] i'm not sure if there's sensitive info in the files
[19:54:32] AFAIK not
[19:54:33] <^demon> I don't think apache.log is a problem.
[19:54:46] <^demon> The wmerrors log is though. It has full stack traces.
[19:55:46] Funny, I can't clone master
[19:56:36] Ah, wrong channel
[20:06:22] New patchset: Ottomata; "Updating run.sh with more tests. Adding example.log" [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8355
[20:07:22] preilly: Yeah I'm deploying it now, I have a 1pm-2pm window
[20:08:00] RoanKattouw: okay great
[20:08:09] RoanKattouw: just let me know when I can test it
[20:08:32] Will do
[20:09:40] RoanKattouw: thanks
[20:10:05] New review: Diederik; "Looking good!" [analytics/udp-filters] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/8355
[20:10:06] Change merged: Diederik; [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8355
[20:56:37] folks from 3, is Roan around this afternoon?
[20:57:33] I think he is today
[20:58:19] k will try and find him later
[21:19:14] preilly: The raw RL modules thing is live now
[21:29:50] New patchset: Ottomata; "Adding stable; urgency=low to Changelog" [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8412
[21:35:26] Change abandoned: Ottomata; "Hmm, Changelog is finicky! Diederik will fix." [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8412
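
[Editor's note: fatalmonitor itself is a private script on fenari, so as a rough illustration of what such an aggregator does, here is a sketch that tallies PHP error lines from a syslog-style apache.log (like the one Reedy sizes above) by normalized message, so repeats of the same error collapse into one counted bucket. The file name and exact line format are assumptions.]

    // fatalmonitor-sketch.js - not the real fatalmonitor, just the idea:
    // count PHP error-log lines by message, most frequent first.
    var fs = require('fs');

    var lines = fs.readFileSync('apache.log', 'utf8').split('\n'); // assumed file
    var counts = {};

    lines.forEach(function (line) {
        if (!/PHP (Fatal error|Warning|Notice)/.test(line)) { return; }
        // Strip the syslog prefix (timestamp/host) and replace digit runs with N,
        // so the same error from different pages or line numbers aggregates.
        var msg = line.replace(/^.*?PHP /, 'PHP ').replace(/\d+/g, 'N');
        counts[msg] = (counts[msg] || 0) + 1;
    });

    Object.keys(counts)
        .sort(function (a, b) { return counts[b] - counts[a]; })
        .slice(0, 20) // the top 20 is usually enough to spot the noisy ones
        .forEach(function (msg) {
            console.log(counts[msg] + '\t' + msg);
        });

[As the discussion above notes, the aggregated output is safe-ish to publish because it drops per-request details, whereas the raw wmerrors log with full stack traces is not.]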
[21:36:14] AaronSchulz: if ( isset( $this->respHeaders['content-length'] ) && strtoupper( $this->method ) != "HEAD" ) { ?
[21:38:20] hey RoanKattouw: I have some new clicktracking specs up for FeedbackPage and I'd like to know if it's ok to track impressions with no sampling for talk pages of AFT5 articles. Here are some figures to give you an idea of the order of magnitude
[21:38:42] PageViews (ns0, random sample only): ca. 100K daily
[21:39:04] Corresponding pageviews (ns1, random sample only): 5-10K daily
[21:40:19] I don't have pv data handy for the additional article sample but I expect it will add at most the same volume of pv to the talk pages
[21:40:45] so we're talking of max 10-20K impression events
[21:40:49] daily
[21:41:25] I don't think this will break udp2log but I wanted to hear from you first
[21:42:39] rsterbin: mailed you some notes on these specs, ping me if you have any questions
[21:49:01] or even $this->headersOnly
[21:50:34] AaronSchulz: no, they're not doing head requests..
[21:51:27] Though, they are essentially proxying the pdfs
[21:58:14] DarTar: So we're talking about a total of like 130k events/day?
[22:00:22] Reedy: in any case, that len check should only be for GET
[22:00:38] or POST, hmm
[22:00:53] lol
[22:00:59] I've done "not HEAD"
[22:01:02] Reedy: then you have the weird types
[22:01:24] Reedy: what type is it?
[22:01:42] What type is what?
[22:02:07] the type the pdf code is using
[22:02:51] Reedy: you know what, let's just revert that check out completely
[22:03:26] for example, even when POSTing or PUTing to swift, you get a 204 and a Content-Length of the thing stored, not the response.
[22:03:30] lol, fair enough
[22:03:48] although we are not using the HTTP class for that, lots of things could do that sort of thing
[22:04:06] whoever is using the Http class can do the checks it needs
[22:05:11] need to find the bad change
[22:05:28] RoanKattouw: no, we're not interested in ns0 impressions, only talk page impressions
[22:05:36] 10-20K max in total
[22:05:44] daily
[22:06:00] and that should be an upper bound
[22:06:27] 20K/day is nothing
[22:06:47] That's like .20-.25 per second
[22:06:55] So that's totally fine
[22:06:57] well, we've never tracked impressions at 100% as you know, but I'll take that as a green light
[22:07:05] Well ns0 impressions, sure
[22:07:10] awesome
[22:07:46] But you're talking ns1 and then only a sample of pages?
[22:09:03] yep
[22:21:07] Right, yeah then I see how you'd get such a low number
[22:23:39] preilly: mwscript rebuildLocalisationCache.php --wiki=testwiki --outdir=/home/wikipedia/common/php-1.20wmf3/cache/l10n/
[22:24:13] RoanKattouw: okay that's running
[22:24:36] DarTar: the only thing I see as an issue is the rev ID, which doesn't make sense for the feedback page: the feedback is for any revision of the page.
[22:25:57] DarTar: the central feedback page doesn't use different code, so you'd get those views coming in with page ID being null (e.g., nothing in the additional data)
[22:26:03] hey there
[22:26:26] DarTar: if you like, we can do page ID and feedback ID for permalinks
[22:26:27] rev_id for feedback page, agreed: we should probably just drop it
[22:27:11] hmm we will have this data already via the click events from CTA5
[22:27:17] right?
[22:27:40] i thought we were talking about the feedback page
[22:27:47] yes
[22:28:12] what I mean is, no need to have feedback ID for permalinks added to the log
[22:29:03] so for feedback page, just the page_title of the corresponding ns0 article will do (unless it's easier to stick to the current format with the rev_id appended)
[22:30:04] it's simpler to do page id, since we track that in the javascript already
[22:30:28] page_id of the article, not the special page, right?
[22:30:32] yes
[22:30:38] no sense in tracking the other
[22:30:55] page_title|page_id ? That works for me
[22:31:06] nobody cares about the special page's id, but we need the page id of the article to get feedback.
[22:31:09] (I need to add it to the docs though)
[22:31:18] exactly
[22:31:25] ok
[22:31:32] for the central feedback page, if I understand your suggestion, we will have all impression events tagged with the ref_url key but then I should be able to filter them out using the ns field
[22:31:34] right?
[22:31:41] exactly
[22:31:45] it's basically free
[22:31:52] no extra work
[22:31:59] same with permalinks as well
[22:32:32] you could filter out what people were doing on permalink pages by which ones had a feedback id as well
[22:33:16] it's true we'd get it from cta 5 as well, but that would be the permalink they started on, not anything they happened to get to after that
[22:33:33] I'm not that interested in distinguishing permalinks from feedbackpage actions at this stage
[22:33:38] ok
[22:33:42] so page id only?
[22:34:03] (and null if it's from the central page)
[22:34:32] is it ok to have page_title|page_id and page_title|NULL for the central page? for consistency with the rest of the log
[22:34:55] we've never had the page title
[22:35:06] we don't currently track it in the javascript
[22:35:18] also, the central page wouldn't have a title, either
[22:35:21] all our AFT5 events have page_title as additional data
[22:36:09] the special page has different javascript.
[22:36:26] like: enwiki ext.articleFeedbackv5@2-optionSE_1X-impression-bottom 20120521223559 0 c9pk9vV8BJvGiQ80QPCZZTROQ9zOpgXmh 0 0 0 0 Brooklyn Italians|486355982
[22:36:45] I see
[22:37:20] let's stick to page_id only then, I'll have to run an extra query to get the titles but it's ok
[22:37:53] i'll see what i can do
[22:38:14] so to recap: page_id for ns0 (for feedbackpage), NULL otherwise (central log)
[22:38:22] that should do the job
[22:38:38] sounds good
[22:38:47] is the user_privs bit ok?
[22:39:06] yeah
[22:39:10] sweet
[22:41:10] rsterbin: I see fresh records in the log with different bucket IDs from Stage 3, I thought we had entirely disabled them as of the last deployment?
[22:41:30] what's the version number?
[22:41:37] 2
[22:42:12] (not that we need to fix this for the analysis ...)
[22:42:19] we have collected enough data
[22:43:01] then they have old js
[22:43:08] ic
[22:43:12] the new version is 3
[22:43:31] right
[22:43:38] oh, wait, wrong config
[22:43:46] ?
[22:43:55] the number was not bumped?
[22:44:28] the version number for clicktracking was not bumped
[22:44:41] i bumped the numbers for the form and the link
[22:44:55] so everybody's now getting form 1 and link X
[22:45:02] but the clicktracking version is the same
[22:45:29] are you getting records for link E or form 4?
[22:45:29] so ext.articleFeedbackv5@2-optionSE_4X_edit-impression-bottom is an impression for the AFT form?
[22:45:41] form 4 yes
[22:45:50] I don't think I've seen link E, lemme check
[22:46:06] roan just updated the code today for the form change
[22:46:20] the js might still be out of date for some people
[22:47:27] I have 15 events in total matching overlay in today's log
[22:47:48] that changed on friday
[22:47:54] it should really be fine by now.
[22:48:09] i wonder how long the js could be cached?
[22:48:15] no idea
[22:48:31] at the same time 15 events per day is not a big deal
[22:48:48] only 4 are impressions
[22:49:07] wow, someone even clicked on triggerTBX :-o
[22:49:09] so possibly just four people with weird cache settings
[22:49:20] yep, most probably
[22:49:21] wow, i didn't think that had ever happened
[22:49:50] yeah must be a first timer, makes me wanna hug this Q3Fds7nHVafcIlYT1aF16mM07LT6Qtwr5
[22:49:56] haha
[22:50:00] like s/he found the easter egg
[22:50:08] the toolbox link would give an overlay still
[22:50:28] right, not that I really care at this stage
[22:50:29] hang on, phone's ringing...
[22:50:31] k
[22:51:30] brb
[22:51:32] ugh, robocalls
[22:51:35] ok
[22:59:16] re
[23:17:09] New patchset: Diederik; "Updating build script make deb packager happy, fixes some Changelog issues" [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8426
[23:17:43] New review: Diederik; "Ok." [analytics/udp-filters] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/8426
[23:17:45] Change merged: Diederik; [analytics/udp-filters] (master) - https://gerrit.wikimedia.org/r/8426
[23:31:27] rsterbin: do you want to send a note to fabrice, roan etc with a reference to the CT version number problem ?
[23:31:51] you're probably the best person to do it if you've nailed it down already
[23:48:18] preilly: is the new deployment window for tomorrow one you intend as a new standard window every week, or just for tomorrow?
[23:48:31] robla: new standard
[23:49:02] cool, just making sure
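
[Editor's note: to make the ClickTracking lines DarTar and rsterbin discuss above easier to follow, here is a sketch of pulling the wiki, the event name, the version suffix (the "@2" that was never bumped), and the page_title|page_id additional-data field out of such a log. The tab separation and field positions are inferred from the single sample line at 22:36:26 and may not match the real schema; the file name is an assumption.]

    // parse-clicktrack.js - sketch; field layout inferred from one sample line.
    var fs = require('fs');

    fs.readFileSync('clicktracking.log', 'utf8').split('\n').forEach(function (line) {
        if (!line) { return; }
        var f = line.split('\t');            // assumed tab-separated
        var wiki = f[0];                     // e.g. "enwiki"
        var event = f[1];                    // e.g. "ext.articleFeedbackv5@2-optionSE_1X-impression-bottom"
        var extra = f[f.length - 1];         // e.g. "Brooklyn Italians|486355982"
        var version = (event.match(/@(\d+)-/) || [])[1] || '?';
        var parts = extra.split('|');        // page_title|page_id, per the discussion above
        console.log(wiki, 'v' + version, 'title=' + parts[0],
            'page_id=' + (parts[1] || 'NULL'));
    });

[Filtering on the parsed version field is exactly the kind of check that would have surfaced the stale-version records rsterbin and DarTar chase at 22:41-22:45.]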