[09:45:07] good morning [09:46:12] hashar: found out that less also supports color with the -R flag ;) [09:46:24] hello :) [09:46:37] yeah less -R is amazing [09:46:43] +1 [09:47:12] re node profiling: node --prof, then http://code.google.com/p/v8/wiki/V8Profiler [09:47:18] but somehow we should be able to detect that it is piped (|) to another command and disable color [09:48:16] I tried --prof. Got a fabulous 387MB profiling file by running the full suite [09:48:24] I will try again with --filter [09:48:46] the full file then fails to process for me, so I think filter is a must [09:50:29] linux-tick-processor did print a polite 'Aborted' before quitting ;) [09:51:11] it is probably safer than an OOM issue that randomly kills a process :D [09:53:17] the linux OOM killer seems to be quite good these days, in recent years it always killed the right process for me [09:54:07] talk about it with domas. i know it killed -9 our master mysql one day :-) [09:54:23] ouch! [09:54:39] did that cause corruption? [09:54:55] of course :) [09:55:02] argh.. [09:55:25] innodb got corrupted and I think he had to restore a snapshot then replay a few hours of transactions [09:55:39] was a long time ago, you will have to ask domas for details [09:56:31] all the innodb / mysql stuff has always looked cryptic to me. I just asked or warned Domas & JamesDay whenever something looked wrong [09:56:32] will do when it is war story time ;) [09:56:41] hehe [13:55:00] <^demon> hashar: fyi, that's going to be split out into misc/integration.pp soon :) Like I did in https://gerrit.wikimedia.org/r/#change,1178 [13:55:17] <^demon> Once that's approved and pulled into production, we can do the same thing with contint classes.
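The color-and-pipes idea above (disable ANSI color when output is piped into another command such as less) can be sketched in node. The `colorize` helper and its color code are illustrative, not an existing API; the real signal is `process.stdout.isTTY`, which node sets only when stdout is a terminal:

```javascript
// Sketch: only emit ANSI escapes when stdout is a real terminal.
// process.stdout.isTTY is true in a terminal and undefined when the
// output is piped or redirected (e.g. `node tests.js | less`).
function colorize(text, code, isTTY) {
  return isTTY ? '\u001b[' + code + 'm' + text + '\u001b[0m' : text;
}

// In the real program you would pass process.stdout.isTTY:
console.log(colorize('387 tests passed', 32, process.stdout.isTTY));
```

With `| less` the helper falls back to plain text, so the reader never sees raw escape sequences even without `-R`.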
[13:55:22] <^demon> And rename them something sane :) [13:56:52] <^demon> hashar: This is the uncommitted integration.pp I have http://p.defau.lt/?79eNftrxIUNmp9X2Hk0kdA [14:42:56] gwicke: when running the JS parser tests I got some: Unhandled token: "" [14:43:04] don't you just want to skip null tokens? [14:43:26] no, those are spurious so I keep warning about them for now [14:44:05] indicates a blank string being returned somewhere, that is not wrapped in a token [14:44:25] so it is a mild bug in the grammar [14:44:40] ok ok :) [14:44:53] 135 tests now passing for me ;) [14:44:59] well done! [14:45:07] now you have to figure out how to make that parser faster :-p [14:45:28] yes, it just keeps getting slower with each feature added [14:45:46] tokenizer in C should help [14:45:51] ;) [14:46:01] hehe [14:46:12] do [14:46:19] do you have any support for templates yet? [14:46:41] it parses them, and there is some code for expansion [14:46:48] but that is not hooked up to parser tests [14:46:58] and not in sync with the token-based parser in any case [14:47:13] so needs to be ported [14:47:51] if we do the expansion on the token stream, then we should be able to support unbalanced templates [14:48:06] for example the table start / row / table end templates [14:48:09] I was saying that because some tests sound easy to fix, i.e. {{ns:Image}} -> File
[14:48:48] yes, there is still a lot of low-hanging fruit [14:49:45] makes it all the more fun, especially with the count of passed tests and nice green color as a motivation ;) [14:50:42] I think I will add it to jenkins [14:50:52] but need to figure out how to get a fresh node.js setup there [14:51:04] and how to package/install node modules using puppet :-D [14:51:39] that would be nice [15:14:15] gwicke: there is a minor version of PEG.js out (you use 0.6.1; 0.6.2 is out) [15:14:21] changelog on https://github.com/dmajda/pegjs/blob/master/CHANGELOG [15:14:53] nice- thanks! [15:15:14] the first bug did actually bite me [15:15:21] or you can skip it and just use npm :p [15:15:49] that makes more sense I guess [15:16:03] package is named pegjs [15:16:08] npm -g install pegjs [15:16:19] just need to delete the current file and add that as a prerequisite in the README file. [15:16:38] letting you do so because I don't want to be blamed for breaking stuff :-)))))))) [15:22:24] \o/ [15:22:28] enjoying node.js? :D [15:22:41] always did [15:22:49] before that I played with ruby :-) [15:23:12] I love those languages where everything is an object and that let you alter anything [15:23:44] hashar: committed [15:23:55] i've done some fiddling with node for side projects, gotta get back to some of em :D [15:24:03] and of course the parser test stuff [15:24:10] ;) [15:24:17] it's wacky for web servers :D [15:24:36] but i for one welcome our new asynchronous overlords [15:24:36] gwicke: it works! thanks [15:24:48] I actually like more comfortable runtimes that do more of the work for you [15:24:59] event loops: the thing from 90s GUI programming that's still actually relevant [15:25:25] still better than OS threads, but there are runtimes with lightweight threads as well [15:25:34] isn't it how a CPU works anyway?
Looping and waiting for IRQ from devices :p [15:26:48] erlang and haskell make event loops easier by not requiring you to construct all callback chains [15:27:17] you only need to manually construct the non-sequential chains [15:27:30] i'm doing more poking with promise patterns ($.Deferred in jquery), kinda liking that [15:27:36] still some boilerplate and hoops though [15:28:27] and error handling can become very messy without scoped exception handling [15:28:39] *nod* [15:29:08] promise pattern lets your deeper levels jump straight to deferred.reject() or such, but you have to catch any exceptions yourself first [15:29:50] ok, still sounds like an improvement [15:30:24] brion: it might be possible to support unbalanced templates by expanding them in the token stream [15:30:43] then the html tree builder figures out the DOM tree [15:30:54] gwicke: sounds about right yeah [15:31:03] +1 [15:31:03] but I'm not sure how well that works in the editor [15:31:32] just slapping some marker attributes on the template-included tags might not be enough [15:31:35] hmm, true [15:31:50] as the markup might match up in really twisted ways [15:31:56] :) [15:32:21] at least it will be well-formed in the end ;) [15:32:47] worst case: recognize them as unbalanced and don't expand them in the editor [15:33:14] keep them as visible chunks and allow editing them (eg tweaking the parameters) but don't render them fully [15:33:15] hmm- yes [15:33:24] not ideal but may be necessary [15:34:13] that detection can be a bit tricky at the token level [15:35:04] right now some of the html tree builder's balancing stuff is also used implicitly for balanced content [15:35:35] which complicates the detection a bit [15:36:10] well- lets simply try it I guess [15:38:48] yep [15:39:34] simply not expanding is easy to do in any case, as is forcefully balancing [15:45:14] heh http://openbadges.org/About/ is mostly a giant JPG :P [15:45:52] i was all "oh they must be using webfonts, they've got a nice 
look there" and then oh nooooo it's an image [15:46:11] designers going overboard.. [15:46:36] brion: openbadges sounds like game achievements [15:46:38] looks like they did that as pdf and just copied it to the page :) [15:46:42] hashar: it does! [15:47:03] i'm not 100% convinced on it but i'm intrigued. some of the community dept folks are interested in trying out some stuff [15:47:06] could be fun to play with [15:47:16] Achievement unlocked: pushed broken code [15:47:36] here's mine: http://steamcommunity.com/id/hashar/stats/TF2/?tab=achievements :) [15:47:47] SVG might also become interesting now that html5 parsers start to become more common [15:47:58] *nod* [15:48:23] brion: If we ever enroll in the openbadges I am a volunteer to distribute some :-) [15:48:27] even IE finally has svg [15:48:29] :) [15:48:42] got a few ideas already such as "checked out MW from svn^H^H^Hgit" [15:48:45] android actually was the big holdout more lately. >:( [15:48:48] hehehe [15:48:48] reading the spec feels a bit like christmas, if only all of it was already implemented in all browsers [15:48:53] and "got reverted by brion" [15:48:57] :DDD [15:49:30] I like how you revert us, cause you always try to be as nice as possible when explaining the reason. So it is actually encouraging us to produce better code [15:49:43] oh good :D i do try to be nice about it [15:49:53] though sometimes i'll be like "what the fuck is this fucking shit? holy fucking fuck this is awful!!!!"
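The promise-pattern thread earlier (jumping straight to deferred.reject() from a deep level, but having to catch exceptions yourself first) can be sketched with a toy deferred. This is a hand-rolled stand-in for illustration, not jQuery's actual $.Deferred implementation:

```javascript
// Minimal deferred sketch (hypothetical, standing in for $.Deferred).
// Unlike native Promises, nothing catches exceptions for you here, so
// each level has to convert thrown errors into reject() calls by hand.
function deferred() {
  var state = 'pending', value, okCbs = [], errCbs = [];
  function fire() {
    var cbs = state === 'resolved' ? okCbs : errCbs;
    while (cbs.length) { cbs.shift()(value); }
  }
  return {
    resolve: function (v) { if (state === 'pending') { state = 'resolved'; value = v; fire(); } },
    reject: function (e) { if (state === 'pending') { state = 'rejected'; value = e; fire(); } },
    then: function (ok, err) {
      if (ok) { okCbs.push(ok); }
      if (err) { errCbs.push(err); }
      if (state !== 'pending') { fire(); }
      return this;
    }
  };
}

// A deep level can jump straight to the rejection path via d.reject(),
// but a plain thrown exception (here from JSON.parse) must still be
// caught manually and forwarded, exactly the boilerplate complained about.
function parseStep(input) {
  var d = deferred();
  try {
    d.resolve(JSON.parse(input));
  } catch (e) {
    d.reject(e); // converted by hand into the rejection path
  }
  return d;
}
```

Without the try/catch, the exception would escape the async machinery entirely, which is the messiness scoped exception handling would avoid.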
[15:51:01] I'm a bit less revert-happy than brion but I did fairly harshly revert that change yesterday that introduced a fatal in the phpunit suite [15:51:21] The test runs were aborted halfway due to that fatal, so Jenkins wasn't recording any test results (because half of them never even ran) [15:54:46] owie [15:55:02] That was going on for a few hours [15:55:10] RoanKattouw: I was about to revert that change myself [15:55:36] If that lasts all day until the committer wakes up, and then when the tests run again there's a failure, it'll be hard to find the rev that caused the failure [15:55:37] gwicke: is there an overall document for how the new parser works? [15:55:54] hashar: nothing up to date [15:56:12] gwicke: not even an overall workflow? Something like thisfile.js does the tokenizing, that one does rendering etc.. [15:56:28] no, not even that [15:56:34] 8-) [15:56:51] all the docs at mw.org discuss the problem in more general terms [15:57:08] will try to add something tonight [15:57:47] should perhaps link that from future/parser_plan [15:57:59] something like: wikitext -> processed by mediawiki.parser.peg -> tokens -> html5TokenEmitter -> html5token -> mediawiki.DOMPostProcessor -> HTML :) [15:58:20] the above is mostly totally wrong though [15:58:36] html5token -> html5 tree builder -> jsdom -> mediawiki.DOMPostProcessor -> HTML [15:58:51] that would be great as an introduction [15:59:06] I basically swapped out the default html tokenizer for a wiki-specific one [16:00:04] in a browser, the tokens could just be fed to the browser's native parser by serializing the tokens to .innerHTML [16:00:25] and then running the DOMPostProcessor on that [16:00:49] in theory, the result is identical [16:03:26] sounds so evil :D me like [16:04:17] oh that's pretty cool...
the open badges thingy stores the badge metadata as a JSON blob inside the PNG image itself :D https://wiki.mozilla.org/Badges/infrastructure-tech-docs [16:04:56] lol [16:14:23] http://www.mediawiki.org/wiki/Future/Parser_development [16:16:30] that at least gives an overall idea [16:20:39] hashar: I'll continue to fill that page a bit further [16:20:49] dont fi [16:21:05] don't feel like it is an obligation. I was just asking =- [16:21:36] we need to document things anyway- much better than repeating things over and over [16:28:14] hashar: thanks for fixing the modeline! [16:28:49] have not used those before, so just tried to slap one in that evidently didn't work.. [16:29:25] that one was because of the trailing */ [16:29:32] vim modelines are tricky sometimes [16:29:57] in /* :vim:command var=foo */ [16:30:03] *gwicke nods [16:30:04] it missed the last column iirc [16:30:24] well my node stuff does not work. [16:30:54] will look at that later :D I have spent enough time on js anyway [16:32:20] hashar: thanks in any case! [17:13:07] Is it possible to provide favicons for the Wikimedia wikis in multiple resolutions, in order to provide a high-resolution icon for pinned sites in Windows 7 and Gnome Shell? documentation for Internet Explorer: http://msdn.microsoft.com/en-us/library/gg491740%28v=VS.85%29.aspx [17:25:53] I think we were using pngs [17:26:12] in which case, I don't think several sizes in one file are supported [17:26:35] we could use one 'big' file which gets resized by the browser, though [17:26:57] I don't know how good the support for non-ico favicons is on current IE, though [17:27:37] you can file a bug requesting that enhancement [17:27:40] that documentation isn't too good, btw [17:51:29] gwicke: I have added a --cache (optional) https://www.mediawiki.org/wiki/Special:Code/MediaWiki/104878 [17:51:49] gwicke: it saves the object resulting from parsing the tests in a cache file [17:52:11] ahhh- nice!
[17:52:35] I'm just wrangling with tables, just what I need ;) [17:52:36] not sure it is that useful [17:53:10] well- right now I repeatedly run the same filtered tests again and again [17:53:29] so parsing all tests takes up a large part of the total time [17:53:32] well the cache file will be outdated whenever an imported file is modified [17:54:03] because I am not sure which js files are used to generate the cases [17:54:49] it just parses the parserTests.txt file from phase3 [17:55:03] so the resulting structure should only change if the input changes [17:55:12] but the parse() function gives a different output if one of the js files is changed [17:55:25] well might give a different output [17:55:31] ok, but I haven't touched the test case parser in a long time [17:55:56] we know what we are doing for now [17:56:06] anyway, look for "fileDependencies" that should be pretty straightforward [17:56:25] in _require I push the current file to fileDependencies; you can comment that out [17:56:39] and just push the parserTests.txt filename :p [17:56:40] I don't really expect the parser test structure to change, except if the input file changes [17:58:14] game over. i'm off to cook a chili con carne 8-) [17:58:38] bon appetit! [17:58:44] merci [17:59:00] *gwicke is also getting hungry [17:59:50] *gwicke likes instant testing gratification [18:11:37] gwicke: :) [18:12:19] hashar: you sped up my workflow a lot [18:12:43] which is fun! [18:12:55] great!
that is part of my job actually :p [18:13:28] will probably rewrite that later to use QUnit [18:15:15] I have a branch somewhere to run the MediaWiki javascript tests under node :)) [18:16:29] the test code we have now just grew, but for now it does the job [18:16:39] and very well at that [18:17:35] but keep in mind that the current JS code is more of a prototype [18:18:02] I would expect anything heading for production to be ported to C(++) or similar [18:35:49] Ryan_Lane: Ping pong :) [18:36:16] Ryan_Lane: would need your assistance to get a new Nova project named "testswarm" so I can get a vm there with root access :D [18:36:47] we tried this morning (our time) but failed miserably lacking proper credentials [19:25:12] ^demon|away: are you going to https://opendatahackdc.eventbrite.com/ ? [19:37:52] <^demon|away> Afraid not, have too many things going on. [19:39:06] understood [19:54:19] <3 that wikitech-l e-mail [19:54:23] "I'm a newbie and I wanna get started" [19:54:49] I wish more people just stepped up and said "I wanna get started, hit me" [19:55:43] There are many valid approaches, and that is a particularly easy-to-deal-with one. [19:55:55] True, other approaches are valid too [19:56:14] huh, anyone seen TrevorParscal? [19:56:30] Inez_ ? [19:58:05] RoanKattouw: I am glad that the volunteer saw or predicted that we were friendly enough that such an email would receive a useful response! [19:58:50] Reedy: the new WMUK guy, Jon Davies, has brought some stuff to the WMF headquarters [19:58:54] tea towels, posh tea, etc. [19:59:15] Are tea towels that british? :p [19:59:35] "I'd like to get a meal I could eat all of..." [19:59:47] Reedy: they are British enough. :) [19:59:50] <^demon|away> Why do you need a towel for tea?
[19:59:54] <^demon|away> ^ That's your answer [20:00:31] many things are nice accessories that may not be strictly necessary yet are still comfortable and nice to have [20:00:31] *Reedy beats ^demon|away [20:00:37] Most of my meals aren't towel-specific [20:00:57] but you still know where your towel is [20:01:03] :) [20:01:04] having a specific one just makes it that much easier [20:01:08] indeed, I am a hoopy frood. [20:01:13] :-) [20:02:24] *Reedy notes that one cannot eat a Tea Towel [20:02:33] hi ramkrsna [20:02:51] sumanah, hey [20:02:52] ramkrsna: Phil Chang wanted to get in touch with you -- do you have his contact info? [20:04:57] sumanah: why only wmf projects? is all the stuff developed for the cluster requested by wmf? [20:05:27] no [20:05:32] I thought this channel was related to development for the wmf wikis [20:05:35] petan, the idea is that the other channel is for general MediaWiki software development, and this one is for Wikimedia-specific stuff, plus sometimes we have meetings in here because it is quieter. [20:05:52] sumanah, nope pchang@python.org [20:05:53] oops [20:05:54] yes that's what I thought [20:05:56] There's not /really/ a good reason for the existence of this channel [20:06:07] But you know how Too Many Channels/Mailing lists/Whatever Syndrome goes [20:06:12] sumanah, sorry muscle memory pchang@wikimedia.org [20:06:24] ramkrsna: ok, and what's yours? [20:06:27] <^demon|away> This channel has one reason for existing. [20:06:37] <^demon|away> The Usability Initiative thought #mediawiki was too noisy [20:06:42] hasharEat: yeah, it requires an ops member to do so [20:06:52] sumanah, mail@ramkrsna.net [20:09:17] hasharEat: ok. added the project, and added you to it [20:09:31] thanks ramkrsna [20:10:16] <^demon|away> Ryan_Lane: Oh by the way, this is the list of packages I needed https://gerrit.wikimedia.org/r/#change,1223 [20:10:42] you don't need qt4-qmake anymore?
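Returning to the parser pipeline gwicke and hashar sketched earlier (wikitext -> PEG tokenizer -> tokens -> HTML5 tree builder -> DOM post-processor -> HTML): a toy end-to-end version of that stage order is below. Every function is a simplistic stand-in for the real module, handling only ''italics'', purely to make the flow concrete:

```javascript
// Toy version of the stage order discussed above; none of this is the
// real parser code, each function just stands in for one pipeline stage.

// Stage 1: tokenizer (stand-in for the PEG-based wiki tokenizer).
function tokenize(wikitext) {
  return wikitext.split(/('{2,3})/).filter(Boolean);
}

// Stage 2: tree builder (stand-in for the HTML5 tree builder):
// pair up ''...'' quotes into <i> nodes.
function buildTree(tokens) {
  var out = [], i = 0;
  while (i < tokens.length) {
    if (tokens[i] === "''" && tokens[i + 2] === "''") {
      out.push({ tag: 'i', text: tokens[i + 1] });
      i += 3;
    } else {
      out.push({ text: tokens[i] });
      i += 1;
    }
  }
  return out;
}

// Stage 3: DOM post-processor (identity here; the real pass rewrites the tree).
function postProcess(tree) { return tree; }

// Stage 4: serialize the tree back to HTML.
function serialize(tree) {
  return tree.map(function (n) {
    return n.tag ? '<' + n.tag + '>' + n.text + '</' + n.tag + '>' : n.text;
  }).join('');
}

function parse(wikitext) {
  return serialize(postProcess(buildTree(tokenize(wikitext))));
}
```

So `parse("hello ''world''")` produces `hello <i>world</i>`, mirroring how the real chain hands token streams from one stage to the next.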
[20:11:06] <^demon|away> I believe libqt4-dev includes qt4-qmake [20:11:10] ah [20:11:30] <^demon|away> This was based on the dependencies in oneiric. [20:11:41] <^demon|away> Since lucid had that absurd version # dependency [20:11:59] <^demon|away> s/lucid/natty/ [20:13:32] <^demon|away> Yep, just confirmed qt4-qmake is required by libqt4-dev in lucid. [20:13:50] ok [20:16:00] sumanah, Jessie Wild also wanted some, packaged into fedora [20:16:17] sumanah, scripts and software packages [20:16:39] She wants what packaged for Fedora exactly? [20:16:44] ramkrsna: I've given Phil your contact info -- would you like Jessie's? do you have it? [20:17:02] *RoanKattouw is wary whenever packaging is brought up in the context of MediaWiki [20:17:42] Since it's Jessie, I'm guessing this has to do with Kiwix? [20:18:14] <^demon|away> RoanKattouw +1 [20:18:17] Yeah I figure she wants it packaged so it can be installed on some embedded device [20:18:37] Fair warning: all (or at least most) existing MediaWiki packages suck [20:18:45] No one has yet managed to package MW properly [20:18:55] I guess we at WMF could do it but we have no direct motivation to do so [20:19:05] because we wouldn't run those packages ourselves anyway [20:19:08] <^demon|away> Debian/Ubuntu are the least bad, iirc. [20:19:17] Does Kiwix depend on MediaWiki at all? [20:19:52] I don't think so [20:19:56] She's jwild on freenode, ramkrsna, in case you want to talk to her. [20:20:02] Other than it needs MW to generate the article stuff etc [20:20:03] No, Kiwix is an independent app [20:20:10] I think it might have started independently too [20:20:26] It's basically a ZIM viewe [20:20:28] r [20:20:46] Then we have something that generates ZIM files from Wikipedia articles [20:21:10] Then install the viewer (Kiwix) and one of those ZIM files on a phone, and you've got Wikipedia offline, essentially [20:22:11] ok, all my memories & understanding are confirmed.
You can stop worrying that someone is working on packaging MW. :-) [20:24:39] sumanah, sorry about the delay in the reply, I have another irc meeting going on on OFTC [20:24:55] Hmm, yeah I guess the offline/Kiwix people wouldn't be trying to package MW itself [20:25:01] ramkrsna: that's fine. You can just talk to pchang & jwild from here on out, I'm just the messenger :) [20:25:24] sumanah, yeah phil for the landing page and search trends, jwild for getting the wikipedia packages into fedora [20:31:47] I hope those Wikipedia packages are ZIM files with content, not software? :) [20:59:16] Ryan_Lane: hey, I have something on my longterm todo list: that I should push for "deployment privilege separation" [21:02:27] oh no "Failed to allocate new public IP address." [21:02:34] we are out of IP addresses 8-) [21:03:07] :( [21:03:18] hashar: in what system? just Wikimedia in general? [21:03:32] labs? [21:03:37] on labs yes [21:03:53] Ryan_Lane created me a project (thanks) and I successfully created an instance [21:04:11] since I am too lazy to figure out how to use the bastion I thought I could assign a public address to that server :) [21:12:42] ah I am logged in :) [21:12:55] and I am root [21:14:07] *hashar bootstrapping A.L.I.X [git://github.com/hashar/alix.git] [21:16:11] ha A.L.I.X. :) [21:17:20] Awesome Life Improver under uniX [21:17:38] I have packed my vimrc / bashrc etc in a github project [21:17:44] ah [21:17:49] this way I can easily feel at home on any computer [21:18:15] will probably start enhancing it to add some scripts for mediawiki [21:18:16] Do you have a screenrc in there too?
[21:18:24] I don't use screenrc [21:19:06] I use ⌘+N or ⌘+T to get a new terminal [21:19:06] *RoanKattouw uses a modified version of Ryan's screenrc, it's awesome [21:19:07] hashar: rock [21:19:14] Oh [21:19:19] I only use screen on remote hosts [21:19:35] On my laptop I use konsole (KDE's terminal emulator) which has tabbed terminals [21:19:36] and I will probably end up buying a 27-inch screen [21:19:55] and probably a mac mini [21:20:30] sumanah: you can find tons of people's vimrc / bashrc on github :-) [21:20:40] it's cool [21:20:44] I'm glad of this trend [21:20:54] github makes things really easier [21:21:22] Timo showed me how you can just click on a file online, edit it and then in one click submit a pull request to the upstream author [21:21:27] all of that in your browser! [21:21:55] so when I spot a minor fix, I just do that now and in two minutes the author has a patch :-D [21:22:14] I should do that [21:22:26] the one time I tried to do something like that, I did it with git on my local machine [21:22:29] xmonad is also nice if you have a lot of shells [21:22:31] I should try the web-based way instead [21:28:16] sumanah: we can do a shared desktop session one day :) [21:29:18] but basically: click a file, at the top right of the file content look for the [edit this file] button. Edit, submit pull request :-) [21:30:13] awesome, thanks hashar [21:31:36] RoanKattouw, you could also try tmux, which is more like screen on steroids [21:31:53] most of the upstream distributions ship it [21:35:50] RoanKattouw: because I was lazy [21:35:56] RoanKattouw: re: gerrit.war [21:35:59] and I regret it ;) [21:36:03] Yes, you should [21:36:15] I should have made a package [21:36:23] One of the downsides of git is that we'll have to live with that mistake forever [21:37:52] Ryan_Lane: I managed to create my VM and connect to it as root through the bastion :-) [21:37:57] \o/ [21:37:59] sweet [21:38:10] we aren't necessarily our of IPs [21:38:12] *out [21:38:26] but...
for most things we prefer people use socks proxies [21:38:35] Ryan_Lane: a NAT from whatever port on a public IP to my instance port 80 will be enough [21:38:37] certain things require public IPs, though [21:38:46] that works too [21:38:54] not urgent though [21:39:06] I'm kind of loving foxyproxy and an SSH socks proxy [21:39:26] are the instances using the puppet git repo? [21:39:43] I can not find a way to show puppet conf on my instance :) [21:39:44] ssh bastion.wmflabs.org -D 8080, then set up foxy proxy to use localhost:8080 for anything in pmtpa.wmflabs [21:39:58] hashar: it's using the test branch [21:40:15] great :) [21:40:25] on the todo list is a way for each project to have its own branch [21:40:26] can you let me know how I can ask my server to fetch the latest changes ? [21:40:31] puppetd -tv [21:40:37] or puppetd --test [21:40:43] both force a puppet run [21:40:50] I think -tv is more verbose [21:41:28] I really need to write more documentation [21:41:36] worked! [21:41:43] cool [21:42:07] right now the list of classes and variables in the configuration interface is managed via LocalSettings.php [21:42:22] I have an open bug to change that, so that it's possible to make that list via mediawiki [21:43:08] and group the settings into labels, so that it's easier to figure out what you need to set to create different things [21:43:09] Hey Ryan_Lane , your Architecture/Labs talk is CC-BY-SA , right? [21:43:14] RoanKattouw: yes [21:43:23] I'm preparing a talk for the NL LUG on Saturday and I'm stealing a few of your diagrams [21:43:25] Mostly for the arch part [21:43:29] that's cool [21:43:34] I'm redoing the puppet + labs part [21:43:38] OK good [21:43:41] I really need to get better about licensing my talks :) [21:43:48] God knows you probably stole those diagrams from someone else [21:43:53] which ones/ [21:43:54] It's OK [21:43:59] It's a condition of your employment [21:43:59] I'm nearly positive I made them all [21:44:12] The ... 
on steroids slide certainly isn't yours [21:44:13] Ryan_Lane: https://labsconsole.wikimedia.org/w/index.php?title=Access&diff=630&oldid=539 :) [21:44:27] RoanKattouw: that's not true :D [21:44:27] Nor is the CARP diagram [21:44:31] it *IS* mine [21:44:34] so is the CARP one! [21:44:38] Ryan_Lane: something that might work is that whenever someone asks you a question, try to update the labsconsole wiki :D [21:44:47] RoanKattouw: go find the original, then mine [21:44:50] Earliest I can find it is Mark's 2007 talk [21:44:51] RoanKattouw: they are different [21:44:54] Oh, there's differences? [21:45:00] yep [21:45:14] hashar: yeah. I should. thanks for updating it :) [21:45:21] Oh, I see [21:45:57] Crediting that one to you as well then [21:46:25] Pfft [21:46:26] oops [21:46:33] "I stole it from some rouges at the Wikimdia Foundtion" [21:49:00] this is a nifty idea: http://www.ex-parrot.com/pete/upside-down-ternet.html [21:49:28] tip: git merge merges all changes until that commit :( [21:50:47] Krinkle: thanks for the puppet presentation link [21:51:14] you're welcome (the one via twitter, right) [21:51:31] hashar: Yes, to merge a single change use cherry-pick [21:51:34] yeah [21:51:50] I am still in the first slope of my git learning curve :-) [21:51:58] will hopefully reach the plateau soon [21:52:08] *hashar looks for commit # [21:52:58] ok I cherry picked it [21:53:23] it just seems to have applied the change and silently committed it in the background without any information message [21:53:24] awesome [21:54:48] Ryan_Lane: looks like gerrit-wm does not notify for test changes. Could you possibly apply https://gerrit.wikimedia.org/r/1228 ?
[21:54:52] on test branch [21:55:45] hashar: it does [21:55:55] hashar: I moved those logs to #wikimedia-labs [21:56:07] yet another irc channel :-) [21:56:22] I don't think you want labs spam in #wikimedia-tech or -operations [21:56:55] I plan on having way more spam than is currently in the channel, too [21:56:56] probably not [21:57:12] right now it just lets me know when people's home directories are created, and when their keys are updated [21:57:49] I want the recent changes log for a specific namespace to show up in there too [21:58:54] there is an IRC rc bot somewhere [21:59:20] yeah. I'll need to find it :) [22:00:00] you mean irc.wikimedia.org ? [22:01:09] something feeds that irc network [22:01:23] that's what we're talking about [22:01:34] there's a bot listening to udp in one of those "special" servers [22:01:39] ah [22:01:44] it's very likely puppetized already [22:02:07] irc.wikimedia.org is ekrem, look at their processes? [22:02:13] it's a python script [22:04:27] *Ryan_Lane nods [22:04:34] I'll check out puppet, and see if it's there [22:04:45] if so, it'll be a pretty easy thing to set up [22:05:14] you just need the host, listening port and maybe a look at the message format [22:05:31] but it was quite hidden in our svn