[00:02:12] jeremyb: like, the natural number? what else would come between 3758 and 3760?
[00:02:34] jeremyb: if you're referring to a ticketing or patch management system, you'll have to specify which :P
[00:08:18] ori-l: rt
[00:08:23] no battery remaining
[00:08:26] * jeremyb runs away
[00:24:11] jeremyb: I marked it resolved; thanks.
[00:33:56] Anybody on here know about RevisionDeletion? Specifically how do you use Special:RevisionDelete? I have my local permissions set so that I should be able to use it, but when I go to the special page, it just says "Invalid target revision". The docs on MediaWiki.org are no help whatsoever :(
[00:37:55] kaldari: go to a page history
[00:37:59] check some checkboxes
[00:38:07] and then hit hide/show revisions (or something like that)
[00:38:55] I don't see any 'hide/show' :(
[00:39:06] oh it sas
[00:39:17] del/undel selected revisions
[00:39:19] says*
[00:39:24] I don't see those either :(
[00:39:37] I set the following in my LocalSettings.php:
[00:39:38] $wgGroupPermissions['sysop']['deleterevision'] = true;
[00:39:38] $wgGroupPermissions['sysop']['deletelogentry'] = true;
[00:39:44] and am logged in as a sysop
[00:39:52] do I need to do anything else?
[00:39:55] https://en.wikipedia.org/wiki/Wikipedia:REVDEL#Technical_details
[00:40:00] i wonder if enwiki changed the messages.
[00:41:07] "Change visibility of selected revisions" <-- kaldari thats the default message
[00:41:43] ah, I see it!
[00:42:23] The screenshots at https://www.mediawiki.org/wiki/Help:RevisionDelete and https://www.mediawiki.org/wiki/Manual:RevisionDelete must be outdated
[00:42:37] I was looking for links in the log entries themselves
[00:42:56] log entries have links i think.
[00:43:28] no, checkboxes.
[00:43:46] yeah, those look pretty outdated.
[00:43:55] legoktm: Anyway, thanks!
[00:44:02] np :)
[02:00:45] legoktm: I rewrote much of the documentation and replaced the ancient screenshot with a new one: https://www.mediawiki.org/wiki/Manual:RevisionDelete
[02:01:40] someone should give me a barnstar :)
[02:02:02] mediawiki wiki doesn't have barnstars >.>
[02:02:20] The 'I can't believe you actually wrote documentation' Barnstar
[02:02:24] oh that is awesome :D
[02:02:27] i'm sure someone in the office can give you a gold sticker or something
[02:02:30] * legoktm throws wikilove at kaldari
[02:02:42] yay, I love gold stickers
[02:14:18] hrmmmmmm, gold barnstar?
[02:14:24] i wonder how much that would cost
[02:14:30] we could do 14kt
[06:11:40] jeremyb: well, gold has fallen so much :P (500 G€ losses for central banks I'm told)
[07:56:28] paravoid: i'm getting 502 Bad Gateway nginx/1.1.19 when trying to load wikipedia
[08:26:03] matanya_: are you having huge packet loss to bits too? 50 % for me on gw-wikimedia.init7.net
[08:26:17] yes, seems so
[08:26:58] mark_____ changed some routing the other day, dunno if related
[08:27:24] ping some ops from -operations :)
[08:29:08] e.g. mutante ...
[08:30:40] matanya_: sorry, i don't know about routing changes, if you think it's serious and networking we should probably call mark
[08:30:55] * mark_____ has a look
[08:31:00] WP works for me though
[08:31:03] ah, thanks
[08:31:22] jeremyb: no yet :)
[08:31:28] it does, but not a reliable manner. thanks mark_____
[08:33:31] so whoever sees problems, can you please post traceroute output? :)
[08:37:36] i'm not seeing any issues at the moment
[08:43:49] mark: http://p.defau.lt/?1_2YRrCApGKVhW_KuoLEVQ
[08:43:55] seems better now?
[08:44:12] ok, that's not packet loss to bits
[08:44:20] 0.0% to bits-lb
[08:47:46] jeremyb: https://bugzilla.wikimedia.org/show_bug.cgi?id=55503
[08:48:55] 1 % maybe... http://p.defau.lt/?m6L4vR_x2nW5Jo06BpO9ag dunno, it's matanya_ complaining, he should provide more info ;)
[08:51:06] mark: in a pm it is good?
[08:51:27] yes
[14:30:58] Request: GET http://outreach.wikimedia.org/wiki/Wikipedia_Education_Program, from ... via sq72.wikimedia.org (squid/2.7.STABLE9) to ()
[14:30:58] Error: ERR_CANNOT_FORWARD, errno (11) Resource temporarily unavailable at Wed, 09 Oct 2013 14:26:30 GMT
[18:29:04] Debugging Flask apps without proper error reporting is really a huge PITA...
[18:29:33] not being able to turn http/500's into error messages does not help -_-'
[18:33:34] valhallasw: this is related to labs?
[18:33:53] yes, because normally one would use apache's error logs...
[18:34:05] so, tool labs related.
[18:44:44] valhallasw: heh. should likely mention it in the labs channel, then ;)
[18:44:53] Ryan_Lane: >_< thanks.
[18:45:02] err
[18:45:03] I think there was some issue with giving access to the error logs?
[18:45:16] ok, my brain is fried. I thought I posted the 'it's working' in the wrong channel.
[18:45:26] Yeah, privacy issue due to IP's
[18:46:02] it will be fixed in the future when Coren implements his plans for tool labs webservers 2.0 :-)
[18:47:10] Which I am working on now.
[18:47:34] The solution was too evident for me to consider it back when I was doing basic design; in retrospect it's effin simple.
[18:48:39] heh
[19:25:09] csteipp: you around?
[19:56:50] mwalker: I'm back
[20:05:52] csteipp: Jeff_Green and I were wondering what your thoughts were on allowing a ganglia user access to sensitive log data
[20:07:15] I would be very cautious about using ganglia as an identity source.. I would say they are trying on the security front, but they have a long way to go on their process.
[20:10:11] *nods* that's pretty much the conclusion we came to
[20:10:30] or at least; I think so
[20:10:37] what do you mean by identity source
[20:10:39] ?
[20:12:00] csteipp: specifically we're talking about a log parser that feeds data into ganglia. mwalker initially built it as a module to make use of ganglia's metric grouping. but since it will run under the ganglia user it can't access the logs presently
[20:12:00] the context I'm coming from is I have a python script running under the ganglia user; and we were debating on if it could be trusted to have direct access to log data; or if we needed to aggregate things for it in a different process
[20:12:08] jinx!
[20:12:10] ha what he said
[20:12:24] Ah, sorry, thought you meant an end user
[20:12:48] so we already disable gexec which is good-ish
[20:13:20] so we're talking about the sanity of allowing a local ganglia daemon access to sensitive data
[20:14:15] Yeah, overall I would be very cautious about it. Are you actually putting that sensitive data into ganglia?
[20:14:33] Or is it just in the same dataset as something ganglia is accessing?
[20:14:33] no
[20:14:55] it's raw banner logs and we're talking about aggregating a few simple counters off of them
[20:15:28] i.e. impressions in the last 15 minutes, and similar
[20:17:18] gatcha. So I personally wouldn't do it, but I have no idea how much other similar data that user has access to, so it could be somewhat irrelevant.
[20:17:38] at present, little or none
[20:17:55] I would probably setup something to replicate a sanitized version of the logfile that ganglia then read
[20:18:55] right.
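A minimal sketch, in Python, of the kind of aggregation discussed above: a separate script reads a sanitized copy of the banner log and reports a simple counter (for example, impressions in the last 15 minutes) on stdout for the ganglia module to pick up. The log path, line format, and time window below are illustrative assumptions rather than details from this conversation.

```python
#!/usr/bin/env python
# Sketch: count recent entries in a sanitized banner log and print the total
# to stdout, so a ganglia module can collect it without reading raw logs.
# The path, line layout, and 15-minute window are assumptions for illustration.
import sys
import time

LOG_PATH = "/var/log/banner/impressions-sanitized.log"  # assumed path
WINDOW = 15 * 60  # seconds


def count_recent(path, window):
    cutoff = time.time() - window
    count = 0
    with open(path) as fh:
        for line in fh:
            # Assume each line begins with a UNIX timestamp field.
            try:
                ts = float(line.split(None, 1)[0])
            except (ValueError, IndexError):
                continue
            if ts >= cutoff:
                count += 1
    return count


if __name__ == "__main__":
    print(count_recent(LOG_PATH, WINDOW))
    sys.exit(0)
```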
we were talking about having the ganglia module shell out to a script that would run under sudo
[20:19:57] and that script would run as a user that generates the sane report to stdout
[20:20:05] sound reasonable?
[20:20:59] So the script actually reads the log file and then reports back the aggregates?
[20:21:06] yep
[20:21:10] Yeah, that would probably work
[20:21:34] alright, we'll do it!
[21:00:09] ori-l: I was wondering -- do you have any insight into why we use ganglia for some things and graphite for others? e.g. is the plan to move to graphite so any new reporting metrics should be written to go there?
[21:01:16] there isn't a simple answer to that; graphite is very flexible but its flexibility also makes it hard to expose publicly, so access to it is somewhat gated
[21:01:55] there's a strong existing commitment to keeping ganglia public and open and people have historically relied on it to diagnose a range of issues
[21:02:33] if you write metrics by sending them to statsd you can have them routed to both so you can enjoy the best of both worlds
[21:02:53] oh? tell me more!
[21:03:01] in general, I think that our graphite setup needs a lot of work so if you're developing something that you want to rely on then ganglia might be the thing to do
[21:03:38] statsd is a simple daemon that listens to metrics via UDP. metrics look like 'myapp.somemetric.dbcalltime:143|ms'
[21:03:41] I'd like to rely on it :) but I'll be writing more things that follow the same pattern I'm developing in the frack for banner impressions
[21:03:59] it computes summary statistics and flushes them to a backend every N seconds / minutes
[21:04:13] where 'backend' is currently (and for the forseeable future) either ganglia or graphite or both
[21:04:48] '|ms' measures latency, but there are other data formats; you can 'foo|incr' to increment a counter
[21:04:59] it's nice in that statsd keeps state, so your application doesn't have to
[21:05:54] can it accept timestamped calls?
[21:06:00] I took a quick look at your gmond module last night and it looks good; I would say get that in place before considering other options, because none are currently as reliable / familiar / maintainable as gmond metric modules. We use those all over the place.
[21:06:05] ok
[21:06:24] er, not sure, but I think not; it assumes the time it receives the metric is the time it applies to
[21:06:56] makes sense; that's how ganglia works -- it's just frustrating for me because we have a lot of batched operations
[21:07:17] and it turns out that graphite is friendly towards that sort of data
[21:07:35] so, a discovery i made recently is that the ganglia wire protocol is pretty simple and sometimes it's much simpler to just write to ganglia yourself rather than contort yourself to fit gmond's framework
[21:07:46] ah, yes, graphite is probably more flexible
[21:08:29] if you want to set aside half an hour later to walk me through your requirements i can try and help you figure out what would be a good fit
[21:08:36] i'd be happy to , i mean
[21:09:17] sure; that'd be cool
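A minimal sketch, again in Python, of sending metrics to statsd over UDP in the wire format described above ('name:value|type'). The host, port, and metric names here are assumptions; statsd conventionally listens on UDP port 8125, and standard implementations spell counter increments as '|c' (the '|incr' shorthand mentioned above may be specific to a particular setup).

```python
import socket

# Assumed statsd endpoint; adjust for the actual deployment.
STATSD_HOST = "localhost"
STATSD_PORT = 8125


def send_metric(payload):
    """Fire-and-forget a single statsd line, e.g. 'myapp.somemetric.dbcalltime:143|ms'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode("ascii"), (STATSD_HOST, STATSD_PORT))
    finally:
        sock.close()


# A timer, in the format quoted in the conversation above:
send_metric("myapp.somemetric.dbcalltime:143|ms")

# A counter increment (standard statsd counter syntax; metric name is made up):
send_metric("banner.impressions:1|c")
```

Because statsd keeps the running state and flushes summary statistics to its configured backends (ganglia, graphite, or both) on its own interval, the sending application stays stateless, which matches the behaviour described above.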