[02:53:29] is this a good place to ask random tech questions [02:57:23] SirNapkin1334: As long as you can somehow connect it to wikimedia :) [02:57:27] SirNapkin1334: Ask away [02:57:36] oh [02:57:38] uhm [02:57:39] uh [02:57:48] If you're not sure, just ask [02:58:02] is there such a thing as a calculator that can handle octowords [02:58:24] as in, ideally computer software [02:58:27] not a pocket calculator [02:58:40] yeah, I guess that isn't related to wikimedia. But if you just mean supports inputting base 8 numbers or converting between [02:58:56] then i think at the very least the unix bc command can do that [02:59:26] via the ibase and obase magic variables [03:00:03] e.g. if you do ibase=8 it will treat input as base-8 [03:00:13] and if you do obase=8 it will treat output as base 8 [03:00:26] well, in this case an octoword would be a base-32 hex number [03:02:02] huh? If it's a hex number it's by definition base-16 [03:02:31] * bawolff_ just looked up the definition of octo-word, which apparently should be taken literally to mean 8 words [03:03:13] oh dear [03:03:16] not base 32 [03:03:23] i get numbers mixed up [03:03:32] i mean can go up to 32 digits [03:03:35] long [03:03:36] so [03:03:51] FFFF FFFF FFFF FFFF FFFF FFFF FFFF FFFF [03:04:07] I don't deal with hex that much [03:04:07] So a 16 byte value [03:04:23] yes [03:04:26] I think you used octoword correctly, I just haven't heard the term before [03:04:42] yes, well, most computers don't go into 128-bit stuff [03:05:09] 64 bits, thus quad-words, are the most commonly used [03:06:12] Anyways, bc supports arbitrary sized integers so it should work [03:06:30] but when i just tested it obase=16 made it behave weirdly [03:07:14] I always thought word was meant to mean the native integer size for the computer [03:07:25] so a 64bit computer 1 word = 64 bits [03:07:28] but i don't really know [03:07:30] no, I believe it's simply the type of integer stored [03:07:45] QWORD is just the biggest it can store 
[03:07:52] This is more low-level computer stuff than i usually deal with [03:08:36] same for me [03:09:11] I'll be away for a while [03:11:56] although google says that gcc supports a __int128 integer type [03:52:40] bawolff_: how do I set the base? running `bc --obase=16 --ibase=16` returns an error [03:53:38] first start bc with no arguments [03:53:40] bc [03:53:41] then do [03:53:43] ibase=16 [03:53:53] although obase=16 did something weird when i was testing it [03:57:13] yeah same [03:57:15] I tried that [03:57:20] and it gave answers in base 10 [03:59:14] So try doing [03:59:24] obase=16;ibase=16 [03:59:43] SirNapkin1334: Also, i just found https://stackoverflow.com/questions/9889839/bc-and-its-ibase-obase-options which explains why things were weird for me [03:59:53] once ibase is read, it adjusts parsing of later input [04:01:30] so, you have to set obase first [04:01:35] and that works! [04:01:37] thank you [04:03:23] Thanks, I learned something new today too. I never realised that's how bc worked, and I think I've encountered that weird behaviour before and just gave up [04:04:03] i didn't even know bc existed so I definitely learned something new today lol [04:05:02] bc is very useful so thank you [04:05:04] The really obscure one is dc [04:05:30] i think anyone who actually understands dc is a true unix master magician [04:06:00] the invocations pretty much could be magic spells [04:06:15] i'm installing it with apt right now [04:06:32] uh [04:07:01] what the fuck does reverse polish mean [04:07:35] Like 2 2 + [04:07:46] instead of 2 + 2 [04:08:21] uh [04:08:23] why [04:08:25] and also how [04:08:50] it doesn't give any output when I try that [04:09:24] holy shit this uses raw stack [04:09:41] Oh, it doesn't actually use that syntax, just the operator not in the middle [04:09:43] try [04:09:45] echo '2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x'|dc [04:10:06] Example from wikipedia to print primes [04:10:31] anyone who actually 
understands that is a true master in my mind [04:10:45] that just looks like something you type in and get pwn'd [04:10:52] lol [04:11:15] well it does eventually overflow the stack in a sense [04:11:33] (indef loop using stack. Eventually ulimit kills it) [04:11:46] oh dear, I think it's time I should head to bed [04:11:51] thank you for your help [04:12:00] this was a good experience [04:12:00] no problem [04:12:06] now I know about bc [04:12:12] and I know about dc and not to touch it ever [04:12:15] bye [04:12:22] [AWAY] [10:34:26] Hello. I have tried looking around the network topology and datacenters pages but can't find what I'm looking for: is there a count of currently operating servers, either aggregated or grouped by function? [10:35:01] found a count of 77 varnish servers on a graph, but couldn't find the other groups [10:35:38] I don't think we publish numbers quite so much [10:35:46] What're you trying to find out? And for what reason? [10:36:00] So I can help point you in the right direction [10:36:03] first, it ought to be public, right? :) [10:36:39] but I will do a presentation to local IT teachers and I want to talk about real operations behind WP [10:36:50] Do many people care exactly how many servers we run? [10:36:59] I have the various architectures but real life numbers would really help [10:37:15] to imagine what that 2 GWh/year is about [10:37:32] Yes indeed they do, that's one of the most asked questions [10:37:39] Most asked questions where? [10:37:48] Presentation. [10:37:59] I happen to talk a lot about wikipedia you see. [10:38:44] people need some down-to-earth data to imagine the actual computers, data centers, internet lines. 
[10:39:45] There's a figure of ~1300 including VMs [10:39:54] they work with webservers, database servers, so it helps to see that the 250qps database request rate is handled by, say, 90 mariadb instances [10:39:56] yes we currently have 1324 hosts in Cumin, our orchestration framework [10:40:00] But not all servers are directly running Wikipedia [10:40:03] Or even indirectly [10:40:06] Yes I know. [10:40:08] that includes VMs [10:40:15] I'm just saying for clarity :) [10:40:20] The grand total is one of the numbers I was looking for. [10:40:30] Apparently it's about 1200 bare metal servers [10:40:54] and I suspect a bunch of VMs or CTs [10:41:04] but bare metal is good for me [10:41:09] ~100-150 based on the numbers above for vms [10:41:34] thanks. pity that these numbers have disappeared from the public. not even grafana has a total of running servers. [10:41:58] only percents [10:42:45] File a task in Phabricator? :) [10:42:53] there are traces of phabricator tasks showing how and when it happened, but all of those I've found were closed as "status quo" [10:43:13] It's certainly not an unreasonable request [10:43:20] recurring also [10:43:36] well, okay, I'll try to create a very narrowly phrased one. [10:43:56] and hope aklapper won't hit me with a cluebat :-P [10:43:59] Certainly a page on wikitech updated periodically should be minimally doable [10:44:14] easiest probably a grafana dashboard [10:44:28] it does have all the data, just need a template [10:45:08] thanks, I'm off phabricating something :) [10:45:40] wonder why we didn't try to build solar panel farms yet, looking at the energy consumption [10:46:00] lots of interesting data is public, btw. plenty of it new to me too. 
[10:46:25] We don't own the datacentres we're in [10:46:42] you're right [10:46:50] and they don't seem to have the incentive [10:46:54] the data varies quite a bit depending on which part of the infrastructure you're including or not, the answer is not just one number ;) [10:46:57] There have been threads of discussion on occasion about power sourcing and whether we should be using renewable energy etc [10:47:34] volans, first, all of them; then all the ones involved in wikimedia open wikis (that's the HARD one) [10:48:01] Reedy, yep, I guess it's still too big a task for Wikimedia [10:48:08] the problem is the definition of "them" grin ;) [10:48:13] there aren't only servers [10:48:36] volans, I can summarise but you don't want me to :) [10:49:16] sure, routers and network stuff is interesting, but most probably not for my audience :) [10:49:37] I'm just saying that depending on what "them" includes, the numbers change, and might change in a somewhat significant way [10:49:56] so basically webservers/cache/obj storage, databases, and the like, and easier to include load balancers too. [10:50:25] not including routers and switches and thermal sensors and stuff ;) [10:51:12] but generally the _magnitude_ is enough. ~1000 servers is a good start for imagination [10:51:46] but, as I said, I'm okay with tallies like varnish, apache proxy, apache server, swift node, squid node, .... [10:51:47] :-) [10:52:20] oh and elasticsearch. that's usually a large bunch. [10:52:36] Considering those mostly are used to run Wikipedia too :P [10:52:43] but I'm good. thanks for all the responses. [10:52:55] no problem :) [10:53:01] yes, wikipedia is the topic. it's not a problem if foundation stuff is mixed in. [10:53:01] File the task and see if you can nerd snipe sre into doing a grafana thing :P [10:53:19] I'll sacrifice a black cock. 
[10:56:31] … [10:56:42] wtf [10:59:16] ^^ [11:03:50] !ops [11:08:45] there is no need to sacrifice a rooster to get a grafana dashboard [11:08:52] I presume chicken? [11:12:39] p858snake|L, are you sure? ;) [11:13:46] McJill, fowl if you like. sounds much more boring. [11:14:05] Excuse me but WTF [11:15:01] ^^ [11:15:36] Krenair, it's an activity most often related to using human genitals to or with other humans, why do you ask? [11:15:57] grin, why did you think it was acceptable to come here and say that? [11:16:10] Krenair, say what? [11:16:52] I fail to see what you mean. [11:18:31] Oh come on now. Without knowing you, I can forgive the fowl line — who knows, Santeria maybe? — but to snarkily reply to Krenair like that, you know what you’re doing. [11:19:25] McJill, you mean my disagreeing with seeing "wtf" as a reply on a public channel, considering it rude behaviour? [11:19:49] apart from the open possibility that "wtf" could mean something I'm not aware of. [11:19:59] something _not_ rude. [11:21:33] !ops [11:21:50] Folks, let's keep this on-topic and family-friendly please. [11:21:53] hi [11:22:04] stwalkerster, thanks. [11:23:36] He was a Debian developer? Do a lot of them talk like that? [11:24:05] no [11:24:26] there are several Debian developers around here and I've not seen this before [11:24:32] Ok [11:24:59] /me looks for the popcorn [11:26:24] Reedy: Lol [11:49:32] he is not anymore [11:50:36] hasn't been for years; not sure who to tell at freenode? [11:50:46] ( https://nm.debian.org/person/grin ) [11:52:41] yeah I eventually tracked down how to contact the right people and will send them a note later [16:03:25] Hi anyone here good with Quarry stuff? [16:13:21] Several of us, just ask what you're trying to do [16:16:55] Don't ask to ask, just ask... [17:31:39] Hi everyone! Not sure if this is the right channel to ask, but... 
Could someone please add me (https://meta.wikimedia.beta.wmflabs.org/w/index.php?title=Special%3ACentralAuth&target=Daimona+Eaytoy) to the global developers group on Beta Cluster? I'm already a global sysop, if that helps. Many thanks! [17:33:54] If nobody responds here you can also try #wikimedia-releng [17:37:26] hi Daimona [17:37:32] I could [17:38:03] what do you want to use it for? [17:39:20] Thanks bawolff and Krenair :-) Mainly I'd use it to test partial blocks with another couple of trusted people (itwiki sysops) [17:39:38] Having some extra rights would make it easier to manage user rights without bothering someone else [17:40:36] I think we normally use the stewards group for this [17:40:42] it should do what you want [17:41:43] Daimona, try now [17:47:47] Oh yeah it's fine either way [17:48:18] I just thought the global-dev group would have fewer privileges, and fit better than steward [17:48:37] Yeah, now it's fine, thanks :-) [17:51:22] framawiki: Hi [17:51:28] The problem query - [17:51:37] hello ShakespeareFan00 [17:51:38] https://quarry.wmflabs.org/query/18892 [17:51:43] oh you're still here :) [17:52:00] There's no easy way from a filename to tell if something might be "controversial" [17:52:17] So I'd like someone technical to come up with a solution [17:52:54] See https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard#%22Controversial%22_media,_and_how_best_to_filter_it_.... for further context [17:56:00] i don't see how a bot/automatic stuff can detect "controversial" files [17:56:42] perhaps somebody else here has any ideas? 
[17:56:59] Well it can if there's a suitable categorisation/template [17:57:08] English Wikipedia has a badimage list [17:57:17] But that tends to be reactive [17:58:05] So what's actually needed is a categorisation and a {{risky image}} template admins can add [17:58:17] Or something like that [17:58:46] Hi NotASpy [17:59:17] See https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard#%22Controversial%22_media,_and_how_best_to_filter_it_.... [18:01:46] NotASpy; I was in here trying to find a technical solution [18:03:35] I don't think one exists. [18:03:46] Also IIRC this was highly controversial. [18:04:44] Define controversial [18:05:25] something something technical solution to a social problem [18:07:42] ShakespeareFan00: like if controversial = nsfw, it's in theory possible to use machine learning (e.g. google vision api) or if there is basically any well defined criteria like lots of reverts. But if it's just that someone somewhere doesn't like it... that's pretty impossible unless you have an explicit list [18:08:08] also image censorship is a huge hot button issue [18:09:21] bawolff: In this instance what I think I'm asking for is for the badimage list to be proactively applied vs reactively [18:09:40] ShakespeareFan00: see also https://meta.wikimedia.org/wiki/Controversial_content [18:09:45] and the uploader given the option to mark something as controversial by selecting an option on the file description page [18:10:04] Good luck convincing the community of that [18:10:05] It's not about censoring the content generally, it's about removing it from certain queries [18:10:56] The badimage list is to prevent vandalism/disruption, not for classification [18:11:04] but yeah, if controversial images were tagged as such in some manner (bad image list probably doesn't scale) the technical bits aren't really that hard [18:12:05] What's needed is something like the bad image list for 'problem' content [18:12:37] like old political cartoons, for example. 
[18:12:54] where the attitudes expressed aren't considered acceptable anymore [18:13:40] Or which would be politically extreme for example [18:14:12] Another use for an expanded 'problem image list' would be for images of sexuality [18:14:40] but that sort of list would likely be even more controversial than the content on it [18:15:21] I don't think that such a list being needed is the majority opinion on wiki [18:15:42] None of that is going to be well-received, either at enWiki or commons [18:15:45] heck commons keeps putting naked people on the main page... [18:16:03] bawolff: Not currently, but some jurisdictions are moving to a position of wanting sites to filter 'problem' content [18:16:22] And yes I am well aware Wikipedia etc aren't censored [18:16:55] I'm guessing something terrible happened today. Dare I ask what? [18:17:09] NotASpy: I'll explain in a bit [18:17:15] I have to go eat [18:17:28] back in a bit [18:29:21] ShakespeareFan00, I honestly think that even if the US put in place such laws and WMF legal forced stuff to happen there'd be a big fight [18:30:07] not sure many other jurisdictions are likely to sway opinions on-wiki [18:30:11] Honestly, if the US put in such laws, i think moving the foundation would seriously be on the table as an option [18:32:02] I don't think the US actually would but still [18:35:23] NotASpy: You wanted a fuller explanation? [18:35:46] yes, what has prompted this again today? [18:35:59] https://quarry.wmflabs.org/query/18892 is the query I have for finding images with missing information [18:36:32] 2 of the filenames it came back with (near the top of the current results) have filenames suggesting controversial content [18:37:10] I came here to ask if there was a technical mechanism for 'removing' such media from the query [18:37:29] Hell of a lot more than 2 controversial files on commons... 
[18:37:46] because although in this instance the filename is a warning, that's clearly not always going to be the case [18:38:17] Oh. There are various porny categories you can exclude in a query [18:38:29] not perfect but will get some [18:38:44] although if file is missing info anyway... [18:38:51] Therefore I felt there needs to be a tag/categorisation that means "controversial" content can be dropped out of the list.. [18:39:16] meaning contributors that feel uncomfortable going near it don't ever have to have it show up in queries [18:40:18] And as I said some jurisdictions are considering getting even tougher about viewing content they deem illegal, politically extreme etc... [18:40:41] https://commons.wikimedia.org/wiki/User:Bawolff/usage_stats_sexual_media has some starting places for queries [18:41:11] NotASpy: I know you've expressed a viewpoint about censorious jurisdictions previously, so I won't ask again [18:42:24] ShakespeareFan00: the problem is deciding the cut-off or threshold. We can do what we can to help people avoid viewing material that may be illegal and place them in legal jeopardy but it's incredibly difficult. [18:42:52] ShakespeareFan00: i mean, this has been brought up in the past and strongly rejected by the community. I don't think anything has changed since then [18:43:31] NotASpy: At WP:AN someone said that once "Structured Data" is rolled out... (cue insane laughter) [18:43:58] It's literally being rolled out today (to beta wiki) [18:43:59] it might be possible to have geo-filter based censorship tags as part of it.. [18:44:46] sure. But even without structured data we could do the same thing with categories [18:44:56] bawolff: Yes [18:45:12] i don't think there are any blockers besides political will [18:45:22] So I've got my answer... 
the problem now is finding someone willing to do the categorisation [18:46:03] Because for obvious reasons, I'm wary about going near content that's "controversial" in the context of this discussion [18:46:55] Note the page i linked has no images on it, just suggestive category names and numbers [18:47:22] and is also super old. I think russavia asked me to do it back before he got banned [18:48:20] ShakespeareFan00: If you want templates/categories on images, your issue is lack of consensus to do so, not someone to do it for you [18:49:07] (Sigh) [18:50:13] McJill: There's a WP:AN thread on English Wikipedia [18:50:42] I’m aware, thank you. I was replying to “the problem now is finding someone willing to do the categorisation” [18:50:51] I'd naturally like there to be a consensus reached before one is forced by external circumstances.. [18:51:26] And as I got my technical answer [18:51:48] Further discussion on this is best had at WP:AN (or some other policy forum) [18:51:52] * ShakespeareFan00 out [20:14:21] what is the status of the comment table? [20:14:34] is it already used in production? in the replicas? [20:18:35] it appears this is https://phabricator.wikimedia.org/T166733 [20:19:07] write new status [20:19:13] sounds like new comments would be going into there? [20:22:41] thanks Krenair [20:22:59] sorry I don't know more framawiki [20:23:12] found what i was searching for [20:23:13] you might try asking the individual people involved if you want/need more detail [20:23:15] cool [20:23:25] why my queries are broken as of today [20:23:28] :)