[01:02:17] SMalyshev, can you add me as a reviewer for wdq?
[01:02:35] add where?
[01:03:06] SMalyshev, to the wdq repo - this way i can +2 https://gerrit.wikimedia.org/r/#/c/374654/1
[01:03:14] i looked over it, seems good
[01:06:40] ah, so you can review it, you mean to add +2? I am not sure how to do it, I'll check. But I wanted gehel to also take a look, just in case
[01:07:11] it's somewhere in the gerrit setup, i'll see
[01:20:12] SMalyshev, what do you think about the helper functions as i wrote them for the centroid calc?
[01:24:33] is there a reasonable way to create pointToX(point) and xyzToPoint(x, y, z)?
[03:04:12] SPARQL gurus, I need your help :) I need to find objects such that all sub-objects satisfy a criterion. https://stackoverflow.com/questions/45951198/sparql-find-objects-with-all-sub-objects-matching-a-criteria
[12:18:03] It's a pity that PetScan doesn't add labels to automatically created items... Its ancestor, Creator, could
[12:18:45] I thought it did?
[12:27:30] @sjoerddebruin: https://www.wikidata.org/wiki/Special:WhatLinksHere/Q35243371
[12:27:56] Have you filled in "Label language"?
[12:35:43] now yes, but it doesn't help
[13:10:35] hi! I wonder which bot is responsible for the mass additions of sitelinks (probably by creating new items?) seen here: https://tools.wmflabs.org/wikidata-todo/duplicity.php?wiki=enwiki&mode=stats
[13:11:04] i'd be interested to learn more about how it operates
[13:25:30] Technical Advice IRC meeting starting at 3 pm UTC / 5 pm CEST in channel #wikimedia-tech, hosts: @addshore & @CFisch_WMDE - all questions welcome, more info: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[13:30:46] @pintoch, I suppose that your plot is about growth of enwiki pages, not Wikidata items
[13:31:25] it is a plot of enwiki pages without wikidata items, so I am wondering where these sharp drops come from
[13:31:51] ah, you are talking about drops! I thought about growth
[13:32:19] it is growth of sitelinks :)
[13:32:27] so a drop of items without sitelinks
[13:32:47] maybe https://www.wikidata.org/wiki/Special:Contributions/GZWDer_(flood)
[13:32:52] (or maybe a drop of enwiki articles, but that does not sound very likely)
[13:33:06] and many other bots too
[13:34:08] okay… so if these bots do it, is it even worth using Duplicity at all?
[13:34:58] I guess they create many duplicates by doing that, but still, I wonder
[13:35:38] maybe they create items only for clear cases which are not (or they think are not) on Wikidata yet
[13:38:31] GZWDer likes to create duplicates, yes
[13:38:50] just mass import of all pages to items
[13:39:02] even ones recently removed from an older item by mistake
[13:40:06] Stryn: yeah, I was thinking that too, but the drops are so sharp that I would really like to know what criteria they are using
[14:43:23] pintoch: if it's gwzder, then as far as I know they just create new blank items for anything not linked to wikidata, leaving everyone else to clean up all the duplicates, add statements and stuff
[14:43:43] er, gzwder I mean
[14:45:04] so using duplicity is still a good idea, since you can link pages to existing items instead of someone else creating a duplicate
[15:22:28] nikki: okay, thanks :)
[18:28:26] nikki: P971 hit suggestions \o/
[18:59:38] sjoerddebruin: wow :o
[18:59:41] finally
[21:21:22] How do I dump wikidata and load it into blazegraph?
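Regarding the 03:04 question (finding objects all of whose sub-objects satisfy a criterion): the usual SPARQL trick is double negation with FILTER NOT EXISTS, i.e. keep an object only if no sub-object exists that violates the criterion. A minimal sketch against the Wikidata endpoint, using placeholder predicates that are not from the original question: wdt:P527 ("has part") stands in for the sub-object relation and wdt:P31 wd:Q5 ("instance of human") for the criterion.

    # Objects that have at least one part, and none of whose parts
    # fail the (example) criterion of being an instance of human (Q5).
    SELECT DISTINCT ?obj WHERE {
      ?obj wdt:P527 ?part .                          # has at least one sub-object
      FILTER NOT EXISTS {
        ?obj wdt:P527 ?bad .                         # a sub-object ...
        FILTER NOT EXISTS { ?bad wdt:P31 wd:Q5 . }   # ... that does NOT meet the criterion
      }
    }
    LIMIT 10

The outer NOT EXISTS plays the role of the universal quantifier that SPARQL does not offer directly.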
[21:23:10] sec^nd: php maintenance/runScript.php extensions/Wikibase/repo/maintenance/dumpRdf.php > SOMEWHERE/dump.ttl
[21:23:28] wait, dump Wikidata or dump your own Wikibase installation?
[21:26:51] load a wikidata dump into blazegraph
[21:26:59] can I just use the ttl files
[21:27:06] And how big would that be uncompressed?
[21:27:13] okay, that's documented in https://github.com/wikimedia/wikidata-query-rdf/blob/master/docs/getting-started.md
[21:27:20] just using the ttl files should work
[21:27:32] not sure how big they're uncompressed
[21:40:08] Lucas_WMDE, hi, is there anything missing with my patches that i can fix? thx!
[21:44:32] sec^nd, i just did exactly that - not too hard -- download the ttl.gz file, munge it, load it, run the updater
[21:44:45] good luck :)
[21:46:13] yurik: no, but I think Jonas usually merges these kinds of changes and he's on vacation :)
[21:46:19] but I can probably merge them tomorrow
[21:46:33] np, thx!
[21:46:46] Lucas_WMDE, how hard do you think it would be to add babel to the build chain?
[21:47:13] i haven't played much with js build envs, they seem overly hacky / each project making its own
[21:48:25] SMalyshev, i updated my version of blazegraph with your distance patch, seems to work great - http://tinyurl.com/y7t3ycha
[21:48:36] yurik: cool
[21:48:37] thanks for getting it done so quickly
[21:50:12] What is the best way to query all of the properties of parent classes in RDF?
[21:50:17] in SPARQL
[21:51:12] yurik: no idea, I've never used babel
[21:51:24] I don't think I've ever worked on a JS build, actually (only contributed to projects where one was set up)
[21:51:37] it's a pain i tell ya ))
[21:51:52] yurik: some weird entries that query produces... e.g. this one: https://www.openstreetmap.org/node/3095857668#map=2/70.3/30.6 - is it in Antarctica or Sweden?
[21:52:32] looks like norway
[21:53:31] seems wrong
[22:00:25] SMalyshev, removed wp link
[22:02:14] SMalyshev, do you think it's possible to make the catch-up script faster for WD? OSM has as much if not more data, but it catches up within a day using minute diffs, or possibly even faster with hourly ones
[22:02:44] what do you mean by catch-up script?
[22:06:02] the problem is that we don't have diffs. So we have to apply each change one by one, which is kinda slow. If we could make a change script for, say, a whole day, it'd be much faster, but I am not sure how to do that yet
[22:24:12] Aleksey_WMDE: Thiemo_WMDE: Reminder about https://gerrit.wikimedia.org/r/#/c/370300/ :)
[23:55:10] hi, a really dumb question
[23:56:07] in this query, SELECT DISTINCT * WHERE { ?p p:P1539 ?pop } LIMIT 10, it returns wds and wd, but they link to the same. Is there a list to know what these mean?
[23:56:16] a link* I meant
[23:57:45] https://en.wikibooks.org/wiki/SPARQL/Prefixes
[23:57:47] I found it
[23:57:49] xd
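On the 21:50 question about querying the properties of parent classes in SPARQL: one reading is "which properties are used on a class and on all of its ancestor classes", which can be done by walking "subclass of" (P279) with a property path and mapping the wdt: predicates back to property entities via wikibase:directClaim. A sketch against the Wikidata endpoint; wd:Q146 (house cat) is only a placeholder starting class.

    # Properties used on Q146 and on every class it is (transitively) a subclass of.
    SELECT DISTINCT ?class ?prop WHERE {
      wd:Q146 wdt:P279* ?class .                  # Q146 itself plus all ancestor classes
      ?class ?directClaim ?value .                # any truthy statement on that class
      ?prop wikibase:directClaim ?directClaim .   # map the wdt: predicate to the property entity
    }
    ORDER BY ?class ?prop

If the question instead means "which properties should instances of the parent classes have", querying P1963 ("properties for this type") on the ancestor classes would be the alternative.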
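For the 23:56 question about wd and wds: in the WDQS data model, wd: is the entity namespace (the item itself) and wds: is the statement namespace (one node per claim); p: links an item to a statement node and ps: links that statement node to its plain value, which is why the query returns both kinds of URI. Roughly, expanding the query from the log:

    # ?item is a wd: entity, ?statement a wds: statement node for its
    # P1539 ("female population") claim, and ?value the claim's plain value.
    SELECT ?item ?statement ?value WHERE {
      ?item p:P1539 ?statement .
      ?statement ps:P1539 ?value .
    }
    LIMIT 10

The full list of prefixes is on the page linked at 23:57 (https://en.wikibooks.org/wiki/SPARQL/Prefixes).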