[15:01:13] Hi all, I'm still pretty new to Wikidata and I have a question. I work for an institution with a large authority file (GTAA), which is already partly aligned with Wikidata. We would like to make more alignments, and we have a tool (CultuurLINK) that could help us create more such links. However, it only works with RDF data, and only with datasets that can be accessed quickly (SPARQL, for instance, is too slow).
[15:01:13] If we wanted to download the entire set of humans (instances (P31) of human (Q5)), over 4 million names, how could we do that? We have a tool to match datasets, but the API/SPARQL is too slow, so we need to import the data in some way. Do you have any suggestions? Many thanks!
[15:57:44] Where do we need broken website links? https://www.wikidata.org/w/index.php?title=Q21385413&diff=621040246&oldid=620792186
[16:00:50] *cough* no reference? *cough*
[16:31:25] Quick question: how do I get the number of page views and the number of characters in the Wikipedia article for a given Wikidata entry?
[17:42:41] addshore: help, https://gerrit.wikimedia.org/r/#/c/403930/ is failing tests
[20:32:36] hi :)
[20:37:51] Jonas_WMDE: does my comment on the namespaces ticket explain it properly? If not, I can try to answer more questions on IRC.
[20:39:27] thanks SMalyshev
[21:22:45] aand bye
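(On the 15:01 question about bulk-downloading all humans: the usual alternative to SPARQL for a dataset that size is the full Wikidata JSON entity dump, which contains one entity per line inside a JSON array. A rough sketch of filtering such a dump for P31 = Q5 follows; the function names are my own, and the dump path is whatever file you downloaded from dumps.wikimedia.org, so treat this as an illustration rather than a ready-made tool.)

```python
import bz2
import json

def is_human(entity):
    """True if a Wikidata entity dict has a P31 (instance of) claim with value Q5 (human)."""
    for claim in entity.get("claims", {}).get("P31", []):
        value = claim.get("mainsnak", {}).get("datavalue", {}).get("value", {})
        if isinstance(value, dict) and value.get("id") == "Q5":
            return True
    return False

def iter_humans(dump_path):
    """Stream (QID, English label) pairs for humans from a bz2 JSON entity dump.

    The dump is a JSON array with one entity per line, so we strip the
    surrounding brackets and trailing commas and parse each line on its own.
    """
    with bz2.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue
            entity = json.loads(line)
            if is_human(entity):
                yield entity["id"], entity.get("labels", {}).get("en", {}).get("value", "")
```

Streaming line by line keeps memory flat even though the full dump is tens of gigabytes; the output could then be written out as RDF or CSV for CultuurLINK.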
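(On the 16:31 question about page views and article length: page views per article are served by the Wikimedia REST API, and the MediaWiki action API's `prop=info` returns a `length` field, though note that is the page size in bytes, not a character count. A minimal sketch that only builds the request URLs, assuming you have already resolved the Wikidata item to its Wikipedia sitelink title; the helper names and default dates are mine.)

```python
from urllib.parse import quote

PAGEVIEWS_API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(title, project="en.wikipedia.org",
                  start="20180101", end="20180131"):
    """URL for daily per-article page view counts from the Wikimedia REST API."""
    return (f"{PAGEVIEWS_API}/{project}/all-access/user/"
            f"{quote(title, safe='')}/daily/{start}/{end}")

def article_info_url(title, project="en.wikipedia.org"):
    """URL for the action API's prop=info, whose 'length' field is the page size in bytes."""
    return (f"https://{project}/w/api.php?action=query&prop=info"
            f"&titles={quote(title, safe='')}&format=json")
```

The sitelink title itself can be fetched from the Wikidata action API with `action=wbgetentities&props=sitelinks` for the item's QID; fetching the URLs above with any HTTP client then gives JSON you can read the counts from.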