[01:11:49] I see that Merge has been OOJS-ified. It is very pretty. Thank you.
[09:14:36] CI is back around :-)
[09:14:44] wmflabs had some issue preventing it from spawning/deleting instances
[09:53:26] PROBLEM - Host wdqs1001 is DOWN: PING CRITICAL - Packet loss = 100%
[10:10:33] Hello out there. I was wondering, how one would go about to migrate a whole lexicon with its data into wikidata? Some of our data is already there, but we could enrich it, but most of it is new. Thanks
[10:17:51] RECOVERY - Host wdqs1001 is UP: PING OK - Packet loss = 0%, RTA = 0.57 ms
[12:10:06] hey there. I am going to cut the wmf branch over the afternoon
[12:10:30] starting roughly now, I have no idea when the branch will actually be cut though
[12:11:30] i don't think we are making a branch this week, afaik
[12:47:23] the cut script is running now
[12:47:34] but I guess for wikidata/wikibase it points to some branch
[13:19:52] what's the right way to add a reference in https://www.wikidata.org/wiki/Q23921645 ?
[13:19:55] for the date of birth
[13:19:57] cf. http://data.bnf.fr/12629705/andre_de_paniagua/
[13:24:55] * hoo waves
[13:26:24] yannf: this sort of touches on that: https://www.wikidata.org/wiki/Help_talk:Sources#Authority_control_databases_as_sources.E2.80.94how_to.3F.21
[13:27:58] hoo: is there a reason allowDataAccessInUserLanguage is not set for commons beta wiki?
[13:28:45] aude: It should be
[13:28:58] ok, found it
[13:28:58] https://phabricator.wikimedia.org/rOMWC31459edf7f128e6a73ef5525e5c5fdbe32f8a3ae
[13:29:05] ok :)
[13:29:20] is this something we set also for wikidata?
[13:29:31] i don't know if we announced that (or need to)
[13:29:48] That is https://phabricator.wikimedia.org/T122670
[13:29:54] but still blocked on doing it for test
[13:30:03] we could maybe do it for test today, if Lydia is ok
[13:30:21] and then announce a date for Wikidata some time
[13:31:28] ok
[13:33:55] thedj, sorry, I don't get it
[13:34:16] I mean, practically, what should be done?
[13:36:04] I added retrieved+date and stated in
[13:36:04] +data.bnf.fr (Q20666306)
[13:36:59] I've seen others doing differently
[13:37:48] i.e. imported from
[13:37:48] +data.bnf.fr (Q20666306)
[13:37:54] or
[13:38:14] reference URL+url
[13:38:18] or
[13:38:20] both
[13:42:15] anyone else here?
[13:42:34] hi
[13:42:37] hi
[13:43:56] I'm from mh
[14:01:42] yannf: imported from is for bots
[14:03:16] though ppl often use it because they copy what they see.
[14:03:48] but imported from should be a subset of stated in really if I understood correctly
[14:05:12] ok
[14:05:52] thedj, and what about data.bnf.fr (Q20666306) vs. reference URL+url?
[14:07:01] data.bnf.fr (Q20666306) seems useful to classify the source
[14:07:05] on that front I really don't know :(
[14:07:14] machine readable
[14:07:23] but it is less precise for human
[14:07:29] s
[14:08:38] there is some circularity when it comes to quoting authority control.
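
The reference pattern thedj recommends above ("stated in" plus "retrieved", rather than "imported from") can also be added by script. Below is a minimal sketch using pywikibot, under stated assumptions: the item Q23921645, the source item Q20666306 and the data.bnf.fr URL come from the discussion, while the property IDs P569 (date of birth), P248 (stated in), P854 (reference URL) and P813 (retrieved) and the retrieval date are values I am supplying; double-check them before running anything against the live site.

    import pywikibot

    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()

    # Item and source from the discussion above; property IDs are the
    # assumptions named in the lead-in, not confirmed by the log.
    item = pywikibot.ItemPage(repo, "Q23921645")
    item.get()
    dob_claim = item.claims["P569"][0]          # the date-of-birth statement

    stated_in = pywikibot.Claim(repo, "P248")   # "stated in"
    stated_in.setTarget(pywikibot.ItemPage(repo, "Q20666306"))  # data.bnf.fr

    ref_url = pywikibot.Claim(repo, "P854")     # "reference URL"
    ref_url.setTarget("http://data.bnf.fr/12629705/andre_de_paniagua/")

    retrieved = pywikibot.Claim(repo, "P813")   # "retrieved"; placeholder date
    retrieved.setTarget(pywikibot.WbTime(year=2016, month=4, day=1))

    dob_claim.addSources([stated_in, ref_url, retrieved],
                         summary="Add data.bnf.fr as a reference")

Whether to record both the "stated in" item and the raw "reference URL" is exactly the open question in the discussion; the sketch simply adds both.
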
[15:25:35] yurik: i am reading your emails. just super swamped right now. sorry! :(
[15:25:57] Lydia_WMDE, tis ok, let me know what you think
[16:11:44] Hello! Is it possible to retrieve a list of all possible locations via some kind of API? I could also extract it from the JSON dump, but would probably first have to extract all subclasses of location etc.
[16:17:41] gothos: maybe you can find something here that you can adopt for your purposes? https://www.mediawiki.org/wiki/Wikibase/Indexing/SPARQL_Query_Examples
[16:18:15] gothos: a list of all locations probably has several million entries... is that really what you want?
[16:49:33] DanielK_WMDE__: Thanks, I'll take a look, didn't even know that there was a SPARQL endpoint. And yes, it is what I want since I'm doing some entity/relation extraction on a dataset :)
[16:49:58] gothos: the response is probably so big that the request will time out...
[16:50:21] any analysis on the full dataset is best done on the dump
[16:50:32] perhaps use SPARQL to find all the relevant subclasses, and then scan the dump?
[16:52:42] DanielK_WMDE__: Yeah, no worries. Just found the RDF dump downloads, I'll just import the data into a local virtuoso db
[16:52:53] and run the queries on that
[16:53:22] if you have virtuoso sitting around anyway, sure, why not :)
[17:23:35] DanielK_WMDE__: I run wikidata on blazegraph, I tried with mongodb and simple queries work, but anything a bit more complex mongodb times out and uses a huge amount of memory ...
[17:24:12] has anyone tried other nosql / json document based db to import latest.json directly?
[17:24:36] i don't know
[17:26:31] NoSQL usually isn't great for transitive queries. Nor is SQL, for that matter :) You want a graph database for that, or a proper reasoner. A triple store is a little bit of both...
[17:28:49] I haven't been able to comprehend sparql notion .. I tried some examples, but I don't understand the sparql concept yet ..
[18:28:36] jzerebecki: can you tell Lucie how to fix this? https://integration.wikimedia.org/ci/job/mwext-testextension-hhvm-composer/3505/console
[18:32:33] DanielK_WMDE__, frimelle: error is unrelated to the patch. it seems AP depends on something that depends on Wikidata, which includes its own copy of AP
[18:33:47] DanielK_WMDE__, frimelle: uh it is ContentTranslation...
[18:34:52] jzerebecki, DanielK_WMDE__ uuh, I'll be in tomorrow probably. Can we have a look on that then?
[18:35:24] Is it kind of a dependency circle?
[18:37:26] frimelle: yes AP -> ContentTranslation (for https://gerrit.wikimedia.org/r/#/c/280950/ ) -> Wikidata (which includes the 2nd copy of AP). will revert that and remove CT from the dependencies of AP
[18:37:46] jzerebecki: Thanks :3
[18:39:00] jzerebecki: thanks for sorting this out!
[19:25:06] hello everyone. who should I speak to about an error in an interlanguage link?
[23:49:45] "Jep" --Lydia_WMDE; J makes a different sound in English :-P
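
For the earlier question from gothos about collecting all locations, here is a minimal sketch of the approach DanielK_WMDE__ suggested: use SPARQL to find the relevant subclasses, then scan the dump. It queries the public endpoint at https://query.wikidata.org/sparql; the root class Q2221906 ("geographic location") is an assumption on my part, so substitute whichever class actually covers the locations needed.

    import requests

    WDQS = "https://query.wikidata.org/sparql"

    # All transitive subclasses (P279, "subclass of") of an assumed root class.
    # Q2221906 ("geographic location") is a guess at a suitable root; adjust it.
    QUERY = """
    PREFIX wd:  <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT DISTINCT ?class WHERE { ?class wdt:P279* wd:Q2221906 . }
    """

    def location_classes():
        resp = requests.get(WDQS,
                            params={"query": QUERY, "format": "json"},
                            headers={"User-Agent": "location-class-demo/0.1"})
        resp.raise_for_status()
        bindings = resp.json()["results"]["bindings"]
        # Reduce full entity URIs to bare Q-ids such as "Q2221906".
        return {b["class"]["value"].rsplit("/", 1)[-1] for b in bindings}

    if __name__ == "__main__":
        classes = location_classes()
        print(len(classes), "candidate location classes")
        # Next step, as suggested in the discussion: scan the JSON dump and
        # keep items whose P31 ("instance of") values fall in this set.

Fetching only the class hierarchy keeps the result small, which avoids the timeout DanielK_WMDE__ warned about for listing every location item directly; the per-item work then happens offline against the dump.
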