[08:47:34] https://blog.okfn.org/2017/10/26/how-wikimedia-helped-authors-make-over-3000-articles-green-open-access-via-dissemin/
[08:52:15] yipee \o/
[11:12:47] Hi all! This is Jolan from Europeana Collections. I'm crossposting a question here that I mentioned over at #mediawiki, hoping that someone knows more about this
[11:13:32] The Europeana SPARQL endpoint was federated into the SPARQL API, albeit a few years back. When trying to create a query that calls on the Europeana SPARQL API through the Wikidata query service, I get the error "Unknown error: Service URI http://query.wikidata.org/bigdata/namespace/wdq/sparql.europeana.eu is not allowed". My SPARQL syntax seems in order, using the SERVICE parameter to call upon the Europeana endpoint at sparql.
[11:14:16] Which would leave the possibility that the Wikidata Query Service isn't connected to the Europeana SPARQL endpoint (anymore). Is there a way this can be checked, or is there someone who knows the status of this? Relevant document: Europeana is mentioned on the list of federated API endpoints in the SPARQL query service user manual: https://www.mediawiki.org/wiki/Wikidata_query_service/User_Manual#Federation
[11:16:02] Hi. I am looking at the film metadata in Wikidata, and came across something strange. lack a reference to the Internet archive, even though include such reference. Do you have any idea why wikidata lack the reference? Can it be added?
[11:21:54] explain what I am doing with wikidata.
[11:47:25] I have no idea what could cause a reference for an entry that was last updated in August to be missing in wikidata, nor how to rectify the situation. :(
[12:08:49] join
[12:12:54] JolanWuyts: can you paste your query somewhere (e. g. pastebin)? It looks like the service URI isn't absolute
[12:13:26] I think you might have something like `SERVICE sparql.europeana.eu …` when it needs to be `SERVICE …`
[12:13:49] pere, could you elaborate?
[12:15:29] @Lucas_WMDE sure!
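[Editorial aside: per Lucas's diagnosis above, the SERVICE target must be an absolute IRI in angle brackets. A minimal sketch, assuming the endpoint discussed in the log; the triple pattern inside SERVICE is a placeholder, not Jolan's actual query:]

```sparql
# Runs on https://query.wikidata.org/
# The endpoint IRI must be absolute -- http:// protocol and trailing
# slash included -- and wrapped in angle brackets.
SELECT ?s ?p ?o WHERE {
  SERVICE <http://sparql.europeana.eu/> {
    ?s ?p ?o .    # placeholder pattern; substitute real triples
  }
}
LIMIT 10
```

[Without an absolute IRI, the engine resolves the name relative to its own base URI, which is how the "Service URI http://query.wikidata.org/bigdata/namespace/wdq/sparql.europeana.eu is not allowed" error quoted above arises.]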
[12:16:20] https://pastebin.com/F2jaV7k9
[12:17:09] yup, with it works
[12:17:16] i. e. the protocol (http://) and the final / were missing
[12:18:22] oh, huh. I tried that, but it seems I used https instead. Derp.
[12:19:09] oohkay, it works on my end now. Great to know that the federated endpoint is operational! Thanks a bunch Lucas
[12:19:31] no problem :)
[12:19:35] good luck with the queries!
[12:34:29] Jhs: sure. not sure what you want more info on?
[12:34:53] Jhs: ah, you missed the initial question... repeating.
[12:35:31] Hi. I am looking at the film metadata in Wikidata, and came across something strange. lack a reference to the Internet archive, even though include such reference. Do you have any idea why wikidata lack the reference? Can it be added? _using_Wikidata.html > explain what I am doing with wikidata.
[12:37:03] Amir1: Do you have time to approve a 1-point release notes update? https://github.com/wmde/WikibaseInternalSerialization/pull/124
[12:56:05] pere, okay, i understand now. the answer is that there is nothing automatic – the link to the Internet Archive was added manually to the Wikipedia article, and needs to be added to the Wikidata item as well, which you (or anyone) can do manually
[12:57:01] pere, often bots import stuff that is already in Wikipedia to Wikidata, but there are many reasons why this maybe wasn't done for this item. The main reason is probably that a bot imported everything from Wikipedia when the property was created, but then the template was added to the Wikipedia article _after_ that import took place
[12:58:01] pere, to add it yourself: Go to the item, press "Add statement" all the way at the bottom, type "Internet Archive ID" and then write "Popeye_meetsSinbadtheSailor" in the next field that appears
[12:59:32] Jhs: aha. then I have misunderstood completely how wikidata get its content. I always assumed it was imported automatically and regularly with high frequency from the content on wikipedia.
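[Editorial aside: the "Internet Archive ID" property Jhs describes adding is P724 (the same property id appears in the PetScan query later in the log). A small sketch for checking whether an item already carries that statement; the Q-id below is a hypothetical placeholder:]

```sparql
# ASK returns true if the item already has an Internet Archive ID (P724).
# wd:Q1234567 is a hypothetical placeholder; substitute the film's item id.
ASK {
  wd:Q1234567 wdt:P724 ?archiveId .
}
```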
[13:00:38] the few I can add myself manually to wikidata is not going to make much difference when I try to go through thousands (or even millions) of video files to find their IMDB title ID.
[13:00:41] nope, not automatic in that sense. it's often done by bot (which are also automatic), but it's normally a one-time thing, or at least it happens in irregular intervals (which can be far apart)
[13:01:44] then my suggestion for people to add the IMDB and Internet Archive macros to the wikipedia articles is useless. :(
[13:04:17] is there some way I can ask the people who imported the film refs using a bot to rerun the bot to update the content?
[13:06:22] pere: you can try to ask nicely, but they aren't required to do that
[13:06:59] I do not even know how to figure out who to ask nicely...
[13:12:22] pere: look at the history to see which bot added them, then look at the bot's userpage to see who runs that bot
[13:14:16] I suspect User:WikiDataMovieDB, but see no user page. :(
[13:33:37] pere, looking into this now, give me a few minutes
[13:35:37] Jhs: thank you. In the mean time I added a question and a link at the end of .
[13:40:55] Jhs: I got to run soon.
[13:43:19] pere, it seems there are 221 films with that template on enwiki that _don't_ have a property for that in Wikidata
[13:43:47] https://petscan.wmflabs.org/?language=en&project=wikipedia&ns%5B0%5D=1&templates_yes=Internet%20Archive%20short%20film&sparql=SELECT%20%3Fitem%20WHERE%0D%0A%7B%0D%0A%20%20%3Fitem%20wdt%3AP31%2Fwdt%3AP279*%20wd%3AQ11424%20.%0D%0A%20%20OPTIONAL%20%7B%20%3Fitem%20wdt%3AP724%20%3Fdummy%20%7D%0D%0A%20%20FILTER%28%21BOUND%28%3Fdummy%29%29%0D%0A%7D&manual_list_wiki=enwiki&interface_language=en&active_tab=&doit=
[13:44:01] aha. that would extend my dataset quite a bit. :)
[13:44:12] i should be able to import those rather quickly manually
[13:44:41] thank you!
[13:44:48] got to run. see you later.
[13:46:19] (Y)
[13:54:55] How do I get a list of enwiki article titles in PetScan?
(In the above query?)
[14:02:37] lea_v_wmde_: around?
[15:57:46] Jhs: did it work out to update the 221 films?
[16:42:45] Hello, I am searching for a special channel. Can someone help me?
[16:42:52] PLS PM
[16:50:41] Jhs: looks like my data set increased by 6 movies. :)
[17:22:46] DanielK_WMDE: OK, jumping back into this, one final time: what we're looking for after your patch is a hook in SqlStore that replaces the EntityByTitleLookup returned from getEntityByTitleLookup based on... what, exactly? I can't tell what kind of title is being looked up at that point...
[17:24:13] DanielK_WMDE: Or is it better to put the hook in SiteLinkTable#getEntityIdForLink so we can check the title each time?
[19:36:16] DanielK_WMDE: I submitted an attempt that seems to work... still not sure if this is 100% the best solution, but at least I've achieved my goal of making a working system before leaving for vacation (next week)
[19:42:25] PROBLEM - High lag on wdqs1004 is CRITICAL: CRITICAL: 31.03% of data above the critical threshold [1800.0]
[19:44:56] PROBLEM - High lag on wdqs1003 is CRITICAL: CRITICAL: 36.67% of data above the critical threshold [1800.0]
[19:45:26] PROBLEM - High lag on wdqs1005 is CRITICAL: CRITICAL: 34.48% of data above the critical threshold [1800.0]
[19:54:09] oops
[19:54:37] I'm maybe the bad guy that adds too much load to the server by changing mappings.
[19:55:24] Tpt[m]: behave
[20:22:26] PROBLEM - Check systemd state on wdqs1003 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[20:25:26] RECOVERY - Check systemd state on wdqs1003 is OK: OK - running: The system is fully operational
[21:28:53] Jonas_WMDE, That gif explanation is awesome. (https://twitter.com/Jokrates/status/921831269499068416 )
[22:01:16] pere, sorry, didn't have time, had to go to the hospital & stuff
[22:01:22] looking into it again now
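[Editorial aside: for readability, here is the SPARQL portion of the PetScan link Jhs shared at 13:43:47, URL-decoded and commented. PetScan intersects this result set with the enwiki pages that use the "Internet Archive short film" template to arrive at the 221 films:]

```sparql
# Film items (instance of film, Q11424, including subclasses via P279*)
# that have no Internet Archive ID (P724) statement.
SELECT ?item WHERE
{
  ?item wdt:P31/wdt:P279* wd:Q11424 .
  OPTIONAL { ?item wdt:P724 ?dummy }
  FILTER(!BOUND(?dummy))
}
```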