[00:23:54] How do I get all items that contain a source with a link to some given website?
[00:25:04] This naive approach does not work, for obvious reasons: http://tinyurl.com/y7a99tbg
[03:09:53] Hi
[13:41:13] Nudin: I'm not aware of any method other than downloading a dump and parsing that :/
[13:41:49] there's a ticket about it - https://phabricator.wikimedia.org/T157811 - but it doesn't look like anything will change any time soon
[13:42:10] nikki: that seems like a problem, because this is a fairly common need
[13:42:34] You also have [[Special:LinkSearch]], but maybe that isn't enough for you
[13:42:35] [1] https://www.wikidata.org/wiki/Special:LinkSearch
[13:43:01] yeah, it makes using URLs for references really annoying
[13:43:33] abian: that might work! Thanks!
[13:44:19] I'd like to be able to search by domain name at least; for me that would narrow it down enough that my queries shouldn't time out
[13:44:54] yes
[13:45:08] Cool :)
[13:46:14] the RDF version should be enhanced to use schema:isPartOf not only with Wikimedia URLs but with all URLs…
[13:54:47] how on earth am I supposed to use the mwapi search thing in queries
[13:55:01] it seems to return the page title, but I can't seem to get from the page title to the item
[13:57:06] oh, apparently you have to search using a generator instead of searching using a search
[13:57:09] that makes sense (not)
[18:06:00] nikki: a page has a data_item() method that you can use to get the Wikidata item
[18:06:28] wd=page.data_item()
[18:06:39] wd.get(get_redirect=True)
[18:15:43] wd=pywikibot.ItemPage('Q42') + wd.get() will also work
[18:17:27] hmm... almost... you need to give a data_repository too with ItemPage()
[19:19:20] edoderoo: that looks like Python, I was talking about SPARQL
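The "naive approach" from 00:25 (the tinyurl link) presumably looks something like the sketch below: walk every statement's references, pull out the reference URL (P854), and filter on the domain. A minimal sketch, assuming "example.org" as a placeholder domain; this scans all reference URLs in the graph, which is why it times out on the Wikidata Query Service.

```sparql
# Naive approach: find items whose references cite a given website.
# Times out on WDQS because it has to scan every reference URL.
SELECT DISTINCT ?item WHERE {
  ?item ?p ?statement .                       # any statement on any item
  ?statement prov:wasDerivedFrom ?ref .       # its references
  ?ref pr:P854 ?url .                         # P854 = reference URL
  FILTER(CONTAINS(STR(?url), "example.org"))  # placeholder domain
}
```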
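The mwapi generator pattern discussed at 13:54-13:57 can be sketched roughly as follows: using the API in "Generator" mode (rather than plain search) lets the service bind the item directly via wikibase:apiOutputItem, instead of returning only page titles. This is a hedged sketch, with "example.org" as a placeholder search string.

```sparql
# Search via the MediaWiki API from inside SPARQL, getting items (not titles) back.
SELECT ?item ?itemLabel WHERE {
  SERVICE wikibase:mwapi {
    bd:serviceParam wikibase:endpoint "www.wikidata.org" ;
                    wikibase:api "Generator" ;          # generator mode, not plain search
                    mwapi:generator "search" ;
                    mwapi:gsrsearch "example.org" ;     # placeholder search string
                    mwapi:gsrlimit "max" .
    ?item wikibase:apiOutputItem mwapi:item .           # bind the item directly
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```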