[00:01:49] audephone: fatal like shown in fatalmonitor
[00:01:55] not fatal PHP level
[00:02:12] Ok
[00:02:35] I think we are aware but suppose we should still handle this better
[00:04:37] And when there is a chain of redirects like here Q24668012 -> Q24668011 -> Q19722200, there is no way to edit the redirect on Q24668012 to directly point to Q19722200?
[00:05:46] (I mean something more straightforward than cancel at history level, then remerge)
[00:06:05] Maybe Special:MergeItems
[00:06:14] But not sure that works in this case
[00:09:26] https://www.wikidata.org/w/index.php?title=Q24668012&action=history through the API, wbcreateredirect, that works
[00:10:14] so perhaps a gadget redirect editor, to specify a new item id, and call wbcreateredirect again
[00:11:28] A gadget could work if we can't come up with a better solution in wikibase itself
[10:00:00] Jonas_WMDE: Are you in the office?
[10:38:00] is there a specific SPARQL channel?
[10:39:04] edoderoo: not that I know of
[10:52:41] Thiemo_WMDE no I am on holiday for 2 weeks :)
[10:57:34] :(
[10:58:12] * sjoerddebruin would really like a review of https://gerrit.wikimedia.org/r/#/c/306136/
[10:58:59] sjoerddebruin: I can't help you, sorry
[10:59:21] harmonia: I know, you do a lot of other useful stuff though <3
[10:59:32] thank you!
[11:23:04] Jonas_WMDE: I saw you merged a patch today. This confuses me.
[11:24:02] sjoerddebruin: I do have the tab open. Will review it as soon as I can.
[11:24:23] Will make my workflow even faster, so thanks
[12:40:12] DanielK_WMDE: Hey, thanks for the review, one question. Is there a robust way to use test providers in test setup?
[12:58:14] Amir1: data providers? sure, we use them all the time... what do you mean by robust?
[12:58:59] DanielK_WMDE: I mean I want to use $dump https://gerrit.wikimedia.org/r/#/c/305849/9/client/tests/phpunit/includes/Api/EntityUsageTest.php
[12:59:02] in setup
[12:59:07] (hoo's suggestion)
[13:03:13] Amir1: you should be able to do this in setUp.
Not in a data provider though (they are called before the db isolation is in place).
[13:04:01] Yes, so I need to remove that part from the data providers and put it somewhere else
[13:04:26] could be a static member
[13:04:47] note that each call to each test function uses a new instance of the test class. that confused me a lot in the beginning
[13:05:00] this means that if you want to do anything across tests, you need to use static members
[13:05:05] btw, there is also addDBDataOnce in MediaWikiTestCase.
[13:05:34] that way, you would only have to insert pages once, not for every test run
[13:05:54] the pages don't get cleaned up, but they shouldn't interfere with other tests.
[13:06:46] but you are only writing to the page table, not inserting actual page content, so that should be quick enough anyway
[13:08:42] For someone who's completely new to wikidata's endpoints: what would be the recommended way to input a text string and get a list of resources which have that string as part of their label? I'm interested in RDF output. Working with a SPARQL endpoint is fine, unless a dedicated API for something like this is preferable.
[13:11:35] csarven: FILTER regex (?itemLabel, "ac$").
[13:12:08] csarven: example: http://tinyurl.com/jka2nlc
[13:13:28] Wow, that's a horrible URL
[13:14:03] all SPARQL query URLs are pretty horrible frankly :p
[13:15:02] "https://query.wikidata.org/embed.html#%23French... " ?
[13:15:48] That's an application URL. Hardly "HTML". I can't curl that and get anything useful in HTML
[13:16:27] csarven: no, it was an example of a SPARQL query using FILTER regex to query labels
[13:16:47] you asked how to query strings in labels
[13:17:04] I say you should use FILTER regex and here is an example
[13:17:32] and then you can export the result of your query as json, csv, etc.
[13:18:25] Thanks. I'm aware of how SPARQL works. I was wondering about the recommended method. Is SPARQL the primary way to get 'live' data out?
[13:19:02] csarven: ah.
Well, you can use the API too
[13:19:14] as you want
[13:19:39] I usually prefer SPARQL but that's probably just me
[13:19:57] How do they compare in performance? Let's say for the example I've used.
[13:20:39] I dunno, cause i never use the API ^^
[13:20:49] sorry
[13:20:50] hi SMalyshev - Daniel K mentioned you yesterday as the go-to person for any questions to better understand wikidata query architecture (of course whenever you can spare some time)
[13:22:19] harmonia You should use something like FILTER (STRENDS(?itemLabel, 'ac')) instead of FILTER regex (?itemLabel, "ac$"). regex is expensive.
[13:24:12] csarven: in this precise example, yes, I agree, but STRENDS doesn't let you query for strings in the middle of labels, so to answer in a generic way...
[13:24:30] the query wasn't mine, it's in the examples list
[13:24:56] we should probably verify that the queries in the example list are optimized
[13:25:07] Depends on the query obviously. Can still use CONTAINS instead of regex for matching in the middle
[13:26:40] I'll give the SPARQL endpoint a go and see how responsive it is. Should be fine I imagine.
[13:26:40] Thanks
[13:27:12] the endpoint is limited to 30sec, so heavy queries will time out
[13:27:24] but I think the same limitation applies to the API?
[13:27:37] need confirmation, I don't really use the API
[13:29:55] Hi DanielK_WMDE - quick question: is wikireviews under wikimedia?
[13:30:30] harmonia: over the API you can not "set" the time limit, so it's most likely the same
[13:30:47] thank you edoderoo :)
[13:30:49] some huge sparql-queries die on my Raspberry Pi because of lack of memory
[13:31:01] I am asking this because if wikidata runs the risk of spam by letting products and reviews under its wing, then wikireviews should be concerned about it too
[13:31:20] they do give results on my Linux box with 5Gb
[13:42:35] DanielK_WMDE: Thanks!
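A minimal sketch of the label-matching discussion above: the `FILTER regex` form from the example list next to the cheaper `STRENDS` form SMalyshev recommends, and how such a query would be sent to the endpoint as a GET parameter. The query shapes and the `format=json` parameter are illustrative assumptions, not the exact queries from the log; a real Wikidata query would also typically filter on the label language.

```python
# Sketch (assumption: the query service accepts the SPARQL query as a
# "query" GET parameter). Variable names ?item/?itemLabel follow the log.
from urllib.parse import urlencode

REGEX_QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label ?itemLabel .
  FILTER regex(?itemLabel, "ac$")    # expensive: full regex engine
}
LIMIT 10
"""

STRENDS_QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label ?itemLabel .
  FILTER STRENDS(?itemLabel, "ac")   # cheaper plain suffix test
}
LIMIT 10
"""

def sparql_url(query: str,
               endpoint: str = "https://query.wikidata.org/sparql") -> str:
    """Build a GET URL for the endpoint; percent-encodes the query."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = sparql_url(STRENDS_QUERY)
```

For substrings in the middle of a label, `FILTER CONTAINS(?itemLabel, "ac")` plays the same role as `STRENDS`, as noted in the discussion.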
[13:46:34] https://docs.google.com/spreadsheets/d/1HwIHX9_W5XxlfOD5LlQPZqCsc9IZ8bGa7lOUQkK4aP0/
[14:17:03] Q: Has anybody succeeded in getting n-triples output from the wikidata SPARQL rest interface?
[14:17:53] I manage to get JSON result sets, but don't seem to have luck with CONSTRUCT queries
[14:20:45] Wait, got some success with a simpler query now ...
[14:38:04] Nope ... I don't get any of the triples I'm trying to construct ... just other stuff :/
[14:42:32] So, this is the command I try, which does generate n-triples, but none containing my "dummypred" predicate: https://gist.github.com/samuell/3d29d8b585513b4be9c2f9bedc8b9b83
[14:42:45] ... in case anybody can spot something wrong with it?
[14:58:25] I don't think that format=json makes sense for a CONSTRUCT output
[14:58:28] probably does nothing
[14:58:38] I presume you get RDF/XML back?
[14:59:09] See also: https://www.mediawiki.org/wiki/Wikidata_query_service/User_Manual#Supported_formats
[15:00:51] I'd be interested in the N-Triples/Turtle/N3 output for CONSTRUCTs. Please ping if you get that working.
[15:01:19] RDF/XML is fine too. The parser takes care of it anyway.
[15:18:18] csarven, Yeah, that was a typo, but I didn't fix it when removing
[15:18:41] csarven, It turned out to be something with how curl handles the data
[15:19:11] It works fine if I add the GET parameters to the main URL
[15:19:50] So, this query worked fine (testing now against a local blazegraph, so as not to bog down wikidata): curl -g 'http://localhost:9999/blazegraph/sparql?query=CONSTRUCT+{+?s+?p+?o+}+WHERE+{+graph+?g+{+?s+?p+?o+}+}' -H 'Accept:text/plain'
[15:20:55] Somehow curl -X GET --data-urlencode "..." seems to mess things up.
[15:21:39] csarven, So, that curl -g ... command above works fine now and returns N-triples
[15:22:15] (I'm using blazegraph's docs to get the right accept headers https://wiki.blazegraph.com/wiki/index.php/REST_API#RDF_data )
[15:22:49] Ok, gotta go, thx for trying to help csarven!
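The curl problem above can be sketched in Python: `curl -X GET --data-urlencode` puts the data in the request body, which the endpoint ignores on a GET, while the working `curl -g '...?query=...'` form carries the query in the URL. (In curl itself, `-G/--get` combined with `--data-urlencode` achieves the same URL placement.) The endpoint and query below are the local Blazegraph ones from the log; the choice of `Accept: text/plain` for N-Triples follows the Blazegraph docs linked above.

```python
# Sketch of the working request: the CONSTRUCT query travels as a
# percent-encoded URL query parameter on a GET, and the output format
# (N-Triples) is selected via the Accept header, not format=json.
from urllib.parse import urlencode
from urllib.request import Request

ENDPOINT = "http://localhost:9999/blazegraph/sparql"
QUERY = "CONSTRUCT { ?s ?p ?o } WHERE { graph ?g { ?s ?p ?o } }"

def construct_request(query: str, endpoint: str = ENDPOINT) -> Request:
    """Equivalent of: curl -g '<endpoint>?query=...' -H 'Accept: text/plain'"""
    url = endpoint + "?" + urlencode({"query": query})
    return Request(url, headers={"Accept": "text/plain"}, method="GET")

req = construct_request(QUERY)
# Sending it would be: urllib.request.urlopen(req).read() -- omitted here
# so the sketch does not hit a live server.
```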
[18:07:49] DanielK_WMDE: frimelle: Rebased https://gerrit.wikimedia.org/r/305439
[18:07:57] Would be nice to get a new +2 today
[20:09:27] DanielK_WMDE_: Hey, one thing
[20:09:27] https://en.wikipedia.org/w/api.php?action=query&prop=fileusage&titles=File%3AExample.jpg
[20:09:46] I thought it's kind of the standard way to represent query results
[20:10:01] If you think we should make it more compact, I'm all for it
[20:22:14] DanielK_WMDE_: aude: Can you please re-+2 https://gerrit.wikimedia.org/r/305439 after the rebase?
[20:37:20] Amir1: honestly, i'm not quite sure yet. i'd welcome more comments. hoo, can you look at my comments on amir's patch?
[20:38:04] DanielK_WMDE_: Sure, will have a look in a bit
[20:38:14] am currently looking at https://phabricator.wikimedia.org/T143818
[20:38:15] Thanks
[20:38:23] transaction handling is nasty, still
[20:39:57] i'm in the archcom meeting
[20:40:05] and i'm at a training thing the next two days
[20:40:09] so not much time for reviews
[20:40:21] I see
[20:40:27] the above is just a rebase
[20:40:37] other than that, I'll try to poke aude and Lucie
[21:00:16] hoo: I'm starting to think we should exclude sport (P641) from the entity suggester calculations, because it's being used on all kinds of items.
[21:04:14] sjoerddebruin: Hm… that property looks a little like it serves too many purposes
[21:04:51] yeah, it's being used for all kinds of things... it doesn't make the suggestions better on, for example, https://www.wikidata.org/wiki/Q26662689
[21:05:32] ah, shoot
[21:06:16] We could do it better by making specific items for things, but those would also have to be translated and stuff...
[21:06:21] This is one of the properties that certainly throws the suggestions off
[21:07:43] sjoerddebruin: Done
[21:07:52] Looks better now
[21:08:03] woah, didn't know it was that easy
[21:10:12] It's easy (yet ugly)
[21:10:55] We should come up with ideas for improvements.
[21:11:08] It's one of the key elements of Wikidata and it's sad that it's declining so much.
[21:11:29] It used to give better suggestions in the past, imo.
[21:13:31] The ideas behind it are significantly flawed… the model was good enough for some time, but nowadays there are too many properties that break the assumptions
[21:15:11] We really need a large usage database or something, which could also be used for sorting search results.
[21:20:20] I don't think it's very hard to find a better model for this, but someone needs to put some thought into it
[21:20:52] Depending on how complex that model is, implementing it can be anything from trivial to a project of its own
[21:21:20] I think there are easy fixes that will yield way better results, but I haven't yet put that much thought into it
[21:21:55] Well, I think this was an easy "fix".
[21:24:05] It doesn't seem to have influenced the sports club items too much.
[21:24:37] have you seen https://phabricator.wikimedia.org/T143645 btw?
[21:27:21] No, not yet
[21:27:31] do you have steps to reproduce, or does it just happen sometimes?
[21:27:59] I can reproduce it… yikes
[21:28:02] it happens with certain combinations, it seems
[21:28:28] it happens a lot when I add information to people, but disappears when I add more external identifiers (might be related)
[21:31:49] Grrr… accidentally refreshed a phabricator ticket, and my comment draft is gone :S
[21:32:44] It mostly saves my stuff when I close a tab
[21:32:53] but apparently not on F5
[21:33:11] ugh
[21:33:33] don't most browsers trigger a confirmation popup then?
[21:33:50] Only if you sent something with POST beforehand, I guess
[21:33:57] at least firefox didn't
[21:34:00] Ah right
[21:34:36] Same behaviour on Safari.
[21:35:09] Another thing that annoys me about Phabricator is that you can't right-click on search suggestions.
[21:53:41] wow
[21:53:57] We always do two API calls when the suggester opens?!
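The entity suggester discussion above (why a property like sport/P641, used on all kinds of items, degrades suggestions) can be illustrated with a toy co-occurrence model. This is NOT the actual Wikibase suggester implementation; the item sets and property IDs below are made-up toy data. It shows the failure mode: a property that co-occurs with everything dominates the ranking while carrying no signal.

```python
# Toy co-occurrence suggester: score candidate properties by how often
# they appear together with the properties already on the item.
from collections import Counter
from itertools import combinations

def build_pair_counts(items):
    """Count per-property usage and same-item property pairs."""
    single, pairs = Counter(), Counter()
    for props in items:
        single.update(props)
        pairs.update(frozenset(p) for p in combinations(sorted(props), 2))
    return single, pairs

def suggest(partial, single, pairs, top=3):
    """Rank candidates by max conditional co-occurrence with a present property."""
    scores = {}
    for cand in single:
        if cand in partial:
            continue
        scores[cand] = max(
            (pairs[frozenset((cand, p))] / single[p] for p in partial),
            default=0.0,
        )
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Made-up data: a human-like cluster (P31/P21/P569/P570), but P641 on everything.
items = [
    {"P31", "P21", "P569", "P641"},
    {"P31", "P21", "P570", "P641"},
    {"P31", "P17", "P641"},
]
single, pairs = build_pair_counts(items)
suggestions = suggest({"P31", "P21"}, single, pairs)
# The ubiquitous P641 ranks first despite being uninformative -- which is
# exactly why excluding it from the calculations improved the suggestions.
```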