[03:51:28] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1951 bytes in 0.112 second response time
[04:11:28] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1955 bytes in 0.098 second response time
[04:18:27] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1952 bytes in 0.092 second response time
[04:28:28] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1925 bytes in 0.101 second response time
[04:50:28] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1954 bytes in 0.106 second response time
[05:15:28] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1949 bytes in 0.088 second response time
[06:37:33] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1947 bytes in 0.087 second response time
[07:02:33] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1938 bytes in 0.123 second response time
[07:34:33] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1945 bytes in 0.088 second response time
[08:09:33] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1946 bytes in 0.099 second response time
[08:21:33] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1953 bytes in 0.100 second response time
[08:22:25] grrr
[08:26:33] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1931 bytes in 0.104 second response time
[10:00:22] I've run into T187855 while setting up a fresh vagrant box with the wikidata role, is that familiar to anyone?
[10:00:22] T187855: Undefined index in WikibaseClient $this->getRepositoryDefinitions()->getDatabaseNames()[''] - https://phabricator.wikimedia.org/T187855
[10:04:30] No :/ Can have a look unless someone beats me to it later today
[10:06:04] thx
[13:05:58] héllo, what is the database used by wikidata to answer sparql queries?
[13:06:31] amz3: BlazeGraph
[13:07:24] tx
[13:08:50] np
[14:01:25] "TypeError: 'str' does not support the buffer interface" arghghg
[14:28:15] Ahh, just removed 200 redirects from my watchlist with one line of jquery. :)
[15:31:36] :+1:
[15:32:03] 6845 pages to go :P
[16:19:10] I was very interested by the reproducibility of the wikidata ecosystem thread?
[16:21:19] it's more a wikibase question, but if I install wikibase do I have access to a sparql endpoint
[16:21:32] it's more a wikibase question, if I install wikibase do I have access to a sparql endpoint
[16:24:36] Wikibase doesn't really communicate with the sparql endpoint I think.
[16:25:38] so you dump the mysql database of wikibase using some format like .ttl and load it into blazegraph?
[16:25:47] every day?
[16:26:05] You can run your own local query service and run federated queries to Wikidata though. http://sulab.org/2017/07/integrating-wikidata-and-other-linked-data-sources-federated-sparql-queries/
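For illustration, here is a minimal federated query of the kind that post describes, run against your own local query service while the data itself is fetched live from the public Wikidata endpoint. The house cat example (wdt:P31 / wd:Q146) is just an arbitrary placeholder:

PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?cat ?catLabel
WHERE {
  # this block is evaluated remotely by the public Wikidata query service
  SERVICE <https://query.wikidata.org/sparql> {
    ?cat wdt:P31 wd:Q146 .                  # instance of: house cat
    ?cat rdfs:label ?catLabel .
    FILTER(LANG(?catLabel) = "en")
  }
}
LIMIT 10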
[16:28:28] Hm, it seems we have enough pages covering running federated queries from our query service, not really to ours.
[16:30:20] I am not looking into querying wikidata
[16:30:57] I am looking into reproducing the wikibase installation of wikidata, to both dump .ttl files and allow querying the stored/edited data with the wiki engine
[16:31:51] Ah, I think it's still tough to import triples in Wikibase instances but maybe Lucas_WMDE or addshore can explain more.
[16:36:22] addshore is the guy to talk to for an easy installation of all this :)
[16:36:54] amz3: how it works on Wikidata is – a separate service keeps Wikidata and the query service in sync
[16:37:21] it looks at RecentChanges all the time, gets the TTL data of affected entities (Special:EntityData), and issues appropriate SPARQL UPDATE commands
[16:37:45] (updates are usually synchronized within a handful of seconds)
[16:38:10] so sjoerddebruin is correct, Wikibase doesn't know about the query service – the query service just pulls all the data itself
[16:38:34] and AFAIK it's totally possible to run your own query service that also uses the data from Wikidata
[16:38:45] you just need to initialize it with a TTL dump
[16:38:53] and then afterwards run the updater and point it at www.wikidata.org
[16:45:45] oh ok
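To make that concrete: conceptually, every edit ends up as a SPARQL UPDATE against the triple store. The sketch below is only a simplified illustration of the idea (the real updater also rewrites statement, reference and value nodes, and batches changes); wd:Q64 and the label are arbitrary examples:

PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# drop the stale triples for the entity that was edited ...
DELETE WHERE { wd:Q64 ?p ?o };

# ... then insert the fresh data fetched as Turtle from Special:EntityData
INSERT DATA {
  wd:Q64 rdfs:label "Berlin"@en .
}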
[16:47:19] fwiw, I am working on a versioned triple store and I am looking at how wikidata does things around structured data, to see how it would be helpful in wikidata-like usecases.
[16:47:45] or see whether my database provides any value somehow
[16:49:14] afaiu, you stream the changes feed (aka. the write log) to another database after pre-loading it with initial data
[16:50:53] what my database does is version quads in a git-like history
[16:51:05] quads or triples
[16:52:54] My plan was to redirect wiki contributors to the raw version of the database, and do reads on an optimized version as per 'git checkout master'
[16:53:03] That's what happens in wikidata actually
[16:53:50] except wikidata doesn't support branches, there is a single branch
[17:13:57] legoktm: <3 for looking into the URL shortener more
[17:16:44] <3
[17:21:06] <3
[17:49:38] is https://www.mediawiki.org/wiki/Wikibase/DataModel#Quantities still up to date?
[17:50:42] I am wondering why the lower and upper bounds mentioned there are not explicitly exposed by the user interface and the API
[17:51:04] lowerBound and upperBound are optional now
[17:51:13] so I think that should be added
[17:51:22] apart from that, I think it's still up to date…
[17:51:25] okay, thanks
[17:51:45] (the UI doesn't let you add asymmetrical lower and upper bounds afaik, but I'm not sure if the API would prevent it)
[17:51:47] are they somehow inferred from the string entered by the user?
[17:51:59] 10±2 means amount 10, lower bound 8, upper bound 12
[17:52:09] just 10 means lower and upper bound are missing
[17:52:10] oh I see
[17:52:21] and "10.000" versus "10"?
[17:52:36] that's a difference in the amount, I think
[17:52:45] hm, though actually I'm not sure
[17:52:50] if there's any difference between those two
[17:53:13] I tried in the UI and it does display differently
[17:53:17] hm
[17:53:27] so I guess one must have the amount "10" and the other the amount "10.000"
[17:53:53] (we try very hard to treat the amount as just a string, so we don't run into precision problems)
[17:53:53] okay… so amount is really a string
[17:54:00] okay
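For reference, a quantity data value for an input like 10±2 with no unit is serialized roughly as the JSON below; amount, lowerBound and upperBound are all strings (the sign becomes explicit after normalization), and the two bounds are optional:

{
  "value": {
    "amount": "+10",
    "lowerBound": "+8",
    "upperBound": "+12",
    "unit": "1"
  },
  "type": "quantity"
}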
[17:54:12] I am looking for the regex that this string must match
[17:54:34] but as usual I am completely lost when it comes to grepping in the Wikibase code ^^
[17:55:31] I found it locally
[17:55:35] now I just have to find a link I can send to you :P
[17:55:43] it's not on codesearch.wmflabs.org apparently
[17:56:36] pintoch: https://github.com/DataValues/Number/blob/d0ec661c7b3fa49b5be39638d745256e2f8bc141/src/DataValues/DecimalValue.php#L43
[17:56:49] (I think that should be it)
[17:57:53] Lucas_WMDE: fabulous! you're a star. should I add it to https://www.mediawiki.org/wiki/Wikibase/DataModel#Quantities ?
[17:58:04] pintoch: actually, that can't be quite right
[17:58:13] because that regex requires a sign if I'm not mistaken
[17:58:18] and we definitely don't require that in the input
[17:58:33] ah yeah it must be the normalized representation after parsing
[17:58:37] so there must be a more relaxed parser somewhere
[17:58:39] yeah
[17:58:50] I guess that would still be appropriate to have in the data model documentation…
[17:59:43] normalization is here I think https://github.com/DataValues/Number/blob/d0ec661c7b3fa49b5be39638d745256e2f8bc141/src/ValueParsers/DecimalParser.php#L148-L175
[18:01:15] ouch, okay ^^
[18:01:58] the regex for the acceptable values should be somewhere in the JS code, because the UI seems to validate my input before allowing me to publish the statement
[18:05:16] ah, good point
[18:06:30] or https://github.com/DataValues/Number/blob/d0ec661c7b3fa49b5be39638d745256e2f8bc141/src/ValueParsers/QuantityParser.php#L145 seems to do it in the backend
[18:07:39] yeah I can't see anything like that in the JS code
[18:08:06] okay don't worry I'm just going to extract a regex out of that
[18:08:15] it's going to be funny ^^
[18:27:28] (actually it's not in the JS code because the validation is done by the backend even at that stage - a request is fired to https://www.wikidata.org/w/api.php?action=help&modules=wbparsevalue
[18:28:04] which I am definitely not going to do in my case… hmm)
[19:00:49] First newspaper article I ever saw that links to a Wikidata diff: https://derstandard.at/2000074556709/Datenpanne-Google-erklaerte-Thomas-Brezina-fuer-tot
[19:03:42] tbh I'm impressed with that amount of investigation… they could've just left it at "silly Google" :D
[19:04:16] also, wow, that name brings back some childhood memories ^^
[21:05:59] Hello
[22:05:14] good evening guys. I wonder if someone can give me an example of a query that puts the occupations of a certain item in one row, instead of listing them row-wise
[22:08:21] who knows that and might help?
[22:10:55] guest24: i'm not a SPARQL expert, but as far as I know, this can't really be done. The query defines the columns of the result set, just like in SQL.
[22:11:22] The columns can't be determined by the result, nor can they vary row-by-row
[22:11:46] otoh, in SQL, you can work (read: hack) around that using GROUP_CONCAT.
[22:11:54] maybe something similar is possible in SPARQL
[22:11:55] concat exists in sparql too afaik
[22:13:15] It would be good if I can combine the results using concat or something else. This would save a lot of post-editing
[22:13:24] "Animals owned by people holding any position" might be a good example to work with.
[22:13:50] guest24: https://stackoverflow.com/questions/18212697/aggregating-results-from-sparql-query
[22:15:54] Thanks Daniel and sjoerd. I will have a look at your suggestions and see how far I get
[22:33:28] I tried to concatenate the occupations but sparql says the variable is not an aggregate http://tinyurl.com/ybt2nmgg
[22:33:56] the "#" has to be deleted to see the error
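That error usually means a selected variable is neither aggregated nor listed in GROUP BY. A minimal sketch of a grouped query that concatenates occupations into one row, using wd:Q42 (Douglas Adams) as an arbitrary example item:

PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?person (GROUP_CONCAT(?occupationLabel; separator=", ") AS ?occupations)
WHERE {
  VALUES ?person { wd:Q42 }                 # an arbitrary example item
  ?person wdt:P106 ?occupation .            # P106 = occupation
  ?occupation rdfs:label ?occupationLabel .
  FILTER(LANG(?occupationLabel) = "en")
}
GROUP BY ?person                            # every non-aggregated selected variable must appear here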