[10:07:29] okay, the WDQS lag is going up again…
[10:08:17] how about enforcing that everyone uses maxlag=5 for their edits, as the policy suggests?
[10:09:06] QuickStatements only respects that with its background mode at the moment
[10:29:50] pintoch: afaik under 100 per minute never caused trouble so far
[10:31:11] well, as I understand it, it is the total editing speed that causes WDQS to lag (the sum of all individual editing speeds)
[10:31:29] does the size of the edit matter too?
[10:31:42] probably!
[10:33:09] but anyway, why do we leave it to tool developers and users to control their own editing speed?
[10:34:02] we enforced it on a case-by-case basis years ago
[10:34:37] so mediawiki itself refused edits if the user was going too fast?
[10:34:52] No, we blocked until the issue was fixed in the bot code.
[10:35:54] right… but that still requires admin action… capping the editing speed sounds like something that should be handled by mediawiki directly, no?
[10:36:08] It should be possible, yes.
[10:36:09] in other words, why is maxlag opt-in?
[10:37:17] It's a shame Adam is on holiday till the first of June...
[10:38:04] anyway, i see a lot of edits on the scientific articles... maybe that is the problem? Quite large items that are being edited by at least two users atm
[10:39:37] (FWIW there *is* a hard rate limit, I think https://phabricator.wikimedia.org/T184948 is still current)
[10:39:47] Correction, three users*
[10:41:01] https://phabricator.wikimedia.org/T199662
[10:42:00] But I don't think the current issue is due to not having maxlag...
[10:43:17] Lucas_WMDE: are you sure this is in place? because sjoerddebruin had to block XabatuBot, editing at 123 edits/min https://wikidata.wikiscan.org/?menu=live&filter=bot&sort=weight&date=24&list=users
[10:45:29] oh, no, apparently it was removed again
[10:45:32] sorry
[10:45:42] https://phabricator.wikimedia.org/T198396
[10:46:04] "We still do want the limit in place for normal users as they are not bound to maxlag and other rules."
[10:46:23] Which is 90 edits per minute Wikimedia-wide, right?
[10:46:28] though the default rate limit for all wikis (90/min) should still be in effect
[10:46:31] yeah
[10:48:18] Wikiscan might count edits slightly differently than MediaWiki does, but I don’t know if the difference is large enough to explain this
[10:48:39] two servers are clearly dropping atm btw https://grafana.wikimedia.org/d/000000489/wikidata-query-service?panelId=8&fullscreen&orgId=1&from=now-3h&to=now
[10:48:47] Lucas_WMDE: Wikiscan is just some easy way to see some kind of edit rate
[10:50:43] ah
[10:50:48] the “bot” group has the “noratelimit” right
[10:50:53] https://www.wikidata.org/wiki/Special:ListGroupRights#bot
[10:51:19] ok, that makes sense
[10:52:04] you can define rate limits as non-skippable
[10:52:16] but I think there’s no way currently to have a regular and a non-skippable rate limit
[10:52:21] otherwise that might be something to consider
[10:53:14] let's see how it goes once QS respects maxlag, I assume this should significantly reduce the editing volume
[10:54:26] is that going to happen?
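
For reference, this is roughly what honouring maxlag looks like from the client side: a minimal sketch in Python, assuming the standard MediaWiki API convention that a lagged server answers with error code "maxlag" and a Retry-After header. The commented-out edit call at the end is purely hypothetical.

    import time
    import requests

    API = "https://www.wikidata.org/w/api.php"

    def api_post(session, params, max_retries=10):
        """POST to the API with maxlag=5, sleeping and retrying whenever
        the servers report that they are too lagged."""
        params = dict(params, maxlag=5, format="json")
        for _ in range(max_retries):
            resp = session.post(API, data=params)
            data = resp.json()
            if data.get("error", {}).get("code") == "maxlag":
                # Back off for the time the server suggests (default 5 s), then retry.
                time.sleep(float(resp.headers.get("Retry-After", 5)))
                continue
            return data
        raise RuntimeError("servers stayed lagged, giving up")

    # Hypothetical usage (login and edit token handling omitted):
    # with requests.Session() as s:
    #     api_post(s, {"action": "wbcreateclaim", "entity": "Q42", ...})
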
[10:56:57] I don't see why Magnus would refuse to do that, he has been cooperative in doing just that for his GeneDBot
[10:57:49] maxlag doesn't include query service lag btw
[10:58:03] we do influence it with dispatch lag
[10:58:11] but i don't see any dispatch problems atm
[10:58:30] ah right, I thought I had read that somewhere
[10:58:53] https://phabricator.wikimedia.org/T194950
[10:59:02] also: https://phabricator.wikimedia.org/T194950#4394132
[11:03:10] And like I said, nobody is editing above 90 edits per minute atm. But three of them are doing edits to large items, and I think the query service indexes the whole item again if something is changed. http://wikidata.wikiscan.org/?menu=live&filter=all&sort=weight&date=6&list=users
[11:03:52] good point
[11:04:26] (it does, yes)
[11:04:52] concerning https://phabricator.wikimedia.org/T194950#4394132, the maxlag handling Magnus points to in that comment does not seem to be used by the current QS code
[11:09:29] apparently this was committed recently: https://phabricator.wikimedia.org/R2010:6eedbf852913c4907b4a700d0bcaa02001df233d
[11:17:06] "One thing to consider here to stop the situation getting too terrible would be to add the wdqs lag to the maxlag for wikidata.org" https://phabricator.wikimedia.org/T209201
[11:18:54] ah, sorry, I think I am wrong, it looks like it does respect maxlag in browser mode too
[11:21:11] "It's edit load PLUS data size PLUS query load that pushes the public servers over the edge where we see lags."
[11:24:47] hmm… hang on… I might be responsible for the recent WDQS surges then!
[11:25:23] because I have taken on this task https://www.wikidata.org/wiki/Wikidata:Requests_for_permissions/Bot/PintochBot_4
[11:26:09] Just 131 M so far
[11:26:19] (MB)
[11:26:35] yes the diffs are tiny, but it does touch a lot of big items, so although I am editing at around 30 edits/min, it is still a lot too much for WDQS
[11:26:52] *way too much
[11:26:53] sjoerddebruin: thanks for the link, I’ve replied to the question in the last comment
[11:26:57] perhaps that clarifies things
[11:27:01] I still think the three users editing scientific publications have more impact
[11:27:51] Lucas_WMDE: thanks :)
[11:28:17] could be (and IMHO Wikicite should stop flooding Wikidata) but in this case the start of today's lag coincides with me starting a new run of the bot
[11:28:35] woke up early? :) you can try pausing it
[11:28:48] (and that is some interesting name mention)
[11:28:56] yeah I stopped it a while ago
[11:30:54] okay, so given that the task involves edits to millions of items, if editing at 30 edits/min is already too much, it is really going to take ages :-D
[11:32:09] I would suggest asking Daniel Mietchen to pause and let's see what effect that has.
[11:32:29] (total size of 2.3 GB in the last 12 hours)
[11:33:58] how many items? 5 million at 30/minute would be just under 4 months
[11:34:56] I thought QuickStatements didn't allow multiple batches by the same user at the same time?
[11:35:16] I count three active batches of Daniel here: https://tools.wmflabs.org/quickstatements/#/batches
[11:38:15] Correction: 4.
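
Since maxlag does not cover query service lag at this point, a tool that wants to throttle itself on WDQS lag has to measure it on its own. One common way is to ask the query service when its copy of wikidata.org was last updated and compare that with the clock; a rough sketch, assuming the public endpoint at query.wikidata.org:

    from datetime import datetime, timezone
    import requests

    WDQS = "https://query.wikidata.org/sparql"

    # The triple store records the last update it has seen as
    # schema:dateModified on <http://www.wikidata.org>.
    QUERY = """
    SELECT ?updated WHERE {
      <http://www.wikidata.org> <http://schema.org/dateModified> ?updated .
    }
    """

    def wdqs_lag_seconds():
        resp = requests.get(WDQS, params={"query": QUERY, "format": "json"},
                            headers={"User-Agent": "wdqs-lag-sketch/0.1"})
        resp.raise_for_status()
        value = resp.json()["results"]["bindings"][0]["updated"]["value"]
        updated = datetime.fromisoformat(value.replace("Z", "+00:00"))
        return (datetime.now(timezone.utc) - updated).total_seconds()

    # A batch runner could pause itself while this stays above a threshold, e.g.
    # while wdqs_lag_seconds() > 300: time.sleep(60)

The suggestion later in this log that QS should notice the load and pause until the servers catch up would need a check of roughly this kind.
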
[11:58:26] query service lag doesn't seem to be increasing anymore
[12:06:24] not really decreasing yet either though
[12:23:16] sjoerddebruin: I don't think concurrent background QS batches are much of a problem given that all background batches are capped by QuickStatementsBot's own editing cap
[12:25:00] Lucas_WMDE: according to MisterSynergy the task impacts about 5 million items indeed…
[12:51:59] pintoch: most edits shouldn’t happen as QuickStatementsBot anymore, though
[12:52:16] even in background mode?
[12:52:34] yes :)
[12:52:45] I sent a patch to fix that a while ago
[12:52:48] (as lucaswerkmeister, not Lucas_WMDE)
[12:52:50] oh, I didn't realize that, that's great!
[12:55:07] I am still puzzled by the fact that the editing speed is so high then… It seems to me that respecting maxlag=5 induces a ~30 edits/min maximum speed for a given user
[12:55:44] Multiple batches...
[12:56:01] isn't maxlag immune to that though?
[12:56:25] (I have genuinely no idea)
[12:58:31] I see varying rates between 40 and 70 edits per minute on https://www.wikidata.org/w/index.php?title=Special:Contributions/Daniel_Mietchen&offset=&limit=500&target=Daniel+Mietchen
[13:00:04] ok, I'll leave him a message
[13:03:20] ping... someone here for WikiDataCon... about the deadline
[13:03:44] it says/said 12:00 UTC+2
[13:03:56] is that noon or midnight for the proposal deadline?
[13:17:26] maybe the number of QuickStatements batches running at the same time should be limited to 1?
[13:17:41] egonw: would be logical to me, some kind of queue
[13:18:04] at least per user... I see Amasuela has at least two running...
[13:18:40] well, maybe to just repeat what pintoch said... the rate limit should apply for the batches combined, at least...
[13:18:47] that is automatic with one batch at a time...
[13:19:13] could one of you request that from Magnus? I think I have annoyed him enough over the past few weeks :-°
[13:19:19] :)
[13:19:27] I met him last week (finally)
[13:19:36] he's actively working on improvements
[13:20:58] I'd be very happy to submit patches myself if his projects were maintained with external contributors in mind, but without docs and tests I am not very keen
[13:21:18] Magnus is off this week
[13:22:07] And it's pretty common to see new accounts doing half a million edits without any approval
[13:23:31] really?
[13:23:40] oh, I remember the pain with my first 1000 edits...
[13:24:19] Can someone take care of whatever nonsense this is https://www.wikidata.org/wiki/Special:Contributions/ChrisGillissen
[13:24:44] Praxidicae: I'm on it...
[13:24:50] Oh
[13:24:51] Hm
[13:24:54] I think that's one of our students...
[13:24:58] it's lipids
[13:25:01] They might be chemical compounds but it still looks strange
[13:25:06] Sorry lol
[13:25:15] Looked like gibberish until I found one that wasn’t
[13:25:16] yeah, the names are, umm, interesting :)
[13:25:23] no, it looks good
[13:25:39] I'll ping my PhD candidate, who should be reviewing this anyway :)
[13:25:39] Names did look like spambots though haha
[13:25:44] e.g. 250k http://wikidata.wikiscan.org/user/Amasuela
[13:25:49] yeah, they do :)
[13:27:21] Praxidicae: I'm meeting with the LIPID MAPS team in two weeks, and we're collaborating via WikiPathways with them on lipid knowledge
[13:27:32] there is a huge amount of literature about this new area of biochemistry...
[13:27:57] it's not until the past 10-15 years that we've been able to measure this class of compounds well...
[13:28:16] and that results in a lot of biological knowledge that was a black box in the 200 years before that
[13:29:37] I am not good at math or science so that’s probably why I thought it was gibberish :p
[13:29:56] well, I am a chemist, and think those names are weird...
[13:30:14] And we also have an LTA who puts nonsense into similar articles
[13:31:32] things are still not improving...
[13:31:51] Praxidicae: I'll see to it that the required info for chem compounds is added
[13:32:57] 👍
[13:33:37] * egonw is updating his scripts to check the chemistry of stuff in Wikidata anyway...
[13:34:02] (which in general is in pretty good shape...)
[13:34:14] (which is important to us, because we use the data in our research :)
[13:35:05] sjoerddebruin: PS, sorry we did not get to talk more at the recent WMNL meeting
[13:35:17] wanted to chat with you a bit more about WikiWetenschappers
[13:35:27] egonw: true, true. Coming to the Hackathon or?
[13:35:31] but I assume you talked with Hanno anyway
[13:35:36] the one in Prague?
[13:35:40] Yes and yes
[13:35:44] no, cannot make that :(
[13:35:50] Oh, that's a bummer
[13:36:08] we're writing up two abstracts for WikiDataCon, tho
[13:36:08] hmm, so the WDQS lag is still increasing, even if all bots are at 30 edits/min ea now
[13:36:11] *each
[13:36:19] one around Scholia, the other around chemicals/metabolites in Wikidata
[13:36:39] pintoch: one of Mietchen's batches is almost finished, wonder what the difference would be
[13:36:48] egonw: doesn't Wikidatacon have a theme this year?
[13:36:57] sjoerddebruin: ok, let's wait for that then
[13:37:05] checking
[13:37:50] it seems to be "the evolution of Wikidata"
[13:37:53] "The program will focus on the main topic (Wikidata and languages)"
[13:37:58] and that
[13:38:12] ... as highlighted track
[13:38:30] And I assume you also responded to the wikicite survey?
[13:39:48] pintoch: nvm, seems like new ones were started :(
[13:40:34] [survey] not sure... link?
[13:40:46] I fill out quite a number of surveys, but never remember which/when
[13:41:07] https://docs.google.com/forms/d/e/1FAIpQLSfnTbN1epnxNqnikhlkLW51Hy06kUrA-KF-7g1Ec57n6n8Bow/viewform
[13:41:07] [wikiwetenschapper] info Maastricht Uni is moving forward, btw
[13:41:35] sjoerddebruin: +1 (no, not sure I have filled this out... I missed it, it seems)
[13:42:12] Please do. Most economical imo would be to organize it around wikidatacon again
[13:47:08] let me email him then. If the situation degrades I think QS lets admins stop individual batches so we could try that
[13:47:24] Yes, it does.
[13:47:36] Also, someone stopped: https://grafana.wikimedia.org/d/000000170/wikidata-edits?refresh=1m&panelId=9&fullscreen&orgId=1&from=now-3h&to=now
[13:48:17] it was some bot editing labels
[13:48:21] should I pause my job?
[13:49:43] ok, I paused my quickstatements jobs...
[13:49:56] curious if that actually is visible in that graph
[13:50:14] https://grafana.wikimedia.org/d/000000170/wikidata-edits?refresh=1m&panelId=2&fullscreen&orgId=1&from=now-3h&to=now might show a drop
[13:50:58] One of Florentyna's batches finished, so that causes the other drop
[13:52:17] [survey completed]
[13:53:34] Lucas_WMDE: i do think that including the query service in the max lag is the correct next step, we mostly have this problem now when people are mass-editing items that don't have sitelinks aka no dispatching.
[13:53:53] * Lucas_WMDE nods
[13:54:48] egonw: seems like your jobs weren't too much
[13:55:36] good.
I didn't expect so, because it makes one edit per second or so
[13:55:49] unpaused again
[13:56:01] (and it beavers away with more Massbank Accession IDs :)
[13:56:08] pintoch: how much time do we give Daniel?
[13:57:09] he said 1.5h ago that he was at a conference and unlikely to be behind his keyboard
[13:57:15] but I just gave him a Skype ping too
[13:57:46] otherwise i will stop his batches, as that seems the most efficient thing atm
[13:58:11] he just replied...
[13:58:12] hang on
[13:59:01] Daniel: "all stopped"
[13:59:09] can confirm
[13:59:12] now let's see
[13:59:27] give him my blessings btw :)
[13:59:58] done :)
[14:01:09] Technical Advice IRC meeting starting in 60 minutes in channel #wikimedia-tech, hosts: @CFisch_WMDE - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[14:01:36] sjoerddebruin: it does not seem to have a lot of effect...
[14:01:55] be patient
[14:02:32] :)
[14:05:48] Mostly the problem with this is: you ask someone to pause and then someone else starts a job
[14:06:15] wdqs1006 seems to be responding well to it though
[14:06:50] I asked Renamerr to stop his jobs as well (714 MB in the last 6 hours)
[14:11:30] Someone making 45 items per minute is also not helping, I assume
[14:15:06] sjoerddebruin: https://phabricator.wikimedia.org/T221774
[14:15:43] Lucas_WMDE: good :)
[14:15:58] and wdqs1006 is increasing again
[14:16:21] I'm stopping Renamerr's batches in 10 minutes.
[14:51:08] Technical Advice IRC meeting starting in 10 minutes in channel #wikimedia-tech, hosts: @CFisch_WMDE - all questions welcome, more infos: https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
[14:51:29] sjoerddebruin: it's back in the red region
[14:51:49] egonw: yeah, asked another bunch of people to stop
[14:51:57] also, stop yours as well again...
[14:52:28] yes, did that already
[14:52:36] a few minutes ago
[14:52:57] Editing items larger than 1 MB is not very nice either i think... https://www.wikidata.org/wiki/Special:Contributions/ArthurPSmith
[14:53:14] 1.1 GB with just 8 edits per minute, so it's not just about edit rate
[14:53:42] wow, that's some serious editing...
[14:54:34] sjoerddebruin: ah, nice catch, that is probably the root cause of the issues
[14:54:41] i honestly don't know what more to do
[14:55:03] blocking QuickStatements with some abuse filter would be the most drastic but efficient step
[14:56:53] yes, that would be very drastic
[14:58:39] This looks promising https://grafana.wikimedia.org/d/000000489/wikidata-query-service?panelId=8&fullscreen&orgId=1&from=now-3h&to=now&refresh=5s
[15:01:32] my gut feeling is that the problem will only be resolved once Wikicite accepts that Wikidata is not the appropriate host wiki for the project, at least in the current state of affairs
[15:02:40] There are like 4 accounts at the same time adding descriptions to disambiguation pages...
[15:04:08] I don't think the current load is just wikicite
[15:04:39] A good step would be the batch queue in QuickStatements
[15:05:19] egonw: sure, but Wikicite takes up a big share of the load, and it would need to have a much higher editing rate if it were to reach a useful scale
[15:05:44] sjoerddebruin: Magnus confirms he is offline
[15:05:49] sjoerddebruin: one sec...
[15:05:54] Yeah, he said that in Telegram
[15:06:14] did he give a suggestion on how to proceed?
[15:06:45] No, it was unrelated and I don't want to bother him
[15:06:58] he made half a suggestion ... one sec...
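
The per-user volumes quoted around here (714 MB in the last 6 hours, 1.1 GB at just 8 edits per minute) are about how much item data a user's edits touch, which matters because the query service reprocesses the whole item for every edit. A hedged sketch of how such a number could be computed from the standard recent-changes API; Wikiscan's own figures may be calculated differently.

    from collections import Counter
    import requests

    API = "https://www.wikidata.org/w/api.php"

    def touched_bytes_by_user(limit=500):
        """Sum the post-edit page size (newlen) of recent main-namespace edits,
        grouped by user. newlen is the full item size after the edit, which is
        roughly what the query service has to re-derive RDF for."""
        resp = requests.get(API, params={
            "action": "query",
            "list": "recentchanges",
            "rcnamespace": 0,
            "rctype": "edit",
            "rcprop": "user|title|sizes",
            "rclimit": limit,
            "format": "json",
        }, headers={"User-Agent": "rc-volume-sketch/0.1"})
        resp.raise_for_status()
        totals = Counter()
        for change in resp.json()["query"]["recentchanges"]:
            totals[change.get("user", "?")] += change.get("newlen", 0)
        return totals.most_common(10)

    # for user, size in touched_bytes_by_user(): print(user, size)
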
[15:07:06] I asked for clarification
[15:09:37] (the number of edits is back in yellow now)
[15:09:44] yeah, blocked two accounts
[15:09:57] I give them a notice and 20-30 min to respond.
[15:10:31] btw, I did not see those 1MB edits for ArthurPSmith
[15:10:59] @pintoch, one important thing will be to figure out how to get Wikidata to just scale better...
[15:11:07] egonw: it's not about the diff size, it's about the overall size of the items edited
[15:11:08] 10 edits per second, even if botted, is not that much
[15:11:24] egonw: the edit itself is not large, but the items themselves are larger than 1 MB
[15:11:38] After every change, the query service has to process the whole item again
[15:11:47] sjoerddebruin: got it
[15:12:12] so, that's 10MB/s to digest
[15:12:21] still not that much
[15:12:28] (well, clearly too much :)
[15:12:36] but given all possible computation :)
[15:15:54] oh well…
[15:16:58] one thing I have been wondering about: whether English titles need to be copied to the labels of many other languages, but kept in English
[15:17:12] that doesn't help the size of the item
[15:17:24] No, it doesn't and is practically useless.
[15:17:42] agreed, there are better solutions for that
[15:18:49] so, each query server needs to update its RDF separately?
[15:19:07] ie. they all have a full copy of all data?
[15:19:37] (I see that 1006 is recovering a bit)
[15:20:36] sjoerddebruin: what could be nice is if QS would notice the load in the system and then just pause until the servers have caught up
[15:20:51] egonw: https://phabricator.wikimedia.org/T221774
[15:21:13] of course :)
[15:25:08] 1004 and 1005 are still going up
[15:25:53] this is lovely for the query service https://www.wikidata.org/w/index.php?title=Q1550459&offset=&limit=250&action=history
[15:26:51] indeed, given the above insight that's a killer
[15:27:23] sjoerddebruin: Magnus did not reply to my request for clarification 20 minutes ago (so, really holiday :), but...
[15:27:39] but he wrote "maybe have someone else restart the tool/Webserver?"
[15:27:43] I think that was a hint
[15:28:39] Related to what would that be?
[15:28:55] The not being able to stop batches? It's no problem, I can block accounts. But new people keep showing up...
[15:30:44] well, I wrote something along the lines of: "wikidata people are going nuts fighting the active use of QuickStatements right now"
[15:30:48] it was a reply to that
[15:31:05] I think the reboot will basically empty the list of future batches
[15:33:54] there are almost none left https://grafana.wikimedia.org/d/000000170/wikidata-edits?refresh=1m&panelId=2&fullscreen&orgId=1&from=now-3h&to=now
[15:35:09] +1
[15:35:27] btw, I also realize that all items will get larger and larger
[15:35:32] that's what we want, right?
[15:35:37] more info for items
[15:35:46] https://en.wikipedia.org/wiki/Second_law_of_thermodynamics
[15:36:01] so far, there was a tendency for preferring more statements over more items
[15:36:14] at least how I perceived the unwritten rules
[15:36:32] I know https://grafana.wikimedia.org/d/000000175/wikidata-datamodel-statements?refresh=30m&panelId=4&fullscreen&orgId=1&from=now-1y&to=now
[15:37:26] pintoch: I'm bringing that up, bc the chemical items are at least as large as those of articles
[15:37:46] (but there are only some 200 thousand chemicals at this moment, not a few million)
[15:37:55] well, 19M
[15:38:06] Haven't taken action on ProteinBoxBot yet, but that is also editing 80k-byte-sized items
[15:38:33] oh, I can ping Andra
[15:38:52] It would probably help
[15:38:59] hang on
[15:39:03] Have never seen it as extreme as today, wonder why...
[15:39:04] let's see if he is online
[15:39:11] well, one or two months ago...
[15:39:16] then it was this bad too...
[15:39:23] was during a demo I gave
[15:39:26] oh, wait...
[15:39:36] I remember... I was demoing Scholia in Germany
[15:39:50] 3 weeks ago
[15:39:52] February?
[15:39:54] on a Monday evening
[15:40:00] or Tuesday...
[15:40:04] wait, I'll check :)
[15:40:25] April 2, Tue evening
[15:40:43] then the outdatedness of the WDQS was also on the order of > 1h
[15:40:51] for a couple of hours
[15:41:02] https://grafana.wikimedia.org/d/000000489/wikidata-query-service?panelId=8&fullscreen&orgId=1&from=now%2Fy&to=now
[15:41:21] 1006 is on zero now btw
[15:41:30] ah, nice...
[15:41:43] so, that was the worst I had seen, but Feb is "off the scale"
[15:42:05] btw, it's always the same servers...
[15:42:07] why is that?
[15:42:20] and always in the same order
[15:42:29] i don't know...
[15:42:55] egonw: sure! I don't know about chemicals or genes - what people do in this field might or might not be sustainable for Wikidata
[15:43:08] don't know...
[15:43:17] but I am more familiar with the bibliographical domain
[15:43:17] they are still smaller than an item of someone famous
[15:43:25] are those outside the scope of Wikidata too?
[15:43:50] and in that domain, I know that the database will remain basically useless until we can afford to import large databases such as Crossref
[15:44:40] well, the number of items does not seem to be the problem, from what I understood this afternoon
[15:45:31] would be interesting to see the statement counts for particular types (article, human, gene, chemical, etc)
[15:45:33] well, publication items tend to be fairly large too I would say
[15:45:55] pintoch: I'll see if I can do some testing of this tomorrow :)
[15:46:29] but it's weird that 3 of the six servers (and 2 in particular) struggle, but the other three hardly have issues...
[15:46:31] in any case, the current situation seems detrimental for Wikicite (no credibility as an infrastructure project) and Wikidata (degraded service for all users)
[15:48:14] I'm not convinced yet that this is not a simple technical issue
[15:48:30] I mean, we're not talking Facebook levels of data here
[15:49:11] meanwhile, Andra seems offline right now :(
[15:51:41] maybe the problem with Wikicite is that very few of the involved users have any experience with running an infrastructure like the one the project aims to build, and little understanding of what it takes to store and index that sort of data
[15:52:17] (I clearly do not have that experience at least)
[15:53:14] oh, not sure about that...
but a problem could be that there is no one from the core hardware team involved...
[15:53:29] ok, I figured out why the three servers are special...
[15:53:45] the ones that "break down" each time are the "old" cluster
[15:54:02] the three 100x servers keep having trouble at intervals...
[15:54:10] the three 200x servers keep doing fine
[15:55:04] so, I would say: phase out cluster "1" and replace it with a system like cluster "2"
[15:55:18] egonw: which "the core hardware team"?
[15:55:27] (-the)
[15:55:33] :)
[15:55:53] no clue... but I refer to the team that operates the servers...
[15:56:10] maybe one person
[15:56:18] One more user stopped on advice
[15:56:40] Would be nice if Andra responded, then we have everything above 300 MB-ish
[15:57:54] just sent him another ping
[15:59:25] *sigh* https://www.wikidata.org/w/index.php?title=Special:Contributions/Charles_Matthews&offset=&limit=500&target=Charles+Matthews
[16:00:32] can temporary batches not be stopped by admins?
[16:00:45] was the other one also with the *old* QS?
[16:00:47] It never worked for me in quickstatements
[16:00:59] this one is with version 1.3... QS 2 does a great job at combining things into one edit
[16:01:28] ahem "great job"
[16:01:45] it still does two edits when adding a statement with a reference
[16:02:27] really? oh, then I withdraw my comment
[16:03:06] pintoch: good point... did not think of that
[16:11:25] well, it seems the number of QS edits is under control... but 1004 and 1005 are not recovering much yet
[16:13:27] 1005 wants to go for it at some point
[16:13:40] place your bets!
[16:15:27] 10m
[16:15:56] 20m
[16:16:56] https://www.youtube.com/watch?v=QOtuX0jL85Y
[16:22:55] pintoch: I have been pondering the idea of a dedicated WDQS for Scholia, but that won't help if the issue is really the speed at which they can update
[16:24:23] sjoerddebruin: you're turning on jobs again, right... just to get it past the 10m??
[16:25:41] Hmm, not sure what to do with this. https://www.wikidata.org/wiki/User_talk:Charles_Matthews#Query_service_lag
[16:26:59] not straightforward indicates it's possible
[16:27:29] don't know him yet, but he's from the ContentMine team...
[16:27:39] good to see they are merging in data, tho
[16:27:44] I still see him editing but also both servers are going down now :O
[16:30:07] nice, at around 10m ;)
[16:42:07] it's interesting to see that many lag peaks happen during the night and peak at midnight
[16:43:13] more recently more typically at 22:00
[16:43:20] (CEST)
[16:44:10] maybe the solution is to halve the throughput of QS during the evening, say from 18:00 to 22:00
[16:44:58] I wonder if this correlates with increased use of WDQS in the evenings...
[17:08:26] sjoerddebruin: thx
[17:11:20] egonw: @dedicated WDQS: if scholia generates a significant proportion of the requests made to WDQS, that could be useful
[17:12:39] because by easing the query load on the official WDQS, it would make it easier for it to stay in sync
[17:13:44] but that is going to be quite expensive for you to run and it will not solve the problem in the long term, I think
[17:18:51] time for pizza
[17:19:18] should we unblock people?
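
On the statement counts per type of item that were brought up a bit earlier (15:45): the query service exposes each item's statement count as wikibase:statements, so classes can be compared directly. A sketch follows; the class QIDs used (Q13442814 scholarly article, Q11173 chemical compound, Q7187 gene, Q5 human) are the usual ones but worth double-checking, and on the large classes the aggregation may hit the public endpoint's timeout and need sampling.

    import requests

    WDQS = "https://query.wikidata.org/sparql"

    # wikibase:statements is the per-item statement count in the WDQS data model.
    QUERY = """
    SELECT ?class (COUNT(?item) AS ?items) (AVG(?st) AS ?avgStatements) WHERE {
      VALUES ?class { wd:Q13442814 wd:Q11173 wd:Q7187 wd:Q5 }
      ?item wdt:P31 ?class ;
            wikibase:statements ?st .
    }
    GROUP BY ?class
    """

    resp = requests.get(WDQS, params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "statement-stats-sketch/0.1"})
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["class"]["value"], row["items"]["value"], row["avgStatements"]["value"])
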
[17:19:36] feel free to
[17:23:05] I have done it for ArthurPSmith
[17:23:58] looks like his edits are not resuming, great
[17:33:02] well, something helped the edit rate go back up in the red zone
[17:33:34] but WDQS lag still well within parameters
[17:34:51] I feel like we haven't fully understood the issue here :-/
[17:36:01] note that it's always above the red https://grafana.wikimedia.org/d/000000170/wikidata-edits?refresh=1m&panelId=9&fullscreen&orgId=1&from=now-90d&to=now
[17:42:26] haha... cry wolf :)
[18:32:14] Can I do queries of items in a wikipedia category?
[18:32:26] As in “all items in this category that have this property”
[18:33:53] From what I can see items don't have this property, so it's not added automatically.
[18:35:14] e.g. “list of karaoke games that have an open source license”
[18:37:43] https://petscan.wmflabs.org should be able to do that
[18:38:14] Be sure to flick "Use wiki" under "Other sources" to Wikidata, then you can use "Uses items/props" on the Wikidata tab
[18:41:29] Cool, thanks
[18:50:25] https://www.irccloud.com/pastebin/PlzDucS1/
[18:50:38] can someone shed light on how to get values out of wikidata's graphql responses? most fields are fine if objects or strings, but sometimes a 'Point' or some other type will just return `{
[18:50:38] "mainsnak": "[object Object]"
[18:50:38] }`
[18:51:19] it's a string that looks like there was supposed to be an object there, not sure how to juice it :/
[19:35:18] hm, i had encountered that before
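
For the category question above ("all items in this category that have this property"): besides PetScan, the same thing can be done in SPARQL by pulling the category members from English Wikipedia through the query service's mwapi service and then filtering on the property. A sketch, assuming P275 (copyright license) as the property and a made-up category name:

    import requests

    WDQS = "https://query.wikidata.org/sparql"

    # Pull members of an enwiki category via the mwapi service, then keep only
    # the items that have a copyright license (P275) statement.
    QUERY = """
    SELECT ?item ?itemLabel ?license ?licenseLabel WHERE {
      SERVICE wikibase:mwapi {
        bd:serviceParam wikibase:endpoint "en.wikipedia.org" ;
                        wikibase:api "Generator" ;
                        mwapi:generator "categorymembers" ;
                        mwapi:gcmtitle "Category:Karaoke video games" ;
                        mwapi:gcmlimit "max" .
        ?item wikibase:apiOutputItem mwapi:item .
      }
      ?item wdt:P275 ?license .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    """

    resp = requests.get(WDQS, params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "category-query-sketch/0.1"})
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["itemLabel"]["value"], "-", row.get("licenseLabel", {}).get("value", ""))
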