[13:57:56] Can I say that the new Constraint report is confusing
[13:57:58] https://www.wikidata.org/wiki/Special:ConstraintReport/Q20643686
[14:05:48] is there an issue atm with Labs? re AuroList?
[14:07:58] sDrewth: that's a bit vague :) what do you find confusing about it?
[14:09:14] weird
[14:09:51] sDrewth: although bear in mind that it's not even finished yet (at the bottom of the list several things are still marked as "todo" :)), so I imagine the focus right now is on making it all work
[17:09:28] Hoi, could someone make a screenshot for me of the Wikidata info on https://www.wikidata.org/wiki/Wikidata:Statistics/Wikipedia
[17:09:48] it does not fit completely on my screen (laptop) and I want to blog about it....
[17:09:57] if you have not seen it, it is really nice
[17:11:27] I want to compare it with the stats by Magnus :)
[21:09:09] PROBLEM - check if wikidata.org dispatch lag is higher than 2 minutes on wikidata is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1421 bytes in 0.183 second response time
[21:12:36] ...what
[21:13:10] we can't keep up the rate of edits
[21:14:11] "HTTP CRITICAL: HTTP/1.1 200 OK"
[21:15:16] legoktm: artifact of the check. the tool to check http in icinga 1 / nagios is _really_ confusing
[21:15:40] we check if a regex matches on the content
[21:15:53] but yea it obviously still returns 200
[21:16:06] ah
[21:16:18] okay :P
[21:18:48] !admin can someone block https://www.wikidata.org/wiki/User:BotNinja for 24h? the dispatch lag is over 2min again. (not their fault that we can't sustain the edit rate currently.)
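[The "HTTP/1.1 200 OK ... pattern not found" alert discussed at 21:15 can look contradictory, so here is a minimal sketch of the logic described there. This is illustrative Python, not the actual Nagios/Icinga check_http plugin: the check matches a regex against the response body, so a request can succeed with HTTP 200 and still be reported CRITICAL when the expected pattern is absent.]

```python
import re

def http_regex_check(status: int, body: str, pattern: str) -> str:
    """Mimic a Nagios-style HTTP check with a body-regex condition.

    Hypothetical helper for illustration: even a 200 response is
    CRITICAL when the expected pattern does not match the body,
    which is why the alert text combines "200 OK" with
    "pattern not found".
    """
    if status != 200:
        return f"HTTP CRITICAL: status {status}"
    if re.search(pattern, body):
        return "HTTP OK: HTTP/1.1 200 OK - pattern found"
    return "HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found"
```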
[21:19:10] jzerebecki: McKay
[21:19:16] hmm
[21:19:28] I guess the backoff mechanism was never built
[21:20:46] you could build your bot to check the lag regularly and wait if it goes over 1min or something
[21:21:22] but yea there is no mechanism to throttle bots automatically at the api level server side
[21:22:10] jzerebecki: I spoke to him yesterday and he said he would when he gets time, and that he understands the reasons behind the block and is fine with it
[21:24:57] yea read that. seems he didn't yet.
[21:27:33] don't we have some mechanism for api clients to stop processing when normal db replication lag gets bad enough?
[21:27:48] Krenair: it is not replication lag
[21:27:58] I know this isn't replication lag.
[21:28:12] yes i think we have
[21:28:46] in this case it is more important to fix the scaling issues before implementing throttling
[21:29:08] first one is https://phabricator.wikimedia.org/T105592
[21:30:35] then make dispatch use more than 1 cpu and make it more distributed than a cron job on terbium
[21:32:01] created https://phabricator.wikimedia.org/T105654 for that
[21:33:06] * jzerebecki out of internet range for quite some hours
[21:40:08] RECOVERY - check if wikidata.org dispatch lag is higher than 2 minutes on wikidata is OK: HTTP OK: HTTP/1.1 200 OK - 1412 bytes in 0.137 second response time
[21:53:23] jzerebecki: If you really want to unbreak dispatching, we should move it onto an HHVM-based server (as it's CPU-bound and long-running)
[21:53:37] Also, fully switching to usage tracking would improve things
[21:54:13] If the raw edit rate is a problem now, we can just increase the number of runners slightly
[21:55:16] Also, due to batch size and how often we dispatch, we can never handle more than 600 edits/minute, even if we had the CPU time to do so
[22:13:55] jzerebecki: https://gerrit.wikimedia.org/r/#/c/224365/
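[The client-side backoff suggested at 21:20 ("check the lag regularly and wait if it goes over 1min") could be sketched roughly as below. This is a hypothetical illustration, not an existing bot framework API: `get_lag` stands in for whatever lag query a real client would make against the wiki's API (e.g. something in the spirit of MediaWiki's maxlag handling), and the threshold and poll interval are the illustrative values from the chat.]

```python
import time

def wait_for_lag(get_lag, threshold_seconds=60, poll_interval=10, sleep=time.sleep):
    """Block until get_lag() reports a lag below threshold_seconds.

    get_lag: hypothetical zero-argument callable returning the current
    lag in seconds (a real bot would fetch this from the wiki's API).
    sleep is injectable so the backoff loop can be tested without
    actually waiting.
    """
    while get_lag() > threshold_seconds:
        sleep(poll_interval)

# usage sketch (get_dispatch_lag is a hypothetical helper):
# wait_for_lag(get_dispatch_lag)
# ... then perform the next batch of edits ...
```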