[05:42:19] any admins around? we need a bot blocked
[05:45:13] https://www.wikidata.org/w/index.php?title=Special:Contributions/ValterVBot&offset=20170602053909&limit=500&target=ValterVBot
[05:46:18] alternatively, if the bot owner is reachable, they could slow it down to a reasonable number of edits per minute
[05:50:41] MisterSynergy: are you around?
[05:59:50] apergos here I am
[06:00:02] ah thanks
[06:00:54] what's wrong with the bot?
[06:01:11] MisterSynergy: we were actually seeing impact on our servers from the speed of the bot's editing; would you be able to temp block it or reach the owner?
[06:01:16] too many edits
[06:01:19] too short a time
[06:01:51] if you want, we are in #wikimedia-operations
[06:02:09] I'm now there
[06:38:57] apergos: Hi, seems to be solved. I looked at the source at https://github.com/ValterVB/VBot/blob/master/VBot/WikimediaAPI.cs and it doesn't seem to comply with https://www.mediawiki.org/wiki/Manual:Maxlag_parameter
[06:39:31] hey multichill, long time no chat!
[06:40:02] yeah, we were just talking about the lag check in the ops channel
[06:40:10] looking now
[06:40:39] apergos: Bots which don't implement that should be blocked right away
[06:41:05] Any sensible framework has it implemented; it's usually the homebrew bots that cause havoc here
[06:41:48] yeah, I see it's not using any external MediaWiki library
[06:41:58] and it sure doesn't have a lag check in there itself
[06:42:11] apergos: Sure been a long time. I would have expected you at the hackathon!
[06:42:58] yeah, I have issues with travel and visas and such
[06:43:08] so I can't go to those things >_<
[06:43:18] https://www.wikidata.org/wiki/User_talk:JCrespo_%28WMF%29#ValterVBot_problem
[06:43:23] what do you think about that?
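For reference, the maxlag check discussed above works roughly like this: the client appends maxlag=N to every write request, and if database replication lag exceeds N seconds the API returns an error with code "maxlag" (plus a Retry-After header) instead of performing the edit, and the client is expected to wait and retry. A minimal Python sketch under those assumptions; the function names and the `do_request` callback are illustrative, not taken from VBot or any real framework:

```python
import json
import time


def is_maxlag_error(response_body: str) -> bool:
    """Return True if an API response body is a maxlag error.

    Per Manual:Maxlag_parameter, when the requested maxlag is exceeded
    the API returns {"error": {"code": "maxlag", ...}} instead of
    performing the requested action.
    """
    try:
        data = json.loads(response_body)
    except ValueError:
        return False
    return data.get("error", {}).get("code") == "maxlag"


def edit_with_maxlag(do_request, max_retries=5, default_wait=5):
    """Run one API edit, backing off while the servers report lag.

    do_request() performs a single request (with maxlag set) and must
    return (body, retry_after), where retry_after is the Retry-After
    header value in seconds, or None if the header was absent.
    This wrapper is a hypothetical sketch of the expected client logic.
    """
    for _attempt in range(max_retries):
        body, retry_after = do_request()
        if not is_maxlag_error(body):
            return body
        # Respect the server's recommended minimum wait before retrying.
        wait = default_wait if retry_after is None else int(retry_after)
        time.sleep(wait)
    raise RuntimeError("gave up: replication lag persisted")
```

The key point from the discussion: a bot without this check keeps hammering the API at full speed exactly when the databases are least able to absorb it.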
[06:43:44] Bummer
[06:43:54] ah I see you are already on it, heh
[06:44:47] I remember Retry-After: a recommended minimum number of seconds that the client should wait before retrying was giving issues
[06:45:06] yep
[06:45:16] but also: running one instance, not multiple
[06:45:45] So if you open https://www.mediawiki.org/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1 with the debug console open, Retry-After is 5
[06:45:58] Maybe Retry-After should be tweaked a bit more to have bots back off longer?
[06:46:24] that would be a question for our DBAs
[06:47:32] what is that "Parallel.For" loop, I wonder
[06:48:20] Last commit 2 years ago, I think we're looking at old code
[06:48:34] anyways, 20 edits a second with lag tells me something was off
[06:48:42] oh hm, the old "where is the source code" problem
[06:48:45] gah
[06:54:29] He'll figure it out, back to work
[06:55:30] see ya
[09:33:59] I'm munging the dump
[09:34:15] or more like the computer is doing it and me, the carbon-based lifeform, is doing the dishes
[09:39:41] you need a dishwasher :P
[09:57:11] and the dishes are done
[09:57:28] next: bagging recyclable aluminium cans
[09:58:17] it has munched 460000 entries
[09:58:27] how many are there in total?
[10:08:38] in the entire dump? millions
[10:33:29] it's at 7.3 million now
[10:49:18] wikidata.org front page says there are 26 million data items
[12:36:12] gehel: any idea if the thermal paste on wdqs2 was indeed defective in any way? composition, structure, amount, anything? ^^
[13:26:55] Alphos: I haven't seen any issue since the thermal paste has been applied, so that's an indication.
[13:27:21] Alphos: I haven't seen the thermal paste myself (I don't have physical access to those servers)...
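The "one instance, not multiple" and Parallel.For points above boil down to the same rule: edits should be serial and throttled, not fired concurrently. A minimal sketch of such a throttle, assuming a fixed minimum interval between edits; the class name, parameters, and injectable clock are made up for illustration and are not VBot code (which, per the discussion, had no throttle at all):

```python
import time


class EditThrottle:
    """Enforce a minimum interval between consecutive edits.

    A hypothetical serial throttle: call wait() before each edit and it
    sleeps just long enough that edits are at least min_interval seconds
    apart. clock and sleep are injectable so the logic can be tested
    without real waiting.
    """

    def __init__(self, min_interval=5.0, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self.clock = clock
        self.sleep = sleep
        self._last = None  # timestamp of the previous edit, if any

    def wait(self):
        now = self.clock()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

Combined with a maxlag check, this keeps a bot at a predictable edits-per-minute rate instead of the 20 edits a second seen here; a Parallel.For over edit requests defeats both mechanisms, since every parallel worker decides to retry or proceed on its own.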
[13:29:49] ok ^^
[13:30:34] then again, there hasn't been any issue since even before it was applied, after you took it out of rotation and let it catch up on its lag, if I understood correctly
[13:33:53] Hi everyone, anyone else getting errors with their instance of the Wikidata Query Service? 13:26:02.803 [main] INFO o.w.q.r.t.change.RecentChangesPoller - Got 51 changes, from Q27963200@492262703@20170529230002|522741229 to Q13485581@492262802@20170529230049|522741328
[13:33:53] Exception in thread "main" org.wikidata.query.rdf.tool.exception.ContainedException: Non-200 response from triple store: HttpContentResponse[HTTP/1.1 500 Server Error
[18:41:05] Hello.
[18:41:12] I'm running './loadRestAPI.sh -n wdq -d `pwd`/data/split'
[18:41:48] and I'm watching the "statements loaded" count in the shell running Blazegraph
[18:42:10] if there are 26 million entities, how many statements will there be?
[18:46:10] jubo2: 151 million total statements according to https://tools.wmflabs.org/wikidata-todo/stats.php
[18:52:08] hello WikidataFacts!
[18:52:17] hi! :)
[18:53:41] WikidataFacts: ok thanks..
[18:53:48] it is now at 50 mln
[19:10:35] I got some sort of system freeze
[19:11:04] I'll leave it to run some night
[22:53:04] PROBLEM - High lag on wdqs2001 is CRITICAL: CRITICAL: 66.67% of data above the critical threshold [1800.0]
[23:07:04] RECOVERY - High lag on wdqs2001 is OK: OK: Less than 30.00% above the threshold [600.0]
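A back-of-envelope answer to the "how many statements" question above, using only the two figures quoted in the log (26 million items from the wikidata.org front page, 151 million statements from wikidata-todo/stats.php): roughly 5.8 statements per item, so a load that has reached 50 million statements is about a third done. Sketched out, with those quoted numbers as the only inputs:

```python
# Figures quoted in the conversation above (mid-2017 values).
TOTAL_STATEMENTS = 151_000_000  # per tools.wmflabs.org/wikidata-todo/stats.php
TOTAL_ITEMS = 26_000_000        # per the wikidata.org front page


def load_progress(loaded, total=TOTAL_STATEMENTS):
    """Fraction of the statement load completed so far."""
    return loaded / total


# Rough rule of thumb for sizing a local Blazegraph load:
statements_per_item = TOTAL_STATEMENTS / TOTAL_ITEMS
```

So "it is now at 50 mln" corresponds to load_progress(50_000_000), about 33% of the full dump.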