[12:29:28] hey my bot on wikidata replies this: 2013-08-17 14:27:10,395 - pywiki - VERBOSE - Pausing due to database lag: Waiting for 10.64.16.15: 1808 seconds lagged
[12:30:04] and then pauses 120 seconds. is there currently a problem? i see other bots still editing
[12:33:37] is there an overview where i can see what the lag of a server or an api request is?
[12:36:34] there is an api request you can make:
[12:36:51] http://www.wikidata.org/w/api.php?action=query&meta=siteinfo&siprop=dbrepllag&sishowalldb=
[12:37:36] ah thx, hm -.- there seems to be one server lagging
[12:37:43] http://noc.wikimedia.org/dbtree/
[12:37:53] this is nice if you want to look at all the db clusters
[12:38:07] click on any master (root of a tree) to see what's hosted there
[12:38:11] yep
[12:39:46] that one might be running the abstracts phase of the dumps (or maybe not)...
[12:50:02] hm
[12:50:56] Sk1d: Now it's the right channel. apergos is probably the right person to take a peek
[12:51:16] https://ganglia.wikimedia.org/latest/?r=day&cs=&ce=&tab=ch&vn=&hreg[]=db1026 indicates that the lag has been rising for the last 6 hours. I don't know if this is normal...
[12:52:24] yeah that stage has been running about 6 hours
[12:52:42] probably finish in another 6
[12:54:50] ok
[12:54:59] but you should be able to connect to another slave
[12:59:18] hm, I don't know what pywikibot is doing exactly in the background. I tried to restart the script but this did not help
[13:00:02] well it will be using the api, you won't connect directly
[13:00:29] but for editing you'll be contacting the master (well, the mw server will on your behalf)
[13:00:41] and for reads it should get you a non-lagged box
[13:14:36] apergos: hm, the api does block reads when maxlag=... is passed
[13:14:44] e.g. https://www.wikidata.org/w/api.php?action=query&prop=categoryinfo&titles=Category:Foo|Category:Bar&maxlag=5&format=jsonfm
[13:14:59] so should pwb only add maxlag=5 for write requests?
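The `dbrepllag` query mentioned above returns per-replica lag as JSON. A minimal sketch of parsing that reply to find the most-lagged host (the `SAMPLE` payload is hypothetical, shaped like the real `siprop=dbrepllag&sishowalldb=` response; the `most_lagged` helper is ours, not part of any library):

```python
import json

# Hypothetical sample of the JSON returned by
# api.php?action=query&meta=siteinfo&siprop=dbrepllag&sishowalldb=
SAMPLE = json.loads("""
{"query": {"dbrepllag": [
    {"host": "db1018", "lag": 0},
    {"host": "db1026", "lag": 1808}
]}}
""")

def most_lagged(repllag_result):
    """Return (host, lag_seconds) of the most-lagged replica in a dbrepllag reply."""
    entries = repllag_result["query"]["dbrepllag"]
    return max(((e["host"], e["lag"]) for e in entries), key=lambda t: t[1])
```

On a real wiki you would fetch the URL with `format=json` and feed the decoded body to the same helper; a lag of 1808 seconds on one host, as in the bot's log above, would stand out immediately.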
[13:14:59] yes it should but on a particular db
[13:15:52] no, the bot should check maxlag at all times
[13:15:56] apergos: are you sure? I thought maxlag always referred to the most-lagged slave
[13:16:09] mm, lessee
[13:16:20] that would also make more sense, at least when writing
[13:17:27] the docs don't say and I'm too lazy to look at the code just now
[13:18:06] I don't see why it would poll all the dbs for lag before every read or write though
[13:18:10] seems unlikely
[13:19:31] apergos: it makes sense for writing, though. You'd always connect to the master DB for that, which per definition has replag 0
[13:19:42] so it has to keep track of the slave state anyway
[13:22:34] https://github.com/wikimedia/mediawiki-core/blob/master/includes/db/LoadBalancer.php#L1007
[13:24:32] yeah I was just looking at it (after saying I was too lazy)
[13:24:37] I am too lazy but oh well
[13:25:11] :D
[13:25:28] (I have a local checkout so we're talking *really* lazy)
[13:25:39] then again, six database slaves for enwiki also sounds wrong: https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=dbrepllag&sishowalldb=
[13:25:50] but I'm not an ops guy :p
[13:27:13] six is plenty
[13:27:24] most of what goes on never touches the db, right
[13:27:31] reads reads reads, and all from cache
[13:28:19] There were some toolserver issues yesterday... wonder if they're fixed.
[13:28:30] no clue
[13:30:17] apergos: ah, of course.
[13:37:11] any reason db1026 is lagged?
[13:37:24] Apparently SQL server S3 and/or S7 is having issues.
[13:38:39] I would guess db1026 is running the abstracts dumps for wikidata (but I have not checked it)
[14:04:28] * Elsie caches apergos.
[14:05:26] you're going to get served an outdated version then
[14:05:35] ;-)
[14:17:05] all my bots have stopped editing on s5 because of high replication lag of more than an hour on slave db1026. Is this expected?
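The pause-and-retry behaviour the bot logs above ("Pausing due to database lag ... 120 seconds") can be sketched as a small client-side loop: pass `maxlag` with the request and, if the server answers with a `maxlag` error, wait and try again. `request_fn` is a hypothetical transport callable, not a pywikibot API; the fixed 120-second default merely mimics the pause seen in the log:

```python
import time

def call_with_maxlag(request_fn, maxlag=5, retries=3, wait=120):
    """Call a MediaWiki-style API via request_fn(maxlag=...), retrying
    when the server rejects the request with a maxlag error.
    request_fn is any callable returning a parsed JSON response dict."""
    for _ in range(retries):
        resp = request_fn(maxlag=maxlag)
        if resp.get("error", {}).get("code") != "maxlag":
            return resp  # either success or a non-lag error; caller decides
        time.sleep(wait)  # server is lagged: pause before retrying
    raise RuntimeError("server still lagged after %d attempts" % retries)
```

This is why restarting the script didn't help: as long as some replica stays above the `maxlag` threshold, every request (read or write, since the parameter is honoured for both) keeps hitting the same pause.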
[14:18:59] Merlissimo: 15:38 < apergos> I would guess db1026 is running the abstracts dumps for wikidata (but I have not checked it)
[14:19:25] see http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-tech/20130817.txt for more scrollback
[14:25:34] valhallasw: or http://bots.wmflabs.org/~wm-bot/html/%23wikimedia-tech/20130817.htm for a slightly prettified version.
[17:35:39] Hello. Has anything changed in the last few days regarding login via the API, or is anything known to be broken?
[17:49:16] krd, there were no deployments this week
[17:51:06] all my scripts stopped working at around 09:20 GMT+2 today.
[17:51:33] login fails in a way I couldn't determine yet.
[17:58:50] krd: are you 100% sure it's the api?
[18:00:13] no, I've only been looking at it for half an hour.
[18:00:39] But it seems like I keep getting NeedToken back, even when I send one along.
[18:00:53] oh, that's strange o_O
[18:01:31] but then that would affect other people too. very strange.
[18:01:49] yeah, nobody has complained about the api today
[18:04:41] http://nl.wikipedia.org/wiki/Wikipedia:Arbitragecommissie/Zaken/Standpunt_gebruikers_tegenover_Wolf_Lambert
[18:05:07] whoops, wrong chat
[18:07:56] does anyone know if uploading from url is allowed on en.wiki (and how I could have checked for myself)?
[18:10:08] Go to special upload and put in a url where the filename goes? :p
[18:25:11] Just if anyone cares: my problem mentioned above is because of high lag.
[21:11:19] csteipp: i'm not following the CSS issue
[21:11:37] CSP lets you set policies on javascript and css
[21:11:57] The only sane way to prevent XSS with CSP is to disallow both inline
[21:12:10] (both css and js)
[21:12:28] ahhh. so it would still be saved, just not used at render time
[21:12:43] yep
[21:13:46] csteipp: does that include
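The repeated NeedToken responses krd describes come from the two-step `action=login` handshake used by MediaWiki at the time: the first POST returns `NeedToken` plus a token, and the second POST must resend the credentials with `lgtoken` (and the session cookie from the first reply). A minimal sketch, where `post_fn` is a hypothetical transport that keeps cookies between calls:

```python
def api_login(post_fn, user, password):
    """Two-step MediaWiki action=login handshake (as used in 2013).
    post_fn(params) -> parsed JSON response; must preserve session cookies."""
    first = post_fn({"action": "login", "lgname": user, "lgpassword": password})
    if first["login"]["result"] != "NeedToken":
        return first["login"]["result"]
    token = first["login"]["token"]
    second = post_fn({"action": "login", "lgname": user,
                      "lgpassword": password, "lgtoken": token})
    return second["login"]["result"]  # "Success" on a working login
```

If the session cookie from the first request is not sent back with the second, the server cannot match the token to a session and answers `NeedToken` again, producing exactly the loop described above; in krd's case, though, the failures turned out to be a side effect of the high replication lag.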