[06:52:16] I found I couldn't run my Fish scripts (which run a Rust dump-parsing program that might benefit from more resources) on the grid engine. It doesn't seem able to find `fish` in the path, and when I created a local copy of the binary the grid engine couldn't run it because the necessary libpcre2 shared library was missing. Is this intentional (perhaps PCRE2 is considered too potentially dangerous for the grid engine), or is it just that nobody has cared to use Fish?
[08:30:11] Erutuon: I don't know for sure. I suggest you open a Phabricator task so we can discuss it in detail there
[08:30:33] Erutuon: you are using the grid and not the bastions, right?
[08:30:43] (to run your scripts)
[08:32:16] I think so; I was using `jsub` as described in Help:Toolforge/Grid.
[08:57:36] cool
[11:39:54] arturo: do we have any graphs regarding replica usage?
[11:40:32] Steinsplitter: I guess you mean wikireplica usage metrics, right? like server loads, number of connections, etc.?
[11:40:44] yes
[11:40:58] I'm sure we have, but I'm not sure where :-)
[11:50:35] Steinsplitter: check this
[11:50:36] https://grafana.wikimedia.org/d/000000278/mysql-aggregated?orgId=1&var-dc=eqiad%20prometheus%2Fops&var-group=labs&var-shard=multi&var-role=slave&from=1576734313860&to=1576755913860
[11:50:41] https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-dc=eqiad%20prometheus%2Fops&var-server=labsdb1009&var-port=9104&from=1576745148939&to=1576755948939
[11:50:56] https://grafana.wikimedia.org/d/000000377/host-overview?orgId=1&refresh=5m&var-server=labsdb1009&var-datasource=eqiad%20prometheus%2Fops&var-cluster=mysql&from=1576745428257&to=1576756228257
[11:53:36] arturo: thx :)
[13:40:51] On dewiki_p there are 32701(!) records in table revision where rev_page has no corresponding record in table page. Query was: `SELECT DISTINCT rev_page FROM revision LEFT JOIN page ON rev_page = page_id WHERE page_id IS NULL;`
[18:22:20] !log tools.replag Migrated service to new Kubernetes cluster
[18:22:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.replag/SAL
[18:23:37] bstorm_: ^ switching replag was smooth :) -- `webservice stop && /usr/bin/kubectl config use-context toolforge && webservice --backend=kubernetes php7.3 start`
[18:23:44] Yay!
[18:24:09] it took about 30 seconds for the ingress to attach and things to work again
[18:25:22] I think I will try moving stashbot tomorrow (easier to deal with breakage on light deploy days)
[18:27:14] Makes sense
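
On the missing libpcre2 problem from the first exchange: a quick way to confirm which shared libraries a locally copied binary cannot resolve on a grid node is `ldd`. A minimal sketch, assuming the binary was copied to `~/bin/fish` (the path is illustrative, not from the log):

```
# Run on a grid exec node (e.g. inside a job): print the binary's
# shared-library dependencies; unresolved ones are marked "not found".
ldd ~/bin/fish | grep 'not found'
```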
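For context on the `jsub` usage mentioned at [08:32:16], a minimal submission sketch; the job name and script are illustrative, and `-mem` requests extra memory, which the dump-parsing program might benefit from:

```
# Submit a script to the grid with a job name and a 2 GB memory request.
jsub -N dump-parse -mem 2g ./parse-dumps.sh
```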
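A possible follow-up to the dewiki_p orphaned-revision query quoted at [13:40:51]: counting the distinct orphaned rev_page values directly instead of listing them. This is a sketch using the same join, not a query from the log:

```sql
-- Count distinct rev_page values in revision that have no matching
-- row in page (same LEFT JOIN / IS NULL pattern as the quoted query).
SELECT COUNT(DISTINCT rev_page)
FROM revision
LEFT JOIN page ON rev_page = page_id
WHERE page_id IS NULL;
```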
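For reference, the replag migration one-liner from [18:23:37], broken out step by step (commands exactly as given in the log, with comments added):

```
# Stop the tool's current webservice.
webservice stop
# Switch kubectl to the new Toolforge Kubernetes cluster context.
/usr/bin/kubectl config use-context toolforge
# Start the webservice again on the Kubernetes backend with PHP 7.3.
webservice --backend=kubernetes php7.3 start
```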