[00:18:48] !log tools.lexeme-forms deployed 242c25810b (clarify Spanish verbs)
[00:18:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lexeme-forms/SAL
[00:54:18] !log devtools - deleting instance codesearch-stretch, creating codesearch-buster
[00:54:20] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[00:59:53] !log devtools deleting instance codesearch-buster
[00:59:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Devtools/SAL
[02:15:06] !log toolsbeta rebooting toolsbeta-sgecron-01 and toolsbeta-test-k8s-etcd-3 to get NFS unstuck
[02:15:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[02:17:20] !log catgraph rebooting fridolin and login-test to resolve hangs associated with an old NFS failure
[02:17:21] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Catgraph/SAL
[02:20:55] !log math rebooting mathosphere to resolve hangs associated with an old NFS failure
[02:20:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Math/SAL
[02:22:36] !log wikidata-dev rebooting wikidata-lexeme-lua to resolve hangs associated with an old NFS failure
[02:22:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[02:23:59] !log tools rebooting tools-paws-worker-1006 to resolve hangs associated with an old NFS failure
[02:24:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[02:24:26] !log mwoffliner rebooting wp1 to resolve hangs associated with an old NFS failure
[02:24:26] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Mwoffliner/SAL
[02:25:10] !log snuggle rebooting snuggle-enwiki-01 to resolve hangs associated with an old NFS failure
[02:25:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Snuggle/SAL
[04:26:21] !log admin upgrading designate on cloudservices1003/1004
[04:26:23] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[05:21:31] bd808: ack. thanks
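
The reboots logged between 02:15 and 02:25 were all about instances wedged on a stale NFS mount. A minimal shell sketch of how such hangs can be spotted before deciding to reboot; the host list and mount path here are illustrative assumptions, not the ones actually checked above:

    # A stat that cannot finish within a few seconds usually means the
    # NFS client on that instance is stuck and a reboot is the easy fix.
    for host in fridolin.catgraph.eqiad.wmflabs mathosphere.math.eqiad.wmflabs; do
        if ! ssh "$host" timeout 5 stat -t /data/project >/dev/null 2>&1; then
            echo "$host: NFS mount appears hung, candidate for reboot"
        fi
    done
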
[06:05:12] !log quarry applied T242355-ver2.patch T242355
[06:05:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Quarry/SAL
[09:12:37] !log tools.checker Switched over to use new k8s cluster
[09:12:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.checker/SAL
[09:13:45] !log tools.contentcontributor Switched over to use new k8s cluster
[09:13:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.contentcontributor/SAL
[09:14:47] !log tools.coverage Switched over to use new k8s cluster
[09:14:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.coverage/SAL
[09:16:01] !log tools.coverme Switched over to use new k8s cluster
[09:16:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.coverme/SAL
[09:17:17] !log tools.dump-torrents Switched over to use new k8s cluster
[09:17:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.dump-torrents/SAL
[09:19:01] !log tools.extreg-wos Switched over to use new k8s cluster
[09:19:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.extreg-wos/SAL
[09:19:41] !log tools.fab-proxy Switched over to use new k8s cluster
[09:19:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.fab-proxy/SAL
[09:20:13] hmm
[09:20:16] https://tools.wmflabs.org/extreg-wos works now
[09:20:20] (no trailing /
[09:20:21] )
[09:20:27] but of course, that means the CSS etc are missing
[09:21:08] !log tools.extreg-wos Switched over to use new k8s cluster
[09:21:09] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.extreg-wos/SAL
[09:21:46] meh
[09:21:49] that should be old cluster
[09:23:54] !log tools.extreg-wos Switched over to use new k8s cluster
[09:23:55] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.extreg-wos/SAL
[09:26:16] !log tools.ipchanges Switched over to use new k8s cluster
[09:26:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.ipchanges/SAL
[09:27:06] !log tools.mwpackages Switched over to use new k8s cluster
[09:27:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.mwpackages/SAL
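
A quick way to see what changed for extreg-wos here: on the old cluster the bare /extreg-wos path answered with a redirect to the trailing-slash URL, while the new cluster serves the page directly, so relative CSS/JS links resolve against the wrong base path and 404 (the same trailing-slash behaviour comes up again below as T242719). A rough check, nothing tool-specific:

    # Compare status line and Location header with and without the slash.
    curl -sI https://tools.wmflabs.org/extreg-wos | grep -iE '^(HTTP|Location)'
    curl -sI https://tools.wmflabs.org/extreg-wos/ | grep -iE '^(HTTP|Location)'
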
[12:02:50] !log admin icinga downtime cloud* labs* hosts for 2 hours for openstack upgrades T241347
[12:02:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[12:02:52] T241347: upgrade cloud-vps openstack to Openstack version 'Pike' - https://phabricator.wikimedia.org/T241347
[12:04:49] !log admin icinga downtime toolchecker for 2 hours for openstack upgrades T241347
[12:04:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:58:27] duesen_: should be all better. Things working for you now?
[14:00:20] yes, thank you!
[14:03:34] !log admin icinga downtime all cloudvirts for another 2h for fixing some icinga checks
[14:03:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:13:33] andrewbogott: arturo any idea when upgrades will be done?
[14:13:45] Zppix: should be done now!
[14:13:49] do you see any issue?
[14:14:01] arturo: my bot that's on k8s seems to not have auto restarted
[14:14:16] which one?
[14:14:24] arturo: tools.zppixbot
[14:17:07] Zppix: try restarting it in a few moments and report what you see. NFS may have been failing a bit due to our upgrade operations
[14:18:26] arturo: I'll restart it at 8:22 UTC-5 and I'll let you know (if it doesn't restart itself by then)
[14:19:43] ok, we can restart it before, if you want
[14:20:03] arturo: let me try to restart it now from my end
[14:20:10] ok
[14:23:18] !log tools.zppixbot delete pod to attempt force restart due to openstack upgrade (T241347)
[14:23:20] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.zppixbot/SAL
[14:23:20] T241347: upgrade cloud-vps openstack to Openstack version 'Pike' - https://phabricator.wikimedia.org/T241347
[14:25:21] arturo: before I deleted the pod prior to my !log it did seem to be due to an NFS problem as you mentioned (just in case it may help in any way for future upgrades); it's now going thru requirement checks, I'll let you know when it is fully up
[14:29:32] arturo: and all seems to be fine now... thanks!
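
The forced restart logged at 14:23:18 works because the tool's Kubernetes deployment recreates any pod that gets deleted. A rough sketch of the idea, run as the tool account (the pod name below is a placeholder, not the real one):

    kubectl get pods                 # find the wedged pod for the tool
    kubectl delete pod <pod-name>    # the deployment schedules a fresh replica
    kubectl get pods -w              # watch the replacement come up
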
[15:29:17] !log tools failed the gridengine master back to the master server from the shadow
[15:29:19] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[16:54:35] duesen_: https://tools.wmflabs.org/openstack-browser/project/codesearch -- those folks are somehow involved
[19:01:01] hello! i'm looking for someone familiar with PAWS admin and setup
[19:01:12] do you all know who I should look for?
[19:04:20] I think chico, but he's /away atm
[19:06:24] ok thanks, Reedy do you know where I can find where paws-public is configured?
[19:06:25] e.g.
[19:06:25] https://paws-public.wmflabs.org/paws-public/
[19:06:41] i think it is in tool labs
[19:06:47] (i have never used tool labs before)
[19:07:40] Depends what you mean by configured...
[19:07:55] i'm mostly trying to figure out what and how this is generated
[19:08:03] it allows paws to share readonly notebooks with others
[19:08:13] which is a requested feature for our internal private jupyter install
[19:09:14] ottomata: basically it is a view of the local filesystem that is shared by the PAWS worker nodes. It is not a Toolforge tool, but the instance running it probably does live in the tools Cloud VPS project.
[19:09:33] hmm
[19:09:54] but something is rendering the .ipynb files into html....
[19:09:55] e.g.
[19:09:56] https://paws-public.wmflabs.org/paws/user/Ottomata/notebooks/EventStreams%20Demo.ipynb
[19:10:08] there is a Public Link button on the real notebook
[19:10:15] trying to find out how that all works... so i can steal it :)
[19:10:19] I'm looking in Puppet to see if I can find the code, but it may be out in Yuvi's github
[19:10:36] all the docs i've found about paws say 'not puppetized'
[19:10:48] i've found yuvi's paws repo
[19:10:49] not much in there though
[19:10:57] aside from some nginx conf to pass to tools
[19:11:01] https://github.com/yuvipanda/paws/blob/master/edge/nginx.conf
[19:11:05] or hmm
[19:11:10] no to paws-public...
[19:11:53] bd808: is there a paws cloud vps project?
[19:12:16] ottomata: there is, but it is mostly not used anymore... although I think the ingress is still there
[19:12:27] could you add me to it so I can investigate?
[19:14:00] it looks like paws-proxy-02.paws.eqiad.wmflabs is the ingress -- https://tools.wmflabs.org/openstack-browser/server/paws-proxy-02.paws.eqiad.wmflabs -- and no Puppet
[19:14:21] aye
[19:14:25] no access for me there
[19:17:07] ok. things are getting slightly more clear. The nginx on paws-proxy-02.paws.eqiad.wmflabs reverse proxies to https://tools.wmflabs.org/paws-public/
[19:17:30] no work? https://tools.wmflabs.org/copyvios/
[19:17:38] which is running some custom Kubernetes container
[19:18:27] hispano76: maintainer information for that tool is at https://tools.wmflabs.org/admin/tool/copyvios
[19:19:18] ok
[19:19:20] ottomata: the image that does the rendering is docker-registry.tools.wmflabs.org/paws-public-renderer:latest
[19:19:26] ah
[19:19:39] bd808: hi - I guess https://phabricator.wikimedia.org/T216020#5798303 is expected ?
[19:19:42] I'm going to guess that is built from Yuvi's github repo somewhere
[19:19:55] oof
[19:19:56] yeah
[19:20:31] hauskatze: have you activated a virtual env?
[19:21:22] the default build.py in his repo points at https://quay.io/repository/wikimedia-paws-prod/paws-hub
[19:21:32] https://quay.io/organization/wikimedia-paws-prod
[19:22:28] ah ha,....
[19:22:28] https://github.com/yuvipanda/nbpawspublic
[19:22:29] maybe
[19:22:33] bd808: no idea what's that so I guess no :)
[19:22:46] iirc our venv is for Py2
[19:22:49] indeed this is where Public Link comes from
[19:22:50] https://github.com/yuvipanda/nbpawspublic/blob/master/nbpawspublic/static/main.js
[19:22:51] interesting.
[19:23:37] ottomata: https://github.com/yuvipanda/paws/tree/master/images/renderer
[19:24:00] ah haaa
[19:24:34] interesting.
[19:24:42] thanks bd808 i'm on my way to understanding now
[19:25:08] thanks bd808 :)
[19:25:37] 2/3 helped. That's not bad.
[19:27:34] hauskatze: I am trying to remember how that bot is run. It is a grid engine job correct?
[19:27:51] bd808: yup, using jstart
[19:27:57] that project is a mess
[19:28:04] I wish I had time to update it a bit
[19:28:15] the working code is on /data/project/stewardbots/StewardBot
[19:28:20] not on public_html
[19:28:22] fwiw
[19:29:14] hauskatze: I see that there is a Python 2.7 virtual environment at $HOME/venv. So `./venv/bin/pip freeze` may be what you want/need
[19:29:42] Oh, but there is another venv at $HOME/StewardBot/venv
[19:29:44] bd808: aha. So we'll need to update our venv first for py3 I guess?
[19:29:51] oh, geez
[19:30:14] `jstart -N stewardbot -mem 2G /data/project/stewardbots/venv/bin/python2.7 /data/project/stewardbots/StewardBot/StewardBot.py`
[19:30:32] so the $HOME/venv one looks like it may be the one in use?
[19:30:59] I remember mostly that StewardBot needs to be massively rewritten :)
[19:32:00] 1.4K lines of organic python2 irc bot code!
[19:32:41] I should've studied Python instead of law, I could fix it myself :)
[19:33:48] xqt is writing patches for the py3 migration
[19:36:01] hauskatze: I left some info on the ticket.
[19:36:10] thanks bd808
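
For the StewardBot py3 migration being discussed, the usual pattern would be to rebuild the virtual environment against Python 3 and point the grid job at the new interpreter. A rough sketch only, assuming a requirements.txt exists and that the $HOME/venv one really is the venv in use (both assumptions):

    # Hypothetical steps, run as the stewardbots tool account.
    cd /data/project/stewardbots
    rm -rf venv
    python3 -m venv venv                                   # rebuild for Python 3
    ./venv/bin/pip install -r StewardBot/requirements.txt  # assumed file name
    # only once the bot code itself runs under python3, of course:
    jstop stewardbot                                       # stop the old grid job
    jstart -N stewardbot -mem 2G /data/project/stewardbots/venv/bin/python3 /data/project/stewardbots/StewardBot/StewardBot.py
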
[20:33:06] flickr2commons, OAbot, and reFill are all failing
[20:34:02] #wikipedia-en-help sent me to #wikipedia-tech and they sent me here
[20:34:37] after I posted on BitBucket and Phabricator and a few talk pages. and also mentioned it on Discord
[20:34:59] and apparently Earwig is wonky too, i hear
[20:35:53] Cymbopetalum: https://tools.wmflabs.org/admin/tool/flickr2commons https://tools.wmflabs.org/admin/tool/oabot https://tools.wmflabs.org/admin/tool/refill
[20:36:08] those pages list the maintainers of those tools
[20:36:30] They should check their messages then
[20:36:59] They are volunteers with lives
[20:37:19] They work on these projects because they want to, not because they have to
[20:37:27] I'm sure they will resolve it whenever they have time
[20:37:43] i know, but you don't think it's odd that all these tools are failing?
[20:37:48] Cymbopetalum: flickr2commons seems to need the trailing slash to work -- https://tools.wmflabs.org/flickr2commons/#/
[20:37:48] No I don't
[20:38:15] When upgrades are done stuff can break
[20:38:25] the trailing slash thing is T242719
[20:38:25] T242719: https://tools.wmflabs.org/{toolname} no longer redirects to https://tools.wmflabs.org/{toolname}/ on new k8s cluster - https://phabricator.wikimedia.org/T242719
[20:38:30] @bd808 that will help the authorization issues?
[20:39:27] Cymbopetalum: I have no idea. All you said so far is "are all failing"
[20:39:38] so I was poking around a tiny bit
[20:40:10] Flickr2Commons keeps repeating "ATTENTION! This tool uses OAuth to upload files to Commons. Make sure you authorise first!" no matter how many times you authorize
[20:40:43] reFill says "Submitting your task... Internal Server Error " when you submit
[20:41:16] Uploaded file: https://uploads.kiwiirc.com/files/ee42cc3f478a72e274c9ffc4ef63450d/pasted.txt
[20:41:28] OAbot says this
[20:42:07] the issue seems to be in the magnustools oauthuploader -- https://tools.wmflabs.org/magnustools/oauth_uploader.php?action=checkauth&botmode=1 gives me a 502 as well. Oddly enough https://tools.wmflabs.org/magnustools/oauth_uploader.php?action=checkauth works without issues
[20:43:02] I saw that Magnus moved a bunch of his tools to the new Kubernetes grid in the last 24 hours. Errors are likely related somehow.
[20:43:19] Note to self: don't move to new cluster
[20:43:41] no, don't move clusters and walk away without testing ;)
[20:43:55] That's the fun part though heh
[20:44:22] I moved 12 of my tools 3 weeks ago and they are running better than ever (mostly because the new cluster is not running much yet)
[20:44:43] last time I counted 109 tools have moved
[20:47:32] How do you move clusters?
[20:47:43] !log tools.magnustools Lots of $HOME/error.log messages about "file_get_contents(http://en.wikipedia.org/w/index.php?action=render&title=Template:CasTemplate): failed to open stream
[20:47:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.magnustools/SAL
[20:48:08] Zppix: https://wikitech.wikimedia.org/wiki/News/2020_Kubernetes_cluster_migration
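
The migration page linked just above boils down to stopping the webservice on the old cluster and starting it on the new one. The commands below are a paraphrase from memory rather than the authoritative steps, and the runtime type string (python3.7) is an assumption that varies per tool -- follow the linked page, not this sketch:

    become <toolname>                                 # switch to the tool account
    webservice stop                                   # stop on the old cluster
    webservice --backend=kubernetes python3.7 start   # assumed type string
    kubectl get pods                                  # confirm the new pod is Running
    # ...and then actually test the tool's URLs instead of walking away ;)
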
[20:48:34] I wonder why that tool is trying to load a deleted page from enwiki?
[20:52:01] apparently that's all that https://tools.wmflabs.org/magnustools/cas.php does so it has been broken since 2016-11-07 when that template was deleted on enwiki
[20:53:00] I wonder if it should use https://en.wikipedia.org/wiki/Template:CAS now instead?
[20:54:49] Cymbopetalum: https://bitbucket.org/magnusmanske/flickr2commons/issues if you want to report bugs about Flickr2Commons and let the maintainer know
[22:28:23] hi folks!
[22:28:48] fundraising tech would like to spin up a staging server where we can test features of CentralNotice before they're merged into master
[22:29:01] i.e. before they're available on the beta cluster
[22:29:42] So we just need a mediawiki instance and a few extensions, and the ability to get in and git pull any arbitrary patch in review from gerrit
[22:30:09] Would that be possible on toolforge, or is a cloud VPS project more appropriate?
[22:32:02] This looks like it would be nice and quick to set up: https://wikitech.wikimedia.org/wiki/Help:MediaWiki-Vagrant_in_Cloud_VPS
[22:40:34] Is there an equally easy way to get a wiki running under Toolforge?
[22:43:17] ejegg: maybe it's possible in deployment-prep (existing cloud VPS project aka. beta)
[22:44:12] mutante: oho, so a new instance associated with beta, but where we can pull arbitrary code to one particular repo without it being overwritten by the auto-gerrit-sync?
[22:44:41] that might be just the ticket - we'd get all the rest of the infrastructure updated automatically
[22:45:47] I'll ask over in -releng then
[22:46:05] ejegg: that part i am not sure about. but in general i _think_ it's "one or the other VPS project" and maybe "setup your own local puppetmaster" per https://wikitech.wikimedia.org/wiki/Help:Standalone_puppetmaster. best is to create a ticket and just request it and see the replies
[22:50:13] OK, thanks mutante!
[22:55:49] !log tools.stashbot Rotated conduit API token
[22:55:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stashbot/SAL
[22:57:47] ejegg: I think mutante is giving you good advice. :) It is technically possible to run MediaWiki on Toolforge, but performance will be horrible. If you can squeeze into deployment-prep that would be nice. If not y'all can apply for your own Cloud VPS project -- https://phabricator.wikimedia.org/project/view/2875/
[22:59:08] thanks bd808! James_F has warned me that be dragons in deployment-prep, so I think I'll apply for a project and see how it goes
[22:59:17] *thar be dragons
[22:59:44] Less dragons and more unclear ownership in my experience, but *nod*
[23:02:37] !log tools.phab-ban Rotated conduit API token
[23:02:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.phab-ban/SAL
[23:19:55] request is in!
[23:51:30] !log tools.os-deprecation Rotated conduit API token
[23:51:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.os-deprecation/SAL
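
For the CentralNotice staging setup discussed above (22:28-22:50), a minimal sketch of what the MediaWiki-Vagrant route could look like once a Cloud VPS instance exists; the checkout path, the centralnotice role name, and the change number are assumptions, and git-review is just the usual way to pull an arbitrary Gerrit patch into a working copy:

    # On the new instance, after following Help:MediaWiki-Vagrant_in_Cloud_VPS.
    cd /srv/mediawiki-vagrant                 # assumed checkout location
    vagrant roles list | grep -i central      # see which CentralNotice-ish roles exist
    vagrant roles enable centralnotice        # assumed role name
    vagrant provision
    cd mediawiki/extensions/CentralNotice
    git review -d 123456                      # hypothetical Gerrit change number
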