[00:19:18] it looks like oauth2 is not too bad, so people just implement it client-side without a library
[00:20:01] yeah, the signature is the nasty part in OAuth 1, and OAuth 2 has no signatures
[01:16:13] actually there are some npm modules for it
[01:16:27] is there anyone who can approve me for OAuthConsumerRegistration?
[01:17:25] < tgr> oh, well. You'll have to ask a meta admin or steward to make you autoconfirmed, then
[01:17:47] ^ those folks
[01:18:30] ...well, stewards can do anything, but I guess this is more of a meta admin task
[01:18:41] where do I find them?
[01:20:38] Both usually hang out in #wikimedia-stewards
[01:20:47] ok thanks
[01:58:52] where is the oauth2 endpoint for english wikipedia? here it says to go to oauth2/authorize under the rest endpoint: https://www.mediawiki.org/wiki/OAuth/For_Developers#OAuth_2
[01:59:21] but apparently wikipedia's rest endpoint is not rest.php but https://en.wikipedia.org/api/rest_v1/
[02:27:22] ningu, there are now two "REST API"s: that one (RESTBase) and rest.php (MediaWiki)
[02:27:59] ok
[02:28:14] and which is the appropriate one to log in to wikipedia?
[02:28:57] https://en.wikipedia.org/w/rest.php/oauth2/authorize
[02:29:21] thanks
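(For reference, a minimal sketch of the OAuth 2 authorization-code flow against the rest.php endpoint discussed above. The client credentials are placeholders for values issued via OAuthConsumerRegistration, and the access_token path is an assumption that mirrors the authorize path from the log; only the authorize URL itself is confirmed in the conversation.)

```python
# Minimal sketch of the OAuth 2 authorization-code flow against MediaWiki's
# rest.php endpoints. No request signing is involved, which is why OAuth 2
# is bearable without a library, unlike OAuth 1.
import secrets
from urllib.parse import urlencode

import requests

AUTHORIZE_URL = "https://en.wikipedia.org/w/rest.php/oauth2/authorize"
TOKEN_URL = "https://en.wikipedia.org/w/rest.php/oauth2/access_token"  # assumed path
CLIENT_ID = "..."      # placeholder: issued when the consumer is approved
CLIENT_SECRET = "..."  # placeholder: issued when the consumer is approved

# Step 1: send the user to the authorize endpoint.
state = secrets.token_urlsafe(16)  # echoed back on redirect; verify it (CSRF)
print(AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "state": state,
}))

# Step 2: after the user approves, they are redirected back with ?code=...;
# exchange that code for a bearer token.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()  # contains "access_token" to send as a Bearer header
```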
[17:43:47] !log tools migrating b24e29d7-a468-4882-9652-9863c8acfb88 to cloudvirt1022
[17:43:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[19:01:31] codesearch seems to be down: "hound is still starting up"
[19:02:48] !log codesearch is down "Hound is still starting up, please wait a few minutes for the initial indexing to complete."
[19:02:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Codesearch/SAL
[19:13:52] !log tools.replag Restarting to pick up latest ingress configuration
[19:13:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.replag/SAL
[21:33:13] !log tools Removed tools-sgewebgrid-lighttpd-092{1,2,3,4,5,6,7,8} & tools-sgewebgrid-generic-090{3,4} from grid engine config (T244791)
[21:33:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:33:17] T244791: Scale up 2020 Kubernetes cluster for final migration of legacy cluster workloads - https://phabricator.wikimedia.org/T244791
[21:35:09] !log tools Deleted tools-sgewebgrid-lighttpd-092{1,2,3,4,5,6,7,8} & tools-sgewebgrid-generic-090{3,4} (T244791)
[21:35:12] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:43:17] andrewbogott, Petscan doesn't seem to have come back up after the maintenance today; https://tools.wmflabs.org/nagf/?project=petscan#h_overview_network-bytes shows no network packets getting in
[21:43:47] AntiComposite: Petscan is the name of a VM or a tool or a service or what?
[21:44:09] petscan.wmflabs.org / vm: petscan4
[21:44:15] https://tools.wmflabs.org/openstack-browser/project/petscan
[21:46:13] it looks to me like both VMs are up and running, I can ssh in
[21:46:23] so maybe the services don't start up automatically on boot or something?
[21:49:08] dunno, I don't know anything about how it's set up. Thanks for looking, I'll poke magnus (the maintainer) about it
[21:49:27] happen to know if it's nginx or apache? I can try restarting
[21:50:13] appears to be neither
[21:51:27] https://wikitech.wikimedia.org/wiki/Nova_Resource:Petscan has some instructions
[21:52:00] ugh. that's not a nice way to run a server
[21:52:07] nope
[21:52:10] screen + shell command
[21:52:16] * bd808 shudders
[21:52:34] fun fact, that's how the wikimedia ircd used to work too
[21:53:55] it gets better: https://bitbucket.org/magnusmanske/petscan/src/master/run.sh
[21:54:24] wow
[21:55:20] that's the most homegrown replacement for a systemd unit I have seen in a long time
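(For comparison, a sketch of the systemd service unit that a screen + run.sh setup like the one being described is standing in for. The user, paths, and binary name here are hypothetical, not taken from the actual petscan VM.)

```ini
# /etc/systemd/system/petscan.service -- hypothetical unit, illustrative only
[Unit]
Description=PetScan web service
After=network-online.target
Wants=network-online.target

[Service]
# Service account and paths below are placeholders, not from the real VM.
User=petscan
WorkingDirectory=/opt/petscan
ExecStart=/opt/petscan/petscan
# This is the part run.sh reimplements by hand: restart the process when it dies.
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable --now petscan.service`, something like this would also come back after a reboot, which a screen session does not (the likely reason the service never returned after maintenance).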
[21:57:53] !log tool.totoazero edited crons to use new specific py2 pwb.py endpoint since T213287
[21:57:53] Framawiki: Unknown project "tool.totoazero"
[21:57:54] T213287: Drop support of python 2.7 - https://phabricator.wikimedia.org/T213287
[21:57:59] !log tools.totoazero edited crons to use new specific py2 pwb.py endpoint since T213287
[21:58:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.totoazero/SAL
[22:00:48] Dear wmcs team, FYI (you're probably already aware): we, the pywikibot team, announced the drop of python2 support. The first py3-only patches were submitted a few days ago, resulting in complete breakage for scripts using the nightly or automatically updated version. If you see users complaining of encoding, login, or token issues using pwb, checking the version can be the key here :)
[22:01:48] * Reedy awards a "The World Burns" token
[22:03:01] I shall review the logs of my bot then
[22:03:14] although my scripts use the python3 prefix
[22:04:23] * Framawiki received 800 errors in sentry today for a bot that someone delegated to him to maintain
[22:35:40] Framawiki: thanks for the heads up. I'm going to guess that lots and lots of things are going to end up broken :/
[22:36:23] Framawiki: should I (or you?) send another ping to the cloud-announce list about this breaking change?
[22:56:29] do we have a guideline/rule of thumb on how much ram/cpu/iops is too much for toolforge and should be a VPS project?
[22:57:01] * chicocvenancio is messing with Analytics/Data_Lake/Edits/Mediawiki_history_dump
[22:57:19] py3 is for the best. but yeah it can be annoying
[23:11:18] chicocvenancio: we don't really have anything written, no. But processing the data lake dumps is likely to be a tight fit. The exec nodes for both grid engine and k8s are 4 core/8G, so there is a natural upper limit on a single process there (and default quota limits are lower than a full worker).
[23:11:51] IOPS is a major constraint in Toolforge today
[23:13:59] yeah, I started on PAWS knowing it wouldn't affect others, but it gets OOM-killed with the over-100MB files
[23:14:33] I'm afraid if I start this in toolforge it will either be killed or negatively affect other users
[23:14:47] bd808: are we sure it will be killed before affecting others?
[23:15:48] no, not sure it would be killed at all. Badly behaved tools can and do take down worker nodes and saturate NFS
[23:16:13] the data lake dataset is huge if I remember correctly
[23:17:13] there has been a project in Analytics for going on 2 years to expose Data Lake to Cloud VPS instances as a column store of some kind, but it keeps getting stuck on various things
[23:17:35] in all it's 0.5TB, but all files are under 2GB
[23:17:58] * chicocvenancio nods
[23:18:35] yeah, I'll create a ticket for a new VPS project for this. Don't want to bring down the shiny new k8s cluster
[23:38:13] !log tools Added tools-k8s-worker-22 to 2020 Kubernetes cluster (T244791)
[23:38:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[23:38:17] T244791: Scale up 2020 Kubernetes cluster for final migration of legacy cluster workloads - https://phabricator.wikimedia.org/T244791
[23:50:13] !log tools Added tools-k8s-worker-23 to 2020 Kubernetes cluster (T244791)
[23:50:16] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[23:50:16] T244791: Scale up 2020 Kubernetes cluster for final migration of legacy cluster workloads - https://phabricator.wikimedia.org/T244791
[23:53:38] !log tools Added tools-k8s-worker-24 to 2020 Kubernetes cluster (T244791)
[23:53:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
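(Looping back to the Mediawiki_history_dump discussion above: the OOM kills come from loading a whole file into memory at once, so a sketch of streaming the file row by row is shown below. It assumes bz2-compressed TSV input, which is how the history dumps are published, and a hypothetical local filename.)

```python
# Minimal sketch of streaming a Mediawiki_history_dump file instead of
# reading it whole; peak memory stays flat regardless of file size.
import bz2

# Hypothetical filename; the real dumps are split so each file stays under 2GB.
DUMP = "enwiki.all-time.tsv.bz2"

def count_rows(path: str) -> int:
    """Iterate the dump line by line; bz2.open decompresses lazily."""
    rows = 0
    with bz2.open(path, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            rows += 1  # replace with real per-row (tab-separated) processing
    return rows

print(count_rows(DUMP))
```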