[01:47:23] 10Labs, 10Labs-Infrastructure, 10Operations, 10Patch-For-Review, 10Wikimedia-Incident: Some labs instances IP have multiple PTR entries in DNS - https://phabricator.wikimedia.org/T115194#3338989 (10Andrew) I (finally) wrote a script to hunt and kill leaked dns records: https://gerrit.wikimedia.org/r/#/c... [02:01:28] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-eqiad, 10Patch-For-Review: rack/setup/install labvirt101[5-8] - https://phabricator.wikimedia.org/T165531#3338992 (10Andrew) > not sure if h/w raid is needed Yes please! Most of the existing labvirts have two spinny drives which are paired in a raid 1... [06:45:33] PROBLEM - Puppet errors on tools-exec-1415 is CRITICAL: CRITICAL: 33.33% of data above the critical threshold [0.0] [07:25:32] RECOVERY - Puppet errors on tools-exec-1415 is OK: OK: Less than 1.00% above the threshold [0.0] [07:40:11] madhuvishy: yeah I told him. it worked but he wanted to connect with MySQL Workbench [07:56:43] 10Labs, 10Operations, 10ops-eqiad, 10Patch-For-Review: setup promethium in eqiad in support of T95185 - https://phabricator.wikimedia.org/T120262#3339277 (10Muehlenhoff) [07:56:47] 10Labs, 10Operations, 10Patch-For-Review: (don't) decom promethium - https://phabricator.wikimedia.org/T164395#3339275 (10Muehlenhoff) 05stalled>03Resolved We can simply close the ticket. [08:41:28] 10Labs, 10DBA, 10Patch-For-Review: Add and sanitize s2, s4, s5, s6 and s7 to sanitarium2 and new labsdb hosts - https://phabricator.wikimedia.org/T153743#3339390 (10Marostegui) The import finished during the weekend on the labs hosts and I have configured replication on db1095 for s5, and it is now flowing....
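The duplicate-PTR cleanup mentioned at 01:47 boils down to grouping reverse-DNS records by IP and flagging any IP that has more than one name. A minimal sketch of that idea (the actual gerrit change isn't quoted here; the `(ip, name)` record format and the sample hostnames are assumptions for illustration):

```python
from collections import defaultdict

def find_duplicate_ptrs(records):
    """Group PTR records by IP and return the IPs that map to more than one name.

    `records` is an iterable of (ip, ptr_name) pairs, an assumed format."""
    by_ip = defaultdict(set)
    for ip, name in records:
        by_ip[ip].add(name)
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}
```

A script like the one Andrew describes would then delete all but one of the returned names per IP.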
[08:42:05] 10Labs, 10DBA, 10Patch-For-Review: Add and sanitize s2, s4, s5, s6 and s7 to sanitarium2 and new labsdb hosts - https://phabricator.wikimedia.org/T153743#3339393 (10Marostegui) [14:08:30] 10Labs, 10Operations, 10cloud-services-team (Kanban): Initial OpenStack Neutron PoC deployment in Labtest - https://phabricator.wikimedia.org/T153099#3340630 (10Andrew) [14:08:34] 10Labs, 10Operations, 10Patch-For-Review: Disable keystone admin_token usage - https://phabricator.wikimedia.org/T165211#3340628 (10Andrew) 05Open>03Resolved a:03Andrew [15:05:24] 10Labs, 10Labs-Infrastructure, 10Epic: Nova-network to Neutron migration - https://phabricator.wikimedia.org/T167293#3340904 (10bd808) [15:09:26] 10Labs: Discontinue use of admin_token for keystone - https://phabricator.wikimedia.org/T167295#3340920 (10bd808) [15:09:30] 10Labs, 10Operations, 10Patch-For-Review: Disable keystone admin_token usage - https://phabricator.wikimedia.org/T165211#3340918 (10bd808) [15:17:29] 10Labs, 10Labs-Infrastructure, 10Operations, 10netops, 10ops-codfw: codfw: labtestpuppetmaster2001 switch port configuration - https://phabricator.wikimedia.org/T167321#3340972 (10ayounsi) a:03RobH [15:37:08] does anyone know if we should shut down the merlbot2 crons? I got a big pile of emails about one of the maintainers having a full mailbox at their ISP [15:37:48] no clue honestly what the last status of the merlbot situation was [15:37:49] I left them running a year ago when we had to shut down merlbot but now I'm wondering if merlbot2 is actually working either [15:38:48] does labs have scap? [15:38:54] I seem to have 2000 emails marked as spam by gmail from the full mailbox just over the weekend :/ [15:39:04] Zppix: not globally, no [15:39:06] Zppix: not in the way you are asking [15:39:20] Zppix: it can be set up in a project. let me find my notes on that [15:39:26] how does one get scap for a project?
[15:39:45] https://wikitech.wikimedia.org/wiki/User:BryanDavis/Scap3_in_a_Labs_project [15:40:21] wikimedia-ai wants it in their labs ores instances [15:40:55] Zppix: it is managed by the project owners and is not a managed service we offer as a team [15:41:02] if you use role::deployment_server then ferm firewalls are enabled in the VMs which may or may not cause problems [15:42:43] ack [16:07:34] !log tools.merlbot2 Commented out all cron jobs because of thousands of email bounces for a maintainer [16:07:36] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.merlbot2/SAL [16:13:21] 10Tool-Labs-tools-Other: merlbot2 cron jobs disabled because of thousands of bound messages for maintainer emails - https://phabricator.wikimedia.org/T167692#3341229 (10bd808) [16:13:46] 10Tool-Labs-tools-Other, 10Tracking: merl tools (tracking) - https://phabricator.wikimedia.org/T69556#3341244 (10bd808) [16:13:51] 10Tool-Labs-tools-Other: merlbot2 cron jobs disabled because of thousands of bound messages for maintainer emails - https://phabricator.wikimedia.org/T167692#3341243 (10bd808) [16:14:28] 10Tool-Labs-tools-Other: merlbot2 cron jobs disabled because of thousands of bounce messages for maintainer emails - https://phabricator.wikimedia.org/T167692#3341229 (10bd808) [16:22:43] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/install labtestnet2002 - https://phabricator.wikimedia.org/T167159#3341278 (10Papaul) [16:25:16] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/install labtestneutron2002 - https://phabricator.wikimedia.org/T167160#3341291 (10Papaul) [16:51:49] So I think there used to be a way to get an instance to mount a second disk image over /srv by adding a magic puppet class? Can anyone remind me what class that is?
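The merlbot2 cleanup logged above ("Commented out all cron jobs") amounts to prefixing every active crontab line with a comment marker. A hedged sketch of that transformation (this is not the actual command sequence used; the `crontab -l` / `crontab -` plumbing is left out, and the sample entries below are invented):

```python
def comment_out_crontab(text):
    """Return a crontab with every active entry commented out.

    Blank lines and lines that are already comments are left untouched."""
    out = []
    for line in text.splitlines():
        if line.strip() and not line.lstrip().startswith("#"):
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)
```

Commenting rather than deleting keeps the schedule recoverable if the maintainer's mailbox problem ever gets fixed.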
[16:54:10] twentyafterfour: https://wikitech.wikimedia.org/wiki/Help:Adding_Disk_Space [16:54:16] twentyafterfour: fyi https://wikitech.wikimedia.org/wiki/Portal:Wikimedia_VPS [16:55:00] nice! thanks chasemp [16:55:22] twentyafterfour: note that it only really does any good on m1.medium and larger VMs [16:55:39] m1.small has a tiny amount of space to partition and mount on /srv [16:56:05] * bd808 should probably make m1.small just put the whole quota into / [16:56:37] yeah [16:56:39] yeah but hopefully we move away from this whole thing before that could actually be revised [16:57:16] self-serve attachable volumes are >1 year away still sadly [16:57:38] meh was just trying to eke out a bit more space for deployment-tin on /srv but apparently it's already got a 40GB partition on srv [16:57:47] unless ops and hardware fall from the sky [16:57:53] bd808: what, no lvm on instances? [16:57:56] heh [16:58:01] I meant we won't revise the small instance within the next year :D [16:58:23] kill it with fire? [16:58:53] "attachable block storage" is the right answer. we just need to fix some things that are more broken first [16:59:49] I can't decide if rabbitmq is an inside joke or a genuine expression of insanity [17:01:08] I thought "attachable block storage" was originally a thing in openstackmanager in wikitech [17:01:34] it never worked, then all remaining code got removed eventually [17:02:39] zhuyifei1999_: that may be, not sure. there was an attempt years ago 3+?
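Whether adding a /srv mount "does any good" on a given flavor is ultimately a free-space question. A tiny sketch for checking headroom on a mount point before bothering with the puppet class (the path and threshold here are arbitrary examples, not project policy):

```python
import shutil

def has_free_space(path, needed_bytes):
    """True if the filesystem backing `path` has at least `needed_bytes` free."""
    return shutil.disk_usage(path).free >= needed_bytes
```

On deployment-tin's case above, a check like `has_free_space("/srv", 10 * 2**30)` would have shown the 40GB partition was already there.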
that didn't work out iiuc [17:02:59] * zhuyifei1999_ searches [17:04:00] !log tools.merlbot2 Killed all running jobs for T167692 [17:04:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.merlbot2/SAL [17:04:02] T167692: merlbot2 cron jobs disabled because of thousands of bounce messages for maintainer emails - https://phabricator.wikimedia.org/T167692 [17:04:15] 10Labs, 10PAWS, 10Tool-Labs, 10Tools-Kubernetes: Consider moving PAWS to its own k8s cluster, rather than using Tools' k8s cluster - https://phabricator.wikimedia.org/T167086#3341483 (10yuvipanda) It looks like everyone's onboard with this plan, so I'll start poking at it in a week or so. [17:04:18] glusterfs may have been called that at some point [17:04:53] hmm the special page still exists https://wikitech.wikimedia.org/wiki/Special:NovaVolume [17:05:38] OpenStackManager is full of cruft that doesn't actually work/never worked ;) [17:06:01] That feature was originally a feature within nova, which we (maybe) supported [17:06:17] but it was rolled out into a separate service, cinder, and we never caught up [17:06:31] andrewbogott: was the glusterfs trial the backend to the nova front end? [17:06:41] maybe [17:06:48] * zhuyifei1999_ never heard of cinder [17:07:14] i mean, it must've been originally, I don't know if everything broke as soon as we moved off of glusterfs [17:07:16] 10Tool-Labs-tools-Other, 10Tracking: merl tools (tracking) - https://phabricator.wikimedia.org/T69556#3341508 (10bd808) [17:07:19] 10Tool-Labs-tools-Other: merlbot2 cron jobs disabled because of thousands of bounce messages for maintainer emails - https://phabricator.wikimedia.org/T167692#3341505 (10bd808) 05Open>03stalled p:05Triage>03Normal Marking as stalled, but leaving open so people can see that this is the current state of th...
[17:07:32] andrewbogott: right gotcha [17:07:54] cinder...block >>>> of storage [17:07:58] :) [17:08:18] yeah google found https://wiki.openstack.org/wiki/Cinder for me [17:11:03] Cinder is in some ways a SAN management abstraction correct? [17:11:18] bd808: that's the impression I have too [17:11:28] never seen it in action though [17:13:39] ceph is the pseudo canonical backend it seems and that's much more horizontal scale out than SAN scaleup approach [17:13:45] but I think of EMC when I think SAN [17:15:53] *nod* I do see ceph and gluster buried in the support matrix. most everything else there looks to be some kind of SAN vendor/appliance [17:16:02] oh and NFS! [17:17:21] I think kilo got an nfs driver that is kinda weird [17:18:10] and now iiuc ceph has an NFS backend [17:18:13] option [17:18:27] which is probably a better play depending on what actually works as advertised [17:25:49] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341596 (10GoranSMilovanovic) [17:26:09] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341608 (10GoranSMilovanovic) [17:27:18] chasemp: bd808 im thinking about doing a shared scap project on labs would this be allowed/plausible [17:35:21] 10Labs, 10Labs-Infrastructure, 10DNS, 10Mail, and 3 others: Set SPF (... -all) for toolserver.org - https://phabricator.wikimedia.org/T131930#3341658 (10Reedy) a:03herron [17:36:52] Zppix: what is the point?
[17:37:06] we have scap3 in beta cluster [17:37:37] bd808: halfak was saying labs should have a shared deploy server for labs instances instead of making everyone create their own and i agree [17:37:39] I don't see a lot of projects clamoring to use WMF production deployment tooling in Cloud Services [17:38:00] find more than the ai project that's going to use it [17:38:10] and then think hard about how you will do access control [17:38:21] also i was wondering if i could name a phab project labs-icinga2 as myself and paladox maintain an icinga2 instance on wmflabs but with that name it implies you guys run it so i was being sure you were okay with that name [17:40:05] "labs-project-*" is the prefix for phab projects related to VPS projects -- until we do the renaming in T167244 [17:40:05] T167244: Rename and update Cloud Services Phabricator projects - https://phabricator.wikimedia.org/T167244 [17:40:19] bd808: okay ill do that then thanks [17:41:06] but your one icinga2 instance hidden in the gerrit project seems sort of unlikely to need a phab project [17:41:35] I'm kind of burned out on these "projects" that have no customers, requirements, or roadmaps [17:42:00] it looks a bit like hat collecting [17:43:16] bd808 we have one user of icinga2 [17:43:19] halfak :) [17:43:46] using it for operational paging about instance issues? [17:43:58] or using it in how he lets you deploy the collectors? [17:44:00] yep [17:44:09] it's in operation in #wikimedia-ai and #wikimedia-bot-testing [17:44:27] what does it do that shinken does not? [17:44:34] It's maintained [17:44:39] shinken is not maintained [17:44:53] (i do not mean labs, i mean upstream). [17:45:16] that is a non-answer [17:46:51] does it have user facing features that shinken does not?
[17:47:38] the direction for production currently seems to be to use prometheus and related tools [17:47:56] I'm not sure why I wandered into this discussion yet again [18:48:01] bd808: it has a webui [17:48:01] Well, it's easier to navigate. It uses ldap. You can add hosts through the web editor (though i am not sure if we should allow that, instead doing it through a separate repo). I find the icinga web ui easier to navigate than shinken's. [17:48:06] gerrit-icinga.wmflabs.org [17:48:09] my past questions have never been answered [17:48:10] Zppix shinken has a webui [17:48:22] http://shinken.wmflabs.org [17:48:35] bd808: im genuinely curious why this is such a big deal [17:49:53] monitoring that is unreliable is worse than no monitoring. telling people that a service will be useful and reliable is a big deal. [17:50:21] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341596 (10Reedy) > Intensively searched for similar problem reports and solutions; many users are complaining about this; still with no success on my Labs instance... [17:50:32] bd808: where have i or paladox said its reliable?
[17:50:33] and I have honestly got questions about how long until this becomes boring and the two of you move on to the next shiny problem [17:51:06] bd808: I dont know how you operate but when i commit to something im committed [17:53:01] Zppix: your behavior has predominantly fit in with the 'hat collecting' motif in the past, so there is reason to question long term maintenance [17:53:56] https://en.wikipedia.org/wiki/Wikipedia:Hat_collecting [17:54:40] until I see some of the things I asked about in https://phabricator.wikimedia.org/T162629#3187911 answered on phab/wiki icinga2 is just going to be a trigger word for me [17:54:43] greg-g: so when i try to collab with other volunteers on a project that could assist with other labs users its hat collecting, im not trying to start anything but thats insulting towards me.... not to mention how is requesting a phab project hat collecting? its a way to organise workflow and manage tasks... [17:55:38] "examine the open phabricator tasks related to Labs monitoring and provide a well researched explanation of which of them would be resolved with icinga2. You would also need to outline a maintenance and support plan for the new icinga2 service that seemed reasonable enough that we could rely on it." [17:56:55] We're not forcing people to use it; in fact, if they want to they can ask. the most we've done is drop a question and if they say no we leave it at that and usually we do that only if they are looking for monitoring [18:00:26] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341832 (10GoranSMilovanovic) @Reedy "Complaining where? I presume you don't mean on labs" - No, I don't mean complaining on Labs; I mean: many RStudio Shiny Server... [18:01:07] I can tell you feel attacked, Zppix.
Trust me though, Bryan's line of questioning comes from a long history of seeing things started that A) don't address the needs of the people in a better way than the current tooling, B) only create more maintenance costs, and C) are going in a different direction than the rest of the local ecosystem (eg: prod). All of those together mean it can easily turn [18:01:13] into a lot of wasted work for everyone involved. [18:02:45] greg-g: so that warrants accusations of hat collecting [18:03:01] Your other past behaviour suggests this [18:03:14] Such as signing the NDA for no actual purpose [18:04:45] Reedy: there was, i did it to apply for ores [18:04:48] orts* [18:04:51] No [18:04:56] You did that AGES before ores [18:05:11] I remember the discussion [18:05:18] Your reasoning was "just in case" or something [18:05:21] otrs? [18:05:42] the ticketing system for copyright issues and etc [18:06:03] yes, I was just making sure that's what you meant since you corrected to another wrong acronym :) [18:06:22] i always misspell that acronym [18:09:01] refers to it as ticket.wikimedia.org [18:10:26] \o/ [18:10:28] Hey folks! [18:10:35] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341596 (10bd808) If all you need is a transparent reverse proxy, you should be able to open port 3838 in your project's security groups and then target the wdcm.wm... [18:11:02] I've been talking to Zppix about getting some systems that exist in prod to also be available for labs/cloud so that services can more easily transition & share configuration [18:11:09] So direct some of your frustration at me too [18:11:20] Sorry if I'm making Zppix look like a hat collector. [18:12:02] FWIW, I've also been talking to Zppix and Paladox about how we'll make services that the Cloud team can't support sustainable in the long term. [18:12:17] I just want Cloud to look as much like Prod as possible.
[18:12:52] that's not the point of Cloud Services broadly. It is the point of the beta cluster project however [18:13:18] We don't want to duplicate $every_service in beta. [18:13:28] Just things that relate to actual wiki traffic. [18:13:44] bd808, fair point. However, I'd like to run an experimental version of ORES in labs. I'd like to have Quarry and PAWS use the same monitoring, changeprop, etc. as in prod. [18:13:59] I'm not talking about beta. [18:14:09] I was talking to bd808 ;-) [18:14:11] I'm talking about everything that will live in labs long term. [18:14:14] oh :) [18:14:55] halfak: I understand the test/acceptance use-case for VPS projects [18:15:16] but things like a shared scap3 deploy server don't make a lot of sense to me yet [18:15:26] I actually don't think ^ is a bad idea. [18:15:41] (Plus, it would be completely trivial to provision) [18:15:53] really we do need a shared deployment server but it needs to be authorized across labs projects and that is something I never figured out how to do [18:16:06] That too ^ [18:16:12] but completely a nightmare to deal with 2 projects that wanted different versions of scap3 thing X [18:16:13] we could instantly delete several redundant instances if we had a single centralized scap server [18:16:46] bd808: not really, using branches in the deployment project (or separate deployment repos) could work [18:17:44] I've spun up multiple phab-scap and deployment-phab instances trying to get scap3 working right in labs [18:17:51] and those should not need to exist [18:18:02] separate deploy repos implies separate Puppet config as well I think. for things that are using the ::service::$type abstraction at least [18:18:34] I've done it too -- https://wikitech.wikimedia.org/wiki/User:BryanDavis/Scap3_in_a_Labs_project [18:18:36] These are solvable problems though.
And easier than expecting every individual project to know how to set up their own scap master [18:18:38] :) [18:19:18] I think we managed to deploy from deployment-tin to phabricator-phab01 with scap3 [18:19:20] I'm not against it morally or anything. :) [18:19:35] so cross-project is doable but requires some futzing [18:20:05] yeah. I think it would mostly be about firewall holes on the project hosting the deploy server [18:20:18] after that it's just ssh [18:20:21] yeah [18:21:00] well, creating the service user was a little tricky I think [18:21:19] didn't andrew just remove support for service users? [18:21:20] heh [18:21:37] service users are goofy [18:21:38] * twentyafterfour was only half-paying-attention to the email about that [18:22:05] it's easier to have a "bot" LDAP account [18:22:49] deploy-service is already in LDAP too [18:24:15] true multi-tenant isn't something that scap3 is really built for though so there has to be quite a bit of trust of all the deployers and projects using a shared server [18:24:26] there is group permissions stuff [18:24:58] but in the VPS environment there's not a strong guarantee that user X can't figure out how to act as user Y on any given instance [18:25:33] mostly because our sudo rules default to being permissive [18:27:04] and too much stuff is shared [18:28:26] I still like my phab ticket about the nfs setuid exploit that was fixed a while ago ;) [18:29:05] 10Labs, 10DBA: Prepare and check storage layer for atjwiki - https://phabricator.wikimedia.org/T167715#3341927 (10Reedy) [18:29:09] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3341941 (10GoranSMilovanovic) @bd808 @Reedy I already have a security group for Shiny Server, port 3838 opened. My /etc/nginx/nginx.conf is as follows (*exactly*...
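The Shiny Server debugging running through this log (is anything actually answering on port 3838 of the backend the proxy points at?) reduces to a plain TCP connect check. A small sketch; the host and port in the usage note are illustrative placeholders, not the project's verified names:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """True if a TCP connection to (host, port) succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run from inside the project, something like `tcp_port_open("instance-name.eqiad.wmflabs", 3838)` returning False means the proxy has nothing to forward to (service not listening, or a security group blocking the port), independent of any nginx configuration on the instance.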
[18:59:23] 10Labs, 10Tracking: New Labs project requests (tracking) - https://phabricator.wikimedia.org/T76375#3342239 (10madhuvishy) [19:12:21] (03CR) 10Lokal Profil: [C: 04-1] "I migrated some draft comments to the latest patch-set." (036 comments) [labs/tools/heritage] (wikidata) - 10https://gerrit.wikimedia.org/r/354961 (https://phabricator.wikimedia.org/T165988) (owner: 10Jean-Frédéric) [19:15:07] (03CR) 10Lokal Profil: [C: 04-1] Proof of concept to harvest Wikidata into monuments database (033 comments) [labs/tools/heritage] (wikidata) - 10https://gerrit.wikimedia.org/r/354961 (https://phabricator.wikimedia.org/T165988) (owner: 10Jean-Frédéric) [19:37:09] 10Labs, 10Labs-Infrastructure, 10Operations, 10netops, 10ops-codfw: codfw:labtestnet2002 switch port configuration - https://phabricator.wikimedia.org/T167322#3342417 (10RobH) 05Open>03Resolved Ok, fixed and live. [19:37:14] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/install labtestnet2002 - https://phabricator.wikimedia.org/T167159#3342419 (10RobH) [19:42:59] 10Labs, 10Labs-Infrastructure, 10Operations, 10netops, 10ops-codfw: codfw: labtestneutron2002 switch port configuration - https://phabricator.wikimedia.org/T167326#3342443 (10RobH) 05Open>03Resolved done and live [19:43:04] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/install labtestneutron2002 - https://phabricator.wikimedia.org/T167160#3342445 (10RobH) [19:49:24] 10Labs, 10Labs-Infrastructure, 10Beta-Cluster-Infrastructure: Create a new instance flavor for deployment-prep - https://phabricator.wikimedia.org/T167723#3342471 (10hashar) [19:49:52] 10Labs, 10Labs-Infrastructure, 10Beta-Cluster-Infrastructure: Create a new instance flavor for deployment-prep - https://phabricator.wikimedia.org/T167723#3342471 (10hashar) [19:49:57] 10Labs, 10Tracking: Existing Labs project quota increase requests (Tracking) - https://phabricator.wikimedia.org/T140904#3342486 
(10hashar) [19:50:11] 10Labs, 10Labs-Infrastructure, 10Beta-Cluster-Infrastructure, 10Release-Engineering-Team: Create a new instance flavor for deployment-prep - https://phabricator.wikimedia.org/T167723#3342471 (10hashar) [19:57:26] 10Labs, 10Analytics: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342573 (10bd808) >>! In T167702#3341941, @GoranSMilovanovic wrote: > @bd808 @Reedy > > I already have a security group for Shiny Server, port 3838 opened. This... [19:58:00] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342577 (10Nuria) [20:15:19] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342683 (10GoranSMilovanovic) The Horizon managed proxy is already pointed at port 3838, IP Protocol = TCP, Remote IP Prefix = 0.0.0.0/0, and still, it does not work. I wouldn't... [20:28:02] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342734 (10bd808) Looking at https://tools.wmflabs.org/openstack-browser/project/wikidataconcepts shows that your proxies are pointing to ci-jessie-wikimedia-486020.contintcloud.e... [20:30:21] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342739 (10GoranSMilovanovic) Ok, but how is it possible that I can reach RStudio Server on port 8787 then (that would be: http://wikidataconcepts.wmflabs.org/, I'm currently work... [20:42:56] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342780 (10bd808) >>! 
In T167702#3342739, @GoranSMilovanovic wrote: > Ok, but how is it possible that I can reach RStudio Server on port 8787 then (that would be: http://wikidatac... [20:50:48] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342826 (10GoranSMilovanovic) @bd808 @Reedy Thank you very much for your efforts. As I've told you, it seems that running the RStudio Shiny Server under a web-proxy causes trouble... [20:51:15] Reedy: are you working today? I have a "I bet Reedy knows how to do it" question about deleting a MediaWiki account. [20:51:34] 10Labs: Need support on hosting an RStudio Shiny Server on a Labs instance behind a proxy - https://phabricator.wikimedia.org/T167702#3342827 (10Reedy) >>! In T167702#3342826, @GoranSMilovanovic wrote: > @bd808 @Reedy Thank you very much for your efforts. As I've told you, it seems that running the RStudio Shiny... [20:51:40] bd808: Kinda [20:51:43] Going out for lunch soon [20:51:57] What's up? [20:52:37] it's not urgent at all. Chad and I seem to have landed on deleting the conflicting account as part of the solution for T165624 [20:52:37] T165624: Request to rename LegoFan4000 to MacFan4000 on WikiTech - https://phabricator.wikimedia.org/T165624 [20:53:02] I was just cleaning up open tabs and remembered :) [20:53:49] Which part? :P [20:54:29] is there anything special to look out for in deleting a row from the users table? [20:54:54] Any contribs or log entries [20:54:58] That's it really [20:55:13] contribs was empty last I looked [20:55:24] There will be a new user log entry at least [20:55:26] I guess there would be an account creation log somewhere [20:55:29] yeah [20:55:43] so that should be deleted too? [21:00:40] Yeah [21:01:41] Ok, so delete users row, delete associated log entries, and delete the LDAP record. I'll make a note on the task and try to get that done this week.
[21:02:02] Reedy: you should go have lunch :) [21:02:39] I guess you might want to check the other user_ tables too [21:04:06] Oh yeah, prefs [21:04:11] I'm guessing there isn't a maintenance script for this already [21:04:21] Nope because we don't delete users :) [21:04:48] would it be easier just to rename the unused account? [21:06:16] Yup [21:06:20] If renameuser is enabled [21:06:48] it seems to be [21:06:58] Heh, I can't believe we didn't think of that [21:07:02] Just rename [21:07:21] I wonder if it works properly with LDAP? [21:07:30] Shouldn't matter [21:07:42] ldap extension only keeps userid [21:07:53] so as long as it doesn't push the name into some ldap attrib [21:08:10] cn & sn are the wiki user name [21:08:23] but I can fix that manually if needed [21:08:38] So do a rename like the normal on-wiki process says, to SomeBogusUnusedCrud [21:08:43] Then rename the account to the now-open name [21:09:10] up [21:09:15] *yup even [21:18:00] 10Labs, 10Gerrit, 10wikitech.wikimedia.org: Request to rename LegoFan4000 to MacFan4000 on WikiTech - https://phabricator.wikimedia.org/T165624#3342899 (10bd808) After talking this over on irc, the plan is to: * rename `MacFan4000` to `Abandoned-MacFan4000` using [[ https://wikitech.wikimedia.org/wiki/Specia... [21:25:08] 10Labs, 10Labs-Infrastructure, 10Patch-For-Review, 10Release-Engineering-Team (Kanban): Track labs instances hanging - https://phabricator.wikimedia.org/T141673#3342921 (10hashar) 05Open>03Resolved a:03hashar Got solved. Was most probably a kernel bug of some sort, see T152599 for traces etc. [21:30:19] 10Labs, 10Labs-Infrastructure: labvirt1006 super busy right now - https://phabricator.wikimedia.org/T165753#3342939 (10hashar) 05Resolved>03Open labvirt1006 still seems heavily loaded. Especially the disk I/O seems very high based on Grafana ( [[ https://grafana.wikimedia.org/dashboard/file/server-board.json... 
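Reedy's checklist for deleting a user row ("Any contribs or log entries") can be phrased as a pre-flight query. An illustrative sketch against a toy SQLite schema, using the 2017-era MediaWiki column names `rev_user` and `log_user`; the real `revision`, `logging`, and other `user_*` tables have many more columns, and this is only the shape of the check, not a supported script:

```python
import sqlite3

def user_is_deletable(conn, user_id):
    """True only if the user has no revisions and no log entries."""
    revs = conn.execute(
        "SELECT COUNT(*) FROM revision WHERE rev_user = ?", (user_id,)
    ).fetchone()[0]
    logs = conn.execute(
        "SELECT COUNT(*) FROM logging WHERE log_user = ?", (user_id,)
    ).fetchone()[0]
    return revs == 0 and logs == 0
```

The account-creation log entry alone makes this return False for most accounts, which is part of why the conversation lands on renaming via the Renameuser extension instead of deleting.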
[21:38:35] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw: rack/setup/install labtestpuppetmaster2001 - https://phabricator.wikimedia.org/T167157#3342984 (10RobH) [21:38:44] 10Labs, 10Labs-Infrastructure, 10Operations, 10ops-codfw: rack/setup/install labtestpuppetmaster2001 - https://phabricator.wikimedia.org/T167157#3319424 (10RobH) [21:38:48] 10Labs, 10Labs-Infrastructure, 10Operations, 10netops, 10ops-codfw: codfw: labtestpuppetmaster2001 switch port configuration - https://phabricator.wikimedia.org/T167321#3342986 (10RobH) 05Open>03Resolved Done!