[11:04:03] Oh wow, the action button is hidden on mobile (using horizon)
[11:06:11] Nvm
[11:06:21] It was on the bastion project
[12:19:57] Zppix: I'm pretty sure Sopel is to blame here
[12:20:09] see https://github.com/sopel-irc/sopel/issues/1144
[14:33:01] hi, I have created my bot on a free host, but I want to upload it to Tool Labs!
[14:33:15] https://it.wikiversity.org/wiki/Utente:Bot_Vegas
[14:40:04] losvegas: see https://tools.wmflabs.org/
[14:40:11] section "Develop your own tool"
[14:41:19] I requested access 7 days ago
[14:41:35] Oh, I see! Sorry!
[14:43:27] https://toolsadmin.wikimedia.org/tools/membership/status/274
[14:43:55] 3 days ago, I was wrong
[14:44:30] losvegas: do you have approval in it.wikiversity for the test edits?
[14:44:42] yes
[14:44:57] I chatted with an administrator
[14:45:25] https://it.wikiversity.org/wiki/Discussioni_utente:Pierpao#Re:Benvenutare
[14:45:54] https://it.wikiversity.org/wiki/Discussioni_utente:Los_Vegas#Benvenutare
[14:46:11] and I chatted with other users on Telegram
[14:47:53] losvegas: your request is being reviewed. I'll get back to you as soon as possible
[14:48:17] ok
[14:48:44] thanks so much
[17:24:59] o/ all. Our commtech wiki is unwell - http://commtech.wmflabs.org - what's the recommended way to fix it?
[17:25:22] Niharika: upgrade hhvm?
[17:25:48] bd808: Will vagrant git-update do it, or is there another command for that?
[17:26:30] you'll have to do it manually: vagrant ssh; sudo apt-get update; sudo apt-get dist-upgrade; that should get you the newer stuff
[17:26:30] this might help? https://wikitech.wikimedia.org/wiki/HHVM#Upgrade_HHVM_to_a_new_upstream_version
[17:27:09] bd808: Oh, actually, for some reason the mediawiki-vagrant directory has gone missing on that instance.
[17:27:17] Niharika: if the container is still trusty you might be best served by backing up the db and media files and rebuilding the mw-vagrant container as stretch
[17:27:35] This is similar to what happened on one of the scholarships instances a while back.
[17:27:52] ummm...
[17:28:03] maybe a /srv mount problem?
[17:28:09] (nvm me. that's infrastructure docs. /me finds more coffee)
[17:28:16] Thanks quiddity. :)
[17:28:18] Niharika: what's the instance name?
[17:29:16] bd808: Wait, I see it now. Sorry, I was looking in the wrong place before. I'll try updating hhvm now and report back if it works.
[17:29:18] Thanks!
[17:29:59] yw. that whole VM instance should probably be rebuilt "soon", but that can happen another time
[17:30:39] * bd808 needs to build the "list of Trusty vms" wall of shame soon'ish
[17:35:11] bd808: The HHVM package did not get upgraded. It's still on 3.12.7.
[17:35:15] https://www.irccloud.com/pastebin/scT4xoUn/
[17:35:34] It says it's the latest, but clearly it isn't. The website says otherwise.
[17:35:49] * Niharika is looking
[17:36:10] did you `sudo apt-get update` before that?
[17:36:27] chicocvenancio: I did.
[17:36:43] https://github.com/facebook/fbctf/issues/544
[17:36:53] Err https://hhvm.com/blog/2017/02/15/hhvm-3-18.html
[17:37:08] Okay, so no more trusty.
[17:38:47] Niharika: yeah, if the container is still trusty you're going to have a hard time.
[17:40:10] bd808: Okay, is there any chance there's some documentation on how to back up the data from this instance and restore it on a new one?
[17:40:35] https://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki
[17:41:07] Right, thanks. :)
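For reference, the backup-and-rebuild path bd808 suggests at [17:27:17] comes down to a database dump plus a copy of any locally uploaded media, as the manual linked just above describes. A minimal sketch of that for a mediawiki-vagrant instance, assuming the stock database name "wiki", a checkout under /srv/mediawiki-vagrant, and uploads in mediawiki/images (all assumptions; paths and credentials on the real VM may differ):

    # From the mediawiki-vagrant checkout on the instance (path illustrative).
    cd /srv/mediawiki-vagrant

    # Dump the wiki database from inside the box; add -u/-p options if the
    # box's MySQL requires explicit credentials.
    vagrant ssh -- sudo mysqldump wiki | gzip > wiki-db-$(date +%Y%m%d).sql.gz

    # The mediawiki/ directory is shared with the host, so locally uploaded
    # media can be tarred up directly from the host side.
    tar czf wiki-images-$(date +%Y%m%d).tar.gz mediawiki/images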
[17:41:31] a db dump and a copy of any local media files should cover it, I think
[17:43:05] I guess I was wondering if vagrant comes with any magic commands to do this for me. /me dreams
[17:43:37] you can use the scripts we use for production, but that could be overkill
[17:43:39] Niharika: nobody has written one. There's a phab task from tgr that's probably several years old now
[17:44:11] Ah, okay. So it's definitely a realistic dream.
[17:44:41] well, at least you aren't the only person who has asked ;)
[17:45:00] this is where I say "patches welcome" ;)
[17:47:07] didn't I write that at some point?
[17:48:32] vagrant export-dump, there you go
[17:49:28] Woo! Thanks tgr! :D
[17:49:47] oh right, that only deals with content, not a full DB dump
[17:49:58] so I guess I didn't
[17:50:23] still, might be good enough for most cases
[17:50:34] vagrant ssh -- sudo mysqldump .... (more magic needed here)
[19:25:32] !log phabricator deleted phabricator-stretch5 and recreate as phabricator-stretch6 testing php 7.2
[19:25:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Phabricator/SAL
[19:26:21] mutante ^^ :)
[19:30:58] paladox: cool! :)
[19:35:28] !log wikistream cleared out some HUGE logfiles on ws-web
[19:35:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[19:37:04] !log wikistream rebooted ws-web in an attempt to revive it after running out of disk space
[19:37:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikistream/SAL
[19:43:23] !log tools-proxy-* Forced puppet run to apply https://gerrit.wikimedia.org/r/#/c/421472/
[19:43:23] bd808: Unknown project "tools-proxy-*"
[19:43:29] !log tools tools-proxy-* Forced puppet run to apply https://gerrit.wikimedia.org/r/#/c/421472/
[19:43:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[19:58:12] !log rcm Oxygen: Preparing shutdown to delete and create the VM again
[19:58:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Rcm/SAL
[19:59:43] !log integration upgraded python-conftool on integration-slave-jessie-1001 and integration-slave-jessie-1004 to resolve puppet warnings
[19:59:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Integration/SAL
[20:07:49] !log rcm Oxygen: Deleted
[20:07:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Rcm/SAL
[20:41:16] !log rcm Oxygen: Created
[20:41:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Rcm/SAL
[20:59:31] bd808: hi, are you around?
[20:59:54] just heading into a meeting, Sagan
[21:00:17] bd808: hm, ok. Is somebody else here who can help me with my floating IP?
[21:00:27] andrewbogott: ?
[21:00:38] try "!help" :)
[21:00:40] Sagan: what do you need?
[21:01:06] andrewbogott: hi :). I've deleted a VM with a floating IP assigned (Horizon failed to unassign the IP), and now I can't assign it to the new one
[21:01:30] the project is rcm, the IP was 208.80.155.234 in the past, and the instance it should be assigned to is oxygen.rcm.eqiad.wmflabs
[21:01:51] Sagan: ok, will look
[21:01:54] ugh. I've seen this happen before. The delete triggers apparently don't release the IP back to the pool.
[21:01:56] Horizon only says: "Error: Can not assign the floating ip" (or similar)
[21:02:05] andrewbogott: thx :)
[21:02:22] bd808: hm, I guessed something like that. Sadly, I was not able to release the IP before.
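The log doesn't show how andrewbogott actually reattached the address; for what it's worth, here is a rough sketch of the equivalent fix with the standard python-openstackclient, reusing the project, instance, and address named above (on the nova-network setup of the time, an admin may well have used different tooling):

    # See which instance, if any, 208.80.155.234 is currently attached to.
    openstack --os-project-name rcm floating ip list

    # Attach the address to the rebuilt instance.
    openstack --os-project-name rcm server add floating ip oxygen 208.80.155.234

    # If the address is still marked as in use by the deleted VM, it may first
    # need to be disassociated or released back to the pool by an admin.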
[21:02:32] andrewbogott ah
[21:02:38] I think he hit the same thing as me
[21:03:10] Sagan: better?
[21:03:16] T189706
[21:03:16] T189706: Floating Ip panel missing from new horizon update - https://phabricator.wikimedia.org/T189706
[21:03:28] andrewbogott: yeah, it's shown as assigned now
[21:03:30] thank you very much :)
[21:03:54] Sagan: great. I need to figure out if I can revive the UI for that… otherwise floating IPs may move into the realm of 'just make a phab request'
[21:04:06] hm, yeah :/
[21:04:17] (Since we're about to rip out our entire network layer, I'm reluctant to burn too many hours fixing the UI for our current layer)
[21:04:30] andrewbogott: I wonder if it will come back magically with the neutron stuff
[21:04:50] since neutron handles all of these things a bit differently than nova-network
[21:05:15] * bd808 votes to just get to neutron already ;)
[21:07:14] bd808: I explicitly axed the floating-ip UI in Horizon because it was trying to talk to a not-there neutron
[21:07:26] so yeah, ideally I can just remove a # from the code and it'll all work
[21:07:50] but it might be that I can convince the existing code to talk to nova-network. Haven't dug into it yet.
[21:20:33] andrewbogott: do you know how long it takes to apply a change to a security group?
[21:21:11] Sagan: it should be more or less immediate. Always good to double-check that you don't have iptables blocking things on your VM, though
[21:22:54] andrewbogott: it looks like I currently have a problem with security groups on Horizon: I can't change them. There is no error, but when I open the window again, all changes are reverted
[21:23:05] which currently makes one of my hosts unreachable via the web
[21:23:40] Sagan: ok, I saw this same issue a couple of days ago but couldn't track it down.
[21:23:49] What groups do you want? And this is still oxygen, right?
[21:24:23] andrewbogott: icinga for oxygen, and default and icinga for neon, please
[21:24:45] neon is currently the unreachable instance, since Horizon managed to remove all groups from it
[21:31:03] Sagan: better?
[21:31:56] andrewbogott: yeah, thanks :)
[21:36:48] andrewbogott: if you want to track the issue down: sadly I've touched the groups of tin as well, and Horizon removed everything. So you can test it there if you want; it's not urgent there, since I don't currently need the host, but it would be nice if you could add "default" back when you're finished :). sorry
[21:37:44] Sagan: I've added 'default' for now, but yeah, I may use it as a test case later on
[21:37:52] hmm, should a task be created for this?
[21:37:55] andrewbogott: ok, thx :)
[21:38:02] * Sagan stops touching security groups for now
[22:22:47] bd808: how's the situation with the load due to hikebike? Any help needed?
[22:24:29] MaxSem: I think we are doing ok at the moment. We put in a band-aid of rotating nginx logs more often to stop the disk from filling up. I mostly opened that ticket so I could find out who to talk to about doing more, and I think that is working.
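The "band-aid" described at [22:24:29] isn't spelled out in the log; on a Debian/Ubuntu proxy host, rotating the nginx access logs more often might look roughly like this (the cron schedule and paths are illustrative assumptions, not what was actually deployed):

    # Force an hourly nginx log rotation instead of waiting for the daily
    # logrotate run.
    echo '17 * * * * root /usr/sbin/logrotate -f /etc/logrotate.d/nginx' | \
        sudo tee /etc/cron.d/nginx-hourly-rotate

    # Reclaim space right away by dropping already-rotated logs older than a day.
    sudo find /var/log/nginx -name '*.gz' -mtime +1 -delete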
[22:24:58] a next step is probably to move those servers out from behind the shared proxy
[22:25:17] which should be mostly transparent once I decide where to put the new proxy
[22:25:29] curiously, the server itself isn't overloaded or anything
[22:26:04] so what happened is that the disk on the shared proxy server was filling up due to access log volume
[22:26:32] we have one host that almost all the *.wmflabs.org http traffic goes through
[22:27:03] and it looks like it has been on the ragged edge of problems for months and just finally tipped over
[22:27:17] doesn't look like it's that many requests: http://tiles.wmflabs.org/munin/mod_tile-day.html
[22:28:41] that's just the ones that miss the cache though, right?
[22:29:10] depends on what cache we're talking about
[22:29:44] is that total request volume to the {a,b,c}.tiles vhosts?
[22:30:01] yes
[22:30:38] at least this host doesn't seem to have any http cache
[22:32:42] maps-tiles2.maps.eqiad.wmflabs is the bigger server, but it doesn't even look like it has http endpoints pointed at it
[22:33:02] nah, maps-tile3 is serving the traffic
[22:33:16] oh wow. Is there really still a precise host in that project?
[22:33:36] almost 4 years old
[22:34:18] someone did an in-place trusty upgrade on it :)
[22:34:30] * bd808 feels a tiny bit better
[22:35:17] last time I tried something like that I just bricked the instance, but this project has some masters :)
[22:35:35] is maps-tiles2.maps.eqiad.wmflabs even doing anything?
[22:36:04] besides taking up a whole project's worth of quota, I mean ;)
[22:37:03] dunno, I'm not really a maintainer, I just have access and do sophisticated shit there like restarting apache
[22:38:06] the CPU hasn't been less than 75% idle for the last year...
[22:38:17] that box is just eating resources :/
[22:39:19] like, probably, half of all the VMs...
[22:40:01] we do a cleanup about once a year and get a ton of spare resources back, but yeah
[23:15:30] (PS1) BryanDavis: Order maintainers by cn [labs/striker] - https://gerrit.wikimedia.org/r/421669
[23:15:32] (PS1) BryanDavis: Update UI to use term "Wikimedia developer account" [labs/striker] - https://gerrit.wikimedia.org/r/421670 (https://phabricator.wikimedia.org/T190543)
[23:17:15] (CR) jerkins-bot: [V: -1] Update UI to use term "Wikimedia developer account" [labs/striker] - https://gerrit.wikimedia.org/r/421670 (https://phabricator.wikimedia.org/T190543) (owner: BryanDavis)
[23:17:17] (CR) jerkins-bot: [V: -1] Order maintainers by cn [labs/striker] - https://gerrit.wikimedia.org/r/421669 (owner: BryanDavis)
[23:26:38] !log tools clush -w @exec -w @webgrid -b 'sudo find /tmp -type f -atime +1 -delete'
[23:26:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[23:46:36] !log tools.admin Restarted to deploy 90013f7
[23:46:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.admin/SAL