[09:36:00] Hello! We've been contracted to set up a web application on a cloud VPS server instance. We've been given root access to an instance, but need to set up a LAMP stack. We followed the instructions at https://wikitech.wikimedia.org/wiki/Help:LAMP_instances but got stuck at "Reach the "configure" page of the instance from Special:NovaInstance. Check the box next to role::lamp::labs in the 'apache' section.". The page doe
[09:36:26] Any help much appreciated - Thanks!
[09:39:15] your message was cut off at "The page doe"
[09:40:07] it was "The page does not exist and there is nothing similar in Horizon."
[09:41:45] oh, that doc seems outdated
[09:42:27] we now use the puppet panel in Horizon to set puppet roles
[09:43:19] systopia: do you have access to horizon.wikimedia.org ?
[09:44:15] Yes, I do have access. I can also see the roles, but I'm unsure how to proceed. I'm not very familiar with puppet.
[09:48:10] also `role::lamp::labs` no longer exists, it seems
[09:48:44] so I'm not sure what to recommend next
[09:49:19] using puppet to install your software is not required, you can just use apt/aptitude to install the required packages
[09:52:48] Ok, so manually installing and configuring a complete LAMP stack would be the way to go then? I had hoped to avoid that and have an image loaded or something.
[10:00:33] I would need to search our puppet tree to see if there is something similar to a LAMP stack
[10:01:38] the apt thing is just if you want a quick unblock
[10:05:17] @arturo I would really appreciate it if you found something like that, thanks
[10:13:11] systopia: what do you want to do? what is your Cloud VPS project?
[10:16:56] arturo: We need to install Drupal 7 and CiviCRM
[10:18:19] systopia: I don't think we have a LAMP stack ready to use in our puppet tree
[10:19:21] you may try using docker, vagrant, ansible or any other mechanism for deploying stuff, but I can't offer you any more options from the operations/puppet.git point of view (where our puppet source code tree is stored)
[10:20:52] arturo: Too bad, thanks anyway for looking into it
[10:23:19] welcome
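
A minimal sketch of the plain apt/aptitude route suggested at 09:49, assuming a Debian/Ubuntu instance. Package names (mariadb-server vs. default-mysql-server, the PHP version and extension set) vary between releases, so treat this as illustrative rather than the exact commands:

```
# Baseline LAMP packages for something like Drupal 7 + CiviCRM (run as root).
apt update
apt install -y apache2 mariadb-server php libapache2-mod-php \
    php-mysql php-gd php-xml php-mbstring php-curl

# Optional: interactive hardening of the database server.
mysql_secure_installation

# Make sure both services are running and come back after a reboot.
systemctl enable --now apache2 mariadb
```

The web application itself (Drupal 7, CiviCRM) still has to be downloaded and configured on top of this base stack.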
[11:03:21] godog: you around?
[11:04:29] I'm having issues with the toolforge prometheus servers and wanted to check with you what you would recommend
[11:05:34] arturo: yeah I'm here but need to run an errand now, I'll be back in 15
[11:05:41] ack
[11:06:08] arturo: feel free to explain the problem in the meantime, I'll read when I'm back
[11:06:17] ok
[11:06:47] so I'm migrating tools-prometheus-01/02 to tools-prometheus-03/04
[11:06:53] (jessie to buster)
[11:07:00] the software and the config are in place, good
[11:07:32] now, for the new servers to have the old metrics data, I just scp'ed from tools-prometheus-01 to tools-prometheus-03
[11:07:49] after the scp, I forgot to adjust ownership of the metric files
[11:08:07] now prometheus refuses to start with `opening storage failed: invalid block sequence: block time ranges overlap`
[11:08:41] and I would say the disk is more full than it ever was
[11:08:53] so it could mean we have duplicated metrics for whatever reason
[11:09:40] I'm tempted to just `rm -rf` the metrics data in tools-prometheus-03/04, and sync again from -01/02 (we will lose some data, but hey)
[11:09:54] unless you know a way to tell prometheus to fix itself
[11:10:11] or ignore metrics, purge the invalid ones or whatever
[11:22:25] arturo: yeah that's likely an error of old data (blocks) plus new data that prometheus wrote in the meantime, the simplest would be to copy the data again
[11:23:28] arturo: also if 01/02 are still collecting metrics as usual the loss should be minimal
[11:23:43] they are already shut down (not deleted though)
[11:23:59] it seems -04 is in better shape so I might simply use -04 as primary
[11:24:34] ah ok, good news!
[11:25:20] but yeah IIRC prometheus is able to tolerate overlapping blocks, but only since two versions ago or something like that
[11:26:05] BTW godog did you see that I rebuilt the pkg for buster?
[11:27:37] oh no, -04 has the same problem -_-
[11:28:18] meh
[11:31:28] arturo: ah! no I missed the rebuild, thanks a lot!
[11:31:57] yw
[11:37:41] !log tools re-create tools-prometheus-03/04 as 'bigdisk2' instances (300GB) T238096
[11:37:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:37:45] T238096: Toolforge: prometheus: refresh setup - https://phabricator.wikimedia.org/T238096
[11:38:05] !log tools start tools-prometheus-01 again to sync data to the new tools-prometheus-03/04 VMs (T238096)
[11:38:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[16:02:10] arturo: how'd it go with the new VMs ?
[16:23:19] godog: apparently much better
[16:23:35] thanks! will ping you if I see more issues
[16:56:59] https://tools.wmflabs.org/admin/tools vs. tool records on https://toolsadmin.wikimedia.org/tools/id/squirrelnest-upf - I can't figure out why the listing on the first link is showing /None at the end of the URLs for my tool's links, and therefore giving 404 errors, instead of just using a path ending with /
[16:59:35] it had been doing that previously, but as of several days ago when I checked, it was still showing that, as well as another duplicate entry which I couldn't account for, and adding a toolinfo.json file didn't fix it
[17:07:47] DSquirrelGM: I suggest you open a phabricator task
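
A rough sketch of the wipe-and-resync approach from the 11:09 message: stop the service, remove the partially copied TSDB blocks, copy again from the old host, and fix the ownership that was missed after the original scp. The data directory, service name and prometheus user below are assumptions following standard Debian prometheus packaging, not the actual Toolforge paths:

```
# Run as root on tools-prometheus-03/04; paths and names are assumptions.
systemctl stop prometheus
rm -rf /srv/prometheus/metrics/*                           # drop the overlapping/partial blocks
rsync -a tools-prometheus-01:/srv/prometheus/metrics/ /srv/prometheus/metrics/
chown -R prometheus:prometheus /srv/prometheus/metrics     # the step missed after the scp
systemctl start prometheus
```

Regarding the 11:25 remark about tolerating overlaps: newer Prometheus 2.x releases expose a `--storage.tsdb.allow-overlapping-blocks` flag that lets the server start despite the "block time ranges overlap" error, but whether the version packaged for buster supports it would need checking.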
[18:42:33] hello! i have a question about running mediawiki-vagrant in cloud vps as per the documentation. i can't get past an nfs error (even after running vagrant destroy -f a couple of times)
[18:53:02] they'll need the project name at least to look into it
[19:06:10] mepps: sometimes when the NFS mounts into the Vagrant managed LXC container fail, the only fix is to restart the hosting Cloud VPS instance. It is an unsatisfying answer, but it works to fix things more often than not in my personal experience
[19:07:27] k thanks bd808! i'll try that
[19:10:24] mediawiki-vagrant made a design decision ~7 years ago to keep the MediaWiki files on the hosting computer and mount them into the managed container. This was chosen to make editing the MediaWiki files easier than requiring some ssh connection into the managed vm. Unfortunately this has proven to make the runtime of the managed container slower and more likely to break.
[19:11:03] If I had the energy to do it all over again, I would put all the files inside the managed vm and mount them back from the vm to the host computer for editing.
[19:11:37] that would make the runtime more stable and performant and the editing experience a bit more complex
[19:12:19] * bd808 started working with mediawiki-vagrant ~6.5 years ago and refuses to take the blame for the initial design choices ;)
[19:15:12] bd808 :)
[22:30:37] bd808: a question about media urls: why do some media urls have the hash but not others? or is it just direct hits to the pages that have no hashes?
[22:31:07] bd808: i mean a direct hit, something like going directly to the page on commons
[22:32:17] All of the thumbnails for sure should have the md5 hash bits in the URL. I actually don't remember if originals also have that or if they bypass that name mangling.
[22:33:37] random commons page -- https://commons.wikimedia.org/wiki/File:Ara-Zoo-Muenster-2013-02.jpg -- appears to use the md5 hash spreading in the original media URL too.
[22:35:47] nuria: godog and/or gilles might be able to tell you more about the details of how upload.wikimedia.org URLs work.
[22:36:19] godog because of swift knowledge and gilles because of swift + thumbor
[23:21:32] nuria: bd808: Because I recently investigated this while figuring out how to dump all valid public urls from swift: the hashes should be required for any "large" wiki. Essentially the hash is used as part of the swift container name, and without the hash you will never find the image. The small wikis get a single container per wiki, and in theory those files could be found without the hash. There *are* files in swift that do not have appropriate hash prefixing, although it's an incredibly small amount
[23:22:05] i didn't look too closely though and just threw out any url that didn't have the hash prefix when dumping
[23:23:44] thanks for the knowledge drop ebernhardson :) This is why talking about things in public irc channels is awesome.
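
To make the "md5 hash spreading" concrete: MediaWiki hashes the file name and uses the first one and two hex characters of the md5 as directory levels. A quick way to reproduce the path for the Commons example above; the /wikipedia/commons/ prefix is the usual layout for originals, and the computed URL is illustrative rather than guaranteed to resolve:

```
# Compute the hash-spread path for an original file on Commons.
name="Ara-Zoo-Muenster-2013-02.jpg"          # spaces would become underscores before hashing
hash=$(printf '%s' "$name" | md5sum | cut -c1-32)
echo "https://upload.wikimedia.org/wikipedia/commons/${hash:0:1}/${hash:0:2}/${name}"
```

That two-character shard lines up with the 23:21 explanation: on large wikis the hash is effectively part of the swift container name, so without it the object cannot be located.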