Tutorial: SMART Monitoring with Scrutiny across machines #417
Nice guide! I'll get it added to the
Just to make sure I have this straight: this would allow me to run the collector on different systems and send the data to the central Scrutiny Docker container, so that I can monitor the health of other drives on my network?
Yes. What was offered in the docs beforehand was a scenario where, if you decided to use the hub-and-spoke setup instead of the single omnibus image, you would have to run both in Docker. I run my hub in Docker, alongside the rest of my monitoring tools, but the spoke is a NAS without Docker. That's why I wrote this guide.
Gonna leave this here for anyone who ends up having the same issue I did: smartmontools has to be installed on the spoke first, otherwise the collector log would show a failure stating the smartctl binary was missing.
@Tin-Joy59 Thank you for the guide! I'm trying to follow this but I'm not very familiar with Docker Compose. I get the following error when trying to deploy your example config in Portainer stacks. Any suggestions? Should I be setting the
Moreover, if you ever change the root config directory of your services, you will need to change only the value of the variable, not 50 different service definitions. If you are using Portainer, there is an "Environmental variables" section under the stack editor. Create a
Thank you for this guide. I was having a hard time getting data on the dashboard and couldn't figure out why. It turned out I needed this guide because of Proxmox.
@tenekev, sorry to bother you almost 2 years after you made this guide, but I am having a hard time setting up notifications.
@tismofied The square brackets in the examples are there to specify that the username is optional; they should not be included in your configuration. If you're having issues generating a notification URL, you should look at the official Shoutrrr documentation for your service - https://containrrr.dev/shoutrrr/v0.8/services/bark/ - Scrutiny just uses Shoutrrr under the hood.
Fixed my issue, thank you!
S.M.A.R.T. Monitoring with Scrutiny across machines
🤔 The problem:
Scrutiny offers a nice Docker package called "Omnibus" that can monitor HDDs attached to a Docker host with relative ease. Scrutiny can also be installed in a Hub-Spoke layout, where the Web interface, Database, and Collector come as 3 separate packages. The official documentation assumes that the spokes in the Hub-Spoke layout run Docker, which is not always the case. The third approach is to install Scrutiny manually, entirely outside of Docker.
💡 The solution:
This tutorial provides a hybrid configuration where the Hub lives in a Docker instance while the spokes have only the Scrutiny Collector installed manually. The Collector periodically sends data to the Hub. It's not mind-bogglingly hard to understand, but someone might struggle with the setup. This is for them.
🖥️ My setup:
I have a Proxmox cluster where one VM runs Docker and all monitoring services - Grafana, Prometheus, various exporters, InfluxDB and so forth. Another VM runs the NAS - OpenMediaVault v6 - where all the hard drives reside. The Scrutiny Collector is triggered every 30 minutes to collect data on the drives. The data is sent to the Docker VM, running InfluxDB.
Setting up the Hub
The Hub consists of Scrutiny Web - a web interface for viewing the SMART data - and InfluxDB, where the smartmon data is stored.
🔗 This is the official Hub-Spoke layout in docker-compose. We are going to reuse parts of it. The ENV variables provide the necessary configuration for the initial setup, both for InfluxDB and Scrutiny. If you are working with an existing InfluxDB instance, you can forgo all the `INIT` variables, as the resources they create already exist.

The official Scrutiny documentation has a sample scrutiny.yaml file that normally contains the connection and notification details, but I always find it easier to configure as much as possible in the docker-compose.
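A trimmed-down sketch of such a hub-only compose file is shown below. The service names, image tags, credentials, and token are placeholder assumptions for illustration - verify them against the official Hub-Spoke layout before deploying.

```yaml
version: "3.5"
services:
  influxdb:
    image: influxdb:2.2
    ports:
      - "8086:8086"
    volumes:
      - ./influxdb2:/var/lib/influxdb2
    environment:
      # INIT variables are only needed on a freshly created InfluxDB instance.
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=changeme12345
      - DOCKER_INFLUXDB_INIT_ORG=scrutiny
      - DOCKER_INFLUXDB_INIT_BUCKET=scrutiny
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=scrutiny-admin-token

  scrutiny-web:
    image: ghcr.io/analogj/scrutiny:master-web
    ports:
      - "8080:8080"
    volumes:
      - ./scrutiny/config:/opt/scrutiny/config
    environment:
      # Connection details pointing Scrutiny Web at the InfluxDB service above.
      - SCRUTINY_WEB_INFLUXDB_HOST=influxdb
      - SCRUTINY_WEB_INFLUXDB_PORT=8086
      - SCRUTINY_WEB_INFLUXDB_ORG=scrutiny
      - SCRUTINY_WEB_INFLUXDB_BUCKET=scrutiny
      - SCRUTINY_WEB_INFLUXDB_TOKEN=scrutiny-admin-token
    depends_on:
      - influxdb
```

Note that the token and password must match between the two services; on an existing InfluxDB instance, drop the `INIT` lines and reuse your existing token.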
A freshly initialized Scrutiny instance can be accessed on port 8080, e.g. `192.168.0.100:8080`. The interface will be empty because no metrics have been collected yet.

Setting up a Spoke without Docker
A spoke consists of the Scrutiny Collector binary, which is run on a set interval via crontab and sends the data to the Hub. The official documentation describes the manual setup of the Collector - dependencies and step-by-step commands. I have a shortened version that does the same thing in one line of code.
When downloading GitHub release assets, make sure that you have the correct version. The provided example uses release v0.5.0. [The release list can be found here.](https://github.com/AnalogJ/scrutiny/releases)
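A download along those lines can be sketched as follows. The release version (v0.5.0), binary name (amd64 Linux), and the `/opt/scrutiny/bin` install path are assumptions taken from the example above and the official manual-install docs - adjust them for your spoke. The snippet assembles the release URL and prints the install one-liner so it can be reviewed before being run as root:

```shell
# Assumed release and architecture -- adjust both for your spoke.
VERSION="v0.5.0"
BINARY="scrutiny-collector-metrics-linux-amd64"
URL="https://github.com/AnalogJ/scrutiny/releases/download/${VERSION}/${BINARY}"

# Print the install one-liner; run it as root once reviewed.
echo "mkdir -p /opt/scrutiny/bin && curl -fL -o /opt/scrutiny/bin/${BINARY} ${URL} && chmod +x /opt/scrutiny/bin/${BINARY}"
```

For a different architecture (e.g. arm64), only the `BINARY` name needs to change to the matching release asset.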
Once the Collector is installed, you can run it with the following command. Make sure to pass the correct address and port of your Hub as `--api-endpoint`:

```shell
/opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://192.168.0.100:8080"
```
This will run the Collector once and populate the Web interface of your Scrutiny instance. To collect metrics for a time series, you need to run the command repeatedly. Here is an example for crontab, running the Collector every 15 minutes.
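A crontab entry along these lines (added via `crontab -e` as root) is one way to do it; the binary path and endpoint follow the command above, and the output redirection is an optional assumption to keep cron from mailing the log on every run:

```crontab
# m   h   dom mon dow   command
*/15  *   *   *   *     /opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://192.168.0.100:8080" >/dev/null 2>&1
```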
The Collector has its own independent config file that lives in `/opt/scrutiny/config/collector.yaml`, but I did not find a need to modify it. A default collector.yaml can be found in the official documentation.

Setting up a Spoke with Docker
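For completeness, a minimal collector.yaml might look like the sketch below. The `api.endpoint` key is taken from the official sample config (treat the structure as an assumption and check the current example file); with it set, the `--api-endpoint` flag should no longer be needed on the command line:

```yaml
# /opt/scrutiny/config/collector.yaml -- minimal sketch; see the official
# example collector.yaml for all available options.
version: 1

api:
  # Address of the Hub; same value as the --api-endpoint flag.
  endpoint: "http://192.168.0.100:8080"
```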
Setting up a remote Spoke in Docker requires you to split the official Hub-Spoke layout docker-compose.yml. In the following docker-compose you need to provide the `${API_ENDPOINT}`, in my case `http://192.168.0.100:8080`. All drives that you wish to monitor also need to be presented to the container under `devices`. The image handles the periodic scanning of the drives.