
[FEAT] Separate disks on dashboard by computer in hub/spoke deployment #50

Closed
mglubber opened this issue Sep 28, 2020 · 13 comments
Labels: enhancement (New feature or request)


Feature Request
Separate disks by computer, or identify the host computer for each disk, in the Scrutiny dashboard when using multiple collectors with the same database/UI.

Describe the solution you'd like
It would be nice if the Scrutiny dashboard could separate the 'cards' for each drive into groups based on the computer's hostname or a value specified in the collector's configuration file. This would make it easier to tell where a disk is and if a particular computer might be having problems.

Terrible mockup screenshot: [mockup image]
(The 0 degree temperature isn't an error - the drive itself doesn't report temperature.)

bbrendon commented Sep 28, 2020

powered on for 105.1 years? Is that part of your terrible mockup or a new issue? :)

I was about to create this issue. Good thing I found yours. I have about 50 drives in my screen. The only way to figure out where they go is to search all the computers for the problem serial number.

mglubber (Author) commented Sep 28, 2020

The 105 years is just Intel/early SSD weirdness with SMART reporting - apparently the drive reports power on hours in 1/100ths of a millisecond ¯\_(ツ)_/¯. It's actually fixed with smartctl -x, so I guess it's an old issue (#43)

teambvd (Contributor) commented Sep 29, 2020

+1 for this request - I've got 4 servers running their own instances currently because of this limitation. Would be great to have the ability to either A.) assign a host value, or even better, B.) have the collector's hostname and IP sent along with the metrics package, then shown above that host's drives.

@AnalogJ AnalogJ added the enhancement New feature or request label Sep 29, 2020
warwickchapman commented Sep 29, 2020

+1 Also using this on multiple servers as is implied by the existence of hub/spoke deployment.

AnalogJ (Owner) commented Oct 2, 2020

yeah, this is a great idea. I'm trying to figure out a simple out-of-the-box way to consistently provide a unique host identifier to the container. Unfortunately it seems like we'll have to change the runtime command, so I'll probably end up telling users to mount /etc/machine-id to uniquely differentiate between hosts, and then provide some sort of mapping between machine-id and a user-defined label.
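As a rough sketch of that mount (the service and image names here are illustrative assumptions, not the project's actual compose file):

```yaml
# Hypothetical docker-compose fragment: expose the host's /etc/machine-id
# read-only so the collector can use it as a stable host identifier.
services:
  collector:
    image: analogj/scrutiny   # assumed image name; check the project's docs
    volumes:
      - /etc/machine-id:/etc/machine-id:ro
```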

related #71

teambvd (Contributor) commented Oct 3, 2020 via email

AnalogJ (Owner) commented Oct 4, 2020

It needs to be retrievable from within the container, with minimal permissions, so I'm not sure if that would work. Also, if the network is deleted/re-created (e.g. because of a docker-compose down/up), I think it would generate a new id. I could be wrong though?

Basically the idea is to pair a unique persistent host identifier with all the devices found by a collector, and then send that up to the API. Then we can associate a user-configurable label for the host id so that we have nice host display names in the UI.

The reason I'm going this route, rather than just telling users to pass a unique label to the collector directly, is that I imagine that users will want to rename their host label at some point, and that would create a duplicate set of disks, without any history, requiring some sort of migration functionality in the app. hm. Maybe this is just the definition of YAGNI though.

teambvd (Contributor) commented Oct 4, 2020 via email

AnalogJ (Owner) commented Oct 4, 2020

Hm. I don't think that'll work for hub/spoke deployments - the collectors won't know how to request an "existing" identifier.
I think I'll just go the -v /etc/machine-id route. It's consistent on Linux (and I think there's an equivalent on other Unixes too).

fake-name commented Oct 6, 2020

For non-docker deploys, just using the hostname would probably be completely adequate.

Actually, is there any way to get the host machine's hostname in docker? /etc/machine-id is pretty terrible from a usability standpoint, as it's just a 32-character hex value.

After all, the hostname is basically already a human-readable unique machine id (unless people are using the same hostname on multiple machines, but that's such a nutty procedure that I think anyone who does that is basically on their own anyways).

mglubber (Author) commented Oct 6, 2020

Would it be possible to use an environment variable (e.g. SCRUTINY_HOST_ID), and if that isn't set, just use /etc/hostname? Hostname is easy to configure for docker containers, and the environment variable would let the 'machine name' be configurable for non-docker deployments as well. And if someone just wants everything under one id, they can do that too.

> Basically the idea is to pair a unique persistent host identifier with all the devices found by a collector, and then send that up to the API. Then we can associate a user-configurable label for the host id so that we have nice host display names in the UI.
>
> The reason I'm going this route, rather than just telling users to pass a unique label to the collector directly, is that I imagine that users will want to rename their host label at some point, and that would create a duplicate set of disks, without any history, requiring some sort of migration functionality in the app. hm. Maybe this is just the definition of YAGNI though.

Wouldn't it be more useful to not make the host identifier persistent? Each device has a (presumably) unique serial number (and probably other ids) already - and the SMART history shouldn't disappear because I moved the drive from pickles_mgcee to steve. If the host identifier changes, it would make sense to me that it only changes filtering/layout of the drive information, it doesn't really change anything about the drive itself - just like if the device moved from /dev/sda to /dev/sdb after a reboot.
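The serial-keyed model described above can be sketched like this (struct and function names are illustrative, not Scrutiny's actual schema):

```go
package main

import "fmt"

// Device history is keyed by serial number; the host label is mutable
// metadata, so moving a drive between machines keeps its history.
type Device struct {
	Serial  string
	Host    string   // current host label; free to change
	History []string // simplified stand-in for SMART snapshots
}

// record appends a snapshot for a serial, creating the device if needed,
// and updates its host label to wherever the collector last saw it.
func record(devices map[string]*Device, serial, host, snapshot string) {
	d, ok := devices[serial]
	if !ok {
		d = &Device{Serial: serial}
		devices[serial] = d
	}
	d.Host = host
	d.History = append(d.History, snapshot)
}

func main() {
	devices := map[string]*Device{}
	record(devices, "WD-ABC123", "pickles_mgcee", "snapshot-1")
	// Drive moves to another machine: only the label changes,
	// while the history keeps accumulating under the same serial.
	record(devices, "WD-ABC123", "steve", "snapshot-2")
	d := devices["WD-ABC123"]
	fmt.Println(d.Host, len(d.History)) // prints "steve 2"
}
```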

AnalogJ (Owner) commented Oct 8, 2020

@mglubber I ended up using something similar to your proposed host labeling method in the collector. I don't auto-label using /etc/hostname, so by default device grouping will be disabled, but it is configurable via CLI & an environment variable. History is kept even when the device is moved across hosts, and/or the device is rebound to a different device file (sda -> sdd, etc).

It's been partially implemented in #88 (the label is not visible in the UI yet, but it is stored in the DB & automatically updated).
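For reference, a hedged usage sketch - the flag and environment-variable names below are assumptions based on later Scrutiny documentation and may differ in your version:

```shell
# Assumed interface: --host-id flag or COLLECTOR_HOST_ID env var.
# Verify against your Scrutiny version's docs before relying on these names.
scrutiny-collector-metrics run --host-id "pickles_mgcee"

# Docker deployment, setting the same label via the environment
# (image name is also an assumption):
docker run -e COLLECTOR_HOST_ID="pickles_mgcee" analogj/scrutiny
```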

Thanks for your help & feedback!

AnalogJ (Owner) commented May 25, 2022

this is fixed in v0.4.7 🎉

[screenshot]

Sorry for the long wait getting this rolled out - I just prioritized the backend/InfluxDB changes before working on the frontend.

Now that the backend is more stable, you should see more UI fixes over the next couple of releases.

see #151 (comment) for implementation instructions.

@AnalogJ AnalogJ closed this as completed May 25, 2022