[FEAT] Separate disks on dashboard by computer in hub/spoke deployment #50
Comments
Powered on for 105.1 years? Is that part of your terrible mockup or a new issue? :) I was about to create this issue - good thing I found yours. I have about 50 drives on my screen, and the only way to figure out where they go is to search all the computers for the problem serial number.
The 105 years is just Intel/early SSD weirdness with SMART reporting - apparently the drive reports power-on hours in 1/100ths of a millisecond ¯\_(ツ)_/¯. It's actually fixed with smartctl -x, so I guess it's an old issue (#43)
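For concreteness, here's a small Go sketch (not Scrutiny or smartmontools code, and the raw counter value is made up) showing how the same raw SMART counter reads as a wildly different drive age depending on which unit you assume the firmware used:

```go
// poh.go - illustrates how one raw power-on counter yields very different
// ages under different unit assumptions. The raw value is hypothetical.
package main

import "fmt"

const hoursPerYear = 24 * 365.25

// rawToHours converts a raw power-on counter to hours under an assumed unit.
func rawToHours(raw float64, unit string) float64 {
	switch unit {
	case "hours":
		return raw
	case "minutes":
		return raw / 60
	case "milliseconds":
		return raw / 3.6e6
	default:
		return raw
	}
}

func main() {
	raw := 921300.0 // hypothetical raw SMART attribute value
	for _, unit := range []string{"hours", "minutes", "milliseconds"} {
		h := rawToHours(raw, unit)
		fmt.Printf("interpreted as %-12s -> %12.1f hours (%.1f years)\n",
			unit, h, h/hoursPerYear)
	}
}
```

Interpreted as hours, 921300 comes out to roughly 105.1 years - the nonsense number above; the same counter under a smaller unit gives something plausible.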
+1 for this request - I've got 4 servers running their own instances because of this limitation. Would be great to have the ability to either A) assign a host value, or even better, B) have the collector pull the hostname and IP of the machine it's running on along with the metrics package, then display that above that machine's drives.
+1. Also using this on multiple servers, as is implied by the existence of the hub/spoke deployment.
yeah, this is a great idea. I'm trying to figure out a simple out-of-the-box way to consistently provide a unique host identifier to the container. Unfortunately it seems like we'll have to change the runtime command, so I'll probably end up telling users to mount /etc/machine-id, to uniquely differentiate between hosts, and then provide some sort of mapping between machine-id and a user-defined label. related #71
Any reason not to use the docker network ID (from “network inspect”)?
It needs to be retrievable from within the container, with minimal permissions, so I'm not sure if that would work. Also, if the network is deleted/re-created (e.g. because of a docker-compose down/up) I think it would generate a new ID. I could be wrong though?

Basically the idea is to pair a unique, persistent host identifier with all the devices found by a collector, and then send that up to the API. Then we can associate a user-configurable label with the host ID so that we have nice host display names in the UI.

The reason I'm going this route, rather than just telling users to pass a unique label to the collector directly, is that I imagine users will want to rename their host label at some point, and that would create a duplicate set of disks without any history, requiring some sort of migration functionality in the app. Hm, maybe this is just the definition of YAGNI though.
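To make the proposal concrete, here's a minimal Go sketch of that pairing; the payload shape and field names are hypothetical, not Scrutiny's actual API:

```go
// A sketch of a collector attaching a persistent, machine-derived host id
// to the devices it reports. The user-facing display label would live
// server-side, mapped to this id, so renaming it never orphans history.
package main

import (
	"fmt"
	"os"
	"strings"
)

// MetricsPayload is a hypothetical shape for the collector -> API upload.
type MetricsPayload struct {
	HostID  string   // persistent, machine-derived identifier
	Devices []string // device serial numbers found by this collector
}

// readMachineID reads the systemd machine id; a container would need it
// mounted, e.g. -v /etc/machine-id:/etc/machine-id:ro
func readMachineID() (string, error) {
	b, err := os.ReadFile("/etc/machine-id")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	hostID, err := readMachineID()
	if err != nil {
		hostID = "unknown" // degrade gracefully rather than failing the run
	}
	payload := MetricsPayload{HostID: hostID, Devices: []string{"WD-1234", "ST-5678"}}
	fmt.Printf("%+v\n", payload)
}
```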
How about having the container check a new table (what will be the identifier) at startup: if empty, generate an ID via /dev/random, else present it as the host ID?
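A minimal Go sketch of that generate-once approach (the file path is hypothetical, and crypto/rand stands in for reading /dev/random directly):

```go
// Look for a stored identifier at startup; create one from the kernel
// CSPRNG if missing, and reuse it on every subsequent run.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

const idFile = "/opt/scrutiny/host-id" // hypothetical persisted location

func loadOrCreateHostID() (string, error) {
	if b, err := os.ReadFile(idFile); err == nil {
		return strings.TrimSpace(string(b)), nil
	}
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	id := hex.EncodeToString(buf)
	// The path must sit on a mounted volume, otherwise the id is lost
	// whenever the container is re-created - the same persistence problem
	// raised above for network ids.
	if err := os.WriteFile(idFile, []byte(id+"\n"), 0o644); err != nil {
		return "", err
	}
	return id, nil
}

func main() {
	id, err := loadOrCreateHostID()
	if err != nil {
		fmt.Fprintln(os.Stderr, "host id:", err)
		os.Exit(1)
	}
	fmt.Println("host id:", id)
}
```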
Hm, I don't think that'll work for hub/spoke deployments; the collectors won't know how to request an "existing" identifier.
For non-docker deploys, just using the hostname would probably be completely adequate. Actually, is there any way to get the host machine's hostname from inside docker? After all, the hostname is basically already a human-readable unique machine ID (unless people are using the same hostname on multiple machines, but that's such a nutty practice that anyone who does it is basically on their own anyway).
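For what it's worth, a Go sketch of how a collector might resolve that: inside a container, os.Hostname() normally returns the container's own hostname, so the host's /etc/hostname would have to be bind-mounted for this to identify the machine (the mount path below is an assumption):

```go
// Resolve a host label: prefer a bind-mounted host /etc/hostname, fall
// back to the local hostname (meaningful outside docker, or with
// --hostname / --net=host).
package main

import (
	"fmt"
	"os"
	"strings"
)

func hostLabel() string {
	// e.g. run with: -v /etc/hostname:/host/etc/hostname:ro
	if b, err := os.ReadFile("/host/etc/hostname"); err == nil {
		if h := strings.TrimSpace(string(b)); h != "" {
			return h
		}
	}
	if h, err := os.Hostname(); err == nil {
		return h
	}
	return "unknown"
}

func main() {
	fmt.Println("host label:", hostLabel())
}
```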
Would it be possible to use an environment variable (ie
Wouldn't it be more useful to not make the host identifier persistent? Each device has a (presumably) unique serial number (and probably other IDs) already - and the SMART history shouldn't disappear because I moved the drive from
@mglubber I ended up using something similar to your proposed host labeling method in the collector. I don't auto-label using /etc/hostname, so by default device grouping will be disabled, but it is configurable via CLI & an environment variable. History is kept even when the device is moved across hosts, and/or the device is rebound to a different device file (sda -> sdd, etc). It's been partially implemented in #88 (the label is not visible in the UI yet, but it is stored in the DB & automatically updated). Thanks for your help & feedback!
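A rough sketch of that resolution order in Go - the flag and variable names here are illustrative, not necessarily the exact ones the collector ships:

```go
// Resolve the host label: an explicit CLI flag wins, then an environment
// variable; with neither set, the label stays empty and device grouping
// remains disabled by default.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	hostID := flag.String("host-id", "", "optional label grouping this collector's disks")
	flag.Parse()

	label := *hostID
	if label == "" {
		label = os.Getenv("COLLECTOR_HOST_ID") // hypothetical env var name
	}
	if label == "" {
		fmt.Println("no host label set; device grouping disabled")
		return
	}
	fmt.Println("grouping devices under host label:", label)
}
```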
this is fixed in v0.4.7 🎉 Sorry for the long wait getting this rolled out - I just prioritized the backend/InfluxDB changes before working on the frontend. Now that the backend is more stable, you should see more UI fixes over the next couple of releases. See #151 (comment) for implementation instructions.
Feature Request
Separate disks by computer, or identify the host computer for each disk, in the Scrutiny dashboard when using multiple collectors with the same database/UI.
Describe the solution you'd like
It would be nice if the Scrutiny dashboard could separate the 'cards' for each drive into groups based on the computer's hostname or a value specified in the collector's configuration file. This would make it easier to tell where a disk is and if a particular computer might be having problems.
Terrible mockup screenshot:
(The 0 degree temperature isn't an error - the drive itself doesn't report temperature.)