[BUG] Displayed failed on passed disks #270
hm, I think I may have a logic bug causing Scrutiny attribute warnings to mark the disk as failed.
You also mentioned it's not possible to un-set the failed status for disks that have already failed; is that the case even for these? Is there a simple way to do that manually? Thanks
I'm seeing similar failures on the dashboard (though the disks show as passing in the details view) after migrating from the linuxserver Docker container to the official omnibus image.
You can do it manually by connecting to the web/api container and running the following commands:
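The commands themselves are not preserved in this capture. As a minimal sketch of the general approach, assuming the omnibus image stores device state in SQLite at /opt/scrutiny/config/scrutiny.db, with a devices table whose device_status column uses 0 for "passed", and a container named scrutiny (all of these are assumptions, not verified against your version):

```shell
# Minimal sketch, not the original commands: clear the stored failure flag
# for every device. Assumes the SQLite path, table/column names, and the
# meaning of device_status = 0 described above.
docker exec -it scrutiny \
  sqlite3 /opt/scrutiny/config/scrutiny.db \
  "UPDATE devices SET device_status = 0;"
```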
Those steps will reset the devices, but you'll want to re-run the collector afterwards (or wait for a scheduled run) to ensure that a genuinely failing disk was not accidentally set to passing. Closing this issue for now; please feel free to reopen or comment if you have any questions.
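For reference, a manual collector run inside the omnibus container looks roughly like the following; the container name and the collector binary's name and run subcommand are assumptions based on the image layout, not something stated in this thread:

```shell
# Sketch: trigger an immediate collector run inside a container named
# "scrutiny" (name and binary path are assumptions; verify for your setup).
docker exec scrutiny scrutiny-collector-metrics run
```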
Sorry, I'd like to re-open the issue: my original report was not about previously failed disks not resetting, but about disks that never seem to have failed at all, passing every test but still showing as failed.
ah, apologies @azukaar, you're correct. Can you follow these instructions to generate some debug log files for me?
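The instructions themselves are not captured here. Scrutiny's troubleshooting docs describe enabling debug logging via environment variables; a rough sketch for the omnibus image follows, with the variable names, image tag, and device/volume flags all to be verified against your setup:

```shell
# Sketch: run the omnibus image with debug logging written into the mounted
# config directory. Replace /path/to/config and the --device flag with your
# own values; the DEBUG/COLLECTOR_LOG_FILE/SCRUTINY_LOG_FILE names follow the
# project's troubleshooting docs but should be double-checked.
docker run -it --rm -p 8080:8080 \
  -v /run/udev:/run/udev:ro \
  -v /path/to/config:/opt/scrutiny/config \
  --cap-add SYS_RAWIO \
  --device=/dev/sda \
  -e DEBUG=true \
  -e COLLECTOR_LOG_FILE=/opt/scrutiny/config/collector.log \
  -e SCRUTINY_LOG_FILE=/opt/scrutiny/config/web.log \
  --name scrutiny \
  ghcr.io/analogj/scrutiny:master-omnibus
```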
The log files will be available on your host in the config directory. Please attach them to this issue.
Thanks @AnalogJ, I pulled again and it works, except that / redirects to /web instead of /web/dashboard.
Hey everyone, I think there's another, unrelated logic bug related to the sort order of data coming from InfluxDB.
@azukaar the / redirect should now land on /web/dashboard. Are you not seeing that happen?
it wasn't doing it for a while, maybe a cache issue; the redirect is working now
I'll be posting a summary in #255 soon; please take a look there if I forget to update this issue.
Describe the bug
A whole bunch of disks are marked as failed, but in the details they actually pass everything.
Might be worth noting that it seems the tests were run while my RAID 6 array of 10 disks had 2 disks missing.
Log gathering did not work; the first command seems to run fine, but when copying the files out, the log file doesn't exist?
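For what it's worth, if the config directory is bind-mounted, the logs should appear directly on the host; if not, something like the following can copy them out (the container name and log paths are assumptions, not confirmed in this thread):

```shell
# Sketch: copy the debug logs out of a running container named "scrutiny".
docker cp scrutiny:/opt/scrutiny/config/web.log ./web.log
docker cp scrutiny:/opt/scrutiny/config/collector.log ./collector.log
```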