SPLError: 26788:0:(range_tree.c:172:range_tree_add()) SPL PANIC #2720 (from spl/issues/389)
Comments
I can confirm this. My OS happened to crash a few times because of a broken RAM module. Any chance this gets fixed? Can I contribute?
Could you try importing the pool using the latest source from GitHub? There have been several fixes in this area. You can find directions for how to build the latest source at zfsonlinux.org.
Built spl, zfs, and zfs-utils on my Arch box from a checkout of the latest GitHub source. Details: https://gist.github.com/yeoldegrove/b1d5a83587dbce437c52
@yeoldegrove the failure you're seeing indicates that somehow the same address is being freed twice. Can you please rebuild your zfs source with the following patch applied and set the

```diff
diff --git a/module/zfs/range_tree.c b/module/zfs/range_tree.c
index 4643d26..b5c9222 100644
--- a/module/zfs/range_tree.c
+++ b/module/zfs/range_tree.c
@@ -175,7 +175,7 @@ range_tree_add(void *arg, uint64_t start, uint64_t size)
 	rsearch.rs_end = end;
 
 	rs = avl_find(&rt->rt_root, &rsearch, &where);
-	if (rs != NULL && rs->rs_start <= start && rs->rs_end >= end) {
+	if (rs != NULL) {
 		zfs_panic_recover("zfs: allocating allocated segment"
 		    "(offset=%llu size=%llu)\n",
 		    (longlong_t)start, (longlong_t)size);
```
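For context on what the patched branch does: zfs_panic_recover() panics by default but only warns and continues when the zfs_recover module option is set. Below is a minimal sketch of that helper, modeled on the upstream implementation in module/zfs/spa_misc.c; treat it as illustrative rather than a verbatim copy.

```c
/*
 * Sketch of zfs_panic_recover(), modeled on the upstream helper in
 * module/zfs/spa_misc.c: panic by default, warn-and-continue when
 * the zfs_recover module option is enabled.
 */
#include <sys/zfs_context.h>

extern int zfs_recover;		/* module option, 0 by default */

void
zfs_panic_recover(const char *fmt, ...)
{
	va_list adx;

	va_start(adx, fmt);
	/* CE_WARN logs and returns; CE_PANIC halts the system. */
	vcmn_err(zfs_recover ? CE_WARN : CE_PANIC, fmt, adx);
	va_end(adx);
}
```

So with the debug patch above, any overlap found by avl_find() is fatal on a default-configured module, while with zfs_recover=1 the "allocating allocated segment" message is merely logged and execution continues.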
Works like a charm now.

```
root@host ~ # dmesg | grep -Ei 'SPL|ZFS'
```

The pool now imports and exports in seconds, even across reboots. I'll go back to the usual upstream code and will report later whether it still works.
After running the regular packages from the Arch demz-repo-core repo for two days now, everything seems to run fine.
When a bad DVA is encountered in metaslab_free_dva() the system should treat it as fatal. This indicates that somehow a damaged DVA was written to disk and that should be impossible. However, we have seen a handful of reports over the years of pools somehow being damaged in this way. Since this damage can render otherwise intact pools unimportable, and the consequence of skipping the bad DVA is only leaked free space, it makes sense to provide a mechanism to ignore the bad DVA. Setting the zfs_recover=1 module option will cause the DVA to be ignored, which may allow the pool to be imported. Since zfs_recover=0 by default, any pool attempting to free a bad DVA will treat it as a fatal error, preserving the current behavior.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes openzfs#3099
Issue openzfs#3090
Issue openzfs#2720
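In rough outline, the merged change validates the DVA before it is used to index a metaslab. The following is a simplified sketch of that check (names abbreviated and surrounding logic omitted; an illustration of the mechanism, not the exact patch):

```c
/*
 * Simplified sketch of the recovery path described in the commit
 * message: validate the DVA's vdev and offset before freeing.  On a
 * bad DVA, panic by default; with zfs_recover=1, warn and skip the
 * free, leaking the space but keeping the pool importable.
 */
static void
metaslab_free_dva(spa_t *spa, const dva_t *dva, uint64_t txg)
{
	uint64_t vdev = DVA_GET_VDEV(dva);
	uint64_t offset = DVA_GET_OFFSET(dva);
	vdev_t *vd;

	if ((vd = vdev_lookup_top(spa, vdev)) == NULL ||
	    (offset >> vd->vdev_ms_shift) >= vd->vdev_ms_count) {
		zfs_panic_recover("metaslab_free_dva(): bad DVA %llu:%llu",
		    (u_longlong_t)vdev, (u_longlong_t)offset);
		return;	/* reached only when zfs_recover=1 */
	}

	/* ... normal free path: look up the metaslab and free the range ... */
}
```

For reference, the option can typically be enabled at module load time (modprobe zfs zfs_recover=1) or toggled at runtime through /sys/module/zfs/parameters/zfs_recover.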
A patch for ignoring bad DVAs on blocks being freed when zfs_recover=1 is set has been merged. This doesn't get to the root cause of how this could happen, but it does provide a more convenient way to recover a pool which has been damaged in this way.
"rsync -PSAXrltgoD --del" from remote host caused 'SPLError'
full text here: https://gist.github.com/anonymous/39e252399acb6912a16e
P.S.
After removal of storage/samba filesystem, when importing a pool there is this problem, it isn't possible to export or remove a pool: all zfs/zpool related commands just hangs.