Fix a race condition between recovery and backup #11955
Conversation
@jowlyzhang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Nice finding! I think ideally we wouldn't rely on `disable_delete_obsolete_files_` for preserving SST files whose MANIFEST append/sync returned an error. In case of an error, we don't know the state of the MANIFEST - it might refer to the newly created files, in which case they're not really obsolete, or it might not. Is there a way that file deletion can check with the `ErrorHandler` whether seemingly obsolete files are needed due to manifest recovery?
db/db_impl/db_impl.cc (Outdated)
    std::optional<int> remain_counter;
    if (s.ok()) {
      assert(versions_->io_status().ok());
      int disable_file_deletion_count =
Is there a need to track the number of times file deletion is disabled by the error handler? I wonder if it would be simpler to check and do it once in `SetBGError` if not done already, and re-enable it in `ClearBGError`?
That's a good point. The main motivation is for the recovery thread and other threads, like the backup thread, to express their need to disable file deletion equally, so that the recovery thread cannot re-enable file deletion against the will of the backup engine thread.
Quoting #6949, the PR that added this file deletion disabling in the first place:
"multiple threads can call LogAndApply() at the same time, though only one of them will be going through the process MANIFEST write, possibly batching the version edits of other threads. When the leading MANIFEST writer finishes, all of the MANIFEST writing threads in this batch will have the same IOError. They will all call ErrorHandler::SetBGError() in which file deletion will be disabled"
So `SetBGError` can potentially be called multiple times, while `ClearBGError` should only be called once by the recovery thread. The original implementation works around this with `EnableFileDeletions(/*force=*/true)`. However, as this race condition shows, that can override other threads' attempts to keep file deletion disabled. So here we track this number more precisely to avoid that.
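For illustration, here is a minimal sketch (a simplified assumption, not the actual RocksDB implementation) of the counter semantics behind `disable_delete_obsolete_files_` and of why the force-enable path can override another thread's hold:

```cpp
#include <mutex>

// Simplified stand-in for DBImpl's file-deletion bookkeeping.
class FileDeletionGuardSketch {
 public:
  void DisableFileDeletions() {
    std::lock_guard<std::mutex> lock(mutex_);
    ++disable_delete_obsolete_files_;  // each caller adds one "hold"
  }

  // force == true resets the counter to zero regardless of how many holds
  // exist. This is what the recovery thread used to rely on, and it is what
  // lets it override a backup thread's hold.
  void EnableFileDeletions(bool force) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (force) {
      disable_delete_obsolete_files_ = 0;
    } else if (disable_delete_obsolete_files_ > 0) {
      --disable_delete_obsolete_files_;
    }
  }

  bool FileDeletionsEnabled() {
    std::lock_guard<std::mutex> lock(mutex_);
    return disable_delete_obsolete_files_ == 0;
  }

 private:
  std::mutex mutex_;
  int disable_delete_obsolete_files_ = 0;
};
```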
Sure, I understand the problem. I'm just suggesting that `EnableFileDeletions` be called from `ClearBGError`, so the details are not exposed here to `DBImpl`.
Sorry, I missed an important detail in your original comment, which is to "do it once in `SetBGError` if not done already". A boolean flag to cap the counter at 1 per recovery makes sense to me and is more efficient. Moving `EnableFileDeletions` to be within the error handler so that the details are contained there is an improvement too. `EnableFileDeletions` will acquire the mutex while `ClearBGError` already holds the mutex, so we still need an `EnableFileDeletionsWithLock` version of the API; plus, there is some useful db info logging related to enabling that is worth keeping and that ideally should be done outside of the mutex. For these reasons, I wonder if maybe it's still worth keeping this in `DBImpl`, WDYT?
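For concreteness, here is a rough sketch of the approach being discussed (the names follow the conversation, the bodies are illustrative assumptions, not the actual implementation): the error handler disables file deletion at most once per background error and re-enables it exactly once in `ClearBGError`, which runs with the DB mutex already held, hence the need for an `EnableFileDeletionsWithLock`-style entry point.

```cpp
// Stand-in for the DBImpl calls involved.
struct DbHooksSketch {
  void DisableFileDeletions() {}
  void EnableFileDeletionsWithLock() {}  // caller already holds the DB mutex
};

class ErrorHandlerSketch {
 public:
  explicit ErrorHandlerSketch(DbHooksSketch* db) : db_(db) {}

  void SetBGError() {
    // May be reached by several MANIFEST-writing threads for one error.
    if (!db_file_deletion_disabled_) {  // cap the disable to once per error
      db_->DisableFileDeletions();
      db_file_deletion_disabled_ = true;
    }
  }

  void ClearBGError() {
    // Called once by the recovery thread, with the DB mutex held.
    if (db_file_deletion_disabled_) {
      db_->EnableFileDeletionsWithLock();  // undo exactly the one disable
      db_file_deletion_disabled_ = false;
    }
  }

 private:
  DbHooksSketch* db_;
  bool db_file_deletion_disabled_ = false;
};
```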
We do log info in `SetBGError` and other places in error_handler.cc while holding the mutex. A retryable error and recovery from it is relatively rare, so it should be OK, I think. I agree with keeping `EnableFileDeletionsWithLock`.
Thank you for the thorough check. This makes sense. I have updated the PR with these improvement suggestions.
Thank you for the proposal. This is a more efficient and targeted way to handle the original issue. You are right that disabling file deletion could be overkill. I will look into this and some other things in the obsolete file deletion procedures. I also saw a few ...
jowlyzhang force-pushed from 2f3c261 to 750f61c
@jowlyzhang has updated the pull request. You must reimport the pull request before landing.
jowlyzhang force-pushed from 750f61c to cb74fa0
@jowlyzhang has updated the pull request. You must reimport the pull request before landing.
jowlyzhang force-pushed from cb74fa0 to 89569b0
@jowlyzhang has updated the pull request. You must reimport the pull request before landing.
LGTM. Thanks for addressing all the comments!
@jowlyzhang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
jowlyzhang force-pushed from 89569b0 to 5e45142
@jowlyzhang has updated the pull request. You must reimport the pull request before landing.
@jowlyzhang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thank you for the detailed review and improvement suggestions!
@jowlyzhang merged this pull request in 933ee29.
By the way, I didn't fully explain the thought process here. There is one more step needed to show the potential problem. That is, the user may call ...
Thank you for the example of how this can lead to a failure mode, not just inefficiency. This made me wonder, though: is there a strong reason to make the API ...
Sure. Ideally we change it in a way that breaks noticeably for ... Still, I wonder if ...
Summary:
A race condition between recovery and backup can happen with error messages like this:
```
Failure in BackupEngine::CreateNewBackup with: IO error: No such file or directory: While opening a file for sequentially reading: /dev/shm/rocksdb_test/rocksdb_crashtest_whitebox/002653.log: No such file or directory
```
PR #6949 introduced disabling file deletion during error handling of manifest IO errors. The aforementioned race condition is caused by this chain of events:
[Backup engine] disable file deletion
[Recovery] disable file deletion <= this is optional for the race condition, it may or may not get called
[Backup engine] get list of files to copy/link
[Recovery] force enable file deletion
.... some files referred to by the backup engine get deleted
[Backup engine] copy/link file <= error: no file found

This PR fixes this with:
1) The recovery thread currently force-enables file deletion as long as file deletion is disabled, regardless of whether the previous error handling was for a manifest IO error and disabled it in the first place. This means it could incorrectly enable file deletions that other threads, like backup threads and file snapshotting threads, intended to keep disabled. This PR performs this check explicitly before making the call.
2) `disable_delete_obsolete_files_` is designed as a counter to allow different threads to enable and disable file deletion separately. The recovery thread currently does a force enable of file deletion, because `ErrorHandler::SetBGError()` can be called multiple times by different threads when they receive a manifest IO error (details per PR #6949), resulting in `DBImpl::DisableFileDeletions` being called multiple times too. Making a force enable file deletion call that resets the counter `disable_delete_obsolete_files_` to zero is a workaround for this. However, as the race condition shows, it can incorrectly suppress other threads' (like a backup thread's) intention to keep file deletion disabled. <strike>This PR adds a `std::atomic<int> disable_file_deletion_count_` to the error handler to track the needed counter decrease more precisely</strike>. This PR tracks and caps file deletion enabling/disabling in the error handler.
3) For recovery, the section to find obsolete files and purge them was moved to after the attempt to enable file deletion. The actual finding and purging is more likely to happen if file deletion was previously disabled and gets re-enabled now.
An internal function `DBImpl::EnableFileDeletionsWithLock` was added to support changes 2) and 3). Some useful logging was explicitly added to keep those log messages around.

Pull Request resolved: #11955
Test Plan: existing unit tests
Reviewed By: anand1976
Differential Revision: D50290592
Pulled By: jowlyzhang
fbshipit-source-id: 73aa8331ca4d636955a5b0324b1e104a26e00c9b
Thank you for helping check this. It's a good idea to augment ... In this manifest IO error case, the job fails, but we want to postpone the deletion of the temp files until after recovery succeeds. One idea would be: as we continue to release this job's file numbers from the pending output file pool, we also add this file number to the error handler as the minimum file number of files to quarantine.
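A rough sketch of this quarantine idea (all names here are hypothetical illustrations, not existing RocksDB APIs): the failed job hands its smallest output file number to the error handler, the obsolete-file scan skips anything at or above it, and the quarantine is cleared once recovery succeeds.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <mutex>

class QuarantineSketch {
 public:
  // Called while handling the MANIFEST IO error, before the job releases
  // its file numbers from the pending-output pool.
  void QuarantineFrom(uint64_t min_file_number) {
    std::lock_guard<std::mutex> lock(mu_);
    min_quarantined_ = std::min(min_quarantined_, min_file_number);
  }

  // Consulted by the obsolete-file scan: files at or above the quarantined
  // minimum are kept even if they look obsolete.
  bool IsQuarantined(uint64_t file_number) {
    std::lock_guard<std::mutex> lock(mu_);
    return file_number >= min_quarantined_;
  }

  // Called once recovery succeeds and the MANIFEST state is known again.
  void ClearQuarantine() {
    std::lock_guard<std::mutex> lock(mu_);
    min_quarantined_ = std::numeric_limits<uint64_t>::max();
  }

 private:
  std::mutex mu_;
  uint64_t min_quarantined_ = std::numeric_limits<uint64_t>::max();
};
```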
Yes, this sounds great. My only concern is space usage growing if flush/compaction happens while error recovery is ongoing. IIRC we disable background work during a hard error like a MANIFEST failure. Is that right, and do we also avoid triggering auto-recovery flushes?
These are great questions, thank you for this perspective! I checked the error handling logic more. It's true that all MANIFEST errors are initially marked as either hard or fatal errors that will stop non-recovery background work. There is a special case for handling the no-WAL scenario that will mark it as a soft error (Edited: want to add this important detail - this marking as a soft error will also set the flag ...):
Lines 477 to 492 in 2e514e4
But it seems to me any recovery flush effort, either from manual ... :
Lines 389 to 406 in 2e514e4
At that time, the error handler should have cleaned up the quarantined file set, or we need to make sure it does. So recovery flush shouldn't be a concern?
Sure, a quarantined file set makes sense, and in that case recovery flush is not a concern. I was thinking that if we prevented file deletion more broadly during recovery, then failed recovery flushes could cause dead SST files to accumulate. But I guess your implementation approach will make that not a possibility.
…es (#11955) (#11979)
Summary: With a fragmented record spanning multiple blocks, if any following block is corrupted with arbitrary data and the interpreted log number is less than the current log number, the program will fall into an infinite loop due to not skipping the buffer's leading bytes.
Pull Request resolved: #11979
Test Plan: existing unit tests
Reviewed By: ajkr
Differential Revision: D50604408
Pulled By: jowlyzhang
fbshipit-source-id: e50a0c7e7c3d293fb9d5afec0a3eb4a1835b7a3b
…3234)
Summary: `DBErrorHandlingFSTest.AtomicFlushNoSpaceError` is flaky due to seg fault during error recovery:
```
...
frame #5: 0x00007f0b3ea0a9d6 librocksdb.so.9.10`rocksdb::VersionSet::GetObsoleteFiles(std::vector<rocksdb::ObsoleteFileInfo, std::allocator<rocksdb::ObsoleteFileInfo>>*, std::vector<rocksdb::ObsoleteBlobFileInfo, std::allocator<rocksdb::ObsoleteBlobFileInfo>>*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>>*, unsigned long) [inlined] std::vector<rocksdb::ObsoleteFileInfo, std::allocator<rocksdb::ObsoleteFileInfo>>::begin(this=<unavailable>) at stl_vector.h:812:16
frame #6: 0x00007f0b3ea0a9d6 librocksdb.so.9.10`rocksdb::VersionSet::GetObsoleteFiles(this=0x0000000000000000, files=size=0, blob_files=size=0, manifest_filenames=size=0, min_pending_output=18446744073709551615) at version_set.cc:7258:18
frame #7: 0x00007f0b3e8ccbc0 librocksdb.so.9.10`rocksdb::DBImpl::FindObsoleteFiles(this=<unavailable>, job_context=<unavailable>, force=<unavailable>, no_full_scan=<unavailable>) at db_impl_files.cc:162:30
frame #8: 0x00007f0b3e85e698 librocksdb.so.9.10`rocksdb::DBImpl::ResumeImpl(this=<unavailable>, context=<unavailable>) at db_impl.cc:434:20
frame #9: 0x00007f0b3e921516 librocksdb.so.9.10`rocksdb::ErrorHandler::RecoverFromBGError(this=<unavailable>, is_manual=<unavailable>) at error_handler.cc:632:46
```
I suspect this is due to DB being destructed and reopened during recovery. Specifically, the [ClearBGError() call](https://github.com/facebook/rocksdb/blob/c72e79a262bf696faf5f8becabf92374fc14b464/db/db_impl/db_impl.cc#L425) can release and reacquire the mutex, and the DB can be closed during this time. So it's not safe to access DB state after ClearBGError(). There was a similar story in #9496. [Moving the obsolete files logic after ClearBGError()](#11955) probably makes the seg fault more easily triggered. This PR updates `ClearBGError()` to guarantee that db close cannot finish until the method has returned and the mutex is released, so that we can safely access DB state after calling it.
Pull Request resolved: #13234
Test Plan: I could not trigger the seg fault locally, will just monitor future test failures.
Reviewed By: jowlyzhang
Differential Revision: D67476836
Pulled By: cbi42
fbshipit-source-id: dfb3e9ccd4eb3d43fc596ec10e4052861eeec002
A race condition between recovery and backup can happen with error messages like this:
Failure in BackupEngine::CreateNewBackup with: IO error: No such file or directory: While opening a file for sequentially reading: /dev/shm/rocksdb_test/rocksdb_crashtest_whitebox/002653.log: No such file or directory
PR #6949 introduced disabling file deletion during error handling of manifest IO errors. The aforementioned race condition is caused by the following chain of events (a public-API sketch of the interleaving follows the chain):
[Backup engine] disable file deletion
[Recovery] disable file deletion <= this is optional for the race condition, it may or may not get called
[Backup engine] get list of files to copy/link
[Recovery] force enable file deletion
.... some files referred to by the backup engine get deleted
[Backup engine] copy/link file <= error: no file found
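To make the interleaving concrete, here is a sketch of the backup side against the public RocksDB API (`DisableFileDeletions`, `GetLiveFiles`, and `EnableFileDeletions` are real DB calls; the step ordering and the recovery thread's call shown in the comment are the assumed race):

```cpp
#include <cstdint>
#include <string>
#include <vector>

#include "rocksdb/db.h"

void BackupThreadSketch(rocksdb::DB* db) {
  db->DisableFileDeletions();  // [Backup engine] hold file deletions

  std::vector<std::string> live_files;
  uint64_t manifest_size = 0;
  // [Backup engine] get list of files to copy/link
  db->GetLiveFiles(live_files, &manifest_size, /*flush_memtable=*/false);

  // ... meanwhile the recovery thread calls
  //     db->EnableFileDeletions(/*force=*/true);
  // which resets the disable counter to zero, so obsolete-file purging can
  // delete entries of `live_files` before they are copied/linked ...

  for (const std::string& f : live_files) {
    // Copy or hard-link f into the backup directory; this is where
    // "No such file or directory" shows up if f was already deleted.
    (void)f;
  }

  db->EnableFileDeletions(/*force=*/false);  // release the backup's hold
}
```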
This PR fixes this with:

1) The recovery thread currently force-enables file deletion as long as file deletion is disabled, regardless of whether the previous error handling was for a manifest IO error and disabled it in the first place. This means it could incorrectly enable file deletions that other threads, like backup threads and file snapshotting threads, intended to keep disabled. This PR performs this check explicitly before making the call.

2) `disable_delete_obsolete_files_` is designed as a counter to allow different threads to enable and disable file deletion separately. The recovery thread currently does a force enable of file deletion, because `ErrorHandler::SetBGError()` can be called multiple times by different threads when they receive a manifest IO error (details per PR First step towards handling MANIFEST write error #6949), resulting in `DBImpl::DisableFileDeletions` being called multiple times too. Making a force enable file deletion call that resets the counter `disable_delete_obsolete_files_` to zero is a workaround for this. However, as the race condition shows, it can incorrectly suppress other threads' (like a backup thread's) intention to keep file deletion disabled. <strike>This PR adds a `std::atomic<int> disable_file_deletion_count_` to the error handler to track the needed counter decrease more precisely</strike>. This PR tracks and caps file deletion enabling/disabling in the error handler.

3) For recovery, the section to find obsolete files and purge them was moved to after the attempt to enable file deletion. The actual finding and purging is more likely to happen if file deletion was previously disabled and gets re-enabled now (see the sketch after the test plan).

An internal function `DBImpl::EnableFileDeletionsWithLock` was added to support changes 2) and 3). Some useful logging was explicitly added to keep those log messages around.

Test plan: existing unit tests
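Lastly, a minimal sketch of change 3) as described above (the function names mirror the `DBImpl` internals mentioned in the discussion; the control flow and stub types are simplified assumptions, not the actual recovery code): the obsolete-file scan and purge run only after the attempt to re-enable file deletion, when purging can actually take effect.

```cpp
// Stub standing in for RocksDB's JobContext for this sketch.
struct JobContextSketch {
  bool HaveSomethingToDelete() const { return true; }
};

class RecoveryFlowSketch {
 public:
  void FinishRecovery(bool recovery_succeeded) {
    if (recovery_succeeded) {
      // Re-enables file deletion via an EnableFileDeletionsWithLock-style
      // call only if the error handler itself disabled it; holds taken by
      // backup or snapshot threads stay untouched.
      ClearBGError();
    }
    // Change 3): scan for and purge obsolete files only after the attempt
    // to re-enable deletion above.
    JobContextSketch job_context;
    FindObsoleteFiles(&job_context, /*force=*/true);
    if (job_context.HaveSomethingToDelete()) {
      PurgeObsoleteFiles(job_context);
    }
  }

 private:
  void ClearBGError() {}
  void FindObsoleteFiles(JobContextSketch*, bool /*force*/) {}
  void PurgeObsoleteFiles(const JobContextSketch&) {}
};
```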