
Support dynamically removing store from storage manager #1232

Merged: 7 commits merged into linkedin:master on Aug 20, 2019

Conversation

@jsjtzyy (Contributor) commented Aug 8, 2019

  1. Given a partition id, remove the corresponding store from the storage
     manager, disk manager and compaction manager.
  2. Support deleting the store's allocated files and returning swap
     segments to the reserve pool if necessary. (A rough sketch of this flow
     is shown below.)
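
A hypothetical sketch of the described flow, for orientation only; the field partitionToDiskManager and the DiskManager behavior noted in the comments are assumptions, not the PR's actual code.

    // Hypothetical sketch only; the PR's actual implementation may differ.
    public boolean removeBlobStore(PartitionId id) {
      DiskManager diskManager = partitionToDiskManager.get(id);  // assumed lookup map
      if (diskManager == null) {
        return false;  // this node does not host the partition
      }
      // The DiskManager is expected to shut the store down, drop it from the
      // CompactionManager, return any swap segments to the reserve pool, and
      // delete the store's files on disk.
      return diskManager.removeBlobStore(id);
    }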

@jsjtzyy jsjtzyy self-assigned this Aug 8, 2019
@codecov-io commented Aug 8, 2019

Codecov Report

Merging #1232 into master will decrease coverage by 16.7%.
The diff coverage is 95.71%.

Impacted file tree graph

@@              Coverage Diff              @@
##             master    #1232       +/-   ##
=============================================
- Coverage     88.92%   72.21%   -16.71%     
- Complexity       60     6054     +5994     
=============================================
  Files             6      439      +433     
  Lines           352    34968    +34616     
  Branches         37     4437     +4400     
=============================================
+ Hits            313    25252    +24939     
- Misses           29     8560     +8531     
- Partials         10     1156     +1146
Impacted Files | Coverage Δ | Complexity Δ
.../com.github.ambry.store/StorageManagerMetrics.java | 98.07% <100%> (ø) | 6 <1> (?)
...in/java/com.github.ambry.store/StorageManager.java | 88.73% <100%> (ø) | 47 <3> (?)
...rc/main/java/com.github.ambry.store/BlobStore.java | 90.42% <100%> (ø) | 97 <3> (?)
...main/java/com.github.ambry.store/StoreMetrics.java | 97.2% <100%> (ø) | 10 <6> (?)
...java/com.github.ambry.store/CompactionManager.java | 89.28% <100%> (ø) | 25 <3> (?)
...ava/com.github.ambry.store/BlobStoreCompactor.java | 92.12% <100%> (ø) | 156 <0> (?)
.../main/java/com.github.ambry.store/DiskManager.java | 86.6% <81.25%> (ø) | 55 <4> (?)
.../com/github/ambry/account/HelixAccountService.java | 87.05% <0%> (-0.94%) | 33% <0%> (-5%)
...om/github/ambry/account/AccountServiceMetrics.java | 100% <0%> (ø) | 1% <0%> (ø)
...ava/com.github.ambry/config/PerformanceConfig.java | 100% <0%> (ø) | 4% <0%> (?)
... and 432 more

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update e5d6e7e...8ea2d7a.

@jsjtzyy jsjtzyy marked this pull request as ready for review August 12, 2019 18:20
@jsjtzyy jsjtzyy requested review from cgtz and lightningrob August 12, 2019 18:20
  boolean removeBlobStore(BlobStore store) {
    boolean result;
    if (compactionExecutor == null) {
      stores.remove(store);
@jsjtzyy (Contributor, Author):
Once #1226 is merged, stores will become a concurrent set, which should be ok here.
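
For illustration, a thread-safe set of the kind #1226 introduces could be declared like this (field name assumed):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // A concurrent set makes stores.remove(store) safe without extra locking.
    private final Set<BlobStore> stores = ConcurrentHashMap.newKeySet();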

@cgtz (Contributor) left a comment:
LGTM after comments addressed

    String[] swapSegmentsInUse = compactor.getSwapSegmentsInUse();
    if (swapSegmentsInUse.length > 0) {
      for (String fileName : swapSegmentsInUse) {
        logger.trace("Returning swap segment {} to reserve pool", fileName);
Reviewer comment (Contributor):
Maybe log this at info level since it is relatively rare
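
The suggested change is a one-line log-level bump, roughly:

    // info rather than trace: swap-segment return only happens on store removal
    logger.info("Returning swap segment {} to reserve pool", fileName);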

    }
    // step1: return occupied swap segments (if any) to reserve pool
    String[] swapSegmentsInUse = compactor.getSwapSegmentsInUse();
    if (swapSegmentsInUse.length > 0) {
Reviewer comment (Contributor):
don't need this if around the for loop
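
A minimal sketch of the suggested simplification: a for-each over an empty array is already a no-op, so the surrounding length check can be dropped (loop body abbreviated here):

    // step 1: return occupied swap segments (if any) to the reserve pool
    for (String fileName : compactor.getSwapSegmentsInUse()) {
      logger.info("Returning swap segment {} to reserve pool", fileName);
      // return the segment to the reserve pool here
    }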

logger.info("Deleting store {} directory", storeId);
File storeDir = new File(dataDir);
try {
Files.walk(storeDir.toPath()).sorted(Comparator.reverseOrder()).map(Path::toFile).forEach(File::delete);
Reviewer comment (Contributor):
We also have Utils.deleteFileOrDirectory that you can use

@jsjtzyy (Author) replied:
ah, I didn't notice that. Will change this piece of code.
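
For context, a recursive delete helper equivalent to the Files.walk chain above looks roughly like this; this is only a sketch, not Ambry's actual Utils.deleteFileOrDirectory implementation:

    import java.io.File;
    import java.io.IOException;

    // Sketch of a depth-first recursive delete (children before parents).
    static void deleteFileOrDirectory(File file) throws IOException {
      File[] children = file.listFiles();  // null for regular files
      if (children != null) {
        for (File child : children) {
          deleteFileOrDirectory(child);
        }
      }
      if (!file.delete()) {
        throw new IOException("Could not delete " + file.getAbsolutePath());
      }
    }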

    if (compactionExecutor == null) {
      stores.remove(store);
      result = true;
    } else {
Reviewer comment (Contributor):
I think you can remove this nested else and use else if (!compact...

   * @param id the {@link PartitionId} associated with store
   * @return {@code true} if removal succeeds. {@code false} otherwise.
   */
  public boolean removeBlobStore(PartitionId id) {
Reviewer comment (Contributor):
Any reason for this method (and the ones in DiskManager and CompactionManager) to return success/failure booleans instead of throwing exceptions? With exceptions, you can log just once at the top level

@jsjtzyy (Author) replied:
I am slightly leaning towards the current way, to keep it somewhat consistent with scheduleNextForCompaction, controlCompactionForBlobStore, etc. Also, returning a boolean can be explicitly verified in unit tests.
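
As an illustration of the testability argument, a boolean return can be asserted directly in a unit test; the fixture names below (storageManager, existingPartition, unknownPartition) are assumed, not from the PR:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    @Test
    public void removeBlobStoreReturnValueTest() throws Exception {
      // removal of a stopped, compaction-disabled store should succeed
      assertTrue(storageManager.removeBlobStore(existingPartition));
      // removal of a partition this node does not host should fail
      assertFalse(storageManager.removeBlobStore(unknownPartition));
    }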

@@ -219,6 +219,8 @@ private void deregisterIndexGauges(String storeId) {
    registry.remove(MetricRegistry.name(Log.class, prefix + "CurrentCapacityUsed"));
    registry.remove(MetricRegistry.name(Log.class, prefix + "PercentageUsedCapacity"));
    registry.remove(MetricRegistry.name(Log.class, prefix + "CurrentSegmentCount"));
    registry.remove(MetricRegistry.name(Log.class, "ByteBufferForAppendTotalCount"));
    registry.remove(MetricRegistry.name(Log.class, "UnderCompaction" + SEPERATOR + "ByteBufferForAppendTotalCount"));
Reviewer comment (Contributor):
minor typo: SEPARATOR

@jsjtzyy (Author) replied:
fixed

   */
  @Test
  public void deleteStoreFilesTest() throws Exception {
    assumeTrue(isLogSegmented);
Reviewer comment (Contributor):
Could this code be tested with non segmented logs? Perhaps just put the test of swap segments within if (isLogSegmented)

@jsjtzyy (Author) replied:
Good point. I added some code to ensure both segmented and non-segmented cases are tested.
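
The resulting test shape might look roughly like the following; the helper names are hypothetical and only illustrate the if (isLogSegmented) split suggested above:

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;
    import java.io.File;

    @Test
    public void deleteStoreFilesTest() throws Exception {
      createStoreAndDeleteFiles();  // hypothetical helper: common setup + removal
      if (isLogSegmented) {
        // swap segments only exist for segmented logs
        verifySwapSegmentsReturnedToReservePool();  // hypothetical helper
      }
      assertFalse("Store directory should have been deleted", new File(dataDir).exists());
    }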

    StoreConfig config = new StoreConfig(new VerifiableProperties(properties));
    MetricRegistry registry = new MetricRegistry();
    StoreMetrics metrics = new StoreMetrics(registry);
    //createBlobStore(getMockReplicaId(storeDir.getAbsolutePath()));
Reviewer comment (Contributor):
remove commented code

@jsjtzyy (Contributor, Author) commented Aug 19, 2019

Addressed @cgtz's comments. @lightningrob, gentle reminder to review.

      stores.remove(store);
      result = true;
    } else if (!compactionExecutor.getStoresDisabledCompaction().contains(store)) {
      logger.error("Fail to remove store ({}) from compaction manager because compaction of it is still enabled",
Reviewer comment (Contributor):
This seems like it should be an IllegalStateException. Is there a valid scenario where compaction is still enabled when removeStore is called? If we return false and compaction is turned off later, will the store ever be removed?

@jsjtzyy (Author) replied:
I agree with your point; however, I am trying to use the same approach as the scheduleNextForCompaction method (line 360). Returning false may happen when disabling compaction on the store hasn't completed or hasn't been executed yet. The Helix state model will retry the entire workflow (disable compaction, then remove the store) to guarantee the store is correctly removed.
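
For context, the retry-driven workflow described here might look roughly like the following on the caller side; this is a hypothetical sketch, and the signature of controlCompactionForBlobStore is assumed:

    // Hypothetical Helix-driven transition handler; Ambry's real handler differs.
    void onStoreRemovalTransition(PartitionId id) {
      // step 1: disable compaction so the store can be removed safely
      if (!storageManager.controlCompactionForBlobStore(id, false)) {
        throw new IllegalStateException("Could not disable compaction for " + id);
      }
      // step 2: remove the store; throwing makes Helix retry the whole transition,
      // which covers the "compaction not yet disabled" false return discussed above
      if (!storageManager.removeBlobStore(id)) {
        throw new IllegalStateException("Could not remove store for partition " + id);
      }
    }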

logger.info("Deleting store {} directory", storeId);
File storeDir = new File(dataDir);
try {
Utils.deleteFileOrDirectory(storeDir);
Reviewer comment (Contributor):
This method already throws IOException, so you probably don't have to catch the exception, unless you want dataDir to be in the message.

Also, I would recommend setting the cause exception to e if you keep the catch

@jsjtzyy (Author) replied:
Thanks for reminding me of this. I prefer to keep this catch in case there is any non-IOException. Also, I will take your advice and set the cause exception to e.
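
The shape being discussed, with dataDir kept in the message and the cause attached, is roughly the following (the wrapping exception type is assumed; the PR may use a store-specific exception instead):

    try {
      Utils.deleteFileOrDirectory(storeDir);
    } catch (Exception e) {
      // keep the data directory in the message and preserve the original cause
      throw new IOException("Couldn't delete store directory " + dataDir, e);
    }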

@lightningrob lightningrob merged commit 90e1227 into linkedin:master Aug 20, 2019