
[Remote Store] Update shallow snapshot flows to support new path types and hash algorithm #12987

Closed
ashking94 opened this issue Mar 30, 2024 · 3 comments · Fixed by #12988
Assignees: ashking94
Labels: enhancement (Enhancement or improvement to existing feature or request), Storage:Performance, Storage:Resiliency (Issues and PRs related to the storage resiliency), v2.14.0

Comments

ashking94 (Member) commented Mar 30, 2024

Is your feature request related to a problem? Please describe

With the optimised prefix pattern for remote store paths (as mentioned in #12567), we need to ensure that indexes can use the optimised remote store path type while the existing prefix pattern (the fixed remote store path type) remains resolvable during snapshots and restores.
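
For illustration, here is a minimal sketch contrasting the two path shapes involved. All names, layout details, and the hash used are hypothetical stand-ins, not the actual OpenSearch implementation:

```java
// Illustrative sketch only: names and the hash function are hypothetical
// stand-ins, not the actual OpenSearch classes.
enum PathType { FIXED, HASHED_PREFIX }

final class RemotePathSketch {

    // FIXED (legacy): a flat prefix under the repository base path, e.g.
    //   <base>/<index-uuid>/<shard-id>/<data-category>/...
    static String fixedPath(String base, String indexUUID, int shardId, String category) {
        return String.join("/", base, indexUUID, String.valueOf(shardId), category);
    }

    // HASHED_PREFIX (optimised, per #12567): a hash of the path components is
    // prepended so blobs spread across key-space prefixes, e.g.
    //   <hash>/<base>/<index-uuid>/<shard-id>/<data-category>/...
    static String hashedPrefixPath(String base, String indexUUID, int shardId, String category) {
        // Stand-in for the real hash algorithm (e.g. an FNV-1a variant).
        String hash = Integer.toHexString((indexUUID + "/" + shardId + "/" + category).hashCode());
        return String.join("/", hash, base, indexUUID, String.valueOf(shardId), category);
    }

    // Resolving a blob path therefore requires knowing which path type
    // (and hash algorithm) the index was written with.
    static String resolve(PathType type, String base, String indexUUID, int shardId, String category) {
        return type == PathType.FIXED
            ? fixedPath(base, indexUUID, shardId, category)
            : hashedPrefixPath(base, indexUUID, shardId, category);
    }
}
```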

Describe the solution you'd like

We updated the snapshot and restore flows when shallow snapshots were introduced with remote store. Because the blob store path now varies per type of data, it becomes important to track this in the shallow snapshot metadata, since a snapshot can be restored beyond the lifetime of the index (meaning the cluster manager holds no information about the index itself). I propose updating the shallow snapshot metadata file to also record the remote store path type and hash algorithm, so that this information can be reused during 1. restore, 2. clone, and 3. cleanup of remote store data on snapshot deletion once no lock files remain for it.
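
As a rough sketch of the shape this could take (class and field names here are hypothetical, not the actual change, which landed in #12988):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the actual OpenSearch class: shallow snapshot
// metadata extended to record the remote store path type and hash algorithm,
// so restore, clone, and post-deletion cleanup can rebuild blob paths even
// when the cluster manager no longer knows the index.
record ShallowSnapshotMetadata(String indexUUID, String repository,
                               String pathType, String pathHashAlgorithm) {

    Map<String, Object> toMap() {
        Map<String, Object> out = new HashMap<>();
        out.put("index_uuid", indexUUID);
        out.put("repository", repository);
        out.put("path_type", pathType);
        out.put("path_hash_algorithm", pathHashAlgorithm);
        return out;
    }

    static ShallowSnapshotMetadata fromMap(Map<String, Object> in) {
        // Snapshots written before this change carry no path fields;
        // fall back to the legacy FIXED layout so old snapshots stay restorable.
        return new ShallowSnapshotMetadata(
            (String) in.get("index_uuid"),
            (String) in.get("repository"),
            (String) in.getOrDefault("path_type", "FIXED"),
            (String) in.get("path_hash_algorithm"));
    }
}
```

Defaulting an absent path type to the fixed layout would keep older shallow snapshots restorable without rewriting their metadata.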

Related component

Storage:Performance

Describe alternatives you've considered

No response

Additional context

No response

peternied (Member) commented

[Triage - attendees]
@ashking94 Thanks for creating this issue; however, it isn't being accepted because it doesn't contain enough detail to establish the expected result of resolving the problem. Please feel free to open a new issue after addressing this.

github-project-automation bot moved this from Now (This Quarter) to ✅ Done in Storage Project Board on Apr 3, 2024
ashking94 (Member, Author) commented

@peternied I will add more details and would prefer reopening the same issue. This is an incremental step of meta issue #12589 toward achieving #12567. Let me know if you have any concern about reopening this issue.

ashking94 reopened this on Apr 3, 2024
github-project-automation bot moved this from ✅ Done to 🏗 In progress in Storage Project Board on Apr 3, 2024
ashking94 self-assigned this on Apr 3, 2024
peternied (Member) commented

@ashking94 I'm not familiar with the feature area, and the added details don't make it more approachable to me. I'd recommend adopting a slightly different mindset when creating issues in open-source projects: carefully describe the problem and why it is worthwhile to address, then the constraints on how it should be addressed. Supporting context should be treated as an appendix, not required reading.

Coming from this perspective, unfamiliar readers can onboard quickly and are better set up to review and provide feedback on the goals of the issue and related issues.

github-project-automation bot moved this from 🏗 In progress to ✅ Done in Storage Project Board on Apr 5, 2024
bbarani moved this to Features in Test roadmap format on Apr 9, 2024