Handle Entire Folder Revoke + Clean Up Before Share Process Succeeds #1101
Comments
noah-paige changed the title from "Handle Entire Folder Revoke + Clean Up Before Share Process Succeedds" to "Handle Entire Folder Revoke + Clean Up Before Share Process Succeeds" on Mar 12, 2024
noah-paige added the `type: bug` (Something isn't working) label and removed the `type: enhancement` (Feature enhancement) label on Mar 12, 2024
dlpzx added a commit that referenced this issue on Mar 25, 2024:
### Feature or Bugfix
- Bugfix

### Detail
- Remove the unnecessary AllowAllToAdmin statement from bucket policies and access point policies.
- For the deletion of access points in existing shares I kept a reference to `AllowAllToAdmin`. We could change this; I don't have a strong opinion on it.
- For existing bucket policies and access point policies, `AllowAllToAdmin` will be deleted when all folders are revoked. It is unnecessary, but it does no harm to leave it as it is for existing shares.

[UPDATE] Since testing uncovered issues derived from removing `AllowAllToAdmin` with the current clean-up actions, I took issue #1101 and included it in this PR. On top of the aforementioned changes, this PR:
- Removes the unnecessary `clean_up` functions from revoke_folders and moves the clean-up logic into the `process_revoked_folders` function (in `data_sharing_service.py`).
- Breaks `delete_access_point_policy` into smaller functions with fewer responsibilities: `revoke_access_in_access_point_policy`, which generates the new access point policy; `attach_new_access_point_policy`, which attaches a given policy to an access point; and `delete_access_point`, which deletes an access point.
- Moves the logic on what should be deleted to `backend/dataall/modules/dataset_sharing/services/share_processors/s3_access_point_process_share.py`. Only if the resulting policy contains no statements is the access point deleted, the IAM permissions revoked from the requester IAM role, and the KMS key policy revoked.
- Bonus point: fixes a small bug in the clean-up (it assumed `prefix_list` is a list, but for a single shared folder it can be a string).

### Relates
- #997
- #1101

### Security
Please answer the questions below briefly where applicable, or write `N/A`. Based on [OWASP 10](https://owasp.org/Top10/en/).

- Does this PR introduce or modify any input fields or queries - this includes fetching data from storage outside the application (e.g. a database, an S3 bucket)?
  - Is the input sanitized?
  - What precautions are you taking before deserializing the data you consume?
  - Is injection prevented by parametrizing queries?
  - Have you ensured no `eval` or similar functions are used?
- Does this PR introduce any functionality or component that requires authorization?
  - How have you ensured it respects the existing AuthN/AuthZ mechanisms?
  - Are you logging failed auth attempts?
- Are you using or adding any cryptographic features?
  - Do you use a standard proven implementations?
  - Are the used keys controlled by the customer? Where are they stored?
- Are you introducing any new policies/roles/users?
  - Have you used the least-privilege principle? How?

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
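As a rough illustration of the refactor described in the PR above, the sketch below splits the access-point clean-up into the three helpers named in the PR description and deletes the access point only when the resulting policy has no statements left. The helper names come from the PR text, but the signatures, the policy and `prefix_list` shapes, and the boto3 wiring are illustrative assumptions, not the actual data.all implementation.

```python
# Minimal sketch, assuming illustrative signatures and policy shapes.
import json

import boto3

s3control = boto3.client('s3control')


def revoke_access_in_access_point_policy(policy: dict, principal_arn: str, prefix_list) -> dict:
    """Generate a new access point policy with the revoked folder prefixes removed."""
    # Bug called out in the PR: for a single shared folder, prefix_list can be a
    # plain string rather than a list, so normalize it first.
    if isinstance(prefix_list, str):
        prefix_list = [prefix_list]

    def grants_revoked_prefix(statement: dict) -> bool:
        # Illustrative matching: drop a statement when it targets the requester
        # principal and a resource under one of the revoked folder prefixes.
        principals = statement.get('Principal', {}).get('AWS', '')
        resources = statement.get('Resource', [])
        if isinstance(resources, str):
            resources = [resources]
        return principal_arn in principals and any(
            prefix in resource for prefix in prefix_list for resource in resources
        )

    remaining = [s for s in policy.get('Statement', []) if not grants_revoked_prefix(s)]
    return {**policy, 'Statement': remaining}


def attach_new_access_point_policy(account_id: str, access_point_name: str, policy: dict) -> None:
    """Attach a given policy document to an access point."""
    s3control.put_access_point_policy(
        AccountId=account_id, Name=access_point_name, Policy=json.dumps(policy)
    )


def delete_access_point(account_id: str, access_point_name: str) -> None:
    """Delete an access point."""
    s3control.delete_access_point(AccountId=account_id, Name=access_point_name)


def process_revoked_folder(account_id, access_point_name, policy, principal_arn, prefix_list):
    new_policy = revoke_access_in_access_point_policy(policy, principal_arn, prefix_list)
    if new_policy['Statement']:
        # Other folder shares still need the access point: attach the reduced policy.
        attach_new_access_point_policy(account_id, access_point_name, new_policy)
    else:
        # No statements left: delete the access point; revoking the requester IAM
        # permissions and the KMS key policy access would follow here (not shown).
        delete_access_point(account_id, access_point_name)
```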
Merged
Describe the bug
When revoking a folder share item, data.all updates the item to `Revoke_Succeeded` once the access point policy is updated to remove the folder prefix for the principal IAM role. But if there are no additional folder shares for the same principal, we perform a number of other steps, including:
- deleting the S3 access point
- revoking the IAM permissions from the requester IAM role
- revoking the access granted in the KMS key policy
The above actions all happen after the folder item is already updated to `Revoke_Succeeded`, but the revoke process could theoretically fail in those steps. In that case, resources that should have been cleaned up remain pending in the account, and data.all currently has no way to notify the user or retry the clean-up.

How to Reproduce
Expected behavior
data.all should update the share item to a `Revoke_Succeeded` status only when it has finished cleaning up all resources related to the share item, and/or notify the user when it fails to clean up a particular resource.
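A minimal sketch of the ordering this expected behavior calls for, assuming placeholder helpers (`clean_up_folder_share`, `update_status`, and `notify_requester` are hypothetical names, not data.all APIs): the item only moves to `Revoke_Succeeded` once every clean-up step completes, and a failure is surfaced to the user instead of being silently lost.

```python
# Hypothetical ordering for the revoke flow; all helper names are placeholders.
import logging

log = logging.getLogger(__name__)


def revoke_folder_share_item(share_item, clean_up_folder_share, update_status, notify_requester):
    try:
        # Remove the folder prefix from the access point policy and, if no other
        # folder shares remain for the principal, delete the access point, revoke
        # the requester IAM permissions and the KMS key policy access.
        clean_up_folder_share(share_item)
    except Exception:
        log.exception('Clean-up failed for share item %s', share_item)
        # Surface the failure instead of marking the item as succeeded.
        update_status(share_item, 'Revoke_Failed')
        notify_requester(share_item, 'Some resources could not be cleaned up')
        return False
    # Only now is the clean-up complete, so the status can safely be updated.
    update_status(share_item, 'Revoke_Succeeded')
    return True
```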
Your project
No response
Screenshots
No response
OS
N/A
Python version
N/A
AWS data.all version
v2.2.0
Additional context
No response