# Handle Error on clean up share and not get stuck in IN_PROGRESS status (#1099)

### Feature or Bugfix
- Bugfix


### Detail
- Scenario
  - A folder share item exists in an existing share object in DA <= v2.2
  - DA is upgraded to v2.3
  - Before updating the environment/pivot role, the user removes the folder item from the existing share object

- Issue
  - The share tries to create the managed policies in `def clean_up_share()` and remove the inline policies
  - The function fails since the pivotRole does not have the proper IAM permissions
  - An exception is raised, the share gets stuck in a REVOKE_IN_PROGRESS state, and the Dataset Lock table never releases the lock for the next share to process

- Resolution
  - Wrap the `def clean_up_share()` method for revoking folder items in a try/except block, so a clean-up failure is logged and reported instead of leaving the share in progress (see the sketch after this list)
  - Also remove some unused code
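
Below is a minimal, self-contained sketch of the failure-handling pattern this fix follows. All names here (`DatasetLock`, the simulated `PermissionError`, the status strings) are illustrative stand-ins for this description, not data.all's actual API; the real change is in the diff further down.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


class DatasetLock:
    """Illustrative stand-in for the Dataset Lock table that serializes share processing."""

    def acquire(self):
        log.info("Dataset lock acquired")

    def release(self):
        log.info("Dataset lock released")


def clean_up_share() -> bool:
    """Report clean-up failure instead of raising, as this PR does for folder items."""
    try:
        # Simulated failure: the pivot role lacks the IAM permissions needed for clean-up.
        raise PermissionError("pivot role is missing IAM permissions")
    except Exception as e:
        log.info(f"Failed to clean up folder share resources due to: {e}")
        return False


def revoke_share(lock: DatasetLock) -> str:
    """Because clean_up_share() no longer raises, the status is always finalized and the
    lock is always released, so the share cannot stay stuck in REVOKE_IN_PROGRESS."""
    lock.acquire()
    try:
        return "Revoke_Succeeded" if clean_up_share() else "Revoke_Failed"
    finally:
        lock.release()


if __name__ == "__main__":
    print(revoke_share(DatasetLock()))  # -> Revoke_Failed (and the lock was released)
```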

### Security
Please answer the questions below briefly where applicable, or write `N/A`. Based on [OWASP 10](https://owasp.org/Top10/en/).

- Does this PR introduce or modify any input fields or queries - this includes fetching data from storage outside the application (e.g. a database, an S3 bucket)?
  - Is the input sanitized?
  - What precautions are you taking before deserializing the data you consume?
  - Is injection prevented by parametrizing queries?
  - Have you ensured no `eval` or similar functions are used?
- Does this PR introduce any functionality or component that requires authorization?
  - How have you ensured it respects the existing AuthN/AuthZ mechanisms?
  - Are you logging failed auth attempts?
- Are you using or adding any cryptographic features?
  - Do you use standard, proven implementations?
  - Are the used keys controlled by the customer? Where are they stored?
- Are you introducing any new policies/roles/users?
  - Have you used the least-privilege principle? How?


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
noah-paige authored Mar 12, 2024
1 parent 445cffa commit 24fdeef
Showing 2 changed files with 16 additions and 17 deletions.
```diff
@@ -267,13 +267,8 @@ def revoke_share(cls, engine: Engine, share_uri: str):
                 share_uri,
                 ShareableType.StorageLocation.value
             )
-            existing_shared_buckets = ShareObjectRepository.check_existing_shared_items_of_type(
-                session,
-                share_uri,
-                ShareableType.S3Bucket.value
-            )
-            existing_shared_items = existing_shared_folders or existing_shared_buckets
-            log.info(f'Still remaining S3 resources shared = {existing_shared_items}')
+
+            log.info(f'Still remaining S3 folder resources shared = {existing_shared_folders}')
             if not existing_shared_folders and revoked_folders:
                 log.info("Clean up S3 access points...")
                 clean_up_folders = ProcessS3AccessPointShare.clean_up_share(
```
```diff
@@ -215,16 +215,20 @@ def clean_up_share(
             source_env_group,
             env_group,
         )
-        clean_up = clean_up_folder.delete_access_point(share=share, dataset=dataset)
-
-        if clean_up:
-            clean_up_folder.delete_target_role_access_policy(
-                share=share, dataset=dataset, target_environment=target_environment
-            )
-            if not dataset.imported or dataset.importedKmsKey:
-                clean_up_folder.delete_dataset_bucket_key_policy(dataset=dataset)
-
-        return True
+        log.info("##### Cleaning up folder share resources #######")
+        success = True
+        try:
+            clean_up = clean_up_folder.delete_access_point(share=share, dataset=dataset)
+            if clean_up:
+                clean_up_folder.delete_target_role_access_policy(
+                    share=share, dataset=dataset, target_environment=target_environment
+                )
+                if not dataset.imported or dataset.importedKmsKey:
+                    clean_up_folder.delete_dataset_bucket_key_policy(dataset=dataset)
+        except Exception as e:
+            log.info(f"Failed to clean up folder share resources due to: {e}")
+            success = False
+        return success
 
     @classmethod
     def verify_shares(
```
