Bugfix 956 #961

Merged
merged 77 commits on Jan 11, 2024
Changes from 74 commits
77 commits
ad4ab1f
Add Additional Error Messages for KMS Key lookup on imported dataset …
noah-paige Sep 15, 2023
dbbef3c
Get Latest in main to v2m1m0 (#771)
noah-paige Sep 19, 2023
d096160
Handle Environment Import of IAM service roles (#749)
noah-paige Sep 26, 2023
a53434f
Build Compliant Names for Opensearch Resources (#750)
noah-paige Oct 5, 2023
16c7026
Merge branch 'main' into v2m1m0
dlpzx Oct 10, 2023
c61ba15
Update Lambda runtime (#782)
nikpodsh Oct 10, 2023
f84250e
Feat: limit pivot role S3 permissions (#780)
dlpzx Oct 12, 2023
7d9122d
Fix: ensure valid environments for share request and other objects cr…
dlpzx Oct 12, 2023
1801cf1
Adding configurable session timeout to IDP (#786)
manjulaK Oct 13, 2023
599fc1a
Fix: shell true semgrep (#760)
dlpzx Oct 16, 2023
b356bf2
Fix: allow to submit a share when you are both and approver and a req…
zsaltys Oct 16, 2023
793a078
feat: redirect upon creating a share request (#799)
zsaltys Oct 16, 2023
f448613
Fix: condition when there are no public subnets (#794)
lorchda Oct 18, 2023
66b9a08
feat: removing unused variable (#815)
zsaltys Oct 18, 2023
c833c26
feat: Handle Pre-filtering of tables (#811)
anushka-singh Oct 18, 2023
6cc564e
Fix Check other share exists before clean up (#769)
noah-paige Oct 18, 2023
8b7b82e
Email Notification on Share Workflow - Issue - 734 (#818)
TejasRGitHub Oct 20, 2023
48c32e5
feat: adding frontend and backend feature flags (#817)
zsaltys Oct 25, 2023
6d727e9
Feat: Refactor notifications from core to modules (#822)
dlpzx Oct 26, 2023
8ad760b
Merge branch 'main' into v2m1m0
dlpzx Oct 27, 2023
3f100b4
Feat: pivot role limit kms (#830)
dlpzx Oct 27, 2023
fb7b61b
Make hosted_zone_id optional, code update (#812)
lorchda Oct 27, 2023
b51da2c
Clean-up for v2.1 (#843)
dlpzx Oct 30, 2023
6d3c016
Merge branch 'main' into v2m1m0
dlpzx Oct 27, 2023
7912a24
Feat: pivot role limit kms (#830)
dlpzx Oct 27, 2023
55c579b
Make hosted_zone_id optional, code update (#812)
lorchda Oct 27, 2023
92d4324
Clean-up for v2.1 (#843)
dlpzx Oct 30, 2023
5fb7cf8
feat: Enabling S3 bucket share
anushka-singh Oct 31, 2023
cf9afc1
feat: Enabling S3 bucket share
anushka-singh Oct 31, 2023
ddf8623
Merge branch 'v2m1m0' of https://github.com/anushka-singh/aws-dataall…
anushka-singh Oct 31, 2023
b54860d
fix: adding missing pivot role permission to get key policy (#845)
zsaltys Oct 31, 2023
a05e548
Merge branch 'v2m1m0' into anu-s3-copy
dlpzx Oct 31, 2023
1365e92
Revert overwrites 2.
dlpzx Oct 31, 2023
bbcfbd5
Revert overwrites 3.
dlpzx Oct 31, 2023
9e8cdf1
Revert overwrites 4.
dlpzx Oct 31, 2023
5d90797
Revert overwrites 4.
dlpzx Oct 31, 2023
94be491
Revert overwrites 5.
dlpzx Oct 31, 2023
cff577f
Revert overwrites 6.
dlpzx Oct 31, 2023
5ff80fb
Revert overwrites 7.
dlpzx Oct 31, 2023
3383166
Revert overwrites 7.
dlpzx Oct 31, 2023
7ed96af
Revert overwrites 8.
dlpzx Oct 31, 2023
c051896
Revert overwrites 9.
dlpzx Oct 31, 2023
f5d62d7
Revert overwrites 10.
dlpzx Oct 31, 2023
3783a95
Revert overwrites 11.
dlpzx Oct 31, 2023
dacba14
Revert overwrites 12.
dlpzx Oct 31, 2023
3b404cd
Revert overwrites 13.
dlpzx Oct 31, 2023
5d0fe68
Fix down revision for migration script
dlpzx Oct 31, 2023
158925a
feat: Enabling S3 bucket share
anushka-singh Nov 2, 2023
d112a21
bugfix: Enabling S3 bucket share
anushka-singh Nov 3, 2023
06edb53
feat: Enabling S3 bucket share - Addressing comments on PR
anushka-singh Nov 8, 2023
f43003c
feat: Enabling S3 bucket share
anushka-singh Nov 10, 2023
4516f4d
feat: Enabling S3 bucket share - Addressing comments on PR
anushka-singh Nov 15, 2023
0f2faf7
feat: Enabling S3 bucket share - Addressing comments on PR
anushka-singh Nov 16, 2023
9b0ab34
Merge branch 'main' into bucket_share_anushka
anushka-singh Nov 16, 2023
7ab6427
feat: Enabling S3 bucket share
anushka-singh Nov 10, 2023
e251fcc
feat: Enabling S3 bucket share - Addressing comments on PR
anushka-singh Nov 15, 2023
e8bfb4b
feat: Enabling S3 bucket share - Addressing comments on PR
anushka-singh Nov 15, 2023
2ff67bc
Merge branch 'main' of https://github.com/anushka-singh/aws-dataall i…
anushka-singh Nov 16, 2023
3254260
Update share.js
anushka-singh Nov 16, 2023
eb8bf3d
Update index.js
anushka-singh Nov 16, 2023
a06838f
Merge branch 'main' of https://github.com/anushka-singh/aws-dataall
anushka-singh Dec 18, 2023
bed3d51
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Dec 19, 2023
35b1730
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Dec 27, 2023
e9debef
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 3, 2024
2501cac
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 3, 2024
530c098
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 3, 2024
e55f2cd
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 3, 2024
102f6fb
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 3, 2024
b233396
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 4, 2024
b321c3a
Bugfix:956 Dataset sharing fails with auto create pivot role enabled
anushka-singh Jan 10, 2024
475d55a
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 10, 2024
ebdf783
Bugfix#932: Investigate why some shares did not go to failed state, b…
anushka-singh Jan 10, 2024
506b7a5
Bugfix:956 Dataset sharing fails with auto create pivot role enabled
anushka-singh Jan 11, 2024
90480c1
Bugfix
anushka-singh Jan 11, 2024
a0befd2
Replace get_key_id_using_list_aliases with get_key_id
noah-paige Jan 11, 2024
192bcd8
Bugfix:956 Dataset sharing fails with auto create pivot role enabled
anushka-singh Jan 11, 2024
4854af3
Merge remote-tracking branch 'origin/bugfix-956' into bugfix-956
anushka-singh Jan 11, 2024
2 changes: 1 addition & 1 deletion backend/dataall/core/stacks/db/enums.py
@@ -6,7 +6,7 @@ class StackStatus(Enum):
CREATE_COMPLETE = 'CREATE_COMPLETE'
CREATE_FAILED = 'CREATE_FAILED'
DELETE_IN_PROGRESS = 'DELETE_IN_PROGRESS'
- DELETE_COMPLETE = 'DELETE_FAILED'
+ DELETE_COMPLETE = 'DELETE_COMPLETE'
DELETE_FAILED = 'DELETE_FAILED'
ROLLBACK_IN_PROGRESS = 'ROLLBACK_IN_PROGRESS'
ROLLBACK_COMPLETE = 'ROLLBACK_COMPLETE'
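The one-line fix above is easy to miss: the enum previously mapped `DELETE_COMPLETE` to the string `'DELETE_FAILED'`, so a completed stack deletion was reported with the failure status. A minimal sketch of the corrected mapping (not the full `StackStatus` class):

```python
from enum import Enum


class StackStatus(Enum):
    # After the fix, each member's value matches its name
    DELETE_COMPLETE = 'DELETE_COMPLETE'
    DELETE_FAILED = 'DELETE_FAILED'


# A completed deletion no longer carries the failure status string
assert StackStatus.DELETE_COMPLETE.value == 'DELETE_COMPLETE'
assert StackStatus.DELETE_COMPLETE.value != StackStatus.DELETE_FAILED.value
```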
18 changes: 18 additions & 0 deletions backend/dataall/modules/dataset_sharing/aws/kms_client.py
@@ -55,6 +55,24 @@ def get_key_id(self, key_alias: str):
else:
return response['KeyMetadata']['KeyId']

def get_key_id_using_list_aliases(self, key_alias: str):
try:
key_id = None
paginator = self._client.get_paginator('list_aliases')
for page in paginator.paginate():
key_aliases = [alias["AliasName"] for alias in page['Aliases']]
if key_alias in key_aliases:
# Retrieve the key_id corresponding to the matching key_alias
key_id = [alias["TargetKeyId"] for alias in page['Aliases'] if alias["AliasName"] == key_alias][0]
break
except ClientError as e:
if e.response['Error']['Code'] == 'AccessDenied':
raise Exception(f'Data.all Environment Pivot Role does not have kms:ListAliases Permission for key {key_alias}: {e}')
log.error(f'Failed to get kms key id of {key_alias}: {e}')
return None
else:
return key_id

def check_key_exists(self, key_alias: str):
try:
key_exist = False
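The new `get_key_id_using_list_aliases` resolves an alias by paginating `kms:ListAliases` instead of describing the key through the alias, so the pivot role only needs the `ListAliases` permission on imported keys. The per-page lookup can be isolated as a pure helper (hypothetical name, for illustration only):

```python
def find_target_key_id(aliases, key_alias):
    """Return the TargetKeyId matching key_alias from one ListAliases page, or None."""
    for alias in aliases:
        if alias.get("AliasName") == key_alias:
            return alias.get("TargetKeyId")
    return None


# Shape mirrors one page of a boto3 kms list_aliases response
page = [
    {"AliasName": "alias/other", "TargetKeyId": "1111"},
    {"AliasName": "alias/my-dataset-key", "TargetKeyId": "2222"},
]
assert find_target_key_id(page, "alias/my-dataset-key") == "2222"
assert find_target_key_id(page, "alias/missing") is None
```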
@@ -19,11 +19,9 @@ def trigger_table_sharing_failure_alarm(
target_environment: Environment,
):
log.info('Triggering share failure alarm...')
- subject = (
-     f'ALARM: DATAALL Table {table.GlueTableName} Sharing Failure Notification'
- )
+ subject = f'Data.all Share Failure for Table {table.GlueTableName}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the table {table.GlueTableName} with Lake Formation.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the table {table.GlueTableName} with Lake Formation.

Alarm Details:
- State Change: OK -> ALARM
@@ -51,9 +49,9 @@ def trigger_revoke_table_sharing_failure_alarm(
target_environment: Environment,
):
log.info('Triggering share failure alarm...')
- subject = f'ALARM: DATAALL Table {table.GlueTableName} Revoking LF permissions Failure Notification'
+ subject = f'Data.all Revoke LF Permissions Failure for Table {table.GlueTableName}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to revoke Lake Formation permissions for table {table.GlueTableName} with Lake Formation.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to revoke Lake Formation permissions for table {table.GlueTableName} with Lake Formation.

Alarm Details:
- State Change: OK -> ALARM
@@ -76,11 +74,9 @@ def trigger_revoke_table_sharing_failure_alarm(

def trigger_dataset_sync_failure_alarm(self, dataset: Dataset, error: str):
log.info(f'Triggering dataset {dataset.name} tables sync failure alarm...')
- subject = (
-     f'ALARM: DATAALL Dataset {dataset.name} Tables Sync Failure Notification'
- )
+ subject = f'Data.all Dataset Tables Sync Failure for {dataset.name}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to synchronize Dataset {dataset.name} tables from AWS Glue to the Search Catalog.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to synchronize Dataset {dataset.name} tables from AWS Glue to the Search Catalog.

Alarm Details:
- State Change: OK -> ALARM
@@ -101,11 +97,9 @@ def trigger_folder_sharing_failure_alarm(
target_environment: Environment,
):
log.info('Triggering share failure alarm...')
- subject = (
-     f'ALARM: DATAALL Folder {folder.S3Prefix} Sharing Failure Notification'
- )
+ subject = f'Data.all Folder Share Failure for {folder.S3Prefix}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the folder {folder.S3Prefix} with S3 Access Point.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the folder {folder.S3Prefix} with S3 Access Point.
Alarm Details:
- State Change: OK -> ALARM
- Reason for State Change: S3 Folder sharing failure
@@ -129,11 +123,9 @@ def trigger_revoke_folder_sharing_failure_alarm(
target_environment: Environment,
):
log.info('Triggering share failure alarm...')
- subject = (
-     f'ALARM: DATAALL Folder {folder.S3Prefix} Sharing Revoke Failure Notification'
- )
+ subject = f'Data.all Folder Share Revoke Failure for {folder.S3Prefix}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the folder {folder.S3Prefix} with S3 Access Point.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to share the folder {folder.S3Prefix} with S3 Access Point.
Alarm Details:
- State Change: OK -> ALARM
- Reason for State Change: S3 Folder sharing Revoke failure
@@ -173,11 +165,9 @@ def handle_bucket_sharing_failure(self, bucket: DatasetBucket,
target_environment: Environment,
alarm_type: str):
log.info(f'Triggering {alarm_type} failure alarm...')
- subject = (
-     f'ALARM: DATAALL S3 Bucket {bucket.S3BucketName} {alarm_type} Failure Notification'
- )
+ subject = f'Data.all S3 Bucket Failure for {bucket.S3BucketName} {alarm_type}'[:100]
message = f"""
- You are receiving this email because your DATAALL {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to {alarm_type} the S3 Bucket {bucket.S3BucketName}.
+ You are receiving this email because your Data.all {self.envname} environment in the {self.region} region has entered the ALARM state, because it failed to {alarm_type} the S3 Bucket {bucket.S3BucketName}.
Alarm Details:
- State Change: OK -> ALARM
- Reason for State Change: S3 Bucket {alarm_type} failure
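Each rewritten subject line is sliced with `[:100]`, which keeps it within the 100-character limit Amazon SNS enforces on message subjects even when a resource name is very long. A quick illustration of the truncation:

```python
# Deliberately long stand-in for a Glue table name
table_name = "x" * 200

# Same pattern as the rewritten alarm subjects
subject = f'Data.all Share Failure for Table {table_name}'[:100]

assert len(subject) <= 100
assert subject.startswith('Data.all Share Failure for Table ')
```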
@@ -23,6 +23,7 @@
IAM_ACCESS_POINT_ROLE_POLICY = "targetDatasetAccessControlPolicy"
DATAALL_ALLOW_OWNER_SID = "AllowAllToAdmin"
DATAALL_ACCESS_POINT_KMS_DECRYPT_SID = "DataAll-Access-Point-KMS-Decrypt"
DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID = "KMSPivotRolePermissions"


class S3AccessPointShareManager:
@@ -331,14 +332,25 @@ def update_dataset_bucket_key_policy(self):
)
key_alias = f"alias/{self.dataset.KmsAlias}"
kms_client = KmsClient(self.source_account_id, self.source_environment.region)
- kms_key_id = kms_client.get_key_id(key_alias)
+ kms_key_id = kms_client.get_key_id_using_list_aliases(key_alias)
existing_policy = kms_client.get_key_policy(kms_key_id)
target_requester_arn = IAM.get_role_arn_by_name(self.target_account_id, self.target_requester_IAMRoleName)
pivot_role_name = SessionHelper.get_delegation_role_name()

if existing_policy:
existing_policy = json.loads(existing_policy)
counter = count()
statements = {item.get("Sid", next(counter)): item for item in existing_policy.get("Statement", {})}

if DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID in statements.keys():
logger.info(
f'KMS key policy already contains share statement {DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}')
else:
logger.info(
f'KMS key policy does not contain statement {DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}, generating a new one')
statements[DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID] \
= self.generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, self.dataset_account_id)

if DATAALL_ACCESS_POINT_KMS_DECRYPT_SID in statements.keys():
logger.info(
f'KMS key policy contains share statement {DATAALL_ACCESS_POINT_KMS_DECRYPT_SID}, '
@@ -353,12 +365,14 @@ def update_dataset_bucket_key_policy(self):
statements[DATAALL_ACCESS_POINT_KMS_DECRYPT_SID] = (self.generate_default_kms_decrypt_policy_statement
(target_requester_arn))
existing_policy["Statement"] = list(statements.values())

else:
logger.info('KMS key policy does not contain any statements, generating a new one')
existing_policy = {
"Version": "2012-10-17",
"Statement": [
self.generate_default_kms_decrypt_policy_statement(target_requester_arn)
self.generate_default_kms_decrypt_policy_statement(target_requester_arn),
self.generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, self.dataset_account_id)
]
}
kms_client.put_key_policy(
@@ -474,7 +488,7 @@ def delete_dataset_bucket_key_policy(
)
key_alias = f"alias/{dataset.KmsAlias}"
kms_client = KmsClient(dataset.AwsAccountId, dataset.region)
- kms_key_id = kms_client.get_key_id(key_alias)
+ kms_key_id = kms_client.get_key_id_using_list_aliases(key_alias)
existing_policy = json.loads(kms_client.get_key_policy(kms_key_id))
target_requester_arn = IAM.get_role_arn_by_name(self.target_account_id, self.target_requester_IAMRoleName)
counter = count()
@@ -510,7 +524,7 @@ def handle_share_failure(self, error: Exception) -> None:
self.target_folder, self.share, self.target_environment
)

- def handle_revoke_failure(self, error: Exception) -> None:
+ def handle_revoke_failure(self, error: Exception) -> bool:
"""
Handles share failure by raising an alarm to alarmsTopic
Returns
@@ -526,6 +540,7 @@ def handle_revoke_failure(self, error: Exception) -> None:
DatasetAlarmService().trigger_revoke_folder_sharing_failure_alarm(
self.target_folder, self.share, self.target_environment
)
return True

@staticmethod
def generate_default_kms_decrypt_policy_statement(target_requester_arn):
@@ -541,6 +556,29 @@ def generate_default_kms_decrypt_policy_statement(target_requester_arn):
"Resource": "*"
}

@staticmethod
def generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, dataset_account_id):
return {
"Sid": f"{DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}",
"Effect": "Allow",
"Principal": {
"AWS": [
f"arn:aws:iam::{dataset_account_id}:role/{pivot_role_name}"
]
},
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey*",
"kms:PutKeyPolicy",
"kms:GetKeyPolicy",
"kms:ReEncrypt*",
"kms:TagResource",
"kms:UntagResource"
],
"Resource": "*"
}

def add_target_arn_to_statement_principal(self, statement, target_requester_arn):
principal_list = self.get_principal_list(statement)
if f"{target_requester_arn}" not in principal_list:
@@ -20,6 +20,7 @@
DATAALL_ALLOW_OWNER_SID = "AllowAllToAdmin"
IAM_S3BUCKET_ROLE_POLICY = "dataall-targetDatasetS3Bucket-AccessControlPolicy"
DATAALL_BUCKET_KMS_DECRYPT_SID = "DataAll-Bucket-KMS-Decrypt"
DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID = "KMSPivotRolePermissions"


class S3BucketShareManager:
@@ -271,13 +272,25 @@ def grant_dataset_bucket_key_policy(self):
)
key_alias = f"alias/{self.target_bucket.KmsAlias}"
kms_client = KmsClient(self.source_account_id, self.source_environment.region)
- kms_key_id = kms_client.get_key_id(key_alias)
+ kms_key_id = kms_client.get_key_id_using_list_aliases(key_alias)
existing_policy = kms_client.get_key_policy(kms_key_id)
target_requester_arn = IAM.get_role_arn_by_name(self.target_account_id, self.target_requester_IAMRoleName)
pivot_role_name = SessionHelper.get_delegation_role_name()

if existing_policy:
existing_policy = json.loads(existing_policy)
counter = count()
statements = {item.get("Sid", next(counter)): item for item in existing_policy.get("Statement", {})}

if DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID in statements.keys():
logger.info(
f'KMS key policy already contains share statement {DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}')
else:
logger.info(
f'KMS key policy does not contain statement {DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}, generating a new one')
statements[DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID] \
= self.generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, self.source_account_id)

if DATAALL_BUCKET_KMS_DECRYPT_SID in statements.keys():
logger.info(
f'KMS key policy contains share statement {DATAALL_BUCKET_KMS_DECRYPT_SID}, updating the current one')
@@ -289,12 +302,14 @@ def grant_dataset_bucket_key_policy(self):
statements[DATAALL_BUCKET_KMS_DECRYPT_SID] = self.generate_default_kms_decrypt_policy_statement(
target_requester_arn)
existing_policy["Statement"] = list(statements.values())

else:
logger.info('KMS key policy does not contain any statements, generating a new one')
existing_policy = {
"Version": "2012-10-17",
"Statement": [
self.generate_default_kms_decrypt_policy_statement(target_requester_arn)
self.generate_default_kms_decrypt_policy_statement(target_requester_arn),
self.generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, self.source_account_id)
]
}
kms_client.put_key_policy(
@@ -394,7 +409,7 @@ def delete_target_role_bucket_key_policy(
)
key_alias = f"alias/{target_bucket.KmsAlias}"
kms_client = KmsClient(target_bucket.AwsAccountId, target_bucket.region)
- kms_key_id = kms_client.get_key_id(key_alias)
+ kms_key_id = kms_client.get_key_id_using_list_aliases(key_alias)
existing_policy = json.loads(kms_client.get_key_policy(kms_key_id))
target_requester_arn = IAM.get_role_arn_by_name(self.target_account_id, self.target_requester_IAMRoleName)
counter = count()
@@ -444,7 +459,7 @@ def handle_revoke_failure(self, error: Exception) -> bool:
f'with target account {self.target_environment.AwsAccountId}/{self.target_environment.region} '
f'due to: {error}'
)
- DatasetAlarmService().trigger_revoke_folder_sharing_failure_alarm(
+ DatasetAlarmService().trigger_revoke_s3_bucket_sharing_failure_alarm(
self.target_bucket, self.share, self.target_environment
)
return True
@@ -482,3 +497,26 @@ def generate_default_kms_decrypt_policy_statement(target_requester_arn):
"Action": "kms:Decrypt",
"Resource": "*"
}

@staticmethod
def generate_enable_pivot_role_permissions_policy_statement(pivot_role_name, source_account_id):
return {
"Sid": f"{DATAALL_KMS_PIVOT_ROLE_PERMISSIONS_SID}",
"Effect": "Allow",
"Principal": {
"AWS": [
f"arn:aws:iam::{source_account_id}:role/{pivot_role_name}"
]
},
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey*",
"kms:PutKeyPolicy",
"kms:GetKeyPolicy",
"kms:ReEncrypt*",
"kms:TagResource",
"kms:UntagResource",
],
"Resource": "*"
}
@@ -111,11 +111,14 @@ def process_approved_shares(self) -> bool:
shared_item_SM.update_state_single_item(self.session, share_item, new_state)

except Exception as e:
- self.handle_share_failure(table=table, share_item=share_item, error=e)
+ # must run first to ensure state transitions to failed
new_state = shared_item_SM.run_transition(ShareItemActions.Failure.value)
shared_item_SM.update_state_single_item(self.session, share_item, new_state)
success = False

+ # statements which can throw exceptions but are not critical
+ self.handle_share_failure(table=table, share_item=share_item, error=e)

return success

def process_revoked_shares(self) -> bool:
@@ -178,9 +181,12 @@ def process_revoked_shares(self) -> bool:
revoked_item_SM.update_state_single_item(self.session, share_item, new_state)

except Exception as e:
- self.handle_revoke_failure(share_item=share_item, table=table, error=e)
+ # must run first to ensure state transitions to failed
new_state = revoked_item_SM.run_transition(ShareItemActions.Failure.value)
revoked_item_SM.update_state_single_item(self.session, share_item, new_state)
success = False

+ # statements which can throw exceptions but are not critical
+ self.handle_revoke_failure(share_item=share_item, table=table, error=e)

return success
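The reordering in both `except` blocks addresses Bugfix#932: the state transition now runs before the alarm call, so a failing notification can no longer prevent the share item from reaching the failed state. A minimal sketch of the pattern with stand-in functions:

```python
def fail_item(item, transition, notify, error):
    """Mirror of the reordered except-block: state change first, alarm second."""
    transition(item)      # critical: must run first so the item reaches 'failed'
    notify(item, error)   # non-critical: may itself raise; state already updated


def transition(item):
    item["state"] = "failed"


def flaky_notify(item, error):
    # Simulates the alarm/notification call blowing up
    raise ConnectionError("alarm topic unreachable")


item = {"state": "in_progress"}
try:
    fail_item(item, transition, flaky_notify, RuntimeError("share failed"))
except ConnectionError:
    pass

# Even though the notification failed, the state transition happened
assert item["state"] == "failed"
```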