
[Resolve #1307] Fixing Connection Manager bug #1308

Merged

Conversation


@jfalkenstein jfalkenstein commented Feb 18, 2023

This resolves a bug discovered in the ConnectionManager from #1307.

The specific situation is when ConnectionManager.call is invoked with a stack_name and explicitly passes None for each of profile, region, and sceptre_role. The overhaul of the ConnectionManager in v4 accidentally omitted this edge case, which happens to specifically affect the !stack_output_external resolver when it's run without an AWS_DEFAULT_REGION environment variable and without an AWS_PROFILE environment variable set to a profile that defines a region.
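
To illustrate the failing pattern, here's a minimal sketch (the service, command, and stack name are made up for demonstration; the keyword arguments mirror the ConnectionManager.call signature exercised by the tests in this PR):

    # Sketch of the edge case: stack_name is provided (and not yet cached),
    # while profile/region/sceptre_role are passed explicitly as None rather
    # than omitted, so the "default to the Stack's configuration" behavior
    # never kicked in.
    connection_manager.call(
        "cloudformation",
        "describe_stacks",
        stack_name="external-stack",  # hypothetical stack name
        profile=None,
        region=None,
        sceptre_role=None,
    )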

PR Checklist

  • Wrote a good commit message & description [see guide below].
  • Commit message starts with [Resolve #issue-number].
  • Added/Updated unit tests.
  • Added/Updated integration tests (if applicable).
  • All unit tests (make test) are passing.
  • Used the same coding conventions as the rest of the project.
  • The new code passes pre-commit validations (pre-commit run --all-files).
  • The PR relates to only one subject with a clear title and description in grammatically correct, complete sentences.

Approver/Reviewer Checklist

  • Before merge, squash related commits.

Other Information

Guide to writing a good commit

Comment on lines -30 to -39
config_path = os.path.join(context.sceptre_dir, "config", "9/B" + ".yaml")

with open(config_path, "r") as file:
    file_data = file.read()

file_data = file_data.replace("{project_code}", context.project_code)

with open(config_path, "w") as file:
    file.write(file_data)

Contributor Author

This could be completely removed because I'm now using Jinja syntax to reference the project_code properly. We should have done that all along... In any case, this chunk of integration-test code has been a thorn in the side of anyone running integration tests locally, since it actually produced changes in the files that couldn't be committed. With this gone, we're in a much better state now.
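
To show the idea, a quick sketch using the jinja2 package directly (an illustration of render-time substitution, not Sceptre's actual template-handling code):

    # Sketch: resolve "{{ project_code }}" in memory at render time instead of
    # rewriting the fixture file on disk with str.replace(), as the removed
    # code above did.
    from jinja2 import Template

    source = 'DependentStackName: !stack_output_external "{{ project_code }}-outputter::StackName"'
    print(Template(source).render(project_code="my-project"))
    # DependentStackName: !stack_output_external "my-project-outputter::StackName"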

Comment on lines +9 to +13
Scenario: launch a stack referencing the external output of an existing stack without explicit region or profile
Given stack "external-stack-output-stack-output/outputter" exists using "dependencies/independent_template.json"
And stack "external-stack-output/resolver-no-profile-region" does not exist
When the user launches stack "external-stack-output/resolver-no-profile-region"
Then stack "external-stack-output/resolver-no-profile-region" exists in "CREATE_COMPLETE" state
Contributor Author

We needed this test, since this was what was effectively broken.

@@ -1,5 +0,0 @@
template:
Contributor Author

I've renamed this StackGroup to "external-stack-output" because "9" was never a good name for a test fixture. And I've renamed this file to "resolver-with-profile-and-region" to properly describe what was being tested.

@@ -0,0 +1,5 @@
template:
Contributor Author

This was the new testcase we needed, using !stack_output_external with no explicit profile/region/role.



parameters:
DependentStackName: !stack_output_external "{{project_code}}-external-stack-output-outputter::StackName {{environment_variable.AWS_PROFILE|default('default')}}::eu-west-1"
Contributor Author

This is what used to be 9/B.

The changes here fix issues running the integration tests locally:

  1. We use Jinja syntax to reference the project code instead of what we used to do, which was to modify this template before tests were executed. That was weird and unnecessary, and it produced a diff every time you ran these tests locally.
  2. Before, it assumed you had a profile called "default" set up. But when running this locally, that might not actually be the profile you're using. So now it uses your AWS_PROFILE environment variable if it's set; otherwise, it defaults to "default".
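
To illustrate point 2, here's a tiny sketch with the jinja2 package of how the default() filter behaves (my own snippet, not the project's code; Sceptre exposes environment variables to templates as environment_variable):

    # Sketch: default() falls back to "default" when AWS_PROFILE is not set
    # in the mapping passed to the template.
    import os
    from jinja2 import Template

    expr = "{{ environment_variable.AWS_PROFILE | default('default') }}"
    print(Template(expr).render(environment_variable=os.environ))
    # prints your AWS_PROFILE if set; otherwise "default"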

Comment on lines -363 to -368
sceptre_context = SceptreContext(
    command_path=stack_name + ".yaml", project_path=context.sceptre_dir
)

sceptre_plan = SceptrePlan(sceptre_context)
status = sceptre_plan.get_status()
Contributor Author

We never actually used these lines of code.

Comment on lines +186 to +189
# For historical reasons, if all three values are "None", that means we default to the
# Stack's configuration.
if (profile, region, sceptre_role) == (None, None, None):
    profile, region, sceptre_role = self.profile, self.region, self.sceptre_role
Contributor Author

This is the actual fix right here. When I changed the ConnectionManager, I accidentally omitted coverage of this situation.

Before, the logic was:

  • If the profile, region, and role were all None, then...
    • if the stack_name was cached...
      • use the cached profile, region, and role by that cached stack name
    • otherwise, use THIS stack's profile, region, and role
  • Otherwise, use the explicitly specified profile, region and role

After v4 was released, the logic was:

  • If the stack_name was cached...
    • if the profile, region, and role were all None, then...
      • use the cached profile, region and role
    • otherwise, use the specified profile, region, and role, using the cached value when no value has been specified
      • This was a gap in our old logic, so this change was intentional and fixed some things.
  • otherwise, use the specified profile, region, and role, using the current stack's value when no value has been specified

The gap that I found, which was causing some bugs, was when the stack_name was not cached but the profile, region, and role were passed as None. It turns out the !stack_output_external resolver would trigger this exact situation. But it wouldn't actually blow up when the user had a profile with a region, or had a default region specified as an environment variable.

With this fix, the logic is now as follows (sketched in code after this list):

  • If the stack_name was cached...
    • if the profile, region, and role were all None, then...
      • use the cached profile, region and role
    • otherwise, use the specified profile, region, and role, using the cached value when no value has been specified
      • This was a gap in our old logic, so this change was intentional and fixed some things.
  • otherwise...
    • If the profile, region, and role are all None, then...
      • Use the current_stack's profile, region, and role
    • Otherwise, use the specified profile, region, and role, using the current stack's value when no value has been specified
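
To make that concrete, here's a minimal Python sketch of the decision tree above (my own simplified pseudocode, not the literal ConnectionManager source; _cached_settings is a hypothetical stand-in for the internal cache):

    # Sketch of the post-fix resolution order described in the lists above.
    def resolve_settings(self, stack_name, profile, region, sceptre_role):
        if stack_name in self._cached_settings:
            cached = self._cached_settings[stack_name]
            if (profile, region, sceptre_role) == (None, None, None):
                return cached.profile, cached.region, cached.sceptre_role
            # Use cached values only for the settings left unspecified.
            return (
                profile or cached.profile,
                region or cached.region,
                sceptre_role or cached.sceptre_role,
            )
        # Not cached: this branch held the v4 gap.
        if (profile, region, sceptre_role) == (None, None, None):
            # The fix: fall back to THIS stack's configuration.
            return self.profile, self.region, self.sceptre_role
        return (
            profile or self.profile,
            region or self.region,
            sceptre_role or self.sceptre_role,
        )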

Contributor

Great write-up!

Comment on lines +486 to +508
def test_call__stack_name_set_not_cached__profile_region_and_role_are_none__uses_current_stack_settings(
    self,
):
    service = "s3"
    command = "list_buckets"
    instance_profile = "profile"
    instance_role = "role"
    stack_name = "target"
    self.set_connection_manager_vars(instance_profile, self.region, instance_role)

    expected_client = self.set_up_expected_client(
        service, stack_name, instance_profile, self.region, instance_role
    )

    self.connection_manager.call(
        service,
        command,
        stack_name=stack_name,
        profile=None,
        region=None,
        sceptre_role=None,
    )
    expected_client.list_buckets.assert_any_call()
Contributor Author

This test, in particular, fails without my fix and passes now.

@jfalkenstein jfalkenstein marked this pull request as ready for review February 18, 2023 16:52
Contributor

@dboitnot dboitnot left a comment

Looks good.

@jfalkenstein jfalkenstein merged commit 4576fbe into Sceptre:master Feb 18, 2023
@jfalkenstein jfalkenstein deleted the 1307-fixing-external-stack-role branch February 18, 2023 18:22
@jfalkenstein jfalkenstein mentioned this pull request Feb 18, 2023
jfalkenstein added a commit that referenced this pull request Feb 20, 2023
## 4.0.2 (2023.02.20)
### Fixed
 - [Resolve #1307] Fixing Connection Manager bug (#1308)