Cloudfront waiters #426
Conversation
The CloudFront waiters were using the wrong JMESPath expression to check the status of the AWS resource.
self.loader = Loader(self.data_path)

# Make sure the cache is clear.
self.loader._cache.clear()
Were you seeing cache issues before you added this? On line 522 we're not passing a cache into the loader, so it should be creating a brand-new cache for each test.
No, it is there by mistake. I was just copying some lines from another test I found that uses loaders. This line is not needed.
Small question about clearing the cache, otherwise looks good.
I removed the unnecessary clearing of the cache in the test. Merging. Will update the CLI changelog as well.
Fixes: aws/aws-cli#1059
There were two changes made in this PR:
Make the JMESPath success path for the waiter take into account the outer element in the CloudFront response. Without the outer element included, the query never resolves to the success state, so the waiter keeps retrying until it fails with max retries exceeded.
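To illustrate the first change, here is a minimal sketch using the `jmespath` library against a hand-written dict shaped like CloudFront's GetInvalidation response (the field values are made up; the point is the outer `Invalidation` element):

```python
import jmespath

# Hand-written stand-in for a CloudFront GetInvalidation response.
# The values are illustrative; only the outer "Invalidation" element matters here.
response = {
    "Invalidation": {
        "Id": "EXAMPLEINVALIDATIONID",
        "Status": "Completed",
        "CreateTime": "2015-01-26T00:00:00Z",
    }
}

# Without the outer element the expression matches nothing, so the waiter
# never sees the success state and retries until it hits max attempts.
print(jmespath.search("Status", response))               # -> None

# Including the outer element resolves to the actual status.
print(jmespath.search("Invalidation.Status", response))  # -> 'Completed'
```

For reference, this is the acceptor path a caller ends up exercising when driving the waiter, e.g. via `client.get_waiter('invalidation_completed').wait(...)` in boto3.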
Doubled the max retry attempts. I tested it with a single-object invalidation, and with the current configuration the max retries were consistently reached: the invalidation took roughly 15 minutes to complete, but the waiter would only wait about 10 minutes. Doubling the retry attempts bumps the total wait to 20 minutes.
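Rough arithmetic behind those timings, assuming a fixed polling delay; the 20-second delay below is an assumption chosen to line up with the 10- and 20-minute figures above, not a value read out of the waiter config:

```python
# Assumed polling interval between waiter attempts (see note above).
delay_seconds = 20

old_max_attempts = 30                    # assumed pre-change value
new_max_attempts = old_max_attempts * 2  # the doubling made in this PR

print(delay_seconds * old_max_attempts / 60)  # 10.0 -> minutes waited before
print(delay_seconds * new_max_attempts / 60)  # 20.0 -> minutes waited after
```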
cc @jamesls @danielgtaylor