Don't list buckets in s3 tests #318
Conversation
The tests used a ListBuckets operation, which requires privileges that I'm not comfortable giving to Travis and which aren't actually necessary. We know what the bucket name is, so we don't need to scan each and every bucket; we can just open the one we want. Also removed an if/then on test bucket creation which caused race conditions.
The s3 tests were deleting and creating the same bucket over and over, which isn't necessary: we now create the bucket once before the tests run and delete it after they're done. This is a little faster and appears to reduce the number of race conditions that we hit.
You shouldn't have to give any of your credentials to Travis. My understanding is that it uses our credentials, specified in the env.global.secure section of the .travis.yml file. Could you please confirm?
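For context, the section being referred to looks roughly like this; the values below are placeholders, not the project's real encrypted strings:

```yaml
# .travis.yml fragment (sketch). Entries like these are typically
# produced with: travis encrypt AWS_ACCESS_KEY_ID=... --add env.global
env:
  global:
    - secure: "<encrypted AWS_ACCESS_KEY_ID=...>"
    - secure: "<encrypted AWS_SECRET_ACCESS_KEY=...>"
```

Each `secure:` value is encrypted against a repository-specific key, which is why (as noted below) a fork's Travis project cannot decrypt the upstream project's credentials.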
Thank you for your contribution. Left you some comments.
This is true only when the tests run in the context of the Travis RaRe-Technologies/smart_open project. Each project has its own unique private/public key pair. I wanted to be able to trigger builds of my fork, so I set up a Travis project for the fork, which means its private key is different. I can write a wiki page explaining how to do that if you'd like; it's not hard once you figure out what permissions are needed.
piskvorky#318 (review)
- No more global "s3" handle: allocate one locally when needed
- Document setUpModule and tearDownModule
- Document when we create/delete the test bucket
- Put populate_bucket back where it was
- Call cleanup_bucket after each test
I think I did all of the things you asked for, but let me know if I missed something.
Great work. Thank you for your contribution!
While working on a previous PR, I stubbed my toe on AWS permissions. The tests needed to be able to list all of the bucket names in my account, but I'm not comfortable giving those permissions to Travis, so I modified the cleanup_bucket() method to delete the bucket directly instead of listing every bucket to decide whether to delete it or not.
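The permission difference can be sketched as follows; this is a hypothetical illustration assuming a boto3 S3 resource, not the exact smart_open helpers:

```python
# Sketch of the cleanup_bucket() change. The old version scanned every
# bucket in the account, which requires the account-wide
# s3:ListAllMyBuckets permission; the new version opens the known
# bucket by name, which needs permissions on that one bucket only.

BUCKET_NAME = 'smart-open-test-bucket'  # hypothetical name


def cleanup_bucket_old(s3):
    """Old approach: list all buckets, then pick out the test bucket."""
    for bucket in s3.buckets.all():  # triggers a ListBuckets call
        if bucket.name == BUCKET_NAME:
            bucket.objects.all().delete()


def cleanup_bucket_new(s3):
    """New approach: open the bucket directly; no account-wide listing."""
    s3.Bucket(BUCKET_NAME).objects.all().delete()
```

Because the bucket name is fixed by the test suite, the scan in the old version buys nothing; opening the bucket by name is both faster and friendlier to a narrowly-scoped IAM policy.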
After I made that change I noticed more test failures due to race conditions: tests expecting the test bucket to be there when it wasn't, or vice versa. To make this happen less often, I changed the suite so it creates the test bucket before the tests run and deletes it afterward, instead of creating and deleting it per test. This is more stable and saves 20-30s in the s3 tests when running without mocks.