S3 settings functional tests #2837
Conversation
8abb1a6 to 371673a
Sure thing 😃 I'll try to review ASAP
First of all: kudos to the structure of your functional tests. Super comfortable to read and understand. Even though most of them look like they could be unit tests 😄
3 of the 4 failing tests make total sense. The one with the .secure property I don't get, because I'm not familiar with this feature and what the intention is.
My question is: do you have the capacity to add the missing checks to make the tests green?
assert exception.responseBody == "Account $accountId is inactive"
}

def "PBS should throw exception when account have different id inside json in S3 storage"() {
This test makes sense, and the implementation is currently missing this check.
Done
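The missing check discussed in this thread could look roughly like the following. This is a hypothetical sketch, not the actual PBS implementation; the class and method names are illustrative:

```java
// Hypothetical sketch: after fetching an account document from S3, compare the
// id found inside the JSON with the id that was requested, and fail fast on a
// mismatch instead of silently serving a wrong account.
class StoredAccountIdCheck {
    static void validate(String requestedId, String idInsideJson) {
        if (!requestedId.equals(idInsideJson)) {
            throw new IllegalArgumentException(
                    "Account with requested id=" + requestedId
                            + " does not match id=" + idInsideJson + " inside the stored file");
        }
    }
}
```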
assert bidderRequest.regs?.gdpr == ampStoredRequest.regs.ext.gdpr
}

def "PBS should throw exception when trying to take parameters from the stored request on S3 service with invalid id in file"() {
This test makes sense and currently fails in the pipeline.
assert bidderRequest.imp[0].secure == secureStoredRequest
}

def "PBS should throw exception when trying populate imp[0].secure from imp stored request on S3 service with invalid impId in file"() {
Suggested change:
- def "PBS should throw exception when trying populate imp[0].secure from imp stored request on S3 service with invalid impId in file"() {
+ def "PBS should throw exception when trying to populate imp[0].secure from imp stored request on S3 service with invalid impId in file"() {
This test I don't understand. What is in imp[0].secure that makes the test fail? Is this another layer of checksum/validation to ensure the integrity of the configuration?
Okay, got it: https://docs.prebid.org/prebid-server/endpoints/openrtb2/pbs-endpoint-auction.html#determining-bid-security-httphttps
It's a simple flag that forces https or not.
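For context from the linked docs: imp[].secure is the standard OpenRTB flag, where 1 means the impression requires secure HTTPS creative assets and markup, and 0 means it does not. A minimal bid request fragment (values illustrative):

```json
{
  "imp": [
    {
      "id": "someImpId",
      "secure": 1
    }
  ]
}
```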
assert !bidder.getRequestCount(bidRequest.id)
}

def "PBS should throw request format exception when stored auction response have different id inside S3 file"() {
It is probably sane to require that the filename and the ID match.
this.localStackContainer = new LocalStackContainer(DockerImageName.parse("localstack/localstack:s3-latest"))
        .withNetwork(Dependencies.network)
        .withServices(LocalStackContainer.Service.S3)
localStackContainer.start()
Creation of the LocalStackContainer should be in Dependencies.groovy.
done
        localStackContainer.getAccessKey(), localStackContainer.getSecretKey())))
        .region(Region.of(localStackContainer.getRegion()))
        .build()
createBucket(bucketName)
Would be great to remove the call to createBucket(...) from the constructor.
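The suggested refactor could be sketched as follows. This is a hypothetical illustration (names and structure are not the actual test code): the constructor only stores configuration, and bucket creation moves to an explicit init step that the test setup calls after construction.

```java
// Hypothetical sketch of the refactor: no network side effects in the
// constructor; resource creation happens in an explicit init() step.
class S3ServiceSketch {
    final String bucketName;
    boolean bucketCreated = false;

    S3ServiceSketch(String bucketName) {
        this.bucketName = bucketName; // only store configuration here
    }

    void init() {
        // the real code would call the S3 client's createBucket(...) here
        bucketCreated = true;
    }
}
```

This keeps construction cheap and makes the point at which the bucket is created explicit and controllable from the test lifecycle.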
S3Service(String bucketName) {
    this.localStackContainer = new LocalStackContainer(DockerImageName.parse("localstack/localstack:s3-latest"))
            .withNetwork(Dependencies.network)
            .withServices(LocalStackContainer.Service.S3)
    localStackContainer.start()
    s3PbsService = S3Client.builder()
            .endpointOverride(localStackContainer.getEndpointOverride(LocalStackContainer.Service.S3))
            .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(
                    localStackContainer.getAccessKey(), localStackContainer.getSecretKey())))
            .region(Region.of(localStackContainer.getRegion()))
            .build()
    createBucket(bucketName)
Better formatting:
S3Service(String bucketName) {
    this.localStackContainer = new LocalStackContainer(DockerImageName.parse("localstack/localstack:s3-latest"))
            .withNetwork(Dependencies.network)
            .withServices(LocalStackContainer.Service.S3)
    localStackContainer.start()
    s3PbsService = S3Client.builder()
            .endpointOverride(localStackContainer.getEndpointOverride(LocalStackContainer.Service.S3))
            .credentialsProvider(
                    StaticCredentialsProvider.create(
                            AwsBasicCredentials.create(
                                    localStackContainer.getAccessKey(),
                                    localStackContainer.getSecretKey())))
            .region(Region.of(localStackContainer.getRegion()))
            .build()
    createBucket(bucketName)
done
protected static final String INVALID_FILE_BODY = 'INVALID'
protected static final def DEFAULT_BUCKET = PBSUtils.randomString.toLowerCase()

protected static final S3Service S3_SERVICE = new S3Service(DEFAULT_BUCKET)
It looks weird when we call a method on a constant, as in S3_SERVICE.uploadStoredResponse(...).
In my opinion it would be better to move to lower-case s3Service.
done
class StorageBaseSpec extends BaseSpec {

    protected static final String INVALID_FILE_BODY = 'INVALID'
    protected static final def DEFAULT_BUCKET = PBSUtils.randomString.toLowerCase()
Pretty minor: for consistency, change def to String.
done
protected static Map<String, String> s3StorageConfig = [
        'settings.s3.accessKeyId'         : "${S3_SERVICE.accessKeyId}".toString(),
        'settings.s3.secretAccessKey'     : "${S3_SERVICE.secretKeyId}".toString(),
        'settings.s3.endpoint'            : "${S3_SERVICE.endpoint}".toString(),
        'settings.s3.bucket'              : "${DEFAULT_BUCKET}".toString(),
        'settings.s3.accounts-dir'        : "${S3Service.DEFAULT_ACCOUNT_DIR}".toString(),
        'settings.s3.stored-imps-dir'     : "${S3Service.DEFAULT_IMPS_DIR}".toString(),
        'settings.s3.stored-requests-dir' : "${S3Service.DEFAULT_REQUEST_DIR}".toString(),
        'settings.s3.stored-responses-dir': "${S3Service.DEFAULT_RESPONSE_DIR}".toString(),
]
Please remove the redundant .toString() calls.
Suggested change:

protected static Map<String, String> s3StorageConfig = [
        'settings.s3.accessKeyId'         : s3Service.accessKeyId,
        'settings.s3.secretAccessKey'     : s3Service.secretKeyId,
        'settings.s3.endpoint'            : s3Service.endpoint,
        'settings.s3.bucket'              : DEFAULT_BUCKET,
        'settings.s3.accounts-dir'        : S3Service.DEFAULT_ACCOUNT_DIR,
        'settings.s3.stored-imps-dir'     : S3Service.DEFAULT_IMPS_DIR,
        'settings.s3.stored-requests-dir' : S3Service.DEFAULT_REQUEST_DIR,
        'settings.s3.stored-responses-dir': S3Service.DEFAULT_RESPONSE_DIR,
]
done
assert exception.responseBody == "Account $accountId is inactive"
}

def "PBS should throw exception when account have different id inside json in S3 storage"() {
Suggested name: "PBS should throw exception when account id doesn't match the bid request account id".
Please check the other similar cases as well.
done
@muuki88 It's possible to add the missing checks here to make the tests pass. However, for consistency, I suggest adding these checks in your pull request. Once that's done, I can merge them with the current PR.
@SerhiiNahornyi, @marki1an I need to push back on some of the failing tests, as they are currently very hard to implement. The StoredDataResult model contains the results untyped, so this validation is currently not available. I would argue that these kinds of validations should be done in a generic way a level higher. Could you remove the failing tests?
@muuki88 Hi there.
@muuki88 sure, marked
@osulzhenko anything that blocks this?
@muuki88 Not from my side. You need to get approval on your original PR #2733
The workflow still fails on S3 unit tests:
…nctional-tests # Conflicts: # pom.xml # src/test/groovy/org/prebid/server/functional/testcontainers/Dependencies.groovy
…ings/copy' into s3-settings-functional-tests
No description provided.