AzureDevOpsDsc: Running integration test in pipeline #9
The problem is that the integration tests cannot run for PRs, since the secret pipeline values are not added to PR builds. That means the build can only fail on the main branch, and by then it's "too late" to fix the build issue in the PR. So I'm thinking we should look at making the integration tests run manually, with each contributor setting up their own "destructible" Azure DevOps tenant.
The part that runs the integration tests is commented out, so if there is no good way to run the integration tests for PRs, this entire job should be removed (if we move to running them manually). AzureDevOpsDsc/azure-pipelines.yml Lines 194 to 221 in 8d43d12
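For reference, a minimal sketch of how the integration test job could be gated so it only runs outside PR builds (the job name, pool, and test invocation are illustrative, not the repository's actual job):

```yaml
jobs:
- job: Test_Integration   # hypothetical job name
  displayName: 'Integration Tests'
  # Secret variables are not injected into builds triggered by PRs,
  # so only run this job for non-PR builds (e.g. merges to main).
  condition: ne(variables['Build.Reason'], 'PullRequest')
  pool:
    vmImage: 'windows-latest'
  steps:
  - powershell: |
      # Placeholder for the real integration test invocation
      Invoke-Pester -Path ./tests/Integration -EnableExit
    displayName: 'Run integration tests'
```

`Build.Reason` is set to `PullRequest` by Azure Pipelines for PR-triggered builds, so this condition skips the job rather than letting it fail for lack of secrets.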
Some initial thoughts:
Additionally, this is relevant to the documentation here, which will need amending as appropriate.
Agree we need integration tests.
I don't see any secure way of getting the API key for a PR that couldn't leak to other contributors/maintainers. 🤔
That might be the only way: do not run integration tests in the PR, but instead the contributor must configure Azure Pipelines against their fork. We add documentation on how a contributor creates a "destructible" Azure DevOps tenant on top of the one the fork is connected to, so integration tests can run for the working branches in the fork. Then we add an entry to the PR template that clearly says a PR must include a link to a passing test run. We run the integration tests on merge to main.
This would be the best way to go. Looking at the software requirements, it could be feasible: the Microsoft-hosted agents use a Standard_DS2_v2, which has 7 GB of memory. So if we can limit SQL Server to 2 GB (or 1 GB), then Azure DevOps Server can run on the (not recommended) 2 GB of memory, and it could work.
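A sketch of how SQL Server's memory could be capped on the agent before Azure DevOps Server is configured (assumes `sqlcmd` is available on the hosted image; 2048 MB corresponds to the 2 GB mentioned above):

```yaml
steps:
- powershell: |
    # 'max server memory' is specified in MB and is an advanced option,
    # so 'show advanced options' must be enabled and reconfigured first.
    sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 2048; RECONFIGURE;"
  displayName: 'Limit SQL Server to 2 GB of memory'
```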
I have created a new Azure DevOps organization (https://dev.azure.com/azuredevopsdsc/) that is destructible, and added a PAT for it which is available when running builds on This does not solve this issue, but at least the tests that exist, and new ones that are created, can run. There will be a problem if a PR is merged and the build fails on the integration tests.
- Enabled integration tests against https://dev.azure.com/azuredevopsdsc/ (see comment #9 (comment) for more information).
Useful SQL Server installation information from
Useful links for Azure DevOps Server 2020 RTW (from here):
Just as an update... I've been taking a look at this and am now at a point where the Azure DevOps installer/EXE is downloaded, then Azure DevOps Server is installed and configured automatically/programmatically. So far, it takes about 25 minutes on a hosted build server at present (the majority of that time is the installation itself, but there might be ways of skipping unused components to speed this up at some point). Currently, I am trying to (and need to) determine:
Also note that Azure DevOps will seemingly install SQL Server Express as part of the configuration step, so I've been able to perform a successful (according to the logs, though I've not been able to connect to it yet) installation and configuration of a 'BasicInstall' Azure DevOps Server without having to perform a separate SQL Server installation. I'm unclear at present what features using this default SQL Server Express will include/omit from the Azure DevOps Server instance, but I've been trying to focus on getting a basic Azure DevOps Server instance running before considering anything else.
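The download/install/configure flow described above might look something like the following pipeline step. The installer flag, download URL variable, and `tfsconfig.exe` path are all assumptions here and should be verified against the installer's own documentation:

```yaml
steps:
- powershell: |
    # Download the Azure DevOps Server installer (URL variable is hypothetical)
    Invoke-WebRequest -Uri $(AzureDevOpsServer.InstallerUrl) -OutFile .\devopsserver.exe
    # Silent install of the product bits (flag assumed; check the installer's help output)
    Start-Process -FilePath .\devopsserver.exe -ArgumentList '/Silent' -Wait
    # Unattended 'Basic' configuration; this is the step that pulls in
    # SQL Server Express by default
    & 'C:\Program Files\Azure DevOps Server 2020\Tools\tfsconfig.exe' unattend /configure /type:Basic
  displayName: 'Install and configure Azure DevOps Server (BasicInstall)'
  timeoutInMinutes: 60   # the install alone takes ~25 minutes at present
```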
I might park the above for a bit and have a think on it, and possibly look at alternative options... specifically around running (or confirming the run of) a contributor-specific pipeline/build (in their own Azure DevOps organization) as part of the PR build, based on the same PR commit from the same repository. Some notes:
I'm slightly steering towards this being my preferred solution (if I can get to something that is workable), as it avoids the 25+ minute pre-integration-test setup time (see my previous comment/post) and allows the integration tests to complete within a few minutes (at present) - nice, fast feedback from the integration tests 😄. We'd also be testing against an up-to-date version/instance - it would remove a little maintenance relating to upgrading integration test target instances etc. and give us relatively quick visibility of API changes breaking functionality in the module. The setup of the builds for new contributors is still likely to be more time-consuming using this approach, though.
And also, just putting it out there (even though it's bad practice and, potentially, higher risk)... What are the downsides/risks of making the variable (in the build pipeline) that holds the PAT into a non-sensitive variable (so the PR builds can use/see it), and ensuring its scope is limited to the resources that can be managed by it? I'd guess the PAT would no longer be protected (and effectively public to anyone who wanted to create a PR/change/whatever to uncover it, as it wouldn't be suppressed in the logs - although we could potentially create a second variable (that was sensitive) with the same value/PAT to obfuscate this?). The PAT would only be providing access to the teardown 'organization' and any resources in it (assuming it is scoped and created correctly), so the likely problems resulting from this would be people/the public deliberately messing up resources within the instance to hinder/impact the build integration environment (which would/could be "reset" during a build anyway). This would also mean that the PRs couldn't run in parallel, as they would all be running against the same instance (and there would have to be some lock mechanism in the build to prevent this). This seems like it may be less work than the other options? Making a PAT deliberately visible (even though it's a little work to get hold of, and even if it grants access to little of any use/value) does seem non-preferred. Thoughts?
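On the lock mechanism mentioned above: one possible approach (an assumption, not something settled in this thread) is Azure Pipelines' exclusive-lock support - a deployment job targeting an environment that has an 'Exclusive Lock' check configured, combined with `lockBehavior: sequential` so queued runs wait their turn instead of hitting the shared organization in parallel:

```yaml
# Pipeline-level setting: runs waiting on a locked resource queue up in order
lockBehavior: sequential

stages:
- stage: IntegrationTests
  jobs:
  # Deployment job targeting an environment with an 'Exclusive Lock'
  # check enabled (the environment name here is hypothetical)
  - deployment: RunIntegrationTests
    environment: azuredevopsdsc-integration
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "run integration tests against the shared organization"
```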
Also, I've just found the following Microsoft answer, which might suggest that obtaining an API key via the API is not going to be an option (with reference to the option of installing a build-server copy of Azure DevOps Server)... which might suggest some form of automation via the GUI would be required to obtain an API key.
The integration tests are failing due to a lack of an Integration environment (Azure DevOps Services instance) and related API key.
In order to resolve the integration tests, the following variables need adding to the build/pipeline:
- AzureDevOps.Integration.ApiUri (e.g. https://dev.azure.com/someOrganizationName/_apis/), where the organization will be torn down and recreated each time (i.e. don't use this organization for anything you want to keep! 😁)
- AzureDevOps.Integration.Pat (set as a sensitive variable)

There also has to be some consideration to ensure that multiple sets of integration tests can't run simultaneously against the same instance (not sure how that would be handled initially - not sure if you want to create a new organization for every build? ... I think there is a limit).

Originally posted by @SphenicPaul in #7 (comment)
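A sketch of how a test step might consume those two variables (the environment-variable names and the Pester invocation are illustrative). Note that a variable marked sensitive/secret in Azure Pipelines is not exposed to scripts automatically and must be mapped in explicitly via `env`:

```yaml
steps:
- powershell: |
    # The PAT arrives via the explicitly mapped environment variable below;
    # the API URI is a normal variable and could also be expanded inline.
    Invoke-Pester -Path ./tests/Integration -EnableExit
  displayName: 'Run integration tests'
  env:
    AZDO_INTEGRATION_PAT: $(AzureDevOps.Integration.Pat)      # secret - must be mapped
    AZDO_INTEGRATION_API_URI: $(AzureDevOps.Integration.ApiUri)
```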