Making the nvidia/cuda automated repo #18
Yes, it's definitely on our list!
btw, sometimes it's easier to set up CircleCI and push the builds to the hub instead.
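For example, a minimal sketch of what such a CI step could run (the image name, tag, Dockerfile path, and credential variables are placeholders, not anything from this repo):

```sh
#!/bin/sh
# Rough sketch of a CI job that builds an image and pushes it to Docker Hub.
# DOCKER_USER / DOCKER_PASS would come from the CI service's secret settings;
# the image name, tag, and Dockerfile directory are placeholders.
set -e

docker build -t yourname/cuda:7.5-runtime path/to/7.5/runtime
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push yourname/cuda:7.5-runtime
```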
Another argument for automated repos: for others who create automated repos that happen to build from your image, it becomes trivial to enable a triggered build within the Docker Hub ecosystem. So when the NVIDIA image is updated with fixes, so too are the users' images. Again, the same could be done using web hooks and API calls, but keeping it simple with the Docker Hub interface makes it pleasant for newer users.
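As a rough illustration of the web hook / API route mentioned above (the endpoint pattern and token below are placeholders; the real URL comes from the repository's Build Triggers settings page on the Hub):

```sh
# Hypothetical example of remotely triggering a Docker Hub automated build.
# Replace <namespace>, <repo>, and <trigger-token> with the values shown on the
# repository's "Build Triggers" page.
curl -X POST \
     -H "Content-Type: application/json" \
     --data '{"build": true}' \
     "https://registry.hub.docker.com/u/<namespace>/<repo>/trigger/<trigger-token>/"
```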
The Phoronix Test Suite comes with OpenCL support, so it could be useful for regression-testing the automated repo: http://www.phoronix.com/scan.php?page=article&item=nvidia-amd-opencl-2015&num=1
@ruffsl @UniqueFool The problem with CI and testing is that I'm not currently aware of an open-source CI solution that would allow us to run GPU tests. We have internal solutions of course, but they would be more complex to integrate with GitHub. I will continue evaluating the options.
You can just run the build on CI, without testing:
Sure, but it would be more convenient to deploy and test with the same solution. But indeed, the short-term solution could be to only automate the builds for now. |
You need to build it on CircleCI before testing anyway ;) So building + uploading is a good first step.
@flx42, yes I've noticed this. Looking at the build details and the recorded build logs, I'm seeing the start times for each tag triggered roughly simultaneously, with one of my higher-level tags starting first. I'm fairly sure the official repos do not suffer the same shortcoming (although perhaps I've not noticed, thanks to how often the upstream Ubuntu image rebuilds and triggers everything else), but I'm uncertain how to enforce the same build order in a single user repo. I've asked about this before, but was told to just re-trigger the build until the cascading images reach a steady state, which I think is a bit silly. Another approach I first used was to break up my tags into separate repos, as suggested here. This was a bit of a hassle to manage, but it did ensure that a sequential order was followed. Perhaps the cuda runtime and development docker repos could be separate, but the lack of tag-level vs repo-level triggering would hamper further tag-specific builds. Let me dig around, perhaps something has come along since I last looked into this. Pinging @yosifkit or @tianon ?
I've not seen any change on the Docker Hub that would allow images to depend upon another tag in the same repo. This is one of the reasons that the official images do not use automated builds.
@ruffsl It looks like it's worse than this. When I start my build using a POST request, all the builds start in parallel, and the resulting images end up duplicating layers that should be shared. Since all the runtime Dockerfiles for 6.5, 7.0 and 7.5 start with these lines:
In my personal github (https://github.com/flx42/nvidia-docker) I modified the
The first layers are the same as above for
So, we are needlessly duplicating layers that are physically the same. And since everything is rebuilt all the time, the user will have to fetch new layers even when the image they use didn't change.
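A quick sketch of how to check this locally (tag names are placeholders): `docker history` prints the newest layer first, so the shared base layers sit at the bottom of the output, and the trailing lines of two tags built from the same Dockerfile prefix should match if layers were actually reused.

```sh
# Pull two tags whose Dockerfiles start with the same instructions,
# then compare their bottom-most (base) layers.
docker pull yourname/cuda:7.0-runtime
docker pull yourname/cuda:7.5-runtime

docker history --no-trunc yourname/cuda:7.0-runtime | tail -n 5
docker history --no-trunc yourname/cuda:7.5-runtime | tail -n 5
```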
@flx42, you are correct. Given the limitations of the automated build mechanics, it doesn't seem currently possible to host an automated repo on Docker Hub. As yosifkit mentioned, official images are not built using the same rules, so once the commits to the cuda Dockerfiles settle down, that would be a nice channel for distributing images updated with upstream sources. This all takes some wind out of the sails for CI-testing the master branch, but I suppose sheerun's or UniqueFool's suggestions, along with the Makefiles you've already written, would work well for automating pushes of current images to the NVIDIA org repo for public review, given triggered events on the master branch.
Let's give up on the Docker Hub automated repo for now. CI remains an option, so I will not close this issue yet.
@flx42, on a side note, you may want to keep around these links or put them in the readme/wiki somewhere for others (at least for the ubuntu tags): I just added something similar for the official ros repo and found it to be a nice method for visually verifying parent image lineage.
One year later... it's finally automated! We decided to use GitLab CI since it gives us more control over what we can do. Example of a pipeline run: https://gitlab.com/nvidia/cuda/pipelines/5876874
Thanks for making a cuda docker repo! One suggestion I'd make would be to turn the nvidia/cuda Docker Hub repo into an automated repo. This repo could be useful as a testbed to test any future tags before official submission, but making it automated could really save time on maintenance in keeping the images up to date with the Dockerfiles. That's how we use them at osrf/ros. A neat thing also is using the git repo's README.md to render the description in the docker repo, see Understand the build process.