GH Workflow for building and pushing docker image #32

Closed
SpecialAro opened this issue May 11, 2022 · 9 comments · Fixed by #47

Comments

@SpecialAro
Member

SpecialAro commented May 11, 2022

Problem

The official API server of Ferdium (https://api.ferdium.org/) currently runs from the Docker image published at https://hub.docker.com/repository/docker/ferdium/ferdium-server.

As described in #31, the current process for updating the official server is:

  1. pull the new commits locally
  2. build a new Docker image locally
  3. manually push the new Docker image to the Docker Hub registry
  4. delete the old Docker image on the server
  5. run the new Docker image

Doing all of this by hand makes no sense, and automation is very much needed.

Proposed Solution

My proposal is to create a GH Action for this workflow so that the image can be built and pushed to the Docker Hub registry automatically. Then, on the server side, we should be able to check whether a new version is available, stop the container, pull the new image, and start the container again with it.
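
A minimal sketch of what such a workflow could look like; the trigger, secret names, and image tag below are assumptions for illustration, not a final implementation:

```yaml
# Hypothetical .github/workflows/docker-publish.yml sketch
name: Publish Docker image

on:
  push:
    tags:
      - 'v*'   # assumption: publish only on version tags, not on every commit

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo, including the recipes submodule
      - uses: actions/checkout@v3
        with:
          submodules: recursive

      # Log in to Docker Hub (secret names are assumptions)
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Build the image and push it to the registry
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ferdium/ferdium-server:latest
```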

Any thoughts on this? Do you agree?

@vraravam
Contributor

I like this automated approach better than #31
For checking and restarting the Docker container - will we need to go with a Fargate-like solution? Or are you thinking of doing this part in a cron job? Will cron have the required permissions to stop and start a new container?

@SpecialAro
Member Author

> I like this automated approach better than #31

I think this is a different thing because, in my opinion, the server image shouldn't be upgraded when there is any change on the recipes submodule. It only needs upgrading when there is a version bump in this repo.

So, my approach is to try to update the recipes folder locally inside the container when needed, and to stop and start the container only when a new version of the server API is released. Do you agree?

> For checking and restarting the Docker container - will we need to go with a Fargate-like solution? Or are you thinking of doing this part in a cron job? Will cron have the required permissions to stop and start a new container?

Maybe we can try to achieve this using Portainer (I'm currently using it to manage the server more easily). There is a feature called Edge Jobs (https://docs.portainer.io/v/be-2.10/user/edge/jobs) that could possibly be used for stopping the container, removing the image, and starting the stack again with the latest image from the hub.
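
For illustration, an Edge Job (or cron job) of that kind could boil down to a short script along these lines; the service name is an assumption and depends on how the stack is defined in Portainer:

```sh
# Hypothetical refresh script: pull the newest image from Docker Hub,
# recreate the service with it, then clean up dangling images.
docker compose pull ferdium-server
docker compose up -d ferdium-server
docker image prune -f
```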

@vraravam
Contributor

> So, my approach is to try to update the recipes folder locally inside the container when needed, and to stop and start the container only when a new version of the server API is released. Do you agree?

Yes - but this can happen only if the recipes folder is mounted as a volume on the container, and not baked into the image itself.

@SpecialAro
Member Author

> So, my approach is to try to update the recipes folder locally inside the container when needed, and to stop and start the container only when a new version of the server API is released. Do you agree?

> Yes - but this can happen only if the recipes folder is mounted as a volume on the container, and not baked into the image itself.

#33 makes sure the newest version is always pulled. Having two volumes (one for data and another for recipes) can be made a requirement for running this Docker image, so that the user's data persists.
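
As a rough illustration, the stack definition could declare both volumes; the container paths here are assumptions and would need to match the paths the server actually uses:

```yaml
# Hypothetical excerpt of the compose/stack file
services:
  ferdium-server:
    image: ferdium/ferdium-server:latest
    volumes:
      - ferdium-data:/app/data        # persisted user data
      - ferdium-recipes:/app/recipes  # recipes updated in place, not baked into the image

volumes:
  ferdium-data:
  ferdium-recipes:
```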

@SpecialAro
Member Author

When #34 is merged and we release a new image automatically, I'll check whether the server updates as well. I implemented a webhook using Portainer for each container (Server and Debugger) and linked it to our Docker Hub account.

If everything works as planned this issue can be closed. Thank you @vraravam and @santhosh-chinnasamy for your suggestions and help!

@SpecialAro
Member Author

So... The automation with the webhook didn't work and broke the server. I've disabled that for the time being.

The workflow action isn't running properly either: it is pushing an image for every commit to the repo. This needs fixing.

@santhosh-chinnasamy
Contributor

@SpecialAro I have a workaround for the action being triggered on every commit: we can set up a manual trigger instead of an automatic one. I haven't done this myself, but I've read about it.

Check https://levelup.gitconnected.com/how-to-manually-trigger-a-github-actions-workflow-4712542f1960
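
In essence, the article describes the `workflow_dispatch` trigger; a sketch of how the publish workflow's `on:` block could look with it (the input is only an illustration):

```yaml
on:
  workflow_dispatch:        # run only when triggered manually from the Actions tab
    inputs:
      tag:
        description: 'Image tag to publish'
        required: false
        default: 'latest'
```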

@vraravam
Contributor

@santhosh-chinnasamy - Nice find! I would suggest that you make the changes and run them in your fork, and then raise the PR in the main repo with the successful run of your Action as proof. That way, you can do this independently of @SpecialAro's interaction.

@santhosh-chinnasamy
Contributor

Alright @vraravam , I'll work on it.

@cino added this to Ferdium Nov 11, 2022
@cino moved this to New in Ferdium Nov 11, 2022
@cino moved this from New to Todo in Ferdium Nov 11, 2022
@SpecialAro linked a pull request Oct 12, 2023 that will close this issue
@github-project-automation bot moved this from Todo to Done in Ferdium Oct 13, 2023