ghrape

Ghrape (Grape) is a GitHub runner for ARM

This repo offers a Docker image that can be used to run a self-hosted GitHub runner for your repositories. Due to how GitHub handles runner registration, when you are not set up as an organisation you cannot share runners across all your projects; instead, you must register a separate runner for each repository you want self-hosted runners on. GitHub does offer ARM-based runners (currently as a preview), but those can only be used for public repositories.

Who/What is this for?

Are you looking for:

  • ARM64-based runners
  • for private repositories
  • and you are happy not sharing runners across repos
  • and happy with a 'fixed' number of runners per repo

Then this might just be a solution for you. If you are looking for something more complex, like auto-scaling, you sadly might need to keep up the hunt, or raise an issue!

Usage

Bringing up a single container will register a single runner within a GitHub repository. The runner will end up with three automatically applied labels: self-hosted, Linux and ARM64. Its name will be auto-generated to something like d2c477aee719.

At container start, an API call is made to automatically get a short-lived token to register the runner; when the container is stopped, another API call is made to get a short-lived token to remove the runner from the repository.

The container requires three environment variables as described:

env variable  description
GH_USER       GitHub user for the runners, in my case thecoshman
GH_REPO       The repository, owned by the user, that you want the runner for, i.e. ghrape
GH_PAT        A fine-grained token with (at least) "Administration" repository permissions (write)

See the GitHub API docs for more information about the registration and removal API calls that are made.
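
As a rough sketch (not taken from the image itself, but using the endpoints GitHub documents for this), the registration call looks something like:

# Request a short-lived registration token for the repository's self-hosted runners,
# using the same GH_USER / GH_REPO / GH_PAT values described above.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${GH_PAT}" \
  "https://api.github.com/repos/${GH_USER}/${GH_REPO}/actions/runners/registration-token"

# Removal works the same way against the remove-token endpoint:
#   https://api.github.com/repos/${GH_USER}/${GH_REPO}/actions/runners/remove-token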

Note that the examples below have a volume mount for docker.sock; this allows the runners to perform Docker-based operations. If you do not want this, I believe you can simply leave out the volume mount, and whilst Docker is still installed, it won't be usable. You might find you get permission denied errors, which can be solved by running (on your host machine) sudo chmod 666 /var/run/docker.sock, as described here. Further, as described on that page, you likely also need to update /etc/rc.local so the change is applied at system start up.
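
As a rough, untested sketch of that workaround on the host (based purely on the description above):

# One-off fix for "permission denied" errors on the Docker socket:
sudo chmod 666 /var/run/docker.sock

# To have this re-applied at boot, add the same command to /etc/rc.local, e.g.:
#   chmod 666 /var/run/docker.sock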

Directly running a container (mediocre solution)

You can directly run a single container from the image using the following example command:

docker run \
  -i -t \
  --env GH_USER=thecoshman \
  --env GH_REPO=ghrape \
  --env GH_PAT=github_pat_123456 \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --name ghrape_runner \
  ghcr.io/thecoshman/ghrape:latest
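
When you no longer want the runner, stopping the container triggers the removal call described above; the container can then be deleted:

docker stop ghrape_runner
docker rm ghrape_runner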

Using Docker Compose (recommended solution)

The intention is that you use docker-compose to create a stack that will run N containers using the image. There's no auto-scaling here (yet?); you just need to define that the service is replicated with as many replicas as you wish. Maybe I'll add auto-scaling, but for now, I just want to have a small pool of runners available. In my experience, each runner is low enough impact that I don't mind them just sitting idle.

Here is an example Docker Compose file that could be used to run three runners for this repository:

services:
  runners:
    image: ghcr.io/thecoshman/ghrape:latest
    restart: unless-stopped
    environment:
      - GH_USER=thecoshman
      - GH_REPO=ghrape
      - GH_PAT=github_pat_123456
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 3
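
Assuming a reasonably recent version of Docker Compose (one that honours deploy.replicas), the stack can then be brought up with:

docker compose up --detach

# Or, to force the replica count explicitly:
docker compose up --detach --scale runners=3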

(Note: as this project is public, I just make use of the GitHub-provided runners)

Image Versioning and Updating (Watchtower?)

Each time the main branch is built, the resulting image is tagged with latest and with the GitHub runner version extracted from the Dockerfile, for example 2.235.0.

My intention/suggestion is that you only use the latest tag, as the runner code is self-updating anyway. This means each time you pull the latest image, you are getting the most recent version of the code, reducing the 'catch up' that the runner needs to do. In theory, as the runner code self-updates, you can get away with deploying containers once and letting them update themselves for years. It's still to be tested, but I believe using a utility like Watchtower would be a good idea, ensuring that each time this image is updated, it's pulled and deployed for you locally.
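
For example, a minimal (and so far untested with this image) way to run Watchtower alongside the runners is its standard quick-start invocation:

# Watchtower watches the local Docker daemon and redeploys containers when their image is updated.
docker run --detach \
  --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower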

TODO

Things I'm aware of and would like to fix/improve at some stage

Auto-update?

The main branch is automatically built and tagged with the runner version extracted from the Dockerfile, and a workflow checks daily for a new version of the runner code. This should all be in place, but I'm awaiting an update of the runner code to actually confirm it. With that in place, I will then also add an example compose file for using Watchtower, once I've played with it myself.

Review Debian packages

The list of packages installed in the Dockerfile was taken from the example that I initially found; I would guess I don't really need them all, so at some stage I'd like to test removing them and see if the runner still works.

Multi-stage build

In the interest of reducing the final image size, I suspect I could make use of a multi-stage build to install tools required purely for the build phase. Investigation required...

Dynamic packages and labels

'But I want X installed too'; yeah, right now the list of packages directly installed is fairly minimal, and probably still more than required. My intention is that an env var can be provided to list Debian packages to install prior to starting the runner. To go with this, another env var would provide additional labels. This would make it very easy to use this single image to bring up runners that have additional tools such as Java or C++ compilers.

I might look to take this a bit further, if I ever need it myself (or someone actually wants it), to allow for things like Node packages. My thinking is that you could define a list of Node packages, and if it is set, Node is installed along with the packages you want.

In general, I want to try to keep this image lightweight but flexible.

Non-Debian version?

The Debian slim base image plus the software installed so far means the image is about 1.5GB. Not the worst, but maybe Alpine could be used to help reduce the size.

License

This is open source; I've not really invented anything special here, so I really should slap on a proper license to that effect. Until I get around to that, please do feel free to take this and use it as you wish. That said, contributions to help improve this would always be appreciated.
