Tooling for Terraform, Ansible, Kubernetes, AWS, Azure and Google Cloud in a Fedora-based Docker/Podman image. The image can be used with the Visual Studio Code Remote - Containers extension and supports various in-container extensions such as Azure Tools for Visual Studio Code, Visual Studio Code Kubernetes Tools, the Terraform Visual Studio Code Extension, etc. Once Visual Studio Code supports Fedora Toolbox, this image might be replaced with a Toolbox setup with all cloud tools installed via Ansible.
Usage: cloudctl [OPTION]...
-b, --build Build image from Dockerfile
-f, --force Do not use cache when building the image. Only useful with build option.
-d, --docker Use Docker CLI instead of Podman
-e COMMAND, --exec COMMAND,
-e=COMMAND, --exec=COMMAND Run COMMAND in running container 'cloudctl'
no option Create container 'cloudctl' and start a Bash session
-h, --help Print help
Use
export CONTAINER_ALIAS='{ docker | podman }'
to set the Container Engine CLI for the current terminal session.
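For example, to force the Docker CLI for the current session:

```shell
# Pin the container engine CLI for this terminal session:
export CONTAINER_ALIAS='docker'
```

Unset the variable (`unset CONTAINER_ALIAS`) to fall back to the default engine.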
Build image using build cache.
cloudctl --build
cloudctl -b # short version
Rebuild the image without using the cache at all.
cloudctl --build --force
cloudctl -b -f # short version
For more information about the build cache see Leverage build cache in the Docker Documentation.
cloudctl
# run kubectl in container
cloudctl --exec=kubectl
cloudctl -e kubectl # short version
# run kubectl with args
cloudctl --exec 'kubectl version'
cloudctl -e='kubectl version' # short version
Commands with spaces need to be quoted.
If you do not want to preface cloudctl with sudo, there are options, see Manage Docker as a non-root user. However, this impacts the security of your system, see Docker Daemon Attack Surface.
A better option is to use a rootless container tool like Podman. After basic setup of Podman in a Rootless environment, this can simply be done via alias docker=podman.
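A minimal sketch of that setup, assuming Podman is already installed and configured for rootless use:

```shell
# Treat 'docker' as Podman in the current shell (rootless, daemonless):
alias docker=podman

# Optionally persist it for future Bash sessions:
# echo "alias docker=podman" >> ~/.bashrc
```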
The host user's ${HOME}/Projects directory is mounted into /home/${CLOUDBOX_USER}/Projects in the container.
If your current working directory ${PWD} in your IDE or terminal on the host machine is within that Projects folder, cloudctl changes into the corresponding directory on start.
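The directory mapping can be sketched as a small shell function; the function name and the container home below are illustrative assumptions, not part of cloudctl itself:

```shell
# map_workdir HOST_PWD HOST_HOME CONTAINER_HOME
# Prints the directory cloudctl would change into on start:
# paths inside ${HOME}/Projects map 1:1 into the container,
# everything else falls back to the container user's home.
map_workdir() {
  host_pwd="$1"; host_home="$2"; container_home="$3"
  case "${host_pwd}" in
    "${host_home}/Projects"*) echo "${container_home}${host_pwd#"${host_home}"}" ;;
    *)                        echo "${container_home}" ;;
  esac
}

map_workdir /home/alice/Projects/demo /home/alice /home/cloudbox
# prints /home/cloudbox/Projects/demo
```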
The host user's ${HOME}/.ssh directory is mounted into /home/${CLOUDBOX_USER}/host_ssh in the container, and a symbolic link is created to the cloudbox user's .ssh directory.
If you want to use different SSH keys that come with your project, delete the link and create a new link to an SSH directory in your project:
[bastion]$ rm -f ~/.ssh
[bastion]$ ln -s ~/Projects/devOpsProject/ssh ~/.ssh
[bastion]$ terraform version
[bastion]$ ansible --version
Ansible runs on Python 3 and has the following Python modules installed: dnspython, lxml, netaddr, pypsexec, pywinrm and pywinrm[credssp].
[bastion]$ kubectl version
[bastion]$ aws --version
[bastion]$ az --version
[bastion]$ gcloud version
[bastion]$ gsutil version
See Kubernetes.
[bastion]$ bq version
Sample shim (Bash script) to provide kubectl on your host system. Simply link - or copy - the shim to a directory in your ${PATH}:
chmod +x kubectl
sudo ln -s "${PWD}/kubectl" /usr/local/bin/kubectl
Afterwards, tools in your host IDE, e.g. Visual Studio Code Kubernetes Tools, can find the Kubernetes binary.
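A minimal version of such a shim could look like this - a sketch, assuming cloudctl is on the host ${PATH} and the 'cloudctl' container is already running:

```shell
# Write a kubectl shim that forwards all arguments into the container:
cat > kubectl <<'EOF'
#!/usr/bin/env bash
# Forward every kubectl invocation to the running 'cloudctl' container.
exec cloudctl --exec "kubectl $*"
EOF
chmod +x kubectl
```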