1.0 Included Software
All AI-Dock containers bundle a common set of software packages to provide a consistent work environment with a minimal amount of setup required.
The following software is installed in all containers. For additional software installed in desktop containers, see this section.
The service portal contains four components which are intended to tie the user experience together. By default it will be exposed on port `1111` unless overridden with the environment variable `SERVICEPORTAL_PORT_HOST`.
The login screen protects all exposed web services. You can set the credentials by giving the required values to the `WEB_USERNAME` and `WEB_PASSWORD` environment variables. When these values are not set, the default username is `user` and the password will be generated at runtime.
If you need to access an API without logging in, you can authenticate with either Basic Auth or with a Bearer token. The values are stored in the environment variables `WEB_PASSWORD_B64` and `WEB_TOKEN` respectively.
To disable authentication you can set `WEB_ENABLE_AUTH=false`.
Note
You can use `set-web-credentials.sh <username> <password>` to change the username and password in a running container.
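For illustration only, the two authentication methods could be exercised with `curl` along these lines (the host, port and credential values below are placeholders; read the real token from the `WEB_TOKEN` variable inside the container):

```bash
# Basic Auth using the username and password configured above
curl -u user:password http://localhost:1111/

# Bearer token auth; substitute the value of WEB_TOKEN from the container
curl -H "Authorization: Bearer <token>" http://localhost:1111/
```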
The service list provides a list of running applications and gives up to three links for each.
The `Direct / Default` link will point to the container's IP address and relevant port for the service unless you have set an override with the `[SERVICENAME]_URL` environment variable.
You may also access services using tunnels provided by Cloudflare. Both named and quick tunnels are available. See the cloudflared section for more details.
The log viewer simply displays the container logs. It is updated every second and is a quick and easy way to view application output. General logs are stored in `/var/log/` and process-specific logs can be found at `/var/log/supervisor/`.
All running processes are managed by `supervisord`. The web-based process manager provides a simple means to control these processes. You can also manage them from the command line with `supervisorctl`.
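For example, from a shell inside the container you could inspect or restart processes like this (the service name `caddy` is illustrative; run `supervisorctl status` to see the actual names):

```bash
supervisorctl status          # list managed processes and their state
supervisorctl restart caddy   # restart a single service
supervisorctl tail -f caddy   # follow the output of a service
```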
Note
The screenshots provided in this section are from ai-dock/linux-desktop. Not all services listed will be present in non-desktop containers.
This is a simple webserver acting as a reverse proxy.
Caddy is used to enable authentication for all web services and is installed at `/opt/caddy` inside the container. To learn more about Caddy and its configuration options you can visit the project homepage.
The Cloudflare tunnel daemon will start if you have provided a token with the `CF_TUNNEL_TOKEN` environment variable.
This service allows you to connect to your local services via https without exposing any ports.
You can also create a private network to enable remote connections to the container at its local address (`172.x.x.x`) if your local machine is running a Cloudflare WARP client.
If you do not wish to provide a tunnel token, you could enable `CF_QUICK_TUNNELS`, which will create a throwaway tunnel for your web services. Secure links can be found in the service portal and in the log files at `/var/log/supervisor/quicktunnel-*.log`.
Full documentation for Cloudflare tunnels is here.
Note
Cloudflared is included so that secure networking is available in all cloud environments.
Warning
You should only provide tunnel tokens in secure cloud environments.
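A minimal sketch of both options, using a placeholder image name and token:

```bash
# Named tunnel (only in a secure cloud environment)
docker run -e CF_TUNNEL_TOKEN="<your-tunnel-token>" ai-dock/example-image

# Throwaway quick tunnel instead
docker run -e CF_QUICK_TUNNELS=true ai-dock/example-image
```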
An SSH server will be started if at least one valid public key is found inside the running container in the file `/root/.ssh/authorized_keys`. The server will bind to port `22` unless you specify the variable `SSH_PORT`.
There are several ways to get your keys to the container.
- If using docker compose, you can paste your key into the local file `config/authorized_keys` before starting the container.
- You can pass the environment variable `SSH_PUBKEY` with your public key as the value.
- Cloud providers often have a built-in method to transfer your key into the container.
If you choose not to provide a public key then the SSH server will not be started.
To make use of this service you should map port `22` to a port of your choice on the host operating system.
See this guide by DigitalOcean for an excellent introduction to working with SSH servers.
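As a rough example, you could pass your key and map the port at launch time (the image name and host port are placeholders; the key lands in `/root/.ssh/authorized_keys`, so connect as `root`):

```bash
# Start the container with your public key and expose SSH on host port 2222
docker run -e SSH_PUBKEY="$(cat ~/.ssh/id_ed25519.pub)" -p 2222:22 ai-dock/example-image

# Connect from the host machine
ssh -p 2222 root@localhost
```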
The Jupyter server will launch a `lab` instance unless you specify `JUPYTER_MODE=notebook`. It will listen on port `8888` unless you have specified an alternative with the `JUPYTER_PORT_HOST` environment variable.
A Python kernel will be installed corresponding to the Python version of the image.
Jupyter's official documentation is available at https://jupyter.org/
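For instance, to run the classic notebook interface on an alternative port, you might start the container along these lines (image name and port are placeholders):

```bash
docker run -e JUPYTER_MODE=notebook -e JUPYTER_PORT_HOST=8889 -p 8889:8889 ai-dock/example-image
```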
Syncthing is a peer-to-peer continuous file synchronization program which is very useful for efficiently transferring your work files from a local workstation to a remote container instance. As the files are synced in real time, there is no need for a separate download to retrieve them.
This script follows and prints the log files for each of the above services to stdout. This allows you to follow the progress of all running services through docker's own logging system.
If you are logged into the container you can follow the logs by running `logtail.sh` in your shell.
This service detects changes to files in `$WORKSPACE/storage` and creates symbolic links to the application directories defined in `/opt/ai-dock/storage_monitor/etc/mappings.sh`. This is useful when you want to run different applications which rely on the same models, as each container can use the same volume.
Each containerised application will ship with a pre-configured `mappings.sh` file, but it may be incomplete. Please either override the file or open an issue in the relevant repository to have directories added.
IMPORTANT NOTE:
Micromamba will be removed from all images built after 6th June 2024. Python environments will use venv. Automatic syncing of software environments to mounted volumes will be removed. Manual syncing will be supported but discouraged.
Micromamba is a Python environment manager which can also be used to install system packages in isolation. See Software Management for more information.
The following software is installed in all containers based on ai-dock/linux-desktop. This is in addition to the software detailed in this section.
This provides the RTC interface for accessing the desktop through a web browser.
The service will bind to port `6100`.
See the project page for more information.
This provides the VNC fallback interface for accessing the desktop through a web browser.
The service will bind to port `6200`.
See the project page for more information.
This service relays the desktop on display `:0` to the VNC server on display `:1`.
Learn about kasmxproxy here.
KDE plasma desktop environment. Restarting this service will also restart the currently running X server.
Either an Xorg server when running on NVIDIA hardware or Xvfb for VirtualGL rendering.
Fcitx [ˈfaɪtɪks] is an input method framework with extension support.
See the project page for more information.
Provides audio support for the WebRTC interface. Audio is not supported over VNC.
A small software collection is installed by apt-get to provide basic utility.
All other software is installed into its own environment by `micromamba`, which is a drop-in replacement for conda/mamba. Read more about it here.
Micromamba environments are particularly useful where several software packages are required but their dependencies conflict.
| Environment | Packages |
|---|---|
| `base` | micromamba's base environment |
| `jupyter` | jupyter lab |
| `python_[ver]` * | python |
\* A 'spare' python environment is included alongside python applications.
Micromamba environments will be installed to `/opt/micromamba`. If you have a mounted volume at `$WORKSPACE` and set the variable `WORKSPACE_MAMBA_SYNC=true`, the environments will be moved to `$WORKSPACE/environments` for persistence. This is most useful in cloud environments but can be slow at first start.
If you are extending this image or running an interactive session where additional software is required, you should almost certainly create a new environment first. See below for guidance.
| Command | Function |
|---|---|
| `micromamba env list` | List available environments |
| `micromamba activate [name]` | Activate the named environment |
| `micromamba deactivate` | Close the active environment |
| `micromamba run -n [name] [command]` | Run a command in the named environment without activating |
All ai-dock images create micromamba environments using the `--always-softlink` flag, which can save disk space where multiple environments are available.
To create an additional micromamba environment, e.g. for python, you can use the following:

```bash
micromamba --always-softlink create -y -c conda-forge -n [name] python=3.10
```
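The new environment can then be used with the commands from the table above, for example (the environment name `mypython` is illustrative):

```bash
# Install an extra package into the new environment
micromamba install -y -n mypython -c conda-forge numpy

# Run a command in it without activating
micromamba run -n mypython python --version
```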