Multi-cloud AI environment built on SkyPilot
- Turns multiple cloud accounts into an integrated AI platform
- Provides on-demand GPUs with a seamless DX through SkyPilot
For example, LMRun connects isolated clusters and MLOps servers in a single bootstrap command.
LMRun is a distribution: it has already made the infrastructure choices for you, bundling a cloud setup, networking, and services that run below, across, and on top of SkyPilot.
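For a sense of the developer experience SkyPilot provides underneath, here is a minimal, self-contained sketch of launching an on-demand GPU with the stock SkyPilot CLI; the cluster name, GPU type, and command are placeholders, not LMRun-specific values.

```bash
# Minimal SkyPilot task: request one GPU and run a command on it.
cat > task.yaml <<'EOF'
resources:
  accelerators: A10G:1   # placeholder GPU spec
run: |
  nvidia-smi
EOF

# SkyPilot provisions a matching VM on one of the configured clouds and runs the task.
sky launch -c demo-gpu task.yaml
```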
Features
- Global VPN network of GPUs for secure availability and cheaper compute
- Open stack: work with any tool or framework on cloud-agnostic infrastructure
- No data lock-in: free or low-cost egress bandwidth for high throughput
- Hyperscaler (AWS) extension to outsource AI compute
- Set up your local environment
- Follow instructions in `init` to initialize the cloud environment
- Follow instructions in `mesh` to deploy the cluster mesh
- Go to `workspace` for example templates and use cases
- Make them yours: change models, tune configuration, add resources, etc. (a hypothetical end-to-end sketch follows this list)
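The commands below are only a sketch of how these steps chain together; the real instructions live in the `init`, `mesh`, and `workspace` directories themselves, and the assumption that `init` and `mesh` are driven by Pulumi stacks, as well as the template file name, are illustrative.

```bash
# Hypothetical end-to-end flow; follow each directory's own instructions for the real commands.
cd init && pulumi up        # assumption: provisions the base cloud environment with Pulumi
cd ../mesh && pulumi up     # assumption: deploys the cluster mesh
cd ../workspace
sky launch -c my-workspace example-task.yaml   # illustrative template name
```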
Prerequisite: if necessary, install system dependencies. At the very least, you need to configure an AWS profile.
- Install uv
- Run
uv sync --extra dev
in this directory to create the virtual environment and install all dependencies - Set the export of variables when activating the environment
echo 'set -a; source "$VIRTUAL_ENV/../.env"; set +a' >> .venv/bin/activate
- Activate the environment:
source .venv/bin/activate
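As a quick sanity check of the activation hook (assuming `AWS_PROFILE` is one of the variables defined in `.env`, as described in the next steps):

```bash
# Re-activate the environment and confirm that the .env variables are exported.
source .venv/bin/activate
python -c "import os; print(os.environ.get('AWS_PROFILE'))"  # expected: your profile name, e.g. lmrun
```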
- Install Pulumi to manage cloud resources
- Use AWS CLI v2 to create an `lmrun` profile with admin permissions: `aws configure --profile lmrun`. We recommend creating a new AWS account for this profile. If you already have a main account, create the new account from AWS Organizations. To use another profile name, edit `AWS_PROFILE` in `.env`.
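A quick way to confirm the profile works (the `sts get-caller-identity` call is a standard AWS CLI check, not an LMRun-specific step):

```bash
# Create the profile (prompts for access key ID, secret access key, and default region).
aws configure --profile lmrun

# Confirm the credentials resolve to the intended (ideally dedicated) AWS account.
aws sts get-caller-identity --profile lmrun
```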