diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 9a966bda5..31b73fafd 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -9,33 +9,74 @@ We gratefully welcome improvements to documentation as well as to code.
## Community and Access
Join [weaveworks-ignite@googlegroups.com](https://groups.google.com/forum/#!forum/weaveworks-ignite) for calendar invites to calls and edit access to community documents.
+You can ask questions and discuss features on the [#ignite](https://weave-community.slack.com/messages/ignite/) Slack channel.
We also hope to see you at our [developer meetings](https://docs.google.com/document/d/1fv8_WD6qXfvlIq7Bb5raCGyBvc42dNF-l8uaoZzoUYI/edit).
## Guidelines
-TODO: Add contributing guidelines here after we formalize them.
+If you have a feature suggestion or found a bug, head over to [GitHub issues](https://github.com/weaveworks/ignite/issues)
+and see if there's an open issue matching your description. If not, feel free to open a new issue and add a short description:
+ - In case of a bug, be sure to include the steps you performed and how Ignite responded, so it's easy for others to reproduce the issue.
+ - If you have a feature suggestion, describe it in moderate detail and include some potential uses you see for the feature.
+ We prioritize features to be implemented based on their usefulness/popularity. Of course, if you want to start contributing
+ yourself, go ahead! We'll be more than happy to review your pull requests.
+
+The maintainers will add the correct labels/milestones to the issue for you.
+
+### Contributing your code
+
+The process to contribute code to Ignite is very straightforward.
+1. Go to the project on [GitHub](https://github.com/weaveworks/ignite) and click the `Fork` button in the top-right corner.
+ This will create your own copy of the repository in your personal account.
+1. Using the standard `git` workflow, `clone` your fork, make your changes and then `commit` and `push` them to _your_ repository (see the example workflow below).
+1. Run `make autogen tidy`, then `commit` and `push` the changes. Just put `make autogen tidy` as the commit message.
+ This (re)generates any new/changed autogenerated content and cleans up the code's formatting.
+1. Go back to [GitHub](https://github.com/weaveworks/ignite), select `Pull requests` from the top bar and click
+ `New pull request` to the right. Select the `compare across forks` link. This will show repositories in addition to branches.
+1. From the `head repository` dropdown, select your forked repository. If you made a new branch, select it in the `compare` dropdown.
+ You should always target `weaveworks/ignite` and `master` as the base repository and branch.
+1. With your changes visible, click `Create pull request`. Give it a short, descriptive title and write a comment describing your changes.
+ Click `Create pull request`.
+
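+For reference, a typical fork-based workflow might look like this (a sketch; `<user>` and the
+branch name `my-feature` are placeholders for your own values):
+```console
+git clone git@github.com:<user>/ignite.git
+cd ignite
+git checkout -b my-feature
+# ... make your changes ...
+git add .
+git commit -m "Describe your change"
+make autogen tidy
+git add .
+git commit -m "make autogen tidy"
+git push origin my-feature
+```
+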
+That's it! Maintainers follow pull requests closely and will add the correct labels and milestones.
+After a maintainer's review, small changes or improvements may be requested. Don't worry: feedback can
+easily be addressed by making the requested changes and doing another `commit` and `push`. Your new
+commits will automatically be added to the pull request. (Don't forget to add a new `make autogen tidy`
+commit if needed.)
+
+We also have Continuous Integration (CI) set up (powered by [CircleCI](https://circleci.com/)) that will build the code
+and verify it compiles and passes all tests successfully. If your changes don't pass CI, you can click
+`Details` to check why. We require CI to pass before your changes can be integrated.
+
+If you have any questions or need help, don't hesitate to ask on our [mailing list](https://groups.google.com/forum/#!forum/weaveworks-ignite)
+or the [#ignite](https://weave-community.slack.com/messages/ignite/) Slack channel. Have fun contributing!
## Make targets
-To compile the `ignite` binary, run
-```
-make
+To compile the `ignite`, `ignited` and `ignite-spawn` binaries, run
+```console
+make build-all-amd64
```
-To compile the `ignite-spawn` binary and build the Docker image, run
-```
-make ignite-spawn
-```
+`ignite`, `ignited` and `ignite-spawn` are also individual Make targets if you only need to build specific binaries.
+Building the `ignite-spawn` binary either way also automatically packages it into its Docker container
+and tags it as `weaveworks/ignite:dev` for development.
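+
+For example, to build only `ignite-spawn` (which, as noted above, also packages its Docker image):
+```console
+make ignite-spawn
+```
+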
-Before submitting a PR, run
+To (re)generate autogenerated content in case your changes require it:
+```console
+make autogen
```
-make tidy
+
+Before submitting a PR:
+```console
+make autogen tidy
```
+This will clean up the code (run e.g. `gofmt`) in addition
+to making sure all autogenerated content is up to date.
Other targets:
-- Install the `ignite` binary: `make install`
-- (Re)generate autogenerated content: `make autogen`
+- Install the `ignite` and `ignited` binaries: `make install`
- Generate dependency graph: `make graph`
- Depends on `sfdp` (usually found in the `graphviz` package)
- Push the `weaveworks/ignite` Docker image: `make image-push`
diff --git a/README.md b/README.md
index f993951e3..6292dea7c 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# Weave Ignite
-![Ignite Logo](docs/logo.png)
+<img src="docs/logo.png" alt="Ignite Logo" />
Weave Ignite is an open source Virtual Machine (VM) manager with a container UX and
built-in GitOps management.
@@ -10,7 +10,7 @@ built-in GitOps management.
- Works in a [GitOps](https://www.weave.works/blog/what-is-gitops-really) fashion and can
manage VMs declaratively and automatically like Kubernetes and Terraform.
-Ignite is fast and secure because of Firecracker. This is an
+Ignite is fast and secure because of Firecracker. Firecracker is an
[open source KVM implementation](https://firecracker-microvm.github.io/) from AWS that is
optimised for [high security](https://github.com/firecracker-microvm/firecracker/blob/master/docs/design.md#threat-containment), isolation, speed and low resource consumption. AWS uses it as the foundation for their
serverless offerings (AWS Lambda and Fargate) that need to load nearly instantly while also
@@ -27,17 +27,20 @@ execute **`ignite run`** instead of **`docker run`**. There’s no need to use
`.vdi`, `.vmdk`, or `.qcow2` images, just do a `docker build` from any base image you want
(e.g. `ubuntu:18.04` from Docker Hub), and add your preferred contents.
-When you run your OCI image using `ignite run`, Firecracker will boot a new VM in c.125 milliseconds (!) for you, using a default 4.19 linux kernel. If you want to use some other kernel, just specify the `--kernel-image` flag, pointing to another OCI image containing a kernel at /boot/vmlinux, and optionally your preferred modules. Next, the kernel executes /sbin/init in the VM, and it all starts up. After this, Ignite connects the VMs to any CNI network, integrating with e.g. Weave Net.
+When you run your OCI image using `ignite run`, Firecracker will boot a new VM in about 125 milliseconds (!) for you
+using a default 4.19 Linux kernel. If you want to use some other kernel, just specify the `--kernel-image` flag,
+pointing to another OCI image containing a kernel at `/boot/vmlinux`, and optionally your preferred modules. Next,
+the kernel executes `/sbin/init` in the VM, and it all starts up. After this, Ignite connects the VMs to any CNI network,
+integrating with e.g. Weave Net.
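+
+For example, a quick sketch using the reference images described later in this README (the flag names
+mirror the VM spec fields; check `ignite run --help` for the exact set supported by your version):
+```bash
+ignite run weaveworks/ignite-ubuntu \
+  --kernel-image weaveworks/ignite-kernel:4.19.47 \
+  --cpus 2 \
+  --memory 1GB \
+  --ssh
+```
+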
-Ignite is a declarative Firecracker microVM administration tool, like Docker manages
-runC containers.
-Ignite runs VM from OCI images, spins VMs up/down in lightning speed,
+Ignite is a declarative Firecracker microVM administration tool, similar to how Docker manages runC containers.
+Ignite runs VMs from OCI images, spins VMs up/down at lightning speed,
and can manage fleets of VMs efficiently using [GitOps](https://www.weave.works/technologies/gitops/).
The idea is that Ignite makes Firecracker VMs look like Docker containers.
Now we can deploy and manage full-blown VM systems just like e.g. Kubernetes workloads.
The images used are OCI/Docker images, but instead of running them as containers, it executes
-as a real VM with a dedicated kernel and `/sbin/init` as PID 1.
+their contents as a real VM with a dedicated kernel and `/sbin/init` as PID 1.
Networking is set up automatically, the VM gets the same IP as any docker container on the host would.
@@ -47,25 +50,26 @@ at most some seconds. With Ignite you can get started with Firecracker in no tim
## Use-cases
With Ignite, Firecracker is now much more accessible for end users, which means the ecosystem
-can achieve the next level of momentum due to the easy onboarding path thanks to a docker-like UX.
+can reach the next level of momentum thanks to the easy onboarding path the docker-like UX provides.
Although Firecracker was designed with serverless workloads in mind, it can equally well boot a
normal Linux OS, like Ubuntu, Debian or CentOS, running an init system like `systemd`.
Having a super-fast way of spinning up a new VM, with a kernel of choice, running an init system
-like `systemd` allows to run system-level applications like the kubelet, which needs to “own” the full system.
+like `systemd` allows running system-level applications like the kubelet, which need to “own” the full system.
Example use-cases:
-- Set up many secure VMs lightning fast. It's great for testing, CI and ephemeral workloads
+- Set up many secure VMs lightning fast. It's great for testing, CI and ephemeral workloads.
- Launch and manage entire “app ready” stacks from Git because Ignite supports GitOps!
-- Run even legacy or special apps in lightweight VMs (eg for multi-tenancy, or using weird/edge kernels)
+- Run even legacy or special apps in lightweight VMs (eg for multi-tenancy, or using weird/edge kernels).
-And - potentially - we can run a cloud of VMs ‘anywhere’ using Kubernetes for orchestration, Ignite for virtualization, GitOps for management, and supporting cloud native tools and APIs.
+And - potentially - we can run a cloud of VMs ‘anywhere’ using Kubernetes for orchestration,
+Ignite for virtualization, GitOps for management, and supporting cloud native tools and APIs.
### Scope
-Ignite is different from Kata Containers or gVisor. They don’t let you run real VMs, but only wrap a container in new layer providing some kind of security boundary (or sandbox).
+Ignite is different from Kata Containers or gVisor. They don’t let you run real VMs, but only wrap a container in a VM layer providing some kind of security boundary (or sandbox).
Ignite on the other hand lets you run a full-blown VM, easily and super-fast, but with the familiar container UX. This means you can “move down one layer” and start managing your fleet of VMs powering e.g. a Kubernetes cluster, but still package your VMs like containers.
@@ -73,18 +77,18 @@ Ignite on the other hand lets you run a full-blown VM, easily and super-fast, bu
Please check out the [Releases Page](https://github.com/weaveworks/ignite/releases).
-How to install Ignite is covered in [docs/installation.md](https://ignite.readthedocs.io/en/latest/installation.html).
+How to install Ignite is covered in [docs/installation.md](docs/installation.md) or on [Read the Docs](https://ignite.readthedocs.io/en/latest/installation.html).
Guidance on Cloud Providers' instances that can run Ignite is covered in [docs/cloudprovider.md](docs/cloudprovider.md).
## Getting Started
-**WARNING**: In it's `v0.X` series, Ignite is in **alpha**, which means that it might change in backwards-incompatible ways.
+**WARNING:** In its `v0.X` series, Ignite is in **alpha**, which means that it might change in backwards-incompatible ways.
[![asciicast](https://asciinema.org/a/252221.svg)](https://asciinema.org/a/252221)
-Note: At the moment `ignite` needs root privileges on the host to operate,
-for certain specific operations (e.g. `mount`). This will change in the future.
+Note: At the moment `ignite` and `ignited` need root privileges on the host to operate
+due to certain operations (e.g. `mount`). This will change in the future.
```bash
# Let's run the weaveworks/ignite-ubuntu docker image as a VM
@@ -126,21 +130,28 @@ For a walkthrough of how to use Ignite, go to [**docs/usage.md**](https://ignite
## Getting Started the GitOps way
-In Git you [declaratively store](https://ignite.readthedocs.io/en/latest/declarative-config.html) the desired state of a set of VMs you want to manage.
-`ignite gitops` reconciles the state from Git, and applies the desired changes as state is updated in the repo.
+Ignite is a “GitOps-first” project: GitOps is supported out of the box using the `ignited gitops` command.
+Previously this was integrated as `ignite gitops`, but this functionality has now moved to `ignited`,
+Ignite's upcoming daemon binary.
+
+In Git you declaratively store the desired state of a set of VMs you want to manage.
+`ignited gitops` reconciles the state from Git, and applies the desired changes as state is updated in the repo.
+It also commits and pushes any local changes/additions to the managed VMs back to the repository.
This can then be automated, tracked for correctness, and managed at scale - [just some of the benefits of GitOps](https://www.weave.works/technologies/gitops/).
The workflow is simply this:
-- Run `ignite gitops [repo]`, where repo points to your Git repo
-- Create a file with the VM specification, specifying how much vCPUs, RAM, disk, etc. you’d like from the VM
-- Run `git push` and see your VM start on the host
+ - Run `ignited gitops [repo]`, where repo is an **SSH URL** to your Git repo
+ - Create a file with the VM specification, specifying how many vCPUs, how much RAM, disk, etc. you’d like for the VM
+ - Run `git push` and see your VM start on the host
-See it in action!
+See it in action! (Note: The screencast is from an older version which differs somewhat)
[![asciicast](https://asciinema.org/a/255797.svg)](https://asciinema.org/a/255797)
+For the complete guide, see [docs/gitops.md](docs/gitops.md).
+
### Awesome Ignite
Want to see how awesome Ignite is?
@@ -149,7 +160,7 @@ Take a look at the [awesome-ignite](https://ignite.readthedocs.io/en/latest/awes
### Documentation
-Please refer to the following documents:
+Please refer to the following documents powered by [Read the Docs](https://readthedocs.org/):
- **[Documentation Page](https://ignite.readthedocs.io/)**
- [Installing Ignite](https://ignite.readthedocs.io/en/latest/installation.html)
@@ -180,9 +191,9 @@ You can follow normal `docker build` patterns for customizing your VM's rootfs.
A _kernel image_ is an OCI-compliant image containing a `/boot/vmlinux` (an uncompressed kernel)
executable (can be a symlink). You can also put supporting kernel modules in `/lib/modules`
-if needed. You can match and mix any kernel and any base image.
+if needed. You can mix and match any kernel and any base image to create a VM.
-As the upstream `centos:7` and `ubuntu:18.04` images from Docker Hub doesn't
+As the upstream `centos:7` and `ubuntu:18.04` images from Docker Hub don't
have all the utilities and packages you'd expect in a VM (e.g. an init system), we have packaged some
reference base images and a sample kernel image to get started quickly.
@@ -196,6 +207,8 @@ but add `systemd`, `openssh`, and similar utilities.
- [Amazon Linux 2 Dockerfile](images/amazonlinux/Dockerfile) (`weaveworks/ignite-amazonlinux`)
- [The Firecracker Team's Alpine Image](images/alpine/Dockerfile) (`weaveworks/ignite-alpine`)
+These prebuilt images can be given to `ignite run` directly.
+
#### Kernel Images
- [Default Kernel Image](images/kernel/Dockerfile) (`weaveworks/ignite-kernel`)
@@ -204,8 +217,7 @@ but add `systemd`, `openssh`, and similar utilities.
#### Tutorials
- [Guide: Run a HA Kubernetes cluster with Ignite and kubeadm](images/kubeadm) (`weaveworks/ignite-kubeadm`)
-
-These prebuilt images can be given to `ignite run` directly.
+- [Guide: Run a set of Ignite VMs with Footloose](docs/footloose.md)
## Contributing
diff --git a/docs/FAQ.md b/docs/FAQ.md
index b7fa9c046..bce2789de 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -6,9 +6,9 @@ A collection of Frequently Asked Questions about Ignite:
No, you can't. Ignite isn't designed for running containers, hence it cannot work as a CRI runtime.
-Ignite runs VMs instead. In the future, we envision Ignite (maybe) being able to run VMs (not containers)
+Ignite runs VMs instead. In the future, we envision Ignite (maybe) being able to run VMs (not containers)
based off Kubernetes Pods using some special annotations. This would however (most likely) be done as a containerd
-plugin (lower in the stack than CRI)
+plugin (lower in the stack than CRI).
## Q: What is the difference between {Kata Containers, gVisor, fc-containerd} and Ignite?
@@ -23,7 +23,7 @@ containerd plugin.
gVisor acts as a gatekeeper between your application in a container, and the kernel. gVisor emulates the
kernel syscalls, and based on if they are "safe" or not, passes them though to the underlying kernel, or
-a similar operation. gVisor's value-add is the same as the above, added isolation for containers.
+performs a similar operation. gVisor's value-add is the same as the above, added isolation for containers.
Ignite however, uses the rootfs from an OCI image, and runs that content as a real VM. Inside of the
Firecracker VM spawned, there are no extra containers running (unless the user installs a container
@@ -35,10 +35,11 @@ Firecracker is a KVM implementation, and uses KVM to manage and virtualize the V
## Q: Why does Ignite require to be root?
-In order to prepare the filesystem, Ignite needs to create a file containing an
-ext4 filesystem for Firecracker to boot latest. In order to populate this filesystem
+In order to prepare the VM filesystem, Ignite needs to create a file containing an
+ext4 filesystem for Firecracker to boot. To populate this filesystem
in the file-based block device, Ignite needs to temporarily `mount` the filesystem,
-and copy the desired root filesystem in. `mount` requires the UID to be 0 (root).
+and copy the desired root filesystem source contents in. `mount` requires the UID
+to be 0 (root).
We hope to remove this requirement from the Ignite CLI in the future
[#24](https://github.com/weaveworks/ignite/issues/24), [#33](https://github.com/weaveworks/ignite/issues/33).
@@ -58,7 +59,7 @@ Docker, currently the only available container runtime usable by Ignite, is used
1. **Running long-lived processes**: At the very early Ignite PoC stage, we tried to run the Firecracker
process under `systemd`, but this was in many ways suboptimal. Attaching to the serial console, fetching
- logs, and b) and c) were very hard to achieve. Also, we'd need to somehow install the Firecracker binary
+ logs, and points 2 and 3 below were very hard to achieve. Also, we'd need to somehow install the Firecracker binary
on host. Packaging everything in a container, and running the Firecracker process in that container was a
natural fit.
1. **Sandboxing the Firecracker process**: Firecracker should not be run on host without sandboxing, as per
@@ -67,7 +68,7 @@ Docker, currently the only available container runtime usable by Ignite, is used
binary to do sandboxing/isolation from the host for the Firecracker process, but a container does this
equally well, if not better.
1. **Container Networking**: Using containers, we already know what IP to give the VM. We can integrate with
- e.g. the default docker bridge, or docker's `libnetwork` in general, or [CNI](https://github.com/containernetworking/cni).
+ e.g. the default docker bridge, docker's `libnetwork` in general, or [CNI](https://github.com/containernetworking/cni).
This reduces the amount of scope and work needed by Ignite, and keeps our implementation lean. It also directly
makes Ignite usable alongside normal containers, e.g. on a host running Kubernetes Pods.
1. **OCI compliant operations**: Using an existing container runtime, we do not need to implement everything
@@ -78,15 +79,15 @@ All in all, we do not want to reinvent the wheel. We reuse what we can from exis
## Q: How does my filesystem in a Docker image end up in a Firecracker VM?
-In short, we `pull` an OCI image using the container runtime (docker for now), `create` a new container using
+In short, we `pull` an OCI image using the container runtime (Docker for now), `create` a new container using
this image, and finally `export` the rootfs of that created container to a tar file. This tar file is then
extracted into the mount point of an ext4-formatted block device file of the OCI image's size. The kernel
OCI image is similarly copied into the rootfs of the container. Lastly, Ignite modifies some well-known files
-like `/etc/hosts`, and `/etc/resolv.conf` to the VM work as you would expect it to.
+like `/etc/hosts` and `/etc/resolv.conf` for the VM to work as you would expect it to.
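+
+Roughly, this corresponds to the following manual steps (an illustrative sketch only, not Ignite's
+actual implementation; the image name and sizes are placeholders):
+
+```console
+truncate -s 4G vm-rootfs.img                       # the file-based block device, sized from the OCI image/VM spec
+mkfs.ext4 vm-rootfs.img                            # format it with an ext4 filesystem
+mkdir -p /tmp/rootfs && mount -o loop vm-rootfs.img /tmp/rootfs
+cid=$(docker create weaveworks/ignite-ubuntu)      # create (not run) a container from the OCI image
+docker export "${cid}" | tar -xf - -C /tmp/rootfs  # export its rootfs and extract it into the mount point
+umount /tmp/rootfs
+```
+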
## Q: How does networking work as there are both containers and VMs?
-First, Ignite spawns a container using the runtime. In this container, one Ignite component `ignite-spawn` is running.
+First, Ignite spawns a container using the runtime. In this container, one Ignite component, `ignite-spawn`, is running.
`ignite-spawn` loops the network interfaces inside of the container, and looks for a valid one to use for the VM.
It removes the IP address from the container, and remembers it for later.
@@ -106,4 +107,4 @@ As per the announcement blog post: https://www.weave.works/blog/fire-up-your-vms
> Lucas Käldström (@luxas) is a Kubernetes SIG Lead and Top CNCF Ambassador 2017, and is a longstanding member of the Weaveworks family since graduating from High School (story here). As a young Finnish citizen, Lucas had to do his mandatory Military Service for around a year.
-> Naturally for Lucas, he started evangelising Kubernetes within the military, and got assigned programming tasks. Security and resource consumption are critical army concerns, so Lucas and a colleague, Dennis Marttinen, decided to experiment with Firecracker, creating an elementary version of Ignite. On leaving the army they were granted permission to work on an open source rewrite, working with Weaveworks.
+> Naturally for Lucas, he started evangelising Kubernetes within the military, and got assigned programming tasks. Security and resource consumption are critical army concerns, so Lucas and a colleague, Dennis Marttinen (@twelho), decided to experiment with Firecracker, creating an elementary version of Ignite. On leaving the army they were granted permission to work on an open source rewrite, working with Weaveworks.
diff --git a/docs/api/ignite_v1alpha1.md b/docs/api/ignite_v1alpha1.md
index 05cb3ccba..3fe724cc6 100644
--- a/docs/api/ignite_v1alpha1.md
+++ b/docs/api/ignite_v1alpha1.md
@@ -277,14 +277,14 @@ func Convert_v1alpha1_VMKernelSpec_To_ignite_VMKernelSpec(in *VMKernelSpec, out
Convert\_v1alpha1\_VMKernelSpec\_To\_ignite\_VMKernelSpec calls the
autogenerated conversion function along with custom conversion logic
-## func [Convert\_v1alpha1\_VMNetworkSpec\_To\_ignite\_VMNetworkSpec](https://github.com/weaveworks/ignite/tree/master/pkg/apis/ignite/v1alpha1/conversion.go?s=7778:7909#L179)
+## func [Convert\_v1alpha1\_VMNetworkSpec\_To\_ignite\_VMNetworkSpec](https://github.com/weaveworks/ignite/tree/master/pkg/apis/ignite/v1alpha1/conversion.go?s=7785:7916#L179)
``` go
func Convert_v1alpha1_VMNetworkSpec_To_ignite_VMNetworkSpec(in *VMNetworkSpec, out *ignite.VMNetworkSpec, s conversion.Scope) error
```
Convert\_v1alpha1\_VMNetworkSpec\_To\_ignite\_VMNetworkSpec calls the
-autogenerated conversion function and custom conversion logic
+autogenerated conversion function along with custom conversion logic
## func [Convert\_v1alpha1\_VMSpec\_To\_ignite\_VMSpec](https://github.com/weaveworks/ignite/tree/master/pkg/apis/ignite/v1alpha1/conversion.go?s=697:800#L16)
diff --git a/docs/declarative-config.md b/docs/declarative-config.md
index c510e54af..9e2b06a01 100644
--- a/docs/declarative-config.md
+++ b/docs/declarative-config.md
@@ -1,44 +1,43 @@
# Run Ignite VMs declaratively
-Flags can be convenient for simple use cases, but have many limitations.
-In more advanced use-cases, and to eventually allow GitOps flows, there is
-an other way: telling Ignite what to do _declaratively_, using a file containing
+Flags can be convenient for simple use, but have many limitations.
+For more advanced use cases, and to allow GitOps flows, there is
+another way: telling Ignite what to do _declaratively_, using a file containing
an API object.
-The first commands to support this feature is `ignite run` and `ignite create`.
-An example file as follows:
+The first commands to support this feature are `ignite run` and `ignite create`.
+Here's an example API object file:
```yaml
-apiVersion: ignite.weave.works/v1alpha1
+apiVersion: ignite.weave.works/v1alpha2
kind: VM
metadata:
- name: test-vm
+ name: my-vm
spec:
image:
- ociClaim:
- ref: weaveworks/ignite-ubuntu
+ oci: weaveworks/ignite-ubuntu
cpus: 2
diskSize: 3GB
memory: 800MB
```
-This API object specifies a need for 2 vCPUs, 800MB of RAM and 3GB of disk.
+This API object specifies a need for 2 vCPUs, 800 MB of RAM and 3 GB of disk space.
-We can tell Ignite to make this happen using simply:
+We can tell Ignite to make this happen by simply running:
```console
-$ ignite run --config test-vm.yaml
+$ ignite run --config my-vm.yaml
```
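+
+The same file also works with `ignite create` if you want to create the VM first and boot it later
+(a sketch; `ignite start` taking the VM name as its argument is assumed here):
+
+```console
+$ ignite create --config my-vm.yaml
+$ ignite start my-vm
+```
+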
The full reference format for the `VM` kind is as follows:
```yaml
-apiVersion: ignite.weave.works/v1alpha1
+apiVersion: ignite.weave.works/v1alpha2
kind: VM
metadata:
# Automatically set when the object is created
created: [RFC3339-formatted date]
- # Mandatory, the name of the VM
+ # Required, the name of the VM
name: [string]
# Optional, autogenerated if not specified
uid: [16-char hex UID]
@@ -49,10 +48,10 @@ metadata:
annotations:
foo: bar
spec:
- # Optional, how many vCPUs that should be allocated for the VM
+ # Optional, how many vCPUs should be allocated for the VM
# Default: 1
cpus: [uint64]
- # Optional, how much RAM that should be allocated for the VM
+ # Optional, how much RAM should be allocated for the VM
# Default: 512MB
memory: [size]
# Optional, how much free writable space the VM should have at runtime
@@ -60,42 +59,61 @@ spec:
diskSize: [size]
image:
- ociClaim:
- # Required: What OCI image to use as the VM's rootfs
- # For example: weaveworks/ignite-ubuntu:latest
- ref: [OCI image reference]
+ # Required, what OCI image to use as the VM's rootfs
+ # For example: weaveworks/ignite-ubuntu:latest
+ oci: [OCI image reference]
kernel:
- # Optional, the kernel commandline to be specified
+ # Optional, the kernel command line for the VM
# Default: "console=ttyS0 reboot=k panic=1 pci=off ip=dhcp"
cmdLine: [string]
- ociClaim:
- # Required: What OCI image to get the kernel executable, and optionally modules from
- # Default: weaveworks/ignite-kernel:4.19.47
- ref: [OCI image reference]
+ # Required, what OCI image to get the kernel binary (and optionally modules) from
+ # Default: weaveworks/ignite-kernel:4.19.47
+ oci: [OCI image reference]
network:
- # Optional: What networking mode to use. Available options are: "docker-bridge" and "cni"
- # Default: docker-bridge
- mode: [networking-mode]
- # Optional, an array of portmappings that map ports bound to the VM to the host
- # Default: unset, no portmappings
+ # Optional, an array of port mappings that map ports bound to the VM to the host
+ # Default: unset, no port mappings
ports:
- # This example maps 0.0.0.0:6443 inside of the VM to 0.0.0.0:443 on the physical host
+ # This example maps UDP port 6443 inside the VM to 10.0.0.2:433 on the physical host
- hostPort: 433
vmPort: 6443
-
- # Optional, an array of files to copy into the VM at runtime
- # Default: unset, no filemappings
+ # Optional, specify an address to bind to on the host
+ # Default: 0.0.0.0, any address
+ bindAddress: 10.0.0.2
+ # Optional, specify a protocol for the port mapping (tcp or udp)
+ # Default: tcp
+ protocol: udp
+
+ storage:
+ # Optional, an array of mountPath and name pairs,
+ # set the mount points for named volumes inside the VM.
+ # Names must match configured named volumes.
+ # Default: unset, no mount points
+ volumeMounts:
+ - mountPath: /mnt
+ name: volume0
+ # Optional, an array of blockDevice and name pairs,
+ # expose block devices on the host inside the VM.
+ # The blockDevice path must point to a block device formatted
+ # with a filesystem providing a UUID (such as ext4 or xfs).
+ # Default: unset, no volume forwarding
+ volumes:
+ - blockDevice:
+ path: /dev/sdb1
+ name: volume0
+
+ # Optional, an array of files/directories to copy into the VM on creation
+ # Default: unset, nothing will be copied
copyFiles:
# This example copies a Kubernetes KubeConfig file from /etc/kubernetes/admin.conf
- # on the host to /home/lucas/.kube/config inside of the VM
+ # on the host to /home/user/.kube/config inside the VM
- hostPath: /etc/kubernetes/admin.conf
- vmPath: /home/lucas/.kube/config
+ vmPath: /home/user/.kube/config
# Optional, provides automation to easily access your VM with the "ignite ssh" command
- # If "ssh: true" is set, Ignite will generate an SSH key and copy the public key into the VM
- # This allows for automatic "ignite ssh" logins.
- # Alternatively: specify a path to a public key to put in authorized_keys in the VM.
+ # If "ssh: true" is set, Ignite will generate an SSH key and copy the
+ # public key into the VM. This allows for automatic "ignite ssh" logins.
+ # Alternatively: specify a path to a public key to put in /root/.ssh/authorized_keys in the VM.
# Default: unset, no actions regarding SSH automation
ssh: [true, or public key path]
```
diff --git a/docs/dependencies.md b/docs/dependencies.md
index 7f7558e46..2e02a2188 100644
--- a/docs/dependencies.md
+++ b/docs/dependencies.md
@@ -13,10 +13,16 @@ Everything apart from above, is not supported, and out of scope.
## Host Requirements
- A host running Linux 4.14 or newer
- - An Intel or AMD (alpha) CPU
- `sysctl net.ipv4.ip_forward=1`
- loaded kernel loop module: `modprobe -v loop`
- Optional: `sysctl net.bridge.bridge-nf-call-iptables=0`
+ - One of the following CPUs:
+
+| CPU | Architecture | Support level | Notes |
+|-------|------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Intel | x86_64 | Complete | Requires VT-x, most non-Atom 64-bit Intel CPUs since Pentium 4 should be supported |
+| AMD | x86_64 | Alpha | Requires [AMD-V](https://en.wikipedia.org/wiki/X86_virtualization#AMD_virtualization_.28AMD-V.29), most AMD CPUs since the Athlon 64 "Orleans" should be supported |
+| ARM | AArch64 (64-bit) | Alpha | Requires GICv3, see [here](https://github.com/firecracker-microvm/firecracker/issues/1196) |
## Guest Requirements
@@ -50,7 +56,7 @@ With time, we aim to eliminate as many of these as possible.
- `docker` for managing the containers ignite uses
- Ubuntu package: `docker.io`
- CentOS package: `docker`
- - `dmsetup` for managing devicemapper snapshots and overlays
+ - `dmsetup` for managing device mapper snapshots and overlays
- Ubuntu package: `dmsetup`
- CentOS package: `device-mapper` (installed by default)
- `ssh` for SSH-ing into the VM (optional, for `ignite ssh` only)
diff --git a/docs/devel.md b/docs/devel.md
index d98c11347..c326045b1 100644
--- a/docs/devel.md
+++ b/docs/devel.md
@@ -16,17 +16,24 @@ It uses Go modules as the vendoring mechanism.
The only build requirement is Docker.
+To build `ignite`, `ignited` and `ignite-spawn` for all supported architectures, run:
```console
-make binary
-make image
+make build-all
+```
+
+To only build for a specific architecture, append the architecture to the command:
+```console
+make build-all-amd64
+make build-all-arm64
```
## Pre-commit tidying
-Before committing, please run this make target to tidy your local environment:
+Before committing, please run this make target to (re)generate
+autogenerated content and tidy your local environment:
```console
-make tidy
+make autogen tidy
```
## Building reference OS images
diff --git a/docs/gitops.md b/docs/gitops.md
new file mode 100644
index 000000000..7a5796805
--- /dev/null
+++ b/docs/gitops.md
@@ -0,0 +1,62 @@
+## Ignite - the GitOps VM
+
+Ignite is a “GitOps-first” project: GitOps is supported out of the box using the `ignited gitops` command.
+Previously this was integrated as `ignite gitops`, but this functionality has now moved to `ignited`,
+Ignite's upcoming daemon binary.
+
+In Git you declaratively store the desired state of a set of VMs you want to manage.
+`ignited gitops` reconciles the state from Git, and applies the desired changes as state is updated in the repo.
+It also commits and pushes any local changes/additions to the managed VMs back to the repository.
+
+This can then be automated, tracked for correctness, and managed at scale - [just some of the benefits of GitOps](https://www.weave.works/technologies/gitops/).
+
+The workflow is simply this:
+
+ - Run `ignited gitops [repo]`, where repo is an **SSH URL** to your Git repo
+ - Create a file with the VM specification, specifying how many vCPUs, how much RAM, disk, etc. you’d like for the VM
+ - Run `git push` and see your VM start on the host
+
+See it in action! (Note: The screencast is from an older version which differs somewhat)
+
+[![asciicast](https://asciinema.org/a/255797.svg)](https://asciinema.org/a/255797)
+
+### Try it out
+
+Go ahead and create a Git repository.
+
+**NOTE:** You need an SSH key for **root** that has push access to your repository; `ignited` will commit and push changes
+back to the repository using root's default SSH key. To edit root's git configuration, run
+`sudo git config --global --edit`. The root requirement will be removed in a future release.
+
+Here's a sample configuration you can push to your repository (as `my-vm.yaml`):
+```yaml
+apiVersion: ignite.weave.works/v1alpha2
+kind: VM
+metadata:
+ name: my-vm
+ uid: 599615df99804ae8
+spec:
+ image:
+ oci: weaveworks/ignite-ubuntu
+ cpus: 2
+ diskSize: 3GB
+ memory: 800MB
+ ssh: true
+status:
+ running: true
+```
+(For a more complete example repository configuration, see https://github.com/luxas/ignite-gitops.)
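+
+Assuming you have cloned your repository locally and saved the file above as `my-vm.yaml` in it,
+pushing it is just the usual Git workflow:
+```console
+git add my-vm.yaml
+git commit -m "Add my-vm"
+git push
+```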
+
+After you have [installed Ignite](../docs/installation.md), you can do the following:
+
+```console
+ignited gitops git@github.com:<user>/<repository>.git
+```
+
+**NOTE:** HTTPS doesn't preserve authentication information for `ignited` to push changes,
+so you need to set up SSH authentication and use the SSH clone URL for now.
+
+Ignite will now search that repo for suitable JSON/YAML files, and apply their state locally.
+You should see `my-vm` starting up in `ignite ps`. To enter the VM, run `ignite ssh my-vm`.
+
+Please refer to [docs/declarative-config.md](../docs/declarative-config.md) for the full API reference.
diff --git a/docs/index.rst b/docs/index.rst
index 4052b6029..32f99745f 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -18,6 +18,7 @@ In this folder you can read more about how to use Ignite, and how it works:
dependencies
usage
declarative-config
+ gitops
networking
prometheus
footloose
diff --git a/docs/installation.md b/docs/installation.md
index 1991db8d4..c9a98b9ef 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -4,12 +4,12 @@ This guide describes the installation and uninstallation process of Ignite.
## System requirements
-Ignite runs on any Intel-based `linux/amd64` system with `KVM` support.
-AMD support is in alpha (Firecracker limitation).
+Ignite runs on most Intel or AMD based `linux/amd64` systems and ARM (AArch64) based `linux/arm64` systems with `KVM` support.
+See the full CPU support table in [dependencies.md](dependencies.md) for more information.
See [cloudprovider.md](cloudprovider.md) for guidance on running Ignite on various cloud providers and suitable instances that you could use.
-**Note**: You do **not** need to install any "traditional" QEMU/KVM packages, as long as
+**NOTE:** You do **not** need to install any "traditional" QEMU/KVM packages, as long as
there is virtualization support in the CPU and kernel it works.
See [dependencies.md](dependencies.md) for needed dependencies.
@@ -70,10 +70,15 @@ save it as `/usr/local/bin/ignite` and make it executable.
To install Ignite from the command line, follow these steps:
```bash
-export VERSION=v0.4.2
-curl -fLo ignite https://github.com/weaveworks/ignite/releases/download/${VERSION}/ignite
-chmod +x ignite
-sudo mv ignite /usr/local/bin
+export VERSION=v0.5.0-alpha.1
+export GOARCH=$(go env GOARCH 2>/dev/null || echo "amd64")
+
+for binary in ignite ignited; do
+ echo "Installing ${binary}..."
+ curl -sfLo ${binary} https://github.com/weaveworks/ignite/releases/download/${VERSION}/${binary}-${GOARCH}
+ chmod +x ${binary}
+ sudo mv ${binary} /usr/local/bin
+done
```
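+
+You can then verify the binaries are on your `PATH`, for example with the `version` subcommand:
+```console
+ignite version
+```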
Ignite uses [semantic versioning](https://semver.org), select the version to be installed
@@ -100,6 +105,6 @@ To completely remove the Ignite installation, execute the following as root:
ignite rm -f $(ignite ps -aq)
# Remove the data directory
rm -r /var/lib/firecracker
-# Remove the Ignite binary
-rm /usr/local/bin/ignite
+# Remove the ignite and ignited binaries
+rm /usr/local/bin/ignite{,d}
```
diff --git a/docs/my-vm.yaml b/docs/my-vm.yaml
new file mode 100644
index 000000000..4e4155663
--- /dev/null
+++ b/docs/my-vm.yaml
@@ -0,0 +1,14 @@
+apiVersion: ignite.weave.works/v1alpha2
+kind: VM
+metadata:
+ name: my-vm
+ uid: 599615df99804ae8
+spec:
+ image:
+ oci: weaveworks/ignite-ubuntu
+ cpus: 2
+ diskSize: 3GB
+ memory: 800MB
+ ssh: true
+status:
+ running: true
\ No newline at end of file
diff --git a/docs/networking.md b/docs/networking.md
index 36b17c4b8..4dc13764b 100644
--- a/docs/networking.md
+++ b/docs/networking.md
@@ -1,13 +1,19 @@
# Networking
-The default networking mode is `docker-bridge`, which means that the default docker bridge will be used for the networking setup.
-The default docker bridge is a local `docker0` interface, giving out local addresses to containers in the `172.17.0.0/16` range.
+Ignite uses network plugins to manage VM networking. The default plugin is `docker-bridge`, which means that the default docker bridge will be
+used for the networking setup. The default docker bridge is a local `docker0` interface, giving out local addresses to containers in the `172.17.0.0/16` range.
-Ignite also supports integration with [CNI](https://github.com/containernetworking/cni), the standard networking interface
-for Kubernetes and many other cloud-native projects and container runtimes. Note that CNI itself is only an interface, not
+Via the `cni` plugin Ignite also supports integration with [CNI](https://github.com/containernetworking/cni), the standard networking
+interface for Kubernetes and many other cloud-native projects and container runtimes. Note that CNI itself is only an interface, not
an implementation, so if you use this mode you need to install an implementation of this interface. Any implementation that works
with Kubernetes should technically work with Ignite.
+To select the network plugin, use the `--network-plugin` flag for `ignite` and `ignited`:
+```console
+ignite --network-plugin cni <command>
+ignited --network-plugin docker-bridge <command>
+```
+
## Comparison
### docker-bridge
diff --git a/gitops/README.md b/gitops/README.md
deleted file mode 100644
index c6b89323d..000000000
--- a/gitops/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-## Ignite - the GitOps VM
-
-Ignite is a “GitOps-first” project, GitOps is supported out of the box using the `ignite gitops` command.
-
-In Git you declaratively store the desired state of a set of VMs you want to manage.
-`ignite gitops` reconciles the state from Git, and applies the desired changes as state is updated in the repo.
-
-This can then be automated, tracked for correctness, and managed at scale - [just some of the benefits of GitOps](https://www.weave.works/technologies/gitops/).
-
-The workflow is simply this:
-
- - Run `ignite gitops [repo]`, where repo points to your Git repo
- - Create a file with the VM specification, specifying how much vCPUs, RAM, disk, etc. you’d like from the VM
- - Run `git push` and see your VM start on the host
-
-See it in action!
-
-[![asciicast](https://asciinema.org/a/255797.svg)](https://asciinema.org/a/255797)
-
-### Try it out
-
-In this folder, there are two sample files [declaratively specifying how VMs should be run](../docs/declarative-config.md).
-This means, you can try this feature out yourself!
-
-After you have [installed Ignite](../docs/installation.md), you can do the following:
-
-```console
-$ ignite gitops https://github.com/luxas/ignite-gitops
-```
-
-Ignite will now search that repo for suitable JSON/YAML files, and apply their state locally.
-(You can go and check the files out first, too, at: https://github.com/luxas/ignite-gitops)
-
-To show how you could create your own repo, similar to `luxas/ignite-gitops`, refer to these two files:
-
- - [amazonlinux-vm.json](amazonlinux-vm.json)
- - [ubuntu-vm.yaml](ubuntu-vm.yaml)
-
-You can use these files as the base for your own Git-managed VM-spawning flow.
-
-Please refer to [docs/declarative-config.md](../docs/declarative-config.md) for the full API reference.
diff --git a/gitops/amazonlinux-vm.json b/gitops/amazonlinux-vm.json
deleted file mode 100644
index 18a66f776..000000000
--- a/gitops/amazonlinux-vm.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
- "kind": "VM",
- "apiVersion": "ignite.weave.works/v1alpha1",
- "metadata": {
- "name": "fragrant-sun",
- "uid": "d7ca3d08676aa190",
- },
- "spec": {
- "image": {
- "ociClaim": {
- "ref": "weaveworks/ignite-amazonlinux:latest"
- }
- },
- "kernel": {
- "ociClaim": {
- "ref": "weaveworks/ignite-amazon-kernel:latest"
- }
- },
- "cpus": 4,
- "memory": "2GB",
- "diskSize": "6GB"
- },
- "status": {
- "state": "Running"
- }
-}
\ No newline at end of file
diff --git a/gitops/ubuntu-vm.yaml b/gitops/ubuntu-vm.yaml
deleted file mode 100644
index 4dfe30f85..000000000
--- a/gitops/ubuntu-vm.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-apiVersion: ignite.weave.works/v1alpha1
-kind: VM
-metadata:
- name: ubuntu-vm
- uid: 05286f8fb73dc324
-spec:
- cpus: 2
- diskSize: 4GB
- memory: 1GB
- image:
- ociClaim:
- ref: weaveworks/ignite-ubuntu:latest
- kernel:
- ociClaim:
- ref: weaveworks/ignite-kernel:4.19.47
- ssh: true
-status:
- state: Running
diff --git a/ignite.iml b/ignite.iml
index 49df094a9..57043cac9 100644
--- a/ignite.iml
+++ b/ignite.iml
@@ -1,10 +1,16 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/images/kubeadm/README.md b/images/kubeadm/README.md
index de4f5587d..3ea485136 100644
--- a/images/kubeadm/README.md
+++ b/images/kubeadm/README.md
@@ -2,13 +2,13 @@
This short guide shows you how to setup Kubernetes in HA mode with Ignite VMs.
-**Note:** At the moment, you need to execute all these commands as `root`.
+**NOTE:** At the moment, you need to execute all these commands as `root`.
-**Note:** This guide assumes you have no running containers, in other words, that
+**NOTE:** This guide assumes you have no running containers, in other words, that
the IP of the first docker container that will be run is `172.17.0.2`. You can check
-this with `docker run busybox ip addr`.
+this with `docker run --rm busybox ip addr`.
-First set up some files and certificates using `prepare.sh`
+First set up some files and certificates using `prepare.sh` from this directory:
```bash
./prepare.sh
@@ -31,10 +31,10 @@ ignite run weaveworks/ignite-kubeadm:latest \
--name master-0
```
-Log into it using `ignite ssh master-0` and an initialize it with `kubeadm`:
+Initialize it with `kubeadm` using `ignite exec`:
```bash
-kubeadm init --config /kubeadm.yaml --upload-certs
+ignite exec master-0 kubeadm init --config /kubeadm.yaml --upload-certs
```
### Join additional masters
@@ -52,14 +52,16 @@ for i in {1..2}; do
done
```
-SSH into each VM with `ignite ssh master-{1,2}`, and join the control plane:
+Use `ignite exec` to join each VM to the control plane:
```bash
-kubeadm join firekube.luxas.dev:6443 \
- --token ${TOKEN} \
- --discovery-token-ca-cert-hash sha256:${CA_HASH} \
- --certificate-key ${CERT_KEY} \
- --control-plane
+for i in {1..2}; do
+ ignite exec master-${i} kubeadm join firekube.luxas.dev:6443 \
+ --token ${TOKEN} \
+ --discovery-token-ca-cert-hash sha256:${CA_HASH} \
+ --certificate-key ${CERT_KEY} \
+ --control-plane
+done
```
### Set up a HAProxy loadbalancer locally
@@ -82,7 +84,7 @@ Right now it's expected that the nodes are in state `NotReady`, as CNI networkin
#### Install a CNI Network -- Weave Net
-We're gonna use [Weave Net](https://github.com/weaveworks/weave).
+We're going to use [Weave Net](https://github.com/weaveworks/weave).
```bash
kubectl apply -f https://git.io/weave-kube-1.6
@@ -101,6 +103,5 @@ kubectl get nodes
```
What's happening underneath here is that HAproxy (or any other loadbalancer) notices that
-`master-0` is unhealthy, and removes it from the roundrobin list, while etcd also realizes
-that one peer is lost, and re-electing a leader amongst the two that are still standing.
-When this is done (takes a second or two) the cluster can continue to serve requests as before.
+`master-0` is unhealthy, and removes it from the round-robin list. etcd also realizes
+that one peer is lost, and re-elects a leader amongst the two that are still standing.
\ No newline at end of file
diff --git a/pkg/apis/ignite/v1alpha1/conversion.go b/pkg/apis/ignite/v1alpha1/conversion.go
index 556f73cd5..699635a95 100644
--- a/pkg/apis/ignite/v1alpha1/conversion.go
+++ b/pkg/apis/ignite/v1alpha1/conversion.go
@@ -175,8 +175,8 @@ func Convert_v1alpha1_VMKernelSpec_To_ignite_VMKernelSpec(in *VMKernelSpec, out
return Convert_v1alpha1_OCIClaim_To_ignite_OCI(&in.OCIClaim, &out.OCI)
}
-// Convert_v1alpha1_VMNetworkSpec_To_ignite_VMNetworkSpec calls the autogenerated conversion function and custom conversion logic
+// Convert_v1alpha1_VMNetworkSpec_To_ignite_VMNetworkSpec calls the autogenerated conversion function along with custom conversion logic
func Convert_v1alpha1_VMNetworkSpec_To_ignite_VMNetworkSpec(in *VMNetworkSpec, out *ignite.VMNetworkSpec, s conversion.Scope) error {
- // .Spec.Network.Mode is not tracked in the v1alpha2 and newer, so there's no extra conversion logic
+ // .Spec.Network.Mode is not tracked in v1alpha2 and newer, so there's no extra conversion logic
return autoConvert_v1alpha1_VMNetworkSpec_To_ignite_VMNetworkSpec(in, out, s)
}