Rust client for Kubernetes in the style of a more generic client-go, a runtime abstraction inspired by controller-runtime, and a derive macro for CRDs inspired by kubebuilder.
These crates make certain assumptions about the Kubernetes apimachinery + api concepts to enable generic abstractions. These abstractions allow Rust reinterpretations of reflectors, informers, controllers, and custom resource interfaces, so that you can write applications easily.
Select a version of `kube` along with the generated `k8s-openapi` types corresponding to your cluster version:
```toml
[dependencies]
kube = "0.55.0"
kube-runtime = "0.55.0"
k8s-openapi = { version = "0.11.0", default-features = false, features = ["v1_20"] }
```
We recommend turning off default-features for `k8s-openapi` to speed up your compilation.
Please check the CHANGELOG when upgrading. All crates herein are versioned and released together to guarantee compatibility before 1.0.
See the examples directory for how to use any of these crates.
Official examples:
- version-rs: super lightweight reflector deployment with actix 2 and prometheus metrics
- controller-rs: `Controller` owned by a `Manager` inside actix
Real world users:
- krustlet - a complete `WASM` running `kubelet`
- stackabletech operators - (kafka, zookeeper, and more)
- kdash tui - terminal dashboard for kubernetes
- logdna agent
- kubeapps pinniped
- kubectl-view-allocations - kubectl plugin to list resource allocations
The direct `Api` type takes a client, and is constructed with either the `::all` or `::namespaced` functions:
```rust
use kube::{Client, api::{Api, DeleteParams, Patch, PatchParams}};
use k8s_openapi::api::core::v1::Pod;
use serde_json::json;

let client = Client::try_default().await?;
let pods: Api<Pod> = Api::namespaced(client, "default");
let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.unwrap().containers);

let patch = json!({"spec": {
    "activeDeadlineSeconds": 5
}});
let pp = PatchParams::apply("my_manager");
let patched = pods.patch("blog", &pp, &Patch::Apply(patch)).await?;
assert_eq!(patched.spec.unwrap().active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;
```
See the examples ending in `_api` for more detail.
Working with custom resources uses automatic code-generation via proc_macros in kube-derive. You need to `#[derive(CustomResource)]` and some `#[kube(attrs..)]` on a spec struct:
```rust
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(CustomResource, Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
#[kube(group = "clux.dev", version = "v1", kind = "Foo", namespaced)]
pub struct FooSpec {
    name: String,
    info: String,
}
```
Then you can use the generated wrapper struct `Foo` as a `kube::Resource`:
```rust
let foos: Api<Foo> = Api::namespaced(client, "default");
let f = Foo::new("my-foo", FooSpec::default());
println!("foo: {:?}", f);
println!("crd: {:?}", serde_yaml::to_string(&Foo::crd()));
```
There are a ton of kubebuilder-like instructions that you can annotate with here. See the documentation or the `crd_` prefixed examples for more.
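For instance, a few commonly used attributes on top of the basic ones above. This is a hedged, non-exhaustive sketch; the `FooStatus` struct and the chosen shortname/printcolumn values are invented for illustration:

```rust
#[derive(CustomResource, Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
#[kube(group = "clux.dev", version = "v1", kind = "Foo", namespaced)]
#[kube(status = "FooStatus")] // attach a status subresource struct
#[kube(shortname = "f")]      // allows `kubectl get f`
#[kube(printcolumn = r#"{"name":"Info", "type":"string", "jsonPath":".spec.info"}"#)]
pub struct FooSpec {
    name: String,
    info: String,
}

/// Hypothetical status object used by the status subresource above
#[derive(Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
pub struct FooStatus {
    is_ok: bool,
}
```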
NB: `#[derive(CustomResource)]` requires the `derive` feature enabled on `kube`.
The `kube_runtime` crate contains sets of higher level abstractions on top of the `Api` and `Resource` types so that you don't have to do all the `watch`/`resourceVersion`/storage book-keeping yourself.
A low level streaming interface (similar to informers) that presents `Applied`, `Deleted` or `Restarted` events.
```rust
let api = Api::<Pod>::namespaced(client, "default");
let watcher = watcher(api, ListParams::default());
```
This now gives a continual stream of events and you do not need to care about the watch having to restart, or connections dropping.
```rust
let mut apply_events = try_flatten_applied(watcher).boxed_local();
while let Some(event) = apply_events.try_next().await? {
    println!("Applied: {}", event.name());
}
```
NB: the plain stream items a `watcher` returns are different from `WatchEvent`. If you are following along to "see what changed", you should flatten it with one of the utilities like `try_flatten_applied` or `try_flatten_touched`.
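If you do want the raw events, you can instead match on the `Event<K>` variants directly. A minimal sketch, assuming a `Pod` watcher like the one above and the `Applied`/`Deleted`/`Restarted` variants mentioned earlier:

```rust
use futures::{StreamExt, TryStreamExt};
use kube_runtime::watcher::Event;

// build a fresh watcher stream for illustration
let mut stream = watcher(api, ListParams::default()).boxed_local();
while let Some(event) = stream.try_next().await? {
    match event {
        Event::Applied(pod) => println!("added or modified: {:?}", pod.metadata.name),
        Event::Deleted(pod) => println!("deleted: {:?}", pod.metadata.name),
        Event::Restarted(pods) => println!("watch (re)started with {} pods", pods.len()),
    }
}
```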
A `reflector` is a `watcher` with a `Store` on `K`. It acts on all the `Event<K>` exposed by `watcher` to ensure that the state in the `Store` is as accurate as possible.
```rust
use kube::{Api, api::ListParams};
use kube_runtime::{reflector, watcher};
use k8s_openapi::api::core::v1::Node;

let nodes: Api<Node> = Api::all(client); // Nodes are cluster-scoped
let lp = ListParams::default()
    .labels("beta.kubernetes.io/instance-type=m4.2xlarge");
let store = reflector::store::Writer::<Node>::default();
let reader = store.as_reader();
let rf = reflector(store, watcher(nodes, lp));
```
At this point you can listen to the `reflector` as if it was a `watcher`, but you can also query the `reader` at any point.
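As an example, here is a minimal sketch of polling the cached state through the `reader` handle while the `rf` stream is driven elsewhere (this assumes `Store::state` hands back a snapshot of the cached objects; the polling interval is arbitrary):

```rust
use std::time::Duration;

// `reader` is the Store handle obtained from `store.as_reader()` above
loop {
    let names: Vec<_> = reader
        .state()
        .iter()
        .filter_map(|node| node.metadata.name.clone())
        .collect();
    println!("Currently cached {} nodes: {:?}", names.len(), names);
    tokio::time::sleep(Duration::from_secs(10)).await;
}
```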
A `Controller` is a `reflector` along with an arbitrary number of watchers that schedule events internally to send events through a reconciler:
```rust
Controller::new(root_kind_api, ListParams::default())
    .owns(child_kind_api, ListParams::default())
    .run(reconcile, error_policy, context)
    .for_each(|res| async move {
        match res {
            Ok(o) => info!("reconciled {:?}", o),
            Err(e) => warn!("reconcile failed: {}", Report::from(e)),
        }
    })
    .await;
```
Here `reconcile` and `error_policy` refer to functions you define. The first will be called when the root or child elements change, and the second when the reconciler returns an `Err`.
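As a rough sketch of what those two functions could look like against the `Foo` type from earlier, assuming the 0.55-era `kube_runtime::controller` types (`Context`, `ReconcilerAction`); the `Error` and `Data` types here are placeholders invented for illustration, and the requeue intervals are arbitrary:

```rust
use std::time::Duration;
use kube_runtime::controller::{Context, ReconcilerAction};

// Placeholder error type for illustration
#[derive(Debug)]
struct Error;
impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "reconcile failed")
    }
}
impl std::error::Error for Error {}

// Placeholder for whatever shared state the reconciler needs (e.g. a Client)
struct Data;

/// Called whenever the root object or one of its owned children changes
async fn reconcile(foo: Foo, _ctx: Context<Data>) -> Result<ReconcilerAction, Error> {
    // inspect `foo` and drive the cluster towards the desired state here
    Ok(ReconcilerAction {
        requeue_after: Some(Duration::from_secs(300)),
    })
}

/// Called whenever `reconcile` returns an `Err`; decides when to retry
fn error_policy(_error: &Error, _ctx: Context<Data>) -> ReconcilerAction {
    ReconcilerAction {
        requeue_after: Some(Duration::from_secs(60)),
    }
}
```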
Kube has basic support (with caveats) for rustls as a replacement for the `openssl` dependency. To use this, turn off default features, and enable `rustls-tls`:
```toml
[dependencies]
kube = { version = "0.55.0", default-features = false, features = ["rustls-tls"] }
kube-runtime = { version = "0.55.0", default-features = false, features = ["rustls-tls"] }
k8s-openapi = { version = "0.11.0", default-features = false, features = ["v1_20"] }
```
This will pull in `hyper-rustls` and `tokio-rustls`.
Kube will work with distroless, scratch, and alpine (it's also possible to use alpine as a builder with some caveats).
Apache 2.0 licensed. See LICENSE for details.