This repository has been archived by the owner on May 6, 2020. It is now read-only.

The big picture

Kosisochukwu Anyanwu edited this page Dec 9, 2019 · 1 revision

This document describes the big picture of Lokomotive-Kubernetes project and its components.

Currently, cluster creation is done via Terraform configuration files. In the future, Lokomotive will include a tool (lokoctl) which will simplify the creation and updating of the cluster configuration.

Lokomotive-Kubernetes

Lokomotive is an open source project by Kinvolk which distributes pure upstream Kubernetes. It provides a Terraform module for each supported platform, with Flatcar Linux as its operating system. Supported platforms are:

High level overview

[High-level overview diagram; image source linked in the original wiki]

Why Flatcar Linux

Security

Cluster wide Pod Security Policy

  • Lokomotive clusters are deployed with security as a top concern, so the clusters have PodSecurityPolicy (PSP) enabled by default. Each cluster ships with two default PSPs for general-purpose application use: restricted and privileged.

  • The restricted PSP is allowed for all workloads in all namespaces. The definition of the restricted PSP can be found here. This PSP has the following restrictions:

    • Pods may not run as root,
    • Only whitelisted volume types are allowed,
    • Only whitelisted capabilities are allowed,
    • Sharing of Linux kernel host namespaces is not allowed for any pod using this PSP. The default Docker seccomp profiles are used.
  • The privileged PSP is allowed only for workloads in the kube-system namespace. The definition of the privileged PSP can be found here. This PSP does not restrict workloads in any way. Although it can be used by a workload in another namespace by creating a RoleBinding that binds the ClusterRole privileged-psp to the corresponding ServiceAccount, doing so is not recommended.

  • To grant special permissions to a workload, the recommended approach is to create a bespoke PSP and allow the workload's ServiceAccount to use it.
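As a sketch of that approach, the manifests below define a hypothetical bespoke PSP together with the Role and RoleBinding that grant a ServiceAccount the right to use it. All names, the namespace, and the allowed volume types are illustrative assumptions, not part of Lokomotive itself:

```yaml
# Hypothetical bespoke PSP: like restricted, but also allows hostPath volumes.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-hostpath
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - hostPath
    - configMap
    - secret
---
# Role granting "use" on the PSP, scoped to the workload's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-allow-hostpath
  namespace: my-app          # placeholder namespace
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["allow-hostpath"]
    verbs: ["use"]
---
# Bind the Role to the workload's ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-allow-hostpath
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-allow-hostpath
subjects:
  - kind: ServiceAccount
    name: my-app-sa          # placeholder ServiceAccount
    namespace: my-app
```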

Global Network Policies (for Packet Platform only)

Lokomotive installs Calico GlobalNetworkPolicy objects by default. These help restrict access to the nodes from outside the cluster.

The policy named ssh is worth noting, since it defines who can SSH into the nodes. To edit the IP address list, run the following command and add the IP addresses that should be allowed SSH access to the hosts to the whitelist. The list is at the JSON path {.spec.ingress[0].source.nets}. Also remove the IP block 0.0.0.0/0 from the whitelist, if present.

kubectl edit globalnetworkpolicies ssh
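For illustration, an edited policy might look roughly like the fragment below. The field names follow the projectcalico.org/v3 API; the selector and other details of Lokomotive's actual ssh policy may differ, and 203.0.113.0/24 is a placeholder for your own trusted CIDR:

```yaml
# Illustrative sketch of the "ssh" GlobalNetworkPolicy after editing.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: ssh
spec:
  selector: all()            # assumption: applies to all host endpoints
  ingress:
    - action: Allow
      protocol: TCP
      source:
        nets:
          - 203.0.113.0/24   # your trusted CIDR; note 0.0.0.0/0 removed
      destination:
        ports:
          - 22
```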

Components

A component is a Kubernetes workload which adds functionality to a Lokomotive cluster. Lokomotive components can take care of tasks such as load balancing, monitoring, authentication, storage, etc.

System Components

  • DNS: The cluster DNS used in Lokomotive is CoreDNS.

  • Networking: Calico is the networking provider for Lokomotive.

User Authentication

Lokomotive provides dex and gangway components for user authentication via OpenID Connect (OIDC). With these components, you can securely manage access to the Kubernetes cluster and its resources. The combination of the two can be likened to the AWS IAM service.

Lokomotive also provides a cert-manager component for automating the management and issuance of TLS certificates from various issuing sources. This component can be likened to AWS Certificate Manager.

Cert manager

  • Cert-manager is used to automatically provision TLS certificates for various other components, such as Dex and Gangway, using the Let’s Encrypt certificate authority.

  • It also renews the components’ certificates before they expire.

  • It runs in the namespace cert-manager.
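A minimal sketch of how cert-manager is typically pointed at Let’s Encrypt: a ClusterIssuer for the ACME endpoint, from which Certificate resources can then be issued. The email address and secret name are placeholders, and the API version shown is current upstream cert-manager — the release bundled with a given Lokomotive version may use an older API group:

```yaml
# Hypothetical ClusterIssuer using the Let's Encrypt production ACME endpoint.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: contour            # assumption: solve challenges via Contour
```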

Dex

  • The default authentication provider for Lokomotive is Dex, an OIDC and OAuth 2.0 provider with various connectors running as plugins.

  • Dex runs in the dex namespace.

Gangway

  • Gangway is a frontend to Dex. It is a web application used to easily enable authentication flows via OIDC workflows for Kubernetes clusters.

  • Gangway makes generating a kubeconfig straightforward: users do not have to create one manually, and administrators do not have to create one for each user.

  • Gangway related manifests are deployed in gangway namespace.
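For orientation, the kubeconfig Gangway produces typically contains a user entry using kubectl's OIDC auth provider, along the lines of the fragment below. Every value here (user name, client ID, issuer URL, tokens) is a placeholder standing in for what Gangway fills from your Dex deployment:

```yaml
# Sketch of the user section of a Gangway-generated kubeconfig.
users:
  - name: jane@example.com              # placeholder user
    user:
      auth-provider:
        name: oidc
        config:
          client-id: gangway            # placeholder OIDC client ID
          client-secret: REDACTED
          id-token: REDACTED            # JWT issued by Dex
          refresh-token: REDACTED
          idp-issuer-url: https://dex.example.com  # placeholder Dex URL
```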

Monitoring/Metrics

Lokomotive provides a prometheus-operator component that creates, configures and manages Prometheus clusters atop Kubernetes.

Prometheus Operator

  • Prometheus Operator provides easy monitoring definitions for Kubernetes services, and handles deployment and management of Prometheus instances.

  • It requires a PersistentVolume plugin, e.g. OpenEBS (https://docs.openebs.io/) or one of the built-in plugins, to store and manage data natively.

  • To start monitoring an application running on Kubernetes, you need to create a ServiceMonitor object in the application's namespace.

  • The Prometheus dashboard can be accessed via a URL defined by the user during configuration and runs on port 9090 on the prometheus-operated pod (created by Prometheus Operator).
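A minimal ServiceMonitor might look like the sketch below. The names, namespace, and the release label are assumptions — the label your Prometheus instance actually selects on depends on how the prometheus-operator component was configured:

```yaml
# Hypothetical ServiceMonitor scraping a Service labeled app: my-app.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-app          # placeholder: the application's namespace
  labels:
    release: prometheus      # assumption: label your Prometheus selects on
spec:
  selector:
    matchLabels:
      app: my-app            # must match the target Service's labels
  endpoints:
    - port: metrics          # named Service port exposing /metrics
      interval: 30s
```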

Storage

Lokomotive provides openebs-operator, rook and rook-ceph storage components for creating and managing storage natively on a Lokomotive cluster. OpenEBS can be likened to AWS block storage (EBS), while Ceph can be likened to AWS S3.

OpenEBS Operator

  • OpenEBS provides a persistent, containerised block storage solution for Lokomotive clusters.

  • OpenEBS Operator requires available disks, i.e. disks that aren't mounted by anything. This means that by default, OpenEBS will not work on machines with just a single physical disk, e.g. Packet's t1.small.x86 (because the disk will be used for the operating system).

  • Lokomotive also provides an openebs-default-storage-class component that creates a storage pool and a storage class with a replica count of 3 using physical disks, and makes the installed storage class the default.

  • You can monitor its metrics with the Prometheus Operator component.

  • OpenEBS Operator runs in openebs namespace.
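With openebs-default-storage-class installed, claiming storage is a plain PersistentVolumeClaim; since the installed class is the cluster default, no storageClassName needs to be set. The claim name and size below are placeholders:

```yaml
# Hypothetical PVC backed by the default OpenEBS storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data              # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # placeholder size
  # storageClassName omitted on purpose: the class installed by
  # openebs-default-storage-class is the cluster default.
```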

Rook

  • Rook is a storage orchestrator, providing the platform, framework and support for Ceph to natively integrate with Lokomotive.

  • Rook workloads run in the namespace specified by the user. The default namespace is rook.

Rook Ceph

  • The Rook Ceph component installs a Ceph cluster managed by the Rook operator. The Rook component is a prerequisite for Rook Ceph.

  • To deploy a Ceph cluster using Rook, a custom resource (CR) of kind CephCluster named rook-ceph is used.

  • Because Rook Ceph is managed by Rook, it runs in the namespace specified by the user. The default namespace is rook.
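A minimal CephCluster CR, for orientation only — the spec the rook-ceph component actually renders will differ, and the Ceph image tag and storage settings below are assumptions:

```yaml
# Sketch of a minimal CephCluster CR as consumed by the Rook operator.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook            # default namespace used by the Rook component
spec:
  cephVersion:
    image: ceph/ceph:v14     # assumption: era-appropriate Ceph image
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                 # three monitors for quorum
  storage:
    useAllNodes: true        # assumption: consume disks on every node
    useAllDevices: true      # assumption: consume all unused raw devices
```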

Load Balancing/Ingress (for Packet platform only)

Lokomotive provides MetalLB component for load balancing and Contour component for ingress.

MetalLB

  • Packet, one of our supported cloud platforms, does not provide a load balancer, so Lokomotive uses MetalLB as a replacement.

  • MetalLB is a Kubernetes-native bare metal load balancer. It operates by allocating one IPv4 address to each Service of type LoadBalancer created on the cluster, then advertising this address to one or more upstream BGP routers. This enables both high availability and load balancing: high availability, because BGP naturally converges upon node failure; and load balancing, via ECMP.

  • MetalLB runs in the namespace metallb-system.
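From an application's point of view, using MetalLB is transparent: creating an ordinary Service of type LoadBalancer is enough for an address to be allocated and advertised. The names and ports below are placeholders:

```yaml
# Hypothetical Service; MetalLB assigns it an IPv4 address and
# advertises that address to the upstream BGP routers.
apiVersion: v1
kind: Service
metadata:
  name: my-app               # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80               # externally reachable port
      targetPort: 8080       # placeholder container port
```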

Contour

  • Contour is an ingress controller using Lyft's Envoy proxy.

  • The default ingress provider for Lokomotive is Contour. The contour service is of type LoadBalancer and is exposed via an elastic IP (EIP).

  • Contour can run as a daemonset or as a deployment.

    • Installing it as a daemonset spreads the load across all cluster nodes, with the obvious consequence of consuming resources on every node.

    • Installing it as a deployment uses only the desired number of replicas, but each pod may carry more traffic.

  • It requires the MetalLB component to be installed and configured.

  • Contour runs in the namespace heptio-contour.
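Once Contour is running, applications expose themselves with a standard Ingress resource. The sketch below uses the networking.k8s.io/v1beta1 API current for the Kubernetes versions of this document's era; the hostname, names, and ingress-class annotation value are placeholders:

```yaml
# Hypothetical Ingress routed by Contour to the my-app Service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: contour   # hand this Ingress to Contour
spec:
  rules:
    - host: app.example.com                # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: my-app          # placeholder Service
              servicePort: 80
```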

Custom Controllers

Lokomotive provides a calico-host-endpoint-controller component for adding and removing Calico HostEndpoint objects, and a cluster-autoscaler component for adjusting the size of a Lokomotive cluster.

Calico HostEndpoint controller

  • The Calico HostEndpoint controller ensures that any new node added to the Lokomotive cluster gets HostEndpoint objects, and that those objects are removed when the nodes they refer to are deleted. This is relevant for bare-metal and Packet clusters, where there are no external security primitives and nodes must rely on HostEndpoint objects to be secured.

  • The Calico HostEndpoint controller runs in the kube-system namespace.

Cluster Autoscaler (for Packet platform only)

  • Cluster Autoscaler is used to automatically adjust the size of a Lokomotive cluster when one of the following conditions is true:

    • there are pods that failed to run in the cluster due to insufficient resources,

    • there are nodes that have been underutilized for an extended period of time and pods can be placed on other existing nodes.

  • Cluster Autoscaler runs in namespace kube-system.

Update

Lokomotive provides the flatcar-linux-update-operator component for draining a node before rebooting after a Flatcar Linux OS update.

Flatcar Linux Update Operator

  • When a reboot is needed after update_engine applies a system update, the Flatcar Linux Update Operator drains the node before rebooting it. You can add an annotation to prevent a specific node from rebooting.

  • Flatcar Linux Update Operator component runs in the namespace reboot-coordinator.
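The pause annotation mentioned above is set on the Node object; the fragment below shows the key commonly used by the operator, but verify it against the flatcar-linux-update-operator documentation for your version before relying on it:

```yaml
# Node metadata fragment: with this annotation set, the operator
# will not reboot this node after an update.
metadata:
  annotations:
    flatcar-linux-update.v1.flatcar-linux.net/reboot-paused: "true"
```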