SSH access: add ssh-bastion feature
add bastion Dockerfile and bastion-operator
inject tenants' public keys into VMs with cloud-init
tests with Ginkgo
docs
claudious96 committed Dec 12, 2020
1 parent fc22aac commit e66a4dd
Showing 19 changed files with 773 additions and 9 deletions.
11 changes: 11 additions & 0 deletions .github/workflows/build.yml
@@ -72,10 +72,14 @@ jobs:
- crownlabs-image-list
- delete-stale-instances
- tenant-operator
- bastion-operator

# Frontend
- frontend

# SSH bastion
- ssh-bastion

# Laboratory environments
- novnc
- tigervnc
@@ -96,6 +100,9 @@ jobs:
- component: tenant-operator
context: ./operators
dockerfile: ./operators/build/tenant-operator/Dockerfile
- component: bastion-operator
context: ./operators
dockerfile: ./operators/build/bastion-operator/Dockerfile

# Frontend
- component: frontend
@@ -111,6 +118,10 @@
- component: pycharm
context: ./provisioning/containers/pycharm

# SSH bastion
- component: ssh-bastion
context: ./operators/build/ssh-bastion

steps:
- name: Checkout
uses: actions/checkout@v2
3 changes: 3 additions & 0 deletions operators/.gitignore
@@ -23,3 +23,6 @@

# Python
__pycache__

# Host keys for the ssh-bastion
ssh_host_key*
2 changes: 1 addition & 1 deletion operators/Makefile
@@ -9,7 +9,7 @@ gen: generate fmt vet manifests

#run all tests
test:
go test ./... -coverprofile coverage.out
go test ./... -p 1 -coverprofile coverage.out

test-python: python-dependencies
python3 ./cmd/delete-stale-instances/test_delete_stale_instances.py
57 changes: 57 additions & 0 deletions operators/README.md
@@ -85,6 +85,63 @@ make install

N.B. So far, the readiness check for VirtualMachines is performed by assuming that the operator is running in the same cluster as the Virtual Machines. This prevents VMs from being reported as *ready* when testing the operator outside the cluster.

## SSH bastion

The SSH bastion is composed of two basic blocks:
1. `bastion-operator`: an operator based on [Kubebuilder 2.3](https://github.com/kubernetes-sigs/kubebuilder.git)
2. `ssh-bastion`: a lightweight Alpine-based container running [sshd](https://linux.die.net/man/8/sshd)
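
In practice, tenants never log into the bastion itself: they use it as an SSH jump host towards their VMs, as also suggested by the welcome message (`motd`) further below. A minimal usage sketch, where the bastion hostname and the target addresses depend on your deployment:

```bash
# Jump through the bastion to reach a remote environment.
# <bastion_host> is the address where the ssh-bastion Service is exposed;
# <username> and <vm_ip> come from the CrownLabs dashboard.
ssh -J bastion@<bastion_host> <username>@<vm_ip>
```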

### Installation

#### Pre-requirements

The only prerequisite for deploying the SSH bastion is `ssh-keygen`, and it is needed only if you do not already have the host keys that sshd will use.
You can check whether `ssh-keygen` is already installed by running:
```bash
ssh-keygen --help
```
To install it (e.g. on Ubuntu) run:
```bash
apt install openssh-client
```

#### Deployment

To deploy the SSH bastion in your cluster, you have to do the following steps.

First, generate the host keys needed to run sshd using:
```bash
# Generate the keys in this folder (they will be ignored by git) or in a folder outside the project
ssh-keygen -f ssh_host_key_ecdsa -N "" -t ecdsa
ssh-keygen -f ssh_host_key_ed25519 -N "" -t ed25519
ssh-keygen -f ssh_host_key_rsa -N "" -t rsa
```

Now create the secret holding the keys. If the bastion is going to run in a namespace other than `default`, add the `--namespace=<namespace>` option.
```bash
kubectl create secret generic ssh-bastion-host-keys \
--from-file=./ssh_host_key_ecdsa \
--from-file=./ssh_host_key_ed25519 \
--from-file=./ssh_host_key_rsa
```
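
To double-check that the keys were stored correctly, you can inspect the secret (again, add `--namespace=<namespace>` if needed); the three host keys should appear as data entries:

```bash
# Inspect the secret holding the sshd host keys.
kubectl describe secret ssh-bastion-host-keys
```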

Then set the desired values in `operators/deploy/bastion-operator/k8s-manifest-example.env`.

Export the environment variables and generate the manifest from the template using:

```bash
cd operators/deploy/bastion-operator
export $(xargs < k8s-manifest-example.env)
envsubst < k8s-manifest.yaml.tmpl > k8s-manifest.yaml
```
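
As an optional sanity check, kubectl can validate the rendered manifest without touching the cluster (this assumes a kubectl version that supports `--dry-run=client`):

```bash
# Parse and validate the generated manifest client-side, without applying it.
kubectl apply --dry-run=client -f k8s-manifest.yaml
```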

After the manifest has been correctly generated, you can install the cluster role and deploy the SSH bastion using:

```bash
kubectl apply -f k8s-cluster-role.yaml
kubectl apply -f k8s-manifest.yaml
```
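
Once applied, you can verify that the bastion pods are running and inspect the Service that exposes them (replace `<namespace>` with the namespace set in the env file):

```bash
# Check that the ssh-bastion pods (sshd container + operator sidecar) are up.
kubectl -n <namespace> get pods -l app=ssh-bastion

# Check the LoadBalancer Service exposing the bastion.
kubectl -n <namespace> get svc ssh-bastion
```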

## CrownLabs Image List

The CrownLabs Image List script allows gathering the list of available images from a Docker Registry and exposing it as an ImageList custom resource, to be consumed by the CrownLabs dashboard.
13 changes: 13 additions & 0 deletions operators/build/bastion-operator/Dockerfile
@@ -0,0 +1,13 @@
# Build the manager binary
FROM golang:1.15 as builder
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go
COPY ./ /go/src/github.com/netgroup-polito/CrownLabs/operators/
WORKDIR /go/src/github.com/netgroup-polito/CrownLabs/operators/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o controller ./cmd/bastion-operator/main.go
RUN cp controller /usr/bin/controller

FROM busybox
COPY --from=builder /usr/bin/controller /usr/bin/controller
USER 20000:20000
ENTRYPOINT [ "/usr/bin/controller" ]
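
For local testing, the image can be built from the `operators` directory, which matches the Docker build context used in the CI workflow above; the tag is just an example:

```bash
# Build the bastion-operator image locally (build context: ./operators).
cd operators
docker build -t crownlabs/bastion-operator:dev -f build/bastion-operator/Dockerfile .
```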
18 changes: 18 additions & 0 deletions operators/build/ssh-bastion/Dockerfile
@@ -0,0 +1,18 @@
FROM alpine:3.12.1
RUN apk add --no-cache dumb-init openssh

# Create new user bastion with nologin
RUN adduser -D -s /sbin/nologin bastion
RUN passwd -u -d bastion

# sshd configuration file
COPY ./sshd_config_custom /etc/ssh/sshd_config_custom

# welcome message to be displayed in case the user does not use the -J option
COPY ./motd /etc/motd

EXPOSE 2222

ENTRYPOINT ["/usr/bin/dumb-init", "--"]

CMD ["/usr/sbin/sshd", "-D", "-e", "-f", "/etc/ssh/sshd_config_custom"]
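
This image can be built and smoke-tested locally as well. Note that sshd expects the host keys under `/host-keys` and the tenants' `authorized_keys` under `/home/bastion/.ssh`, which in the cluster are provided by the secret and by the operator sidecar respectively. A rough local sketch (paths are examples):

```bash
# Build the ssh-bastion image (build context: ./operators/build/ssh-bastion).
docker build -t crownlabs/ssh-bastion:dev ./operators/build/ssh-bastion

# Run it locally, mounting previously generated host keys read-only.
docker run --rm -p 2222:2222 \
  -v "$(pwd)/host-keys:/host-keys:ro" \
  crownlabs/ssh-bastion:dev
```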
8 changes: 8 additions & 0 deletions operators/build/ssh-bastion/motd
@@ -0,0 +1,8 @@
********** Welcome to CrownLabs! 👑 **********

If you landed here you probably forgot to jump to one of your remote environments.
You can get the IP of your environments from your personal dashboard.
Then on your local machine try running:

ssh -J bastion@crownlabs.polito.it username@<vm_ip>

13 changes: 13 additions & 0 deletions operators/build/ssh-bastion/sshd_config_custom
@@ -0,0 +1,13 @@
# sshd config

port 2222
PasswordAuthentication no
PubkeyAuthentication yes

# We need this, otherwise sshd would raise ownership issues about the authorized_keys file updated by the sidecar
StrictModes no

# host_keys volume is expected to be mounted using a secret
HostKey /host-keys/ssh_host_key_rsa
HostKey /host-keys/ssh_host_key_ecdsa
HostKey /host-keys/ssh_host_key_ed25519
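
If you modify this configuration, sshd can validate it before deployment through its test mode (`-t`); this requires the referenced host keys to be present at the configured paths, e.g. inside a running bastion container:

```bash
# Test mode: check the configuration file and the sanity of the host keys.
/usr/sbin/sshd -t -f /etc/ssh/sshd_config_custom
```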
92 changes: 92 additions & 0 deletions operators/cmd/bastion-operator/main.go
@@ -0,0 +1,92 @@
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
"flag"
"github.com/netgroup-polito/CrownLabs/operators/pkg/bastion-controller"
"os"

crownlabsv1alpha1 "github.com/netgroup-polito/CrownLabs/operators/api/v1alpha1"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
// +kubebuilder:scaffold:imports
)

var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)

func init() {
_ = clientgoscheme.AddToScheme(scheme)

_ = crownlabsv1alpha1.AddToScheme(scheme)
// +kubebuilder:scaffold:scheme
}

func main() {
var metricsAddr string
var enableLeaderElection bool
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.Parse()

ctrl.SetLogger(zap.New(zap.UseDevMode(true)))

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
LeaderElection: enableLeaderElection,
LeaderElectionID: "95f0db32.crownlabs.polito.it",
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}

authorizedKeysPath, isEnvSet := os.LookupEnv("AUTHORIZED_KEYS_PATH")
if !isEnvSet {
setupLog.Info("AUTHORIZED_KEYS_PATH env var is not set. Using default path \"/auth-keys-vol/authorized_keys\"")
authorizedKeysPath = "/auth-keys-vol/authorized_keys"
} else {
setupLog.Info("AUTHORIZED_KEYS_PATH env var found. Using path " + authorizedKeysPath)
}

if err = (&bastion_controller.BastionReconciler{
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("Bastion"),
Scheme: mgr.GetScheme(),
AuthorizedKeysPath: authorizedKeysPath,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Bastion")
os.Exit(1)
}
// +kubebuilder:scaffold:builder

setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}
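
For local development, the controller can also be run outside the cluster against the current kubeconfig, as is customary for Kubebuilder-based operators; the flag and the environment variable below are the ones defined in this `main.go`, while the paths are examples. The kubeconfig user needs list/watch permissions on Tenant resources:

```bash
# Run the bastion-operator locally, writing the collected keys to a temporary file.
export AUTHORIZED_KEYS_PATH=/tmp/authorized_keys
go run ./cmd/bastion-operator/main.go -metrics-addr=:8081
```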
8 changes: 8 additions & 0 deletions operators/deploy/bastion-operator/k8s-cluster-role.yaml
@@ -0,0 +1,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: crownlabs-bastion-operator
rules:
- apiGroups: ["crownlabs.polito.it"]
resources: ["tenants"]
verbs: ["list","watch"]
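
After the ClusterRole is bound to the `bastion-operator` ServiceAccount (the ClusterRoleBinding is part of the manifest template below), the granted permissions can be checked with `kubectl auth can-i`; replace `<namespace>` with the deployment namespace:

```bash
# Both commands should print "yes" for the verbs granted by the ClusterRole.
kubectl auth can-i list tenants.crownlabs.polito.it \
  --as=system:serviceaccount:<namespace>:bastion-operator
kubectl auth can-i watch tenants.crownlabs.polito.it \
  --as=system:serviceaccount:<namespace>:bastion-operator
```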
8 changes: 8 additions & 0 deletions operators/deploy/bastion-operator/k8s-manifest-example.env
@@ -0,0 +1,8 @@
NAMESPACE_SSH_BASTION=default
SSH_BASTION_CRB_NAME=crownlabs-bastion-operator
REPLICAS_SSH_BASTION=3
MAX_SURGE_SSH_BASTION=1
MAX_UNAVAILABLE_SSH_BASTION=1
SSH_BASTION_IMAGE_TAG=latest
IMAGE_TAG=v0.0.1
SERVICE_PORT_SSH_BASTION=2222
108 changes: 108 additions & 0 deletions operators/deploy/bastion-operator/k8s-manifest.yaml.tmpl
@@ -0,0 +1,108 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: ${NAMESPACE_SSH_BASTION}

---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bastion-operator
namespace: ${NAMESPACE_SSH_BASTION}

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ${SSH_BASTION_CRB_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: crownlabs-bastion-operator
subjects:
- kind: ServiceAccount
name: bastion-operator
namespace: ${NAMESPACE_SSH_BASTION}

---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ssh-bastion
namespace: ${NAMESPACE_SSH_BASTION}
spec:
progressDeadlineSeconds: 600
replicas: ${REPLICAS_SSH_BASTION}
revisionHistoryLimit: 10
selector:
matchLabels:
app: ssh-bastion
strategy:
rollingUpdate:
maxSurge: ${MAX_SURGE_SSH_BASTION}
maxUnavailable: ${MAX_UNAVAILABLE_SSH_BASTION}
type: RollingUpdate
template:
metadata:
labels:
app: ssh-bastion
spec:
serviceAccountName: bastion-operator
containers:
- name: sidecar
image: crownlabs/bastion-operator${IMAGE_SUFFIX}:${IMAGE_TAG}
imagePullPolicy: Always
command: ["/usr/bin/controller"]
resources: {}
volumeMounts:
- name: authorized-keys
mountPath: /auth-keys-vol
securityContext:
allowPrivilegeEscalation: false
runAsUser: 20000
runAsGroup: 20000
privileged: false
- name: bastion
args: ["-D", "-e", "-f","/etc/ssh/sshd_config_custom"]
command: ["/usr/sbin/sshd"]
image: crownlabs/ssh-bastion${IMAGE_SUFFIX}:${SSH_BASTION_IMAGE_TAG}
imagePullPolicy: Always
ports:
- containerPort: 2222
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/bastion/.ssh
name: authorized-keys
- mountPath: /host-keys
name : host-keys
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: authorized-keys
emptyDir: {}
- name: host-keys
secret:
secretName: ssh-bastion-host-keys
defaultMode: 0400

---
apiVersion: v1
kind: Service
metadata:
name: ssh-bastion
namespace: ${NAMESPACE_SSH_BASTION}
spec:
ports:
- port: ${SERVICE_PORT_SSH_BASTION}
targetPort: 2222
name: ssh
selector:
app: ssh-bastion
type: LoadBalancer
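
With the LoadBalancer Service in place, the bastion is reachable on the configured service port (2222 in the example env file). A sketch of how a tenant would then connect, assuming the load balancer assigns an external IP:

```bash
# Retrieve the external address assigned to the ssh-bastion Service.
kubectl -n <namespace> get svc ssh-bastion \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Jump through the bastion (note the non-standard port) to reach a VM.
ssh -J bastion@<external_ip>:2222 <username>@<vm_ip>
```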