kubectl node-shell

(formerly known as kubectl-enter)

Start a root shell in the node's host OS. Uses an Alpine pod with nsenter for Linux nodes and a HostProcess pod with PowerShell for Windows nodes.
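
For Linux nodes, the pod it creates is roughly equivalent to the following sketch (illustrative only: the pod name, container layout, and exact spec generated by the plugin may differ):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-shell-example
spec:
  nodeName: <node>            # target node
  hostPID: true
  hostNetwork: true
  hostIPC: true
  restartPolicy: Never
  tolerations:
  - operator: Exists          # tolerate any taint so the pod can land on the node
  containers:
  - name: shell
    image: alpine
    stdin: true
    tty: true
    securityContext:
      privileged: true        # nsenter into PID 1's namespaces requires a privileged container
    command: ["nsenter", "-t", "1", "-m", "-u", "-i", "-n", "-p", "--", "sh", "-l"]
EOF
kubectl attach -it node-shell-example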

[demo GIF]

Installation

using krew:

The plugin can be installed from the official krew repository:

kubectl krew install node-shell

Or from our own krew repository:

kubectl krew index add kvaps https://github.com/kvaps/krew-index
kubectl krew install kvaps/node-shell

or using curl:

curl -LO https://github.com/kvaps/kubectl-node-shell/raw/master/kubectl-node_shell
chmod +x ./kubectl-node_shell
sudo mv ./kubectl-node_shell /usr/local/bin/kubectl-node_shell
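
Either way, you can verify that kubectl picks up the plugin:

kubectl plugin list | grep node_shell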

Usage

# Get standard bash shell
kubectl node-shell <node>

# Use custom image for pod
kubectl node-shell <node> --image <image>

# Use X-mode (mount /host and do not enter the host namespaces)
kubectl node-shell -x <node>

# Skip entering specific namespace types; choose any of: ipc, mount, pid, net, uts
kubectl node-shell <node> --no-ipc

# Execute custom command
kubectl node-shell <node> -- echo 123

# Use stdin
cat /etc/passwd | kubectl node-shell <node> -- sh -c 'cat > /tmp/passwd'

# Run a one-liner script
kubectl node-shell <node> -- sh -c 'cat /tmp/passwd; rm -f /tmp/passwd'
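
The custom-command form also works well in a shell loop when you need the same one-liner on every node (a sketch; swap in whatever command you need):

# Run a command on every node in the cluster
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  kubectl node-shell "$node" -- uname -a
done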

X-mode

X-mode can be useful for debugging minimal systems that do not have a built-in shell (e.g. Talos).
Here's an example of how you can debug the network for a rootless kube-apiserver container without a filesystem:

kubectl node-shell -x <node>

# Download crictl
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz -O- | \
  tar -xzf- -C /usr/local/bin/

# Setup CRI endpoint
export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock

# Find your container
crictl ps | grep kube-apiserver
#3ff4626a9f10e       e7972205b6614       6 hours ago         Running             kube-apiserver         0                   215107b47bd7e       kube-apiserver-talos-rzq-nkg

# Find pid of the container
crictl inspect 3ff4626a9f10e | grep pid
#    "pid": 2152,
#            "pid": 1
#            "type": "pid"
#                "getpid",
#                "getppid",
#                "pidfd_open",
#                "pidfd_send_signal",
#                "waitpid",

# Enter the network namespace of that pid, but keep the mount namespace of the debug container
nsenter -t 2152 -n
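
After joining the network namespace you still have the debug pod's own tools, so plain busybox utilities are enough for a first look (a sketch; the port and endpoint assume a default kube-apiserver setup):

# Show the sockets the kube-apiserver is listening on
netstat -tln

# Query the apiserver's health endpoint from inside its network namespace
wget -qO- --no-check-certificate https://localhost:6443/healthz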

You need to be able to start privileged containers for that.
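
If your cluster enforces Pod Security admission, the namespace you run node-shell in may also need to allow privileged pods, for example (using the default namespace here):

kubectl label namespace default pod-security.kubernetes.io/enforce=privileged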

Mounting External CSI Volumes

You can mount volumes from your CSI storage layer using the -m flag. This allows you to move data to and from node devices. The PVC will be mounted at /opt-pvc. This is useful for failover on minimal systems that do not have a built-in shell (e.g. Talos). Here is an example of how you can retrieve ZFS/LVM data from a volume on a failed CSI node and put it back into your distributed storage layer:

kubectl node-shell -n <namespace> -x <node_with_data> -m <pvc_name>

# install rsync
apk add rsync

# Add lvm/zfs libs
# ZFS
mount -o bind /host/dev /dev
mount -o bind /host/usr/local /usr/local
touch /lib/libuuid.so.1
mount -o bind /host/lib/libuuid.so.1 /lib/libuuid.so.1
touch /lib/libuuid.so.1.3.0
mount -o bind /host/lib/libuuid.so.1.3.0 /lib/libuuid.so.1.3.0
touch /lib/libblkid.so.1
mount -o bind /host/lib/libblkid.so.1 /lib/libblkid.so.1
touch /lib/libblkid.so.1.1.0
mount -o bind /host/lib/libblkid.so.1.1.0 /lib/libblkid.so.1.1.0
# LVM
touch /usr/lib/libaio.so.1
mount -o bind /host/usr/lib/libaio.so.1.0.2 /usr/lib/libaio.so.1
touch /usr/lib/libudev.so.1
mount -o bind /host/usr/lib/libudev.so.1 /usr/lib/libudev.so.1
export PATH=$PATH:/host/sbin
mkdir /lib/modules
mount -o bind /host/lib/modules /lib/modules

# look for data to recover
zfs list
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
hdd-1                                                   15.9T  7.52T    96K  /hdd-1
hdd-1/SOME-OLD-PVC-FROM-PREVIOUS-NODE-INSTALL            361G  7.52T   361G  -

# mount the failed volume
zfs set mountpoint=/mnt hdd-1/SOME-OLD-PVC-FROM-PREVIOUS-NODE-INSTALL
zfs mount hdd-1/SOME-OLD-PVC-FROM-PREVIOUS-NODE-INSTALL

# recover the data: copy it to the mounted CSI volume
rsync -avh --info=progress2 /mnt/ /opt-pvc/

The above example assumes <pvc_name> already exists in the namespace. You need to be able to start privileged containers.
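
Once the copy has finished, you may want to unmount the recovered dataset again (a sketch, assuming the same dataset name as above):

# Unmount the recovered dataset and drop the temporary mountpoint
zfs umount hdd-1/SOME-OLD-PVC-FROM-PREVIOUS-NODE-INSTALL
zfs set mountpoint=none hdd-1/SOME-OLD-PVC-FROM-PREVIOUS-NODE-INSTALL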