
Katacoda platform hosting "hello minikube" could use open alternatives #14228

Closed
afbjorklund opened this issue May 25, 2022 · 20 comments
Labels: kind/documentation, lifecycle/rotten

@afbjorklund
Collaborator

afbjorklund commented May 25, 2022

The docs are running a "none" driver installation:

https://kubernetes.io/docs/tutorials/hello-minikube/

The Katacoda platform is closing: https://www.oreilly.com/online-learning/leveraging-katacoda-technology.html

It would be nice if people could still run a minikube-based solution for learning Kubernetes.


Document how to use minikube in such an environment:

It should allow running minikube start without any issues.

After that, minikube kubectl should "just work", out of the box.
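
A minimal sketch of the flow such an environment should support (assuming a root shell on a host that already has Docker installed, as on Katacoda):

# bring up a single-node cluster directly on the host
minikube start --driver=none --wait=false
# the bundled kubectl wrapper, so no separate kubectl install is needed
minikube kubectl -- get pods -A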

@afbjorklund afbjorklund added the kind/documentation label May 25, 2022
@afbjorklund afbjorklund changed the title Katacoda platform hosting "hello minikube" is shutting dow → Katacoda platform hosting "hello minikube" is shutting down (May 25, 2022)
@afbjorklund
Collaborator Author

afbjorklund commented May 25, 2022

This is the version currently deployed:

  • minikube 1.18 (Mar '21)
  • kubernetes 1.20 (Jan '21)
  • ubuntu 18.04
  • docker 19.03
Your Interactive Learning Environment Bash Terminal

$ start.sh
Starting Kubernetes...minikube version: v1.18.0
commit: ec61815d60f66a6e4f6353030a40b12362557caa-dirty
* minikube v1.18.0 on Ubuntu 18.04 (amd64)
* Using the none driver based on existing profile

X The requested memory allocation of 2200MiB does not leave room for system overhead (total system memory: 2460MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=2200mb'

* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=2, Memory=2460MB, Disk=194868MB) ...
* OS release is Ubuntu 18.04.5 LTS
* Preparing Kubernetes v1.20.2 on Docker 19.03.13 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring local host environment ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  - Using image k8s.gcr.io/metrics-server-amd64:v0.2.1
* The 'metrics-server' addon is enabled
  - Using image kubernetesui/metrics-scraper:v1.0.4
  - Using image kubernetesui/dashboard:v2.1.0
* Some dashboard features require the metrics-server addon. To enable all features please run:

        minikube addons enable metrics-server


* The 'dashboard' addon is enabled
Kubernetes Started
$ whoami
root
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

@afbjorklund
Collaborator Author

afbjorklund commented May 25, 2022

Earlier efforts:

Here are some hacks using vagrant (they could be changed to use the "none" driver instead of the "ssh" driver):
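
For reference, a rough sketch of the two variants (the ssh driver flags exist in minikube; the IP address and key path below are placeholders for whatever vagrant provisions):

# "ssh" driver: point minikube at an existing (e.g. vagrant) machine
minikube start --driver=ssh \
  --ssh-ip-address=192.168.56.10 \
  --ssh-user=vagrant \
  --ssh-key=.vagrant/machines/default/virtualbox/private_key
# "none" driver: run directly on that machine instead, as root
sudo minikube start --driver=none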

Eventually the efforts to run Kubernetes with containerd and with Ubuntu were moved to the "lima" project instead:

https://github.com/lima-vm/lima/blob/master/examples/k8s.yaml

It doesn't have any add-ons, though; in particular, no dashboard.


Note: these solutions (vagrant and lima) will also install a VM, similar to the minikube VM drivers.

The katacoda platform does all the VM setup itself and runs on CP, so it uses the "none" driver.

Currently the bare-metal drivers have several bugs that make them harder to use than necessary.

Being able to run kubectl from a shell other than the one that ran kubeadm (the "generic" setup) would also be an improvement.
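
For example, the bundled wrapper lets kubectl run from any shell without a separate install (a sketch, not from the original thread):

minikube kubectl -- get nodes
# optional convenience alias
alias kubectl='minikube kubectl --'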

@Rabattkarte
Contributor

One alternative is killercoda.com

@afbjorklund
Collaborator Author

afbjorklund commented May 27, 2022

I think this is the scenario; the image source code and the licensing are unclear.

https://github.com/katacoda-scenarios/kubernetes-bootcamp-scenarios

At least for https://kubernetes.io/docs/tutorials/kubernetes-basics/

But minikube is not doing the hosting or setup for this; it's part of the k8s.io docs.


$ which start.sh
/usr/bin/start.sh
$ cat /usr/bin/start.sh
echo -n "Starting Kubernetes..."

minikube version
minikube start --wait=false
sleep 2
n=0
until [ $n -ge 10 ]
do
   (minikube addons enable metrics-server && minikube addons enable dashboard) && break
   n=$[$n+1]
   sleep 1
done
sleep 1
n=0
until [ $n -ge 10 ]
do
   kubectl apply -f /opt/kubernetes-dashboard.yaml &>/dev/null  && break
   n=$[$n+1]
   sleep 1
done

echo "Kubernetes Started"
$ cat .minikube/config/config.json 
{
    "ShowBootstrapperDeprecationNotification": false,
    "WantNoneDriverWarning": false,
    "WantReportErrorPrompt": false,
    "WantUpdateNotification": false,
    "driver": "none",
    "kubernetes-version": "v1.20.2"
}
$ cat /opt/kubernetes-dashboard.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/minikube-addons: dashboard
  name: kubernetes-dashboard
  selfLink: /api/v1/namespaces/kubernetes-dashboard
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-katacoda
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
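
With this Service applied, the dashboard should be reachable on the node itself through the fixed NodePort, e.g. (a sketch; with the "none" driver, minikube ip resolves to the host address):

curl -s http://$(minikube ip):30000/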

@afbjorklund
Collaborator Author

afbjorklund commented May 27, 2022

Here is a Vagrantfile that handles most things, except for the storage provisioner (automount):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = 2048
  end
  config.vm.provision "shell", inline: <<-SHELL
    # docker provisioning
    if ! type docker; then curl -sSL https://get.docker.com | sh -; fi
    usermod -aG docker vagrant
    # minikube requirements
    apt-get update
    apt-get install -y conntrack
    # minikube installation
    curl -sSLO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
    sudo dpkg -i minikube_latest_amd64.deb
    # minikube preparation
    su vagrant -c "minikube config set driver none"
    su vagrant -c "minikube start --download-only"
  SHELL
end

After doing vagrant up, one can do vagrant ssh and see the scary output from the default configuration:

vagrant@ubuntu-focal:~$ minikube start
😄  minikube v1.25.2 on Ubuntu 20.04 (vbox/amd64)
✨  Using the none driver based on user configuration

🧯  The requested memory allocation of 1983MiB does not leave room for system overhead (total system memory: 1983MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1983mb'

👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=2, Memory=1983MB, Disk=39642MB) ...
ℹ️  OS release is Ubuntu 20.04.4 LTS
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.16 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗  kubectl and minikube configuration will be stored in /home/vagrant
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/vagrant/.kube /home/vagrant/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Missing (a possible fix is sketched below):

  • storage provisioner automount script
  • kubernetes dashboard and metrics-server
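
A rough sketch of filling those gaps inside the VM (the addon commands mirror the Katacoda start.sh quoted above; the hostpath directory is an assumption about the storage provisioner's default path):

# enable the add-ons that the Katacoda image turned on via start.sh
minikube addons enable metrics-server
minikube addons enable dashboard
# assumed default hostpath used by the storage provisioner (normally automounted)
sudo mkdir -p /tmp/hostpath-provisioner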

@afbjorklund
Collaborator Author

afbjorklund commented May 27, 2022

Note that the above will break with v1.24.0, since it doesn't install crictl and cri-dockerd.

❌ Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1

❌ Exiting due to RUNTIME_ENABLE: stat /var/run/cri-dockerd.sock: exit status 1

It also doesn't install a CNI configuration, so it doesn't work with any other container runtimes.

❗ Using the 'containerd' runtime with the 'none' driver is an untested configuration!

Driver none used, CNI unnecessary in this configuration, recommending no CNI
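
A hedged sketch of provisioning the two missing pieces for v1.24+ (versions and exact release file names are assumptions; check the cri-tools and cri-dockerd release pages):

# crictl from kubernetes-sigs/cri-tools (version is an assumption)
CRICTL_VERSION="v1.24.2"
curl -sSL "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" \
  | sudo tar -xz -C /usr/local/bin
# cri-dockerd from Mirantis/cri-dockerd (deb file name is an assumption)
curl -sSLO "https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.2/cri-dockerd_0.2.2.3-0.ubuntu-focal_amd64.deb"
sudo dpkg -i cri-dockerd_0.2.2.3-0.ubuntu-focal_amd64.deb
sudo systemctl enable --now cri-docker.socket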

@afbjorklund
Collaborator Author

afbjorklund commented May 27, 2022

This extra information in the output is not particularly helpful:

🧯 The requested memory allocation of 1983MiB does not leave room for system overhead (total system memory: 1983MiB). You may face stability issues.
💡 Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1983mb'

(notice that the memory allocation and suggested memory allocation are the same)

🤹 Configuring local host environment ...
❗ The 'none' driver is designed for experts who need to integrate with an existing VM
💡 Most users should use the newer 'docker' driver instead, which does not require root!
📘 For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

(running with fakenode and fakeroot is still error-prone, compared to none and root)

❗ kubectl and minikube configuration will be stored in /home/vagrant
❗ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
▪ sudo mv /home/vagrant/.kube /home/vagrant/.minikube $HOME
▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

(note that $HOME and /home/vagrant are the same thing, as are $USER and vagrant...)

Also, kubectl and the dashboard are inaccessible from outside the node.
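
A sketch of one workaround (the kubectl proxy flags are standard; the forwarded port is an assumption about the Vagrantfile):

# expose the API server and dashboard to the vagrant host
kubectl proxy --address=0.0.0.0 --accept-hosts='.*' &
# plus a forwarded port in the Vagrantfile, e.g.:
#   config.vm.network "forwarded_port", guest: 8001, host: 8001
# dashboard would then be at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/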

@afbjorklund
Collaborator Author

afbjorklund commented May 28, 2022

One alternative is killercoda.com

It is not entirely obvious where the source code for these images (either katacoda or killercoda) is hosted.

https://github.com/killercoda/scenario-examples

  • kubernetes-kubeadm-2nodes: Kubeadm latest (atm 1.24) cluster with one controlplane and one node, ready to schedule workload.
  • kubernetes-kubeadm-1node: Kubeadm latest (atm 1.24) cluster with one controlplane, taint removed, ready to schedule workload. Loads faster than 2nodes!

https://github.com/katacoda-scenarios/kubernetes-scenarios

  • Kubernetes Cluster 1.21 | 2 nodes with kubeadm installed, nothing running. Based on Ubuntu. Designed for teaching how to use Kubernetes from scratch | kubernetes-cluster / kubernetes-cluster:1.14
  • Kubernetes Cluster 1.21 (Pre-configured) | 2 node cluster with 1 main, 1 node. Based on Ubuntu. Run launch.sh when the scenario starts to ensure the cluster is running. | kubernetes-cluster-running / kubernetes-cluster-running:1.14

@afbjorklund
Collaborator Author

afbjorklund commented May 28, 2022

It is also possible to use the same vagrant setup, but over-allocate by running two system containers (nodes) on the same VM.

minikube start --driver=docker --nodes=2

NAME           STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube       Ready    control-plane,master   59s   v1.23.3   192.168.49.2   <none>        Ubuntu 20.04.2 LTS   5.4.0-113-generic   docker://20.10.12
minikube-m02   Ready    <none>                 34s   v1.23.3   192.168.49.3   <none>        Ubuntu 20.04.2 LTS   5.4.0-113-generic   docker://20.10.12

This way it will have both CRI and CNI installed on the nodes. Both nodes will share the system resources.

vagrant@ubuntu-focal:~$ minikube start --driver=docker --nodes=2
😄  minikube v1.25.2 on Ubuntu 20.04 (vbox/amd64)
✨  Using the docker driver based on user configuration

🧯  The requested memory allocation of 1983MiB does not leave room for system overhead (total system memory: 1983MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1983mb'

👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=1983MB) ...
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ kubelet.cni-conf-dir=/etc/cni/net.mk
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner

👍  Starting worker node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=1983MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Preparing the download cache could be a good idea.

506M	.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
380M	.minikube/cache/kic/amd64/kicbase_v0.0.30@sha256_02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2.tar
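
One way to prepare it, as a sketch: run the download step at image-build time, mirroring the --download-only step already in the Vagrantfile above, so the first minikube start does no network fetching:

# pre-populate ~/.minikube/cache while building the image
minikube start --driver=docker --download-only
du -sh ~/.minikube/cache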

@afbjorklund
Collaborator Author

afbjorklund commented Jun 3, 2022

Apparently "hello minikube" is not shutting down

The Katacoda scenarios on those pages will not be affected by the shutdown of the public site.

An open (and perhaps more up-to-date?) alternative could still be useful.

@afbjorklund afbjorklund changed the title Katacoda platform hosting "hello minikube" is shutting down → Katacoda platform hosting "hello minikube" could use open alternatives (Jun 3, 2022)
@k8s-triage-robot

This comment was marked as outdated.

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Sep 1, 2022
@k8s-triage-robot

This comment was marked as outdated.

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 1, 2022
@k8s-triage-robot

This comment was marked as outdated.

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Nov 10, 2022
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afbjorklund afbjorklund reopened this Jan 3, 2023
@afbjorklund
Collaborator Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label Jan 3, 2023
@afbjorklund
Collaborator Author

Katacoda (O'Reilly) is shutting down all environments in 2022, including the Kubernetes.io tutorials and images.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jun 8, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
