Kubespray offline (on-premise) installation support #5973
Comments
There are many ways to set up an offline environment, depending on the requirements and the services available in said environment. Anyway, I'll flag @EppO, who has shown some interest in offline environments. |
Thank you @Miouge1. |
This is my current list I'm using for offline installation in my inventory:

```yaml
# Registry overrides
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"

kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubelet"
# etcd is optional if you use etcd_deployment != host
etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
cni_download_url: "{{ files_repo }}/kubernetes/cni/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
crictl_download_url: "{{ files_repo }}/kubernetes/cri-tools/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
sonobuoy_url: "{{ files_repo }}/kubernetes/sonobuoy/sonobuoy_{{ sonobuoy_version }}-{{ ansible_system | lower }}-{{ sonobuoy_arch }}.tar.gz"

# CentOS/RedHat docker-ce repo
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/7/x86_64/"
docker_rh_repo_gpgkey: "{{ yum_repo }}/repo-key.gpg"
```

Then in your inventory, you need to define the following variables:
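For context, a minimal sketch of what those inventory definitions might look like, assuming one internal container registry, one HTTP file server and one rpm mirror; the hostnames and paths below are placeholders, not values taken from this thread:

```yaml
# group_vars/all/offline.yml — example values only, adjust to your environment
registry_host: "registry.example.local:5000"          # internal container registry
files_repo: "http://files.example.local/kubespray"    # internal HTTP server hosting binaries
yum_repo: "http://yum.example.local/repos"            # internal rpm mirror
```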
Bottom line, the requirements for a full offline installation:
I will update the docs accordingly to make it clear. |
I propose defining a variable:
Similar proposal for yum_repo as well. Per my research, for a pypi server, Jinja templating is the main package needed, along with maybe a couple of others, if any. Also, binaries need to be introduced to the offline cluster manually, and they mostly consist of kubectl, kubelet and kubeadm only. Since the binaries and pypi packages are limited in number, just carrying them in predefined directories may be a more feasible option. We could define a directory like binaries/ and copy from that folder, and install the pypi packages the same way. Also, all variables for offline install should be defined in one place only. Please suggest. |
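To make the proposal more concrete, a hypothetical sketch of how a single offline_install toggle could gate the existing registry overrides in group_vars; the flag itself and the online fallback values are assumptions for illustration, not something defined by Kubespray today:

```yaml
# Hypothetical sketch of the proposal — offline_install is not an existing
# Kubespray variable, and the "online" fallback registries are assumed defaults.
offline_install: true

gcr_image_repo: "{{ registry_host if offline_install else 'gcr.io' }}"
docker_image_repo: "{{ registry_host if offline_install else 'docker.io' }}"
quay_image_repo: "{{ registry_host if offline_install else 'quay.io' }}"
```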
I am new to contributing to open source. I notice that @EppO opened a new membership request. What is the next step here? Am I supposed to open a PR directly, or what should I do to be able to contribute to the kubespray project? Please guide me. |
You don't need a membership to contribute :) |
I like the idea of this "offline_install", but the main drawback is that we lose the flexibility to point to different container registries, HTTP servers and yum repositories compared to a simple inventory override. Not sure that's bad, but I don't know whether everybody using air-gapped clusters uses them the same way. |
Hello, I have been testing the offline install a few times and agree with the introduction of the var: offline_install: true

Example:

```
Failed to pull image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```

Typically, one would download all Kubernetes images into <some_internal_image_repo>, re-tag them and push them to a local repo that is then accessed during the kubespray installation. We do that for our application images successfully. In my case, I wanted to use kubespray's native "download_only" method with a dedicated "download" host, as I wanted the images available locally on masters/nodes in case my "registry_host" goes kaput, or simply to have some level of control.

```
ansible-playbook -i ./inventory/mycluster/$my_inventory_file -b --become-user=root cluster.yml -e download_cache_dir="$my_root/kubespray_cache" -e download_keep_remote_cache=true -e download_localhost=true -e download_run_once=true -e download_force_cache=false -vv -e ignore_assert_errors=yes --flush-cache
```

I also agree with the introduction of these inventory items, as I have them as well: registry_host. I am not using files_repo, but the other two assist with pointing to my yum_repo with rpms and python files. registry_host (there can be many) is basically fetched by these group_vars:

```yaml
repo_host_ip: "{{ hostvars[groups['registry_host'][0]]['ansible_host'] }}"
```

I will be happy to contribute my findings with the offline installer. |
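As an aside, the -e flags from that command can equally be kept in the inventory; a small sketch using the same variable names (the cache path is a placeholder):

```yaml
# The -e flags from the command above, expressed as inventory variables;
# the cache directory is a placeholder path.
download_run_once: true
download_localhost: true
download_force_cache: false
download_keep_remote_cache: true
download_cache_dir: "/opt/kubespray_cache"
```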
I did install a kubespray cluster on my personal PC with VirtualBox successfully, but on my enterprise PC I tried a couple of times on VMware Workstation, defining the company proxy in different parts of the VMs, and it looks like I need to define it inside the playbooks. Do you have experience deploying kubespray behind a company proxy? It hasn't worked for me yet. |
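On the proxy question: a minimal sketch, assuming the http_proxy/https_proxy/no_proxy variables exposed in Kubespray's sample group_vars/all (check the exact names against your Kubespray version; the values below are placeholders):

```yaml
# group_vars/all/all.yml — placeholder proxy values, not from this thread
http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
# optionally exclude internal addresses from the proxy
no_proxy: "localhost,127.0.0.1,.example.local"
```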
What would you like to be added:
Offline support for Kubespray needs rpms to be installed and docker images to be downloaded from the internet first, so that they are present for a cluster that is cut off from internet access. Other than the generalized steps mentioned in https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md, the document does not guide the user through the exact variables needed for offline installation, nor the steps to be performed, which at a high level, according to my research, are:
These steps could be captured as a block in one of the config files, driven by variables that the user populates, so that the complete setup runs accordingly once the rpm and docker image content is present in some internal repository.
Why is this needed:
This is needed to perform an offline, on-premise cluster installation for one of our clients.
Please let me know what you think about this use case and how to proceed from here. We actively work on production on-prem cluster installs.