Merge branch 'kubernetes-sigs:master' into feature/k8s-1.31
philipsabri authored Sep 20, 2024
2 parents 4f11f9c + 89ff071 commit 17d6d9b
Showing 36 changed files with 167 additions and 1,333 deletions.
24 changes: 6 additions & 18 deletions Vagrantfile
@@ -55,6 +55,8 @@ $subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$network_plugin ||= "flannel"
$inventory ||= "inventory/sample"
$inventories ||= [$inventory]
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"
@@ -93,19 +95,6 @@ if ! SUPPORTED_OS.key?($os)
end

$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
$inventory = File.absolute_path($inventory, File.dirname(__FILE__))

# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
$vagrant_ansible = File.join(File.absolute_path($vagrant_dir), "provisioners", "ansible")
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
$vagrant_inventory = File.join($vagrant_ansible,"inventory")
FileUtils.rm_f($vagrant_inventory)
FileUtils.ln_s($inventory, $vagrant_inventory)
end

if Vagrant.has_plugin?("vagrant-proxyconf")
$no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
@@ -286,14 +275,13 @@ Vagrant.configure("2") do |config|
ansible.playbook = $playbook
ansible.compatibility_mode = "2.0"
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true
ansible.limit = "all,localhost"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.raw_arguments = ["--forks=#{$num_instances}",
"--flush-cache",
"-e ansible_become_pass=vagrant"] +
$inventories.map {|inv| ["-i", inv]}.flatten
ansible.host_vars = host_vars
ansible.extra_vars = $extra_vars
if $ansible_tags != ""
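The net effect of this hunk: the old symlink shuffle into Vagrant's provisioner directory is gone, and each entry of the new `$inventories` list is passed to Ansible as its own `-i` flag, with `$inventory` seeding the default so single-inventory setups behave as before. A minimal sketch of how this could be driven from `vagrant/config.rb` (paths are hypothetical, not part of this diff):

```ruby
# vagrant/config.rb
# Each path ends up as "-i <path>" on the ansible-playbook command line
# assembled in the Vagrantfile ($inventories.map {|inv| ["-i", inv]}.flatten).
$inventories = ["inventory/sample", "inventory/my_overrides"]
```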
7 changes: 7 additions & 0 deletions docs/ansible/vars.md
@@ -337,6 +337,13 @@ in the form of dicts of key-value pairs of configuration parameters that will be
* *kube_kubeadm_controller_extra_args*
* *kube_kubeadm_scheduler_extra_args*

### Kubeadm patches

When extra flags are not sufficient and there is a need to further customize Kubernetes components,
[kubeadm patches](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches)
can be used.
Use the [`kubeadm_patches` variable](../../roles/kubernetes/kubeadm_common/defaults/main.yml) for that purpose.
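A hedged sketch of what that could look like in practice (target and values are hypothetical; the schema mirrors the commented sample in `inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml`):

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml
kubeadm_patches:
  - target: kube-apiserver
    type: strategic            # default; "json" and "merge" are alternatives
    patch:
      metadata:
        annotations:
          example.com/owner: "platform-team"
```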

## App variables

* *helm_version* - Only supports v3.x. Existing v2 installs (with Tiller) will not be modified and need to be removed manually.
7 changes: 1 addition & 6 deletions docs/developers/test_cases.md
@@ -1,6 +1,6 @@
# Node Layouts

There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
There are five node layout types: `default`, `separate`, `ha`, `all-in-one`, and `node-etcd-client`.

`default` is a non-HA two nodes setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -11,11 +11,6 @@ and the `etcd` group merged with the `kube_control_plane`.
`ha` layout consists of two etcd nodes, two control planes and a single worker node,
with role intersection.

`scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts
in the Ansible inventory. This helps test TLS certificate generation at scale
to prevent regressions and profile certain long-running tasks. These nodes are
never actually deployed, but certificates are generated for them.

`all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.

`node-etcd-client` layout consists of a 4-node cluster, all of them in `kube_node`, the first 3 in `etcd` and only one in `kube_control_plane`.
87 changes: 30 additions & 57 deletions docs/developers/vagrant.md
@@ -29,7 +29,26 @@ You can override the default settings in the `Vagrantfile` either by
directly modifying the `Vagrantfile` or through an override file.
In the same directory as the `Vagrantfile`, create a folder called
`vagrant` and create `config.rb` file in it.
An example of how to configure this file is given below.

```ruby
# vagrant/config.rb
$instance_name_prefix = "kub"
$vm_cpus = 1
$num_instances = 3
$os = "centos8-bento"
$subnet = "10.0.20"
$network_plugin = "flannel"

$extra_vars = {
  dns_domain: "my.custom.domain"
}
# or
$extra_vars = "path/to/extra/vars/file.yml"
```

For all available options, look at the Vagrantfile (search for "CONFIG").

## Use alternative OS for Vagrant

@@ -57,73 +76,33 @@ see [download documentation](/docs/advanced/downloads.md).
## Example use of Vagrant

The following is an example of setting up and running kubespray using `vagrant`.
For repeated runs, you could save the script to a file in the root of the
kubespray and run it by executing `source <name_of_the_file>`.
Customize your settings as shown above, then run the commands:

```ShellSession
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR
source $VENVDIR/bin/activate
pip install -r requirements.txt

# prepare an inventory to test with
INV=inventory/my_lab
rm -rf ${INV}.bak &> /dev/null
mv ${INV} ${INV}.bak &> /dev/null
cp -a inventory/sample ${INV}
rm -f ${INV}/hosts.ini

# customize the vagrant environment
mkdir vagrant
cat << EOF > vagrant/config.rb
\$instance_name_prefix = "kub"
\$vm_cpus = 1
\$num_instances = 3
\$os = "centos8-bento"
\$subnet = "10.0.20"
\$network_plugin = "flannel"
\$inventory = "$INV"
\$shared_folders = { 'temp/docker_rpms' => "/var/cache/yum/x86_64/7/docker-ce/packages" }
\$extra_vars = {
dns_domain: my.custom.domain
}
# or
\$extra_vars = "path/to/extra/vars/file.yml"
EOF
$ virtualenv --python=/usr/bin/python3.7 $VENVDIR
$ source $VENVDIR/bin/activate
$ pip install -r requirements.txt

# make the rpm cache
mkdir -p temp/docker_rpms
$ vagrant up

vagrant up

# make a copy of the downloaded docker rpm, to speed up the next provisioning run
scp kub-1:/var/cache/yum/x86_64/7/docker-ce/packages/* temp/docker_rpms/

# copy kubectl access configuration in place
mkdir $HOME/.kube/ &> /dev/null
ln -s $PWD/$INV/artifacts/admin.conf $HOME/.kube/config
# Access the cluster
$ export INV=.vagrant/provisioners/ansible/inventory
$ export KUBECONFIG=${INV}/artifacts/admin.conf
# make the kubectl binary available
sudo ln -s $PWD/$INV/artifacts/kubectl /usr/local/bin/kubectl
#or
export PATH=$PATH:$PWD/$INV/artifacts
$ export PATH=$PATH:$PWD/$INV/artifacts
```

If a vagrant run failed and you've made some changes to fix the issue causing
the failure, here is how you would re-run ansible:

```ShellSession
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
vagrant provision
```

If all went well, check that everything is working as expected:

```ShellSession
kubectl get nodes
```

The output should look like this:

```ShellSession
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
@@ -134,12 +113,6 @@ kub-3 Ready <none> 3m7s v1.22.5

Another nice test is the following:

```ShellSession
kubectl get pods --all-namespaces -o wide
```

Which should yield something like the following:

```ShellSession
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
2 changes: 2 additions & 0 deletions inventory/sample/group_vars/k8s_cluster/addons.yml
@@ -104,6 +104,8 @@ gateway_api_enabled: false
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_service_type: LoadBalancer
# ingress_nginx_service_annotations:
# example.io/loadbalancerIPs: 1.2.3.4
# ingress_nginx_service_nodeport_http: 30080
# ingress_nginx_service_nodeport_https: 30081
ingress_publish_status_address: ""
24 changes: 19 additions & 5 deletions inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml
@@ -366,11 +366,25 @@ auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"

# kubeadm patches path
kubeadm_patches:
enabled: false
source_dir: "{{ inventory_dir }}/patches"
dest_dir: "{{ kube_config_dir }}/patches"
kubeadm_patches_dir: "{{ kube_config_dir }}/patches"
kubeadm_patches: []
# See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches
# Correspondence with the fields described at that link:
# patchtype = type
# target = target
# suffix -> managed automatically
# extension -> always "yaml"
# kubeadm_patches:
# - target: kube-apiserver|kube-controller-manager|kube-scheduler|etcd|kubeletconfiguration
# type: strategic(default)|json|merge
# patch:
# metadata:
# annotations:
# example.com/test: "true"
# labels:
# example.com/prod_level: "{{ prod_level }}"
# - ...
# Patches are applied in the order they are specified.

# Set to true to remove the role binding to anonymous users created by kubeadm
remove_anonymous_access: false
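The two sample patch files deleted just below were the old directory-based mechanism; with the list-based variable the role materializes each entry as a patch file itself (presumably in the new `kubernetes/kubeadm_common` role, which is not shown in this excerpt). A hedged sketch of the correspondence, using kubeadm's `target[suffix][+patchtype].extension` file-name convention (the suffix is role-managed, so the exact name is illustrative):

```yaml
# One kubeadm_patches entry...
- target: kube-scheduler
  type: merge
  patch:
    metadata:
      labels:
        example.com/tier: control-plane
# ...would land in kubeadm_patches_dir as something like:
#   kube-scheduler0+merge.yaml
```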
8 changes: 0 additions & 8 deletions inventory/sample/patches/kube-controller-manager+merge.yaml

This file was deleted.

8 changes: 0 additions & 8 deletions inventory/sample/patches/kube-scheduler+merge.yaml

This file was deleted.

2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
ansible==9.8.0
ansible==9.10.0
# Needed for jinja2 json_query templating
jmespath==1.0.1
# Needed for ansible.utils.validate module
@@ -4,6 +4,7 @@ ingress_nginx_host_network: false
ingress_nginx_service_type: LoadBalancer
ingress_nginx_service_nodeport_http: ""
ingress_nginx_service_nodeport_https: ""
ingress_nginx_service_annotations: {}
ingress_publish_status_address: ""
ingress_nginx_nodeselector:
kubernetes.io/os: "linux"
@@ -7,6 +7,10 @@ metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
{% if ingress_nginx_service_annotations %}
annotations:
{{ ingress_nginx_service_annotations | to_nice_yaml(indent=2, width=1337) | indent(width=4) }}
{% endif %}
spec:
type: {{ ingress_nginx_service_type }}
ports:
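For illustration, how the new knob flows from group_vars into the rendered Service (annotation key and value are hypothetical, echoing the commented sample in `addons.yml` above):

```yaml
# group_vars override:
ingress_nginx_service_annotations:
  example.io/loadbalancerIPs: 1.2.3.4

# Abridged render of the template above:
metadata:
  annotations:
    example.io/loadbalancerIPs: 1.2.3.4
```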
1 change: 1 addition & 0 deletions roles/kubernetes/control-plane/meta/main.yml
@@ -1,5 +1,6 @@
---
dependencies:
- role: kubernetes/kubeadm_common
- role: kubernetes/tokens
when: kube_token_auth
tags:
24 changes: 0 additions & 24 deletions roles/kubernetes/control-plane/tasks/kubeadm-setup.yml
@@ -122,15 +122,6 @@
- item in kube_apiserver_admission_plugins_needs_configuration
loop: "{{ kube_apiserver_enable_admission_plugins }}"

- name: Kubeadm | Configure default cluster podnodeslector
template:
src: "podnodeselector.yaml.j2"
dest: "{{ kube_config_dir }}/admission-controls/podnodeselector.yaml"
mode: "0640"
when:
- kube_apiserver_admission_plugins_podnodeselector_default_node_selector is defined
- kube_apiserver_admission_plugins_podnodeselector_default_node_selector | length > 0

- name: Kubeadm | Check apiserver.crt SANs
vars:
apiserver_ips: "{{ apiserver_sans | map('ansible.utils.ipaddr') | reject('equalto', False) | list }}"
@@ -176,21 +167,6 @@
- apiserver_sans_ip_check.changed or apiserver_sans_host_check.changed
- not kube_external_ca_mode

- name: Kubeadm | Create directory to store kubeadm patches
file:
path: "{{ kubeadm_patches.dest_dir }}"
state: directory
mode: "0640"
when: kubeadm_patches is defined and kubeadm_patches.enabled

- name: Kubeadm | Copy kubeadm patches from inventory files
copy:
src: "{{ kubeadm_patches.source_dir }}/"
dest: "{{ kubeadm_patches.dest_dir }}"
owner: "root"
mode: "0644"
when: kubeadm_patches is defined and kubeadm_patches.enabled

- name: Kubeadm | Initialize first control plane node
command: >-
timeout -k {{ kubeadm_init_timeout }} {{ kubeadm_init_timeout }}
4 changes: 2 additions & 2 deletions roles/kubernetes/control-plane/tasks/kubeadm-upgrade.yml
@@ -18,7 +18,7 @@
--ignore-preflight-errors=all
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | bool | lower }}
{% if kubeadm_patches is defined and kubeadm_patches.enabled %}--patches={{ kubeadm_patches.dest_dir }}{% endif %}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
register: kubeadm_upgrade
# Retry is because upload config sometimes fails
@@ -39,7 +39,7 @@
--ignore-preflight-errors=all
--allow-experimental-upgrades
--etcd-upgrade={{ (etcd_deployment_type == "kubeadm") | bool | lower }}
{% if kubeadm_patches is defined and kubeadm_patches.enabled %}--patches={{ kubeadm_patches.dest_dir }}{% endif %}
{% if kubeadm_patches | length > 0 %}--patches={{ kubeadm_patches_dir }}{% endif %}
--force
register: kubeadm_upgrade
# Retry is because upload config sometimes fails
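Concretely, once the list is non-empty the rendered upgrade command just grows a `--patches` flag. A rough sketch, assuming the default `kube_config_dir` of `/etc/kubernetes` (the leading part of the command is elided in this excerpt, so the version, timeout, and invocation shown here are reconstructed and illustrative only):

```ShellSession
timeout -k 300s 300s kubeadm upgrade apply -y v1.31.1 \
    --ignore-preflight-errors=all \
    --allow-experimental-upgrades \
    --etcd-upgrade=false \
    --patches=/etc/kubernetes/patches \
    --force
```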
@@ -28,9 +28,9 @@ nodeRegistration:
kubeletExtraArgs:
cloud-provider: external
{% endif %}
{% if kubeadm_patches is defined and kubeadm_patches.enabled %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches.dest_dir }}
directory: {{ kubeadm_patches_dir }}
{% endif %}
---
apiVersion: kubeadm.k8s.io/v1beta3
@@ -31,7 +31,7 @@ nodeRegistration:
{% else %}
taints: []
{% endif %}
{% if kubeadm_patches is defined and kubeadm_patches.enabled %}
{% if kubeadm_patches | length > 0 %}
patches:
directory: {{ kubeadm_patches.dest_dir }}
directory: {{ kubeadm_patches_dir }}
{% endif %}
7 changes: 6 additions & 1 deletion roles/kubernetes/control-plane/vars/main.yaml
@@ -1,3 +1,8 @@
---
# list of admission plugins that needs to be configured
kube_apiserver_admission_plugins_needs_configuration: [EventRateLimit, PodSecurity]
# https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
kube_apiserver_admission_plugins_needs_configuration:
- EventRateLimit
- ImagePolicyWebhook
- PodSecurity
- PodNodeSelector
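For context, this list only matters for plugins that are actually enabled; entries on it get their configuration templated under `{{ kube_config_dir }}/admission-controls/` (per the loop shown earlier in this diff). A hedged group_vars sketch (selector value hypothetical; the PodNodeSelector variable name appears in the task removed above):

```yaml
kube_apiserver_enable_admission_plugins:
  - PodNodeSelector
kube_apiserver_admission_plugins_podnodeselector_default_node_selector: "env=production"
```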
3 changes: 3 additions & 0 deletions roles/kubernetes/kubeadm/meta/main.yml
@@ -0,0 +1,3 @@
---
dependencies:
- role: kubernetes/kubeadm_common