Compare commits


24 Commits

Author SHA1 Message Date
Techno Tim
ea3b3c776a chore(deps) pre-commit updates (#438)
* chore(deps): Updated pre-commit

* fix(actions): cleaning up comments
2024-01-30 11:54:28 -06:00
dependabot[bot]
5beca87783 chore(deps): bump ansible-core from 2.16.2 to 2.16.3 (#436)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.16.2 to 2.16.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.16.2...v2.16.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 21:29:07 -06:00
sholdee
6ffc25dfe5 Add Cilium CNI option (#435)
* Add Cilium CNI option

* Tweak version checks and add BGP resource verify

* Update metallb detection for kube-vip feat compat
2024-01-29 19:29:13 -06:00
Gereon Vey
bcd37a6904 add kube-vip as a service load balancer (#432)
* add kube-vip as a service load balancer

* add molecule scenario kube-vip

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-29 09:13:13 -06:00
Techno Tim
8dd3ffc825 fix(ci): Don't run CI for certain files (#433)
* fix(ci): Don't run CI for certain files

* fix(ci): Don't run CI for certain files
2024-01-28 20:42:28 +00:00
dependabot[bot]
f6ba208b5c chore(deps): bump actions/upload-artifact from 3.1.1 to 4.3.0 (#426)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.1 to 4.3.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](83fd05a356...26f96dfa69)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-28 19:40:47 +00:00
dependabot[bot]
a22d8f7aaf chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions (#425)
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 2.0.1 to 3.0.3.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](af2eb32266...ba37328d4e)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-28 17:20:46 +00:00
dependabot[bot]
05fb6b566d chore(deps): bump actions/setup-python from 2.3.3 to 5.0.0 (#423) 2024-01-28 01:57:41 +00:00
egandro
3aeb7d69ea added fix for metallb version upgrades (#394)
* added fix for metallb version upgrades

* use bash to allow pipefail

---------

Co-authored-by: Harald Fielker <harald.fielker@gmail.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-28 00:50:13 +00:00
dependabot[bot]
61bf3971ef chore(deps): bump actions/checkout from 2.5.0 to 4.1.1 (#424)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.5.0 to 4.1.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](e2f20e631a...b4ffde65f4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-27 17:26:12 -06:00
Gereon Vey
3f06a11c8d fetch kubeconfig from master after deployment (#431)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-27 16:30:13 -06:00
Techno Tim
3888a29bb1 fix(ci): only run CI for PRs (#430)
* fix(ci): only run CI for PRs

* fix(ci): ensure that branch is up to date

* fix(ci): ensure that branch is up to date
2024-01-27 15:35:47 -06:00
Timothy Stewart
98ef696f31 fix(ci): fixes for ephemeral nodes 2024-01-26 23:12:50 -06:00
Timothy Stewart
de26a79a4c fix(ci): fixes for ephemeral nodes 2024-01-26 23:09:30 -06:00
Timothy Stewart
ab7ca9b551 fix(ci): fixes for ephemeral nodes 2024-01-26 23:06:02 -06:00
Timothy Stewart
c5f71c9e2e fix(ci): fixes for ephemeral nodes 2024-01-26 22:52:19 -06:00
sholdee
0f23e7e258 Add Calico CNI option (#414)
* Add Tigera Operator/Calico CNI option

Small tweak to reduce delta from head

Set calico option to be disabled by default

Add rescue blocks in case updating existing

Refactor items and update comments

Refactor and consolidate calico.yml into block

Refactor to use template for Calico CRs

Revert use_calico to false

Template blockSize

Align default cidr in template with all.yml sample

Apply upstream version tags

Revert to current ver tags. Upstream's don't work.

Update template address detection

Add Tigera Operator/Calico CNI option

* Add calico-apiserver check

* Add eBPF dataplane option

* Add kube svc endpoint configmap when ebpf enabled

* Add /etc/cni/net.d to reset task

* Refactor based on comments

* Add molecule scenario

* Fix lint

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-26 18:53:27 -06:00
Techno Tim
121061d875 chore(deps) Updated LBs (#428)
* chore(deps): Updated metallb

* chore(deps): Updated kube-vip
2024-01-26 23:54:33 +00:00
João Gonçalves
db53f595fd feat(k3s): added support for latest raspberrypi os (debian 12 bookworm) (#404)
* feat(k3s): added support for latest raspberrypi os (debian 12 bookworm)

* Update test.yml

* Revert test workflow

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-26 22:20:06 +00:00
Techno Tim
7b6b24ce4d feat(k3s): Updated to v1.29.0+k3s1 (#421) 2024-01-26 14:49:24 -06:00
Techno Tim
a5728da35e feat(k3s): Updated to v1.28 (#420)
* feat(k3s): Updated to v1.28.5+k3s1
2024-01-26 13:10:21 -06:00
Techno Tim
cda7c92203 feat(k3s): Updated to v1.27 (#294)
* feat(k3s): Updated to v1.27.1+k3s1

* feat(k3s): Updated to v1.27.1+k3s1

* feat(k3s): Updated to v1.27.4+k3s1

* feat(k3s): Updated to v1.27.9+k3s1
2024-01-26 18:54:58 +00:00
Techno Tim
d910b83bf3 fix(molecule): Cleanup all artifacts, side effects, and actions in case nodes are not ephemeral (#427) 2024-01-26 17:16:26 +00:00
Techno Tim
101313f880 feat(dependabot): Added docker docker and github actions (#422) 2024-01-26 16:19:42 +00:00
32 changed files with 891 additions and 32 deletions


@@ -18,3 +18,4 @@ exclude_paths:
 skip_list:
   - 'fqcn-builtins'
+  - var-naming[no-role-prefix]


@@ -37,6 +37,11 @@ systemd_dir: ""
 flannel_iface: ""
+#calico_iface: ""
+calico_ebpf: ""
+calico_cidr: ""
+calico_tag: ""
 apiserver_endpoint: ""
 k3s_token: "NA"
@@ -46,6 +51,9 @@ extra_agent_args: ""
 kube_vip_tag_version: ""
+kube_vip_cloud_provider_tag_version: ""
+kube_vip_lb_ip_range: ""
 metal_lb_speaker_tag_version: ""
 metal_lb_controller_tag_version: ""


@@ -9,3 +9,18 @@ updates:
     ignore:
       - dependency-name: "*"
         update-types: ["version-update:semver-major"]
+
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "daily"
+    rebase-strategy: "auto"
+
+  - package-ecosystem: "docker"
+    directory: "/"
+    schedule:
+      interval: "daily"
+    rebase-strategy: "auto"
+    ignore:
+      - dependency-name: "*"
+        update-types: ["version-update:semver-major"]


@@ -11,12 +11,12 @@ jobs:
     steps:
       - name: Check out the codebase
-        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
         with:
           ref: ${{ github.event.pull_request.head.sha }}
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
+        uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
         with:
           python-version: ${{ env.PYTHON_VERSION }}
           cache: 'pip' # caching pip dependencies


@@ -2,11 +2,20 @@
 name: "CI"
 on:
   pull_request:
-  push:
-    branches:
-      - master
+    types:
+      - opened
+      - synchronize
     paths-ignore:
-      - '**/README.md'
+      - '**/.gitignore'
+      - '**/FUNDING.yml'
+      - '**/host.ini'
+      - '**/*.md'
+      - '**/.editorconfig'
+      - '**/ansible.example.cfg'
+      - '**/deploy.sh'
+      - '**/LICENSE'
+      - '**/reboot.sh'
+      - '**/reset.sh'
 jobs:
   pre:
     uses: ./.github/workflows/cache.yml


@@ -11,12 +11,12 @@ jobs:
     steps:
       - name: Check out the codebase
-        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
         with:
           ref: ${{ github.event.pull_request.head.sha }}
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
+        uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip' # caching pip dependencies
@@ -45,9 +45,9 @@ jobs:
     runs-on: self-hosted
     steps:
       - name: Checkout code
-        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
       - name: Ensure SHA pinned actions
-        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@af2eb3226618e2494e3d9084f515ad6dcf16e229 # 2.0.1
+        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@ba37328d4ea95eaf8b3bd6c6cef308f709a5f2ec # 3.0.3
         with:
           allowlist: |
             aws-actions/


@@ -12,16 +12,44 @@ jobs:
           - default
           - ipv6
           - single_node
+          - calico
+          - cilium
+          - kube-vip
       fail-fast: false
     env:
       PYTHON_VERSION: "3.11"
     steps:
       - name: Check out the codebase
-        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
         with:
           ref: ${{ github.event.pull_request.head.sha }}
+      # these steps are necessary if not using ephemeral nodes
+      - name: Delete old Vagrant box versions
+        if: always() # do this even if a step before has failed
+        run: vagrant box prune --force
+      - name: Remove all local Vagrant boxes
+        if: always() # do this even if a step before has failed
+        run: if vagrant box list 2>/dev/null; then vagrant box list | cut -f 1 -d ' ' | xargs -L 1 vagrant box remove -f 2>/dev/null && echo "All Vagrant boxes removed." || echo "No Vagrant boxes found."; else echo "No Vagrant boxes found."; fi
+      - name: Remove all Virtualbox VMs
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list vms | awk -F'"' '{print $2}' | xargs -I {} VBoxManage unregistervm --delete "{}"
+      - name: Remove all Virtualbox HDs
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list hdds | awk -F':' '/^UUID:/ {print $2}' | xargs -I {} VBoxManage closemedium disk "{}" --delete
+      - name: Remove all Virtualbox Networks
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list hostonlyifs | grep '^Name:' | awk '{print $2}' | grep '^vboxnet' | xargs -I {} VBoxManage hostonlyif remove {}
+      - name: Remove Virtualbox network config
+        if: always() # do this even if a step before has failed
+        run: sudo rm /etc/vbox/networks.conf || true
       - name: Configure VirtualBox
         run: |-
           sudo mkdir -p /etc/vbox
@@ -31,7 +59,7 @@ jobs:
           EOF
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
+        uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
         with:
           python-version: ${{ env.PYTHON_VERSION }}
           cache: 'pip' # caching pip dependencies
@@ -63,14 +91,36 @@ jobs:
       PY_COLORS: 1
       ANSIBLE_FORCE_COLOR: 1
+      # these steps are necessary if not using ephemeral nodes
+      - name: Delete old Vagrant box versions
+        if: always() # do this even if a step before has failed
+        run: vagrant box prune --force
+      - name: Remove all local Vagrant boxes
+        if: always() # do this even if a step before has failed
+        run: if vagrant box list 2>/dev/null; then vagrant box list | cut -f 1 -d ' ' | xargs -L 1 vagrant box remove -f 2>/dev/null && echo "All Vagrant boxes removed." || echo "No Vagrant boxes found."; else echo "No Vagrant boxes found."; fi
+      - name: Remove all Virtualbox VMs
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list vms | awk -F'"' '{print $2}' | xargs -I {} VBoxManage unregistervm --delete "{}"
+      - name: Remove all Virtualbox HDs
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list hdds | awk -F':' '/^UUID:/ {print $2}' | xargs -I {} VBoxManage closemedium disk "{}" --delete
+      - name: Remove all Virtualbox Networks
+        if: always() # do this even if a step before has failed
+        run: VBoxManage list hostonlyifs | grep '^Name:' | awk '{print $2}' | grep '^vboxnet' | xargs -I {} VBoxManage hostonlyif remove {}
+      - name: Remove Virtualbox network config
+        if: always() # do this even if a step before has failed
+        run: sudo rm /etc/vbox/networks.conf || true
       - name: Upload log files
         if: always() # do this even if a step before has failed
-        uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # 3.1.1
+        uses: actions/upload-artifact@26f96dfa697d77e81fd5907df203aa23a56210a8 # 4.3.0
         with:
           name: logs
           path: |
             ${{ runner.temp }}/logs
+          overwrite: true
+      - name: Delete old box versions
+        if: always() # do this even if a step before has failed
+        run: vagrant box prune --force

.gitignore vendored

@@ -1,3 +1,4 @@
 .env/
 *.log
 ansible.cfg
+kubeconfig


@@ -1,7 +1,7 @@
 ---
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: f71fa2c1f9cf5cb705f73dffe4b21f7c61470ba9 # frozen: v4.4.0
+    rev: v4.5.0
     hooks:
       - id: requirements-txt-fixer
       - id: sort-simple-yaml
@@ -12,24 +12,24 @@ repos:
       - id: trailing-whitespace
        args: [--markdown-linebreak-ext=md]
   - repo: https://github.com/adrienverge/yamllint.git
-    rev: b05e028c5881819161d11cb543fd96a30c06cceb # frozen: v1.32.0
+    rev: v1.33.0
     hooks:
       - id: yamllint
        args: [-c=.yamllint]
   - repo: https://github.com/ansible-community/ansible-lint.git
-    rev: 3293b64b939c0de16ef8cb81dd49255e475bf89a # frozen: v6.17.2
+    rev: v6.22.2
     hooks:
       - id: ansible-lint
   - repo: https://github.com/shellcheck-py/shellcheck-py
-    rev: 375289a39f5708101b1f916eb729e8d6da96993f # frozen: v0.9.0.5
+    rev: v0.9.0.6
     hooks:
       - id: shellcheck
   - repo: https://github.com/Lucas-C/pre-commit-hooks
-    rev: 12885e376b93dc4536ad68d156065601e4433665 # frozen: v1.5.1
+    rev: v1.5.4
     hooks:
       - id: remove-crlf
      - id: remove-tabs
   - repo: https://github.com/sirosen/texthooks
-    rev: c4ffd3e31669dd4fa4d31a23436cc13839730084 # frozen: 0.5.0
+    rev: 0.6.4
     hooks:
       - id: fix-smartquotes


@@ -1,5 +1,5 @@
 ---
-k3s_version: v1.26.12+k3s1
+k3s_version: v1.29.0+k3s1
 # this is the user that has ssh access to these machines
 ansible_user: ansibleuser
 systemd_dir: /etc/systemd/system
@@ -10,6 +10,30 @@ system_timezone: "Your/Timezone"
 # interface which will be used for flannel
 flannel_iface: "eth0"
+# uncomment calico_iface to use tigera operator/calico cni instead of flannel https://docs.tigera.io/calico/latest/about
+# calico_iface: "eth0"
+calico_ebpf: false # use eBPF dataplane instead of iptables
+calico_tag: "v3.27.0" # calico version tag
+# uncomment cilium_iface to use cilium cni instead of flannel or calico
+# ensure v4.19.57, v5.1.16, v5.2.0 or more recent kernel
+# cilium_iface: "eth0"
+cilium_mode: "native" # native when nodes on same subnet or using bgp, else set routed
+cilium_tag: "v1.14.6" # cilium version tag
+cilium_hubble: true # enable hubble observability relay and ui
+# if using calico or cilium, you may specify the cluster pod cidr pool
+cluster_cidr: "10.52.0.0/16"
+# enable cilium bgp control plane for lb services and pod cidrs. disables metallb.
+cilium_bgp: false
+# bgp parameters for cilium cni. only active when cilium_iface is defined and cilium_bgp is true.
+cilium_bgp_my_asn: "64513"
+cilium_bgp_peer_asn: "64512"
+cilium_bgp_peer_address: "192.168.30.1"
+cilium_bgp_lb_cidr: "192.168.31.0/24" # cidr for cilium loadbalancer ipam
 # apiserver_endpoint is virtual ip-address which will be configured on each master
 apiserver_endpoint: "192.168.30.222"
@@ -20,28 +44,42 @@ k3s_token: "some-SUPER-DEDEUPER-secret-password"
 # The IP on which the node is reachable in the cluster.
 # Here, a sensible default is provided, you can still override
 # it for each of your hosts, though.
-k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'
+k3s_node_ip: "{{ ansible_facts[(cilium_iface | default(calico_iface | default(flannel_iface)))]['ipv4']['address'] }}"
 # Disable the taint manually by setting: k3s_master_taint = false
 k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"
 # these arguments are recommended for servers as well as agents:
 extra_args: >-
-  --flannel-iface={{ flannel_iface }}
+  {{ '--flannel-iface=' + flannel_iface if calico_iface is not defined and cilium_iface is not defined else '' }}
   --node-ip={{ k3s_node_ip }}
 # change these to your liking, the only required are: --disable servicelb, --tls-san {{ apiserver_endpoint }}
+# the contents of the if block is also required if using calico or cilium
 extra_server_args: >-
   {{ extra_args }}
   {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
+  {% if calico_iface is defined or cilium_iface is defined %}
+  --flannel-backend=none
+  --disable-network-policy
+  --cluster-cidr={{ cluster_cidr | default('10.52.0.0/16') }}
+  {% endif %}
   --tls-san {{ apiserver_endpoint }}
   --disable servicelb
   --disable traefik
 extra_agent_args: >-
   {{ extra_args }}
 # image tag for kube-vip
-kube_vip_tag_version: "v0.5.12"
+kube_vip_tag_version: "v0.6.4"
+# tag for kube-vip-cloud-provider manifest
+# kube_vip_cloud_provider_tag_version: "main"
+# kube-vip ip range for load balancer
+# (uncomment to use kube-vip for services instead of MetalLB)
+# kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"
 # metallb type frr or native
 metal_lb_type: "native"
@@ -55,8 +93,8 @@ metal_lb_mode: "layer2"
 # metal_lb_bgp_peer_address: "192.168.30.1"
 # image tag for metal lb
-metal_lb_speaker_tag_version: "v0.13.9"
-metal_lb_controller_tag_version: "v0.13.9"
+metal_lb_speaker_tag_version: "v0.13.12"
+metal_lb_controller_tag_version: "v0.13.12"
 # metallb ip range for load balancer
 metal_lb_ip_range: "192.168.30.80-192.168.30.90"
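
Read together, the new sample variables above make switching a cluster from the default flannel CNI to Cilium an inventory-only change. A minimal sketch, assuming the file edited is the sample all.yml shown here and reusing its illustrative values (the kernel requirement noted in the comments still applies):

cilium_iface: "eth0"          # uncommenting this selects the cilium CNI
cilium_mode: "native"         # nodes on one subnet or using bgp; otherwise "routed"
cilium_tag: "v1.14.6"
cilium_hubble: true
cluster_cidr: "10.52.0.0/16"
cilium_bgp: false             # keep false to continue using MetalLB for services

With cilium_iface (or calico_iface) defined, the extra_server_args template above automatically adds --flannel-backend=none, --disable-network-policy and --cluster-cidr, and drops --flannel-iface.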


@@ -13,6 +13,12 @@ We have these scenarios:
   To save a bit of test time, this cluster is _not_ highly available, it consists of only one control and one worker node.
 - **single_node**:
   Very similar to the default scenario, but uses only a single node for all cluster functionality.
+- **calico**:
+  The same as single node, but uses calico cni instead of flannel.
+- **cilium**:
+  The same as single node, but uses cilium cni instead of flannel.
+- **kube-vip**
+  The same as single node, but uses kube-vip as service loadbalancer instead of MetalLB
 ## How to execute
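
The three new scenario names map one-to-one onto the entries added to the CI test matrix above. Assuming the standard Molecule CLI (a reasonable assumption given the molecule.yml files added in this compare), an individual scenario can also be exercised locally with, for example, `molecule test --scenario-name cilium`.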


@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.62
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy


@@ -0,0 +1,16 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
calico_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.224"
metal_lb_ip_range: "192.168.30.100-192.168.30.109"


@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.63
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy


@@ -0,0 +1,16 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
cilium_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.225"
metal_lb_ip_range: "192.168.30.110-192.168.30.119"


@@ -22,7 +22,7 @@ platforms:
       ssh.password: "vagrant"
   - name: control2
-    box: generic/debian11
+    box: generic/debian12
     memory: 1024
     cpus: 2
     groups:


@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.62
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy


@@ -0,0 +1,17 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
flannel_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.225"
# Use kube-vip instead of MetalLB
kube_vip_lb_ip_range: "192.168.30.110-192.168.30.119"


@@ -6,7 +6,7 @@
 #
 ansible-compat==4.1.11
     # via molecule
-ansible-core==2.16.2
+ansible-core==2.16.3
     # via
     #   -r requirements.in
     #   ansible-compat


@@ -0,0 +1,27 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Download vip cloud provider manifest to first master
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/{{ kube_vip_cloud_provider_tag_version | default('main') }}/manifest/kube-vip-cloud-controller.yaml" # noqa yaml[line-length]
dest: "/var/lib/rancher/k3s/server/manifests/kube-vip-cloud-controller.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Copy kubevip configMap manifest to first master
template:
src: "kubevip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/kubevip.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
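
This task file is only included when kube_vip_lb_ip_range is defined (see the main.yml change further down), so opting into kube-vip as the service load balancer boils down, per the sample inventory in this compare, to an override roughly like the following sketch (values illustrative):

kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"
# kube_vip_cloud_provider_tag_version: "main"   # optional; the task above already defaults to main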


@@ -29,6 +29,12 @@
 - name: Deploy metallb manifest
   include_tasks: metallb.yml
   tags: metallb
+  when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
+
+- name: Deploy kube-vip manifest
+  include_tasks: kube-vip.yml
+  tags: kubevip
+  when: kube_vip_lb_ip_range is defined
 - name: Init cluster inside the transient k3s-init service
   command:


@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubevip
namespace: kube-system
data:
{% if kube_vip_lb_ip_range is string %}
{# kube_vip_lb_ip_range was used in the legacy way: single string instead of a list #}
{# => transform to list with single element #}
{% set kube_vip_lb_ip_range = [kube_vip_lb_ip_range] %}
{% endif %}
range-global: {{ kube_vip_lb_ip_range | join(',') }}
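
For reference, with the sample value suggested in all.yml above (kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"), this template renders to roughly:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-global: 192.168.30.80-192.168.30.90

If a list of ranges is supplied instead of a single string, the join(',') produces a comma-separated range-global value.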


@@ -43,7 +43,7 @@ spec:
         - name: vip_ddns
           value: "false"
         - name: svc_enable
-          value: "false"
+          value: "{{ 'true' if kube_vip_lb_ip_range is defined else 'false' }}"
         - name: vip_leaderelection
           value: "true"
         - name: vip_leaseduration


@@ -0,0 +1,114 @@
---
- name: Deploy Calico to cluster
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
run_once: true
block:
- name: Create manifests directory on first master
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0755
- name: "Download to first master: manifest for Tigera Operator and Calico CRDs"
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml"
dest: "/tmp/k3s/tigera-operator.yaml"
owner: root
group: root
mode: 0755
- name: Copy Calico custom resources manifest to first master
ansible.builtin.template:
src: "calico.crs.j2"
dest: /tmp/k3s/custom-resources.yaml
owner: root
group: root
mode: 0755
- name: Deploy or replace Tigera Operator
block:
- name: Deploy Tigera Operator
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/tigera-operator.yaml
register: create_operator
changed_when: "'created' in create_operator.stdout"
failed_when: "'Error' in create_operator.stderr and 'already exists' not in create_operator.stderr"
rescue:
- name: Replace existing Tigera Operator
ansible.builtin.command:
cmd: kubectl replace -f /tmp/k3s/tigera-operator.yaml
register: replace_operator
changed_when: "'replaced' in replace_operator.stdout"
failed_when: "'Error' in replace_operator.stderr"
- name: Wait for Tigera Operator resources
command: >-
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='tigera-operator'
--for=condition=Available=True
--timeout=7s
register: tigera_result
changed_when: false
until: tigera_result is succeeded
retries: 7
delay: 7
with_items:
- {name: tigera-operator, type: deployment}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
- name: Deploy Calico custom resources
block:
- name: Deploy custom resources for Calico
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/custom-resources.yaml
register: create_cr
changed_when: "'created' in create_cr.stdout"
failed_when: "'Error' in create_cr.stderr and 'already exists' not in create_cr.stderr"
rescue:
- name: Apply new Calico custom resource manifest
ansible.builtin.command:
cmd: kubectl apply -f /tmp/k3s/custom-resources.yaml
register: apply_cr
changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
failed_when: "'Error' in apply_cr.stderr"
- name: Wait for Calico system resources to be available
command: >-
{% if item.type == 'daemonset' %}
k3s kubectl wait pods
--namespace='{{ item.namespace }}'
--selector={{ item.selector }}
--for=condition=Ready
{% else %}
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='{{ item.namespace }}'
--for=condition=Available
{% endif %}
--timeout=7s
register: cr_result
changed_when: false
until: cr_result is succeeded
retries: 30
delay: 7
with_items:
- {name: calico-typha, type: deployment, namespace: calico-system}
- {name: calico-kube-controllers, type: deployment, namespace: calico-system}
- {name: csi-node-driver, type: daemonset, selector: 'k8s-app=csi-node-driver', namespace: calico-system}
- {name: calico-node, type: daemonset, selector: 'k8s-app=calico-node', namespace: calico-system}
- {name: calico-apiserver, type: deployment, namespace: calico-apiserver}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
- name: Patch Felix configuration for eBPF mode
ansible.builtin.command:
cmd: >
kubectl patch felixconfiguration default
--type='merge'
--patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}'
register: patch_result
changed_when: "'felixconfiguration.projectcalico.org/default patched' in patch_result.stdout"
failed_when: "'Error' in patch_result.stderr"
when: calico_ebpf


@@ -0,0 +1,253 @@
---
- name: Prepare Cilium CLI on first master and deploy CNI
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
run_once: true
block:
- name: Create tmp directory on first master
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0755
- name: Check if Cilium CLI is installed
ansible.builtin.command: cilium version
register: cilium_cli_installed
failed_when: false
changed_when: false
ignore_errors: true
- name: Check for Cilium CLI version in command output
set_fact:
installed_cli_version: >-
{{
cilium_cli_installed.stdout_lines
| join(' ')
| regex_findall('cilium-cli: (v\d+\.\d+\.\d+)')
| first
| default('unknown')
}}
when: cilium_cli_installed.rc == 0
- name: Get latest stable Cilium CLI version file
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt"
dest: "/tmp/k3s/cilium-cli-stable.txt"
owner: root
group: root
mode: 0755
- name: Read Cilium CLI stable version from file
ansible.builtin.command: cat /tmp/k3s/cilium-cli-stable.txt
register: cli_ver
changed_when: false
- name: Log installed Cilium CLI version
ansible.builtin.debug:
msg: "Installed Cilium CLI version: {{ installed_cli_version | default('Not installed') }}"
- name: Log latest stable Cilium CLI version
ansible.builtin.debug:
msg: "Latest Cilium CLI version: {{ cli_ver.stdout }}"
- name: Determine if Cilium CLI needs installation or update
set_fact:
cilium_cli_needs_update: >-
{{
cilium_cli_installed.rc != 0 or
(cilium_cli_installed.rc == 0 and
installed_cli_version != cli_ver.stdout)
}}
- name: Install or update Cilium CLI
when: cilium_cli_needs_update
block:
- name: Set architecture variable
ansible.builtin.set_fact:
cli_arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"
- name: Download Cilium CLI and checksum
ansible.builtin.get_url:
url: "{{ cilium_base_url }}/cilium-linux-{{ cli_arch }}{{ item }}"
dest: "/tmp/k3s/cilium-linux-{{ cli_arch }}{{ item }}"
owner: root
group: root
mode: 0755
loop:
- ".tar.gz"
- ".tar.gz.sha256sum"
vars:
cilium_base_url: "https://github.com/cilium/cilium-cli/releases/download/{{ cli_ver.stdout }}"
- name: Verify the downloaded tarball
ansible.builtin.shell: |
cd /tmp/k3s && sha256sum --check cilium-linux-{{ cli_arch }}.tar.gz.sha256sum
args:
executable: /bin/bash
changed_when: false
- name: Extract Cilium CLI to /usr/local/bin
ansible.builtin.unarchive:
src: "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
dest: /usr/local/bin
remote_src: true
- name: Remove downloaded tarball and checksum file
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
- "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz.sha256sum"
- name: Wait for connectivity to kube VIP
ansible.builtin.command: ping -c 1 {{ apiserver_endpoint }}
register: ping_result
until: ping_result.rc == 0
retries: 21
delay: 1
ignore_errors: true
changed_when: false
- name: Fail if kube VIP not reachable
ansible.builtin.fail:
msg: "API endpoint {{ apiserver_endpoint }} is not reachable"
when: ping_result.rc != 0
- name: Test for existing Cilium install
ansible.builtin.command: k3s kubectl -n kube-system get daemonsets cilium
register: cilium_installed
failed_when: false
changed_when: false
ignore_errors: true
- name: Check existing Cilium install
when: cilium_installed.rc == 0
block:
- name: Check Cilium version
ansible.builtin.command: cilium version
register: cilium_version
failed_when: false
changed_when: false
ignore_errors: true
- name: Parse installed Cilium version
set_fact:
installed_cilium_version: >-
{{
cilium_version.stdout_lines
| join(' ')
| regex_findall('cilium image.+(\d+\.\d+\.\d+)')
| first
| default('unknown')
}}
- name: Determine if Cilium needs update
set_fact:
cilium_needs_update: >-
{{ 'v' + installed_cilium_version != cilium_tag }}
- name: Log result
ansible.builtin.debug:
msg: >
Installed Cilium version: {{ installed_cilium_version }},
Target Cilium version: {{ cilium_tag }},
Update needed: {{ cilium_needs_update }}
- name: Install Cilium
ansible.builtin.command: >-
{% if cilium_installed.rc != 0 %}
cilium install
{% else %}
cilium upgrade
{% endif %}
--version "{{ cilium_tag }}"
--helm-set operator.replicas="1"
{{ '--helm-set devices=' + cilium_iface if cilium_iface != 'auto' else '' }}
--helm-set ipam.operator.clusterPoolIPv4PodCIDRList={{ cluster_cidr }}
{% if cilium_mode == "native" or (cilium_bgp and cilium_exportPodCIDR != 'false') %}
--helm-set ipv4NativeRoutingCIDR={{ cluster_cidr }}
{% endif %}
--helm-set k8sServiceHost={{ apiserver_endpoint }}
--helm-set k8sServicePort="6443"
--helm-set routingMode={{ cilium_mode | default("native") }}
--helm-set autoDirectNodeRoutes={{ "true" if cilium_mode == "native" else "false" }}
--helm-set kubeProxyReplacement={{ kube_proxy_replacement | default("true") }}
--helm-set bpf.masquerade={{ enable_bpf_masquerade | default("true") }}
--helm-set bgpControlPlane.enabled={{ cilium_bgp | default("false") }}
--helm-set hubble.enabled={{ "true" if cilium_hubble else "false" }}
--helm-set hubble.relay.enabled={{ "true" if cilium_hubble else "false" }}
--helm-set hubble.ui.enabled={{ "true" if cilium_hubble else "false" }}
{% if kube_proxy_replacement is not false %}
--helm-set bpf.loadBalancer.algorithm={{ bpf_lb_algorithm | default("maglev") }}
--helm-set bpf.loadBalancer.mode={{ bpf_lb_mode | default("hybrid") }}
{% endif %}
environment:
KUBECONFIG: /home/{{ ansible_user }}/.kube/config
register: cilium_install_result
changed_when: cilium_install_result.rc == 0
when: cilium_installed.rc != 0 or cilium_needs_update
- name: Wait for Cilium resources
command: >-
{% if item.type == 'daemonset' %}
k3s kubectl wait pods
--namespace=kube-system
--selector='k8s-app=cilium'
--for=condition=Ready
{% else %}
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace=kube-system
--for=condition=Available
{% endif %}
--timeout=7s
register: cr_result
changed_when: false
until: cr_result is succeeded
retries: 30
delay: 7
with_items:
- {name: cilium-operator, type: deployment}
- {name: cilium, type: daemonset, selector: 'k8s-app=cilium'}
- {name: hubble-relay, type: deployment, check_hubble: true}
- {name: hubble-ui, type: deployment, check_hubble: true}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
when: >-
not item.check_hubble | default(false) or (item.check_hubble | default(false) and cilium_hubble)
- name: Configure Cilium BGP
when: cilium_bgp
block:
- name: Copy BGP manifests to first master
ansible.builtin.template:
src: "cilium.crs.j2"
dest: /tmp/k3s/cilium-bgp.yaml
owner: root
group: root
mode: 0755
- name: Apply BGP manifests
ansible.builtin.command:
cmd: kubectl apply -f /tmp/k3s/cilium-bgp.yaml
register: apply_cr
changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
failed_when: "'is invalid' in apply_cr.stderr"
ignore_errors: true
- name: Print error message if BGP manifests application fails
ansible.builtin.debug:
msg: "{{ apply_cr.stderr }}"
when: "'is invalid' in apply_cr.stderr"
- name: Test for BGP config resources
ansible.builtin.command: "{{ item }}"
loop:
- k3s kubectl get CiliumBGPPeeringPolicy.cilium.io
- k3s kubectl get CiliumLoadBalancerIPPool.cilium.io
changed_when: false
loop_control:
label: "{{ item }}"


@@ -1,7 +1,18 @@
 ---
+- name: Deploy calico
+  include_tasks: calico.yml
+  tags: calico
+  when: calico_iface is defined and cilium_iface is not defined
+
+- name: Deploy cilium
+  include_tasks: cilium.yml
+  tags: cilium
+  when: cilium_iface is defined
+
 - name: Deploy metallb pool
   include_tasks: metallb.yml
   tags: metallb
+  when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
 - name: Remove tmp directory used for manifests
   file:


@@ -8,6 +8,27 @@
   with_items: "{{ groups[group_name_master | default('master')] }}"
   run_once: true
+
+- name: Delete outdated metallb replicas
+  shell: |-
+    set -o pipefail
+    REPLICAS=$(k3s kubectl --namespace='metallb-system' get replicasets \
+      -l 'component=controller,app=metallb' \
+      -o jsonpath='{.items[0].spec.template.spec.containers[0].image}, {.items[0].metadata.name}' 2>/dev/null || true)
+    REPLICAS_SETS=$(echo ${REPLICAS} | grep -v '{{ metal_lb_controller_tag_version }}' | sed -e "s/^.*\s//g")
+    if [ -n "${REPLICAS_SETS}" ] ; then
+      for REPLICAS in "${REPLICAS_SETS}"
+      do
+        k3s kubectl --namespace='metallb-system' \
+          delete rs "${REPLICAS}"
+      done
+    fi
+  args:
+    executable: /bin/bash
+  changed_when: false
+  run_once: true
+  with_items: "{{ groups[group_name_master | default('master')] }}"
 - name: Copy metallb CRs manifest to first master
   template:
     src: "metallb.crs.j2"


@@ -0,0 +1,41 @@
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
# Configures Calico networking.
calicoNetwork:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: {{ calico_blockSize | default('26') }}
cidr: {{ cluster_cidr | default('10.52.0.0/16') }}
encapsulation: {{ calico_encapsulation | default('VXLANCrossSubnet') }}
natOutgoing: {{ calico_natOutgoing | default('Enabled') }}
nodeSelector: {{ calico_nodeSelector | default('all()') }}
nodeAddressAutodetectionV4:
interface: {{ calico_iface }}
linuxDataplane: {{ 'BPF' if calico_ebpf else 'Iptables' }}
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
{% if calico_ebpf %}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
data:
KUBERNETES_SERVICE_HOST: '{{ apiserver_endpoint }}'
KUBERNETES_SERVICE_PORT: '6443'
{% endif %}
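
Only calico_iface has no default here; every other knob falls back to the values shown in the template. An illustrative group_vars override (variable names taken from this template and the sample all.yml) that turns on the eBPF dataplane might look like:

calico_iface: "eth0"
calico_ebpf: true                 # renders linuxDataplane: BPF plus the kubernetes-services-endpoint ConfigMap
calico_blockSize: "26"            # template default, shown for completeness
cluster_cidr: "10.52.0.0/16"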


@@ -0,0 +1,29 @@
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
name: 01-bgp-peering-policy
spec: # CiliumBGPPeeringPolicySpec
virtualRouters: # []CiliumBGPVirtualRouter
- localASN: {{ cilium_bgp_my_asn }}
exportPodCIDR: {{ cilium_exportPodCIDR | default('true') }}
neighbors: # []CiliumBGPNeighbor
- peerAddress: '{{ cilium_bgp_peer_address + "/32"}}'
peerASN: {{ cilium_bgp_peer_asn }}
eBGPMultihopTTL: 10
connectRetryTimeSeconds: 120
holdTimeSeconds: 90
keepAliveTimeSeconds: 30
gracefulRestart:
enabled: true
restartTimeSeconds: 120
serviceSelector:
matchExpressions:
- {key: somekey, operator: NotIn, values: ['never-used-value']}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
name: "01-lb-pool"
spec:
cidrs:
- cidr: "{{ cilium_bgp_lb_cidr }}"


@@ -31,6 +31,14 @@
     - raspberry_pi|default(false)
     - ansible_facts.lsb.description|default("") is match(allowed_descriptions | join('|'))
+
+- name: Set detected_distribution to Raspbian (ARM64 on Debian Bookworm)
+  set_fact:
+    detected_distribution: Raspbian
+  when:
+    - ansible_facts.architecture is search("aarch64")
+    - raspberry_pi|default(false)
+    - ansible_facts.lsb.description|default("") is match("Debian.*bookworm")
 - name: Set detected_distribution_major_version
   set_fact:
     detected_distribution_major_version: "{{ ansible_facts.lsb.major_release }}"


@@ -45,6 +45,7 @@
     - /var/lib/rancher/k3s
     - /var/lib/rancher/
     - /var/lib/cni/
+    - /etc/cni/net.d
 - name: Remove K3s http_proxy files
   file:


@@ -46,3 +46,14 @@
     roles:
       - role: k3s_server_post
     become: true
+
+- name: Storing kubeconfig in the playbook directory
+  hosts: master
+  environment: "{{ proxy_env | default({}) }}"
+  tasks:
+    - name: Copying kubeconfig from {{ hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] }}
+      ansible.builtin.fetch:
+        src: "{{ ansible_user_dir }}/.kube/config"
+        dest: ./kubeconfig
+        flat: true
+      when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
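
Once this play has run, the cluster credentials land in ./kubeconfig next to the playbook (the same path added to .gitignore earlier in this compare), so a quick smoke test from the control machine is something like `KUBECONFIG=./kubeconfig kubectl get nodes`, assuming kubectl is installed locally.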