Compare commits


14 Commits

Author SHA1 Message Date
dependabot[bot]
507b9d602a Merge c1abdd7762 into 924a2f528c 2024-09-09 02:26:36 +00:00
dependabot[bot]
c1abdd7762 chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.11 to 3.0.12.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](3c16e895bb...0901cf7b71)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 02:26:35 +00:00
dependabot[bot]
924a2f528c chore(deps): bump actions/setup-python from 5.1.1 to 5.2.0 (#573)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.1 to 5.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](39cd14951b...f677139bbe)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-31 23:15:52 -05:00
dependabot[bot]
2892ac3858 chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions (#571)
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.10 to 3.0.11.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](b88cd0aad2...3c16e895bb)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-31 14:33:37 -05:00
Christian Berendt
df8e8dd591 Make kubectl binary configurable with the k3s_kubectl_binary parameter (#567)
Closes techno-tim/k3s-ansible#566

Signed-off-by: Christian Berendt <berendt@osism.tech>
2024-08-22 17:58:15 -05:00
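A minimal sketch of overriding this new parameter in inventory (path and value below are illustrative; the role's default remains "k3s kubectl"):

# inventory/my-cluster/group_vars/all.yml (hypothetical location)
k3s_kubectl_binary: kubectl # use a standalone kubectl binary instead of "k3s kubectl"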
dependabot[bot]
3a0303d130 chore(deps): bump ansible-core from 2.17.2 to 2.17.3 (#564)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.17.2 to 2.17.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.17.2...v2.17.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-13 06:14:10 +00:00
Richard Holmboe
b077a49e1f Change to FQCN with ansible-lint fixer (#553)
* Change to FQCN with ansible-lint fixer

Since ansible-base 2.10 (later renamed ansible-core), FQCNs are the recommended way to reference modules.

Updated .ansible-lint to the production profile and removed fqcn-builtins from skip_list.
Updated .yamllint with the rules this requires.

Ran ansible-lint --fix=all, then manually applied some minor changes (an illustration follows this entry).

* Changed octal value in molecule/ipv6/prepare.yml
2024-08-12 22:59:59 -05:00
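For illustration, this is the kind of rewrite ansible-lint --fix=all applies; the task below is taken from this changeset's own diffs:

# before: short module name
- name: Download k3s binary x64
  get_url:
    url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s

# after: fully qualified collection name
- name: Download k3s binary x64
  ansible.builtin.get_url:
    url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s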
dependabot[bot]
635f0b21b3 chore(deps): bump actions/upload-artifact from 4.3.5 to 4.3.6 (#561)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.5 to 4.3.6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](89ef406dd8...834a144ee9)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-07 20:22:17 +00:00
dependabot[bot]
4a64ad42df chore(deps): bump pyyaml from 6.0.1 to 6.0.2 (#562)
Bumps [pyyaml](https://github.com/yaml/pyyaml) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/yaml/pyyaml/releases)
- [Changelog](https://github.com/yaml/pyyaml/blob/main/CHANGES)
- [Commits](https://github.com/yaml/pyyaml/compare/6.0.1...6.0.2)

---
updated-dependencies:
- dependency-name: pyyaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-07 18:16:52 +00:00
Christian Berendt
d0537736de k3s_server: add missing parameter descriptions (#559)
Commit 3a20500f9c introduced argument specs
in the role meta information. These two parameters
were still missing there.

Related to 2d0596209e

Signed-off-by: Christian Berendt <berendt@osism.tech>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-07 16:10:16 +00:00
Christian Berendt
2149827800 k3s_server: add kube-vip BGP support (#554)
The kube_vip_bgp parameter enables kube-vip's BGP mode
(https://kube-vip.io/docs/modes/bgp/).

It is configured with the following new parameters (a combined sketch follows this entry):

* kube_vip_bgp_routerid
* kube_vip_bgp_as
* kube_vip_bgp_peeraddress
* kube_vip_bgp_peeras

Signed-off-by: Christian Berendt <berendt@osism.tech>
2024-08-07 09:36:05 -05:00
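A combined group_vars sketch enabling this feature, using the sample defaults from this changeset (whether kube_vip_arp should also be disabled depends on your network; the role does not enforce it):

kube_vip_bgp: true
kube_vip_bgp_routerid: "127.0.0.1" # router ID for the BGP server
kube_vip_bgp_as: "64513" # AS for the BGP server
kube_vip_bgp_peeraddress: "192.168.30.1" # address of the BGP peer
kube_vip_bgp_peeras: "64512" # AS of the BGP peer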
Christian Berendt
2d0596209e Make it possible to disable the creation of the kubectl/crictl symlinks (#558)
If k3s_create_kubectl_symlink is set to false, the kubectl symlink is not created.

If k3s_create_crictl_symlink is set to false, the crictl symlink is not created.

Both symlinks are still created by default, so the default behavior is unchanged (a sketch follows this entry).

Signed-off-by: Christian Berendt <berendt@osism.tech>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-05 21:19:57 -05:00
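A minimal sketch of opting out of both symlinks (the values shown are the non-default case):

k3s_create_kubectl_symlink: false # skip the kubectl -> k3s symlink
k3s_create_crictl_symlink: false # skip the crictl -> k3s symlink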
Dov Benyomin Sohacheski
3a20500f9c Add default values to roles (#509)
* Add default values to roles

* 🚚 Move to use meta files for roles

* 🛠 Fix descriptions

* Add meta for server

* 🚧 WIP

* 🌟 Complete

* 🧹 Ran and fix lint errors

* 🔨 Fix required and default conflict

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-08-05 17:00:24 -05:00
dependabot[bot]
9ce9fecc5b chore(deps): bump actions/upload-artifact from 4.3.4 to 4.3.5 (#555)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.4 to 4.3.5.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](0b2256b8c0...89ef406dd8)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-05 13:55:38 -05:00
69 changed files with 799 additions and 378 deletions


@@ -1,21 +1,21 @@
 ---
+profile: production
 exclude_paths:
   # default paths
-  - '.cache/'
-  - '.github/'
-  - 'test/fixtures/formatting-before/'
-  - 'test/fixtures/formatting-prettier/'
+  - .cache/
+  - .github/
+  - test/fixtures/formatting-before/
+  - test/fixtures/formatting-prettier/
   # The "converge" and "reset" playbooks use import_playbook in
   # conjunction with the "env" lookup plugin, which lets the
   # syntax check of ansible-lint fail.
-  - 'molecule/**/converge.yml'
-  - 'molecule/**/prepare.yml'
-  - 'molecule/**/reset.yml'
+  - molecule/**/converge.yml
+  - molecule/**/prepare.yml
+  - molecule/**/reset.yml
   # The file was generated by galaxy ansible - don't mess with it.
-  - 'galaxy.yml'
+  - galaxy.yml
 skip_list:
-  - 'fqcn-builtins'
   - var-naming[no-role-prefix]


@@ -16,7 +16,7 @@ jobs:
           ref: ${{ github.event.pull_request.head.sha }}
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@39cd14951b08e74b54015e9e001cdefcf80e669f # 5.1.1
+        uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # 5.2.0
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip' # caching pip dependencies


@@ -16,7 +16,7 @@ jobs:
           ref: ${{ github.event.pull_request.head.sha }}
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@39cd14951b08e74b54015e9e001cdefcf80e669f # 5.1.1
+        uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # 5.2.0
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip' # caching pip dependencies
@@ -47,7 +47,7 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # 4.1.7
       - name: Ensure SHA pinned actions
-        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@b88cd0aad2c36a63e42c71f81cb1958fed95ac87 # 3.0.10
+        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@0901cf7b71c7ea6261ec69a3dc2bd3f9264f893e # 3.0.12
        with:
          allowlist: |
            aws-actions/


@@ -59,7 +59,7 @@ jobs:
           EOF
       - name: Set up Python ${{ env.PYTHON_VERSION }}
-        uses: actions/setup-python@39cd14951b08e74b54015e9e001cdefcf80e669f # 5.1.1
+        uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # 5.2.0
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip' # caching pip dependencies
@@ -118,7 +118,7 @@ jobs:
       - name: Upload log files
         if: always() # do this even if a step before has failed
-        uses: actions/upload-artifact@0b2256b8c012f0828dc542b3febcab082c67f72b # 4.3.4
+        uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # 4.3.6
        with:
          name: logs
          path: |


@@ -2,10 +2,19 @@
 extends: default
 rules:
+  comments:
+    min-spaces-from-content: 1
+  comments-indentation: false
+  braces:
+    max-spaces-inside: 1
+  octal-values:
+    forbid-implicit-octal: true
+    forbid-explicit-octal: true
   line-length:
     max: 120
     level: warning
   truthy:
-    allowed-values: ['true', 'false']
+    allowed-values: ["true", "false"]
 ignore:
   - galaxy.yml
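The new octal-values rule is what drives the many mode changes later in this diff: YAML 1.1 parsers read a bare 0644 as the octal integer 420, so permissions have to be quoted to reach the module as a literal string. For example:

# rejected by forbid-implicit-octal / forbid-explicit-octal
mode: 0644
# accepted: passed to the module as a string, unambiguously
mode: "0644"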


@@ -5,25 +5,25 @@ ansible_user: ansibleuser
 systemd_dir: /etc/systemd/system
 # Set your timezone
-system_timezone: "Your/Timezone"
+system_timezone: Your/Timezone
 # interface which will be used for flannel
-flannel_iface: "eth0"
+flannel_iface: eth0
 # uncomment calico_iface to use tigera operator/calico cni instead of flannel https://docs.tigera.io/calico/latest/about
 # calico_iface: "eth0"
 calico_ebpf: false # use eBPF dataplane instead of iptables
-calico_tag: "v3.28.0" # calico version tag
+calico_tag: v3.28.0 # calico version tag
 # uncomment cilium_iface to use cilium cni instead of flannel or calico
 # ensure v4.19.57, v5.1.16, v5.2.0 or more recent kernel
 # cilium_iface: "eth0"
-cilium_mode: "native" # native when nodes on same subnet or using bgp, else set routed
-cilium_tag: "v1.16.0" # cilium version tag
+cilium_mode: native # native when nodes on same subnet or using bgp, else set routed
+cilium_tag: v1.16.0 # cilium version tag
 cilium_hubble: true # enable hubble observability relay and ui
 # if using calico or cilium, you may specify the cluster pod cidr pool
-cluster_cidr: "10.52.0.0/16"
+cluster_cidr: 10.52.0.0/16
 # enable cilium bgp control plane for lb services and pod cidrs. disables metallb.
 cilium_bgp: false
@@ -31,15 +31,27 @@ cilium_bgp: false
 # bgp parameters for cilium cni. only active when cilium_iface is defined and cilium_bgp is true.
 cilium_bgp_my_asn: "64513"
 cilium_bgp_peer_asn: "64512"
-cilium_bgp_peer_address: "192.168.30.1"
-cilium_bgp_lb_cidr: "192.168.31.0/24" # cidr for cilium loadbalancer ipam
+cilium_bgp_peer_address: 192.168.30.1
+cilium_bgp_lb_cidr: 192.168.31.0/24 # cidr for cilium loadbalancer ipam
+# enable kube-vip ARP broadcasts
+kube_vip_arp: true
+# enable kube-vip BGP peering
+kube_vip_bgp: false
+# bgp parameters for kube-vip
+kube_vip_bgp_routerid: "127.0.0.1" # Defines the router ID for the BGP server
+kube_vip_bgp_as: "64513" # Defines the AS for the BGP server
+kube_vip_bgp_peeraddress: "192.168.30.1" # Defines the address for the BGP peer
+kube_vip_bgp_peeras: "64512" # Defines the AS for the BGP peer
 # apiserver_endpoint is virtual ip-address which will be configured on each master
-apiserver_endpoint: "192.168.30.222"
+apiserver_endpoint: 192.168.30.222
 # k3s_token is required masters can talk together securely
 # this token should be alpha numeric only
-k3s_token: "some-SUPER-DEDEUPER-secret-password"
+k3s_token: some-SUPER-DEDEUPER-secret-password
 # The IP on which the node is reachable in the cluster.
 # Here, a sensible default is provided, you can still override
@@ -72,7 +84,7 @@ extra_agent_args: >-
   {{ extra_args }}
 # image tag for kube-vip
-kube_vip_tag_version: "v0.8.2"
+kube_vip_tag_version: v0.8.2
 # tag for kube-vip-cloud-provider manifest
 # kube_vip_cloud_provider_tag_version: "main"
@@ -82,10 +94,10 @@ kube_vip_tag_version: "v0.8.2"
 # kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"
 # metallb type frr or native
-metal_lb_type: "native"
+metal_lb_type: native
 # metallb mode layer2 or bgp
-metal_lb_mode: "layer2"
+metal_lb_mode: layer2
 # bgp options
 # metal_lb_bgp_my_asn: "64513"
@@ -93,11 +105,11 @@ metal_lb_mode: "layer2"
 # metal_lb_bgp_peer_address: "192.168.30.1"
 # image tag for metal lb
-metal_lb_speaker_tag_version: "v0.14.8"
-metal_lb_controller_tag_version: "v0.14.8"
+metal_lb_speaker_tag_version: v0.14.8
+metal_lb_controller_tag_version: v0.14.8
 # metallb ip range for load balancer
-metal_lb_ip_range: "192.168.30.80-192.168.30.90"
+metal_lb_ip_range: 192.168.30.80-192.168.30.90
 # Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
 # in your hosts.ini file.


@@ -1,2 +1,2 @@
 ---
-ansible_user: '{{ proxmox_lxc_ssh_user }}'
+ansible_user: "{{ proxmox_lxc_ssh_user }}"


@@ -11,8 +11,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
     groups:
       - k3s_cluster
       - master


@@ -12,5 +12,5 @@
 retry_count: 45
 # Make sure that our IP ranges do not collide with those of the other scenarios
-apiserver_endpoint: "192.168.30.224"
-metal_lb_ip_range: "192.168.30.100-192.168.30.109"
+apiserver_endpoint: 192.168.30.224
+metal_lb_ip_range: 192.168.30.100-192.168.30.109


@@ -11,8 +11,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
     groups:
       - k3s_cluster
       - master


@@ -12,5 +12,5 @@
 retry_count: 45
 # Make sure that our IP ranges do not collide with those of the other scenarios
-apiserver_endpoint: "192.168.30.225"
-metal_lb_ip_range: "192.168.30.110-192.168.30.119"
+apiserver_endpoint: 192.168.30.225
+metal_lb_ip_range: 192.168.30.110-192.168.30.119


@@ -4,7 +4,6 @@ dependency:
 driver:
   name: vagrant
 platforms:
-
   - name: control1
     box: generic/ubuntu2204
     memory: 1024
@@ -18,8 +17,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
   - name: control2
     box: generic/debian12
@@ -56,8 +55,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
   - name: node2
     box: generic/rocky9


@@ -17,8 +17,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
   - name: control2
     box: generic/ubuntu2204
@@ -33,8 +33,8 @@
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
   - name: node1
     box: generic/ubuntu2204
@@ -49,8 +49,8 @@
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
 provisioner:
   name: ansible
   env:


@@ -38,7 +38,7 @@
     dest: /etc/netplan/55-flannel-ipv4.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   register: netplan_template
 - name: Apply netplan configuration


@@ -11,8 +11,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
     groups:
       - k3s_cluster
       - master


@@ -12,6 +12,6 @@
 retry_count: 45
 # Make sure that our IP ranges do not collide with those of the other scenarios
-apiserver_endpoint: "192.168.30.225"
+apiserver_endpoint: 192.168.30.225
 # Use kube-vip instead of MetalLB
-kube_vip_lb_ip_range: "192.168.30.110-192.168.30.119"
+kube_vip_lb_ip_range: 192.168.30.110-192.168.30.119


@@ -27,7 +27,7 @@
     name: nginx
     namespace: "{{ testing_namespace }}"
     kubeconfig: "{{ kubecfg_path }}"
-  vars: &load_balancer_metadata
+  vars:
     metallb_ip: status.loadBalancer.ingress[0].ip
     metallb_port: spec.ports[0].port
   register: nginx_services


@@ -9,7 +9,7 @@
   ansible.builtin.assert:
     that: found_nodes == expected_nodes
     success_msg: "Found nodes as expected: {{ found_nodes }}"
-    fail_msg: "Expected nodes {{ expected_nodes }}, but found nodes {{ found_nodes }}"
+    fail_msg: Expected nodes {{ expected_nodes }}, but found nodes {{ found_nodes }}
   vars:
     found_nodes: >-
       {{ cluster_nodes | json_query('resources[*].metadata.name') | unique | sort }}


@@ -11,8 +11,8 @@ platforms:
     config_options:
       # We currently can not use public-key based authentication on Ubuntu 22.04,
       # see: https://github.com/chef/bento/issues/1405
-      ssh.username: "vagrant"
-      ssh.password: "vagrant"
+      ssh.username: vagrant
+      ssh.password: vagrant
     groups:
       - k3s_cluster
       - master


@@ -12,5 +12,5 @@
 retry_count: 45
 # Make sure that our IP ranges do not collide with those of the default scenario
-apiserver_endpoint: "192.168.30.223"
-metal_lb_ip_range: "192.168.30.91-192.168.30.99"
+apiserver_endpoint: 192.168.30.223
+metal_lb_ip_range: 192.168.30.91-192.168.30.99


@@ -5,6 +5,6 @@
   tasks:
     - name: Reboot the nodes (and Wait upto 5 mins max)
       become: true
-      reboot:
+      ansible.builtin.reboot:
         reboot_command: "{{ custom_reboot_command | default(omit) }}"
         reboot_timeout: 300


@@ -6,7 +6,7 @@
 #
 ansible-compat==4.1.11
     # via molecule
-ansible-core==2.17.2
+ansible-core==2.17.3
     # via
     #   -r requirements.in
     #   ansible-compat
@@ -114,7 +114,7 @@ python-dateutil==2.8.2
     # via kubernetes
 python-vagrant==1.0.0
     # via molecule-plugins
-pyyaml==6.0.1
+pyyaml==6.0.2
     # via
     #   -r requirements.in
     #   ansible-compat


@@ -7,11 +7,11 @@
       become: true
     - role: raspberrypi
       become: true
-      vars: {state: absent}
+      vars: { state: absent }
   post_tasks:
     - name: Reboot and wait for node to come back up
       become: true
-      reboot:
+      ansible.builtin.reboot:
         reboot_command: "{{ custom_reboot_command | default(omit) }}"
         reboot_timeout: 3600


@@ -0,0 +1,8 @@
+---
+argument_specs:
+  main:
+    short_description: Manage the downloading of K3S binaries
+    options:
+      k3s_version:
+        description: The desired version of K3S
+        required: true


@@ -1,36 +1,34 @@
 ---
 - name: Download k3s binary x64
-  get_url:
+  ansible.builtin.get_url:
     url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s
     checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt
     dest: /usr/local/bin/k3s
     owner: root
     group: root
-    mode: 0755
+    mode: "0755"
   when: ansible_facts.architecture == "x86_64"

 - name: Download k3s binary arm64
-  get_url:
+  ansible.builtin.get_url:
     url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s-arm64
     checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-arm64.txt
     dest: /usr/local/bin/k3s
     owner: root
     group: root
-    mode: 0755
+    mode: "0755"
   when:
-    - ( ansible_facts.architecture is search("arm") and
-      ansible_facts.userspace_bits == "64" ) or
-      ansible_facts.architecture is search("aarch64")
+    - ( ansible_facts.architecture is search("arm") and ansible_facts.userspace_bits == "64" )
+      or ansible_facts.architecture is search("aarch64")

 - name: Download k3s binary armhf
-  get_url:
+  ansible.builtin.get_url:
     url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s-armhf
     checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-arm.txt
     dest: /usr/local/bin/k3s
     owner: root
     group: root
-    mode: 0755
+    mode: "0755"
   when:
     - ansible_facts.architecture is search("arm")
     - ansible_facts.userspace_bits == "32"


@@ -0,0 +1,4 @@
+---
+extra_agent_args: ""
+group_name_master: master
+systemd_dir: /etc/systemd/system


@@ -0,0 +1,34 @@
+---
+argument_specs:
+  main:
+    short_description: Setup k3s agents
+    options:
+      apiserver_endpoint:
+        description: Virtual ip-address configured on each master
+        required: true
+      extra_agent_args:
+        description: Extra arguments for agents nodes
+      group_name_master:
+        description: Name of the master group
+        default: master
+      k3s_token:
+        description: Token used to communicate between masters
+      proxy_env:
+        type: dict
+        description: Internet proxy configurations
+        default: ~
+        options:
+          HTTP_PROXY:
+            required: true
+          HTTPS_PROXY:
+            required: true
+          NO_PROXY:
+            required: true
+      systemd_dir:
+        description: Path to systemd services
+        default: /etc/systemd/system


@@ -1,18 +1,18 @@
 ---
 - name: Create k3s-node.service.d directory
-  file:
-    path: '{{ systemd_dir }}/k3s-node.service.d'
+  ansible.builtin.file:
+    path: "{{ systemd_dir }}/k3s-node.service.d"
     state: directory
     owner: root
     group: root
-    mode: '0755'
+    mode: "0755"
   when: proxy_env is defined
 - name: Copy K3s http_proxy conf file
-  template:
-    src: "http_proxy.conf.j2"
+  ansible.builtin.template:
+    src: http_proxy.conf.j2
     dest: "{{ systemd_dir }}/k3s-node.service.d/http_proxy.conf"
     owner: root
     group: root
-    mode: '0755'
+    mode: "0755"
   when: proxy_env is defined


@@ -17,16 +17,16 @@
     ansible.builtin.include_tasks: http_proxy.yml
 - name: Deploy K3s http_proxy conf
-  include_tasks: http_proxy.yml
+  ansible.builtin.include_tasks: http_proxy.yml
   when: proxy_env is defined
 - name: Configure the k3s service
   ansible.builtin.template:
-    src: "k3s.service.j2"
+    src: k3s.service.j2
     dest: "{{ systemd_dir }}/k3s-node.service"
     owner: root
     group: root
-    mode: '0755'
+    mode: "0755"
 - name: Manage k3s service
   ansible.builtin.systemd:


@@ -12,7 +12,7 @@ ExecStart=/usr/local/bin/k3s agent \
     --server https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 \
     {% if is_pxe_booted | default(false) %}--snapshotter native \
     {% endif %}--token {{ hostvars[groups[group_name_master | default('master')][0]]['token'] | default(k3s_token) }} \
-    {{ extra_agent_args | default("") }}
+    {{ extra_agent_args }}
 KillMode=process
 Delegate=yes
 LimitNOFILE=1048576


@@ -1,6 +0,0 @@
----
-# Indicates whether custom registries for k3s should be configured
-# Possible values:
-# - present
-# - absent
-state: present


@@ -0,0 +1,20 @@
+---
+argument_specs:
+  main:
+    short_description: Configure the use of a custom container registry
+    options:
+      custom_registries_yaml:
+        description:
+          - YAML block defining custom registries.
+          - >
+            The following is an example that pulls all images used in
+            this playbook through your private registries.
+          - >
+            It also allows you to pull your own images from your private
+            registry, without having to use imagePullSecrets in your
+            deployments.
+          - >
+            If all you need is your own images and you don't care about
+            caching the docker/quay/ghcr.io images, you can just remove
+            those from the mirrors: section.
+        required: true
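The description above refers to an example; a hypothetical value satisfying it could look like the following (registry host, port, and credentials are placeholders):

custom_registries_yaml: |
  mirrors:
    docker.io:
      endpoint:
        - "https://registry.example.com:5000"
  configs:
    "registry.example.com:5000":
      auth:
        username: exampleuser
        password: examplepassword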


@@ -1,17 +1,16 @@
 ---
 - name: Create directory /etc/rancher/k3s
-  file:
-    path: "/etc/{{ item }}"
+  ansible.builtin.file:
+    path: /etc/{{ item }}
     state: directory
-    mode: '0755'
+    mode: "0755"
   loop:
     - rancher
     - rancher/k3s
 - name: Insert registries into /etc/rancher/k3s/registries.yaml
-  blockinfile:
+  ansible.builtin.blockinfile:
     path: /etc/rancher/k3s/registries.yaml
     block: "{{ custom_registries_yaml }}"
-    mode: '0600'
+    mode: "0600"
     create: true


@@ -1,15 +1,27 @@
 ---
-# If you want to explicitly define an interface that ALL control nodes
-# should use to propagate the VIP, define it here. Otherwise, kube-vip
-# will determine the right interface automatically at runtime.
-kube_vip_iface: null
-# Enables ARP broadcasts from Leader
-kube_vip_arp: true
-# Name of the master group
+extra_server_args: ""
+k3s_kubectl_binary: k3s kubectl
 group_name_master: master
+kube_vip_arp: true
+kube_vip_iface:
+kube_vip_cloud_provider_tag_version: main
+kube_vip_tag_version: v0.7.2
+kube_vip_bgp: false
+kube_vip_bgp_routerid: 127.0.0.1
+kube_vip_bgp_as: "64513"
+kube_vip_bgp_peeraddress: 192.168.30.1
+kube_vip_bgp_peeras: "64512"
+metal_lb_controller_tag_version: v0.14.3
+metal_lb_speaker_tag_version: v0.14.3
+metal_lb_type: native
+retry_count: 20
 # yamllint disable rule:line-length
 server_init_args: >-
   {% if groups[group_name_master | default('master')] | length > 1 %}
@@ -20,4 +32,6 @@ server_init_args: >-
   {% endif %}
   --token {{ k3s_token }}
   {% endif %}
-  {{ extra_server_args | default('') }}
+  {{ extra_server_args }}
+systemd_dir: /etc/systemd/system


@@ -0,0 +1,121 @@
+---
+argument_specs:
+  main:
+    short_description: Setup k3s servers
+    options:
+      apiserver_endpoint:
+        description: Virtual ip-address configured on each master
+        required: true
+      cilium_bgp:
+        description:
+          - Enable cilium BGP control plane for LB services and pod cidrs.
+          - Disables the use of MetalLB.
+        type: bool
+        default: ~
+      cilium_iface:
+        description: The network interface used for when Cilium is enabled
+        default: ~
+      extra_server_args:
+        description: Extra arguments for server nodes
+        default: ""
+      group_name_master:
+        description: Name of the master group
+        default: master
+      k3s_create_kubectl_symlink:
+        description: Create the kubectl -> k3s symlink
+        default: false
+        type: bool
+      k3s_create_crictl_symlink:
+        description: Create the crictl -> k3s symlink
+        default: false
+        type: bool
+      kube_vip_arp:
+        description: Enables kube-vip ARP broadcasts
+        default: true
+        type: bool
+      kube_vip_bgp:
+        description: Enables kube-vip BGP peering
+        default: false
+        type: bool
+      kube_vip_bgp_routerid:
+        description: Defines the router ID for the kube-vip BGP server
+        default: "127.0.0.1"
+      kube_vip_bgp_as:
+        description: Defines the AS for the kube-vip BGP server
+        default: "64513"
+      kube_vip_bgp_peeraddress:
+        description: Defines the address for the kube-vip BGP peer
+        default: "192.168.30.1"
+      kube_vip_bgp_peeras:
+        description: Defines the AS for the kube-vip BGP peer
+        default: "64512"
+      kube_vip_iface:
+        description:
+          - Explicitly define an interface that ALL control nodes
+          - should use to propagate the VIP, define it here.
+          - Otherwise, kube-vip will determine the right interface
+          - automatically at runtime.
+        default: ~
+      kube_vip_tag_version:
+        description: Image tag for kube-vip
+        default: v0.7.2
+      kube_vip_cloud_provider_tag_version:
+        description: Tag for kube-vip-cloud-provider manifest when enabled
+        default: main
+      kube_vip_lb_ip_range:
+        description: IP range for kube-vip load balancer
+        default: ~
+      metal_lb_controller_tag_version:
+        description: Image tag for MetalLB
+        default: v0.14.3
+      metal_lb_speaker_tag_version:
+        description: Image tag for MetalLB
+        default: v0.14.3
+      metal_lb_type:
+        choices:
+          - frr
+          - native
+        default: native
+      proxy_env:
+        type: dict
+        description: Internet proxy configurations
+        default: ~
+        options:
+          HTTP_PROXY:
+            required: true
+          HTTPS_PROXY:
+            required: true
+          NO_PROXY:
+            required: true
+      retry_count:
+        description: Amount of retries when verifying that nodes joined
+        type: int
+        default: 20
+      server_init_args:
+        description: Arguments for server nodes
+      systemd_dir:
+        description: Path to systemd services
+        default: /etc/systemd/system


@@ -23,6 +23,6 @@
   ansible.builtin.template:
     src: content.j2
     dest: "{{ log_destination }}/k3s-init@{{ ansible_hostname }}.log"
-    mode: 0644
+    mode: "0644"
   vars:
     content: "{{ k3s_init_log.stdout }}"


@@ -1,18 +1,16 @@
 ---
 - name: Create k3s.service.d directory
-  file:
-    path: '{{ systemd_dir }}/k3s.service.d'
+  ansible.builtin.file:
+    path: "{{ systemd_dir }}/k3s.service.d"
     state: directory
     owner: root
     group: root
-    mode: '0755'
+    mode: "0755"
 - name: Copy K3s http_proxy conf file
-  template:
-    src: "http_proxy.conf.j2"
+  ansible.builtin.template:
+    src: http_proxy.conf.j2
     dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
     owner: root
     group: root
-    mode: '0755'
+    mode: "0755"


@@ -1,27 +1,27 @@
 ---
 - name: Create manifests directory on first master
-  file:
+  ansible.builtin.file:
     path: /var/lib/rancher/k3s/server/manifests
     state: directory
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: Download vip cloud provider manifest to first master
   ansible.builtin.get_url:
-    url: "https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/{{ kube_vip_cloud_provider_tag_version | default('main') }}/manifest/kube-vip-cloud-controller.yaml" # noqa yaml[line-length]
-    dest: "/var/lib/rancher/k3s/server/manifests/kube-vip-cloud-controller.yaml"
+    url: https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/{{ kube_vip_cloud_provider_tag_version | default('main') }}/manifest/kube-vip-cloud-controller.yaml # noqa yaml[line-length]
+    dest: /var/lib/rancher/k3s/server/manifests/kube-vip-cloud-controller.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: Copy kubevip configMap manifest to first master
-  template:
-    src: "kubevip.yaml.j2"
-    dest: "/var/lib/rancher/k3s/server/manifests/kubevip.yaml"
+  ansible.builtin.template:
+    src: kubevip.yaml.j2
+    dest: /var/lib/rancher/k3s/server/manifests/kubevip.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']


@@ -1,55 +1,50 @@
 ---
 - name: Stop k3s-init
-  systemd:
+  ansible.builtin.systemd:
     name: k3s-init
     state: stopped
   failed_when: false
 # k3s-init won't work if the port is already in use
 - name: Stop k3s
-  systemd:
+  ansible.builtin.systemd:
     name: k3s
     state: stopped
   failed_when: false
 - name: Clean previous runs of k3s-init # noqa command-instead-of-module
   # The systemd module does not support "reset-failed", so we need to resort to command.
-  command: systemctl reset-failed k3s-init
+  ansible.builtin.command: systemctl reset-failed k3s-init
   failed_when: false
   changed_when: false
 - name: Deploy K3s http_proxy conf
-  include_tasks: http_proxy.yml
+  ansible.builtin.include_tasks: http_proxy.yml
   when: proxy_env is defined
 - name: Deploy vip manifest
-  include_tasks: vip.yml
+  ansible.builtin.include_tasks: vip.yml
 - name: Deploy metallb manifest
-  include_tasks: metallb.yml
+  ansible.builtin.include_tasks: metallb.yml
   tags: metallb
   when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
 - name: Deploy kube-vip manifest
-  include_tasks: kube-vip.yml
+  ansible.builtin.include_tasks: kube-vip.yml
   tags: kubevip
   when: kube_vip_lb_ip_range is defined
 - name: Init cluster inside the transient k3s-init service
-  command:
-    cmd: "systemd-run -p RestartSec=2 \
-      -p Restart=on-failure \
-      --unit=k3s-init \
-      k3s server {{ server_init_args }}"
+  ansible.builtin.command:
+    cmd: systemd-run -p RestartSec=2 -p Restart=on-failure --unit=k3s-init k3s server {{ server_init_args }}
     creates: "{{ systemd_dir }}/k3s-init.service"
 - name: Verification
   when: not ansible_check_mode
   block:
     - name: Verify that all nodes actually joined (check k3s-init.service if this fails)
-      command:
-        cmd: k3s kubectl get nodes -l "node-role.kubernetes.io/master=true" -o=jsonpath="{.items[*].metadata.name}"
+      ansible.builtin.command:
+        cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} get nodes -l 'node-role.kubernetes.io/master=true' -o=jsonpath='{.items[*].metadata.name}'" # yamllint disable-line rule:line-length
       register: nodes
       until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups[group_name_master | default('master')] | length) # yamllint disable-line rule:line-length
       retries: "{{ retry_count | default(20) }}"
@@ -57,116 +52,118 @@
       changed_when: false
   always:
     - name: Save logs of k3s-init.service
-      include_tasks: fetch_k3s_init_logs.yml
+      ansible.builtin.include_tasks: fetch_k3s_init_logs.yml
       when: log_destination
       vars:
         log_destination: >-
           {{ lookup('ansible.builtin.env', 'ANSIBLE_K3S_LOG_DIR', default=False) }}
     - name: Kill the temporary service used for initialization
-      systemd:
+      ansible.builtin.systemd:
        name: k3s-init
        state: stopped
      failed_when: false
 - name: Copy K3s service file
   register: k3s_service
-  template:
-    src: "k3s.service.j2"
+  ansible.builtin.template:
+    src: k3s.service.j2
     dest: "{{ systemd_dir }}/k3s.service"
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
 - name: Enable and check K3s service
-  systemd:
+  ansible.builtin.systemd:
     name: k3s
     daemon_reload: true
     state: restarted
     enabled: true
 - name: Wait for node-token
-  wait_for:
+  ansible.builtin.wait_for:
     path: /var/lib/rancher/k3s/server/node-token
 - name: Register node-token file access mode
-  stat:
+  ansible.builtin.stat:
     path: /var/lib/rancher/k3s/server
   register: p
 - name: Change file access node-token
-  file:
+  ansible.builtin.file:
     path: /var/lib/rancher/k3s/server
-    mode: "g+rx,o+rx"
+    mode: g+rx,o+rx
 - name: Read node-token from master
-  slurp:
+  ansible.builtin.slurp:
     src: /var/lib/rancher/k3s/server/node-token
   register: node_token
 - name: Store Master node-token
-  set_fact:
+  ansible.builtin.set_fact:
     token: "{{ node_token.content | b64decode | regex_replace('\n', '') }}"
 - name: Restore node-token file access
-  file:
+  ansible.builtin.file:
     path: /var/lib/rancher/k3s/server
     mode: "{{ p.stat.mode }}"
 - name: Create directory .kube
-  file:
+  ansible.builtin.file:
     path: "{{ ansible_user_dir }}/.kube"
     state: directory
     owner: "{{ ansible_user_id }}"
-    mode: "u=rwx,g=rx,o="
+    mode: u=rwx,g=rx,o=
 - name: Copy config file to user home directory
-  copy:
+  ansible.builtin.copy:
     src: /etc/rancher/k3s/k3s.yaml
     dest: "{{ ansible_user_dir }}/.kube/config"
     remote_src: true
     owner: "{{ ansible_user_id }}"
-    mode: "u=rw,g=,o="
+    mode: u=rw,g=,o=
 - name: Configure kubectl cluster to {{ endpoint_url }}
-  command: >-
-    k3s kubectl config set-cluster default
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} config set-cluster default
     --server={{ endpoint_url }}
     --kubeconfig {{ ansible_user_dir }}/.kube/config
   changed_when: true
   vars:
     endpoint_url: >-
       https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443
   # Deactivated linter rules:
   #   - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap
   #     would be undefined. This will not be the case during playbook execution.
   # noqa jinja[invalid]
 - name: Create kubectl symlink
-  file:
+  ansible.builtin.file:
     src: /usr/local/bin/k3s
     dest: /usr/local/bin/kubectl
     state: link
+  when: k3s_create_kubectl_symlink | default(true) | bool
 - name: Create crictl symlink
-  file:
+  ansible.builtin.file:
     src: /usr/local/bin/k3s
     dest: /usr/local/bin/crictl
     state: link
+  when: k3s_create_crictl_symlink | default(true) | bool
 - name: Get contents of manifests folder
-  find:
+  ansible.builtin.find:
     paths: /var/lib/rancher/k3s/server/manifests
     file_type: file
   register: k3s_server_manifests
 - name: Get sub dirs of manifests folder
-  find:
+  ansible.builtin.find:
     paths: /var/lib/rancher/k3s/server/manifests
     file_type: directory
   register: k3s_server_manifests_directories
 - name: Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start
-  file:
+  ansible.builtin.file:
     path: "{{ item.path }}"
     state: absent
   with_items:


@@ -1,30 +1,30 @@
 ---
 - name: Create manifests directory on first master
-  file:
+  ansible.builtin.file:
     path: /var/lib/rancher/k3s/server/manifests
     state: directory
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: "Download to first master: manifest for metallb-{{ metal_lb_type }}"
   ansible.builtin.get_url:
-    url: "https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{ metal_lb_type }}.yaml" # noqa yaml[line-length]
-    dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
+    url: https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{ metal_lb_type }}.yaml # noqa yaml[line-length]
+    dest: /var/lib/rancher/k3s/server/manifests/metallb-crds.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: Set image versions in manifest for metallb-{{ metal_lb_type }}
   ansible.builtin.replace:
-    path: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
+    path: /var/lib/rancher/k3s/server/manifests/metallb-crds.yaml
     regexp: "{{ item.change | ansible.builtin.regex_escape }}"
     replace: "{{ item.to }}"
   with_items:
-    - change: "metallb/speaker:{{ metal_lb_controller_tag_version }}"
-      to: "metallb/speaker:{{ metal_lb_speaker_tag_version }}"
+    - change: metallb/speaker:{{ metal_lb_controller_tag_version }}
+      to: metallb/speaker:{{ metal_lb_speaker_tag_version }}
   loop_control:
     label: "{{ item.change }} => {{ item.to }}"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']


@@ -1,27 +1,27 @@
 ---
 - name: Create manifests directory on first master
-  file:
+  ansible.builtin.file:
     path: /var/lib/rancher/k3s/server/manifests
     state: directory
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: Download vip rbac manifest to first master
   ansible.builtin.get_url:
-    url: "https://kube-vip.io/manifests/rbac.yaml"
-    dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
+    url: https://kube-vip.io/manifests/rbac.yaml
+    dest: /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
 - name: Copy vip manifest to first master
-  template:
-    src: "vip.yaml.j2"
-    dest: "/var/lib/rancher/k3s/server/manifests/vip.yaml"
+  ansible.builtin.template:
+    src: vip.yaml.j2
+    dest: /var/lib/rancher/k3s/server/manifests/vip.yaml
     owner: root
     group: root
-    mode: 0644
+    mode: "0644"
   when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']


@@ -27,7 +27,9 @@ spec:
             - manager
           env:
             - name: vip_arp
-              value: "{{ 'true' if kube_vip_arp | bool else 'false' }}"
+              value: "{{ 'true' if kube_vip_arp | default(true) | bool else 'false' }}"
+            - name: bgp_enable
+              value: "{{ 'true' if kube_vip_bgp | default(false) | bool else 'false' }}"
             - name: port
               value: "6443"
 {% if kube_vip_iface %}
@@ -54,6 +56,24 @@
               value: "2"
             - name: address
               value: {{ apiserver_endpoint }}
+{% if kube_vip_bgp | default(false) | bool %}
+{% if kube_vip_bgp_routerid is defined %}
+            - name: bgp_routerid
+              value: "{{ kube_vip_bgp_routerid }}"
+{% endif %}
+{% if kube_vip_bgp_as is defined %}
+            - name: bgp_as
+              value: "{{ kube_vip_bgp_as }}"
+{% endif %}
+{% if kube_vip_bgp_peeraddress is defined %}
+            - name: bgp_peeraddress
+              value: "{{ kube_vip_bgp_peeraddress }}"
+{% endif %}
+{% if kube_vip_bgp_peeras is defined %}
+            - name: bgp_peeras
+              value: "{{ kube_vip_bgp_peeras }}"
+{% endif %}
+{% endif %}
           image: ghcr.io/kube-vip/kube-vip:{{ kube_vip_tag_version }}
          imagePullPolicy: Always
          name: kube-vip


@@ -1,6 +1,30 @@
 ---
-# Timeout to wait for MetalLB services to come up
-metal_lb_available_timeout: 240s
-# Name of the master group
+k3s_kubectl_binary: k3s kubectl
+bpf_lb_algorithm: maglev
+bpf_lb_mode: hybrid
+calico_blockSize: 26 # noqa var-naming
+calico_ebpf: false
+calico_encapsulation: VXLANCrossSubnet
+calico_natOutgoing: Enabled # noqa var-naming
+calico_nodeSelector: all() # noqa var-naming
+calico_tag: v3.27.2
+cilium_bgp: false
+cilium_exportPodCIDR: true # noqa var-naming
+cilium_bgp_my_asn: 64513
+cilium_bgp_peer_asn: 64512
+cilium_bgp_lb_cidr: 192.168.31.0/24
+cilium_hubble: true
+cilium_mode: native
+cluster_cidr: 10.52.0.0/16
+enable_bpf_masquerade: true
+kube_proxy_replacement: true
 group_name_master: master
+metal_lb_mode: layer2
+metal_lb_available_timeout: 240s
+metal_lb_controller_tag_version: v0.14.3
+metal_lb_ip_range: 192.168.30.80-192.168.30.90


@@ -0,0 +1,145 @@
---
argument_specs:
main:
short_description: Configure k3s cluster
options:
apiserver_endpoint:
description: Virtual ip-address configured on each master
required: true
bpf_lb_algorithm:
description: BPF lb algorithm
default: maglev
bpf_lb_mode:
description: BPF lb mode
default: hybrid
calico_blockSize:
description: IP pool block size
type: int
default: 26
calico_ebpf:
description: Use eBPF dataplane instead of iptables
type: bool
default: false
calico_encapsulation:
description: IP pool encapsulation
default: VXLANCrossSubnet
calico_natOutgoing:
description: IP pool NAT outgoing
default: Enabled
calico_nodeSelector:
description: IP pool node selector
default: all()
calico_iface:
description: The network interface used for when Calico is enabled
default: ~
calico_tag:
description: Calico version tag
default: v3.27.2
cilium_bgp:
description:
- Enable cilium BGP control plane for LB services and pod cidrs.
- Disables the use of MetalLB.
type: bool
default: false
cilium_bgp_my_asn:
description: Local ASN for BGP peer
type: int
default: 64513
cilium_bgp_peer_asn:
description: BGP peer ASN
type: int
default: 64512
cilium_bgp_peer_address:
description: BGP peer address
default: ~
cilium_bgp_lb_cidr:
description: BGP load balancer IP range
default: 192.168.31.0/24
cilium_exportPodCIDR:
description: Export pod CIDR
type: bool
default: true
cilium_hubble:
description: Enable Cilium Hubble
type: bool
default: true
cilium_iface:
description: The network interface used for when Cilium is enabled
default: ~
cilium_mode:
description: Inter-node communication mode
default: native
choices:
- native
- routed
cluster_cidr:
description: Cluster-internal (pod) IP range
default: 10.52.0.0/16
enable_bpf_masquerade:
description: Use IP masquerading
type: bool
default: true
group_name_master:
description: Name of the master group
default: master
kube_proxy_replacement:
description: Replace the native kube-proxy with Cilium
type: bool
default: true
kube_vip_lb_ip_range:
description: IP range for kube-vip load balancer
default: ~
metal_lb_available_timeout:
description: Timeout to wait for MetalLB resources
default: 240s
metal_lb_ip_range:
description: MetalLB ip range for load balancer
default: 192.168.30.80-192.168.30.90
metal_lb_controller_tag_version:
description: Image tag for MetalLB
default: v0.14.3
metal_lb_mode:
description: MetalLB mode
default: layer2
choices:
- bgp
- layer2
metal_lb_bgp_my_asn:
description: Local BGP ASN
default: ~
metal_lb_bgp_peer_asn:
description: BGP peer ASN
default: ~
metal_lb_bgp_peer_address:
description: BGP peer address
default: ~
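Pulling several of these options together, a hedged example of enabling the Cilium BGP control plane from inventory. The option names come from the spec above; the interface, ASNs, and addresses are placeholders:

cilium_iface: eth0
cilium_bgp: true
cilium_bgp_my_asn: 64513
cilium_bgp_peer_asn: 64512
cilium_bgp_peer_address: 192.168.30.1
cilium_bgp_lb_cidr: 192.168.31.0/24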


@@ -4,48 +4,48 @@
   run_once: true
   block:
     - name: Create manifests directory on first master
-      file:
+      ansible.builtin.file:
         path: /tmp/k3s
         state: directory
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
     - name: "Download to first master: manifest for Tigera Operator and Calico CRDs"
       ansible.builtin.get_url:
-        url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml"
-        dest: "/tmp/k3s/tigera-operator.yaml"
+        url: https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml
+        dest: /tmp/k3s/tigera-operator.yaml
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
     - name: Copy Calico custom resources manifest to first master
       ansible.builtin.template:
-        src: "calico.crs.j2"
+        src: calico.crs.j2
         dest: /tmp/k3s/custom-resources.yaml
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
     - name: Deploy or replace Tigera Operator
       block:
        - name: Deploy Tigera Operator
          ansible.builtin.command:
-            cmd: kubectl create -f /tmp/k3s/tigera-operator.yaml
+            cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} create -f /tmp/k3s/tigera-operator.yaml"
          register: create_operator
          changed_when: "'created' in create_operator.stdout"
          failed_when: "'Error' in create_operator.stderr and 'already exists' not in create_operator.stderr"
       rescue:
        - name: Replace existing Tigera Operator
          ansible.builtin.command:
-            cmd: kubectl replace -f /tmp/k3s/tigera-operator.yaml
+            cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} replace -f /tmp/k3s/tigera-operator.yaml"
          register: replace_operator
          changed_when: "'replaced' in replace_operator.stdout"
          failed_when: "'Error' in replace_operator.stderr"
     - name: Wait for Tigera Operator resources
-      command: >-
-        k3s kubectl wait {{ item.type }}/{{ item.name }}
+      ansible.builtin.command: >-
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }}
         --namespace='tigera-operator'
         --for=condition=Available=True
         --timeout=30s
@@ -55,7 +55,7 @@
       retries: 7
       delay: 7
       with_items:
-        - {name: tigera-operator, type: deployment}
+        - { name: tigera-operator, type: deployment }
       loop_control:
         label: "{{ item.type }}/{{ item.name }}"
@@ -63,27 +63,27 @@
       block:
        - name: Deploy custom resources for Calico
          ansible.builtin.command:
-            cmd: kubectl create -f /tmp/k3s/custom-resources.yaml
+            cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} create -f /tmp/k3s/custom-resources.yaml"
          register: create_cr
          changed_when: "'created' in create_cr.stdout"
          failed_when: "'Error' in create_cr.stderr and 'already exists' not in create_cr.stderr"
       rescue:
        - name: Apply new Calico custom resource manifest
          ansible.builtin.command:
-            cmd: kubectl apply -f /tmp/k3s/custom-resources.yaml
+            cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/custom-resources.yaml"
          register: apply_cr
          changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
          failed_when: "'Error' in apply_cr.stderr"
     - name: Wait for Calico system resources to be available
-      command: >-
+      ansible.builtin.command: >-
         {% if item.type == 'daemonset' %}
-        k3s kubectl wait pods
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} wait pods
         --namespace='{{ item.namespace }}'
         --selector={{ item.selector }}
         --for=condition=Ready
         {% else %}
-        k3s kubectl wait {{ item.type }}/{{ item.name }}
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }}
         --namespace='{{ item.namespace }}'
         --for=condition=Available
         {% endif %}
@@ -94,18 +94,24 @@
       retries: 30
       delay: 7
       with_items:
-        - {name: calico-typha, type: deployment, namespace: calico-system}
-        - {name: calico-kube-controllers, type: deployment, namespace: calico-system}
-        - {name: csi-node-driver, type: daemonset, selector: 'k8s-app=csi-node-driver', namespace: calico-system}
-        - {name: calico-node, type: daemonset, selector: 'k8s-app=calico-node', namespace: calico-system}
-        - {name: calico-apiserver, type: deployment, namespace: calico-apiserver}
+        - { name: calico-typha, type: deployment, namespace: calico-system }
+        - { name: calico-kube-controllers, type: deployment, namespace: calico-system }
+        - name: csi-node-driver
+          type: daemonset
+          selector: k8s-app=csi-node-driver
+          namespace: calico-system
+        - name: calico-node
+          type: daemonset
+          selector: k8s-app=calico-node
+          namespace: calico-system
+        - { name: calico-apiserver, type: deployment, namespace: calico-apiserver }
       loop_control:
         label: "{{ item.type }}/{{ item.name }}"
     - name: Patch Felix configuration for eBPF mode
       ansible.builtin.command:
         cmd: >
-          kubectl patch felixconfiguration default
+          {{ k3s_kubectl_binary | default('k3s kubectl') }} patch felixconfiguration default
           --type='merge'
           --patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}'
       register: patch_result
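A quick way to confirm what the k3s_kubectl_binary fallback resolves to on a given host is a throwaway debug task (not part of the role):

- name: Show which kubectl invocation the role will use
  ansible.builtin.debug:
    msg: "{{ k3s_kubectl_binary | default('k3s kubectl') }}"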


@@ -4,12 +4,12 @@
   run_once: true
   block:
     - name: Create tmp directory on first master
-      file:
+      ansible.builtin.file:
         path: /tmp/k3s
         state: directory
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
     - name: Check if Cilium CLI is installed
       ansible.builtin.command: cilium version
@@ -19,7 +19,7 @@
       ignore_errors: true
     - name: Check for Cilium CLI version in command output
-      set_fact:
+      ansible.builtin.set_fact:
         installed_cli_version: >-
           {{
             cilium_cli_installed.stdout_lines
@@ -32,11 +32,11 @@
     - name: Get latest stable Cilium CLI version file
       ansible.builtin.get_url:
-        url: "https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt"
-        dest: "/tmp/k3s/cilium-cli-stable.txt"
+        url: https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt
+        dest: /tmp/k3s/cilium-cli-stable.txt
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
     - name: Read Cilium CLI stable version from file
       ansible.builtin.command: cat /tmp/k3s/cilium-cli-stable.txt
@@ -52,7 +52,7 @@
         msg: "Latest Cilium CLI version: {{ cli_ver.stdout }}"
     - name: Determine if Cilium CLI needs installation or update
-      set_fact:
+      ansible.builtin.set_fact:
         cilium_cli_needs_update: >-
           {{
             cilium_cli_installed.rc != 0 or
@@ -70,15 +70,15 @@
     - name: Download Cilium CLI and checksum
       ansible.builtin.get_url:
         url: "{{ cilium_base_url }}/cilium-linux-{{ cli_arch }}{{ item }}"
-        dest: "/tmp/k3s/cilium-linux-{{ cli_arch }}{{ item }}"
+        dest: /tmp/k3s/cilium-linux-{{ cli_arch }}{{ item }}
         owner: root
         group: root
-        mode: 0755
+        mode: "0755"
       loop:
-        - ".tar.gz"
-        - ".tar.gz.sha256sum"
+        - .tar.gz
+        - .tar.gz.sha256sum
       vars:
-        cilium_base_url: "https://github.com/cilium/cilium-cli/releases/download/{{ cli_ver.stdout }}"
+        cilium_base_url: https://github.com/cilium/cilium-cli/releases/download/{{ cli_ver.stdout }}
     - name: Verify the downloaded tarball
       ansible.builtin.shell: |
@@ -89,7 +89,7 @@
     - name: Extract Cilium CLI to /usr/local/bin
       ansible.builtin.unarchive:
-        src: "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
+        src: /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz
         dest: /usr/local/bin
         remote_src: true
@@ -98,8 +98,8 @@
         path: "{{ item }}"
         state: absent
       loop:
-        - "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
-        - "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz.sha256sum"
+        - /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz
+        - /tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz.sha256sum
     - name: Wait for connectivity to kube VIP
       ansible.builtin.command: ping -c 1 {{ apiserver_endpoint }}
@@ -112,11 +112,12 @@
     - name: Fail if kube VIP not reachable
       ansible.builtin.fail:
-        msg: "API endpoint {{ apiserver_endpoint }} is not reachable"
+        msg: API endpoint {{ apiserver_endpoint }} is not reachable
       when: ping_result.rc != 0
     - name: Test for existing Cilium install
-      ansible.builtin.command: k3s kubectl -n kube-system get daemonsets cilium
+      ansible.builtin.command: |
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} -n kube-system get daemonsets cilium
       register: cilium_installed
       failed_when: false
       changed_when: false
@@ -125,7 +126,6 @@
     - name: Check existing Cilium install
       when: cilium_installed.rc == 0
       block:
        - name: Check Cilium version
          ansible.builtin.command: cilium version
          register: cilium_version
@@ -134,7 +134,7 @@
          ignore_errors: true
        - name: Parse installed Cilium version
-          set_fact:
+          ansible.builtin.set_fact:
            installed_cilium_version: >-
              {{
                cilium_version.stdout_lines
@@ -145,7 +145,7 @@
              }}
        - name: Determine if Cilium needs update
-          set_fact:
+          ansible.builtin.set_fact:
            cilium_needs_update: >-
              {{ 'v' + installed_cilium_version != cilium_tag }}
@@ -172,17 +172,17 @@
         {% endif %}
         --helm-set k8sServiceHost="127.0.0.1"
         --helm-set k8sServicePort="6444"
-        --helm-set routingMode={{ cilium_mode | default("native") }}
+        --helm-set routingMode={{ cilium_mode }}
         --helm-set autoDirectNodeRoutes={{ "true" if cilium_mode == "native" else "false" }}
-        --helm-set kubeProxyReplacement={{ kube_proxy_replacement | default("true") }}
-        --helm-set bpf.masquerade={{ enable_bpf_masquerade | default("true") }}
+        --helm-set kubeProxyReplacement={{ kube_proxy_replacement }}
+        --helm-set bpf.masquerade={{ enable_bpf_masquerade }}
         --helm-set bgpControlPlane.enabled={{ cilium_bgp | default("false") }}
         --helm-set hubble.enabled={{ "true" if cilium_hubble else "false" }}
         --helm-set hubble.relay.enabled={{ "true" if cilium_hubble else "false" }}
         --helm-set hubble.ui.enabled={{ "true" if cilium_hubble else "false" }}
         {% if kube_proxy_replacement is not false %}
-        --helm-set bpf.loadBalancer.algorithm={{ bpf_lb_algorithm | default("maglev") }}
-        --helm-set bpf.loadBalancer.mode={{ bpf_lb_mode | default("hybrid") }}
+        --helm-set bpf.loadBalancer.algorithm={{ bpf_lb_algorithm }}
+        --helm-set bpf.loadBalancer.mode={{ bpf_lb_mode }}
         {% endif %}
       environment:
         KUBECONFIG: "{{ ansible_user_dir }}/.kube/config"
@@ -191,14 +191,14 @@
       when: cilium_installed.rc != 0 or cilium_needs_update
     - name: Wait for Cilium resources
-      command: >-
+      ansible.builtin.command: >-
         {% if item.type == 'daemonset' %}
-        k3s kubectl wait pods
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} wait pods
         --namespace=kube-system
         --selector='k8s-app=cilium'
         --for=condition=Ready
         {% else %}
-        k3s kubectl wait {{ item.type }}/{{ item.name }}
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.type }}/{{ item.name }}
         --namespace=kube-system
         --for=condition=Available
         {% endif %}
@@ -209,10 +209,10 @@
       retries: 30
       delay: 7
       with_items:
-        - {name: cilium-operator, type: deployment}
-        - {name: cilium, type: daemonset, selector: 'k8s-app=cilium'}
-        - {name: hubble-relay, type: deployment, check_hubble: true}
-        - {name: hubble-ui, type: deployment, check_hubble: true}
+        - { name: cilium-operator, type: deployment }
+        - { name: cilium, type: daemonset, selector: k8s-app=cilium }
+        - { name: hubble-relay, type: deployment, check_hubble: true }
+        - { name: hubble-ui, type: deployment, check_hubble: true }
       loop_control:
         label: "{{ item.type }}/{{ item.name }}"
       when: >-
@@ -221,18 +221,17 @@
     - name: Configure Cilium BGP
       when: cilium_bgp
       block:
        - name: Copy BGP manifests to first master
          ansible.builtin.template:
-            src: "cilium.crs.j2"
+            src: cilium.crs.j2
            dest: /tmp/k3s/cilium-bgp.yaml
            owner: root
            group: root
-            mode: 0755
+            mode: "0755"
        - name: Apply BGP manifests
          ansible.builtin.command:
-            cmd: kubectl apply -f /tmp/k3s/cilium-bgp.yaml
+            cmd: "{{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/cilium-bgp.yaml"
          register: apply_cr
          changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
          failed_when: "'is invalid' in apply_cr.stderr"
@@ -246,8 +245,8 @@
        - name: Test for BGP config resources
          ansible.builtin.command: "{{ item }}"
          loop:
-            - k3s kubectl get CiliumBGPPeeringPolicy.cilium.io
-            - k3s kubectl get CiliumLoadBalancerIPPool.cilium.io
+            - "{{ k3s_kubectl_binary | default('k3s kubectl') }} get CiliumBGPPeeringPolicy.cilium.io"
+            - "{{ k3s_kubectl_binary | default('k3s kubectl') }} get CiliumLoadBalancerIPPool.cilium.io"
          changed_when: false
          loop_control:
            label: "{{ item }}"


@@ -1,20 +1,20 @@
 ---
 - name: Deploy calico
-  include_tasks: calico.yml
+  ansible.builtin.include_tasks: calico.yml
   tags: calico
   when: calico_iface is defined and cilium_iface is not defined
 - name: Deploy cilium
-  include_tasks: cilium.yml
+  ansible.builtin.include_tasks: cilium.yml
   tags: cilium
   when: cilium_iface is defined
 - name: Deploy metallb pool
-  include_tasks: metallb.yml
+  ansible.builtin.include_tasks: metallb.yml
   tags: metallb
   when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
 - name: Remove tmp directory used for manifests
-  file:
+  ansible.builtin.file:
     path: /tmp/k3s
     state: absent


@@ -1,25 +1,25 @@
 ---
 - name: Create manifests directory for temp configuration
-  file:
+  ansible.builtin.file:
     path: /tmp/k3s
     state: directory
     owner: "{{ ansible_user_id }}"
-    mode: 0755
+    mode: "0755"
   with_items: "{{ groups[group_name_master | default('master')] }}"
   run_once: true
 - name: Delete outdated metallb replicas
-  shell: |-
+  ansible.builtin.shell: |-
     set -o pipefail
-    REPLICAS=$(k3s kubectl --namespace='metallb-system' get replicasets \
+    REPLICAS=$({{ k3s_kubectl_binary | default('k3s kubectl') }} --namespace='metallb-system' get replicasets \
       -l 'component=controller,app=metallb' \
       -o jsonpath='{.items[0].spec.template.spec.containers[0].image}, {.items[0].metadata.name}' 2>/dev/null || true)
     REPLICAS_SETS=$(echo ${REPLICAS} | grep -v '{{ metal_lb_controller_tag_version }}' | sed -e "s/^.*\s//g")
     if [ -n "${REPLICAS_SETS}" ] ; then
       for REPLICAS in "${REPLICAS_SETS}"
       do
-        k3s kubectl --namespace='metallb-system' \
+        {{ k3s_kubectl_binary | default('k3s kubectl') }} --namespace='metallb-system' \
          delete rs "${REPLICAS}"
       done
     fi
@@ -30,24 +30,24 @@
   with_items: "{{ groups[group_name_master | default('master')] }}"
 - name: Copy metallb CRs manifest to first master
-  template:
+  ansible.builtin.template:
-    src: "metallb.crs.j2"
-    dest: "/tmp/k3s/metallb-crs.yaml"
+    src: metallb.crs.j2
+    dest: /tmp/k3s/metallb-crs.yaml
     owner: "{{ ansible_user_id }}"
-    mode: 0755
+    mode: "0755"
   with_items: "{{ groups[group_name_master | default('master')] }}"
   run_once: true
 - name: Test metallb-system namespace
-  command: >-
-    k3s kubectl -n metallb-system
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system
   changed_when: false
   with_items: "{{ groups[group_name_master | default('master')] }}"
   run_once: true
 - name: Wait for MetalLB resources
-  command: >-
-    k3s kubectl wait {{ item.resource }}
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} wait {{ item.resource }}
     --namespace='metallb-system'
     {% if item.name | default(False) -%}{{ item.name }}{%- endif %}
     {% if item.selector | default(False) -%}--selector='{{ item.selector }}'{%- endif %}
@@ -84,7 +84,7 @@
     label: "{{ item.description }}"
 - name: Set metallb webhook service name
-  set_fact:
+  ansible.builtin.set_fact:
     metallb_webhook_service_name: >-
       {{
         (
@@ -98,15 +98,15 @@
       }}
 - name: Test metallb-system webhook-service endpoint
-  command: >-
-    k3s kubectl -n metallb-system get endpoints {{ metallb_webhook_service_name }}
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get endpoints {{ metallb_webhook_service_name }}
   changed_when: false
   with_items: "{{ groups[group_name_master | default('master')] }}"
   run_once: true
 - name: Apply metallb CRs
-  command: >-
-    k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} apply -f /tmp/k3s/metallb-crs.yaml
     --timeout='{{ metal_lb_available_timeout }}'
   register: this
   changed_when: false
@@ -115,8 +115,8 @@
   retries: 5
 - name: Test metallb-system resources for Layer 2 configuration
-  command: >-
-    k3s kubectl -n metallb-system get {{ item }}
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get {{ item }}
   changed_when: false
   run_once: true
   when: metal_lb_mode == "layer2"
@@ -125,8 +125,8 @@
     - L2Advertisement
 - name: Test metallb-system resources for BGP configuration
-  command: >-
-    k3s kubectl -n metallb-system get {{ item }}
+  ansible.builtin.command: >-
+    {{ k3s_kubectl_binary | default('k3s kubectl') }} -n metallb-system get {{ item }}
   changed_when: false
   run_once: true
   when: metal_lb_mode == "bgp"


@@ -9,11 +9,11 @@ spec:
   calicoNetwork:
     # Note: The ipPools section cannot be modified post-install.
     ipPools:
-      - blockSize: {{ calico_blockSize | default('26') }}
-        cidr: {{ cluster_cidr | default('10.52.0.0/16') }}
-        encapsulation: {{ calico_encapsulation | default('VXLANCrossSubnet') }}
-        natOutgoing: {{ calico_natOutgoing | default('Enabled') }}
-        nodeSelector: {{ calico_nodeSelector | default('all()') }}
+      - blockSize: {{ calico_blockSize }}
+        cidr: {{ cluster_cidr }}
+        encapsulation: {{ calico_encapsulation }}
+        natOutgoing: {{ calico_natOutgoing }}
+        nodeSelector: {{ calico_nodeSelector }}
     nodeAddressAutodetectionV4:
       interface: {{ calico_iface }}
     linuxDataplane: {{ 'BPF' if calico_ebpf else 'Iptables' }}
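With the role defaults shown earlier, this block renders roughly as follows; a sketch assuming calico_iface is set to the placeholder eth0:

  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.52.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
    nodeAddressAutodetectionV4:
      interface: eth0        # placeholder interface
    linuxDataplane: Iptables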


@@ -1,6 +1,6 @@
 ---
 - name: Reboot server
   become: true
-  reboot:
+  ansible.builtin.reboot:
     reboot_command: "{{ custom_reboot_command | default(omit) }}"
   listen: reboot server

roles/lxc/meta/main.yml (new file)

@@ -0,0 +1,7 @@
---
argument_specs:
main:
short_description: Configure LXC
options:
custom_reboot_command:
default: ~


@@ -1,20 +1,20 @@
 ---
 - name: Check for rc.local file
-  stat:
+  ansible.builtin.stat:
     path: /etc/rc.local
   register: rcfile
 - name: Create rc.local if needed
-  lineinfile:
+  ansible.builtin.lineinfile:
     path: /etc/rc.local
     line: "#!/bin/sh -e"
     create: true
     insertbefore: BOF
-    mode: "u=rwx,g=rx,o=rx"
+    mode: u=rwx,g=rx,o=rx
   when: not rcfile.stat.exists
 - name: Write rc.local file
-  blockinfile:
+  ansible.builtin.blockinfile:
     path: /etc/rc.local
     content: "{{ lookup('template', 'templates/rc.local.j2') }}"
     state: present


@@ -1,4 +1,4 @@
 ---
 secure_path:
-  RedHat: '/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
-  Suse: '/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin'
+  RedHat: /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
+  Suse: /usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin


@@ -0,0 +1,7 @@
---
argument_specs:
main:
short_description: Prerequisites
options:
system_timezone:
description: Timezone to be set on all nodes


@@ -34,10 +34,10 @@
   tags: sysctl
 - name: Add br_netfilter to /etc/modules-load.d/
-  copy:
+  ansible.builtin.copy:
-    content: "br_netfilter"
+    content: br_netfilter
     dest: /etc/modules-load.d/br_netfilter.conf
-    mode: "u=rw,g=,o="
+    mode: u=rw,g=,o=
   when: ansible_os_family == "RedHat"
 - name: Load br_netfilter
@@ -59,11 +59,11 @@
   tags: sysctl
 - name: Add /usr/local/bin to sudo secure_path
-  lineinfile:
+  ansible.builtin.lineinfile:
-    line: 'Defaults secure_path = {{ secure_path[ansible_os_family] }}'
-    regexp: "Defaults(\\s)*secure_path(\\s)*="
+    line: Defaults secure_path = {{ secure_path[ansible_os_family] }}
+    regexp: Defaults(\s)*secure_path(\s)*=
     state: present
     insertafter: EOF
     path: /etc/sudoers
-    validate: 'visudo -cf %s'
+    validate: visudo -cf %s
   when: ansible_os_family in [ "RedHat", "Suse" ]
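On a RedHat-family host, this task writes a single sudoers line built from the secure_path vars file above:

Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin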


@@ -2,12 +2,12 @@
 - name: Reboot containers
   block:
     - name: Get container ids from filtered files
-      set_fact:
+      ansible.builtin.set_fact:
         proxmox_lxc_filtered_ids: >-
           {{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}
       listen: reboot containers
     - name: Reboot container
-      command: "pct reboot {{ item }}"
+      ansible.builtin.command: pct reboot {{ item }}
       loop: "{{ proxmox_lxc_filtered_ids }}"
       changed_when: true
       listen: reboot containers


@@ -1,44 +1,43 @@
 ---
 - name: Check for container files that exist on this host
-  stat:
+  ansible.builtin.stat:
-    path: "/etc/pve/lxc/{{ item }}.conf"
+    path: /etc/pve/lxc/{{ item }}.conf
   loop: "{{ proxmox_lxc_ct_ids }}"
   register: stat_results
 - name: Filter out files that do not exist
-  set_fact:
+  ansible.builtin.set_fact:
-    proxmox_lxc_filtered_files:
-      '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
+    proxmox_lxc_filtered_files: '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}' # noqa yaml[line-length]
 # https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185
 - name: Ensure lxc config has the right apparmor profile
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.apparmor.profile"
+    regexp: ^lxc.apparmor.profile
     line: "lxc.apparmor.profile: unconfined"
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Ensure lxc config has the right cgroup
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.cgroup.devices.allow"
+    regexp: ^lxc.cgroup.devices.allow
     line: "lxc.cgroup.devices.allow: a"
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Ensure lxc config has the right cap drop
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.cap.drop"
+    regexp: ^lxc.cap.drop
     line: "lxc.cap.drop: "
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Ensure lxc config has the right mounts
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.mount.auto"
+    regexp: ^lxc.mount.auto
     line: 'lxc.mount.auto: "proc:rw sys:rw"'
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
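Taken together, these four tasks leave each matched /etc/pve/lxc/<id>.conf with the following lines (content taken directly from the tasks above):

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop: 
lxc.mount.auto: "proc:rw sys:rw"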


@@ -1,5 +1,5 @@
 ---
 - name: Reboot
-  reboot:
+  ansible.builtin.reboot:
     reboot_command: "{{ custom_reboot_command | default(omit) }}"
   listen: reboot


@@ -1,38 +1,37 @@
 ---
 - name: Test for raspberry pi /proc/cpuinfo
-  command: grep -E "Raspberry Pi|BCM2708|BCM2709|BCM2835|BCM2836" /proc/cpuinfo
+  ansible.builtin.command: grep -E "Raspberry Pi|BCM2708|BCM2709|BCM2835|BCM2836" /proc/cpuinfo
   register: grep_cpuinfo_raspberrypi
   failed_when: false
   changed_when: false
 - name: Test for raspberry pi /proc/device-tree/model
-  command: grep -E "Raspberry Pi" /proc/device-tree/model
+  ansible.builtin.command: grep -E "Raspberry Pi" /proc/device-tree/model
   register: grep_device_tree_model_raspberrypi
   failed_when: false
   changed_when: false
 - name: Set raspberry_pi fact to true
-  set_fact:
+  ansible.builtin.set_fact:
     raspberry_pi: true
-  when:
-    grep_cpuinfo_raspberrypi.rc == 0 or grep_device_tree_model_raspberrypi.rc == 0
+  when: grep_cpuinfo_raspberrypi.rc == 0 or grep_device_tree_model_raspberrypi.rc == 0
 - name: Set detected_distribution to Raspbian (ARM64 on Raspbian, Debian Buster/Bullseye/Bookworm)
-  set_fact:
+  ansible.builtin.set_fact:
     detected_distribution: Raspbian
   vars:
     allowed_descriptions:
       - "[Rr]aspbian.*"
-      - "Debian.*buster"
-      - "Debian.*bullseye"
-      - "Debian.*bookworm"
+      - Debian.*buster
+      - Debian.*bullseye
+      - Debian.*bookworm
   when:
     - ansible_facts.architecture is search("aarch64")
     - raspberry_pi|default(false)
     - ansible_facts.lsb.description|default("") is match(allowed_descriptions | join('|'))
 - name: Set detected_distribution to Raspbian (ARM64 on Debian Bookworm)
-  set_fact:
+  ansible.builtin.set_fact:
     detected_distribution: Raspbian
   when:
     - ansible_facts.architecture is search("aarch64")
@@ -40,13 +39,13 @@
     - ansible_facts.lsb.description|default("") is match("Debian.*bookworm")
 - name: Set detected_distribution_major_version
-  set_fact:
+  ansible.builtin.set_fact:
     detected_distribution_major_version: "{{ ansible_facts.lsb.major_release }}"
   when:
     - detected_distribution | default("") == "Raspbian"
 - name: Execute OS related tasks on the Raspberry Pi - {{ action_ }}
-  include_tasks: "{{ item }}"
+  ansible.builtin.include_tasks: "{{ item }}"
   with_first_found:
     - "{{ action_ }}/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml"
     - "{{ action_ }}/{{ detected_distribution }}.yml"


@@ -1,13 +1,13 @@
 ---
 - name: Test for cmdline path
-  stat:
+  ansible.builtin.stat:
     path: /boot/firmware/cmdline.txt
   register: boot_cmdline_path
   failed_when: false
   changed_when: false
 - name: Set cmdline path based on Debian version and command result
-  set_fact:
+  ansible.builtin.set_fact:
     cmdline_path: >-
       {{
         (
@@ -20,20 +20,20 @@
       }}
 - name: Activating cgroup support
-  lineinfile:
+  ansible.builtin.lineinfile:
     path: "{{ cmdline_path }}"
-    regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
-    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
+    regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$
+    line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
     backrefs: true
   notify: reboot
 - name: Install iptables
-  apt:
+  ansible.builtin.apt:
     name: iptables
     state: present
 - name: Flush iptables before changing to iptables-legacy
-  iptables:
+  ansible.builtin.iptables:
     flush: true
 - name: Changing to iptables-legacy


@@ -1,9 +1,9 @@
 ---
 - name: Enable cgroup via boot commandline if not already enabled for Rocky
-  lineinfile:
+  ansible.builtin.lineinfile:
     path: /boot/cmdline.txt
     backrefs: true
-    regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
-    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
+    regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$
+    line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
   notify: reboot
   when: not ansible_check_mode


@@ -1,13 +1,13 @@
 ---
 - name: Enable cgroup via boot commandline if not already enabled for Ubuntu on a Raspberry Pi
-  lineinfile:
+  ansible.builtin.lineinfile:
     path: /boot/firmware/cmdline.txt
     backrefs: true
-    regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
-    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
+    regexp: ^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$
+    line: \1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
   notify: reboot
 - name: Install linux-modules-extra-raspi
-  apt:
+  ansible.builtin.apt:
    name: linux-modules-extra-raspi
    state: present
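For reference, the lineinfile above appends the cgroup flags to the single kernel command line in cmdline.txt, so the file ends up looking something like this; everything before the flags here is a made-up example:

console=serial0,115200 root=/dev/mmcblk0p2 rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory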


@@ -1,5 +1,5 @@
 ---
 - name: Remove linux-modules-extra-raspi
-  apt:
+  ansible.builtin.apt:
     name: linux-modules-extra-raspi
     state: absent


@@ -0,0 +1,2 @@
---
systemd_dir: /etc/systemd/system


@@ -0,0 +1,8 @@
---
argument_specs:
main:
short_description: Reset all nodes
options:
systemd_dir:
description: Path to systemd services
default: /etc/systemd/system


@@ -1,6 +1,6 @@
 ---
 - name: Disable services
-  systemd:
+  ansible.builtin.systemd:
     name: "{{ item }}"
     state: stopped
     enabled: false
@@ -12,12 +12,12 @@
 - name: RUN pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
   register: pkill_containerd_shim_runc
-  command: pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
+  ansible.builtin.command: pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
-  changed_when: "pkill_containerd_shim_runc.rc == 0"
+  changed_when: pkill_containerd_shim_runc.rc == 0
   failed_when: false
 - name: Umount k3s filesystems
-  include_tasks: umount_with_children.yml
+  ansible.builtin.include_tasks: umount_with_children.yml
   with_items:
     - /run/k3s
     - /var/lib/kubelet
@@ -30,7 +30,7 @@
     loop_var: mounted_fs
 - name: Remove service files, binaries and data
-  file:
+  ansible.builtin.file:
     name: "{{ item }}"
     state: absent
   with_items:
@@ -48,7 +48,7 @@
     - /etc/cni/net.d
 - name: Remove K3s http_proxy files
-  file:
+  ansible.builtin.file:
     name: "{{ item }}"
     state: absent
   with_items:
@@ -59,22 +59,22 @@
   when: proxy_env is defined
 - name: Reload daemon_reload
-  systemd:
+  ansible.builtin.systemd:
     daemon_reload: true
 - name: Remove tmp directory used for manifests
-  file:
+  ansible.builtin.file:
     path: /tmp/k3s
     state: absent
 - name: Check if rc.local exists
-  stat:
+  ansible.builtin.stat:
     path: /etc/rc.local
   register: rcfile
 - name: Remove rc.local modifications for proxmox lxc containers
   become: true
-  blockinfile:
+  ansible.builtin.blockinfile:
     path: /etc/rc.local
     content: "{{ lookup('template', 'templates/rc.local.j2') }}"
     create: false
@@ -83,14 +83,14 @@
 - name: Check rc.local for cleanup
   become: true
-  slurp:
+  ansible.builtin.slurp:
     src: /etc/rc.local
   register: rcslurp
   when: proxmox_lxc_configure and rcfile.stat.exists
 - name: Cleanup rc.local if we only have a Shebang line
   become: true
-  file:
+  ansible.builtin.file:
     path: /etc/rc.local
     state: absent
   when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1
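These tasks are normally driven by the repo's reset playbook; a hedged invocation sketch, where the playbook and inventory paths are assumptions about a typical layout:

ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini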


@@ -1,6 +1,6 @@
 ---
 - name: Get the list of mounted filesystems
-  shell: set -o pipefail && cat /proc/mounts | awk '{ print $2}' | grep -E "^{{ mounted_fs }}"
+  ansible.builtin.shell: set -o pipefail && cat /proc/mounts | awk '{ print $2}' | grep -E "^{{ mounted_fs }}"
   register: get_mounted_filesystems
   args:
     executable: /bin/bash
@@ -12,5 +12,4 @@
   ansible.posix.mount:
     path: "{{ item }}"
     state: unmounted
-  with_items:
-    "{{ get_mounted_filesystems.stdout_lines | reverse | list }}"
+  with_items: "{{ get_mounted_filesystems.stdout_lines | reverse | list }}"


@@ -1,46 +1,45 @@
 ---
 - name: Check for container files that exist on this host
-  stat:
+  ansible.builtin.stat:
-    path: "/etc/pve/lxc/{{ item }}.conf"
+    path: /etc/pve/lxc/{{ item }}.conf
   loop: "{{ proxmox_lxc_ct_ids }}"
   register: stat_results
 - name: Filter out files that do not exist
-  set_fact:
+  ansible.builtin.set_fact:
-    proxmox_lxc_filtered_files:
-      '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
+    proxmox_lxc_filtered_files: '{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}' # noqa yaml[line-length]
 - name: Remove LXC apparmor profile
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.apparmor.profile"
+    regexp: ^lxc.apparmor.profile
     line: "lxc.apparmor.profile: unconfined"
     state: absent
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Remove lxc cgroups
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.cgroup.devices.allow"
+    regexp: ^lxc.cgroup.devices.allow
     line: "lxc.cgroup.devices.allow: a"
     state: absent
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Remove lxc cap drop
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.cap.drop"
+    regexp: ^lxc.cap.drop
     line: "lxc.cap.drop: "
     state: absent
   loop: "{{ proxmox_lxc_filtered_files }}"
   notify: reboot containers
 - name: Remove lxc mounts
-  lineinfile:
+  ansible.builtin.lineinfile:
     dest: "{{ item }}"
-    regexp: "^lxc.mount.auto"
+    regexp: ^lxc.mount.auto
     line: 'lxc.mount.auto: "proc:rw sys:rw"'
     state: absent
   loop: "{{ proxmox_lxc_filtered_files }}"


@@ -3,8 +3,8 @@
   hosts: all
   pre_tasks:
     - name: Verify Ansible is version 2.11 or above. (If this fails you may need to update Ansible)
-      assert:
+      ansible.builtin.assert:
-        that: "ansible_version.full is version_compare('2.11', '>=')"
+        that: ansible_version.full is version_compare('2.11', '>=')
         msg: >
           "Ansible is out of date. See here for more info: https://docs.technotim.live/posts/ansible-automation/"