Compare commits

66 Commits

Author SHA1 Message Date
Techno Tim
ea3b3c776a chore(deps) pre-commit updates (#438)
* chore(deps): Updated pre-commit

* fix(actions): cleaning up comments
2024-01-30 11:54:28 -06:00
dependabot[bot]
5beca87783 chore(deps): bump ansible-core from 2.16.2 to 2.16.3 (#436)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.16.2 to 2.16.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.16.2...v2.16.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 21:29:07 -06:00
sholdee
6ffc25dfe5 Add Cilium CNI option (#435)
* Add Cilium CNI option

* Tweak version checks and add BGP resource verify

* Update metallb detection for kube-vip feat compat
2024-01-29 19:29:13 -06:00
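For context, the Cilium option added in this commit is selected purely through inventory variables; a minimal sketch, with values taken from the sample group_vars diff further down this page:

```yml
# enable Cilium instead of flannel (values from the sample group_vars diff below)
cilium_iface: "eth0"          # defining this switches the CNI from flannel to Cilium
cilium_mode: "native"         # "native" when nodes share a subnet or use BGP, else "routed"
cilium_tag: "v1.14.6"         # cilium version tag
cilium_hubble: true           # enable hubble observability relay and ui
cluster_cidr: "10.52.0.0/16"  # pod cidr pool used once flannel is disabled
```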
Gereon Vey
bcd37a6904 add kube-vip as a service load balancer (#432)
* add kube-vip as a service load balancer

* add molecule scenario kube-vip

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-29 09:13:13 -06:00
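The kube-vip service load balancer stays opt-in: per the sample group_vars diff later on this page, it only takes over `LoadBalancer` duty from MetalLB when `kube_vip_lb_ip_range` is uncommented. A minimal sketch with values from that diff:

```yml
# image tag for kube-vip
kube_vip_tag_version: "v0.6.4"
# uncomment to use kube-vip for services instead of MetalLB
kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"
```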
Techno Tim
8dd3ffc825 fix(ci): Don't run CI for certain files (#433)
* fix(ci): Don't run CI for certain files

* fix(ci): Don't run CI for certain files
2024-01-28 20:42:28 +00:00
dependabot[bot]
f6ba208b5c chore(deps): bump actions/upload-artifact from 3.1.1 to 4.3.0 (#426)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.1 to 4.3.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](83fd05a356...26f96dfa69)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-28 19:40:47 +00:00
dependabot[bot]
a22d8f7aaf chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions (#425)
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 2.0.1 to 3.0.3.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](af2eb32266...ba37328d4e)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-28 17:20:46 +00:00
dependabot[bot]
05fb6b566d chore(deps): bump actions/setup-python from 2.3.3 to 5.0.0 (#423) 2024-01-28 01:57:41 +00:00
egandro
3aeb7d69ea added fix for metallb version upgrades (#394)
* added fix for metallb version upgrades

* use bash to allow pipefail

---------

Co-authored-by: Harald Fielker <harald.fielker@gmail.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-28 00:50:13 +00:00
dependabot[bot]
61bf3971ef chore(deps): bump actions/checkout from 2.5.0 to 4.1.1 (#424)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.5.0 to 4.1.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](e2f20e631a...b4ffde65f4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-27 17:26:12 -06:00
Gereon Vey
3f06a11c8d fetch kubeconfig from master after deployment (#431)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-27 16:30:13 -06:00
Techno Tim
3888a29bb1 fix(ci): only run CI for PRs (#430)
* fix(ci): only run CI for PRs

* fix(ci): ensure that branch is up to date

* fix(ci): ensure that branch is up to date
2024-01-27 15:35:47 -06:00
Timothy Stewart
98ef696f31 fix(ci): fixes for ephemeral nodes 2024-01-26 23:12:50 -06:00
Timothy Stewart
de26a79a4c fix(ci): fixes for ephemeral nodes 2024-01-26 23:09:30 -06:00
Timothy Stewart
ab7ca9b551 fix(ci): fixes for ephemeral nodes 2024-01-26 23:06:02 -06:00
Timothy Stewart
c5f71c9e2e fix(ci): fixes for ephemeral nodes 2024-01-26 22:52:19 -06:00
sholdee
0f23e7e258 Add Calico CNI option (#414)
* Add Tigera Operator/Calico CNI option

Small tweak to reduce delta from head

Set calico option to be disabled by default

Add rescue blocks in case updating existing

Refactor items and update comments

Refactor and consolidate calico.yml into block

Refactor to use template for Calico CRs

Revert use_calico to false

Template blockSize

Align default cidr in template with all.yml sample

Apply upstream version tags

Revert to current ver tags. Upstream's don't work.

Update template address detection

Add Tigera Operator/Calico CNI option

* Add calico-apiserver check

* Add eBPF dataplane option

* Add kube svc endpoint configmap when ebpf enabled

* Add /etc/cni/net.d to reset task

* Refactor based on comments

* Add molecule scenario

* Fix lint

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-26 18:53:27 -06:00
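As with the other CNI options, Calico is enabled entirely through inventory variables; a minimal sketch with the values from the sample group_vars diff further down:

```yml
# enable Calico (Tigera Operator) instead of flannel (values from the sample group_vars diff below)
calico_iface: "eth0"          # defining this switches the CNI from flannel to Calico
calico_ebpf: false            # use eBPF dataplane instead of iptables
calico_tag: "v3.27.0"         # calico version tag
cluster_cidr: "10.52.0.0/16"  # pod cidr pool, shared with the cilium option
```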
Techno Tim
121061d875 chore(deps) Updated LBs (#428)
* chore(deps): Updated metallb

* chore(deps): Updated kube-vip
2024-01-26 23:54:33 +00:00
João Gonçalves
db53f595fd feat(k3s): added support for latest raspberrypi os (debian 12 bookworm) (#404)
* feat(k3s): added support for latest raspberrypi os (debian 12 bookworm)

* Update test.yml

* Revert test workflow

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-26 22:20:06 +00:00
Techno Tim
7b6b24ce4d feat(k3s): Updated to v1.29.0+k3s1 (#421) 2024-01-26 14:49:24 -06:00
Techno Tim
a5728da35e feat(k3s): Updated to v1.28 (#420)
* feat(k3s): Updated to v1.28.5+k3s1
2024-01-26 13:10:21 -06:00
Techno Tim
cda7c92203 feat(k3s): Updated to v1.27 (#294)
* feat(k3s): Updated to v1.27.1+k3s1

* feat(k3s): Updated to v1.27.1+k3s1

* feat(k3s): Updated to v1.27.4+k3s1

* feat(k3s): Updated to v1.27.9+k3s1
2024-01-26 18:54:58 +00:00
Techno Tim
d910b83bf3 fix(molecule): Cleanup all artifacts, side effects, and actions in case nodes are not ephemeral (#427) 2024-01-26 17:16:26 +00:00
Techno Tim
101313f880 feat(dependabot): Added docker and github actions (#422) 2024-01-26 16:19:42 +00:00
Techno Tim
12be355867 feat(k3s): Updated to v1.26 (#207)
* feat(k3s): Updated to v1.26.0+k3s2

* feat(k3s): Updated to v1.26.2+k3s1

* feat(k3s): Updated to v1.26.3+k3s1

* feat(k3s): Updated to v1.26.4+k3s1

* feat(k3s): Updated to v1.26.7+k3s1

* feat(k3s): Updated to v1.26.11+k3s2

* feat(k3s): Updated to v1.26.12+k3s1
2024-01-25 22:09:08 +00:00
Gabor A
aa09e3e9df fix: typos (#416)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-25 20:40:56 +00:00
sholdee
511c410451 Add Debian Bookworm support and refactor Pi OS detection (#415)
* Refactor Pi OS detection and add Debian Bookworm support

* Add bullseye back

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-25 19:20:02 +00:00
Balázs Hasprai
df9c6f3014 Fix http_proxy service dir in k3s_agent role (#400)
* Fix http_proxy service dir in k3s_agent role

* Fix http_proxy reset: rm conf files before dirs

* Fix http_proxy reset rm order

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-25 11:34:46 -06:00
Timothy Stewart
5ae8fd1223 fix(molecule): lower resources for nodes 2024-01-25 09:30:02 -06:00
Techno Tim
e2e9881f0f Fix CI (#389)
did all the things to make it work
2024-01-24 22:26:38 -06:00
egandro
edf0c9eebd fix for recreating new control planes (2nd run) (#393)
Co-authored-by: Harald Fielker <harald.fielker@gmail.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-19 08:37:14 -06:00
egandro
7669fd4721 initial galaxy.yml (#388)
* initial galaxy.yml

* added readme

* lint fix

* Updated description

Co-authored-by: Dov Benyomin Sohacheski <b@kloud.email>

* Updated license_file section

Co-authored-by: Dov Benyomin Sohacheski <b@kloud.email>

* Updated tags section

Co-authored-by: Dov Benyomin Sohacheski <b@kloud.email>

* Updated dependencies section

Co-authored-by: Dov Benyomin Sohacheski <b@kloud.email>

* removed extra empty line galaxy created

---------

Co-authored-by: Harald Fielker <harald.fielker@gmail.com>
Co-authored-by: Dov Benyomin Sohacheski <b@kloud.email>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-18 18:35:19 -06:00
Balázs Hasprai
cddbfc8e40 Update truthy values to true/false only, #204 (#387)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-15 12:43:44 -06:00
Techno Tim
70e658cf98 feat(k3s): Updated to v1.25.16+k3s4 (#407) 2024-01-12 21:34:23 -06:00
dependabot[bot]
7badfbd7bd chore(deps): bump netaddr from 0.9.0 to 0.10.0 (#411)
Bumps [netaddr](https://github.com/drkjam/netaddr) from 0.9.0 to 0.10.0.
- [Release notes](https://github.com/drkjam/netaddr/releases)
- [Changelog](https://github.com/netaddr/netaddr/blob/master/CHANGELOG)
- [Commits](https://github.com/drkjam/netaddr/compare/0.9.0...0.10.0)

---
updated-dependencies:
- dependency-name: netaddr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-01 20:46:52 -06:00
Balázs Hasprai
e880f08d26 Add option for install behind http_proxy (#384)
* Add option for install behind http_proxy

* Tidy up http_proxy usage
2023-10-21 00:18:36 +00:00
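The proxy support added here is configured through an optional `proxy_env` mapping; a sketch using the placeholder values from the sample group_vars diff below:

```yml
# only set this if the nodes reach the internet through a proxy
proxy_env:
  HTTP_PROXY: "http://proxy.domain.local:3128"
  HTTPS_PROXY: "http://proxy.domain.local:3128"
  NO_PROXY: "*.domain.local,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
```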
Balázs Hasprai
95b2836dfc Add option to disable MetalLB, for use w/ ext LBs (#383)
* Add option to disable MetalLB, for use w/ ext LBs

* Add option to disable MetalLB, for use w/ ext LBs - add defaults

* Skip MetalLB with tags instead of flag
2023-10-18 22:07:07 +00:00
balazshasprai
505c2eeff2 Add option for custom registries / mirrors (#382) 2023-10-18 03:33:30 +00:00
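The registry support added here renders a user-supplied block into k3s's private-registry configuration; a trimmed sketch based on the sample group_vars diff further down (placeholder hostname and credentials as in that diff):

```yml
custom_registries: true
custom_registries_yaml: |
  mirrors:
    docker.io:
      endpoint:
        - "https://registry.domain.com/v2/dockerhub"
  configs:
    "registry.domain.com":
      auth:
        username: yourusername
        password: yourpassword
```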
balazshasprai
9b6d551dd6 Expand secure_path with support for Suse (#381) 2023-10-13 04:14:47 +00:00
dependabot[bot]
a64e882fb7 chore(deps): bump pre-commit-hooks from 4.4.0 to 4.5.0 (#379)
Bumps [pre-commit-hooks](https://github.com/pre-commit/pre-commit-hooks) from 4.4.0 to 4.5.0.
- [Release notes](https://github.com/pre-commit/pre-commit-hooks/releases)
- [Changelog](https://github.com/pre-commit/pre-commit-hooks/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit-hooks/compare/v4.4.0...v4.5.0)

---
updated-dependencies:
- dependency-name: pre-commit-hooks
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 15:39:23 +00:00
johnnyrun
38e773315b sysctl tags (#373)
* sysctl tags

* lost tag

---------

Co-authored-by: Gianni <gianni@chainlabo.com>
Co-authored-by: Gianni Carabelli <gianni.carabelli@skytv.it>
2023-10-09 10:00:31 -05:00
dependabot[bot]
70ddf7b63c chore(deps): bump netaddr from 0.8.0 to 0.9.0 (#365)
Bumps [netaddr](https://github.com/drkjam/netaddr) from 0.8.0 to 0.9.0.
- [Changelog](https://github.com/netaddr/netaddr/blob/master/CHANGELOG)
- [Commits](https://github.com/drkjam/netaddr/commits)

---
updated-dependencies:
- dependency-name: netaddr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-21 12:39:15 -05:00
dependabot[bot]
fb3128a783 chore(deps): bump ansible-core from 2.15.3 to 2.15.4 (#362)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.15.3 to 2.15.4.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.15.3...v2.15.4)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-14 13:48:59 -05:00
Techno Tim
2e318e0862 feat(k3s): Updated to v1.25.12+k3s1 (#351) 2023-08-18 08:59:08 -05:00
dependabot[bot]
0607eb8aa4 chore(deps): bump ansible-core from 2.15.2 to 2.15.3 (#349)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.15.2 to 2.15.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.15.2...v2.15.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-16 13:27:35 -05:00
Marek Pilch
a9904d1562 fixes: ERROR! The requested handler <'Reboot containers' / 'Reboot se… (#348)
* fixes: ERROR! The requested handler <'Reboot containers' / 'Reboot server' / 'Reboot>' was not found in either the main handlers list nor in the listening handlers list

* Update main.yml
2023-08-14 17:37:20 -05:00
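For background, Ansible aborts with this error whenever a task notifies a handler name that is not defined in the play or registered via `listen`. A hypothetical sketch of the failure pattern (not the repo's actual tasks):

```yml
- name: Illustrative play
  hosts: k3s_cluster
  handlers:
    - name: Reboot server
      ansible.builtin.reboot:
  tasks:
    - name: Make a change that requires a reboot
      ansible.builtin.command: /bin/true
      changed_when: true
      notify: Reboot server  # must match a handler name exactly, or the play errors out
```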
Techno Tim
9707bc8a58 fix(docs): updated kube-vip url (#341) 2023-08-14 17:30:42 +00:00
Phil Bolduc
e635bd2626 Change reboot.sh to be executable (#344)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-08-07 11:29:03 -05:00
dependabot[bot]
1aabb5a927 chore(deps): bump jsonpatch from 1.32 to 1.33 (#318) 2023-07-23 19:32:01 +00:00
Christian Berendt
215690b55b Replace hardcoded 'master' group name with 'group_name_master' variable (#337)
For improved flexibility and maintainability.

* Update tasks in node role to use 'group_name_master' variable instead
  of hardcoded 'master' group name
* Update tasks in master role to use 'group_name_master' variable instead
  of hardcoded 'master' group name
* Update tasks in post role to use 'group_name_master' variable instead of
  hardcoded 'master' group name

Signed-off-by: Christian Berendt <berendt@23technologies.cloud>
2023-07-21 16:37:57 -05:00
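A hypothetical sketch of the resulting pattern (the task itself is illustrative; only the `group_name_master` variable comes from this commit):

```yml
# before: delegate_to: "{{ groups['master'][0] }}"
- name: Run a command on the first control-plane node
  ansible.builtin.command: kubectl get nodes
  delegate_to: "{{ groups[group_name_master][0] }}"  # group name is now configurable
  changed_when: false
```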
Simon Leiner
bd44a9b126 Remove unused variable metal_lb_frr_tag_version (#331) 2023-07-21 05:06:04 +00:00
dependabot[bot]
8d61fe81e5 chore(deps): bump pyyaml from 6.0 to 6.0.1 (#334) 2023-07-20 23:20:55 -05:00
dependabot[bot]
c0ff304f22 chore(deps): bump ansible-core from 2.14.5 to 2.15.2 (#335)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.5 to 2.15.2.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.5...v2.15.2)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-20 21:54:40 -05:00
Techno Tim
83077ecdd1 Fix CI - python version (#338)
* fix(README): Updated docs link

* fix(ci): set PYTHON_VERSION to 3.11
2023-07-20 21:19:53 -05:00
Simon Leiner
33ae0d4970 Fix CI (#332)
* Update pre-commit actions

This was done by running "pre-commit autoupdate --freeze".

* Remove pre-commit only dependencies from requirements.in

Including them in the file would create the illusion that those were the
versions actually used in CI, but they are not. The exact versions are
determined by the pre-commit hooks which are pinned in
.pre-commit-config.yaml.

* Ansible Lint: Fix role-name[path]

* Ansible Lint: Fix name[play]

* Ansible Lint: Fix key-order[task]

* Ansible Lint: Fix jinja[spacing]

* Ansible Lint: Fix no-free-form

* Ansible Lint: Fix var-naming[no-reserved]

* Ansible Lint: Fix yaml[comments]

* Ansible Lint: Fix yaml[line-length]

* Ansible Lint: Fix name[casing]

* Ansible Lint: Fix no-changed-when

* Ansible Lint: Fix fqcn[action]

* Ansible Lint: Fix args[module]

* Improve task naming
2023-07-20 10:50:02 -05:00
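Most of these rules are mechanical rewrites. As one illustrative example (not taken from this changeset), the `fqcn[action]` and `no-free-form` fixes turn short-form module calls into fully qualified, structured ones:

```yml
# before: flagged by fqcn[action] and no-free-form
- name: Create config directory
  file: path=/etc/rancher/k3s state=directory

# after: fully qualified module name with structured arguments
- name: Create config directory
  ansible.builtin.file:
    path: /etc/rancher/k3s
    state: directory
```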
Techno Tim
edd4838407 feat(k3s): Updated to v1.25 (#187)
* feat(k3s): Updated to v1.25.4+k3s1

* feat(k3s): Updated to v1.25.5+k3s1

* feat(k3s): Updated to v1.25.7+k3s1

* feat(k3s): Updated to v1.25.8+k3s1

* feat(k3s): Updated to v1.25.9+k3s1

* feat(kube-vip): Update to v0.5.12
2023-04-27 23:09:46 -05:00
dependabot[bot]
5c79ea9b71 chore(deps): bump ansible-core from 2.14.4 to 2.14.5 (#287)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.4 to 2.14.5.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.4...v2.14.5)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 14:19:52 -05:00
dependabot[bot]
3d204ad851 chore(deps): bump yamllint from 1.30.0 to 1.31.0 (#284)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.30.0 to 1.31.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.30.0...v1.31.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-04-24 11:17:02 -05:00
dependabot[bot]
13bd868faa chore(deps): bump ansible-lint from 6.14.6 to 6.15.0 (#285)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.6 to 6.15.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.6...v6.15.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-23 23:10:28 -05:00
dependabot[bot]
c564a8562a chore(deps): bump ansible-lint from 6.14.3 to 6.14.6 (#275)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.3 to 6.14.6.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.3...v6.14.6)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-14 23:34:03 -05:00
Sam Schmit-Van Werweke
0d6d43e7ca Bump k3s version to v1.24.12+k3s1 (#269) 2023-04-02 21:31:20 -05:00
dependabot[bot]
c0952288c2 chore(deps): bump ansible-core from 2.14.3 to 2.14.4 (#265)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.3 to 2.14.4.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.3...v2.14.4)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 15:07:16 -05:00
dependabot[bot]
1c9796e98b chore(deps): bump ansible-lint from 6.14.2 to 6.14.3 (#264)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.2 to 6.14.3.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.2...v6.14.3)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 12:18:52 -05:00
ThePCGeek
288c4089e0 Pc geek fix proxmox lxc (#263)
* (fix): correct var

The variable registered for the rc.local check is `rcfile`, but the `when:` condition referenced `rclocal`, which was undefined; changed it to `rcfile` to correct this.

* add vars file for proxmox host group

* remove remote_user from site.yml for proxmox

* added newline to fix lint issue

* fix added ---

---------

Co-authored-by: ThePCGeek <thepcgeek1776@gmail.com>
2023-03-25 22:02:59 -05:00
ThePCGeek
49f0a2ce6b (fix): correct var (#262)
The variable registered for the rc.local check is `rcfile`, but the `when:` condition referenced `rclocal`, which was undefined; changed it to `rcfile` to correct this.
2023-03-25 20:41:04 -05:00
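In other words, the `when:` condition referenced a variable name that was never registered. A hypothetical sketch of the corrected pattern:

```yml
- name: Check if /etc/rc.local exists
  ansible.builtin.stat:
    path: /etc/rc.local
  register: rcfile  # this is the name any later condition must use

- name: Create /etc/rc.local when missing
  ansible.builtin.copy:
    dest: /etc/rc.local
    content: "#!/bin/sh -e\nexit 0\n"
    mode: "0755"
  when: not rcfile.stat.exists  # previously referenced the undefined `rclocal`
```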
dependabot[bot]
6c4621bd56 chore(deps): bump yamllint from 1.29.0 to 1.30.0 (#261)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.29.0 to 1.30.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.29.0...v1.30.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-24 02:46:16 +00:00
85 changed files with 1478 additions and 381 deletions

.ansible-lint

@@ -13,5 +13,9 @@ exclude_paths:
- 'molecule/**/prepare.yml'
- 'molecule/**/reset.yml'
# The file was generated by galaxy ansible - don't mess with it.
- 'galaxy.yml'
skip_list:
- 'fqcn-builtins'
- var-naming[no-role-prefix]


@@ -37,6 +37,11 @@ systemd_dir: ""
flannel_iface: ""
#calico_iface: ""
calico_ebpf: ""
calico_cidr: ""
calico_tag: ""
apiserver_endpoint: ""
k3s_token: "NA"
@@ -46,6 +51,9 @@ extra_agent_args: ""
kube_vip_tag_version: ""
kube_vip_cloud_provider_tag_version: ""
kube_vip_lb_ip_range: ""
metal_lb_speaker_tag_version: ""
metal_lb_controller_tag_version: ""

.github/dependabot.yml

@@ -9,3 +9,18 @@ updates:
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-major"]
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
rebase-strategy: "auto"
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "daily"
rebase-strategy: "auto"
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-major"]

.github/workflows/cache.yml (new file)

@@ -0,0 +1,42 @@
---
name: "Cache"
on:
workflow_call:
jobs:
molecule:
name: cache
runs-on: self-hosted
env:
PYTHON_VERSION: "3.11"
steps:
- name: Check out the codebase
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip' # caching pip dependencies
- name: Cache Vagrant boxes
id: cache-vagrant
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2 # 4.0
with:
lookup-only: true #if it exists, we don't need to restore and can skip the next step
path: |
~/.vagrant.d/boxes
key: vagrant-boxes-${{ hashFiles('**/molecule.yml') }}
restore-keys: |
vagrant-boxes
- name: Download Vagrant boxes for all scenarios
# To save some cache space, all scenarios share the same cache key.
# On the other hand, this means that the cache contents should be
# the same across all scenarios. This step ensures that.
if: steps.cache-vagrant.outputs.cache-hit != 'true' # only run if false since this is just a cache step
run: |
./.github/download-boxes.sh
vagrant box list

.github/workflows/ci.yml

@@ -2,14 +2,26 @@
name: "CI"
on:
pull_request:
push:
branches:
- master
types:
- opened
- synchronize
paths-ignore:
- '**/README.md'
- '**/.gitignore'
- '**/FUNDING.yml'
- '**/host.ini'
- '**/*.md'
- '**/.editorconfig'
- '**/ansible.example.cfg'
- '**/deploy.sh'
- '**/LICENSE'
- '**/reboot.sh'
- '**/reset.sh'
jobs:
pre:
uses: ./.github/workflows/cache.yml
lint:
uses: ./.github/workflows/lint.yml
needs: [pre]
test:
uses: ./.github/workflows/test.yml
needs: [lint]
needs: [pre, lint]

.github/workflows/lint.yml

@@ -5,37 +5,27 @@ on:
jobs:
pre-commit-ci:
name: Pre-Commit
runs-on: ubuntu-latest
runs-on: self-hosted
env:
PYTHON_VERSION: "3.10"
PYTHON_VERSION: "3.11"
steps:
- name: Check out the codebase
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip' # caching pip dependencies
- name: Cache pip
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('./requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Cache Ansible
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
- name: Restore Ansible cache
uses: actions/cache/restore@13aacd865c20de90d75de3b17ebe84f7a17d57d2 # 4.0
with:
path: ~/.ansible/collections
key: ${{ runner.os }}-ansible-${{ hashFiles('collections/requirements.txt') }}
restore-keys: |
${{ runner.os }}-ansible-
key: ansible-${{ hashFiles('collections/requirements.yml') }}
- name: Install dependencies
run: |
@@ -47,21 +37,17 @@ jobs:
python3 -m pip install -r requirements.txt
echo "::endgroup::"
echo "::group::Install Ansible role requirements from collections/requirements.yml"
ansible-galaxy install -r collections/requirements.yml
echo "::endgroup::"
- name: Run pre-commit
uses: pre-commit/action@646c83fcd040023954eafda54b4db0192ce70507 # 3.0.0
ensure-pinned-actions:
name: Ensure SHA Pinned Actions
runs-on: ubuntu-latest
runs-on: self-hosted
steps:
- name: Checkout code
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@af2eb3226618e2494e3d9084f515ad6dcf16e229 # 2.0.1
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@ba37328d4ea95eaf8b3bd6c6cef308f709a5f2ec # 3.0.3
with:
allowlist: |
aws-actions/

.github/workflows/test.yml

@@ -5,23 +5,51 @@ on:
jobs:
molecule:
name: Molecule
runs-on: macos-12
runs-on: self-hosted
strategy:
matrix:
scenario:
- default
- ipv6
- single_node
- calico
- cilium
- kube-vip
fail-fast: false
env:
PYTHON_VERSION: "3.10"
PYTHON_VERSION: "3.11"
steps:
- name: Check out the codebase
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # 4.1.1
with:
ref: ${{ github.event.pull_request.head.sha }}
# these steps are necessary if not using ephemeral nodes
- name: Delete old Vagrant box versions
if: always() # do this even if a step before has failed
run: vagrant box prune --force
- name: Remove all local Vagrant boxes
if: always() # do this even if a step before has failed
run: if vagrant box list 2>/dev/null; then vagrant box list | cut -f 1 -d ' ' | xargs -L 1 vagrant box remove -f 2>/dev/null && echo "All Vagrant boxes removed." || echo "No Vagrant boxes found."; else echo "No Vagrant boxes found."; fi
- name: Remove all Virtualbox VMs
if: always() # do this even if a step before has failed
run: VBoxManage list vms | awk -F'"' '{print $2}' | xargs -I {} VBoxManage unregistervm --delete "{}"
- name: Remove all Virtualbox HDs
if: always() # do this even if a step before has failed
run: VBoxManage list hdds | awk -F':' '/^UUID:/ {print $2}' | xargs -I {} VBoxManage closemedium disk "{}" --delete
- name: Remove all Virtualbox Networks
if: always() # do this even if a step before has failed
run: VBoxManage list hostonlyifs | grep '^Name:' | awk '{print $2}' | grep '^vboxnet' | xargs -I {} VBoxManage hostonlyif remove {}
- name: Remove Virtualbox network config
if: always() # do this even if a step before has failed
run: sudo rm /etc/vbox/networks.conf || true
- name: Configure VirtualBox
run: |-
sudo mkdir -p /etc/vbox
@@ -30,35 +58,19 @@ jobs:
* fdad:bad:ba55::/64
EOF
- name: Cache pip
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('./requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Cache Vagrant boxes
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
with:
path: |
~/.vagrant.d/boxes
key: vagrant-boxes-${{ hashFiles('**/molecule.yml') }}
restore-keys: |
vagrant-boxes
- name: Download Vagrant boxes for all scenarios
# To save some cache space, all scenarios share the same cache key.
# On the other hand, this means that the cache contents should be
# the same across all scenarios. This step ensures that.
run: ./.github/download-boxes.sh
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # 5.0.0
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip' # caching pip dependencies
- name: Restore vagrant Boxes cache
uses: actions/cache/restore@13aacd865c20de90d75de3b17ebe84f7a17d57d2 # 4.0
with:
path: ~/.vagrant.d/boxes
key: vagrant-boxes-${{ hashFiles('**/molecule.yml') }}
fail-on-cache-miss: true
- name: Install dependencies
run: |
echo "::group::Upgrade pip"
@@ -75,18 +87,40 @@ jobs:
env:
ANSIBLE_K3S_LOG_DIR: ${{ runner.temp }}/logs/k3s-ansible/${{ matrix.scenario }}
ANSIBLE_SSH_RETRIES: 4
ANSIBLE_TIMEOUT: 60
ANSIBLE_TIMEOUT: 120
PY_COLORS: 1
ANSIBLE_FORCE_COLOR: 1
# these steps are necessary if not using ephemeral nodes
- name: Delete old Vagrant box versions
if: always() # do this even if a step before has failed
run: vagrant box prune --force
- name: Remove all local Vagrant boxes
if: always() # do this even if a step before has failed
run: if vagrant box list 2>/dev/null; then vagrant box list | cut -f 1 -d ' ' | xargs -L 1 vagrant box remove -f 2>/dev/null && echo "All Vagrant boxes removed." || echo "No Vagrant boxes found."; else echo "No Vagrant boxes found."; fi
- name: Remove all Virtualbox VMs
if: always() # do this even if a step before has failed
run: VBoxManage list vms | awk -F'"' '{print $2}' | xargs -I {} VBoxManage unregistervm --delete "{}"
- name: Remove all Virtualbox HDs
if: always() # do this even if a step before has failed
run: VBoxManage list hdds | awk -F':' '/^UUID:/ {print $2}' | xargs -I {} VBoxManage closemedium disk "{}" --delete
- name: Remove all Virtualbox Networks
if: always() # do this even if a step before has failed
run: VBoxManage list hostonlyifs | grep '^Name:' | awk '{print $2}' | grep '^vboxnet' | xargs -I {} VBoxManage hostonlyif remove {}
- name: Remove Virtualbox network config
if: always() # do this even if a step before has failed
run: sudo rm /etc/vbox/networks.conf || true
- name: Upload log files
if: always() # do this even if a step before has failed
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # 3.1.1
uses: actions/upload-artifact@26f96dfa697d77e81fd5907df203aa23a56210a8 # 4.3.0
with:
name: logs
path: |
${{ runner.temp }}/logs
- name: Delete old box versions
if: always() # do this even if a step before has failed
run: vagrant box prune --force
overwrite: true

.gitignore

@@ -1,3 +1,4 @@
.env/
*.log
ansible.cfg
kubeconfig

.pre-commit-config.yaml

@@ -1,7 +1,7 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: 3298ddab3c13dd77d6ce1fc0baf97691430d84b0 # v4.3.0
rev: v4.5.0
hooks:
- id: requirements-txt-fixer
- id: sort-simple-yaml
@@ -12,24 +12,24 @@ repos:
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]
- repo: https://github.com/adrienverge/yamllint.git
rev: 9cce2940414e9560ae4c8518ddaee2ac1863a4d2 # v1.28.0
rev: v1.33.0
hooks:
- id: yamllint
args: [-c=.yamllint]
- repo: https://github.com/ansible-community/ansible-lint.git
rev: a058554b9bcf88f12ad09ab9fb93b267a214368f # v6.8.6
rev: v6.22.2
hooks:
- id: ansible-lint
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: 4c7c3dd7161ef39e984cb295e93a968236dc8e8a # v0.8.0.4
rev: v0.9.0.6
hooks:
- id: shellcheck
- repo: https://github.com/Lucas-C/pre-commit-hooks
rev: 04618e68aa2380828a36a23ff5f65a06ae8f59b9 # v1.3.1
rev: v1.5.4
hooks:
- id: remove-crlf
- id: remove-tabs
- repo: https://github.com/sirosen/texthooks
rev: 30d9af95631de0d7cff4e282bde9160d38bb0359 # 0.4.0
rev: 0.6.4
hooks:
- id: fix-smartquotes

.yamllint

@@ -6,4 +6,6 @@ rules:
max: 120
level: warning
truthy:
allowed-values: ['true', 'false', 'yes', 'no']
allowed-values: ['true', 'false']
ignore:
- galaxy.yml

README.md

@@ -4,11 +4,11 @@
This playbook will build an HA Kubernetes cluster with `k3s`, `kube-vip` and MetalLB via `ansible`.
This is based on the work from [this fork](https://github.com/212850a/k3s-ansible) which is based on the work from [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible). It uses [kube-vip](https://kube-vip.chipzoller.dev/) to create a load balancer for control plane, and [metal-lb](https://metallb.universe.tf/installation/) for its service `LoadBalancer`.
This is based on the work from [this fork](https://github.com/212850a/k3s-ansible) which is based on the work from [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible). It uses [kube-vip](https://kube-vip.io/) to create a load balancer for control plane, and [metal-lb](https://metallb.universe.tf/installation/) for its service `LoadBalancer`.
If you want more context on how this works, see:
📄 [Documentation](https://docs.technotim.live/posts/k3s-etcd-ansible/) (including example commands)
📄 [Documentation](https://technotim.live/posts/k3s-etcd-ansible/) (including example commands)
📺 [Watch the Video](https://www.youtube.com/watch?v=CbkEWcUZ7zM)
@@ -28,7 +28,7 @@ on processor architecture:
## ✅ System requirements
- Control Node (the machine you are running `ansible` commands) must have Ansible 2.11+ If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://docs.technotim.live/posts/ansible-automation/).
- Control Node (the machine you are running `ansible` commands) must have Ansible 2.11+ If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://technotim.live/posts/ansible-automation/).
- You will also need to install collections that this playbook uses by running `ansible-galaxy collection install -r ./collections/requirements.yml` (important❗)
@@ -101,7 +101,7 @@ scp debian@master_ip:~/.kube/config ~/.kube/config
### 🔨 Testing your cluster
See the commands [here](https://docs.technotim.live/posts/k3s-etcd-ansible/#testing-your-cluster).
See the commands [here](https://technotim.live/posts/k3s-etcd-ansible/#testing-your-cluster).
### Troubleshooting
@@ -118,6 +118,28 @@ You can find more information about it [here](molecule/README.md).
This repo uses `pre-commit` and `pre-commit-hooks` to lint and fix common style and syntax errors. Be sure to install python packages and then run `pre-commit install`. For more information, see [pre-commit](https://pre-commit.com/)
## 🌌 Ansible Galaxy
This collection can now be used in larger ansible projects.
Instructions:
- create or modify a file `collections/requirements.yml` in your project
```yml
collections:
- name: ansible.utils
- name: community.general
- name: ansible.posix
- name: kubernetes.core
- name: https://github.com/techno-tim/k3s-ansible.git
type: git
version: master
```
- install via `ansible-galaxy collection install -r ./collections/requirements.yml`
- every role is now available via the prefix `techno_tim.k3s_ansible.` e.g. `techno_tim.k3s_ansible.lxc`
## Thanks 🤝
This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas:
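
To complement the README's new Galaxy section: once the collection is installed, a consuming project references roles by their fully qualified names. A minimal hypothetical playbook (the `lxc` role name is taken from the README's own example):

```yml
# hypothetical consumer playbook
- name: Configure Proxmox LXC hosts with the collection's role
  hosts: proxmox
  become: true
  roles:
    - role: techno_tim.k3s_ansible.lxc
```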

galaxy.yml (new file)

@@ -0,0 +1,81 @@
### REQUIRED
# The namespace of the collection. This can be a company/brand/organization or product namespace under which all
# content lives. May only contain alphanumeric lowercase characters and underscores. Namespaces cannot start with
# underscores or numbers and cannot contain consecutive underscores
namespace: techno_tim
# The name of the collection. Has the same character restrictions as 'namespace'
name: k3s_ansible
# The version of the collection. Must be compatible with semantic versioning
version: 1.0.0
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: README.md
# A list of the collection's content authors. Can be just the name or in the format 'Full Name <email> (url)
# @nicks:irc/im.site#channel'
authors:
- your name <example@domain.com>
### OPTIONAL but strongly recommended
# A short summary description of the collection
description: >
The easiest way to bootstrap a self-hosted High Availability Kubernetes
cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB,
and more.
# Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only
# accepts L(SPDX,https://spdx.org/licenses/) licenses. This key is mutually exclusive with 'license_file'
license:
- Apache-2.0
# A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character
# requirements as 'namespace' and 'name'
tags:
- etcd
- high-availability
- k8s
- k3s
- k3s-cluster
- kube-vip
- kubernetes
- metallb
- rancher
# Collections that this collection requires to be installed for it to be usable. The key of the dict is the
# collection label 'namespace.name'. The value is a version range
# L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification). Multiple version
# range specifiers can be set and are separated by ','
dependencies:
ansible.utils: '*'
ansible.posix: '*'
community.general: '*'
kubernetes.core: '*'
# The URL of the originating SCM repository
repository: https://github.com/techno-tim/k3s-ansible
# The URL to any online docs
documentation: https://github.com/techno-tim/k3s-ansible
# The URL to the homepage of the collection/project
homepage: https://www.youtube.com/watch?v=CbkEWcUZ7zM
# The URL to the collection issue tracker
issues: https://github.com/techno-tim/k3s-ansible/issues
# A list of file glob-like patterns used to filter any files or directories that should not be included in the build
# artifact. A pattern is matched from the relative path of the file or directory of the collection directory. This
# uses 'fnmatch' to match the files or directories. Some directories and files like 'galaxy.yml', '*.pyc', '*.retry',
# and '.git' are always filtered. Mutually exclusive with 'manifest'
build_ignore: []
# A dict controlling use of manifest directives used in building the collection artifact. The key 'directives' is a
# list of MANIFEST.in style
# L(directives,https://packaging.python.org/en/latest/guides/using-manifest-in/#manifest-in-commands). The key
# 'omit_default_directives' is a boolean that controls whether the default directives are used. Mutually exclusive
# with 'build_ignore'
# manifest: null

inventory/sample/group_vars/all.yml

@@ -1,5 +1,5 @@
---
k3s_version: v1.24.11+k3s1
k3s_version: v1.29.0+k3s1
# this is the user that has ssh access to these machines
ansible_user: ansibleuser
systemd_dir: /etc/systemd/system
@@ -10,6 +10,30 @@ system_timezone: "Your/Timezone"
# interface which will be used for flannel
flannel_iface: "eth0"
# uncomment calico_iface to use tigera operator/calico cni instead of flannel https://docs.tigera.io/calico/latest/about
# calico_iface: "eth0"
calico_ebpf: false # use eBPF dataplane instead of iptables
calico_tag: "v3.27.0" # calico version tag
# uncomment cilium_iface to use cilium cni instead of flannel or calico
# ensure v4.19.57, v5.1.16, v5.2.0 or more recent kernel
# cilium_iface: "eth0"
cilium_mode: "native" # native when nodes on same subnet or using bgp, else set routed
cilium_tag: "v1.14.6" # cilium version tag
cilium_hubble: true # enable hubble observability relay and ui
# if using calico or cilium, you may specify the cluster pod cidr pool
cluster_cidr: "10.52.0.0/16"
# enable cilium bgp control plane for lb services and pod cidrs. disables metallb.
cilium_bgp: false
# bgp parameters for cilium cni. only active when cilium_iface is defined and cilium_bgp is true.
cilium_bgp_my_asn: "64513"
cilium_bgp_peer_asn: "64512"
cilium_bgp_peer_address: "192.168.30.1"
cilium_bgp_lb_cidr: "192.168.31.0/24" # cidr for cilium loadbalancer ipam
# apiserver_endpoint is virtual ip-address which will be configured on each master
apiserver_endpoint: "192.168.30.222"
@@ -20,28 +44,42 @@ k3s_token: "some-SUPER-DEDEUPER-secret-password"
# The IP on which the node is reachable in the cluster.
# Here, a sensible default is provided, you can still override
# it for each of your hosts, though.
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'
k3s_node_ip: "{{ ansible_facts[(cilium_iface | default(calico_iface | default(flannel_iface)))]['ipv4']['address'] }}"
# Disable the taint manually by setting: k3s_master_taint = false
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"
# these arguments are recommended for servers as well as agents:
extra_args: >-
--flannel-iface={{ flannel_iface }}
{{ '--flannel-iface=' + flannel_iface if calico_iface is not defined and cilium_iface is not defined else '' }}
--node-ip={{ k3s_node_ip }}
# change these to your liking, the only required are: --disable servicelb, --tls-san {{ apiserver_endpoint }}
# the contents of the if block is also required if using calico or cilium
extra_server_args: >-
{{ extra_args }}
{{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
{% if calico_iface is defined or cilium_iface is defined %}
--flannel-backend=none
--disable-network-policy
--cluster-cidr={{ cluster_cidr | default('10.52.0.0/16') }}
{% endif %}
--tls-san {{ apiserver_endpoint }}
--disable servicelb
--disable traefik
extra_agent_args: >-
{{ extra_args }}
# image tag for kube-vip
kube_vip_tag_version: "v0.5.11"
kube_vip_tag_version: "v0.6.4"
# tag for kube-vip-cloud-provider manifest
# kube_vip_cloud_provider_tag_version: "main"
# kube-vip ip range for load balancer
# (uncomment to use kube-vip for services instead of MetalLB)
# kube_vip_lb_ip_range: "192.168.30.80-192.168.30.90"
# metallb type frr or native
metal_lb_type: "native"
@@ -55,9 +93,8 @@ metal_lb_mode: "layer2"
# metal_lb_bgp_peer_address: "192.168.30.1"
# image tag for metal lb
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
metal_lb_speaker_tag_version: "v0.13.12"
metal_lb_controller_tag_version: "v0.13.12"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
@@ -67,9 +104,9 @@ metal_lb_ip_range: "192.168.30.80-192.168.30.90"
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade off being that lxc
# containers are significantly more resource efficent compared to full VMs.
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this.
# I would only really recommend using this if you have partiularly low powered proxmox nodes where the overhead of
# I would only really recommend using this if you have particularly low powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
@@ -82,3 +119,49 @@ proxmox_lxc_ct_ids:
- 202
- 203
- 204
# Only enable this if you have set up your own container registry to act as a mirror / pull-through cache
# (harbor / nexus / docker's official registry / etc).
# Can be beneficial for larger dev/test environments (for example if you're getting rate limited by docker hub),
# or air-gapped environments where your nodes don't have internet access after the initial setup
# (which is still needed for downloading the k3s binary and such).
# k3s's documentation about private registries here: https://docs.k3s.io/installation/private-registry
custom_registries: false
# The registries can be authenticated or anonymous, depending on your registry server configuration.
# If they allow anonymous access, simply remove the following bit from custom_registries_yaml
# configs:
# "registry.domain.com":
# auth:
# username: yourusername
# password: yourpassword
# The following is an example that pulls all images used in this playbook through your private registries.
# It also allows you to pull your own images from your private registry, without having to use imagePullSecrets
# in your deployments.
# If all you need is your own images and you don't care about caching the docker/quay/ghcr.io images,
# you can just remove those from the mirrors: section.
custom_registries_yaml: |
mirrors:
docker.io:
endpoint:
- "https://registry.domain.com/v2/dockerhub"
quay.io:
endpoint:
- "https://registry.domain.com/v2/quayio"
ghcr.io:
endpoint:
- "https://registry.domain.com/v2/ghcrio"
registry.domain.com:
endpoint:
- "https://registry.domain.com"
configs:
"registry.domain.com":
auth:
username: yourusername
password: yourpassword
# Only enable and configure these if you access the internet through a proxy
# proxy_env:
# HTTP_PROXY: "http://proxy.domain.local:3128"
# HTTPS_PROXY: "http://proxy.domain.local:3128"
# NO_PROXY: "*.domain.local,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

inventory/sample/group_vars/proxmox.yml (new file)

@@ -0,0 +1,2 @@
---
ansible_user: '{{ proxmox_lxc_ssh_user }}'

molecule/README.md

@@ -13,6 +13,12 @@ We have these scenarios:
To save a bit of test time, this cluster is _not_ highly available, it consists of only one control and one worker node.
- **single_node**:
Very similar to the default scenario, but uses only a single node for all cluster functionality.
- **calico**:
The same as single node, but uses calico cni instead of flannel.
- **cilium**:
The same as single node, but uses cilium cni instead of flannel.
- **kube-vip**
The same as single node, but uses kube-vip as service loadbalancer instead of MetalLB
## How to execute

molecule/calico/molecule.yml (new file)

@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.62
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

molecule/calico/overrides.yml (new file)

@@ -0,0 +1,16 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
calico_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.224"
metal_lb_ip_range: "192.168.30.100-192.168.30.109"

molecule/cilium/molecule.yml (new file)

@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.63
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

View File

@@ -0,0 +1,16 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
cilium_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.225"
metal_lb_ip_range: "192.168.30.110-192.168.30.119"

molecule/default/molecule.yml

@@ -7,7 +7,7 @@ platforms:
- name: control1
box: generic/ubuntu2204
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -22,8 +22,8 @@ platforms:
ssh.password: "vagrant"
- name: control2
box: generic/debian11
memory: 2048
box: generic/debian12
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -34,7 +34,7 @@ platforms:
- name: control3
box: generic/rocky9
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -45,7 +45,7 @@ platforms:
- name: node1
box: generic/ubuntu2204
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -61,7 +61,7 @@ platforms:
- name: node2
box: generic/rocky9
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -72,6 +72,8 @@ platforms:
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
@@ -82,7 +84,6 @@ provisioner:
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax

molecule/default/overrides.yml

@@ -4,7 +4,8 @@
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See: https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant # noqa yaml[line-length]
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
flannel_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:


@@ -17,6 +17,6 @@
# and security needs.
ansible.builtin.systemd:
name: firewalld
enabled: no
enabled: false
state: stopped
become: true

molecule/ipv6/molecule.yml

@@ -6,7 +6,7 @@ driver:
platforms:
- name: control1
box: generic/ubuntu2204
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -22,7 +22,7 @@ platforms:
- name: control2
box: generic/ubuntu2204
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -38,7 +38,7 @@ platforms:
- name: node1
box: generic/ubuntu2204
memory: 2048
memory: 1024
cpus: 2
groups:
- k3s_cluster
@@ -53,6 +53,8 @@ platforms:
ssh.password: "vagrant"
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
@@ -63,7 +65,6 @@ provisioner:
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax

molecule/ipv6/overrides.yml

@@ -4,7 +4,8 @@
tasks:
- name: Override host variables (1/2)
ansible.builtin.set_fact:
# See: https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant # noqa yaml[line-length]
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
flannel_iface: eth1
# In this scenario, we have multiple interfaces that the VIP could be

molecule/kube-vip/molecule.yml (new file)

@@ -0,0 +1,49 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.62
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

molecule/kube-vip/overrides.yml (new file)

@@ -0,0 +1,17 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
flannel_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the other scenarios
apiserver_endpoint: "192.168.30.225"
# Use kube-vip instead of MetalLB
kube_vip_lb_ip_range: "192.168.30.110-192.168.30.119"

molecule/resources/verify.yml

@@ -2,4 +2,4 @@
- name: Verify
hosts: all
roles:
- verify/from_outside
- verify_from_outside


@@ -6,4 +6,4 @@ outside_host: localhost
testing_namespace: molecule-verify-from-outside
# The directory in which the example manifests reside
example_manifests_path: ../../../../example
example_manifests_path: ../../../example


@@ -34,14 +34,14 @@
- name: Assert that the nginx welcome page is available
ansible.builtin.uri:
url: http://{{ ip | ansible.utils.ipwrap }}:{{ port }}/
return_content: yes
url: http://{{ ip | ansible.utils.ipwrap }}:{{ port_ }}/
return_content: true
register: result
failed_when: "'Welcome to nginx!' not in result.content"
vars:
ip: >-
{{ nginx_services.resources[0].status.loadBalancer.ingress[0].ip }}
port: >-
port_: >-
{{ nginx_services.resources[0].spec.ports[0].port }}
# Deactivated linter rules:
# - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap

molecule/single_node/molecule.yml

@@ -21,6 +21,8 @@ platforms:
ip: 192.168.30.50
provisioner:
name: ansible
env:
ANSIBLE_VERBOSITY: 1
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
@@ -31,7 +33,6 @@ provisioner:
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax

molecule/single_node/overrides.yml

@@ -4,7 +4,8 @@
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See: https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant # noqa yaml[line-length]
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
flannel_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:

reboot.sh (mode changed from normal to executable, no content changes)

reboot.yml

@@ -1,7 +1,7 @@
---
- name: Reboot k3s_cluster
hosts: k3s_cluster
gather_facts: yes
gather_facts: true
tasks:
- name: Reboot the nodes (and Wait upto 5 mins max)
become: true

View File

@@ -1,12 +1,10 @@
ansible-core>=2.13.5
ansible-lint>=6.8.6
ansible-core>=2.16.2
jmespath>=1.0.1
jsonpatch>=1.32
kubernetes>=25.3.0
molecule-vagrant>=1.0.0
molecule>=4.0.3
netaddr>=0.8.0
pre-commit>=2.20.0
pre-commit-hooks>=1.3.1
pyyaml>=6.0
yamllint>=1.28.0
jsonpatch>=1.33
kubernetes>=29.0.0
molecule-plugins[vagrant]
molecule>=6.0.3
netaddr>=0.10.1
pre-commit>=3.6.0
pre-commit-hooks>=4.5.0
pyyaml>=6.0.1

View File

@@ -1,211 +1,169 @@
#
# This file is autogenerated by pip-compile with python 3.8
# To update, run:
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
# pip-compile requirements.in
#
ansible-compat==3.0.1
ansible-compat==4.1.11
# via molecule
ansible-core==2.14.3
ansible-core==2.16.3
# via
# -r requirements.in
# ansible-compat
# ansible-lint
ansible-lint==6.14.2
# via -r requirements.in
arrow==1.2.3
# via jinja2-time
attrs==22.1.0
# via jsonschema
binaryornot==0.4.4
# via cookiecutter
black==22.10.0
# via ansible-lint
bracex==2.3.post1
# molecule
attrs==23.2.0
# via
# jsonschema
# referencing
bracex==2.4
# via wcmatch
cachetools==5.2.0
cachetools==5.3.2
# via google-auth
certifi==2022.9.24
certifi==2023.11.17
# via
# kubernetes
# requests
cffi==1.15.1
cffi==1.16.0
# via cryptography
cfgv==3.3.1
cfgv==3.4.0
# via pre-commit
chardet==5.0.0
# via binaryornot
charset-normalizer==2.1.1
charset-normalizer==3.3.2
# via requests
click==8.1.3
click==8.1.7
# via
# black
# click-help-colors
# cookiecutter
# molecule
click-help-colors==0.9.1
click-help-colors==0.9.4
# via molecule
commonmark==0.9.1
# via rich
cookiecutter==2.1.1
# via molecule
cryptography==38.0.3
cryptography==41.0.7
# via ansible-core
distlib==0.3.6
distlib==0.3.8
# via virtualenv
distro==1.8.0
# via selinux
enrich==1.2.7
# via molecule
filelock==3.8.0
# via
# ansible-lint
# virtualenv
google-auth==2.14.0
filelock==3.13.1
# via virtualenv
google-auth==2.26.2
# via kubernetes
identify==2.5.8
identify==2.5.33
# via pre-commit
idna==3.4
idna==3.6
# via requests
jinja2==3.1.2
jinja2==3.1.3
# via
# ansible-core
# cookiecutter
# jinja2-time
# molecule
# molecule-vagrant
jinja2-time==0.2.0
# via cookiecutter
jmespath==1.0.1
# via -r requirements.in
jsonpatch==1.32
jsonpatch==1.33
# via -r requirements.in
jsonpointer==2.3
jsonpointer==2.4
# via jsonpatch
jsonschema==4.17.0
jsonschema==4.21.1
# via
# ansible-compat
# ansible-lint
# molecule
kubernetes==25.3.0
jsonschema-specifications==2023.12.1
# via jsonschema
kubernetes==29.0.0
# via -r requirements.in
markupsafe==2.1.1
markdown-it-py==3.0.0
# via rich
markupsafe==2.1.4
# via jinja2
molecule==4.0.4
mdurl==0.1.2
# via markdown-it-py
molecule==6.0.3
# via
# -r requirements.in
# molecule-vagrant
molecule-vagrant==1.0.0
# molecule-plugins
molecule-plugins[vagrant]==23.5.0
# via -r requirements.in
mypy-extensions==0.4.3
# via black
netaddr==0.8.0
netaddr==0.10.1
# via -r requirements.in
nodeenv==1.7.0
nodeenv==1.8.0
# via pre-commit
oauthlib==3.2.2
# via requests-oauthlib
packaging==21.3
# via
# kubernetes
# requests-oauthlib
packaging==23.2
# via
# ansible-compat
# ansible-core
# ansible-lint
# molecule
pathspec==0.10.1
# via
# black
# yamllint
platformdirs==2.5.2
# via
# black
# virtualenv
pluggy==1.0.0
platformdirs==4.1.0
# via virtualenv
pluggy==1.3.0
# via molecule
pre-commit==2.21.0
pre-commit==3.6.0
# via -r requirements.in
pre-commit-hooks==4.4.0
pre-commit-hooks==4.5.0
# via -r requirements.in
pyasn1==0.4.8
pyasn1==0.5.1
# via
# pyasn1-modules
# rsa
pyasn1-modules==0.2.8
pyasn1-modules==0.3.0
# via google-auth
pycparser==2.21
# via cffi
pygments==2.13.0
pygments==2.17.2
# via rich
pyparsing==3.0.9
# via packaging
pyrsistent==0.19.2
# via jsonschema
python-dateutil==2.8.2
# via
# arrow
# kubernetes
python-slugify==6.1.2
# via cookiecutter
# via kubernetes
python-vagrant==1.0.0
# via molecule-vagrant
pyyaml==6.0
# via molecule-plugins
pyyaml==6.0.1
# via
# -r requirements.in
# ansible-compat
# ansible-core
# ansible-lint
# cookiecutter
# kubernetes
# molecule
# molecule-vagrant
# pre-commit
# yamllint
requests==2.28.1
referencing==0.32.1
# via
# jsonschema
# jsonschema-specifications
requests==2.31.0
# via
# cookiecutter
# kubernetes
# requests-oauthlib
requests-oauthlib==1.3.1
# via kubernetes
resolvelib==0.8.1
resolvelib==1.0.1
# via ansible-core
rich==12.6.0
rich==13.7.0
# via
# ansible-lint
# enrich
# molecule
rpds-py==0.17.1
# via
# jsonschema
# referencing
rsa==4.9
# via google-auth
ruamel-yaml==0.17.21
# via
# ansible-lint
# pre-commit-hooks
selinux==0.2.1
# via molecule-vagrant
ruamel-yaml==0.18.5
# via pre-commit-hooks
ruamel-yaml-clib==0.2.8
# via ruamel-yaml
six==1.16.0
# via
# google-auth
# kubernetes
# python-dateutil
subprocess-tee==0.4.1
# via
# ansible-compat
# ansible-lint
text-unidecode==1.3
# via python-slugify
urllib3==1.26.12
# via ansible-compat
urllib3==2.1.0
# via
# kubernetes
# requests
virtualenv==20.16.6
virtualenv==20.25.0
# via pre-commit
wcmatch==8.4.1
# via ansible-lint
websocket-client==1.4.2
wcmatch==8.5
# via molecule
websocket-client==1.7.0
# via kubernetes
yamllint==1.29.0
# via
# -r requirements.in
# ansible-lint
# The following packages are considered to be unsafe in a requirements file:
# setuptools

View File

@@ -1,7 +1,7 @@
---
- hosts: k3s_cluster
gather_facts: yes
- name: Reset k3s cluster
hosts: k3s_cluster
gather_facts: true
roles:
- role: reset
become: true
@@ -14,9 +14,10 @@
reboot:
reboot_timeout: 3600
- hosts: proxmox
- name: Revert changes to Proxmox cluster
hosts: proxmox
gather_facts: true
become: yes
become: true
remote_user: "{{ proxmox_lxc_ssh_user }}"
roles:
- role: reset_proxmox_lxc

View File

@@ -1,16 +0,0 @@
---
# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: null
server_init_args: >-
{% if groups['master'] | length > 1 %}
{% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}
--cluster-init
{% else %}
--server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
{% endif %}
--token {{ k3s_token }}
{% endif %}
{{ extra_server_args | default('') }}

View File

@@ -0,0 +1,3 @@
---
# Name of the master group
group_name_master: master
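
This default makes the previously hard-coded group name overridable. A hypothetical inventory that names its control-plane group differently could simply set, for example:

# group_vars/all.yml (hypothetical inventory layout)
group_name_master: control_plane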

View File

@@ -1,3 +0,0 @@
---
# Timeout to wait for MetalLB services to come up
metal_lb_available_timeout: 120s

View File

@@ -1,8 +0,0 @@
---
- name: Deploy metallb pool
include_tasks: metallb.yml
- name: Remove tmp directory used for manifests
file:
path: /tmp/k3s
state: absent

View File

@@ -0,0 +1,18 @@
---
- name: Create k3s-node.service.d directory
file:
path: '{{ systemd_dir }}/k3s-node.service.d'
state: directory
owner: root
group: root
mode: '0755'
- name: Copy K3s http_proxy conf file
template:
src: "http_proxy.conf.j2"
dest: "{{ systemd_dir }}/k3s-node.service.d/http_proxy.conf"
owner: root
group: root
mode: '0755'

View File

@@ -1,5 +1,9 @@
---
- name: Deploy K3s http_proxy conf
include_tasks: http_proxy.yml
when: proxy_env is defined
- name: Copy K3s service file
template:
src: "k3s.service.j2"
@@ -11,6 +15,6 @@
- name: Enable and check K3s service
systemd:
name: k3s-node
daemon_reload: yes
daemon_reload: true
state: restarted
enabled: yes
enabled: true

View File

@@ -0,0 +1,4 @@
[Service]
Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }}
Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }}
Environment=NO_PROXY={{ proxy_env.NO_PROXY }}
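
The template assumes a proxy_env mapping with exactly these three keys. A minimal sketch of such a variable in group_vars (host, port, and exclusions are placeholders):

proxy_env:
  HTTP_PROXY: "http://proxy.example.com:3128"
  HTTPS_PROXY: "http://proxy.example.com:3128"
  NO_PROXY: "localhost,127.0.0.1,192.168.30.0/24"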

View File

@@ -7,7 +7,7 @@ After=network-online.target
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s agent --server https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 --token {{ hostvars[groups['master'][0]]['token'] | default(k3s_token) }} {{ extra_agent_args | default("") }}
ExecStart=/usr/local/bin/k3s agent --server https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 --token {{ hostvars[groups[group_name_master | default('master')][0]]['token'] | default(k3s_token) }} {{ extra_agent_args | default("") }}
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead

View File

@@ -0,0 +1,6 @@
---
# Indicates whether custom registries for k3s should be configured
# Possible values:
# - present
# - absent
state: present
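
The companion task in the next file inserts the raw custom_registries_yaml block into /etc/rancher/k3s/registries.yaml, so the variable is expected to hold valid k3s registry configuration. A minimal sketch, assuming the standard k3s mirrors syntax (the mirror URL is a placeholder):

custom_registries: true
custom_registries_yaml: |
  mirrors:
    docker.io:
      endpoint:
        - "https://registry-mirror.example.com"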

View File

@@ -0,0 +1,17 @@
---
- name: Create directory /etc/rancher/k3s
file:
path: "/etc/{{ item }}"
state: directory
mode: '0755'
loop:
- rancher
- rancher/k3s
- name: Insert registries into /etc/rancher/k3s/registries.yaml
blockinfile:
path: /etc/rancher/k3s/registries.yaml
block: "{{ custom_registries_yaml }}"
mode: '0600'
create: true

View File

@@ -0,0 +1,20 @@
---
# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: null
# Name of the master group
group_name_master: master
# yamllint disable rule:line-length
server_init_args: >-
{% if groups[group_name_master | default('master')] | length > 1 %}
{% if ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] %}
--cluster-init
{% else %}
--server https://{{ hostvars[groups[group_name_master | default('master')][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
{% endif %}
--token {{ k3s_token }}
{% endif %}
{{ extra_server_args | default('') }}
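
For a multi-server group, the expression above renders roughly as follows (angle-bracket values stand in for the real ones):

# First server:       --cluster-init --token <k3s_token> <extra_server_args>
# Remaining servers:  --server https://<first-server-ip>:6443 --token <k3s_token> <extra_server_args>
# Single server:      <extra_server_args> only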

View File

@@ -0,0 +1,18 @@
---
- name: Create k3s.service.d directory
file:
path: '{{ systemd_dir }}/k3s.service.d'
state: directory
owner: root
group: root
mode: '0755'
- name: Copy K3s http_proxy conf file
template:
src: "http_proxy.conf.j2"
dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
owner: root
group: root
mode: '0755'

View File

@@ -0,0 +1,27 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Download vip cloud provider manifest to first master
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/{{ kube_vip_cloud_provider_tag_version | default('main') }}/manifest/kube-vip-cloud-controller.yaml" # noqa yaml[line-length]
dest: "/var/lib/rancher/k3s/server/manifests/kube-vip-cloud-controller.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Copy kubevip configMap manifest to first master
template:
src: "kubevip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/kubevip.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

View File

@@ -1,23 +1,40 @@
---
- name: Clean previous runs of k3s-init
- name: Stop k3s-init
systemd:
name: k3s-init
state: stopped
failed_when: false
- name: Clean previous runs of k3s-init
# k3s-init won't work if the port is already in use
- name: Stop k3s
systemd:
name: k3s
state: stopped
failed_when: false
- name: Clean previous runs of k3s-init # noqa command-instead-of-module
# The systemd module does not support "reset-failed", so we need to resort to command.
command: systemctl reset-failed k3s-init
failed_when: false
changed_when: false
args:
warn: false # The ansible systemd module does not support reset-failed
- name: Deploy K3s http_proxy conf
include_tasks: http_proxy.yml
when: proxy_env is defined
- name: Deploy vip manifest
include_tasks: vip.yml
- name: Deploy metallb manifest
include_tasks: metallb.yml
tags: metallb
when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
- name: Deploy kube-vip manifest
include_tasks: kube-vip.yml
tags: kubevip
when: kube_vip_lb_ip_range is defined
- name: Init cluster inside the transient k3s-init service
command:
@@ -25,15 +42,16 @@
-p Restart=on-failure \
--unit=k3s-init \
k3s server {{ server_init_args }}"
creates: "{{ systemd_dir }}/k3s.service"
creates: "{{ systemd_dir }}/k3s-init.service"
- name: Verification
when: not ansible_check_mode
block:
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
command:
cmd: k3s kubectl get nodes -l "node-role.kubernetes.io/master=true" -o=jsonpath="{.items[*].metadata.name}"
register: nodes
until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups[group_name_master | default('master')] | length) # yamllint disable-line rule:line-length
retries: "{{ retry_count | default(20) }}"
delay: 10
changed_when: false
@@ -49,7 +67,6 @@
name: k3s-init
state: stopped
failed_when: false
when: not ansible_check_mode
- name: Copy K3s service file
register: k3s_service
@@ -63,9 +80,9 @@
- name: Enable and check K3s service
systemd:
name: k3s
daemon_reload: yes
daemon_reload: true
state: restarted
enabled: yes
enabled: true
- name: Wait for node-token
wait_for:
@@ -106,7 +123,7 @@
copy:
src: /etc/rancher/k3s/k3s.yaml
dest: "{{ ansible_user_dir }}/.kube/config"
remote_src: yes
remote_src: true
owner: "{{ ansible_user_id }}"
mode: "u=rw,g=,o="

View File

@@ -6,7 +6,7 @@
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: "Download to first master: manifest for metallb-{{ metal_lb_type }}"
ansible.builtin.get_url:
@@ -15,7 +15,7 @@
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Set image versions in manifest for metallb-{{ metal_lb_type }}
ansible.builtin.replace:
@@ -27,4 +27,4 @@
to: "metallb/speaker:{{ metal_lb_speaker_tag_version }}"
loop_control:
label: "{{ item.change }} => {{ item.to }}"
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

View File

@@ -6,7 +6,7 @@
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Download vip rbac manifest to first master
ansible.builtin.get_url:
@@ -15,7 +15,7 @@
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Copy vip manifest to first master
template:
@@ -24,4 +24,4 @@
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

View File

@@ -0,0 +1,4 @@
[Service]
Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }}
Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }}
Environment=NO_PROXY={{ proxy_env.NO_PROXY }}

View File

@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubevip
namespace: kube-system
data:
{% if kube_vip_lb_ip_range is string %}
{# kube_vip_lb_ip_range was used in the legacy way: single string instead of a list #}
{# => transform to list with single element #}
{% set kube_vip_lb_ip_range = [kube_vip_lb_ip_range] %}
{% endif %}
range-global: {{ kube_vip_lb_ip_range | join(',') }}
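
With the list form, the ranges are joined with commas; two ranges would render roughly as (illustrative):

data:
  range-global: 192.168.30.110-192.168.30.119,192.168.30.130-192.168.30.139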

View File

@@ -43,7 +43,7 @@ spec:
- name: vip_ddns
value: "false"
- name: svc_enable
value: "false"
value: "{{ 'true' if kube_vip_lb_ip_range is defined else 'false' }}"
- name: vip_leaderelection
value: "true"
- name: vip_leaseduration

View File

@@ -0,0 +1,6 @@
---
# Timeout to wait for MetalLB services to come up
metal_lb_available_timeout: 240s
# Name of the master group
group_name_master: master

View File

@@ -0,0 +1,114 @@
---
- name: Deploy Calico to cluster
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
run_once: true
block:
- name: Create manifests directory on first master
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0755
- name: "Download to first master: manifest for Tigera Operator and Calico CRDs"
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml"
dest: "/tmp/k3s/tigera-operator.yaml"
owner: root
group: root
mode: 0755
- name: Copy Calico custom resources manifest to first master
ansible.builtin.template:
src: "calico.crs.j2"
dest: /tmp/k3s/custom-resources.yaml
owner: root
group: root
mode: 0755
- name: Deploy or replace Tigera Operator
block:
- name: Deploy Tigera Operator
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/tigera-operator.yaml
register: create_operator
changed_when: "'created' in create_operator.stdout"
failed_when: "'Error' in create_operator.stderr and 'already exists' not in create_operator.stderr"
rescue:
- name: Replace existing Tigera Operator
ansible.builtin.command:
cmd: kubectl replace -f /tmp/k3s/tigera-operator.yaml
register: replace_operator
changed_when: "'replaced' in replace_operator.stdout"
failed_when: "'Error' in replace_operator.stderr"
- name: Wait for Tigera Operator resources
command: >-
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='tigera-operator'
--for=condition=Available=True
--timeout=7s
register: tigera_result
changed_when: false
until: tigera_result is succeeded
retries: 7
delay: 7
with_items:
- {name: tigera-operator, type: deployment}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
- name: Deploy Calico custom resources
block:
- name: Deploy custom resources for Calico
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/custom-resources.yaml
register: create_cr
changed_when: "'created' in create_cr.stdout"
failed_when: "'Error' in create_cr.stderr and 'already exists' not in create_cr.stderr"
rescue:
- name: Apply new Calico custom resource manifest
ansible.builtin.command:
cmd: kubectl apply -f /tmp/k3s/custom-resources.yaml
register: apply_cr
changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
failed_when: "'Error' in apply_cr.stderr"
- name: Wait for Calico system resources to be available
command: >-
{% if item.type == 'daemonset' %}
k3s kubectl wait pods
--namespace='{{ item.namespace }}'
--selector={{ item.selector }}
--for=condition=Ready
{% else %}
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='{{ item.namespace }}'
--for=condition=Available
{% endif %}
--timeout=7s
register: cr_result
changed_when: false
until: cr_result is succeeded
retries: 30
delay: 7
with_items:
- {name: calico-typha, type: deployment, namespace: calico-system}
- {name: calico-kube-controllers, type: deployment, namespace: calico-system}
- {name: csi-node-driver, type: daemonset, selector: 'k8s-app=csi-node-driver', namespace: calico-system}
- {name: calico-node, type: daemonset, selector: 'k8s-app=calico-node', namespace: calico-system}
- {name: calico-apiserver, type: deployment, namespace: calico-apiserver}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
- name: Patch Felix configuration for eBPF mode
ansible.builtin.command:
cmd: >
kubectl patch felixconfiguration default
--type='merge'
--patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}'
register: patch_result
changed_when: "'felixconfiguration.projectcalico.org/default patched' in patch_result.stdout"
failed_when: "'Error' in patch_result.stderr"
when: calico_ebpf

View File

@@ -0,0 +1,253 @@
---
- name: Prepare Cilium CLI on first master and deploy CNI
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
run_once: true
block:
- name: Create tmp directory on first master
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0755
- name: Check if Cilium CLI is installed
ansible.builtin.command: cilium version
register: cilium_cli_installed
failed_when: false
changed_when: false
ignore_errors: true
- name: Check for Cilium CLI version in command output
set_fact:
installed_cli_version: >-
{{
cilium_cli_installed.stdout_lines
| join(' ')
| regex_findall('cilium-cli: (v\d+\.\d+\.\d+)')
| first
| default('unknown')
}}
when: cilium_cli_installed.rc == 0
- name: Get latest stable Cilium CLI version file
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt"
dest: "/tmp/k3s/cilium-cli-stable.txt"
owner: root
group: root
mode: 0755
- name: Read Cilium CLI stable version from file
ansible.builtin.command: cat /tmp/k3s/cilium-cli-stable.txt
register: cli_ver
changed_when: false
- name: Log installed Cilium CLI version
ansible.builtin.debug:
msg: "Installed Cilium CLI version: {{ installed_cli_version | default('Not installed') }}"
- name: Log latest stable Cilium CLI version
ansible.builtin.debug:
msg: "Latest Cilium CLI version: {{ cli_ver.stdout }}"
- name: Determine if Cilium CLI needs installation or update
set_fact:
cilium_cli_needs_update: >-
{{
cilium_cli_installed.rc != 0 or
(cilium_cli_installed.rc == 0 and
installed_cli_version != cli_ver.stdout)
}}
- name: Install or update Cilium CLI
when: cilium_cli_needs_update
block:
- name: Set architecture variable
ansible.builtin.set_fact:
cli_arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"
- name: Download Cilium CLI and checksum
ansible.builtin.get_url:
url: "{{ cilium_base_url }}/cilium-linux-{{ cli_arch }}{{ item }}"
dest: "/tmp/k3s/cilium-linux-{{ cli_arch }}{{ item }}"
owner: root
group: root
mode: 0755
loop:
- ".tar.gz"
- ".tar.gz.sha256sum"
vars:
cilium_base_url: "https://github.com/cilium/cilium-cli/releases/download/{{ cli_ver.stdout }}"
- name: Verify the downloaded tarball
ansible.builtin.shell: |
cd /tmp/k3s && sha256sum --check cilium-linux-{{ cli_arch }}.tar.gz.sha256sum
args:
executable: /bin/bash
changed_when: false
- name: Extract Cilium CLI to /usr/local/bin
ansible.builtin.unarchive:
src: "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
dest: /usr/local/bin
remote_src: true
- name: Remove downloaded tarball and checksum file
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz"
- "/tmp/k3s/cilium-linux-{{ cli_arch }}.tar.gz.sha256sum"
- name: Wait for connectivity to kube VIP
ansible.builtin.command: ping -c 1 {{ apiserver_endpoint }}
register: ping_result
until: ping_result.rc == 0
retries: 21
delay: 1
ignore_errors: true
changed_when: false
- name: Fail if kube VIP not reachable
ansible.builtin.fail:
msg: "API endpoint {{ apiserver_endpoint }} is not reachable"
when: ping_result.rc != 0
- name: Test for existing Cilium install
ansible.builtin.command: k3s kubectl -n kube-system get daemonsets cilium
register: cilium_installed
failed_when: false
changed_when: false
ignore_errors: true
- name: Check existing Cilium install
when: cilium_installed.rc == 0
block:
- name: Check Cilium version
ansible.builtin.command: cilium version
register: cilium_version
failed_when: false
changed_when: false
ignore_errors: true
- name: Parse installed Cilium version
set_fact:
installed_cilium_version: >-
{{
cilium_version.stdout_lines
| join(' ')
| regex_findall('cilium image.+(\d+\.\d+\.\d+)')
| first
| default('unknown')
}}
- name: Determine if Cilium needs update
set_fact:
cilium_needs_update: >-
{{ 'v' + installed_cilium_version != cilium_tag }}
- name: Log result
ansible.builtin.debug:
msg: >
Installed Cilium version: {{ installed_cilium_version }},
Target Cilium version: {{ cilium_tag }},
Update needed: {{ cilium_needs_update }}
- name: Install Cilium
ansible.builtin.command: >-
{% if cilium_installed.rc != 0 %}
cilium install
{% else %}
cilium upgrade
{% endif %}
--version "{{ cilium_tag }}"
--helm-set operator.replicas="1"
{{ '--helm-set devices=' + cilium_iface if cilium_iface != 'auto' else '' }}
--helm-set ipam.operator.clusterPoolIPv4PodCIDRList={{ cluster_cidr }}
{% if cilium_mode == "native" or (cilium_bgp and cilium_exportPodCIDR != 'false') %}
--helm-set ipv4NativeRoutingCIDR={{ cluster_cidr }}
{% endif %}
--helm-set k8sServiceHost={{ apiserver_endpoint }}
--helm-set k8sServicePort="6443"
--helm-set routingMode={{ cilium_mode | default("native") }}
--helm-set autoDirectNodeRoutes={{ "true" if cilium_mode == "native" else "false" }}
--helm-set kubeProxyReplacement={{ kube_proxy_replacement | default("true") }}
--helm-set bpf.masquerade={{ enable_bpf_masquerade | default("true") }}
--helm-set bgpControlPlane.enabled={{ cilium_bgp | default("false") }}
--helm-set hubble.enabled={{ "true" if cilium_hubble else "false" }}
--helm-set hubble.relay.enabled={{ "true" if cilium_hubble else "false" }}
--helm-set hubble.ui.enabled={{ "true" if cilium_hubble else "false" }}
{% if kube_proxy_replacement is not false %}
--helm-set bpf.loadBalancer.algorithm={{ bpf_lb_algorithm | default("maglev") }}
--helm-set bpf.loadBalancer.mode={{ bpf_lb_mode | default("hybrid") }}
{% endif %}
environment:
KUBECONFIG: /home/{{ ansible_user }}/.kube/config
register: cilium_install_result
changed_when: cilium_install_result.rc == 0
when: cilium_installed.rc != 0 or cilium_needs_update
- name: Wait for Cilium resources
command: >-
{% if item.type == 'daemonset' %}
k3s kubectl wait pods
--namespace=kube-system
--selector='k8s-app=cilium'
--for=condition=Ready
{% else %}
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace=kube-system
--for=condition=Available
{% endif %}
--timeout=7s
register: cr_result
changed_when: false
until: cr_result is succeeded
retries: 30
delay: 7
with_items:
- {name: cilium-operator, type: deployment}
- {name: cilium, type: daemonset, selector: 'k8s-app=cilium'}
- {name: hubble-relay, type: deployment, check_hubble: true}
- {name: hubble-ui, type: deployment, check_hubble: true}
loop_control:
label: "{{ item.type }}/{{ item.name }}"
when: >-
not item.check_hubble | default(false) or (item.check_hubble | default(false) and cilium_hubble)
- name: Configure Cilium BGP
when: cilium_bgp
block:
- name: Copy BGP manifests to first master
ansible.builtin.template:
src: "cilium.crs.j2"
dest: /tmp/k3s/cilium-bgp.yaml
owner: root
group: root
mode: 0755
- name: Apply BGP manifests
ansible.builtin.command:
cmd: kubectl apply -f /tmp/k3s/cilium-bgp.yaml
register: apply_cr
changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
failed_when: "'is invalid' in apply_cr.stderr"
ignore_errors: true
- name: Print error message if BGP manifests application fails
ansible.builtin.debug:
msg: "{{ apply_cr.stderr }}"
when: "'is invalid' in apply_cr.stderr"
- name: Test for BGP config resources
ansible.builtin.command: "{{ item }}"
loop:
- k3s kubectl get CiliumBGPPeeringPolicy.cilium.io
- k3s kubectl get CiliumLoadBalancerIPPool.cilium.io
changed_when: false
loop_control:
label: "{{ item }}"
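
This whole path hinges on cilium_iface being defined, as gated in the role's main task list that follows. A hypothetical group_vars sketch for enabling it — the version and CIDR are illustrative, not recommendations:

cilium_iface: eth1
cilium_mode: native          # assumption: "native" or "tunnel"
cilium_tag: v1.14.5          # hypothetical version
cilium_bgp: false
cilium_hubble: true
cluster_cidr: 10.52.0.0/16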

View File

@@ -0,0 +1,20 @@
---
- name: Deploy calico
include_tasks: calico.yml
tags: calico
when: calico_iface is defined and cilium_iface is not defined
- name: Deploy cilium
include_tasks: cilium.yml
tags: cilium
when: cilium_iface is defined
- name: Deploy metallb pool
include_tasks: metallb.yml
tags: metallb
when: kube_vip_lb_ip_range is not defined and (not cilium_bgp or cilium_iface is not defined)
- name: Remove tmp directory used for manifests
file:
path: /tmp/k3s
state: absent

View File

@@ -5,23 +5,44 @@
state: directory
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups['master'] }}"
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Delete outdated metallb replicas
shell: |-
set -o pipefail
REPLICAS=$(k3s kubectl --namespace='metallb-system' get replicasets \
-l 'component=controller,app=metallb' \
-o jsonpath='{.items[0].spec.template.spec.containers[0].image}, {.items[0].metadata.name}' 2>/dev/null || true)
REPLICAS_SETS=$(echo ${REPLICAS} | grep -v '{{ metal_lb_controller_tag_version }}' | sed -e "s/^.*\s//g")
if [ -n "${REPLICAS_SETS}" ] ; then
for REPLICAS in "${REPLICAS_SETS}"
do
k3s kubectl --namespace='metallb-system' \
delete rs "${REPLICAS}"
done
fi
args:
executable: /bin/bash
changed_when: false
run_once: true
with_items: "{{ groups[group_name_master | default('master')] }}"
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups['master'] }}"
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups['master'] }}"
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Wait for MetalLB resources
@@ -66,7 +87,7 @@
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups['master'] }}"
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Apply metallb CRs
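
The MetalLB tasks draw their image versions and address pool from inventory variables; a hypothetical sketch (the names follow the variables used above, while the values and the metal_lb_ip_range name are assumptions):

metal_lb_type: native                       # assumption: "native" or "frr"
metal_lb_speaker_tag_version: v0.13.12      # hypothetical version
metal_lb_controller_tag_version: v0.13.12   # hypothetical version
metal_lb_ip_range: "192.168.30.80-192.168.30.90"   # assumed variable consumed by metallb.crs.j2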

View File

@@ -0,0 +1,41 @@
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
# Configures Calico networking.
calicoNetwork:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: {{ calico_blockSize | default('26') }}
cidr: {{ cluster_cidr | default('10.52.0.0/16') }}
encapsulation: {{ calico_encapsulation | default('VXLANCrossSubnet') }}
natOutgoing: {{ calico_natOutgoing | default('Enabled') }}
nodeSelector: {{ calico_nodeSelector | default('all()') }}
nodeAddressAutodetectionV4:
interface: {{ calico_iface }}
linuxDataplane: {{ 'BPF' if calico_ebpf else 'Iptables' }}
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
{% if calico_ebpf %}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
data:
KUBERNETES_SERVICE_HOST: '{{ apiserver_endpoint }}'
KUBERNETES_SERVICE_PORT: '6443'
{% endif %}
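
The Calico path is selected when calico_iface is defined and cilium_iface is not, as gated earlier in this changeset. A hypothetical group_vars sketch for it — the tag is illustrative:

calico_iface: eth1
calico_ebpf: false
calico_tag: v3.27.0          # hypothetical version
cluster_cidr: 10.52.0.0/16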

View File

@@ -0,0 +1,29 @@
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
name: 01-bgp-peering-policy
spec: # CiliumBGPPeeringPolicySpec
virtualRouters: # []CiliumBGPVirtualRouter
- localASN: {{ cilium_bgp_my_asn }}
exportPodCIDR: {{ cilium_exportPodCIDR | default('true') }}
neighbors: # []CiliumBGPNeighbor
- peerAddress: '{{ cilium_bgp_peer_address + "/32"}}'
peerASN: {{ cilium_bgp_peer_asn }}
eBGPMultihopTTL: 10
connectRetryTimeSeconds: 120
holdTimeSeconds: 90
keepAliveTimeSeconds: 30
gracefulRestart:
enabled: true
restartTimeSeconds: 120
serviceSelector:
matchExpressions:
- {key: somekey, operator: NotIn, values: ['never-used-value']}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
name: "01-lb-pool"
spec:
cidrs:
- cidr: "{{ cilium_bgp_lb_cidr }}"
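
These two resources expect the BGP variables referenced above to be set; a minimal sketch with placeholder ASNs and addresses:

cilium_bgp: true
cilium_bgp_my_asn: 64513
cilium_bgp_peer_asn: 64512
cilium_bgp_peer_address: 192.168.30.1
cilium_bgp_lb_cidr: "192.168.31.0/24"
cilium_exportPodCIDR: "true"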

View File

@@ -1,4 +1,5 @@
---
- name: reboot server
- name: Reboot server
become: true
reboot:
listen: reboot server

View File

@@ -0,0 +1,4 @@
---
secure_path:
RedHat: '/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
Suse: '/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin'

View File

@@ -1,34 +1,37 @@
---
- name: Set same timezone on every server
timezone:
community.general.timezone:
name: "{{ system_timezone }}"
when: (system_timezone is defined) and (system_timezone != "Your/Timezone")
- name: Set SELinux to disabled state
selinux:
ansible.posix.selinux:
state: disabled
when: ansible_os_family == "RedHat"
- name: Enable IPv4 forwarding
sysctl:
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: "1"
state: present
reload: yes
reload: true
tags: sysctl
- name: Enable IPv6 forwarding
sysctl:
ansible.posix.sysctl:
name: net.ipv6.conf.all.forwarding
value: "1"
state: present
reload: yes
reload: true
tags: sysctl
- name: Enable IPv6 router advertisements
sysctl:
ansible.posix.sysctl:
name: net.ipv6.conf.all.accept_ra
value: "2"
state: present
reload: yes
reload: true
tags: sysctl
- name: Add br_netfilter to /etc/modules-load.d/
copy:
@@ -38,28 +41,29 @@
when: ansible_os_family == "RedHat"
- name: Load br_netfilter
modprobe:
community.general.modprobe:
name: br_netfilter
state: present
when: ansible_os_family == "RedHat"
- name: Set bridge-nf-call-iptables (just to be sure)
sysctl:
ansible.posix.sysctl:
name: "{{ item }}"
value: "1"
state: present
reload: yes
reload: true
when: ansible_os_family == "RedHat"
loop:
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
tags: sysctl
- name: Add /usr/local/bin to sudo secure_path
lineinfile:
line: 'Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
line: 'Defaults secure_path = {{ secure_path[ansible_os_family] }}'
regexp: "Defaults(\\s)*secure_path(\\s)*="
state: present
insertafter: EOF
path: /etc/sudoers
validate: 'visudo -cf %s'
when: ansible_os_family == "RedHat"
when: ansible_os_family in [ "RedHat", "Suse" ]

View File

@@ -1,5 +1,13 @@
---
- name: reboot containers
command:
"pct reboot {{ item }}"
- name: Reboot containers
block:
- name: Get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids: >-
{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}
listen: reboot containers
- name: Reboot container
command: "pct reboot {{ item }}"
loop: "{{ proxmox_lxc_filtered_ids }}"
changed_when: true
listen: reboot containers
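
To illustrate the filter chain above, each LXC config path is reduced to its container id:

# "/etc/pve/lxc/101.conf" | split("/") | last   -> "101.conf"
# "101.conf"              | split(".") | first  -> "101"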

View File

@@ -1,21 +1,15 @@
---
- name: check for container files that exist on this host
- name: Check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: filter out files that do not exist
- name: Filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
# used for the reboot handler
- name: get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids:
'{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}'
# https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185
- name: Ensure lxc config has the right apparmor profile
lineinfile:

View File

@@ -1,3 +1,4 @@
---
- name: reboot
- name: Reboot
reboot:
listen: reboot

View File

@@ -17,21 +17,27 @@
when:
grep_cpuinfo_raspberrypi.rc == 0 or grep_device_tree_model_raspberrypi.rc == 0
- name: Set detected_distribution to Raspbian
- name: Set detected_distribution to Raspbian (ARM64 on Raspbian, Debian Buster/Bullseye/Bookworm)
set_fact:
detected_distribution: Raspbian
when: >
raspberry_pi|default(false) and
( ansible_facts.lsb.id|default("") == "Raspbian" or
ansible_facts.lsb.description|default("") is match("[Rr]aspbian.*") )
vars:
allowed_descriptions:
- "[Rr]aspbian.*"
- "Debian.*buster"
- "Debian.*bullseye"
- "Debian.*bookworm"
when:
- ansible_facts.architecture is search("aarch64")
- raspberry_pi|default(false)
- ansible_facts.lsb.description|default("") is match(allowed_descriptions | join('|'))
- name: Set detected_distribution to Raspbian (ARM64 on Debian Buster)
- name: Set detected_distribution to Raspbian (ARM64 on Debian Bookworm)
set_fact:
detected_distribution: Raspbian
when:
- ansible_facts.architecture is search("aarch64")
- raspberry_pi|default(false)
- ansible_facts.lsb.description|default("") is match("Debian.*buster")
- ansible_facts.lsb.description|default("") is match("Debian.*bookworm")
- name: Set detected_distribution_major_version
set_fact:
@@ -39,28 +45,16 @@
when:
- detected_distribution | default("") == "Raspbian"
- name: Set detected_distribution to Raspbian (ARM64 on Debian Bullseye)
set_fact:
detected_distribution: Raspbian
when:
- ansible_facts.architecture is search("aarch64")
- raspberry_pi|default(false)
- ansible_facts.lsb.description|default("") is match("Debian.*bullseye")
- name: execute OS related tasks on the Raspberry Pi - {{ action }}
- name: Execute OS related tasks on the Raspberry Pi - {{ action_ }}
include_tasks: "{{ item }}"
with_first_found:
- "{{ action }}/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml"
- "{{ action }}/{{ detected_distribution }}.yml"
- "{{ action }}/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
- "{{ action }}/{{ ansible_distribution }}.yml"
- "{{ action }}/default.yml"
- "{{ action_ }}/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml"
- "{{ action_ }}/{{ detected_distribution }}.yml"
- "{{ action_ }}/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
- "{{ action_ }}/{{ ansible_distribution }}.yml"
- "{{ action_ }}/default.yml"
vars:
action: >-
{% if state == "present" -%}
setup
{%- else -%}
teardown
{%- endif %}
action_: >-
{% if state == "present" %}setup{% else %}teardown{% endif %}
when:
- raspberry_pi|default(false)

View File

@@ -8,20 +8,22 @@
notify: reboot
- name: Install iptables
apt: name=iptables state=present
apt:
name: iptables
state: present
- name: Flush iptables before changing to iptables-legacy
iptables:
flush: true
- name: Changing to iptables-legacy
alternatives:
community.general.alternatives:
path: /usr/sbin/iptables-legacy
name: iptables
register: ip4_legacy
- name: Changing to ip6tables-legacy
alternatives:
community.general.alternatives:
path: /usr/sbin/ip6tables-legacy
name: ip6tables
register: ip6_legacy

View File

@@ -2,7 +2,7 @@
- name: Enable cgroup via boot commandline if not already enabled for Rocky
lineinfile:
path: /boot/cmdline.txt
backrefs: yes
backrefs: true
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot

View File

@@ -2,7 +2,7 @@
- name: Enable cgroup via boot commandline if not already enabled for Ubuntu on a Raspberry Pi
lineinfile:
path: /boot/firmware/cmdline.txt
backrefs: yes
backrefs: true
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot

View File

@@ -3,7 +3,7 @@
systemd:
name: "{{ item }}"
state: stopped
enabled: no
enabled: false
failed_when: false
with_items:
- k3s
@@ -45,10 +45,22 @@
- /var/lib/rancher/k3s
- /var/lib/rancher/
- /var/lib/cni/
- /etc/cni/net.d
- name: Remove K3s http_proxy files
file:
name: "{{ item }}"
state: absent
with_items:
- "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
- "{{ systemd_dir }}/k3s.service.d"
- "{{ systemd_dir }}/k3s-node.service.d/http_proxy.conf"
- "{{ systemd_dir }}/k3s-node.service.d"
when: proxy_env is defined
- name: Reload daemon_reload
systemd:
daemon_reload: yes
daemon_reload: true
- name: Remove tmp directory used for manifests
file:
@@ -67,18 +79,18 @@
content: "{{ lookup('template', 'templates/rc.local.j2') }}"
create: false
state: absent
when: proxmox_lxc_configure and rclocal.stat.exists
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Check rc.local for cleanup
become: true
slurp:
src: /etc/rc.local
register: rcslurp
when: proxmox_lxc_configure and rclocal.stat.exists
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Cleanup rc.local if we only have a Shebang line
become: true
file:
path: /etc/rc.local
state: absent
when: proxmox_lxc_configure and rclocal.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1
when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1

View File

@@ -9,7 +9,7 @@
check_mode: false
- name: Umount filesystem
mount:
ansible.posix.mount:
path: "{{ item }}"
state: unmounted
with_items:

View File

@@ -1,5 +0,0 @@
---
- name: reboot containers
command:
"pct reboot {{ item }}"
loop: "{{ proxmox_lxc_filtered_ids }}"

View File

@@ -0,0 +1 @@
../../proxmox_lxc/handlers/main.yml

View File

@@ -1,21 +1,15 @@
---
- name: check for container files that exist on this host
- name: Check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: filter out files that do not exist
- name: Filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
# used for the reboot handler
- name: get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids:
'{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}'
- name: Remove LXC apparmor profile
lineinfile:
dest: "{{ item }}"

View File

@@ -1,15 +1,17 @@
---
- hosts: proxmox
- name: Prepare Proxmox cluster
hosts: proxmox
gather_facts: true
become: yes
remote_user: "{{ proxmox_lxc_ssh_user }}"
become: true
environment: "{{ proxy_env | default({}) }}"
roles:
- role: proxmox_lxc
when: proxmox_lxc_configure
- hosts: k3s_cluster
gather_facts: yes
- name: Prepare k3s nodes
hosts: k3s_cluster
gather_facts: true
environment: "{{ proxy_env | default({}) }}"
roles:
- role: lxc
become: true
@@ -20,18 +22,38 @@
become: true
- role: raspberrypi
become: true
- role: k3s_custom_registries
become: true
when: custom_registries
- hosts: master
- name: Setup k3s servers
hosts: master
environment: "{{ proxy_env | default({}) }}"
roles:
- role: k3s/master
- role: k3s_server
become: true
- hosts: node
- name: Setup k3s agents
hosts: node
environment: "{{ proxy_env | default({}) }}"
roles:
- role: k3s/node
- role: k3s_agent
become: true
- hosts: master
- name: Configure k3s cluster
hosts: master
environment: "{{ proxy_env | default({}) }}"
roles:
- role: k3s/post
- role: k3s_server_post
become: true
- name: Storing kubeconfig in the playbook directory
hosts: master
environment: "{{ proxy_env | default({}) }}"
tasks:
- name: Copying kubeconfig from {{ hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] }}
ansible.builtin.fetch:
src: "{{ ansible_user_dir }}/.kube/config"
dest: ./kubeconfig
flat: true
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']