Compare commits


112 Commits

Author SHA1 Message Date
Ethan Shold
2cd03f38f2 Add calico-apiserver check 2024-01-17 10:14:04 -06:00
sholdee
8e1265fbae Merge branch 'techno-tim:master' into calico 2024-01-17 09:46:24 -06:00
Balázs Hasprai
cddbfc8e40 Update truthy values to true/false only, #204 (#387)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2024-01-15 12:43:44 -06:00
sholdee
f6ee0c72ef Merge branch 'techno-tim:master' into calico 2024-01-14 01:40:08 -06:00
Ethan Shold
e7ba494a00 Add Tigera Operator/Calico CNI option
Small tweak to reduce delta from head

Set calico option to be disabled by default

Add rescue blocks in case updating existing

Refactor items and update comments

Refactor and consolidate calico.yml into block

Refactor to use template for Calico CRs

Revert use_calico to false

Template blockSize

Align default cidr in template with all.yml sample

Apply upstream version tags

Revert to current ver tags. Upstream's don't work.

Update template address detection

Add Tigera Operator/Calico CNI option
2024-01-14 01:31:42 -06:00
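For context, the Calico path templates a Tigera `Installation` custom resource. A minimal sketch of what such a template can look like (field values illustrative; only `calico_cidr` is confirmed by the sample vars diff below):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26                # templated, per the "Template blockSize" item above
        cidr: "{{ calico_cidr }}"    # kept in line with the default cidr in all.yml
        encapsulation: VXLAN
        natOutgoing: Enabled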
Techno Tim
70e658cf98 feat(k3s): Updated to v1.25.16+k3s4 (#407) 2024-01-12 21:34:23 -06:00
dependabot[bot]
7badfbd7bd chore(deps): bump netaddr from 0.9.0 to 0.10.0 (#411)
Bumps [netaddr](https://github.com/drkjam/netaddr) from 0.9.0 to 0.10.0.
- [Release notes](https://github.com/drkjam/netaddr/releases)
- [Changelog](https://github.com/netaddr/netaddr/blob/master/CHANGELOG)
- [Commits](https://github.com/drkjam/netaddr/compare/0.9.0...0.10.0)

---
updated-dependencies:
- dependency-name: netaddr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-01 20:46:52 -06:00
Balázs Hasprai
e880f08d26 Add option for install behind http_proxy (#384)
* Add option for install behind http_proxy

* Tidy up http_proxy usage
2023-10-21 00:18:36 +00:00
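A hedged sketch of what such a proxy option usually looks like in group vars; the variable name `proxy_env` is hypothetical, but the k3s install script honors these environment variables:

proxy_env:                                        # hypothetical variable name
  HTTP_PROXY: "http://proxy.example.com:3128"
  HTTPS_PROXY: "http://proxy.example.com:3128"
  NO_PROXY: "localhost,127.0.0.1,192.168.30.0/24"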
Balázs Hasprai
95b2836dfc Add option to disable MetalLB, for use w/ ext LBs (#383)
* Add option to disable MetalLB, for use w/ ext LBs

* Add option to disable MetalLB, for use w/ ext LBs - add defaults

* Skip MetalLB with tags instead of flag
2023-10-18 22:07:07 +00:00
balazshasprai
505c2eeff2 Add option for custom registries / mirrors (#382) 2023-10-18 03:33:30 +00:00
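k3s reads mirror configuration from /etc/rancher/k3s/registries.yaml; a minimal sketch of the file this option would template (endpoint illustrative):

mirrors:
  docker.io:
    endpoint:
      - "https://mirror.example.com:5000"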
balazshasprai
9b6d551dd6 Expand secure_path with support for Suse (#381) 2023-10-13 04:14:47 +00:00
dependabot[bot]
a64e882fb7 chore(deps): bump pre-commit-hooks from 4.4.0 to 4.5.0 (#379)
Bumps [pre-commit-hooks](https://github.com/pre-commit/pre-commit-hooks) from 4.4.0 to 4.5.0.
- [Release notes](https://github.com/pre-commit/pre-commit-hooks/releases)
- [Changelog](https://github.com/pre-commit/pre-commit-hooks/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit-hooks/compare/v4.4.0...v4.5.0)

---
updated-dependencies:
- dependency-name: pre-commit-hooks
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 15:39:23 +00:00
johnnyrun
38e773315b sysctl tags (#373)
* sysctl tags

* lost tag

---------

Co-authored-by: Gianni <gianni@chainlabo.com>
Co-authored-by: Gianni Carabelli <gianni.carabelli@skytv.it>
2023-10-09 10:00:31 -05:00
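Tagging the sysctl tasks lets them be run or skipped selectively with --tags/--skip-tags. A generic sketch of the pattern (task body illustrative, not the repo's exact task):

- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true
  tags: sysctl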
dependabot[bot]
70ddf7b63c chore(deps): bump netaddr from 0.8.0 to 0.9.0 (#365)
Bumps [netaddr](https://github.com/drkjam/netaddr) from 0.8.0 to 0.9.0.
- [Changelog](https://github.com/netaddr/netaddr/blob/master/CHANGELOG)
- [Commits](https://github.com/drkjam/netaddr/commits)

---
updated-dependencies:
- dependency-name: netaddr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-21 12:39:15 -05:00
dependabot[bot]
fb3128a783 chore(deps): bump ansible-core from 2.15.3 to 2.15.4 (#362)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.15.3 to 2.15.4.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.15.3...v2.15.4)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-14 13:48:59 -05:00
Techno Tim
2e318e0862 feat(k3s): Updated to v1.25.12+k3s1 (#351) 2023-08-18 08:59:08 -05:00
dependabot[bot]
0607eb8aa4 chore(deps): bump ansible-core from 2.15.2 to 2.15.3 (#349)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.15.2 to 2.15.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.15.2...v2.15.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-16 13:27:35 -05:00
Marek Pilch
a9904d1562 fixes: ERROR! The requested handler <'Reboot containers' / 'Reboot se… (#348)
* fixes: ERROR! The requested handler <'Reboot containers' / 'Reboot server' / 'Reboot>' was not found in either the main handlers list nor in the listening handlers list

* Update main.yml
2023-08-14 17:37:20 -05:00
Techno Tim
9707bc8a58 fix(docs): updated kube-vip url (#341) 2023-08-14 17:30:42 +00:00
Phil Bolduc
e635bd2626 Change reboot.sh to be executable (#344)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-08-07 11:29:03 -05:00
dependabot[bot]
1aabb5a927 chore(deps): bump jsonpatch from 1.32 to 1.33 (#318) 2023-07-23 19:32:01 +00:00
Christian Berendt
215690b55b Replace hardcoded 'master' group name with 'group_name_master' variable (#337)
For improved flexibility and maintainability.

* Update tasks in node role to use 'group_name_master' variable instead
  of hardcoded 'master' group name
* Update tasks in master role to use 'group_name_master' variable instead
  of hardcoded 'master' group name
* Update tasks in post role to use 'group_name_master' variable instead of
  hardcoded 'master' group name

Signed-off-by: Christian Berendt <berendt@23technologies.cloud>
2023-07-21 16:37:57 -05:00
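A sketch of the pattern this commit applies (task body and fallback shown for illustration only):

- name: Read a fact from the first master node
  ansible.builtin.debug:
    msg: "First master is {{ groups[group_name_master | default('master')][0] }}"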
Simon Leiner
bd44a9b126 Remove unused variable metal_lb_frr_tag_version (#331) 2023-07-21 05:06:04 +00:00
dependabot[bot]
8d61fe81e5 chore(deps): bump pyyaml from 6.0 to 6.0.1 (#334) 2023-07-20 23:20:55 -05:00
dependabot[bot]
c0ff304f22 chore(deps): bump ansible-core from 2.14.5 to 2.15.2 (#335)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.5 to 2.15.2.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.5...v2.15.2)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-20 21:54:40 -05:00
Techno Tim
83077ecdd1 Fix CI - python version (#338)
* fix(README): Updated docs link

* fix(ci): set PYTHON_VERSION to 3.11
2023-07-20 21:19:53 -05:00
Simon Leiner
33ae0d4970 Fix CI (#332)
* Update pre-commit actions

This was done by running "pre-commit autoupdate --freeze".

* Remove pre-commit only dependencies from requirements.in

Including them in the file would create the illusion that those were the
versions actually used in CI, but they are not. The exact versions are
determined by the pre-commit hooks which are pinned in
.pre-commit-config.yaml.

* Ansible Lint: Fix role-name[path]

* Ansible Lint: Fix name[play]

* Ansible Lint: Fix key-order[task]

* Ansible Lint: Fix jinja[spacing]

* Ansible Lint: Fix no-free-form

* Ansible Lint: Fix var-naming[no-reserved]

* Ansible Lint: Fix yaml[comments]

* Ansible Lint: Fix yaml[line-length]

* Ansible Lint: Fix name[casing]

* Ansible Lint: Fix no-changed-when

* Ansible Lint: Fix fqcn[action]

* Ansible Lint: Fix args[module]

* Improve task naming
2023-07-20 10:50:02 -05:00
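For illustration, one generic task showing several of the lint fixes listed above in combination (not one of the repo's actual tasks):

- name: Check k3s version                    # name[casing]: task names start with a capital
  ansible.builtin.command: k3s --version     # fqcn[action] + no-free-form: explicit FQCN module
  register: k3s_version_output
  changed_when: false                        # no-changed-when: read-only commands never report a change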
Techno Tim
edd4838407 feat(k3s): Updated to v1.25 (#187)
* feat(k3s): Updated to v1.25.4+k3s1

* feat(k3s): Updated to v1.25.5+k3s1

* feat(k3s): Updated to v1.25.7+k3s1

* feat(k3s): Updated to v1.25.8+k3s1

* feat(k3s): Updated to v1.25.9+k3s1

* feat(kube-vip): Update to v0.5.12
2023-04-27 23:09:46 -05:00
dependabot[bot]
5c79ea9b71 chore(deps): bump ansible-core from 2.14.4 to 2.14.5 (#287)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.4 to 2.14.5.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.4...v2.14.5)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 14:19:52 -05:00
dependabot[bot]
3d204ad851 chore(deps): bump yamllint from 1.30.0 to 1.31.0 (#284)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.30.0 to 1.31.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.30.0...v1.31.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-04-24 11:17:02 -05:00
dependabot[bot]
13bd868faa chore(deps): bump ansible-lint from 6.14.6 to 6.15.0 (#285)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.6 to 6.15.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.6...v6.15.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-23 23:10:28 -05:00
dependabot[bot]
c564a8562a chore(deps): bump ansible-lint from 6.14.3 to 6.14.6 (#275)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.3 to 6.14.6.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.3...v6.14.6)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-14 23:34:03 -05:00
Sam Schmit-Van Werweke
0d6d43e7ca Bump k3s version to v1.24.12+k3s1 (#269) 2023-04-02 21:31:20 -05:00
dependabot[bot]
c0952288c2 chore(deps): bump ansible-core from 2.14.3 to 2.14.4 (#265)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.3 to 2.14.4.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.3...v2.14.4)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 15:07:16 -05:00
dependabot[bot]
1c9796e98b chore(deps): bump ansible-lint from 6.14.2 to 6.14.3 (#264)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.2 to 6.14.3.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.2...v6.14.3)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 12:18:52 -05:00
ThePCGeek
288c4089e0 Pc geek fix proxmox lxc (#263)
* (fix): correct var

The var registered for the rc.local check is rcfile, but the when condition referenced rclocal, which was undefined. Changed to rcfile to correct this.

* add vars file for proxmox host group

* remove remote_user from site.yml for proxmox

* added newline to fix lint issue

* fix added ---

---------

Co-authored-by: ThePCGeek <thepcgeek1776@gmail.com>
2023-03-25 22:02:59 -05:00
ThePCGeek
49f0a2ce6b (fix): correct var (#262)
The var registered for the rc.local check is rcfile, but the when condition referenced rclocal, which was undefined. Changed to rcfile to correct this.
2023-03-25 20:41:04 -05:00
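The pattern being fixed, sketched with illustrative task bodies: the when condition must reference the variable name that was actually registered.

- name: Check if rc.local exists
  ansible.builtin.stat:
    path: /etc/rc.local
  register: rcfile                  # this name, not "rclocal", is what conditions must use

- name: Act only when rc.local is present
  ansible.builtin.debug:
    msg: "rc.local found"
  when: rcfile.stat.exists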
dependabot[bot]
6c4621bd56 chore(deps): bump yamllint from 1.29.0 to 1.30.0 (#261)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.29.0 to 1.30.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.29.0...v1.30.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-24 02:46:16 +00:00
Techno Tim
3e16ab6809 Chore: Update kube vip and MetalLB (#257)
* chore(dependencies): updated metallb to v0.13.9

* chore(dependencies): updated kube-vip to v0.5.11
2023-03-15 04:32:26 +00:00
Techno Tim
83fe50797c feat(k3s): Updated to v1.24.11+k3s1 (#255) 2023-03-14 04:04:06 +00:00
dependabot[bot]
2db0b3024c chore(deps): bump ansible-lint from 6.14.1 to 6.14.2 (#249)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.1 to 6.14.2.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.1...v6.14.2)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-13 08:47:43 -05:00
dependabot[bot]
6b2af77e74 chore(deps): bump ansible-lint from 6.14.0 to 6.14.1 (#248)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.0 to 6.14.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.0...v6.14.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-09 08:38:59 -06:00
dependabot[bot]
d1d1bc3d91 chore(deps): bump ansible-lint from 6.13.1 to 6.14.0 (#246)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.13.1 to 6.14.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.13.1...v6.14.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-05 22:33:16 -06:00
Noms
3a1a7a19aa Fix LXC container implementations (#231)
* Need to become to reboot

* Fix rc.local insertion of script

* Fix syntax

Add new line to lxc.yml

* Remove need to set fact

* Add reset for LXC container config

* Fix syntax

Its always the newlines..

* remove fact setting from reset task

We should mirror the deployment task

* Proxmox LXC reset functions

* Handle if rc.local already has data

* Dont compare literal

* Cleanup Erroneous newline

* Handle rc.local not present on a hybrid cluster

* Update roles/reset/tasks/main.yml

Co-authored-by: Simon Leiner <simon@leiner.me>

* Update roles/lxc/tasks/main.yml

Co-authored-by: Simon Leiner <simon@leiner.me>

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
Co-authored-by: Simon Leiner <simon@leiner.me>
2023-03-03 11:28:14 -06:00
dependabot[bot]
030eeb4b75 chore(deps): bump ansible-core from 2.14.2 to 2.14.3 (#244)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.2 to 2.14.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.2...v2.14.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 21:59:16 -06:00
Techno Tim
4aeeb124ef docs(README): Removed note about ansible version (#243) 2023-02-26 14:01:21 -06:00
Timothy Stewart
511c020bec docs(README): Updated with a note about ansible version on control node 2023-02-25 10:09:05 -06:00
dependabot[bot]
c47da38b53 chore(deps): bump ansible-lint from 6.12.1 to 6.13.1 (#240)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.12.1 to 6.13.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.12.1...v6.13.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 20:04:38 +00:00
Simon Leiner
6448948e9f Fix dual-stack clusters with multiple master nodes (#237)
* Test IPv6 scenario with two master nodes

* Fix IPv6 multimaster setup

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-02-20 05:24:19 +00:00
Simon Leiner
7bc198ab26 Pick kube-vip interface automatically by default (#238)
* Pick kube-vip interface automatically by default

* molecule: Fix ipv6 scenario

* Choose a more restrictive molecule timeout in CI
2023-02-20 04:08:36 +00:00
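A hedged sketch of how such a default can be derived from gathered facts rather than requiring the user to name an interface (variable name hypothetical):

kube_vip_iface: "{{ ansible_facts['default_ipv4']['interface'] | default('eth0') }}"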
Simon Leiner
65bbc8e2ac Simplify download and patching of MetalLB manifests (#239)
This removes duplicated code and cleans up Ansible log lines a bit.
2023-02-19 21:34:22 -06:00
Mike Thomas
dc2976e7f6 Metallb BGP support (#212)
* Add metallb frr and bgp support

* Set metallb mode to layer2 as default in sample

* Add BGP resource check

* Add automatic downloading of metallb-frr

* Remove frr manifest
2023-02-09 23:58:58 -06:00
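The variables this adds appear in the sample all.yml diff further down; a BGP setup would set them along these lines (peer values illustrative):

metal_lb_type: "frr"                       # frr or native
metal_lb_mode: "bgp"                       # layer2 or bgp
metal_lb_bgp_my_asn: "64513"
metal_lb_bgp_peer_asn: "64512"
metal_lb_bgp_peer_address: "192.168.30.1"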
dependabot[bot]
5a7ba98968 chore(deps): bump ansible-lint from 6.12.0 to 6.12.1 (#226)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.12.0 to 6.12.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.12.0...v6.12.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-02-06 23:23:42 -06:00
Simon Leiner
10c6ef1d57 Download MetalLB CRDs for respective versions (#225)
* Download MetalLB CRDs for respective versions

This ensures that the CRDs match the actual MetalLB controller version,
as given by the user.

* Download VIP RBAC definitions for respective version
2023-02-06 22:24:02 -06:00
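A minimal sketch of the technique (the URL follows MetalLB's documented manifest layout; task body illustrative):

- name: Download the MetalLB manifest matching the pinned version
  ansible.builtin.get_url:
    url: "https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-native.yaml"
    dest: /tmp/metallb-native.yaml
    mode: "0644"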
Timothy Stewart
ed4d888e3d fix(gitignore): ignore ansible.cfg 2023-02-05 22:09:50 -06:00
Simon Leiner
49d6d484ae Override less Ansible settings (#224)
* Do not escalate privileges by default

* Do not disable host key checking by default

* Do not mute deprecation warnings by default

* Provide ansible.cfg only as an example

The new example file does ONLY contain options that are related to this
playbook.

* Remove explicit inventory path from scripts

The inventory file is specified in ansible.cfg, see README.md.
2023-02-05 21:52:44 -06:00
dependabot[bot]
96c49c864e chore(deps): bump ansible-lint from 6.11.0 to 6.12.0 (#222)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.11.0 to 6.12.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.11.0...v6.12.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-03 23:11:31 -06:00
dependabot[bot]
60adb1de42 chore(deps): bump ansible-core from 2.14.1 to 2.14.2 (#220)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.1 to 2.14.2.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.1...v2.14.2)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-30 20:57:15 -06:00
Techno Tim
e023808f2f feat(k3s): Updated to v1.24.10+k3s1 (#215) 2023-01-29 21:25:09 -06:00
acdoussan
511ec493d6 add support for proxmox lxc containers (#209)
Co-authored-by: Adam Doussan <acdoussan@Adams-MacBook-Pro.local>
2023-01-29 21:23:31 -06:00
Simon Leiner
be3e72e173 Do not rely on ansible_user (#214)
* Apply "become" on roles instead of plays

This leads to facts being gathered for the "regular" login user, instead
of root.

* Do not rely on ansible_user

Instead of reading ansible_user (which may or may not be defined), this
patch lets the roles rely on Ansible facts [1].

[1]: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html
2023-01-29 21:20:25 -06:00
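A sketch of the substitution the commit describes, using standard Ansible facts (variable names illustrative):

login_user: "{{ ansible_facts['user_id'] }}"    # the user Ansible actually logged in as
login_home: "{{ ansible_facts['user_dir'] }}"   # that user's home directory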
dependabot[bot]
e33cbe52c1 chore(deps): bump ansible-lint from 6.8.6 to 6.11.0 (#213)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.8.6 to 6.11.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.8.6...v6.11.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-29 16:06:26 -06:00
dependabot[bot]
c06af919f3 chore(deps): bump yamllint from 1.28.0 to 1.29.0 (#201)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.28.0 to 1.29.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.28.0...v1.29.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 22:56:45 -06:00
Techno Tim
b86384c439 fix(raspberrypi): Fix handler name (#200) 2023-01-10 21:26:27 -06:00
Techno Tim
bf2bd1edc5 feat(k3s): Updated to v1.24.9+k3s1 (#197) 2023-01-06 18:53:40 -06:00
irish1986
e98e3ee77c Split manifest into separate task for ease of use (#191) 2023-01-01 23:04:22 -06:00
dependabot[bot]
78f7a60378 chore(deps): bump pre-commit from 2.20.0 to 2.21.0 (#188)
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 2.20.0 to 2.21.0.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v2.20.0...v2.21.0)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-25 23:50:56 -06:00
dependabot[bot]
e64fea760d chore(deps): bump ansible-core from 2.13.5 to 2.14.1 (#176)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.13.5 to 2.14.1.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.13.5...v2.14.1)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 22:30:24 -06:00
dependabot[bot]
764e32c778 chore(deps): bump molecule from 4.0.3 to 4.0.4 (#175)
Bumps [molecule](https://github.com/ansible-community/molecule) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: molecule
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 22:26:07 -06:00
Techno Tim
e6cf14ea78 K3s 1 24 8 (#171)
* chore(dependencies): Updated actions

* chore(dependencies): updated to k3s to v1.24.8+k3s1 and kube-vip to v0.5.7
2022-12-02 23:14:06 -06:00
theonejj
da049dcc28 fix: config warning callback_whitelist (#170)
Co-authored-by: Jan Jansen <j.jansen@powerspex.nl>
2022-12-01 23:09:02 -06:00
Sherif Metwally
2604caa483 "command" module no longer supports "warn" argument (#169)
* "command" module no longer supports "warn" argument

* correct indentation lint errors
2022-11-29 20:26:01 -06:00
dependabot[bot]
82d820805f chore(deps): bump pre-commit-hooks from 4.3.0 to 4.4.0 (#168)
Bumps [pre-commit-hooks](https://github.com/pre-commit/pre-commit-hooks) from 4.3.0 to 4.4.0.
- [Release notes](https://github.com/pre-commit/pre-commit-hooks/releases)
- [Changelog](https://github.com/pre-commit/pre-commit-hooks/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit-hooks/compare/v4.3.0...v4.4.0)

---
updated-dependencies:
- dependency-name: pre-commit-hooks
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-11-24 20:54:33 -06:00
Timothy Stewart
da72884a5b fix(ci): remove self-hosted 2022-11-23 23:30:06 -06:00
Techno Tim
17a74b66c8 Pre commit fixes (#167)
* chore(dependencies): updated kube-vip to 0.5.6

* fix(pre-commit): pin to hash

* fix(pre-commit): added more hooks and fixed lint

* fix(pre-commit): added pre-commit hook so we don't have to run it manually

* fix(pre-commit): Added docs to readme

* fix(pre-commit): added texthooks

* fix(pre-commit): pin to hash

* fix(pre-commit): added more hooks and fixed lint

* fix(lint): Fixing quotes

* fix(ci): only run test if linting passes

* fix(ci): convert to reusable workflows

* fix(pr template): Reorder steps
2022-11-13 22:42:49 -06:00
Techno Tim
88d679ecb6 chore(dependencies): updated kube-vip to 0.5.6 (#166) 2022-11-13 17:17:03 -06:00
Techno Tim
6bf3bcce92 docs(README): Updated readme with fixes and context (#154) 2022-11-06 14:07:07 -06:00
Techno Tim
cff815a031 Updates (#151)
* fix(gitignore): Add ansible logs

* chore(metallb): Updated to 0.13.9

* chore(metallb): Updated to 1.24.7

* chore(python): Update dependencies

* fix(metal-lb): set to 0.13.7 (latest released)

* fix(requirements.txt): dedup and sort alpha
2022-11-06 12:08:19 -06:00
automationxpert
f892029fcf Adding additional reboot (optional) (#139)
* Create reboot.yml

* Create reboot.sh

* Updated the Playbook and Tasks Name

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-11-06 05:54:29 +00:00
snoopy82481
6b37ba5e60 chore: Multiple configuration changes (#144)
Added yaml stdout for better readability, optimized ssh connections, moved become to correct section
2022-11-05 21:54:06 -05:00
Techno Tim
b1fee44403 GitHub Actions Fixes (#150) 2022-11-05 19:57:36 -05:00
Techno Tim
a1c7175bd1 fix(requirements.txt): Use pip-compile (#148)
* fix(requirements.txt): Use pip-compile

* fix(lint): Remove anchors from molecule since they aren't yet supported via lint

* fix(lint): Remove anchors from molecule since they aren't yet supported via lint
2022-11-05 18:37:46 -05:00
dependabot[bot]
69d3bdcd88 chore(deps): bump pyrsistent from 0.18.1 to 0.19.2 (#141)
Bumps [pyrsistent](https://github.com/tobgu/pyrsistent) from 0.18.1 to 0.19.2.
- [Release notes](https://github.com/tobgu/pyrsistent/releases)
- [Changelog](https://github.com/tobgu/pyrsistent/blob/master/CHANGES.txt)
- [Commits](https://github.com/tobgu/pyrsistent/commits)

---
updated-dependencies:
- dependency-name: pyrsistent
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-04 23:07:18 -05:00
Techno Tim
5268ef305a Revert "feat(ci): switching to self-hosted runners (#133)" (#135)
This reverts commit a840571733.
2022-10-31 18:50:34 -05:00
Techno Tim
a840571733 feat(ci): switching to self-hosted runners (#133)
* feat(ci): switching to self-hosted runners

* feat(gh-actions-controller): added

* feat(gh-actions-controller): added
2022-10-31 17:56:22 -05:00
dependabot[bot]
b1370406ea chore(deps): bump ansible-lint from 6.8.3 to 6.8.4 (#130)
Bumps [ansible-lint](https://github.com/ansible-community/ansible-lint) from 6.8.3 to 6.8.4.
- [Release notes](https://github.com/ansible-community/ansible-lint/releases)
- [Commits](https://github.com/ansible-community/ansible-lint/compare/v6.8.3...v6.8.4)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-28 17:16:35 -05:00
dependabot[bot]
12d57a07d0 chore(deps): bump ansible-lint from 6.8.2 to 6.8.3 (#129)
Bumps [ansible-lint](https://github.com/ansible-community/ansible-lint) from 6.8.2 to 6.8.3.
- [Release notes](https://github.com/ansible-community/ansible-lint/releases)
- [Commits](https://github.com/ansible-community/ansible-lint/compare/v6.8.2...v6.8.3)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-26 21:55:42 -05:00
samerbahri98
4f3b8ec9e0 Pre-commit hooks (#125)
* feat: pre-commit

* empty

* fix: requirements.txt
2022-10-26 19:15:24 -05:00
dependabot[bot]
45ddd65e74 chore(deps): bump zipp from 3.9.0 to 3.10.0 (#128)
Bumps [zipp](https://github.com/jaraco/zipp) from 3.9.0 to 3.10.0.
- [Release notes](https://github.com/jaraco/zipp/releases)
- [Changelog](https://github.com/jaraco/zipp/blob/main/CHANGES.rst)
- [Commits](https://github.com/jaraco/zipp/compare/v3.9.0...v3.10.0)

---
updated-dependencies:
- dependency-name: zipp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-26 19:13:53 -05:00
dependabot[bot]
b2a62ea4eb chore(deps): bump ruamel-yaml-clib from 0.2.6 to 0.2.7 (#124)
Bumps [ruamel-yaml-clib](https://sourceforge.net/p/ruamel-yaml-clib/code/ci/default/tree) from 0.2.6 to 0.2.7.

---
updated-dependencies:
- dependency-name: ruamel-yaml-clib
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-22 13:37:11 -05:00
dependabot[bot]
a8697edc99 chore(deps): bump oauthlib from 3.2.1 to 3.2.2 (#123)
Bumps [oauthlib](https://github.com/oauthlib/oauthlib) from 3.2.1 to 3.2.2.
- [Release notes](https://github.com/oauthlib/oauthlib/releases)
- [Changelog](https://github.com/oauthlib/oauthlib/blob/v3.2.2/CHANGELOG.rst)
- [Commits](https://github.com/oauthlib/oauthlib/compare/v3.2.1...v3.2.2)

---
updated-dependencies:
- dependency-name: oauthlib
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-18 19:20:28 -05:00
dependabot[bot]
d3218f5d5c chore(deps): bump google-auth from 2.12.0 to 2.13.0 (#122)
Bumps [google-auth](https://github.com/googleapis/google-auth-library-python) from 2.12.0 to 2.13.0.
- [Release notes](https://github.com/googleapis/google-auth-library-python/releases)
- [Changelog](https://github.com/googleapis/google-auth-library-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/googleapis/google-auth-library-python/compare/v2.12.0...v2.13.0)

---
updated-dependencies:
- dependency-name: google-auth
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-17 23:06:58 -05:00
Irakli Nadareishvili
590a8029fd Removing accidental tear-down step that is clearly a typo (#117)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-10-15 14:15:25 -05:00
Techno Tim
cb2fa7c441 k3s, metallb, kube-vip updates (#119)
* feat(k3s): Updated to v1.24.6+k3s1

* feat(kube-vip): Update to v0.5.5

* feat(metal-lb): Update to v0.13.6

* fix(pip): Freeze requirements

* fix(lint): Fixed ansible-lint
2022-10-15 12:23:50 -05:00
ccoane
14508ec8dc Add "collection" to the ansible-galaxy command as it will run without making changes if that collection argument is not provided. (#113) 2022-10-04 20:41:19 -05:00
Ioannis Angelakopoulos
fb6c9a6866 adds colors to molecule testing in GitHub action (#109) 2022-09-28 03:48:25 +00:00
Simon Leiner
d5d02280c1 Fix download-boxes.sh if no boxes are present (#106)
In case of grep not matching any line, it would return an error code
and thus stop the script. This patch sets "present_boxes" to an empty
value in case any of the commands fail.
2022-09-26 17:21:37 -05:00
Simon Leiner
57e528832b Fix role order in reset playbook (#104) 2022-09-25 12:35:36 -05:00
Ioannis Angelakopoulos
cd76fa05a7 fix master taint implementation - linting problems (#95)
* add virtual-ip to certificate SAN entries

Adds the kube-vip IP as a Subject Alternative Name in the TLS cert. It is needed otherwise you cannot access the cluster.

* fixes bug with master taints (#1)

- improves taint logic

* fixes typo

* fixes formatting

* fixes undefined group['node'] if missing from hosts.ini (#2)

* fixes undefined group['node'] if missing from hosts.ini

- improves application of master taint by centralizing code

* improves molecule testing, fixes linting

* hacking at linter problems, small tweaks

- increases the metallb timeout error due to intermittent testing errors in GitHub actions

* improves context by renaming taint variable

- makes variable boolean

* fix bug

* removes linting hacks

Co-authored-by: Ioannis Angelakopoulos <ioangel@gmail.com>
2022-09-24 20:12:24 -05:00
Simon Leiner
d5b37acd8a Drop support for CentOS, test Rocky and Debian in CI (#92)
* Test CentOS 7 in CI

* Drop support for CentOS, test on Rocky and Debian

* Fix reset playbook for Rocky Linux

* Fix typo

* Disable firewalld during testing

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-09-24 05:10:55 +00:00
Simon Leiner
5225493ca0 CI: Fix linting job for ansible-lint 6.6.0 (#96)
* CI: Fix linting job for ansible-lint 6.6.0

* Increase MetalLB timeout to mitigate CI flakiness
2022-09-23 23:28:21 -05:00
BMeach
4acbe91b6c Fix master node taints in multi node installs (#93)
* Taint master nodes if more than one node

* Kick off fork workflow tests

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-09-17 15:56:09 -05:00
Techno Tim
f1c2f3b7dd fix(github): ignore readme updates (#94) 2022-09-17 00:18:56 -05:00
Techno Tim
76718a010c chore(docs): Updated with ansible collections install (#89)
* chore(docs): Fixing thanks section

* chore(docs): Updated with collections command
2022-09-15 02:32:34 +00:00
Simon Leiner
a1ef590442 Add support for API servers on IPv6 addresses (#48)
* Remove duplicate file for deletion

* Add support for IPv6 clusters

To correctly escape IPv6 addresses when ports are used, they must be
wrapped in square brackets [1]. This patch adds support for that,
using Ansible's ipwrap filter [2].

[1]: https://datatracker.ietf.org/doc/html/rfc4038#section-5.1
[2]: http://docs.ansible.com/ansible/latest/collections/ansible/utils/docsite/filters_ipaddr.html#wrapping-ipv6-addresses-in-brackets

* Do not abort other molecule jobs on failure

* Fix cache keys for Vagrant boxes

* Molecule: Derive overrides.yml location from scenario dir

# Conflicts:
#	molecule/default/molecule.yml
#	molecule/ipv6/molecule.yml
2022-09-10 12:57:38 -05:00
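The ipwrap filter referenced above ships with the ansible.utils collection; a one-line sketch of the bracketing it performs (the derived variable name is illustrative):

# "fd00::1" becomes "[fd00::1]"; IPv4 addresses pass through unchanged
apiserver_url: "https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443"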
Simon Leiner
9ff3bb6b87 Test single-node cluster (#78)
* Molecule: Derive overrides.yml location from scenario dir

# Conflicts:
#	molecule/default/molecule.yml
#	molecule/ipv6/molecule.yml

* Molecule: Add single_node scenario

* Fix get_nodes test for the case of empty groups
2022-09-09 11:47:26 -05:00
Techno Tim
b1df9663fa fix(ansible): Fix group permissions on tmp folder (#77) 2022-09-09 03:00:54 +00:00
Vitalij Dovhanyc
58c3a61bbb add editorconfig and fix trailing whitespaces (#68)
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-09-07 20:00:13 -05:00
Simon Leiner
60bc09b085 Mitigate CI flakiness (#70)
* Increase SSH connection timeouts and retries

* Make MetalLB timeouts configurable

* Retry applying MetalLB CRs

* Fix location of MetalLB CRs template

* Make MetalLB wait logic more compact

* Fix typo

* retrigger 1

* retrigger 2

* retrigger 3

* retrigger 4

* retrigger 5
2022-09-07 18:47:58 -05:00
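The "Retry applying MetalLB CRs" item maps onto Ansible's standard retry loop; a generic sketch (module choice and values illustrative):

- name: Apply MetalLB custom resources with retries
  kubernetes.core.k8s:
    state: present
    src: /tmp/metallb-crs.yaml
  register: apply_result
  until: apply_result is succeeded
  retries: 5
  delay: 10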
Timothy Stewart
4365a2a54b fix(ansible): fixing permissions on tmp folder 2022-09-06 19:07:09 -05:00
Simon Leiner
a6b2a95b7e Test playbook using molecule (#67)
* Test cluster using molecule

* Fix detection of first control node

* Include --flannel-iface and --node-ip as k3s arguments

* Store logs of k3s-init.service as GitHub job artifacts
2022-09-03 10:36:28 -05:00
Timothy Stewart
3c36dc8bfd fix(ansible): use k3s kubectl 2022-09-02 11:07:17 -05:00
104 changed files with 2074 additions and 2387 deletions


@@ -1,3 +1,17 @@
 ---
+exclude_paths:
+  # default paths
+  - '.cache/'
+  - '.github/'
+  - 'test/fixtures/formatting-before/'
+  - 'test/fixtures/formatting-prettier/'
+  # The "converge" and "reset" playbooks use import_playbook in
+  # conjunction with the "env" lookup plugin, which lets the
+  # syntax check of ansible-lint fail.
+  - 'molecule/**/converge.yml'
+  - 'molecule/**/prepare.yml'
+  - 'molecule/**/reset.yml'
 skip_list:
   - 'fqcn-builtins'

.editorconfig Normal file (13 lines changed)

@@ -0,0 +1,13 @@
root = true
[*]
indent_style = space
indent_size = 2
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
end_of_line = lf
max_line_length = off
[Makefile]
indent_style = tab
[*.go]
indent_style = tab


@@ -35,7 +35,7 @@ k3s_version: ""
 ansible_user: NA
 systemd_dir: ""
-flannel_iface: ""
+container_iface: ""
 apiserver_endpoint: ""


@@ -11,4 +11,5 @@
 - [ ] Ran `site.yml` playbook
 - [ ] Ran `reset.yml` playbook
 - [ ] Did not add any unnecessary changes
+- [ ] Ran pre-commit install at least once before committing
 - [ ] 🚀

.github/download-boxes.sh vendored Executable file (37 lines changed)

@@ -0,0 +1,37 @@
#!/bin/bash
# download-boxes.sh
# Check all molecule.yml files for required Vagrant boxes and download the ones that are not
# already present on the system.
set -euo pipefail
GIT_ROOT=$(git rev-parse --show-toplevel)
PROVIDER=virtualbox
# Read all boxes for all platforms from the "molecule.yml" files
all_boxes=$(cat "${GIT_ROOT}"/molecule/*/molecule.yml |
yq -r '.platforms[].box' | # Read the "box" property of each node under "platforms"
grep --invert-match --regexp=--- | # Filter out file separators
sort |
uniq)
# Read the boxes that are currently present on the system (for the current provider)
present_boxes=$(
(vagrant box list |
grep "${PROVIDER}" | # Filter by boxes available for the current provider
awk '{print $1;}' | # The box name is the first word in each line
sort |
uniq) ||
echo "" # In case any of these commands fails, just use an empty list
)
# The boxes that we need to download are the ones present in $all_boxes, but not $present_boxes.
download_boxes=$(comm -2 -3 <(echo "${all_boxes}") <(echo "${present_boxes}"))
# Actually download the necessary boxes
if [ -n "${download_boxes}" ]; then
echo "${download_boxes}" | while IFS= read -r box; do
vagrant box add --provider "${PROVIDER}" "${box}"
done
fi

.github/workflows/ci.yml vendored Normal file (15 lines changed)

@@ -0,0 +1,15 @@
---
name: "CI"
on:
pull_request:
push:
branches:
- master
paths-ignore:
- '**/README.md'
jobs:
lint:
uses: ./.github/workflows/lint.yml
test:
uses: ./.github/workflows/test.yml
needs: [lint]


@@ -1,30 +1,68 @@
 ---
 name: Linting
 on:
-  pull_request:
-  push:
-    branches:
-      - master
+  workflow_call:
 jobs:
-  ansible-lint:
-    name: YAML Lint + Ansible Lint
+  pre-commit-ci:
+    name: Pre-Commit
     runs-on: ubuntu-latest
+    env:
+      PYTHON_VERSION: "3.11"
     steps:
       - name: Check out the codebase
-        uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
-      - name: Set up Python 3.x
-        uses: actions/setup-python@b55428b1882923874294fa556849718a1d7f2ca5 #4.0.2
+        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
         with:
-          python-version: '3.x'
+          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Install test dependencies
-        run: pip3 install yamllint ansible-lint ansible
+      - name: Set up Python ${{ env.PYTHON_VERSION }}
+        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          cache: 'pip' # caching pip dependencies
-      - name: Run yamllint
-        run: yamllint .
+      - name: Cache pip
+        uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
+        with:
+          path: ~/.cache/pip
+          key: ${{ runner.os }}-pip-${{ hashFiles('./requirements.txt') }}
+          restore-keys: |
+            ${{ runner.os }}-pip-
-      - name: Run ansible-lint
-        run: ansible-lint
+      - name: Cache Ansible
+        uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
+        with:
+          path: ~/.ansible/collections
+          key: ${{ runner.os }}-ansible-${{ hashFiles('collections/requirements.txt') }}
+          restore-keys: |
+            ${{ runner.os }}-ansible-
+      - name: Install dependencies
+        run: |
+          echo "::group::Upgrade pip"
+          python3 -m pip install --upgrade pip
+          echo "::endgroup::"
+          echo "::group::Install Python requirements from requirements.txt"
+          python3 -m pip install -r requirements.txt
+          echo "::endgroup::"
+          echo "::group::Install Ansible role requirements from collections/requirements.yml"
+          ansible-galaxy install -r collections/requirements.yml
+          echo "::endgroup::"
+      - name: Run pre-commit
+        uses: pre-commit/action@646c83fcd040023954eafda54b4db0192ce70507 # 3.0.0
+  ensure-pinned-actions:
+    name: Ensure SHA Pinned Actions
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+      - name: Ensure SHA pinned actions
+        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@af2eb3226618e2494e3d9084f515ad6dcf16e229 # 2.0.1
+        with:
+          allowlist: |
+            aws-actions/
+            docker/login-action


@@ -1,69 +1,92 @@
 ---
 name: Test
 on:
-  pull_request:
-  push:
-    branches:
-      - master
+  workflow_call:
 jobs:
-  vagrant:
-    name: Vagrant
+  molecule:
+    name: Molecule
     runs-on: macos-12
+    strategy:
+      matrix:
+        scenario:
+          - default
+          - ipv6
+          - single_node
+      fail-fast: false
     env:
       HOMEBREW_NO_INSTALL_CLEANUP: 1
-      VAGRANT_CWD: ${{ github.workspace }}/vagrant
+      PYTHON_VERSION: "3.11"
     steps:
       - name: Check out the codebase
-        uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
-      - name: Install Ansible
-        run: brew install ansible
-      - name: Install role dependencies
-        run: ansible-galaxy install -r collections/requirements.yml
+        uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
       - name: Configure VirtualBox
-        run: >-
-          sudo mkdir -p /etc/vbox &&
-          echo "* 192.168.30.0/24" | sudo tee -a /etc/vbox/networks.conf > /dev/null
+        run: |-
+          sudo mkdir -p /etc/vbox
+          cat <<EOF | sudo tee -a /etc/vbox/networks.conf > /dev/null
+          * 192.168.30.0/24
+          * fdad:bad:ba55::/64
+          EOF
+      - name: Cache pip
+        uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
+        with:
+          path: ~/.cache/pip
+          key: ${{ runner.os }}-pip-${{ hashFiles('./requirements.txt') }}
+          restore-keys: |
+            ${{ runner.os }}-pip-
       - name: Cache Vagrant boxes
-        uses: actions/cache@fd5de65bc895cf536527842281bea11763fefd77 # 3.0.8
+        uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # 3.0.11
         with:
           path: |
             ~/.vagrant.d/boxes
-          key: vagrant-boxes-${{ hashFiles('**/Vagrantfile') }}
+          key: vagrant-boxes-${{ hashFiles('**/molecule.yml') }}
           restore-keys: |
             vagrant-boxes
-      - name: Create virtual machines
-        run: vagrant up
-        timeout-minutes: 10
+      - name: Download Vagrant boxes for all scenarios
+        # To save some cache space, all scenarios share the same cache key.
+        # On the other hand, this means that the cache contents should be
+        # the same across all scenarios. This step ensures that.
+        run: ./.github/download-boxes.sh
-      - name: Provision cluster using Ansible
-        # Since Ansible sets up _all_ machines, it is sufficient to run it only
-        # once (i.e, for a single node - we are choosing control1 here)
-        run: vagrant provision control1 --provision-with ansible
-        timeout-minutes: 25
+      - name: Set up Python ${{ env.PYTHON_VERSION }}
+        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          cache: 'pip' # caching pip dependencies
-      - name: Set up kubectl on the host
-        run: brew install kubectl &&
-          mkdir -p ~/.kube &&
-          vagrant ssh control1 --command "cat ~/.kube/config" > ~/.kube/config
+      - name: Install dependencies
+        run: |
+          echo "::group::Upgrade pip"
+          python3 -m pip install --upgrade pip
+          echo "::endgroup::"
-      - name: Show cluster nodes
-        run: kubectl describe -A nodes
+          echo "::group::Install Python requirements from requirements.txt"
+          python3 -m pip install -r requirements.txt
+          echo "::endgroup::"
-      - name: Show cluster pods
-        run: kubectl describe -A pods
+      - name: Test with molecule
+        run: molecule test --scenario-name ${{ matrix.scenario }}
+        timeout-minutes: 90
+        env:
+          ANSIBLE_K3S_LOG_DIR: ${{ runner.temp }}/logs/k3s-ansible/${{ matrix.scenario }}
+          ANSIBLE_SSH_RETRIES: 4
+          ANSIBLE_TIMEOUT: 60
+          PY_COLORS: 1
+          ANSIBLE_FORCE_COLOR: 1
-      - name: Test cluster
-        run: $VAGRANT_CWD/test_cluster.py --verbose --locals
-        timeout-minutes: 5
-      - name: Destroy virtual machines
+      - name: Upload log files
         if: always() # do this even if a step before has failed
-        run: vagrant destroy --force
+        uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # 3.1.1
+        with:
+          name: logs
+          path: |
+            ${{ runner.temp }}/logs
+      - name: Delete old box versions
+        if: always() # do this even if a step before has failed
+        run: vagrant box prune --force

.gitignore vendored (4 lines changed)

@@ -1 +1,3 @@
.vagrant
.env/
*.log
ansible.cfg

.pre-commit-config.yaml Normal file (35 lines changed)

@@ -0,0 +1,35 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: f71fa2c1f9cf5cb705f73dffe4b21f7c61470ba9 # frozen: v4.4.0
    hooks:
      - id: requirements-txt-fixer
      - id: sort-simple-yaml
      - id: detect-private-key
      - id: check-merge-conflict
      - id: end-of-file-fixer
      - id: mixed-line-ending
      - id: trailing-whitespace
        args: [--markdown-linebreak-ext=md]
  - repo: https://github.com/adrienverge/yamllint.git
    rev: b05e028c5881819161d11cb543fd96a30c06cceb # frozen: v1.32.0
    hooks:
      - id: yamllint
        args: [-c=.yamllint]
  - repo: https://github.com/ansible-community/ansible-lint.git
    rev: 3293b64b939c0de16ef8cb81dd49255e475bf89a # frozen: v6.17.2
    hooks:
      - id: ansible-lint
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: 375289a39f5708101b1f916eb729e8d6da96993f # frozen: v0.9.0.5
    hooks:
      - id: shellcheck
  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: 12885e376b93dc4536ad68d156065601e4433665 # frozen: v1.5.1
    hooks:
      - id: remove-crlf
      - id: remove-tabs
  - repo: https://github.com/sirosen/texthooks
    rev: c4ffd3e31669dd4fa4d31a23436cc13839730084 # frozen: 0.5.0
    hooks:
      - id: fix-smartquotes


@@ -6,4 +6,4 @@ rules:
     max: 120
     level: warning
   truthy:
-    allowed-values: ['true', 'false', 'yes', 'no']
+    allowed-values: ['true', 'false']
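In practice the stricter rule means boolean values must be spelled true/false; a before/after sketch with an illustrative key:

# example_flag: yes    <- now flagged by yamllint's truthy rule
example_flag: true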


@@ -174,4 +174,4 @@
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
END OF TERMS AND CONDITIONS


@@ -4,21 +4,21 @@
 This playbook will build an HA Kubernetes cluster with `k3s`, `kube-vip` and MetalLB via `ansible`.
-This is based on the work from [this fork](https://github.com/212850a/k3s-ansible) which is based on the work from [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible). It uses [kube-vip](https://kube-vip.chipzoller.dev/) to create a load balancer for control plane, and [metal-lb](https://metallb.universe.tf/installation/) for its service `LoadBalancer`.
+This is based on the work from [this fork](https://github.com/212850a/k3s-ansible) which is based on the work from [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible). It uses [kube-vip](https://kube-vip.io/) to create a load balancer for control plane, and [metal-lb](https://metallb.universe.tf/installation/) for its service `LoadBalancer`.
 If you want more context on how this works, see:
-📄 [Documentation](https://docs.technotim.live/posts/k3s-etcd-ansible/) (including example commands)
+📄 [Documentation](https://technotim.live/posts/k3s-etcd-ansible/) (including example commands)
-📺 [Video](https://www.youtube.com/watch?v=CbkEWcUZ7zM)
+📺 [Watch the Video](https://www.youtube.com/watch?v=CbkEWcUZ7zM)
 ## 📖 k3s Ansible Playbook
 Build a Kubernetes cluster using Ansible with k3s. The goal is easily install a HA Kubernetes cluster on machines running:
-- [X] Debian
-- [X] Ubuntu
-- [X] CentOS
+- [x] Debian (tested on version 11)
+- [x] Ubuntu (tested on version 22.04)
+- [x] Rocky (tested on version 9)
 on processor architecture:
@@ -28,7 +28,12 @@ on processor architecture:
 ## ✅ System requirements
-- Deployment environment must have Ansible 2.4.0+. If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://docs.technotim.live/posts/ansible-automation/).
+- Control Node (the machine you are running `ansible` commands) must have Ansible 2.11+ If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://technotim.live/posts/ansible-automation/).
+- You will also need to install collections that this playbook uses by running `ansible-galaxy collection install -r ./collections/requirements.yml` (important❗)
+- [`netaddr` package](https://pypi.org/project/netaddr/) must be available to Ansible. If you have installed Ansible via apt, this is already taken care of. If you have installed Ansible via `pip`, make sure to install `netaddr` into the respective virtual environment.
 - `server` and `agent` nodes should have passwordless SSH access, if not you can supply arguments to provide credentials `--ask-pass --ask-become-pass` to each command.
 ## 🚀 Getting Started
@@ -62,6 +67,8 @@ node
 If multiple hosts are in the master group, the playbook will automatically set up k3s in [HA mode with etcd](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).
+Finally, copy `ansible.example.cfg` to `ansible.cfg` and adapt the inventory path to match the files that you just created.
 This requires at least k3s version `1.19.1` however the version is configurable by using the `k3s_version` variable.
 If needed, you can also edit `inventory/my-cluster/group_vars/all.yml` to match your environment.
@@ -94,24 +101,26 @@ scp debian@master_ip:~/.kube/config ~/.kube/config
 ### 🔨 Testing your cluster
-See the commands [here](https://docs.technotim.live/posts/k3s-etcd-ansible/#testing-your-cluster).
+See the commands [here](https://technotim.live/posts/k3s-etcd-ansible/#testing-your-cluster).
 ### Troubleshooting
 Be sure to see [this post](https://github.com/techno-tim/k3s-ansible/discussions/20) on how to troubleshoot common problems
-### 🔷 Vagrant
+### Testing the playbook using molecule
-You may want to kickstart your k3s cluster by using Vagrant to quickly build you all needed VMs with one command.
-Head to the `vagrant` subfolder and type `vagrant up` to get your environment setup.
-After the VMs have got build, deploy k3s using the Ansible playbook `site.yml` by the
-`vagrant provision --provision-with ansible` command.
+This playbook includes a [molecule](https://molecule.rtfd.io/)-based test setup.
+It is run automatically in CI, but you can also run the tests locally.
+This might be helpful for quick feedback in a few cases.
+You can find more information about it [here](molecule/README.md).
+### Pre-commit Hooks
+This repo uses `pre-commit` and `pre-commit-hooks` to lint and fix common style and syntax errors. Be sure to install python packages and then run `pre-commit install`. For more information, see [pre-commit](https://pre-commit.com/)
 ## Thanks 🤝
-This repo is really standing on the shoulders of giants. To all those who have contributed.
-Thanks to these repos for code and ideas:
+This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas:
 - [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible)
 - [geerlingguy/turing-pi-cluster](https://github.com/geerlingguy/turing-pi-cluster)


@@ -1,12 +0,0 @@
[defaults]
nocows = True
roles_path = ./roles
inventory = ./hosts.ini
remote_tmp = $HOME/.ansible/tmp
local_tmp = $HOME/.ansible/tmp
pipelining = True
become = True
host_key_checking = False
deprecation_warnings = False
callback_whitelist = profile_tasks

ansible.example.cfg Normal file (2 lines changed)

@@ -0,0 +1,2 @@
[defaults]
inventory = inventory/my-cluster/hosts.ini ; Adapt this to the path to your inventory file


@@ -1,4 +1,6 @@
---
collections:
- name: ansible.utils
- name: community.general
- name: ansible.posix
- name: kubernetes.core


@@ -1,3 +1,3 @@
 #!/bin/bash
-ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
+ansible-playbook site.yml


@@ -4,6 +4,7 @@ kind: Service
 metadata:
   name: nginx
 spec:
+  ipFamilyPolicy: PreferDualStack
   selector:
     app: nginx
   ports:


@@ -1,5 +1,5 @@
---
k3s_version: v1.24.4+k3s1
k3s_version: v1.25.16+k3s4
# this is the user that has ssh access to these machines
ansible_user: ansibleuser
systemd_dir: /etc/systemd/system
@@ -7,8 +7,14 @@ systemd_dir: /etc/systemd/system
# Set your timezone
system_timezone: "Your/Timezone"
# interface which will be used for flannel
flannel_iface: "eth0"
# node interface which will be used for the container network interface (flannel or calico)
container_iface: "eth0"
# set use_calico to true to use tigera operator/calico instead of the default CNI flannel
# install reference: https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/multi-node-install#install-calico
use_calico: false
calico_cidr: "10.52.0.0/16" # pod cidr pool
calico_tag: "v3.27.0" # calico version tag
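# optional overrides consumed by the Calico custom-resources template (calico.crs.j2);
# the values shown are the defaults used when these variables are left undefined:
# calico_blockSize: 26
# calico_encapsulation: "VXLANCrossSubnet"
# calico_natOutgoing: "Enabled"
# calico_nodeSelector: "all()"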
# apiserver_endpoint is the virtual IP address which will be configured on each master
apiserver_endpoint: "192.168.30.222"
@@ -17,16 +23,120 @@ apiserver_endpoint: "192.168.30.222"
# this token should be alphanumeric only
k3s_token: "some-SUPER-DEDEUPER-secret-password"
# change these to your liking, the only required one is --disable servicelb
extra_server_args: "--disable servicelb --disable traefik"
extra_agent_args: ""
# The IP on which the node is reachable in the cluster.
# Here, a sensible default is provided; you can still override
# it for each of your hosts, though.
k3s_node_ip: '{{ ansible_facts[container_iface]["ipv4"]["address"] }}'
# Disable the taint manually by setting: k3s_master_taint = false
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"
# these arguments are recommended for servers as well as agents:
extra_args: >-
{{ '--flannel-iface=' + container_iface if not use_calico else '' }}
--node-ip={{ k3s_node_ip }}
# change these to your liking, the only required ones are: --disable servicelb, --tls-san {{ apiserver_endpoint }}
# the contents of the if block are also required when using calico
extra_server_args: >-
{{ extra_args }}
{{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
{% if use_calico %}
--flannel-backend=none
--disable-network-policy
--cluster-cidr={{ calico_cidr }}
{% endif %}
--tls-san {{ apiserver_endpoint }}
--disable servicelb
--disable traefik
extra_agent_args: >-
{{ extra_args }}
# image tag for kube-vip
kube_vip_tag_version: "v0.5.0"
kube_vip_tag_version: "v0.5.12"
# metallb type frr or native
metal_lb_type: "native"
# metallb mode layer2 or bgp
metal_lb_mode: "layer2"
# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"
# image tag for metal lb
metal_lb_speaker_tag_version: "v0.13.5"
metal_lb_controller_tag_version: "v0.13.5"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
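# note: metal_lb_ip_range may also be given as a list of ranges and/or CIDRs
# (see the metallb.crs.j2 template), e.g.:
# metal_lb_ip_range:
#   - 192.168.30.80-192.168.30.90
#   - fdad:bad:ba55::1b:0/112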
# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
# in your hosts.ini file.
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade-off being that lxc
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported; YMMV if you want to do this.
# I would only really recommend using this if you have particularly low-powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
# set this value to some-user
proxmox_lxc_ssh_user: root
# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes
proxmox_lxc_ct_ids:
- 200
- 201
- 202
- 203
- 204
# Only enable this if you have set up your own container registry to act as a mirror / pull-through cache
# (harbor / nexus / docker's official registry / etc).
# Can be beneficial for larger dev/test environments (for example if you're getting rate limited by docker hub),
# or air-gapped environments where your nodes don't have internet access after the initial setup
# (which is still needed for downloading the k3s binary and such).
# k3s's documentation about private registries here: https://docs.k3s.io/installation/private-registry
custom_registries: false
# The registries can be authenticated or anonymous, depending on your registry server configuration.
# If they allow anonymous access, simply remove the following bit from custom_registries_yaml
# configs:
# "registry.domain.com":
# auth:
# username: yourusername
# password: yourpassword
# The following is an example that pulls all images used in this playbook through your private registries.
# It also allows you to pull your own images from your private registry, without having to use imagePullSecrets
# in your deployments.
# If all you need is your own images and you don't care about caching the docker/quay/ghcr.io images,
# you can just remove those from the mirrors: section.
custom_registries_yaml: |
mirrors:
docker.io:
endpoint:
- "https://registry.domain.com/v2/dockerhub"
quay.io:
endpoint:
- "https://registry.domain.com/v2/quayio"
ghcr.io:
endpoint:
- "https://registry.domain.com/v2/ghcrio"
registry.domain.com:
endpoint:
- "https://registry.domain.com"
configs:
"registry.domain.com":
auth:
username: yourusername
password: yourpassword
# Only enable and configure these if you access the internet through a proxy
# proxy_env:
# HTTP_PROXY: "http://proxy.domain.local:3128"
# HTTPS_PROXY: "http://proxy.domain.local:3128"
# NO_PROXY: "*.domain.local,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

View File

@@ -0,0 +1,2 @@
---
ansible_user: '{{ proxmox_lxc_ssh_user }}'

View File

@@ -7,6 +7,11 @@
192.168.30.41
192.168.30.42
# only required if proxmox_lxc_configure: true
# must contain all proxmox instances that have a master or worker node
# [proxmox]
# 192.168.30.43
[k3s_cluster:children]
master
node

molecule/README.md Normal file
View File

@@ -0,0 +1,73 @@
# Test suites for `k3s-ansible`
This folder contains the [molecule](https://molecule.rtfd.io/)-based test setup for this playbook.
## Scenarios
We have these scenarios:
- **default**:
A 3 control + 2 worker node cluster based very closely on the [sample inventory](../inventory/sample/).
- **ipv6**:
A cluster that is externally accessible via IPv6 ([more information](ipv6/README.md))
To save a bit of test time, this cluster is _not_ highly available; it consists of only one control and one worker node.
- **single_node**:
Very similar to the default scenario, but uses only a single node for all cluster functionality.
## How to execute
To test on your local machine, follow these steps:
### System requirements
Make sure that the following software packages are available on your system:
- [Python 3](https://www.python.org/downloads)
- [Vagrant](https://www.vagrantup.com/downloads)
- [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
### Set up VirtualBox networking on Linux and macOS
_You can safely skip this if you are working on Windows._
The test cluster uses the `192.168.30.0/24` subnet, which is [not set up by VirtualBox automatically](https://www.virtualbox.org/manual/ch06.html#network_hostonly).
To set the subnet up for use with VirtualBox, please make sure that `/etc/vbox/networks.conf` exists and that it contains these lines:
```
* 192.168.30.0/24
* fdad:bad:ba55::/64
```
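One way to create that file is (a sketch; check for existing entries before appending):

```bash
sudo mkdir -p /etc/vbox
printf '* 192.168.30.0/24\n* fdad:bad:ba55::/64\n' | sudo tee -a /etc/vbox/networks.conf
```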
### Install Python dependencies
You will get [Molecule, Ansible and a few extra dependencies](../requirements.txt) via [pip](https://pip.pypa.io/).
Usually, it is advisable to work in a [virtual environment](https://docs.python.org/3/tutorial/venv.html) for this:
```bash
cd /path/to/k3s-ansible
# Create a virtualenv at ".env". You only need to do this once.
python3 -m venv .env
# Activate the virtualenv for your current shell session.
# If you start a new session, you will have to repeat this.
source .env/bin/activate
# Install the required packages into the virtualenv.
# These remain installed across shell sessions.
python3 -m pip install -r requirements.txt
```
### Run molecule
With the virtual environment from the previous step active in your shell session, you can now use molecule to test the playbook.
Interesting commands are:
- `molecule create`: Create virtual machines for the test cluster nodes.
- `molecule destroy`: Delete the virtual machines for the test cluster nodes.
- `molecule converge`: Run the `site` playbook on the nodes of the test cluster.
- `molecule side_effect`: Run the `reset` playbook on the nodes of the test cluster.
- `molecule verify`: Verify that the cluster works correctly.
- `molecule test`: The "all-in-one" sequence of steps that is executed in CI.
This includes the `create`, `converge`, `verify`, `side_effect` and `destroy` steps.
See [`molecule.yml`](default/molecule.yml) for more details.
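By default, these commands run the **default** scenario; pass `-s` to select another one, for example:

```bash
molecule test -s single_node
```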

View File

@@ -0,0 +1,99 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.38
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: control2
box: generic/debian11
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.39
- name: control3
box: generic/rocky9
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.40
- name: node1
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- node
interfaces:
- network_name: private_network
ip: 192.168.30.41
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: node2
box: generic/rocky9
memory: 2048
cpus: 2
groups:
- k3s_cluster
- node
interfaces:
- network_name: private_network
ip: 192.168.30.42
provisioner:
name: ansible
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

View File

@@ -0,0 +1,12 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
container_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45

View File

@@ -0,0 +1,22 @@
---
- name: Apply overrides
ansible.builtin.import_playbook: >-
{{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml
- name: Network setup
hosts: all
tasks:
- name: Disable firewalld
when: ansible_distribution == "Rocky"
# Rocky Linux comes with firewalld enabled. It blocks some of the network
# connections needed for our k3s cluster. For our test setup, we just disable
# it since the VM host's firewall is still active for connections to and from
# the Internet.
# When building your own cluster, please DO NOT blindly copy this. Instead,
# please create a custom firewall configuration that fits your network design
# and security needs.
ansible.builtin.systemd:
name: firewalld
enabled: false
state: stopped
become: true

molecule/ipv6/README.md Normal file
View File

@@ -0,0 +1,35 @@
# Sample IPv6 configuration for `k3s-ansible`
This scenario contains a cluster configuration which is _IPv6 first_, but still supports dual-stack networking with IPv4 for most things.
This means:
- The API server VIP is an IPv6 address.
- The MetalLB pool consists of both IPv4 and IPv6 addresses.
- Nodes as well as cluster-internal resources (pods and services) are accessible via IPv4 as well as IPv6.
## Network design
All IPv6 addresses used in this scenario share a single `/48` prefix: `fdad:bad:ba55::/48`.
The following subnets are used:
- `fdad:bad:ba55:`**`0`**`::/64` is the subnet which contains the cluster components meant for external access.
That includes:
- The VIP for the Kubernetes API server: `fdad:bad:ba55::333`
- Services load-balanced by MetalLB: `fdad:bad:ba55::1b:0/112`
- Cluster nodes: `fdad:bad:ba55::de:0/112`
- The host executing Vagrant: `fdad:bad:ba55::1`
In a home lab setup, this might be your LAN.
- `fdad:bad:ba55:`**`4200`**`::/56` is used internally by the cluster for pods.
- `fdad:bad:ba55:`**`4300`**`::/108` is used internally by the cluster for services.
IPv4 networking is also available:
- The nodes have addresses inside `192.168.123.0/24`.
MetalLB also has a bit of address space in this range: `192.168.123.80-192.168.123.90`
- For pods and services, the k3s defaults (`10.42.0.0/16` and `10.43.0.0/16`) are used.
Note that the host running Vagrant is not part of any of these IPv4 networks.
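As a rough end-to-end check from the host running Vagrant, the API server VIP should answer over IPv6 (a sketch; `-k` because the cluster certificate is self-signed):

```bash
curl -k "https://[fdad:bad:ba55::333]:6443/version"
```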

View File

@@ -0,0 +1,3 @@
---
node_ipv4: 192.168.123.11
node_ipv6: fdad:bad:ba55::de:11

View File

@@ -0,0 +1,3 @@
---
node_ipv4: 192.168.123.12
node_ipv6: fdad:bad:ba55::de:12

View File

@@ -0,0 +1,3 @@
---
node_ipv4: 192.168.123.21
node_ipv6: fdad:bad:ba55::de:21

View File

@@ -0,0 +1,80 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: fdad:bad:ba55::de:11
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: control2
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: fdad:bad:ba55::de:12
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: node1
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- node
interfaces:
- network_name: private_network
ip: fdad:bad:ba55::de:21
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
provisioner:
name: ansible
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

View File

@@ -0,0 +1,51 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables (1/2)
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
container_iface: eth1
# In this scenario, we have multiple interfaces that the VIP could be
# broadcasted on. Since we have assigned a dedicated private network
# here, let's make sure that it is used.
kube_vip_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# IPv6 configuration
# ######################################################################
# The API server will be reachable on IPv6 only
apiserver_endpoint: fdad:bad:ba55::333
# We give MetalLB address space for both IPv4 and IPv6
metal_lb_ip_range:
- fdad:bad:ba55::1b:0/112
- 192.168.123.80-192.168.123.90
# k3s_node_ip is by default set to the IPv4 address of container_iface.
# We want IPv6 addresses here of course, so we just specify them
# manually below.
k3s_node_ip: "{{ node_ipv4 }},{{ node_ipv6 }}"
- name: Override host variables (2/2)
# Since "extra_args" depends on "k3s_node_ip" and "container_iface" we have
# to set this AFTER overriding the both of them.
ansible.builtin.set_fact:
# A few extra server args are necessary:
# - the network policy needs to be disabled.
# - we need to manually specify the subnets for services and pods, as
# the default has IPv4 ranges only.
extra_server_args: >-
{{ extra_args }}
--tls-san {{ apiserver_endpoint }}
{{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
--disable servicelb
--disable traefik
--disable-network-policy
--cluster-cidr=10.42.0.0/16,fdad:bad:ba55:4200::/56
--service-cidr=10.43.0.0/16,fdad:bad:ba55:4300::/108

molecule/ipv6/prepare.yml Normal file
View File

@@ -0,0 +1,51 @@
---
- name: Apply overrides
ansible.builtin.import_playbook: >-
{{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml
- name: Configure dual-stack networking
hosts: all
become: true
# Unfortunately, as of 2022-09, Vagrant does not support the configuration
# of both IPv4 and IPv6 addresses for a single network adapter. So we have
# to configure that ourselves.
# Moreover, we have to explicitly enable IPv6 for the loopback interface.
tasks:
- name: Enable IPv6 for network interfaces
ansible.posix.sysctl:
name: net.ipv6.conf.{{ item }}.disable_ipv6
value: "0"
with_items:
- all
- default
- lo
- name: Disable duplicate address detection
# Duplicate address detection did repeatedly fail within the virtual
# network. But since this setup does not use SLAAC anyway, we can safely
# disable it.
ansible.posix.sysctl:
name: net.ipv6.conf.{{ item }}.accept_dad
value: "0"
with_items:
- "{{ container_iface }}"
- name: Write IPv4 configuration
ansible.builtin.template:
src: 55-flannel-ipv4.yaml.j2
dest: /etc/netplan/55-flannel-ipv4.yaml
owner: root
group: root
mode: 0644
register: netplan_template
- name: Apply netplan configuration
# Conceptually, this should be a handler rather than a task.
# However, we are currently not in a role context - creating
# one just for this seemed overkill.
when: netplan_template.changed
ansible.builtin.command:
cmd: netplan apply
changed_when: true

View File

@@ -0,0 +1,8 @@
---
network:
version: 2
renderer: networkd
ethernets:
{{ container_iface }}:
addresses:
- {{ node_ipv4 }}/24

View File

@@ -0,0 +1,7 @@
---
- name: Apply overrides
ansible.builtin.import_playbook: >-
{{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml
- name: Converge
ansible.builtin.import_playbook: ../../site.yml

View File

@@ -0,0 +1,7 @@
---
- name: Apply overrides
ansible.builtin.import_playbook: >-
{{ lookup("ansible.builtin.env", "MOLECULE_SCENARIO_DIRECTORY") }}/overrides.yml
- name: Reset
ansible.builtin.import_playbook: ../../reset.yml

View File

@@ -0,0 +1,5 @@
---
- name: Verify
hosts: all
roles:
- verify_from_outside

View File

@@ -0,0 +1,9 @@
---
# A host outside of the cluster from which the checks shall be performed
outside_host: localhost
# This kubernetes namespace will be used for testing
testing_namespace: molecule-verify-from-outside
# The directory in which the example manifests reside
example_manifests_path: ../../../example

View File

@@ -0,0 +1,5 @@
---
- name: Clean up kubecfg
ansible.builtin.file:
path: "{{ kubecfg.path }}"
state: absent

View File

@@ -0,0 +1,19 @@
---
- name: Create temporary directory for kubecfg
ansible.builtin.tempfile:
state: directory
suffix: kubecfg
register: kubecfg
- name: Gather facts
delegate_to: "{{ groups['master'][0] }}"
ansible.builtin.gather_facts:
- name: Download kubecfg
ansible.builtin.fetch:
src: "{{ ansible_env.HOME }}/.kube/config"
dest: "{{ kubecfg.path }}/"
flat: true
delegate_to: "{{ groups['master'][0] }}"
delegate_facts: true
- name: Store path to kubecfg
ansible.builtin.set_fact:
kubecfg_path: "{{ kubecfg.path }}/config"

View File

@@ -0,0 +1,14 @@
---
- name: Verify
run_once: true
delegate_to: "{{ outside_host }}"
block:
- name: "Test CASE: Get kube config"
ansible.builtin.import_tasks: kubecfg-fetch.yml
- name: "TEST CASE: Get nodes"
ansible.builtin.include_tasks: test/get-nodes.yml
- name: "TEST CASE: Deploy example"
ansible.builtin.include_tasks: test/deploy-example.yml
always:
- name: "TEST CASE: Cleanup"
ansible.builtin.import_tasks: kubecfg-cleanup.yml

View File

@@ -0,0 +1,58 @@
---
- name: Deploy example
block:
- name: "Create namespace: {{ testing_namespace }}"
kubernetes.core.k8s:
api_version: v1
kind: Namespace
name: "{{ testing_namespace }}"
state: present
wait: true
kubeconfig: "{{ kubecfg_path }}"
- name: Apply example manifests
kubernetes.core.k8s:
src: "{{ example_manifests_path }}/{{ item }}"
namespace: "{{ testing_namespace }}"
state: present
wait: true
kubeconfig: "{{ kubecfg_path }}"
with_items:
- deployment.yml
- service.yml
- name: Get info about nginx service
kubernetes.core.k8s_info:
kind: service
name: nginx
namespace: "{{ testing_namespace }}"
kubeconfig: "{{ kubecfg_path }}"
vars: &load_balancer_metadata
metallb_ip: status.loadBalancer.ingress[0].ip
metallb_port: spec.ports[0].port
register: nginx_services
- name: Assert that the nginx welcome page is available
ansible.builtin.uri:
url: http://{{ ip | ansible.utils.ipwrap }}:{{ port_ }}/
return_content: true
register: result
failed_when: "'Welcome to nginx!' not in result.content"
vars:
ip: >-
{{ nginx_services.resources[0].status.loadBalancer.ingress[0].ip }}
port_: >-
{{ nginx_services.resources[0].spec.ports[0].port }}
# Deactivated linter rules:
# - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap
# would be undefined. This will not be the case during playbook execution.
# noqa jinja[invalid]
always:
- name: "Remove namespace: {{ testing_namespace }}"
kubernetes.core.k8s:
api_version: v1
kind: Namespace
name: "{{ testing_namespace }}"
state: absent
kubeconfig: "{{ kubecfg_path }}"

View File

@@ -0,0 +1,28 @@
---
- name: Get all nodes in cluster
kubernetes.core.k8s_info:
kind: node
kubeconfig: "{{ kubecfg_path }}"
register: cluster_nodes
- name: Assert that the cluster contains exactly the expected nodes
ansible.builtin.assert:
that: found_nodes == expected_nodes
success_msg: "Found nodes as expected: {{ found_nodes }}"
fail_msg: "Expected nodes {{ expected_nodes }}, but found nodes {{ found_nodes }}"
vars:
found_nodes: >-
{{ cluster_nodes | json_query('resources[*].metadata.name') | unique | sort }}
expected_nodes: |-
{{
(
( groups['master'] | default([]) ) +
( groups['node'] | default([]) )
)
| unique
| sort
}}
# Deactivated linter rules:
# - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap
# would be undefined. This will not be the case during playbook execution.
# noqa jinja[invalid]

View File

@@ -0,0 +1,48 @@
---
dependency:
name: galaxy
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 4096
cpus: 4
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: 192.168.30.50
provisioner:
name: ansible
playbooks:
converge: ../resources/converge.yml
side_effect: ../resources/reset.yml
verify: ../resources/verify.yml
inventory:
links:
group_vars: ../../inventory/sample/group_vars
scenario:
test_sequence:
- dependency
- lint
- cleanup
- destroy
- syntax
- create
- prepare
- converge
# idempotence is not possible with the playbook in its current form.
- verify
# We are repurposing side_effect here to test the reset playbook.
# This is why we do not run it before verify (which tests the cluster),
# but after the verify step.
- side_effect
- cleanup
- destroy

View File

@@ -0,0 +1,16 @@
---
- name: Apply overrides
hosts: all
tasks:
- name: Override host variables
ansible.builtin.set_fact:
# See:
# https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant
container_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45
# Make sure that our IP ranges do not collide with those of the default scenario
apiserver_endpoint: "192.168.30.223"
metal_lb_ip_range: "192.168.30.91-192.168.30.99"

reboot.sh Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
ansible-playbook reboot.yml

reboot.yml Normal file
View File

@@ -0,0 +1,9 @@
---
- name: Reboot k3s_cluster
hosts: k3s_cluster
gather_facts: true
tasks:
- name: Reboot the nodes (and wait up to 5 minutes max)
become: true
reboot:
reboot_timeout: 300

requirements.in Normal file
View File

@@ -0,0 +1,10 @@
ansible-core>=2.13.5
jmespath>=1.0.1
jsonpatch>=1.32
kubernetes>=25.3.0
molecule-vagrant>=1.0.0
molecule>=4.0.3
netaddr>=0.8.0
pre-commit>=2.20.0
pre-commit-hooks>=1.3.1
pyyaml>=6.0

requirements.txt Normal file
View File

@@ -0,0 +1,178 @@
#
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
# pip-compile requirements.in
#
ansible-compat==3.0.1
# via molecule
ansible-core==2.15.4
# via
# -r requirements.in
# ansible-compat
arrow==1.2.3
# via jinja2-time
attrs==22.1.0
# via jsonschema
binaryornot==0.4.4
# via cookiecutter
cachetools==5.2.0
# via google-auth
certifi==2022.9.24
# via
# kubernetes
# requests
cffi==1.15.1
# via cryptography
cfgv==3.3.1
# via pre-commit
chardet==5.0.0
# via binaryornot
charset-normalizer==2.1.1
# via requests
click==8.1.3
# via
# click-help-colors
# cookiecutter
# molecule
click-help-colors==0.9.1
# via molecule
commonmark==0.9.1
# via rich
cookiecutter==2.1.1
# via molecule
cryptography==38.0.3
# via ansible-core
distlib==0.3.6
# via virtualenv
distro==1.8.0
# via selinux
enrich==1.2.7
# via molecule
filelock==3.8.0
# via virtualenv
google-auth==2.14.0
# via kubernetes
identify==2.5.8
# via pre-commit
idna==3.4
# via requests
jinja2==3.1.2
# via
# ansible-core
# cookiecutter
# jinja2-time
# molecule
# molecule-vagrant
jinja2-time==0.2.0
# via cookiecutter
jmespath==1.0.1
# via -r requirements.in
jsonpatch==1.33
# via -r requirements.in
jsonpointer==2.3
# via jsonpatch
jsonschema==4.17.0
# via
# ansible-compat
# molecule
kubernetes==25.3.0
# via -r requirements.in
markupsafe==2.1.1
# via jinja2
molecule==4.0.4
# via
# -r requirements.in
# molecule-vagrant
molecule-vagrant==1.0.0
# via -r requirements.in
netaddr==0.10.0
# via -r requirements.in
nodeenv==1.7.0
# via pre-commit
oauthlib==3.2.2
# via requests-oauthlib
packaging==21.3
# via
# ansible-compat
# ansible-core
# molecule
platformdirs==2.5.2
# via virtualenv
pluggy==1.0.0
# via molecule
pre-commit==2.21.0
# via -r requirements.in
pre-commit-hooks==4.5.0
# via -r requirements.in
pyasn1==0.4.8
# via
# pyasn1-modules
# rsa
pyasn1-modules==0.2.8
# via google-auth
pycparser==2.21
# via cffi
pygments==2.13.0
# via rich
pyparsing==3.0.9
# via packaging
pyrsistent==0.19.2
# via jsonschema
python-dateutil==2.8.2
# via
# arrow
# kubernetes
python-slugify==6.1.2
# via cookiecutter
python-vagrant==1.0.0
# via molecule-vagrant
pyyaml==6.0.1
# via
# -r requirements.in
# ansible-compat
# ansible-core
# cookiecutter
# kubernetes
# molecule
# molecule-vagrant
# pre-commit
requests==2.28.1
# via
# cookiecutter
# kubernetes
# requests-oauthlib
requests-oauthlib==1.3.1
# via kubernetes
resolvelib==0.8.1
# via ansible-core
rich==12.6.0
# via
# enrich
# molecule
rsa==4.9
# via google-auth
ruamel-yaml==0.17.21
# via pre-commit-hooks
selinux==0.2.1
# via molecule-vagrant
six==1.16.0
# via
# google-auth
# kubernetes
# python-dateutil
subprocess-tee==0.4.1
# via ansible-compat
text-unidecode==1.3
# via python-slugify
urllib3==1.26.12
# via
# kubernetes
# requests
virtualenv==20.16.6
# via pre-commit
websocket-client==1.4.2
# via kubernetes
# The following packages are considered to be unsafe in a requirements file:
# setuptools

View File

@@ -1,3 +1,3 @@
#!/bin/bash
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
ansible-playbook reset.yml

View File

@@ -1,7 +1,24 @@
---
- hosts: k3s_cluster
gather_facts: yes
become: yes
- name: Reset k3s cluster
hosts: k3s_cluster
gather_facts: true
roles:
- role: reset
become: true
- role: raspberrypi
become: true
vars: {state: absent}
post_tasks:
- name: Reboot and wait for node to come back up
become: true
reboot:
reboot_timeout: 3600
- name: Revert changes to Proxmox cluster
hosts: proxmox
gather_facts: true
become: true
remote_user: "{{ proxmox_lxc_ssh_user }}"
roles:
- role: reset_proxmox_lxc
when: proxmox_lxc_configure

View File

@@ -1,12 +0,0 @@
---
ansible_user: root
server_init_args: >-
{% if groups['master'] | length > 1 %}
{% if ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) %}
--cluster-init
{% else %}
--server https://{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}:6443
{% endif %}
--token {{ k3s_token }}
{% endif %}
{{ extra_server_args | default('') }}

File diff suppressed because it is too large

View File

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb

View File

@@ -1,32 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-vip
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: system:kube-vip-role
rules:
- apiGroups: [""]
resources: ["services", "services/status", "nodes", "endpoints"]
verbs: ["list","get","watch", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["list", "get", "watch", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:kube-vip-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-vip-role
subjects:
- kind: ServiceAccount
name: kube-vip
namespace: kube-system

View File

@@ -0,0 +1,3 @@
---
# Name of the master group
group_name_master: master

View File

@@ -1,107 +0,0 @@
---
- name: Create manifests directory for temp configuration
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0644
with_items: "{{ groups['master'] }}"
run_once: true
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: root
group: root
mode: 0644
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb controller to be running
command: >-
kubectl wait deployment -n metallb-system controller --for condition=Available=True --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb webhook service to be running
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.phase}'=Running pods \
--selector component=controller --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb pods in replicasets
command: >-
kubectl wait pods -n metallb-system --for condition=Ready \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller readyReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.readyReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller fullyLabeledReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.fullyLabeledReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller availableReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.availableReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system webhook-service endpoint
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Apply metallb CRs
command: >-
k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system IPAddressPool
command: >-
k3s kubectl -n metallb-system get IPAddressPool
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system L2Advertisement
command: >-
k3s kubectl -n metallb-system get L2Advertisement
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Remove tmp directory used for manifests
file:
path: /tmp/k3s
state: absent

View File

@@ -1,14 +0,0 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- {{ metal_lb_ip_range }}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system

View File

@@ -0,0 +1,18 @@
---
- name: Create k3s.service.d directory
file:
path: '{{ systemd_dir }}/k3s.service.d'
state: directory
owner: root
group: root
mode: '0755'
- name: Copy K3s http_proxy conf file
template:
src: "http_proxy.conf.j2"
dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
owner: root
group: root
mode: '0755'

View File

@@ -1,5 +1,9 @@
---
- name: Deploy K3s http_proxy conf
include_tasks: http_proxy.yml
when: proxy_env is defined
- name: Copy K3s service file
template:
src: "k3s.service.j2"
@@ -11,6 +15,6 @@
- name: Enable and check K3s service
systemd:
name: k3s-node
daemon_reload: yes
daemon_reload: true
state: restarted
enabled: yes
enabled: true

View File

@@ -0,0 +1,4 @@
[Service]
Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }}
Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }}
Environment=NO_PROXY={{ proxy_env.NO_PROXY }}

View File

@@ -7,7 +7,7 @@ After=network-online.target
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s agent --server https://{{ apiserver_endpoint }}:6443 --token {{ hostvars[groups['master'][0]]['token'] | default(k3s_token) }} {{ extra_agent_args | default("") }}
ExecStart=/usr/local/bin/k3s agent --server https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443 --token {{ hostvars[groups[group_name_master | default('master')][0]]['token'] | default(k3s_token) }} {{ extra_agent_args | default("") }}
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead

View File

@@ -0,0 +1,6 @@
---
# Indicates whether custom registries for k3s should be configured
# Possible values:
# - present
# - absent
state: present

View File

@@ -0,0 +1,17 @@
---
- name: Create directory /etc/rancher/k3s
file:
path: "/etc/{{ item }}"
state: directory
mode: '0755'
loop:
- rancher
- rancher/k3s
- name: Insert registries into /etc/rancher/k3s/registries.yaml
blockinfile:
path: /etc/rancher/k3s/registries.yaml
block: "{{ custom_registries_yaml }}"
mode: '0600'
create: true

View File

@@ -0,0 +1,20 @@
---
# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: null
# Name of the master group
group_name_master: master
# yamllint disable rule:line-length
server_init_args: >-
{% if groups[group_name_master | default('master')] | length > 1 %}
{% if ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname'] %}
--cluster-init
{% else %}
--server https://{{ hostvars[groups[group_name_master | default('master')][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
{% endif %}
--token {{ k3s_token }}
{% endif %}
{{ extra_server_args | default('') }}

View File

@@ -0,0 +1,28 @@
---
# Download logs of k3s-init.service from the nodes to localhost.
# Note that log_destination must be set.
- name: Fetch k3s-init.service logs
ansible.builtin.command:
cmd: journalctl --all --unit=k3s-init.service
changed_when: false
register: k3s_init_log
- name: Create {{ log_destination }}
delegate_to: localhost
run_once: true
become: false
ansible.builtin.file:
path: "{{ log_destination }}"
state: directory
mode: "0755"
- name: Store logs to {{ log_destination }}
delegate_to: localhost
become: false
ansible.builtin.template:
src: content.j2
dest: "{{ log_destination }}/k3s-init@{{ ansible_hostname }}.log"
mode: 0644
vars:
content: "{{ k3s_init_log.stdout }}"

View File

@@ -0,0 +1,18 @@
---
- name: Create k3s.service.d directory
file:
path: '{{ systemd_dir }}/k3s.service.d'
state: directory
owner: root
group: root
mode: '0755'
- name: Copy K3s http_proxy conf file
template:
src: "http_proxy.conf.j2"
dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
owner: root
group: root
mode: '0755'

View File

@@ -1,63 +1,27 @@
---
- name: Clean previous runs of k3s-init
- name: Stop k3s-init
systemd:
name: k3s-init
state: stopped
failed_when: false
- name: Clean previous runs of k3s-init
- name: Clean previous runs of k3s-init # noqa command-instead-of-module
# The systemd module does not support "reset-failed", so we need to resort to command.
command: systemctl reset-failed k3s-init
failed_when: false
changed_when: false
args:
warn: false # The ansible systemd module does not support reset-failed
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Deploy K3s http_proxy conf
include_tasks: http_proxy.yml
when: proxy_env is defined
- name: Copy vip rbac manifest to first master
template:
src: "vip.rbac.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Deploy vip manifest
include_tasks: vip.yml
- name: Copy vip manifest to first master
template:
src: "vip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
# these will be copied and installed now, then tested and the config applied later
- name: Copy metallb namespace to first master
template:
src: "metallb.namespace.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-namespace.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Copy metallb namespace to first master
template:
src: "metallb.crds.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Deploy metallb manifest
include_tasks: metallb.yml
tags: metallb
- name: Init cluster inside the transient k3s-init service
command:
@@ -66,26 +30,30 @@
--unit=k3s-init \
k3s server {{ server_init_args }}"
creates: "{{ systemd_dir }}/k3s.service"
args:
warn: false # The ansible systemd module does not support transient units
- name: Verification
when: not ansible_check_mode
block:
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
command:
cmd: k3s kubectl get nodes -l "node-role.kubernetes.io/master=true" -o=jsonpath="{.items[*].metadata.name}"
register: nodes
until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups[group_name_master | default('master')] | length) # yamllint disable-line rule:line-length
retries: "{{ retry_count | default(20) }}"
delay: 10
changed_when: false
always:
- name: Save logs of k3s-init.service
include_tasks: fetch_k3s_init_logs.yml
when: log_destination
vars:
log_destination: >-
{{ lookup('ansible.builtin.env', 'ANSIBLE_K3S_LOG_DIR', default=False) }}
- name: Kill the temporary service used for initialization
systemd:
name: k3s-init
state: stopped
failed_when: false
when: not ansible_check_mode
- name: Copy K3s service file
register: k3s_service
@@ -99,9 +67,9 @@
- name: Enable and check K3s service
systemd:
name: k3s
daemon_reload: yes
daemon_reload: true
state: restarted
enabled: yes
enabled: true
- name: Wait for node-token
wait_for:
@@ -133,25 +101,32 @@
- name: Create directory .kube
file:
path: ~{{ ansible_user }}/.kube
path: "{{ ansible_user_dir }}/.kube"
state: directory
owner: "{{ ansible_user }}"
owner: "{{ ansible_user_id }}"
mode: "u=rwx,g=rx,o="
- name: Copy config file to user home directory
copy:
src: /etc/rancher/k3s/k3s.yaml
dest: ~{{ ansible_user }}/.kube/config
remote_src: yes
owner: "{{ ansible_user }}"
dest: "{{ ansible_user_dir }}/.kube/config"
remote_src: true
owner: "{{ ansible_user_id }}"
mode: "u=rw,g=,o="
- name: Configure kubectl cluster to https://{{ apiserver_endpoint }}:6443
- name: Configure kubectl cluster to {{ endpoint_url }}
command: >-
k3s kubectl config set-cluster default
--server=https://{{ apiserver_endpoint }}:6443
--kubeconfig ~{{ ansible_user }}/.kube/config
--server={{ endpoint_url }}
--kubeconfig {{ ansible_user_dir }}/.kube/config
changed_when: true
vars:
endpoint_url: >-
https://{{ apiserver_endpoint | ansible.utils.ipwrap }}:6443
# Deactivated linter rules:
# - jinja[invalid]: As of version 6.6.0, ansible-lint complains that the input to ipwrap
# would be undefined. This will not be the case during playbook execution.
# noqa jinja[invalid]
- name: Create kubectl symlink
file:

View File

@@ -0,0 +1,30 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: "Download to first master: manifest for metallb-{{ metal_lb_type }}"
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{ metal_lb_type }}.yaml" # noqa yaml[line-length]
dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Set image versions in manifest for metallb-{{ metal_lb_type }}
ansible.builtin.replace:
path: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
regexp: "{{ item.change | ansible.builtin.regex_escape }}"
replace: "{{ item.to }}"
with_items:
- change: "metallb/speaker:{{ metal_lb_controller_tag_version }}"
to: "metallb/speaker:{{ metal_lb_speaker_tag_version }}"
loop_control:
label: "{{ item.change }} => {{ item.to }}"
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

View File

@@ -0,0 +1,27 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Download vip rbac manifest to first master
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/kube-vip/kube-vip/{{ kube_vip_tag_version }}/docs/manifests/rbac.yaml"
dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
- name: Copy vip manifest to first master
template:
src: "vip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']

View File

@@ -0,0 +1,5 @@
{#
This is a really simple template that just outputs the
value of the "content" variable.
#}
{{ content }}

View File

@@ -0,0 +1,4 @@
[Service]
Environment=HTTP_PROXY={{ proxy_env.HTTP_PROXY }}
Environment=HTTPS_PROXY={{ proxy_env.HTTPS_PROXY }}
Environment=NO_PROXY={{ proxy_env.NO_PROXY }}

View File

@@ -30,10 +30,12 @@ spec:
value: "true"
- name: port
value: "6443"
{% if kube_vip_iface %}
- name: vip_interface
value: {{ flannel_iface }}
value: {{ kube_vip_iface }}
{% endif %}
- name: vip_cidr
value: "32"
value: "{{ apiserver_endpoint | ansible.utils.ipsubnet | ansible.utils.ipaddr('prefix') }}"
- name: cp_enable
value: "true"
- name: cp_namespace

View File

@@ -0,0 +1,6 @@
---
# Timeout to wait for MetalLB services to come up
metal_lb_available_timeout: 120s
# Name of the master group
group_name_master: master

View File

@@ -0,0 +1,99 @@
---
- block:
- name: Create manifests directory on first master
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0755
- name: "Download to first master: manifest for Tigera Operator and Calico CRDs"
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/projectcalico/calico/{{ calico_tag }}/manifests/tigera-operator.yaml"
dest: "/tmp/k3s/tigera-operator.yaml"
owner: root
group: root
mode: 0755
- name: Copy Calico custom resources manifest to first master
ansible.builtin.template:
src: "calico.crs.j2"
dest: /tmp/k3s/custom-resources.yaml
- name: Deploy or replace Tigera Operator
block:
- name: Deploy Tigera Operator
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/tigera-operator.yaml
register: create_operator
changed_when: "'created' in create_operator.stdout"
failed_when: "'Error' in create_operator.stderr and 'already exists' not in create_operator.stderr"
rescue:
- name: Replace existing Tigera Operator
ansible.builtin.command:
cmd: kubectl replace -f /tmp/k3s/tigera-operator.yaml
register: replace_operator
changed_when: "'replaced' in replace_operator.stdout"
failed_when: "'Error' in replace_operator.stderr"
- name: Wait for Tigera Operator resources
command: >-
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='tigera-operator'
--for=condition=Available=True
--timeout=7s
register: tigera_result
changed_when: false
until: tigera_result is succeeded
retries: 7
delay: 7
with_items:
- { name: tigera-operator, type: deployment }
loop_control:
label: "{{ item.type }}/{{ item.name }}"
- name: Deploy Calico custom resources
block:
- name: Deploy custom resources for Calico
ansible.builtin.command:
cmd: kubectl create -f /tmp/k3s/custom-resources.yaml
register: create_cr
changed_when: "'created' in create_cr.stdout"
failed_when: "'Error' in create_cr.stderr and 'already exists' not in create_cr.stderr"
rescue:
- name: Apply new Calico custom resource manifest
ansible.builtin.command:
cmd: kubectl apply -f /tmp/k3s/custom-resources.yaml
register: apply_cr
changed_when: "'configured' in apply_cr.stdout or 'created' in apply_cr.stdout"
failed_when: "'Error' in apply_cr.stderr"
- name: Wait for Calico system resources to be available
command: >-
{% if item.type == 'daemonset' %}
k3s kubectl wait pods
--namespace='{{ item.namespace }}'
--selector={{ item.selector }}
--for=condition=Ready
{% else %}
k3s kubectl wait {{ item.type }}/{{ item.name }}
--namespace='{{ item.namespace }}'
--for=condition=Available
{% endif %}
--timeout=7s
register: cr_result
changed_when: false
until: cr_result is succeeded
retries: 30
delay: 7
with_items:
- { name: calico-typha, type: deployment, namespace: calico-system }
- { name: calico-kube-controllers, type: deployment, namespace: calico-system }
- { name: csi-node-driver, type: daemonset, selector: 'k8s-app=csi-node-driver', namespace: calico-system }
- { name: calico-node, type: daemonset, selector: 'k8s-app=calico-node', namespace: calico-system }
- { name: calico-apiserver, type: deployment, selector: 'k8s-app=calico-apiserver', namespace: calico-apiserver }
loop_control:
label: "{{ item.type }}/{{ item.name }}"
when: ansible_hostname == hostvars[groups[group_name_master | default('master')][0]]['ansible_hostname']
run_once: true # stops "skipped" log spam

View File

@@ -0,0 +1,14 @@
---
- name: Deploy calico
include_tasks: calico.yml
tags: calico
when: use_calico == true
- name: Deploy metallb pool
include_tasks: metallb.yml
tags: metallb
- name: Remove tmp directory used for manifests
file:
path: /tmp/k3s
state: absent

View File

@@ -0,0 +1,101 @@
---
- name: Create manifests directory for temp configuration
file:
path: /tmp/k3s
state: directory
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Wait for MetalLB resources
command: >-
k3s kubectl wait {{ item.resource }}
--namespace='metallb-system'
{% if item.name | default(False) -%}{{ item.name }}{%- endif %}
{% if item.selector | default(False) -%}--selector='{{ item.selector }}'{%- endif %}
{% if item.condition | default(False) -%}{{ item.condition }}{%- endif %}
--timeout='{{ metal_lb_available_timeout }}'
changed_when: false
run_once: true
with_items:
- description: controller
resource: deployment
name: controller
condition: --for condition=Available=True
- description: webhook service
resource: pod
selector: component=controller
condition: --for=jsonpath='{.status.phase}'=Running
- description: pods in replica sets
resource: pod
selector: component=controller,app=metallb
condition: --for condition=Ready
- description: ready replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.readyReplicas}'=1
- description: fully labeled replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.fullyLabeledReplicas}'=1
- description: available replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.availableReplicas}'=1
loop_control:
label: "{{ item.description }}"
- name: Test metallb-system webhook-service endpoint
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups[group_name_master | default('master')] }}"
run_once: true
- name: Apply metallb CRs
command: >-
k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
--timeout='{{ metal_lb_available_timeout }}'
register: this
changed_when: false
run_once: true
until: this.rc == 0
retries: 5
- name: Test metallb-system resources for Layer 2 configuration
command: >-
k3s kubectl -n metallb-system get {{ item }}
changed_when: false
run_once: true
when: metal_lb_mode == "layer2"
with_items:
- IPAddressPool
- L2Advertisement
- name: Test metallb-system resources for BGP configuration
command: >-
k3s kubectl -n metallb-system get {{ item }}
changed_when: false
run_once: true
when: metal_lb_mode == "bgp"
with_items:
- IPAddressPool
- BGPPeer
- BGPAdvertisement

View File

@@ -0,0 +1,28 @@
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
# Configures Calico networking.
calicoNetwork:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: {{ calico_blockSize if calico_blockSize is defined else '26' }}
cidr: {{ calico_cidr if calico_cidr is defined else '10.52.0.0/16' }}
encapsulation: {{ calico_encapsulation if calico_encapsulation is defined else 'VXLANCrossSubnet' }}
natOutgoing: {{ calico_natOutgoing if calico_natOutgoing is defined else 'Enabled' }}
nodeSelector: {{ calico_nodeSelector if calico_nodeSelector is defined else 'all()' }}
nodeAddressAutodetectionV4:
interface: {{ container_iface if container_iface is defined else 'eth0' }}
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}

View File

@@ -0,0 +1,43 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
{% if metal_lb_ip_range is string %}
{# metal_lb_ip_range was used in the legacy way: single string instead of a list #}
{# => transform to list with single element #}
{% set metal_lb_ip_range = [metal_lb_ip_range] %}
{% endif %}
{% for range in metal_lb_ip_range %}
- {{ range }}
{% endfor %}
{% if metal_lb_mode == "layer2" %}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system
{% endif %}
{% if metal_lb_mode == "bgp" %}
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
name: default
namespace: metallb-system
spec:
myASN: {{ metal_lb_bgp_my_asn }}
peerASN: {{ metal_lb_bgp_peer_asn }}
peerAddress: {{ metal_lb_bgp_peer_address }}
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
name: default
namespace: metallb-system
{% endif %}

View File

@@ -0,0 +1,5 @@
---
- name: Reboot server
become: true
reboot:
listen: reboot server

roles/lxc/tasks/main.yml Normal file
View File

@@ -0,0 +1,21 @@
---
- name: Check for rc.local file
stat:
path: /etc/rc.local
register: rcfile
- name: Create rc.local if needed
lineinfile:
path: /etc/rc.local
line: "#!/bin/sh -e"
create: true
insertbefore: BOF
mode: "u=rwx,g=rx,o=rx"
when: not rcfile.stat.exists
- name: Write rc.local file
blockinfile:
path: /etc/rc.local
content: "{{ lookup('template', 'templates/rc.local.j2') }}"
state: present
notify: reboot server

View File

@@ -0,0 +1,4 @@
---
secure_path:
RedHat: '/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
Suse: '/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin'

View File

@@ -1,27 +1,37 @@
---
- name: Set same timezone on every Server
timezone:
community.general.timezone:
name: "{{ system_timezone }}"
when: (system_timezone is defined) and (system_timezone != "Your/Timezone")
- name: Set SELinux to disabled state
selinux:
ansible.posix.selinux:
state: disabled
when: ansible_os_family == "RedHat"
- name: Enable IPv4 forwarding
sysctl:
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: "1"
state: present
reload: yes
reload: true
tags: sysctl
- name: Enable IPv6 forwarding
sysctl:
ansible.posix.sysctl:
name: net.ipv6.conf.all.forwarding
value: "1"
state: present
reload: yes
reload: true
tags: sysctl
- name: Enable IPv6 router advertisements
ansible.posix.sysctl:
name: net.ipv6.conf.all.accept_ra
value: "2"
state: present
reload: true
tags: sysctl
- name: Add br_netfilter to /etc/modules-load.d/
copy:
@@ -31,28 +41,29 @@
when: ansible_os_family == "RedHat"
- name: Load br_netfilter
modprobe:
community.general.modprobe:
name: br_netfilter
state: present
when: ansible_os_family == "RedHat"
- name: Set bridge-nf-call-iptables (just to be sure)
sysctl:
ansible.posix.sysctl:
name: "{{ item }}"
value: "1"
state: present
reload: yes
reload: true
when: ansible_os_family == "RedHat"
loop:
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
tags: sysctl
- name: Add /usr/local/bin to sudo secure_path
lineinfile:
line: 'Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin'
line: 'Defaults secure_path = {{ secure_path[ansible_os_family] }}'
regexp: "Defaults(\\s)*secure_path(\\s)*="
state: present
insertafter: EOF
path: /etc/sudoers
validate: 'visudo -cf %s'
when: ansible_os_family == "RedHat"
when: ansible_os_family in [ "RedHat", "Suse" ]

View File

@@ -0,0 +1,13 @@
---
- name: Reboot containers
block:
- name: Get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids: >-
{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}
listen: reboot containers
- name: Reboot container
command: "pct reboot {{ item }}"
loop: "{{ proxmox_lxc_filtered_ids }}"
changed_when: true
listen: reboot containers
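
The filter chain above simply strips the directory and extension from each surviving config path; sketched with a sample path (the id is a placeholder):

"/etc/pve/lxc/200.conf" | split("/") | last   ->  "200.conf"
"200.conf"              | split(".") | first  ->  "200"

so pct reboot 200 is then run for each filtered id.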

View File

@@ -0,0 +1,44 @@
---
- name: Check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: Filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
# https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185
- name: Ensure lxc config has the right apparmor profile
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.apparmor.profile"
line: "lxc.apparmor.profile: unconfined"
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right cgroup
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cgroup.devices.allow"
line: "lxc.cgroup.devices.allow: a"
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right cap drop
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cap.drop"
line: "lxc.cap.drop: "
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right mounts
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.mount.auto"
line: 'lxc.mount.auto: "proc:rw sys:rw"'
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
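
For illustration (ids are placeholders): with proxmox_lxc_ct_ids: [200, 201] and only /etc/pve/lxc/200.conf present on this Proxmox node, the stat/set_fact pair above yields:

proxmox_lxc_filtered_files: ["/etc/pve/lxc/200.conf"]

and the lineinfile tasks that follow only touch that one file.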

View File

@@ -0,0 +1,6 @@
---
# Indicates whether the k3s prerequisites for Raspberry Pi should be set up
# Possible values:
# - present
# - absent
state: present

View File

@@ -1,3 +1,4 @@
---
- name: Reboot
reboot:
listen: reboot

View File

@@ -47,13 +47,16 @@
- raspberry_pi|default(false)
- ansible_facts.lsb.description|default("") is match("Debian.*bullseye")
- name: execute OS related tasks on the Raspberry Pi
- name: Execute OS-related tasks on the Raspberry Pi - {{ action_ }}
include_tasks: "{{ item }}"
with_first_found:
- "prereq/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml"
- "prereq/{{ detected_distribution }}.yml"
- "prereq/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
- "prereq/{{ ansible_distribution }}.yml"
- "prereq/default.yml"
- "{{ action_ }}/{{ detected_distribution }}-{{ detected_distribution_major_version }}.yml"
- "{{ action_ }}/{{ detected_distribution }}.yml"
- "{{ action_ }}/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
- "{{ action_ }}/{{ ansible_distribution }}.yml"
- "{{ action_ }}/default.yml"
vars:
action_: >-
{% if state == "present" %}setup{% else %}teardown{% endif %}
when:
- raspberry_pi|default(false)
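
As a concrete example (distribution names are illustrative): on an Ubuntu 22 host with state: "absent", action_ renders to "teardown", so with_first_found includes the first file that exists out of:

teardown/Ubuntu-22.yml
teardown/Ubuntu.yml
teardown/default.yml

with any detected_distribution-specific variants checked ahead of the ansible_distribution ones.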

View File

@@ -8,20 +8,22 @@
notify: reboot
- name: Install iptables
apt: name=iptables state=present
apt:
name: iptables
state: present
- name: Flush iptables before changing to iptables-legacy
iptables:
flush: true
- name: Changing to iptables-legacy
alternatives:
community.general.alternatives:
path: /usr/sbin/iptables-legacy
name: iptables
register: ip4_legacy
- name: Changing to ip6tables-legacy
alternatives:
community.general.alternatives:
path: /usr/sbin/ip6tables-legacy
name: ip6tables
register: ip6_legacy

View File

@@ -1,8 +1,8 @@
---
- name: Enable cgroup via boot commandline if not already enabled for Centos
- name: Enable cgroup via boot commandline if not already enabled for Rocky
lineinfile:
path: /boot/cmdline.txt
backrefs: yes
backrefs: true
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot
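
The negative-lookahead regexp only rewrites the line when the cgroup flags are missing, appending them via the backreference; e.g. (sample cmdline, not from this repo):

before: console=tty1 root=/dev/mmcblk0p2 rootwait
after:  console=tty1 root=/dev/mmcblk0p2 rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory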

View File

@@ -2,12 +2,12 @@
- name: Enable cgroup via boot commandline if not already enabled for Ubuntu on a Raspberry Pi
lineinfile:
path: /boot/firmware/cmdline.txt
backrefs: yes
backrefs: true
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot
when: not ansible_check_mode
- name: Install linux-modules-extra-raspi
apt: name=linux-modules-extra-raspi state=present
when: (raspberry_pi) and (not ansible_check_mode)
apt:
name: linux-modules-extra-raspi
state: present

View File

@@ -0,0 +1 @@
---

View File

@@ -0,0 +1 @@
---

View File

@@ -0,0 +1,5 @@
---
- name: Remove linux-modules-extra-raspi
apt:
name: linux-modules-extra-raspi
state: absent

View File

@@ -0,0 +1 @@
---

View File

@@ -3,7 +3,7 @@
systemd:
name: "{{ item }}"
state: stopped
enabled: no
enabled: false
failed_when: false
with_items:
- k3s
@@ -44,21 +44,50 @@
- /var/lib/kubelet
- /var/lib/rancher/k3s
- /var/lib/rancher/
- /usr/local/bin/k3s
- /var/lib/cni/
- name: Remove K3s http_proxy files
file:
name: "{{ item }}"
state: absent
with_items:
- "{{ systemd_dir }}/k3s.service.d"
- "{{ systemd_dir }}/k3s-node.service.d"
when: proxy_env is defined
- name: Reload systemd daemon
systemd:
daemon_reload: yes
daemon_reload: true
- name: Remove linux-modules-extra-raspi
apt: name=linux-modules-extra-raspi state=absent
- name: Remove tmp director used for manifests
- name: Remove tmp directory used for manifests
file:
path: /tmp/k3s
state: absent
- name: Reboot and wait for node to come back up
reboot:
reboot_timeout: 3600
- name: Check if rc.local exists
stat:
path: /etc/rc.local
register: rcfile
- name: Remove rc.local modifications for proxmox lxc containers
become: true
blockinfile:
path: /etc/rc.local
content: "{{ lookup('template', 'templates/rc.local.j2') }}"
create: false
state: absent
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Check rc.local for cleanup
become: true
slurp:
src: /etc/rc.local
register: rcslurp
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Cleanup rc.local if we only have a Shebang line
become: true
file:
path: /etc/rc.local
state: absent
when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1
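
To make that last condition concrete: after blockinfile strips its managed block, an rc.local that the lxc role alone had created holds only the shebang line, so (sketching the evaluation):

(rcslurp.content | b64decode).splitlines() | length  ->  1

which satisfies <= 1, and the now-empty file is removed.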

View File

@@ -9,7 +9,7 @@
check_mode: false
- name: Umount filesystem
mount:
ansible.posix.mount:
path: "{{ item }}"
state: unmounted
with_items:

View File

@@ -0,0 +1 @@
../../proxmox_lxc/handlers/main.yml

View File

@@ -0,0 +1,47 @@
---
- name: Check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: Filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
- name: Remove LXC apparmor profile
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.apparmor.profile"
line: "lxc.apparmor.profile: unconfined"
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc cgroups
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cgroup.devices.allow"
line: "lxc.cgroup.devices.allow: a"
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc cap drop
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cap.drop"
line: "lxc.cap.drop: "
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc mounts
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.mount.auto"
line: 'lxc.mount.auto: "proc:rw sys:rw"'
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers

Some files were not shown because too many files have changed in this diff.