Compare commits


47 Commits

Author SHA1 Message Date
Techno Tim
edd4838407 feat(k3s): Updated to v1.25 (#187)
* feat(k3s): Updated to v1.25.4+k3s1

* feat(k3s): Updated to v1.25.5+k3s1

* feat(k3s): Updated to v1.25.7+k3s1

* feat(k3s): Updated to v1.25.8+k3s1

* feat(k3s): Updated to v1.25.9+k3s1

* feat(kube-vip): Update to v0.5.12
2023-04-27 23:09:46 -05:00
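For reference, this bump lands in the sample inventory/my-cluster/group_vars/all.yml (see the file diff further down); a minimal sketch of the two affected variables:

k3s_version: v1.25.9+k3s1
# image tag for kube-vip
kube_vip_tag_version: "v0.5.12"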
dependabot[bot]
5c79ea9b71 chore(deps): bump ansible-core from 2.14.4 to 2.14.5 (#287)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.4 to 2.14.5.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.4...v2.14.5)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 14:19:52 -05:00
dependabot[bot]
3d204ad851 chore(deps): bump yamllint from 1.30.0 to 1.31.0 (#284)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.30.0 to 1.31.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.30.0...v1.31.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-04-24 11:17:02 -05:00
dependabot[bot]
13bd868faa chore(deps): bump ansible-lint from 6.14.6 to 6.15.0 (#285)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.6 to 6.15.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.6...v6.15.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-23 23:10:28 -05:00
dependabot[bot]
c564a8562a chore(deps): bump ansible-lint from 6.14.3 to 6.14.6 (#275)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.3 to 6.14.6.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.3...v6.14.6)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-14 23:34:03 -05:00
Sam Schmit-Van Werweke
0d6d43e7ca Bump k3s version to v1.24.12+k3s1 (#269) 2023-04-02 21:31:20 -05:00
dependabot[bot]
c0952288c2 chore(deps): bump ansible-core from 2.14.3 to 2.14.4 (#265)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.3 to 2.14.4.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.3...v2.14.4)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 15:07:16 -05:00
dependabot[bot]
1c9796e98b chore(deps): bump ansible-lint from 6.14.2 to 6.14.3 (#264)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.2 to 6.14.3.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.2...v6.14.3)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 12:18:52 -05:00
ThePCGeek
288c4089e0 Pc geek fix proxmox lxc (#263)
* (fix): correct var

The var registered for the rc.local check is rcfile, but the when condition referenced rclocal, which was undefined. Changed it to rcfile to correct this.

* add vars file for proxmox host group

* remove remote_user from site.yml for proxmox

* added newline to fix lint issue

* fix added ---

---------

Co-authored-by: ThePCGeek <thepcgeek1776@gmail.com>
2023-03-25 22:02:59 -05:00
ThePCGeek
49f0a2ce6b (fix): correct var (#262)
The var registered for the rc.local check is rcfile, but the when condition referenced rclocal, which was undefined. Changed it to rcfile to correct this.
2023-03-25 20:41:04 -05:00
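The corrected pattern, sketched from the lxc role tasks shown in the diff below — the stat result is registered as rcfile and that same name is used in the when condition:

- name: Check for rc.local file
  stat:
    path: /etc/rc.local
  register: rcfile
- name: Create rc.local if needed
  lineinfile:
    path: /etc/rc.local
    line: "#!/bin/sh -e"
    create: true
    insertbefore: BOF
    mode: "u=rwx,g=rx,o=rx"
  when: not rcfile.stat.exists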
dependabot[bot]
6c4621bd56 chore(deps): bump yamllint from 1.29.0 to 1.30.0 (#261)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.29.0 to 1.30.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.29.0...v1.30.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-24 02:46:16 +00:00
Techno Tim
3e16ab6809 Chore: Update kube vip and MetalLB (#257)
* chore(dependencies): updated metallb to v0.13.9

* chore(dependencies): updated kube-vip to v0.5.11
2023-03-15 04:32:26 +00:00
Techno Tim
83fe50797c feat(k3s): Updated to v1.24.11+k3s1 (#255) 2023-03-14 04:04:06 +00:00
dependabot[bot]
2db0b3024c chore(deps): bump ansible-lint from 6.14.1 to 6.14.2 (#249)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.1 to 6.14.2.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.1...v6.14.2)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-13 08:47:43 -05:00
dependabot[bot]
6b2af77e74 chore(deps): bump ansible-lint from 6.14.0 to 6.14.1 (#248)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.14.0 to 6.14.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.14.0...v6.14.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-09 08:38:59 -06:00
dependabot[bot]
d1d1bc3d91 chore(deps): bump ansible-lint from 6.13.1 to 6.14.0 (#246)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.13.1 to 6.14.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.13.1...v6.14.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-05 22:33:16 -06:00
Noms
3a1a7a19aa Fix LXC container implementations (#231)
* Need to become to reboot

* Fix rc.local insertion of script

* Fix syntax

Add new line to lxc.yml

* Remove need to set fact

* Add reset for LXC container config

* Fix syntax

It's always the newlines...

* remove fact setting from reset task

We should mirror the deployment task

* Proxmox LXC reset functions

* Handle if rc.local already has data

* Don't compare literal

* Cleanup Erroneous newline

* Handle rc.local not present on a hybrid cluster

* Update roles/reset/tasks/main.yml

Co-authored-by: Simon Leiner <simon@leiner.me>

* Update roles/lxc/tasks/main.yml

Co-authored-by: Simon Leiner <simon@leiner.me>

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
Co-authored-by: Simon Leiner <simon@leiner.me>
2023-03-03 11:28:14 -06:00
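The rc.local handling mentioned above ("Handle if rc.local already has data", cleanup of erroneous newlines) ends up as roughly the following reset tasks — a sketch taken from the reset role diff below: the file is slurped and only removed if nothing but the shebang line remains.

- name: Check rc.local for cleanup
  become: true
  slurp:
    src: /etc/rc.local
  register: rcslurp
  when: proxmox_lxc_configure and rcfile.stat.exists
- name: Cleanup rc.local if we only have a Shebang line
  become: true
  file:
    path: /etc/rc.local
    state: absent
  when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1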
dependabot[bot]
030eeb4b75 chore(deps): bump ansible-core from 2.14.2 to 2.14.3 (#244)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.2 to 2.14.3.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.2...v2.14.3)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 21:59:16 -06:00
Techno Tim
4aeeb124ef docs(README): Removed note about ansible version (#243) 2023-02-26 14:01:21 -06:00
Timothy Stewart
511c020bec docs(README): Updated with a note about ansible version on control node 2023-02-25 10:09:05 -06:00
dependabot[bot]
c47da38b53 chore(deps): bump ansible-lint from 6.12.1 to 6.13.1 (#240)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.12.1 to 6.13.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.12.1...v6.13.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 20:04:38 +00:00
Simon Leiner
6448948e9f Fix dual-stack clusters with multiple master nodes (#237)
* Test IPv6 scenario with two master nodes

* Fix IPv6 multimaster setup

---------

Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-02-20 05:24:19 +00:00
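The multi-master part of the fix is the --server URL built for additional masters: with dual-stack node IPs, only the first address is used and IPv6 addresses are wrapped in brackets. A sketch of the resulting variable (taken from the k3s master variables in the diff below):

server_init_args: >-
  {% if groups['master'] | length > 1 %}
  {% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}
  --cluster-init
  {% else %}
  --server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
  {% endif %}
  --token {{ k3s_token }}
  {% endif %}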
Simon Leiner
7bc198ab26 Pick kube-vip interface automatically by default (#238)
* Pick kube-vip interface automatically by default

* molecule: Fix ipv6 scenario

* Choose a more restrictive molecule timeout in CI
2023-02-20 04:08:36 +00:00
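After this change, kube_vip_iface defaults to null and kube-vip picks the interface itself; you only set it when the VIP must be pinned to a specific NIC. A sketch of the override in group_vars (variable and comment taken from the defaults diff below, eth1 used as an example value as in the molecule scenario):

# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: eth1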
Simon Leiner
65bbc8e2ac Simplify download and patching of MetalLB manifests (#239)
This removes duplicated code and cleans up Ansible log lines a bit.
2023-02-19 21:34:22 -06:00
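After the cleanup, patching the image tags in the downloaded manifest is a single replace task; a sketch of that task as it appears in the new metallb tasks file (see the diff below):

- name: Set image versions in manifest for metallb-{{ metal_lb_type }}
  ansible.builtin.replace:
    path: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
    regexp: "{{ item.change | ansible.builtin.regex_escape }}"
    replace: "{{ item.to }}"
  with_items:
    - change: "metallb/speaker:{{ metal_lb_controller_tag_version }}"
      to: "metallb/speaker:{{ metal_lb_speaker_tag_version }}"
  loop_control:
    label: "{{ item.change }} => {{ item.to }}"
  when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']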
Mike Thomas
dc2976e7f6 Metallb BGP support (#212)
* Add metallb frr and bgp support

* Set metallb mode to layer2 as default in sample

* Add BGP resource check

* Add automatic downloading of metallb-frr

* Remove frr manifest
2023-02-09 23:58:58 -06:00
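The BGP support is driven by a few group_vars; a sketch of the relevant settings from the updated sample all.yml (values as they appear in the diff below, BGP options commented out by default):

# metallb type frr or native
metal_lb_type: "native"
# metallb mode layer2 or bgp
metal_lb_mode: "layer2"
# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"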
dependabot[bot]
5a7ba98968 chore(deps): bump ansible-lint from 6.12.0 to 6.12.1 (#226)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.12.0 to 6.12.1.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.12.0...v6.12.1)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2023-02-06 23:23:42 -06:00
Simon Leiner
10c6ef1d57 Download MetalLB CRDs for respective versions (#225)
* Download MetalLB CRDs for respective versions

This ensures that the CRDs match the actual MetalLB controller version,
as given by the user.

* Download VIP RBAC definitions for respective version
2023-02-06 22:24:02 -06:00
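Concretely, the manifest is now fetched for the exact controller tag the user configured; a sketch of the download task (taken from the new metallb download tasks in the diff below):

- name: "Download to first master: manifest for metallb-{{ metal_lb_type }}"
  ansible.builtin.get_url:
    url: "https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{metal_lb_type}}.yaml"
    dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
    owner: root
    group: root
    mode: 0644
  when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']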
Timothy Stewart
ed4d888e3d fix(gitignore): ignore ansible.cfg 2023-02-05 22:09:50 -06:00
Simon Leiner
49d6d484ae Override less Ansible settings (#224)
* Do not escalate privileges by default

* Do not disable host key checking by default

* Do not mute deprecation warnings by default

* Provide ansible.cfg only as an example

The new example file does ONLY contain options that are related to this
playbook.

* Remove explicit inventory path from scripts

The inventory file is specified in ansible.cfg, see README.md.
2023-02-05 21:52:44 -06:00
dependabot[bot]
96c49c864e chore(deps): bump ansible-lint from 6.11.0 to 6.12.0 (#222)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.11.0 to 6.12.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.11.0...v6.12.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-03 23:11:31 -06:00
dependabot[bot]
60adb1de42 chore(deps): bump ansible-core from 2.14.1 to 2.14.2 (#220)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.14.1 to 2.14.2.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.14.1...v2.14.2)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-30 20:57:15 -06:00
Techno Tim
e023808f2f feat(k3s): Updated to v1.24.10+k3s1 (#215) 2023-01-29 21:25:09 -06:00
acdoussan
511ec493d6 add support for proxmox lxc containers (#209)
Co-authored-by: Adam Doussan <acdoussan@Adams-MacBook-Pro.local>
2023-01-29 21:23:31 -06:00
Simon Leiner
be3e72e173 Do not rely on ansible_user (#214)
* Apply "become" on roles instead of plays

This leads to facts being gathered for the "regular" login user, instead
of root.

* Do not rely on ansible_user

Instead of reading ansible_user (which may or may not be defined), this
patch lets the roles rely on Ansible facts [1].

[1]: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html
2023-01-29 21:20:25 -06:00
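In practice the roles now read the gathered facts ansible_user_id and ansible_user_dir instead of the ansible_user variable; a sketch of one affected task from the k3s/master role (see the diff below):

- name: Create directory .kube
  file:
    path: "{{ ansible_user_dir }}/.kube"
    state: directory
    owner: "{{ ansible_user_id }}"
    mode: "u=rwx,g=rx,o="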
dependabot[bot]
e33cbe52c1 chore(deps): bump ansible-lint from 6.8.6 to 6.11.0 (#213)
Bumps [ansible-lint](https://github.com/ansible/ansible-lint) from 6.8.6 to 6.11.0.
- [Release notes](https://github.com/ansible/ansible-lint/releases)
- [Commits](https://github.com/ansible/ansible-lint/compare/v6.8.6...v6.11.0)

---
updated-dependencies:
- dependency-name: ansible-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-29 16:06:26 -06:00
dependabot[bot]
c06af919f3 chore(deps): bump yamllint from 1.28.0 to 1.29.0 (#201)
Bumps [yamllint](https://github.com/adrienverge/yamllint) from 1.28.0 to 1.29.0.
- [Release notes](https://github.com/adrienverge/yamllint/releases)
- [Changelog](https://github.com/adrienverge/yamllint/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/adrienverge/yamllint/compare/v1.28.0...v1.29.0)

---
updated-dependencies:
- dependency-name: yamllint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 22:56:45 -06:00
Techno Tim
b86384c439 fix(raspberrypi): Fix handler name (#200) 2023-01-10 21:26:27 -06:00
Techno Tim
bf2bd1edc5 feat(k3s): Updated to v1.24.9+k3s1 (#197) 2023-01-06 18:53:40 -06:00
irish1986
e98e3ee77c Split manifest into separate task for ease of use (#191) 2023-01-01 23:04:22 -06:00
dependabot[bot]
78f7a60378 chore(deps): bump pre-commit from 2.20.0 to 2.21.0 (#188)
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 2.20.0 to 2.21.0.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v2.20.0...v2.21.0)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-25 23:50:56 -06:00
dependabot[bot]
e64fea760d chore(deps): bump ansible-core from 2.13.5 to 2.14.1 (#176)
Bumps [ansible-core](https://github.com/ansible/ansible) from 2.13.5 to 2.14.1.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.13.5...v2.14.1)

---
updated-dependencies:
- dependency-name: ansible-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-06 22:30:24 -06:00
dependabot[bot]
764e32c778 chore(deps): bump molecule from 4.0.3 to 4.0.4 (#175)
Bumps [molecule](https://github.com/ansible-community/molecule) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/ansible-community/molecule/releases)
- [Commits](https://github.com/ansible-community/molecule/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: molecule
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-05 22:26:07 -06:00
Techno Tim
e6cf14ea78 K3s 1 24 8 (#171)
* chore(dependencies): Updated actions

* chore(dependencies): updated to k3s to v1.24.8+k3s1 and kube-vip to v0.5.7
2022-12-02 23:14:06 -06:00
theonejj
da049dcc28 fix: config warning callback_whitelist (#170)
Co-authored-by: Jan Jansen <j.jansen@powerspex.nl>
2022-12-01 23:09:02 -06:00
Sherif Metwally
2604caa483 "command" module no longer supports "warn" argument (#169)
* "command" module no longer supports "warn" argument

* correct indentation lint errors
2022-11-29 20:26:01 -06:00
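With ansible-core 2.14 the command module rejects the old warn argument, so the fix is simply to delete the args: warn: false block from the affected tasks. A hypothetical before/after to illustrate the pattern (the task itself is illustrative, not copied from this repo; the actual deletions are visible in the k3s/master diff below):

# before (fails on ansible-core 2.14)
- name: Run a kubectl check
  command: k3s kubectl get nodes
  args:
    warn: false
# after
- name: Run a kubectl check
  command: k3s kubectl get nodes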
dependabot[bot]
82d820805f chore(deps): bump pre-commit-hooks from 4.3.0 to 4.4.0 (#168)
Bumps [pre-commit-hooks](https://github.com/pre-commit/pre-commit-hooks) from 4.3.0 to 4.4.0.
- [Release notes](https://github.com/pre-commit/pre-commit-hooks/releases)
- [Changelog](https://github.com/pre-commit/pre-commit-hooks/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit-hooks/compare/v4.3.0...v4.4.0)

---
updated-dependencies:
- dependency-name: pre-commit-hooks
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-11-24 20:54:33 -06:00
Timothy Stewart
da72884a5b fix(ci): remove self-hosted 2022-11-23 23:30:06 -06:00
39 changed files with 501 additions and 2053 deletions

View File

@@ -11,12 +11,12 @@ jobs:
steps:
- name: Check out the codebase
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # 4.3.0
uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip' # caching pip dependencies
@@ -56,12 +56,12 @@ jobs:
ensure-pinned-actions:
name: Ensure SHA Pinned Actions
runs-on: self-hosted
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@6ca5574367befbc9efdb2fa25978084159c5902d # 1.3.0
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@af2eb3226618e2494e3d9084f515ad6dcf16e229 # 2.0.1
with:
allowlist: |
aws-actions/

View File

@@ -18,7 +18,7 @@ jobs:
steps:
- name: Check out the codebase
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
uses: actions/checkout@e2f20e631ae6d7dd3b768f56a5d2af784dd54791 # v3 2.5.0
with:
ref: ${{ github.event.pull_request.head.sha }}
@@ -54,7 +54,7 @@ jobs:
run: ./.github/download-boxes.sh
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # 4.3.0
uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # 2.3.3
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip' # caching pip dependencies
@@ -71,6 +71,7 @@ jobs:
- name: Test with molecule
run: molecule test --scenario-name ${{ matrix.scenario }}
timeout-minutes: 90
env:
ANSIBLE_K3S_LOG_DIR: ${{ runner.temp }}/logs/k3s-ansible/${{ matrix.scenario }}
ANSIBLE_SSH_RETRIES: 4

1
.gitignore vendored
View File

@@ -1,2 +1,3 @@
.env/
*.log
ansible.cfg

View File

@@ -28,7 +28,7 @@ on processor architecture:
## ✅ System requirements
- Deployment environment must have Ansible 2.4.0+. If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://docs.technotim.live/posts/ansible-automation/).
- Control Node (the machine you are running `ansible` commands from) must have Ansible 2.11+. If you need a quick primer on Ansible [you can check out my docs and setting up Ansible](https://docs.technotim.live/posts/ansible-automation/).
- You will also need to install collections that this playbook uses by running `ansible-galaxy collection install -r ./collections/requirements.yml` (important❗)
@@ -67,6 +67,8 @@ node
If multiple hosts are in the master group, the playbook will automatically set up k3s in [HA mode with etcd](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).
Finally, copy `ansible.example.cfg` to `ansible.cfg` and adapt the inventory path to match the files that you just created.
This requires at least k3s version `1.19.1` however the version is configurable by using the `k3s_version` variable.
If needed, you can also edit `inventory/my-cluster/group_vars/all.yml` to match your environment.

View File

@@ -1,23 +0,0 @@
[defaults]
nocows = True
roles_path = ./roles
inventory = ./hosts.ini
stdout_callback = yaml
remote_tmp = $HOME/.ansible/tmp
local_tmp = $HOME/.ansible/tmp
timeout = 60
host_key_checking = False
deprecation_warnings = False
callback_whitelist = profile_tasks
log_path = ./ansible.log
[privilege_escalation]
become = True
[ssh_connection]
scp_if_ssh = smart
retries = 3
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o Compression=yes -o ServerAliveInterval=15s
pipelining = True
control_path = %(directory)s/%%h-%%r

2
ansible.example.cfg Normal file
View File

@@ -0,0 +1,2 @@
[defaults]
inventory = inventory/my-cluster/hosts.ini ; Adapt this to the path to your inventory file

View File

@@ -1,3 +1,3 @@
#!/bin/bash
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
ansible-playbook site.yml

View File

@@ -1,5 +1,5 @@
---
k3s_version: v1.24.7+k3s1
k3s_version: v1.25.9+k3s1
# this is the user that has ssh access to these machines
ansible_user: ansibleuser
systemd_dir: /etc/systemd/system
@@ -41,11 +41,44 @@ extra_agent_args: >-
{{ extra_args }}
# image tag for kube-vip
kube_vip_tag_version: "v0.5.6"
kube_vip_tag_version: "v0.5.12"
# metallb type frr or native
metal_lb_type: "native"
# metallb mode layer2 or bgp
metal_lb_mode: "layer2"
# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"
# image tag for metal lb
metal_lb_speaker_tag_version: "v0.13.7"
metal_lb_controller_tag_version: "v0.13.7"
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
# in your hosts.ini file.
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade off being that lxc
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this.
# I would only really recommend using this if you have particularly low powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
# set this value to some-user
proxmox_lxc_ssh_user: root
# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes
proxmox_lxc_ct_ids:
- 200
- 201
- 202
- 203
- 204

View File

@@ -0,0 +1,2 @@
---
ansible_user: '{{ proxmox_lxc_ssh_user }}'

View File

@@ -7,6 +7,11 @@
192.168.30.41
192.168.30.42
# only required if proxmox_lxc_configure: true
# must contain all proxmox instances that have a master or worker node
# [proxmox]
# 192.168.30.43
[k3s_cluster:children]
master
node

View File

@@ -0,0 +1,3 @@
---
node_ipv4: 192.168.123.12
node_ipv6: fdad:bad:ba55::de:12

View File

@@ -4,7 +4,6 @@ dependency:
driver:
name: vagrant
platforms:
- name: control1
box: generic/ubuntu2204
memory: 2048
@@ -21,6 +20,22 @@ platforms:
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: control2
box: generic/ubuntu2204
memory: 2048
cpus: 2
groups:
- k3s_cluster
- master
interfaces:
- network_name: private_network
ip: fdad:bad:ba55::de:12
config_options:
# We currently can not use public-key based authentication on Ubuntu 22.04,
# see: https://github.com/chef/bento/issues/1405
ssh.username: "vagrant"
ssh.password: "vagrant"
- name: node1
box: generic/ubuntu2204
memory: 2048

View File

@@ -7,6 +7,11 @@
# See: https://github.com/flannel-io/flannel/blob/67d603aaf45ef80f5dd39f43714fc5e6f8a637eb/Documentation/troubleshooting.md#Vagrant # noqa yaml[line-length]
flannel_iface: eth1
# In this scenario, we have multiple interfaces that the VIP could be
# broadcasted on. Since we have assigned a dedicated private network
# here, let's make sure that it is used.
kube_vip_iface: eth1
# The test VMs might be a bit slow, so we give them more time to join the cluster:
retry_count: 45

View File

@@ -1,3 +1,3 @@
#!/bin/bash
ansible-playbook reboot.yml -i inventory/my-cluster/hosts.ini
ansible-playbook reboot.yml

View File

@@ -2,8 +2,8 @@
- name: Reboot k3s_cluster
hosts: k3s_cluster
gather_facts: yes
become: yes
tasks:
- name: Reboot the nodes (and wait up to 5 mins max)
become: true
reboot:
reboot_timeout: 300

View File

@@ -4,15 +4,14 @@
#
# pip-compile requirements.in
#
ansible-compat==2.2.4
# via
# ansible-lint
# molecule
ansible-core==2.13.5
ansible-compat==3.0.1
# via molecule
ansible-core==2.14.5
# via
# -r requirements.in
# ansible-compat
# ansible-lint
ansible-lint==6.8.6
ansible-lint==6.15.0
# via -r requirements.in
arrow==1.2.3
# via jinja2-time
@@ -68,8 +67,6 @@ identify==2.5.8
# via pre-commit
idna==3.4
# via requests
importlib-resources==5.10.0
# via jsonschema
jinja2==3.1.2
# via
# ansible-core
@@ -94,7 +91,7 @@ kubernetes==25.3.0
# via -r requirements.in
markupsafe==2.1.1
# via jinja2
molecule==4.0.3
molecule==4.0.4
# via
# -r requirements.in
# molecule-vagrant
@@ -118,17 +115,15 @@ pathspec==0.10.1
# via
# black
# yamllint
pkgutil-resolve-name==1.3.10
# via jsonschema
platformdirs==2.5.2
# via
# black
# virtualenv
pluggy==1.0.0
# via molecule
pre-commit==2.20.0
pre-commit==2.21.0
# via -r requirements.in
pre-commit-hooks==4.3.0
pre-commit-hooks==4.4.0
# via -r requirements.in
pyasn1==0.4.8
# via
@@ -184,8 +179,6 @@ ruamel-yaml==0.17.21
# via
# ansible-lint
# pre-commit-hooks
ruamel-yaml-clib==0.2.7
# via ruamel-yaml
selinux==0.2.1
# via molecule-vagrant
six==1.16.0
@@ -193,20 +186,12 @@ six==1.16.0
# google-auth
# kubernetes
# python-dateutil
subprocess-tee==0.3.5
# via ansible-compat
subprocess-tee==0.4.1
# via
# ansible-compat
# ansible-lint
text-unidecode==1.3
# via python-slugify
toml==0.10.2
# via pre-commit
tomli==2.0.1
# via
# black
# pre-commit-hooks
typing-extensions==4.4.0
# via
# black
# rich
urllib3==1.26.12
# via
# kubernetes
@@ -217,12 +202,10 @@ wcmatch==8.4.1
# via ansible-lint
websocket-client==1.4.2
# via kubernetes
yamllint==1.28.0
yamllint==1.31.0
# via
# -r requirements.in
# ansible-lint
zipp==3.10.0
# via importlib-resources
# The following packages are considered to be unsafe in a requirements file:
# setuptools

View File

@@ -1,3 +1,3 @@
#!/bin/bash
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
ansible-playbook reset.yml

View File

@@ -2,12 +2,22 @@
- hosts: k3s_cluster
gather_facts: yes
become: yes
roles:
- role: reset
become: true
- role: raspberrypi
become: true
vars: {state: absent}
post_tasks:
- name: Reboot and wait for node to come back up
become: true
reboot:
reboot_timeout: 3600
- hosts: proxmox
gather_facts: true
become: yes
remote_user: "{{ proxmox_lxc_ssh_user }}"
roles:
- role: reset_proxmox_lxc
when: proxmox_lxc_configure

View File

@@ -1,11 +1,15 @@
---
ansible_user: root
# If you want to explicitly define an interface that ALL control nodes
# should use to propagate the VIP, define it here. Otherwise, kube-vip
# will determine the right interface automatically at runtime.
kube_vip_iface: null
server_init_args: >-
{% if groups['master'] | length > 1 %}
{% if ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname'] %}
--cluster-init
{% else %}
--server https://{{ hostvars[groups['master'][0]].k3s_node_ip }}:6443
--server https://{{ hostvars[groups['master'][0]].k3s_node_ip | split(",") | first | ansible.utils.ipwrap }}:6443
{% endif %}
--token {{ k3s_token }}
{% endif %}

View File

@@ -13,51 +13,11 @@
args:
warn: false # The ansible systemd module does not support reset-failed
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Deploy vip manifest
include_tasks: vip.yml
- name: Copy vip rbac manifest to first master
template:
src: "vip.rbac.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Copy vip manifest to first master
template:
src: "vip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
# these will be copied and installed now, then tested later and apply config
- name: Copy metallb namespace to first master
template:
src: "metallb.namespace.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-namespace.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Copy metallb namespace to first master
template:
src: "metallb.crds.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Deploy metallb manifest
include_tasks: metallb.yml
- name: Init cluster inside the transient k3s-init service
command:
@@ -66,8 +26,6 @@
--unit=k3s-init \
k3s server {{ server_init_args }}"
creates: "{{ systemd_dir }}/k3s.service"
args:
warn: false # The ansible systemd module does not support transient units
- name: Verification
block:
@@ -139,24 +97,24 @@
- name: Create directory .kube
file:
path: ~{{ ansible_user }}/.kube
path: "{{ ansible_user_dir }}/.kube"
state: directory
owner: "{{ ansible_user }}"
owner: "{{ ansible_user_id }}"
mode: "u=rwx,g=rx,o="
- name: Copy config file to user home directory
copy:
src: /etc/rancher/k3s/k3s.yaml
dest: ~{{ ansible_user }}/.kube/config
dest: "{{ ansible_user_dir }}/.kube/config"
remote_src: yes
owner: "{{ ansible_user }}"
owner: "{{ ansible_user_id }}"
mode: "u=rw,g=,o="
- name: Configure kubectl cluster to {{ endpoint_url }}
command: >-
k3s kubectl config set-cluster default
--server={{ endpoint_url }}
--kubeconfig ~{{ ansible_user }}/.kube/config
--kubeconfig {{ ansible_user_dir }}/.kube/config
changed_when: true
vars:
endpoint_url: >-

View File

@@ -0,0 +1,30 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: "Download to first master: manifest for metallb-{{ metal_lb_type }}"
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/metallb/metallb/{{ metal_lb_controller_tag_version }}/config/manifests/metallb-{{metal_lb_type}}.yaml" # noqa yaml[line-length]
dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Set image versions in manifest for metallb-{{ metal_lb_type }}
ansible.builtin.replace:
path: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
regexp: "{{ item.change | ansible.builtin.regex_escape }}"
replace: "{{ item.to }}"
with_items:
- change: "metallb/speaker:{{ metal_lb_controller_tag_version }}"
to: "metallb/speaker:{{ metal_lb_speaker_tag_version }}"
loop_control:
label: "{{ item.change }} => {{ item.to }}"
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']

View File

@@ -0,0 +1,27 @@
---
- name: Create manifests directory on first master
file:
path: /var/lib/rancher/k3s/server/manifests
state: directory
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Download vip rbac manifest to first master
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/kube-vip/kube-vip/{{ kube_vip_tag_version }}/docs/manifests/rbac.yaml"
dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']
- name: Copy vip manifest to first master
template:
src: "vip.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip.yaml"
owner: root
group: root
mode: 0644
when: ansible_hostname == hostvars[groups['master'][0]]['ansible_hostname']

File diff suppressed because it is too large

View File

@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb

View File

@@ -1,32 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-vip
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: system:kube-vip-role
rules:
- apiGroups: [""]
resources: ["services", "services/status", "nodes", "endpoints"]
verbs: ["list","get","watch", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["list", "get", "watch", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:kube-vip-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-vip-role
subjects:
- kind: ServiceAccount
name: kube-vip
namespace: kube-system

View File

@@ -30,8 +30,10 @@ spec:
value: "true"
- name: port
value: "6443"
{% if kube_vip_iface %}
- name: vip_interface
value: {{ flannel_iface }}
value: {{ kube_vip_iface }}
{% endif %}
- name: vip_cidr
value: "{{ apiserver_endpoint | ansible.utils.ipsubnet | ansible.utils.ipaddr('prefix') }}"
- name: cp_enable

View File

@@ -1,92 +1,6 @@
---
- name: Create manifests directory for temp configuration
file:
path: /tmp/k3s
state: directory
owner: "{{ ansible_user }}"
mode: 0755
with_items: "{{ groups['master'] }}"
run_once: true
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: "{{ ansible_user }}"
mode: 0755
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for MetalLB resources
command: >-
k3s kubectl wait {{ item.resource }}
--namespace='metallb-system'
{% if item.name | default(False) -%}{{ item.name }}{%- endif %}
{% if item.selector | default(False) -%}--selector='{{ item.selector }}'{%- endif %}
{% if item.condition | default(False) -%}{{ item.condition }}{%- endif %}
--timeout='{{ metal_lb_available_timeout }}'
changed_when: false
run_once: true
with_items:
- description: controller
resource: deployment
name: controller
condition: --for condition=Available=True
- description: webhook service
resource: pod
selector: component=controller
condition: --for=jsonpath='{.status.phase}'=Running
- description: pods in replica sets
resource: pod
selector: component=controller,app=metallb
condition: --for condition=Ready
- description: ready replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.readyReplicas}'=1
- description: fully labeled replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.fullyLabeledReplicas}'=1
- description: available replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.availableReplicas}'=1
loop_control:
label: "{{ item.description }}"
- name: Test metallb-system webhook-service endpoint
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Apply metallb CRs
command: >-
k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
--timeout='{{ metal_lb_available_timeout }}'
register: this
changed_when: false
run_once: true
until: this.rc == 0
retries: 5
- name: Test metallb-system resources
command: >-
k3s kubectl -n metallb-system get {{ item }}
changed_when: false
run_once: true
with_items:
- IPAddressPool
- L2Advertisement
- name: Deploy metallb pool
include_tasks: metallb.yml
- name: Remove tmp directory used for manifests
file:

View File

@@ -0,0 +1,101 @@
---
- name: Create manifests directory for temp configuration
file:
path: /tmp/k3s
state: directory
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups['master'] }}"
run_once: true
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: "{{ ansible_user_id }}"
mode: 0755
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for MetalLB resources
command: >-
k3s kubectl wait {{ item.resource }}
--namespace='metallb-system'
{% if item.name | default(False) -%}{{ item.name }}{%- endif %}
{% if item.selector | default(False) -%}--selector='{{ item.selector }}'{%- endif %}
{% if item.condition | default(False) -%}{{ item.condition }}{%- endif %}
--timeout='{{ metal_lb_available_timeout }}'
changed_when: false
run_once: true
with_items:
- description: controller
resource: deployment
name: controller
condition: --for condition=Available=True
- description: webhook service
resource: pod
selector: component=controller
condition: --for=jsonpath='{.status.phase}'=Running
- description: pods in replica sets
resource: pod
selector: component=controller,app=metallb
condition: --for condition=Ready
- description: ready replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.readyReplicas}'=1
- description: fully labeled replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.fullyLabeledReplicas}'=1
- description: available replicas of controller
resource: replicaset
selector: component=controller,app=metallb
condition: --for=jsonpath='{.status.availableReplicas}'=1
loop_control:
label: "{{ item.description }}"
- name: Test metallb-system webhook-service endpoint
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Apply metallb CRs
command: >-
k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
--timeout='{{ metal_lb_available_timeout }}'
register: this
changed_when: false
run_once: true
until: this.rc == 0
retries: 5
- name: Test metallb-system resources for Layer 2 configuration
command: >-
k3s kubectl -n metallb-system get {{ item }}
changed_when: false
run_once: true
when: metal_lb_mode == "layer2"
with_items:
- IPAddressPool
- L2Advertisement
- name: Test metallb-system resources for BGP configuration
command: >-
k3s kubectl -n metallb-system get {{ item }}
changed_when: false
run_once: true
when: metal_lb_mode == "bgp"
with_items:
- IPAddressPool
- BGPPeer
- BGPAdvertisement

View File

@@ -13,9 +13,31 @@ spec:
{% for range in metal_lb_ip_range %}
- {{ range }}
{% endfor %}
{% if metal_lb_mode == "layer2" %}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system
{% endif %}
{% if metal_lb_mode == "bgp" %}
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
name: default
namespace: metallb-system
spec:
myASN: {{ metal_lb_bgp_my_asn }}
peerASN: {{ metal_lb_bgp_peer_asn }}
peerAddress: {{ metal_lb_bgp_peer_address }}
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
name: default
namespace: metallb-system
{% endif %}

View File

@@ -0,0 +1,4 @@
---
- name: reboot server
become: true
reboot:

21
roles/lxc/tasks/main.yml Normal file
View File

@@ -0,0 +1,21 @@
---
- name: Check for rc.local file
stat:
path: /etc/rc.local
register: rcfile
- name: Create rc.local if needed
lineinfile:
path: /etc/rc.local
line: "#!/bin/sh -e"
create: true
insertbefore: BOF
mode: "u=rwx,g=rx,o=rx"
when: not rcfile.stat.exists
- name: Write rc.local file
blockinfile:
path: /etc/rc.local
content: "{{ lookup('template', 'templates/rc.local.j2') }}"
state: present
notify: reboot server

View File

@@ -0,0 +1,5 @@
---
- name: reboot containers
command:
"pct reboot {{ item }}"
loop: "{{ proxmox_lxc_filtered_ids }}"

View File

@@ -0,0 +1,50 @@
---
- name: check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
# used for the reboot handler
- name: get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids:
'{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}'
# https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185
- name: Ensure lxc config has the right apparmor profile
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.apparmor.profile"
line: "lxc.apparmor.profile: unconfined"
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right cgroup
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cgroup.devices.allow"
line: "lxc.cgroup.devices.allow: a"
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right cap drop
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cap.drop"
line: "lxc.cap.drop: "
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Ensure lxc config has the right mounts
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.mount.auto"
line: 'lxc.mount.auto: "proc:rw sys:rw"'
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers

View File

@@ -1,3 +1,3 @@
---
- name: Reboot
- name: reboot
reboot:

View File

@@ -54,3 +54,31 @@
file:
path: /tmp/k3s
state: absent
- name: Check if rc.local exists
stat:
path: /etc/rc.local
register: rcfile
- name: Remove rc.local modifications for proxmox lxc containers
become: true
blockinfile:
path: /etc/rc.local
content: "{{ lookup('template', 'templates/rc.local.j2') }}"
create: false
state: absent
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Check rc.local for cleanup
become: true
slurp:
src: /etc/rc.local
register: rcslurp
when: proxmox_lxc_configure and rcfile.stat.exists
- name: Cleanup rc.local if we only have a Shebang line
become: true
file:
path: /etc/rc.local
state: absent
when: proxmox_lxc_configure and rcfile.stat.exists and ((rcslurp.content | b64decode).splitlines() | length) <= 1

View File

@@ -0,0 +1,5 @@
---
- name: reboot containers
command:
"pct reboot {{ item }}"
loop: "{{ proxmox_lxc_filtered_ids }}"

View File

@@ -0,0 +1,53 @@
---
- name: check for container files that exist on this host
stat:
path: "/etc/pve/lxc/{{ item }}.conf"
loop: "{{ proxmox_lxc_ct_ids }}"
register: stat_results
- name: filter out files that do not exist
set_fact:
proxmox_lxc_filtered_files:
'{{ stat_results.results | rejectattr("stat.exists", "false") | map(attribute="stat.path") }}'
# used for the reboot handler
- name: get container ids from filtered files
set_fact:
proxmox_lxc_filtered_ids:
'{{ proxmox_lxc_filtered_files | map("split", "/") | map("last") | map("split", ".") | map("first") }}'
- name: Remove LXC apparmor profile
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.apparmor.profile"
line: "lxc.apparmor.profile: unconfined"
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc cgroups
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cgroup.devices.allow"
line: "lxc.cgroup.devices.allow: a"
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc cap drop
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.cap.drop"
line: "lxc.cap.drop: "
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers
- name: Remove lxc mounts
lineinfile:
dest: "{{ item }}"
regexp: "^lxc.mount.auto"
line: 'lxc.mount.auto: "proc:rw sys:rw"'
state: absent
loop: "{{ proxmox_lxc_filtered_files }}"
notify: reboot containers

View File

@@ -1,24 +1,36 @@
---
- hosts: proxmox
gather_facts: true
become: yes
roles:
- role: proxmox_lxc
when: proxmox_lxc_configure
- hosts: k3s_cluster
gather_facts: yes
become: yes
roles:
- role: lxc
become: true
when: proxmox_lxc_configure
- role: prereq
become: true
- role: download
become: true
- role: raspberrypi
become: true
- hosts: master
become: yes
roles:
- role: k3s/master
become: true
- hosts: node
become: yes
roles:
- role: k3s/node
become: true
- hosts: master
become: yes
roles:
- role: k3s/post
become: true

8
templates/rc.local.j2 Normal file
View File

@@ -0,0 +1,8 @@
# Kubeadm 1.15 needs /dev/kmsg to be there, but it's not present in lxc, so we can just use /dev/console instead
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
ln -s /dev/console /dev/kmsg
fi
# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /