Compare commits

...

24 Commits

Author SHA1 Message Date
Techno Tim
6695d13683 upgrade k3s to v1.24.4+k3s1 (#64)
* feat(k3s): Upgrade to v1.24.4+k3s1
* feat(metallb): updated to v0.13.5
2022-09-01 21:20:25 -05:00
Techno Tim
74e1dc1dfe Pin GitHub Actions to SHA + Dependabot (#62)
* feat(repo): Add dependabot

* fix(ci): clean up

* fix(gh-actions): pin to sha

* fix(lint): fixing yaml lint

* feat(repo): Add dependabot

* fix(vagrant): up retry count to 60 because gh actions are sloooooow
2022-08-30 23:15:15 -05:00
Techno Tim
56f8f21850 fix(ansible): Install services separate from config (#63) 2022-08-30 21:44:55 -05:00
Timothy Stewart
117c608a73 fix(ansible): added longer wait with todo 2022-08-29 23:16:13 -05:00
niki-on-github
e28d8f38e2 add ansible.posix module to requirements.yml (#59)
Co-authored-by: arch <arch@local>
Co-authored-by: Techno Tim <timothystewart6@gmail.com>
2022-08-29 22:58:57 -05:00
Simon Leiner
9d8a5cc2b8 Execute Vagrant cluster in CI (#57) 2022-08-29 19:45:07 -05:00
Techno Tim
2296959894 fix(ci): Fix Linting (#61) 2022-08-28 20:36:05 -05:00
Timothy Stewart
6d793c5c96 fix(ansible): add wait 2022-08-28 17:49:38 -05:00
Timothy Stewart
47ac514dc6 fix(ansible): fix lint 2022-08-28 16:42:07 -05:00
Timothy Stewart
611cf5ab0b fix(ansible): fix lint 2022-08-28 16:32:52 -05:00
Timothy Stewart
c82cbfc501 fix(ansible): fix lint 2022-08-28 16:29:04 -05:00
Timothy Stewart
f603a048c3 fix(ansible): fix lint 2022-08-28 16:26:46 -05:00
Timothy Stewart
4b959719ba fix(ansible): run task on one master 2022-08-28 16:00:10 -05:00
Timothy Stewart
db8fbd9447 chore(lint): Fix yaml lint 2022-08-28 14:27:22 -05:00
Techno Tim
aa05ab153e fix(ansible): Refactored ansible steps to now install metallb in post… (#58)
* fix(ansible): Refactored ansible steps to now install metallb in post task and verify
2022-08-28 14:25:09 -05:00
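The refactor above follows a copy-then-verify pattern: MetalLB's namespace and CRD manifests are dropped into the k3s manifests folder during install, and a new post role later waits for the controller before applying the address-pool CRs. A condensed sketch of one of those verification tasks, with names and selectors taken from the post-role diff further down:

- name: Wait for metallb controller to be running
  command: >-
    kubectl wait deployment -n metallb-system controller
    --for condition=Available=True --timeout=60s
  changed_when: false
  run_once: true

Marking the task changed_when: false keeps these read-only checks from reporting spurious changes on every run.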
Simon Leiner
370e19169b Print fewer logs when removing manifests (#55) 2022-08-23 23:26:08 -05:00
Timothy Stewart
e04f3bac61 chore(github): Updated issue template 2022-08-20 16:22:56 -05:00
Techno Tim
cdd7c4e668 Fix k3s manifest (#53)
* fix(k3s): Remove manifests and folders from bootstrapped cluster
2022-08-20 16:19:20 -05:00
Lance A. Brown
90bbc0a399 Add linux-modules-extra-raspi package for Ubuntu 22.x on Raspberry. (#50)
* Add task for linux-modules-extra-raspi

Ubuntu 22.x on Raspberry Pi needs the linux-modules-extra-raspi package
for the vxlan kernel module.

* Remove linux-modules-extra-raspi package

Not sure we want to do this but including it in the PR anyway for discussion.
2022-08-11 21:23:56 -05:00
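A minimal sketch of the added task, mirroring what the Raspberry Pi prerequisites diff further down introduces (the raspberry_pi fact and check-mode guard come from that diff):

- name: Install linux-modules-extra-raspi  # provides the vxlan kernel module on Ubuntu 22.x
  apt:
    name: linux-modules-extra-raspi
    state: present
  when: (raspberry_pi) and (not ansible_check_mode)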
slemmercs
1e4b48f039 replaced --no-deploy with --disable (#49)
According to the Kubernetes Components section of https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/, the --disable <value> flag should be used, since --no-deploy is a deprecated option
2022-08-11 21:23:47 -05:00
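In practice the change is confined to the extra_server_args variable in all.yml (see the group_vars diff below); a minimal before/after sketch:

# deprecated form:
# extra_server_args: "--no-deploy servicelb --no-deploy traefik"
# supported form:
extra_server_args: "--disable servicelb --disable traefik"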
Timothy Stewart
ac5325a670 fix(kube-vip): Cleaning up; adding missing rbac api groups 2022-07-30 22:11:28 -05:00
Techno Tim
a33ed487e0 feat(upgrades): Updated k3s, metalls, and kubevip and fixed bugs (#46) 2022-07-27 23:13:43 -05:00
Simon Leiner
1830b9c9a1 Fix .gitignore (#40)
For more details, see:
https://stackoverflow.com/a/20652768
2022-07-27 21:24:59 -05:00
SwaggaRitz
39581f4ba7 Replaced manifest files with double extention to '-' (#41)
Co-authored-by: Adrian Jones <adrian@geektowers.local>
2022-07-27 21:21:38 -05:00
23 changed files with 2197 additions and 547 deletions


@@ -26,7 +26,7 @@ Operating system:
Hardware:
### Variables Used:
### Variables Used
`all.yml`
@@ -52,7 +52,7 @@ metal_lb_controller_tag_version: ""
metal_lb_ip_range: ""
```
### Hosts
### Hosts
`host.ini`
@@ -73,3 +73,5 @@ node
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
- [ ] I've checked the [General Troubleshooting Guide](https://github.com/techno-tim/k3s-ansible/discussions/20)

.github/dependabot.yml (vendored, new file, 11 lines added)

@@ -0,0 +1,11 @@
---
version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"
rebase-strategy: "auto"
ignore:
- dependency-name: "*"
update-types: ["version-update:semver-major"]


@@ -1,31 +1,30 @@
---
name: Lint
'on':
name: Linting
on:
pull_request:
push:
branches:
- master
jobs:
test:
name: Lint
ansible-lint:
name: YAML Lint + Ansible Lint
runs-on: ubuntu-latest
steps:
- name: Check out the codebase.
uses: actions/checkout@v2
- name: Check out the codebase
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
- name: Set up Python 3.7.
uses: actions/setup-python@v2
- name: Set up Python 3.x
uses: actions/setup-python@b55428b1882923874294fa556849718a1d7f2ca5 #4.0.2
with:
python-version: '3.x'
- name: Install test dependencies.
- name: Install test dependencies
run: pip3 install yamllint ansible-lint ansible
- name: Run yamllint.
- name: Run yamllint
run: yamllint .
- name: Run ansible-lint.
- name: Run ansible-lint
run: ansible-lint

.github/workflows/test.yml (vendored, new file, 69 lines added)

@@ -0,0 +1,69 @@
---
name: Test
on:
pull_request:
push:
branches:
- master
jobs:
vagrant:
name: Vagrant
runs-on: macos-12
env:
HOMEBREW_NO_INSTALL_CLEANUP: 1
VAGRANT_CWD: ${{ github.workspace }}/vagrant
steps:
- name: Check out the codebase
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b # 3.0.2
- name: Install Ansible
run: brew install ansible
- name: Install role dependencies
run: ansible-galaxy install -r collections/requirements.yml
- name: Configure VirtualBox
run: >-
sudo mkdir -p /etc/vbox &&
echo "* 192.168.30.0/24" | sudo tee -a /etc/vbox/networks.conf > /dev/null
- name: Cache Vagrant boxes
uses: actions/cache@fd5de65bc895cf536527842281bea11763fefd77 # 3.0.8
with:
path: |
~/.vagrant.d/boxes
key: vagrant-boxes-${{ hashFiles('**/Vagrantfile') }}
restore-keys: |
vagrant-boxes
- name: Create virtual machines
run: vagrant up
timeout-minutes: 10
- name: Provision cluster using Ansible
# Since Ansible sets up _all_ machines, it is sufficient to run it only
# once (i.e, for a single node - we are choosing control1 here)
run: vagrant provision control1 --provision-with ansible
timeout-minutes: 25
- name: Set up kubectl on the host
run: brew install kubectl &&
mkdir -p ~/.kube &&
vagrant ssh control1 --command "cat ~/.kube/config" > ~/.kube/config
- name: Show cluster nodes
run: kubectl describe -A nodes
- name: Show cluster pods
run: kubectl describe -A pods
- name: Test cluster
run: $VAGRANT_CWD/test_cluster.py --verbose --locals
timeout-minutes: 5
- name: Destroy virtual machines
if: always() # do this even if a step before has failed
run: vagrant destroy --force


@@ -1,3 +1,4 @@
---
collections:
- name: community.general
- name: ansible.posix


@@ -1,3 +1,3 @@
*
/*
!.gitignore
!sample/
!sample/


@@ -1,5 +1,5 @@
---
k3s_version: v1.23.4+k3s1
k3s_version: v1.24.4+k3s1
# this is the user that has ssh access to these machines
ansible_user: ansibleuser
systemd_dir: /etc/systemd/system
@@ -17,16 +17,16 @@ apiserver_endpoint: "192.168.30.222"
# this token should be alpha numeric only
k3s_token: "some-SUPER-DEDEUPER-secret-password"
# change these to your liking, the only required one is--no-deploy servicelb
extra_server_args: "--no-deploy servicelb --no-deploy traefik"
# change these to your liking, the only required one is--disable servicelb
extra_server_args: "--disable servicelb --disable traefik"
extra_agent_args: ""
# image tag for kube-vip
kube_vip_tag_version: "v0.4.4"
kube_vip_tag_version: "v0.5.0"
# image tag for metal lb
metal_lb_speaker_tag_version: "v0.12.1"
metal_lb_controller_tag_version: "v0.12.1"
metal_lb_speaker_tag_version: "v0.13.5"
metal_lb_controller_tag_version: "v0.13.5"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.30.80-192.168.30.90"


@@ -25,7 +25,7 @@
- name: Copy vip rbac manifest to first master
template:
src: "vip.rbac.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/vip.rbac.yaml"
dest: "/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml"
owner: root
group: root
mode: 0644
@@ -40,28 +40,20 @@
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Copy metallb namespace manifest to first master
# these will be copied and installed now, then tested later and apply config
- name: Copy metallb namespace to first master
template:
src: "metallb.namespace.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb.namespace.yaml"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-namespace.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Copy metallb ConfigMap manifest to first master
- name: Copy metallb namespace to first master
template:
src: "metallb.configmap.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb.configmap.yaml"
owner: root
group: root
mode: 0644
when: ansible_host == hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0])
- name: Copy metallb main manifest to first master
template:
src: "metallb.yaml.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb.yaml"
src: "metallb.crds.j2"
dest: "/var/lib/rancher/k3s/server/manifests/metallb-crds.yaml"
owner: root
group: root
mode: 0644
@@ -93,6 +85,7 @@
name: k3s-init
state: stopped
failed_when: false
when: not ansible_check_mode
- name: Copy K3s service file
register: k3s_service
@@ -171,3 +164,25 @@
src: /usr/local/bin/k3s
dest: /usr/local/bin/crictl
state: link
- name: Get contents of manifests folder
find:
paths: /var/lib/rancher/k3s/server/manifests
file_type: file
register: k3s_server_manifests
- name: Get sub dirs of manifests folder
find:
paths: /var/lib/rancher/k3s/server/manifests
file_type: directory
register: k3s_server_manifests_directories
- name: Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start
file:
path: "{{ item.path }}"
state: absent
with_items:
- "{{ k3s_server_manifests.files }}"
- "{{ k3s_server_manifests_directories.files }}"
loop_control:
label: "{{ item.path }}"


@@ -1,13 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- {{ metal_lb_ip_range }}

File diff suppressed because it is too large.


@@ -1,481 +0,0 @@
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
labels:
app: metallb
name: controller
spec:
allowPrivilegeEscalation: false
allowedCapabilities: []
allowedHostPaths: []
defaultAddCapabilities: []
defaultAllowPrivilegeEscalation: false
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
hostIPC: false
hostNetwork: false
hostPID: false
privileged: false
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
runAsUser:
ranges:
- max: 65535
min: 1
rule: MustRunAs
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- secret
- emptyDir
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
labels:
app: metallb
name: speaker
spec:
allowPrivilegeEscalation: false
allowedCapabilities:
- NET_RAW
allowedHostPaths: []
defaultAddCapabilities: []
defaultAllowPrivilegeEscalation: false
fsGroup:
rule: RunAsAny
hostIPC: false
hostNetwork: true
hostPID: false
hostPorts:
- max: 7472
min: 7472
- max: 7946
min: 7946
privileged: true
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- configMap
- secret
- emptyDir
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: metallb
name: controller
namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: metallb
name: speaker
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: metallb
name: metallb-system:controller
rules:
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services/status
verbs:
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- policy
resourceNames:
- controller
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: metallb
name: metallb-system:speaker
rules:
- apiGroups:
- ''
resources:
- services
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- policy
resourceNames:
- speaker
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: metallb
name: config-watcher
namespace: metallb-system
rules:
- apiGroups:
- ''
resources:
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: metallb
name: pod-lister
namespace: metallb-system
rules:
- apiGroups:
- ''
resources:
- pods
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: metallb
name: controller
namespace: metallb-system
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- create
- apiGroups:
- ''
resources:
- secrets
resourceNames:
- memberlist
verbs:
- list
- apiGroups:
- apps
resources:
- deployments
resourceNames:
- controller
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: metallb
name: metallb-system:controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: metallb
name: metallb-system:speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: metallb
name: config-watcher
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: metallb
name: pod-lister
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-lister
subjects:
- kind: ServiceAccount
name: speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: metallb
name: controller
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: controller
subjects:
- kind: ServiceAccount
name: controller
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: metallb
component: speaker
name: speaker
namespace: metallb-system
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
annotations:
prometheus.io/port: '7472'
prometheus.io/scrape: 'true'
labels:
app: metallb
component: speaker
spec:
containers:
- args:
- --port=7472
- --config=config
- --log-level=info
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: METALLB_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: METALLB_ML_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
# needed when another software is also using memberlist / port 7946
# when changing this default you also need to update the container ports definition
# and the PodSecurityPolicy hostPorts definition
#- name: METALLB_ML_BIND_PORT
# value: "7946"
- name: METALLB_ML_LABELS
value: "app=metallb,component=speaker"
- name: METALLB_ML_SECRET_KEY
valueFrom:
secretKeyRef:
name: memberlist
key: secretkey
image: quay.io/metallb/speaker:{{ metal_lb_speaker_tag_version }}
name: speaker
ports:
- containerPort: 7472
name: monitoring
- containerPort: 7946
name: memberlist-tcp
- containerPort: 7946
name: memberlist-udp
protocol: UDP
livenessProbe:
httpGet:
path: /metrics
port: monitoring
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /metrics
port: monitoring
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_RAW
drop:
- ALL
readOnlyRootFilesystem: true
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: speaker
terminationGracePeriodSeconds: 2
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: metallb
component: controller
name: controller
namespace: metallb-system
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
annotations:
prometheus.io/port: '7472'
prometheus.io/scrape: 'true'
labels:
app: metallb
component: controller
spec:
containers:
- args:
- --port=7472
- --config=config
- --log-level=info
env:
- name: METALLB_ML_SECRET_NAME
value: memberlist
- name: METALLB_DEPLOYMENT
value: controller
image: quay.io/metallb/controller:{{ metal_lb_controller_tag_version }}
name: controller
ports:
- containerPort: 7472
name: monitoring
livenessProbe:
httpGet:
path: /metrics
port: monitoring
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /metrics
port: monitoring
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 65534
fsGroup: 65534
serviceAccountName: controller
terminationGracePeriodSeconds: 0


@@ -12,7 +12,7 @@ metadata:
name: system:kube-vip-role
rules:
- apiGroups: [""]
resources: ["services", "services/status", "nodes"]
resources: ["services", "services/status", "nodes", "endpoints"]
verbs: ["list","get","watch", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
@@ -30,4 +30,3 @@ subjects:
- kind: ServiceAccount
name: kube-vip
namespace: kube-system


@@ -1,7 +1,6 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
creationTimestamp: null
name: kube-vip-ds
namespace: kube-system
spec:
@@ -10,7 +9,6 @@ spec:
name: kube-vip-ds
template:
metadata:
creationTimestamp: null
labels:
name: kube-vip-ds
spec:


@@ -0,0 +1,107 @@
---
- name: Create manifests directory for temp configuration
file:
path: /tmp/k3s
state: directory
owner: root
group: root
mode: 0644
with_items: "{{ groups['master'] }}"
run_once: true
- name: Copy metallb CRs manifest to first master
template:
src: "metallb.crs.j2"
dest: "/tmp/k3s/metallb-crs.yaml"
owner: root
group: root
mode: 0644
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system namespace
command: >-
k3s kubectl -n metallb-system
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb controller to be running
command: >-
kubectl wait deployment -n metallb-system controller --for condition=Available=True --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb webhook service to be running
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.phase}'=Running pods \
--selector component=controller --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for metallb pods in replicasets
command: >-
kubectl wait pods -n metallb-system --for condition=Ready \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller readyReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.readyReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller fullyLabeledReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.fullyLabeledReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Wait for the metallb controller availableReplicas
command: >-
kubectl wait -n metallb-system --for=jsonpath='{.status.availableReplicas}'=1 replicasets \
--selector component=controller,app=metallb --timeout=60s
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system webhook-service endpoint
command: >-
k3s kubectl -n metallb-system get endpoints webhook-service
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Apply metallb CRs
command: >-
k3s kubectl apply -f /tmp/k3s/metallb-crs.yaml
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system IPAddressPool
command: >-
k3s kubectl -n metallb-system get IPAddressPool
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Test metallb-system L2Advertisement
command: >-
k3s kubectl -n metallb-system get L2Advertisement
changed_when: false
with_items: "{{ groups['master'] }}"
run_once: true
- name: Remove tmp director used for manifests
file:
path: /tmp/k3s
state: absent


@@ -0,0 +1,14 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- {{ metal_lb_ip_range }}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system


@@ -1,3 +1,3 @@
---
- name: reboot
- name: Reboot
reboot:


@@ -6,3 +6,4 @@
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot
when: not ansible_check_mode


@@ -13,7 +13,6 @@
- name: Flush iptables before changing to iptables-legacy
iptables:
flush: true
changed_when: false # iptables flush always returns changed
- name: Changing to iptables-legacy
alternatives:


@@ -6,3 +6,8 @@
regexp: '^((?!.*\bcgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory\b).*)$'
line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
notify: reboot
when: not ansible_check_mode
- name: Install linux-modules-extra-raspi
apt: name=linux-modules-extra-raspi state=present
when: (raspberry_pi) and (not ansible_check_mode)


@@ -10,7 +10,7 @@
- k3s-node
- k3s-init
- name: pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
- name: RUN pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
register: pkill_containerd_shim_runc
command: pkill -9 -f "k3s/data/[^/]+/bin/containerd-shim-runc"
changed_when: "pkill_containerd_shim_runc.rc == 0"
@@ -47,10 +47,18 @@
- /usr/local/bin/k3s
- /var/lib/cni/
- name: daemon_reload
- name: Reload daemon_reload
systemd:
daemon_reload: yes
- name: Remove linux-modules-extra-raspi
apt: name=linux-modules-extra-raspi state=absent
- name: Remove tmp director used for manifests
file:
path: /tmp/k3s
state: absent
- name: Reboot and wait for node to come back up
reboot:
reboot_timeout: 3600


@@ -17,3 +17,8 @@
become: yes
roles:
- role: k3s/node
- hosts: master
become: yes
roles:
- role: k3s/post

vagrant/Vagrantfile (vendored, 16 lines changed)

@@ -3,12 +3,12 @@
Vagrant.configure("2") do |config|
# General configuration
config.vm.box = "generic/ubuntu2110"
config.vm.box = "generic/ubuntu2204"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.ssh.insert_key = false
config.vm.provider :virtualbox do |v|
v.memory = 4096
v.memory = 2048
v.cpus = 2
v.linked_clone = true
end
@@ -50,7 +50,7 @@ Vagrant.configure("2") do |config|
"master" => ["control1", "control2", "control3"],
"node" => ["node1", "node2"],
"k3s_cluster:children" => ["master", "node"],
"k3s_cluster:vars" => {"k3s_version" => "v1.23.4+k3s1",
"k3s_cluster:vars" => {"k3s_version" => "v1.24.4+k3s1",
"ansible_user" => "vagrant",
"systemd_dir" => "/etc/systemd/system",
"flannel_iface" => "eth1",
@@ -58,11 +58,11 @@ Vagrant.configure("2") do |config|
"k3s_token" => "supersecret",
"extra_server_args" => "--node-ip={{ ansible_eth1.ipv4.address }} --flannel-iface={{ flannel_iface }} --no-deploy servicelb --no-deploy traefik",
"extra_agent_args" => "--flannel-iface={{ flannel_iface }}",
"kube_vip_tag_version" => "v0.4.2",
"metal_lb_speaker_tag_version" => "v0.12.1",
"metal_lb_controller_tag_version" => "v0.12.1",
"kube_vip_tag_version" => "v0.5.0",
"metal_lb_speaker_tag_version" => "v0.13.4",
"metal_lb_controller_tag_version" => "v0.13.4",
"metal_lb_ip_range" => "192.168.30.80-192.168.30.90",
"retry_count" => "30"}
"retry_count" => "60"}
}
ansible.host_vars = {
"control1" => {
@@ -76,4 +76,4 @@ Vagrant.configure("2") do |config|
}
}
end
end
end

vagrant/test_cluster.py (executable, new file, 114 lines added)

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
# Perform a few tests on a cluster created with this playbook.
# To simplify test execution, the scripts does not depend on any third-party
# packages, only the Python standard library.
import json
import subprocess
import unittest
from pathlib import Path
from time import sleep
from warnings import warn
VAGRANT_DIR = Path(__file__).parent.absolute()
PLAYBOOK_DIR = VAGRANT_DIR.parent.absolute()
class TestK3sCluster(unittest.TestCase):
def _kubectl(self, args: str, json_out: bool = True) -> dict | None:
cmd = "kubectl"
if json_out:
cmd += " -o json"
cmd += f" {args}"
result = subprocess.run(cmd, capture_output=True, shell=True, check=True)
if json_out:
return json.loads(result.stdout)
else:
return None
def _curl(self, url: str) -> str:
options = [
"--silent", # no progress info
"--show-error", # ... but errors should still be shown
"--fail", # set exit code on error
"--location", # follow redirects
]
cmd = f'curl {" ".join(options)} "{url}"'
result = subprocess.run(cmd, capture_output=True, shell=True, check=True)
output = result.stdout.decode("utf-8")
return output
def _apply_manifest(self, manifest_file: Path) -> dict:
apply_result = self._kubectl(
f'apply --filename="{manifest_file}" --cascade="background"'
)
self.addCleanup(
lambda: self._kubectl(
f'delete --filename="{manifest_file}"',
json_out=False,
)
)
return apply_result
@staticmethod
def _retry(function, retries: int = 5, seconds_between_retries=1):
for retry in range(1, retries + 1):
try:
return function()
except Exception as exc:
if retry < retries:
sleep(seconds_between_retries)
continue
else:
raise exc
def _get_load_balancer_ip(
self,
service: str,
namespace: str = "default",
) -> str | None:
svc_description = self._kubectl(
f'get --namespace="{namespace}" service "{service}"'
)
ip = svc_description["status"]["loadBalancer"]["ingress"][0]["ip"]
return ip
def test_nodes_exist(self):
out = self._kubectl("get nodes")
node_names = {item["metadata"]["name"] for item in out["items"]}
self.assertEqual(
node_names,
{"control1", "control2", "control3", "node1", "node2"},
)
def test_ip_address_pool_exists(self):
out = self._kubectl("get --all-namespaces IpAddressPool")
pools = out["items"]
self.assertGreater(len(pools), 0)
def test_nginx_example_page(self):
# Deploy the manifests to the cluster
deployment = self._apply_manifest(PLAYBOOK_DIR / "example" / "deployment.yml")
service = self._apply_manifest(PLAYBOOK_DIR / "example" / "service.yml")
# Assert that the dummy page is available
metallb_ip = self._retry(
lambda: self._get_load_balancer_ip(service["metadata"]["name"])
)
# Now that an IP address was assigned, let's reload the service description:
service = self._kubectl(f'get service "{service["metadata"]["name"]}"')
metallb_port = service["spec"]["ports"][0]["port"]
response_body = self._retry(
lambda: self._curl(f"http://{metallb_ip}:{metallb_port}/")
)
self.assertIn("Welcome to nginx!", response_body)
if __name__ == "__main__":
unittest.main()