Test Deployment Notes
What follows is my (potentially slightly incoherent) stream of consciousness as I try to teach myself Kubernetes.
First Deployment
- Minikube running with QEMU driver on local machine
- Used the following deployment and service YAML:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-service
spec:
  selector:
    app: uptime-kuma
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
      nodePort: 30000
```
- Ran kubectl apply -f uptime-kuma-deployment.yml
- Got an “MK_UNIMPLEMENTED” error when running minikube service uptime-kuma-service to get the URL. Googling showed that minikube service is unavailable when using the QEMU driver; saw an option to use Docker instead.
- Ran minikube delete and redeployed minikube with minikube start --driver=docker
- Re-ran kubectl apply
- The minikube service command gave the URL http://192.168.49.2:30000, which brought up uptime-kuma in the browser
Adding Volumes for Persistent Storage
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma
          ports:
            - containerPort: 3001
          volumeMounts: # Volume must be created along with volumeMount (see next below)
            - name: uptime-kuma-data
              mountPath: /app/data # Path within the container, like the right side of a docker bind mount -- /tmp/data:/app/data
      volumes: # Defines a volume that uses an existing PVC (defined below)
        - name: uptime-kuma-data
          persistentVolumeClaim:
            claimName: uptime-kuma-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-service
spec:
  selector:
    app: uptime-kuma
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
      nodePort: 30000
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: uptime-kuma-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  storageClassName: standard
  hostPath:
    path: /tmp/uptime/data # Path on the host
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
  volumeName: uptime-kuma-pv
```
- Was confused about the need for storageClassName – tried to comment it out and redeploy:
  - Pod was stuck in Pending status
  - kubectl describe pod showed:

    ```
    Events:
      Type     Reason            Age    From               Message
      ----     ------            ----   ----               -------
      Warning  FailedScheduling  4m10s  default-scheduler  0/1 nodes are available: persistentvolumeclaim "uptime-kuma-pvc" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
      Warning  FailedScheduling  4m9s   default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
    ```

  - kubectl describe pvc uptime-kuma-pvc showed:

    ```
    Events:
      Type     Reason          Age                  From                         Message
      ----     ------          ----                 ----                         -------
      Warning  VolumeMismatch  7s (x26 over 6m20s)  persistentvolume-controller  Cannot bind to requested volume "uptime-kuma-pv": storageClassName does not match
    ```

  - kubectl describe pv uptime-kuma-pv shows that uptime-kuma-pv has no storage class assigned:

    ```
    Name:          uptime-kuma-pv
    Labels:        <none>
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pv-protection]
    StorageClass:
    Status:        Available
    ```
- The docs state the following: "You can create a PersistentVolumeClaim without specifying a storageClassName for the new PVC, and you can do so even when no default StorageClass exists in your cluster. In this case, the new PVC creates as you defined it, and the storageClassName of that PVC remains unset until a default becomes available."
- Checking what my default storage class is with kubectl get storageclass (I didn’t set this… must be a minikube default):

  ```
  NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
  standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  2d
  ```

- It appears that a PVC will be set to the default storage class if one is not specified, but the same is not true for a PV – the docs also state: "A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class." (A minimal sketch of that classless combination is at the end of this section.)
- After understanding this, I ran kubectl apply -f uptime-kuma-deployment.yml with the above configuration.
- Ran minikube service uptime-kuma-service to pull it up in the browser
- Created an uptime-kuma user and created a monitor for google.com
- Ran kubectl delete deployment uptime-kuma to delete all pods, then re-ran kubectl apply -f uptime-kuma-deployment, then opened it in the browser again with minikube service uptime-kuma-service
- My user and monitor still exist after recreating the deployment, so data was successfully persisted with a PV and PVC! 🎉🎉🎉
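To make that "no class" rule concrete, here is a minimal sketch (my own illustration, not part of the deployment above) of the classless variant: the PV gets no storageClassName at all, and the PVC explicitly requests no class by setting storageClassName to an empty string, which also stops the default class from being applied to it.

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: uptime-kuma-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  # no storageClassName here, so this PV has "no class"
  hostPath:
    path: /tmp/uptime/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""   # empty string = request "no class"; the default class is not applied
  volumeName: uptime-kuma-pv
```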
Using an SMB Share as Persistent Storage
- Config files are in the folder remote-storage-smb-deployment
- Created a share on TrueNAS called ‘kubernetes-testing’
- Installed the SMB CSI driver – https://github.com/kubernetes-csi/csi-driver-smb/tree/master
- Generated a secret to store the share credentials with kubectl create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD"
  - Note: the generic flag is used to denote that the secret being created is an ‘Opaque’ secret (arbitrary, user-defined data). There are other secret types for various specific use cases such as Docker registry credentials or TLS certificates. https://kubernetes.io/docs/concepts/configuration/secret/
- Used this video from the YouTube channel Jim’s Garage for reference for the PV and PVC configs: https://www.youtube.com/watch?v=3S5oeB2qhyg&t=318s
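The PV and PVC manifests themselves aren't reproduced in these notes (they're in the remote-storage-smb-deployment folder). As a rough sketch only – the names, sizes, and comments below are my assumptions, modelled on the csi-driver-smb static-provisioning example rather than my exact config – a statically provisioned SMB volume looks roughly like this:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany               # an SMB share can be mounted by multiple pods
  persistentVolumeReclaimPolicy: Retain
  storageClassName: smb           # only used here to pair this PV with the PVC below
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: truenas/kubernetes-testing   # arbitrary, but must be unique within the cluster
    volumeAttributes:
      source: //10.0.30.11/kubernetes-testing
    nodeStageSecretRef:
      name: smbcreds              # the secret created above
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: smb
  volumeName: pv-smb              # bind directly to the static PV above
```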
- Got a permission denied error on the pod – it appears to happen when the SMB share is mounted, much like a bad fstab mount would fail
- Ran the following to verify the contents of the secret: kubectl get secret smbcreds -o jsonpath='{.data}', which gave the following output:

  ```
  {"password":"MXFhQFdTM2Vk","username":"a3VzZXI="}
  ```

- These values are base64 encoded, so I ran the password through an online decoder and found that it appears to have been cut off after 9 characters.
- Found out the reason it was cut off: when using double quotes, the shell will interpret special characters. Since the password was a keyboard waterfall – 1qa@WS3ed$RF5tg – the shell interpreted everything after the dollar sign as a variable (which was obviously empty).
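One way to sidestep shell quoting entirely (just a sketch of an alternative – not what I actually did) is to define the Secret as a manifest and let the API server do the base64 encoding via stringData:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
type: Opaque               # the same type that `kubectl create secret generic` produces
stringData:                # plain-text values; stored base64-encoded under .data on creation
  username: USERNAME
  password: 1qa@WS3ed$RF5tg   # no shell involved, so the $ is never expanded
```

The trade-off is that the plaintext credentials now live in a file, so it has to stay out of version control.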
- After fixing this, everything appears to deploy without errors, but connecting with minikube service gives ‘ERR_CONNECTION_REFUSED’
- Checking pod logs – it appears the app can’t connect to SQLite because the database is locked
- Switched to an nginx image rather than uptime-kuma – you wouldn’t want a SQLite DB on an SMB share anyway
- Gave a mountPath of /usr/share/nginx/html
- I thought that if nothing existed in this volume nginx would serve a default splash page – it didn’t, and I got 403 Forbidden
- Added a hello-world HTML file to the share and then ran kubectl rollout restart deployment nginx to restart the pod(s)
- It worked! 🎉
- Scaled pods up from 1 to 2 and re-ran kubectl apply – still works! 🎉
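For reference, a minimal sketch of an nginx Deployment serving files from the SMB-backed claim – the claim name pvc-smb carries over from the sketch earlier and is an assumption, not necessarily what's in remote-storage-smb-deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2                        # ReadWriteMany on the SMB share allows multiple pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: web-content
              mountPath: /usr/share/nginx/html   # nginx serves whatever is on the share
      volumes:
        - name: web-content
          persistentVolumeClaim:
            claimName: pvc-smb                   # assumed PVC name
```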
- Data is thrown into the share like data soup (no sub-directories based on pod, PVC, etc.)
- Added this storage class config based on the docs:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  # if csi.storage.k8s.io/provisioner-secret is provided, will create a sub directory
  # with PV name under source
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete # available values: Delete, Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
```
- Also removed the PersistentVolume config – with the StorageClass in place volumes are dynamically provisioned, and sub-directories are created on the share named after the dynamically provisioned volumes (ex. pvc-646faec2-c9cc-4a75-9b3a-4b6c9742f339)
- I wanted a sub-directory created based on the deployment name rather than the difficult-to-remember PVC names.
- I found two ways to do this:
  - Add the subDir parameter to the StorageClass config. This will create a single sub-directory that every PVC will use rather than a unique folder based on the PVC name.
  - Add subPath in the container template config under volumeMounts – this will put the contents of that volume underneath that sub-folder.
  - Both of these methods can be used at the same time (a rough sketch combining them follows below).
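A rough sketch of the two options together – the subDir value here is a made-up example, and subDir as a StorageClass parameter comes from the csi-driver-smb driver-parameters doc, so treat the details as assumptions rather than my exact config:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  subDir: web                         # every PVC of this class should share this one sub-directory on the share
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```

On the workload side, the matching change is adding something like subPath: nginx next to mountPath in the container's volumeMounts entry, so that deployment's files land in their own sub-folder inside the volume.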
- Added both of these configurations, then attempted to add an additional Heimdall deployment – same issue with database locking. Decided to try to deploy it as a StatefulSet:
  - When deleting Heimdall’s PVC, it deleted the entire subDir defined in the storage path
  - Found out the reason for this is that the default of the onDelete parameter is to delete the directories when the volume is deleted. Because the StorageClass’s reclaim policy was Delete, the volume was deleted when I deleted the PVC, and so the CSI driver deleted the directories on the share as well (a sketch of a “retain” variant follows below).
  - https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/driver-parameters.md
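If I had wanted the data to survive deleting the PVC, the relevant knobs appear to be the StorageClass reclaimPolicy plus the driver’s onDelete parameter. A sketch of a “keep everything” variant – the onDelete values come from the driver-parameters doc linked above, so treat this as an assumption rather than something I have tested:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb-retain
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  onDelete: retain          # leave the sub-directory on the share when the volume object is deleted
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Retain       # keep the PV itself when the PVC is deleted
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```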