Bill of Materials for the Raspberry Pi Cluster
- (1) Raspberry Pi 5 (8 GB)
- (1) SanDisk 128 GB
- (1) USB-C, right-angle, “HotNow” brand cable
- (1) 1-foot CAT6 flat Ethernet cable
- (1) Ubiquiti Flex Mini switch
- (1) Anker Prime Charger, 200W, 6-port GaN charging station
- (1) Raspberry Pi 5 case
That’s everything. I power the switch and the device from the same charging station.
Better Power
I’ve upgraded the charging station since the last blog post.
This time, I’m providing better power to the device, in hopes we can boot from USB.
When I booted the previous cluster, which didn’t have proper power, I saw this error:
This power supply is not capable of supplying 5A; power to peripherals
will be restricted
See man:pemmican-cli(1) for information on suppressing this warning,
or https://rptl.io/rpi5-power-supply-info for more information on the
Raspberry Pi 5 power supply
I came across this command to verify the power:
vcgencmd get_throttled
throttled=0x0
This means there is no throttling.
throttled=0x50000
This (or any other non-zero value) means the device is not getting proper power.
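The value is a bit field, so it can be decoded rather than eyeballed. A minimal sketch, assuming the bit meanings from the Raspberry Pi documentation; `decode_throttled` is a hypothetical helper name, and the sample value fed to it at the end is hypothetical too (on a Pi you would pipe in the real `vcgencmd` output instead):

```shell
#!/bin/sh
# Decode the bit flags reported by `vcgencmd get_throttled`.
# Low bits are current conditions; bits 16+ are "has occurred since boot".
decode_throttled() {
  value=$(( ${1#throttled=} ))
  [ $(( value & 0x1 ))     -ne 0 ] && echo "under-voltage detected"
  [ $(( value & 0x2 ))     -ne 0 ] && echo "ARM frequency capped"
  [ $(( value & 0x4 ))     -ne 0 ] && echo "currently throttled"
  [ $(( value & 0x10000 )) -ne 0 ] && echo "under-voltage has occurred"
  [ $(( value & 0x20000 )) -ne 0 ] && echo "ARM frequency capping has occurred"
  [ $(( value & 0x40000 )) -ne 0 ] && echo "throttling has occurred"
  [ "$value" -eq 0 ] && echo "no throttling"
  return 0
}

# On the Pi itself you would run:
#   decode_throttled "$(vcgencmd get_throttled)"
decode_throttled "throttled=0x50000"
```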
Interestingly, the previous cluster and the current cluster both show the error at login.
Both clusters also return throttled=0x0
from the vcgencmd
command.
The important difference is that with the new power, I can boot via USB.
I would still like to be in a spot where I don’t see that warning. I will continue to report on my progress with that.
Installing Ubuntu 24.10 on Raspberry Pi Devices
I install Ubuntu Server 24.10 using Raspberry Pi Imager.
Edit the settings:
- Configure the hostname
- Set country
- Add user (dashaun)
- Enable SSH server on boot
- Add public key for SSH auth
Update to the latest
# On each Raspberry Pi
sudo apt update
sudo apt upgrade -y
sudo shutdown -r now
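With more than one Pi, those same three commands can be driven from the laptop in a loop. A minimal dry-run sketch; the host list is a placeholder for your own addresses, and it only prints the ssh invocations (drop the `echo` to actually run them):

```shell
#!/bin/sh
# Sketch: run the update sequence on every node over SSH.
# The host list below is a placeholder -- substitute your own Pi addresses.
update_cmd='sudo apt update && sudo apt upgrade -y && sudo shutdown -r now'

for host in 10.0.0.30 10.0.0.31 10.0.0.32; do
  # Remove the `echo` to actually execute the command on each node.
  echo ssh "dashaun@${host}" "$update_cmd"
done
```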
Smooth sailing from here.
Setting Up K3s with K3sup on Raspberry Pi
This runs from my laptop, not from the Raspberry Pi device:
k3sup install --ip 10.0.0.30 --user dashaun --k3s-extra-args '--disable traefik' --merge --local-path ~/.kube/config --context pikorifi
The --user is the same user that I configured with a public key for SSH via Raspberry Pi Imager above.
I add --k3s-extra-args '--disable traefik'
because Korifi provides Contour for ingress.
I --merge
into my local ~/.kube/config
so I can use kubectl/kubectx.
I named this cluster context pikorifi,
but you can choose whatever you like.
After a few seconds, you have a single-node cluster of k3s.
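For later, when growing this into an actual multi-node cluster, k3sup can also join additional Pis as agents against the same server. A sketch, assuming a hypothetical second node at 10.0.0.31 and the same dashaun user:

```shell
# Hypothetical: join a second Pi (10.0.0.31) to the server installed above.
k3sup join --ip 10.0.0.31 --server-ip 10.0.0.30 --user dashaun
```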
Let’s take a look at our cluster now!
kubectx pikorifi
Switched to context “pikorifi”.
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pikorifi00 Ready control-plane,master 33m v1.31.4+k3s1 10.0.0.30 <none> Ubuntu 24.10 6.11.0-1006-raspi containerd://1.7.23-k3s2
single-node k3s Kubernetes cluster
Installing and Configuring Korifi on Kubernetes
I configured my properties with a .envrc file for direnv:
export ROOT_NAMESPACE="cf"
export KORIFI_NAMESPACE="korifi"
export ADMIN_USERNAME="cf-admin"
export BASE_DOMAIN="pikorifi00.korifi.cc"
export GATEWAY_CLASS_NAME="contour"
export DOCKERHUB_USERNAME="dashaun"
export DOCKERHUB_PASSWORD="*******"
Install Cert-Manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
New release! 2025-01-16
Install Kpack
kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/v0.16.1/release-0.16.1.yaml
New release! 2025-01-16
Make Kpack run on ARM64
Kpack doesn’t deliver ARM64 images yet. My modernized binfmt-daemonset.yaml
below makes it work!
I changed from busybox:stable
to arm64v8/alpine
for the smaller image.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: binfmt
  labels:
    app: binfmt-setup
spec:
  selector:
    matchLabels:
      name: binfmt
  template:
    metadata:
      labels:
        name: binfmt
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      initContainers:
        - name: binfmt
          image: tonistiigi/binfmt
          args: ["--install", "all"]
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: arm64v8/alpine
          command: ["sleep","infinity"]
          resources:
            requests:
              memory: "1Mi"
              cpu: "1m"
kubectl apply -f ./binfmt-daemonset.yaml
It took about a minute for
kpack
to reach Running status
after applying.
Install Contour
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour/release-1.30/examples/render/contour-gateway-provisioner.yaml
kubectl apply -f - <<EOF
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: $GATEWAY_CLASS_NAME
spec:
  controllerName: projectcontour.io/gateway-controller
EOF
Dynamic provisioning
Metrics Server
The Kubernetes Metrics Server
is installed with k3s,
so we are moving along.
Install Service Bindings Controller
kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v1.0.0/servicebinding-runtime-v1.0.0.yaml
Create namespaces:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $ROOT_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $KORIFI_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF
Add container registry credentials:
kubectl --namespace "$ROOT_NAMESPACE" create secret docker-registry image-registry-credentials \
--docker-username="$DOCKERHUB_USERNAME" \
--docker-password="$DOCKERHUB_PASSWORD"
I’m using DockerHub.
I’m still cool with self-signed certificates for now.
Install Korifi with Helm
helm install korifi https://github.com/cloudfoundry/korifi/releases/download/v0.14.0/korifi-0.14.0.tgz \
--namespace="$KORIFI_NAMESPACE" \
--set=generateIngressCertificates=true \
--set=rootNamespace="$ROOT_NAMESPACE" \
--set=adminUserName="$ADMIN_USERNAME" \
--set=api.apiServer.url="api.$BASE_DOMAIN" \
--set=defaultAppDomainName="apps.$BASE_DOMAIN" \
--set=containerRepositoryPrefix=index.docker.io/dashaun/ \
--set=kpackImageBuilder.builderRepository=index.docker.io/dashaun/kpack-builder \
--set=networking.gatewayClass=$GATEWAY_CLASS_NAME \
--wait
These are my values; update
containerRepositoryPrefix
and kpackImageBuilder.builderRepository
with your own.
Post-Install Config
I’m running this locally, not publicly.
kubectl get service envoy-korifi -n korifi-gateway -ojsonpath='{.status.loadBalancer.ingress[0]}'
{"ip":"10.0.0.30","ipMode":"VIP"}
I added that IP address to my /etc/hosts
for api.pikorifi00.korifi.cc:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
10.0.0.30 api.pikorifi00.korifi.cc
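Any app route will need the same treatment later (the cf push output further down shows routes under apps.pikorifi00.korifi.cc too), so a tiny idempotent helper can save re-editing the file by hand. A sketch; `add_host_entry` is a hypothetical name, and the demo writes to a temp file rather than the real /etc/hosts:

```shell
#!/bin/sh
# Hypothetical helper: append a hosts entry only if it is not already present.
add_host_entry() {
  file="$1"; entry="$2"
  grep -qxF "$entry" "$file" || echo "$entry" >> "$file"
}

# Demo against a temp file; for real use, the file would be /etc/hosts
# (run with sudo, or tee -a instead of >>).
tmp=$(mktemp)
add_host_entry "$tmp" "10.0.0.30 api.pikorifi00.korifi.cc"
add_host_entry "$tmp" "10.0.0.30 api.pikorifi00.korifi.cc"   # second call is a no-op
cat "$tmp"
```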
cf api https://api.$BASE_DOMAIN --skip-ssl-validation
Setting API endpoint to https://api.pikorifi00.korifi.cc...
OK
API endpoint: https://api.pikorifi00.korifi.cc
API version: 3.117.0+cf-k8s
Not logged in. Use 'cf login' or 'cf login --sso' to log in.
So far, so good!
cf login
I’m presented with a list of all of the users in my ~/.kube/config,
and I choose the entry in the list associated with $ADMIN_USERNAME.
API endpoint: https://api.pikorifi00.korifi.cc
1. korifi-v0-12
2. k3d-thunder
3. bootiful
4. concourse
5. cf-admin
6. clusterhat00
7. coffeesoftware
8. javagruntjr
9. kind-bcn
10. korifi-sandbox
11. tanzuplatform
12. mgmt
13. nucs
14. chaos-engr
15. redis-enterprise
Choose your Kubernetes authentication info (enter to skip): 5
Authenticating...
OK
API endpoint: https://api.pikorifi00.korifi.cc
API version: 3.117.0+cf-k8s
user: cf-admin
No org or space targeted, use 'cf target -o ORG -s SPACE'
This is working!
cf create-org pikorifi-cc
cf create-space -o pikorifi-cc production
cf target -o pikorifi-cc -s production
API endpoint: https://api.pikorifi00.korifi.cc
API version: 3.117.0+cf-k8s
user: cf-admin
org: pikorifi-cc
space: production
Generate a UI with Vaadin
I created a simple website using Vaadin.
git clone https://github.com/pikorifi-cc/cc.pikorifi.www
cd cc.pikorifi.www
./mvnw spring-boot:build-image -Pproduction \
-Dspring-boot.build-image.imagePlatform=linux/arm64 \
-Dspring-boot.build-image.imageName=dashaun/cc.pikorifi.www:latest
docker push dashaun/cc.pikorifi.www:latest
Build the ARM64 image locally and push it to the registry
Production
cf push www --docker-image dashaun/cc.pikorifi.www:latest
Pushing app www to org pikorifi-cc / space production as cf-admin...
Staging app and tracing logs...
Waiting for app www to start...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
name: www
requested state: started
routes: www.pikorifi.cc, www.apps.pikorifi00.korifi.cc
last uploaded: Sat 25 Jan 20:55:48 CST 2025
stack:
docker image: dashaun/cc.pikorifi.www:latest
type: web
sidecars:
instances: 1/1
memory usage: 1024M
state since cpu memory disk logging cpu entitlement details
#0 running 2025-01-26T02:56:07Z 0.0% 0B of 0B 0B of 0B 0B/s of 0B/s
Check it out for yourself!
Conclusion
The kpack
build was giving me issues on this single-node install; I didn’t investigate further.
Doing cf push
with an OCI image is orders of magnitude more enjoyable than dealing with YAML.
This is a $50 device, running www.pikorifi.cc
with a Vaadin Spring Boot UI.
I’m really excited to see how far I can push things now. I want to see how many apps/domains I can host on a cluster of Raspberry Pi or even a single Raspberry Pi, with Cloud Foundry Korifi!
As always, your feedback is welcomed.
Links
- Multi-Node Korifi on Raspberry Pi from scratch (YouTube)
- Multi-arch Spring Boot OCI images from anywhere with Paketo
- https://github.com/pikorifi-cc/cc.pikorifi.www
- Korifi Stack of the Week S2E1
- Pragmatic approach to architecture (and development)