Overview of Korifi Deployment on Raspberry Pi
A simple question the week before the holiday turned into a wonderful ride. I now have Cloud Foundry Korifi running on at least one 4-node k3s cluster of Raspberry Pi 5 (8GB) devices running Ubuntu 24.10.
Here are the details, both technical and otherwise.
Bill of Materials for the Raspberry Pi Cluster
I have about 100 Raspberry Pi devices at home, but honestly, I’ve lost track. Buying a few more couldn’t hurt. During a family trip to Dallas, I stopped at Micro Center with my son. I purchased four Raspberry Pi 5 (8GB) boards, a case, some USB-C cables, and some Cat-6 Ethernet cable. I had brought a switch with me, so I had enough to get started. When I got home I made some changes, but here is the entire BOM for this project.
- (4) Raspberry Pi 5 (8GB)
- (4) Micro Center 128GB microSD cards
- (4) USB-C, right-angle, “HotNow” brand cables
- (5) 1-foot Cat-6 flat Ethernet cables
- (1) Ubiquiti Flex Mini switch
- (1) Anker 60W, 6-port charging station
- (1) 4-level acrylic Raspberry Pi rack with fans
That’s everything. I power the switch and the four devices from the same charging station. That IS NOT the right way to power these Raspberry Pi 5 devices: it DOES NOT provide enough power to operate USB drives properly. I’m not connecting anything else to these devices other than the fans, and I didn’t need USB drives for this project. I’m booting and running from the SD cards.
Installing Ubuntu 24.10 on Raspberry Pi Devices
For each device, I first installed Ubuntu Server 24.10 using Raspberry Pi Imager. It’s such a cool tool: it lets you edit the configuration for each device before the image is written. For each of the four machines I used these settings:
- Configure the hostname
- Set the country
- Add a user (dashaun)
- Enable the SSH server on boot
- Add a public key for SSH auth
After creating the first image and booting each of the devices, I update and patch everything.
# On each Raspberry Pi
sudo apt update
sudo apt upgrade -y
sudo shutdown -r now
That’s the most time-consuming part of this entire project.
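Since it’s the same three commands on four machines, a small loop from the laptop saves some typing. A minimal sketch, assuming the user and IP addresses used throughout this post (-t keeps the session interactive for the sudo password):
for ip in 10.0.0.20 10.0.0.21 10.0.0.22 10.0.0.23; do
  ssh -t dashaun@$ip 'sudo apt update && sudo apt upgrade -y && sudo shutdown -r now'
done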
In the past, I needed to update /boot/firmware/cmdline.txt by adding cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1. This time, I checked whether that step is still necessary.
cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 0 313 1
cpu 0 313 1
cpuacct 0 313 1
blkio 0 313 1
memory 0 313 1
devices 0 313 1
freezer 0 313 1
net_cls 0 313 1
perf_event 0 313 1
net_prio 0 313 1
hugetlb 0 313 1
pids 0 313 1
rdma 0 313 1
misc 0 313 1
Everything appears to be enabled already. Thanks Ubuntu.
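For completeness, here’s a hedged sketch of the old workaround, in case your image reports those controllers with enabled=0 (cmdline.txt is a single line, so the flags get appended to it):
# Only needed if the cgroup controllers above are NOT enabled
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
sudo shutdown -r now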
Now I have 4 computers, patched and ready for Kubernetes:
- 10.0.0.20
- 10.0.0.21
- 10.0.0.22
- 10.0.0.23
You will probably get different IP addresses; just substitute yours below.
Setting Up K3s with K3sup on Raspberry Pi
I deploy Kubernetes about once a week. My default flavor is k3s, and I’ve really enjoyed using k3sup for years now. Watch how easy it is. From my laptop, not from one of the Raspberry Pi devices:
k3sup install --ip 10.0.0.20 --user dashaun --k3s-extra-args '--disable traefik' --merge --local-path ~/.kube/config --context coffeesoftware
The --user is the same user that I configured with a public key for SSH via Raspberry Pi Imager above.
I add --k3s-extra-args '--disable traefik' because Korifi provides Contour for ingress. I --merge into my local ~/.kube/config so I can use kubectl/kubectx. I named this cluster context coffeesoftware, but you can choose whatever you like.
After a few seconds, you have a single-node k3s cluster.
Now add three more nodes.
k3sup join --ip 10.0.0.21 --server-ip 10.0.0.20 --user dashaun
k3sup join --ip 10.0.0.22 --server-ip 10.0.0.20 --user dashaun
k3sup join --ip 10.0.0.23 --server-ip 10.0.0.20 --user dashaun
It’s so simple!
Let’s take a look at our cluster now!
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
coffeesoft0 Ready control-plane,master 1d6h v1.31.4+k3s1 10.0.0.20 <none> Ubuntu 24.10 6.11.0-1006-raspi containerd://1.7.23-k3s2
coffeesoft1 Ready <none> 1d6h v1.31.4+k3s1 10.0.0.21 <none> Ubuntu 24.10 6.11.0-1006-raspi containerd://1.7.23-k3s2
coffeesoft2 Ready <none> 1d6h v1.31.4+k3s1 10.0.0.22 <none> Ubuntu 24.10 6.11.0-1006-raspi containerd://1.7.23-k3s2
coffeesoft3 Ready <none> 1d6h v1.31.4+k3s1 10.0.0.23 <none> Ubuntu 24.10 6.11.0-1006-raspi containerd://1.7.23-k3s2
4-node Kubernetes cluster
Installing and Configuring Korifi on Kubernetes
The Korifi install documentation is great. I followed it, but I’m documenting here exactly the steps I took.
I configured my properties with a .envrc file for direnv:
export ROOT_NAMESPACE="cf"
export KORIFI_NAMESPACE="korifi"
export ADMIN_USERNAME="cf-admin"
export BASE_DOMAIN="korifi.cc"
export GATEWAY_CLASS_NAME="contour"
export DOCKERHUB_USERNAME="dashaun"
export DOCKERHUB_PASSWORD="*******"
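If you use direnv, remember to allow the new file so these variables are actually exported into your shell; the commands below assume they are:
direnv allow .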
Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
Install kpack:
kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/v0.15.0/release-0.15.0.yaml
This is where we run into problems! kpack doesn’t deliver ARM64 images yet, so its pods land in CrashLoopBackOff. This is where I always got stuck before. This time is different. I’ve been running Korifi on Apple Silicon for at least a year now; macOS comes with AMD64 emulation enabled by default, so it always just worked there. Here, I needed to install emulation on each node of the cluster to make this work. Of all the workarounds I’ve done in the past, this is the one I like the most.
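If you want to watch the failure before fixing it, the kpack pods tell the story (kpack installs into the kpack namespace):
kubectl get pods -n kpack --watch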
I deploy the qemu emulation to each node of the cluster, using a DaemonSet with the tonistiigi/binfmt image. I got this approach from here. My modernized binfmt-daemonset.yaml is below.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: binfmt
  labels:
    app: binfmt-setup
spec:
  selector:
    matchLabels:
      name: binfmt
  template:
    metadata:
      labels:
        name: binfmt
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      initContainers:
        - name: binfmt
          image: tonistiigi/binfmt
          args: ["--install", "all"]
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: busybox:stable
          command: ["sleep", "infinity"]
          resources:
            requests:
              memory: "1Mi"
              cpu: "1m"
kubectl apply -f ./binfmt-daemonset.yaml
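Before celebrating, I like to confirm the DaemonSet rolled out everywhere and that the qemu handlers actually got registered. A minimal check, reusing the SSH access from earlier (binfmt_misc is where the handlers show up on each node):
kubectl rollout status daemonset/binfmt
# On any node, the list should now include qemu-x86_64
ssh dashaun@10.0.0.20 'ls /proc/sys/fs/binfmt_misc/'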
That’s it! That’s the trick that gets Korifi working on Raspberry Pi. The kpack pods now get to Running and we are smiling.
I decided to use dynamic provisioning for Contour:
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour/release-1.30/examples/render/contour-gateway-provisioner.yaml
kubectl apply -f - <<EOF
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: $GATEWAY_CLASS_NAME
spec:
  controllerName: projectcontour.io/gateway-controller
EOF
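A quick way to confirm the provisioner picked up the GatewayClass before moving on (Accepted is the standard Gateway API condition):
kubectl get gatewayclass "$GATEWAY_CLASS_NAME"
kubectl wait --for=condition=Accepted "gatewayclass/$GATEWAY_CLASS_NAME" --timeout=120s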
The Kubernetes Metrics Server got installed with k3s, so we are moving along.
Deploy the Service Bindings Controller:
kubectl apply -f https://github.com/servicebinding/runtime/releases/download/v1.0.0/servicebinding-runtime-v1.0.0.yaml
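A quick sanity check; I believe this manifest installs into the servicebinding-system namespace, so adjust if yours differs:
kubectl get pods -n servicebinding-system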
Create namespaces:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $ROOT_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $KORIFI_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF
Add container registry credentials:
kubectl --namespace "$ROOT_NAMESPACE" create secret docker-registry image-registry-credentials \
--docker-username="$DOCKERHUB_USERNAME" \
--docker-password="$DOCKERHUB_PASSWORD"
I’m using Docker Hub. I’m cool with self-signed certificates for now.
Install Korifi via the Helm chart. These are my values; update containerRepositoryPrefix and kpackImageBuilder.builderRepository with your own.
helm install korifi https://github.com/cloudfoundry/korifi/releases/download/v0.13.0/korifi-0.13.0.tgz \
--namespace="$KORIFI_NAMESPACE" \
--set=generateIngressCertificates=true \
--set=rootNamespace="$ROOT_NAMESPACE" \
--set=adminUserName="$ADMIN_USERNAME" \
--set=api.apiServer.url="api.$BASE_DOMAIN" \
--set=defaultAppDomainName="apps.$BASE_DOMAIN" \
--set=containerRepositoryPrefix=index.docker.io/dashaun/ \
--set=kpackImageBuilder.builderRepository=index.docker.io/dashaun/kpack-builder \
--set=networking.gatewayClass=$GATEWAY_CLASS_NAME \
--wait
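Once the chart finishes (--wait blocks until the workloads are ready), a quick look at the Korifi namespace should show everything Running:
kubectl get pods -n "$KORIFI_NAMESPACE"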
Post-Install Config
I’m running locally, not publicly.
kubectl get service envoy-korifi -n korifi-gateway -ojsonpath='{.status.loadBalancer.ingress[0]}'
{"ip":"10.0.0.21","ipMode":"VIP"}
I added that IP address to my /etc/hosts for api.korifi.cc:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
10.0.0.21 api.korifi.cc
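Note that app routes resolve under apps.$BASE_DOMAIN and /etc/hosts doesn’t do wildcards, so if you want to curl an app from the laptop, each route needs its own entry pointing at the same envoy IP. A hypothetical example for the demo app pushed below:
10.0.0.21 demo.apps.korifi.cc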
cf api https://api.$BASE_DOMAIN --skip-ssl-validation
Setting API endpoint to https://api.korifi.cc...
OK
API endpoint: https://api.korifi.cc
API version: 3.117.0+cf-k8s
Not logged in. Use 'cf login' or 'cf login --sso' to log in.
So far, so good!
cf login
I’m presented with a list of all of the users in my ~/.kube/config, and I choose the entry associated with $ADMIN_USERNAME:
API endpoint: https://api.korifi.cc
1. korifi-v0-12
2. k3d-thunder
3. bootiful
4. concourse
5. cf-admin
6. clusterhat00
7. coffeesoftware
8. javagruntjr
9. kind-bcn
10. korifi-sandbox
11. tanzuplatform
12. mgmt
13. nucs
14. chaos-engr
15. redis-enterprise
Choose your Kubernetes authentication info (enter to skip): 5
Authenticating...
OK
This is working!
cf create-org org1
cf create-space -o org1 space1
cf target -o org1 -s space1
API endpoint: https://api.korifi.cc
API version: 3.117.0+cf-k8s
user: cf-admin
org: org1
space: space1
Validating everything with cf push
I created a simple project to test this out.
git clone https://github.com/dashaun/pikorifi-demo.git
cd pikorifi-demo
cf push demo -t 120
I needed to increase the startup timeout because of the kpack emulation and the AMD64 image it generates.
It took a few minutes to complete.
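If you’re curious what’s happening underneath while the push runs, staging is a kpack Build in the Korifi-generated space namespace. Something like this lets you follow along (builds.kpack.io is the kpack CRD; -A saves hunting for the namespace):
kubectl get builds.kpack.io -A
cf logs demo --recent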
My output for comparison:
Pushing app demo to org org1 / space space1 as cf-admin...
Packaging files to upload...
Uploading files...
15.51 KiB / 15.51 KiB [====================================================================================================================================================================] 100.00% 11s
Waiting for API to complete processing files...
Staging app and tracing logs...
Build reason(s): CONFIG
CONFIG:
env:
- name: VCAP_APPLICATION
valueFrom:
secretKeyRef:
key: VCAP_APPLICATION
name: f87e070b-6e1b-4049-80ab-d358ea8f27dc-vcap-application
- name: VCAP_SERVICES
valueFrom:
secretKeyRef:
key: VCAP_SERVICES
name: f87e070b-6e1b-4049-80ab-d358ea8f27dc-vcap-services
resources: {}
source:
registry:
- image: index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-packages@sha256:8d766ba1f9399952808ff11e5917617ce326bde1046499f670cad9f02ddfb89a
+ image: index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-packages@sha256:ca06bf54c3774cd8db9cf81934c62c3bc0c9be8db666b478634339bcb55ec2ce
imagePullSecrets:
- name: image-registry-credentials
Loading registry credentials from service account secrets
Loading secret for "https://index.docker.io/v1/" from secret "image-registry-credentials" at location "/var/build-secrets/image-registry-credentials"
Loading cluster credential helpers
Pulling index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-packages@sha256:ca06bf54c3774cd8db9cf81934c62c3bc0c9be8db666b478634339bcb55ec2ce...
Successfully pulled index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-packages@sha256:ca06bf54c3774cd8db9cf81934c62c3bc0c9be8db666b478634339bcb55ec2ce in path "/workspace"
Timer: Analyzer started at 2025-01-14T04:18:37Z
Restoring data for SBOM from previous image
Timer: Analyzer ran for 3.119875517s and ended at 2025-01-14T04:18:40Z
Paketo Buildpack for CA Certificates 3.9.0
https://github.com/paketo-buildpacks/ca-certificates
Build Configuration:
$BP_EMBED_CERTS false Embed certificates into the image
$BP_ENABLE_RUNTIME_CERT_BINDING true Deprecated: Enable/disable certificate helper layer to add certs at runtime
$BP_RUNTIME_CERT_BINDING_DISABLED false Disable certificate helper layer to add certs at runtime
Launch Helper: Reusing cached layer
Paketo Buildpack for BellSoft Liberica 11.0.1
https://github.com/paketo-buildpacks/bellsoft-liberica
Build Configuration:
$BP_JVM_JLINK_ARGS --no-man-pages --no-header-files --strip-debug --compress=1 configure custom link arguments (--output must be omitted)
$BP_JVM_JLINK_ENABLED false enables running jlink tool to generate custom JRE
$BP_JVM_TYPE JRE the JVM type - JDK or JRE
$BP_JVM_VERSION 21 the Java version
Launch Configuration:
$BPL_DEBUG_ENABLED false enables Java remote debugging support
$BPL_DEBUG_PORT 8000 configure the remote debugging port
$BPL_DEBUG_SUSPEND false configure whether to suspend execution until a debugger has attached
$BPL_HEAP_DUMP_PATH write heap dumps on error to this path
$BPL_JAVA_NMT_ENABLED true enables Java Native Memory Tracking (NMT)
$BPL_JAVA_NMT_LEVEL summary configure level of NMT, summary or detail
$BPL_JFR_ARGS configure custom Java Flight Recording (JFR) arguments
$BPL_JFR_ENABLED false enables Java Flight Recording (JFR)
$BPL_JMX_ENABLED false enables Java Management Extensions (JMX)
$BPL_JMX_PORT 5000 configure the JMX port
$BPL_JVM_HEAD_ROOM 0 the headroom in memory calculation
$BPL_JVM_LOADED_CLASS_COUNT 35% of classes the number of loaded classes in memory calculation
$BPL_JVM_THREAD_COUNT 250 the number of threads in memory calculation
$JAVA_TOOL_OPTIONS the JVM launch flags
Using buildpack default Java version 21
BellSoft Liberica JDK 21.0.5: Contributing to layer
Downloading from https://github.com/bell-sw/Liberica/releases/download/21.0.5+11/bellsoft-jdk21.0.5+11-linux-amd64.tar.gz
Verifying checksum
Expanding to /layers/paketo-buildpacks_bellsoft-liberica/jdk
Adding 146 container CA certificates to JVM truststore
Writing env.build/JAVA_HOME.override
Writing env.build/JDK_HOME.override
BellSoft Liberica JRE 21.0.5: Reusing cached layer
Launch Helper: Reusing cached layer
Java Security Properties: Reusing cached layer
Paketo Buildpack for Syft 2.6.1
https://github.com/paketo-buildpacks/syft
Downloading from https://github.com/anchore/syft/releases/download/v1.18.1/syft_1.18.1_linux_amd64.tar.gz
Verifying checksum
Writing env.build/SYFT_CHECK_FOR_APP_UPDATE.default
Paketo Buildpack for Maven 6.19.2
https://github.com/paketo-buildpacks/maven
Build Configuration:
$BP_EXCLUDE_FILES colon separated list of glob patterns, matched source files are removed
$BP_INCLUDE_FILES colon separated list of glob patterns, matched source files are included
$BP_JAVA_INSTALL_NODE false whether to install Yarn/Node binaries based on the presence of a package.json or yarn.lock file
$BP_MAVEN_ACTIVE_PROFILES the active profiles (comma separated: such as: p1,!p2,?p3) to pass to Maven
$BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS the additionnal arguments (appended to BP_MAVEN_BUILD_ARGUMENTS) to pass to Maven
$BP_MAVEN_BUILD_ARGUMENTS -Dmaven.test.skip=true --no-transfer-progress package the arguments to pass to Maven
$BP_MAVEN_BUILT_ARTIFACT target/*.[ejw]ar the built application artifact explicitly. Supersedes $BP_MAVEN_BUILT_MODULE
$BP_MAVEN_BUILT_MODULE the module to find application artifact in
$BP_MAVEN_DAEMON_ENABLED false use maven daemon
$BP_MAVEN_POM_FILE pom.xml the location of the main pom.xml file, relative to the application root
$BP_MAVEN_SETTINGS_PATH the path to a Maven settings file
$BP_MAVEN_VERSION 3 the Maven version
$BP_NODE_PROJECT_PATH configure a project subdirectory to look for `package.json` and `yarn.lock` files
Creating cache directory /home/cnb/.m2
Compiled Application: Contributing to layer
Executing mvnw --batch-mode -Dmaven.test.skip=true --no-transfer-progress package
[INFO] Scanning for projects...
[INFO]
[INFO] -----------------------< com.example:korifidemo >-----------------------
[INFO] Building korifidemo 0.0.1-SNAPSHOT
[INFO] from pom.xml
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- resources:3.3.1:resources (default-resources) @ korifidemo ---
[INFO] Copying 1 resource from src/main/resources to target/classes
[INFO] Copying 0 resource from src/main/resources to target/classes
[INFO]
[INFO] --- compiler:3.13.0:compile (default-compile) @ korifidemo ---
[INFO] Recompiling the module because of changed source code.
[INFO] Compiling 1 source file with javac [debug parameters release 21] to target/classes
[INFO]
[INFO] --- resources:3.3.1:testResources (default-testResources) @ korifidemo ---
[INFO] Not copying test resources
[INFO]
[INFO] --- compiler:3.13.0:testCompile (default-testCompile) @ korifidemo ---
[INFO] Not compiling test sources
[INFO]
[INFO] --- surefire:3.5.2:test (default-test) @ korifidemo ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- jar:3.4.2:jar (default-jar) @ korifidemo ---
[INFO] Building jar: /workspace/target/korifidemo-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot:3.4.1:repackage (repackage) @ korifidemo ---
[INFO] Replacing main artifact /workspace/target/korifidemo-0.0.1-SNAPSHOT.jar with repackaged archive, adding nested dependencies in BOOT-INF/.
[INFO] The original artifact has been renamed to /workspace/target/korifidemo-0.0.1-SNAPSHOT.jar.original
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:22 min
[INFO] Finished at: 2025-01-14T04:22:52Z
[INFO] ------------------------------------------------------------------------
Removing source code
Restoring application artifact
Paketo Buildpack for Executable JAR 6.12.0
https://github.com/paketo-buildpacks/executable-jar
Class Path: Contributing to layer
Writing env/CLASSPATH.delim
Writing env/CLASSPATH.prepend
Process types:
executable-jar: java org.springframework.boot.loader.launch.JarLauncher (direct)
task: java org.springframework.boot.loader.launch.JarLauncher (direct)
web: java org.springframework.boot.loader.launch.JarLauncher (direct)
SKIPPED: `Main-Class` found in `META-INF/MANIFEST.MF`, skipping build
Paketo Buildpack for Spring Boot 5.32.0
https://github.com/paketo-buildpacks/spring-boot
Build Configuration:
$BPL_JVM_CDS_ENABLED false whether to enable CDS optimizations at runtime
$BPL_SPRING_AOT_ENABLED false whether to enable Spring AOT at runtime
$BP_JVM_CDS_ENABLED false whether to enable CDS & perform JVM training run
$BP_SPRING_AOT_ENABLED false whether to enable Spring AOT
$BP_SPRING_CLOUD_BINDINGS_DISABLED false whether to contribute Spring Boot cloud bindings support
$BP_SPRING_CLOUD_BINDINGS_VERSION 1 default version of Spring Cloud Bindings library to contribute
Launch Configuration:
$BPL_SPRING_CLOUD_BINDINGS_DISABLED false whether to auto-configure Spring Boot environment properties from bindings
$BPL_SPRING_CLOUD_BINDINGS_ENABLED true Deprecated - whether to auto-configure Spring Boot environment properties from bindings
Creating slices from layers index
dependencies (22.0 MB)
spring-boot-loader (458.8 KB)
snapshot-dependencies (0.0 B)
application (40.7 KB)
Spring Cloud Bindings 2.0.4: Contributing to layer
Downloading from https://repo1.maven.org/maven2/org/springframework/cloud/spring-cloud-bindings/2.0.4/spring-cloud-bindings-2.0.4.jar
Verifying checksum
Copying to /layers/paketo-buildpacks_spring-boot/spring-cloud-bindings
Web Application Type: Reusing cached layer
Launch Helper: Reusing cached layer
4 application slices
Image labels:
org.opencontainers.image.title
org.opencontainers.image.version
org.springframework.boot.version
Timer: Builder ran for 5m25.095811025s and ended at 2025-01-14T04:24:18Z
Reusing layers from image 'index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-droplets@sha256:1d0351c93020b07d54e6d25c1ec7487bc8cd5eb85d4ca13be5a2bb18ea79d562'
Timer: Exporter started at 2025-01-14T04:24:22Z
Reusing layer 'paketo-buildpacks/ca-certificates:helper'
Reusing layer 'paketo-buildpacks/bellsoft-liberica:helper'
Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
Reusing layer 'paketo-buildpacks/executable-jar:classpath'
Reusing layer 'paketo-buildpacks/spring-boot:helper'
Reusing layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings'
Reusing layer 'paketo-buildpacks/spring-boot:web-application-type'
Reusing layer 'buildpacksio/lifecycle:launch.sbom'
Reusing 5/5 app layer(s)
Reusing layer 'buildpacksio/lifecycle:launcher'
Reusing layer 'buildpacksio/lifecycle:config'
Reusing layer 'buildpacksio/lifecycle:process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Adding label 'org.opencontainers.image.title'
Adding label 'org.opencontainers.image.version'
Adding label 'org.springframework.boot.version'
Setting default process type 'web'
Timer: Saving index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-droplets... started at 2025-01-14T04:24:24Z
*** Images (sha256:9965d3e5185b11b1ec1a00e24978c66792ee60f3affc5a1f4373ee760feb8e4d):
index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-droplets
index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-droplets:b2.20250114.041625
Timer: Saving index.docker.io/dashaun/f87e070b-6e1b-4049-80ab-d358ea8f27dc-droplets... ran for 3.598362133s and ended at 2025-01-14T04:24:28Z
Timer: Exporter ran for 6.488099245s and ended at 2025-01-14T04:24:28Z
Timer: Cache started at 2025-01-14T04:24:28Z
Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
Adding cache layer 'paketo-buildpacks/syft:syft'
Adding cache layer 'paketo-buildpacks/maven:application'
Adding cache layer 'paketo-buildpacks/maven:cache'
Adding cache layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings'
Adding cache layer 'buildpacksio/lifecycle:cache.sbom'
Timer: Cache ran for 26.643969709s and ended at 2025-01-14T04:24:55Z
Build successful
Waiting for app demo to start...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
Instances starting...
name: demo
requested state: started
routes: demo.apps.korifi.cc, demo.pikorifi.cc
last uploaded: Mon 13 Jan 22:16:15 CST 2025
stack: io.buildpacks.stacks.jammy
buildpacks:
type: web
sidecars:
instances: 1/1
memory usage: 1024M
start command: java "org.springframework.boot.loader.launch.JarLauncher"
state since cpu memory disk logging cpu entitlement details
#0 running 2025-01-14T04:28:07Z 198.6% 347.6M of 1G 0B of 1G 0B/s of 0B/s
type: executable-jar
sidecars:
instances: 0/0
memory usage: 1024M
start command: java "org.springframework.boot.loader.launch.JarLauncher"
There are no running instances of this process.
type: task
sidecars:
instances: 0/0
memory usage: 1024M
start command: java "org.springframework.boot.loader.launch.JarLauncher"
There are no running instances of this process.
No running instances?
cf apps
Getting apps in org org1 / space space1 as cf-admin...
name requested state processes routes
demo started web:1/1, executable-jar:0/0, task:0/0 demo.apps.korifi.cc
That makes me feel better!
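One last check from the laptop, hitting the Spring Boot actuator endpoint through the gateway (this assumes the demo.apps.korifi.cc /etc/hosts entry from earlier; -k because the certificates are self-signed):
curl -k https://demo.apps.korifi.cc/actuator/health
# expect: {"status":"UP"}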
Production
I added cloudflared to expose my Korifi cluster to the public, with public DNS records.
cf create-shared-domain pikorifi.cc
cf create-route pikorifi.cc --hostname demo
cf map-route demo pikorifi.cc --hostname demo
cf apps
Getting apps in org org1 / space space1 as cf-admin...
name requested state processes routes
demo started web:1/1, executable-jar:0/0, task:0/0 demo.apps.korifi.cc, demo.pikorifi.cc
Take a look for yourself:
https://demo.pikorifi.cc/actuator/health
Conclusion
I’m not done with this project. I didn’t go into too much detail, but I hope this is enough for you to replicate my setup.
As always, your feedback is welcome.
Links
- Multi-Node Korifi on Raspberry Pi from scratch (YouTube)
- pikorifi-demo
- Korifi