A bare-metal Kubernetes cluster on a Raspberry Pi 5 that shows what managed Kubernetes hides from you.

Why I Built This

When you only use managed Kubernetes, you miss what’s actually happening underneath. On a Pi with 4GB RAM, every resource request matters. You learn about memory limits, CPU scheduling, and container efficiency in a way that EKS abstracts away.

The constraint IS the lesson. When I go back to EKS after debugging something on the Pi, I understand every abstraction layer better.

What It Is

A self-hosted Kubernetes learning environment on physical hardware:

  • Hardware: Raspberry Pi 5, ARM64, 4GB RAM, 64GB USB 3.0 SSD
  • K8s Distribution: k3s (lightweight, perfect for constrained hardware)
  • Container Runtime: containerd
  • OS: Raspberry Pi OS (Debian-based)
  • GitOps: FluxCD for automated deployments
  • Secrets: HashiCorp Vault
  • Observability: Prometheus + Grafana
  • Access: Cloudflare Tunnels for secure remote access
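As a rough sketch of how the Cloudflare Tunnel side can look: a minimal `cloudflared` config routes a public hostname to a service running on the Pi, with no inbound ports opened. The tunnel ID, paths, and hostname below are placeholders, not this project's actual values:

```yaml
# ~/.cloudflared/config.yml -- placeholder values throughout
tunnel: <tunnel-id>
credentials-file: /home/pi/.cloudflared/<tunnel-id>.json

ingress:
  # Route a public hostname to Grafana on the node
  - hostname: grafana.example.com
    service: http://localhost:3000
  # Catch-all rule (required as the last entry)
  - service: http_status:404
```

Because the connection is outbound from the Pi to Cloudflare's edge, the home router needs no port forwarding at all.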

Architecture

Raspberry Pi 5 (ARM64, 4GB RAM, 64GB SSD)
        │
        └── k3s (lightweight Kubernetes)
              ├── apps/
              │     └── deployed applications
              ├── monitoring/
              │     ├── Prometheus
              │     └── Grafana
              └── clusters/staging/
                    └── cluster configs

GitOps: FluxCD watches Git → auto-deploys changes
Secrets: HashiCorp Vault (instead of plain Kubernetes Secrets)
Access: Cloudflare Tunnels (no port forwarding)
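Concretely, "FluxCD watches Git" comes down to two custom resources: a `GitRepository` that polls the repo, and a `Kustomization` that applies a path from it. A minimal sketch (the repo URL and name are placeholders):

```yaml
# Flux source: poll the Git repo every minute
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab
  ref:
    branch: main
---
# Flux reconciler: apply manifests under clusters/staging,
# deleting resources that disappear from Git (prune)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: staging
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/staging
  prune: true
  sourceRef:
    kind: GitRepository
    name: homelab
```

With `prune: true`, deleting a manifest from Git removes the resource from the cluster on the next reconcile, which keeps Git the single source of truth.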

What I Learned

Running Kubernetes on constrained hardware teaches you things no certification can:

  • Resource limits matter. When you have 4GB total, you learn exactly how much memory Prometheus needs versus what the docs say.
  • ARM64 is different. Not every container image supports ARM. You learn to build multi-arch images and inspect image manifests for ARM64 support before deploying.
  • Storage on bare metal is real. No EBS to auto-provision. You understand what CSI drivers actually do.
  • Networking without a cloud VPC. You configure MetalLB, understand ARP, and deal with DHCP leases. The cloud hides all of this.
  • GitOps on real hardware. When FluxCD reconciles on a Pi, you see the actual resource pressure of the reconciliation loop.
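On a 4GB node, the `resources` stanza stops being boilerplate: requests decide whether the scheduler will place the pod at all, and limits decide when it gets OOM-killed. A minimal sketch of the shape every workload here carries (the image name and numbers are illustrative placeholders, not tuned values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/app:latest
          resources:
            requests:
              memory: "64Mi"   # reserved by the scheduler at placement
              cpu: "100m"
            limits:
              memory: "128Mi"  # container is OOM-killed above this
              cpu: "500m"      # throttled, not killed, above this
```

Setting requests equal to realistic usage (rather than the generous defaults many charts ship with) is what makes Prometheus, Grafana, and Vault coexist in 4GB.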

Planned

  • Multi-node cluster expansion
  • Running ArgoCD alongside FluxCD for comparison
  • Longhorn persistent storage
  • Network policies
  • CI/CD pipeline integration
  • Backup and recovery