My take on Kubernetes has changed a lot over the years.

Different stages

I went through several phases:

  • “This looks very cool, I want to learn to do complex things like that”
  • “This is so complex”
  • “Wow, it really handles everything for us”
  • “I like it, but for a pet project or a startup it’s way too complex”
  • “It’s really good for pretty much everything”

This looks very cool

It looked like something serious people did. It was a big plus for joining Habx in 2018.

This is so complex

It turned out the people who had set it up back then didn’t have much experience with it. We had a monolithic application deployed in two modes, as a backend and as a frontend, each with 8 replicas, and those replicas were frequently dying and restarting. I started asking questions about it and the answers weren’t satisfactory (“yeah, that’s Kubernetes life, pods die and restart”).

We didn’t have any resource requests or limits set on our deployments. This is Kubernetes 101: without them, the scheduler can’t make informed decisions and pods get evicted unexpectedly when nodes run out of memory.
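As a sketch, here is what that missing piece looks like on a container. The names, image, and values are illustrative, not our actual deployments:

```yaml
# Illustrative Deployment fragment. Requests tell the scheduler how much the
# pod needs (used for bin-packing decisions); limits cap what the container
# may consume before it is throttled (CPU) or OOM-killed (memory).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend                # hypothetical name
spec:
  replicas: 8
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:latest   # placeholder image
          resources:
            requests:
              cpu: 250m        # guaranteed share, drives scheduling
              memory: 256Mi
            limits:
              memory: 512Mi    # beyond this the container is OOM-killed,
                               # instead of taking the whole node down
```

With requests in place, the scheduler only packs pods onto nodes that can actually hold them, and memory pressure kills the offending container rather than evicting random neighbors.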

Crash by crash, I progressively learned about the features. One I really like, and which is often overlooked, is PriorityClass. We had none of those either. One day the cluster autoscaler itself couldn’t find a node to run on: the existing nodes were fully packed, and we couldn’t deploy anything anymore. With proper priority classes, critical system components could have preempted less important workloads.
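A minimal sketch of what we were missing. The class name, value, and description are examples, not a recommendation:

```yaml
# Illustrative PriorityClass: when nodes are full, pods with a higher
# priority can preempt (evict) lower-priority pods to get scheduled.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-infra          # hypothetical name
value: 1000000                  # higher number = higher priority
preemptionPolicy: PreemptLowerPriority
globalDefault: false
description: "Components the cluster cannot function without"
```

A workload opts in via its pod spec with `priorityClassName: critical-infra`; everything left on the default priority becomes fair game for preemption when space runs out.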

It handles everything for us

The first time the two people on the devops team were on holiday at the same time, I was a bit worried. Then I realized that this is pretty much what Kubernetes excels at: if we give it the right information, it will handle most scenarios well on its own. In order of priority, I would say:

  • Define correct resource requests and limits
  • Set PriorityClass for critical workloads
  • Configure pod anti-affinity rules (spread replicas across nodes)
  • Add monitoring (OpenTelemetry + OpenObserve)
  • Set PodDisruptionBudget (control how many pods can be down during maintenance)
  • Add horizontal pod autoscaling
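Two of these can be sketched compactly: an anti-affinity rule that spreads replicas across nodes, and a PodDisruptionBudget that keeps a floor of pods alive during drains. The `app: api` labels and the `minAvailable` value are illustrative:

```yaml
# Pod *template* fragment (goes inside a Deployment's spec.template.spec):
# prefer scheduling replicas of "api" onto nodes that don't already run one.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: api
            topologyKey: kubernetes.io/hostname
---
# Keep at least 2 "api" pods running through voluntary disruptions
# (node drains, cluster upgrades).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```

Using `preferred` rather than `required` anti-affinity means the scheduler still places all replicas if there are fewer nodes than replicas, instead of leaving pods pending.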

But still complex

For personal projects, I was still building custom deployment scripts around docker-compose. It’s lighter: I didn’t want to waste 512MB of RAM on an empty cluster just to host my apps. And I’m fine spending a few minutes — I mean hours — I mean, well, maybe days — maintaining ugly-looking Python scripts.

Or is it?

Actually, not that complex. Setting up a Kubernetes cluster on a 6€/month VPS can be done in minutes using k3s. Yes, it’s heavier than docker-compose: you need to accept “wasting” around 512MB of RAM. But in return, you get a proper orchestrator that handles restarts, rolling updates, and resource management for you.
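The bootstrap really is short. This sketch uses the official k3s installer; everything after the first line is just verification and is optional:

```
# On the VPS: install k3s (API server, kubelet, and containerd in one binary).
curl -sfL https://get.k3s.io | sh -

# Verify the single-node cluster is up:
sudo k3s kubectl get nodes

# To drive it from your own machine: copy /etc/rancher/k3s/k3s.yaml,
# change its server address to the VPS, and use plain kubectl against it.
```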

How I use it

This might not be the “right” way to do it. But it’s the way that has made me the most efficient and happy:

  • I use raw Debian 13 instances with nothing but k3s on them
  • I mostly use raw Kubernetes YAML files with Kustomize to manage multiple environments—I genuinely like the Kubernetes YAML syntax
  • I sometimes use Helm but avoid it whenever possible
  • I enable automatic upgrades for k3s
  • I use Claude Code to help me define deployments and diagnose issues
  • CI and container registry are still on GitHub Actions
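The Kustomize part of the list above can be sketched as a base plus per-environment overlays. The directory names are my habit, not a Kustomize convention:

```yaml
# base/kustomization.yaml — manifests shared by every environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml — environment-specific tweaks
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas.yaml    # e.g. a patch bumping the replica count
```

`kubectl apply -k overlays/production` then renders the base with the production patches and applies the result, with no templating language in between.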

Conclusion

Kubernetes has a reputation for being overkill, and for years I agreed. But the reality is that a properly configured k3s cluster on a cheap VPS gives you production-grade orchestration with minimal overhead.