Lab: Kubernetes The Hard Way by Kelsey Hightower
What I Actually Learned
I recently finished Kubernetes The Hard Way, and I’m glad I didn’t rush it.
This wasn’t about getting a cluster running as fast as possible. It was about slowing down and understanding what Kubernetes is actually doing underneath all the abstractions we usually rely on.
And honestly — it was intense.
Why I decided to do it
Kubernetes is easy to use today. It’s much harder to understand.
Managed services and automation tools do an incredible job of hiding complexity, but that can also create blind spots. I wanted to understand what happens when things break, not just how to deploy workloads when everything is healthy.
Kubernetes The Hard Way forces you to confront that complexity head-on:
- Certificates are generated manually (see the sketch after this list)
- Components are wired together explicitly
- Networking doesn’t magically work unless you make it work
- Security boundaries are real, not assumed
There’s no shortcut through this lab — and that’s the point.
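To give a flavor of the certificate step: every component and user gets its own key pair signed by a cluster CA, one at a time. Here's a minimal sketch of that flow using openssl; file names and subjects are my own illustrations, and the lab's exact commands and tooling differ in the details.

```bash
# Illustrative sketch, not the lab's exact commands.
# 1. Create a self-signed cluster CA.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -days 3653 \
  -subj "/CN=kubernetes-ca" -out ca.crt

# 2. Issue a client certificate for the admin user. Kubernetes reads
#    the username from the CN field and the group from O.
openssl genrsa -out admin.key 4096
openssl req -new -key admin.key \
  -subj "/CN=admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out admin.crt
```

Now repeat that for the API server, every kubelet, the controller manager, the scheduler, and kube-proxy, and get every subject and SAN right. You learn quickly why certificate mistakes are such a common source of cluster pain.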
What stood out the most
A few things really clicked for me while working through it.
Fundamentals matter more than tools
Networking, routing, certificates, and permissions aren’t “Kubernetes problems.”
They’re infrastructure problems that Kubernetes sits on top of.
When any of those are off, the cluster can look perfectly healthy while being completely broken in practice.
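Pod-to-pod routing is a good concrete example. The lab doesn't hand you an overlay network that figures this out for you; each host has to know which node owns which pod CIDR. A sketch with hypothetical node IPs and ranges:

```bash
# Hypothetical addresses. Without these routes, pods schedule and run
# fine on each node but cannot reach each other across nodes.
# On node-0 (owns pod range 10.200.0.0/24):
ip route add 10.200.1.0/24 via 192.168.1.11   # node-1's pod range
# On node-1 (owns pod range 10.200.1.0/24):
ip route add 10.200.0.0/24 via 192.168.1.10   # node-0's pod range
```

Nothing Kubernetes-specific there at all. It's plain Linux routing, and the cluster simply doesn't work without it.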
“Running” doesn’t mean “working”
I hit multiple moments where:
- All components were up
- Nodes were registered
- Yet nothing actually functioned
Logging failed. Exec failed. Services didn’t route. Pods couldn’t talk across nodes.
Those moments were frustrating — but they were also where the real learning happened.
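My mental checklist now goes beyond "are the nodes Ready." A rough smoke test in the spirit of what the lab has you verify, exercising the paths that actually failed on me (the pod name and image here are just examples):

```bash
# A component can report healthy while the data path is broken, so
# exercise exec, logs, and DNS directly.
kubectl run probe --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/probe --timeout=60s
kubectl exec probe -- nslookup kubernetes.default   # exec path + cluster DNS
kubectl logs probe                                  # kubelet log API
kubectl delete pod probe
```

If exec or logs fail here with authorization errors, the problem is usually not the workload. It's the wiring between the API server and the kubelet, which is exactly where RBAC comes in.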
RBAC finally made sense
RBAC (Role-Based Access Control) stopped being an abstract concept and became very real once logging, exec, and kubelet access broke.
It’s not just about permissions.
It’s about clearly defining trust boundaries between components.
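The concrete case in the lab: the API server itself needs explicit permission to reach the kubelet API before `kubectl logs` and `kubectl exec` work at all. The binding has roughly this shape; I'm reconstructing it from memory, so treat the names, resources, and the "kubernetes" user as approximations of the lab's actual manifest.

```bash
# The API server authenticates to kubelets as the user named in the CN
# of its client certificate ("kubernetes" here is an assumption).
# Without a binding like this, logs/exec return authorization errors.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/metrics"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```

One component trusting another is never implicit here. Someone wrote it down, and when it's missing, things break in very visible ways.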
Encryption at rest isn’t theoretical
Seeing secrets stored as encrypted data directly in etcd was a strong reminder that security isn’t automatic.
It’s configured.
It’s enforced.
And it’s validated — or it doesn’t exist.
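For reference, the mechanism is an EncryptionConfiguration handed to the API server via --encryption-provider-config. A minimal sketch assuming the aescbc provider, followed by the check that convinced me it was real; key names and paths are placeholders, and the etcdctl endpoint/TLS flags are omitted.

```bash
# Sketch only. The unquoted heredoc lets the shell inline a random key.
cat > encryption-config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

# After restarting kube-apiserver with the config, write a secret and
# read it straight out of etcd. You should see a "k8s:enc:aescbc:v1:"
# prefix and ciphertext, not your plaintext value.
kubectl create secret generic demo --from-literal=mykey=mydata
etcdctl get /registry/secrets/default/demo | hexdump -C
```

Running that hexdump and seeing ciphertext instead of my own string made the point better than any documentation page.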
What this changed for me
Finishing the lab helped me connect the dots between:
- What Kubernetes abstracts away
- What still exists underneath
- Where things fail when assumptions are wrong
It also gave me a deeper appreciation for why managed Kubernetes exists — and why platform teams work so hard to remove operational footguns.
At the same time, it made me more intentional about when and how I actually use Kubernetes.
What’s next
Instead of treating Kubernetes as “something to run because it’s there,” I’m now thinking carefully about where it actually makes sense in my own network.
Some workloads benefit from:
- Orchestration
- Self-healing
- Declarative configuration
- Automation
Others don’t.
Forcing Kubernetes where it doesn’t belong creates more complexity — not less.
On-prem and cloud
One thing I really appreciate after going through this is how Kubernetes helps bridge the gap between on-prem and cloud environments.
The underlying infrastructure may differ, but the patterns — deployment, networking, security, automation — stay consistent.
That shared operational model is powerful when it’s applied intentionally.
If you’re curious
This is the lab I worked through, taking my time with each step:
https://github.com/kelseyhightower/kubernetes-the-hard-way
It’s not easy, and it’s not fast — but it’s one of the most educational experiences I’ve had working with Kubernetes.
Still learning.
Still refining.
But walking away with a deeper respect for what Kubernetes is actually solving — and what it demands in return.