Skeptical Critique: "What Do You Do Here, Bob?"
Skeptical of “hyperscaler” claims around Kubernetes (a.k.a. K8s)? You're absolutely right to be skeptical. Kubernetes often feels like a massive, distributed Rube Goldberg machine doing things you already solved decades ago with /etc/fstab, crontab, ip addr add (or ifconfig), and a few if statements in a shell loop.
1. "I already mount filesystems with ''/etc/fstab''"
You: mount /dev/sda1 /data → aaaand done.
Kubernetes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data
---
kind: PersistentVolumeClaim
...
Then bind it to a pod, hope the node doesn’t die, and pray the CSI driver doesn’t flake.
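For the record, "bind it to a pod" means yet another chunk of YAML in the pod spec, roughly like this (a minimal sketch; the pod name, image, and claim name are placeholders I'm introducing for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                  # illustrative name
spec:
  containers:
    - name: app
      image: alpine             # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data      # where the container finally sees your disk
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc       # assumed name for the PVC elided above

All of that to arrive where mount /dev/sda1 /data already had you.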
Reality:
You just replaced one line in /etc/fstab with 50 lines of YAML, a storage class, a PVC, a PV, and a node affinity rule.
And if the node fails? Your data is now node-local unless you paid for a distributed filesystem (Ceph, Longhorn, etc.) — which is another 10 services to manage.
Verdict: Kubernetes *didn’t solve storage*. It moved the problem into a declarative abstraction that’s slower, more fragile, and harder to debug.
2. "I already had automount (amd / autofs)"
You: automountd watches /net, mounts NFS on demand. Powered by ONE config file in /etc.
Kubernetes:
- Dynamic volume provisioning
- CSI drivers
- StorageClasses
- Pod restarts on node failure
- PV reclaim policies
Reality: Automount was instant, local, and near-zero config. Kubernetes “automount” requires:
- A cloud provider or on-prem storage backend
- IAM roles / service accounts
- Network policies
- Latency from API server round-trips
And if the control plane is down? No new mounts. No new pods. Nothing schedules until it comes back.
Verdict: You traded fast, simple, local automount for slow, distributed, API-dependent automount.
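For completeness, the dynamic-provisioning path above boils down to declaring a StorageClass and then a claim against it, roughly (a hedged sketch; the names are illustrative and the provisioner string depends entirely on which CSI driver your storage backend ships):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                    # illustrative name
provisioner: example.com/nfs    # assumption: your CSI driver's provisioner goes here
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Automount never asked you what your reclaim policy was.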
3. "I already stored secrets in Redis / env files / vault"
You:
redis-cli SET myapp:db_pass "s3cr3t"
or
export DB_PASS=$(aws ssm get-parameter ...)
Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
data:
  password: czNjcjN0   # base64, not encrypted
Stored in etcd. In plaintext by default. Unless you configure encryption at rest, or bolt on Vault, Sealed Secrets, or an external KMS.
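And "encryption at rest" is itself one more config file, handed to the API server via --encryption-provider-config, something like this (a sketch of an EncryptionConfiguration; the key is a placeholder you have to generate, rotate, and guard yourself):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder; bring your own key
      - identity: {}                                 # fallback so existing plaintext data still reads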
Reality:
- Base64 ≠ encryption
- etcd is a single point of failure
- RBAC misconfigs leak secrets cluster-wide
- You now need secret rotation policies, webhook mutators, sidecar injectors
Verdict: Kubernetes *didn’t secure secrets* — it created a new attack surface and called it a feature.
4. "I already had ''crontab''"
You:
*/5 * * * * /check-app.sh || restart-app.sh
Kubernetes:
apiVersion: batch/v1
kind: CronJob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: checker
              image: alpine
              command: ["/bin/sh", "-c", "check && restart"]
          restartPolicy: OnFailure
Now you have:
- A new API object
- A job controller
- A pod per run
- Logs scattered across nodes
- Garbage collection settings (sketched below)
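Those garbage-collection settings are just more fields on the same CronJob, so the one-liner's replacement keeps growing (a sketch; 3 and 1 happen to be the defaults, the TTL is arbitrary, and the name is illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: checker                     # illustrative name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3     # keep only the last 3 successful Jobs
  failedJobsHistoryLimit: 1         # and the most recent failure
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 600  # finished Jobs get deleted after 10 minutes
      template:
        spec:
          containers:
            - name: checker
              image: alpine
              command: ["/bin/sh", "-c", "check && restart"]
          restartPolicy: OnFailure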
Verdict: You replaced one line in crontab with a distributed job scheduler that spawns containers to run a shell script.
It’s not simpler. It’s insane overhead for */5 * * * *.
5. "I already had IPv4/IPv6"
You:
ip -6 addr add 2001:db8::1/64 dev eth0
Kubernetes:
- Enable dual-stack in kubelet
- Configure CNI (Calico, Cilium)
- Set ipFamilyPolicy: PreferDualStack on each Service (sketched after this list)
- Hope your cloud provider supports it
- Debug pod-to-pod routing across IP families
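The Service side of that checklist comes out roughly like this (a sketch; the name is illustrative, and whether you actually get both address families still depends on your cluster and CNI):

apiVersion: v1
kind: Service
metadata:
  name: web                     # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:                   # order picks the primary family
    - IPv6
    - IPv4
  selector:
    app: web
  ports:
    - port: 80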
Reality: You had dual-stack in 1998. Kubernetes made it a cluster-wide feature flag that breaks half the CNIs.
Verdict: Not innovation. Just reinventing ip with extra steps.
6. "I already scaled with load balancers"
You: HAProxy → [web1, web2, web3]
Kubernetes:
- Deployment with 3 replicas (sketched after this list, together with the Service)
- Service type LoadBalancer
- Cloud provider provisions ELB
- Ingress controller
- HPA
- Metrics server
- Custom metrics adapter
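Just the first two items, three replicas behind a cloud load balancer, already come to this (a minimal sketch; names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # the [web1, web2, web3] part
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx          # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer            # asks the cloud provider to go provision an ELB
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80

The Ingress, HPA, metrics server, and custom metrics adapter in the list are all extra layers on top of this.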
Reality: Your HAProxy scaled to 100k RPS with one config file. Kubernetes needs 10+ components to do the same — and still throttles on control plane bottlenecks.
Verdict: You didn’t gain scale. You gained latency, cost, and points of failure.
7. "Chasing desired state? I had a shell script."
You:
#!/bin/sh
if ! pg_isready -h db; then
  echo "DB down, restarting..."
  systemctl restart postgresql
fi
Kubernetes:
- Liveness probe (sketched after this list)
- Restart policy
- Pod disruption budget
- Node affinity
- Taints/tolerations
- Init containers
- Webhook validations
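Here is roughly what that liveness probe, the declarative stand-in for if ! pg_isready, looks like in a pod spec (a sketch; the name, image, command, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: postgres                # illustrative name
spec:
  containers:
    - name: postgres
      image: postgres:16        # placeholder image tag
      livenessProbe:
        exec:
          command: ["pg_isready", "-h", "localhost"]   # same check the shell script did
        periodSeconds: 300      # every 5 minutes, like the cron line earlier
        failureThreshold: 1     # one failure and the kubelet restarts the container
  restartPolicy: Always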
Reality: Your shell script ran in 2ms. Kubernetes “self-healing” takes 30+ seconds once the API server, scheduler, kubelet, and container runtime have all had their say. And if the API server is slow? Nothing reschedules.
Verdict: Declarative state is slower, less observable, and less debuggable than if ! ping; then restart; fi.
So What *Does* Kubernetes Actually Do?
| You Had (1990–2010) | Kubernetes Replaces It With… | Net Gain? |
|---|---|---|
| fstab | PV/PVC/CSI/StorageClass | ❌ More complex |
| crontab | CronJob + pod per run | ❌ Overhead |
| systemd / supervisord | Deployment + restartPolicy + probes | ❌ Slower |
| HAProxy / nginx | Service + Ingress + HPA + metrics pipeline | ❌ Fragile |
| Shell if loops | Controllers, webhooks, operators | ❌ Indirection |
| redis / etcd (your own) | etcd (mandatory), Redis (optional) | ❌ Forced dependency |
The Real Answer to "What Do You Do Here?"
Kubernetes doesn’t solve your problems. It *moves* them into a distributed, declarative, API-driven abstraction layer and charges you in complexity, latency, and cloud bills.
It’s useful only when:
- You have 100s of services
- You deploy 50+ times/day
- You run multi-region, multi-cloud
- You have dedicated platform teams
For everyone else — including 95% of companies — it’s architectural overreach.
Final Thought: The "Bob" Test
If you can’t explain what Kubernetes does in one sentence *without* saying “orchestration,” “declarative,” or “cloud-native,” then it’s probably not doing anything you couldn’t do with ssh, rsync, and a shell script.
You’re not missing something. Kubernetes is the answer to a problem most people don’t have.
And when they adopt it anyway?
They spend the next 3 years trying to get back to the simplicity of crontab and /etc/fstab.
TL;DR
You don’t need Kubernetes. You need a shell script and a beer.
Kubernetes needs *you*, to keep its ecosystem alive.