Configs · April 13, 2026 · 1 min read

Kubernetes — Production-Grade Container Orchestration

Kubernetes (K8s) is the open-source platform for automating deployment, scaling, and management of containerized applications. Originally designed by Google and now maintained by the CNCF, it is the industry standard for running containers in production.

AI · Open Source · Community
Quick Start

Use it first, then decide whether to dig deeper.

This section should show both users and agents what to copy first, what to install, and where it goes.

# Local development cluster
# Using kind (Kubernetes in Docker)
kind create cluster --name dev

# Or minikube
minikube start

# Deploy an application
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods
kubectl get services

# Scale up
kubectl scale deployment nginx --replicas=3
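Beyond a NodePort, the same nginx Service can be exposed through an Ingress once an ingress controller (e.g. ingress-nginx) is installed in the cluster. A minimal sketch; the hostname is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.local   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx         # the Service created above
            port:
              number: 80
```

Apply it with `kubectl apply -f ingress.yaml`; without an ingress controller the object is accepted but routes no traffic.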

Introduction

Kubernetes is the operating system for the cloud. It automates the deployment, scaling, networking, and management of containerized applications across clusters of machines. Originally created by Google (based on their internal Borg system), Kubernetes is now maintained by the Cloud Native Computing Foundation (CNCF) and has become the universal platform for running production workloads.

With over 122,000 GitHub stars, Kubernetes is used by virtually every major technology company, and all major cloud providers offer managed Kubernetes services: AWS (EKS), Google (GKE), and Azure (AKS) among them. It has spawned an entire ecosystem of tools, certifications, and practices.

What Kubernetes Does

Kubernetes manages containers at scale. You declare the desired state of your application (how many replicas, what resources, which ports, health checks), and Kubernetes continuously works to maintain that state. If a container crashes, Kubernetes restarts it. If a node fails, workloads are rescheduled. If traffic increases, it can auto-scale.
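The auto-scaling behavior mentioned above is itself declared as desired state. As a sketch, a HorizontalPodAutoscaler targeting a hypothetical Deployment named `my-app` (the name and threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # assumed Deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```

The controller adjusts `replicas` between the bounds to keep average CPU near the target; it requires the metrics-server (or equivalent) to be running.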

Architecture Overview

Control Plane
├─ API Server: REST API entry point
├─ etcd: cluster state store
├─ Scheduler: assigns pods to nodes
└─ Controller Manager: reconciliation loops
   (ReplicaSet, Deployment, Service, Job controllers)

Worker Nodes
├─ kubelet: node agent
├─ kube-proxy: network proxy
└─ Container runtime: containerd

Key objects
├─ Pod: smallest deployable unit
├─ Service: load balancing & service discovery
└─ Volumes: persistent storage

Self-Hosting & Configuration

# deployment.yaml — typical application deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

# Apply and manage
kubectl apply -f deployment.yaml
kubectl get pods -w
kubectl logs -f deployment/my-app
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app  # rollback
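The zero-downtime behavior of `rollout` can be tuned in the Deployment itself. A sketch of the relevant fragment of the spec above (values are illustrative):

```yaml
# Fragment of the Deployment spec — controls rolling-update behavior
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
```

With `maxUnavailable: 0`, new pods must pass their readiness probe before old ones are terminated.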

Key Features

  • Self-Healing — restarts failed containers, replaces unhealthy pods
  • Auto-Scaling — horizontal pod autoscaler based on CPU/memory/custom metrics
  • Rolling Updates — zero-downtime deployments with automatic rollback
  • Service Discovery — built-in DNS and load balancing for services
  • Storage Orchestration — attach any storage system (cloud, local, NFS)
  • Secret Management — store and manage sensitive configuration
  • Namespace Isolation — multi-tenant clusters with resource quotas
  • Extensibility — Custom Resource Definitions (CRDs) and operators
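Several of these features compose naturally. As a sketch combining namespace isolation, resource quotas, and secret management (all names and values are placeholders):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"       # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    pods: "20"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: team-a
type: Opaque
stringData:
  DB_PASSWORD: change-me    # placeholder; use a secrets manager in production
```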

Comparison with Similar Tools

| Feature        | Kubernetes          | Docker Swarm    | Nomad          | ECS        | Docker Compose |
| -------------- | ------------------- | --------------- | -------------- | ---------- | -------------- |
| Complexity     | High                | Low             | Moderate       | Moderate   | Very low       |
| Scale          | Unlimited           | Moderate        | Large          | Large      | Single host    |
| Ecosystem      | Massive             | Small           | Growing        | AWS only   | Development    |
| Auto-scaling   | Yes                 | Limited         | Yes            | Yes        | No             |
| Service mesh   | Istio, Linkerd      | No              | Consul Connect | App Mesh   | No             |
| Learning curve | High                | Low             | Moderate       | Moderate   | Very low       |
| Best for       | Production at scale | Simple clusters | Multi-workload | AWS native | Local dev      |

FAQ

Q: When should I NOT use Kubernetes? A: For simple applications, single-server deployments, or small teams. Docker Compose or a PaaS (Railway, Fly.io, Render) is simpler. Kubernetes adds value at scale with multiple services and teams.

Q: How do I learn Kubernetes? A: Start with minikube or kind for local clusters. Learn core concepts: Pods, Deployments, Services, ConfigMaps, Secrets. Then explore Helm, Ingress, and RBAC. The CKA certification provides structured learning.
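Of the core concepts listed above, ConfigMaps are typically the next one after Pods, Deployments, and Services. A minimal sketch (names and values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
# Consumed as environment variables in a container spec:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:1.0.0     # placeholder image
    envFrom:
    - configMapRef:
        name: app-config    # injects LOG_LEVEL into the container
```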

Q: Managed vs self-hosted Kubernetes? A: Use managed (EKS, GKE, AKS) unless you have a strong reason to self-host. Managed services handle control plane availability, upgrades, and security patches.

Q: What is K3s? A: K3s is a lightweight Kubernetes distribution by Rancher/SUSE. It is a fully conformant K8s in a single binary, ideal for edge, IoT, and development environments.
