k8s Installation

  1. Disable the firewall (all nodes)

    firewall-cmd --state          # check firewall status
    systemctl stop firewalld      # stop the firewalld service
    systemctl disable firewalld   # do not start it on boot
    
  2. Set hostnames and link the nodes to each other (all nodes)

    # On each machine, run its own command to set its hostname
    hostnamectl set-hostname k8s-1.22.3-master
    hostnamectl set-hostname k8s-1.22.3-node01
    hostnamectl set-hostname k8s-1.22.3-node02
    
    # On every machine, maintain the other nodes' IPs in /etc/hosts
    vi /etc/hosts
    192.168.20.32 k8s-1.22.3-master
    192.168.20.35 k8s-1.22.3-node01
    192.168.20.36 k8s-1.22.3-node02
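
    To confirm that the hosts entries resolve, a quick check from any node (hostnames as configured above):

    ping -c 1 k8s-1.22.3-master
    ping -c 1 k8s-1.22.3-node01
    ping -c 1 k8s-1.22.3-node02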
    
  3. Disable SELinux (all nodes)

    # Temporarily disable SELinux
    setenforce 0
    # Check the status
    sestatus -v
    # Permanently disable SELinux (takes effect after a reboot)
    vi /etc/sysconfig/selinux
    SELINUX=disabled
    
  4. Disable swap (all nodes) (keeps memory from being paged to disk; on OOM the pod fails immediately and is replaced by a new one)

    # Turn swap off now and comment out the swap entry in /etc/fstab
    swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
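
    To confirm swap is fully disabled (no swap devices, 0 swap in use):

    swapon --show
    free -h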
    
  5. Add the Kubernetes yum repository (all nodes)

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    Check that the repository is usable

    yum repolist
    
  6. Let iptables see bridged traffic (all nodes)

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    
    sysctl --system
    
    # Verify that the bridge netfilter module is loaded
    lsmod | grep br_netfilter
    
  7. Configure IPVS instead of iptables (all nodes)

    yum install ipset ipvsadm -y
    
    cat <<EOF> /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    
    chmod +x /etc/sysconfig/modules/ipvs.modules
    
    /bin/bash /etc/sysconfig/modules/ipvs.modules
    
    # Verify that the modules are loaded
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    
  8. Install Kubernetes 1.22.3 with yum (all nodes)

    yum install -y kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3
    

    Configure the kubelet cgroup driver

    vim /etc/sysconfig/kubelet
    KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"
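
    The kubelet is set to the systemd cgroup driver above, so the container runtime should use the same driver; mismatched drivers typically keep the kubelet from starting. A minimal sketch for Docker (assumed to be the runtime here, since the image steps below use docker):

    # Merge with any settings already present in /etc/docker/daemon.json
    cat <<EOF > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl daemon-reload && systemctl restart docker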
    

    Enable kubelet on boot

    systemctl enable kubelet
    
  9. Pull the images kubeadm needs for initialization (all nodes)

    List the images required by kubeadm

    kubeadm config images list --kubernetes-version=1.22.3
    

    k8s.gcr.io/kube-apiserver:v1.22.3
    k8s.gcr.io/kube-controller-manager:v1.22.3
    k8s.gcr.io/kube-scheduler:v1.22.3
    k8s.gcr.io/kube-proxy:v1.22.3
    k8s.gcr.io/pause:3.5
    k8s.gcr.io/etcd:3.5.0-0
    k8s.gcr.io/coredns/coredns:v1.8.4

    Pull the images via the Aliyun mirror: replace k8s.gcr.io with registry.cn-hangzhou.aliyuncs.com/google_containers, then pull, e.g.

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3

    The last image, coredns, is a special case: pull it as docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 (without the extra coredns/ path segment)

    # Pull the images
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
    

    Re-tag the pulled images to the names kubeadm expects

    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3 k8s.gcr.io/kube-apiserver:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
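
    The pull-and-tag steps above can also be scripted as one loop over the kubeadm image list; basename keeps only the last path segment, which also covers the coredns special case:

    for image in $(kubeadm config images list --kubernetes-version=1.22.3); do
      mirror=registry.cn-hangzhou.aliyuncs.com/google_containers/$(basename "$image")
      docker pull "$mirror"
      docker tag "$mirror" "$image"
    done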
    
  10. Initialize the cluster (master node)

    Flannel will be used later for the cluster network, so pass --pod-network-cidr=10.244.0.0/16, as documented on the flannel GitHub page

    kubeadm init --kubernetes-version=1.22.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
    

    Record the kubeadm join command from the output; the worker nodes use it to join the cluster

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
     export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
     https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.20.32:6443 --token 1i8sqz.6y8xx8z90fqy1hy9 \
           --discovery-token-ca-cert-hash sha256:84fd294929f901517071c93e2bd7c179ba4848e853dbb506b7d1a7be575ccd11
    

    On the master node, run the commands shown in the "initialized successfully" output above

    If you lose the token, generate a new one on the master node with

    kubeadm token create --print-join-command
    

    On each worker node, run the kubeadm join command to join the cluster

  11. Install flannel to provide the cluster network (run on the master; it rolls out to the worker nodes automatically)

    In case the machine cannot download files on the fly, download https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml (linked from the flannel GitHub page) in advance and save it on the master as kube-flannel.yml

    ---
    kind: Namespace
    apiVersion: v1
    metadata:
      name: kube-flannel
      labels:
        pod-security.kubernetes.io/enforce: privileged
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-flannel
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
           #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
           #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
           #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
            image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
    

    Deploy flannel

    kubectl apply -f kube-flannel.yml
    

    Check that the worker nodes are connected

    kubectl get nodes
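
    The flannel DaemonSet pods should also be Running on every node (namespace kube-flannel, as defined in the manifest above):

    kubectl get pods -n kube-flannel -o wide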

  12. Install kubernetes-dashboard

    https://v1-22.docs.kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/

    Download the manifest https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml and save it as kubernetes-dashboard.yml (the copy below already includes the NodePort change described later)

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - name: http
          port: 443
          targetPort: 8443
          nodePort: 32659
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.2.0
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
        spec:
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.6
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
    

    Modify the kubernetes-dashboard Service to type NodePort so it is easy to reach from outside the cluster

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - name: http
          port: 443
          targetPort: 8443
          nodePort: 32659
      selector:
        k8s-app: kubernetes-dashboard
    

    Deploy it

    kubectl apply -f kubernetes-dashboard.yml
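
    Check that the dashboard pods and Services came up:

    kubectl get pods,svc -n kubernetes-dashboard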

    Create a service account and permissions for the dashboard namespace

    https://github.com/kubernetes/dashboard/blob/v2.2.0/docs/user/access-control/creating-sample-user.md

    kubernetes-dashboard-admin-user.yml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    
    Apply the manifest

    kubectl apply -f kubernetes-dashboard-admin-user.yml
    

    Generate a token for the admin-user account

    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    

    If you lose the token, look it up with

    kubectl describe secret admin-user -n kubernetes-dashboard
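
    With the token, the dashboard is reachable on any node IP at the NodePort from the Service above (32659 in this manifest); for example, a quick connectivity check against the master IP used in this guide:

    # -k because the dashboard serves a self-signed certificate
    curl -k https://192.168.20.32:32659/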
    
  13. Install ingress-nginx; expose the nginx-ingress-controller with a NodePort Service, because a self-hosted cluster has no LoadBalancer support
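
    A minimal sketch, assuming the upstream bare-metal provider manifest is used (the URL/version may differ; the Service and namespace names below are the upstream defaults):

    # Deploy ingress-nginx; the bare-metal manifest exposes the controller via NodePort
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
    # If the controller Service is not already NodePort, patch it
    kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"NodePort"}}'
    # Check the assigned node ports
    kubectl get svc -n ingress-nginx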

  14. Install NFS on CentOS 8

    # Install NFS
    dnf install nfs-utils
    # Enable the NFS service on boot and start it now
    systemctl enable --now nfs-server
    # Create the directory you want to export
    mkdir xxx
    # Export the directory through NFS
    vim /etc/exports
    # Add an entry exporting it to the whole subnet (xxx = your directory)
    xxx 192.168.0.0/24(rw,sync,no_subtree_check)
    # After saving the file, reload the exports
    exportfs -r
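
    To verify the export took effect:

    # Lists the directories exported by this host
    showmount -e localhost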
    
  15. If a node needs to mount NFS volumes, install nfs-utils on it in advance

    dnf install nfs-utils
    # Installing the package appears to be enough; the service does not need to be started
    
  16. Rolling back a deployment revision

    Rollbacks are based on the recorded historical revisions of the resource's YAML

    # List the revision history of a deployment
    kubectl rollout history deployment <deployment-name> -n <namespace>
    # Show the details of a specific revision
    kubectl rollout history deployment <deployment-name> --revision=<revision> -n <namespace>
    # Roll back to the previous revision
    kubectl rollout undo deployment <deployment-name> -n <namespace>
    # Roll back to a specific revision
    kubectl rollout undo deployment <deployment-name> -n <namespace> --to-revision=<revision>
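
    For example, with a hypothetical deployment named web in the demo namespace:

    kubectl rollout history deployment web -n demo
    kubectl rollout undo deployment web -n demo --to-revision=2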