Review of yesterday's content:
     - Taints:
         Affect Pod scheduling; taints are applied to worker nodes. Syntax: KEY[=VALUE]:EFFECT. There are three taint effects.
             EFFECT:
                 - NoSchedule:
                     Pods already scheduled to the node are not evicted, but no new Pods are scheduled onto it.
                 - PreferNoSchedule:
                     Schedule Pods to other nodes whenever possible; only when no other node qualifies are Pods scheduled onto this node.
                 - NoExecute:
                     Existing Pods are evicted, and no new Pods are scheduled.
     - Taint tolerations:
         For a Pod to be scheduled onto a node, it must tolerate all of that node's taints.
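         A quick hands-on sketch (standard kubectl syntax; the node name k8s152.oldboyedu.com is only illustrative):
kubectl taint node k8s152.oldboyedu.com school=oldboyedu:NoSchedule
kubectl describe node k8s152.oldboyedu.com | grep Taints
# a trailing "-" removes the taint again:
kubectl taint node k8s152.oldboyedu.com school-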
     
     - Affinity:
         - Node affinity:
             Schedules Pods onto desired nodes; can equally be used to keep Pods off certain nodes.
         - Pod affinity:
             Schedules Pods into the same topology domain, keyed on a topology key.
         - Pod anti-affinity:
             Schedules Pods into different topology domains.
             
     - Node selector:
         Selects scheduling targets based on node labels.
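         A minimal sketch (the disktype label is hypothetical): label a node, then select it in the Pod spec:
kubectl label node k8s152.oldboyedu.com disktype=ssd
# Pod spec fragment:
  spec:
    nodeSelector:
      disktype: ssd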
         
     - Controllers:
         - Job:
             One-off tasks. The restart policy must be set to Never.
         - CronJob:            ---> cj
             Periodic tasks. Under the hood it still drives the Job controller.
         - DaemonSet:        ---> ds
             Runs exactly one Pod on every node.
         - StatefulSet:    ---> sts
             Deploys stateful services; needs a headless service to provide a stable, unique network ID.
             
     - Storage, advanced:
         - pv:
             Binds a backend storage device and declares a storage size.
         - pvc:
             Binds to a pv; the consumer need not care about the backend storage device.
         - sc:
             Dynamically provisions pv objects.
Q1: What is the difference between node affinity and the node selector?
     In common:
         Both select candidate nodes for scheduling based on labels.

     Differences:
         Node affinity can schedule Pods onto specified nodes, or just as well keep Pods off those nodes; it is the more powerful mechanism.
         The node selector can only select nodes to schedule onto.
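A sketch of the "keep Pods off these nodes" case that nodeSelector cannot express (the disktype label is hypothetical):
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            # NotIn inverts the match: never schedule onto nodes labeled disktype=hdd.
            - key: disktype
              operator: NotIn
              values: ["hdd"]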
 Q2: Given the ds resource, why would we still use the node selector?
     A ds resource deploys one Pod on every node.
     When the cluster is large, perhaps thousands of nodes, and a service does not need to run on every one of them, we can use the node selector or node affinity to restrict where the Pods land.
 Q3: What is the relationship between sc, pv, and pvc?
     sc ---> dynamically creates ---> pv
     pvc ---> binds to | associates with (via the sc) ---> pv
 Q4: What are the ways to influence Pod scheduling?
     1.resources
     2.nodeName
     3.nodeSelector
     4.Affinity (node affinity, Pod affinity, Pod anti-affinity)
     5.Taints
     6.Taint tolerations
 Q5: What are the ways to access a k8s Pod from outside the cluster?
     1.hostNetwork
     2.svc.nodePort
     3.po.spec.containers.ports.hostPort
     4.ingress ---> ing 
     5.apiserver reverse proxy
     6.kubectl proxy 
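To make option 3 concrete, a minimal Pod sketch using hostPort (name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
spec:
  containers:
  - name: web
    image: nginx:1.20.1
    ports:
    - containerPort: 80
      # Expose container port 80 as port 8080 on whichever node the Pod lands on.
      hostPort: 8080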
 Q6: How is the K8S cluster at your company deployed in production? How many masters? How many nodes? Why that design? Briefly explain.
     20 nodes ---> 3 masters + 3 etcd
             17 workers

     5 nodes --> 3 MASTERS
           5 WORKERS

     50 --->
Homework walkthrough: ---> deploy the "jasonyin2020/oldboyedu-games:v0.3" image with a deployment, with the following requirements:
         - Use a cm resource for the nginx configuration file
         - Expose the service with an svc
         - Store the site code with an sc
         - Push the image to a private harbor project (oldboyedu-homework), with username "linux82" and password "oldboyEDU@2022", using a secret resource
         - Taint all nodes with "school=oldboyedu"
         - Write each of the above resources in its own manifest file, then merge them into a single manifest
         - A browser hitting ports [80-90] on any worker node must reach the 11 games
         
         
 Requirement analysis: ---> break the task down
     1.Pull the image and push it to the harbor registry;
     2.Deploy the service with a deployment resource;
     3.Use cm and svc;
     4.Sort out the taints;
     5.Store the site code with an sc
     
 Pull the image and push it to the harbor registry
     1.Pull the image 
 docker pull jasonyin2020/oldboyedu-games:v0.3
    2.Create the harbor project and user
 Done in the harbor web UI; omitted here.
    3.Push the image 
 cat > /etc/docker/daemon.json <<EOF
 {
   "insecure-registries": ["k8s151.oldboyedu.com:5000","10.0.0.250"],
   "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
 }
 EOF
 systemctl restart docker
 docker login -u linux82 -p oldboyEDU@2022 10.0.0.250
 docker tag jasonyin2020/oldboyedu-games:v0.3 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
 docker push 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
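Optionally verify from another node (one whose docker also trusts 10.0.0.250, as configured above) that the image can be pulled back:
docker pull 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3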
Deploy the service with a deployment resource:
 [root@k8s151.oldboyedu.com games]# cat 01-deploy-oldboyedu-games.yaml 
 kind: Deployment
 apiVersion: extensions/v1beta1
 metadata:
   name: oldboyedu-linux82-homework-games
 spec:
   replicas: 1
   selector:
      matchLabels:
        apps: oldboyedu-homework
   template:
      metadata:
         name: linux82-games-pods
         labels:
            apps: oldboyedu-homework
      spec:
         imagePullSecrets:
         - name: oldboyedu-linux82-harbor
         containers:
         - name: linux82-games-containers
           image: 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
 [root@k8s151.oldboyedu.com games]# 
 [root@k8s151.oldboyedu.com games]# 
 [root@k8s151.oldboyedu.com games]# cat 02-harbor-secret.yaml 
 apiVersion: v1
 data:
   .dockerconfigjson: eyJhdXRocyI6eyIxMC4wLjAuMjUwIjp7InVzZXJuYW1lIjoibGludXg4MiIsInBhc3N3b3JkIjoib2xkYm95RURVQDIwMjIiLCJlbWFpbCI6Im9sZGJveUBvbGRib3llZHUuY29tIiwiYXV0aCI6ImJHbHVkWGc0TWpwdmJHUmliM2xGUkZWQU1qQXlNZz09In19fQ==
 kind: Secret
 metadata:
   name: oldboyedu-linux82-harbor
   namespace: default
 type: kubernetes.io/dockerconfigjson
 [root@k8s151.oldboyedu.com games]# 
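 For reference, the .dockerconfigjson blob above is just the base64 of the linux82/oldboyEDU@2022 credentials for 10.0.0.250; an equivalent manifest can be generated with kubectl instead of hand-encoding it (on newer kubectl, use --dry-run=client):
 kubectl create secret docker-registry oldboyedu-linux82-harbor \
   --docker-server=10.0.0.250 \
   --docker-username=linux82 \
   --docker-password=oldboyEDU@2022 \
   --docker-email=oldboy@oldboyedu.com \
   --dry-run -o yaml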
 Use cm and svc:
 [root@k8s151.oldboyedu.com games]# cat 01-deploy-oldboyedu-games.yaml 
 kind: Deployment
 apiVersion: extensions/v1beta1
 metadata:
   name: oldboyedu-linux82-homework-games
 spec:
   replicas: 1
   selector:
      matchLabels:
        apps: oldboyedu-homework
   template:
      metadata:
         name: linux82-games-pods
         labels:
            apps: oldboyedu-homework
      spec:
         imagePullSecrets:
         - name: oldboyedu-linux82-harbor
         containers:
         - name: linux82-games-containers
           image: 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
           volumeMounts:
           - name: games
             mountPath: /etc/nginx/nginx.conf
             subPath: nginx.conf
         volumes:
         - name: games
           configMap:
             name: oldboyedu-games-cm
             items:
             - key: nginx.conf
               path: nginx.conf
 [root@k8s151.oldboyedu.com games]# 
 [root@k8s151.oldboyedu.com games]# cat 03-games-cm.yaml 
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: oldboyedu-games-cm
 data:
    nginx.conf: |
       worker_processes  1;
       events {
           worker_connections  1024;
       }
       http {
           include       mime.types;
           default_type  application/octet-stream;
           sendfile        on;
           keepalive_timeout  65;
           # include /usr/local/nginx/conf/conf.d/*.conf;
           server {
               listen       80;
               root        /usr/local/nginx/html/bird/;
               server_name   game01.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/pinshu/;
               server_name   game02.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/tanke/;
               server_name   game03.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/chengbao/;
               server_name   game04.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/motuo/;
               server_name   game05.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/liferestart/;
               server_name   game06.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/huangjinkuanggong/;
               server_name   game07.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/feijidazhan/;
               server_name   game08.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/zhiwudazhanjiangshi/;
               server_name   game09.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/xiaobawang/;
               server_name   game10.oldboyedu.com;
           }
       
           server {
               listen       80;
               root        /usr/local/nginx/html/pingtai/;
               server_name   game11.oldboyedu.com;
           }
       }
 [root@k8s151.oldboyedu.com games]# 
 [root@k8s151.oldboyedu.com games]# 
 [root@k8s151.oldboyedu.com games]# cat 04-games-svc.yaml 
 apiVersion: v1
 kind: Service
 metadata:
   name: linux82-games-svc
 spec:
   type: NodePort
   ports:
   - port: 80
     targetPort: 80
     nodePort: 80
   selector:
     apps: oldboyedu-homework
 [root@k8s151.oldboyedu.com games]# 
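 Note: the apiserver's default NodePort range is 30000-32767, so it will reject nodePort: 80 unless the range has been widened beforehand. A sketch of one way to do it (the flag is standard; the exact range is our choice here), after which the static Pod restarts on its own:
 vim /etc/kubernetes/manifests/kube-apiserver.yaml
 ...
     - kube-apiserver
     # Widen the NodePort range so that ports 80-90 are allowed:
     - --service-node-port-range=80-32767
 ...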
Sorting out the taints
     1.Apply the taint to all nodes
 kubectl taint node --all --overwrite school=oldboyedu:NoExecute
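 Confirm the taint landed on every node:
 kubectl describe nodes | grep Taints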
    2.Configure the taint toleration
 [root@k8s151.oldboyedu.com games]# cat 01-deploy-oldboyedu-games.yaml 
 kind: Deployment
 apiVersion: extensions/v1beta1
 metadata:
   name: oldboyedu-linux82-homework-games
 spec:
   replicas: 1
   selector:
      matchLabels:
        apps: oldboyedu-homework
   template:
      metadata:
         name: linux82-games-pods
         labels:
            apps: oldboyedu-homework
      spec:
         # tolerations:
         # - operator: Exists
         tolerations:
         - key: school
           value: oldboyedu
           operator: Equal
         - key: node-role.kubernetes.io/master
           operator: Exists
           effect: NoSchedule
         imagePullSecrets:
         - name: oldboyedu-linux82-harbor
         containers:
         - name: linux82-games-containers
           image: 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
           volumeMounts:
           - name: games
             mountPath: /etc/nginx/nginx.conf
             subPath: nginx.conf
         volumes:
         - name: games
           configMap:
             name: oldboyedu-games-cm
             items:
             - key: nginx.conf
               path: nginx.conf
 [root@k8s151.oldboyedu.com games]# 
 Store the site code with an sc:
 [root@k8s151.oldboyedu.com games]# cat 05-games-pvc.yaml 
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
   name: oldboyedu-games-pvc
 spec:
   storageClassName: "managed-nfs-storage"
   accessModes:
     - ReadWriteMany
   resources:
     requests:
       storage: 500Mi
 [root@k8s151.oldboyedu.com games]# 
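 This assumes the NFS dynamic provisioner behind the "managed-nfs-storage" StorageClass from the earlier storage lessons is still running; verify that the claim binds:
 kubectl get pvc oldboyedu-games-pvc
 kubectl get pv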
 [root@k8s151.oldboyedu.com games]# cat 01-deploy-oldboyedu-games.yaml 
 kind: Deployment
 apiVersion: extensions/v1beta1
 metadata:
   name: oldboyedu-linux82-homework-games
 spec:
   replicas: 1
   selector:
      matchLabels:
        apps: oldboyedu-homework
   template:
      metadata:
         name: linux82-games-pods
         labels:
            apps: oldboyedu-homework
      spec:
         # tolerations:
         # - operator: Exists
         tolerations:
         - key: school
           value: oldboyedu
           operator: Equal
         - key: node-role.kubernetes.io/master
           operator: Exists
           effect: NoSchedule
         imagePullSecrets:
         - name: oldboyedu-linux82-harbor
         containers:
         - name: linux82-games-containers
           image: 10.0.0.250/oldboyedu-homework/jasonyin2020/oldboyedu-games:v0.3
           volumeMounts:
           - name: nginx-config
             mountPath: /etc/nginx/nginx.conf
             subPath: nginx.conf
           - name: code-data
             mountPath: /usr/local/nginx/html
         volumes:
         - name: nginx-config
           configMap:
             name: oldboyedu-games-cm
             items:
             - key: nginx.conf
               path: nginx.conf
         - name: code-data
           persistentVolumeClaim:
             claimName: oldboyedu-games-pvc
 [root@k8s151.oldboyedu.com games]# 
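 One gotcha: mounting the PVC at /usr/local/nginx/html shadows the game files baked into the image, so the code has to be seeded into the NFS volume once. A sketch (the export path below is a placeholder for whatever directory your provisioner created for this PVC; tmp-games is a hypothetical container name):
 docker create --name tmp-games jasonyin2020/oldboyedu-games:v0.3
 # Copy the site code out of the image into the NFS export backing the PV:
 docker cp tmp-games:/usr/local/nginx/html/. /path/to/nfs/export/for/this/pvc/
 docker rm tmp-games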
 RBAC (Role-Based Access Control) authorization example:
     1.Use the k8s CA to issue a client certificate
         (1)Unpack the certificate management toolkit
 wget http://192.168.17.253/Kubernetes/day07-/softwaress/oldboyedu-cfssl.tar.gz
 tar xf oldboyedu-cfssl.tar.gz -C /usr/bin/  && chmod +x /usr/bin/cfssl*
        (2)Write the certificate requests
 cat > ca-config.json <<EOF
 {
   "signing": {
     "default": {
       "expiry": "87600h"
     },
     "profiles": {
       "kubernetes": {
         "usages": [
             "signing",
             "key encipherment",
             "server auth",
             "client auth"
         ],
         "expiry": "87600h"
       }
     }
   }
 }
 EOF
 cat > oldboyedu-csr.json <<EOF
 {
   "CN": "oldboyedu",
   "hosts": [],
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "BeiJing",
       "L": "BeiJing",
       "O": "k8s",
       "OU": "System"
     }
   ]
 }
 EOF
         (3)Generate the certificate
 cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes oldboyedu-csr.json | cfssljson -bare oldboyedu
 Friendly reminder:
     Inspect the certificate with "cfssl-certinfo --cert oldboyedu.pem".
     2.Generate the kubeconfig authorization file
         (1)Write the script that generates the kubeconfig file
 cat > kubeconfig.sh <<'EOF'
 # Configure the cluster
 # --certificate-authority
 #   Path to the K8s CA root certificate file
 # --embed-certs
 #   true: write the root certificate contents into the kubeconfig itself;
 #   false: the kubeconfig merely references the certificate file by path
 # --server
 #   Address of the APIServer
 # --kubeconfig
 #   Name of the kubeconfig file to write
 kubectl config set-cluster oldboyedu-linux82 \
   --certificate-authority=/etc/kubernetes/pki/ca.crt \
   --embed-certs=true \
   --server=https://10.0.0.151:6443 \
   --kubeconfig=oldboyedu-linux82.kubeconfig
  
 # Set the client credentials
 kubectl config set-credentials oldboyedu \
   --client-key=oldboyedu-key.pem \
   --client-certificate=oldboyedu.pem \
   --embed-certs=true \
   --kubeconfig=oldboyedu-linux82.kubeconfig

# Set the context
 kubectl config set-context linux82 \
   --cluster=oldboyedu-linux82 \
   --user=oldboyedu \
   --kubeconfig=oldboyedu-linux82.kubeconfig

# Set the context currently in use
 kubectl config use-context linux82 --kubeconfig=oldboyedu-linux82.kubeconfig
 EOF
         (2)Generate the kubeconfig file
 bash kubeconfig.sh
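 Optionally confirm the generated file looks sane (embedded certificate data is shown as DATA+OMITTED/REDACTED):
 kubectl config view --kubeconfig=oldboyedu-linux82.kubeconfig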
    3.Create the RBAC authorization policy
         (1)Create the rbac configuration file
 vi 01-rbac-pods-get.yaml
kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   namespace: default
   name: linux82-role
 rules:
   # API group. "" is the core group, which includes (among others) the
   # "configmaps", "nodes", "pods", and "services" resources.
   # For k8s below 1.15, the deployment resource lives in the "extensions"
   # group; from k8s 1.15 onward it lives in the "apps" group.
   #
   # To find which group a resource belongs to, just check the output of
   # "kubectl api-resources".
 - apiGroups: ["","extensions"]  
   # Resource types. Abbreviations are NOT supported; full names are required!
   resources: ["pods","configmaps","deployments"]  
   # Verbs allowed on those resources.
   verbs: ["list","delete"]  
---
kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: oldboyedu-linux82-resources-reader
   namespace: default
 subjects:
   # Subject type
 - kind: User  
   # Username
   name: oldboyedu  
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   # Role type
   kind: Role  
   # Name of the role to bind
   name: linux82-role
   apiGroup: rbac.authorization.k8s.io
 (2)Apply the rbac authorization
 kubectl apply -f 01-rbac-pods-get.yaml
 (3)Test the access
 kubectl get pods --kubeconfig=oldboyedu-linux82.kubeconfig
 kubectl get pods,no --kubeconfig=oldboyedu-linux82.kubeconfig
 The second command should list pods but return Forbidden for nodes, since the Role grants no verbs on the nodes resource.
 RBAC supplementary notes:
 [root@k8s151.oldboyedu.com rbac]# cat 02-rbac-pods-get.yaml 
 kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   namespace: default
   name: linux82-role-002
 rules:
   # API group. "" is the core group, which includes (among others) the
   # "configmaps", "nodes", "pods", and "services" resources.
   # For k8s below 1.15, the deployment resource lives in the "extensions"
   # group; from k8s 1.15 onward it lives in the "apps" group.
   #
   # To find which group a resource belongs to, just check the output of
   # "kubectl api-resources".
 - apiGroups: [""]  
   # Resource types. Abbreviations are NOT supported; full names are required!
   resources: ["pods","configmaps"]  
   # Verbs allowed on those resources.
   verbs: ["list","delete"]  
 - apiGroups: ["extensions"]
   resources: ["deployments"]
   verbs: ["list"]
 - apiGroups: ["apps"]
   resources: ["deployments"]
   verbs: ["create"]
---
kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: oldboyedu-linux82-resources-reader-002
   namespace: default
 subjects:
   # Subject type
 - kind: User  
   # Username
   name: oldboyedu  
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   # Role type
   kind: Role  
   # Name of the role to bind
   name: linux82-role-002
   apiGroup: rbac.authorization.k8s.io
 [root@k8s151.oldboyedu.com rbac]# 
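 A quick way to check what the bound user may and may not do ("kubectl auth can-i" asks the apiserver directly):
 kubectl auth can-i list pods --kubeconfig=oldboyedu-linux82.kubeconfig
 kubectl auth can-i create deployments --kubeconfig=oldboyedu-linux82.kubeconfig
 kubectl auth can-i list nodes --kubeconfig=oldboyedu-linux82.kubeconfig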
Dashboard overview:
     Dashboard is a GUI (WebUI) for K8S cluster management. It is a k8s add-on, so it must be deployed separately.
With it we can create k8s resources graphically.
    GitHub address:
         https://github.com/kubernetes/dashboard#kubernetes-dashboard
Deploying the dashboard:
     1.Upload the image:
 wget http://192.168.17.253/Kubernetes/day07-/images/oldboyedu-dashboard-1_10_1.tar.gz
 docker load -i oldboyedu-dashboard-1_10_1.tar.gz
 docker tag k8s150.oldboyedu.com:5000/kubernetes-dashboard-amd64:v1.10.1 10.0.0.250/oldboyedu-adds-on/kubernetes-dashboard-amd64:v1.10.1
 docker login -u admin -p 1 10.0.0.250
 docker push 10.0.0.250/oldboyedu-adds-on/kubernetes-dashboard-amd64:v1.10.1
    2.In kubernetes-dashboard.yaml, change the image name to the private registry's
 wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
 vim kubernetes-dashboard.yaml
 ...
 kind: Deployment
 metadata:
   ...
   name: kubernetes-dashboard
 spec:
   ...
   template:
     ...
     spec:
        # Comment out the upstream toleration and use ours instead.
       tolerations:
       - operator: Exists
       containers:
       - name: kubernetes-dashboard
         # image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
         image: 10.0.0.250/oldboyedu-adds-on/kubernetes-dashboard-amd64:v1.10.1
     3.Adjust the svc in the kubernetes-dashboard.yaml manifest
 kind: Service
 ...
 spec:
    # Change the type to NodePort
   type: NodePort
   ports:
     - port: 443
       targetPort: 8443
        # Pin the NodePort port number (8443 likewise requires the widened service-node-port-range mentioned earlier).
       nodePort: 8443
   selector:
     k8s-app: kubernetes-dashboard
     
     4.Create the resources
 kubectl apply -f kubernetes-dashboard.yaml 
     5.Check that the Pod is running
 kubectl -n kube-system get pods | grep kubernetes-dashboard
 ss -ntl | grep 8443
    6.Visit the dashboard WebUI
 https://10.0.0.152:8443/
     Easter egg: click on a blank spot on the page and type: "thisisunsafe"
     
 Defining a login user:
     (1)Write the K8S yaml manifest
 cat > oldboyedu-dashboard-rbac.yaml <<'EOF'
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
    # Create an account named "oldboyedu"
    name: oldboyedu
   namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kube-system
 roleRef:
   apiGroup: rbac.authorization.k8s.io
    # Since we bind a cluster role, the kind must be "ClusterRole", not "Role".
    kind: ClusterRole
    # Available cluster roles can be filtered with "kubectl get clusterrole | grep admin".
    name: cluster-admin
 subjects:
   - kind: ServiceAccount
      # Careful: this must match the service account defined above.
      name: oldboyedu
     namespace: kube-system
 EOF
  
     (2)Apply the manifest
 kubectl apply -f oldboyedu-dashboard-rbac.yaml
     (3)Look up the sa resource's token name
 kubectl describe serviceaccounts -n kube-system  oldboyedu | grep Tokens
    (4)Read the token value using the token name from the previous step
 kubectl -n kube-system describe secrets oldboyedu-token-gns4h  
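 A one-liner that prints just the decoded token (the gns4h suffix is whatever name your cluster generated in the previous step):
 kubectl -n kube-system get secrets oldboyedu-token-gns4h -o jsonpath='{.data.token}' | base64 -d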
     
     (5)Log in to the dashboard WebUI with the Token value from the previous step (careful not to pick up a line break when copying)
 As shown in the screenshot above.
 Friendly reminder:
     As the screenshot below shows, because the ServiceAccount we created is bound to the "cluster-admin" cluster role, the oldboyedu user's token can access every resource in the cluster.
     
     
     
 Logging in with a kubeconfig:
     (1)Write the script that generates the kubeconfig file
 cat > oldboyedu-generate-context-conf.sh <<'EOF'
 #!/bin/bash
 # author: Jason Yin
 # Get the secret name
 SECRET_NAME=`kubectl get secrets -n kube-system  | grep oldboyedu | awk {'print $1'}`

# APIServer address
 API_SERVER=k8s151.oldboyedu.com:6443

# Path of the kubeconfig file to generate
 KUBECONFIG_NAME=/root/oldboyedu-k8s-dashboard-admin.conf

# Get the oldboyedu user's token
 OLDBOYEDU_TOKEN=`kubectl get secrets -n kube-system $SECRET_NAME -o jsonpath={.data.token} | base64 -d`

# Set the cluster entry in the kubeconfig
 kubectl config set-cluster oldboyedu-k8s-dashboard-cluster --server=$API_SERVER --kubeconfig=$KUBECONFIG_NAME

# Set the user entry in the kubeconfig
 kubectl config set-credentials oldboyedu-k8s-dashboard-user --token=$OLDBOYEDU_TOKEN --kubeconfig=$KUBECONFIG_NAME

# Set the context, i.e. bind a user to a cluster; several cluster/user pairs can be bound this way
 kubectl config set-context oldboyedu-admin --cluster=oldboyedu-k8s-dashboard-cluster --user=oldboyedu-k8s-dashboard-user --kubeconfig=$KUBECONFIG_NAME

# Set the context currently in use
 kubectl config use-context oldboyedu-admin --kubeconfig=$KUBECONFIG_NAME
 EOF
     (2)Run the script, download the generated config file to your desktop, and (as in the screenshot above) select that file to log in
 sz oldboyedu-k8s-dashboard-admin.conf
     (3)Enter the dashboard WebUI
 As the screenshot below shows, we can reach any Pod, and can even open a terminal straight into a container.
 Deploying a K8S 1.19 high-availability cluster:
 Part One: base environment preparation
     1.Cluster layout
 cat >> /etc/hosts <<EOF
 10.0.0.191 k8s191.oldboyedu.com
 10.0.0.192 k8s192.oldboyedu.com
 10.0.0.193 k8s193.oldboyedu.com
 EOF
    2.Disable unnecessary services
         2.1 Disable the firewall, NetworkManager, and postfix
 systemctl disable  --now firewalld NetworkManager postfix
        2.2 Disable selinux
 sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config 
 grep ^SELINUX= /etc/selinux/config
        2.3 Disable the swap partition
 swapoff -a && sysctl -w vm.swappiness=0 
 sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
 grep swap /etc/fstab
 free -h
     3.Basic Linux tuning
         3.1 Tune the sshd service
 sed -ri  's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
 sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
 grep ^UseDNS /etc/ssh/sshd_config 
 grep ^GSSAPIAuthentication  /etc/ssh/sshd_config
         3.2 Raise the open-file limits (takes effect after re-login)
 cat > /etc/security/limits.d/k8s.conf <<'EOF'
 *       soft    nofile     65535
 *       hard    nofile    131070
 EOF
 ulimit -Sn
 ulimit -Hn
         3.3 Change the terminal prompt colors
 cat <<EOF >>  ~/.bashrc 
 PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
 EOF
 source ~/.bashrc 
        3.4 Configure automatic module loading on all nodes; skipping this step makes kubeadm init fail outright!
 modprobe br_netfilter
 modprobe ip_conntrack
 cat >>/etc/rc.sysinit<<EOF
 #!/bin/bash
 for file in /etc/sysconfig/modules/*.modules ; do
 [ -x $file ] && $file
 done
 EOF
 echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
 echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
 chmod 755 /etc/sysconfig/modules/br_netfilter.modules
 chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
 lsmod | grep br_netfilter
         3.5 Configure cluster time sync on all nodes
 Cluster sync based on ntpdate plus crontab:
             (1)Manually sync the timezone and time
 yum -y install ntpdate     
 ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
 echo 'Asia/Shanghai' >/etc/timezone
 ntpdate ntp.aliyun.com
            (2)Periodic sync job ("crontab -e")
 */5 * * * * /usr/sbin/ntpdate ntp.aliyun.com
 Cluster time sync based on the chronyd daemon:
             (1)Manually sync the timezone and time
 \cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 
            (2)Install the chrony service
 yum -y install ntpdate chrony
            (3)Edit the configuration file
 vim /etc/chrony.conf 
 ...
 server ntp.aliyun.com iburst
 server ntp1.aliyun.com iburst
 server ntp2.aliyun.com iburst
 server ntp3.aliyun.com iburst
 server ntp4.aliyun.com iburst
 server ntp5.aliyun.com iburst
     
             (4)Start the service
 systemctl enable --now chronyd  
            (5)Check the service status
 systemctl status chronyd
     4.Configure package repos and install common cluster software
         4.1 Configure the Aliyun repo
 curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
         4.2 Configure the docker repo (skippable: k8s 1.24 dropped dockershim, so docker no longer has to be deployed)
 curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
 sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
         4.3 Configure the K8S repo
 cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
 [kubernetes]
 name=Kubernetes
 baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
 enabled=1
 gpgcheck=0
 repo_gpgcheck=0
 EOF
        4.4 Install common cluster software
 yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync
             Students on the classroom LAN can do this instead:
                 curl -o 01-oldboyeduchangyong-softwares.tar.gz http://192.168.17.253/Kubernetes/day07-/softwaress/01-oldboyeduchangyong-softwares.tar.gz
                 tar xf 01-oldboyeduchangyong-softwares.tar.gz  && yum -y localinstall 01-changyong-softwares/*.rpm
        4.5 Download config files and scripts (no need to download; just use the files I posted in the QQ group)
 git clone https://gitee.com/jasonyin2020/oldboyedu-linux-Cloud_Native
     5.On k8s191.oldboyedu.com, set up passwordless login to the cluster and a sync script
         5.1 Configure batch passwordless login
 cat > password_free_login.sh <<'EOF'
 #!/bin/bash
 # author: Jason Yin
# Generate a key pair
 ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q
# Declare your server password; ideally all nodes share the same password, otherwise this script needs further tuning
 export mypasswd=yinzhengjie
# Host list
 k8s_host_list=(k8s191.oldboyedu.com k8s192.oldboyedu.com k8s193.oldboyedu.com)
# Push the key to each host, using expect to answer the prompts non-interactively
 for i in ${k8s_host_list[@]};do
 expect -c "
 spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
   expect {
     \"*yes/no*\" {send \"yes\r\"; exp_continue}
     \"*password*\" {send \"$mypasswd\r\"; exp_continue}
   }"
 done
 EOF
 sh password_free_login.sh
         5.2 Write the sync script
 cat > /usr/local/sbin/data_rsync.sh <<'EOF'
 #!/bin/bash
 # Author: Jason Yin
if  [ $# -ne 1 ];then
    echo "Usage: $0 /path/to/file (absolute path)"
    exit
 fi 
if [ ! -e $1 ];then
     echo "[ $1 ] dir or file not found!"
     exit
 fi
fullpath=`dirname $1`
basename=`basename $1`
cd $fullpath
k8s_host_list=(k8s191.oldboyedu.com k8s192.oldboyedu.com k8s193.oldboyedu.com)
for host in ${k8s_host_list[@]};do
   tput setaf 2
     echo ===== rsyncing ${host}: $basename =====
     tput setaf 7
     rsync -az $basename  `whoami`@${host}:$fullpath
     if [ $? -eq 0 ];then
       echo "命令执行成功!"
     fi
 done
 EOF
 chmod +x /usr/local/sbin/data_rsync.sh
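 Usage example: push /etc/hosts to every node in the list:
 data_rsync.sh /etc/hosts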
    6.Install ipvsadm on all nodes so kube-proxy can do IPVS load balancing
         6.1 Install ipvsadm and related tools
 yum -y install ipvsadm ipset sysstat conntrack libseccomp 
             Students on the classroom LAN can do this instead:
                 curl -o 02-oldboyedu-ipvs-softwares.tar.gz http://192.168.17.253/Kubernetes/day07-/softwaress/02-oldboyedu-ipvs-softwares.tar.gz
                 tar xf 02-oldboyedu-ipvs-softwares.tar.gz  && yum -y localinstall 02-ipvs-softwares/*.rpm
         6.2 Write the config file that loads the ipvs modules
 cat > /etc/sysconfig/modules/ipvs.modules <<EOF
 #!/bin/bash
modprobe -- ip_vs
 modprobe -- ip_vs_rr
 modprobe -- ip_vs_wrr
 modprobe -- ip_vs_sh
 modprobe -- nf_conntrack_ipv4
 EOF
        6.3 Load the ipvs modules and verify
 chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    7.Tune the Linux kernel parameters on all nodes
         7.1 Write the kernel parameter tuning file on all nodes
 cat > /etc/sysctl.d/k8s.conf <<'EOF'
 net.ipv4.ip_forward = 1
 net.bridge.bridge-nf-call-iptables = 1
 net.bridge.bridge-nf-call-ip6tables = 1
 fs.may_detach_mounts = 1
 vm.overcommit_memory=1
 vm.panic_on_oom=0
 fs.inotify.max_user_watches=89100
 fs.file-max=52706963
 fs.nr_open=52706963
 net.netfilter.nf_conntrack_max=2310720
 net.ipv4.tcp_keepalive_time = 600
 net.ipv4.tcp_keepalive_probes = 3
 net.ipv4.tcp_keepalive_intvl =15
 net.ipv4.tcp_max_tw_buckets = 36000
 net.ipv4.tcp_tw_reuse = 1
 net.ipv4.tcp_max_orphans = 327680
 net.ipv4.tcp_orphan_retries = 3
 net.ipv4.tcp_syncookies = 1
 net.ipv4.tcp_max_syn_backlog = 16384
 net.ipv4.ip_conntrack_max = 65536
 net.ipv4.tcp_max_syn_backlog = 16384
 net.ipv4.tcp_timestamps = 0
 net.core.somaxconn = 16384
 net.ipv6.conf.all.disable_ipv6 = 1
 EOF
 sysctl --system
         7.2 Reboot the VMs
 reboot
         7.3 Take a snapshot
 If every node reboots cleanly, the configuration is correct! Now take snapshots.
  
  
  
  
 Deploying the highly available load balancer:
     
 Part One: deploy nginx
     1.Compile and install nginx
         nginx is compiled from source here to make it easier to add modules later.
         (1)Create the nginx runtime user on all master nodes
 useradd nginx -s /sbin/nologin -M
        (2)Install the build dependencies
 yum -y install pcre pcre-devel openssl openssl-devel gcc gcc-c++ automake autoconf libtool make
             Students on the classroom LAN can do this instead:
                 curl -o  03-oldboyedunginx-softwares-lib.tar.gz   http://192.168.17.253/Kubernetes/day07-/softwaress/03-oldboyedunginx-softwares-lib.tar.gz
                 tar xf 03-oldboyedunginx-softwares-lib.tar.gz  && yum -y localinstall 03-nginx-softwares-lib/*.rpm
         (3)Download the nginx package
 wget http://nginx.org/download/nginx-1.21.6.tar.gz
        (4)Unpack the package
 tar xf nginx-1.21.6.tar.gz
         (5)Configure nginx
 cd nginx-1.21.6
 ./configure --prefix=/usr/local/nginx/ \
             --with-pcre \
             --with-http_ssl_module \
             --with-http_stub_status_module \
             --with-stream \
             --with-http_gzip_static_module
        (6)Compile and install nginx
 make -j 4 &&  make install 
        (7)Manage nginx with systemd and enable it at boot
 cat >/usr/lib/systemd/system/nginx.service <<'EOF'
 [Unit]
 Description=The nginx HTTP and reverse proxy server
 After=network.target sshd-keygen.service
[Service]
 Type=forking
 ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
 ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
 ExecReload=/usr/local/nginx/sbin/nginx -s reload
 ExecStop=/usr/local/nginx/sbin/nginx -s stop
 Restart=on-failure
 RestartSec=42s
[Install]
 WantedBy=multi-user.target
 EOF
         (8)Enable nginx at boot
 systemctl enable --now nginx 
         (9)Check that nginx started
 systemctl status nginx 
 ps -ef|grep nginx
         (10)Sync the nginx package and unit file to the other master nodes
 data_rsync.sh /usr/local/nginx/
 data_rsync.sh /usr/lib/systemd/system/nginx.service
     2. Edit the nginx configuration file
         (1)Create the nginx configuration file
 cat > /usr/local/nginx/conf/nginx.conf <<'EOF'
 user nginx nginx;
 worker_processes auto;
events {
     worker_connections  20240;
     use epoll;
 }
error_log /var/log/nginx_error.log info;
stream {
     upstream kube-servers {
         hash $remote_addr consistent;
         
         server k8s191.oldboyedu.com:6443 weight=5 max_fails=1 fail_timeout=3s;
         server k8s192.oldboyedu.com:6443 weight=5 max_fails=1 fail_timeout=3s;
         server k8s193.oldboyedu.com:6443 weight=5 max_fails=1 fail_timeout=3s;
     }
    server {
         listen 8443 reuseport;
         proxy_connect_timeout 3s;
         proxy_timeout 3000s;
         proxy_pass kube-servers;
     }
 }
 EOF
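 Validate the configuration before (re)starting:
 /usr/local/nginx/sbin/nginx -t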
     (2)Sync the nginx config file to the other master nodes
 data_rsync.sh /usr/local/nginx/conf/nginx.conf
     (3)Start nginx on the other master nodes
 systemctl enable --now nginx 
 systemctl restart nginx 
 systemctl status nginx 

Friendly reminder:
     (1)After syncing nginx to the other nodes, don't forget to enable it at boot there too.
     (2)Before starting the service, don't forget to create the nginx user on those nodes.
     
     
     3.Deploy keepalived
         3.1 Install keepalived
 yum  -y install  keepalived
            Students on the classroom LAN can do this instead:
                 curl -o  05-oldboyedu-keepalived-softwares.tar.gz  http://192.168.17.253/Kubernetes/day07-/softwaress/05-oldboyedu-keepalived-softwares.tar.gz
                 tar xf 05-oldboyedu-keepalived-softwares.tar.gz  && yum -y localinstall 05-keepalived-softwares/*.rpm
         3.2 Edit the keepalived configuration
     (1)Write the config file; on each master node, only the router_id and mcast_src_ip values need changing. (Mind the NIC name.)
 cat > /etc/keepalived/keepalived.conf <<EOF
 ! Configuration File for keepalived
 global_defs {
    router_id 10.0.0.191
 }
 vrrp_script chk_nginx {
     script "/etc/keepalived/check_port.sh 8443"
     interval 2
     weight -20
 }
 vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 251
     priority 100
     advert_int 1
     mcast_src_ip 10.0.0.191
     nopreempt
     authentication {
         auth_type PASS
         auth_pass 11111111
     }
     track_script {
          chk_nginx
     }
     virtual_ipaddress {
         10.0.0.82
     }
 }
 EOF
     (2)Write the health-check script on each node
 vi /etc/keepalived/check_port.sh
 CHK_PORT=$1
 if [ -n "$CHK_PORT" ];then
     PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l`
     if [ $PORT_PROCESS -eq 0 ];then
         echo "Port $CHK_PORT Is Not Used,End."
         exit 1
     fi
 else
     echo "Check Port Cant Be Empty!"
 fi
 chmod +x /etc/keepalived/check_port.sh
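 Quick test: with nginx listening on 8443 the script should print nothing and exit 0; stop nginx and it should complain and exit 1:
 bash /etc/keepalived/check_port.sh 8443; echo $?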
    (3)Start keepalived
 systemctl enable --now keepalived
  
  
  
 Friendly reminder:
     router_id:
         The node IP; each master node uses its own IP.
     mcast_src_ip:
         The node IP; each master node uses its own IP.
     virtual_ipaddress:
         The virtual IP, i.e. the VIP.
         
         
         
         
 A reference health-check script for comparison (note: this variant watches haproxy instead of nginx)
 cat > /etc/keepalived/check_apiserver.sh <<'EOF'
 #!/bin/bash
err=0
 for k in $(seq 1 3)
 do
     check_code=$(pgrep haproxy)
     if [[ $check_code == "" ]]; then
         err=$(expr $err + 1)
         sleep 1
         continue
     else
         err=0
         break
     fi
 done
if [[ $err != "0" ]]; then
     echo "systemctl stop keepalived"
     /usr/bin/systemctl stop keepalived
     exit 1
 else
     exit 0
 fi
 EOF
     4.Test VIP failover: stop the service, then check where the VIP lives.
 systemctl stop keepalived  
 ip a 
Initialize the K8S cluster with kubeadm:
     0.Deploy the docker environment
         1)Install docker
 curl -o  oldboyedu-docker1809.tar.gz  http://192.168.17.253/Kubernetes/day07-/softwaress/oldboyedu-docker1809.tar.gz        
 tar xf oldboyedu-docker1809.tar.gz  && yum -y localinstall docker-rpm-18-09/*
         2)Tune the docker configuration
 mkdir -pv /etc/docker && cat <<EOF | sudo tee /etc/docker/daemon.json
 {
   "insecure-registries": ["k8s191.oldboyedu.com:5000","10.0.0.250"],
   "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
 }
 EOF
     
         3)Enable docker at boot
 systemctl enable --now docker && systemctl status docker
    1.Install kubeadm
         (1)Install kubeadm and the master-related components on all nodes
 yum install -y kubelet-1.19.16 kubeadm-1.19.16 kubectl-1.19.16
             Students on the classroom LAN can do this instead:
                 curl -o  06-oldboyedu-kubeadm-softwares.tar.gz   http://192.168.17.253/Kubernetes/day07-/softwaress/06-oldboyedu-kubeadm-softwares.tar.gz
                 tar xf 06-oldboyedu-kubeadm-softwares.tar.gz  && yum -y localinstall 06-kubeadm-softwares/*.rpm
         (2)Enable kubelet at boot on all nodes
 systemctl enable --now kubelet 
 systemctl status kubelet
        (3)Check the kubectl version
 kubectl version --client --output=yaml
     2.Prepare the kubeadm config file
     (1)On k8s191.oldboyedu.com, print the default init configuration
 kubeadm config print init-defaults > kubeadm-init.yaml
    (2)Customize the default configuration as needed
 cat > kubeadm-init.yaml <<EOF
 apiVersion: kubeadm.k8s.io/v1beta2
 bootstrapTokens:
 - groups:
   - system:bootstrappers:kubeadm:default-node-token
   token: abcdef.0123456789abcdef
   ttl: 24h0m0s
   usages:
   - signing
   - authentication
 kind: InitConfiguration
 localAPIEndpoint:
    # Address of this master node (the node being initialized)
   advertiseAddress: 10.0.0.191
   bindPort: 6443
 nodeRegistration:
   criSocket: /var/run/dockershim.sock
   name: k8s191.oldboyedu.com
   taints:
   - effect: NoSchedule
     key: node-role.kubernetes.io/master
 ---
 apiServer:
   timeoutForControlPlane: 4m0s
 apiVersion: kubeadm.k8s.io/v1beta2
 certificatesDir: /etc/kubernetes/pki
 clusterName: kubernetes
 controllerManager: {}
 dns:
   type: CoreDNS
 etcd:
   local:
     dataDir: /var/lib/etcd
 # imageRepository: k8s.gcr.io
  # Image download address (mirror)
 imageRepository: registry.aliyuncs.com/google_containers
 kind: ClusterConfiguration
 kubernetesVersion: v1.19.0
 networking:
   dnsDomain: cluster.local
   serviceSubnet: 10.96.0.0/12
 scheduler: {}
 EOF
     
     (3)Check the config file for errors; a correct run looks like the screenshot above
 kubeadm init --config kubeadm-init.yaml --dry-run
 Friendly reminder:
     (1)Create the default kubeadm-config.yaml file:
 kubeadm config print init-defaults  > kubeadm-config.yaml
    (2)Generate a KubeletConfiguration sample file
 kubeadm config print init-defaults --component-configs KubeletConfiguration
    (3)Generate a KubeProxyConfiguration sample file
 kubeadm config print init-defaults --component-configs KubeProxyConfiguration
    (4)If kubeadm-config.yaml uses an old config version, try migrating it
 kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
    (5)kubeadm and etcdadm
 Although kubeadm serves as a management tool for etcd nodes, note that kubeadm does not intend to support certificate rotation or upgrades for such nodes.
 The long-term plan is to delegate that management to the etcdadm tool.

Reference link:
     https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/
     3.Pre-pull the images
 kubeadm config images list --config kubeadm-init.yaml
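 Note that "images list" only prints the image names; the actual pre-pull is done with:
 kubeadm config images pull --config kubeadm-init.yaml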
    4.Initialize the cluster from the kubeadm config file
 The official config-file-based initialization is documented here:
     https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/high-availability/
     
 We already used the official flag-based initialization when deploying the single-master cluster; this time we initialize from the kubeadm-init.yaml generated above:
     kubeadm init --config kubeadm-init.yaml  --upload-certs
     
     
 Join the NODE nodes to the cluster: (copy this from your own output! my token will not work for you!)
 kubeadm join 10.0.0.191:6443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:01b420220a224242ef1be6466625fb3e1286189c46bba7b4c01f241a2b8d22f8 
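 After the nodes join, check from a master node; they will report NotReady until the flannel CNI below is deployed:
 kubectl get nodes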
Configure the network:
     0.Make sure the flanneld binary exists at /opt/bin/flanneld on every node
 wget https://github.com/flannel-io/flannel/releases/download/v0.19.2/flannel-v0.19.2-linux-amd64.tar.gz
 tar xf flannel-v0.19.2-linux-amd64.tar.gz
 mkdir /opt/bin
 cp flanneld /opt/bin/flanneld
 data_rsync.sh /opt/bin/
     1.Download the manifest
 wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    2.Edit the manifest
 kind: ConfigMap
 ...
 metadata:
   name: kube-flannel-cfg
 data:
     ...
     net-conf.json: |
       ...
       "Network": "10.96.0.0/12",
     
     3.Create the resources
     kubectl apply -f kube-flannel.yml
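 Confirm the flannel Pods come up on every node (the namespace differs across manifest versions, hence -A):
 kubectl get pods -A -o wide | grep flannel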
    4.Test that the cluster works
 docker run -dp 5000:5000 --restart always --name oldboyedu-registry registry:2
    5.
     
     
 Review of today's content:
     - RBAC            *****
     - dashboard        ***  ---> rancher
     
     svc : ---> layer 4