# Deploying a Highly Available Kubernetes v1.27.2 Cluster from Binaries
## Table of contents

- Environment
- Software versions
- Server system initialization
- etcd signing certificates
- etcd cluster deployment
- Load balancer installation
- Kubernetes self-signed certificates (CA, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, admin)
- Control-plane component deployment (kube-apiserver, kube-controller-manager, kube-scheduler, checking cluster status)
- Worker node component deployment (container runtime, kubelet, kube-proxy)
- calico network plugin
- coredns
- dashboard
- Managing the cluster with Rancher
- metrics-server
- ingress
- helm, kubens, crictl, ctr tools
- NFS StorageClass dynamic PV storage
- Loki log collection
- Prometheus
- Argo CD
- FAQ

## Environment

This lab uses five hosts. The three master nodes (os128, os129, os130) also schedule workloads and use containerd as their container runtime; worker131 and worker132 use docker with cri-dockerd.

| Hostname | IP | Components | OS |
| --- | --- | --- | --- |
| os128 | 192.168.177.128 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os129 | 192.168.177.129 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os130 | 192.168.177.130 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| worker131 | 192.168.177.131 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd | CentOS 7.9 |
| worker132 | 192.168.177.132 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd | CentOS 7.9 |
| VIP | 192.168.177.127 | | |

## Software versions

| Software | Version | Download | Notes |
| --- | --- | --- | --- |
| CentOS | 7.9.2009 | https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso | |
| kernel | 3.10.0-1160.105.1.el7.x86_64 | | system default |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.27.2 | https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz | |
| etcd | v3.5.5 | https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz | |
| cfssl | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 | |
| cfssljson | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 | |
| cfssl-certinfo | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 | |
| containerd | v1.6.6 | https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz | |
| runc | v1.1.11 | https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64 | the runc bundled with containerd is broken and must be replaced |
| docker | 20.10.24 | https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz | |
| cri-dockerd | 0.3.6 | https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz | |
| crictl | v1.29.0 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz | needed separately when docker is the runtime; the containerd bundle already ships it |
| haproxy | 1.5 | default yum repo | |
| keepalived | 1.3.5 | default yum repo | |
| calico | v3.25.0 | https://docs.tigera.io/archive/v3.25/manifests/calico.yaml | |
| coredns | v1.11.1 | https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base | |
| dashboard | v2.7 | https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml | |
| metrics-server | 0.6.1 | https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml | |

## Server system initialization

Run on every host:

```bash
# Install dependencies
yum -y install epel-release.noarch
yum update -y
yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl bash-completion lrzsz sysstat openssh-clients

# Disable firewalld and SELinux, tune sshd
systemctl stop firewalld
systemctl disable firewalld
yum install iptables* -y
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
sed -i '/^#UseDNS/s/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config
sed -i 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
systemctl restart sshd

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
sysctl -w vm.swappiness=0

# Raise file-handle limits
ulimit -SHn 655350
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
cat > /etc/security/limits.d/20-nproc.conf << EOF
* soft nproc unlimited
* hard nproc unlimited
EOF

# Install the ipvs tooling and load the modules
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# make it executable, run it, and check the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

# Kernel tuning for k8s
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 131072
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# apply
sysctl --system

# load br_netfilter and confirm
modprobe br_netfilter
lsmod | grep br_netfilter
```
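A quick sanity check after initialization can save debugging later; these are plain shell commands, nothing specific to this guide:

```bash
free -h | grep -i swap                      # the swap line should show 0B used/total
getenforce                                  # Permissive now, Disabled after a reboot
lsmod | grep -e ip_vs -e nf_conntrack       # ipvs modules present
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should be 1
```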
## Setting up the etcd signing certificates

Prepare the signing tools cfssl, cfssljson, and cfssl-certinfo. Any one host will do; all certificate work here happens on os128.

```bash
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
mv cfssl_1.6.1_linux_amd64 /usr/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
```

Self-sign the etcd CA:

```bash
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

cat > ca-config.json << EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CA": { "expiry": "87600h" },
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Generate the CA; this produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

Use the self-signed CA to issue the etcd HTTPS certificate:

```bash
# create the certificate signing request
cd ~/TLS/etcd
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing" }]
}
EOF
# Note: the hosts field must list the internal IP of every etcd member,
# without exception; add a few spare IPs to make future scaling easier.

# generate the certificate; this produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
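To confirm the SANs actually made it into the freshly issued etcd certificate, cfssl-certinfo (installed above) can decode it; a quick optional check:

```bash
cd ~/TLS/etcd
# prints the certificate as JSON; the "sans" array should list all three member IPs
cfssl-certinfo -cert server.pem | grep -A4 '"sans"'
```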
## etcd cluster deployment

etcd is a distributed key-value store, and Kubernetes keeps all of its data in it, so the etcd cluster comes first. To avoid a single point of failure, deploy it as a cluster: 3 members tolerate 1 machine failure, 5 members tolerate 2. Everything below runs on os128; to simplify the work, the generated files are then copied to os129 and os130.

```bash
# fetch and unpack the etcd release
wget https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz
mkdir -pv /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.5.5-linux-amd64.tar.gz
mv etcd-v3.5.5-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
```

Prepare the etcd configuration file on os128:

```bash
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

- ETCD_NAME: member name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: peer listen address
- ETCD_LISTEN_CLIENT_URLS: client listen address
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: list of cluster members
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: "new" for a new cluster, "existing" to join one

Manage etcd with systemd:

```bash
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
  --cert-file=/opt/etcd/ssl/server.pem \\
  --key-file=/opt/etcd/ssl/server-key.pem \\
  --peer-cert-file=/opt/etcd/ssl/server.pem \\
  --peer-key-file=/opt/etcd/ssl/server-key.pem \\
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Install the cluster:

```bash
# copy the certificates generated earlier into the paths the config expects
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

# sync everything to the other hosts
scp -r /opt/etcd/ root@192.168.177.129:/opt/
scp -r /opt/etcd/ root@192.168.177.130:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.129:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.130:/usr/lib/systemd/system/
```

etcd configuration on os129:

```bash
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.129:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

etcd configuration on os130:

```bash
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.130:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Start etcd on every node and enable it at boot:

```bash
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
```

Verify the cluster with etcdctl:

```bash
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \
  endpoint health --write-out=table
```
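Beyond the health probe, the standard `endpoint status` and `member list` subcommands show which member is the leader and confirm all three actually joined:

```bash
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
  --endpoints=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \
  endpoint status --write-out=table     # IS LEADER column marks the current leader

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
  --endpoints=https://192.168.177.128:2379 \
  member list --write-out=table         # all three members should be "started"
```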
## Load balancer installation

Run on worker131 and worker132. Install haproxy and keepalived:

```bash
yum install haproxy keepalived -y
```

haproxy configuration:

```bash
cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     6000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
listen stats
    bind 0.0.0.0:9100
    mode http
    option httplog
    stats uri /status
    stats refresh 30s
    stats realm "Haproxy Manager"
    stats auth admin:password
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
frontend k8s-master-default-nodepool-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-master-default-nodepool
#---------------------------------------------------------------------
backend k8s-master-default-nodepool
    balance roundrobin
    mode tcp
    server k8s-apiserver-1 192.168.177.128:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server k8s-apiserver-2 192.168.177.129:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server k8s-apiserver-3 192.168.177.130:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
EOF
```

keepalived configuration on worker131:

```bash
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    # non-preemptive VIP mode
    nopreempt
    # unicast
    unicast_src_ip 192.168.177.131
    unicast_peer {
        192.168.177.132
    }
    virtual_router_id 51
    # priority 100 is higher than the peer's 99
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        # the planned virtual IP
        192.168.177.127
    }
    # track the haproxy health-check script defined above
    track_script {
        check_haproxy
    }
}
EOF
```

keepalived configuration on worker132:

```bash
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    unicast_src_ip 192.168.177.132
    unicast_peer {
        192.168.177.131
    }
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.177.127
    }
    # track the haproxy health-check script defined above
    track_script {
        check_haproxy
    }
}
EOF
```

Health-check script:

```bash
cat > /etc/keepalived/check_haproxy.sh << 'EOF'
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
```

Enable both services at boot and verify the highly available VIP:

```bash
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# check service status
systemctl status keepalived haproxy
# confirm the virtual IP was configured
ip address show
```

*Figures (omitted): the haproxy stats page on port 9100 and the VIP in the `ip address show` output.*

Now stop haproxy on worker131 by hand to simulate a failure. Because keepalived tracks the health-check script, keepalived on worker131 is stopped and the VIP drifts to worker132 with almost no packet loss, only a brief network wobble. When worker131's keepalived recovers and starts again, it does not take the VIP back, since the instances run in non-preemptive mode.
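To rehearse the failover just described, the drill below stops haproxy on worker131 and watches the VIP move; a sketch using this lab's interface name (ens33) and addresses:

```bash
# on worker131: simulate the failure
systemctl stop haproxy
# the check script stops keepalived within ~10s; on worker132, confirm the VIP arrived:
ip addr show ens33 | grep 192.168.177.127
# recover worker131; with nopreempt it will NOT reclaim the VIP
systemctl start haproxy && systemctl start keepalived
```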
## Setting up the Kubernetes self-signed certificates

### Self-signed CA

```bash
cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CA": { "expiry": "87600h" },
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

# Generate the CA; this produces ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

### kube-apiserver certificate

```bash
# create the certificate signing request
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.177.127",
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }]
}
EOF
# Note: the hosts field must include every master, LB, and VIP address,
# without exception; add a few spare IPs for future scaling.

# generate the certificate; this produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```

### kube-controller-manager certificate

```bash
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```

### kube-scheduler certificate

```bash
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```

### kube-proxy certificate

```bash
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```

### admin certificate

```bash
# certificate for kubectl to connect to the cluster
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```

At this point /root/TLS/k8s holds the CA plus the server, kube-controller-manager, kube-scheduler, kube-proxy, and admin certificate/key pairs.
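Optionally, verify that the VIP and every master IP landed in the apiserver certificate's SANs; this uses plain openssl, nothing project-specific:

```bash
openssl x509 -in ~/TLS/k8s/server.pem -noout -text | grep -A1 'Subject Alternative Name'
# expect 192.168.177.127 (the VIP) and .128/.129/.130 in the list
```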
## Control-plane component deployment

Preparation (on os128):

```bash
# deploy k8s v1.27.2: download the release
wget https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz

# unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
cp kubectl /usr/local/bin/

# copy the certificates
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
```

### Deploying kube-apiserver

Create the kube-apiserver configuration file:

```bash
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--v=2 \\
--etcd-servers=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \\
--bind-address=192.168.177.128 \\
--secure-port=6443 \\
--advertise-address=192.168.177.128 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
```

Enabling the TLS bootstrapping mechanism: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present valid CA-signed client certificates to talk to it. Issuing those client certificates by hand becomes a lot of work when there are many nodes, and it complicates scaling the cluster. To simplify this, Kubernetes provides TLS bootstrapping: the kubelet starts as a low-privilege user, requests a certificate from the apiserver, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended for nodes; here it is used for the kubelet only, while kube-proxy still gets a statically issued certificate.

Create the token file:

```bash
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
# Format: token,user name,UID,user group
# Generate your own token with:
#   head -c 16 /dev/urandom | od -An -t x | tr -d ' '
```

Manage kube-apiserver with systemd:

```bash
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Distribute these paths to the corresponding locations on the other master hosts:

- /opt/kubernetes/bin
- /opt/kubernetes/ssl
- /opt/kubernetes/cfg
- /usr/lib/systemd/system/kube-apiserver.service

On each host, change the IPs in /opt/kubernetes/cfg/kube-apiserver.conf to that host's address.

Start the service and enable it at boot:

```bash
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
```
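A quick liveness probe against the apiserver is possible even before kubectl is configured: the default `system:public-info-viewer` binding lets unauthenticated clients hit /healthz, so `-k` is enough. An optional sketch:

```bash
curl -k https://192.168.177.128:6443/healthz && echo    # expect: ok
# and through the VIP, once haproxy is forwarding:
curl -k https://192.168.177.127:6443/healthz && echo
```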
### Deploying kube-controller-manager

Create the configuration file:

```bash
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
```

- `--kubeconfig`: kubeconfig used to connect to the apiserver
- `--leader-elect`: automatic leader election when several replicas of the component run
- `--cluster-signing-cert-file` / `--cluster-signing-key-file`: the CA that signs kubelet certificates; must match the apiserver's

Note: `--bind-address` must stay 127.0.0.1.

Generate kube-controller-manager.kubeconfig:

```bash
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
cd ~/TLS/k8s

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Manage controller-manager with systemd:

```bash
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Distribute these files to the other master hosts:

- /opt/kubernetes/bin/kube-controller-manager
- /usr/lib/systemd/system/kube-controller-manager.service
- /opt/kubernetes/cfg/kube-controller-manager.conf
- /opt/kubernetes/cfg/kube-controller-manager.kubeconfig

Start the service and enable it at boot:

```bash
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
```

### Deploying kube-scheduler

Create the configuration file:

```bash
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--v=2 \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
```

- `--kubeconfig`: kubeconfig used to connect to the apiserver
- `--leader-elect`: automatic leader election when several replicas run

Note: `--bind-address` must stay 127.0.0.1.

Generate kube-scheduler.kubeconfig:

```bash
cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Manage kube-scheduler with systemd:

```bash
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Distribute these files to the other master hosts:

- /opt/kubernetes/bin/kube-scheduler
- /usr/lib/systemd/system/kube-scheduler.service
- /opt/kubernetes/cfg/kube-scheduler.conf
- /opt/kubernetes/cfg/kube-scheduler.kubeconfig

Start the service and enable it at boot:

```bash
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
```
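Both components expose /healthz on their secure ports (10257 for the controller-manager, 10259 for the scheduler), and by default those paths are reachable without credentials via `--authorization-always-allow-paths`; a quick optional probe:

```bash
curl -sk https://127.0.0.1:10257/healthz && echo    # kube-controller-manager: ok
curl -sk https://127.0.0.1:10259/healthz && echo    # kube-scheduler: ok
```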
### Checking cluster status

Generate the admin kubeconfig for managing the cluster:

```bash
cd ~/TLS/k8s
mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.177.127:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Check the cluster with kubectl:

```bash
# cluster info
kubectl cluster-info
# component statuses
kubectl get cs
```

Ignore anything DNS-related in the output for now; coredns is deployed later.

Authorize the kubelet-bootstrap user to request certificates:

```bash
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```

## Worker node component deployment

### Container runtime installation

Install docker (worker131, worker132):

```bash
# static binaries: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
tar xvf docker-20.10.24.tgz
cp docker/* /usr/bin/

# create the containerd service file and start it
cat > /etc/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service

# docker service file
cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF

# docker socket file
cat > /etc/systemd/system/docker.socket << EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# create the docker group and start docker
groupadd docker
systemctl enable --now docker.socket
systemctl enable --now docker.service
# verify
docker info

# daemon configuration
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker
```
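The kubelet below is configured with `cgroupDriver: systemd`, so docker must agree after the daemon.json change; a one-line check:

```bash
docker info --format '{{.CgroupDriver}}'   # should print: systemd
```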
Install cri-dockerd (worker131, worker132). Kubernetes 1.24 and later no longer supports docker directly, so cri-dockerd is required:

```bash
# download and unpack cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz
tar -zxvf cri-dockerd-0.3.6.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd

# service file
cat > /usr/lib/systemd/system/cri-docker.service << EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

# socket file
cat > /usr/lib/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# start cri-docker
systemctl daemon-reload; systemctl enable cri-docker --now
```

Install containerd (os128, os129, os130):

```bash
wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
tar xvf cri-containerd-cni-1.6.6-linux-amd64.tar.gz -C /

# modules containerd needs
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
# load them
systemctl restart systemd-modules-load.service

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/\(sandbox_image\) = .*/\1 = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
# confirm the modules loaded
lsmod | egrep 'br_netfilter|overlay'
```

Install runc (os128, os129, os130). The runc bundled with containerd fails with `runc: symbol lookup error: runc: undefined symbol`, so replace it:

```bash
wget https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64
mv runc.amd64 /usr/local/bin/runc
chmod +x /usr/local/bin/runc
```

### Deploying the kubelet

Preparation:

```bash
# create the working directories on every worker node
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests}
```

Create the configuration file:

```bash
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--v=2 \\
--hostname-override=$(hostname) \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--runtime-request-timeout=15m \\
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node=Linux"
EOF
```

`--container-runtime-endpoint` depends on the node's runtime:

- docker (via cri-dockerd): unix:///run/cri-dockerd.sock
- containerd: unix:///run/containerd/containerd.sock

Generate the kubelet-config.yml parameter file:

```bash
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
```
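Before starting the kubelet, it is worth confirming the CRI socket it points at actually answers. crictl (installed later in this guide, or already present from the containerd bundle) can probe it; pick the socket that matches the node's runtime:

```bash
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version   # containerd nodes
crictl --runtime-endpoint unix:///run/cri-dockerd.sock version             # docker nodes
```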
Generate the bootstrap.kubeconfig used for the kubelet's first join:

```bash
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
# must match token.csv
TOKEN="c47ffb939f5ca36231d9e3121a252940"

# generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Manage the kubelet with systemd:

```bash
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
# keep docker.service when the CRI is docker; change to containerd.service on containerd nodes
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Start the service and enable it at boot:

```bash
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
```

Approve the kubelet certificate request and join the node to the cluster:

```bash
# list pending kubelet certificate requests
[root@os128 system]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ   14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending

# approve the request
kubectl certificate approve node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ

# check the nodes
kubectl get node
```

Installing the kubelet on the other worker nodes: sync these files from a master, adjust them for the host, then start the kubelet.

- /opt/kubernetes/cfg/kubelet.conf — `hostname-override` must be unique in the cluster; `container-runtime-endpoint` depends on the node's runtime
- /usr/lib/systemd/system/kubelet.service — the `After=` value depends on the node's runtime
- /opt/kubernetes/cfg/kubelet-config.yml — no changes needed
- /opt/kubernetes/cfg/bootstrap.kubeconfig — no changes needed
- /opt/kubernetes/ssl/ca.pem — no changes needed
- /opt/kubernetes/bin/kubelet — no changes needed

(kubelet.kubeconfig does not need to be copied; the bootstrap process generates it per node.) Start the kubelet, enable it at boot, approve the certificate request as above, then check that every node joined with `kubectl get node`.
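When several workers join at once, approving CSRs one by one is tedious; a convenience one-liner using only standard kubectl (not from the original procedure) approves everything pending:

```bash
kubectl get csr -o name | xargs -r kubectl certificate approve
```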
### Deploying kube-proxy

Generate the parameter files. The systemd unit below sources /opt/kubernetes/cfg/kube-proxy.conf, so create it alongside the YAML configuration:

```bash
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--v=2 --config=/opt/kubernetes/cfg/kube-proxy.yaml"
EOF

cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: $(hostname)
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
```

Generate kube-proxy.kubeconfig:

```bash
cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Manage kube-proxy with systemd:

```bash
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Start the service and enable it at boot:

```bash
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
```

Installing kube-proxy on the other worker nodes — sync these files from a master, then start and enable the service:

- /opt/kubernetes/bin/kube-proxy
- /usr/lib/systemd/system/kube-proxy.service
- /opt/kubernetes/cfg/kube-proxy.conf
- /opt/kubernetes/cfg/kube-proxy.kubeconfig
- /opt/kubernetes/cfg/kube-proxy.yaml — confirm `hostnameOverride` matches the host

## calico network plugin deployment

Download calico:

```bash
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
```

Change the default pod network. The pod CIDR in calico.yaml must match the `--cluster-cidr=10.244.0.0/16` option set earlier. Open the file with vim, search for 192, and edit as marked:

```yaml
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.1.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
```

Remove the two `#` markers (and the space after them) and change 192.168.1.0/16 to 10.244.0.0/16, or script the edit as sketched below:

```yaml
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
```
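The same edit can be scripted with GNU sed; a sketch — verify the result with grep afterwards, since the manifest's indentation matters:

```bash
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.1.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 'CALICO_IPV4POOL_CIDR' calico.yaml
```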
Deploy calico:

```bash
kubectl apply -f calico.yaml
```

Verify calico:

```bash
kubectl get pods -n kube-system
```

Authorize the apiserver to access the kubelet (needed for things like `kubectl logs`):

```bash
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
```

## coredns deployment

Prepare coredns.yml from https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base. The three `__DNS__*__` placeholders come from the upstream template and must be substituted before applying; see the sketch after this section.

```bash
cat > coredns.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.k8s.io/coredns/coredns:v1.11.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: __DNS__MEMORY__LIMIT__
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
```
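Given this cluster's kubelet settings (clusterDomain cluster.local, clusterDNS 10.0.0.2), one sed pass fills in the upstream template's placeholders. The 170Mi memory limit is an assumption, a value commonly used with this template, not from the original article:

```bash
sed -i -e 's/__DNS__DOMAIN__/cluster.local/g' \
       -e 's/__DNS__MEMORY__LIMIT__/170Mi/g' \
       -e 's/__DNS__SERVER__/10.0.0.2/g' coredns.yml
```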
Deploy coredns:

```bash
kubectl apply -f coredns.yml
```

Check the coredns deployment:

```bash
kubectl get pod -n kube-system | grep coredns
```

In production, tune coredns's resource requests and limits and add an HPA.

## dashboard deployment

Deploy the dashboard:

```bash
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# change the Service to NodePort
vim recommended.yaml
```

```yaml
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
```

```bash
kubectl apply -f recommended.yaml
# check the dashboard service
kubectl get pods,svc -n kubernetes-dashboard
```

Create a service account and bind it to the default cluster-admin cluster role:

```bash
cat > dashadmin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF
kubectl apply -f dashadmin.yaml

# create a login token for the dashboard
kubectl -n kubernetes-dashboard create token admin-user
```

Verify the dashboard login at https://192.168.177.128:30001, using the token generated above or a kubeconfig file.

## Managing the cluster with Rancher

Instead of the dashboard, the cluster can also be managed with Rancher, whose web UI offers friendlier account, project, and permission management and more features. A quick docker-based install is shown below; in production, deploy Rancher into a Kubernetes cluster and expose it through an ingress:

```bash
docker run -d --restart=always --privileged=true -p 443:443 \
  -v /data/rancher:/var/lib/rancher/ --name rancher-server \
  -e CATTLE_SYSTEM_CATALOG=bundled rancher/rancher:stable
```

Then import the binary-deployed cluster by following the step-by-step wizard in the Rancher web UI.

*Figure (omitted): the Rancher page after a successful login.*

## metrics-server deployment

```bash
# download
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
# swap the image for a mirror
sed -i 's/\(image:\).*/\1 registry.aliyuncs.com\/google_containers\/metrics-server:v0.6.1/g' components.yaml
# deploy
kubectl apply -f components.yaml
# verify: if kubectl top returns data, it works
kubectl top node
kubectl top pod -A
```

## ingress deployment

Deploy ingress-nginx:

```bash
# download
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx-deploy.yaml

# inspect the image references
grep image: ingress-nginx-deploy.yaml
#   image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
#   image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
#   image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b

# replace the images with mirrors
sed -i '/controller/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.8.0/' ingress-nginx-deploy.yaml
sed -i '/kube-webhook-certgen/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v20230407/' ingress-nginx-deploy.yaml

# deploy ingress-nginx
kubectl apply -f ingress-nginx-deploy.yaml
# check the ingress-nginx services
kubectl get all -n ingress-nginx
```

## helm, kubens, crictl, ctr tools

### helm

Helm is a package manager for Kubernetes applications. It lets you define, install, and upgrade applications packaged as "charts"; each chart bundles the manifests that describe one application's Kubernetes resources (deployments, services, configmaps, and so on).

```bash
# download and install; the tarball unpacks into linux-amd64/
wget https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar xvf helm-v3.14.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
chmod +x /usr/local/bin/helm
```

### kubens

kubens is a command-line tool for quickly switching between Kubernetes namespaces; it is part of the kubectx toolset for managing contexts and namespaces.

```bash
# download
wget https://github.com/ahmetb/kubectx/releases/download/v0.9.5/kubens_v0.9.5_linux_x86_64.tar.gz
# unpack and install
tar xvf kubens_v0.9.5_linux_x86_64.tar.gz
mv kubens /usr/local/bin
chmod +x /usr/local/bin/kubens
```

Usage:

- `kubens` — list all namespaces
- `kubens <namespace>` — switch to the given namespace
- `kubens -c` — print the current namespace
- `kubens -` — switch back to the previous namespace

### crictl

crictl is a command-line tool for interacting with any container runtime that implements the Container Runtime Interface (CRI). Its default configuration file is /etc/crictl.yaml.

```bash
# download and install
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar xvf crictl-v1.29.0-linux-amd64.tar.gz
mv crictl /usr/local/bin
chmod +x /usr/local/bin/crictl
```

Common crictl commands:

- `crictl version` — show the runtime version
- `crictl pods` — list the pods on the host
- `crictl images` — list images in the runtime
- `crictl ps` — list running containers
- `crictl create` / `crictl start` / `crictl stop` / `crictl rm` — create, start, stop, and delete containers
- `crictl logs` — view a container's logs
- `crictl inspect` — show details of a container or image
- `crictl pull` / `crictl rmi` — pull and delete images
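Without a config file, crictl needs `--runtime-endpoint` on every invocation; a minimal /etc/crictl.yaml saves the typing. A sketch for the containerd nodes — point both endpoints at cri-dockerd.sock on the docker nodes instead:

```bash
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```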
### ctr

ctr is the command-line tool that ships with containerd for managing containers, images, and other resources. Every container instance in containerd belongs to a namespace; the default is `default`.

```bash
# list namespaces (default namespace: default)
ctr namespaces

# list the containers/tasks/images under the k8s.io namespace
ctr -n k8s.io containers list
ctr -n k8s.io tasks list
ctr -n k8s.io images list
```

## NFS StorageClass dynamic PV storage

See my earlier blog post on this topic.

## Loki log collection deployment

To be completed.

## Prometheus deployment

To be completed.

## Argo CD deployment

To be completed.

## FAQ

**The kubelet service fails to start with:**

```
validate CRI v1 runtime API for endpoint "unix:///run/cri-dockerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
```

Cause and fix: cri-dockerd v0.2.6 has this problem; switching to cri-dockerd v0.3.6 resolves it.

**calico-node on one node fails to start with:**

```
[ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.0.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.0.0.1:443: connect: connection refused
```

Cause and fix: kube-proxy was never started on that node; starting the kube-proxy service resolves the issue.
