Deploying a Highly Available Kubernetes v1.27.2 Cluster from Binaries

Environment
Notes: this lab uses five hosts. The three master nodes (os128, os129, os130) also run workloads and use containerd as their container runtime; the worker131 and worker132 hosts use docker.
| Hostname | IP | Components | OS |
|---|---|---|---|
| os128 | 192.168.177.128 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os129 | 192.168.177.129 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| os130 | 192.168.177.130 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd | CentOS 7.9 |
| worker131 | 192.168.177.131 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd | CentOS 7.9 |
| worker132 | 192.168.177.132 | haproxy, keepalived, kubelet, kube-proxy, docker, cri-dockerd | CentOS 7.9 |
| VIP | 192.168.177.127 | | |
Software versions

| Software | Version | Download | Notes |
|---|---|---|---|
| CentOS | 7.9.2009 | https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso | |
| kernel | 3.10.0-1160.105.1.el7.x86_64 | (system default) | |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.27.2 | https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz | |
| etcd | v3.5.5 | https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz | |
| cfssl | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 | |
| cfssljson | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 | |
| cfssl-certinfo | v1.6.1 | https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 | |
| containerd | v1.6.6 | https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz | |
| runc | v1.1.11 | https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64 | the runc bundled with containerd is broken and must be replaced |
| docker | 20.10.24 | https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz | |
| cri-dockerd | 0.3.6 | https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz | |
| crictl | v1.29.0 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz | needs a separate install when docker is the runtime; the containerd bundle already ships it |
| haproxy | 1.5 | default yum repo | |
| keepalived | 1.3.5 | default yum repo | |
| calico | v3.25.0 | https://docs.tigera.io/archive/v3.25/manifests/calico.yaml | |
| coredns | v1.11.1 | https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base | |
| dashboard | v2.7.0 | https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml | |
| metrics-server | 0.6.1 | https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml | |
Server system initialization

# Install dependency packages
yum -y install epel-release.noarch
yum update -y
yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl bash-completion lrzsz sysstat openssh-clients
# Disable firewalld and SELinux, tune sshd
systemctl stop firewalld
systemctl disable firewalld
yum install iptables* -y
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
sed -i '/^#UseDNS/s/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config
sed -i 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
systemctl restart sshd
# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
# Raise the file-handle limits
ulimit -SHn 655350
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
cat >> /etc/security/limits.d/20-nproc.conf << EOF
* soft nproc unlimited
* hard nproc unlimited
EOF
# Install the ipvs management tools and load the kernel modules
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Make it executable, run it, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Kernel tuning for Kubernetes (k8s.conf)
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 131072
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply the settings
sysctl --system
# Load br_netfilter
modprobe br_netfilter
# Check that it is loaded
lsmod | grep br_netfilter
Setting up etcd signing certificates

Prepare the certificate tooling: cfssl, cfssljson, cfssl-certinfo (one host is enough; all certificate work here is done on os128).

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
mv cfssl_1.6.1_linux_amd64 /usr/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*

Self-sign the etcd CA:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
# Self-signed CA
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CA": {"expiry": "87600h"},
  "CN": "etcd CA",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "Beijing", "ST": "Beijing"}]
}
EOF
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.

Use the self-signed CA to issue the etcd HTTPS certificate:

# Create the certificate signing request
cd ~/TLS/etcd
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": ["192.168.177.128", "192.168.177.129", "192.168.177.130"],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing"}]
}
EOF
# Note: the hosts field must contain the internal cluster IPs of ALL etcd nodes; none can be missing.
# To make later expansion easier, you can add a few spare IPs.
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.
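Optionally inspect what was just issued with the cfssl-certinfo tool downloaded earlier; a minimal sketch (check that the SAN list and expiry look right):

# Print subject, SANs and validity of the etcd server certificate
cfssl-certinfo -cert server.pem
# Same for the CA
cfssl-certinfo -cert ca.pem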
etcd cluster deployment

About etcd: etcd is a distributed key-value store that Kubernetes uses as its datastore, so prepare an etcd database first. To avoid a single point of failure, deploy it as a cluster: 3 nodes tolerate 1 machine failure, and 5 nodes tolerate 2. The following runs on node os128; to simplify things, all files generated on os128 will later be copied to os129 and os130.
# Download the etcd release
wget https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz
mkdir -pv /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.5.5-linux-amd64.tar.gz
mv etcd-v3.5.5-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Prepare the etcd configuration files:
# etcd config on os128
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
---
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address for cluster communication
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one
---
# Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
  --cert-file=/opt/etcd/ssl/server.pem \\
  --key-file=/opt/etcd/ssl/server-key.pem \\
  --peer-cert-file=/opt/etcd/ssl/server.pem \\
  --peer-key-file=/opt/etcd/ssl/server-key.pem \\
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Install the etcd cluster on all nodes:
# Copy the certificates generated earlier to the paths referenced in the config
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
# Sync to the other hosts
scp -r /opt/etcd/ root@192.168.177.129:/opt/
scp -r /opt/etcd/ root@192.168.177.130:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.129:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.130:/usr/lib/systemd/system/
# etcd config on os129
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.129:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# etcd config on os130
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.130:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Start etcd and enable it at boot:

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Verify the etcd cluster with etcdctl:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint health --write-out=table
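Beyond endpoint health, member list and endpoint status are handy for spotting a wrong peer URL or a missing leader; a minimal sketch reusing the same TLS flags:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" member list --write-out=table
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint status --write-out=table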
Load balancer installation

Run the following on worker131 and worker132. Install haproxy and keepalived:

yum install haproxy keepalived -y

haproxy configuration:
cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     6000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
listen stats
    bind 0.0.0.0:9100
    mode http
    option httplog
    stats uri /status
    stats refresh 30s
    stats realm "Haproxy Manager"
    stats auth admin:password
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
frontend k8s-master-default-nodepool-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-master-default-nodepool
#---------------------------------------------------------------------
backend k8s-master-default-nodepool
    balance roundrobin
    mode tcp
    server k8s-apiserver-1 192.168.177.128:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server k8s-apiserver-2 192.168.177.129:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server k8s-apiserver-3 192.168.177.130:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
EOF

keepalived configuration. On worker131:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    # non-preemptive VIP mode
    nopreempt
    # unicast
    unicast_src_ip 192.168.177.131
    unicast_peer {
        192.168.177.132
    }
    virtual_router_id 51
    # priority 100, higher than the peer's 99
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        # the planned virtual IP
        192.168.177.127
    }
    # monitor haproxy on worker131 with the script
    track_script {
        # name of the script configured in vrrp_script above
        check_haproxy
    }
}
EOF

On worker132:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    unicast_src_ip 192.168.177.132
    unicast_peer {
        192.168.177.131
    }
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.177.127
    }
    # monitor haproxy on worker132 with the script
    track_script {
        # name of the script configured in vrrp_script above
        check_haproxy
    }
}
EOF
Health check script:

cat > /etc/keepalived/check_haproxy.sh << "EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh

Enable the services at boot and verify the highly available VIP:
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# Check service status
systemctl status keepalived haproxy
# Check whether the virtual IP is configured
ip address show

The haproxy stats page and the VIP can now be inspected. If you manually stop haproxy on worker131 to simulate a failure, the monitoring script configured in keepalived stops the worker131 keepalived service and the VIP floats automatically to worker132 with almost no packet loss (only slight network jitter). When the worker131 keepalived service recovers and starts again, it does not take the VIP back (non-preemptive mode).
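A quick failover drill; a minimal sketch, assuming the first two commands run on worker131 and the ping runs from another host on the same subnet:

# On worker131: simulate a haproxy failure; the check script stops keepalived within ~15s
systemctl stop haproxy
ip address show ens33   # the VIP 192.168.177.127 should disappear here
# From another host: the VIP should keep answering, now from worker132
ping -c 3 192.168.177.127
# Restore worker131; the VIP stays on worker132 because of nopreempt
systemctl start haproxy && systemctl start keepalived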
Setting up Kubernetes self-signed certificates

Self-signed CA

# Create the CA for the kube-apiserver certificates
cd ~/TLS/k8s
cat > ca-config.json << EOF
{
  "signing": {
    "default": {"expiry": "87600h"},
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CA": {"expiry": "87600h"},
  "CN": "kubernetes",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System"}]
}
EOF
# Generate the CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem.

kube-apiserver certificate
# Create the certificate signing request
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.177.127",
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System"}]
}
EOF
# Note: the hosts field must include every master/LB/VIP IP; none can be missing. Add a few spare IPs for future expansion.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem.

kube-controller-manager certificate
# Create the certificate request
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System"}]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

kube-scheduler certificate
# Create the certificate request
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System"}]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

kube-proxy certificate
# Create the certificate request
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System"}]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

admin certificate
# Generate the certificate kubectl uses to connect to the cluster
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "system:masters", "OU": "System"}]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

At this point the /root/TLS/k8s directory holds all of the certificate files generated above.
Control plane component deployment

Preparation (on os128):

# Deploy k8s v1.27.2
# Download the release
wget https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz
# Unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
cp kubectl /usr/local/bin/
# Copy the certificates
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

Deploy kube-apiserver

# Create the kube-apiserver config file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--v=2 \\
--etcd-servers=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \\
--bind-address=192.168.177.128 \\
--secure-port=6443 \\
--advertise-address=192.168.177.128 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Enable the TLS bootstrapping mechanism
TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must use valid CA-signed certificates to communicate with kube-apiserver. When there are many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster harder. To simplify the flow, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet contacts the apiserver as a low-privilege user to request a certificate, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate we issue ourselves.

Create the token file:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

# Format: token,username,UID,group
# You can generate your own token and substitute it:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

Manage kube-apiserver with systemd
# systemd unit for the apiserver
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Distribute the following paths to the other master hosts:
/opt/kubernetes/bin
/opt/kubernetes/ssl
/opt/kubernetes/cfg
/usr/lib/systemd/system/kube-apiserver.service
The IPs in each host's /opt/kubernetes/cfg/kube-apiserver.conf must be changed to that host's own address.
Start and enable at boot:

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
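Before wiring up the rest of the control plane, a cheap liveness check; a minimal sketch, assuming anonymous access to /healthz has not been disabled (-k skips verification of the self-signed chain):

# Should print "ok"
curl -k https://192.168.177.128:6443/healthz
# And through the VIP once haproxy/keepalived are serving it
curl -k https://192.168.177.127:6443/healthz

Deploy kube-controller-manager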
# Create the config file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

• --kubeconfig: kubeconfig for connecting to the apiserver
• --leader-elect: automatic leader election when several replicas of this component run (HA)
• --cluster-signing-cert-file/--cluster-signing-key-file: the CA that auto-issues kubelet certificates; must match the apiserver's
Note: --bind-address must stay 127.0.0.1.
Generate the kube-controller-manager.kubeconfig file:

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
cd ~/TLS/k8s
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Manage controller-manager with systemd:
# systemd unit for controller-manager
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Distribute the following files to the other master hosts:
/opt/kubernetes/bin/kube-controller-manager
/usr/lib/systemd/system/kube-controller-manager.service
/opt/kubernetes/cfg/kube-controller-manager.conf
/opt/kubernetes/cfg/kube-controller-manager.kubeconfig

Start and enable at boot:
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploy kube-scheduler

Create the config file:
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--v=2 \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

--kubeconfig: kubeconfig for connecting to the apiserver
--leader-elect: automatic leader election when several replicas run (HA)
Note: --bind-address must stay 127.0.0.1.
Generate kube-scheduler.kubeconfig:

cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Manage kube-scheduler with systemd:
# systemd unit for the scheduler
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Distribute the following files to the other master hosts:
/opt/kubernetes/bin/kube-scheduler
/usr/lib/systemd/system/kube-scheduler.service
/opt/kubernetes/cfg/kube-scheduler.conf
/opt/kubernetes/cfg/kube-scheduler.kubeconfig
# Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check cluster status

Generate the admin kubeconfig for managing the cluster:
cd ~/TLS/k8s
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Check the cluster state with kubectl:
# Cluster info
kubectl cluster-info
# Component status
kubectl get cs
# Any coredns entry shown there can be ignored; coredns is deployed later.
Authorize the kubelet-bootstrap user to request certificates:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Data plane (worker) component deployment

Container runtime installation

Install docker (worker131, worker132):
# Binary packages: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
# Unpack
tar xvf docker-20.10.24.tgz
# Copy the binaries
cp docker/* /usr/bin/
# Create the containerd service file, then start it
cat > /etc/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
# Create the docker service file
cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
# Create the docker socket file
cat > /etc/systemd/system/docker.socket << EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
# Create the docker group
groupadd docker
# Start docker
systemctl enable --now docker.socket
systemctl enable --now docker.service
# Verify
docker info
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {"max-size": "10m", "max-file": "3"},
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker
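Worth confirming that docker picked up the systemd cgroup driver configured above; a minimal sketch:

# Expect "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"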
Install cri-dockerd (worker131, worker132)

Kubernetes 1.24 and later no longer support docker directly, so install cri-dockerd.

# Download cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz
# Unpack
tar -zxvf cri-dockerd-0.3.6.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd
# Write the service unit
cat > /usr/lib/systemd/system/cri-docker.service << EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Write the socket unit
cat > /usr/lib/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# Start cri-docker
systemctl daemon-reload ; systemctl enable cri-docker --now
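To confirm that the CRI socket actually answers, a minimal sketch, assuming crictl (installed later in this guide) is already on the host:

crictl --runtime-endpoint unix:///run/cri-dockerd.sock version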
Install containerd (os128, os129, os130)

wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
tar xvf cri-containerd-cni-1.6.6-linux-amd64.tar.gz -C /
# Kernel modules required by containerd
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
# Load the modules
systemctl restart systemd-modules-load.service
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/\(sandbox_image\) = .*/\1 = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
# Check the containerd-related modules
lsmod | egrep "br_netfilter|overlay"

Install runc (os128, os129, os130)

The runc shipped in the containerd bundle fails with "runc: symbol lookup error: runc: undefined symbol", so replace it:
wget https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64
chmod +x runc.amd64
mv runc.amd64 /usr/local/bin/runc
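A minimal sanity check that the replacement binary works and containerd is healthy:

# Should print the version instead of the symbol lookup error
runc --version
# Client and server versions from containerd
ctr version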
Deploy kubelet

Preparation:

# Create the working directories on all worker nodes
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests}

Create the config file:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--v=2 \\
--hostname-override=$(hostname) \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--runtime-request-timeout=15m \\
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node=linux"
EOF

The --container-runtime-endpoint value depends on the runtime:
docker: unix:///run/cri-dockerd.sock
containerd: unix:///run/containerd/containerd.sock

Generate the kubelet-config.yml parameter file:
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Generate the bootstrap.kubeconfig used by kubelet to join the cluster for the first time:
# Generate the kubelet bootstrap kubeconfig
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
# Must match the token in token.csv
TOKEN="c47ffb939f5ca36231d9e3121a252940"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Manage kubelet with systemd:
# systemd unit for kubelet
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
# Keep this as-is when the CRI is docker; change it to containerd.service for containerd
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start and enable at boot:
# Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the kubelet certificate request so the node joins the cluster:

# List pending CSRs
[root@os128 system]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ   14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
# Approve the request
kubectl certificate approve node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ
# List nodes
kubectl get node

Install kubelet on the other worker nodes:
# Sync the following files from the master, adjust them for the target host, then start kubelet
/opt/kubernetes/cfg/kubelet.conf         # check hostname-override (must be unique in the cluster) and container-runtime-endpoint (depends on the runtime)
/usr/lib/systemd/system/kubelet.service  # the After= value depends on the host's runtime
/opt/kubernetes/cfg/kubelet-config.yml   # no changes needed
/opt/kubernetes/cfg/kubelet.kubeconfig   # no changes needed
/opt/kubernetes/cfg/bootstrap.kubeconfig # no changes needed
/opt/kubernetes/ssl/ca.pem               # no changes needed
/opt/kubernetes/bin/kubelet              # no changes needed
Start kubelet and enable it at boot, approve the certificate requests as shown above, then check that every node has joined:
kubectl get node

Deploy kube-proxy

Generate the parameter config file:
cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: $(hostname)
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
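Note that the systemd unit further below reads an options file, /opt/kubernetes/cfg/kube-proxy.conf, which the steps above never create; a minimal sketch, assuming it only needs to point kube-proxy at the YAML above (the --v=2 verbosity is an assumption):

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--v=2 --config=/opt/kubernetes/cfg/kube-proxy.yaml"
EOF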
Generate the kube-proxy.kubeconfig file:

cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Manage kube-proxy with systemd:

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start and enable at boot:
# Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Install kube-proxy on the other worker nodes:
# Sync the following files from the master
/opt/kubernetes/bin/kube-proxy
/usr/lib/systemd/system/kube-proxy.service
/opt/kubernetes/cfg/kube-proxy.conf       # referenced by the unit's EnvironmentFile
/opt/kubernetes/cfg/kube-proxy.kubeconfig
/opt/kubernetes/cfg/kube-proxy.yaml       # confirm hostnameOverride matches the local host
Then start kube-proxy and enable it at boot as above.
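With kube-proxy running in ipvs mode, the kernel virtual-server table should show cluster services; a minimal sketch using the ipvsadm tool installed during system initialization:

# Expect the kubernetes service VIP 10.0.0.1:443 forwarding to the three apiserver endpoints
ipvsadm -Ln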
calico network plugin deployment

Download calico:

wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

Change the default pod network:
# In calico.yaml, change the pod CIDR to the network passed to --cluster-cidr=10.244.0.0/16
# Open the file in vim and search for 192; by default the block is commented out:
#             # no effect. This should fall within --cluster-cidr.
#             # - name: CALICO_IPV4POOL_CIDR
#             #   value: "192.168.1.0/16"
#             # Disable file logging so kubectl logs works.
#             - name: CALICO_DISABLE_FILE_LOGGING
#               value: "true"
Remove the leading "# " from the two CALICO_IPV4POOL_CIDR lines and change 192.168.1.0/16 to 10.244.0.0/16:
            # no effect. This should fall within --cluster-cidr.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so kubectl logs works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"

Deploy calico:
kubectl apply -f calico.yaml
Verify calico:
kubectl get pods -n kube-system

Authorize apiserver access to the kubelets
# Needed for operations such as kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
coredns component deployment

Prepare coredns.yml from the upstream template (https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base). The template's __DNS__DOMAIN__, __DNS__MEMORY__LIMIT__ and __DNS__SERVER__ placeholders are filled in below as cluster.local, 170Mi, and 10.0.0.2 (the clusterDNS address configured in kubelet-config.yml).

cat > coredns.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.k8s.io/coredns/coredns:v1.11.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

Deploy coredns:
kubectl apply -f coredns.yml
Check the coredns pods:
kubectl get pod -n kube-system | grep coredns
In production, tune the coredns resource allocation and add an HPA.
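An end-to-end DNS check; a minimal sketch, assuming the node can pull busybox:1.28 (later busybox tags have a broken nslookup):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# The reply should come from the kube-dns service at 10.0.0.2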
dashboard component deployment

Deploy the dashboard:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Change the Service to NodePort
vim recommended.yaml
----
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
----
kubectl apply -f recommended.yaml
# Check the dashboard resources
kubectl get pods -n kubernetes-dashboard
kubectl get pods,svc -n kubernetes-dashboard

Create a ServiceAccount and bind it to the built-in cluster-admin role:
cat > dashadmin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashadmin.yaml
# Create a login token; it can be used to sign in to the dashboard
kubectl -n kubernetes-dashboard create token admin-user

Verify the login: browse to https://192.168.177.128:30001 and sign in with the token generated above, or with a kubeconfig file.
Rancher for managing the k8s cluster

Besides the dashboard, the cluster can be managed with Rancher, whose UI is friendlier for accounts, projects and permissions, and which offers more features. A simple docker-based deployment is shown here; for production, deploy Rancher inside a k8s cluster and expose it through an ingress:

docker run -d --restart=always --privileged=true -p 443:443 -v /data/rancher:/var/lib/rancher/ --name rancher-server -e CATTLE_SYSTEM_CATALOG=bundled rancher/rancher:stable

Then follow the wizard in the Rancher web UI to import the binary-deployed cluster step by step.

metrics-server component deployment

# Download the manifest and point the image at a local mirror
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
sed -i 's/\(image:\).*/\1 registry.aliyuncs.com\/google_containers\/metrics-server:v0.6.1/g' components.yaml
# Deploy
kubectl apply -f components.yaml
# Verify: if kubectl top shows data, it is working
kubectl top node
kubectl top pod -A
ingress component deployment

Deploy ingress-nginx:

# Download
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx-deploy.yaml
# Check the image references
grep image: ingress-nginx-deploy.yaml
#   image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
#   image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
#   image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
# Replace the images with mirrors
sed -i '/controller/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.8.0/' ingress-nginx-deploy.yaml
sed -i '/kube-webhook-certgen/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v20230407/' ingress-nginx-deploy.yaml
# Deploy ingress-nginx
kubectl apply -f ingress-nginx-deploy.yaml
# Check the ingress-nginx resources
kubectl get all -n ingress-nginx
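To exercise the controller end to end, a minimal sketch with a hypothetical nginx backend; demo-app and demo.example.com are illustrative names, not from the original:

# Hypothetical backend
kubectl create deployment demo-app --image=nginx
kubectl expose deployment demo-app --port=80
cat > demo-ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app            # hypothetical
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com  # hypothetical
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
EOF
kubectl apply -f demo-ingress.yaml
# Look up the controller's http NodePort, then curl any node with the Host header
NODE_PORT=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.ports[0].nodePort}')
curl -H "Host: demo.example.com" http://192.168.177.131:${NODE_PORT}/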
helm, kubens, crictl, ctr tools

helm: Helm is a package manager for Kubernetes applications. It lets you define, install, and upgrade prepackaged Kubernetes applications called "charts"; each chart is a set of files describing Kubernetes resources such as deployments, services, and configmaps.

# Download
wget https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar xvf helm-v3.14.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
chmod +x /usr/local/bin/helm

kubens: kubens is a command-line tool for quickly switching between Kubernetes namespaces. It is part of the kubectx toolkit, which manages Kubernetes contexts and namespaces.

# Download
wget https://github.com/ahmetb/kubectx/releases/download/v0.9.5/kubens_v0.9.5_linux_x86_64.tar.gz
# Unpack
tar xvf kubens_v0.9.5_linux_x86_64.tar.gz
mv kubens /usr/local/bin
chmod +x /usr/local/bin/kubens
# kubens usage
kubens                # list all namespaces
kubens <namespace>    # switch to the given namespace
kubens -c             # print the current namespace
kubens -              # switch back to the previous namespace

crictl: crictl is a command-line tool for interacting with any CRI-compatible container runtime. Its default config file is /etc/crictl.yaml.
# Download
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar xvf crictl-v1.29.0-linux-amd64.tar.gz
mv crictl /usr/local/bin
chmod +x /usr/local/bin/crictl
# crictl usage
crictl version    # show version information
crictl pods       # list pods on this host
crictl images     # list images known to the runtime
crictl ps         # list running containers
crictl create     # create a container
crictl start      # start a created container
crictl stop       # stop a running container
crictl rm         # remove a container
crictl logs       # show container logs
crictl inspect    # show details of a container or image
crictl pull       # pull an image from a registry
crictl rmi        # remove an image

ctr: ctr is the command-line client that ships with containerd for managing containers, images and other resources. Every containerd container belongs to a namespace (default: "default").

# List namespaces (the default one is "default")
ctr namespaces list
# List containers/tasks/images in the k8s.io namespace
ctr -n k8s.io containers list
ctr -n k8s.io tasks list
ctr -n k8s.io images list

nfs storageclass dynamic PV provisioning
Covered in an earlier blog post of mine.
loki log collection deployment

To be completed.

Prometheus component deployment

To be completed.

argocd component deployment

To be completed.

FAQ

1. The kubelet service fails to start with: "validate CRI v1 runtime API for endpoint unix:///run/cri-dockerd.sock: rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService". Cause and fix: cri-dockerd v0.2.6 has this problem; switching to cri-dockerd v0.3.6 resolved it.

2. calico-node on one node fails to start with: "[ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post \"https://10.0.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token\": dial tcp 10.0.0.1:443: connect: connection refused". Cause and fix: kube-proxy on that node had not been started; starting the kube-proxy service resolved it.