Background and use case
A critical service in our business was configured with multiple replicas, but after deployment we found that several identical replicas had been scheduled onto the same host. When that host failed, all of the replicas had to be rescheduled at the same time, causing intermittent service outages.
Given this background, the goal is to spread a service's replicas across different hosts so that each host runs exactly one replica of the service. The feature used here is Pod anti-affinity (podAntiAffinity): based on the labels of pods already running on a node, the scheduler will not place another pod with the same label on that node, so each node ends up running only one replica.
Difference between pod affinity and anti-affinity
Affinity (podAffinity): schedule the pod onto the same node as pods with the specified label.
Anti-affinity (podAntiAffinity): do not schedule the pod onto the same node as pods with the specified label.
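For contrast, here is a minimal podAffinity sketch showing the opposite behavior: it asks the scheduler to co-locate the pod with pods that carry a given label. The label app=redis is a hypothetical example used only to illustrate the field layout; the rest of this article uses podAntiAffinity.
# Illustrative sketch only: co-locate this pod with pods labeled app=redis (hypothetical label)
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - redis
        topologyKey: "kubernetes.io/hostname"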
Deploying with podAntiAffinity in practice
Anti-affinity comes in two forms: a hard requirement and a soft preference.
requiredDuringSchedulingIgnoredDuringExecution: hard requirement; the condition must be satisfied. To guarantee that replicas are spread out, this is the form to use.
preferredDuringSchedulingIgnoredDuringExecution: soft preference; the condition may not be fully satisfied, so multiple replicas can still end up on the same node.
# Configuration below; you only need to change the label settings,
# i.e. the key and values under matchExpressions.

# Hard requirement:
# if a pod with the label app=nginx already runs on a node, do not schedule onto that node
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: "kubernetes.io/hostname"

# Soft preference:
# even if a pod with the label app=nginx already runs on a node, this node may still be used;
# the scheduler tries other nodes first and only falls back to this node if no other node satisfies the rule
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx
          topologyKey: "kubernetes.io/hostname"
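Both snippets above pin topologyKey to kubernetes.io/hostname, which spreads replicas at node granularity. As a sketch of a variant not taken from the original configuration: assuming the cluster's nodes carry the standard topology.kubernetes.io/zone label, the same rule can spread replicas across availability zones instead.
# Sketch: spread replicas across zones instead of individual nodes
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: "topology.kubernetes.io/zone"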
Complete deployment.yaml configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 30%
      maxUnavailable: 0
    type: RollingUpdate
  minReadySeconds: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      restartPolicy: "Always"
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
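Note that with the hard requirement above, the number of replicas that can be scheduled is bounded by the number of eligible nodes: if replicas exceeds the node count, the surplus pods stay Pending because no node satisfies the anti-affinity rule. After applying the manifest, kubectl get pods -o wide shows which node each replica landed on and reveals any Pending pods.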
Pod anti-affinity as used in a real production environment
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  # Never schedule multiple replicas on the same node
  - topologyKey: kubernetes.io/hostname
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/instance: ${service}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  replicas: ${replicas}
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: ${service}
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/version: 0.0.0
        app.kubernetes.io/instance: ${service}
        logging: "false"
        armsPilotAutoEnable: "off"
        armsPilotCreateAppName: "${service}-${env}"
    spec:
      serviceAccountName: default
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: gemdale-registry.cn-shenzhen.cr.aliyuncs.com-secret
      containers:
      - name: ${service}
        image: ${image}
        imagePullPolicy: IfNotPresent
        env:
        - name: CONSUL_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: ELASTIC_APM_SERVER_URLS
          value: http://apm-server.logging:8200
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: SERVER_PORT
          value: "80"
        - name: JAVA_OPTS
          value: -Duser.timezone=Asia/Shanghai
        - name: WFWAPP
          value: wfw-applog
        volumeMounts:
        - mountPath: /data/appdata/
          name: appdata
        - mountPath: /data/config-repo/
          name: config-repo
        - mountPath: /data/logs/
          name: logs
        - mountPath: /mnt/hgfs/
          name: mnt-hgfs
        ports:
        - containerPort: 80
          name: http
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: microservice
                operator: In
                values:
                - "true"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          # Never schedule multiple replicas on the same node
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: ${service}
                app.kubernetes.io/instance: ${service}
      volumes:
      - hostPath:
          path: /data/appdata/
          type: DirectoryOrCreate
        name: appdata
      - hostPath:
          path: /data/config-repo/
          type: DirectoryOrCreate
        name: config-repo
      - hostPath:
          path: /data/logs/
          type: DirectoryOrCreate
        name: logs
      - hostPath:
          path: /mnt/hgfs/
          type: DirectoryOrCreate
        name: mnt-hgfs
---
apiVersion: v1
kind: Service
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/instance: ${service}
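A closing note on the production template: it combines two constraints. nodeAffinity restricts scheduling to nodes labeled microservice=true, so those nodes must carry that label beforehand (for example via kubectl label nodes <node-name> microservice=true, with <node-name> as a placeholder); podAntiAffinity then ensures that no two replicas of ${service} land on the same one of those nodes, which is exactly the spread described at the start of this article.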