I. Why do we need affinity and anti-affinity?
When we want to schedule a Pod onto specific nodes, we can use Label + NodeSelector, or taints (Taint) + tolerations (Toleration). With NodeSelector the initiative lies with the Pod: the Pod chooses which nodes to run on based on the nodes' labels. With taints the initiative lies with the node: a Pod can only be scheduled onto a tainted node after it declares a matching toleration.
However, both mechanisms only support fairly simple node selection and cannot express complex scheduling conditions. For example, neither NodeSelector nor taints can express "schedule this Pod onto nodes whose label is NOT xxx". This is where affinity comes in: it supports complex node-selection conditions, and it can also select by Pod, e.g. co-locating a new Pod with an existing Pod.
Besides covering NodeSelector's use cases and many taint-style exclusion scenarios, affinity additionally supports both mandatory (required) and preferred selection strategies, which makes node selection far more flexible.
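For contrast, here is a minimal NodeSelector sketch (a Pod spec fragment; the disktype label is hypothetical). It can only require exact label matches, so a rule like "NOT this label value" or "prefer but don't require" cannot be expressed this way:

```yaml
spec:
  # nodeSelector only supports exact key=value matches (ANDed together);
  # it cannot express negation (NotIn) or soft preferences.
  nodeSelector:
    disktype: ssd    # hypothetical node label
```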
II. Basic usage
Affinity and anti-affinity come in three types: NodeAffinity (node affinity), PodAffinity (Pod affinity), and PodAntiAffinity (Pod anti-affinity).
All three types support two policies, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which can be used together:
(1) requiredDuringSchedulingIgnoredDuringExecution: hard affinity; the condition must be satisfied for the Pod to be scheduled.
(2) preferredDuringSchedulingIgnoredDuringExecution: soft affinity; the scheduler prefers matching nodes but will still schedule the Pod if none match.
The IgnoredDuringExecution suffix means the rule is only evaluated at scheduling time: if node labels change after the Pod is already running, the Pod is not evicted.
1. Node affinity
Node affinity plays the same role as NodeSelector: it selects nodes by their labels, but supports much richer matching conditions.
(1) Preparing the environment
Check the current node labels:
[root@test-99 ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
develop Ready work 9d v1.21.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=develop,kubernetes.io/os=linux,node-role.kubernetes.io/work=,node.kubernetes.io/node=
test-99 Ready master 9d v1.21.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=test-99,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/node=
Add a label normal-node=1 to the develop node:
[root@test-99 ~]# kubectl label node develop normal-node=1
node/develop labeled
[root@test-99 ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
develop Ready work 9d v1.21.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=develop,kubernetes.io/os=linux,node-role.kubernetes.io/work=,node.kubernetes.io/node=,normal-node=1
test-99 Ready master 9d v1.21.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=test-99,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/node=
(2) Defining a node affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-with-node-affinity
  labels:
    app: nginx-with-node-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-with-node-affinity
  template:
    metadata:
      labels:
        app: nginx-with-node-affinity
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: normal-node
                operator: In
                values:
                - "1"
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                - another-node-label-value
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
In the configuration above we define a Deployment and configure affinity on its Pod template, using both requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. The two are configured slightly differently: requiredDuringSchedulingIgnoredDuringExecution lists its node-selection conditions under nodeSelectorTerms, while preferredDuringSchedulingIgnoredDuringExecution is a list of weighted preferences, where weight (1 to 100) sets the priority of each preference.
matchExpressions defines the match expressions: key is a node label key (normal-node and another-node-label-key in the example above), values lists the label values to match against, and operator is one of the following:
- In: the label value must be one of the entries in values
- NotIn: the label value must not be any of the entries in values
- Exists: the label key must exist (values must be omitted)
- DoesNotExist: the label key must not exist
- Gt: the label value must be greater than the value given in values (integers only)
- Lt: the label value must be less than the value given in values (integers only)
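As a sketch of the negation and existence operators (the disktype label key is hypothetical), note that expressions inside one matchExpressions block must all match at once, while multiple entries under nodeSelectorTerms are alternatives:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      # Expressions within one matchExpressions block are ANDed;
      # separate entries under nodeSelectorTerms are ORed.
      - matchExpressions:
        - key: disktype          # hypothetical label key
          operator: NotIn        # value must NOT be in the list below
          values:
          - hdd
        - key: normal-node
          operator: Exists       # key must exist; values is omitted
```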
(3) Testing
After creating the Deployment with kubectl apply -f xxx, check where the Pods were scheduled with kubectl get pods -o wide:
[root@test-99 affinities]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-with-node-affinity-54c55b4dcc-f9jfc 1/1 Running 0 8s 192.168.43.103 develop <none> <none>
nginx-deployment-with-node-affinity-54c55b4dcc-pknmr 1/1 Running 0 8s 192.168.43.102 develop <none> <none>
2. Pod affinity
Pod affinity matches against other Pods: for example, we may want a newly scheduled Pod to land on the same node as an existing Pod.
(1) Preparing the environment
We test Pod affinity on top of the previous example. As shown in the Pod listing below, we use app=nginx-with-node-affinity as the Pod affinity condition.
[root@test-99 affinities]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-with-node-affinity-7fcd75fd68-gm67s 1/1 Running 0 11s app=nginx-with-node-affinity,pod-template-hash=7fcd75fd68
nginx-deployment-with-node-affinity-7fcd75fd68-zvpzq 1/1 Running 0 11s app=nginx-with-node-affinity,pod-template-hash=7fcd75fd68
(2) Defining a Pod affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-with-pod-affinity
  labels:
    app: nginx-with-pod-affinity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-with-pod-affinity
  template:
    metadata:
      labels:
        app: nginx-with-pod-affinity
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx-with-node-affinity
            topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: another-node-label-key
                  operator: In
                  values:
                  - another-node-label-value
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Pod affinity is configured much like node affinity; the main difference is that Pod affinity additionally requires a topologyKey, which names a node label that defines the topology domain. Nodes carrying the same value for that label belong to the same domain; nodes with the same label key but different values are in different domains. With topologyKey: kubernetes.io/hostname, every node has a distinct hostname value, so each node is its own domain and "same domain" simply means "same node".
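For instance (a sketch, assuming the nodes carry the standard topology.kubernetes.io/zone label, which this two-node test cluster may not), changing the topologyKey widens the domain so that "together" means "same availability zone" rather than "same node":

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nginx-with-node-affinity
      # Any node in the same zone satisfies the rule;
      # the Pods need not share a single node.
      topologyKey: topology.kubernetes.io/zone
```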
(3) Testing
After creating the Deployment with kubectl apply -f, kubectl get pods -o wide shows the Pods were scheduled as expected:
[root@test-99 affinities]# kubectl apply -f 02-create-pod-affinity.yml
deployment.apps/nginx-deployment-with-pod-affinity created
[root@test-99 affinities]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-with-node-affinity-7fcd75fd68-gm67s 1/1 Running 0 11m 192.168.43.107 develop <none> <none>
nginx-deployment-with-node-affinity-7fcd75fd68-zvpzq 1/1 Running 0 11m 192.168.43.108 develop <none> <none>
nginx-deployment-with-pod-affinity-5457887b6b-xpbvg 1/1 Running 0 6s 192.168.43.109 develop <none> <none>
3. Pod anti-affinity
This is probably the most widely used of the three. For example, when deploying a Redis cluster we usually want each Redis Pod to run on a different node; Pod anti-affinity expresses exactly that.
(1) Defining a Pod anti-affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-with-pod-anti-affinity
  labels:
    app: nginx-with-pod-anti-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-with-pod-anti-affinity
  template:
    metadata:
      labels:
        app: nginx-with-pod-anti-affinity
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx-with-pod-anti-affinity
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
In the configuration above, the Pods carry the label app=nginx-with-pod-anti-affinity, and at the same time declare an anti-affinity rule forbidding them from being scheduled onto any node that already runs a Pod with that same label. The net effect is that each replica lands on a different node. Note that with required anti-affinity, if there are more replicas than eligible nodes, the surplus replicas will stay Pending.
[root@test-99 affinities]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-with-node-affinity-7fcd75fd68-gm67s 1/1 Running 0 36m 192.168.43.107 develop <none> <none>
nginx-deployment-with-node-affinity-7fcd75fd68-zvpzq 1/1 Running 0 36m 192.168.43.108 develop <none> <none>
nginx-deployment-with-pod-affinity-5457887b6b-xpbvg 1/1 Running 0 24m 192.168.43.109 develop <none> <none>
nginx-deployment-with-pod-anti-affinity-55bb56c776-4rzl7 1/1 Running 0 2m27s 192.168.252.86 test-99 <none> <none>
nginx-deployment-with-pod-anti-affinity-55bb56c776-z9jht 1/1 Running 0 2m27s 192.168.43.110 develop <none> <none>
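When replicas may outnumber nodes, a softer variant is worth considering. This sketch of the affinity section (same labels as the Deployment above) uses preferredDuringSchedulingIgnoredDuringExecution, so spreading becomes best-effort rather than mandatory:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100              # strongest preference (range is 1-100)
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nginx-with-pod-anti-affinity
        # Prefer a node with no matching Pod, but still schedule
        # onto a shared node if no empty node is available.
        topologyKey: kubernetes.io/hostname
```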