Kubernetes Ingress in Practice (5): High Availability for Kubernetes Ingress Edge Nodes in a Bare Metal Environment (Based on IPVS)
In the previous post we used Keepalived to achieve high availability for the cluster's edge nodes; see "Kubernetes Ingress in Practice (4): High Availability for Kubernetes Ingress Edge Nodes in a Bare Metal Environment". Once kube-proxy runs in ipvs mode, Keepalived is no longer needed: the ingress controller's Service uses externalIPs, the externalIP is the VIP, and kube-proxy's IPVS takes over traffic to it. The test environment is as follows:
kubectl get node -o wide
NAME    STATUS   ROLES         AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
node1   Ready    edge,master   5h58m   v1.12.0   192.168.61.11   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64       docker://18.6.1
node2   Ready    edge          5h55m   v1.12.0   192.168.61.12   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64       docker://18.6.1
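Everything below assumes kube-proxy is already running in ipvs mode. Two quick ways to confirm this (the ConfigMap name kube-proxy assumes a kubeadm-provisioned cluster; the /proxyMode endpoint is served on kube-proxy's metrics port, 10249 by default):

# Inspect the configured mode in the kube-proxy ConfigMap (kubeadm clusters)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode

# Ask a running kube-proxy directly on any node; it should answer "ipvs"
curl localhost:10249/proxyMode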
node1 is the master node, and we want both node1 and node2 to act as the cluster's edge nodes. As before, we deploy nginx ingress with helm, slightly adjusting the values file ingress-nginx.yaml for the stable/nginx-ingress chart:
controller:
  replicaCount: 2
  service:
    externalIPs:
    - 192.168.61.10
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
The nginx ingress controller's replicaCount is 2, so the controller pods will be scheduled onto the two edge nodes node1 and node2. The address 192.168.61.10 specified in externalIPs is the VIP, which kube-proxy will bind to the kube-ipvs0 interface.
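The nodeSelector in the values file keys off the node-role.kubernetes.io/edge label, which the ROLES column in the node listing above shows both nodes already carry. If starting from scratch, the label could be applied with:

kubectl label node node1 node-role.kubernetes.io/edge=
kubectl label node node2 node-role.kubernetes.io/edge=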
helm install stable/nginx-ingress \
  -n nginx-ingress \
  --namespace ingress-nginx \
  -f ingress-nginx.yaml
Check the Service nginx-ingress-controller:
kubectl get svc -n ingress-nginx
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.99.214.125   192.168.61.10   80:30750/TCP,443:30961/TCP   4m48s
nginx-ingress-default-backend   ClusterIP      10.105.78.103   <none>          80/TCP                       4m48s
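Thanks to the podAntiAffinity in the values file, the two controller replicas should land on different edge nodes; this can be verified from the NODE column (pod names and ages will differ):

# One nginx-ingress-controller pod should be running on node1 and one on node2
kubectl get pod -n ingress-nginx -o wide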
Inspect the kube-ipvs0 interface on node1:
ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether f6:3b:12:a5:79:82 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
Inspect the kube-ipvs0 interface on node2:
ip addr sh kube-ipvs0
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether fa:c5:24:df:22:eb brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.108.71.144/32 brd 10.108.71.144 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.228.188/32 brd 10.101.228.188 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.214.125/32 brd 10.99.214.125 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.61.10/32 brd 192.168.61.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.105.78.103/32 brd 10.105.78.103 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
The VIP 192.168.61.10 is visible on kube-ipvs0 on both nodes.
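Besides the address on kube-ipvs0, kube-proxy also programs an IPVS virtual server for each port of the VIP, which can be inspected with ipvsadm. A minimal check, with illustrative output (the backend pod IPs below are made up for this sketch and will differ in a real cluster):

# List the IPVS virtual server for port 80 of the VIP and its two backends,
# which should be the nginx-ingress-controller pod IPs
ipvsadm -ln | grep -A 2 '192.168.61.10:80'
TCP  192.168.61.10:80 rr
  -> 10.244.1.5:80    Masq    1      0          0
  -> 10.244.2.7:80    Masq    1      0          0

Finally, a request to the VIP with a Host that matches no Ingress rule should be answered by the default backend with a 404, confirming that traffic actually reaches the controllers:

curl -I http://192.168.61.10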