07 - Exposing Kubernetes Services Externally

Abstract

This content is adapted from the web and kept as a personal study record; please do not redistribute.

Service

Why Service Exists

Services deployed in a k8s cluster can be exposed for external access by opening a port with NodePort.

Pods in a k8s cluster change constantly due to routine deployments, updates, and failures. To expose a service made up of Pods, something must dynamically track those Pod changes while presenting a single, stable entry point to users.

The k8s Service was introduced exactly for this: it tracks dynamic Pod changes and provides a unified entry point for external access.

  • A Service identifies a group of Pods via a label selector and provides a single entry point for them (service discovery)
  • A Service tracks Pod changes, so Pods are never "lost"
  • A Service can define an access policy for a group of Pods (load balancing)


The Relationship Between Pod and Service

  • A Service associates with a group of Pods through a label selector
  • A Service load-balances across that group of Pods using iptables or IPVS
  • k8s implements Services through kube-proxy


Creating a Service

# Create a deployment
$ kubectl create deploy nginx01 --image=nginx:1.17.10 --replicas=3 --dry-run=client -o yaml > nginx.yaml
$ cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx01
  name: nginx01
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx01
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx01
    spec:
      containers:
      - image: nginx:1.17.10
        name: nginx
        resources: {}

$ kubectl apply -f nginx.yaml
$ kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nginx01-5759677f4c-9bt6l   1/1     Running   0          9s
nginx01-5759677f4c-kpwlt   1/1     Running   0          9s
nginx01-5759677f4c-lsfvd   1/1     Running   0          9s

# Create the Service
$ kubectl expose deploy nginx01 --port=80 --target-port=80 --type=ClusterIP --name nginx01 --dry-run=client -o yaml > service-nginx01.yaml
$ cat service-nginx01.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx01
  name: nginx01
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx01
  type: ClusterIP

$ kubectl apply -f service-nginx01.yaml
$ kubectl get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx01   ClusterIP   10.97.167.224   <none>        80/TCP    3s

$ curl 10.97.167.224 -I
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Mon, 23 Aug 2021 04:57:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes

Defining a Multi-Port Service

When a Service exposes multiple ports, each port must be given a name to distinguish it.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx01
  name: nginx01
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx01

The Three Service Types

  • ClusterIP: for access inside the k8s cluster only
  • NodePort: opens a port on every node to expose the service externally
  • LoadBalancer: exposes the application through a public cloud LB service

ClusterIP

The default Service type. It gives a group of Pods a stable cluster IP (a VIP) that, by default, is reachable only from inside the cluster.


spec:
  type: ClusterIP  # Service type; ClusterIP is also the default when omitted
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx

NodePort

Exposes the service through a port that kube-proxy opens on every node in the cluster, so it can be reached from outside the cluster. A stable internal cluster IP is still allocated as well.

  • Access address: <any NodeIP>:<NodePort>

  • Default port range: 30000-32767


spec:
  type: NodePort  # Service type NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30009  # Fixed nodePort; a random one is allocated if omitted
  selector:
    app: nginx
  • NodePort listens on every node to provide external access; in practice an external load balancer is usually placed in front to proxy across the nodes (a quick check follows below)
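A quick external check of the fixed nodePort; 192.168.1.10 below is a placeholder for one of your node IPs, so adjust it to your environment:

$ curl -I http://192.168.1.10:30009/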


LoadBalancer

Similar to NodePort, a port is opened on every node to expose the service. In addition, Kubernetes calls the API of the underlying cloud platform (Alibaba Cloud, Tencent Cloud, AWS, etc.) to provision a load balancer and registers every node ([NodeIP]:[NodePort]) as a backend.
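A minimal spec sketch for this type; on a cluster without a cloud controller manager the external IP will simply stay <pending>:

spec:
  type: LoadBalancer  # the cloud provider provisions an external load balancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx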


Service Proxy Modes

A Service has two proxy modes: the traditional iptables mode, and the more recently added IPVS mode.

  • iptables: forwards traffic using the Linux iptables framework
  • ipvs: forwards traffic using the in-kernel IPVS load-balancing module


  • IPVS mode was introduced in Kubernetes v1.8, reached beta in v1.9, and became generally available in v1.11.
  • iptables mode was added back in v1.1, and from v1.2 onward it has been kube-proxy's default mode.
  • Both IPVS and iptables are built on netfilter. IPVS still uses iptables for packet filtering, SNAT, and masquerading.
    Specifically, IPVS uses ipset to store the source or destination addresses of traffic that must be dropped or masqueraded,
    which keeps the number of iptables rules constant no matter how many Services exist.
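Besides reading the kube-proxy logs (shown later), you can query the mode a node is actually running; this assumes kube-proxy's metrics endpoint is bound to the default port 10249 on localhost:

$ curl -s http://127.0.0.1:10249/proxyMode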

How iptables Implements Traffic Forwarding

$ kubectl get svc
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
nginx02   NodePort   10.100.171.115   <none>        80:8100/TCP   5s

$ iptables-save | grep nginx02
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx02" -m tcp --dport 8100 -j KUBE-MARK-MASQ
# NodePort: traffic arriving on local port 8100 is handed to the KUBE-SVC-Q3YZ4CDE6HCUYEMJ chain
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx02" -m tcp --dport 8100 -j KUBE-SVC-Q3YZ4CDE6HCUYEMJ

-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.100.171.115/32 -p tcp -m comment --comment "default/nginx02 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
# ClusterIP: traffic to the cluster IP (10.100.171.115) is handed to the KUBE-SVC-Q3YZ4CDE6HCUYEMJ chain
-A KUBE-SERVICES -d 10.100.171.115/32 -p tcp -m comment --comment "default/nginx02 cluster IP" -m tcp --dport 80 -j KUBE-SVC-Q3YZ4CDE6HCUYEMJ


$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-674c77bcbb-8vstk   1/1     Running   0          4s
nginx-674c77bcbb-gccft   1/1     Running   0          18d
nginx-674c77bcbb-zxlqv   1/1     Running   0          4s

$ iptables-save | grep KUBE-SVC-Q3YZ4CDE6HCUYEMJ
# iptables rules are evaluated in order, so load balancing is implemented with --probability,
# spreading the chain's traffic almost evenly across the 3 Pods
-A KUBE-SVC-Q3YZ4CDE6HCUYEMJ -m comment --comment "default/nginx02" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-WBVYH6V7WTMZDI55
-A KUBE-SVC-Q3YZ4CDE6HCUYEMJ -m comment --comment "default/nginx02" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZHLASCKR62S4EDUE
-A KUBE-SVC-Q3YZ4CDE6HCUYEMJ -m comment --comment "default/nginx02" -j KUBE-SEP-CLURA2L5LHNLEX32


$ kubectl get endpoints
NAME      ENDPOINTS                                            AGE
nginx02   10.244.115.1:80,10.244.115.41:80,10.244.115.42:80   7m23s
# Finally the traffic is DNATed to port 80 of each container
-A KUBE-SEP-CLURA2L5LHNLEX32 -s 10.244.115.42/32 -m comment --comment "default/nginx02" -j KUBE-MARK-MASQ
-A KUBE-SEP-CLURA2L5LHNLEX32 -p tcp -m comment --comment "default/nginx02" -m tcp -j DNAT --to-destination 10.244.115.42:80
-A KUBE-SEP-WBVYH6V7WTMZDI55 -s 10.244.115.1/32 -m comment --comment "default/nginx02" -j KUBE-MARK-MASQ
-A KUBE-SEP-WBVYH6V7WTMZDI55 -p tcp -m comment --comment "default/nginx02" -m tcp -j DNAT --to-destination 10.244.115.1:80
-A KUBE-SEP-ZHLASCKR62S4EDUE -s 10.244.115.41/32 -m comment --comment "default/nginx02" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZHLASCKR62S4EDUE -p tcp -m comment --comment "default/nginx02" -m tcp -j DNAT --to-destination 10.244.115.41:80
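The probabilities compose because the rules are tried in order; working through the arithmetic shows each Pod still receives an equal share:

# P(pod 1) = 0.3333                                ~ 1/3
# P(pod 2) = (1 - 0.3333) * 0.5                    ~ 1/3
# P(pod 3) = (1 - 0.3333) * (1 - 0.5)              ~ 1/3  (unconditional fallthrough rule)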

How to Change the Proxy Mode

kube-proxy uses iptables mode by default.

$ kubectl logs kube-proxy-jwnwd -n kube-system | grep Proxier
I0804 12:46:45.890812 1 server_others.go:212] Using iptables Proxier.
I0804 12:46:45.890823 1 server_others.go:219] creating dualStackProxier for iptables.

Switching a kubeadm Deployment to IPVS Mode

$ kubectl edit configmap kube-proxy -n kube-system
...
mode: "ipvs"
...


# Delete the kube-proxy Pod so it restarts with the new config; to apply it cluster-wide,
# restart kube-proxy on every node
$ kubectl delete pod kube-proxy-jwnwd -n kube-system
$ kubectl logs kube-proxy-z8fw6 -n kube-system | grep Proxi
I0823 05:50:10.760349 1 server_others.go:274] Using ipvs Proxier.
I0823 05:50:10.760358 1 server_others.go:276] creating dualStackProxier for ipvs.

# Install the IPVS management tool ipvsadm
$ yum install ipvsadm -y
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.100.171.115:80 rr
  -> 10.244.115.1:80              Masq    1      0          0
  -> 10.244.115.41:80             Masq    1      0          0
  -> 10.244.115.42:80             Masq    1      0          0

Loading the IPVS Kernel Modules

# Persist every IPVS module shipped with the running kernel so they load on boot
ipvs_mode_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod_name in $(ls ${ipvs_mode_dir} | grep -o "^[^.]*"); do
  /sbin/modinfo -F filename $mod_name |& grep -qv ERROR && echo $mod_name >> /etc/modules-load.d/ipvs.conf || :
done
systemctl enable --now systemd-modules-load.service
lsmod | grep ip_vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  48
ip_vs                 145497  54 ip_vs_rr,ip_vs_sh,ip_vs_wrr
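To load the core modules immediately in the current session, without waiting for the next boot (module names taken from the lsmod output above):

$ for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do modprobe $mod; done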

Switching a Binary Deployment to IPVS Mode

# For a cluster deployed from binaries, edit the kube-proxy config file instead
$ vim kube-proxy-config.yml
mode: ipvs
ipvs:
  scheduler: "rr"
# Restart kube-proxy to apply the change
$ systemctl restart kube-proxy

iptables vs. IPVS

  • iptables
    • Pros: flexible and powerful; adopted early in k8s; more mature
    • Cons: performance degrades as entries grow (rule matching and updates have linear latency); only one scheduling algorithm
  • ipvs
    • Pros: runs in kernel space with better performance; very rich set of scheduling algorithms (rr, wrr, lc, wlc, ip hash, etc.)

CoreDNS

The in-cluster DNS server for k8s, and the default DNS component today. CoreDNS continuously watches the Kubernetes API and creates a DNS record for every Service, so Services can be resolved by name.

  • CoreDNS provides name resolution for service-to-service calls inside the cluster


# CoreDNS manifest location
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns

  • ClusterIP A record format: <service-name>.<namespace-name>.svc.cluster.local
    • Example: my-svc.my-namespace.svc.cluster.local
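Resolution can be verified from inside the cluster with a throwaway Pod; this sketch assumes the nginx01 Service created earlier in the default namespace:

$ kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx01.default.svc.cluster.local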

Ingress

  • Ingress
    • A set of layer-7 rules for exposing in-cluster services to the outside
    • An abstract k8s resource that defines an external access entry point for applications in the cluster
  • Ingress Controller
    • Reads the Ingress rules, implements the traffic-distribution policy, and handles traffic ingestion and distribution
    • Generates concrete routing rules from the Ingress definitions and load-balances across Pods

Born to Make Up for NodePort's Shortcomings

  • NodePort only supports layer-4 load balancing
  • One port can serve only one service, so ports must be planned in advance
  • Many ports quickly become hard to maintain

The Relationship Between Pod and Ingress


Ingress Controller

The load balancer managed by Ingress, providing cluster-wide load-balancing capability.

There are many Ingress Controller implementations:

  • the nginx controller maintained by the Kubernetes project (ingress-nginx)
  • Haproxy Ingress
  • Istio Ingress
  • Kong Ingress
  • Traefik Ingress

Official Ingress documentation: docs link

Deploying an Ingress Controller

Official ingress-nginx documentation: docs link

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml -O k8s-ingress-controller.yaml
$ vim k8s-ingress-controller.yaml
# Change the Deployment into a DaemonSet
# Use the host network so the ingress controller's ports 80/443 are exposed directly on each node
apiVersion: apps/v1
kind: DaemonSet
...
spec:
  dnsPolicy: ClusterFirst
  hostNetwork: true
  containers:
...
  image: willdockerhub/ingress-nginx-controller:v0.48.1
  image: docker.io/jettech/kube-webhook-certgen:v1.5.1
  image: docker.io/jettech/kube-webhook-certgen:v1.5.1
...
  #- name: webhook
  #  containerPort: 8443
  #  protocol: TCP
...

$ kubectl apply -f k8s-ingress-controller.yaml
$ kubectl get svc,daemonset,job -n ingress-nginx
NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                    AGE
service/ingress-nginx-controller             LoadBalancer   10.108.89.56   <pending>     80:8024/TCP,443:8567/TCP   62s
service/ingress-nginx-controller-admission   ClusterIP      10.108.86.25   <none>        443/TCP                    62s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/ingress-nginx-controller   2         2         0       2            0           kubernetes.io/os=linux   62s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   0/1           62s        62s
job.batch/ingress-nginx-admission-patch    0/1           62s        62s
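Because the controller now uses hostNetwork, every node running it should be listening on ports 80 and 443 directly; a quick check on any node:

$ ss -lntp | grep -E ':80 |:443 '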

Creating the Service

$ kubectl expose deploy nginx-taint --name nginx-taint --port=8080 --target-port=80 --type=ClusterIP --dry-run=client -o yaml > nginx-taint-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-taint
  name: nginx-taint
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-taint
  type: ClusterIP
# If you hit: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"
# delete the ingress-nginx-admission webhook first:
# kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

$ kubectl apply -f nginx-taint-service.yaml
$ kubectl get svc
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
nginx-taint   ClusterIP   10.98.210.7   <none>        8080/TCP   4s
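As with the earlier Service, the cluster IP (taken from the output above) can be checked directly from any node:

$ curl -I http://10.98.210.7:8080/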

Creating the Ingress Rule

$ kubectl create ingress nginx-taint --default-backend=nginx-taint:8080 --rule="test1.isme.com/*=nginx-taint:8080" --dry-run=client -o yaml > nginx-taint-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-taint
spec:
  defaultBackend:
    service:
      name: nginx-taint
      port:
        number: 8080
  rules:
  - host: test2.isme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-taint
            port:
              number: 8080
$ kubectl apply -f nginx-taint-ingress.yaml
$ kubectl get svc,ep,ingress
NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/nginx-taint   ClusterIP   10.98.210.7   <none>        8080/TCP   23h

NAME                    ENDPOINTS                                                        AGE
endpoints/nginx-taint   10.244.115.1:80,10.244.115.11:80,10.244.115.17:80 + 27 more...   23h

NAME                                    CLASS    HOSTS            ADDRESS   PORTS   AGE
ingress.networking.k8s.io/nginx-taint   <none>   test2.isme.com             80      21h

$ kubectl describe ingress nginx-taint
Name:             nginx-taint
Namespace:        default
Address:
Default backend:  nginx-taint:8080 (10.244.115.1:80,10.244.115.11:80,10.244.115.17:80 + 27 more...)
Rules:
  Host            Path  Backends
  ----            ----  --------
  test2.isme.com
                  /     nginx-taint:8080 (10.244.115.1:80,10.244.115.11:80,10.244.115.17:80 + 27 more...)
Annotations:      <none>
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    2m28s (x4 over 21h)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    2m28s (x4 over 21h)  nginx-ingress-controller  Scheduled for sync

On your local machine, bind the hostname to any node IP in the hosts file, then verify in a browser.
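Alternatively, curl's --resolve flag tests the rule without editing the hosts file; 192.168.1.10 is a placeholder for one of your node IPs:

$ curl --resolve test2.isme.com:80:192.168.1.10 -I http://test2.isme.com/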


# Set the Ingress backend port to the Pod port instead
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-taint
spec:
  defaultBackend:
    service:
      name: nginx-taint
      port:
        number: 80
  rules:
  - host: test2.isme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-taint
            port:
              number: 80
$ kubectl describe ingress nginx-taint
Name:             nginx-taint
Namespace:        default
Address:
Default backend:  nginx-taint:80 (10.244.115.1:80,10.244.115.11:80,10.244.115.17:80 + 27 more...)
Rules:
  Host            Path  Backends
  ----            ----  --------
  test2.isme.com
                  /     nginx-taint:80 (10.244.115.1:80,10.244.115.11:80,10.244.115.17:80 + 27 more...)
Annotations:      <none>
Events:
  Type    Reason  Age               From                      Message
  ----    ------  ----              ----                      -------
  Normal  Sync    6s (x5 over 21h)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    6s (x5 over 21h)  nginx-ingress-controller  Scheduled for sync

Configuring an HTTPS Rule

Self-Signing an HTTPS Certificate

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > test2.isme.com-csr.json <<EOF
{
  "CN": "test2.isme.com",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes test2.isme.com-csr.json | cfssljson -bare test2.isme.com
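Optionally inspect the issued certificate before loading it into the cluster:

$ openssl x509 -in test2.isme.com.pem -noout -subject -dates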

Storing the Certificate in a Secret

$ kubectl create secret tls test2-isme-com --cert=test2.isme.com.pem --key=test2.isme.com-key.pem
$ kubectl get secret
NAME             TYPE                DATA   AGE
test2-isme-com   kubernetes.io/tls   2      10s

Configuring TLS on the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-taint
spec:
  defaultBackend:
    service:
      name: nginx-taint
      port:
        number: 80
  tls:
  - hosts:
    - test2.isme.com
    secretName: test2-isme-com
  rules:
  - host: test2.isme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-taint
            port:
              number: 80

Verifying Access

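Verification also works from the command line; -k skips certificate verification since the certificate is self-signed, and 192.168.1.10 is a placeholder for one of your node IPs:

$ curl --resolve test2.isme.com:443:192.168.1.10 -kI https://test2.isme.com/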

How the Ingress Controller Works

The Ingress Controller interacts with the Kubernetes API to dynamically watch for Ingress rule changes in the cluster. From the rules an Ingress defines (for example, which hostname maps to which Service), it generates concrete forwarding rules, applies them to the underlying implementation (e.g., nginx), and reloads its configuration, thereby solving both layer-7 load-balanced traffic forwarding and dynamic rule updates.

Browser -> public load balancer -> Ingress Controller -> Pods spread across the nodes

Browser -> public load balancer -> Service (NodePort) -> Pods spread across the nodes

How the Ingress Controller Finds a Service's Pods from Ingress Rules

  • The Service port referenced in an Ingress rule can be the Service's own port or the Pod's port
  • A Service is a layer-4 access layer that forwards by port; each port fronts the multi-replica Pods of one service
  • A single Service can define multiple port mappings, proxying several ports, or the same port, of its backend
  • An Ingress rule effectively uses the Service plus a port to locate the proxied multi-replica backend Pods and forward traffic to them

Multi-Port Service Configuration

# A multi-port Service; multiple Service ports can proxy different Pod ports or the same one
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-taint
  name: nginx-taint
spec:
  ports:
  - port: 8080
    name: port-8080
    protocol: TCP
    targetPort: 80
  - port: 8081
    name: port-8081
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-taint
  type: ClusterIP
$ kubectl get svc
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
nginx-taint   ClusterIP   10.98.210.7   <none>        8080/TCP,8081/TCP   25h

Ingress Configuration

# Different hostnames map to different Service ports
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-taint
spec:
  defaultBackend:
    service:
      name: nginx-taint
      port:
        number: 80
  tls:
  - hosts:
    - test2.isme.com
    secretName: test2-isme-com
  rules:
  - host: test2.isme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-taint
            port:
              number: 8080
  - host: test1.isme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-taint
            port:
              number: 8081
$ kubectl get ingress
NAME          CLASS    HOSTS                           ADDRESS   PORTS     AGE
nginx-taint   <none>   test2.isme.com,test1.isme.com             80, 443   23h

In the end, both hostnames reach the same backend service; a quick check follows below.
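A final command-line check of both rules (192.168.1.10 is again a placeholder node IP; -k is needed for test2 because its certificate is self-signed):

$ curl --resolve test1.isme.com:80:192.168.1.10 -I http://test1.isme.com/
$ curl --resolve test2.isme.com:443:192.168.1.10 -kI https://test2.isme.com/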