docker-k8s-1.13.1

Summary

Parts of this article were collected from the web and organized for personal use; please do not redistribute.

Docker installation

See the separate post docker安装学习记录 (Docker installation notes).

kubernetes

Role assignment

k8s-master1    10.2.8.44    k8s-master    etcd, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1      10.2.8.65    k8s-node      etcd, kubelet, docker, kube-proxy
k8s-node2      10.2.8.34    k8s-node      etcd, kubelet, docker, kube-proxy

Download required resources

wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz

Environment file configuration

master01="172.16.198.133"
node01="172.16.198.136"
node02="172.16.198.137"

NAME_1=infra1
NAME_2=infra2
NAME_3=infra3
HOST_1=172.16.1.16
HOST_2=172.16.1.17
HOST_3=172.16.1.13

ETCD_DATA_DIR="/data/infra.etcd/"
# Adjust these per node
# ETCD_NAME="${NAME_1}"
# ETCD_IP="${HOST_1}"

Configure hosts

echo -e "172.16.198.133 master01
172.16.198.136 node01
172.16.198.137 node02" >> /etc/hosts

ubuntu

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Modify kernel parameters

vim /etc/sysctl.conf

net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
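
The settings written to /etc/sysctl.conf above do not take effect until they are reloaded; a minimal sketch of applying and verifying them (loading br_netfilter is an assumption for kernels where the bridge key is not yet present):

sysctl -p                                    # reload /etc/sysctl.conf
modprobe br_netfilter 2>/dev/null || true    # make the net.bridge.* keys available if the module is not loaded
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables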

Environment setup

rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
echo "10.0.0.176.master" > /etc/hostname
echo "127.0.0.1 10.0.0.176.master" >> /etc/hosts
sysctl kernel.hostname=10.0.0.176.master

Deploy k8s from binaries

Install the etcd cluster

The cluster here is set up with static configuration; for detailed steps and background, see the etcd-cluster deployment post.

Install etcd

ETCD_VER="v3.2.4"
DOWNLOAD_URL="https://github.com/coreos/etcd/releases/download"
LOCAL_DIR="/data/soft"
mkdir ${LOCAL_DIR}
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o ${LOCAL_DIR}/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p ${LOCAL_DIR}/etcd && tar xzvf ${LOCAL_DIR}/etcd-${ETCD_VER}-linux-amd64.tar.gz -C ${LOCAL_DIR}/etcd --strip-components=1

cd ${LOCAL_DIR}/etcd/ && cp etcd etcdctl /usr/local/bin
etcd -version

Static configuration

With static configuration, the cluster's membership information is assigned up front; the nodes are then started independently and form a cluster according to that configuration. Here, three etcd instances are started with the following settings.

mkdir /data/bin -p && cd /data/bin && touch etcd.sh
# Base configuration, to be added on every node
NAME_1=infra1
NAME_2=infra2
NAME_3=infra3
HOST_1=172.16.1.16
HOST_2=172.16.1.17
HOST_3=172.16.1.13

ETCD_DATA_DIR="/data/infra.etcd/"
# Adjust these per node
# ETCD_NAME="${NAME_1}"
# ETCD_IP="${HOST_1}"

ETCD_LISTEN_PEER_URLS="http://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${ETCD_IP}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${ETCD_IP}:2380"
ETCD_INITIAL_CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-test"
ETCD_ADVERTISE_CLIENT_URLS="http://${ETCD_IP}:2379"

pkill etcd
etcd --name ${ETCD_NAME} --data-dir ${ETCD_DATA_DIR} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS} \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster ${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state ${ETCD_INITIAL_CLUSTER_STATE} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN}
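
For reference, the only lines that need to change on the other nodes are the two commented variables; for example, on the second node (infra2):

# /data/bin/etcd.sh on infra2
ETCD_NAME="${NAME_2}"
ETCD_IP="${HOST_2}"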

Configure supervisor

For details, see the etcd-cluster deployment post.

mkdir -p /etc/supervisor/conf.d/
mkdir -p /data/logs/supervisor/

vim /etc/supervisor/supervisord.conf

[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)

[supervisord]
logfile=/data/logs/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/data/logs/supervisor ; ('AUTO' child log dir, default $TEMP)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket

[include]
files = /etc/supervisor/conf.d/*.conf

#########################################

vim /etc/supervisor/conf.d/etcd.conf

[program:etcd]
directory = /data/bin/
command = /bin/sh etcd.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/etcd.log
stderr_logfile = /data/logs/supervisor/etcd_err.log

supervisord -c /etc/supervisor/supervisord.conf

Environment variable configuration

This article originally used the etcd v2 API; the v3 API differs substantially, so the configuration below is redone against the v3 interface.

cd /data/bin && vim etcd.env

#export ETCDCTL_API=3
export ETCDCTL_API=2
export HOST_1=172.16.1.16
export HOST_2=172.16.1.17
export HOST_3=172.16.1.13
export ENDPOINTS=http://$HOST_1:2379,http://$HOST_2:2379,http://$HOST_3:2379
alias etcdctl='etcdctl --endpoints=$ENDPOINTS'

source /data/bin/etcd.env

Cluster verification

Start each node with the configuration above. Once started, the nodes enter leader election; if you see a flood of timeouts, check whether the host firewalls are down and whether the hosts can reach each other on port 2380. After the cluster has formed, check its state with the commands below.
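
Before inspecting membership, a quick reachability probe of the peer port helps rule out firewall issues; a minimal sketch, assuming nc (netcat) is installed and using the host IPs configured above:

for h in 172.16.1.16 172.16.1.17 172.16.1.13; do
  nc -z -w 2 ${h} 2380 && echo "${h}:2380 reachable" || echo "${h}:2380 unreachable"
done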

etcdctl member list -w table
+------------------+---------+--------+-------------------------+-------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+-------------------------+-------------------------+
| cd1adc80e78ec1cf | started | infra3 | http://172.16.1.13:2380 | http://172.16.1.13:2379 |
| e74b186f7d620f76 | started | infra2 | http://172.16.1.17:2380 | http://172.16.1.17:2379 |
| ea91822d14c58cfb | started | infra1 | http://172.16.1.16:2380 | http://172.16.1.16:2379 |
+------------------+---------+--------+-------------------------+-------------------------+

etcdctl cluster-health

etcdctl endpoint health
172.16.1.13:2379 is healthy: successfully committed proposal: took = 1.683776ms
172.16.1.17:2379 is healthy: successfully committed proposal: took = 2.115015ms
172.16.1.16:2379 is healthy: successfully committed proposal: took = 2.494229ms

Run the following on one of the nodes:

# api2
etcdctl mkdir /k8s/network
etcdctl set /k8s/network/config '{"network":"172.100.0.0/16"}'
{"network":"172.100.0.0/16"}

# api3
etcdctl put /k8s/network/config '{"network":"172.100.0.0/16"}'
OK

If the same query returns the value on the other nodes as well, the cluster has been set up successfully.

etcdctl get /k8s/network/config
/k8s/network/config
{"network":"172.100.0.0/16"}

The intent of this command is that the container instances run by Docker all receive addresses within the 172.100.0.0/16 network.

flanneld reads the config value under /k8s/network, takes over Docker's address allocation, and bridges the network between Docker and the host machines.
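
Once flanneld is running it writes its lease to /run/flannel/subnet.env, which the Docker configuration below sources. A representative example of that file (the subnet and MTU values here are illustrative; the actual values depend on the lease):

# /run/flannel/subnet.env (illustrative values)
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.46.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false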

flannel

Install flannel

FLANNEL_VERSION="v0.8.0"
LOCAL_DIR="/data/soft"
FLANNEL_URL="https://github.com/coreos/flannel/releases/download"
mkdir ${LOCAL_DIR}
cd ${LOCAL_DIR} && wget ${FLANNEL_URL}/${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz
tar xvf flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz
cp flanneld /usr/local/bin && flanneld -version

Start flannel

cd /data/bin && touch flanneld.sh

HOST_1=172.16.1.16
HOST_2=172.16.1.17
HOST_3=172.16.1.13
ENDPOINTS=http://$HOST_1:2379,http://$HOST_2:2379,http://$HOST_3:2379

pkill flanneld
flanneld \
-etcd-endpoints=${ENDPOINTS} \
-etcd-prefix=/k8s/network

supervisor

vim /etc/supervisor/conf.d/flanneld.conf
[program:flanneld]
directory = /data/bin/
command = /bin/sh flanneld.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/flanneld.log
stderr_logfile = /data/logs/supervisor/flanneld_err.log

supervisorctl update
supervisorctl status
etcd RUNNING pid 30867, uptime 0:34:13
flanneld RUNNING pid 41079, uptime 0:00:25

Docker configuration

source /run/flannel/subnet.env

cat <<EOF > /etc/docker/daemon.json
{
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 3,
  "disable-legacy-registry": true,
  "insecure-registries": [
    "http://r.isme.pub"
  ],
  "registry-mirrors": [
    "https://uss7pbj4.mirror.aliyuncs.com"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "labels": "io.cass.isme.pub,log.ignore",
    "max-size": "1g",
    "max-file": "10"
  },
  "graph": "/data/docker",
  "storage-driver": "overlay",
  "bip": "${FLANNEL_SUBNET}",
  "mtu": ${FLANNEL_MTU},
  "hosts": [
    "tcp://127.0.0.1:4243",
    "unix:///var/run/docker.sock"
  ]
}
EOF

rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  ip link set dev docker0 down
  ip link delete docker0
fi


systemctl daemon-reload
systemctl enable docker
systemctl restart docker
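
A quick check that Docker picked up the flannel lease is to compare the docker0 address with the subnet file:

cat /run/flannel/subnet.env
ip addr show docker0 | grep inet    # the docker0 address should fall inside FLANNEL_SUBNET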

Install k8s

KUB_VERSION="v1.7.2"
LOCAL_DIR="/data/soft"
K8S_URL="https://github.com/kubernetes/kubernetes/releases/download"
mkdir ${LOCAL_DIR}
cd ${LOCAL_DIR} && wget ${K8S_URL}/${KUB_VERSION}/kubernetes.tar.gz
tar xvf kubernetes.tar.gz && cd kubernetes/cluster
./get-kube-binaries.sh
cd ../server/ && tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-proxy kube-scheduler /usr/local/bin/
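
As with etcd and flannel, a quick version check confirms that the binaries were installed correctly:

kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version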

Start the k8s cluster

Master

mkdir /data/logs/k8s/{apiserver,controller-manager,scheduler} -p
cd /data/bin && touch kube.env kube-api.sh kube-con.sh kube-sche.sh

HOST_1=172.16.1.16
HOST_2=172.16.1.17
HOST_3=172.16.1.13
ENDPOINTS=http://$HOST_1:2379,http://$HOST_2:2379,http://$HOST_3:2379
LOG_DIR="/data/logs/k8s"

source /data/bin/kube.env
THIS_IP=${HOST_1}
pkill kube-apiserver
kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd_servers=${ENDPOINTS} \
--logtostderr=false \
--allow-privileged=false \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=${THIS_IP} \
--log_dir=${LOG_DIR}/apiserver \
--service-cluster-ip-range=172.100.0.0/16

source /data/bin/kube.env
THIS_IP=${HOST_1}
pkill kube-controller-manager
kube-controller-manager \
--master=${THIS_IP}:8080 \
--enable-hostpath-provisioner=false \
--logtostderr=false \
--log_dir=${LOG_DIR}/controller-manager

source /data/bin/kube.env
THIS_IP=${HOST_1}
pkill kube-scheduler
kube-scheduler --master=${THIS_IP}:8080 \
--logtostderr=false \
--log_dir=${LOG_DIR}/scheduler

master-supervisor

cd /etc/supervisor/conf.d/ && touch kube-api.conf kube-con.conf kube-sche.conf

vim kube-api.conf
[program:kube-api]
directory = /data/bin/
command = /bin/sh kube-api.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/kube-api.log
stderr_logfile = /data/logs/supervisor/kube-api_err.log

vim kube-con.conf
[program:kube-con]
directory = /data/bin/
command = /bin/sh kube-con.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/kube-con.log
stderr_logfile = /data/logs/supervisor/kube-con_err.log

vim kube-sche.conf
[program:kube-sche]
directory = /data/bin/
command = /bin/sh kube-sche.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/kube-sche.log
stderr_logfile = /data/logs/supervisor/kube-sche_err.log

supervisorctl update
kube-api: added process group
kube-con: added process group
kube-sche: added process group

supervisorctl status
etcd RUNNING pid 30867, uptime 2:11:59
flanneld RUNNING pid 41079, uptime 1:38:11
kube-api RUNNING pid 41543, uptime 0:00:04
kube-con RUNNING pid 41544, uptime 0:00:04
kube-sche RUNNING pid 41547, uptime 0:00:04
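
With the three control-plane components running, the API server can be probed over the insecure port; a minimal check, assuming a kubectl binary (for example from the client tarball downloaded earlier) is on the PATH:

curl -s http://172.16.1.16:8080/healthz
kubectl -s http://172.16.1.16:8080 get componentstatuses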

node

mkdir /data/logs/k8s/{kubelet,proxy} -p
cd /data/bin && touch kube.env kubelet.sh kube-proxy.sh

HOST_1=172.16.1.16
HOST_2=172.16.1.17
HOST_3=172.16.1.13

MASTER=${HOST_1}
THIS_IP=${HOST_2}
LOG_DIR="/data/logs/k8s"

source /data/bin/kube.env
pkill kubelet
kubelet \
--address=0.0.0.0 \
--port=10250 \
--log_dir=${LOG_DIR}/kubelet \
--hostname_override=${THIS_IP} \
--api_servers=http://${MASTER}:8080 \
--logtostderr=false

source /data/bin/kube.env
pkill kube-proxy
kube-proxy \
--master=${MASTER}:8080 \
--logtostderr=false

node-supervisor

cd /etc/supervisor/conf.d/ && touch kubelet.conf kube-proxy.conf

vim kubelet.conf
[program:kubelet]
directory = /data/bin/
command = /bin/sh kubelet.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/kubelet.log
stderr_logfile = /data/logs/supervisor/kubelet_err.log

vim kube-proxy.conf
[program:kube-proxy]
directory = /data/bin/
command = /bin/sh kube-proxy.sh
user = root
autostart = true
autorestart = true
stdout_logfile = /data/logs/supervisor/kube-proxy.log
stderr_logfile = /data/logs/supervisor/kube-proxy_err.log


supervisorctl update
kube-proxy: added process group
kubelet: added process group

supervisorctl status
etcd RUNNING pid 30890, uptime 2:25:32
flanneld RUNNING pid 41025, uptime 1:56:50
kube-proxy RUNNING pid 41805, uptime 0:01:34
kubelet RUNNING pid 41742, uptime 0:01:38
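
Once kubelet and kube-proxy are running, the node registers itself with the API server; a quick check from the master (address as configured above):

kubectl -s http://172.16.1.16:8080 get nodes -o wide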

Configure skydns

etcdctl mk /skydns/config '{"dns-addr":"172.100.81.2:53","ttl":3600,"domain":"sky."}'
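
To confirm the record was written (using the v2 API, matching the mk command above):

etcdctl get /skydns/config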

Appendix

The six namespace-level resource isolation mechanisms that Docker containers rely on:

1. UTS (UNIX Time-sharing System)
Allows an independent hostname/domain_name namespace, so the container appears as a separate node.
2. IPC (Inter-Process Communication)
Provides isolated inter-process communication, covering semaphores, message queues, pipes, shared memory and other IPC resources.
3. PID (Process ID)
Lets a child namespace in the namespace tree allocate its own PID range; these PIDs are invisible to sibling namespaces and only visible to the parent namespace. (A PID in a child namespace is mapped to a different PID in the parent, so from the parent's point of view each child process has two PIDs.)
4. mount
Mount-point namespace: by default, mounts made in one namespace take effect only within that namespace and do not affect others; shared mounts can be used so namespaces affect each other, or slave mounts so that only the master side affects the slave side.
5. network
Each network namespace has its own network resources (network devices, the IPv4/IPv6 stacks, firewall rules, the /proc/net and /sys/class/net directories, and so on).
6. user
Each user namespace has its own IDs, attributes, keys, root directory, hierarchy and so on.
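
One way to see these namespaces for a running container from the host; CONTAINER is a placeholder for a container name or ID:

pid=$(docker inspect -f '{{.State.Pid}}' CONTAINER)
ls -l /proc/${pid}/ns    # shows the uts, ipc, pid, mnt, net and user namespace handles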


docker:
- docker ps -a
- docker build -t xxx . #build an image from a Dockerfile
- docker images
- docker exec -it {ID|NAME} /bin/bash | /bin/sh
- docker run -it {ID|NAME} -p -v
- docker start {ID|NAME}
- docker stop {ID|NAME}
- docker rm {ID|NAME}
- docker rmi {ID|NAME}
- docker save coredns/coredns:1.0.0 | gzip > coredns.tar.gz #pack an existing image into an archive
- docker load -i IMAGE #load a local image archive
- docker stats {ID|NAME} #show a container's resource usage; omit the argument to show all containers
- docker cp
- docker commit -p -m 'xxx' 1a889d5bbf99 xxx #commit a container as a tagged image; -p pauses the container during the commit instead of stopping it
- docker push xxx #push to an image registry


k8s:
- kubectl get pods -o wide
- kubectl get pod xxx -o yaml #dump the pod's yaml definition
- kubectl get nodes -o wide
- kubectl set image deployment xxx xxx=image_url #change the image used by a deployment
- kubectl describe pod mysql-deploy-766bd7dbcb-7nxvw #show the pod's creation and runtime events
- kubectl scale deploy/kube-dns --replicas=3 #change the replica count of a deployment
- kubectl create -f xxx.yaml #create resources
- kubectl delete deploy mysql-deploy #delete a resource
- kubectl get svc -o wide
- kubectl get ep SVC_NAME #show the pods bound to a service
- kubectl get rs
- kubectl get deploy/DEPLOY-NAME
- kubectl get all #list all resource types
- kubectl get componentstatuses #show the health of the k8s components; short form: kubectl get cs
- kubectl describe deploy/DEPLOY-NAME
- kubectl rollout status deploy/DEPLOY-NAME
- kubectl label nodes 171 disktype=ssd #add a label
- kubectl label nodes 171 disktype- #remove a label
- kubectl label nodes 171 disktype=hdd --overwrite #change a label
- kubectl logs POD_NAME #show a pod's logs, useful for troubleshooting
- kubectl get nodes -o json | jq '.items[] | .spec' #show the CIDR assigned to each node
- kubectl delete pod NAME --grace-period=0 --force #force-delete a resource; the --force option was dropped in version 1.3
- kubectl replace -f xxx.yaml #update a resource from its yaml definition
- kubectl get secret -n kube-system | grep dashboard #find a secret
- kubectl describe secret -n kube-system kubernetes-dashboard-token-ld92d #show that secret's token
- kubectl scale --replicas=3 deployment/xxxx #scale out the deployment's replica set
- kubectl cordon NODENAME #mark a node as unschedulable (maintenance); no new pods will be scheduled onto it
- kubectl drain NODENAME #drain a node: mark it unschedulable and evict its pods onto other healthy nodes. This performs two steps: 1. cordon the node 2. evict the pods on it
- kubectl drain node2 --delete-local-data --force --ignore-daemonsets #ignore daemonsets, otherwise the drain cannot complete if daemonset pods are present
- kubectl uncordon NODENAME #clear the cordoned/drained state
- kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts=^.* & #proxy the apiserver on a local port so that clients can call it without authenticating
- kubectl patch deployment my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"update-time": "2018-04-11 12:15" }}}}}' #after updating a configmap, pods that mount it do not refresh their environment variables automatically; patching an annotation on deployment.spec.template triggers a rolling update of the pods


**k8s restful api**
#### GET method
## labelSelector
# get the pods with label {app: pimsystemdev}
http://192.168.9.27:8001/api/v1/namespaces/default/pods?labelSelector=app%3Dpimsystemdev

# watch the pods with label {app: pimsystemdev}
http://192.168.9.27:8001/api/v1/namespaces/default/pods?watch&labelSelector=app%3Dpimsystemdev

## fieldSelector
# filter by a top-level field
http://192.168.9.27:8001/api/v1/namespaces/default/pods?fieldSelector=metadata.name%3Dykstaskdev-657c7f56fc-7vnd4