11 - Building a Highly Available Kubernetes Cluster

Summary

This article is adapted from material found online and kept as a personal study record. Please do not redistribute.

Environment Preparation

Software Versions

Software            Version
Operating system    CentOS 7.3
Docker              19-ce
Kubernetes          1.21.13

Server Plan

Role         IP         Components
k8s-master1  10.0.1.67  docker, etcd
k8s-master2  10.0.1.68  docker, etcd
k8s-master3  10.0.1.69  docker, etcd
k8s-node1    10.0.1.70

Architecture Diagram

[architecture diagram image (image-20220411191833112) not included]

Environment Initialization

Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
$ setenforce 0 # temporary

Disable swap:
$ swapoff -a # temporary
$ vim /etc/fstab # permanent: comment out the swap line

Set the hostname:
$ hostnamectl set-hostname <hostname>

Add hosts entries on the masters:
$ cat >> /etc/hosts << EOF
10.0.1.67 k8s-master1
10.0.1.68 k8s-master2
10.0.1.69 k8s-master3
10.0.1.70 k8s-node1
EOF

Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sysctl --system # apply

Time synchronization:
$ yum install ntpdate -y
$ ntpdate time.windows.com
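The hosts entries above have to match the server plan on every node, so they are worth generating rather than typing. A minimal sketch that builds the block from the plan (it only prints the result here; on a real node, append it to /etc/hosts):

```shell
#!/bin/sh
# Build the hosts entries for every node in the server plan above.
HOSTS_BLOCK=""
for entry in \
    "10.0.1.67 k8s-master1" \
    "10.0.1.68 k8s-master2" \
    "10.0.1.69 k8s-master3" \
    "10.0.1.70 k8s-node1"
do
    HOSTS_BLOCK="${HOSTS_BLOCK}${entry}
"
done
# On a real node: printf '%s' "$HOSTS_BLOCK" >> /etc/hosts
printf '%s' "$HOSTS_BLOCK"
```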

Issuing Self-Signed Certificates

Obtain the CFSSL tools

wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget http://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo

Create the CA certificate signing request (CSR) file

$ cat ca-csr.json
{
    "CN": "isme",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "isme",
            "OU": "sre"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

CN: Common Name. Browsers use this field to verify whether a site is legitimate; usually set to the domain name.

C: Country.

ST: State or province.

L: Locality (city).

O: Organization Name (company).

OU: Organizational Unit Name (department within the company).

Generate the CA certificate and private key

$ cfssl gencert -initca ca-csr.json  | cfssl-json -bare ca

2020/03/09 14:49:54 [INFO] generating a new CA key and certificate from CSR
2020/03/09 14:49:54 [INFO] generate received request
2020/03/09 14:49:54 [INFO] received CSR
2020/03/09 14:49:54 [INFO] generating key: ecdsa-256
2020/03/09 14:49:54 [INFO] encoded CSR
2020/03/09 14:49:54 [INFO] signed certificate with serial number 124863084763763669006466983451404543659034533255

Deploying the etcd Cluster from Binaries

Create the CA-based config file

$ cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
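All profiles above use an expiry of 175200h, which works out to exactly 20 years; a quick sanity check:

```shell
#!/bin/sh
# 175200 hours / 24 = 7300 days; 7300 days / 365 = 20 years.
HOURS=175200
YEARS=$((HOURS / 24 / 365))
echo "${HOURS}h = ${YEARS} years"
```

Note that the peer profile is the one used when signing the etcd certificate below, because it carries both server auth and client auth usages on the same certificate.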

Create the etcd certificate signing request file

$ cat etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [
        "10.0.1.67",
        "10.0.1.68",
        "10.0.1.69",
        "10.0.1.70"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "isme",
            "OU": "sre"
        }
    ]
}

Generate the etcd certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/03/09 15:29:37 [INFO] generate received request
2020/03/09 15:29:37 [INFO] received CSR
2020/03/09 15:29:37 [INFO] generating key: ecdsa-256
2020/03/09 15:29:37 [INFO] encoded CSR
2020/03/09 15:29:37 [INFO] signed certificate with serial number 229056313165587796775974500832785561391588328288
2020/03/09 15:29:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Ansible playbook for the etcd cluster

# Directory layout
├── certs
├── certs.test
├── cfssl
├── playbook
├── resource
│   ├── coredns
│   ├── ingress
│   ├── istio
│   ├── kubesphere
│   ├── kuboard
│   ├── openebs
│   ├── prometheus
│   └── weave
└── roles
    ├── etcd
    ├── flannel
    ├── k8s-node
    ├── k8s-server
    ├── supervisor
    └── tools

# Inventory (hosts) file
$ cat hosts
[etcd]
10.0.1.67
10.0.1.68
10.0.1.69

$ cat etcd.sh
ansible-playbook ./playbook/etcd.yml -i ./hosts --tag new

$ cat ./playbook/etcd.yml
---
# ******************************************************
# Author : wangping
# Last modified: 2020-03-09 17:58
# Version : v1
# Filename : etcd.yml
# Description : install the etcd cluster
# ******************************************************

- hosts: "etcd"
  vars_files: vars.yml
  roles:
    - ./roles/etcd

# etcd role directory layout
.
├── files
│   ├── etcd-v3.1.20-linux-amd64.tar.gz
│   ├── etcd-v3.2.28-linux-amd64.tar.gz
│   ├── etcd-v3.3.18-linux-amd64.tar.gz
│   └── etcd-v3.4.4-linux-amd64.tar.gz
├── tasks
│   ├── etcd-certs.yml
│   ├── etcd-install.yml
│   ├── etcd-restart.yml
│   ├── etcd-start.yml
│   ├── etcd-status.yml
│   ├── etcd-stop.yml
│   ├── etcd-supervisor.yml
│   └── main.yml
├── templates
│   ├── etcd.conf.j2
│   └── etcd.j2
└── vars

# etcd startup script template (rendered to {{ ETCD_BIN }} and run by supervisord)
cat templates/etcd.j2
ETCD_NAME=infra{{ groups['etcd'].index(ansible_default_ipv4.address) }}
ETCD_IP={{ ansible_default_ipv4.address }}
ETCD_DATA_DIR={{ ETCD_DATA_DIR }}
ETCD_CERTS_DIR={{ ETCD_CERTS_DIR }}

{% macro initial_cluster() -%}
{% for host in groups['etcd'] -%}
infra{{ loop.index0 }}=https://{{ hostvars[host].ansible_facts.default_ipv4.address }}:2380
{%- if not loop.last -%},{%- endif -%}
{%- endfor -%}
{% endmacro -%}

ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_INITIAL_CLUSTER={{ initial_cluster() }}
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-isme"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379,http://127.0.0.1:2379"

{{ ETCD_WORK_DIR }}/etcd --name ${ETCD_NAME} --data-dir ${ETCD_DATA_DIR} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS} \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster ${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state ${ETCD_INITIAL_CLUSTER_STATE} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--quota-backend-bytes 8000000000 \
--ca-file ${ETCD_CERTS_DIR}/ca.pem \
--cert-file ${ETCD_CERTS_DIR}/etcd-peer.pem \
--key-file ${ETCD_CERTS_DIR}/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ${ETCD_CERTS_DIR}/ca.pem \
--peer-ca-file ${ETCD_CERTS_DIR}/ca.pem \
--peer-cert-file ${ETCD_CERTS_DIR}/etcd-peer.pem \
--peer-key-file ${ETCD_CERTS_DIR}/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ${ETCD_CERTS_DIR}/ca.pem \
--log-output stdout
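The initial_cluster() macro in the template above renders a comma-separated member list. For the three planned etcd hosts, its output can be reproduced by hand with a short loop:

```shell
#!/bin/sh
# Rebuild what the Jinja2 initial_cluster() macro renders for the
# etcd hosts in this document's server plan.
set -- 10.0.1.67 10.0.1.68 10.0.1.69
CLUSTER=""
i=0
for ip in "$@"; do
    # Prepend a comma before every member except the first.
    [ -n "$CLUSTER" ] && CLUSTER="${CLUSTER},"
    CLUSTER="${CLUSTER}infra${i}=https://${ip}:2380"
    i=$((i + 1))
done
echo "$CLUSTER"
```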

# etcd supervisord program config
cat templates/etcd.conf.j2
[program:etcd]
directory = {{ K8S_BIN }}
command = sh {{ ETCD_BIN }}
user = root
autostart = true
autorestart = true
startsecs = 3
startretries = 3
exitcodes = 0,2
stopsignal = QUIT
stopwaitsecs = 10
redirect_stderr = true
stdout_logfile = {{ K8S_LOG_DIR }}/etcd.log
stdout_logfile_maxbytes = 64MB
stdout_logfile_backups = 10
stdout_capture_maxbytes = 1MB
stdout_events_enabled = false

# etcd tasks/main.yml
cat tasks/main.yml
- include: etcd-install.yml
  tags:
    - install
    - new

- include: etcd-certs.yml
  tags:
    - certs
    - new

- include: etcd-supervisor.yml
  tags:
    - supervisor
    - new

- include: etcd-stop.yml
  tags:
    - stop

- include: etcd-start.yml
  tags:
    - start

- include: etcd-restart.yml
  tags:
    - restart

- include: etcd-status.yml
  tags:
    - status
    - new

# Install etcd
cat tasks/etcd-install.yml
- name: "Check whether the k8s directory exists: {{ K8S_SOFT_DIR }}"
  stat:
    path: "{{ K8S_SOFT_DIR }}"
  register: result_k8s_work_dir

- name: "Create the k8s directory: {{ K8S_SOFT_DIR }}"
  file:
    path: "{{ K8S_SOFT_DIR }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: directory
  when: not result_k8s_work_dir.stat.exists

- name: "Check whether the etcd package exists: {{ K8S_SOFT_DIR }}/{{ ETCD_PKG }}"
  stat:
    path: "{{ K8S_SOFT_DIR }}/{{ ETCD_PKG }}"
  register: result_etcd_pkg

- name: "Remove an existing etcd package: {{ ETCD_PKG }}"
  file:
    path: "{{ K8S_SOFT_DIR }}/{{ ETCD_PKG }}"
    state: absent
  ignore_errors: yes
  when: result_etcd_pkg.stat.exists

- name: "Copy the etcd package: {{ ETCD_PKG }}"
  copy:
    src: "{{ ETCD_PKG }}"
    dest: "{{ K8S_SOFT_DIR }}/{{ ETCD_PKG }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"

- name: "Unpack the etcd package: {{ ETCD_PKG }}"
  unarchive:
    src: "{{ K8S_SOFT_DIR }}/{{ ETCD_PKG }}"
    dest: "{{ K8S_SOFT_DIR }}"
    copy: no
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"

- name: "Create the etcd directory symlink: {{ ETCD_WORK_DIR }}"
  file:
    src: "{{ K8S_SOFT_DIR }}/{{ ETCD_VERSION }}"
    dest: "{{ ETCD_WORK_DIR }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: link

- name: "Check whether the etcd data directory exists: {{ ETCD_DATA_DIR }}"
  stat:
    path: "{{ ETCD_DATA_DIR }}"
  register: result_etcd_data_dir

- name: "Create the etcd data directory: {{ ETCD_DATA_DIR }}"
  file:
    path: "{{ ETCD_DATA_DIR }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: directory
  when: not result_etcd_data_dir.stat.exists

- name: "Create the etcdctl symlink: {{ ETCD_WORK_DIR }}/etcdctl"
  file:
    src: "{{ ETCD_WORK_DIR }}/etcdctl"
    dest: "/usr/bin/etcdctl"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: link

# Distribute the etcd certificates
cat tasks/etcd-certs.yml
- name: "Check whether the etcd certificate directory exists: {{ ETCD_CERTS_DIR }}"
  stat:
    path: "{{ ETCD_CERTS_DIR }}"
  register: result_etcd_certs_dir

- name: "Create the etcd certificate directory: {{ ETCD_CERTS_DIR }}"
  file:
    path: "{{ ETCD_CERTS_DIR }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: directory
  when: not result_etcd_certs_dir.stat.exists

- name: "Copy the etcd certificates: {{ ETCD_CERTS_DIR }}"
  copy:
    src: "{{ LOCAL_CERTS_DIR }}/{{ item.name }}"
    dest: "{{ ETCD_CERTS_DIR }}/{{ item.name }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    mode: "{{ item.mode }}"
  with_items:
    - name: "ca.pem"
      mode: "0644"
    - name: "etcd-peer.pem"
      mode: "0644"
    - name: "etcd-peer-key.pem"
      mode: "0600"

# Distribute the etcd supervisor config
cat tasks/etcd-supervisor.yml
- name: "Check whether the bin directory exists: {{ K8S_BIN }}"
  stat:
    path: "{{ K8S_BIN }}"
  register: result_k8s_bin_dir

- name: "Create the bin directory: {{ K8S_BIN }}"
  file:
    path: "{{ K8S_BIN }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    state: directory
  when: not result_k8s_bin_dir.stat.exists

- name: "Generate the etcd startup script: {{ ETCD_BIN }}"
  template:
    src: "{{ ETCD_BIN_TEMPLATE }}"
    dest: "{{ ETCD_BIN }}"
    owner: "{{ REMOTE_USER }}"
    group: "{{ REMOTE_GROUP }}"
    mode: 0755

- name: "Generate the etcd supervisor config: {{ ETCD_SUPERVISOR_CONFIG }}"
  template:
    src: "{{ ETCD_SUPERVISOR_CONFIG_TEMPLATE }}"
    dest: "{{ ETCD_SUPERVISOR_CONFIG }}"
    owner: "root"
    group: "root"
    mode: 0644

- name: "Reload supervisord"
  shell: "supervisorctl update"
  become: True
  become_user: "root"

Starting with a Configuration File

https://etcd.io/docs/v3.5/op-guide/configuration/

https://github.com/etcd-io/etcd/blob/main/etcd.conf.yml.sample#L96

name: infra{{ groups['etcd'].index(inventory_hostname) }}
data-dir: {{ ETCD_DATA_DIR }}

max-request-bytes: 10485760
quota-backend-bytes: 8000000000
heartbeat-interval: 1000
election-timeout: 5000

{% macro initial_cluster() -%}
{% for host in groups['etcd'] -%}
infra{{ loop.index0 }}=https://{{ hostvars[host].inventory_hostname }}:2380
{%- if not loop.last -%},{%- endif -%}
{%- endfor -%}
{% endmacro -%}
initial-cluster: {{ initial_cluster() }}
listen-peer-urls: https://{{ inventory_hostname }}:2380
listen-client-urls: https://{{ inventory_hostname }}:2379,http://127.0.0.1:2379
initial-advertise-peer-urls: https://{{ inventory_hostname }}:2380
advertise-client-urls: https://{{ inventory_hostname }}:2379,http://127.0.0.1:2379

initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'

client-transport-security:
  cert-file: {{ ETCD_CERTS_DIR }}/etcd-peer.pem
  key-file: {{ ETCD_CERTS_DIR }}/etcd-peer-key.pem
  client-cert-auth: true
  trusted-ca-file: {{ ETCD_CERTS_DIR }}/ca.pem
  auto-tls: false
peer-transport-security:
  cert-file: {{ ETCD_CERTS_DIR }}/etcd-peer.pem
  key-file: {{ ETCD_CERTS_DIR }}/etcd-peer-key.pem
  client-cert-auth: true
  trusted-ca-file: {{ ETCD_CERTS_DIR }}/ca.pem
  auto-tls: false

Command to start etcd with the config file

etcd --config-file /home/etcd/etcd.cnf

Full configuration reference

name: 'default'
data-dir:
wal-dir:
snapshot-count: 10000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: http://localhost:2380
listen-client-urls: http://localhost:2379
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: http://localhost:2380
advertise-client-urls: http://localhost:2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster:
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file:
  key-file:
  client-cert-auth: false
  trusted-ca-file:
  auto-tls: false
peer-transport-security:
  cert-file:
  key-file:
  client-cert-auth: false
  trusted-ca-file:
  auto-tls: false
self-signed-cert-validity: 1
log-level: debug
logger: zap
log-outputs: [stderr]
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"

Check cluster status

ETCDCTL_API=3 /home/work/k8s/etcd/etcdctl --cacert=/home/work/k8s/certs/etcd/ca.pem --cert=/home/work/k8s/certs/etcd/etcd-peer.pem --key=/home/work/k8s/certs/etcd/etcd-peer-key.pem --endpoints="https://10.0.1.67:2379,https://10.0.1.68:2379,https://10.0.1.69:2379" endpoint health --write-out=table

+---------------------------+--------+------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+---------------------------+--------+------------+-------+
| https://10.0.1.67:2379 | true | 1.749195ms | |
| https://10.0.1.69:2379 | true | 1.419206ms | |
| https://10.0.1.68:2379 | true | 2.371087ms | |
+---------------------------+--------+------------+-------+

Installing and Configuring Docker

Install Docker from the Aliyun mirror

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker

Modify the Docker configuration

cat /etc/docker/daemon.json

{
    "data-root": "/opt/docker/data/container",
    "exec-opts": [],
    "exec-root": "/opt/docker/data/bootstrap",
    "experimental": false,
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "labels": [],
    "live-restore": true,
    "log-driver": "json-file",
    "log-opts": {
        "labels": "io.isme.log",
        "max-size": "1g",
        "max-file": "100"
    },
    "pidfile": "/var/run/docker.pid",
    "max-concurrent-downloads": 3,
    "max-concurrent-uploads": 3,
    "debug": false,
    "hosts": [
        "unix:///var/run/docker.sock"
    ],
    "log-level": "info",
    "default-ulimits": {
        "nofile": {
            "Name": "nofile",
            "Hard": 102400,
            "Soft": 102400
        },
        "nproc": {
            "Name": "nproc",
            "Hard": 102400,
            "Soft": 102400
        }
    },
    "init": true,
    "bip": "172.16.100.1/16",
    "registry-mirrors": [
        "http://r.isme.com"
    ],
    "insecure-registries": [
        "http://r.isme.com"
    ]
}

cat /home/work/docker_isme/conf/default.json
{
"defaultAction": "SCMP_ACT_ERRNO",
"archMap": [
{
"architecture": "SCMP_ARCH_X86_64",
"subArchitectures": [
"SCMP_ARCH_X86",
"SCMP_ARCH_X32"
]
},
{
"architecture": "SCMP_ARCH_AARCH64",
"subArchitectures": [
"SCMP_ARCH_ARM"
]
},
{
"architecture": "SCMP_ARCH_MIPS64",
"subArchitectures": [
"SCMP_ARCH_MIPS",
"SCMP_ARCH_MIPS64N32"
]
},
{
"architecture": "SCMP_ARCH_MIPS64N32",
"subArchitectures": [
"SCMP_ARCH_MIPS",
"SCMP_ARCH_MIPS64"
]
},
{
"architecture": "SCMP_ARCH_MIPSEL64",
"subArchitectures": [
"SCMP_ARCH_MIPSEL",
"SCMP_ARCH_MIPSEL64N32"
]
},
{
"architecture": "SCMP_ARCH_MIPSEL64N32",
"subArchitectures": [
"SCMP_ARCH_MIPSEL",
"SCMP_ARCH_MIPSEL64"
]
},
{
"architecture": "SCMP_ARCH_S390X",
"subArchitectures": [
"SCMP_ARCH_S390"
]
}
],
"syscalls": [
{
"names": [
"accept",
"accept4",
"access",
"add_key",
"adjtimex",
"alarm",
"bind",
"brk",
"capget",
"capset",
"chdir",
"chmod",
"chown",
"chown32",
"clock_adjtime",
"clock_adjtime64",
"clock_getres",
"clock_getres_time64",
"clock_gettime",
"clock_gettime64",
"clock_nanosleep",
"clock_nanosleep_time64",
"close",
"connect",
"copy_file_range",
"creat",
"dup",
"dup2",
"dup3",
"epoll_create",
"epoll_create1",
"epoll_ctl",
"epoll_ctl_old",
"epoll_pwait",
"epoll_wait",
"epoll_wait_old",
"eventfd",
"eventfd2",
"execve",
"execveat",
"exit",
"exit_group",
"faccessat",
"fadvise64",
"fadvise64_64",
"fallocate",
"fanotify_mark",
"fchdir",
"fchmod",
"fchmodat",
"fchown",
"fchown32",
"fchownat",
"fcntl",
"fcntl64",
"fdatasync",
"fgetxattr",
"flistxattr",
"flock",
"fork",
"fremovexattr",
"fsetxattr",
"fstat",
"fstat64",
"fstatat64",
"fstatfs",
"fstatfs64",
"fsync",
"ftruncate",
"ftruncate64",
"futex",
"futex_time64",
"futimesat",
"getcpu",
"getcwd",
"getdents",
"getdents64",
"getegid",
"getegid32",
"geteuid",
"geteuid32",
"getgid",
"getgid32",
"getgroups",
"getgroups32",
"getitimer",
"getpeername",
"getpgid",
"getpgrp",
"getpid",
"getppid",
"getpriority",
"getrandom",
"getresgid",
"getresgid32",
"getresuid",
"getresuid32",
"getrlimit",
"get_robust_list",
"getrusage",
"getsid",
"getsockname",
"getsockopt",
"get_thread_area",
"gettid",
"gettimeofday",
"getuid",
"getuid32",
"getxattr",
"inotify_add_watch",
"inotify_init",
"inotify_init1",
"inotify_rm_watch",
"io_cancel",
"ioctl",
"io_destroy",
"io_getevents",
"io_pgetevents",
"io_pgetevents_time64",
"ioprio_get",
"ioprio_set",
"io_setup",
"io_submit",
"io_uring_enter",
"io_uring_register",
"io_uring_setup",
"ipc",
"keyctl",
"kill",
"lchown",
"lchown32",
"lgetxattr",
"link",
"linkat",
"listen",
"listxattr",
"llistxattr",
"_llseek",
"lremovexattr",
"lseek",
"lsetxattr",
"lstat",
"lstat64",
"madvise",
"membarrier",
"memfd_create",
"mincore",
"mkdir",
"mkdirat",
"mknod",
"mknodat",
"mlock",
"mlock2",
"mlockall",
"mmap",
"mmap2",
"mprotect",
"mq_getsetattr",
"mq_notify",
"mq_open",
"mq_timedreceive",
"mq_timedreceive_time64",
"mq_timedsend",
"mq_timedsend_time64",
"mq_unlink",
"mremap",
"msgctl",
"msgget",
"msgrcv",
"msgsnd",
"msync",
"munlock",
"munlockall",
"munmap",
"nanosleep",
"newfstatat",
"_newselect",
"open",
"openat",
"pause",
"pipe",
"pipe2",
"poll",
"ppoll",
"ppoll_time64",
"prctl",
"pread64",
"preadv",
"preadv2",
"prlimit64",
"pselect6",
"pselect6_time64",
"pwrite64",
"pwritev",
"pwritev2",
"read",
"readahead",
"readlink",
"readlinkat",
"readv",
"recv",
"recvfrom",
"recvmmsg",
"recvmmsg_time64",
"recvmsg",
"remap_file_pages",
"removexattr",
"rename",
"renameat",
"renameat2",
"request_key",
"restart_syscall",
"rmdir",
"rseq",
"rt_sigaction",
"rt_sigpending",
"rt_sigprocmask",
"rt_sigqueueinfo",
"rt_sigreturn",
"rt_sigsuspend",
"rt_sigtimedwait",
"rt_sigtimedwait_time64",
"rt_tgsigqueueinfo",
"sched_getaffinity",
"sched_getattr",
"sched_getparam",
"sched_get_priority_max",
"sched_get_priority_min",
"sched_getscheduler",
"sched_rr_get_interval",
"sched_rr_get_interval_time64",
"sched_setaffinity",
"sched_setattr",
"sched_setparam",
"sched_setscheduler",
"sched_yield",
"seccomp",
"select",
"semctl",
"semget",
"semop",
"semtimedop",
"semtimedop_time64",
"send",
"sendfile",
"sendfile64",
"sendmmsg",
"sendmsg",
"sendto",
"setfsgid",
"setfsgid32",
"setfsuid",
"setfsuid32",
"setgid",
"setgid32",
"setgroups",
"setgroups32",
"setitimer",
"setpgid",
"setpriority",
"setregid",
"setregid32",
"setresgid",
"setresgid32",
"setresuid",
"setresuid32",
"setreuid",
"setreuid32",
"setrlimit",
"set_robust_list",
"setsid",
"setsockopt",
"set_thread_area",
"set_tid_address",
"setuid",
"setuid32",
"setxattr",
"shmat",
"shmctl",
"shmdt",
"shmget",
"shutdown",
"sigaltstack",
"signalfd",
"signalfd4",
"sigprocmask",
"sigreturn",
"socket",
"socketcall",
"socketpair",
"splice",
"stat",
"stat64",
"statfs",
"statfs64",
"statx",
"symlink",
"symlinkat",
"sync",
"sync_file_range",
"syncfs",
"sysinfo",
"tee",
"tgkill",
"time",
"timer_create",
"timer_delete",
"timer_getoverrun",
"timer_gettime",
"timer_gettime64",
"timer_settime",
"timer_settime64",
"timerfd_create",
"timerfd_gettime",
"timerfd_gettime64",
"timerfd_settime",
"timerfd_settime64",
"times",
"tkill",
"truncate",
"truncate64",
"ugetrlimit",
"umask",
"uname",
"unlink",
"unlinkat",
"utime",
"utimensat",
"utimensat_time64",
"utimes",
"vfork",
"vmsplice",
"wait4",
"waitid",
"waitpid",
"write",
"writev"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"ptrace"
],
"action": "SCMP_ACT_ALLOW",
"args": null,
"comment": "",
"includes": {
"minKernel": "4.8"
},
"excludes": {}
},
{
"names": [
"personality"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 0,
"valueTwo": 0,
"op": "SCMP_CMP_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"personality"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 8,
"valueTwo": 0,
"op": "SCMP_CMP_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"personality"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 131072,
"valueTwo": 0,
"op": "SCMP_CMP_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"personality"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 131080,
"valueTwo": 0,
"op": "SCMP_CMP_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"personality"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 4294967295,
"valueTwo": 0,
"op": "SCMP_CMP_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {}
},
{
"names": [
"sync_file_range2"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"ppc64le"
]
},
"excludes": {}
},
{
"names": [
"arm_fadvise64_64",
"arm_sync_file_range",
"sync_file_range2",
"breakpoint",
"cacheflush",
"set_tls"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"arm",
"arm64"
]
},
"excludes": {}
},
{
"names": [
"arch_prctl"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"amd64",
"x32"
]
},
"excludes": {}
},
{
"names": [
"modify_ldt"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"amd64",
"x32",
"x86"
]
},
"excludes": {}
},
{
"names": [
"s390_pci_mmio_read",
"s390_pci_mmio_write",
"s390_runtime_instr"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"s390",
"s390x"
]
},
"excludes": {}
},
{
"names": [
"open_by_handle_at"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_DAC_READ_SEARCH"
]
},
"excludes": {}
},
{
"names": [
"bpf",
"clone",
"fanotify_init",
"lookup_dcookie",
"mount",
"name_to_handle_at",
"perf_event_open",
"quotactl",
"setdomainname",
"sethostname",
"setns",
"syslog",
"umount",
"umount2",
"unshare"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_ADMIN"
]
},
"excludes": {}
},
{
"names": [
"clone"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 2114060288,
"valueTwo": 0,
"op": "SCMP_CMP_MASKED_EQ"
}
],
"comment": "",
"includes": {},
"excludes": {
"caps": [
"CAP_SYS_ADMIN"
],
"arches": [
"s390",
"s390x"
]
}
},
{
"names": [
"clone"
],
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 1,
"value": 2114060288,
"valueTwo": 0,
"op": "SCMP_CMP_MASKED_EQ"
}
],
"comment": "s390 parameter ordering for clone is different",
"includes": {
"arches": [
"s390",
"s390x"
]
},
"excludes": {
"caps": [
"CAP_SYS_ADMIN"
]
}
},
{
"names": [
"reboot"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_BOOT"
]
},
"excludes": {}
},
{
"names": [
"chroot"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_CHROOT"
]
},
"excludes": {}
},
{
"names": [
"delete_module",
"init_module",
"finit_module"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_MODULE"
]
},
"excludes": {}
},
{
"names": [
"acct"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_PACCT"
]
},
"excludes": {}
},
{
"names": [
"kcmp",
"process_vm_readv",
"process_vm_writev",
"ptrace"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_PTRACE"
]
},
"excludes": {}
},
{
"names": [
"iopl",
"ioperm"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_RAWIO"
]
},
"excludes": {}
},
{
"names": [
"settimeofday",
"stime",
"clock_settime"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_TIME"
]
},
"excludes": {}
},
{
"names": [
"vhangup"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_TTY_CONFIG"
]
},
"excludes": {}
},
{
"names": [
"get_mempolicy",
"mbind",
"set_mempolicy"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYS_NICE"
]
},
"excludes": {}
},
{
"names": [
"syslog"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_SYSLOG"
]
},
"excludes": {}
}
]
}

/home/work/docker_isme/bin/dockerd --config-file /home/work/docker_isme/conf/daemon.json --seccomp-profile /home/work/docker_isme/conf/default.json
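A malformed daemon.json prevents dockerd from starting at all, so it is cheap to validate the file before restarting the daemon. A minimal sketch (it validates a small sample written to a temp file; on a real host, set CONF=/etc/docker/daemon.json instead):

```shell
#!/bin/sh
# Validate a Docker daemon.json before restarting dockerd.
# A sample config is written to a temp file here for safety.
CONF="$(mktemp)"
cat > "$CONF" << 'EOF'
{
    "storage-driver": "overlay2",
    "live-restore": true,
    "log-driver": "json-file"
}
EOF
# json.tool exits non-zero on invalid JSON.
if python3 -m json.tool "$CONF" > /dev/null 2>&1; then
    RESULT="valid"
else
    RESULT="invalid"
fi
echo "daemon.json is $RESULT"
rm -f "$CONF"
```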

Deploying the k8s Masters

Configure the Aliyun yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl on the first machine

Since Kubernetes releases frequently, pin the version explicitly:

$ yum erase -y kubelet kubeadm kubectl
$ yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
$ systemctl enable kubelet

Generate the initialization configuration file

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9037x2.tcaqnpaqkra9vsbw
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.1.67
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kube-0-3.epc.xxx.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs: # must include every Master/LB/VIP IP, none can be missing! To ease future expansion, list a few spare IPs in advance.
  - kube-0-1.epc.xxx.com
  - kube-0-2.epc.xxx.com
  - kube-0-3.epc.xxx.com
  - 10.252.56.252
  - 10.0.1.67
  - 10.0.1.68
  - 10.0.1.69
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.252.56.252:6443 # load balancer virtual IP (VIP) and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external: # use the external etcd cluster
    endpoints:
    - https://10.0.1.67:2379 # the three etcd cluster nodes
    - https://10.0.1.68:2379
    - https://10.0.1.69:2379
    caFile: /home/work/k8s/certs/etcd/ca.pem # certificates needed to connect to etcd
    certFile: /home/work/k8s/certs/etcd/etcd-peer.pem
    keyFile: /home/work/k8s/certs/etcd/etcd-peer-key.pem
imageRepository: registry.aliyuncs.com/google_containers # the default registry k8s.gcr.io is unreachable from mainland China, so use the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.21.0 # K8s version, matching the packages installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # Pod network; must match the CNI manifest deployed later
  serviceSubnet: 10.96.0.0/12 # in-cluster virtual network (Service CIDR), the unified access entry for Pods
scheduler: {}
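A missing certSANs entry only surfaces later as x509 errors when kubectl talks to the VIP, so it is worth checking the config before running kubeadm init. A minimal sketch (it checks a sample fragment written to a temp file; on a real host, set CFG=kubeadm-config.yaml):

```shell
#!/bin/sh
# Check that the VIP and every master IP appear in certSANs.
CFG="$(mktemp)"
cat > "$CFG" << 'EOF'
  certSANs:
  - 10.252.56.252
  - 10.0.1.67
  - 10.0.1.68
  - 10.0.1.69
  - 127.0.0.1
EOF
MISSING=0
for ip in 10.252.56.252 10.0.1.67 10.0.1.68 10.0.1.69; do
    grep -q -- "- $ip" "$CFG" || { echo "missing certSAN: $ip"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all required certSANs present"
```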

Create the cluster with the config file

kubeadm init --config kubeadm-config.yaml

初始化完成后,会输出两条 join 命令:带有 --control-plane 参数的用于其他 master 节点加入、组建多 master 集群,不带该参数的用于 node 节点加入。

拷贝kubectl使用的连接k8s认证文件到默认路径:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.252.56.252:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:7cf93da1f9346f2b722aeaccd37902bb3d6e67d4a8c967c85778e799c9c08922 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.252.56.252:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:7cf93da1f9346f2b722aeaccd37902bb3d6e67d4a8c967c85778e799c9c08922


使用kubectl工具:需要先将默认的认证文件复制到默认位置

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
  • 安装目录:/etc/kubernetes/
  • 组件配置文件目录:/etc/kubernetes/manifests/
  • 证书文件位置:/etc/kubernetes/pki

其他master节点加入集群

# 拷贝证书文件
scp -r /etc/kubernetes/pki/ 10.0.1.68:/etc/kubernetes/
scp -r /etc/kubernetes/pki/ 10.0.1.69:/etc/kubernetes/

# 执行join命令
kubeadm join 10.0.1.67:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:59ec9f1fb3446bfd65ec6d16ac5f884f72786b3c7a86ef73aa52bd12c6d33810 \
    --control-plane

node节点加入集群

yum install -y kubelet-1.21.0 kubeadm-1.21.0
systemctl enable kubelet

kubeadm join 10.0.1.67:6443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:59ec9f1fb3446bfd65ec6d16ac5f884f72786b3c7a86ef73aa52bd12c6d33810

安装Pod网络插件(CNI)

目前 Calico 是 Kubernetes 主流的网络方案:既可以工作在三层(BGP 路由模式),也可以通过 IPIP/VXLAN 封装实现跨主机虚拟网络。

参考资料:https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

wget https://docs.projectcalico.org/manifests/calico.yaml

下载完后还需要修改里面定义的 Pod 网段(CALICO_IPV4POOL_CIDR),使其与前面 kubeadm 配置文件中 podSubnet 指定的 10.244.0.0/16 保持一致。
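以 sed 修改为例。注意:不同版本 calico.yaml 中该变量默认的注释格式和缩进可能不同,下面用一个片段文件演示改法(实际操作时把文件名换成 calico.yaml,并先确认文件里的实际写法):

```shell
# calico.yaml 中 CALICO_IPV4POOL_CIDR 默认被注释掉,常见形式如下,先写入片段文件做演示
cat > /tmp/calico-snippet.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# 取消注释,并把网段改成与 kubeadm 配置中 podSubnet 一致的 10.244.0.0/16
sed -i \
  -e 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@' \
  -e 's@#   value: "192.168.0.0/16"@  value: "10.244.0.0/16"@' \
  /tmp/calico-snippet.yaml

cat /tmp/calico-snippet.yaml
```

修改后再执行下面的 kubectl apply。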

kubectl apply -f calico.yaml
kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-58497c65d5-wcj97   1/1     Running   0          8m19s
calico-node-2sgvc                          1/1     Running   0          18m
calico-node-7x4jk                          1/1     Running   0          6m15s
calico-node-c4srw                          1/1     Running   0          18m
calico-node-kfgt9                          1/1     Running   0          18m

扩容master节点

安装kubeadm,kubelet和kubectl

$ yum erase -y kubelet kubeadm kubectl
$ yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
$ systemctl enable kubelet

生成加入集群命令

$ kubeadm init phase upload-certs --upload-certs
W0428 23:13:13.134983 13189 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0428 23:13:13.135079 13189 version.go:103] falling back to the local client version: v1.21.0
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4ee3bdb7031b047928a02cfbc96256158095fa3e38284d9d3295bf3614fb3ee9


$ kubeadm token create --print-join-command
kubeadm join 10.252.56.252:6443 --token 1d8lqa.l76fgdl2tahykxed --discovery-token-ca-cert-hash sha256:7cf93da1f9346f2b722aeaccd37902bb3d6e67d4a8c967c85778e799c9c08922

# --print-join-command 输出的是 node 加入命令,master 加入还需追加 --control-plane 和上一步的 --certificate-key,在新 master 上执行:
kubeadm join 10.252.56.252:6443 --token 1d8lqa.l76fgdl2tahykxed --discovery-token-ca-cert-hash sha256:7cf93da1f9346f2b722aeaccd37902bb3d6e67d4a8c967c85778e799c9c08922 --control-plane --certificate-key 4ee3bdb7031b047928a02cfbc96256158095fa3e38284d9d3295bf3614fb3ee9

其他

创建docker-registry认证信息

kubectl create secret docker-registry registry-auth --docker-username=admin --docker-password=harbor123 --docker-server=192.168.1.200
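创建好的 secret 在 Pod 中通过 imagePullSecrets 引用才会生效。下面是一个示意清单(Pod 名称和镜像路径为假设,仅演示字段位置),实际用 kubectl apply -f 提交:

```shell
# 生成一个引用 registry-auth 拉取私有仓库镜像的 Pod 清单(示例文件)
cat > /tmp/pod-private-image.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: private-demo          # 示例名称
spec:
  imagePullSecrets:
  - name: registry-auth       # 上面创建的 secret
  containers:
  - name: app
    image: 192.168.1.200/library/nginx:latest   # 假设的私有仓库镜像
EOF
cat /tmp/pod-private-image.yaml
```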

apiserver修改node端口范围

# 编辑 /etc/kubernetes/manifests/kube-apiserver.yaml,在 command 参数列表中添加:
- --service-node-port-range=8000-9000
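kube-apiserver 是静态 Pod,修改清单后 kubelet 会自动重建它;生效后即可创建 nodePort 落在新范围内的 Service。下面是一个示意清单(Service 名称与 selector 为假设):

```shell
# 生成一个 nodePort 落在 8000-9000 范围内的 NodePort Service 清单(示例文件,实际用 kubectl apply -f 提交)
cat > /tmp/nginx-nodeport.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport    # 示例名称
spec:
  type: NodePort
  selector:
    app: nginx            # 假设的标签
  ports:
  - port: 80
    targetPort: 80
    nodePort: 8080        # 必须落在 --service-node-port-range 范围内
EOF
cat /tmp/nginx-nodeport.yaml
```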

故障排查

calico 组件添加后,pod是running状态但是没有ready

# 有可能是有残留配置导致的
# 1.删除残留cni配置
rm -rf /etc/cni/net.d
# 2.清除ipvs规则
ipvsadm --clear
# 3.删除残留的网卡配置
ip link
ip link delete brxxxxxxx
# 4.重装的集群,记得要把etcd也清了
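第4步清理 etcd 的一种写法如下(仅为示意,证书路径沿用前文外部 etcd 的配置;删除不可逆,确认集群确实要重装后再执行):

```shell
# 删除 etcd 中所有残留的 k8s 数据(空前缀匹配全部 key,慎用)
ETCDCTL_API=3 etcdctl \
  --endpoints="https://10.0.1.67:2379,https://10.0.1.68:2379,https://10.0.1.69:2379" \
  --cacert=/home/work/k8s/certs/etcd/ca.pem \
  --cert=/home/work/k8s/certs/etcd/etcd-peer.pem \
  --key=/home/work/k8s/certs/etcd/etcd-peer-key.pem \
  del "" --prefix
```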