k8s 1.26.2 on containerd 1.6.18

Environment
Server:
- CentOS Linux release 7.9.2009 (Core)
- 4 vCPUs, 8 GB RAM
Firewall: disabled
SELinux: SELINUX=disabled
Software:
- docker: 20.10.22
- docker-compose: 2.15.1
- kubeadm: 1.26.2; kubelet: 1.26.2; kubectl: 1.26.2
- containerd: 1.6.18
- flannel: v0.20.0
I. Environment
1. hostname
```bash
hostnamectl set-hostname tenxun-jing
```
2. Firewall
```bash
# 1. Disable
```
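Only the first line of this snippet is preserved. For reference, a minimal sketch of disabling the firewall and SELinux on CentOS 7, matching the environment summary above (assumed commands, not necessarily the author's exact ones):

```bash
# stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld

# disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```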
3. Install dependencies
```bash
yum update -y
```
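The usual kubeadm prerequisites on CentOS 7 also include disabling swap and letting bridge traffic pass through iptables. A sketch of those steps (assumptions, not part of the original snippet):

```bash
# disable swap (required by kubelet)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# load the bridge/overlay modules and set the sysctls kubeadm checks
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```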
4. Install docker (optional)
```bash
# step 1: install the required system tools
```
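The rest of this block is not preserved; a sketch of the usual Aliyun-mirror docker-ce installation on CentOS 7 (the repository URL and version pins are assumptions):

```bash
# step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# step 2: add the docker-ce repository (Aliyun mirror)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# step 3: install and start docker (20.10.22 per the environment summary)
yum install -y docker-ce-20.10.22 docker-ce-cli-20.10.22
systemctl enable docker && systemctl start docker
```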
II. Installing k8s
1. Install kubeadm, kubelet, kubectl
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
```
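The heredoc above is shown only partially. A commonly used Aliyun kubernetes.repo plus the matching package install is sketched below (the repo URL is an assumption):

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubeadm-1.26.2 kubelet-1.26.2 kubectl-1.26.2
systemctl enable kubelet
```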
2. Install containerd and configure crictl
2.1. Install containerd
```bash
# 1. Install
```
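A sketch of installing containerd 1.6.18 from the docker-ce repository and adjusting the two settings kubeadm 1.26 usually needs, the systemd cgroup driver and a reachable pause image (the mirror URL is an assumption):

```bash
# 1. Install (the docker-ce repo added above provides containerd.io)
yum install -y containerd.io-1.6.18

# 2. Generate the default config and adjust it
containerd config default > /etc/containerd/config.toml
# use the systemd cgroup driver (matches kubelet's default in 1.26)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# use a reachable pause image instead of registry.k8s.io
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

systemctl enable containerd && systemctl restart containerd
```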
containerd registry mirror configuration

Method 1: add a separate host config file

```shell
# 1. Edit /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"    # directory holding per-registry config files
      [plugins."io.containerd.grpc.v1.cri".registry.auths]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
      [plugins."io.containerd.grpc.v1.cri".registry.headers]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

# 2. Create the corresponding directory
mkdir /etc/containerd/certs.d/docker.io -pv

# 3. Configure the mirror
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://xxxxxxxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

# 4. Restart containerd
systemctl restart containerd

# 5. Pull an image to verify
ctr i pull docker.io/library/nginx:latest
```

Method 2: add the mirror directly in /etc/containerd/config.toml
```shell
# 1. Edit /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.auths]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
      [plugins."io.containerd.grpc.v1.cri".registry.headers]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        # add the mirror entry here (drop this comment)
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]

# 2. Restart containerd
systemctl restart containerd
```

containerd proxy configuration, for reference:

Prerequisite: you need a working proxy; if you don't have one, skip this.
```bash
# 1. Add the proxy
vim /lib/systemd/system/containerd.service
# add under the [Service] section:
Environment="http_proxy=http://127.0.0.1:7890"
Environment="https_proxy=http://127.0.0.1:7890"
Environment="ALL_PROXY=socks5://127.0.0.1:7891"
Environment="all_proxy=socks5://127.0.0.1:7891"

# 2. Restart
systemctl daemon-reload && \
systemctl restart containerd
```
2.2. Configure crictl
```bash
# config file: /etc/crictl.yaml; point the sock address at containerd
```
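A minimal /etc/crictl.yaml pointing crictl at the containerd socket (a sketch; these are the common defaults, not necessarily the author's exact values):

```bash
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

crictl info   # should now talk to containerd without endpoint warnings
```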
2.3. Start the services
```bash
systemctl enable containerd && \
```
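The command above is cut off at the line continuation; presumably it just enables and starts the runtime and kubelet, e.g. (an assumption):

```bash
systemctl enable containerd && \
systemctl restart containerd && \
systemctl enable kubelet
```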
3. Install k8s
Initialization from the command line
```bash
# 1. Official images
```
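A sketch of the two command-line variants this step usually covers; the flags and the mirror repository are assumptions, not the author's preserved commands:

```bash
# 1. Official images (needs direct access to registry.k8s.io)
kubeadm init --kubernetes-version=v1.26.2 --pod-network-cidr=10.244.0.0/16

# 2. Domestic mirror images
kubeadm init --kubernetes-version=v1.26.2 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
```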
3.1. Generate kubeadm.yml
```bash
kubeadm config print init-defaults > kubeadm.yml
```
Modified configuration
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
```
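Only the first line of the modified file is preserved above. The fields that are typically changed relative to `kubeadm config print init-defaults` are sketched here; the address, node name, and mirror repository are assumptions:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.10              # this master's IP (assumed)
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: tenxun-jing
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.26.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # assumed mirror
networking:
  podSubnet: 10.244.0.0/16                    # must match the flannel manifest
  serviceSubnet: 10.96.0.0/12
```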
3.2. Initialize using kubeadm.yml
```bash
# list the required images
```
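A sketch of the usual sequence with the config file (these kubeadm subcommands exist in 1.26; the author's exact flags are not preserved):

```bash
# list the required images
kubeadm config images list --config kubeadm.yml
# pre-pull them
kubeadm config images pull --config kubeadm.yml
# initialize the control plane
kubeadm init --config kubeadm.yml
```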
```bash
mkdir -p $HOME/.kube
```
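The snippet above is the start of the standard kubeconfig setup that `kubeadm init` prints after a successful run:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```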
3.3. kubectl completion
```bash
# 1. Install the bash-completion tool
```
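A sketch of the usual completion setup; package name and paths are the CentOS 7 defaults:

```bash
# 1. Install the bash-completion tool
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion

# 2. Load kubectl completion in every shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
```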
4. Install the flannel network plugin
```bash
# 1. Download
```
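A sketch of installing flannel v0.20.0 (the URL and namespace are assumptions based on that release):

```bash
# 1. Download the manifest
wget https://raw.githubusercontent.com/flannel-io/flannel/v0.20.0/Documentation/kube-flannel.yml

# 2. The Network field in the manifest must match podSubnet (10.244.0.0/16 by default), then apply it
kubectl apply -f kube-flannel.yml

# 3. Check
kubectl get pods -n kube-flannel
kubectl get nodes          # nodes should turn Ready once flannel is up
```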
5. Add nodes
```bash
# 1. Get the join command; run the following on the master node
```
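A sketch of the usual join workflow (the token and hash in the example are placeholders):

```bash
# 1. On the master node: print a join command with a fresh token
kubeadm token create --print-join-command

# 2. On each worker node: run the printed command, which looks like
# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```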
III. Troubleshooting
1. Fixing "Error registering network: failed to acquire lease: node "master" pod cidr not assigned"
Problem description:
While deploying the flannel network plugin, the flannel pod stays in CrashLoopBackOff; its logs report that no pod CIDR has been assigned.
```bash
# 1. Edit
```
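Only the first line of the fix survives above. The common fix for this symptom is to make kube-controller-manager allocate node CIDRs, sketched below (flag values assume the 10.244.0.0/16 pod network):

```bash
# 1. Edit the controller-manager static pod manifest
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# make sure these flags are present under "command":
#   - --allocate-node-cidrs=true
#   - --cluster-cidr=10.244.0.0/16

# 2. kubelet restarts the static pod automatically; verify the node now has a podCIDR
kubectl describe node master | grep -i podcidr
```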
2. "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
```shell
# 1. Trigger:
```
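A sketch of the checks that usually resolve this error (assumed steps, not the author's exact ones):

```bash
# 1. The CNI plugin binaries and a network config must both exist
ls /opt/cni/bin/
ls /etc/cni/net.d/

# 2. If the flannel config is missing, (re)apply kube-flannel.yml and restart the runtime
kubectl apply -f kube-flannel.yml
systemctl restart containerd && systemctl restart kubelet

# 3. Watch the node become Ready
kubectl get nodes -w
```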
IV. Miscellaneous
1. Taints
```bash
# 1. View
```
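A sketch of the usual taint commands (node names are placeholders):

```bash
# 1. View taints
kubectl describe node <node-name> | grep -i taint

# 2. Allow the control-plane node to schedule pods (common on single-node clusters)
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-

# 3. Add a taint
kubectl taint nodes <node-name> key1=value1:NoSchedule
```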
2. Reset script
```bash
#!/bin/bash
```
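Only the shebang of the reset script is preserved. A minimal sketch of what such a script typically does (an assumption; review before running, it is destructive):

```bash
#!/bin/bash
# tear down the node and clean up kubeadm/flannel leftovers
kubeadm reset -f
rm -rf /etc/cni/net.d /var/lib/cni ~/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear 2>/dev/null
systemctl restart containerd kubelet
```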
3. Proxy script
- Prerequisite: you need a working proxy; if you don't have one, skip this.
```bash
# premise:   touch start_containerd_env.sh && chmod +x start_containerd_env.sh
# implement: source start_containerd_env.sh && [env_start|env_stop|env_status]

containerd_file="/lib/systemd/system/containerd.service"
proxy_port="7890"
socks5_port="7891"
proxy_ip="127.0.0.1"

# list of Environment= lines to manage (slashes escaped for sed)
proxy_str_list=(
    'Environment="http_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
    'Environment="https_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
    'Environment="ALL_PROXY=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
    'Environment="all_proxy=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
)
list_len=$((${#proxy_str_list[@]} - 1))

function env_create(){
    [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
    for ((i=0;i <= ${list_len};i++));do
        grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null
        [[ $? != "0" ]] && sed -ri "/${proxy_str_list[${i}]}/d" ${containerd_file} && sed -ri "/\[Service\]/a${proxy_str_list[${i}]}" ${containerd_file}
    done
    proxy_str_num=$(grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" ${containerd_file}|wc -l)
    [[ "${proxy_str_num}" != "${#proxy_str_list[@]}" ]] && echo "[error] not create containerd proxy in ${containerd_file}" && return
}

function env_delete(){
    [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
    for ((i=0;i <= ${list_len};i++));do
        grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && sed -ri "s/(^${proxy_str_list[${i}]})/#\1/g" ${containerd_file}
        grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && echo "[error] failed to comment out ${proxy_str_list[${i}]}" && return
    done
}

function env_start(){
    echo "==[env_start]== BEGIN"
    env_create
    systemctl daemon-reload && systemctl restart containerd
    [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
    [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) == "4" ]] && echo "[success] start containerd proxy" && systemctl show --property=Environment containerd |grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" || echo "[error] not set containerd proxy env"
    echo "==[env_start]== END"
}

function env_stop(){
    echo "==[env_stop]== BEGIN"
    grep "^Environment=" ${containerd_file}|grep "${proxy_ip}" &>/dev/null
    if [[ $? == "0" ]];then
        env_delete
        systemctl daemon-reload && systemctl restart containerd
        [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
    else
        echo "[warning] no operation, containerd proxy is not set"
    fi
    systemctl show --property=Environment containerd | grep "Environment="
    [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) != "4" ]] && echo "[success] stop containerd proxy"
    echo "==[env_stop]== END"
}

function env_status(){
    systemctl show --property=Environment containerd | grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}"
    [[ "$(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l)" != "4" ]] && echo "[error] not set containerd proxy env"
}

msg="==[error]==input error, please try: source xx.sh && [env_start|env_stop|env_status]"
[[ ! "$1" ]] || echo ${msg}
```
4. Changing the default nodePort port range
- Docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/
- In NodePort mode, the default range is 30000-32767.
- NodePort type: if you set the `type` field to `NodePort`, the Kubernetes control plane allocates a port from the range given by the `--service-node-port-range` flag (default: 30000-32767). Every node proxies that port (the same port number on each node) into your Service. The Service reports the allocated port in its `.spec.ports[*].nodePort` field.
- Edit /etc/kubernetes/manifests/kube-apiserver.yaml. After the change, wait roughly 10s: changing kube-apiserver.yaml triggers a restart of kube-apiserver so it reloads the configuration. In the meantime you can run `kubectl get pod`; once pod information is shown normally again, the restart has finished. The new port range may still not take effect at this point, in which case continue with the steps below.

```shell
[root@node-1 manifests]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.235.21
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-admission-plugins=PodPreset
    - --runtime-config=settings.k8s.io/v1alpha1=true
    - --service-node-port-range=1-65535    # flag to add
...
```

Then regenerate the Service, and a Service with the desired nodePort is created successfully.

```shell
[root@node-0 manifests]# systemctl daemon-reload
[root@node-0 manifests]# systemctl restart kubelet
```
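A hypothetical check that the widened range is accepted (names and ports are placeholders, not from the original):

```bash
kubectl create deployment web --image=nginx
kubectl create service nodeport web --tcp=80:80 --node-port=8080   # below 30000, now accepted
kubectl get svc web
```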
5. Supplement: public-IP initialization on cloud servers (experiments show that pods on different nodes cannot reach each other; **not recommended**)
- Method 1: add a virtual NIC carrying the public IP

```shell
# 1. Temporary (lost on reboot)
ifconfig eth0:1 <public-ip>

# 2. Permanent
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=<public-ip>
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

# 3. Choose <public-ip> when running kubeadm init
# 4. Extra: remove the NIC
ifconfig eth0:1 down
```

- Method 2:

```shell
# 1. Initialize with the public IP
```
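The second method is truncated above. A hedged sketch of what initializing with a public IP usually involves (all flags and values are assumptions, and the caveat above about cross-node pod traffic still applies):

```bash
kubeadm init \
  --apiserver-advertise-address=<public-ip> \
  --apiserver-cert-extra-sans=<public-ip> \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version=v1.26.2 \
  --pod-network-cidr=10.244.0.0/16
```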
6. Verify that the cluster was set up successfully:
```shell
cat > test.yaml << EOF
```
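The heredoc above is cut off. A minimal smoke-test manifest of the same shape, an nginx Deployment plus a NodePort Service (names and ports are assumptions):

```bash
cat > test.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF

kubectl apply -f test.yaml
kubectl get pods -o wide            # pods should be Running, ideally spread across nodes
curl http://<node-ip>:30080         # should return the nginx welcome page
```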
7. Image pull scripts (tested)
- These scripts also handle the coredns plugin image; they are intended for k8s 1.11 and later (k8s has used the coredns plugin since 1.11). containerd (ctr) version:

```shell
#!/bin/bash
# Author: jing
# premise:   touch k8s_img_pull.sh && chmod +x k8s_img_pull.sh
# implement: bash k8s_img_pull.sh
china_img_url="registry.cn-hangzhou.aliyuncs.com/google_containers"
k8s_img_url="k8s.gcr.io"
version="v1.26.2"
images=($(kubeadm config images list --kubernetes-version=${version} | awk -F "/" '{if ($3 != "") {print $2"/"$3}else{print $2}}'))
for imagename in ${images[@]}
do
    echo ${imagename}|grep "/" &> /dev/null
    if [[ $? == 0 ]];then
        coredns_img=$(echo ${imagename}|grep "/"|awk -F'/' '{print $2}')
        ctr -n k8s.io images pull ${china_img_url}/${coredns_img}
        ctr -n k8s.io images tag ${china_img_url}/${coredns_img} ${k8s_img_url}/${imagename}
        ctr -n k8s.io images rm ${china_img_url}/${coredns_img}
    else
        ctr -n k8s.io images pull ${china_img_url}/${imagename}
        ctr -n k8s.io images tag ${china_img_url}/${imagename} ${k8s_img_url}/${imagename}
        ctr -n k8s.io images rm ${china_img_url}/${imagename}
    fi
    # export
    # [[ ! -d "/root/kube-images/" ]] && mkdir -p /root/kube-images/
    # ctr -n k8s.io images save -o /root/kube-images/${imagename}.tar.gz ${k8s_img_url}/${imagename}
    # ctr -n k8s.io images rm ${k8s_img_url}/${imagename}
done
```

docker version:
```shell
#!/bin/bash
# Author: jing
# premise:   touch k8s_img_pull.sh && chmod +x k8s_img_pull.sh
# implement: bash k8s_img_pull.sh
china_img_url="registry.cn-hangzhou.aliyuncs.com/google_containers"
k8s_img_url="k8s.gcr.io"
version="v1.18.20"
images=($(kubeadm config images list --kubernetes-version=${version} | awk -F "/" '{if ($3 != "") {print $2"/"$3}else{print $2}}'))
for imagename in ${images[@]}
do
    echo ${imagename}|grep "/" &> /dev/null
    if [[ $? == 0 ]];then
        coredns_img=$(echo ${imagename}|grep "/"|awk -F'/' '{print $2}')
        docker pull ${china_img_url}/${coredns_img}
        docker tag ${china_img_url}/${coredns_img} ${k8s_img_url}/${imagename}
        docker rmi ${china_img_url}/${coredns_img}
    else
        docker pull ${china_img_url}/${imagename}
        docker tag ${china_img_url}/${imagename} ${k8s_img_url}/${imagename}
        docker rmi ${china_img_url}/${imagename}
    fi
    # export
    # [[ ! -d "/root/kube-images/" ]] && mkdir -p /root/kube-images/
    # docker save -o /root/kube-images/${imagename}.tar.gz ${k8s_img_url}/${imagename}
    # docker rmi ${k8s_img_url}/${imagename}
done
```
8. Cleaning up the flannel network
```shell
sudo ifconfig cni0 down
```
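The snippet above keeps only the first command; the full cleanup usually looks like this (a sketch; run on every node):

```bash
sudo ifconfig cni0 down
sudo ip link delete cni0
sudo ifconfig flannel.1 down
sudo ip link delete flannel.1
sudo rm -rf /var/lib/cni/
sudo rm -f /etc/cni/net.d/*flannel*
sudo systemctl restart kubelet
```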
9. Enabling ipvs
```bash
# this mode requires the ipvs kernel modules; otherwise kube-proxy falls back to iptables
```
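A sketch of switching kube-proxy to IPVS mode; the module names assume the CentOS 7.9 (3.10) kernel, and on kernels >= 4.19 nf_conntrack_ipv4 is replaced by nf_conntrack:

```bash
# this mode requires the ipvs kernel modules, otherwise kube-proxy falls back to iptables
yum install -y ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

# switch kube-proxy to ipvs and recreate its pods
kubectl -n kube-system edit configmap kube-proxy     # set mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```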