Building a K8s Cluster with kubeadm (v1.23.0)

2024/7/2 23:43:08

Table of Contents

    • I. Initial Preparation
    • II. Installing kubeadm
    • III. Initializing the Master Cluster
    • IV. Joining New Nodes to the Cluster
    • V. Deploying the CNI Network Plugin
    • VI. Other Configuration

Kubernetes 1.24 and later no longer supports Docker directly; if you still need Docker as the container runtime, install the cri-dockerd adapter. This article uses Kubernetes 1.23.

I. Initial Preparation

1. Disable the firewall
CentOS 7 enables firewalld by default, while the Kubernetes master and node components exchange a large amount of network traffic. The safe approach is to open the ports each component needs on the firewall, as listed in the table below:

Component            Default Ports
API Server           8080 (HTTP, insecure), 6443 (HTTPS, secure)
Controller Manager   10252
Scheduler            10251
kubelet              10250, 10255 (read-only)
etcd                 2379 (client access), 2380 (etcd peer communication)
Cluster DNS          53 (TCP/UDP)

These are not the only ports: every additional component needs its own port opened as well, for example the Calico CNI plugin uses port 179 (BGP) and a private image registry uses port 5000. Open firewall rules according to the components your cluster actually runs.
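If you prefer to keep firewalld running, the ports from the table can be opened individually instead of disabling the service. A minimal sketch; the port list below is an assumption based on the table above, so extend it for extra components (e.g. 179 for Calico BGP, 5000 for a registry):

```shell
# Sketch: open the control-plane ports from the table with firewalld.
K8S_TCP_PORTS="8080 6443 10252 10251 10250 10255 2379-2380"
if command -v firewall-cmd >/dev/null 2>&1; then
  for p in $K8S_TCP_PORTS; do
    firewall-cmd --permanent --add-port="${p}/tcp"
  done
  # cluster DNS uses both TCP and UDP
  firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
  firewall-cmd --reload
fi
```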
In a trusted network environment, you can simply turn the firewalld service off instead:

systemctl stop firewalld
systemctl disable firewalld
# Permanently disable SELinux (takes effect after reboot)
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config
# Also turn SELinux off for the current boot
setenforce 0
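The character class `[ep]` in the sed expression matches both `enforcing` and `permissive`, so either value becomes `disabled`. A quick demonstration on a throwaway copy of the config (illustration only, does not touch the real file):

```shell
# Demonstrate the SELinux sed edit on a scratch file
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' "$tmp"
grep '^SELINUX=' "$tmp"    # prints: SELINUX=disabled
rm -f "$tmp"
```

Note that `SELINUXTYPE=targeted` is left untouched, since the pattern requires `=` immediately after `SELINUX`.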

2. Configure time synchronization so clocks stay consistent across the cluster

yum -y install ntpdate
ntpdate ntp1.aliyun.com

3. Disable the swap partition

# Comment out the swap entry in /etc/fstab (permanent, survives reboot)
sed -i -r '/swap/ s/^/#/' /etc/fstab
# Turn swap off immediately for the running system
swapoff --all
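The sed prefixes `#` to any fstab line containing "swap", so the partition is no longer mounted after a reboot. Shown here on a scratch file with hypothetical device names:

```shell
# Demonstrate the fstab swap edit on a scratch file (illustration only)
tmp=$(mktemp)
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > "$tmp"
sed -i -r '/swap/ s/^/#/' "$tmp"
cat "$tmp"    # only the swap line gains a leading '#'
rm -f "$tmp"
```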

4. Adjust kernel parameters to enable the bridge netfilter and IP forwarding

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the bridge netfilter module first; the net.bridge.* keys do not exist
# until it is loaded
modprobe br_netfilter
# Verify the module is loaded
lsmod | grep br_netfilter

# Apply all sysctl configuration files, including the one just created
# (plain 'sysctl -p' reads only /etc/sysctl.conf)
sysctl --system

5. Enable IPVS support
Kubernetes Services support two proxy modes, iptables and IPVS; of the two, IPVS performs better. To use the IPVS mode, the IPVS kernel modules must be loaded manually:

yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
/etc/sysconfig/modules/ipvs.modules

# Verify the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
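One caveat with the module list: nf_conntrack_ipv4 was folded into nf_conntrack in kernel 4.19, so on newer kernels the last modprobe line must use nf_conntrack instead (CentOS 7's 3.10 kernel keeps the _ipv4 name). A small check to pick the right name for the running kernel:

```shell
# Pick the conntrack module name for this kernel (sketch; 4.19 is the cutover
# where nf_conntrack_ipv4 was merged into nf_conntrack)
major=$(uname -r | cut -d. -f1)
minor=$(uname -r | cut -d. -f2)
if [ "$major" -lt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -lt 19 ]; }; then
  conntrack_mod=nf_conntrack_ipv4
else
  conntrack_mod=nf_conntrack
fi
echo "use: modprobe $conntrack_mod"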

6. Install Docker

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# Add the Docker repository (Aliyun mirror) with yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# List the Docker versions available from the current repository
yum list docker-ce --showduplicates

# Install a specific docker-ce version; --setopt=obsoletes=0 pins the version,
# otherwise yum automatically installs the latest one
yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7

Docker's default cgroup driver is cgroupfs, while the kubelet defaults to systemd. Kubernetes requires the two drivers to match, so Docker must be switched to the systemd driver:

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://aoewjvel.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start Docker now and enable it at boot
systemctl enable docker --now

7. Host name resolution

cat  >> /etc/hosts << EOF
16.32.15.200 master
16.32.15.201 node1
16.32.15.202 node2
EOF

Then set the host name on each corresponding machine:

hostnamectl set-hostname master && bash   # run on 16.32.15.200
hostnamectl set-hostname node1 && bash    # run on 16.32.15.201
hostnamectl set-hostname node2 && bash    # run on 16.32.15.202

II. Installing kubeadm

Configure a domestic (Aliyun) yum repository, then install kubeadm, kubelet, and kubectl in one step:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum -y install --setopt=obsoletes=0 kubeadm-1.23.0 kubelet-1.23.0 kubectl-1.23.0

kubeadm deploys the main Kubernetes components as containers via the kubelet service, so kubelet must be enabled first:

systemctl enable kubelet.service --now

III. Initializing the Master Cluster

kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.0 \
--pod-network-cidr=192.168.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=16.32.15.200 \
--ignore-preflight-errors=all
  • --image-repository: pull control-plane images from the Aliyun mirror
  • --kubernetes-version: the Kubernetes version to install
  • --pod-network-cidr: the Pod network segment
  • --service-cidr: the Service network segment
  • --apiserver-advertise-address: the address the API server advertises (this master's IP)
  • --ignore-preflight-errors: ignore selected preflight check errors
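The flags above can also be expressed as a kubeadm configuration file, which is easier to keep in version control. A sketch of the v1beta3 schema that kubeadm 1.23 reads; the file name is arbitrary:

```shell
# Write an init config equivalent to the command-line flags above
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 16.32.15.200
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF

# Then initialize with:
# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=all
```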

If initialization succeeds, you will see output like the following:

[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 16.32.15.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502504 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: de4oak.y15qhm3uqgzimflo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 16.32.15.200:6443 --token de4oak.y15qhm3uqgzimflo \
        --discovery-token-ca-cert-hash sha256:0f54086e09849ec84de3f3790b0c042d4a21f4cee54f57775e82ec0fe5e3d14c 

Kubernetes authenticates clients with CA-signed certificates by default, so kubectl needs the admin kubeconfig before it can reach the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

With the kubeconfig in place, you can use the kubectl command-line tool to access and operate the cluster. For example, list the ConfigMaps in the kube-system namespace:

kubectl -n kube-system get configmap
NAME                                 DATA   AGE
coredns                              1      13m
extension-apiserver-authentication   6      13m
kube-proxy                           2      13m
kube-root-ca.crt                     1      13m
kubeadm-config                       1      13m
kubelet-config-1.23                  1      13m

Be sure to keep the join command (token) from the output above; it is needed to add nodes to the cluster:

kubeadm join 16.32.15.200:6443 --token de4oak.y15qhm3uqgzimflo \
        --discovery-token-ca-cert-hash sha256:0f54086e09849ec84de3f3790b0c042d4a21f4cee54f57775e82ec0fe5e3d14c 
    
# If you lose it, list the existing tokens with 'kubeadm token list'
kubeadm token list
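`kubeadm token list` shows tokens but not the CA certificate hash. The hash can be recomputed at any time from the cluster CA with the standard openssl recipe from the kubeadm documentation (the path assumes a default kubeadm layout):

```shell
# Recompute the --discovery-token-ca-cert-hash value from the CA certificate
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print $NF}'
}

if [ -f /etc/kubernetes/pki/ca.crt ]; then
  echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
fi
```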

IV. Joining New Nodes to the Cluster

Install kubeadm and kubelet on each node; kubectl is not required there:

yum -y install --setopt=obsoletes=0 kubeadm-1.23.0 kubelet-1.23.0

systemctl start kubelet
systemctl enable kubelet

Join the cluster with kubeadm join, copying the token command printed by the master during initialization:

kubeadm join 16.32.15.200:6443 --token de4oak.y15qhm3uqgzimflo \
        --discovery-token-ca-cert-hash sha256:0f54086e09849ec84de3f3790b0c042d4a21f4cee54f57775e82ec0fe5e3d14c 

After a node joins successfully, confirm it with kubectl get nodes on the master:

kubectl get nodes
NAME                    STATUS     ROLES                  AGE   VERSION
localhost.localdomain   NotReady   control-plane,master   90m   v1.23.0
node1                   NotReady   <none>                 59m   v1.23.0
node2                   NotReady   <none>                 30m   v1.23.0

V. Deploying the CNI Network Plugin

There are many CNI network plugins to choose from, for example Calico:
calico.yaml download link
After downloading calico.yaml, change the CALICO_IPV4POOL_CIDR value to the same network segment passed to --pod-network-cidr during initialization, then apply it:

kubectl apply -f calico.yaml

# Check that the network Pods were created successfully
kubectl get pods -n kube-system

If the image downloads are slow, you can pull them manually; run this on every node:

# List the images that need to be pulled
grep image calico.yaml

docker pull calico/cni:v3.15.1
docker pull calico/pod2daemon-flexvol:v3.15.1
docker pull calico/node:v3.15.1
docker pull calico/kube-controllers:v3.15.1
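The four pull commands can also be generated from the manifest itself, so the list stays in sync with whichever calico.yaml version you downloaded. A sketch, assuming calico.yaml is in the current directory and Docker is installed:

```shell
# Extract the unique image references from the manifest and pull each one
calico_images() {
  grep -E 'image:' "$1" | awk '{print $NF}' | sort -u
}

if [ -f calico.yaml ]; then
  for img in $(calico_images calico.yaml); do
    docker pull "$img"
  done
fi
```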

Once the network is up, every node should report the Ready status:

kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   3m48s   v1.23.0
node1    Ready    <none>                 3m26s   v1.23.0
node2    Ready    <none>                 3m19s   v1.23.0

VI. Other Configuration

1. Join tokens expire after 24 hours by default; generate a fresh join command with:

kubeadm token create --print-join-command

2. If cluster initialization fails, reset before initializing again:

kubeadm reset

3. If the Calico Pods never become Ready, pull the images manually:

# List the images that need to be pulled
grep image calico.yaml

docker pull calico/cni:v3.15.1
docker pull calico/pod2daemon-flexvol:v3.15.1
docker pull calico/node:v3.15.1
docker pull calico/kube-controllers:v3.15.1

4. Enable kubectl auto-completion

yum install -y bash-completion 
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source <(kubectl completion bash)" >> /etc/profile
source /etc/profile

5. Make the master node schedulable by removing its taints

kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes master node-role.kubernetes.io/control-plane:NoSchedule-

Check the taints on the master:

kubectl describe nodes master |grep Taints

Add or remove role labels:

kubectl label nodes master node-role.kubernetes.io/worker=    # add the worker role to the master node
kubectl label nodes master node-role.kubernetes.io/master-    # remove the master role label
