Kubernetes v1.28 Cluster Deployment (kubeadm)

Installation Steps

Deploying a cluster with kubeadm is fairly straightforward and boils down to two steps:

  1. Run kubeadm init on the Master node to initialize the control plane.
  2. Run kubeadm join <Master node IP and port> on each worker node.

Prerequisites

A cluster is best deployed on at least three machines. This article uses three: one as the Master and the other two as Workers. I am using Tencent Cloud servers running CentOS Stream 8.

Disable the Firewall

systemctl disable firewalld --now

Disable SELinux

# Set SELinux to permissive mode (effectively disabling it)
# Temporary (effective for the current session only)
setenforce 0
# Permanent
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
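To confirm the change for the current session, getenforce should now report Permissive:

# Should print "Permissive"
getenforce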

Disable Swap

# Temporary (effective for the current session only)
swapoff -a
# Permanent
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
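A quick check that swap is really off (swapon should print nothing and free should report 0 swap):

# Verify swap is disabled
swapon --show
free -h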

Forward IPv4 and Let iptables See Bridged Traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
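A quick verification that the modules are loaded and the sysctl values took effect:

# Confirm the kernel modules are loaded
lsmod | grep -e br_netfilter -e overlay
# Each of these should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward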

Time Synchronization

CentOS 8 no longer ships the ntp package by default; time synchronization is handled by chrony instead. Tencent Cloud servers already come with time synchronization configured, as shown in the figure below:

(Figure: the server's existing chrony time synchronization configuration)

If you do need to configure it yourself, just follow the configuration shown above; the time server addresses are:

server time1.tencentyun.com iburst
server time2.tencentyun.com iburst
server time3.tencentyun.com iburst
server time4.tencentyun.com iburst
server time5.tencentyun.com iburst
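If you set it up by hand, a minimal sketch, assuming chrony's default configuration file at /etc/chrony.conf:

# Append the time servers (add the remaining ones the same way) and restart chronyd
cat >> /etc/chrony.conf << EOF
server time1.tencentyun.com iburst
server time2.tencentyun.com iburst
EOF
systemctl restart chronyd
# Check the synchronization sources
chronyc sources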

Configure /etc/hosts

cat >> /etc/hosts << EOF
10.0.16.7 linux1
10.0.16.11 linux2
10.0.16.13 linux3
EOF

Install Other Tools

yum -y install socat conntrack-tools

Install the Kubernetes Tools

kubeadm, kubelet, and kubectl need to be installed.

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
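A quick check that the tools were installed at the expected version (the kubelet will keep crash-looping until kubeadm init runs, which is expected at this point):

# Check installed versions
kubeadm version -o short
kubectl version --client
kubelet --version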

Install a Container Runtime

This article uses Docker as the container runtime. The official installation guide is at https://docs.docker.com/engine/install/centos/

# Remove any old Docker packages
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
# Set up the repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install the latest version
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
# Start Docker and enable it at boot
systemctl enable docker --now
# Set the cgroup driver to systemd (write with > rather than >>, so an existing daemon.json is not turned into invalid JSON)
cat > /etc/docker/daemon.json << EOF
{
    "exec-opts":["native.cgroupdriver=systemd"]
}
EOF
# Restart Docker so the new cgroup driver takes effect
systemctl restart docker
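To confirm Docker is now using the systemd cgroup driver:

# Should print "systemd"
docker info --format '{{.CgroupDriver}}'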

You also need to install cri-dockerd; download the rpm package from https://github.com/Mirantis/cri-dockerd/releases.


# Install
yum install -y cri-dockerd-0.3.7.20231027185657.170103f2-0.el7.x86_64.rpm

# Enable at boot and start
systemctl enable cri-docker --now

Change the Pause Image Version

Edit /usr/lib/systemd/system/cri-docker.service and append the following flag to the ExecStart line (the part shown in red in the figure below is the addition):

--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

(Figure: cri-docker.service with the --pod-infra-container-image flag added to ExecStart)
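If you prefer to script the edit, a sketch using sed; it assumes the rpm's stock ExecStart line starts with /usr/bin/cri-dockerd, so check your unit file before running it:

# Insert the pause-image flag right after the cri-dockerd binary path
sudo sed -i 's|^ExecStart=/usr/bin/cri-dockerd |&--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 |' /usr/lib/systemd/system/cri-docker.service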

Restart cri-docker

# Reload systemd after changing the unit file
systemctl daemon-reload
# Restart the service
systemctl restart cri-docker

Initialize the Master Node

Before initializing, it is worth checking which container runtimes are running. Installing Docker this way also starts the containerd runtime, as the figures below show:

(Figure: docker.service is active)

(Figure: containerd.service is also active)
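The same check from the command line:

# All three services should report "active"
systemctl is-active docker containerd cri-docker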

Because both runtimes are running, you need to pass --cri-socket=unix:///var/run/cri-dockerd.sock to tell kubeadm which container runtime to use.

I am using the linux1 machine as the master, so the initialization is run there.

[root@linux1 ~]#  kubeadm init --apiserver-advertise-address=10.0.16.7 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
W1112 10:48:15.451007  301479 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1112 10:48:15.451069  301479 version.go:105] falling back to the local client version: v1.28.3
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local linux1] and IPs [10.96.0.1 10.0.16.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [linux1 localhost] and IPs [10.0.16.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [linux1 localhost] and IPs [10.0.16.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002228 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node linux1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node linux1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 7m20a9.3tdgm9ci23gyi2pz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.16.7:6443 --token 7m20a9.3tdgm9ci23gyi2pz \
        --discovery-token-ca-cert-hash sha256:0b39b8b063bf7790c5ced39fa7a36f1ac1b41c04858ba6eb19cdba0b6b4a8f1a 
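Note that the bootstrap token in the join command above is only valid for 24 hours by default; if it expires before the workers join, a fresh join command can be generated on the master:

# Create a new token and print the full join command
kubeadm token create --print-join-command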

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, persist the KUBECONFIG export for the root user
cat /etc/profile.d/kubernetes.sh 
export KUBECONFIG=/etc/kubernetes/admin.conf

Check the node status:

[root@linux1 ~]# kubectl get node -owide
NAME     STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
linux1   NotReady   control-plane   95s   v1.28.3   10.0.16.7     <none>        CentOS Stream 8   4.18.0-492.el8.x86_64   docker://24.0.7

The status is NotReady because no network plugin has been installed yet.

Install a Network Plugin

There are many network plugins to choose from; see https://kubernetes.io/docs/concepts/cluster-administration/addons/ for the full list. I use Calico here; the official quickstart is at https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart

[root@linux1 ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

[root@linux1 ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

My pod network was configured with --pod-network-cidr=192.168.0.0/16, which matches Calico's default CIDR, so nothing needs to change. If yours differs, download custom-resources.yaml first, find the 192.168.0.0/16 entry, and change it to match your --pod-network-cidr.
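A sketch of that customization, using a hypothetical 10.244.0.0/16 pod CIDR purely as an example:

# Download the manifest, point the CIDR at your --pod-network-cidr, then apply it
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml   # 10.244.0.0/16 is just an example value
kubectl create -f custom-resources.yaml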

After installing the network plugin, wait a little while and check the node status again:

[root@linux1 ~]# kubectl get node -owide
NAME     STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION          CONTAINER-RUNTIME
linux1   Ready    control-plane   23m   v1.28.3   10.0.16.7     <none>        CentOS Stream 8   4.18.0-492.el8.x86_64   docker://24.0.7

The status has changed to Ready.
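You can also watch the Calico components come up; the Tigera operator creates them in the calico-system namespace:

# Wait until all pods report Running
watch kubectl get pods -n calico-system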

Join the Worker Nodes

linux2:

[root@linux2 ~]# kubeadm join 10.0.16.7:6443 --token 7m20a9.3tdgm9ci23gyi2pz \
>         --discovery-token-ca-cert-hash sha256:0b39b8b063bf7790c5ced39fa7a36f1ac1b41c04858ba6eb19cdba0b6b4a8f1a  --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

linux3:

[root@linux3 ~]# kubeadm join 10.0.16.7:6443 --token 7m20a9.3tdgm9ci23gyi2pz \
>         --discovery-token-ca-cert-hash sha256:0b39b8b063bf7790c5ced39fa7a36f1ac1b41c04858ba6eb19cdba0b6b4a8f1a  --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

After waiting a while, check the node list:

[root@linux1 ~]# kubectl get node
NAME     STATUS   ROLES           AGE     VERSION
linux1   Ready    control-plane   62m     v1.28.3
linux2   Ready    <none>          5m4s    v1.28.3
linux3   Ready    <none>          4m38s   v1.28.3
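Optionally, you can label the workers so the ROLES column shows worker instead of <none>; this is purely cosmetic:

# Add a worker role label (the label value can be empty)
kubectl label node linux2 node-role.kubernetes.io/worker=
kubectl label node linux3 node-role.kubernetes.io/worker=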

Reboot All Machines

Reboot all machines to verify that the cluster comes back up automatically.

Check the nodes:

[root@linux1 ~]# kubectl get node
NAME     STATUS   ROLES           AGE     VERSION
linux1   Ready    control-plane   66m     v1.28.3
linux2   Ready    <none>          8m52s   v1.28.3
linux3   Ready    <none>          8m26s   v1.28.3

No problems.
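As a final smoke test, a throwaway deployment (the name nginx-test is just for this example) confirms that pods get scheduled onto the workers:

# Deploy a test workload and see where its pods land
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx-test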

Tags: kubeadm, k8s, kubernetes, kubernetes-cluster, v1.28
