Setting Up a Kubernetes Cluster (the kubeadm Way)

1. Deployment and Installation

1.1 Background

There are currently two main ways to deploy a production Kubernetes cluster:

(1) kubeadm: kubeadm is a Kubernetes deployment tool that provides kubeadm init and kubeadm join for quickly bootstrapping a Kubernetes cluster. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

(2) Binary packages: download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

1.2 Overview of the kubeadm approach

kubeadm is the tool released by the official community for quickly deploying a Kubernetes cluster. A cluster can be brought up with just two commands: first, create a Master node with kubeadm init; second, join the Node machines to that cluster with kubeadm join.
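
A minimal sketch of that two-step flow is shown below; the IP address, token, and hash are placeholders that the real kubeadm init prints for you, not values from this guide.

$ kubeadm init                                      # on the Master node
$ kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>    # on each Node, copied from the init output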

1.3 Installation requirements

Before starting, the machines that will form the Kubernetes cluster must meet the following conditions (a quick verification sketch follows the list):

  • One or more machines running CentOS 7.x (x86_64)
  • Hardware: at least 2 GB of RAM, at least 2 CPUs, and at least 30 GB of disk
  • Network connectivity between all machines in the cluster
  • Outbound internet access, needed for pulling images
  • Swap disabled
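
The commands below are a hedged, read-only sanity check of these requirements on CentOS 7; they are not part of the original procedure.

$ cat /etc/redhat-release      # OS version
$ nproc                        # CPU count, should be >= 2
$ free -h                      # RAM, should be >= 2G; the Swap line should reach 0 after step 1.6.3
$ df -h /                      # disk size, should be >= 30G
$ ping -c 3 mirrors.aliyun.com # outbound network access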

1.4 Goals

(1) Install Docker and kubeadm on all nodes
(2) Deploy the Kubernetes Master
(3) Deploy a container network plugin
(4) Deploy the Kubernetes Nodes and join them to the cluster
(5) Deploy the Dashboard web UI for visual inspection of Kubernetes resources

1.5 Environment

| Role | IP |
| --- | --- |
| k8s-master | 10.211.55.15 |
| k8s-node1 | 10.211.55.16 |
| k8s-node2 | 10.211.55.17 |

1.6 System initialization

1.6.1 Turn off the firewall:

$ systemctl stop firewalld 
$ systemctl disable firewalld

1.6.2 Disable SELinux:

$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent 
$ setenforce 0 # temporary 
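
A hedged check, not in the original steps: getenforce reports the current SELinux mode, which should be Permissive after setenforce 0 and Disabled after the next reboot.

$ getenforce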

1.6.3 Disable swap:

$ swapoff -a # temporary 
$ sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent 

1.6.4 Set the hostname:

$ hostnamectl set-hostname <hostname> 
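
For the three machines listed in the table in 1.5, that means running the matching command on each host, for example:

$ hostnamectl set-hostname k8s-master   # on 10.211.55.15
$ hostnamectl set-hostname k8s-node1    # on 10.211.55.16
$ hostnamectl set-hostname k8s-node2    # on 10.211.55.17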

1.6.5 Add hosts entries on the master:

$ cat >> /etc/hosts << EOF 
10.211.55.15 k8s-master 
10.211.55.16 k8s-node1 
10.211.55.17 k8s-node2 
EOF 
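
A hedged check: once /etc/hosts is in place, the node names should resolve from the master (this assumes the node machines are already up and reachable).

$ ping -c 1 k8s-node1
$ ping -c 1 k8s-node2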

1.6.6 Pass bridged IPv4 traffic to the iptables chains:

$ cat > /etc/sysctl.d/k8s.conf << EOF 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1 
EOF 
$ sysctl --system # apply 
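
If sysctl --system does not show the two net.bridge.* keys, the br_netfilter kernel module is probably not loaded; loading it first is a common extra step on CentOS 7, though it is not listed in the original text.

$ modprobe br_netfilter
$ sysctl --system
$ sysctl net.bridge.bridge-nf-call-iptables   # should print: net.bridge.bridge-nf-call-iptables = 1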

1.6.7 Synchronize time:

$ yum install ntpdate -y 
$ ntpdate time.windows.com

1.7 Install Docker/kubeadm/kubelet on all nodes

Kubernetes uses Docker as its default container runtime (CRI), so install Docker first.

1.7.1 Install Docker

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo 
$ yum -y install docker-ce-18.06.1.ce-3.el7 
$ systemctl enable docker && systemctl start docker 
$ docker --version
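
A hedged check after the install: docker info shows which cgroup driver Docker is using; with this package it typically defaults to cgroupfs, which is exactly the mismatch handled in section 2 below.

$ docker info | grep -i 'cgroup driver'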

1.7.2 Configure the image registry mirror and add the Aliyun YUM repository

Set the Docker registry mirror address:

$ cat > /etc/docker/daemon.json << EOF 
{ 
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"] 
}
EOF 
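
The mirror in daemon.json only takes effect after Docker reloads its configuration; restarting Docker here is a small addition that is not spelled out in the original text.

$ systemctl daemon-reload
$ systemctl restart docker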

Add the Kubernetes yum repository:

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF 
[kubernetes] 
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=0 
repo_gpgcheck=0 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF 

1.7.3 Install kubeadm, kubelet, and kubectl

$ yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
$ systemctl enable kubelet 
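
A hedged note: at this point kubelet is enabled but will keep restarting until kubeadm init (or kubeadm join) gives it a configuration, which is expected. The installed versions can be confirmed with:

$ kubeadm version
$ kubelet --version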

1.8 Deploy the Kubernetes Master

1.8.1 Run the following on the k8s-master node

$ kubeadm init \
--apiserver-advertise-address=10.211.55.15 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

1.8.2 Set up the kubectl tool:

$ mkdir -p $HOME/.kube 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config 
$ kubectl get nodes 
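
A hedged expectation: right after init, kubectl get nodes usually lists the master as NotReady, because no Pod network plugin is installed yet; it switches to Ready once the flannel deployment in 1.9 is running. The status can be watched with:

$ kubectl get nodes -w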

1.9 Install a Pod network plugin (CNI)

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 
$ kubectl get pods -n kube-system
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-6jdxf             1/1     Running   0          33m
coredns-7ff77c879f-bt9mt             1/1     Running   0          33m
etcd-k8s-master                      1/1     Running   0          33m
kube-apiserver-k8s-master            1/1     Running   1          33m
kube-controller-manager-k8s-master   1/1     Running   4          33m
kube-flannel-ds-bv727                1/1     Running   0          2m30s
kube-flannel-ds-fm7fz                1/1     Running   0          2m30s
kube-flannel-ds-rdj26                1/1     Running   0          2m30s
kube-proxy-5rqfw                     1/1     Running   0          33m
kube-proxy-fqqtz                     1/1     Running   0          10m
kube-proxy-ljndp                     1/1     Running   0          9m50s
kube-scheduler-k8s-master            1/1     Running   4          33m

1.10 Join the Kubernetes Nodes

1.10.1 Run the following on k8s-node1 and k8s-node2

To add new nodes to the cluster, run the kubeadm join command printed in the output of kubeadm init:

$ kubeadm join 10.211.55.15:6443 --token esce21.q6hetwm8si29qxwn \
    --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5 
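
If the token from the original init output has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

$ kubeadm token create --print-join-command

Back on the master, kubectl get nodes should now list all three nodes, as in the output below.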

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   34m     v1.18.0
k8s-node1    Ready    <none>   10m     v1.18.0
k8s-node2    Ready    <none>   9m57s   v1.18.0

1.11 Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx 
$ kubectl expose deployment nginx --port=80 --type=NodePort 
$ kubectl get pod,svc 

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
nginx-f89759699-glcl2   0/1     ContainerCreating   0          35s
[root@k8s-master ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
nginx-f89759699-glcl2   0/1     ContainerCreating   0          44s
[root@k8s-master ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
nginx-f89759699-glcl2   0/1     ContainerCreating   0          51s
[root@k8s-master ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
nginx-f89759699-glcl2   0/1     ContainerCreating   0          58s
[root@k8s-master ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-glcl2   1/1     Running   0          71s
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-glcl2   1/1     Running   0          2m8s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        40m
service/nginx        NodePort    10.107.175.33   <none>        80:32762/TCP   29s

Access URLs: http://10.211.55.15:32762  http://10.211.55.16:32762  http://10.211.55.17:32762
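
The NodePort (32762 here) is taken from the kubectl get svc output above and will differ between clusters; as a hedged quick test from any host that can reach the nodes:

$ curl -I http://10.211.55.15:32762   # should return an HTTP 200 response served by nginx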

2. Troubleshooting cluster initialization errors

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

The cluster failed to initialize.

Explanation:

This problem is generally caused by a misconfiguration on the node, in this case the Docker cgroup driver (cgroupfs by default) not matching what the kubelet expects; the fix is to switch Docker to the systemd cgroup driver.

Solution:

$ vim /etc/docker/daemon.json

{
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [
    "https://ot2k4d59.mirror.aliyuncs.com/"
  ],
  "graph": "/data/docker"
}

$ systemctl daemon-reload
$ systemctl restart docker

Change the Docker cgroup driver to systemd, then reload the configuration and restart the Docker service.
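
Before re-running kubeadm init, it is worth confirming the new driver is active; a hedged check:

$ docker info | grep -i 'cgroup driver'   # should now print: Cgroup Driver: systemd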

Then run the initialization again:

$ kubeadm init \
--apiserver-advertise-address=10.211.55.15 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

The following errors are reported:

[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

These errors mean the previous, failed attempt left files behind and its ports in use. Clean up by running:

$ kubeadm reset
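
kubeadm reset stops the kubelet and removes the leftover manifests and etcd data, which clears the "already exists" and "not empty" errors above. As an extra hedged check, the kubelet port can be confirmed free before retrying:

$ ss -lntp | grep 10250   # no output means port 10250 is free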

Run the init once more:

$ kubeadm init \
--apiserver-advertise-address=10.211.55.15 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.15:6443 --token s2938g.mzs7v8jy98iztjpb \
    --discovery-token-ca-cert-hash sha256:818ae1677e592160597f2e6b036b2f93318fdaa710356abdc74b18854a525c6a 

Success!

