Install a Kubernetes Cluster on CentOS 7 using kubeadm

This guide describes how to deploy a minimal working Kubernetes cluster on CentOS 7 using the kubeadm tool. Kubeadm is a command-line tool built to help users bootstrap a Kubernetes cluster that conforms to best practices. The tool supports cluster lifecycle management functions such as bootstrap tokens and cluster upgrades.

Installing a Kubernetes Cluster on CentOS 7

The next sections detail the process of deploying a minimal Kubernetes cluster on CentOS 7 servers. This installation is for a single control-plane cluster. We have other guides on deploying highly available Kubernetes clusters with RKE and Kubespray.

Step 1: Prepare the Kubernetes Servers

The minimum requirements for the servers used in the cluster are:

  • 2 GiB or more of RAM per machine – any less leaves little room for your applications.
  • At least 2 CPUs on the machine used as a control-plane node.
  • Full network connectivity among all machines in the cluster – it can be private or public.

Since this setup is for development purposes, my servers have the following details:

Server Type | Server Hostname                    | Specifications
Master      | k8s-master01.computingforgeeks.com | 4GB RAM, 2vcpus
Worker      | k8s-worker01.computingforgeeks.com | 4GB RAM, 2vcpus
Worker      | k8s-worker02.computingforgeeks.com | 4GB RAM, 2vcpus

Log in to all servers and update the operating system.

sudo yum -y update && sudo systemctl reboot
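If you are working from a separate workstation, a small loop can run the update on every node; this is just a convenience sketch assuming SSH access to the example hostnames used in this guide:

# Hypothetical helper: update and reboot all three example hosts over SSH
for host in k8s-master01 k8s-worker01 k8s-worker02; do
  ssh ${host}.computingforgeeks.com "sudo yum -y update && sudo systemctl reboot"
done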

Step 2: Install kubelet, kubeadm and kubectl

Once the servers are rebooted, add the Kubernetes repository for CentOS 7 to all the servers.

sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Then install the required packages.

sudo yum -y install epel-release vim git curl wget kubelet kubeadm kubectl --disableexcludes=kubernetes

Check the version of kubectl to confirm the installation.

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Step 3: Disable SELinux and Swap

If you have SELinux in enforcing mode, turn it off or switch it to permissive mode.

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
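You can confirm the new mode afterwards:

# Should print Permissive (or Disabled)
getenforce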

Turn off swap.

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
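To confirm that swap is fully disabled, both of the following should report no swap in use:

# No output from swapon and a 0B Swap line from free mean swap is off
sudo swapon --show
free -h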

Configure sysctl.

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
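You can verify that the kernel parameters took effect; each of the following should print 1:

# Check the bridge and forwarding settings applied above
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward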

Step 4: Install a Container Runtime

Kubernetes uses a container runtime to run containers in Pods. The supported container runtimes are:

  • Docker
  • CRI-O
  • containerd

NOTE: You have to choose one runtime at a time.

Install Docker runtime:

# Install packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y && sudo yum install -y containerd.io-1.2.13 docker-ce-19.03.8 docker-ce-cli-19.03.8

# Create required directories
sudo mkdir /etc/docker
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Start and enable the Docker service
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
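You can optionally confirm that Docker picked up the systemd cgroup driver configured above:

# Should report: Cgroup Driver: systemd
sudo docker info | grep -i "cgroup driver"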

Install CRI-O:

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
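CRI-O itself can then be installed. As a rough sketch: at the time of writing, CRI-O for CentOS 7 was commonly shipped from the openSUSE Kubic repositories; the repository URLs and version below are assumptions, so match CRIO_VERSION to your Kubernetes minor version:

# Assumed source: devel:kubic:libcontainers repositories (adjust CRIO_VERSION as needed)
CRIO_VERSION=1.18
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
  https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${CRIO_VERSION}.repo \
  https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${CRIO_VERSION}/CentOS_7/devel:kubic:libcontainers:stable:cri-o:${CRIO_VERSION}.repo

# Install and start CRI-O
sudo yum install -y cri-o
sudo systemctl daemon-reload
sudo systemctl enable --now crio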

Install containerd:

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Install containerd from the Docker CE repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y containerd.io

# Generate a default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

To use the systemd cgroup driver, set plugins.cri.systemd_cgroup = true in /etc/containerd/config.toml. When using kubeadm, also configure the kubelet to use the same cgroup driver manually.
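A minimal sketch of that change, assuming the default config.toml generated above (containerd 1.2.x places systemd_cgroup under [plugins.cri]):

# Flip the systemd cgroup flag in the generated config, then restart containerd
sudo sed -i 's/systemd_cgroup = false/systemd_cgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd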

Step 5: Configure the Firewall

If you have an active firewalld service, there are a number of ports that need to be opened.

Master server ports:

sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload

Worker node ports:

sudo firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
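You can optionally confirm the open ports on each node:

# Lists the ports added to the permanent configuration above
sudo firewall-cmd --list-ports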

Step 6: Initialize the Control-Plane Node

Log in to the server that will be used as the master and make sure that the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  2 br_netfilter,ebtable_broute

Enable the kubelet service.

sudo systemctl enable kubelet

Next, initialize the machine that will run the control-plane components, which includes etcd (the cluster database) and the API Server.

Pull the container images:

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.3
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7

These are the basic kubeadm init options that are used to bootstrap the cluster:

--control-plane-endpoint : set the shared endpoint for all control-plane nodes. Can be DNS/IP
--pod-network-cidr : used to set a Pod network add-on CIDR
--cri-socket : use if you have more than one container runtime, to set the runtime socket path
--apiserver-advertise-address : set the advertise address for this particular control-plane node's API server

Set the DNS name for the cluster endpoint, or add a record to the /etc/hosts file.

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.computingforgeeks.com

Create the cluster:

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-cluster.computingforgeeks.com

Note: If 192.168.0.0/16 is already in use within your network, you must select a different Pod network CIDR and replace 192.168.0.0/16 in the command above.

Container runtime sockets:

Runtime    | Path to Unix domain socket
Docker     | /var/run/docker.sock
containerd | /run/containerd/containerd.sock
CRI-O      | /var/run/crio/crio.sock

You can optionally pass the socket file of the runtime, and an advertise address, depending on your setup.

Here is the output of my initialization command:

....
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0611 22:34:23.276374    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0611 22:34:23.278380    4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.008181 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.computingforgeeks.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01.computingforgeeks.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zoy8cq.6v349sx9ass8dzyj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-cluster.computingforgeeks.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
    --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster.computingforgeeks.com:6443 --token zoy8cq.6v349sx9ass8dzyj \
    --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24

Configure kubectl using the commands in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:

$ kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.computingforgeeks.com:6443
KubeDNS is running at https://k8s-cluster.computingforgeeks.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Additional master nodes can be added using the command from the installation output:

kubeadm join k8s-cluster.computingforgeeks.com:6443 \
  --token zoy8cq.6v349sx9ass8dzyj \
  --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
  --control-plane
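Note that the init output above skipped the upload-certs phase, so joining another control-plane node also needs a certificate key. A sketch of how one is typically generated:

# Run on the existing control plane; prints a certificate key
sudo kubeadm init phase upload-certs --upload-certs
# Then append --certificate-key <key printed above> to the control-plane join command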

Step 7: Install a Network Plugin

In this guide we will use Calico. You can choose any other supported network plugin.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

You should see the following output:

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Confirm that all of the pods are running:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-nfqrr                     1/1     Running   0          2m52s
kube-system   calico-node-kpprr                                            1/1     Running   0          2m52s
kube-system   coredns-66bff467f8-9bxgm                                     1/1     Running   0          7m43s
kube-system   coredns-66bff467f8-jgwln                                     1/1     Running   0          7m43s
kube-system   etcd-k8s-master01.computingforgeeks.com                      1/1     Running   0          7m58s
kube-system   kube-apiserver-k8s-master01.computingforgeeks.com            1/1     Running   0          7m58s
kube-system   kube-controller-manager-k8s-master01.computingforgeeks.com   1/1     Running   0          7m58s
kube-system   kube-proxy-bt7ff                                             1/1     Running   0          7m43s
kube-system   kube-scheduler-k8s-master01.computingforgeeks.com            1/1     Running   0          7m58s

Confirm that the master node is ready:

$ kubectl get nodes -o wide
NAME                                 STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01.computingforgeeks.com   Ready    master   8m38s   v1.18.3   95.217.235.35   <none>        CentOS Linux 7 (Core)   3.10.0-1127.10.1.el7.x86_64   docker://19.3.8

Step 8: Add Worker Nodes

With the control plane ready, you can add worker nodes to the cluster for running scheduled workloads.

If the endpoint address is not in DNS, add a record to /etc/hosts.

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.computingforgeeks.com

The join command that was given is used to add a worker node to the cluster.

kubeadm join k8s-cluster.computingforgeeks.com:6443 \
  --token zoy8cq.6v349sx9ass8dzyj \
  --discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24

Output:

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run the following command on the control plane to check if the node joined the cluster:

$ kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
k8s-master01.computingforgeeks.com   Ready    master   18m   v1.18.3
k8s-worker01.computingforgeeks.com   Ready    <none>   98s   v1.18.3

If your join token has expired, refer to our guide on how to join worker nodes:

Join new Kubernetes Worker Node to an existing Cluster
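A fresh join command with a new bootstrap token can also be generated directly on the control plane:

# Prints a ready-to-use worker join command
sudo kubeadm token create --print-join-command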

Step 9: Deploy an Application to the Cluster

Deploy a test application to verify that the cluster is working.

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Check that the pod started:

$ kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          40s
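Since the demo pod runs its command and exits with status Completed, its output can be read from the logs:

# Shows the environment variables printed by the demo container
kubectl logs command-demo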

Step 10: Install the Kubernetes Dashboard (Optional)

The Kubernetes Dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized applications, and manage the cluster resources.

See the installation guide: How To Install Kubernetes Dashboard with NodePort

Storage guides:

Ceph Persistent Storage for Kubernetes with Cephfs

Persistent Storage for Kubernetes with Ceph RBD

How To Configure Kubernetes Dynamic Volume Provisioning With Heketi & GlusterFS


Similar Kubernetes deployment guides:

  • Install Production Kubernetes Cluster with Rancher RKE
  • How To Deploy a Lightweight Kubernetes Cluster in 5 Minutes with K3s
  • Deploy a Production-Ready Kubernetes Cluster with Ansible & Kubespray
