Install a Production Kubernetes Cluster with Rancher RKE

How do you deploy a production-ready Kubernetes cluster with RKE? Kubernetes has attracted a lot of attention and is now the standard orchestration layer for containerized workloads. If you need an open-source system to automate the deployment of containerized applications without worrying about scaling and management, Kubernetes is the tool of choice.

There are several standard ways to deploy a production-grade Kubernetes cluster. These include using tools such as Kubespray, or building the cluster manually with kubeadm. You can use the following guides as references:

Deploy a production-ready Kubernetes cluster with Ansible and Kubespray

Deploy a Kubernetes cluster on CentOS 7 / CentOS 8 with Ansible and Calico CNI

This guide walks through the simple steps of installing a production-grade Kubernetes cluster with RKE. We will set up a 5-node cluster with Rancher Kubernetes Engine (RKE) and install the Rancher chart with the Helm package manager.

What is RKE?

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes distribution that runs entirely within containers. Rancher is a container management platform built for organizations that deploy containers in production. With Rancher you can easily run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

Prepare the Workstation Machine

A number of CLI tools are required on the workstation where the deployment will run. This can be a virtual machine with access to the cluster nodes.

  1. kubectl:
--- Linux ---
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

--- macOS ---
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

  2. rke:

--- Linux ---
curl -s https://api.github.com/repos/rancher/rke/releases/latest | grep download_url | grep amd64 | cut -d '"' -f 4 | wget -qi -
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
rke --version

--- macOS ---
curl -s https://api.github.com/repos/rancher/rke/releases/latest | grep download_url | grep darwin-amd64 | cut -d '"' -f 4 | wget -qi -
chmod +x rke_darwin-amd64
sudo mv rke_darwin-amd64 /usr/local/bin/rke
rke --version

  3. Helm:

--- Helm 3 ---
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Install Kubernetes with RKE

The setup uses five nodes:

  • Three master nodes – etcd and control plane (3 for HA)
  • Two worker nodes – scaled to meet workload demands

These are the specifications of my setup:

  • Master nodes – 8 GB RAM and 4 vCPUs
  • Worker machines – 16 GB RAM and 8 vCPUs

Operating Systems Supported by RKE

RKE runs on almost any Linux OS with Docker installed. Rancher has been tested on, and is supported with, the following:

  • Red Hat Enterprise Linux
  • Oracle Enterprise Linux
  • CentOS Linux
  • Ubuntu
  • RancherOS

Step 1: Update Your Linux Systems

The first step is to update the Linux machines that will be used to build the cluster.

--- CentOS ---
$ sudo yum -y update
$ sudo reboot

--- Ubuntu / Debian ---
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo reboot

Step 2: Create the rke User

If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, you cannot use root as the SSH user due to Bugzilla 1527565. Therefore, create a user named rke for the deployment.

Using an Ansible playbook:

---
- name: Create rke user with passwordless sudo
  hosts: rke-hosts
  remote_user: root
  tasks:
    - name: Add RKE admin user
      user:
        name: rke
        shell: /bin/bash
     
    - name: Create sudo file
      file:
        path: /etc/sudoers.d/rke
        state: touch
    
    - name: Give rke user passwordless sudo
      lineinfile:
        path: /etc/sudoers.d/rke
        state: present
        line: 'rke ALL=(ALL:ALL) NOPASSWD: ALL'
     
    - name: Set authorized key taken from file
      authorized_key:
        user: rke
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

Creating the user manually on all hosts

Log in to each cluster node and create the rke user.

sudo useradd rke
sudo passwd rke

Enable passwordless sudo for the user.

$ sudo vim /etc/sudoers.d/rke
rke  ALL=(ALL:ALL) NOPASSWD: ALL

Copy your SSH public key to the user's ~/.ssh/authorized_keys file.

for i in rke-master-01 rke-master-02 rke-master-03 rke-worker-01 rke-worker-02; do
  ssh-copy-id rke@$i
done
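ssh-copy-id assumes a key pair already exists on the workstation. If it does not, you can generate one non-interactively first; a sketch, assuming the default ~/.ssh/id_rsa path (which also matches the ssh_key_path used later in cluster.yml):

```shell
# Create an RSA key pair with an empty passphrase only if none exists yet.
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
  mkdir -p "$HOME/.ssh"
  chmod 700 "$HOME/.ssh"
  ssh-keygen -t rsa -b 4096 -N '' -f "$HOME/.ssh/id_rsa"
fi
```

Use a passphrase-protected key instead if your security policy requires it; RKE supports that via ssh-agent, as shown later in this guide.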

Confirm you can log in from the workstation.

$ ssh rke@rke-master-01
Warning: Permanently added 'rke-master-01,x.x.x.x' (ECDSA) to the list of known hosts.
[rke@rke-master-01 ~]$ sudo su - # No password prompt
Last login: Mon Jan 27 21:28:53 CET 2020 from y.y.y.y on pts/0
[root@rke-master-01 ~]# exit
[rke@rke-master-01 ~]$ exit
logout
Connection to rke-master-01 closed.

Step 3: Enable the Required Kernel Modules

Using Ansible

Create a playbook with the following content and run it against your inventory of RKE servers.

---
- name: Load RKE kernel modules
  hosts: rke-hosts
  remote_user: root
  vars:
    kernel_modules:
      - br_netfilter
      - ip6_udp_tunnel
      - ip_set
      - ip_set_hash_ip
      - ip_set_hash_net
      - iptable_filter
      - iptable_nat
      - iptable_mangle
      - iptable_raw
      - nf_conntrack_netlink
      - nf_conntrack
      - nf_conntrack_ipv4
      - nf_defrag_ipv4
      - nf_nat
      - nf_nat_ipv4
      - nf_nat_masquerade_ipv4
      - nfnetlink
      - udp_tunnel
      - veth
      - vxlan
      - x_tables
      - xt_addrtype
      - xt_conntrack
      - xt_comment
      - xt_mark
      - xt_multiport
      - xt_nat
      - xt_recent
      - xt_set
      - xt_statistic
      - xt_tcpudp

  tasks:
    - name: Load kernel modules for RKE
      modprobe:
        name: "{{ item }}"
        state: present
      with_items: "{{ kernel_modules }}"
 

Manual way

Log in to each host and load the kernel modules required to run Kubernetes.

for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp; do
  if ! lsmod | grep -q $module; then
    echo "module $module is not present, loading it"
    sudo modprobe $module
  fi
done
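Loading modules with modprobe only lasts until the next reboot. To make them persistent you can stage a modules-load.d file; a sketch using a representative subset of the module list above (the /etc/modules-load.d/rke.conf destination is an assumption, adjust it to your distribution's conventions):

```shell
# Write one module name per line into a staging file, then install it as
# /etc/modules-load.d/rke.conf on each node so systemd loads them at boot.
modules="br_netfilter ip6_udp_tunnel ip_set iptable_filter iptable_nat \
iptable_mangle nf_conntrack nf_nat veth vxlan x_tables xt_conntrack"
conf="$(mktemp)"
for m in $modules; do
  echo "$m"
done > "$conf"
echo "staged $(wc -l < "$conf") modules in $conf"
# On each node: sudo install -m 0644 "$conf" /etc/modules-load.d/rke.conf
```

Extend the modules variable with the full list from the loop above before installing the file.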

Step 4: Disable Swap and Adjust sysctl Entries

Kubernetes recommends disabling swap and setting a few sysctl values.

With Ansible:

---
- name: Disable swap and load kernel modules
  hosts: rke-hosts
  remote_user: root
  tasks:
    - name: Disable SWAP since kubernetes can't work with swap enabled (1/2)
      shell: |
        swapoff -a
     
    - name: Disable SWAP in fstab since kubernetes can't work with swap enabled (2/2)
      replace:
        path: /etc/fstab
        regexp: '^([^#].*?\sswap\s+.*)$'
        replace: '# \1'
    - name: Modify sysctl entries
      sysctl:
        name: '{{ item.key }}'
        value: '{{ item.value }}'
        sysctl_set: yes
        state: present
        reload: yes
      with_items:
        - {key: net.bridge.bridge-nf-call-ip6tables, value: 1}
        - {key: net.bridge.bridge-nf-call-iptables,  value: 1}
        - {key: net.ipv4.ip_forward,  value: 1}

Manually

Swap:

$ sudo vim /etc/fstab
# Add comment to swap line

$ sudo swapoff -a

Sysctl:

$ sudo tee -a /etc/sysctl.d/99-kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system

Confirm swap is disabled:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        180M        6.8G        8.5M        633M        7.2G
Swap:            0B          0B          0B
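You can also read the sysctl values back straight from /proc, which avoids depending on the sysctl binary being on the path; a quick check that should report 1 for all three keys on a correctly configured node:

```shell
# Print each kernel parameter set above; a missing bridge key usually means
# the br_netfilter module is not loaded yet.
for key in net/bridge/bridge-nf-call-iptables \
           net/bridge/bridge-nf-call-ip6tables \
           net/ipv4/ip_forward; do
  printf '%s = %s\n' "$key" "$(cat /proc/sys/$key 2>/dev/null || echo 'not set')"
done
```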

Step 5: Install a Supported Docker Version

Each Kubernetes release supports a different range of Docker versions. The Kubernetes release notes include the current list of validated Docker versions.

At the time of writing, the supported Docker versions were:

Docker Version    Install Script
18.09.2           curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
18.06.2           curl https://releases.rancher.com/install-docker/18.06.2.sh | sh
17.03.2           curl https://releases.rancher.com/install-docker/17.03.2.sh | sh

You can follow the Docker installation instructions, or use one of Rancher's install scripts to install Docker. The script below installs the latest validated version:

curl https://releases.rancher.com/install-docker/18.09.2.sh | sudo bash -

Start and enable the Docker service.

sudo systemctl enable --now docker

Confirm that a Kubernetes-supported version of Docker is installed on your machines.

$ sudo docker version --format '{{.Server.Version}}'
18.09.2

Add the rke user to the docker group.

$ sudo usermod -aG docker rke
$ id rke
uid=1000(rke) gid=1000(rke) groups=1000(rke),994(docker)

Step 6: Open Ports in the Firewall

  • For single-node installations, you only need to open the ports required for Rancher to communicate with the downstream user clusters.
  • For high-availability installations, the same ports need to be opened, plus additional ports required to set up the Kubernetes cluster that Rancher is installed on.

Check all the ports in use on the requirements page.

Firewalld TCP ports:

for i in 22 80 443 179 5473 6443 8472 2376 2379-2380 9099 10250 10251 10252 10254 30000-32767; do
    sudo firewall-cmd --add-port=${i}/tcp --permanent
done
sudo firewall-cmd --reload

Firewalld UDP ports:

for i in 8285 8472 4789 30000-32767; do
   sudo firewall-cmd --add-port=${i}/udp --permanent
done
sudo firewall-cmd --reload

Step 7: Allow SSH TCP Forwarding

You need to enable TCP forwarding system-wide for the SSH server.

Open the sshd configuration file located at /etc/ssh/sshd_config:

$ sudo vi /etc/ssh/sshd_config
AllowTcpForwarding yes

Restart the ssh service after making the change.

--- CentOS ---
$ sudo systemctl restart sshd

--- Ubuntu ---
$ sudo systemctl restart ssh

Step 8: Generate the RKE Cluster Configuration File

RKE uses a cluster configuration file called cluster.yml to determine how Kubernetes will be deployed on the nodes of the cluster.

There are many configuration options that can be set in cluster.yml. The file can be created from a minimal example template or generated with the rke config command.

Run rke config to create a new cluster.yml in the current directory.

rke config --name cluster.yml

The command prompts for all the information needed to build the cluster.

To create an empty template cluster.yml file instead, specify the --empty flag.

rke config --empty --name cluster.yml
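Instead of answering the interactive prompts, you can also hand-write a minimal cluster.yml. A sketch (the addresses and the rke user mirror this guide's examples; every option not listed falls back to the RKE defaults):

```shell
# Write a minimal cluster.yml: one combined-role node and one worker.
cat > cluster-minimal.yml <<'EOF'
nodes:
  - address: 10.10.1.10
    user: rke
    role: [controlplane, etcd, worker]
  - address: 10.10.1.13
    user: rke
    role: [worker]
cluster_name: rke
ssh_key_path: ~/.ssh/id_rsa
EOF
echo "defined $(grep -c 'address:' cluster-minimal.yml) nodes"
```

You can then run `rke up --config cluster-minimal.yml` against it, or keep extending it with the options shown in the full reference file below.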

Here is what my cluster configuration file looks like – do not copy-paste it; use it only as a reference when creating your own configuration.

# https://rancher.com/docs/rke/latest/en/config-options/
nodes:
- address: 10.10.1.10
  internal_address:
  hostname_override: rke-master-01
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.11
  internal_address:
  hostname_override: rke-master-02
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.12
  internal_address:
  hostname_override: rke-master-03
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.13
  internal_address:
  hostname_override: rke-worker-01
  role: [worker]
  user: rke
- address: 10.10.1.114
  internal_address:
  hostname_override: rke-worker-02
  role: [worker]
  user: rke

# using a local ssh agent 
# Using SSH private key with a passphrase - eval `ssh-agent -s` && ssh-add
ssh_agent_auth: true

#  SSH key that access all hosts in your cluster
ssh_key_path: ~/.ssh/id_rsa

# By default, the name of your cluster will be local
# Set different Cluster name
cluster_name: rke

# Fail for Docker version not supported by Kubernetes
ignore_docker_version: false

# prefix_path: /opt/custom_path

# Set kubernetes version to install: https://rancher.com/docs/rke/latest/en/upgrades/#listing-supported-kubernetes-versions
# Check with -> rke config --list-version --all
kubernetes_version:

# Etcd snapshots
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h

  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false

  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16

  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.43.0.10
    # Fail if swap is on
    fail_swap_on: false
    # Set max pods to 150 instead of default 110
    extra_args:
      max-pods: 150

# Configure network plug-ins
# RKE provides the following network plug-ins that are deployed as add-ons: flannel, calico, weave, and canal
# After you launch the cluster, you cannot change your network provider.
# Setting the network plug-in
network:
    plugin: canal
    options:
      canal_flannel_backend_type: vxlan

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns

# Currently, only authentication strategy supported is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
  strategy: x509
  sans:
    - "k8s.computingforgeeks.com"

# Set Authorization mechanism
authorization:
    # Use `mode: none` to disable authorization
    mode: rbac

# Currently only nginx ingress provider is supported.
# To disable ingress controller, set `provider: none`
# `node_selector` controls ingress placement and is optional
ingress:
  provider: nginx
  options:
     use-forwarded-headers: "true"

In my configuration the master nodes have only the controlplane and etcd roles. However, you can also make them schedulable for pods by adding the worker role:

role: [controlplane, etcd, worker]

Step 9: Deploy the Kubernetes Cluster with RKE

Once the cluster.yml file has been created, deploying the cluster takes a single command.

rke up

The command assumes the cluster.yml file is in the directory where you run it. To use a different file name, specify it:

$ rke up --config ./rancher_cluster.yml

If you are using an SSH private key with a passphrase, load it into an agent first – eval `ssh-agent -s` && ssh-add

Confirm that the installer does not show any errors in its output.

......
INFO[0181] [sync] Syncing nodes Labels and Taints       
INFO[0182] [sync] Successfully synced nodes Labels and Taints 
INFO[0182] [network] Setting up network plugin: canal   
INFO[0182] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0183] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0183] [addons] Executing deploy job rke-network-plugin 
INFO[0189] [addons] Setting up coredns                  
INFO[0189] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0189] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0189] [addons] Executing deploy job rke-coredns-addon 
INFO[0195] [addons] CoreDNS deployed successfully..     
INFO[0195] [dns] DNS provider coredns deployed successfully 
INFO[0195] [addons] Setting up Metrics Server           
INFO[0195] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0196] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0196] [addons] Executing deploy job rke-metrics-addon 
INFO[0202] [addons] Metrics Server deployed successfully 
INFO[0202] [ingress] Setting up nginx ingress controller 
INFO[0202] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0202] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0202] [addons] Executing deploy job rke-ingress-controller 
INFO[0208] [ingress] ingress controller nginx deployed successfully 
INFO[0208] [addons] Setting up user addons              
INFO[0208] [addons] no user addons defined              
INFO[0208] Finished building Kubernetes cluster successfully

Step 10: Access Your Kubernetes Cluster

As part of the Kubernetes creation process, a kubeconfig file is created and written to kube_config_cluster.yml.

Point the KUBECONFIG variable at the generated file.

export KUBECONFIG=./kube_config_cluster.yml
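The export only lasts for the current shell session. To make it persistent, append it to your shell profile; a sketch assuming bash and that you run it from the directory where rke up was executed:

```shell
# Persist the KUBECONFIG path for future shells; $PWD resolves the relative
# path to an absolute one so new shells find the file from any directory.
echo "export KUBECONFIG=$PWD/kube_config_cluster.yml" >> "$HOME/.bashrc"
tail -n 1 "$HOME/.bashrc"
```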

List the nodes in the cluster.

$ kubectl get nodes        
NAME             STATUS   ROLES               AGE     VERSION
rke-master-01    Ready    controlplane,etcd   16m     v1.17.0
rke-master-02    Ready    controlplane,etcd   16m     v1.17.0
rke-master-03    Ready    controlplane,etcd   16m     v1.17.0
rke-worker-01    Ready    worker              6m33s   v1.17.0
rke-worker-02    Ready    worker              16m     v1.17.0

You can copy this file to $HOME/.kube/config if you have no other Kubernetes clusters configured.

mkdir ~/.kube
cp kube_config_cluster.yml ~/.kube/config

The guide below will take you through the installation of Rancher, an open-source multi-cluster orchestration platform that makes it easy to manage and secure enterprise Kubernetes clusters.
