Setup GlusterFS Storage With Heketi on CentOS 8 / CentOS 7

In this guide you will learn how to install and configure GlusterFS storage with Heketi on CentOS 8 / CentOS 7. GlusterFS is a software-defined, scale-out storage solution designed to provide affordable and flexible storage for unstructured data. With GlusterFS you can improve availability, performance, and data manageability while consolidating your infrastructure and data storage.

GlusterFS storage can be deployed in a private cloud, a data center, or on-premises. It runs entirely on commodity server and storage hardware, resulting in a powerful, scalable, and highly available NAS environment.

Heketi

Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS storage volumes. This makes it easy to integrate GlusterFS with cloud services such as OpenShift, OpenStack Manila, and Kubernetes for dynamic volume provisioning.

Heketi automatically determines where to place bricks across the cluster, ensuring that bricks and their replicas are placed in different failure domains.

Prerequisites

My GlusterFS setup on CentOS 8 / CentOS 7 consists of the following:

  • CentOS 8 / CentOS 7 Linux servers
  • GlusterFS 6 software release
  • Three GlusterFS servers
  • Three disks on each server (10GB each)
  • Working DNS resolution - configured in the /etc/hosts file if you don't have a DNS server
  • A user account with sudo or root access
  • Heketi installed on one of the GlusterFS nodes

Below is the content of the /etc/hosts file on each server:

$ sudo vim /etc/hosts
10.10.1.168 gluster01
10.10.1.179 gluster02
10.10.1.64  gluster03

Step 1: Update all servers

Ensure all servers that will be part of the GlusterFS storage cluster are updated.

sudo yum -y update

A reboot is recommended since there may be a kernel update.

sudo reboot

Step 2: Configure NTP time synchronization

Time must be synchronized across all GlusterFS storage servers using the Network Time Protocol (NTP) or the Chrony daemon. Refer to the following guide:

Setup time synchronization on CentOS
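If you don't yet have time synchronization in place, a minimal Chrony-based setup can be sketched as follows (assuming the chrony package and chronyd service names apply to your CentOS release):

```shell
# Install the Chrony daemon and start it at boot (run on every GlusterFS node)
sudo yum -y install chrony
sudo systemctl enable --now chronyd

# Verify the node is tracking a time source
chronyc tracking
```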

Step 3: Add the GlusterFS repository

Add the GlusterFS repository on all servers. GlusterFS 6 is the latest stable release and will be used in this setup.

CentOS 8:

sudo yum -y install wget
sudo wget -O  /etc/yum.repos.d/glusterfs-rhel8.repo https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/CentOS/glusterfs-rhel8.repo

CentOS 7:

sudo yum -y install centos-release-gluster6

After adding the repository, update the YUM index.

sudo yum makecache

Step 4: Install GlusterFS on CentOS 8 / CentOS 7

Installing GlusterFS on CentOS 8 differs slightly from CentOS 7.

Install GlusterFS on CentOS 8

Enable the PowerTools repository, then install the glusterfs-server package:

sudo dnf -y install dnf-utils
sudo yum-config-manager --enable PowerTools
sudo dnf -y install glusterfs-server 

Install GlusterFS on CentOS 7

To install the latest GlusterFS on CentOS 7, run the following on all nodes:

sudo yum -y install glusterfs-server 

Check the version of the installed package.

$ rpm -qi glusterfs-server 
Name        : glusterfs-server
Version     : 6.5
Release     : 2.el8
Architecture: x86_64
Install Date: Tue 29 Oct 2019 06:58:16 PM EAT
Group       : Unspecified
Size        : 6560178
License     : GPLv2 or LGPLv3+
Signature   : RSA/SHA256, Wed 28 Aug 2019 03:39:40 PM EAT, Key ID 43607f0dc2f8238c
Source RPM  : glusterfs-6.5-2.el8.src.rpm
Build Date  : Wed 28 Aug 2019 03:27:19 PM EAT
Build Host  : buildhw-09.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         : http://docs.gluster.org/
Bug URL     : https://bugz.fedoraproject.org/glusterfs
Summary     : Distributed file-system server

You can also check the version with the gluster command.

$ gluster --version 
glusterfs 6.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
$ glusterfsd --version

Step 5: Start the GlusterFS service on CentOS 8 / CentOS 7

After installing GlusterFS on CentOS 8 / CentOS 7, start and enable the service.

sudo systemctl enable --now glusterd.service

Load all the kernel modules required by Heketi.

for i in dm_snapshot dm_mirror dm_thin_pool; do
  sudo modprobe $i
done
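The modprobe loop above only loads the modules for the current boot. As a sketch (the heketi.conf filename is my own choice), you can persist them with a modules-load.d file so they are loaded automatically after a reboot:

```shell
# Persist the device-mapper modules required by Heketi across reboots
cat <<'EOF' | sudo tee /etc/modules-load.d/heketi.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF
```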

If you have an active firewalld service, allow the ports used by GlusterFS.

sudo firewall-cmd --add-service=glusterfs --permanent 
sudo firewall-cmd --reload 

Check the service status on all nodes.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:08 EAT; 3min 1s ago
     Docs: man:glusterd(8)
 Main PID: 32027 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.9M
   CGroup: /system.slice/glusterd.service
           └─32027 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:08 gluster01.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:08 gluster01.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
 ● glusterd.service - GlusterFS, a clustered file-system server
    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2019-10-29 19:10:13 EAT; 3min 51s ago
      Docs: man:glusterd(8)
  Main PID: 3706 (glusterd)
     Tasks: 9 (limit: 11512)
    Memory: 3.8M
    CGroup: /system.slice/glusterd.service
            └─3706 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
 Oct 29 19:10:13 gluster02.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server…
 Oct 29 19:10:13 gluster02.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
 ● glusterd.service - GlusterFS, a clustered file-system server
    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2019-10-29 19:10:15 EAT; 4min 24s ago
      Docs: man:glusterd(8)
  Main PID: 3716 (glusterd)
     Tasks: 9 (limit: 11512)
    Memory: 3.8M
    CGroup: /system.slice/glusterd.service
            └─3716 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
 Oct 29 19:10:15 gluster03.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server…
 Oct 29 19:10:15 gluster03.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

Probe the other nodes in the cluster:

[root@gluster01 ~]# gluster peer probe gluster02
peer probe: success. 

[root@gluster01 ~]# gluster peer probe gluster03
peer probe: success. 

[root@gluster01 ~]# gluster peer status
Number of Peers: 2

Hostname: gluster02
Uuid: ebfdf84f-3d66-4f98-93df-a6442b5466ed
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 98547ab1-9565-4f71-928c-8e4e13eb61c3
State: Peer in Cluster (Connected)

Step 6: Install Heketi on one of the nodes

I'm using the gluster01 node to run the Heketi service. Download the latest release of the Heketi server and client from the GitHub releases page:

curl -s https://api.github.com/repos/heketi/heketi/releases/latest \
  | grep browser_download_url \
  | grep linux.amd64 \
  | cut -d '"' -f 4 \
  | wget -qi -

Extract the downloaded heketi archives.

for i in `ls | grep heketi | grep .tar.gz`; do tar xvf $i; done

Copy the heketi and heketi-cli binaries to /usr/local/bin.

sudo cp heketi/{heketi,heketi-cli} /usr/local/bin

Confirm they are available:

$ heketi --version
Heketi v9.0.0

$ heketi-cli --version
heketi-cli v9.0.0

Step 7: Configure the Heketi server

  • Add a heketi system user.
sudo groupadd --system heketi
sudo useradd -s /sbin/nologin --system -g heketi heketi
  • Create the heketi configuration and data paths.
sudo mkdir -p /var/lib/heketi /etc/heketi /var/log/heketi
  • Copy the heketi configuration file to the /etc/heketi directory.
sudo cp heketi/heketi.json /etc/heketi
  • Edit the Heketi configuration file.
sudo vim /etc/heketi/heketi.json

Set the service port.

"port": "8080"

Set the admin and user secrets.

"_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "ivd7dfORN7QNeKVO"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "gZPgdZ8NtBNj6jfp"
    }
  },
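The admin and user keys shown above are only example values. If you want your own random secrets, one way (an illustration, not the only option) is to generate them with openssl:

```shell
# Generate two random 16-byte secrets, hex encoded (32 characters each)
admin_key=$(openssl rand -hex 16)
user_key=$(openssl rand -hex 16)
echo "admin key: $admin_key"
echo "user key:  $user_key"
```

Paste the generated values into the "key" fields of the jwt section.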

Configure the glusterfs executor.

"_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab",
......
},

If you are using a non-root user, make sure it has passwordless sudo privilege escalation.

Ensure the database path is set correctly.

"_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
},

Below is the complete modified configuration file.

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

	"_enable_tls_comment": "Enable TLS in Heketi Server",
	"enable_tls": false,

	"_cert_file_comment": "Path to a valid certificate file",
	"cert_file": "",

	"_key_file_comment": "Path to a valid private key file",
	"key_file": "",


  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "ivd7dfORN7QNeKVO"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "gZPgdZ8NtBNj6jfp"
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false,

  "_profiling": "Enable go/pprof profiling on the /debug/pprof endpoints.",
  "profiling": false,

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "mock",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "cloud-user",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

     "_refresh_time_monitor_gluster_nodes": "Refresh time in seconds to monitor Gluster nodes",
    "refresh_time_monitor_gluster_nodes": 120,

    "_start_time_monitor_gluster_nodes": "Start time in seconds to monitor Gluster nodes when the heketi comes up",
    "start_time_monitor_gluster_nodes": 10,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug",

    "_auto_create_block_hosting_volume": "Creates Block Hosting volumes automatically if not found or exsisting volume exhausted",
    "auto_create_block_hosting_volume": true,

    "_block_hosting_volume_size": "New block hosting volume will be created in size mentioned, This is considered only if auto-create is enabled.",
    "block_hosting_volume_size": 500,

    "_block_hosting_volume_options": "New block hosting volume will be created with the following set of options. Removing the group gluster-block option is NOT recommended. Additional options can be added next to it separated by a comma.",
    "block_hosting_volume_options": "group gluster-block",

    "_pre_request_volume_options": "Volume options that will be applied for all volumes created. Can be overridden by volume options in volume create request.",
    "pre_request_volume_options": "",

    "_post_request_volume_options": "Volume options that will be applied for all volumes created. To be used to override volume options in volume create request.",
    "post_request_volume_options": ""
  }
}
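Heketi will fail to start if heketi.json contains a syntax error, so after editing it is worth validating the file. One quick check, assuming python3 is installed, uses Python's built-in JSON tool:

```shell
# Prints a parse error and exits non-zero if the file is not valid JSON
python3 -m json.tool /etc/heketi/heketi.json > /dev/null && echo "heketi.json is valid JSON"
```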
  • Generate a Heketi SSH key.
sudo ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
sudo chown heketi:heketi /etc/heketi/heketi_key*
  • Copy the generated public key to all GlusterFS nodes.
for i in gluster01 gluster02 gluster03; do
  ssh-copy-id -i /etc/heketi/heketi_key.pub root@$i
done

Alternatively, you can append the contents of /etc/heketi/heketi_key.pub to ~/.ssh/authorized_keys on each server.

Verify that you can access the GlusterFS nodes with the Heketi private key.

$ ssh -i /etc/heketi/heketi_key root@gluster02
The authenticity of host 'gluster02 (10.10.1.179)' can't be established.
ECDSA key fingerprint is SHA256:GXNdsSxmp2O104rPB4RmYsH73nTa5U10cw3LG22sANc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gluster02,10.10.1.179' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Tue Oct 29 20:11:32 2019 from 10.10.1.168
[root@gluster02 ~]# 

  • Create a Heketi systemd unit file.

$ sudo vim /etc/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=heketi
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

Download the sample environment file for Heketi.

sudo wget -O /etc/heketi/heketi.env https://raw.githubusercontent.com/heketi/heketi/master/extras/systemd/heketi.env
  • Set the correct ownership on all directories.
sudo chown -R heketi:heketi /var/lib/heketi /var/log/heketi /etc/heketi
  • Start the Heketi service.

Set SELinux to permissive mode.

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Then reload systemd and start the Heketi service.

sudo systemctl daemon-reload
sudo systemctl enable --now heketi

Verify that the service is running.

$ systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 20:29:23 EAT; 4s ago
 Main PID: 2166 (heketi)
    Tasks: 5 (limit: 11512)
   Memory: 8.7M
   CGroup: /system.slice/heketi.service
           └─2166 /usr/local/bin/heketi --config=/etc/heketi/heketi.json

Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Heketi v9.0.0
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Loaded mock executor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Volumes per cluster limit is set to default value of 1000
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: Auto Create Block Hosting Volume set to true
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume size 500 GB
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume Options: group gluster-block
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 GlusterFS Application Loaded
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started background pending operations cleaner
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started Node Health Cache Monitor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Listening on port 8080
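With the service listening on port 8080, you can also confirm the REST API responds by querying Heketi's /hello endpoint, which returns a short greeting when the server is healthy:

```shell
# Query the Heketi health endpoint (assumes Heketi listens on localhost:8080)
curl -s http://localhost:8080/hello; echo
```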

Step 8: Create the Heketi topology file

I created an Ansible playbook to generate and update the topology file, since editing the JSON file by hand is a tedious task. This also makes scaling easier.

You need a local installation of Ansible - refer to the Ansible installation documentation.

For CentOS:

sudo yum -y install epel-release
sudo yum -y install ansible

For Ubuntu:

sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

After installing Ansible, create a folder structure for the project.

mkdir -p ~/projects/ansible/roles/heketi/{tasks,templates,defaults}

Create the Heketi topology Jinja2 template:

$ vim ~/projects/ansible/roles/heketi/templates/topology.json.j2
{
  "clusters": [
    {
      "nodes": [
      {% if gluster_servers is defined and gluster_servers is iterable %}
      {% for item in gluster_servers %}
        {
          "node": {
            "hostnames": {
              "manage": [
                "{{ item.servername }}"
              ],
              "storage": [
                "{{ item.serverip }}"
              ]
            },
            "zone": {{ item.zone }}
          },
          "devices": [
            "{{ item.disks | list | join('","') }}"
          ]
        }{% if not loop.last %},{% endif %}
    {% endfor %}
    {% endif %}
      ]
    }
  ]
}

Variable definitions - set values that match your environment.

$ vim ~/projects/ansible/roles/heketi/defaults/main.yml
---
# GlusterFS nodes
gluster_servers:
  - servername: gluster01
    serverip: 10.10.1.168
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster02
    serverip: 10.10.1.179
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster03
    serverip: 10.10.1.64
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde

Create an Ansible task:

$ vim ~/projects/ansible/roles/heketi/tasks/main.yml
---
- name: Copy heketi topology file
  template:
    src: topology.json.j2
    dest: /etc/heketi/topology.json

- name: Set proper file ownership
  file:
   path:  /etc/heketi/topology.json
   owner: heketi
   group: heketi

Create the playbook and inventory file:

$ vim ~/projects/ansible/heketi.yml
---
- name: Generate Heketi topology file and copy to Heketi Server
  hosts: gluster01
  become: yes
  become_method: sudo
  roles:
    - heketi

$ vim ~/projects/ansible/hosts
gluster01

This is how everything looks:

$ cd ~/projects/ansible/
$ tree
.
├── heketi.yml
├── hosts
└── roles
    └── heketi
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── topology.json.j2

5 directories, 5 files

Run the playbook:

$ cd ~/projects/ansible
$ ansible-playbook -i hosts --user myuser --ask-pass --ask-become-pass heketi.yml

# Key based and Passwordless sudo / root, use:
$ ansible-playbook -i hosts --user myuser heketi.yml


Check the contents of the generated topology file.

$ cat /etc/heketi/topology.json 
{
  "clusters": [
    {
      "nodes": [
                    {
          "node": {
            "hostnames": {
              "manage": [
                "gluster01"
              ],
              "storage": [
                "10.10.1.168"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },            {
          "node": {
            "hostnames": {
              "manage": [
                "gluster02"
              ],
              "storage": [
                "10.10.1.179"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },            {
          "node": {
            "hostnames": {
              "manage": [
                "gluster03"
              ],
              "storage": [
                "10.10.1.64"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        }              ]
    }
  ]
}

Step 9: Load the Heketi topology file

If everything looks good, load the topology file.

# heketi-cli topology load --user admin --secret heketi_admin_secret --json=/etc/heketi/topology.json

For my configuration, I ran:

 # heketi-cli topology load --user admin --secret ivd7dfORN7QNeKVO --json=/etc/heketi/topology.json
Creating cluster ... ID: dda582cc3bd943421d57f4e78585a5a9
	Allowing file volumes on cluster.
	Allowing block volumes on cluster.
	Creating node gluster01 ... ID: 0c349dcaec068d7a78334deaef5cbb9a
		Adding device /dev/vdc ... OK
		Adding device /dev/vdd ... OK
		Adding device /dev/vde ... OK
	Creating node gluster02 ... ID: 48d7274f325f3d59a3a6df80771d5aed
		Adding device /dev/vdc ... OK
		Adding device /dev/vdd ... OK
		Adding device /dev/vde ... OK
	Creating node gluster03 ... ID: 4d6a24b992d5fe53ed78011e0ab76ead
		Adding device /dev/vdc ... OK
		Adding device /dev/vdd ... OK
		Adding device /dev/vde ... OK

The same output is shown in the screenshot below.


Step 10: Verify the GlusterFS / Heketi setup

Add the Heketi access credentials to your ~/.bashrc file.

$ vim ~/.bashrc
export HEKETI_CLI_SERVER=http://heketiserverip:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY="AdminPass"

Source the file.

source ~/.bashrc

With the topology loaded, run the following command to list the clusters.

# heketi-cli cluster list
Clusters:
Id:dda582cc3bd943421d57f4e78585a5a9 [file][block]

List the available nodes in the cluster.

# heketi-cli node list
Id:0c349dcaec068d7a78334deaef5cbb9a	Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:48d7274f325f3d59a3a6df80771d5aed	Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:4d6a24b992d5fe53ed78011e0ab76ead	Cluster:dda582cc3bd943421d57f4e78585a5a9

Run the following command to see the details of a specific node:

# heketi-cli node info ID
# heketi-cli node info 0c349dcaec068d7a78334deaef5cbb9a

Node Id: 0c349dcaec068d7a78334deaef5cbb9a
State: online
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Zone: 1
Management Hostname: gluster01
Storage Hostname: 10.10.1.168
Devices:
Id:0f26bd867f2bd8bc126ff3193b3611dc   Name:/dev/vdd            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):10      Bricks:0       
Id:29c34e25bb30db68d70e5fd3afd795ec   Name:/dev/vdc            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):10      Bricks:0       
Id:feb55e58d07421c422a088576b42e5ff   Name:/dev/vde            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):10      Bricks:0   

Create a Gluster volume to confirm that Heketi and GlusterFS are working.

# heketi-cli volume create --size=1
Name: vol_7e071706e1c22052e5121c29966c3803
Size: 1
Volume Id: 7e071706e1c22052e5121c29966c3803
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Mount: 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803
Mount Options: backup-volfile-servers=10.10.1.179,10.10.1.64
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

# heketi-cli volume list
Id:7e071706e1c22052e5121c29966c3803    Cluster:dda582cc3bd943421d57f4e78585a5a9    Name:vol_7e071706e1c22052e5121c29966c3803
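To confirm the volume is actually usable, you can mount it from any client that has the GlusterFS fuse client installed, using the Mount target printed by the volume create command above (a sketch; substitute your own volume name and mount point):

```shell
# Mount the Heketi-created volume over the GlusterFS native protocol
sudo yum -y install glusterfs-fuse
sudo mkdir -p /mnt/glustervol
sudo mount -t glusterfs 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803 /mnt/glustervol
df -h /mnt/glustervol
```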

To view the topology, run:

heketi-cli topology info

The gluster command can also be used to check the servers in the cluster.

gluster pool list

For Kubernetes integration, check out:

How to configure Kubernetes dynamic volume provisioning with Heketi and GlusterFS

Your GlusterFS and Heketi setup is now ready for use. The guide above explains how to configure dynamic provisioning of Kubernetes and OpenShift persistent volumes with Heketi and GlusterFS.

See also:

  • GlusterFS documentation
  • Heketi documentation

