Install Ceph 15 (Octopus) Storage Cluster on Ubuntu 20.04

I promised to put together an article on installing a Ceph Storage Cluster on Ubuntu 20.04 Linux servers, and here it is. Ceph is a software-defined storage solution designed to build distributed storage clusters on commodity hardware. The requirements for building a Ceph storage cluster on Ubuntu 20.04 depend largely on your intended use case.

This setup is not intended for running mission-critical, write-intensive production workloads. For such requirements, consult the official project documentation, especially around networking and storage hardware. Below are the standard Ceph components that make up this installation guide:

  • Ceph MON – Monitor server
  • Ceph MDS – Metadata server
  • Ceph MGR – Ceph manager daemon
  • Ceph OSD – Object storage daemon

Installing a Ceph Storage Cluster on Ubuntu 20.04

Before you begin deploying a Ceph Storage Cluster on Ubuntu 20.04 Linux servers, prepare the servers you need. Below is a summary of the servers I prepared for the installation.

As you can see, my lab has the following server names and IP addresses:

Server Hostname    Server IP Address    Ceph Components       Server Specs
ceph-mon-01        172.16.20.10         Ceph MON, MGR, MDS    8GB RAM, 4vCPUs
ceph-mon-02        172.16.20.11         Ceph MON, MGR, MDS    8GB RAM, 4vCPUs
ceph-mon-03        172.16.20.12         Ceph MON, MGR, MDS    8GB RAM, 4vCPUs
ceph-osd-01        172.16.20.13         Ceph OSD              16GB RAM, 8vCPUs
ceph-osd-02        172.16.20.14         Ceph OSD              16GB RAM, 8vCPUs
ceph-osd-03        172.16.20.15         Ceph OSD              16GB RAM, 8vCPUs

Step 1: Prepare the first Monitor node

The deployment tool used is Cephadm. Cephadm deploys and manages a Ceph cluster by connecting to hosts from the manager daemon via SSH to add, remove, or update Ceph daemon containers.

Log in to the first Monitor node:

$ ssh root@ceph-mon-01
Warning: Permanently added 'ceph-mon-01,172.16.20.10' (ECDSA) to the list of known hosts.
Enter passphrase for key '/var/home/jkmutai/.ssh/id_rsa': 
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Tue Jun  2 20:36:36 2020 from 172.16.20.10
root@ceph-mon-01:~#

Update the /etc/hosts file with entries for all the IP addresses and hostnames:

# vim /etc/hosts

127.0.0.1 localhost

# Ceph nodes
172.16.20.10  ceph-mon-01
172.16.20.11  ceph-mon-02
172.16.20.12  ceph-mon-03
172.16.20.13  ceph-osd-01
172.16.20.14  ceph-osd-02
172.16.20.15  ceph-osd-03

Update and upgrade the OS:

sudo apt update && sudo apt -y upgrade
sudo systemctl reboot

Install Ansible and other basic utilities:

sudo apt update
sudo apt -y install software-properties-common git curl vim bash-completion ansible

Confirm that Ansible is installed:

$ ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]

Ensure /usr/local/bin is in your PATH:

echo "PATH=$PATH:/usr/local/bin" >>~/.bashrc
source ~/.bashrc

Check the current PATH:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin

Generate SSH keys:

$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:3gGoZCVsA6jbnBuMIpnJilCiblaM9qc5Xk38V7lfJ6U root@ceph-mon-01
The key's randomart image is:
+---[RSA 4096]----+
| ..o. . |
|. +o . |
|. .o.. . |
|o .o .. . . |
|o%o.. oS . o .|
|@+*o o... .. .o |
|O oo . ..... .E o|
|o+.oo. . ..o|
|o .++ . |
+----[SHA256]-----+

Install Cephadm:

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
sudo mv cephadm  /usr/local/bin/

Confirm that cephadm is available locally:

$ cephadm --help

Step 2: Update all Ceph nodes and push the SSH public key

With the first Mon node configured, create an Ansible playbook that updates all nodes, pushes the SSH public key, and updates /etc/hosts on all nodes.

cd ~/
vim prepare-ceph-nodes.yml

Paste the contents below into the file, setting your correct timezone. Besides system updates and Docker, the playbook creates a cephadmin user with passwordless sudo and pushes your SSH key; the bodies of the user-setup tasks are a reconstruction matching the task names in the run output further below.

---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  vars:
    ceph_admin_user: cephadmin
  tasks:
    - name: Set timezone
      timezone:
        name: Africa/Nairobi

    - name: Update system
      apt:
        name: "*"
        state: latest
        update_cache: yes

    - name: Install common packages
      apt:
        name: [vim,git,bash-completion,wget,curl,chrony]
        state: present
        update_cache: yes

    # Ceph admin user setup (task bodies reconstructed from the run output)
    - name: Add ceph admin user
      user:
        name: "{{ ceph_admin_user }}"
        state: present
        shell: /bin/bash

    - name: Create sudo file
      file:
        path: "/etc/sudoers.d/{{ ceph_admin_user }}"
        state: touch
        mode: '0440'

    - name: Give ceph admin user passwordless sudo
      lineinfile:
        path: "/etc/sudoers.d/{{ ceph_admin_user }}"
        line: "{{ ceph_admin_user }} ALL=(ALL) NOPASSWD: ALL"

    - name: Set authorized key taken from file to ceph admin
      authorized_key:
        user: "{{ ceph_admin_user }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Set authorized key taken from file to root user
      authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Install Docker
      shell: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker-ce.list
        apt update
        apt install -qq -y docker-ce docker-ce-cli containerd.io

    - name: Reboot server after update and configs
      reboot:

Create an inventory file:

$ vim hosts
[ceph_nodes]
ceph-mon-01
ceph-mon-02
ceph-mon-03
ceph-osd-01
ceph-osd-02
ceph-osd-03
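
Optionally, confirm that Ansible can reach every node before running the playbook. A quick ad-hoc ping, assuming root SSH access with your key:

$ ansible -i hosts ceph_nodes -m ping --user root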

If your SSH private key has a passphrase, add it to the ssh-agent:

$ eval `ssh-agent -s` && ssh-add ~/.ssh/id_rsa_jkmutai
Agent pid 3275
Enter passphrase for /root/.ssh/id_rsa_jkmutai: 
Identity added: /root/.ssh/id_rsa_jkmutai (/root/.ssh/id_rsa_jkmutai)

Configure SSH. The heredoc below is an example (the original was truncated): it disables strict host key checking to smooth the first Ansible run, and IdentityFile should point at your own key.

tee -a ~/.ssh/config<<EOF
Host ceph-mon-* ceph-osd-*
    IdentityFile ~/.ssh/id_rsa_jkmutai
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

Run the playbook:

# As root user with  default ssh key:
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user root

# As root user with password:
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user root --ask-pass

# As sudo user with password - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-pass --ask-become-pass

# As sudo user with ssh key and sudo password - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-become-pass

# As sudo user with ssh key and passwordless sudo - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu

# As sudo or root user with custom key
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key /path/to/private/key 

In my case, I ran:

$ ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jkmutai

Execution output:

$ ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jkmutai

PLAY [Prepare ceph nodes] ******************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-mon-01]
ok: [ceph-osd-01]
ok: [ceph-osd-02]
ok: [ceph-osd-03]

TASK [Update system] ***********************************************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-osd-03]

TASK [Install common packages] *************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Add ceph admin user] *****************************************************************************************************************************
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-03]

TASK [Create sudo file] ********************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Give ceph admin user passwordless sudo] **********************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Set authorized key taken from file to ceph admin] ************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-osd-03]

TASK [Set authorized key taken from file to root user] *************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-osd-03]

TASK [Install Docker] **********************************************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Reboot server after update and configs] **********************************************************************************************************
changed: [ceph-osd-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

PLAY RECAP *********************************************************************************************************************************************
ceph-mon-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Test SSH access as the Ceph admin user created on the nodes:

$ ssh cephadmin@ceph-mon-02
Warning: Permanently added 'ceph-mon-02,172.16.20.11' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-28-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

cephadmin@ceph-mon-02:~$ sudo su -
root@ceph-mon-02:~# logout
cephadmin@ceph-mon-02:~$ exit
logout
Connection to ceph-mon-02 closed.

Configure /etc/hosts

If you do not have a DNS server configured for the cluster hostnames, update /etc/hosts on all the nodes.

The playbook for this change:

$ vim update-hosts.yml
---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  tasks:
    - name: Clean /etc/hosts file
      copy:
        content: ""
        dest: /etc/hosts

    - name: Update /etc/hosts file
      blockinfile:
        path: /etc/hosts
        block: |
           127.0.0.1     localhost
           172.16.20.10  ceph-mon-01
           172.16.20.11  ceph-mon-02
           172.16.20.12  ceph-mon-03
           172.16.20.13  ceph-osd-01
           172.16.20.14  ceph-osd-02
           172.16.20.15  ceph-osd-03

Playbook execution:

$ ansible-playbook -i hosts update-hosts.yml --private-key ~/.ssh/id_rsa_jkmutai

PLAY [Prepare ceph nodes] ******************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************
ok: [ceph-mon-01]
ok: [ceph-osd-02]
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-osd-01]
ok: [ceph-osd-03]

TASK [Clean /etc/hosts file] ***************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Update /etc/hosts file] **************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

PLAY RECAP *********************************************************************************************************************************************
ceph-mon-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Confirm the change on one of the nodes:

$ ssh cephadmin@ceph-osd-01
$ cat /etc/hosts
# BEGIN ANSIBLE MANAGED BLOCK
127.0.0.1      localhost
172.16.20.10   ceph-mon-01
172.16.20.11   ceph-mon-02
172.16.20.12   ceph-mon-03
172.16.20.13   ceph-osd-01
172.16.20.14   ceph-osd-02
172.16.20.15   ceph-osd-03
# END ANSIBLE MANAGED BLOCK

Step 3: Deploy Ceph 15 (Octopus) Storage Cluster on Ubuntu 20.04

To bootstrap a new Ceph cluster on Ubuntu 20.04, you need the address of the first monitor node (IP or hostname). Replace the dashboard password placeholder with your own strong password.

sudo mkdir -p /etc/ceph
cephadm bootstrap \
  --mon-ip ceph-mon-01 \
  --initial-dashboard-user admin \
  --initial-dashboard-password <StrongAdminPassword>

Execution output:

INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 8dbf2eda-a513-11ea-a3c1-a534e03850ee
INFO:cephadm:Verifying IP 172.16.20.10 port 3300 ...
INFO:cephadm:Verifying IP 172.16.20.10 port 6789 ...
INFO:cephadm:Mon IP 172.16.20.10 is in CIDR network 172.31.1.1
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr not available, waiting (4/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph-mon-01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

	     URL: https://ceph-mon-01:8443/
	    User: admin
	Password: <StrongAdminPassword>

INFO:cephadm:You can access the Ceph CLI with:

	sudo /usr/local/bin/cephadm shell --fsid 8dbf2eda-a513-11ea-a3c1-a534e03850ee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.
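
At this point you can already talk to the cluster through the containerized shell printed above, even before installing ceph-common on the host. For example:

sudo /usr/local/bin/cephadm shell -- ceph -s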

Install the Ceph tools:

cephadm add-repo --release octopus
cephadm install ceph-common
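
Confirm the ceph CLI now works directly on the host:

$ ceph -v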

Add the remaining monitors as needed:

--- Copy Ceph SSH key ---
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-03

--- Label the nodes with mon ---
ceph orch host label add ceph-mon-01 mon
ceph orch host label add ceph-mon-02 mon
ceph orch host label add ceph-mon-03 mon

--- Add nodes to the cluster ---
ceph orch host add ceph-mon-02
ceph orch host add ceph-mon-03

--- Apply mon placement using the label (one apply; successive applies would overwrite each other) ---
ceph orch apply mon label:mon

List hosts and labels:

# ceph orch host ls

HOST         ADDR         LABELS  STATUS  
ceph-mon-01  ceph-mon-01  mon             
ceph-mon-02  ceph-mon-02  mon             
ceph-mon-03  ceph-mon-03  mon

Running containers:

# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
7d666ae63232        prom/alertmanager          "/bin/alertmanager -…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-alertmanager.ceph-mon-01
4e7ccde697c7        prom/prometheus:latest     "/bin/prometheus --c…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-prometheus.ceph-mon-01
9fe169a3f2dc        ceph/ceph-grafana:latest   "/bin/sh -c 'grafana…"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-grafana.ceph-mon-01
c8e99deb55a4        prom/node-exporter         "/bin/node_exporter …"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-node-exporter.ceph-mon-01
277f0ef7dd9d        ceph/ceph:v15              "/usr/bin/ceph-crash…"   9 minutes ago       Up 9 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-crash.ceph-mon-01
9de7a86857aa        ceph/ceph:v15              "/usr/bin/ceph-mgr -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mgr.ceph-mon-01.qhokxo
d116bc14109c        ceph/ceph:v15              "/usr/bin/ceph-mon -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mon.ceph-mon-01

Step 4: Deploy Ceph OSDs

Install the cluster's public SSH key in the root user's authorized_keys file on each new OSD node:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-01
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-03

Tell Ceph that the new nodes are part of the cluster:

--- Add hosts to cluster ---
ceph orch host add ceph-osd-01
ceph orch host add ceph-osd-02
ceph orch host add ceph-osd-03

--- Give new nodes labels ---
ceph orch host label add ceph-osd-01 osd
ceph orch host label add ceph-osd-02 osd
ceph orch host label add ceph-osd-03 osd

List all devices on the storage nodes:

# ceph orch device ls
HOST         PATH      TYPE   SIZE  DEVICE                           AVAIL  REJECT REASONS  
ceph-mon-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-01  /dev/sdb  hdd   50.0G  HC_Volume_5680482                True                   
ceph-osd-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-02  /dev/sdb  hdd   50.0G  HC_Volume_5680484                True                   
ceph-osd-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-03  /dev/sdb  hdd   50.0G  HC_Volume_5680483                True                   
ceph-osd-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked    

A storage device is considered available if all of the following conditions are met (see the zap example after this list for reclaiming a previously used disk):

  • The device must have no partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.
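
If a disk was used before and therefore shows as unavailable, you can wipe it so cephadm can reclaim it. A hedged example using the orchestrator's zap command (this destroys all data on the device):

# ceph orch device zap ceph-osd-01 /dev/sdb --force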

Tell Ceph to consume the available and unused storage devices:

# ceph orch daemon add osd ceph-osd-01:/dev/sdb
Created osd(s) 0 on host 'ceph-osd-01'

# ceph orch daemon add osd ceph-osd-02:/dev/sdb
Created osd(s) 1 on host 'ceph-osd-02'

# ceph orch daemon add osd ceph-osd-03:/dev/sdb
Created osd(s) 2 on host 'ceph-osd-03'
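
Alternatively, cephadm can automatically create OSDs on any device that meets the availability criteria. This is a documented orchestrator feature, but note that it will consume every eligible disk on every host:

# ceph orch apply osd --all-available-devices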

Check the Ceph cluster status:

# ceph -s
  cluster:
    id:     8dbf2eda-a513-11ea-a3c1-a534e03850ee
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 23m)
    mgr: ceph-mon-01.qhokxo(active, since 22m), standbys: ceph-mon-03.rhhvzc
    osd: 3 osds: 3 up (since 36s), 3 in (since 36s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:     1 active+clean
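
To see how the OSD daemons map to hosts in the CRUSH hierarchy:

# ceph osd tree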

Step 5: Access the Ceph Dashboard

The Ceph dashboard is now available at the address of the active MGR server:

# ceph -s

The credentials from the bootstrap output look like this:

URL: https://ceph-mon-01:8443/
User: admin
Password: <StrongAdminPassword>
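
If you no longer have the bootstrap output at hand, the active dashboard URL can be retrieved from the manager:

# ceph mgr services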

Log in with your credentials to access the Ceph management dashboard.


Enjoy managing your Ceph Storage Cluster on Ubuntu 20.04 with Cephadm and containers. Upcoming articles will cover adding OSDs, removing OSDs, configuring RGW, and more. Stay tuned for updates.
