0. Revision record
Serial number | Revised content | Revision time |
---|---|---|
1 | newly added | 20210423 |
I. Summary
This article describes how to install Ceph Nautilus on CentOS 7.6 with ceph-deploy. The deployment is aimed at a production environment, with particular attention to redundancy at the network level.
II. Environmental information
(1) Hardware information
2.1.1 Server information
Host name | Brand/model | Configuration | Quantity |
---|---|---|---|
proceph01.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128G / SSD: 240G ×2, 960G ×2 / SAS: 8T 7.2K ×6 / 10G X710 ×2 / 1G PHY card ×1 / RAID card SAS3108 2GB | 1 |
proceph02.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128G / SSD: 240G ×2, 960G ×2 / SAS: 8T 7.2K ×6 / 10G X710 ×2 / 1G PHY card ×1 / RAID card SAS3108 2GB | 1 |
proceph03.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128G / SSD: 240G ×2, 960G ×2 / SAS: 8T 7.2K ×6 / 10G X710 ×2 / 1G PHY card ×1 / RAID card SAS3108 2GB | 1 |
2.1.2 Switch information
Two switches of the same configuration are stacked.
Switch name | Brand/model | Configuration | Quantity |
---|---|---|---|
A3_1F_DC_openstack_test_jieru_train-irf_b02&b03 | H3C LS-6860-54HF | 48 × 10G optical ports, 6 × 40G optical ports | 2 |
(2) Operating system
CentOS 7.6.1810, 64-bit:

    [root@localhost vlan]# cat /etc/centos-release
    CentOS Linux release 7.6.1810 (Core)
    [root@localhost vlan]#
(3) Ceph information
Ceph Nautilus, deployed with ceph-deploy 2.0.1 (see 3.3.2).
III. Implementation
(1) Deployment planning
3.1.1 Deployment network planning
Host | Physical port | NIC name | Bond mode | IP address | Switch | Switch port | Aggregation group | Port mode | VLAN | Remarks |
---|---|---|---|---|---|---|---|---|---|---|
proceph01 | 10 Gigabit optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.31 | B02.40U | 7 | BAGG7/LACP | access | 140 | API management |
proceph01 | 10 Gigabit optical port 3 | enp175s0f1 | mode4 | | B03.40U | 7 | BAGG7/LACP | access | 140 | API management |
proceph01 | 10 Gigabit optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.31 | B02.40U | 31 | BAGG31/LACP | access | 141 | Storage private network |
proceph01 | 10 Gigabit optical port 4 | enp175s0f0 | mode4 | | B03.40U | 31 | BAGG31/LACP | access | 141 | Storage private network |
proceph02 | 10 Gigabit optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.32 | B02.40U | 8 | BAGG8/LACP | access | 140 | API management |
proceph02 | 10 Gigabit optical port 3 | enp175s0f1 | mode4 | | B03.40U | 8 | BAGG8/LACP | access | 140 | API management |
proceph02 | 10 Gigabit optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.32 | B02.40U | 32 | BAGG32/LACP | access | 141 | Storage private network |
proceph02 | 10 Gigabit optical port 4 | enp175s0f0 | mode4 | | B03.40U | 32 | BAGG32/LACP | access | 141 | Storage private network |
proceph03 | 10 Gigabit optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.33 | B02.40U | 9 | BAGG9/LACP | access | 140 | API management |
proceph03 | 10 Gigabit optical port 3 | enp175s0f1 | mode4 | | B03.40U | 9 | BAGG9/LACP | access | 140 | API management |
proceph03 | 10 Gigabit optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.33 | B02.40U | 33 | BAGG33/LACP | access | 141 | Storage private network |
proceph03 | 10 Gigabit optical port 4 | enp175s0f0 | mode4 | | B03.40U | 33 | BAGG33/LACP | access | 141 | Storage private network |
3.1.2 Deployment node role planning
Host name | IP | Disks | Roles |
---|---|---|---|
proceph01.pro.kxdigit.com | 10.3.140.31 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | ceph-deploy, monitor, mgr, mds, osd |
proceph02.pro.kxdigit.com | 10.3.140.32 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |
proceph03.pro.kxdigit.com | 10.3.140.33 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |
3.1.3 RAID notes
The system disk is RAID 1. Each data disk is configured individually as a single-disk RAID 0, so with six data disks, RAID 0 is created six times.
(2) Deployment preparation (perform on all three nodes)
For the detailed steps of 3.2.1 to 3.2.5, refer to the earlier articles "Linux installs Ceph Nautilus on three physical machines" and "Linux (CentOS 7) installs Ceph with ceph-deploy".
3.2.1 Configure bond0
[Refer to this article](https://www.cnblogs.com/weiwei2021/p/14690254.html)
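For orientation, below is a minimal sketch of what the mode4 (802.3ad/LACP) bond0 configuration on proceph01 could look like, using the NIC names and address from the planning table and assuming a /24 mask to match the 10.3.140.0/24 public network. The linked article is the authoritative walkthrough; the BONDING_OPTS values (miimon, hash policy) are illustrative choices rather than values taken from it.

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.3.140.31
    PREFIX=24
    BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"

    # /etc/sysconfig/network-scripts/ifcfg-enp59s0f1  (repeat for the second slave, enp175s0f1)
    DEVICE=enp59s0f1
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes

After restarting the network, cat /proc/net/bonding/bond0 shows whether both slaves joined the LACP aggregate.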
3.2.2 Configure bond1
Same as above, using enp59s0f0 and enp175s0f0 and the bond1 address from the planning table (10.3.141.x).
3.2.3 Turn off reverse path filtering (rp_filter)
When the machine has two addresses configured, only one of them is reachable from outside unless reverse path filtering is turned off, namely the address associated with the first default route in the routing table.

    echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
    echo 0 > /proc/sys/net/ipv4/conf/bond0/rp_filter
    echo 0 > /proc/sys/net/ipv4/conf/bond1/rp_filter
To make the setting permanent:

    [root@localhost etc]# cp /etc/sysctl.conf /etc/sysctl.conf.bak.orig
    [root@localhost etc]# vim /etc/sysctl.conf
    # close dynamic route for 2 IP
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.bond0.rp_filter = 0
    net.ipv4.conf.bond1.rp_filter = 0
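To load the new values without a reboot and confirm they took effect, standard sysctl usage is enough (a quick check, not part of the original session):

    sysctl -p                                # reload /etc/sysctl.conf
    sysctl net.ipv4.conf.all.rp_filter       # each of these should print "... = 0"
    sysctl net.ipv4.conf.bond0.rp_filter
    sysctl net.ipv4.conf.bond1.rp_filter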
3.2.4 Configure DNS
Based on an Ansible playbook:

    [dev@10-3-170-32 base]$ ansible-playbook modifydns.yml
Add the following records on the DNS server (a quick resolution check is sketched below the table).
Domain name | Resolves to |
---|---|
proceph01.pro.kxdigit.com | 10.3.140.31 |
proceph02.pro.kxdigit.com | 10.3.140.32 |
proceph03.pro.kxdigit.com | 10.3.140.33 |
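To confirm that the records resolve correctly from the Ceph nodes, something like the following can be used (dig comes from the bind-utils package; getent hosts works without it):

    for h in proceph01 proceph02 proceph03; do
        dig +short ${h}.pro.kxdigit.com    # should print 10.3.140.31/32/33 respectively
    done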
3.2.5 Modify the SSH configuration
Because DNS is now configured, sshd performs DNS lookups on login by default, which makes SSH logins very slow.

    [root@localhost ssh]# cp sshd_config sshd_config.bak.orig
    [root@localhost ssh]# vim sshd_config
    [root@localhost ssh]# systemctl restart sshd
    [root@localhost ssh]#

Simply turn off the default:

    #UseDNS yes
    UseDNS no
3.2.6 Configure yum repositories
Based on an Ansible playbook.
Update the operating system repository:

    [dev@10-3-170-32 base]$ ansible-playbook updateyum.yml

Update the Ceph repository:

    [dev@10-3-170-32 base]$ ansible-playbook updatecephyum.yml
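For reference, this is a minimal sketch of the kind of Nautilus repo file such a playbook typically drops into /etc/yum.repos.d/ceph.repo; the URLs below assume the upstream download.ceph.com layout, while the actual playbook may point at an internal mirror instead:

    [ceph]
    name=Ceph x86_64 packages
    baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc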
3.2.7 Configure time synchronization
Based on an Ansible playbook:

    [dev@10-3-170-32 base]$ ansible-playbook modifychronyclient.yml
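After the playbook runs, time synchronization can be verified on each node with the standard chrony tools (not shown in the original session):

    chronyc sources -v    # lists the configured NTP sources and their reachability
    chronyc tracking      # shows the current offset from the selected source
    timedatectl           # "NTP synchronized: yes" confirms the clock is in sync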
3.2.8 Configure the hosts file
Add the following entries to the /etc/hosts file:

    10.3.140.31 proceph01
    10.3.140.32 proceph02
    10.3.140.33 proceph03
3.2.9 Turn off the firewall and SELinux

    [dev@10-3-170-32 base]$ ansible-playbook closefirewalldandselinux.yml
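The playbook name suggests it simply disables firewalld and SELinux; the manual equivalent on each node would look roughly like this (a sketch, not taken from the playbook itself):

    systemctl disable --now firewalld      # stop the firewall and keep it off after reboot
    setenforce 0                           # put SELinux into permissive mode immediately
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots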
3.2.10 Set the host names

    [root@localhost ~]# hostnamectl set-hostname proceph01.pro.kxdigit.com
    [root@localhost ~]# exit
    logout
    Connection to 10.3.140.31 closed.
    [dev@10-3-170-32 base]$ ssh root@10.3.140.32
    Last login: Fri Apr 23 16:37:32 2021 from 10.3.170.32
    [root@localhost ~]# hostnamectl set-hostname proceph02.pro.kxdigit.com
    [root@localhost ~]# exit
    logout
    Connection to 10.3.140.32 closed.
    [dev@10-3-170-32 base]$ ssh root@10.3.140.33
    Last login: Fri Apr 23 16:37:32 2021 from 10.3.170.32
    [root@localhost ~]# hostnamectl set-hostname proceph03.pro.kxdigit.com
    [root@localhost ~]# exit
3.2.11 Create the deployment user cephadmin
Create the user on all three nodes and grant it passwordless sudo.

    [root@proceph01 ~]# useradd cephadmin
    [root@proceph01 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
    Changing password for user cephadmin.
    passwd: all authentication tokens updated successfully.
    [root@proceph01 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
    cephadmin ALL = (root) NOPASSWD:ALL
    [root@proceph01 ~]# chmod 0440 /etc/sudoers.d/cephadmin
    [root@proceph01 ~]#
3.2.12 Configure passwordless login for cephadmin
The deployment node must be able to log in to all three nodes without a password. Here the deployment node is the same machine as node proceph01, so it is included in the passwordless setup as well.

    [root@proceph01 ~]# su - cephadmin
    [cephadmin@proceph01 ~]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
    Created directory '/home/cephadmin/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
    Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:/N1IGwJzKLKEEvnIqbnz4BaVMqSe2jx3SsfBaCHSDG4 cephadmin@proceph01.pro.kxdigit.com
    The key's randomart image is:
    +---[RSA 2048]----+
    |o.               |
    |o* . .           |
    |*E* = . + .      |
    |+B.= * o +       |
    |o.= + o S . o    |
    |o+ . . . . + =   |
    |o+. . o . + .    |
    |=o+....          |
    |.+.o.o           |
    +----[SHA256]-----+
    [cephadmin@proceph01 ~]$ ssh-copy-id proceph01.pro.kxdigit.com
    /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
    The authenticity of host 'proceph01.pro.kxdigit.com (10.3.140.31)' can't be established.
    ECDSA key fingerprint is SHA256:IDIkIjgVg6mimwePYirWVtNu6XN34kDpeWhcUqLn7bo.
    ECDSA key fingerprint is MD5:6a:2c:8e:d3:57:32:57:7e:10:4c:2f:84:c5:a2:5e:ab.
    Are you sure you want to continue connecting (yes/no)? yes
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    cephadmin@proceph01.pro.kxdigit.com's password:
    Number of key(s) added: 1
    Now try logging into the machine, with:   "ssh 'proceph01.pro.kxdigit.com'"
    and check to make sure that only the key(s) you wanted were added.
    [cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph01.pro.kxdigit.com
    /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
            (if you think this is a mistake, you may want to use -f option)
    [cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph02.pro.kxdigit.com
    /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
    The authenticity of host 'proceph02.pro.kxdigit.com (10.3.140.32)' can't be established.
    ECDSA key fingerprint is SHA256:0UefKLdjPASb5QOcZtvQ0P0ed1nxlwJL9tVqjalBKO8.
    ECDSA key fingerprint is MD5:15:1d:05:62:f3:1e:38:71:1a:f8:58:56:08:bf:39:b9.
    Are you sure you want to continue connecting (yes/no)? yes
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    cephadmin@proceph02.pro.kxdigit.com's password:
    Number of key(s) added: 1
    Now try logging into the machine, with:   "ssh 'cephadmin@proceph02.pro.kxdigit.com'"
    and check to make sure that only the key(s) you wanted were added.
    [cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph03.pro.kxdigit.com
    /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
    The authenticity of host 'proceph03.pro.kxdigit.com (10.3.140.33)' can't be established.
    ECDSA key fingerprint is SHA256:fkkrIhBYdiU2YixiBKQn6f8cr72F4MdlydFk7o5luNU.
    ECDSA key fingerprint is MD5:e8:9c:85:bb:01:e5:3e:d8:20:86:50:5f:5a:f2:f9:80.
    Are you sure you want to continue connecting (yes/no)? yes
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    cephadmin@proceph03.pro.kxdigit.com's password:
    Number of key(s) added: 1
    Now try logging into the machine, with:   "ssh 'cephadmin@proceph03.pro.kxdigit.com'"
    and check to make sure that only the key(s) you wanted were added.
    [cephadmin@proceph01 ~]$
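A quick sanity check that key-based login now works from the deployment node (not part of the original transcript):

    for h in proceph01 proceph02 proceph03; do
        ssh cephadmin@${h}.pro.kxdigit.com hostname   # should print each host name without asking for a password
    done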
(3) Deploy Ceph
3.3.1 Install Ceph on all nodes
Ceph must be installed on all three nodes:

    [cephadmin@proceph02 ~]$ sudo yum -y install ceph ceph-radosgw
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
3.3.2 Install ceph-deploy on the deployment node
Install ceph-deploy as the cephadmin user on the deployment node proceph01.

    [root@proceph01 ~]# su - cephadmin
    Last login: Fri Apr 23 16:59:30 CST 2021 on pts/0
    [cephadmin@proceph01 ~]$ sudo yum -y install ceph-deploy python-pip
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
    ceph                                     | 2.9 kB  00:00:00
    ceph/primary_db                          |  87 kB  00:00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package ceph-deploy.noarch 0:2.0.1-0 will be installed
    ---> Package python2-pip.noarch 0:8.1.2-12.el7 will be installed

    [cephadmin@proceph01 ~]$ ceph-deploy --version
    2.0.1
    [cephadmin@proceph01 ~]$
3.3.3 Deploy the Ceph cluster
The following operations are performed with ceph-deploy on the deployment node.
3.3.4 Create the cluster
Run as the cephadmin user on the deployment node, from the working directory /home/cephadmin/cephcluster:

    [cephadmin@proceph01 cephcluster]$ ceph-deploy new proceph01 proceph02 proceph03
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new proceph01 proceph02 proceph03
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username           : None
    [ceph_deploy.cli][INFO  ]  func               : <function new at 0x7f665c92b230>
    [ceph_deploy.cli][INFO  ]  verbose            : False
    [ceph_deploy.cli][INFO  ]  overwrite_conf     : False
    [ceph_deploy.cli][INFO  ]  quiet              : False
    [ceph_deploy.cli][INFO  ]  cd_conf            : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f665c947e18>
    [ceph_deploy.cli][INFO  ]  cluster            : ceph
    [ceph_deploy.cli][INFO  ]  ssh_copykey        : True
    [ceph_deploy.cli][INFO  ]  mon                : ['proceph01', 'proceph02', 'proceph03']
    [ceph_deploy.cli][INFO  ]  public_network     : None
    [ceph_deploy.cli][INFO  ]  ceph_conf          : None
    [ceph_deploy.cli][INFO  ]  cluster_network    : None
    [ceph_deploy.cli][INFO  ]  default_release    : False
    [ceph_deploy.cli][INFO  ]  fsid               : None
The following configuration files are generated:

    [cephadmin@proceph01 cephcluster]$ ll
    total 20
    -rw-rw-r--. 1 cephadmin cephadmin  244 Apr 23 17:44 ceph.conf
    -rw-rw-r--. 1 cephadmin cephadmin 9268 Apr 23 17:44 ceph-deploy-ceph.log
    -rw-------. 1 cephadmin cephadmin   73 Apr 23 17:44 ceph.mon.keyring
    [cephadmin@proceph01 cephcluster]$
PS:

    ceph-deploy --cluster {cluster-name} new node1 node2    # create a cluster with a custom cluster name; the default name is ceph

Modify ceph.conf to add the network configuration:

    [global]
    fsid = ad0bf159-1b6f-472b-94de-83f713c339a3
    mon_initial_members = proceph01, proceph02, proceph03
    mon_host = 10.3.140.31,10.3.140.32,10.3.140.33
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    public network = 10.3.140.0/24
    cluster network = 10.3.141.0/24
It is best to run the cluster network over the optical fiber (10G) network.
3.3.5 Initialize the cluster configuration and generate all keys
Run on the deployment node:

    [cephadmin@proceph01 cephcluster]$ ceph-deploy mon create-initial
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
The generated keyrings:

    [cephadmin@proceph01 cephcluster]$ ls -al
    total 88
    drwxrwxr-x. 2 cephadmin cephadmin   270 Apr 23 17:58 .
    drwx------. 7 cephadmin cephadmin   199 Apr 23 17:49 ..
    -rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-mds.keyring
    -rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-mgr.keyring
    -rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-osd.keyring
    -rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-rgw.keyring
    -rw-------. 1 cephadmin cephadmin   151 Apr 23 17:58 ceph.client.admin.keyring
    -rw-rw-r--. 1 cephadmin cephadmin   308 Apr 23 17:49 ceph.conf
    -rw-rw-r--. 1 cephadmin cephadmin   244 Apr 23 17:47 ceph.conf.bak.orig
    -rw-rw-r--. 1 cephadmin cephadmin 56416 Apr 23 17:58 ceph-deploy-ceph.log
    -rw-------. 1 cephadmin cephadmin    73 Apr 23 17:44 ceph.mon.keyring
    [cephadmin@proceph01 cephcluster]$
3.3.6 Distribute the configuration to all nodes

    [cephadmin@proceph01 cephcluster]$ ceph-deploy admin proceph01 proceph02 proceph03
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin proceph01 proceph02 proceph03
Switch to root and check the cluster status on each node:

    [cephadmin@proceph01 cephcluster]$ su -
    Password:
    Last login: Fri Apr 23 17:11:56 CST 2021 from 10.3.170.32 on pts/0
    Last failed login: Fri Apr 23 18:01:55 CST 2021 on pts/0
    There was 1 failed login attempt since the last successful login.
    [root@proceph01 ~]# ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 3m)
        mgr: no daemons active
        osd: 0 osds: 0 up, 0 in
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [root@proceph01 ~]#
    [root@proceph02 ~]# ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 4m)
        mgr: no daemons active
        osd: 0 osds: 0 up, 0 in
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [root@proceph02 ~]# exit
    logout
    Connection to proceph02 closed.
    [root@proceph01 ~]# exit
    logout
    [cephadmin@proceph01 cephcluster]$ ssh proceph03
    Last login: Fri Apr 23 17:56:35 2021 from 10.3.140.31
    [cephadmin@proceph03 ~]$ sudo ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5m)
        mgr: no daemons active
        osd: 0 osds: 0 up, 0 in
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [cephadmin@proceph03 ~]$
To run ceph -s as the cephadmin user, change the ownership of the /etc/ceph directory:

    [cephadmin@proceph01 cephcluster]$ sudo chown -R cephadmin:cephadmin /etc/ceph
    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 7m)
        mgr: no daemons active
        osd: 0 osds: 0 up, 0 in
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [cephadmin@proceph01 cephcluster]$
All three nodes need to execute sudo chown -R cephadmin:cephadmin /etc/ceph
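One convenient way to do that in a single pass from the deployment node is a loop over the hosts (a convenience sketch, relying on the passwordless SSH and sudo configured earlier):

    for h in proceph01 proceph02 proceph03; do
        ssh $h "sudo chown -R cephadmin:cephadmin /etc/ceph"
    done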
3.3.7 Configure OSDs
Run as the cephadmin user on the deployment node.
All three nodes need OSDs, and the commands for all of them can be issued directly from the deployment node.
First check the disks on each node with lsblk, then run, per node:

    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    do
        ceph-deploy disk zap proceph01 $dev
        ceph-deploy osd create proceph01 --data $dev
    done
Now add the OSDs on each node in turn.
3.3.7.1 Add OSDs on proceph01
3.3.7.1.1 Check the disk names first

    [cephadmin@proceph01 ~]$ lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0 223.1G  0 disk
    ├─sda1            8:1    0   200M  0 part /boot/efi
    ├─sda2            8:2    0     1G  0 part /boot
    └─sda3            8:3    0   221G  0 part
      ├─centos-root 253:0    0   175G  0 lvm  /
      ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
      └─centos-home 253:2    0    30G  0 lvm  /home
    sdb               8:16   0   7.3T  0 disk
    sdc               8:32   0   7.3T  0 disk
    sdd               8:48   0   7.3T  0 disk
    sde               8:64   0   7.3T  0 disk
    sdf               8:80   0   7.3T  0 disk
    sdg               8:96   0   7.3T  0 disk
    [cephadmin@proceph01 ~]$
3.3.7.1.2 Add the OSDs on the proceph01 node
Execute in the directory /home/cephadmin/cephcluster:

    [cephadmin@proceph01 cephcluster]$ pwd
    /home/cephadmin/cephcluster
    [cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    > do
    > ceph-deploy disk zap proceph01 $dev
    > ceph-deploy osd create proceph01 --data $dev
    > done
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph01 /dev/sdb
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
Check:
You can see that 6 OSDs have been added.

    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_WARN
                no active mgr
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 2h)
        mgr: no daemons active
        osd: 6 osds: 6 up (since 51s), 6 in (since 51s)
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [cephadmin@proceph01 cephcluster]$
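For a per-host view of the new OSDs, the standard OSD listing commands can also be used (their output is not shown in the original):

    ceph osd tree    # shows each OSD under its host bucket with weight and up/in status
    ceph osd df      # per-OSD capacity and utilization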
3.3.7.2 Add OSDs on the proceph02 node
Run from the deployment node.
First log in to proceph02 and check the disks:

    [cephadmin@proceph02 ~]$ lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0 223.1G  0 disk
    ├─sda1            8:1    0   200M  0 part /boot/efi
    ├─sda2            8:2    0     1G  0 part /boot
    └─sda3            8:3    0   221G  0 part
      ├─centos-root 253:0    0   175G  0 lvm  /
      ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
      └─centos-home 253:2    0    30G  0 lvm  /home
    sdb               8:16   0   7.3T  0 disk
    sdc               8:32   0   7.3T  0 disk
    sdd               8:48   0   7.3T  0 disk
    sde               8:64   0   7.3T  0 disk
    sdf               8:80   0   7.3T  0 disk
    sdg               8:96   0   7.3T  0 disk
    [cephadmin@proceph02 ~]$
Then, on the deployment node, execute in the /home/cephadmin/cephcluster directory:

    [cephadmin@proceph01 cephcluster]$ pwd
    /home/cephadmin/cephcluster
    [cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    > do
    > ceph-deploy disk zap proceph02 $dev
    > ceph-deploy osd create proceph02 --data $dev
    > done
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph02 /dev/sdb
Check:

    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_WARN
                no active mgr
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
        mgr: no daemons active
        osd: 12 osds: 12 up (since 25m), 12 in (since 25m)
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [cephadmin@proceph01 cephcluster]$
3.3.7.3 Add OSDs on the proceph03 node
On node 3, check the new disks:

    [cephadmin@proceph03 ~]$ lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0 223.1G  0 disk
    ├─sda1            8:1    0   200M  0 part /boot/efi
    ├─sda2            8:2    0     1G  0 part /boot
    └─sda3            8:3    0   221G  0 part
      ├─centos-root 253:0    0   175G  0 lvm  /
      ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
      └─centos-home 253:2    0    30G  0 lvm  /home
    sdb               8:16   0   7.3T  0 disk
    sdc               8:32   0   7.3T  0 disk
    sdd               8:48   0   7.3T  0 disk
    sde               8:64   0   7.3T  0 disk
    sdf               8:80   0   7.3T  0 disk
    sdg               8:96   0   7.3T  0 disk
    [cephadmin@proceph03 ~]$
Go back to the deployment node and add the OSDs:

    [cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    > do
    > ceph-deploy disk zap proceph03 $dev
    > ceph-deploy osd create proceph03 --data $dev
    > done
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph03 /dev/sdb
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
Check:

    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_WARN
                no active mgr
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
        mgr: no daemons active
        osd: 18 osds: 18 up (since 18s), 18 in (since 18s)
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   0 B used, 0 B / 0 B avail
        pgs:
    [cephadmin@proceph01 cephcluster]$
3.3.8 Deploy mgr
Run on the deployment node:

    [cephadmin@proceph01 cephcluster]$ ceph-deploy mgr create proceph01 proceph02 proceph03
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create proceph01 proceph02 proceph03
Check:

    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
        mgr: proceph01(active, since 24s), standbys: proceph02, proceph03
        osd: 18 osds: 18 up (since 2m), 18 in (since 2m)
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   18 GiB used, 131 TiB / 131 TiB avail
        pgs:
    [cephadmin@proceph01 cephcluster]$
3.3.9 Install the mgr dashboard (all three nodes)
It is installed on all three nodes, but for now it is enabled only on the active mgr node.
Install it directly with yum. The example below is for proceph01; proceph02 and proceph03 need the same installation.

    [cephadmin@proceph01 cephcluster]$ sudo yum install ceph-mgr-dashboard
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
3.3.10 Enable the mgr dashboard (active mgr node)

    [cephadmin@proceph01 cephcluster]$ ceph -s
      cluster:
        id:     ad0bf159-1b6f-472b-94de-83f713c339a3
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
        mgr: proceph01(active, since 94s), standbys: proceph02, proceph03
        osd: 18 osds: 18 up (since 6m), 18 in (since 6m)
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 B
        usage:   18 GiB used, 131 TiB / 131 TiB avail
        pgs:
    [cephadmin@proceph01 cephcluster]$

The status shows "mgr: proceph01(active, since 94s), standbys: proceph02, proceph03", so enable the dashboard on proceph01:

    [cephadmin@proceph01 cephcluster]$ ceph mgr module enable dashboard
    [cephadmin@proceph01 cephcluster]$ ceph dashboard create-self-signed-cert
    Self-signed certificate created
    [cephadmin@proceph01 cephcluster]$ ceph dashboard set-login-credentials admin admin
    ******************************************************************
    ***          WARNING: this command is deprecated.              ***
    *** Please use the ac-user-* related commands to manage users. ***
    ******************************************************************
    Username and password updated
    [cephadmin@proceph01 cephcluster]$
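Before opening the browser, the URL the dashboard is actually serving can be confirmed with the mgr services listing (its output is not captured in the original transcript):

    ceph mgr services    # prints something like {"dashboard": "https://proceph01:8443/"}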
Then log in at https://10.3.140.31:8443 (the bond0 address of the active mgr node, proceph01).
The account and password are admin / admin, as set above.