CL236 configuring IP failover -- configuring NFS Ganesha

In this chapter, learn how to configure NFS Ganesha to provide highly available NFS access to Red Hat Gluster Storage volumes.

RHCA column address: https://blog.csdn.net/qq_41765918/category_11532281.html

NFS Ganesha features

NFS Ganesha is a user-space NFS file server. It supports NFSv3, NFSv4, NFSv4.1, and pNFS (as a technology preview). Using the cluster infrastructure provided by Corosync and Pacemaker, NFS Ganesha can provide high availability.

The built-in NFS server in Red Hat Gluster Storage only supports NFSv3. If NFSv4, Kerberos authentication or encryption, or IP failover is required, administrators should use NFS Ganesha instead.

Important: NFS Ganesha cannot run at the same time as the built-in NFSv3 server. The built-in NFS server must be disabled on all nodes that will run NFS Ganesha.
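
For example, if a volume is still being served by the built-in NFSv3 server, that server can be switched off per volume with the nfs.disable option (a minimal sketch, shown here for a hypothetical volume named demodata):

# gluster volume set demodata nfs.disable on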

You can refer to the official documentation for further reading:

https://www.gluster.org/glusterfs-and-nfs-ganesha-integration/

https://docs.gluster.org/en/latest/Administrator-Guide/NFS-Ganesha-GlusterFS-Integration/

Textbook exercise (learning by doing)

[root@workstation ~]# lab ganesha setup

1. Install the required packages on servera and serverb.

# systemctl stop glusterd
# killall glusterfs
# killall glusterfsd
# yum -y install glusterfs-ganesha

2. Update the firewall on servera and serverb.

Allow the high-availability (pacemaker/corosync), NFS, portmapper (rpc-bind), and mountd services:

# firewall-cmd --permanent --add-service=high-availability --add-service=nfs --add-service=rpc-bind --add-service=mountd 
success
# firewall-cmd --reload
success

3. Modify the configuration file as required.

[root@servera ~]# cp /etc/ganesha/ganesha-ha.conf.sample /etc/ganesha/ganesha-ha.conf
[root@servera ~]# vim /etc/ganesha/ganesha-ha.conf
[root@servera ~]# egrep -v ^# /etc/ganesha/ganesha-ha.conf
HA_NAME="gls-ganesha"
HA_VOL_SERVER="servera"
HA_CLUSTER_NODES="servera.lab.example.com,serverb.lab.example.com"
VIP_servera_lab_example_com="172.25.250.16"
VIP_serverb_lab_example_com="172.25.250.17"

[root@servera ~]# scp /etc/ganesha/ganesha-ha.conf serverb:/etc/ganesha/

4. Set up the cluster as required.

Prepare servera and serverb as cluster members by enabling the correct services, setting the cluster user password, and authenticating the nodes to each other.

# systemctl enable pacemaker.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/pacemaker.service to /usr/lib/systemd/system/pacemaker.service.
# systemctl enable pcsd.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
# systemctl start pcsd.service

# echo redhat | passwd --stdin hacluster 
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.

From servera, verify that all cluster nodes can authenticate with each other using pcs:
[root@servera ~]# pcs cluster auth -u hacluster -p redhat servera.lab.example.com serverb.lab.example.com
servera.lab.example.com: Authorized
serverb.lab.example.com: Authorized

5. Create an SSH key pair to allow passwordless login for NFS Ganesha.

[root@servera ~]# ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /var/lib/glusterd/nfs/secret.pem.
Your public key has been saved in /var/lib/glusterd/nfs/secret.pem.pub.
The key fingerprint is:
a4:bd:d2:9d:b8:13:1a:4f:0a:21:a4:2c:b8:85:d0:d1 root@servera.lab.example.com
The key's randomart image is:
+--[ RSA 2048]----+
| ..o             |
|. o E            |
|++      .        |
|=.o .  +         |
|.o . .. S        |
|.   . ..o+ .     |
|     ..*+.o      |
|      o.o.       |
|        ..       |
+-----------------+
[root@servera ~]# scp /var/lib/glusterd/nfs/secret.pem* serverb:/var/lib/glusterd/nfs/
root@serverb's password: 
secret.pem                                             100% 1675     1.6KB/s   00:00    
secret.pem.pub                                         100%  410     0.4KB/s   00:00
[root@servera ~]# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@servera
[root@servera ~]# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@serverb
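
To confirm the passwordless login works before continuing, an optional check is to run a command on serverb over SSH with the new key; it should complete without prompting for a password:

[root@servera ~]# ssh -i /var/lib/glusterd/nfs/secret.pem root@serverb hostname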

6. Start glusterd on both nodes and enable Gluster's shared storage volume.

Start glusterd on both nodes:
# systemctl start glusterd.service

[root@servera ~]# gluster volume set all cluster.enable-shared-storage enable
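
Enabling this option creates a gluster_shared_storage volume and mounts it on all nodes of the trusted pool; NFS Ganesha keeps its cluster-wide state there. An optional sanity check (not part of the graded steps) is:

# gluster volume info gluster_shared_storage
# df -h | grep shared_storage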

7. Configure NFS Ganesha on both nodes to use the default port (20048/TCP and 20048/UDP) for the mountd process.

# tail -f -n 10 /etc/ganesha/ganesha.conf 
NFS_Core_Param {
        #Use supplied name other than IP in NSM operations
        NSM_Use_Caller_Name = true;
        #Copy lock states into "/var/lib/nfs/ganesha" dir
        Clustered = false;
        #Use a non-privileged port for RQuota
        Rquota_Port = 4501;
        MNT_Port=20048;
}
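
These are only the relevant lines of /etc/ganesha/ganesha.conf. Once NFS Ganesha is running (after the next step), the mountd registration on port 20048 can be confirmed with rpcinfo, for example:

# rpcinfo -p | grep mountd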

8. Enable NFS Ganesha across the trusted pool and export the custdata volume.

[root@servera ~]# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success 
You have mail in /var/spool/mail/root

[root@servera ~]# gluster volume set custdata ganesha.enable on
volume set: success
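
Enabling nfs-ganesha also brings up the Pacemaker cluster defined in ganesha-ha.conf, including the two floating IP resources. Its state can be inspected from either node (an optional check; the exact output varies):

# pcs status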

9. Verify the exports and set up a permanent mount on workstation.

[root@servera ~]# showmount -e
Export list for servera.lab.example.com:
/custdata (everyone)
[root@servera ~]# showmount -e 172.25.250.16
Export list for 172.25.250.16:
/custdata (everyone)
[root@servera ~]# showmount -e 172.25.250.17
Export list for 172.25.250.17:
/custdata (everyone)

[root@workstation ~]# mkdir /mnt/nfs
[root@workstation ~]# echo "172.25.250.16:/custdata /mnt/nfs nfs rw,vers=4 0 0" >> /etc/fstab 
[root@workstation ~]# mount -a
[root@workstation ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/vda1               xfs        10G  3.0G  7.1G  30% /
devtmpfs                devtmpfs  902M     0  902M   0% /dev
tmpfs                   tmpfs     920M   84K  920M   1% /dev/shm
tmpfs                   tmpfs     920M   17M  904M   2% /run
tmpfs                   tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs                   tmpfs     184M   16K  184M   1% /run/user/42
tmpfs                   tmpfs     184M     0  184M   0% /run/user/0
172.25.250.16:/custdata nfs4      2.0G   33M  2.0G   2% /mnt/nfs
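
To watch the IP failover itself (optional, not graded), one node can be put into standby so that Pacemaker moves its floating IP to the surviving node, after which the mount on workstation should keep working. A possible sequence, assuming the pcs cluster standby syntax of this pcs release:

[root@servera ~]# pcs cluster standby servera.lab.example.com
[root@workstation ~]# df -h /mnt/nfs
[root@servera ~]# pcs cluster unstandby servera.lab.example.com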

10. Grade the exercise with the lab script.

[root@workstation ~]# lab ganesha grade

Chapter lab

[root@workstation ~]# lab ipfailover setup

1. Install the required software on serverc and serverd, and open any ports required for this configuration in the firewall on those machines.

# yum -y install samba ctdb
# firewall-cmd --permanent --add-service=samba
# firewall-cmd --permanent --add-port=4379/tcp
# firewall-cmd --reload

2. Stop the ctdbmeta volume, then modify the relevant start and stop hook scripts on serverc and serverd so that CTDB uses the ctdbmeta volume. Also enable clustering for Samba on both nodes.

[root@serverc ~]# gluster volume stop ctdbmeta 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: ctdbmeta: success

# vim /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# $META is the volume that will be used by CTDB as a shared filesystem.
# It is not desirable to use this volume for storing 'data' as well.
# META is set to 'all' (viz. a keyword and hence not a legal volume name)
# to prevent the script from running for volumes it was not intended.
# User needs to set META to the volume that serves CTDB lockfile.
META=ctdbmeta

# vim /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
# $META is the volume that will be used by CTDB as a shared filesystem.
# It is not desirable to use this volume for storing 'data' as well.
# META is set to 'all' (viz. a keyword and hence not a legal volume name)
# to prevent the script from running for volumes it was not intended.
# User needs to set META to the volume that serves CTDB lockfile.
META=ctdbmeta

Add clustering = yes to the [global] section of the Samba configuration:
# grep clustering -C 2 /etc/samba/smb.conf 

[global]
clustering=yes
#------------------------ AIO Settings ------------------------
#

3. Start the ctdbmeta volume, then configure CTDB to use your serverc and serverd systems for IP failover, with 172.25.250.18/24 as the floating IP address.

[root@serverc ~]# gluster volume start ctdbmeta 
volume start: ctdbmeta: success

# vim /etc/ctdb/nodes
172.25.250.12
172.25.250.13

# vim /etc/ctdb/public_addresses	
# cat /etc/ctdb/public_addresses 
172.25.250.18/24 eth0

# systemctl enable ctdb
# systemctl start ctdb
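
Once ctdb is running on both nodes, its health and the assignment of the public (floating) address can be checked with the ctdb tool (an optional verification step):

# ctdb status
# ctdb ip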

4. Ensure that the labdata volume is exported using Samba. Remember to set the Samba password for the smbuser user to redhat.

Set the Samba password of smbuser to redhat. Because CTDB propagates this change to all nodes, this step only needs to be performed on a single host.

# smbpasswd -a smbuser
New SMB password: redhat
Retype new SMB password: redhat
Added user smbuser.

# gluster volume set labdata stat-prefetch off
# gluster volume set labdata server.allow-insecure on
# gluster volume set labdata storage.batch-fsync-delay-usec 0
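
If you want to double-check that these options took effect, recent Gluster releases can query them back with gluster volume get, for example:

# gluster volume get labdata server.allow-insecure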

Both serverc and serverd need this change:
# vim /etc/glusterfs/glusterd.vol
    option rpc-auth-allow-insecure on
# systemctl restart glusterd

[root@serverc ~]# gluster volume stop labdata
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: labdata: success
[root@serverc ~]# gluster volume start labdata
volume start: labdata: success
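
Before mounting from workstation, the export can be listed to confirm that the gluster-labdata share is visible behind the floating IP (assuming the samba-client package is available wherever you run this):

# smbclient -L //172.25.250.18 -U smbuser%redhat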

5. On your workstation system, use Samba to permanently mount the labdata volume on /mnt/labdata through the floating IP address.

[root@workstation ~]# mkdir /mnt/labdata
[root@workstation ~]# echo "//172.25.250.18/gluster-labdata /mnt/labdata cifs user=smbuser,pass=redhat 0 0" >> /etc/fstab
[root@workstation ~]# mount -a
[root@workstation ~]# df -Th
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/vda1                       xfs        10G  3.1G  7.0G  31% /
devtmpfs                        devtmpfs  902M     0  902M   0% /dev
tmpfs                           tmpfs     920M   84K  920M   1% /dev/shm
tmpfs                           tmpfs     920M   17M  904M   2% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs                           tmpfs     184M   16K  184M   1% /run/user/42
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
//172.25.250.18/gluster-labdata cifs      2.0G   33M  2.0G   2% /mnt/labdata
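
As a final, optional failover test, stopping ctdb on one node should move 172.25.250.18 to the other node while the CIFS mount on workstation keeps working; one way to try this (remember to start ctdb again afterwards) is:

[root@serverc ~]# systemctl stop ctdb
[root@serverd ~]# ctdb ip
[root@workstation ~]# df -h /mnt/labdata
[root@serverc ~]# systemctl start ctdb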

6. Grade the exercise with the lab script.

[root@workstation ~]# lab ipfailover grade

7. Reset the environment

Reset workstation, servera, serverb, serverc, and serverd.

Summary

  • Described how to configure NFS Ganesha to provide high availability for NFS access to Gluster volumes.

The above is Brother Goldfish's sharing. I hope it can be helpful to the friends who read this article.

If this article is helpful to you, please give Brother Goldfish a like 👍; creating content is not easy. Compared with official documentation, I prefer to explain every knowledge point in an easy-to-understand style. If you are interested in operations and maintenance technology, you are also welcome to follow ❤️❤️❤️ Brother Goldfish ❤️❤️❤️, and I will bring you great rewards and surprises 💕💕!
