CL236 Client configuration – mounting via NFS
This chapter describes how to configure clients to mount Red Hat Gluster Storage volumes via NFS.
RHCA column address: https://blog.csdn.net/qq_41765918/category_11532281.html
Red Hat Gluster Storage volumes and NFSv3
By default, any new Red Hat Gluster Storage volume is exported via NFSv3 with ACL support enabled. This allows clients that cannot run the native client to access data stored on Red Hat Gluster Storage volumes without adding a dedicated server to re-export those volumes.
The NFSv3 export does not use the NFSv3 server in the Linux kernel. Instead, it uses a dedicated NFSv3 server written specifically for Red Hat Gluster Storage, which is exported only over TCP.
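To confirm that this built-in Gluster NFS server is actually exporting a volume, you can check the volume status on a server and the export list from a client. A quick sketch, assuming a volume named vol1 and a server named node1 (hypothetical names for illustration):

```shell
# On a storage server: the status output includes an "NFS Server" line
# for each host exporting the volume
gluster volume status vol1

# On a client: list the exports offered by the server's NFSv3 service
showmount -e node1
```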
Unlike native clients, clients using NFSv3 do not automatically fail over to another server when the server they are connected to becomes unavailable. This failover, as well as NFSv4 support, can be configured using NFS-Ganesha, as discussed in Chapter 8, "Configuring IP Failover".
Initial configuration of a server exporting NFSv3
Unless NFSv3 is disabled for a volume, the volume is exported through NFSv3 (from all hosts) as soon as it starts. To access the volume from a client, the firewall on the hosts acting as NFSv3 servers must be modified to allow this traffic.
When using firewalld, two services must be allowed to let NFSv3 through the firewall: rpc-bind and nfs. The rpc-bind service allows connections on ports 111/TCP and 111/UDP, required by the portmapper service, while the nfs service allows connections on port 2049/TCP for the actual NFSv3 service.
# firewall-cmd --add-service=rpc-bind --add-service=nfs --permanent
# firewall-cmd --reload
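After reloading, you can verify that the firewall changes took effect and that the RPC services are reachable. A minimal check, assuming it is run on the storage server itself:

```shell
# Confirm rpc-bind and nfs appear in the active zone's service list
firewall-cmd --list-services

# Confirm portmapper (port 111) and nfs (port 2049) registrations
rpcinfo -p localhost
```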
# echo "node1:/vol1 /mnt/vol1 nfs defaults,vers=3,_netdev 0 0" >> /etc/fstab
# mount -a
(The exercise in the textbook uses a plain "nfs rw 0 0" entry, but as recommended, the version option is added here.)
**Note:** Red Hat Enterprise Linux 7 clients can mount Red Hat Gluster Storage volumes using NFSv3 without any additional mount options, but it is still recommended to add the vers=3 option so that the client does not try protocol version 4 first. You can also specify the proto=tcp option if the client might first attempt to mount over UDP.
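Following that note, a mount that pins both the protocol version and the transport might look like this (servera:/mediadata and /mnt/mediadata are the example names used in the exercise below):

```shell
# One-off mount with explicit NFSv3 over TCP
mount -t nfs -o vers=3,proto=tcp servera:/mediadata /mnt/mediadata

# Equivalent persistent entry in /etc/fstab
echo "servera:/mediadata /mnt/mediadata nfs defaults,vers=3,proto=tcp,_netdev 0 0" >> /etc/fstab
```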
Although Red Hat Gluster Storage volumes can also be exported via UDP (using the nfs.mount-udp option), UDP exports do not support subdirectory mounts or the client restrictions set on the volume.
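If UDP mounts are nevertheless required, the option mentioned above is set per volume on the server side. A sketch, assuming a volume named vol1:

```shell
# Allow the MOUNT protocol over UDP for this volume
# (loses subdirectory mounts and per-client restrictions, as noted above)
gluster volume set vol1 nfs.mount-udp on
```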
[student@workstation ~]$ lab nfs-client setup
Setting up for lab exercise work:
 • Testing if all hosts are reachable.......................... SUCCESS
 • Adding glusterfs to runtime firewall on servera............. SUCCESS
 • Adding glusterfs to permanent firewall on servera........... SUCCESS
 • Adding glusterfs to runtime firewall on serverb............. SUCCESS
 • Adding glusterfs to permanent firewall on serverb........... SUCCESS
 • Adding glusterfs to runtime firewall on serverc............. SUCCESS
 • Adding glusterfs to permanent firewall on serverc........... SUCCESS
 • Adding glusterfs to runtime firewall on serverd............. SUCCESS
 • Adding glusterfs to permanent firewall on serverd........... SUCCESS
 • Adding servera to trusted storage pool...................... SUCCESS
 • Adding serverb to trusted storage pool...................... SUCCESS
 • Adding serverc to trusted storage pool...................... SUCCESS
 • Adding serverd to trusted storage pool...................... SUCCESS
 • Ensuring thin LVM pool vg_bricks/thinpool exists on servera. SUCCESS
............
1. Set firewall rules
[root@servera ~]# firewall-cmd --add-service=rpc-bind --add-service=nfs --permanent
success
[root@servera ~]# firewall-cmd --reload
success
2. Mount using NFS.
[root@workstation ~]# mkdir /mnt/mediadata
[root@workstation ~]# echo "servera:/mediadata /mnt/mediadata nfs rw 0 0" >> /etc/fstab
[root@workstation ~]# mount -a
[root@workstation ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/vda1           xfs        10G  3.0G  7.1G  30% /
devtmpfs            devtmpfs  902M     0  902M   0% /dev
tmpfs               tmpfs     920M   84K  920M   1% /dev/shm
tmpfs               tmpfs     920M   17M  904M   2% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs               tmpfs     184M   16K  184M   1% /run/user/42
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
servera:/mediadata  nfs       8.0G  130M  7.9G   2% /mnt/mediadata
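Since the fstab entry requests rw access, a simple smoke test can confirm the mount is actually writable before grading; the file name here is arbitrary:

```shell
# Write and read back a test file on the NFS mount, then clean up
echo "hello" > /mnt/mediadata/mount-test.txt
cat /mnt/mediadata/mount-test.txt
rm /mnt/mediadata/mount-test.txt
```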
3. Scoring script
[root@workstation ~]# lab nfs-client grade
To recap: on the servers, configure the NFSv3 export (for example, the firewall policy); on the clients, mount the GlusterFS volume via NFS for use.
The above is Brother Goldfish's sharing; I hope it is helpful to everyone who reads this article.
If this article helped you, please give Brother Goldfish a like 👍 – creating content is not easy. Compared with the official documentation, I prefer to explain every knowledge point in an easy-to-understand style. If you are interested in operations and maintenance technology, you are also welcome to follow ❤️ Brother Goldfish ❤️, and I will bring you plenty of harvest and surprises 💕!