Add all nodes to /etc/hosts on every host ( or make sure their names resolve via DNS )
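For example, with two nodes named glus1 and glus2 ( the addresses below are placeholders, substitute your own ):
192.168.100.11 glus1
192.168.100.12 glus2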
Ports
111 - portmap/rpcbind
24007 - GlusterFS Daemon
24008 - GlusterFS Management
38465 to 38467 - Required for GlusterFS NFS service
24009 to +X - GlusterFS versions earlier than 3.4
49152 to +X - GlusterFS versions 3.4 and later
Each brick for every volume on the host requires its own port. For every new brick, one new port will be used starting at 24009 for GlusterFS versions earlier than 3.4 and 49152 for version 3.4 and later.
For example, if you have one volume with two bricks, you must open 24009-24010 or 49152-49153.
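A firewalld sketch for the two-brick layout used below, assuming GlusterFS 3.4 or later ( extend the 49152 range if you add bricks ):
firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=38465-38467/tcp
firewall-cmd --permanent --add-port=49152-49153/tcp
firewall-cmd --reload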
Install packages
yum install centos-release-gluster310.noarch
yum install parted lvm2 xfsprogs glusterfs glusterfs-fuse glusterfs-server
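centos-release-gluster310 enables the CentOS Storage SIG repository for the 3.10 series; confirm what was installed:
glusterfs --version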
To exclude kernel and Gluster packages from yum updates, add this line to /etc/yum.conf:
exclude=kernel* gluster*
Verify with:
grep ^exclude /etc/yum.conf
On all nodes
pvcreate /dev/sdb ( use same-size disks on every node; replica bricks should match in size )
vgcreate vgglus1 /dev/sdb
lvcreate -l 100%VG -n gbrick1 vgglus1
mkfs.xfs -i size=512 /dev/vgglus1/gbrick1
echo '/dev/vgglus1/gbrick1 /opt/gvol1 xfs inode64,nobarrier 0 0' >> /etc/fstab
mkdir -p /opt/gvol1
mount /opt/gvol1
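Check the mount and its options:
df -h /opt/gvol1
mount | grep gvol1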
On node 1
mkdir /opt/gvol1/brick1
On node 2
mkdir /opt/gvol1/brick2
Start glusterd
systemctl enable glusterd
systemctl start glusterd
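Verify the daemon is running on each node:
systemctl status glusterd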
Build a peer group
from node1: gluster peer probe glus2 ; gluster peer status
from node2: gluster peer probe glus1 ; gluster peer status ( the reverse probe updates node1's pool entry from its IP to its hostname )
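On glus1 the peer status should look roughly like this ( UUIDs will differ ):
Number of Peers: 1

Hostname: glus2
Uuid: <uuid>
State: Peer in Cluster (Connected)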
Verify the status of the pool
gluster pool list
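Expected output, one line per node ( again, UUIDs will differ ):
UUID      Hostname   State
<uuid>    glus2      Connected
<uuid>    localhost  Connected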
Create a replicated volume
gluster volume create gvol1 replica 2 transport tcp glus1:/opt/gvol1/brick1 glus2:/opt/gvol1/brick2
( note: replica 2 volumes are prone to split-brain; consider replica 3 or an arbiter for production )
gluster volume start gvol1
Get volume info
gluster volume info gvol1
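At this point a client with glusterfs-fuse installed can mount the volume natively ( /mnt/gvol1 is an example mount point ):
mkdir -p /mnt/gvol1
mount -t glusterfs glus1:/gvol1 /mnt/gvol1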
Export with NFS ( on recent releases Gluster NFS is disabled by default on new volumes, so check its status first )
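To see the current setting and whether an NFS server is already running for the volume:
gluster volume get gvol1 nfs.disable
gluster volume status gvol1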
gluster volume stop gvol1
gluster volume set gvol1 auth.allow 192.168.100.*
gluster volume set gvol1 nfs.disable off
gluster volume set gvol1 nfs.addr-namelookup off
gluster volume set gvol1 nfs.export-volumes on
gluster volume set gvol1 nfs.rpc-auth-allow 192.168.100.*
gluster volume start gvol1
gluster volume info gvol1
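From a client in the allowed subnet, the export can then be mounted over NFSv3 ( Gluster NFS does not serve NFSv4; /mnt/gvol1 is an example mount point ):
showmount -e glus1
mkdir -p /mnt/gvol1
mount -t nfs -o vers=3 glus1:/gvol1 /mnt/gvol1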