CentOS (5.5) glusterfs configuration
This configuration uses two nodes as servers and one client:
- node01 - 192.168.100.68
- node02 - 192.168.100.69
- client - 192.168.100.106
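The client configuration further down refers to the servers as node01.host.com and node02.host.com. If those names are not resolvable via DNS, one option (a sketch; the .host.com domain is the author's placeholder) is to map the addresses above in /etc/hosts on each machine:

```shell
# Append hostname mappings for the GlusterFS nodes (run as root).
cat >> /etc/hosts <<'EOF'
192.168.100.68   node01.host.com node01
192.168.100.69   node02.host.com node02
192.168.100.106  client.host.com client
EOF
```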
# wget http://download.gluster.com/pub/gluster/glusterfs/2.0/LATEST/CentOS/glusterfs-*
(Note: wget does not expand the * wildcard over HTTP; download each glusterfs RPM listed in that directory.)
# yum install libibverbs-devel fuse fuse-devel
# rpm -i glusterfs-*
# cd /etc/glusterfs/
The following commands should be executed on node01 and node02.
# vim glusterfsd.vol

volume posix
  type storage/posix
  option directory /data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.100.*
  subvolumes brick
end-volume

# /etc/init.d/glusterfsd start
# chkconfig --level 345 glusterfsd on
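The storage/posix translator in glusterfsd.vol exports /data, and glusterfsd does not create that directory itself; it must exist on both server nodes before the daemon starts:

```shell
# Create the brick directory exported by the storage/posix translator
# (the path matches "option directory /data" in glusterfsd.vol).
mkdir -p /data
chmod 755 /data
```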
The following commands should be executed on the client.
# vim /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host node01.host.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host node02.host.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

# mkdir /mnt/glusterfs
# glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs
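To remount the volume automatically at boot, the same mount can be expressed as an /etc/fstab entry (a sketch; in GlusterFS 2.x the volfile path is used as the "device" for the FUSE mount):

```
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
```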
Summary: the node01 and node02 hosts will work as GlusterFS servers. GlusterFS supports replicate and distribute configurations. In our case (replicate) the shared folder on each server is mirrored, so if one of the servers goes down the other(s) will take over the whole traffic without interruption and without any additional intervention.
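A quick way to confirm the mirroring described above on a live cluster (the file name is hypothetical; run from the client after mounting):

```shell
# Write a file through the client mount point...
echo "replication test" > /mnt/glusterfs/testfile
# ...then confirm it landed in the /data brick on both servers.
ssh node01.host.com 'cat /data/testfile'
ssh node02.host.com 'cat /data/testfile'
```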