Monday, November 19, 2012

High-Availability NFS Pseudo Load-Balanced Using Round Robin DNS, Virtual IP and Heartbeat on Ubuntu 12.04 LTS

The goal of this setup is to provide fault tolerance and increased read performance for an NFS share.

Prerequisites:

  • Two servers with equal content to be shared using NFS
  • DNS server 

The setup:

  • Server1: server1.example.com, IP address: 10.0.0.100
  • Server2: server2.example.com, IP address: 10.0.0.101
  • Virtual IP address: 10.0.0.200, preferred on server1, failover to server2
  • Virtual IP address: 10.0.0.201, preferred on server2, failover to server1
  • Virtual IP address: fe80::f1ee:dead:beef, preferred on server1, failover to server2
  • Virtual IP address: fe80::baad:dead:beef, preferred on server2, failover to server1
  • DNS entry with A (and AAAA) records pointing to both virtual IPs (multihomed, resolved using round robin)
For my particular setup, I have replicated storage using DRBD with OCFS2 to ensure equal content across my two servers.

Result:

  • Fault-tolerant NFS share behind a single DNS name
  • If one server goes down, the other server will take over the virtual IP, providing high-availability
  • Round robin balanced NFS mounts across the two servers
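As a sketch of the DNS side, assuming a BIND-style zone file and a hypothetical record name `nfs`, the round-robin entries could look like this:

```
; Round robin: two A records for the same name, one per virtual IP
nfs     IN  A       10.0.0.200
nfs     IN  A       10.0.0.201
```

Clients resolving nfs.example.com will then alternate between the two addresses, spreading mounts across both servers.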

Set up two virtual ips

Virtual IPs are set up and managed by Heartbeat. Install it on both servers:
apt-get install heartbeat -y
echo -e "auth 3\n3 md5 secretpasswordhere" > /etc/ha.d/authkeys
chmod 600 /etc/ha.d/authkeys
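Rather than typing a shared secret by hand, you can generate a random one. A quick sketch (assumes openssl is installed):

```shell
# Derive a random 32-character hex secret from /dev/urandom
SECRET=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5 | awk '{print $NF}')

# Print the authkeys content; redirect this to /etc/ha.d/authkeys
printf 'auth 3\n3 md5 %s\n' "$SECRET"
```

Generate the secret once and copy the same authkeys file to both servers (the nodes must share the same key), then chmod 600 it as above.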

Server1:
nano /etc/ha.d/ha.cf
# How many seconds between heartbeats
keepalive 2

# Seconds before declaring host dead
deadtime 10
 
# What UDP port to use for udp or ppp-udp communication?
udpport 694

# What interfaces to heartbeat over?
bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.101

logfacility     local0

# Allow ip to float back when server recovers
auto_failback on

# Tell what machines are in the cluster
# node must match uname -n
node    server1.example.com
node    server2.example.com

Server2:
nano /etc/ha.d/ha.cf
# How many seconds between heartbeats
keepalive 2

# Seconds before declaring host dead
deadtime 10

# What UDP port to use for udp or ppp-udp communication?
udpport 694

# What interfaces to heartbeat over?
bcast  eth0
mcast eth0 225.0.0.1 694 1 0
ucast eth0 10.0.0.100

logfacility     local0

# Allow ip to float back when server recovers
auto_failback on

# Tell what machines are in the cluster
# node must match uname -n
node    server1.example.com
node    server2.example.com
Now, on both servers - set up the virtual ips:

nano /etc/ha.d/haresources 
server1 10.0.0.200 IPv6addr::fe80:0000:0000:0000:0000:f1ee:dead:beef
server2 10.0.0.201 IPv6addr::fe80:0000:0000:0000:0000:baad:dead:beef
Note: Be sure the node names (server1, server2) match the 'uname -n' output on each server.

Restart heartbeat to make the changes take effect:

/etc/init.d/heartbeat restart 

Now server1 is the primary node for 10.0.0.200 and will take over 10.0.0.201 if server2 stops responding (and vice versa).
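To confirm the virtual IPs came up where expected, you can check the interface on each server, for example:

```
# On server1, the preferred virtual IP should be present:
ip addr show eth0 | grep 10.0.0.200

# To test failover, stop heartbeat on server1...
/etc/init.d/heartbeat stop
# ...and within roughly 'deadtime' seconds the same grep
# should succeed on server2 instead.
```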

Configuring NFS

Install NFS and share /var/www on both servers:

apt-get install nfs-kernel-server -y
mkdir -p /var/www
echo "/var/www 10.0.0.0/19(rw,async,no_subtree_check)" >> /etc/exports
exportfs -ra


Now your clients can connect via the DNS name that points to both virtual IPs, and enjoy the benefits of a highly available NFS server!
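On the client side, mounts then go through the round-robin name rather than a specific server. For example (nfs.example.com is a hypothetical name matching the DNS entry above):

```
# One-off mount:
mount -t nfs nfs.example.com:/var/www /var/www

# Or persistently, in /etc/fstab:
nfs.example.com:/var/www  /var/www  nfs  rw,hard,intr  0  0
```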
