Working with Load Balancers

Load balancing is a key part of building highly available applications. By definition, it is the distribution of workload across multiple computing resources with the aim of optimizing resource use, maximizing throughput, minimizing response time, and avoiding overload of any single resource.

Over the last week I dealt with setting up such a scenario and learned a thing or two. For this application we were going to use HAProxy for the load balancing. It's free and it's the de facto open-source load balancer, shipped with most Linux distros, so it fit well in our Ubuntu environment.

The application is Java-based, running on Tomcat, with data stored in a PostgreSQL database. The setup therefore has two (virtual) server instances running Tomcat and a third running the database. We had a little trouble making it work at first: each instance worked well on its own, but some input fields got disabled when going through the load balancer. Eventually we got it to work.
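
When debugging that kind of behaviour, it helps to compare what a Tomcat instance returns directly with what comes back through the load balancer. A minimal sketch, assuming the masked addresses used in the config below (they are placeholders, not real hosts):

# Response headers straight from one Tomcat instance
curl -sI http://192.168.xx.xxx:8080/

# Response headers through HAProxy on port 80
curl -sI http://192.168.xx.xxx/

If the two differ, the problem is usually in the proxy configuration rather than in the application itself.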

One of the main issues was that HAProxy won't create any directories or files for you. If you plan to use custom error or log files, make sure they exist before you put their paths in haproxy.cfg.
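
As a sketch, assuming custom error pages under a hypothetical /etc/haproxy/errors-custom directory (not the path from my setup), the files and directories HAProxy expects can be created up front:

# Illustrative path -- adjust to wherever your haproxy.cfg points
sudo mkdir -p /etc/haproxy/errors-custom
sudo cp /etc/haproxy/errors/503.http /etc/haproxy/errors-custom/503.http

# The chroot and stats socket directories referenced in the config below must exist as well
sudo mkdir -p /var/lib/haproxy /run/haproxy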

Here is the configuration that finally worked:

global
    log 127.0.0.1   local0
    log 127.0.0.1   local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost

    cookie JSESSIONID prefix
    server tomcat1 192.168.xx.xxx:8080 cookie JSESSIONID_SERVER_1 check inter 5000
    server tomcat2 192.168.xx.xxx:8080 cookie JSESSIONID_SERVER_2 check inter 5000

listen stats *:81
    mode http
    maxconn 10

    timeout client      100s
    timeout server      100s
    timeout connect     100s
    timeout queue       100s

    log global
    stats enable
    stats show-node
    stats refresh 30s
    stats uri /haproxy?stats
    stats hide-version
    stats auth admin:pass787oo909
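
The cookie JSESSIONID prefix lines give sticky sessions: HAProxy prefixes the Tomcat session cookie with the server-specific value, so every request in a session is routed back to the instance that created it. To apply the configuration, it can be validated and HAProxy restarted, then the stats page checked to confirm both Tomcat backends are up (the address is a placeholder; the credentials come from the stats auth line above):

# Validate the configuration file, then restart the service
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo service haproxy restart

# Stats page defined in the listen stats section:
# http://192.168.xx.xxx:81/haproxy?stats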
