To offload heavily used webservers, you can add more web servers and set up an Apache load balancer in front of them to distribute incoming requests across the backend servers.
The load balancer hides the backend servers from the public; from the outside it looks like a single server is doing all of the work.
Enable the load balancer module
To enable load balancing support for the HTTP, FTP and AJP13 protocols in Apache, the following modules are required:
LoadModule proxy_module mod_proxy.so
LoadModule proxy_http_module mod_proxy_http.so
LoadModule proxy_balancer_module mod_proxy_balancer.so
Don't forget to load mod_proxy_http: if it is missing you won't get any error messages, the balancer simply won't work.
Disable Proxy
Because mod_proxy also turns Apache into an (open) proxy server, prevent Apache from acting as a forward proxy by setting ProxyRequests to Off:
ProxyRequests Off
<Proxy *>
Order deny,allow
Deny from all
</Proxy>
In a typical reverse proxy or gateway configuration, this directive should always be set to Off.
Doing so does not disable the use of the ProxyPass directive.
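For example, with ProxyRequests Off a reverse-proxy mapping still works as expected (backend.example.com is a placeholder for your own backend host):

ProxyRequests Off
ProxyPass /app http://backend.example.com/app
ProxyPassReverse /app http://backend.example.com/app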
Example of a balancer configuration
Below is an example configuration that load-balances requests between two backend servers:
<Proxy balancer://clustername>
BalancerMember http://serverA
BalancerMember http://serverB
Order allow,deny
Allow from all
</Proxy>
<VirtualHost *:80>
ProxyPreserveHost On
ProxyPass / balancer://clustername/
ProxyPassReverse / balancer://clustername/
ServerName server.com
</VirtualHost>
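mod_proxy_balancer also provides a web interface for monitoring and adjusting the balancer at runtime. A minimal sketch, here restricted to localhost (adjust the allowed address to your environment):

<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>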
Load balancer scheduler algorithm
By default, Apache simply counts requests and ensures that every backend server receives the same number of forwarded requests.
There are three load balancer scheduler algorithms available. You select one via the lbmethod value of the balancer definition:
Request Counting (how much we expect this worker to work)
lbmethod=byrequests
Weighted Traffic (how much traffic, in bytes, we want this worker to handle)
lbmethod=bytraffic
Pending Request Counting (a new request is automatically assigned to the worker with the lowest number of active requests)
lbmethod=bybusyness
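To pick one of these algorithms, set lbmethod inside the balancer definition, for example via ProxySet (using the same clustername as in the examples above):

<Proxy balancer://clustername>
BalancerMember http://serverA
BalancerMember http://serverB
ProxySet lbmethod=bytraffic
</Proxy>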
Load balancer stickyness
The balancer supports stickyness: once a request has been proxied to a particular backend, all subsequent requests from the same user should be proxied to the same backend.
A common way to implement stickyness is to use the client's IP address. That approach is transparent to clients and backends, but suffers from several problems: unequal load distribution if clients are themselves hidden behind proxies, stickyness errors when a client uses a dynamic IP address that changes during a session, and loss of stickyness if the mapping table overflows.
Example of how to provide load balancing with stickyness
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://clustername>
BalancerMember http://serverA route=1 loadfactor=1
BalancerMember http://serverB route=2 loadfactor=2
ProxySet lbmethod=byrequests stickysession=ROUTEID timeout=15
</Proxy>
In this example, serverB has loadfactor=2, so it receives twice as many requests as serverA.
<VirtualHost *:80>
ProxyPreserveHost On
ProxyPass / balancer://clustername/
ProxyPassReverse / balancer://clustername/
ServerName server.com
</VirtualHost>