Load balancing with ISPConfig 3

Discussion in 'General' started by BrojJedan, Jul 4, 2017.

  1. BrojJedan

    BrojJedan New Member

    Hi,
    yeah.. I know.. this again..
    But I was wondering... is it possible to load balance two servers without having to deploy a dedicated load balancer?
    I would set up two servers on different IPs, with different internet providers: one in my office, one offsite at another location. Both IPs are public.

    Is it possible to set up something similar to round robin, but with persistent sessions?

    so something like this:
    server 1 (33.44.55.66) ----------- INTERNET ----------- server 2 (66.55.44.33)

    So if a client fails to connect to one server, the other server takes over (in case of storms, longer power outages, etc.).
    Load balancing and redundancy are always a good idea anyway. I am hosting just two websites (both mine). It is not a production server, more of a hobby of mine.

    Thanks in advance!
    M.
     
  2. Tuumke

    Tuumke Active Member

    I don't think you can load balance with ISPConfig itself. You would really need a load balancer in front of the servers.
    Clients would access the load balancer's public IP address, not those of your servers. I don't know of any free/open-source load balancers off the top of my head, but they are probably out there.
     
  3. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    I never tried it, but I think an nginx server can basically already do load balancing by sitting in front of the other servers as a reverse proxy. It can also (by itself) balance a certain number of concurrent connections, depending on the server itself.
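    A minimal sketch of what that front-line reverse proxy could look like (just an illustration; the domain and backend address below are placeholders, and the block goes inside nginx's http context):

        # nginx sitting in front as a plain reverse proxy
        server {
            listen 80;
            server_name www.example.com;            # placeholder domain

            location / {
                proxy_pass http://192.0.2.10:8080;  # placeholder backend address
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }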
     
  4. I would also recommend @ahrasis's solution. Nginx works very well as a load balancer. An alternative is Varnish, but if you're using nginx already, putting Varnish on top of it is overkill. Varnish is a more sophisticated reverse proxy, but that is all it does: reverse proxying. Nginx can do it all: serve static web pages, pass PHP requests to the backend (or whatever else... I have nginx as a Go front end :) ), act as a proxy, act as a fast cache (even for PHP requests), and act as a load balancer, all with a footprint smaller than either Varnish or Apache.

    What I'm not sure about is whether you can configure two nginxes, one on the master and one on the other server, each contacting its respective backends in a load-balancing configuration. I have never tried that; my 'usual' configuration has one nginx as a front-end load balancer/reverse proxy/fast proxy cache/whatever on a public IP address, in front of several back-end servers (running nginx locally as well), each of them on a private IP address. Theoretically, nothing prevents the same configuration from being spread across two nginxes running on two different public IP addresses.

    Here is some introductory reading on setting up nginx load balancing, using several different algorithms. The configuration is really very simple to understand: http://nginx.org/en/docs/http/load_balancing.html It just... works :)
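    To make that concrete, here is a minimal sketch of such an upstream block, using the two example IPs from the first post and ip_hash for the persistent sessions that were asked about (the domain is a placeholder, and this still means running one nginx in front as the balancer):

        # the two servers from the first post, with sticky sessions via ip_hash
        upstream mysites {
            ip_hash;                       # a given client keeps hitting the same backend
            server 33.44.55.66;
            server 66.55.44.33;
        }

        server {
            listen 80;
            server_name www.example.com;   # placeholder domain

            location / {
                proxy_pass http://mysites;
            }
        }

    If one of the two servers stops responding, nginx marks it as failed for a short while and sends everything to the other one, which covers the failover part of the original question.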
     
  5. BrojJedan

    BrojJedan New Member

    Thank you all for your replies. Nginx sounds like a good idea... but I am more fluent in Apache... Old habits die hard...
    Also thank you for the link. I will look into this later this week. As I said, it is just for my own hobby. I like discovering new things. :)
    If I find anything, I will post here.

    Thank all of you again!
     
  6. ... I humbly admit that I only started to tinker with nginx a while ago because I was curious about it, since so many people talked wonders about its performance and tiny footprint compared to Apache's bloated executable (and aye, I'm aware one can leave out the modules that are not needed... but every time I do that I tend to leave too much out, and then nothing works... lol). At that time I had a really tiny VPS to play with, and I was curious about how much performance I could extract from it, which actually led me to write an extensive tutorial on the subject. Sorry for the plug; the configuration is not meant to be used with ISPConfig3 (even though I mention ISPConfig3 in the comment section!), but there are some useful tweaks you can use if you are running on a very tiny footprint. Nginx is really great for that!
    Note that I have also tested Apache on a tiny-footprint VPS, and sometimes I can get it working, but more often not! Nginx, however, does cheat a bit: it does not handle PHP natively, it requires an external PHP service for that (PHP-FPM is the usual choice); by contrast, Apache loads PHP as an internal module. There are advantages and disadvantages to both approaches. My own very unsophisticated benchmarks show that Apache runs PHP 5.4 a bit faster than Nginx, mostly because Apache does not need to talk to an external process but handles PHP as an internal thread (I never tried to benchmark more recent versions of PHP, though).

    Nginx, on the other hand, can be configured to serve static files directly to the client without talking to the backend (PHP) servers at all: so, in a real environment, where web pages are a mix of static and dynamic content, Nginx can easily outperform Apache. A well-configured Nginx setup, serving a PHP application that pre-generates static pages (like phpBB with Smarty, or WordPress with many of the caching plugins), can therefore be blindingly fast, because Nginx is so good and so fast at serving static content (and if the PHP back-end or the database die, Nginx can still continue to serve static content from the cache until the sysadmin fixes what went wrong). By contrast, Apache will have an edge with very CPU-intensive PHP applications, with few opportunities to cache content and few static pages/images/CSS/JS/whatever.

    As always, there is no 'perfect' choice, and I have certainly seen very creative solutions, like having Nginx as a reverse proxy and load balancer in front of a stripped-down Apache server that runs without the internal PHP module but with PHP-FPM as a separate process, with the Nginx front-end serving static content directly and not even bothering to talk to Apache. That kind of configuration is very appealing to 'old school' Apache die-hards, because it combines the advantages of all three solutions: you can still pretty much configure everything using the Apache configuration syntax, Apache can continue to do its job even if PHP-FPM dies, and on top of all that Nginx can continue to serve static content in a load-balancing scenario.

    Personally, though, I have never tried that solution out, although I am familiar with Varnish + Apache/PHP, which has equivalent performance to Nginx + PHP-FPM and can do pretty much the same thing, with a catch: for Varnish to be effective, it requires a huge amount of memory, typically starting at 1 GByte and using as much as you can afford. With that kind of configuration, almost everything will be served from memory anyway...
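    For the 'static served by nginx, PHP handed off to PHP-FPM' part, a minimal sketch (the domain, document root and socket path are assumptions; the socket path in particular varies by distro and PHP version):

        # nginx serves static files straight from disk and only forwards .php to PHP-FPM
        server {
            listen 80;
            server_name www.example.com;                  # placeholder domain
            root /var/www/example;                        # assumed document root
            index index.php index.html;

            # static assets never touch the PHP backend
            location ~* \.(css|js|png|jpg|gif|ico)$ {
                expires 30d;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed socket path
            }
        }

    Adding fastcgi_cache (together with fastcgi_cache_use_stale) on top of that gives the 'keep serving from the cache while the backend is down' behaviour described above.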
     
