I've been playing around with ISPConfig 3 for 3 months now, trying to set up an HA shared hosting cluster. The initial idea was to have pairs of web servers, pairs of mail servers, and pairs or triplets of DB servers, all behind a load balancer.

The tutorials use lsyncd+unison. I have set this up a few times, and although it worked conceptually, it died when it came to real work: uploading a website of 1500 files took 3 hours to replicate to the other web server. That is just not acceptable. I eventually found csync2, and using csync2+lsyncd managed to get this implemented instead. Now my 1500-file website replicates within 7 seconds. This is good.

I'm now moving on to the database part. The tutorials use MySQL master-master replication, but this is buggy as hell. I'm trying to build a business-grade setup and cannot afford any outages or sync issues. There's also Tungsten, but from what I've read it seems to have sync issues as well. Galera looks like a good idea as it does multi-master synchronous replication, but it requires at least a triplet setup to avoid split-brain.

Has anyone actually managed to get an HA solution into production using ISPConfig? I'm pulling out the little hair I have left with this endeavour. All feedback and ideas welcome.

Thanks in advance,
-Andreas
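In case it helps anyone, here is roughly what the csync2+lsyncd combination boils down to on my web nodes. Host names, the key path and the web root are placeholders for my own setup, and you should check the lsyncd manual for the exact layer-3 syntax of your version; this is a sketch, not a drop-in config.

Code:
# /etc/csync2.cfg (identical on both web nodes)
group ispconfig
{
    host web1 web2;
    key /etc/csync2.key;
    include /var/www;
    exclude *.log;
    auto younger;      # on conflict, the newer file wins
}

Code:
-- /etc/lsyncd.conf.lua: watch the web root and trigger a csync2 run on changes
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}

sync {
    source       = "/var/www",
    delay        = 5,                      -- batch events for a few seconds
    maxProcesses = 1,                      -- never run two syncs in parallel
    onCreate     = "/usr/sbin/csync2 -x",
    onModify     = "/usr/sbin/csync2 -x",
    onDelete     = "/usr/sbin/csync2 -x",
    onMove       = "/usr/sbin/csync2 -x",
    onStartup    = "/usr/sbin/csync2 -x",
}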
Several of our clients use the setup that I described here in production: http://www.howtoforge.com/installin...tabase-cluster-on-debian-6.0-with-ispconfig-3

I did not encounter such long sync times with unison as you describe; I will check whether csync2 is faster on our systems as well. MySQL replication is always a problem, as the sync that is built into MySQL is not very fault tolerant. But the other options that I tested have too many limitations, e.g. that table modifications are not possible after a database table has been created, so updates of CMS systems that modify their database will fail, and disallowing customers to update their CMS is not an option for most ISPs.
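For reference, the master-master replication that the tutorials use boils down to something like this on each node (a sketch with the usual example values; node 2 gets server-id = 2 and auto_increment_offset = 2). The auto_increment settings only prevent the two masters from generating colliding primary keys; they do nothing for the fault tolerance problems mentioned above.

Code:
# /etc/mysql/conf.d/replication.cnf on node 1 (sketch)
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
auto_increment_increment = 2
auto_increment_offset    = 1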
I run ISPConfig on XenServer 6.0 servers with Advanced licenses for HA. The iSCSI storage underneath it gets backed up daily (LVM snapshots -> Xen images) for fast crash recovery. If you don't want virtualisation, you might want to check out DRBD + Corosync (Pacemaker).
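The snapshot-to-image step of such a backup is essentially the following (a rough sketch; volume group, LV names and sizes are made up):

Code:
# nightly crash-recovery backup sketch for one VM disk
lvcreate --size 10G --snapshot --name vm1-snap /dev/vg0/vm1-disk
dd if=/dev/vg0/vm1-snap bs=4M | gzip > /backup/vm1-$(date +%F).img.gz
lvremove -f /dev/vg0/vm1-snap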
I just wanted to post an update on this whole HA cluster thing.

If you cluster cloud servers in a mirrored array, you're wasting your time and unnecessarily complicating your life. Mirrored setups are great to cover hardware failures, and cloud servers generally remove this risk altogether. Mirrored setups do NOTHING for bad coding and load situations. If a server dies because of Apache or MySQL running amok due to high load, this load will be sent to the mirror. If the mirror has the same code and config, it will obviously die immediately, as it now has twice the load of the server that went down in the first place.

If your cloud takes a while to re-provision a server from failed hardware to a new node, then an HA cluster may be worth looking into. Otherwise, I say no. In our 12 years of selling hosting on dedicated machinery, we've had 2 hardware failures that impacted customers. We've had dozens of outages due to load and bad coding. (Of course customers don't care much about the quality of their code, and don't want to pay anyone to fix it; they just expect hosting to work flawlessly all the time.)

We're now looking at a new configuration which has a cluster of single-service servers (multiple web, mail, and DB servers) and a cache front-end in an HA configuration. This way, if a server fails for any reason, the static parts of the websites on that server keep working.
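The "static parts keep working" bit can be done with a caching reverse proxy that is allowed to serve stale content when the backend is gone. With nginx, for example, that is roughly the following (a sketch; cache path, zone name and backend address are placeholders):

Code:
# nginx cache front-end: serve from cache, fall back to stale entries
# when the backend web node errors out or times out
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=sites:50m inactive=7d;

server {
    listen 80;

    location / {
        proxy_pass            http://10.0.0.10:80;   # backend web node
        proxy_set_header      Host $host;
        proxy_cache           sites;
        proxy_cache_valid     200 10m;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }
}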
I agree with you on some points; users always think the hardware just runs their code smoothly. Though of course it's known that HA (a mirrored setup) does not compensate for incompetent customers and load situations. But that wasn't the case here, we were talking about a simple HA setup (as you described in your initial post).

Anyway, you might want to set up the following, which scales very well (some rough config sketches at the end of this post):

- 2 load balancers: keepalived + ipvs (decent machines can handle ~300k packets per second easily; splitting the load makes ~600k)
- Some web servers behind them (Apache and nginx). Let nginx serve static files directly and forward dynamic content to Apache.
- MySQL servers running a master/multi-slave setup (Corosync works very well for this, to auto-switch the master in case of failure)
- Mail servers can be put behind the load balancer as well, which works very well. We use it to send out our mailings much faster, since (e.g.) Hotmail only accepts X connections per Y seconds from the same IP. This method greatly speeds up the mailing process.
- Add 2 memcached servers (with the repcached patch) to sync data and use them for PHP session data.

ISPConfig is a very nice (FREE!) product, but I'd not suggest using it in higher-load environments. The main reason is that everything runs on the same box, and what is (imho) a design flaw: using MySQL for Postfix. If your box gets spammed (which will happen when you host a lot of domains), even with RBLs set up, it creates a DB connection per mail, which ALSO has an influence on the websites that use the same MySQL. Increasing max connections helps (sometimes) but is not a fix, it's a workaround, which is bad imho. It's perfect for (e.g.) your company's email when you don't want to be bothered with managing it. Someone else can manage the accounts easily, and it lets me focus on other tasks at hand.

Mark
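To make the load-balancer and nginx parts concrete, here are rough sketches (all addresses, interface names and document roots are placeholders, adjust to your own network):

Code:
# /etc/keepalived/keepalived.conf on balancer 1; balancer 2 runs state BACKUP
# with a lower priority, so the virtual IP floats over on failure
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.0.2.10        # the floating service IP
    }
}

Code:
# nginx in front of Apache: static files from disk, everything else proxied
server {
    listen 80;
    root /var/www/example.com/web;

    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        expires 30d;
    }

    location / {
        proxy_pass       http://127.0.0.1:8080;   # Apache on the backend port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

And for the shared PHP sessions on the two memcached/repcached boxes:

Code:
; php.ini sketch (requires the memcache extension; addresses are placeholders)
session.save_handler = memcache
session.save_path    = "tcp://10.0.0.21:11211, tcp://10.0.0.22:11211"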
That's not the case. All larger setups use ISPConfig split over several servers. And that's not the case either, as the MySQL server for the mail node is separated from the client MySQL servers, so no website database is affected by this. It seems as if you have not made yourself familiar with large-scale ISPConfig setups yet: MySQL is no bottleneck on the mail node (it is always faster than the number of mails that Postfix can handle in the same amount of time), and the web nodes are not affected in any way by the mail server if you set them up correctly.
You're completely right on all fronts there, Till, but everything in this thread becomes obsolete when you split everything up over multiple machines. I was aiming more at ISPConfig out of the box (which, afaik, is how most users run it). Don't get me wrong, I really like ISPConfig because of its simplicity; for users with a little knowledge of web/mail/db systems it takes a lot of work out of my hands.

I don't have a large-scale ISPConfig setup, that is correct. Why not? I don't know. I currently run 7 boxes with it, of which 2 need to be separate, so I could possibly add 5 of them into one single group. I can't figure out a reason why I haven't. I think it's because for our own load-balanced environment I've been using my own piece of software to manage practically everything (rack layout, switch connections, power strip management, load-balanced groups, Cacti/Nagios integration, server configuration, etc.) within one web application. Which only leaves mail, and I have ISPConfig for that.
If you already have software that manages your cluster, then it's a good idea to keep that, of course. The benefit of the ISPConfig system architecture is that splitting of services and mirroring can be combined while having just one control panel.

Here is an example for a mail service: the MySQL database on the mail systems contains only the login details and some other mail-related configuration. This database information is exchanged between the servers by ISPConfig internally, so there is no need for MySQL replication in this case. ISPConfig supports mirroring with as many nodes as you like, so you can combine 2, 5 or even 100 servers into one clustered mail system. Just the folder /var/vmail, which holds the mailbox files, has to be accessible from all servers; this can be done with a cluster filesystem, csync2, a shared NFS drive, DRBD or a similar solution. In front of these servers you need some kind of load balancer that redirects the IMAP and POP3 requests to the servers. For SMTP, a multi-MX-record setup might be enough, or you add a load balancer for that as well.
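As a concrete sketch of the shared mail store and the multi-MX part (storage host, domain and addresses are made up):

Code:
# /etc/fstab line on every mail node: mount the shared mail store via NFS
storage1.example.com:/export/vmail  /var/vmail  nfs  rw,hard,intr  0  0

Code:
; DNS zone excerpt: equal-priority MX records spread inbound SMTP over the nodes
example.com.    IN  MX  10  mail1.example.com.
example.com.    IN  MX  10  mail2.example.com.
mail1           IN  A       192.0.2.21
mail2           IN  A       192.0.2.22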
Ok, hold the phone there... So I can set up 5 ISPConfig boxes in a multiserver setup, mount /var/vmail from shared storage, let the 5 boxes mirror their configuration, put a load balancer in front and add the 5 boxes to your MX records. The power of 5 boxes as a virtual single entry point for mail? That's nice!

I still need to install ISPConfig as a whole on all servers, I assume? I'd love to have 1 box with the web admin and have the others run as little as possible, a basic install just for the remote control of the box... I need to dive into this sometime.

-- Yeah, I run my own cluster software, written in php/bash/perl/etc. I have "add self healing" on my to-do list, which would be so awesome. (Yes, VM solutions have that already, but I'd hate to spend 300x 700 EUR on Xen Advanced licenses.) The idea is to create some sort of AI that knows what's running, what's needed, what's available and what's expected by us, and let "it" take care of everything. But we're drifting off topic.

/me opening the ispc3 manual *search multiserver setup*
Yes. Yes, that's the way it should be installed. You run only the server part of ISPConfig on the mail nodes, not the interface part (installation type "expert", then select only mail as the installation type and deselect installing the interface). The interface part can be on another ISPConfig server which has Apache installed. For larger setups I would use a separate interface server, e.g. in a VM which runs just the ISPConfig interface but does not host any services like web, mail or DNS for customers. This makes the setup more secure as well.

In such a multiserver setup, the slave nodes poll the master server for config updates. They are not affected in their normal operation when the master is down. If you shut down a slave server for maintenance, it will catch up with all config changes that have been made on the other slaves when you switch it on again; ISPConfig stores a backlog of all changes for 30 days on the master. If you fear that the master is a single point of failure (as customers would not be able to change any email settings while it is down, even though the slaves are not affected), then install the master node as a mirrored server or use MySQL mirroring on the master database and let the slaves connect to this mirrored MySQL DB.
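For reference, adding such a mail-only slave boils down to running the ISPConfig installer in expert mode on the new node. A rough sketch from memory (the download URL and the exact prompts depend on the ISPConfig version):

Code:
# on the new mail node, after the base mail packages are installed
cd /tmp
wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz
tar xzf ISPConfig-3-stable.tar.gz
cd ispconfig3_install/install
php -q install.php
# in the dialog: choose "expert" mode, answer "yes" when asked whether this
# server should join an existing ISPConfig multiserver setup, enable only the
# mail service and answer "no" to installing the web interface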
Awesome, sounds great! Gosh, why didn't I know about this before :S Thanks for the info, will look into this.