Hi all, I'm very new to ISPConfig and its inner workings, so I'd like to ask the more experienced users about a setup I want to replicate in ISPConfig.

I have been running a hand-made active-active mailserver failover setup for some time now, using shared storage and some help from the firewall. The setup has 2+ back-office servers running Postfix and Courier IMAP/POP3. The servers use a shared mailbox storage (NFS/GFS/whatever) and can all access the same storage at the same time. The firewall redirects incoming connections to one of the back-office servers using a round-robin algorithm; if one of the servers fails, it is kicked out of the pool automatically until restored by the admin. If no firewall is available, two servers can share two public IPs with simple HA tools and migrate them as necessary (the load balancing then comes from DNS).

For this setup to work in ISPConfig, I guess the 2+ mailservers need to access the same MySQL database, or there could be a master server plus replicated servers via the MySQL master/slave mechanism (this could be easy). I could just install two mailservers in a multiserver setup and then tweak the Postfix/Courier config files by hand, but... I would greatly appreciate some pointers about where to look/tweak in the ISPConfig installation so that such a setup would not be broken by a future ISPConfig update.

Thank you, ispcomm
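P.S. A rough sketch of the round-robin redirect on the firewall, just to illustrate the idea (the IPs, the port and the use of the iptables "statistic" match are examples, not my exact rules):

Code:
# send every 2nd new SMTP connection to the first backend,
# the rest fall through to the second rule
iptables -t nat -A PREROUTING -p tcp --dport 25 -m state --state NEW \
  -m statistic --mode nth --every 2 --packet 0 \
  -j DNAT --to-destination 192.0.2.11:25
iptables -t nat -A PREROUTING -p tcp --dport 25 -m state --state NEW \
  -j DNAT --to-destination 192.0.2.12:25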
The configs would automatically update to read from their own database (dbispconfig), which the ISPConfig master replicates to the slaves. How do you currently have your users set up / stored? Marty
That's not the case. Each server has its own MySQL server for redundancy, and ISPConfig replicates the MySQL records automatically. Use the email setup part of this tutorial: http://www.howtoforge.com/installin...tabase-cluster-on-debian-5.0-with-ispconfig-3
I'm running a heavily modded ispcp installation at the moment. I have a master server holding all the configs (the ispcp daemon writes the Postfix/Courier data to db3 files, which is better than proxy:mysql btw). When the master server is updated, a daemon is kicked off to rewrite the config files for Postfix/Courier etc., and a mod on the master server replicates that data to the slave servers. NFS is used for small clusters; bigger ones have a GFS setup over iSCSI. ispcomm
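Edit: to illustrate, the rebuild step the daemon performs boils down to something like this (the map file names are made up, not the real ispcp paths):

Code:
# regenerate the .db files from the freshly written plain-text maps
postmap hash:/etc/postfix/virtual_mailbox_maps
postmap hash:/etc/postfix/virtual_alias_maps
# not strictly required for hash maps, but harmless
postfix reload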
Thank you for the pointer, I must have overlooked it.

On a side note, I would like to exchange some opinions with you regarding MySQL replication and holding accounts in a real-time database. I was wondering whether it wouldn't be a much simpler setup to just use db3 files for the accounts (Postfix/Courier) and rehash them when necessary on all involved servers. Currently all accounts are in MySQL and Postfix reads directly from there. I have some concerns about MySQL availability, especially on heavily loaded servers: on a box running web+mail+dns, a failure of the MySQL server brings the whole system down. If db3 files are used, the mail/pop/imap setup would be much more stable and perhaps more efficient; Postfix picks up a rebuilt external hash table automatically.

On the other hand, I know MySQL toasters can be very, very reliable. I have one with some 1000 accounts on it that was installed in the woody era and has since been upgraded several times: it's rock solid, but then again it's a dedicated machine and a pure toaster, with no Joomla sites to hog the MySQL db.

Not trying to steer development here... just interested in your opinion. ispcomm
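P.S. To make the contrast concrete, what I mean is basically switching the lookup type in main.cf, roughly like this (the file names are placeholders):

Code:
# current style: postfix queries mysql on every lookup
postconf -e 'virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual_mailboxes.cf'

# what I'm suggesting: a periodically rebuilt local hash file instead
postconf -e 'virtual_mailbox_maps = hash:/etc/postfix/virtual_mailboxes'
postmap hash:/etc/postfix/virtual_mailboxes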
ISPConfig is a multiserver control panel, so in a larger setup you normally run one or more dedicated mailservers (or virtual machines) in your ISPConfig cluster. A few thousand email accounts are no problem for the current setup on a single server / VM, and you can run as many mailservers in a setup as you need. We had a hash-based setup in ISPConfig 2 but found that the MySQL setup scales better, and the account information can also be accessed directly by the POP3 / IMAP daemons. As ISPConfig replicates the MySQL contents based on events and every node in the cluster has its own MySQL database, no other node is affected when the master or any other node goes down. When a node has been offline, it resyncs automatically and fetches all config changes from the master. I have been running a webserver with the current setup for over 2 years now without a single problem caused by MySQL. If you prefer to use hash files, you can write a plugin for it that simply creates a new hash file.
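To give an idea: each node queries only its own local MySQL server, so the Postfix MySQL map on every node points at 127.0.0.1. A simplified example of such a map file (names and query are simplified, not copied verbatim from the installer):

Code:
# /etc/postfix/mysql-virtual_mailboxes.cf (simplified example)
user = ispconfig
password = secret
hosts = 127.0.0.1
dbname = dbispconfig
query = SELECT maildir FROM mail_user WHERE email = '%s'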
Perhaps I have developed a bad/lazy habit of clustering full servers (with a complete installation of web+mail+db) in the belief that it's easier to increase the average utilization of resources. It makes it easy to use spare CPU/disk cycles when required by a temporary load on mail, db or web. A problem with this setup is that the occasional bad query to the db has an impact on mail as well (per-user resource limiting is not so easy in MySQL). I see how one can "easily" achieve a higher resource density with dedicated VZ containers, but my setup dates from many years ago, when freevsd was pioneering virtual servers and was probably the only option (I had a mirrored freevsd over DRBD with linux-ha at that time). The VZ kernel was an obscure add-on made by a Russian guy, and vserver was another obscure private patch. Currently I'm using auto-migrating Xen containers over shared storage, but it's nice to see that GlusterFS (as in your tutorial) seems to be stable enough for replicating data. The project I had in mind for ISPConfig is quickly becoming a reason to rethink my whole setup (ouch!). Thank you for sharing ideas though. ispcomm.
Take a look at OpenVZ. I have been using OpenVZ virtual machines to separate services for several years. A benefit is that you can do easy backups of every VM with vzdump, and in case a VM needs more resources, you simply migrate it to another server. OpenVZ is developed for ISPs by SWsoft, the same company that develops the most used commercial control panel, Plesk (OpenVZ is the open source version of the commercial Virtuozzo software), and it has nearly no virtualisation overhead as it uses the same Linux kernel for all virtual machines. Ready-made OpenVZ kernels are available in the official Debian repositories, so no patching is required.

I would prefer OpenVZ over Xen for the following reasons:

- It is lighter / has less overhead.
- It stores all files of a VM in a folder and not in an image file or partition like Xen. So extending the size of a VM is just editing the quota limit in the config file, no need to change partitions. Also, the VM always has the size of the data it contains, and you don't lose harddisk space like you do with Xen images.

We used Xen for the HowtoForge servers for several years and then switched to OpenVZ some time ago. We found that the servers feel faster now (better loading times of pages and images) and we have less trouble with the virtual machines than we had with Xen.

In ISPConfig 3 you can also use dedicated MySQL servers for websites, so another solution for this can be to run a dedicated MySQL server for all website databases. But I would prefer the OpenVZ approach.
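For example, resizing, backing up and moving a container comes down to a few commands (the container ID, sizes and hostname are of course just examples):

Code:
# grow the diskspace quota (soft:hard limit) of container 101, no repartitioning needed
vzctl set 101 --diskspace 20G:22G --save

# dump the whole container to a backup file
vzdump 101

# move it to another hardware node
vzmigrate node2.example.com 101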
Yes, OpenVZ has matured a lot since I last looked at it. At that time I chose vserver over OpenVZ as it was the more mature option, and I didn't need all the bean-counter stuff since I was only running my own servers. I need to catch up on migration and some of the inner workings of VZ, but overall it makes much more sense than Xen (well... I back up Xen machines via LVM snapshots and that works quite well on running servers too). Thanks again. ispcomm
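P.S. For completeness, the Xen backup I mentioned is nothing fancy, just an LVM snapshot of the running guest's volume (the volume and mount names are mine, and this assumes the logical volume holds a filesystem directly rather than a partitioned disk image):

Code:
# snapshot the running guest's volume, archive it, then drop the snapshot
lvcreate --snapshot --size 5G --name vm1-snap /dev/vg0/vm1-disk
mount -o ro /dev/vg0/vm1-snap /mnt/snap
tar -czf /backup/vm1.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/vm1-snap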