HA with Shared Storage

Discussion in 'Installation/Configuration' started by BobGeorge, Jul 14, 2017.

  1. BobGeorge

    BobGeorge Member

    I've got a multi-node server and a shared storage server.

    My plan is to have HAProxy on the frontend to balance the load between all the nodes and then each node has the storage server mounted as an NFS share and that's where all the data (websites and emails) and a shared configuration will actually be stored.

    The idea being that HAProxy can choose any of these nodes to deal with an incoming request and, via the NFS share to the storage server, they're all working with the exact same data and configuration. So any node can serve up any of the websites or emails stored on the storage server.
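
    For illustration, the HAProxy side of that plan would look roughly like this - a minimal sketch, with the node addresses being placeholders rather than my real ones:

    Code:
    # /etc/haproxy/haproxy.cfg (sketch; node addresses are placeholders)
    frontend www
        bind *:80
        mode http
        default_backend web_nodes

    backend web_nodes
        mode http
        balance roundrobin
        server node3 10.0.0.13:80 check
        server node4 10.0.0.14:80 check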

    As an example, I've got an LDAP server running on the storage server - to synchronise the users and groups across the network - and then mounted a shared "/home" directory on the storage server on the local "/home", so that you can login as any user on any node and will have access to the same shared home directory.

    As another example, I've got a shared "/etc" directory on the storage server and, within it, a "hosts" file that lists out all the IP addresses and host names for the network. On the local "/etc" of each node, "hosts" is symlinked to point to the file in the shared "/etc" directory, so that they're all working with the same shared "hosts" file and, for example, if I were to add more nodes to the network, I can edit the shared "hosts" file from any node and all the nodes would see it.
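
    Concretely, on each node that amounts to something like this (with "storage1" standing in for the storage server's hostname, and the export and mount paths being examples):

    Code:
    # /etc/fstab entry on each node: mount the shared home directories over NFS
    storage1:/export/home  /home  nfs  defaults,_netdev  0  0

    # point the local hosts file at the copy on the shared /etc
    # (assuming the shared /etc is mounted at /mnt/shared-etc)
    ln -sf /mnt/shared-etc/hosts /etc/hosts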

    I give these examples - which I've already got up and running - to show you how I'd like it to work. It doesn't matter which node HAProxy chooses to deal with the request, they all share - via the NFS mount of the shared storage - the exact same data and configuration.

    (I've seen examples where something like unison or rsync is used to sync the data between servers but, rather, I'm separating the storage off into its own array of servers - a SAN - and the nodes share it over the network instead.)

    What I need, though, is a web panel - such as ISPConfig - that can operate in this kind of environment, as we'll be having web designers - "resellers" as ISPConfig would see it - using the system and they need a nice and simple web panel to control things (and, on our end, the sales team could also use it to add new clients and resellers, and keep track of things with the billing and invoicing module).

    You get the gist. I've got to put a nice public face onto the system for others to use it, so how would I integrate ISPConfig into all this to do that?

    If I installed ISPConfig on each node, then could it be made to use a shared configuration via the storage server?
     
  2. BobGeorge

    BobGeorge Member

    Okay, so ISPConfig requires MySQL installed, because it uses a database to store its own configuration and to synchronise with other servers in the multi-server setup.

    But is it possible, instead, to have a single MySQL server - in my case, on the storage server(s), so that the database files are stored on the RAID array and implicitly included in our backup procedures (which involve backing up this RAID array over the network to a completely physically separate backup server elsewhere)? All the other nodes would then be clients, obtaining their shared configuration from this "master" and thus being implicitly synchronised, as they'd share a network-wide configuration.

    The individual nodes are just "processing grunt", so to speak. They have little storage themselves (SAS disks). They are expendable. If one dies, then the rest carry on. If we want more processing grunt, then we slide a new set of nodes into our rack and they join the array.

    Any and all data - besides any local transient runtime stuff - needs to be on the storage servers. They'll get backed up periodically. They have the RAID arrays for redundancy. And they have all the storage capacity. (And they also have some nice dual 10G NICs, by the way, so bandwidth won't be a problem, as this exceeds the maximum strain that all the current processing nodes could put on it).

    This is the architecture that we need, and it's just a question of learning how best to make ISPConfig work well within that architecture.
     
  3. BobGeorge

    BobGeorge Member

    No, on further thought, it's fine for each node to have its own MySQL configuration database locally and synchronise, so long as the storage server is the "master" database server for everything else. If the storage server also has ISPConfig installed itself, then it'll be synchronised with the other nodes too, and that's how - by placing the storage server's database files onto the RAID array - I can also keep the ISPConfig configuration backed up.

    It's a bit needlessly "round the houses" in that they're all maintaining their own local copies of the same thing and then explicitly synchronising with each other to keep it the same. Whereas, if it could just be stored on the network share, there'd be only one copy and they'd be implicitly synchronised, as they'd literally all be running off the same configuration.

    But that would only work if ISPConfig has been purposefully written to support it - the multiple update problem - and there's no sense fighting too hard against how it works, just for the sake of a few extra copies and a slight delay for the synchronisation.

    The websites and databases should be fine. Now I need to investigate how possible it is to make Postfix / Dovecot happy in this kind of environment.
     
  4. till

    till Super Moderator Staff Member ISPConfig Developer

    Your setup introduces a single point of failure, is less stable and way slower than the setup that ISPConfig uses, and that's why ISPConfig does not handle its configuration like this. And your setup does not scale well.

    First, MySQL will not work properly when several daemons access the same files through a network share. Even if that worked, all nodes would fail when a simple file corruption happens, so you introduce a single point of failure here. And a MySQL database that uses a network filesystem is way slower than a local one.

    Besides that, ISPConfig allows you to take nodes down for maintenance for up to 30 days, and they will pick up all changes in the right order when they are back. If you had a single database that all nodes connect to, then a node that comes up again would see newer data in the database while its local config and files do not match that data, so e.g. an incoming mail would fail because the database knows the address already while the filesystem config has not been updated yet.

    And finally, there is also a security aspect: ISPConfig limits the access to the data of the other nodes by giving the ispcsrv users of the nodes limited permissions; only the master has full write access to all tables. With your config, each node would be able to take over the master and all other nodes easily by sharing the master database.

    In a cluster, a shared storage makes sense for website and email data, but not for MySQL, and that's why ISPConfig handles it like this.
     
  5. BobGeorge

    BobGeorge Member

    Yes, I can see what you're saying. And with my "on further thought", I was myself beginning to realise that the way ISPConfig does things is fine and makes sense.

    Although, really, my concern is that - especially with something like, say, Wordpress - databases are part of the website data. I just want to ensure that the "website data" on the storage server fully includes things like Wordpress databases too, because as much, if not more, of the "website data" is actually inside the database rather than in the HTML / PHP / CSS files.

    I guess, instead, then, we'd be looking at some sort of backup process where the database data is dumped onto the storage array, alongside the "website data", so that we're storing - and backing up - the whole package.

    But I guess what's putting me off is that if the MySQL database backup is a separate process to storing the other "website data", then can't it potentially get out of sync?

    Really, I think the problem is that I'm conscious of the need to ensure that everything that needs backing up will be backed up, and I'd already thought of putting the website and email data on the storage server, so I was just blindly repeating that idea for the databases as well. But I can see that it's different and just doesn't fit that model, so it must be dealt with differently.

    How, though, do I ensure that the databases also end up on the storage server with the other website data, because it's also "website data" of another kind and is needed to fully restore from a backup?
     
  6. till

    till Super Moderator Staff Member ISPConfig Developer

    Mount the /var/backup folder of the nodes from your central storage system. ISPConfig places the website and mysql backup of the sites there when you enabled backups for that site.
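
    E.g. an fstab entry like this on each node (the storage hostname and export path are just examples):

    Code:
    # /etc/fstab on each node: mount the central backup share on /var/backup
    storage1:/export/backup  /var/backup  nfs  defaults,_netdev  0  0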
     
  7. BobGeorge

    BobGeorge Member

    Yes, but the thing is that the storage server isn't really backup. It's primary storage that, itself, gets backed up elsewhere.

    To explain, the storage server is very much dedicated for the purpose of storage. It has a RAID array with 24 slots. Even with mirroring halving that, if you were to fill it up with, say, 10TB drives then that's 120TB or an eighth-ish of a petabyte. We've got dual 10G NICs - as well as the dual 1G NICs that are part of the motherboard - adding up to a bonded 20Gbps (and I've got cachefilesd on the NFS share to cache things and further speed it up). There's also 24GB of RAM installed, so it can RAM cache quite a bit too.

    But the "processing" nodes don't have much storage at all. They've got fast but small SAS drives. Really, they're just OS + applications drives, and whatever's spare acts as a disk cache (so the NFS share will only be doing the full network round-trip on a cache miss, otherwise it's just sending checksums to verify the cached copy remains valid and then uses the local cached copy - so the more local disk cache space, the less often it needs to do a full-blown network round-trip to access files).

    Separating off the storage to the storage server isn't just a desire on my part to centralise the data, but an integral part of how the overall system should operate - and, eventually, once those drives start filling up, how the system basically has to operate, as the "processing" nodes just don't have the local storage capacity to cope. And I'd rather not use it up, because the more spare disk space there is available for caching, the less often we have to invoke a full-blown network round-trip.

    Eventually, the processing nodes simply won't be physically able to store all the websites locally. The other advantage of doing things this way - which effectively decouples the processing and storage - is that, if more storage is needed, then we slide another storage server into the rack and, using something like GlusterFS, create what looks to be one massive storage drive (at which point, I guess, it starts qualifying as a SAN). If more processing power is needed, then we slide in another multi-node server to give us another 32 cores of grunt. It's a setup designed for expansion and, moreover, we can expand processing and storage somewhat separately, to react to what's actually needed.
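
    With GlusterFS, I'd imagine the expansion looking something like this (hostnames and brick paths are placeholders):

    Code:
    # create a replicated volume across the first two storage servers
    gluster volume create bigstore replica 2 storage1:/bricks/b1 storage2:/bricks/b1
    gluster volume start bigstore

    # later, grow it by sliding in another pair of storage servers
    gluster volume add-brick bigstore storage3:/bricks/b1 storage4:/bricks/b1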

    The storage server isn't backup, per se, and will, itself, be backed up, over the network, to a backup server elsewhere (like, in a different physical location, on a different network and power supply and so forth).

    We looked at Virtualmin and cPanel but the lack of multi-server support knocked that on the head. I hoped that ISPConfig would prove better, as it does at least inherently comprehend the notion of a multi-server setup and could perhaps be tailored to work in this manner. And the boss just loves the idea of the "billing and invoicing" module, so he's very enthusiastic for this.

    Indeed, to explain, we do already have a single server that's serving websites right now using Virtualmin. But we need to expand beyond that, and so I'm setting this up to take over with future expansion, high availability and so forth in mind.

    Yes, I know, hardly ideal. But this is where I find myself, due to circumstances.
     
  8. till

    till Super Moderator Staff Member ISPConfig Developer

    You asked how to back up MySQL and websites together, and I explained how this is done easily within ISPConfig. Whether you mount that folder from your fast storage system or use a separate backup storage is up to you. And if you don't want to use ISPConfig's internal backup, then you can of course back up MySQL outside of ISPConfig. ISPConfig is used in many larger setups; e.g. I know an installation I worked on that uses it in a 5-server cluster with a shared /var/vmail filesystem as a mail cluster with more than 16 thousand email accounts, and other providers have even larger multi-server systems with dozens of nodes as well. ISPConfig is really flexible in this regard, and yes, you can even run as many nodes with a single MySQL instance as you like; I personally just wouldn't do that, for the reasons I explained. But all these systems use separate MySQL instances on each node for high availability, fault tolerance, and scalability.
     
  9. BobGeorge

    BobGeorge Member

    I guess I'm just not understanding how to correctly deploy ISPConfig on my system to achieve the desired results and have confused myself somewhat.

    I've installed ISPConfig on all the servers, put the "master" server on the first node, and then added the rest of the nodes to this, so I see all the servers on the "System" page. Each node is running its own MySQL instance.

    This first node accepts incoming Internet requests and will be running HAProxy to distribute the load amongst the processing nodes. It also runs a DNS server (DNS won't go through HAProxy; this first node will deal with that itself, but the rest will be forwarded on to the other nodes).

    The second node is set up similarly. It's the second name server. In ISPConfig, I've selected it to be a mirror of the first node. It's the redundancy for the first node. It'll also run HAProxy and, via heartbeat monitoring, will take over from the first node, if the first node goes down.

    (I'm thinking of these first two nodes as being the "front end". Their job is to deal with the outside Internet, so responding to DNS queries and accepting / filtering / load-balancing incoming requests. The second node is just the first node's redundancy, in case it ever goes down.)
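
    Something like keepalived would be one way to implement that heartbeat takeover - a rough sketch, with the interface name and virtual IP as placeholders:

    Code:
    # /etc/keepalived/keepalived.conf on the first node
    # (the second node gets "state BACKUP" and a lower priority)
    vrrp_instance frontend {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            203.0.113.10
        }
    }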

    The rest of the nodes are running web servers, mail servers and database servers - basically, everything but the DNS, which the first two nodes already deal with. HAProxy deals incoming requests out to these nodes, and so I'd like it that any of these nodes can deal with any website or any email address.

    To this end, the shared storage server is mounted on each, and I intend to configure it so that the websites and emails are stored on the NFS share. That is, the "DocumentRoot" for the websites will be on "/Web", which is actually an NFS mount of a directory on the storage server. I'll set up the emails to work in the same way.

    Thus, whenever any of the nodes accesses a website or an email, it's actually getting it from the storage server.
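
    In fstab terms, I mean something like this on every processing node (the hostname and export paths are placeholders; /var/vmail being where the mail would live):

    Code:
    # /etc/fstab on each processing node
    storage1:/export/web    /Web        nfs  defaults,_netdev  0  0
    storage1:/export/vmail  /var/vmail  nfs  defaults,_netdev  0  0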

    I don't know how to get the databases to also work in this way. To be clear, I'm only concerned here with the website databases. That is, if someone's running a Wordpress site, the corresponding databases that accompany the website. ISPConfig's own database is not relevant here and each node is running a MySQL instance to handle that locally. But I'd want it so that if someone did create a Wordpress site, for example, that the corresponding database for that website goes hand-in-hand with the website files on the storage server.

    When creating a website in ISPConfig, there's a field asking which server to create it on. This should be irrelevant. You create the website (or email) and it should be that any node can deal with this.

    I'm thinking that I'll have something like "admin.domain.tld" redirect to "domain.tld:8080" to access the ISPConfig interface. You see, there will be resellers - web designers - accessing this system, so it needs to have a nice public face for them. To that end, the field about which server to create their website on shouldn't be there. As far as they're concerned, they just create their website or their email account and the system "just works" for them. I guess I can just hack the interface code to make this field hidden.

    Similarly, the IPv4 and IPv6 address fields shouldn't be there. If they're using our hosting service, then these will always be our static IP address.

    So I also foresee that I'm going to have to do a bit of hacking of the ISPConfig interface to hide fields that the public - the resellers and clients - don't need to see. Well, I'll be creating a theme anyway, I'm thinking, as we'd like to brand it - change the logo, paint it in the company colours and that sort of thing - so I guess I can create the theme to just not show some of the "backend details" that our customers don't need to know about.

    This was my overall plan. I'm sure I've got something wrong and still don't really know how to handle the databases - that is, website databases, as I'm perfectly happy for each node to run its own MySQL instance for ISPConfig's own database - in a similar way to the websites and email. Thanks for being patient with my ignorance.
     
  10. till

    till Super Moderator Staff Member ISPConfig Developer

    When all nodes shall use the shared storage for the website MySQL databases, why not create one central MySQL database-only node in ISPConfig which mounts /var/lib/mysql from this shared storage? Then disable 'mysql' under Services for all other nodes, so this server is the only database server to choose from within ISPConfig.
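
    E.g. on that one database node only (the export path is just an example):

    Code:
    # /etc/fstab on the dedicated database node: MySQL datadir on the shared storage
    storage1:/export/mysql  /var/lib/mysql  nfs  defaults,_netdev  0  0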
     
  11. BobGeorge

    BobGeorge Member

    Thanks. I'll try that out and see what happens.
     
  12. BobGeorge

    BobGeorge Member

    One more thing.

    I've got the first node as the "master" server, running the ISPConfig interface, and then the second node to be its redundancy. I've configured the second node to be a mirror of the first node.

    Is there anything else I need to set up to have the second node take over as the master server should the first node go down? Or, acting as a mirror of the master server, would it do that already?
     
  13. Gwnet

    Gwnet New Member

    @BobGeorge

    I'm aware this thread is old, but as I've tirelessly looked around the internet for a solution to create this exact same setup, I was wondering if you could shed any light on it. I've researched so many platforms, and ISPConfig appears to be the best foundation to build on.

    If any one else could direct me to a how-to, it would be greatly appreciated, as all searches seem to come up short.

    Key elements, as read from above:
    - Multiple nodes serving up the same websites.
    - All nodes share storage for web and email, so any node can respond to requests.
    - Removing/hiding elements which the end users don't need to see, e.g. which server to create a website on (this is under the assumption ISPConfig is being used to create this setup).
    - Mirroring the master node for redundancy. Essentially, there should be no single points of failure.

    I know it seems like a big ask, but I am really surprised that I haven't been able to find a solution to create this setup. If you know a better platform, I'm all ears.

    Thank you.
     
  14. Taleman

    Taleman Well-Known Member HowtoForge Supporter

    It would have been better to create a new thread and put a link to this discussion, rather than revive a four-year-old thread.
    The Documentation page https://www.ispconfig.org/documentation/ has info on "ISPConfig 3 in a Multiserver- and Cluster environment", which should answer some of your questions. This forum also has discussions on redundant setups - two mirrored web servers, for example.
     
  15. Gwnet

    Gwnet New Member

    Thank you for taking the time to reply. I will go back through their documentation and see what I can find.

    As for the new thread suggestion - excuse the newbie mistake. I am used to reading books, which you can reopen at any time.
     
