Multiserver Setup

Discussion in 'Installation/Configuration' started by The_Cook, Nov 7, 2024.

  1. The_Cook

    The_Cook New Member

    Hi Guys,
    I'm currently setting up a master-master infrastructure: 1 x panel, 2 x DNS, and load-balanced 2 x mail and 2 x web servers. The panel is a single instance and DNS doesn't need load balancing or synchronising (as far as I am aware).
    I know that ISPConfig is just a control panel that sets up / manages the services that run on the underlying servers, but I have a couple of questions.
    I have set up file replication with unison for 'www' on web1 and web2 and use dovecot replication between mx1 and mx2.
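    For reference, the unison side is roughly this (a simplified sketch; the hostnames, paths and schedule are just examples):
    Code:
    # Example unison profile for two-way /var/www sync between web1 and web2
    cat > /root/.unison/www.prf <<'EOF'
    root = /var/www
    root = ssh://web2//var/www
    batch = true
    prefer = newer
    EOF
    # run periodically from cron on web1, e.g.:
    # */5 * * * * /usr/bin/unison www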
    The confusing parts:
    1. Database replication automatically works between web1 and web2 for user-created databases within the ISPConfig panel (web1 is the default DB server and web2 is a mirror of web1), but the tables added to those user databases in phpMyAdmin don't sync. Do I set up DB replication just for these tables?
    2. Do I just replicate the SSL certificate directories with unison between web1 & web2 and mx1 & mx2 so that web2 and mx2 have a copy of the live certificates? I am presuming that web1 and mx1 will renew the certs automatically and then the replication will just copy over the new certs.
    3. What happens if web1 is taken offline, is never brought back online, and I replace it with web3? Do I turn off mirroring for web2 (making it the main web server) and set up web3 to mirror web2? What would then happen with cert renewals?
    4. Do I need to copy/sync anything else? Are there any other issues with this setup or something I'm not considering?
    I'm thinking someone else has already set this up, or parts of it, and may be able to help me avoid reinventing the wheel. I have followed the multiserver guide https://www.howtoforge.com/tutorial/ispconfig-multiserver-setup-debian-ubuntu/

    Thank you.
     
  2. nhybgtvfr

    nhybgtvfr Well-Known Member HowtoForge Supporter

    1. ISPConfig configures the configs for each server; this includes the DB usernames and passwords, and the initial creation of the databases on each server. It does nothing with the contents of the user databases. You need to either use a central database server for the webservers' client databases, or configure DB replication between the webservers for the client databases (not mysql.user, dbispconfig etc.).
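    If you go the replication route, this is the shape of it, assuming MariaDB master-master between web1 and web2 (server ids, user and password are placeholders; test it before relying on it):
    Code:
    # replication settings, ignoring ISPConfig's own databases
    cat > /etc/mysql/mariadb.conf.d/60-replication.cnf <<'EOF'
    [mysqld]
    server-id                = 1        # use 2 on web2
    log_bin                  = mysql-bin
    replicate-ignore-db      = mysql
    replicate-ignore-db      = dbispconfig
    replicate-ignore-db      = phpmyadmin
    auto_increment_increment = 2        # avoid key collisions in master-master
    auto_increment_offset    = 1        # use 2 on web2
    EOF
    # then on each server, point the replica at the other one:
    mysql -e "CHANGE MASTER TO MASTER_HOST='web2', MASTER_USER='repl',
              MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos; START SLAVE;"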

    2. If you're replicating /var/www using unison, then the SSL certificates, when renewed, are already copied into that path (/var/www/clients/client#/web#/ssl/), so they should already be replicated between the two servers.
    Personally, I would go for something like DRBD, GlusterFS or NFS rather than unison.
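    With NFS it's just a mount instead of a sync job, along these lines (server name and export path are examples):
    Code:
    # mount a shared /var/www on both webservers instead of syncing it
    echo 'nfs1:/export/www  /var/www  nfs4  rw,hard,_netdev  0 0' >> /etc/fstab
    mount /var/www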

    3. dunno, never done it.

    4. Yep, plenty of issues. Load balancing: the load balancers would normally have the public IPs, with the servers behind them having private IPs, inaccessible from the internet directly. You could pass all traffic through and have certs on the backend servers, but that removes one of the bigger benefits of load balancers: inspecting and blocking problem traffic before it reaches the webservers/mailservers. Or you could have HTTPS terminate at the load balancers, which means the server/website certs are all created on the load balancer, not the backend servers, so you'll need to configure it all to handle that.
    The load balancers are also a single point of failure: lose the load balancer and you lose access to both webservers, so you should have at least two load balancers with a floating IP between them for failover. And you'll need to handle session data, i.e. make sure the user gets directed to the same backend server on each request. I know HAProxy can be configured for that; other LBs I don't know. Otherwise it's another Redis server or similar, just for holding session data.
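    In HAProxy, stickiness can be as simple as a cookie on the backend; a rough sketch (names and IPs are examples, not a drop-in config):
    Code:
    # cookie-based stickiness: each client keeps hitting the same backend
    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    backend web_pool
        balance roundrobin
        cookie SRV insert indirect nocache
        server web1 10.0.0.11:80 check cookie w1
        server web2 10.0.0.12:80 check cookie w2
    EOF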
    Now it's all getting very complicated: there are more potential areas for failure, misconfiguration etc. It's more hassle to set up and maintain.
     
  3. remkoh

    remkoh Active Member HowtoForge Supporter

    I'm right in the middle of something similar in my test lab, and I even go a few steps further.

    2x ISPC, 3x DB (1 only running garbd), 4x web (2x Apache, 2x nginx), 3x PetaSAN (running NFS shares), 2x HAProxy also running public DNS (for accessing websites and panel from the internet).
    Once this is all running smoothly, 2x mail will be added.
    ISPC, DB and HAProxy run keepalived with VIP addresses that fail over on an outage.
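    The keepalived part is small; per pair it's essentially this (interface, router id, priority and VIP are examples):
    Code:
    # VIP failover between two nodes; the second node uses state BACKUP
    # and a lower priority
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        virtual_ipaddress {
            192.0.2.10/24
        }
    }
    EOF
    systemctl restart keepalived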

    Nowhere is /etc/hosts populated with all servers; instead, all servers use internal-only DNS (running on the DB servers).
    None of the servers run MariaDB/MySQL except of course the 2 DB servers, which have MariaDB installed.
    All DBs are on those 2 servers, and websites will use DBs on them too.
    All servers connect to their databases using a special hostname which resolves to the DB servers' VIP addresses (round-robin, see the sketch below), except of course the 2 DB servers themselves, which use localhost.
    HAProxy is not the distro package but compiled from source, to support QUIC/H3.
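    The round-robin DB hostname is just two A records in the internal zone, roughly like this (assuming BIND for the internal DNS; zone file and addresses are examples):
    Code:
    # both VIPs answer for one name; clients pick either, and keepalived
    # keeps each VIP alive on one of the DB nodes
    cat >> /etc/bind/db.internal.example <<'EOF'
    db    IN A    10.0.0.21
    db    IN A    10.0.0.22
    EOF
    rndc reload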

    1) My databases are replicated using Galera.
    The 3rd DB server runs garbd to act as Galera arbiter (to avoid split-brain situations).
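    The Galera side boils down to a few settings on the two MariaDB nodes, something like (cluster name and addresses are examples):
    Code:
    cat > /etc/mysql/mariadb.conf.d/60-galera.cnf <<'EOF'
    [galera]
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name       = ispc-cluster
    wsrep_cluster_address    = gcomm://10.0.0.21,10.0.0.22,10.0.0.23
    binlog_format            = ROW
    innodb_autoinc_lock_mode = 2
    EOF
    # the 3rd node only runs the arbiter:
    # garbd --group ispc-cluster --address gcomm://10.0.0.21,10.0.0.22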

    2) Web1-2 and web3-4 each use an NFS share for /var/www/,
    so certificates (and web data) are accessible to both nodes.
    For certificate authentication dns_ispconfig is being used (because of HAProxy, see 4).
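    With acme.sh that's the dns_ispconfig DNS API, roughly like this (remote user, password and API URL are placeholders; check the plugin docs for the exact variable names):
    Code:
    # issue via DNS-01 through the ISPConfig remote API
    export ISPC_User='remote_user'
    export ISPC_Password='remote_pass'
    export ISPC_Api='https://panel.example.com:8080/remote/json.php'
    acme.sh --issue --dns dns_ispconfig -d www.example.com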

    3) HAProxy handles the outage of a webserver (or ISPC) node for me. When one is down, all connections will be sent to the other.
    SNI mapping is also used to send you to either one of the Apache nodes or one of the nginx nodes.
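    The SNI routing is a map file in the frontend, along these lines (file paths and backend names are examples):
    Code:
    # pick the backend pool by the requested hostname (SNI)
    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend https_in
        bind :443 ssl crt /etc/haproxy/certs/
        use_backend %[ssl_fc_sni,lower,map(/etc/haproxy/sni.map,web_apache)]
    EOF
    # /etc/haproxy/sni.map pairs hostnames with backend pools, e.g.:
    #   www.example.com    web_nginx
    #   shop.example.com   web_apache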

    Within the ISPC panel, websites (and FTP users etc.) are configured for web1 or web3. Web2 and web4 are mirrors (as is ispc2).
    If web1 were to completely fail, you can't just un-mirror web2. You'll also have to alter the ISPC master DB and change the server id from web1 to web2 for all websites (and FTP users etc.). Afterwards you can add a new server as mirror.
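    In the master DB that comes down to something like this (a sketch only; the ids are examples, look them up in the server table, take a backup of dbispconfig first, and resync in the panel afterwards):
    Code:
    # re-point records from the dead web1 (id 1) to web2 (id 2)
    mysql dbispconfig <<'EOF'
    UPDATE web_domain SET server_id = 2 WHERE server_id = 1;
    UPDATE ftp_user   SET server_id = 2 WHERE server_id = 1;
    -- ...and the same for shell users, cron jobs etc. if you use them
    EOF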

    4) Plenty, probably ...
    Most important for me is getting my website certificates to HAProxy, as TLS is terminated there. Work in progress (for now they're copied manually, see the sketch below). And figuring out how to use quota with NFS.
    Not sure yet if I'll run into anything else.
    So far everything seems to be running fine, including when I shut down a random node (even ispc1).
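    The manual certificate copy mentioned above is essentially concatenating cert and key into the single PEM that HAProxy expects (paths and file names are examples from an ISPConfig layout):
    Code:
    # bundle one site's cert + key for haproxy, then reload
    d=/var/www/clients/client1/web1/ssl
    cat "$d/example.com.crt" "$d/example.com.key" \
      > /etc/haproxy/certs/example.com.pem
    systemctl reload haproxy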

    With custom vhost templates (in /usr/local/ispconfig/server/conf-custom/) all websites score 100% at internet.nl and everything is green, except CSP and HTTP compression for some websites.
     
