Hi, I put some effort into getting ISPConfig (3.2.12p1) working behind HAProxy (HAProxy version 3.0.8-1ppa1~noble 2025/01/29). It works over both IPv4 and IPv6, receiving and sending. It works for me, but that's it. I have not found any real issues and tested with a few test sites and with the tool swaks, all on Ubuntu 24.04. You can find all the documentation on my private site; it's too much to publish here. See: https://www.bertip.nl/#!postfix-behind-haproxy.md If you find any issues or have comments to make it better, you are welcome. Greetings, Bert
What's your use case for putting haproxy in front of ISPC for mail services? I can understand it when you want to use load balancing, but your haproxy config only has a single backend server, so the balance settings you use have no function at all. And SMTP/SMTPS/submission traffic is passed through, if I'm not mistaken, so why the need to redirect to another port on the backend server?

I'm using haproxy myself, two of them in front of an ISPC multi-server setup. Not yet for mail services, but only load balancing web services at this time (two mirrored apache and nginx webservers). The haproxies don't use passthrough but are TLS endpoints; that's the only way clients can use quic/h3. Because of that I've made a script that copies ssl/tls certificates from the webservers to the haproxies and adds OCSP info to the haproxy service when certificates are created/renewed by the webservers. Load balancing between the haproxies is done with round-robin DNS and keepalived for failover. The haproxies share several stick tables with client connection info, so when one haproxy fails you switch to the other but keep your connection to the same webserver.
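Roughly like this (a simplified sketch, not my exact config; the peer names, addresses and sizes are just examples):

    peers lb_cluster
        # each peer name must match that node's hostname (or its configured local peer name)
        peer haproxy1 192.0.2.11:10000
        peer haproxy2 192.0.2.12:10000

    backend web_servers
        mode http
        balance roundrobin
        # shared table: both haproxies learn which client went to which webserver
        stick-table type ip size 100k expire 30m peers lb_cluster
        stick on src
        server web1 192.0.2.21:443 ssl verify none check
        server web2 192.0.2.22:443 ssl verify none check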
Agree.. balance settings in haproxy backends are completely redundant with only 1 backend server behind them. Frontends only need to bind to the public-facing IPs, not every IP on the LB, and there's no need to change the postfix ports.

Sounds like @remkoh's system is two live active/active haproxy instances accepting connections on their own IPs. Personally, I would configure them active/passive with a floating IP and a heartbeat between them, have all public services connect only via the floating IP, and have the floating IP switch to the other haproxy instance only on failure.

With the OP's configuration, I'm not sure there's really any need/benefit in using haproxy unless there's a lot more configured that's not mentioned, e.g. just one public IP and multiple servers each providing a different service.
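The bind part would be something along these lines (illustrative only, with example addresses):

    frontend smtp_in
        mode tcp
        # bind only the public-facing addresses, not 0.0.0.0 / every ip on the box
        bind 203.0.113.10:25
        bind [2001:db8::10]:25
        default_backend postfix_smtp

    backend postfix_smtp
        mode tcp
        # backend still talks to postfix on its normal port
        server mail1 192.0.2.30:25 check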
Indeed active/active, both with a floating IP (v4 and v6) and heartbeat in between. Active/passive could have been possible too, though I don't really see the use of load balancing to multiple backends with just one frontend, other than when the load on the backend is really heavy. And I had the resources to do it like this.

My thoughts exactly. For one public IP and multiple servers you can just use NAT in your router/firewall and don't need a proxy. And haproxy can only use SNI to direct web traffic to different backends; it can't for mail, ftp etc.

I did read the config wrong: haproxy is the endpoint for all mail traffic and not just imap/pop. But still, what's the use case behind this single-proxy, single-backend setup? For now it just looks like an extra possible single point of failure.
Hi, you are right to ask what the use case is. I will explain the background of how I came to this.

I have an ISPConfig setup in the datacenter and I built one at home for backup (mostly mail relay). The problem arose after moving to another home. Before, I had 8 public IPs on my internet connection, so I could give ISPConfig its own one. In the new home I have only one public IPv4 address and a /48 IPv6 prefix. So far still no problem: I could use NAT to my server. That changed once I had home automation, Nextcloud, a Pi-hole etc. in the network. I had to open a port in the firewall for each application and NAT it to that application, until some of them needed the same port. I already had some experience with HAProxy and decided to use it as the frontend for (almost) everything: one point of entry, one point of maintenance (failover will be built).

I managed to let ISPConfig use Let's Encrypt through HAProxy for my websites, so I didn't have to copy certificates or run Let's Encrypt on the HAProxy server. For the ISPConfig web interface and the MX records I use a multi-domain certificate, which I bought. Note: for the few websites on ISPConfig I have to wait until the Let's Encrypt certificate is renewed to see whether this works.

The other main reason to build it like this is that I'm prepared for a multi-server setup with two mail servers. Indeed, the benefit with one mail server is minimal right now; I would say I did it because it can be done. Also, with this setup I solved a problem where ISPConfig only saw the IP address of the HAProxy server and not that of the clients connecting to the mail server (and web server). That constantly got me bans from fail2ban. Now I see the clients' IP addresses and e-mail works normally.

The extra ports I used in the Postfix config were (in my case) actually necessary. It worked without them, but not when sending from inside the internal network. For example: I could not send an e-mail to this server with swaks, and it also didn't work from a Thunderbird client (STARTTLS error). With these extra ports opened and the mail setup secured (tested on different sites) it all worked.

It may not be optimal for some of you, but for me it solved some issues and with this config on my HAProxy server I have full control over the URLs/ports and how they are forwarded. In my case it's more a proxy than a load balancer. The road to the technical solution was the most interesting part, and that is what I wanted to share.
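For anyone wanting to reproduce the client-IP part: a common way to pass the real client IP to the mail server is the PROXY protocol on both ends. A rough sketch (example names and addresses, not necessarily exactly how my config does it):

    frontend smtp_in
        mode tcp
        bind 203.0.113.10:25
        default_backend postfix_smtp

    backend postfix_smtp
        mode tcp
        # send-proxy prepends a PROXY protocol header with the client's real ip/port
        server mail1 192.0.2.30:25 send-proxy check

    # matching Postfix side, shown here only as comments:
    #   main.cf:                            postscreen_upstream_proxy_protocol = haproxy
    #   or per smtpd service in master.cf:  -o smtpd_upstream_proxy_protocol=haproxy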
OK, that makes it a whole lot clearer. It makes perfect sense to use haproxy to avoid duplicate ports, but only if you're talking about 80 and 443 (or at least ports used for web services), as haproxy works perfectly with SNI filtering for http(s) traffic. That config is completely absent on your website, though. Haproxy can't use SNI filtering for traffic other than web traffic, so I still don't understand your mail setup in haproxy.

Using haproxy makes sense in preparation for multiple backends, but there should be no need whatsoever to change ports on the backend for it to work with any client. It also definitely doesn't solve the fail2ban problem (proxy IP instead of client IP) you had; that was caused by an incorrect/incomplete haproxy config.
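For web traffic the SNI part can be as simple as this (a simplified example of SNI-based routing without terminating TLS; hostnames and addresses are made up):

    frontend https_in
        mode tcp
        bind 203.0.113.10:443
        # wait for the TLS client hello so the SNI is available
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend nextcloud if { req_ssl_sni -i nextcloud.example.org }
        use_backend ispc_web if { req_ssl_sni -i www.example.org }
        default_backend ispc_web

    backend ispc_web
        mode tcp
        server web1 192.0.2.40:443 check

    backend nextcloud
        mode tcp
        server nc1 192.0.2.41:443 check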
To fully understand my config (which is not fully ready yet), I placed a link on the page for downloading the whole haproxy.cfg. The config is sanitized; it contains a piece for Let's Encrypt and a piece for a security.txt. For one site I used offloading in combination with a Let's Encrypt cert. This is all explained in the other tutorial on the site. ** sharing is caring **
I like your config. I'm doing a lot of the same things, but quite often in a different way. I mostly use map files (like domain to backend) where you now have several different lines for different domains and other things. I also use a map file for linking certificates to domains.

Did you install your OS's haproxy package or did you compile haproxy yourself? I go the compile route and compiled quic/h3 into it too, so clients can use it for every website on port 443 (all major browsers only support quic/h3 on port 443 and no other ports).

How do you handle your certificates? Not copied by hand, I hope? You too use haproxy as the ssl/tls endpoint on several ports, so you need the certificates in haproxy itself. I exclude /.well-known/acme-challenge (among other things) when redirecting all other http to https; that way ISPC on the webservers can do certificate requests and renewals. A script I created on the haproxies searches the webservers for renewed certificates, copies them to the haproxies, then inserts the renewed OCSP info into the haproxy service and reloads it. For the haproxy stats page I just use the ISPC server certificate. ISPC is installed because my haproxies also function as public DNS for my domains. I use dns-challenge for those certificates, so I don't need ports 80 and 443 (on which haproxy is running) to renew them.

I saw security.txt pass by. Maybe this can interest you: https://forum.howtoforge.com/threads/server-wide-security-txt.90153/
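In outline the http/https part looks something like this (a simplified sketch with made-up paths and names, not my production config):

    frontend http_in
        mode http
        bind 203.0.113.10:80
        acl acme path_beg /.well-known/acme-challenge/
        # let the ISPC webserver answer the http-01 challenge itself
        use_backend ispc_web_http if acme
        # everything else goes to https
        http-request redirect scheme https code 301 unless acme

    frontend https_in
        mode http
        bind 203.0.113.10:443 ssl crt-list /etc/haproxy/certs/crt-list.txt
        # map file with "domain backend" lines decides where a request goes
        use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/maps/domain2backend.map,ispc_web)]

    backend ispc_web_http
        mode http
        server web1 192.0.2.40:80 check

    backend ispc_web
        mode http
        server web1 192.0.2.40:443 ssl verify none check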
I installed everything from standard repositories; HAProxy like this: https://www.bertip.nl/#!HAProxy-on-ubuntu-2404.md I have no experience with the quic protocol yet.

For the certificates: I have one multi-domain certificate, which I bought, and which is used for most of the URLs (offloading). For one site hosted on the ISPConfig server I did a double Let's Encrypt, one on the HAProxy server and one on the ISPConfig server. Now I have to wait until the renewal to see whether this works. I will look into your security.txt solution.

I updated the website with the postfix master.cf and main.cf as I use them in ISPConfig. This gives me a 100% score on Internet.nl and a good reputation on other test sites as well (sending and receiving e-mail).
Both use http-challenge for the renewal? I'd expect the ISPC server to fail renewal then. My guess is you did the certificate request on your ISPC server first and only after that changed your haproxy settings for the certificate request on that server? Haproxy will almost certainly interfere with the ISPC server's renewal challenge. That's why I let my ISPC servers do the certificate requests and renewals, and made a script that checks for new and renewed certificates and copies them (over ssh) to my haproxies. An additional benefit is that my haproxies and ISPC servers use the exact same certificates that way.

Installing two mirrored ISPC mail servers is still on my to-do list, also planned to go behind my haproxies. I'm eager to find out if I'll run into the same issues as you did, though time is scarce; too many other projects I'm also working on. And my current mail server is running just fine, also scoring 100% (as well as almost all websites).