We're all familiar with the situation where there is some issue with a site and Apache host matching selects the first enabled vhost. A visitor requests example.com, host matching picks the first vhost (which may be 000-default or one we created, like 100-a-default), the cert for that vhost doesn't match the requested domain, and the user gets an invalid-cert error. I'd like to handle this better.

If I enable 000-default-ssl.conf and request a disabled domain, the client browser gets back a cert for the server hostname, which doesn't match the desired example.com. If I disable 000-default-ssl.conf, the first vhost selected for port 443 is a placeholder that I created, "100-a-default.company.tld", and the browser gets back a cert for the subdomain "a-default", which of course doesn't match example.com either.

We can't 403-redirect (or similar) on a cert error: by definition the browser doesn't trust the server yet, so it won't redirect based on instructions from the server. HTTP processing hasn't even started - the SSL/TLS handshake ends and the browser disconnects without making the request for content.

A common note in these forums is "you need to ensure all sites are HTTPS". Absolutely! But the issue that brought me here today is a site with valid SSL that was temporarily disabled through ISPConfig (which I think uses `a2dissite`). So a request for example.com is valid according to DNS and gets through to this server, but Apache can't find the vhost, falls back to that default vhost, and someone looking for example.com gets a cert error that mentions 'company.tld'.

I'd like to eliminate the redundant "a-default" vhost subdomain and (maybe with the default vhost) move a visitor in this scenario to a page that tells them something useful. For example, is there any feature yet for configuring a landing page for disabled sites? I know ... "They're DISABLED!" IMO the result of that shouldn't be a misleading security warning on the client side. I'm thinking of domain parking, where a "disabled" site is optionally "parked" with a domain alias: rather than just removing a site from sites-enabled, replace it with an alias. Responses would need to return a site-specific cert, but the DocumentRoot would be common for all sites in this state. Is that a valid suggestion? Is there a better way to deal with this now?

It would also be helpful to know when Apache gets a request for a site that resolves back to the default vhost. Is anyone aware of a log where this event is recorded?

Related to checking that all sites are SSL-enabled: does anyone have a script that will show sites that are not SSL-enabled? That would be a useful field in the domain list, like the Disabled flag, with a check and/or CSS coloring. I understand many sites are intentionally not HTTPS - this wouldn't be a warning, just an indication of state. Right now I think the only way to do this would be with a SQL query, maybe a REST request? Thanks!
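For that last point, something along these lines is what I have in mind (a rough sketch against ISPConfig's dbispconfig database; I'm assuming the web_domain table and its domain / ssl / ssl_letsencrypt / active / type columns, so check the schema on your own install before trusting it):

Code:
-- rough sketch: list active sites that do not have SSL enabled
-- run it e.g. with:  mysql -u root -p dbispconfig
SELECT domain, ssl, ssl_letsencrypt
  FROM web_domain
 WHERE type = 'vhost'      -- assumption: only full vhosts, not aliases/subdomains
   AND active = 'y'
   AND ssl = 'n'
 ORDER BY domain;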
Note: This wasn't posted to the ISPConfig forum (yet) because the challenge is not entirely specific to ISPConfig. If there is an ISPConfig-specific resolution, or this results in enhancement requests, I can create a separate, similar thread there. TY
I redirect everything to https://server.example.com, with an email address to contact if such a site is not found. It should be simple enough, but I won't share mine here since it's for an nginx web server, so it isn't directly relevant to your setup. Anyway, this could be a good new feature to ask for.
Hmm, I don't see any problem with how Apache currently handles it. I just have 000-default and 000-default-ssl enabled, so if Apache can't find a particular vhost / servername, the default site/page gets loaded instead. The docroot in them is left as /var/www/html/ and the SSL cert settings in 000-default-ssl point to the ISPConfig cert files in /usr/local/ispconfig/interface/ssl/.

Then I just change the /var/www/html/index.html file: remove the Apache default file and put my own holding page in there with standard info - "this domain is owned by one of our clients... blah blah blah... if this domain is yours and you expected to see something else, contact us here... blah blah blah..." - and the page includes a little javascript that redirects the visitor to our main company website after 60 seconds.

Just create your own index.html or index.php file to replace the Apache default and put whatever content you want in that. Works perfectly fine.
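For reference, the relevant bits of that 000-default-ssl vhost look roughly like this (just a sketch - verify the cert file names on your own install; mine match what ISPConfig drops in /usr/local/ispconfig/interface/ssl/):

Code:
<VirtualHost *:443>
    # catch-all: any request whose hostname doesn't match another vhost lands here
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    SSLEngine on
    SSLCertificateFile      /usr/local/ispconfig/interface/ssl/ispserver.crt
    SSLCertificateKeyFile   /usr/local/ispconfig/interface/ssl/ispserver.key

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>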
Yep, but I don't use the FQDN for anything. I currently use a paid-for wildcard cert and have the ISPConfig interface on its own subdomain. You could just leave it on the FQDN, on port 8080, using acme.sh or Let's Encrypt, and create a vhost using just the domain name, or www.domain name (or any other subdomain), and reverse proxy port 80 of that to the control panel on 8080. The 000-default and 000-default-ssl and index.html changes would still work fine that way too.
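The proxy vhost is nothing special, roughly like this (untested sketch - panel.yourdomain.tld is just a placeholder, it assumes mod_proxy, mod_proxy_http and mod_ssl are enabled, and that the panel answers over https on 8080; only relax the peer checks if the panel cert doesn't match "localhost"):

Code:
<VirtualHost *:80>
    ServerName panel.yourdomain.tld

    # pass everything through to the ISPConfig interface on port 8080
    SSLProxyEngine on
    # only needed if the backend cert doesn't match "localhost":
    SSLProxyCheckPeerName off
    SSLProxyVerify none

    ProxyPreserveHost On
    ProxyPass        / https://localhost:8080/
    ProxyPassReverse / https://localhost:8080/
</VirtualHost>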
I was doing this via 000-default.vhost for the nginx web server:

Code:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    error_page 404 https://server.example.com/index.html;
    return 404;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2 ipv6only=on;

    ssl_protocols TLSv1.3 TLSv1.2;
    ssl_certificate     /usr/local/ispconfig/interface/ssl/ispserver.crt;
    ssl_certificate_key /usr/local/ispconfig/interface/ssl/ispserver.key;

    server_name server.example.com;
    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
I've found that when apache.conf has LogLevel set to info or more verbose, /var/log/ispconfig/httpd/site/error.log reports an SSL error when Apache cannot match the requested hostname to a vhost. This is getting closer to finding out which domain is causing a problem, for quicker resolution. But keeping such a verbose log level (like 'info' or 'debug') is not a good solution for a production environment. Still looking at this.

I believe the solution by @nhybgtvfr only applies when all sites are covered by the same wildcard cert. When each site has a unique cert, as with per-site ISPConfig LE certs, the ISPConfig cert files still result in a cert mismatch. Also note that there is no 000-default-ssl.vhost file (ISPConfig 3.2.2 / Ubuntu 20.04 / Apache 2.4.41); I added one with the default snakeoil cert and all hell broke loose.

Creating a default vhost (SSL or not, 000-default-ssl.vhost or a domain like "a-default") will not eliminate the cert mismatch problem. I've created a vhost subdomain under my primary business domain, "a-configuration-issue". As elegantly as I can manage right now, this shows that a cert request for "example.tld" returned a cert for "a-configuration-issue.mydomain.tld". If this is as far as we can go with this, I'm good with it.

I do not think disabling a site provoked this condition. I am researching both what caused this and how to report when it happens, and will take up the topic in the ISPConfig forum if required. Thanks guys!
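For now, this is roughly how I'm pulling those entries out of the per-site logs (the exact message text / AH code seems to vary by Apache version, so treat the pattern as a guess and adjust it to whatever your error.log actually shows):

Code:
# scan all per-site error logs for SSL/SNI-related entries logged at LogLevel info
# (the pattern is a guess - match it to the actual message text on your system)
grep -hiE 'ssl|sni' /var/log/ispconfig/httpd/*/error.log | grep -iE 'hostname|handshake' | tail -n 20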
You actually describe the default behavior of the Apache and Nginx web servers. This is not related to ISPConfig, so opening a request for this in the ISPConfig forum makes no sense to me. This is even covered in the read-before-posting thread https://forum.howtoforge.com/threads/please-read-before-posting.58408/ - see 'When visiting domain B, content of domain A is showing'.

The details behind this Apache default behavior are: if Apache does not find a matching vhost, it will show the first vhost that it finds on the same IP address and port. Example: you have websites a.tld and b.tld working on the server, where a.tld has SSL and b.tld does not have SSL enabled. If you enter https://b.tld in the browser, you will get a.tld plus an SSL error. That's why one should either enable SSL for all sites or, if you want to have sites without SSL, use different IP addresses for them.

If you now point a third domain c.tld to this server in DNS and don't create a website for it, the result is that a.tld is shown in the browser for HTTP or HTTPS. The reason a.tld is shown is that a.tld comes first in the vhost configuration order (alphabetically).

How do default vhosts come into play now? The default vhost files are made to sort first by prepending 000- to their names, which is why they get shown instead of a website. And here it's also separated between HTTP and HTTPS, so if you don't have an SSL default vhost, the first website with SSL gets shown instead. All this is not ISPConfig specific; it happens on any Apache and Nginx server. An SSL cert mismatch must occur in such a situation, as the default vhost (or the first website that catches the incoming SSL request) does not have the domain of the nonexistent requested site in its cert.
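If you want to see which vhost Apache actually treats as the default for each IP:port, you can print the parsed vhost configuration (the exact output format differs slightly between versions):

Code:
# on Debian/Ubuntu; plain "apachectl -S" on other distributions
apache2ctl -S
# per port it lists the "default server" plus every namevhost, roughly:
#   *:443   is a NameVirtualHost
#           default server a.tld (/etc/apache2/sites-enabled/100-a.tld.vhost:1)
#           port 443 namevhost a.tld (...)
#           port 443 namevhost b.tld (...)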
@till - Please see the first paragraph of my OP. There is no confusion about what's happening, the cause, or the effect. I said "I'd like to handle this better." This goes beyond the common tier-1 "why is this happening?!?" post and on to "we all know the situation, let's discuss what we can do to improve on it". ... Let's see if we can make the world a little better.

Scenario: A visitor requests site1, it's not found as host:443, and Apache host matching selects the first enabled vhost, site2. We all understand that. Here is how I'm looking at doing this "better" so far:

1) Identify site1 and report the issue to the admin. Right now I still don't see a good way to do this other than setting LogLevel=debug and parsing logs.

2) Proactively script (curl) polling through all sites on the server to see if we get a good cert (a fuller sketch is at the end of this post):

Code:
curl -Ivvv --stderr - https://$sitename | grep -iE "SSL certificate verify ok"
* SSL certificate verify ok.

3) Rather than strategically positioning an ugly '000foo' or 'aaaaa' vhost to be selected as the top/default, I'm using 'a-configuration-issue', so if someone sees this or reports it, it's completely obvious that there is, um, a configuration issue. This is a trivial improvement, but it can help an admin immediately identify the problem, compared to "some 'random' site2 cert" getting returned for site1. A site owner won't get ticked that their site is getting replaced by someone else's site - it's obvious that there is "a configuration issue", a recognized problem that can be solved.

4) (Probably not a good idea yet) If we identify a site that's causing this problem, modify the DNS so that site1 is not directed to the server actually hosting that site, but to another server/IP that has a "better" cert, so that a better message can be displayed to the visitor. That might be a wildcard cert. If it's another legitimate cert for that site, on a different host, it's probably going to look like a MITM issue. I haven't thought this one through yet, but someone else here might think it's a great/simple idea or a really bad idea - and then I won't need to keep thinking about it. That's why I'm sharing here.

Any other ideas? This enquiry isn't in the ISPConfig forum; it's in Linux Server Operations. If someone gets a bright idea for better ways to handle this outside of ISPConfig, cool. If an ISPConfig developer sees a way for this software to solve this common issue, then I'm glad this is helpful. If we resolve that there are no better ways to handle this, then at least we can refer back here - we really did try to think of ways to do this better but we couldn't. That's an acceptable resolution in itself. Thanks.
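Here's the rough polling sketch for (2). It assumes a domains.txt with one hostname per line - e.g. fed from the web_domain query earlier - and just leans on curl's default certificate verification, so a name mismatch, expired cert, or default-vhost cert shows up as a failure:

Code:
#!/bin/bash
# Poll every hosted site over HTTPS and flag the ones whose served certificate
# does not verify for that hostname (mismatch, expired, default vhost cert, ...).
while read -r site; do
    [ -z "$site" ] && continue
    if err=$(curl -sS -I -o /dev/null --max-time 10 "https://$site" 2>&1); then
        echo "OK    $site"
    else
        echo "ALERT $site -> $err"
    fi
done < domains.txt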
It was explained above, at least as answered by @nhybgtvfr - one approach using the server FQDN and the other with a paid wildcard cert - so which part do you not understand? In any event, you wouldn't create a site without knowing it points to your server, so a catch-all vhost on the server FQDN should be enough to cover this case. Otherwise, instead of a simple index.html you might want to use a more advanced one which can relay the info somewhere and/or log it, so that you can manage these requests properly. For me, I won't be doing that when the simplest solution is already available, as it just isn't worth my time.
That's why a default vhost is used: so that site2 does not catch the request, but the default vhost does, and in this default vhost's html page you can add some informational text saying that the requested website does not exist. You can even create such a site acting as a default vhost in ISPConfig by simply creating a site that's always first in the alphabet, using a nonexistent domain like '000default.tld' or similar. See also the posts above from @nhybgtvfr about default vhosts.
Also, the page used when the default vhost catches the request doesn't have to be index.html; it can be index.php (although you can include php script directly in .html files too), or anything else you like, as long as you set it using DirectoryIndex. You could easily include some php or javascript in the page that logs the hit separately from the standard Apache logs, so you can monitor/parse this separate log file, or the script could email the server admin to alert them to the issue, possibly even including the original request url, so you know which site / cert is causing the issue. You're trying to re-engineer the whole way Apache (or nginx) handles the problem, when all you really need to concern yourself with is what you put into the page the default vhost serves.
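As a rough example of what that default vhost's index.php could do (an untested sketch: the log path and admin address are just placeholders, the file must be writable by the web server user, and mail() needs a working local MTA):

Code:
<?php
// default-vhost landing page: record the requested host and alert the admin
$requested = $_SERVER['HTTP_HOST']   ?? 'unknown';
$uri       = $_SERVER['REQUEST_URI'] ?? '/';
$client    = $_SERVER['REMOTE_ADDR'] ?? '-';

// append to a separate log file (path is an assumption - pick one the web server can write to)
$line = sprintf("%s host=%s uri=%s client=%s\n", date('c'), $requested, $uri, $client);
file_put_contents('/var/log/default-vhost-hits.log', $line, FILE_APPEND | LOCK_EX);

// optional: alert the admin (requires a working local MTA)
mail('admin@example.com', "Default vhost hit for $requested",
     "A request for $requested$uri from $client landed on the default vhost.");
?>
<p>This domain is hosted here but is not configured correctly. If it is yours, please contact us.</p>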
When a default vhost is selected, the cert for that site does not match the requested domain, so the response back to the browser is a cert error. The suggestion to put some explanatory content on the server doesn't apply - the request never goes through to be processed by pages or scripts. A wildcard cert is impractical for a multi-domain server. I've been talking about domains, not subdomains.
We've all been talking about domains. I just happen to have my server FQDN and control panel on a wildcard certificate, but that is irrelevant for these issues. I still have lots of hosted sites on various domains; the way Apache (and nginx) handles them is exactly the same regardless of the type of certificate. Any request to any non-existent ServerName will be treated the same.

For the end user (visitor), their browser will issue a cert mismatch warning. They will either stop there (or be forced to stop there), in which case there's not much you can do, although the attempted access should still be logged; or they will have the option to proceed, in which case the default page will get loaded.

You either have custom logging enabled in the default vhost and use those logs to get alerted, or the attempted access is logged in the normal Apache log file, or you have something in the default page that gets loaded (if they chose to continue to the site) that will log the requested url and/or send you an alert email. Those are your options.

You can parse /var/log/apache2/access.log for entries that don't match existing vhost servername entries, eg:

Code:
access.log:165.120.127.60 - - [05/Oct/2022:11:43:27 +0100] "GET /favicon.ico HTTP/1.1" 404 421 "http://nope.redacted.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0"
access.log:165.120.127.60 - - [05/Oct/2022:11:43:49 +0100] "GET /favicon.ico HTTP/1.1" 404 506 "https://nope.redacted.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0"

for http and https attempts respectively. The https attempts will only get logged if the user chose to continue to the site after the cert warning.

If you want to get the requested url from the default loaded page, put the following script in the head:

Code:
<script>
  var address = window.location.host;
</script>

It can be displayed in the content using "<span id="address"></span>", and you can redirect that page to wherever you want later (60 seconds in this case) using the following in the body (note it also expects a "<span id="seconds"></span>" element for the countdown):

Code:
<script type="text/javascript">
  document.getElementById('address').innerHTML = address;

  var $seconds = document.getElementById('seconds');
  var secondsMax = 60;
  var seconds = secondsMax;
  $seconds.innerHTML = seconds;

  var secondCounter = setInterval(function() {
    if (seconds <= 0) {
      clearInterval(secondCounter);
      window.location.href = 'https://www.targetdomainname.tld';
    }
    $seconds.innerHTML = seconds;
    seconds--;
  }, 1000);
</script>

I'll leave you to work out how to send an email alert containing the requested address from within the default loaded page.