I have a weird feeling that ISPConfig has set up server 2 differently from server 1. During setup I made sure they have the same configuration for Nginx, MySQL, PHP, etc., and both servers also have the same hardware specs. But when it comes to performance, server 1 is far better than server 2. This is actually why I upgraded both servers earlier: server 1 was already performing better than server 2, and I thought it was because server 2 had lower specs, even though the difference was not that big. Now that both servers have the same specs, I still notice that server 1 is far superior to server 2 performance-wise. One thing I noticed is the us, sy, and ni values in the top command. On server 1, us is much smaller than on server 2, even though server 1 hosts more websites. I run most of the websites on server 1; only one website runs on both servers, and all the others run on server 1. On server 1, ni is much higher than on server 2, and it seems php-fpm is not using niceness (ni) on server 2. Also, sy is more active on server 1 than on server 2. Please see the screenshot below for comparison. Note: I'm using master<>master replication between the two servers.
When you install both servers in the same way, both systems are set up in the same way and the resulting setup is identical. ISPConfig cannot set up systems differently when you install them the same way. As you say, you run different sites and different site loads on the two servers, so the load on them must differ. That said, you can get performance differences from hardware failures (a bad hard disk, mainboard issues, etc.) or from a different system configuration outside of ISPConfig, such as filesystem and kernel settings. But such differences cannot be caused by ISPConfig, as ISPConfig has no influence on the speed and performance of your system; it just writes config files, and it does this in exactly the same way on any system you use it on.
What is the measurement for performance here? The us, sy and ni values you marked are not very meaningful on their own, as they are just a single snapshot in time. Looking at the load averages, server 1 seems to "work" more than server 2.
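If you want numbers that are comparable over time rather than a single snapshot, a minimal sketch (assuming the sysstat package is installed and top supports batch mode) could look like this on each server:

  # Sample CPU usage (us/sy/ni/id/wa) every 5 seconds, 12 times
  sar -u 5 12

  # Or capture a few batch-mode top snapshots to compare later
  top -b -n 3 -d 5 > /tmp/top_snapshot.txt

Running the same sampling on both servers at roughly the same time of day gives you a fairer comparison than a single screenshot.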
Because there is no special niceness set for the process? The default is zero. For php-fpm the setting is called process.priority in the config file.
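To see which niceness your php-fpm workers actually run with on each server, a quick check (assuming a standard procps ps) is:

  # List php-fpm processes with their nice value (NI column)
  ps -eo pid,ni,comm | grep php-fpm

If the NI column is 0 on server 2 and non-zero on server 1, the pools are simply configured with different priorities.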
When I transfer all sites to server 2, it cannot carry the load, while server 1 handles it fine. As a follow-up question, why does server 2 have zero ni? As I said in my first message, server 1 has more sites loaded, while server 2 has only one, and that one is shared between both servers. For example: mydomain.com's traffic is shared by both servers, and all other sites run on server 1 only.
I've already answered that in post #5. What does "cannot carry the load" mean? Does it become unresponsive, does it run out of memory? There are a ton of things that can cause issues, like the storage of the servers, the uplink connection, etc. I don't know enough about the environment, and from what we can see here everything looks fine. Check your services and logs on both hosts and see if there is something out of the ordinary. If there are differences in the server platform and hardware, make sure to check those as well. See the reply from till above.
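A few generic places to start when comparing the two hosts, as a rough sketch (assuming systemd and standard tools):

  # Recent kernel messages (I/O errors, OOM kills, etc.)
  dmesg -T | tail -n 50

  # Errors logged since the last boot
  journalctl -p err -b

  # Services that failed to start
  systemctl --failed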
Can you please tell me where this config file is located? Then I can compare whether there is some difference between the two config files.
/etc/php/8.1/fpm/php-fpm.conf, or whatever version you are using for the webspace in question. However, be aware that process.priority alone does not explain issues like the ones you describe above.
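If you want to compare or explicitly set the niceness, a minimal sketch of the relevant directive (assuming PHP 8.1 FPM; on many setups it is placed in the pool file rather than the main php-fpm.conf) could look like this:

  ; e.g. /etc/php/8.1/fpm/pool.d/www.conf
  ; nice(2) priority of the worker processes: -19 is highest, 20 is lowest.
  ; If not set, workers inherit the master process priority (usually 0).
  process.priority = -5

Simply diffing the php-fpm.conf and pool.d files between the two servers is the quickest way to spot a difference like this.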
I had such an issue a few years ago where a server seemed to work well but was not able to carry load; the system nearly came to a halt when I tried to do a backup. In the end, it was a hardware defect of the hard disk, which was not easy to spot as the SMART values of the disk were fine and the software RAID 1 was okay as well (only one disk had the issue), but when it got heavily loaded, the performance broke down. So a hardware issue is definitely a possible reason.
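To rule out something like that, one rough way to stress the storage on both servers and compare the results (assuming fio is installed and you run it against a scratch file on the disk in question, not on tmpfs or production data) could be:

  # Random read/write test on a 1 GB scratch file; compare IOPS and latency between the servers
  fio --name=randrw --filename=/var/tmp/fio-test --size=1G --rw=randrw \
      --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting

  # Remove the test file afterwards
  rm /var/tmp/fio-test

If one server's numbers collapse under this kind of load while the other's stay stable, that points at the hardware or storage stack rather than at ISPConfig or php-fpm.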