Hello, I have a multi-server setup (built following the instructions PDF I purchased on this website). It worked for years. Today I added 2 new customers, and now the processing queue is stuck at 6 forever. I followed the instructions on how to enable debug mode and check for errors, but all I get are debug lines: no error appeared then and none appears now. Here are the operations performed; as you can see, there are only the August operations and then today's. I have attached the full log in CSV (Excel) format. The job queue is empty, there are zero errors whatsoever, and in the overview no server is in an error state. If I hard-refresh the page, it still shows 6 pending updates.
I found the solution by myself. ISPConfig did not help me at all, showing exactly ZERO errors in debug mode, despite what I am about to describe. Basically, something went wrong during the ISPConfig update and the second database server's `ispcsrv` user ended up with a bad password. The master server's fail2ban (configured by ISPConfig at server creation) kicked in and IP-banned the second database server, which could no longer exchange data with the master. End result: zero errors, zero warnings, and a stuck jobs queue.
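For anyone hitting the same symptom, this is roughly how the ban can be confirmed and cleared on the master. It is a sketch, not official ISPConfig docs: the jail name and the slave's IP below are placeholders you must adapt to your own setup.

```shell
# On the MASTER server: list the jails fail2ban is running
fail2ban-client status

# Inspect the jail you suspect (jail name varies per setup;
# "mysqld-auth" is only an example) and look for the slave's IP
# in the "Banned IP list"
fail2ban-client status mysqld-auth

# Lift the ban so the slave can talk to the master again
# (192.0.2.10 is a placeholder for the second DB server's IP)
fail2ban-client set mysqld-auth unbanip 192.0.2.10
```

Unbanning alone is not enough, of course: until the `ispcsrv` password is fixed, the failed logins will continue and fail2ban will simply ban the slave again.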
You will definitely get an error reported in debug mode in this case. So either you missed enabling debug mode https://www.faqforge.com/linux/debugging-ispconfig-3-server-actions-in-case-of-a-failure/ or you did not run server.sh in debug mode on the right slave server.
I did enable debug mode. I have been using ISPConfig on several multi-server clusters for many years. I ran the server.sh script manually (if that is what you mean by "server.sh in debug mode"), and it was precisely its very cryptic connection-failure message that put me on the right track. That's the only thing I don't totally love about ISPConfig: it's awesome as long as it works (which is 99% of the time, to be honest), but when it stops... oh, the pain of finding out why!
You always enable debug mode, run server.sh and see why it fails. The reason a slave node fails cannot be shown in the UI, because in that situation the slave node is unable to report it back to the UI. If you had a single server, you would see the exact reason for the error in the Monitor module. So why is this a pain? It's always the exact same procedure, a really straightforward process, and done in less than a minute.
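For reference, the procedure described above looks roughly like this on the affected slave. The path matches a default ISPConfig 3 installation; treat it as a sketch and adjust to your system.

```shell
# 1. In the ISPConfig UI on the master, set the slave's log level
#    to "Debug" (System > Server Config > select the slave server).
#
# 2. On the SLAVE, run the server script by hand as root instead of
#    waiting for the scheduled run, and read its output directly:
/usr/local/ispconfig/server/server.sh
```

With debug logging active, the failing step (in this thread's case, the rejected `ispcsrv` connection to the master) is printed right in the terminal, which is far more telling than the empty job queue in the UI.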