I'm finally upgrading my multiserver setup from Debian 10 to Debian 11 using the tutorial https://www.howtoforge.com/update-the-ispconfig-perfect-server-from-debian-10-to-debian-11/ The upgrade seems to go well, but after upgrading, my servers can't reach each other via MariaDB anymore. It seems the servers begin rejecting connections after being upgraded. For example, when I try to connect from the panel server (Debian 10) to the webmail server (Debian 11) I get:
Code:
root@panel:~# mysql -h 10.0.0.5
ERROR 1130 (HY000): Host 'panel.domain.org' is not allowed to connect to this MariaDB server
The ISPConfig Monitor is stuck and still shows the old "Debian Buster" status for the webmail server. The same issue applies to the other servers of my multiserver setup after the upgrade.
In an ISPConfig multiserver system, the slave nodes connect to the master and not the other way around. So if you want to test this, you must connect from the slave to the master, not from the master to the slave node.
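For completeness: ERROR 1130 means the target server has no matching user@host entry in its grant tables for the connecting host. If you want to see which hosts are currently allowed, you can list them on the target server (a generic diagnostic sketch, nothing ISPConfig-specific):
Code:
# list the user/host combinations the server accepts connections from
mysql -e "SELECT User, Host FROM mysql.user;"
But as said, the relevant test in an ISPConfig multiserver setup is from the slave to the master.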
I finally got around to a new attempt at upgrading my multiserver setup. Of course you ( @till ) were right about the direction for the test. Here is the output I get after upgrading the panel server:
Code:
root@webmail:~# mysql -h 10.0.0.3
ERROR 2002 (HY000): Can't connect to MySQL server on '10.0.0.3' (115)
Might my error be that I chose "yes" when ispconfig_update.sh asked me if I wanted to "Reconfigure Permissions in master database?" on the master server as well?
No, it is ok. While it's not needed to run it on the master, it does no harm. On which server did you run this command?
I ran it on my webmail server. But I get the same answer from the other slave servers as well, so for example from the DNS server. I'm a little bit in the dark about what I might be missing.
So I assume your webmail server is not the master? Have you tested the same command on the master, does it work? If yes, then you likely closed a firewall port which blocks MySQL access. If it does not work on the master, then you might have disabled networking in the MySQL/MariaDB config (the setting in the file is named skip-networking), so your database is not listening on the network interface anymore. Besides that, the correct test would be: mysql -h master.yourdomain.tld -u ispcsrvX -p while using the master server hostname and the ispcsrv* username and password from the file /usr/local/ispconfig/server/lib/config.inc.php (the database settings for 'dbmaster').
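To check on the master whether MariaDB is actually listening on the network interface, and whether skip-networking or a restrictive bind-address is set, something like this works (a quick diagnostic sketch; the config path is the Debian default and may differ on your system):
Code:
# show the addresses MariaDB is listening on (should include the LAN IP or 0.0.0.0, not only 127.0.0.1)
ss -tlnp | grep 3306
# search the MariaDB config for skip-networking or bind-address settings
grep -RiE 'skip-networking|bind-address' /etc/mysql/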
Thanks for the great advice. It seems that the update really activated bind-address = 127.0.0.1. I changed this to:
Code:
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
Now the connection can be established with your command. Nevertheless, it seems that I've run into some more trouble, since the server states are stuck at old values for the uptime. Even after rebooting the servers (slave and master) and waiting more than 5 minutes (thanks for your advice on Manually update/renew "Monitor" / "System State", btw), there is no update of the uptimes for the slave servers. Only the master server gets the correct uptime every 5 minutes.
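For anyone else hitting this: after commenting out (or changing) bind-address, MariaDB has to be restarted before the change takes effect. Roughly like this (the service name is assumed to be mariadb, as on a default Debian 11 install):
Code:
systemctl restart mariadb
# should now show 0.0.0.0:3306 (or the server's IP) instead of only 127.0.0.1:3306
ss -tlnp | grep 3306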
By comparing the new /etc/mysql/mariadb.conf.d/50-server.cnf with a backup, I just realized that the old file contained the following line, which is missing in the new config:
Code:
socket = /run/mysqld/mysqld.sock
Might this be an issue here?
Do you get an error when you run: /usr/local/ispconfig/server/server.sh as root? This should not be related to connections over the network. You can try to add it, but I guess it will likely make no difference.
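If you want to double-check the socket anyway, you can ask the running server which path it actually uses (a quick diagnostic, not a required step):
Code:
# prints the socket path the running MariaDB instance uses
mysql -e "SHOW VARIABLES LIKE 'socket';"
If that already shows /run/mysqld/mysqld.sock, the missing line is most likely just the default and changes nothing.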
Since the upgrade seems to have brought several changed config files, I can only guess that I'm missing some important change in one of these files. If I'm not mistaken, in these cases the old config files were overwritten by the new maintainer versions:
/etc/mysql/mariadb.conf.d/50-server.cnf
/etc/resolvconf/resolv.conf
/etc/ssh/sshd_config
I attached them here in case it might help to discover the problem.
It depends on what you have chosen during the dist upgrade of the OS. The connection of your servers seems to be fine, otherwise server.sh would have thrown an error. But you can try to enable debug mode in the ISPConfig GUI on the master for a slave node, then run server.sh and post the output. Then you can try to empty the table sys_cron (not to be mixed up with the table cron) in the dbispconfig database of a slave; this will reset the internal cron system in ISPConfig, which is responsible for the monitoring jobs and other things like backups. Then you can check the root crontab with the command 'crontab -l' to see if they all contain the server.sh and cron.sh cronjobs from ISPConfig.
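Emptying the sys_cron table can be done from the shell on the slave, for example like this (a minimal sketch, assuming local root access to MariaDB via the unix socket):
Code:
# reset ISPConfig's internal cron state on the slave (table sys_cron, NOT the table cron)
mysql dbispconfig -e "TRUNCATE TABLE sys_cron;"
# verify that the ISPConfig cronjobs are present in the root crontab
crontab -l | grep ispconfig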
Code:
root@webmail:~# . /usr/local/ispconfig/server/server.sh
04.05.2024-12:44 - DEBUG [plugins.inc:155] - Calling function 'check_phpini_changes' from plugin 'webserver_plugin' raised by action 'server_plugins_loaded'.
04.05.2024-12:44 - DEBUG [server:217] - Remove Lock: /usr/local/ispconfig/server/temp/.ispconfig_lock
I just emptied the table. This is the result of crontab -l:
Code:
root@webmail:~# crontab -l
28 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
* * * * * /usr/local/ispconfig/server/server.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
* * * * * /usr/local/ispconfig/server/cron.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
Now the server status is getting updated correctly again for this slave server. It seems that I will have to repeat this for every slave server. Since the error only occurred after upgrading the master server (panel.example.com) and everything works again when reverting to a backup of the master server, my guess was that the responsible configuration had to be somewhere on the master server side. Any idea what might have triggered this? In any case, I'm very grateful. You saved my day, if not the whole month.
Yes, do this for any system that is not updating its status. My guess is that the sudden inability to connect to the master while some cronjobs were running caused the cronjob status to get stuck in running mode. Emptying the table resets them, so they work again now. I'll have to look into the code to see how we can fix such an issue automatically, e.g., by defining a max run time and then resetting the status. The problem is that some cronjobs, like backups, might run for a long time, and it's critical not to start them twice.
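Just to illustrate the idea, not how it is (or will be) implemented: a stuck job could be cleared by resetting every entry that has been marked as running for longer than a configured maximum. The column names below ('running', 'last_run') are assumptions about the sys_cron schema, so treat this purely as a sketch:
Code:
# hypothetical: reset jobs stuck in 'running' state for more than 24 hours
# column names are assumed, check the real sys_cron schema before running anything like this
mysql dbispconfig -e "UPDATE sys_cron SET running = 0 WHERE running = 1 AND last_run < NOW() - INTERVAL 24 HOUR;"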
I guess this makes sense. Maybe, since you will already be touching the code, it might be an idea to add some information about how old the data received from the servers is. This way it would be much easier to realize that something is off. Ideally there could also be a warning after some time without status reports from a server. Should I add a feature request for this in GitLab?