Hi, I tried to install a multiserver setup: the first (master) server runs the ISPConfig interface, and the second server, www1, runs web, FTP, and DB. After the installation I can see server www1 in the ISPConfig interface on the master server, screenshot attached. But every change made for the www1 server in ISPConfig on the master is stuck in the queue. Any idea, please? Debian 10 Buster, ISPConfig 3.1.15
That problem is often caused by wrong database permissions or wrongly created database users for the server-to-server communication. Did you follow the ISPConfig Manual carefully and create them for all servers in your multiserver setup? Also: https://www.howtoforge.com/community/threads/please-read-before-posting.58408/
Yes, I followed the setup guide. I created the users root@public-ip and root@www1 and granted them all privileges on the master server. If the DB permissions or users were set wrong, I wouldn't see my web server in the server overview of ISPConfig on the master server, right? Via tcpdump I can see constant communication to and from port 3306 of the master server. I really do not know where the issue is... :/ Is there a log I could check? I am able to connect from the www1 server to the master with mysql... What else should I try to determine what is wrong? I didn't find much info on how exactly the link between the servers works.

Code:
root@www1:~# mysql --host master.**** -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2331
Server version: 10.3.17-MariaDB-0+deb10u1 Debian 10

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| dbispconfig        |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.001 sec)

MariaDB [(none)]> CREATE USER test;
Query OK, 0 rows affected (0.002 sec)
The ISPConfig servers communicate via MySQL, with one database on each server including the master server. The user created is [email protected] or [email protected], where domain is the server FQDN and ip is the full IP address. You should test each of these MySQL accesses both locally and remotely to make sure they are working properly. This is a separate issue which may or may not be related to the MySQL access. You should also test your server as suggested in the link provided by @Taleman above: https://www.howtoforge.com/community/threads/please-read-before-posting.58408/
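For example, you could check it roughly like this (a sketch: master.example.com stands in for your master's FQDN, and the queried table is the dbispconfig server table this thread already showed; adjust names to your setup):

```shell
# Sketch: check MySQL access the way ISPConfig uses it.
# master.example.com is a placeholder for the master's FQDN.

# From the slave (www1): can we reach the master's ISPConfig DB?
mysql --host master.example.com -u root -p \
      -e "SELECT server_id, server_name FROM dbispconfig.server;"

# On the master: which user/host combinations are actually allowed?
# The hostname/IP the slave connects from must match one of these rows.
mysql -u root -p -e "SELECT User, Host FROM mysql.user;"
```

If the remote query fails or the Host column does not contain the name/IP the slave actually connects from, that is where the queue gets stuck.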
The most common reason for communication problems between nodes is that the /etc/hosts files on all servers are not the same and do not contain the same hostname/IP combinations. If the files don't match, changes get stuck in the queue because the host a server connects from no longer matches the host in the database user's grant, i.e. effectively wrong database user permissions.
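A quick way to verify this is to diff the non-loopback entries of both files. A sketch, where hosts.master and hosts.www1 are hypothetical local copies fetched from each node (e.g. via scp) and the entries are placeholder data:

```shell
# Sketch: compare the public hostname/IP entries of two /etc/hosts
# copies. The heredocs below just create placeholder sample files.
cat > hosts.master <<'EOF'
127.0.1.1 master.example.com master
127.0.0.1 localhost
116.203.240.49 www1.example.com
116.203.240.55 master.example.com
EOF
cat > hosts.www1 <<'EOF'
127.0.1.1 www1.example.com www1
127.0.0.1 localhost
116.203.240.49 www1.example.com
116.203.240.55 master.example.com
EOF
# The loopback lines may differ per host; the public entries must not.
grep -v '^127\.' hosts.master | sort > master.pub
grep -v '^127\.' hosts.www1  | sort > www1.pub
if diff -q master.pub www1.pub >/dev/null; then
    echo "public entries match"
else
    echo "public entries differ"
fi
```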
I set on master:

Code:
127.0.1.1 master.ho***.** master
127.0.0.1 localhost
116.203.240.49 www1.ho***.**
116.203.240.55 master.ho***.**

and on www1:

Code:
127.0.1.1 www1.ho***.** www1
127.0.0.1 localhost
116.203.240.49 www1.ho***.**
116.203.240.55 master.ho***.**

That was the issue. Now it works! I thought this was not necessary; I only had my localhost IP mapped to the FQDN and host name in /etc/hosts. Thanks for your help. In case of any future problems: is the DB connection logged somewhere?

EDIT: It looks like everything works as it should, but in the Monitor on the master server I cannot see any relevant info for the www1 server. Is there a cron job scheduled for it that will run later?
The hosts files must always contain the IP addresses that another host can use to connect to this node, so using 127.0.1.1 or 127.0.0.1 for the server hostname makes no sense in such a setup; 127.0.0.1 is for localhost and localhost.localdomain only.

You can get the details of a failure by using debug mode: https://www.faqforge.com/linux/debugging-ispconfig-3-server-actions-in-case-of-a-failure/

It may take some time until all values get updated. If they are not up to date after 12 hours, then there is still a connection problem.
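The procedure in that article boils down to the following (a sketch: the path is the ISPConfig 3.1 default, and the exact location of the cron entry may differ on your install):

```shell
# Sketch of ISPConfig debug mode on the affected slave:
# 1. In the ISPConfig UI, set the log level of this server to
#    "Debug" under System > Server Config.
# 2. Disable the cron entry that runs server.sh every minute
#    (in the root crontab or /etc/cron.d/ispconfig, depending
#    on the version).
# 3. Run the server script by hand and read its output:
/usr/local/ispconfig/server/server.sh
# 4. Re-enable the cron entry afterwards.
```

Any database connection or permission error will then be printed directly instead of silently leaving the job in the queue.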