I've followed the guide, making a few adjustments. I'm running Debian Testing (currently Etch), and instead of installing UltraMonkey I've installed heartbeat-2 and ldirectord-2 from the standard Apt repositories. I've also installed MySQL from the standard repositories and configured the cluster per the Debian instructions and config files. Everything is working fine except that the virtual IP is forwarding traffic to whichever load-balancing server is currently master, rather than through to one of the MySQL cluster (storage) nodes. For example, if I SSH, ping, or try connecting with MySQL to the virtual IP, it connects to 'loadb1' if 'loadb1' is currently the master heartbeat server; otherwise it connects me to 'loadb2' when it has taken over. Shouldn't it be forwarding me to 'MySQL1' and/or 'MySQL2'? One other thing I've noticed: when issuing the 'ipvsadm' command, I see a weight of 0 instead of 1 (or other): Code:
gossamer:/# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.32:mysql wrr
  -> 192.168.0.13:mysql           Route   0      0          0
  -> 192.168.0.12:mysql           Route   0      0          0
'gossamer' is 'loadb1' and the MySQL NDB MGM server. 192.168.0.32 is the virtual IP. Any suggestions? Let me know if you'd like to see configs. Thanks. -Nick
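Edit: in case it helps, the MySQL virtual service in my ldirectord.cf follows the guide's example - roughly this (paraphrasing from memory, so treat it as a sketch rather than my exact file): Code:
checktimeout = 10
checkinterval = 2
autoreload = no
quiescent = yes

virtual = 192.168.0.32:3306
        service = mysql
        real = 192.168.0.12:3306 gate
        real = 192.168.0.13:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
As I understand it, with 'quiescent = yes' a real server that fails the health check isn't removed from the table but just has its weight set to 0 - which would match the ipvsadm output above.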
I just noticed that I'm getting this in the log on the master load balancer every time I try to initiate a MySQL connection to the virtual IP: Code:
Feb 11 15:15:56 gossamer kernel: IPVS: ip_vs_wrr_schedule(): no available servers
Feb 11 15:24:08 gossamer kernel: IPVS: ip_vs_wrr_schedule(): no available servers
Feb 11 15:24:14 gossamer kernel: IPVS: ip_vs_wrr_schedule(): no available servers
Feb 11 15:31:33 gossamer kernel: IPVS: ip_vs_wrr_schedule(): no available servers
I've done some googling, but I'm not coming up with anything. Any ideas/suggestions? Thanks, Nick
Ok, I found that the 'connectioncheck' table had disappeared from the MySQL cluster. I've recreated it, and ldirectord is now properly setting a weight of 10 when I issue ipvsadm: Code:
bugs:/# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.32:mysql wrr
  -> 192.168.0.13:mysql           Route   10     0          0
  -> 192.168.0.12:mysql           Route   10     0          1
But I still can't connect to the MySQL server. I'm connecting from a separate server on the 192.168.0.0/24 subnet with the following command: Code:
rocky:/# mysql -h 192.168.0.32 -u ldirector -pldirectorpassword
If I change it to connect to the cluster nodes directly, instead of the 192.168.0.32 virtual address, it connects fine - so it isn't a MySQL permissions problem. Some type of routing problem? Anyone have any ideas now? Or suggestions? Things to check/recheck? Thanks Nick
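Edit: for anyone hitting the same missing-table problem, recreating it is the same as the guide's setup step - something like this on one of the SQL nodes (database and table names are the guide's defaults; adjust if yours differ): Code:
mysql> CREATE DATABASE IF NOT EXISTS ldirectordb;
mysql> USE ldirectordb;
mysql> CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
mysql> INSERT INTO connectioncheck VALUES (1);
The ENGINE=NDBCLUSTER part is what makes the table live in the cluster, so both SQL nodes can serve the health-check query.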
There's always one active and one passive load balancer, that's ok. Can you run the tests from http://www.howtoforge.com/loadbalanced_mysql_cluster_debian_p8 and post the results here?
I figured it out. I needed to have MySQL on the cluster nodes listening on lo:0 as well. I set 'bind-address = 0.0.0.0' in the MySQL config on the two cluster nodes so that it would listen on all interfaces (eth0, lo, and lo:0). I figured this out by setting up a second service in ldirectord.cf - SSH - and forwarding it to the two cluster nodes like the MySQL service. It worked when I SSH'd to the virtual IP. The OpenSSH server listens on all interfaces by default, and that was the only difference between the two services on the cluster nodes. I don't know if it would work with MySQL listening only on lo:0, since ldirectord connects to the nodes' private IPs for its checks. Does this all sound right? Thanks Nick
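Edit: for completeness, the relevant change on each cluster node was just this in the MySQL config: Code:
# /etc/mysql/my.cnf (on mysql1 and mysql2)
[mysqld]
bind-address = 0.0.0.0
plus the lo:0 alias carrying the virtual IP, which the guide has you set up (together with the arp sysctls so the real servers don't answer ARP for it): Code:
# /etc/network/interfaces (on mysql1 and mysql2)
auto lo:0
iface lo:0 inet static
  address 192.168.0.32
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null
The IPs are from my setup, and the paths are Debian defaults - double-check against your own files.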