High Availability Cluster Using Debian Lenny

Discussion in 'Installation/Configuration' started by jbimmerle, Sep 4, 2009.

    Hi All

    I was hoping to configure a high availability cluster to offer load balancing and redundancy to some of my hosting and development clients. I plan to follow the steps outlined in the How-To-Forge document listed below:

    http://www.howtoforge.com/setting-u...lancer-with-haproxy-heartbeat-on-debian-lenny

    My configuration would be as follows:

    2 x Load Balancers

    2 x Web Servers

    2 x MySQL Database Servers

    2 x Storage Servers

    All nodes in the cluster would be running Debian Lenny. The two storage servers would connect directly to the web servers using GlusterFS as detailed in the following How-To-Forge guide:

    http://www.howtoforge.com/high-avai...c-file-replication-across-two-storage-servers

    All content would be replicated between the two web servers, as would the content on the MySQL servers and the GlusterFS storage servers.

    I plan to host a number of sites on this configuration, and some of them will require SSL and hence static IPs. My questions are as follows:

    1) Can Lighttpd be used in place of Apache? Or are there specific reasons to stick with Apache in HA configurations?

    2) Has it been confirmed that HAProxy cannot handle SSL and that I must use a reverse proxy such as Apache to terminate it? What impact would this configuration have on the cluster's performance?
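
    To flesh out what I mean, this is roughly how I picture the SSL termination working: Apache on the load balancer handles HTTPS and passes plain HTTP on to HAProxy. All names, paths, and addresses below are placeholders, so please correct me if I have the shape of this wrong:

```apache
# Hypothetical vhost on the active load balancer: Apache terminates SSL
# and forwards the decrypted traffic to HAProxy listening on localhost
<VirtualHost 192.0.2.10:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/example.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/example.com.key
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

    (This assumes mod_ssl and mod_proxy are enabled and that HAProxy is bound to 127.0.0.1:8080.)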

    3) If I have multiple sites using SSL, are all the static IPs assigned to the load balancers to effectively "listen" for requests, which are then forwarded on to the web servers?

    4) I plan to have the web servers, MySQL servers and GlusterFS servers all connecting via internal IP addresses. Public IPs will still be assigned to these nodes to allow for maintenance via SSH. Are there any issues with this configuration or should I block off all external access to these nodes and use a dedicated node to manage all internal nodes?

    5) How does the load balancer segment handle failover if the public IPs are assigned to one load balancer? In effect, how do these public IPs actually float between the two? I'm a little confused on this because it seems I'm unable to assign the same static IPs to both load balancer nodes.
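
    For what it's worth, my current understanding (happy to be corrected) is that the shared IP is not statically configured on either node's interface at all; heartbeat brings up a virtual IP on whichever node is currently active, driven by an haresources file that is identical on both load balancers. Something along these lines, where the node name and address are just placeholders:

```
# /etc/ha.d/haresources -- identical on both load balancers.
# lb1 is the preferred node; 192.0.2.100 is the shared virtual IP
# that heartbeat raises on whichever node is active.
lb1 IPaddr2::192.0.2.100/24/eth0 haproxy
```

    Is that roughly right, and does each additional public IP simply get its own IPaddr2 entry?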

    6) Where are the SSL certificates stored? Since the load balancer segment is where the requests are received and are linked to the static IPs associated with the certificates, should they be stored on both of the load balancers? Or should they still be installed on both web servers and is this even allowed with SSL certificates?

    7) Does anyone have any feedback in terms of GlusterFS performance? I plan to have the storage mounted directly on the web server segment via SSH tunnels unless there is a more secure and efficient means.
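
    On the mounting side, what I had in mind is the client-side volfile approach from the GlusterFS guide, something along these lines (the volfile path and mount point below are just examples, not what the guide prescribes verbatim):

```
# Sketch: mount the replicated GlusterFS volume on a web server
# using a client volfile (path and mount point assumed)
glusterfs -f /etc/glusterfs/glusterfs-client.vol /var/www
```

    If tunnelling that over SSH is the wrong approach, I'd welcome suggestions for something more efficient.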

    I think that is about it for now. More to follow based on the feedback received.

    Thank you all in advance for helping me sort through the finer aspects of this configuration.

    Joe
     
