ispconfig on multiple web servers

Discussion in 'General' started by xeroblast, Jun 9, 2014.

  1. xeroblast

    xeroblast New Member

    Here is my setup:

    2 domains pointing to 1 public IP: domainone.com & domaintwo.com
    1 server with 2 OpenVZ containers: the host is CentOS (192.168.0.99) and runs the master ISPConfig that creates the virtual containers. Each container (Ubuntu) is set up following the HowtoForge Perfect Server guide and is a slave to the main/host ISPConfig.

    :confused: My question: how do I make domainone.com go to the first container (192.168.0.101) and domaintwo.com go to the second container (192.168.0.102)?

    Currently both domains go to the main ISPConfig, which is the host server (CentOS).

    Thanks for the help.
     
  2. doekia

    doekia Member

    Install ISPConfig in both containers, pointing to the master ISPConfig on the hypervisor (your CentOS OpenVZ host).

    The 2 systems will then appear as potential destinations for every function you can configure in your ISPConfig panel.

    This is called a multi-server setup, as described in the documentation.

    The only difference during installation inside the container is that you need to install in "expert" mode and, at the question about whether this is the master server or should join an existing setup, choose to join. You will need the MySQL root credentials of the master when installing the slave (see the sketch below).
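
    A rough sketch of the relevant installer prompts on a slave (wording paraphrased from memory and may differ slightly between ISPConfig 3 versions; 192.168.0.99 is the master from the first post):

        cd ispconfig3_install/install
        php -q install.php
        ...
        Installation mode (standard,expert) [standard]: expert
        ...
        Shall this server join an existing ISPConfig multiserver setup (y,n) [n]: y
        MySQL master server hostname []: 192.168.0.99
        MySQL master server root username [root]: root
        MySQL master server root password []: <master MySQL root password>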
     
  3. xeroblast

    xeroblast New Member

    I already did that, and the initial HTML files are generated successfully in their respective containers, but when viewing via the public IP everything still goes to the main ISPConfig.

    The public IP is forwarded to the main ISPConfig. Viewing the sites locally via their local IPs (by entering the local IP in /etc/hosts) works, but not via the public IP.

    I just hope this is already supported in ISPConfig, because I'm guessing it is not yet...
     
  4. doekia

    doekia Member

    Well, containers have nothing to do with DNS records.

    If you want site B to be served by server2, say, you need an A record that states site-B.tld => IP of server2.

    The DNS server can be on any machine; usually we dedicate one server to run DNS as the primary (SOA) and a second to run DNS in slave mode (or use the hoster's secondary DNS for that). There is no need for web and DNS to sit on the same physical server, and no need for DNS to be handled by ISPConfig either ...
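
    For illustration, two such A records in a BIND-style zone file, assuming each site had its own public IP (the 203.0.113.x addresses are documentation placeholders):

        ; each site points at the public IP of the server that should serve it
        domainone.com.   86400   IN   A   203.0.113.101
        domaintwo.com.   86400   IN   A   203.0.113.102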

    Hope it makes sense.
    Best,
     
  5. doekia

    doekia Member

    Re-reading your response, it seems what you want to do is ... reverse proxying.

    Enable the Apache modules (mod_proxy and mod_proxy_http on the hypervisor, mod_rpaf on the containers) and add the appropriate directives in the placeholder site on the hypervisor to proxy requests to your LAN-based containers, as sketched below.
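
    A minimal sketch of such a vhost on the hypervisor, assuming Apache with mod_proxy/mod_proxy_http enabled and the container IPs from the first post (one vhost like this per domain/container):

        <VirtualHost *:80>
            ServerName domainone.com
            # pass the original Host header through so vhost matching
            # and mod_rpaf on the container see the real request
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.101/
            ProxyPassReverse / http://192.168.0.101/
        </VirtualHost>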

    I wonder, however, why you create containers if you do not have public IPs for them. It would be easier to have them as plain vhosts.

    It could also be addressed with an OpenVZ bridged network interface doing NAT-like routing, but again this is pretty complex given what I understand of your need.
     
  6. xeroblast

    xeroblast New Member

    Sorry for the delayed reply.

    My boss wants a single public IP to be accessed by many clients, but with those clients separated into OpenVZ containers.

    What I knew until now is that you can serve many clients on a single public IP through Apache name-based vhosts (ServerAlias), but my boss wants that, based on the domain a client puts in the browser's URL, the client is directed to the specific container he was assigned.
    For example: domain1.com should be directed to the container with local IP 192.168.1.11 and domain2.com to the container with local IP 192.168.1.12.

    So, about the reverse proxying: I really don't have any idea about this. Can you elaborate further?

    Thanks a lot again...
     
  7. doekia

    doekia Member

    The public IP attached to the hypervisor has vhosts that reverse-proxy to the private IPs. One vhost = one proxy.

    This, however, has nothing to do with ISPConfig.
    The issue you face here is a plain network routing issue.

    Bridging can also be a solution.

    To your boss: let's assume you need one public IP per customer (container), at a cost of $2 per month. Does it really make sense to build a complex, hardly maintainable approach that will certainly cost more, rather than provisioning those IPs? Also consider that although running a container is not that expensive in terms of resources, the scenario does not scale well (20 customers = 20 containers with 20 systems, 20 ISPConfig instances, ...). Each container duplicates a lot (system, ISPConfig, ...), and at some point the hypervisor spends most of its horsepower switching between containers.

    To you (if your boss is stubborn about his decision): implement a reverse proxy or haproxy on the hypervisor and you are all set, but each time you create a new container you need to reflect that in your haproxy config and/or your reverse-proxy vhosts. A sketch of the haproxy variant follows.
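
    A minimal haproxy.cfg sketch of that idea (the backend names are illustrative); it routes on the Host header sent by the browser to the matching container:

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend http-in
            bind *:80
            # match the Host header, including subdomains like www
            acl host_one hdr_dom(host) -i domainone.com
            acl host_two hdr_dom(host) -i domaintwo.com
            use_backend container_one if host_one
            use_backend container_two if host_two

        backend container_one
            server c1 192.168.0.101:80

        backend container_two
            server c2 192.168.0.102:80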


    Your question is not:
    how do I make domainone.com go to the first container (192.168.0.101) and domaintwo.com go to the second container (192.168.0.102)
    But:
    how does domainone.com get routed to 192.168.0.101, given that domainone.com can only resolve to the hypervisor's public IP

    This is a routing/network question.
    In OSI terms, you could address it by bridging (layer 4-ish) or at layer 7 with vhost+proxy.
     
  8. xeroblast

    xeroblast New Member

    Thanks for confirming that it can't be done the way I asked... Actually, I was already telling my boss that we really need to buy a public IP for each client/container.

    My boss wants it that way because he doesn't want clients on the same system: with SSH access, he is afraid clients could see each other's data.
     
  9. doekia

    doekia Member

    By default, and unless you tamper with system settings, SSH users created through ISPConfig land in their own folder and only have permissions for it.

    You can also, via ISPConfig, force SSH logins to be "chrooted", if you are more paranoid.
    Assuming you used the Perfect Server install with jailkit ... (recommended)
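
    For illustration, a session with a jailkit-chrooted shell user (the listing is hypothetical; the point is that the jail root contains only that site's files and a minimal toolset, so other clients' data is simply not visible):

        $ ssh user1@server.example.com
        $ ls /
        bin  dev  etc  home  lib  usr  var
        $ ls /var/www
        ls: cannot access /var/www: No such file or directory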

    You can try to reinvent the wheel, or read the ISPConfig documentation ...

    The only advantage of containers is isolating system resources (and even that does not always address the issue). E.g.: if a container contains bad code that "eats" all the horsepower of the system ... it cannot eat more than its allocated vz settings ...

    All other use cases are well covered by pristine ISPConfig ... single/multi mode, especially if you use quotas.

    Personally, I have numerous (10+) multi-client installations of ISPConfig, and only 2 require virtualization, due to staging/development kinds of customers where we all know things can get crazy.
     
    Last edited: Jun 26, 2014
