Question about pfSense Load Balancer

Discussion in 'HOWTO-Related Questions' started by 3zzz, Nov 17, 2011.

  1. 3zzz

    3zzz New Member

    Greetings all,

    I have read the "HowTo" here and I am interested in trying this for a new production network:
    http://www.howtoforge.com/how-to-use-pfsense-to-load-balance-your-web-servers

    I noticed the author writes "if this is your edge firewall I would recommend a physical machine"

    Is this so that pfSense will have dedicated CPU resources to handle the load balancing? Are there other considerations?

    I had been considering putting everything onto VMware ESXi hosts, including a pfSense cluster, based on the two tutorials here: http://doc.pfsense.org/index.php/Tutorials

    1) Installing pfSense in VMware
    &
    2) "Building a fully redundant Cluster with 2 pfSense-systems between WAN/LAN with CARP & pfsync / pfSense CARP & pfsync failover-simulation"

    But maybe I'll need to run separate hardware for the pfSense cluster?
    I'll be trying some experiments over the next week or two to see if I can figure this out... appreciate any advice. TVMIA
     
  2. 3zzz

    3zzz New Member

    Well, I realized security is also a consideration. If the physical box is hooked to the WAN, we'll need to make sure there are no open ports other than to pfSense. But assuming we use NAT to all the other VMs, how much of a concern is this really?
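    To make that concrete, my understanding is the ruleset boils down to something like this pf sketch (addresses are placeholders; pfSense generates the real rules from the GUI):

        # pf sketch of "NAT in, everything else blocked" (placeholder
        # addresses; pfSense builds the actual ruleset from the GUI)
        ext_if = "em0"
        web_pool = "{ 10.0.0.11, 10.0.0.12, 10.0.0.13 }"

        # forward inbound web traffic to the pool, round-robin
        rdr on $ext_if proto tcp from any to ($ext_if) port 80 -> $web_pool round-robin

        # default deny inbound on the WAN, then allow only the redirected traffic
        block in on $ext_if all
        pass in on $ext_if proto tcp from any to $web_pool port 80 keep state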
     
  3. mmidgett

    mmidgett Member

    I think the thinking behind this is not to put all your eggs in one basket. Depending on your network load and the power of your CPU, it is definitely doable. Just remember: if your ESXi server dies, so does your whole network. But if this is used in a colocation rack and you're trying to save space, then as a temporary solution I don't think you'll have a problem. Also, most pfSense servers don't need to be more than 1 GHz; if you're not running lots of VPN connections, 500 MHz will do.
     
  4. 3zzz

    3zzz New Member

    Thanks mmidgett!

    Well, I was thinking of having 2 identical physical ESXi servers; on each would be pfSense and synced copies of all the VMs (or perhaps shared storage?).

    I will set up VMs from each in a pool, so that if the primary fails and the secondary takes over, half the pool will still be there to serve clients.

    More of a long-term permanent solution if I get it to work as I'm thinking...

    That's great - I don't plan on much VPN at all, but I hope to push 100 Mbps+ through the setup.
     
  5. neofire

    neofire Member

    Hey 3zzz

    The reasons I suggested a physical machine if pfSense is going to be the edge firewall (and mmidgett nailed one of them) are, first, disaster recovery (the all-eggs-in-one-basket situation), and second, security and expandability. I have seen one situation where a client had a VM firewall on the same host as his production VMs (his firewall was set up quite poorly), and someone managed to hack in, gain access to his VMware ESXi console, and cause considerable damage to his environment.

    As regards expandability: if you want to build a DMZ, for example, I personally like separate hardware to control it, and I prefer not to have my ESXi host touching the DMZ at all.

    If you have any more questions or concerns, feel free to ask.
     
  6. 3zzz

    3zzz New Member

    Thanks neofire!!
    I think I will have 2 identical machines for redundancy; it seems that for my purposes it'll be cheaper than shared storage.
    For security I will limit access to ESXi to the local network only, and use pfSense to block LAN addresses from spoofing over the WAN, so I would hope ESXi is not accessible to hackers unless they first gain access to a LAN machine.
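    In pf terms that anti-spoofing piece is roughly the following sketch (pfSense exposes the same idea as the "Block private networks" checkbox on the WAN interface; the interface name is a placeholder):

        # pf sketch of the anti-spoofing idea (pfSense's WAN
        # "Block private networks" option does the equivalent)
        ext_if = "em0"
        antispoof quick for $ext_if
        block in quick on $ext_if from { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } to any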

    Well thanks for your advice, I'll let you know how it goes!
     
  7. neofire

    neofire Member

    Sounds like you've got it all sorted. Hope it works out, and it would be good to hear how you go.

    I am posting a failover HowTo this week (I have a bit of catching up to do), and hopefully a few more will go up with different pfSense configurations.
     
  8. 3zzz

    3zzz New Member

    Well, to be honest, I am struggling to figure out what kind of storage I will need for my VM hosts in production. I figure we'll have about 6-8 VMs running on each.

    Will I notice performance issues, or would we get by just fine with onboard SATA drives?
    Or will we have to spend more for:
    onboard SAS drives,
    onboard RAID (with SATA or SAS drives),
    an external SAN (3ware RAID with SATA drives),
    or something more?

    From my reading it sounds like you really have to test it and see... I can imagine my boss won't like shelling out all that cash for a VM host server if we then test it, find that performance is poor, and need to spend another $5K+ on a SAN... I'm thinking of going with a couple of onboard SAS drives for the heavy-access servers and SATA for the lighter ones, and seeing how it goes...
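    One cheap sanity check before spending anything: benchmark the disks we already have from inside a guest, with a crude sequential test like this (synthetic numbers only, nothing like a real VM workload):

        # crude sequential throughput check from inside a guest
        # (synthetic; treat results as a rough floor, not a VM workload)
        dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct   # write speed
        dd if=testfile of=/dev/null bs=1M iflag=direct              # read speed
        rm testfile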
    thanks for any suggestions!
     
  9. neofire

    neofire Member

    It all depends on your virtualization requirements: how many VMs you will run, what applications you want to virtualize, how many virtual hosts you need to run, etc.

    Can I ask what you're intending to build? I might be able to recommend some things.
     
  10. 3zzz

    3zzz New Member

    Thanks neofire!

    We have an existing system with a web server that is almost constantly overloaded; it's a quad-core and not very redundant. We also have a couple of other servers that don't do much. So I want to put those inside the ESXi host and turn the web server into 2 or 3 VM web servers load-balanced with pfSense. With 12 cores on the VM host, hopefully this will perform better than the current web server.
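    As I understand it, the pool side of that (which the pfSense 2.0 Load Balancer GUI generates for you, with relayd under the hood) amounts to something like this sketch, with placeholder addresses:

        # relayd.conf sketch (placeholder addresses; the pfSense 2.0
        # Load Balancer GUI generates the equivalent config for you)
        table <webpool> { 10.0.0.11, 10.0.0.12, 10.0.0.13 }

        redirect www {
                listen on 203.0.113.10 port 80
                forward to <webpool> check http "/" code 200
        }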

    Then, if all goes according to plan, add a second identical VM host with all identical VMs and set it up with the pfSense failover configuration.

    By doing all this I hope we will
    a) improve the performance of the site by spreading the load over several VMs
    b) have a redundant system so there will be no downtime due to hardware failures
    c) free up valuable rack space by going from 3 towers to 2 1Us
    d) move our systems toward VMs for backups, clones, and hardware independence
     
  11. 3zzz

    3zzz New Member

    I know a SAN would be best, but I'm figuring I'll get 2 SAS drives, dedicate one to each virtual web server (that will be an improvement), and just use local SATA for the admin boxes.
     
  12. neofire

    neofire Member

    SAS drives would be a good start for now if you're restricted with funds; just have plenty of RAM, and you may have to tweak your resource pools (within the hypervisor) depending on how things go. Also, how many network cards does each box have?

    What operating systems are you using?

    If you are looking at new servers and would like something that is value for money but packs a punch, take a look at these:
    http://www.dell.com/au/business/p/poweredge-t610/pd

    I have been rolling out medium-sized businesses with those; they're good.
     
  13. 3zzz

    3zzz New Member

    The guests will be running Ubuntu Server; the host will be ESXi 5 with 24 GB of RAM.

    I was only planning to have 3 NICs in each host. As long as the traffic doesn't exceed the total capacity of the cards, does it matter much?
     
  14. neofire

    neofire Member

    No, it won't matter much. Depending on the VM host setup and the number of VMs you're running, I sometimes suggest (if you can afford to) dedicating a network card per VM, or you can just pool them together and let the ESXi host handle the traffic.
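    For the pooled approach it's just extra uplinks on one vSwitch; on ESXi 5 the command-line version is roughly this (the vmnic and vSwitch names are placeholders - list yours with "esxcli network vswitch standard list"):

        # sketch: pool two physical NICs as uplinks on one vSwitch
        # (ESXi 5 esxcli; vmnic/vSwitch names are placeholders)
        esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
        esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0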

    I don't think you will have any issues either way.
     
