Perfect Server - getting slow

Discussion in 'Server Operation' started by halycon, Jan 22, 2015.

  1. halycon

    halycon New Member

    Hey,
    I've been fighting this problem for several months without success, so I'll try asking here.
    I set up a server following the perfect server guide based on Debian Wheezy. I've tried both nginx and apache2, but it's always the same.
    I'm running this on a VPS with 4 cores and 14 GB RAM on tiered storage with SSD boost.
    The problem in short: I start the web server and the sites run fine and fast. After 30-40 minutes the sites get incredibly slow. When I run a test on tools.pingdom.com or Google PageSpeed, I see that "wait" or "time to first byte" makes up 90% of the page loading time.
    For example:
    1-20 min after the web server starts:
    page loading time: 1.2 seconds
    including wait: 0.4 s
    40 min after the web server starts:
    page loading time: 10 seconds
    including wait: 9 s
    When I restart apache2, it's fast again.
    BUT ISPConfig is always running blazingly fast...? Even when my port-80 sites have 20 s loading times, ISPConfig on port 8080 runs fine.

    Some history:
    I used to run some Joomla sites on a plain Ubuntu server installation with apache2, FCGI and PHP 5, everything at defaults, and I never had any problems.
    Then things grew and I needed more comfort setting up sites and mail, so I installed "the perfect server" with ISPConfig. First I went with nginx and PHP-FPM. I set everything up, transferred my sites to the new box, and things started behaving as described above. I spent hours and hours trying to adjust the PHP-FPM settings, but nothing helped.
    Then I rented another VPS from a different provider, set everything up again, transferred the sites: same problem.
    Then I rented yet another VPS and started from scratch, this time using apache2 with mod_fcgid.
    And... still the same problem.

    Now I don't know what to do. For now I've set up a cron job that restarts apache2 every hour, but that's not a solution.
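    For the record, the hourly-restart stopgap can live in root's crontab (a sketch; the init-script path matches Debian Wheezy, adjust for other setups):

```text
# In root's crontab (edit with: crontab -e)
# Restart apache2 at the top of every hour. Only a debugging stopgap:
# it drops in-flight requests each time it runs.
0 * * * * /etc/init.d/apache2 restart > /dev/null 2>&1
```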
    Please give me any hint where I have to look for what's going wrong. There is nothing in the error logs. I activated the Apache status page and attached screenshots of the output, one while the server is good:
    good.PNG
    and bad:
    bad.PNG

    I tried adjusting the apache2 MPM settings:
    Code:
    <IfModule mpm_prefork_module>
        StartServers          10
        MinSpareServers       10
        MaxSpareServers      15
        MaxClients          250
        MaxRequestsPerChild   0
    </IfModule>
    
    <IfModule mpm_worker_module>
        StartServers          10
        MinSpareThreads      50
        MaxSpareThreads      100
        ThreadLimit          64
        ThreadsPerChild      25
        MaxClients          250
        MaxRequestsPerChild   0
    </IfModule>
    
    
    <IfModule mpm_event_module>
        StartServers         10
        MinSpareThreads      50
        MaxSpareThreads     100
        ThreadLimit          64
        ThreadsPerChild      25
        MaxClients          250
        MaxRequestsPerChild   0
    </IfModule>
    
    
    and also some fcgid settings in the Apache Directives field of the website in ISPConfig:
    Code:
    <IfModule mod_fcgid.c>
    FcgidBusyScanInterval 90
    FcgidBusyTimeout 600
    FcgidErrorScanInterval 3
    FcgidFixPathinfo 1
    FcgidIdleScanInterval 70
    FcgidIdleTimeout 360
    FcgidIOTimeout 1000
    FcgidMaxProcesses 1000
    FcgidMaxProcessesPerClass 100
    FcgidMaxRequestInMem 268435456
    FcgidMaxRequestLen 1073741824
    FcgidMaxRequestsPerProcess 0
    FcgidMinProcessesPerClass 3
    FcgidOutputBufferSize 1048576
    FcgidProcessLifeTime 3600
    FcgidSpawnScore 1
    FcgidSpawnScoreUpLimit 10
    FcgidTerminationScore 2
    FcgidTimeScore 2
    FcgidZombieScanInterval 3
    </IfModule>
    But there is no noticeable difference.
    I'd be thankful for any hints. Thank you :)
     
  2. till

    till Super Moderator Staff Member ISPConfig Developer

    If you get the same problem with apache and nginx, which are completely different server systems, while ISPConfig (which is just a simple PHP-based website as well) stays fast, then the problem is most likely related to your websites: maybe a plugin that accumulates data and slows the site down, a problem with MySQL, or some code in your site that connects to an external service. Changing the PHP-FPM and FCGI defaults should not be necessary. Do you get any errors in the error.log of the website?

    Some of our client systems handle more than a million pageviews with the nginx perfect server setup at default settings (we just increased the MySQL max connections and set PHP-FPM to ondemand mode with a limit of 500 simultaneous connections), so unless you have far more pageviews than that, the defaults should be fine.
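    For reference, the ondemand tweak described above would look roughly like this in a PHP-FPM pool file (a sketch; the path and values are illustrative, not taken from this thread):

```ini
; e.g. /etc/php5/fpm/pool.d/www.conf (path varies by distribution)
pm = ondemand                  ; spawn workers only when requests arrive
pm.max_children = 500          ; cap on simultaneous PHP workers
pm.process_idle_timeout = 10s  ; reap workers idle longer than this
```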
     
  3. halycon

    halycon New Member

    Well, of course I don't have millions of views :)
    I didn't mean to say that the "perfect server" setup is causing my problems; I'm sorry if my post sounded like that.
    When the server slowdown hits, all my sites are affected except ISPConfig itself. That's the mysterious thing. Joomla sites, ownCloud and WordPress sites are all affected. I don't get it...
     
  4. till

    till Super Moderator Staff Member ISPConfig Developer

    Hmm, that's really strange indeed, since each site has its own FCGI process and all sites use MySQL (just like ISPConfig). Have you checked with top and iotop whether there is high CPU or IO load when the sites slow down?
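    The top/iotop check suggested here can be scripted as a first pass (a minimal sketch, Linux only; `iotop` is a separate package and needs root):

```shell
#!/bin/sh
# Compare the 1-minute load average against the CPU core count as a
# quick signal of CPU or IO pressure when the sites slow down.
cores=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)
echo "1-min load average: $load on $cores cores"

# Sustained load above ~1 per core warrants a closer look with:
#   top        - per-process CPU usage
#   iotop -o   - processes currently doing disk IO (needs root)
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
    echo "load exceeds core count: check top and iotop"
else
    echo "load looks normal"
fi
```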
     
  5. halycon

    halycon New Member

    iostat looks like this:
    I'm not really able to understand this. Do you see a problem in this output?
    I just installed New Relic, because it's recommended here and there, and I see the database response time is 20 ms at most. So no problem there, I think.

    Edit:
    I've been clicking around in New Relic for the last 30 minutes and studied the slowest transactions. I noticed "memcache get" and "memcache set" calls everywhere, taking up to 10 s. I did an apt-get remove for everything memcache-related: php5-memcache, memcache and so on. The sites' speed is drastically improved now. Maybe I finally found the error; I'll keep looking.
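    For anyone else chasing this symptom: before removing memcache outright, the daemon can be probed directly over its text protocol (a sketch assuming the default 127.0.0.1:11211 and an installed `nc`; it prints a notice if nothing is listening):

```shell
#!/bin/sh
# Query memcached's stats: if even this takes seconds to answer, the
# cache layer itself is the bottleneck, matching what New Relic showed.
stats=$(printf 'stats\r\nquit\r\n' | nc -w 2 127.0.0.1 11211 2>/dev/null)
if [ -n "$stats" ]; then
    # Pull out the counters most relevant to performance.
    msg=$(printf '%s\n' "$stats" | grep -E 'cmd_get|get_hits|get_misses|evictions')
else
    msg="memcached not reachable on 127.0.0.1:11211"
fi
echo "$msg"
```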
     
    Last edited: Jan 23, 2015
  6. halycon

    halycon New Member

    It seems memcache really was the problem. I don't know what's wrong with it, but I'm so happy I found this out. It took me 7(!!) months! :D
    new_relic_screen.PNG
     
  7. till

    till Super Moderator Staff Member ISPConfig Developer

    That's quite interesting. Memcache is a pure in-memory caching system; you can't really slow it down or speed it up with config changes, since basically you just set the amount of memory it may use. So maybe you are hitting some kind of memory usage penalty on the VM host when memcache starts to use too many resources, or memcache is configured to use too much RAM. E.g. if the host system is overbooked and starts putting the memory that memcache uses onto disk (swap), that would explain the behaviour.
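    The swap theory can be checked from inside the guest (a minimal sketch reading /proc/meminfo; `vmstat 1` also shows live swap-in/swap-out traffic in its si/so columns):

```shell
#!/bin/sh
# If the guest is dipping into swap, memcache's "in-memory" data may
# actually live on disk, which would explain multi-second cache calls.
# Note: host-side overbooking is invisible from inside the guest.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
swap_used=$((swap_total - swap_free))
echo "swap used: ${swap_used} kB of ${swap_total} kB"
```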
     
