Hello, I'm facing a problem with my site. It gets a lot of traffic and loads very slowly, ~30-40 seconds. I first noticed the problem when I got some 500 Internal Server errors; I found that IPCCommTimeout 31 was too low and changed it to 60. Now I don't get 500 errors anymore, but the site still loads really slowly, ~40 seconds... I think fcgid is under heavy pressure and doesn't work as expected. Can you give me some information on how to make things better?
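For reference, this is roughly what I changed in the vhost config (a sketch; my setup uses mod_fcgid, where IPCCommTimeout is the old pre-2.3.6 spelling of what newer releases call FcgidIOTimeout):

Code:
<IfModule mod_fcgid.c>
    # Max seconds mod_fcgid waits for data from the PHP process
    # before giving up with a 500
    # (old name: IPCCommTimeout, new name since 2.3.6: FcgidIOTimeout)
    IPCCommTimeout 60
</IfModule>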
I use Debian. The site is updated minute by minute; if I enable caching, won't it take longer for users to "see" the new content?
No, updates are still shown to users immediately. php-apc is a PHP opcode cache: it caches the compiled bytecode of your scripts, not their output, so it speeds up PHP without delaying new content. It is available as a package for Debian Squeeze.
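If you want to try it, installing it on Squeeze should be as simple as this (assuming the stock Debian PHP packages):

Code:
apt-get install php-apc
# Restart Apache so the php-cgi processes are respawned and load the apc extension
/etc/init.d/apache2 restart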
I tried another Joomla site on a different machine, and the results are a big difference: ~2 seconds to load the site. As for the problem in this thread, I now think it's a code problem, because I moved the site to a machine with an 8-core Xeon processor and 4 GB RAM and saw very little change.
Same problem, how to configure fcgid?

Hello, I have the same problem: running Debian Squeeze + ISPConfig 3.0.3 + Apache 2 + PHP + fcgid + suexec, with approximately 200 websites hosted on the same machine with only 4 GB RAM. Here is my fcgid config (from a website vhost config):

Code:
IdleTimeout 3600
ProcessLifeTime 7200
# MaxProcessCount 1000
DefaultMinClassProcessCount 3
DefaultMaxClassProcessCount 100
IPCConnectTimeout 8
IPCCommTimeout 360
BusyTimeout 300

When Apache starts everything goes fine, but after a few hours (2-3 hours) a huge number of processes are running and all websites respond very slowly (about 30-60 s before a page is displayed):

Code:
root@ircf-web:~# pstree
init─┬─acpid
     ├─apache2─┬─150*[apache2]
     │         ├─apache2───302*[php-cgi]

My questions are:
- Which fcgid parameters should I modify to fix this, and how? I couldn't find a tutorial on tuning the fcgid config depending on my resources (websites/RAM).
- Where should I modify it? Maybe /usr/local/ispconfig/server/conf(-custom)/vhost.conf.master?
- Once it's modified, how can I re-generate each vhost config file? (Remember, I have 200 vhosts.)
- Should I install php-apc too, or will that add significant overhead?

Notes:
- My boss is negotiating with our ISP to increase the RAM; we'll try to add as much as possible...
- This problem has been happening since yesterday, when we imported about 70 new websites from an old server.

Thank you for your support.

Edit:
- I've found the official Apache mod_fcgid documentation; it helped me understand the fcgid config.
- I've found a (temporary?) solution:
1. I commented out the fcgid config (see the lines above) in /usr/local/ispconfig/server/conf/vhost.conf.master
2. I modified each website's config so the new vhost file would be regenerated for each website
3. I copied the fcgid config from the latest ISPConfig version into /etc/apache2/mods-enabled/fcgid.conf, so I can tweak fcgid globally without having to modify each website:

Code:
FcgidIdleTimeout 30            # Default is 300; processes weren't recycled often enough, I will set it higher when I have more RAM
FcgidIdleScanInterval 30       # Default is 120; even with FcgidIdleTimeout < 120, processes were only removed every 120 seconds
FcgidProcessLifeTime 3600
FcgidMaxProcesses 100          # Default is 1000; there were too many processes for my server to handle, I will set it higher when I have more RAM
FcgidMinProcessesPerClass 0
FcgidMaxProcessesPerClass 10   # Default is 100; I had to limit each site so that every site gets a chance
FcgidConnectTimeout 3
FcgidIOTimeout 360
FcgidBusyTimeout 300
FcgidMaxRequestLen 1073741824  # Avoids some HTTP 500 errors with upload scripts

Any other ideas are welcome. Thank you again.
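A rough way to size FcgidMaxProcesses against the available RAM (my own back-of-the-envelope method, not an official formula): measure the average resident size of a php-cgi process and divide the memory you can spare by that number.

Code:
# Average resident size (KB) of the running php-cgi processes
ps -C php-cgi -o rss= | awk '{ sum += $1; n++ } END { if (n) print sum/n " KB average over " n " processes" }'

If a php-cgi process averages, say, ~30 MB, then 100 of them already need ~3 GB, which is about all a 4 GB box can spare next to Apache and MySQL; that matches the FcgidMaxProcesses 100 limit above.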
Help needed... Unfortunately, after trying different settings (see the latest settings in the post above), the best I could do was to keep the whole server from crashing by setting a "safe" max process limit (100). The problem is that all 200 sites can't run simultaneously; some sites may grab lots of processes while others won't get one. I also installed APC (apt-get install php-apc), which really improved site performance, assuming a process is available for the site, so that didn't solve my problem either. Tomorrow I'll try to increase the max process limit a little, set a max process count per site, and upgrade the RAM asap. If anyone has a clue, it would save my life! ty

PS: The previous server ran the 200 websites with mod_php and only 2 GB with no problem. However, security and quota handling were problematic...

Edit:
- This morning I set FcgidIdleScanInterval and FcgidMaxProcessesPerClass (see the modified config above); wait and see...
- I noticed that some (most?) processes are not killed after FcgidIdleTimeout, especially once the process count reaches FcgidMaxProcesses; the count seems not to decrease and sticks at FcgidMaxProcesses (some processes are still recycled anyway). I couldn't find a forum thread about this (a mod_fcgid bug?). To "fix" it I added "0 * * * * /etc/init.d/apache2 reload" to my crontab, which kills the processes every hour; I couldn't find a better way until now. Since I added this, the process count always stays under FcgidMaxProcesses (around 50-80 processes) and all sites work smoothly. I don't think this is a good solution though, and it may disconnect people every hour.
- Maybe the "processes not killed" bug is related to the Debian mod_fcgid version (2.3.6); the mod_fcgid changelog shows a similar bug fixed in the next version (2.3.7).
- Finally found a fix for the "processes not killed" bug: some websites' vhosts hadn't been updated and still had the old fcgid config. After updating all the websites, processes are killed perfectly, so I removed my crontab entry. I hope this bug won't come back if I set a specific fcgid config on some websites (according to the mod_fcgid changelog).
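In case someone else hits the same "processes not killed" symptom: a quick way I could have found the stale vhosts earlier (a sketch, assuming ISPConfig writes its vhosts as *.vhost files under /etc/apache2/sites-available, as it does here):

Code:
# List vhost files that still carry the old per-site fcgid directive names
grep -lE '^\s*(IPCCommTimeout|IdleTimeout|ProcessLifeTime|DefaultM(in|ax)ClassProcessCount)' \
    /etc/apache2/sites-available/*.vhost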