Sorry, I didn't research whether it is deprecated or not. You get the idea: if you know it is deprecated, use what you propose instead! If you can experiment a little, it could be a good idea to change the values in every vhost. For sure you will have CGI processes in memory for less time. Debian will continue using the cache, but you can limit swapping. Tuning a web server is an everyday job.
Seems web9 was the culprit. I changed the proposed timeout to 300 from the original 3600 and it seems to work. Still very high resource usage, but at least I know who is using it and why: http://zice.ro/munin/serverkompetenz.net/h1870666.serverkompetenz.net/index.html

Every time this client sends his newsletter to about 10,000 recipients (all verified, no spam) the server kind of bogged down. It looks better right now with the timeout set to 300 (at least that is the only thing I changed and it works), though I'm not sure if that really is the reason it works better now.

Still, back to the newsletter: I have set it to send out 50 mails/minute and it's using SMTP. The server is set up according to this how-to: http://www.howtoforge.com/perfect-server-debian-squeeze-with-bind-and-courier-ispconfig-3 The only difference is that I didn't start from scratch but from the Strato minimal installation, which means the whole thing is running on a software RAID 1. Possibly this is the bottleneck? Any good links for checking/tuning software RAIDs, or isn't there much to tune?

###edit### Forgot to add that the newsletter is a WordPress plugin and relies on a certain WordPress function that mimics cron; basically it depends on having visitors to trigger it, so if nobody visits the site, the newsletter doesn't get sent as it's never triggered... just for your info.
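For reference, the basic checks I can run on the array itself (assuming the mirror is /dev/md0, which may be numbered differently on other setups):

cat /proc/mdstat                 # shows whether the mirror is clean or resyncing
mdadm --detail /dev/md0          # per-member state of the array
hdparm -t /dev/sda               # rough sequential read speed of one member disk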
I guess the only part where you can tune the system is to check and rewrite the PHP code of the newsletter addon so it scales better for large newsletters. Normally such software should not use many resources, but if it e.g. tries to load all subscriber data into one large array at once, or makes similar mistakes, it can bring down a server.
IdleTimeout 300 saved me: no more idle php-cgi processes sitting around. Now I am on to debugging the HD I/O and latency, and I found the problem too: MySQL.

The million dollar question is: how can I set this as a default parameter for all new sites that will be created, and keep that change persistent even if I update ISPConfig 3?

When moving to this server it seems I took over almost all relevant settings, except that MySQL was running with some default configuration and nothing was cached, so sending 10,000 mails was killing the HD because of all those uncached DB accesses. Fine-tuning MySQL right now to balance things out so that I am neither using too much RAM for the cache nor having too many uncached HD accesses.
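This is roughly where I am starting in the [mysqld] section of /etc/mysql/my.cnf; the numbers are guesses for a 4 GB box, not verified values, so I will let the tuning scripts confirm them after a day or two of traffic:

[mysqld]
key_buffer              = 128M    # MyISAM index cache (default WordPress tables are MyISAM)
query_cache_type        = 1
query_cache_size        = 64M     # repeated identical SELECTs from the plugin come from RAM
table_cache             = 512
tmp_table_size          = 64M
max_heap_table_size     = 64M
innodb_buffer_pool_size = 256M    # only matters if some tables are InnoDB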
mkdir /scripts/
cd /scripts
wget http://www.day32.com/MySQL/tuning-primer.sh
wget http://mysqltuner.pl/mysqltuner.pl
chmod 700 tuning-primer.sh mysqltuner.pl
perl /scripts/mysqltuner.pl
/scripts/tuning-primer.sh

You can play with these for MySQL. You have to find the template ISPConfig uses to create every vhost (I don't know where to change it, but I am sure that you will find it). As I remember, with each update of ISPConfig ALL FILES are replaced, so... you can make a script that uses sed to replace the lines after each update, or just modify them manually after each update... Till and Falko will have a better solution, I think, but... I think that 1 hour of idle life is not the best for a "generic installation". Maybe it is the best for a server with 8-12 GB of RAM, but that is just an opinion (they know exactly why they set that timer).
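Something like this could re-apply the change after an update (I am assuming the generated vhosts end up as *.vhost files under /etc/apache2/sites-available; check the paths on your box first):

#!/bin/sh
# re-apply the shorter idle timeout to every generated vhost after an ISPConfig update
for f in /etc/apache2/sites-available/*.vhost; do
    sed -i 's/IdleTimeout 3600/IdleTimeout 300/' "$f"
done
/etc/init.d/apache2 reload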
Thanks, I know those. Add this one to the list: http://hackmysql.com/mysqlreport It's just that since this server actually has little traffic, I have to wait for 1-2 days after each MySQL config change I make. Let's see if Till or Falko add some idea as to where to make that change permanently.
I think (99.9% sure) that you must change the time in this file: /usr/local/ispconfig/server/conf/vhost.conf.master But wait for Falko's and Till's post xD Edit: line 158
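If the line number is different in your version, you can locate and change it like this (make a backup of the template first, and check with grep that it really contains a literal "IdleTimeout 3600" and not a placeholder; the updater will overwrite this file again on the next ISPConfig update):

cp /usr/local/ispconfig/server/conf/vhost.conf.master /usr/local/ispconfig/server/conf/vhost.conf.master.bak
grep -n "IdleTimeout" /usr/local/ispconfig/server/conf/vhost.conf.master
sed -i 's/IdleTimeout 3600/IdleTimeout 300/' /usr/local/ispconfig/server/conf/vhost.conf.master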
Not 100% solved :-( What does this tell you? http://zice.ro/munin/serverkompetenz.net/h1870666.serverkompetenz.net/index.html The server got stuck last night; when I came to work today, all I saw with pstree were about 50 apache2 processes and about the same number of php-cgi procs. I can't spot anything unusual in the graphs, can you?
Yes, your system is swapping unnecessarily. You can try this:

echo 10 > /proc/sys/vm/swappiness

But I recommend making it permanent with (you can always roll back to 60; make a backup of this file first):

echo "vm/swappiness=10" >> /etc/sysctl.conf

Restart your system and watch the results. All my servers are configured with that value, but I haven't had the opportunity yet to increase the load on a server to see if it behaves as I want.

In your system, the kernel is using all the free memory as cache (that is good), but because you are using almost 2 GB of memory, Debian by default is prepared to begin swapping early (60 is the default value, read the explanations please)... so... lowering it to 10, your system should stop swapping until (this is JUST A GUESS) 80-90% of your memory is in use, and not begin at 30-40% as is happening right now...

You have your system, swap area and data on the same disks... all of them are fighting to write to the disk, and that slows everything down (swap mainly). In the future it would be a good idea to separate system/swap from user data files.

Every time a user accesses a site, it creates a cgi process in memory... if it is not used for 300 seconds, it will die... if it is used, it will remain in memory... if you let them stay idle in memory for 3600 secs, and a user accesses it before the 3600 secs are reached, your process will live forever in memory... with 300 you free memory quickly, lowering the memory consumption. BUT, if you get a request to every site you are hosting in less than 300 secs, well... you will increase the memory consumption, no matter if you set 300 or 3000000000 seconds... Changing this setting only fixed one of your "problems" (freeing resources when they are not in use, so... with this you lower your memory consumption and minimize your OS's need to swap). Your main problem, I think, is swapping... Regards.
Hmm... I had set swappiness to 10 already when you first proposed it, so I'm not sure what's going on here :-( There might be some vhosts left without the timeout changed from 3600 to 300... Talking about separating files from swap: yeah, I know, but most entry-level root servers come with two equal-sized HDs, so they are meant to be used in a software RAID... besides, seeing the low usage of this server....
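I'll double-check both points like this (the vhost path assumes the standard ISPConfig 3 layout on Debian):

cat /proc/sys/vm/swappiness                                       # should print 10
grep -l "IdleTimeout 3600" /etc/apache2/sites-available/*.vhost   # lists vhosts still on the old value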
You have a really cute case here xD, I will see what I can find out about your problem... In the meantime, you can set swappiness to 0 and see how it works (this will not disable swapping, it will make your server wait as long as possible before it begins to swap). By the way... when your server starts to crawl, what are you doing to "restore" it to a good state? Are you rebooting it? Or do you just restart some services? Which ones? If you don't understand what I am asking, please advise and I will use a translator to re-phrase the question xD. I beg your pardon for my poor English.
I understand you just fine, no worries. To restore it, if it is not too late, I do:

/etc/init.d/apache2 restart
killall -9 php-cgi
/etc/init.d/apache2 restart
/etc/init.d/mysql restart

But sometimes it is simply too late; I can't even use the console to access the server when it's using 4 GB of RAM and swapping another 4 GB. Then I need to reboot via my hosting back panel - but you can see when it has been rebooted in the stats I linked further up...
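I am thinking of running something like this in a screen session so I can step in before SSH becomes unusable (just a rough sketch; the 3500 MB threshold is a guess for my 4 GB box):

#!/bin/sh
# crude watchdog: if used memory climbs past the threshold, bounce apache and php-cgi
THRESHOLD=3500   # MB
while true; do
    USED=$(free -m | awk '/^Mem:/ {print $3}')
    if [ "$USED" -gt "$THRESHOLD" ]; then
        /etc/init.d/apache2 stop
        killall -9 php-cgi
        /etc/init.d/apache2 start
    fi
    sleep 60
done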
I beg your pardon... I made a mistake in the line I wrote for swap... it is not a "/", it is a "." !!! The correct line for /etc/sysctl.conf must be: vm.swappiness=10 Sorry
Thanks, but that doesn't change anything, since I haven't restarted my system yet... so that was not the problem.
xD If you restart your system, nothing will happen if that value is wrong xD The problem is that right now you still have Debian set to swap like crazy! xD xD
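To make sure it is really active right now, without waiting for a reboot:

sysctl -w vm.swappiness=10     # applies the value immediately
sysctl -p                      # re-reads /etc/sysctl.conf, so the corrected line must be in there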
It's happening again. It started freezing three times already today. All I could do was kill Apache, wait a few minutes and start it again. The problems "seem" to only happen when a friend I am hosting on this server sends his daily newsletter to about 8500 recipients. And yes, it's legit, no spam :-(

I am using mailpress.org (a WordPress plugin) to run it for him, sending 50 mails/minute; that makes about 3h to finish the whole thing. It should all work just fine. Still I end up with tons of apache2 and php-cgi processes and everything freezes in time :-( The point is, it's a friend, I can't kick him off the server, and I do not know what I am doing wrong :-(

Look at this, this is not normal:

top - 20:56:48 up 24 days, 18:49, 1 user, load average: 70.03, 48.94, 40.49
Tasks: 378 total, 3 running, 375 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.3%us, 4.1%sy, 0.1%ni, 0.0%id, 93.1%wa, 0.1%hi, 0.3%si, 0.0%st
Mem: 4028824k total, 4003044k used, 25780k free, 668k buffers
Swap: 8191984k total, 1887344k used, 6304640k free, 1661904k cached
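That 93.1%wa says the CPUs are mostly waiting for the disks, so next time it happens I will try to catch it with these (iostat comes from the sysstat package):

vmstat 5       # watch the wa column and si/so (swap in/out)
iostat -x 5    # %util near 100 on the md/sd devices means the disks are saturated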
Oh... normally a newsletter MUST be on a SEPARATE server and not on a shared server, because of that... I would offer a virtual server to whoever wants to send newsletters... I don't know of another solution... 8500 mails eat your server...
I didn't research this... but every hosting provider here in Argentina offers a VPS if you want to send newsletters, because of performance... they don't allow newsletters on a shared server... It is an eternal discussion with clients, because they say that their emails are NOT spam, but that is not the problem; the problem is that they are sending massive amounts of email... so... the solution is to use a VPS and not a shared server... I didn't test it myself, but the main reason for this is that a newsletter eats all the server's resources... it is terrible for all the users allocated on the shared server... I wish I could give you more information, but I don't have it... maybe in the future I can say how problematic this is... but I will not allow mass mail from my shared server... I will offer a VPS, because in the worst case the VPS becomes unusable, and not the entire server, nor the other VPSes. Regards,