I've just arrived at a misbehaving server. The first symptom noted is an "auth failed" error on an email connection. Looking at the process list, there are many instances of server.sh running, perhaps more than a hundred:

```
/bin/sh -c /usr/local/ispconfig/server/cron.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
/bin/sh -c /usr/local/ispconfig/server/server.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
/bin/bash /usr/local/ispconfig/server/cron.sh
(etc.)
```

How can I kill these? Is there a quicker way than doing it manually by PID?
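Since all the runaway jobs share the same command-line pattern, one option is to kill them by pattern with pgrep/pkill rather than by individual PID. A minimal sketch (dry-run with pgrep first; the `-a` flag is procps-ng):

```shell
#!/bin/sh
# Dry run: list every process whose full command line matches the
# pattern. -f matches the whole command line, not just the name.
pgrep -af 'ispconfig/server/(server|cron)\.sh'

# Once the listed PIDs look right, kill them all in one shot:
# pkill -f 'ispconfig/server/(server|cron)\.sh'
```

Keeping the kill commented out until the dry run is verified avoids matching (and killing) something unintended.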
I was able to use `pkill` to remove the errant processes. Still trying to get services back online, though: when I run /usr/local/ispconfig/server/server.sh there is no output at all.
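When a script exits silently like this, a low-effort way to see where it stops is to trace it with the shell's `-x` flag and check its exit status. A sketch (one possibility worth ruling out is a stale lock left behind by the piled-up runs):

```shell
#!/bin/sh
# Print every command the script executes; the last traced lines
# show where it stops without producing output.
sh -x /usr/local/ispconfig/server/server.sh

# The exit status is also informative when there is no output:
sh /usr/local/ispconfig/server/server.sh; echo "exit status: $?"
```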
After cleaning up all the processes I tried running server.sh again and still saw no output. It turns out MySQL was the culprit: the server had crashed and needed a manual kill and restart. Now server.sh runs successfully. Weird.
This morning I found the server in the same state as before: piled-up cron.sh and server.sh jobs and an unresponsive MySQL. This time I managed to catch an error from MySQL: "[ERROR] Error in accept: Too many open files". Hrm.
MySQL's `open_files_limit` was still at the default (1024), so I increased it significantly and restarted all services. Hopefully that fixes things.
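For anyone hitting the same wall: raising `open_files_limit` can take two changes, because the OS-level limit on the mysqld process caps whatever my.cnf asks for. A sketch (exact file paths and the service name vary by distro and MySQL/MariaDB version; on systemd distros the cap comes from the unit's `LimitNOFILE`):

```
# In the MySQL config, e.g. /etc/mysql/my.cnf (path varies):
[mysqld]
open_files_limit = 65535

# And, on systemd systems, a unit override, e.g.
# /etc/systemd/system/mysql.service.d/limits.conf:
[Service]
LimitNOFILE=65535
```

After adding an override, run `systemctl daemon-reload`, restart the service, and re-check the limit in /proc.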
Thanks, till! I've confirmed via `cat /proc/$(cat /var/run/mysqld/mysqld.pid)/limits` that the max open files limit is now raised substantially for MySQL. I also found this reference helpful: https://duntuk.com/how-raise-ulimit-open-files-and-mysql-openfileslimit