I noticed by coincidence that the following services have failed.
- How bad is this?
- Anything to be concerned about?

Code:
root@vps:~# systemctl --state=failed
  UNIT                      LOAD   ACTIVE SUB    DESCRIPTION
● certbot.service           loaded failed failed Certbot
● clamav-daemon.service     loaded failed failed Clam AntiVirus userspace daemon
● quotaon.service           loaded failed failed Enable File System Quotas
● snap.lxd.activate.service loaded failed failed Service for snap application lxd.activate

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

4 loaded units listed.
root@vps:~#
root@vps:~# free -h
               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       593Mi       182Mi       137Mi       1.1Gi       1.0Gi
Swap:             0B          0B          0B
ISPConfig uses ClamAV (if it is a mail system) and quota (if it is a web server). Try to start these two and check if they start now. If not, check why they failed.
Code:
root@vps:~# service quotaon start
Job for quotaon.service failed because the control process exited with error code.
See "systemctl status quotaon.service" and "journalctl -xe" for details.
root@vps:~#
root@vps:~# systemctl status quotaon.service
● quotaon.service - Enable File System Quotas
     Loaded: loaded (/lib/systemd/system/quotaon.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2024-01-07 10:50:25 UTC; 18s ago
       Docs: man:quotaon(8)
    Process: 3439 ExecStart=/sbin/quotaon -aug (code=exited, status=2)
   Main PID: 3439 (code=exited, status=2)

Jan 07 10:50:25 vps systemd[1]: Starting Enable File System Quotas...
Jan 07 10:50:25 vps quotaon[3439]: quotaon: cannot find //aquota.group on /dev/sda1 [/]
Jan 07 10:50:25 vps quotaon[3439]: quotaon: cannot find //aquota.user on /dev/sda1 [/]
Jan 07 10:50:25 vps systemd[1]: quotaon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 07 10:50:25 vps systemd[1]: quotaon.service: Failed with result 'exit-code'.
Jan 07 10:50:25 vps systemd[1]: Failed to start Enable File System Quotas.
root@vps:~#
root@vps:~# journalctl -xe
-- A start job for unit quotaon.service has finished with a failure.
--
-- The job identifier is 412 and the job result is failed.
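The two "cannot find //aquota.*" lines mean the quota index files are missing on /. A quick way to check whether the root filesystem is even mounted with quota options is a sketch like this (findmnt is part of util-linux; the /proc/mounts fallback is just for systems without it):

```shell
# Sketch: quotaon needs the usrquota/grpquota mount options on / plus the
# aquota.user and aquota.group index files created by quotacheck.
opts=$(findmnt -no OPTIONS / 2>/dev/null)
[ -n "$opts" ] || opts=$(awk '$2 == "/" {print $4; exit}' /proc/mounts)
echo "mount options on /: $opts"    # look for usrquota,grpquota in here
ls -l /aquota.user /aquota.group 2>/dev/null || echo "quota index files missing"
```

If the options are missing, mounting with quota enabled (and running quotacheck) would be the next step on a server where quota is supposed to work at all.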
Regarding quota:

"The issue is caused due attempting to remount the filesystem read-only."
"The issue only occurs on some virtual servers."
SOURCE: https://github.com/serghey-rodin/vesta/issues/1107

"Without this fix, the script tries to remount the filesystem read-only which (mostly) fails and returns an error. Also, running quotacheck two times is not needed. This fix has been tested on multiple servers."
SOURCE: https://github.com/serghey-rodin/vesta/pull/1098
Is this a virtual server? If yes, it likely uses a virtualization technology that either does not support quota, or where quota is managed from the host and not from within the VM. In that case you can ignore the failure.
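To find out which virtualization a server uses, systemd-detect-virt is handy. The interpretation in the comments below is a general rule of thumb, not something OVH-specific:

```shell
# Rule of thumb: in full VMs (kvm, qemu, vmware, xen) quota can work inside
# the guest; in containers (lxc, openvz, docker) it is managed by the host.
virt=$(systemd-detect-virt 2>/dev/null)
[ -n "$virt" ] || virt=unknown
case "$virt" in
    lxc|lxc-libvirt|openvz|docker) echo "container ($virt): ignore the quotaon failure" ;;
    kvm|qemu|vmware|xen)           echo "full VM ($virt): quota can work if the fs supports it" ;;
    *)                             echo "virtualization: $virt" ;;
esac
```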
Yes, it is a VPS, alias Virtual Private Server, at OVH. If OVH does not support such a thing, I wouldn't be surprised. Any suggestions for recommendable hosts? Maybe I should simply move to a non-virtual server?

I just read your comment on the topic in another thread:
"when you run this system inside a Linux container that is virtualized with LXC, then quota can not work. or when it uses a filesystem that does not support Linux filesystem quota, it also can not work."
Source: https://forum.howtoforge.com/threads/solved-installation-with-no-quota-flag.90155/

Thanks for the information on quota.

Regarding ClamAV / clamd:

* As read in several other threads, memory is the problem. I am running with only 2GB, and ClamAV simply demands a ton of RAM.
* Also as read in other threads: ClamAV is not critical, for various reasons.

To try to start ClamAV, run:
Code:
root@vps:~# clamd

If it returns "Killed":
Code:
root@vps:~# clamd
Killed
root@vps:~#

Looking into the syslog reveals the memory problem:
Code:
tail -n300 /var/log/syslog
(or:)
nano /var/log/syslog

Inside you will find something like:
Code:
kernel: [ 4055.220828] Out of memory: Killed process 5327 (clamd)

Thanks for the attention and information.

Ah, yes, and if anyone needs info on how freshclam behaves in this situation, here is my status:
Code:
root@vps:~# freshclam
ERROR: /var/log/clamav/freshclam.log is locked by another process
ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).
ERROR: initialize: libfreshclam init failed.
ERROR: Initialization error!
root@vps:~#
root@vps:~# ls -la /var/log/clamav/freshclam.log
-rw-r----- 1 clamav clamav 11302 Jan 7 11:17 /var/log/clamav/freshclam.log
root@vps:~#
root@vps:~# tail /var/log/clamav/freshclam.log
Sun Jan 7 10:17:13 2024 -> --------------------------------------
Sun Jan 7 11:17:13 2024 -> Received signal: wake up
Sun Jan 7 11:17:13 2024 -> ClamAV update process started at Sun Jan 7 11:17:13 2024
Sun Jan 7 11:17:13 2024 -> WARNING: Your ClamAV installation is OUTDATED!
Sun Jan 7 11:17:13 2024 -> WARNING: Local version: 0.103.9 Recommended version: 0.103.11
Sun Jan 7 11:17:13 2024 -> DON'T PANIC! Read https://docs.clamav.net/manual/Installing.html
Sun Jan 7 11:17:13 2024 -> daily.cld database is up-to-date (version: 27147, sigs: 2050511, f-level: 90, builder: raynman)
Sun Jan 7 11:17:13 2024 -> main.cld database is up-to-date (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)
Sun Jan 7 11:17:13 2024 -> bytecode.cld database is up-to-date (version: 334, sigs: 91, f-level: 90, builder: anvilleg)
Sun Jan 7 11:17:13 2024 -> --------------------------------------
root@vps:~#
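For the record, two quick checks tie both symptoms together, assuming a standard Debian/Ubuntu layout: whether a freshclam daemon is already running (which would hold the log lock), and whether the kernel logged out-of-memory kills (a sketch):

```shell
# List running clamd/freshclam processes; a running freshclam daemon
# explains the "log is locked by another process" error above.
clamprocs=$(pgrep -a clam 2>/dev/null)
echo "clam processes: ${clamprocs:-none}"

# Count OOM-killer events in the kernel ring buffer and syslog
# (either source may be empty or unreadable, hence the redirects).
oom=$({ dmesg 2>/dev/null; cat /var/log/syslog 2>/dev/null; } | grep -ci 'out of memory')
echo "OOM-killer events seen: ${oom:-0}"
```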
Regarding certbot: at a later check, it seemed not to be down anymore. Maybe it restarted by itself? Not worrying about that for now.
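For what it's worth, on Debian/Ubuntu the certbot package is normally triggered by a systemd timer, so a single failed run is often transient. These checks (a sketch) show the schedule and the last error of the service unit:

```shell
# Show the renewal timer; fall back to a note where systemd is unavailable.
timers=$(systemctl list-timers 'certbot*' --no-pager 2>/dev/null) || timers="systemd not available"
echo "$timers"
# Last log lines of the service unit, to see why the earlier run failed.
journalctl -u certbot.service -n 10 --no-pager 2>/dev/null || true
```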
You can e.g. use Hetzner Cloud; their cloud systems are KVM-based, so they fully support all Linux technologies, including quota.

So your system ran out of memory (RAM). What you can try is adding a swap file with e.g. 2GB size (https://linuxize.com/post/create-a-linux-swap-file/), if that's not blocked by OVH; you'll see. At Hetzner Cloud, which I use for my systems, you can create a swap file. Or upgrade to a VM with more RAM. Clamd requires quite a bit of RAM.

In regards to freshclam, the log output is fine and freshclam is likely already running; that's why it reported that error, as it can not open the log file a second time. So that's all fine.

Regarding the certbot service, it does not really matter, as ISPConfig runs certbot on its own anyway.
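The swap-file steps from the linuxize guide above boil down to the commands below. This is a sketch with a dry-run guard added for safety (the 2G size and the /swapfile path are the usual defaults, not requirements); set CREATE_SWAP=yes and run as root to actually apply them:

```shell
# Prints each command unless CREATE_SWAP=yes is set (then it executes it).
run() { if [ "$CREATE_SWAP" = "yes" ]; then "$@"; else echo "+ $*"; fi; }

run fallocate -l 2G /swapfile                    # reserve 2 GB on disk
run chmod 600 /swapfile                          # readable by root only
run mkswap /swapfile                             # format the file as swap space
run swapon /swapfile                             # enable it immediately
run sh -c "echo '/swapfile none swap sw 0 0' >> /etc/fstab"  # persist across reboots
run free -h                                      # the Swap: line should now show the new size
```

If swapon fails with "Operation not permitted", the VPS virtualization (e.g. OpenVZ/LXC) does not allow guest-managed swap, and upgrading to a plan with more RAM is the remaining option.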