Hi. I am installing rspamd according to the guide, on Debian 9. Everything looks OK before updating; the service starts on 3.1.14p2. After the update to 3.1.15, the service does not start anymore.

# systemctl status rspamd.service
● rspamd.service - rapid spam filtering system
   Loaded: loaded (/lib/systemd/system/rspamd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2019-09-11 10:42:18 CEST; 7min ago
     Docs: https://rspamd.com/doc/
  Process: 23033 ExecStart=/usr/bin/rspamd -c /etc/rspamd/rspamd.conf -f (code=exited, status=1/FAILURE)
 Main PID: 23033 (code=exited, status=1/FAILURE)

sep 11 10:42:17 systemd[1]: rspamd.service: Failed with result 'exit-code'.
sep 11 10:42:18 systemd[1]: rspamd.service: Service hold-off time over, scheduling restart.
sep 11 10:42:18 systemd[1]: Stopped rapid spam filtering system.
sep 11 10:42:18 systemd[1]: rspamd.service: Start request repeated too quickly.
sep 11 10:42:18 systemd[1]: Failed to start rapid spam filtering system.
sep 11 10:42:18 systemd[1]: rspamd.service: Unit entered failed state.
sep 11 10:42:18 systemd[1]: rspamd.service: Failed with result 'exit-code'.
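When systemd only reports status=1/FAILURE, the real startup error usually has to be fetched from the journal or by running the daemon in the foreground. A minimal diagnostic sketch; the ExecStart command line is taken verbatim from the status output above, and the journalctl call assumes the logs go to systemd-journald:

```shell
# Pull the most recent rspamd messages from the journal,
# which often contain the actual startup error:
journalctl -u rspamd.service -n 50 --no-pager

# Or run the unit's exact ExecStart command in the foreground,
# so the error is printed straight to the terminal:
/usr/bin/rspamd -c /etc/rspamd/rspamd.conf -f
```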
tailf of rspamd.log:

2019-09-11 09:18:09 #14634(main) <bc699d>; main; rspamd_term_handler: catch termination signal, waiting for children
2019-09-11 09:18:09 #14639(controller) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14641(normal) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14641(normal) rspamd_worker_on_terminate: performing finishing actions
2019-09-11 09:18:09 #14639(controller) rspamd_controller_on_terminate: closing rrd file: /var/lib/rspamd/rspamd.rrd
2019-09-11 09:18:09 #14640(normal) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14640(normal) rspamd_worker_on_terminate: performing finishing actions
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: controller process 14639 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14640 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14641 terminated softly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: terminate worker rspamd_proxy(14638) with SIGKILL
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: rspamd_proxy process 14638 terminated hardly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; main: terminating...

("Avslutad" is the Swedish locale's rendering of "Terminated", i.e. SIGTERM.) So some config change made by ispconfig causes this; I have not activated rspamd in the control panel yet. What should I look for?
So you had rspamd installed and configured before ispconfig supported it? In this case, it might be that your custom config is incompatible with the config from ispconfig. Rspamd gets configured during an ispconfig update if it's installed; it does not matter whether it's turned on or not. And you should check the rspamd log again, also further up; there should be more info now.
No, I installed exactly as described in the new guide. I had 3.1.14p2; after following the guide I have 3.1.15, and the service does not start.
Actually, it works on one server, the main control panel server. There the GUI didn't start at first, but after manually starting rspamd.service it came up and I can check its status. After that I began updating more email servers, and 3 servers now have this problem. Or should I update all servers to 3.1.15 before I install rspamd? https://www.howtoforge.com/replacing-amavisd-with-rspamd-in-ispconfig/ According to the guide: "ISPConfig 3.1 must be installed. The ISPConfig version will be 3.1.15 or newer after this procedure."
After manually starting rspamd.service again, it fails, and nothing new is written to rspamd.log:

2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: controller process 14639 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14640 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14641 terminated softly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: terminate worker rspamd_proxy(14638) with SIGKILL
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: rspamd_proxy process 14638 terminated hardly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; main: terminating...

Same timestamps as in the last log.
No, follow the instructions in the guide that I've published. The log shows that it's a normal, successful stop of the daemon, no error. Are there no lines above those lines in the log?
There is a lot of normal log activity from before I updated to 3.1.15:

2019-09-11 09:13:39 #14639(controller) <unpc7a>; cfg; rspamd_sqlite3_init: enable per language statistics for BAYES_HAM
2019-09-11 09:13:39 #14634(main) <bc699d>; main; rspamd_cld_handler: catch SIGCHLD signal, finding terminated workers
2019-09-11 09:13:39 #14634(main) <bc699d>; main; rspamd_cld_handler: hs_helper process 14642 terminated normally
2019-09-11 09:13:39 #14641(normal) <unpc7a>; cfg; rspamd_sqlite3_init: enable per language statistics for BAYES_SPAM
2019-09-11 09:13:39 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:13:39 #14641(normal) <unpc7a>; cfg; rspamd_sqlite3_init: enable per language statistics for BAYES_HAM
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 1, sha: 5476f74ad48cca32a6b75ab3b7ff8e54739d139e
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 3, sha: 9e7a6c20215fe3a0ed02d1f36d58b3574ee0e5bc
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 2, sha: cbfe3bb48b9884e2aee097935a0b01f32dd05185
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 6, sha: 0ae8b25b405a33bbb9aa3a00d2107ce941321d78
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 5, sha: 1f29b5fab838111d63352810c4166e8cea352aee
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 4, sha: 6c417eefa957259c00b97cd8cd76cd75f4d1e2fa
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 8, sha: 214deace8991612eaebc3a48f6fac79d3d32a895
2019-09-11 09:13:39 #14639(controller) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 7, sha: 473ded076882895aaa4d03c83358c86be000efe3
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 3, sha: 9e7a6c20215fe3a0ed02d1f36d58b3574ee0e5bc
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 2, sha: cbfe3bb48b9884e2aee097935a0b01f32dd05185
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 1, sha: 5476f74ad48cca32a6b75ab3b7ff8e54739d139e
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 8, sha: 214deace8991612eaebc3a48f6fac79d3d32a895
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 7, sha: 473ded076882895aaa4d03c83358c86be000efe3
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 6, sha: 0ae8b25b405a33bbb9aa3a00d2107ce941321d78
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 5, sha: 1f29b5fab838111d63352810c4166e8cea352aee
2019-09-11 09:13:39 #14641(normal) <unpc7a>; lua; lua_redis.lua:1151: uploaded redis script to 127.0.0.1 with id 4, sha: 6c417eefa957259c00b97cd8cd76cd75f4d1e2fa
2019-09-11 09:14:21 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:14:46 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:14:58 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:14:58 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_propagate_error: invalid return on resolving multi.uribl.com, disable object
2019-09-11 09:14:58 #14639(controller) <unpc7a>; cfg; rspamd_worker_monitored_on_change: broadcast monitored update for 3nxzfegumbi67tq1kjtuupxnd493zxt: dead
2019-09-11 09:14:58 #14640(normal) <unpc7a>; cfg; rspamd_worker_monitored_handler: updated monitored status for 3nxzfegumbi67tq1kjtuupxnd493zxt: dead
2019-09-11 09:14:58 #14641(normal) <unpc7a>; cfg; rspamd_worker_monitored_handler: updated monitored status for 3nxzfegumbi67tq1kjtuupxnd493zxt: dead
2019-09-11 09:16:00 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:18:09 #14639(controller) <3nxzfe>; monitored; rspamd_monitored_dns_cb: DNS query blocked on multi.uribl.com (127.0.0.1 returned), possibly due to high volume
2019-09-11 09:18:09 #14634(main) <bc699d>; main; rspamd_term_handler: catch termination signal, waiting for children
2019-09-11 09:18:09 #14639(controller) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14641(normal) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14641(normal) rspamd_worker_on_terminate: performing finishing actions
Ok, so it worked and was then stopped successfully; no error was logged, which means there is probably no error. And now, when you start it again with "systemctl start rspamd", which messages get added to the log?
systemctl start rspamd

Nothing new:

tailf /var/log/rspamd/rspamd.log
2019-09-11 09:18:09 #14641(normal) rspamd_worker_on_terminate: performing finishing actions
2019-09-11 09:18:09 #14639(controller) rspamd_controller_on_terminate: closing rrd file: /var/lib/rspamd/rspamd.rrd
2019-09-11 09:18:09 #14640(normal) <bc699d>; main; rspamd_worker_term_handler: terminating after receiving signal Avslutad
2019-09-11 09:18:09 #14640(normal) rspamd_worker_on_terminate: performing finishing actions
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: controller process 14639 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14640 terminated softly
2019-09-11 09:18:10 #14634(main) <bc699d>; main; wait_for_workers: normal process 14641 terminated softly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: terminate worker rspamd_proxy(14638) with SIGKILL
2019-09-11 09:18:20 #14634(main) <bc699d>; main; wait_for_workers: rspamd_proxy process 14638 terminated hardly
2019-09-11 09:18:20 #14634(main) <bc699d>; main; main: terminating...
^C

tailf /var/log/syslog | grep -i rspamd
Sep 11 11:36:14 www8 systemd[1]: rspamd.service: Start request repeated too quickly.
Sep 11 11:36:14 www8 systemd[1]: rspamd.service: Unit entered failed state.
Sep 11 11:36:14 www8 systemd[1]: rspamd.service: Failed with result 'exit-code'.
^C
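The "Start request repeated too quickly" message is systemd's start rate limiting, not an rspamd error: after several failed starts within the unit's start-limit window, systemd refuses further start requests until the counter is reset. A sketch of how to clear the counter and retry, using standard systemctl commands:

```shell
# Clear systemd's failure/rate-limit state for the unit,
# then attempt a fresh start and inspect the result:
systemctl reset-failed rspamd.service
systemctl start rspamd.service
systemctl status rspamd.service --no-pager
```

Until the underlying config error is fixed, the start will of course keep failing; this only removes the rate-limit lockout so each attempt produces fresh log output.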
No. But I did a daemon-reload and got this:

rspamd[27248]: 2019-09-11 11:39:14 #27248(main) cfg; rspamd_config_read: failed to load config: ucl parser error: cannot open file /etc/rspamd/local.d/worker-controller.inc: Access denied

Did the ispconfig script change the user for the config file? It is owned by root.
ls -lah /etc/rspamd/local.d/worker-controller.inc
-rw------- 1 root root 178 sep 11 09:17 /etc/rspamd/local.d/worker-controller.inc
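A mode of -rw------- (0600) means only the owner can read the file, but rspamd's workers run as an unprivileged user (_rspamd on Debian), which matches the "Access denied" parser error above. A sketch of the repair, demonstrated on a throwaway file first; the real path is the one shown in the ls output:

```shell
# Demonstrate on a scratch file: 0600 blocks non-owners, 0644 allows reads.
f=$(mktemp)
chmod 600 "$f"
stat -c %a "$f"    # -> 600: owner-only, unreadable for the _rspamd user
chmod 644 "$f"
stat -c %a "$f"    # -> 644: group and others may read again
rm -f "$f"
```

On the real server the equivalent would be chmod 644 /etc/rspamd/local.d/worker-controller.inc followed by systemctl restart rspamd (untested here; verify the expected mode against a working server first).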
Have you configured a different umask? Normally the file should be created with g+r and a+r, so it stays readable even though it is always owned by root.
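For reference, a newly created file's mode is the requested creation mode (0666 for touch) with the umask bits cleared, so a umask of 077 produces exactly the 0600 mode seen on worker-controller.inc, while the common Debian default of 022 produces a world-readable 0644. A small sketch of that arithmetic:

```shell
# umask clears bits from the requested mode: 666 & ~022 = 644, 666 & ~077 = 600.
(
  umask 022                    # common Debian default
  f=$(mktemp -u); touch "$f"   # mktemp -u only prints a name; touch creates the file
  stat -c %a "$f"              # -> 644
  rm -f "$f"
)
(
  umask 077                    # restrictive umask
  f=$(mktemp -u); touch "$f"
  stat -c %a "$f"              # -> 600, matching worker-controller.inc above
  rm -f "$f"
)
```

So checking the umask in effect for the shell (or script) that ran the ispconfig update would explain how the file ended up as 0600.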