Freshclam on K8S #315
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi @kadomino - sorry for the late reply. Mailu's main platform is docker-compose. That said: freshclam is running as a daemon (not via cron). I can implement a flag to disable freshclam via the environment.
Hi @ghostwheel42 - thanks for looking into this :-) No problem for the delay - there is no urgency. Indeed, I do understand that Mailu's main platform is docker-compose. Nevertheless, in my experience it works fine on K8S. I have been using it in production for 6 months now and the only issue I have encountered is this freshclam DB corruption. This statement should be taken with a grain of salt though, because my use case is quite basic. Also, I only noticed the problem because on my particular K8S cluster the nodes were rebooting frequently (for unrelated reasons), so there was a high probability of a pod being killed in the middle of a freshclam download.

Sorry for missing the point that freshclam runs as a daemon and not as a cron in the clamav container. In any case this still violates the "one service = one container" principle, which matters even more under K8S than under docker-compose. Indeed, it would be great if you could somehow allow running freshclam and clamav in two different containers (they would need to operate on a common ReadWriteMany storage, of course). This would eliminate the corruption issue on K8S, I think.

Concerning the Helm chart, I noticed it recently and made a fork with additions for my needs, which I would be happy to contribute back. I can also write the K8S CronJob part if/when the separate freshclam image becomes available.
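For illustration, a rough sketch of what the shared storage could look like: a ReadWriteMany claim written by freshclam and read by the clamd workload. The claim name, mount path and image reference are placeholders, not taken from this thread.

```yaml
# Hypothetical shared volume for the ClamAV signature database (all names are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clamav-db
spec:
  accessModes:
    - ReadWriteMany          # written by freshclam, read by clamd
  resources:
    requests:
      storage: 2Gi
---
# Fragment of the clamd Deployment pod spec mounting the same claim.
spec:
  containers:
    - name: clamd
      image: mailu/clamav            # placeholder image reference
      volumeMounts:
        - name: clamav-db
          mountPath: /data           # placeholder path for the signature DB
  volumes:
    - name: clamav-db
      persistentVolumeClaim:
        claimName: clamav-db
```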
Hi @kadomino - I know the "one service = one container" principle, but I think we'll have to live with what we have for now. I think the Dockerfile/entrypoint of the antispam container could be changed to run in 3 modes (via env):

1. clamd and freshclam together (the current behaviour)
2. clamd only
3. freshclam only
The helm-chart could be updated to make use of modes 2 and 3. What do you think?
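As a sketch of how the chart could select a mode, assuming a hypothetical environment variable (here called `CLAMAV_MODE`, with made-up values; nothing like this exists in the image yet):

```yaml
# Deployment container: scanner only (mode 2). Variable name and values are hypothetical.
containers:
  - name: antispam
    image: mailu/clamav            # placeholder image
    env:
      - name: CLAMAV_MODE
        value: "clamd"             # mode 2: run clamd only, no updater
---
# CronJob container: updater only (mode 3), scheduled by the orchestrator.
containers:
  - name: freshclam
    image: mailu/clamav            # same image, different mode
    env:
      - name: CLAMAV_MODE
        value: "freshclam"         # mode 3: refresh the signature DB and exit
```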
Hi @ghostwheel42 - thanks again, this seems like a good idea to me :-) Indeed, it's not that important what's in the image, as long as it can be used for running different types of containers. In terms of help from me, I'm probably not the right person to touch the images, but I can help with Helm & K8S. Maybe you could get the maintainer of the helm chart involved in this discussion at the right time.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Environment & Versions
Environment
Kubernetes
Versions
Using v1.8.0, but this applies to any version.
Description
It seems that freshclam is run within the clamav container. While this often works fine, when an orchestrator is in play (K8S) it may (and regularly does, for me) corrupt the downloaded DB and cause Mailu to stop receiving emails.
Replication Steps
Run Mailu on K8S and delete the Clamav pod while freshclam is downloading its DB.
Expected behaviour
One of the principles of using an orchestrator is that no container should ever run a cron, because the orchestrator alone is in charge of all the workloads. In the case of Mailu, this means that freshclam (or any other "container cron") should run in a separate pod via a K8S CronJob object, as sketched below.
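For illustration only, a minimal sketch of such a CronJob; the schedule, image, claim name and mount path are placeholders, and it assumes a freshclam-only mode or image becomes available:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: freshclam
spec:
  schedule: "0 */2 * * *"            # placeholder: every two hours
  concurrencyPolicy: Forbid          # never overlap two signature downloads
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: freshclam
              image: mailu/clamav    # placeholder: an image able to run freshclam once
              volumeMounts:
                - name: clamav-db
                  mountPath: /data   # the signature DB shared with clamd
          volumes:
            - name: clamav-db
              persistentVolumeClaim:
                claimName: clamav-db # a shared ReadWriteMany claim
```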
Logs
When the problem occurs, the Postfix logs show that the Clamav pod refused the connection and the Clamav logs show that the DB is corrupted.