Docker / Container Support #786
I have created a bounty for this on Bountysource:
Thanks for the enhancement request issue. This won't be a simple undertaking, though, and is not as straightforward to implement as the x86/OVA variant was. I'll leave the ticket open nevertheless, in case there is more feedback on this or someone wants to work on it.
I would also be very interested in a Docker image. @jens-maus: In your opinion, what are the biggest hurdles compared to an ESXi/OVA solution? I may be able to look into this soon.
@denisulmer Well, to make it "RaspberryMatic-like" or "buildroot-like", one would first have to generate a buildroot Docker version and see which limitations come up and what would have to be adapted to work around them. After that, one could tackle porting the OCCU part of RaspberryMatic. I suspect there won't be a quick solution; it's all rather tedious work.
Couldn't the whole RaspberryMatic root image simply be provided in a Docker image, with the init system run as CMD? System scripts would then have to be excluded.
@mpietruschka Feel free to try that and report back. I suspect, however, that it won't work out of the box just like that, and that it cannot be the final solution either, because the Docker image should really be built cleanly via buildroot. But as I said, experiment away and report back, or send a pull request.
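For illustration only, the idea discussed above (wrapping a prebuilt rootfs in an image and running its init system as CMD) could look roughly like the following Dockerfile. The file names are hypothetical and this is not an official or tested build:

```dockerfile
# Hypothetical sketch: wrap an existing RaspberryMatic rootfs tarball
# (file name assumed) and start its init system as the container command.
FROM scratch
ADD rootfs.tar /
# System scripts that touch host hardware (kernel modules, HDMI setup, etc.)
# would need to be disabled or removed first.
CMD ["/sbin/init"]
```

Whether the CCU services actually come up this way is exactly the open question raised in the comments above.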
Going strictly by the Docker philosophy, each service would have to be provided in a separate container. In my view that only makes sense if those services are also updated separately, which is not to be expected here. Is there already a Docker approach for RaspberryMatic? And can I download the plain rootfs for armv7 somewhere? P.S. Is there a chat/discussion channel? Then not every fundamental question would have to be settled here. Unless that is what you want :) Regards, Marti
Separating the individual services makes no sense for RaspberryMatic; I wouldn't go down that road, because in the end it would have nothing to do with RaspberryMatic anymore. I do have a rough plan for how one could proceed, though. Roughly, it will probably be something like this: https://github.com/AdvancedClimateSystems/docker-buildroot. That is, you build something around RaspberryMatic's buildroot environment and have buildroot generate a rootfs at the end that you can then feed into Docker. How much resistance one runs into there is of course hard for me to estimate; you just have to test it. And if you want the armv7 rootfs, just download the *.img and extract the rootfs from it. Or use the *-ccu3.tgz, which contains the rootfs as a separate file. But as I said, I don't consider it a good approach to simply take the finished RaspberryMatic rootfs and extract what you need. It should be done the other way around: adapt the buildroot environment so that it ultimately produces a rootfs for Docker use that can be imported directly into Docker. Then this can be integrated cleanly into RaspberryMatic's CI environment, and Docker versions for x86 could be shipped automatically as well (if it works).
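The "other way around" direction described above, where buildroot emits a rootfs tarball that Docker consumes directly, maps onto standard Docker commands. As an untested command sketch with assumed file and tag names:

```sh
# Assumed path: the rootfs tarball produced by the buildroot run.
docker import output/images/rootfs.tar raspberrymatic:local

# Quick smoke test of the imported filesystem (shell path assumed to exist):
docker run --rm raspberrymatic:local /bin/sh -c 'ls /etc'
```

`docker import` creates a filesystem image from a tarball without needing a Dockerfile, which is what makes this approach attractive for CI integration.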
I think I can follow your reasoning: a repurposed system build could require adapting the Docker image with every change; long-term maintenance, in other words. However, I don't imagine the differences being that big (at least not at the moment ;). Once it runs, some functions will certainly not be available in the WebUI, e.g.
It should be easy to see, however, which adaptations had to be made for Docker. A dedicated buildroot could then be started based on that. Is that the direction you want to take? Angelnu has already built a Docker image. For that he fetches your repo and the resources he can use. Is that the approach you did not want?
In fact I have switched my main installation to RaspberryMatic, since I love some of the extra features added by @jens-maus and I did not have the passion to keep backporting them all to the original CCU base. So I would be able to generate a Docker container out of a tarball containing the CCU filesystem. Then there are a few things I disable when running in a container that do not make sense or are not possible in a container (loading modules, configuring the HDMI, etc.). I also do a one-time install of the pivccu device driver on the host to support all the HW devices that require extra modules (which must match the host kernel). @jens-maus - if you agree, I would propose to contribute my Docker support to your project and discontinue my standalone version: I do not really see much value in using the vanilla CCU firmware... Let me know if you are interested. btw: I also speak German, but for technical matters I am more at home in English.
@angelnu This sounds great and I would indeed be interested in your docker support stuff. In fact, it would be great to get at least your build scripts where you extract everything from the vanilla CCU firmware into a Docker environment, and get them ported over to RaspberryMatic, so that we can perhaps add a mechanism that takes the final RaspberryMatic buildroot-generated tar archive (with a filesystem similar to the CCU), extracts all the Docker-relevant parts, and builds the Docker image in one run. For this we would then indeed also use GitHub Actions to get it integrated more smoothly.
Good, then let us get rolling @jens-maus :-) In fact the build part is pretty simple: https://github.com/angelnu/docker-ccu/blob/master/Dockerfile Most of what I do there is download and extract the original CCU and then apply some of your patches, so it would be "just" the lines after 52. So some questions to get moving:
Hi, all this sounds great. Did you make any progress in the last 2 months? Is there something I could help with? Martin
Not yet - I was hoping @jens-maus could chime in on the questions above - especially since generating a tarball from his build is the prerequisite for generating a Docker image out of it.
Any progress? Added $10 on Bountysource. :) |
@jens-maus - I have a few days to work on this before going back to work. The main question for me to get started is whether you want to produce tarballs from the buildroot (which I would pull into my project to build the Docker image) or whether you want to merge my Docker steps into your project. Since I am not so familiar with your project structure, I would start by "just" copying the intermediate tarball from a local build into my project, and progress to uploading a Docker image for testing.
If possible I would of course prefer to merge your work into this repository! So please go this road. |
Good - it is also my preferred option, since I personally use RaspberryMatic and therefore am not able to provide good support for the official Homematic versions. OK, I will prepare a PR for your repo.
I am also running in "production" in K8s with 100+ devices (a mix of Homematic and Homematic IP wireless devices) and 2 LAN gateways without problems - running on Intel is a really welcome performance boost! I was looking at the HB-RF-ETH as a way to achieve high availability on the RaspberryMatic side - if one of my Kubernetes nodes dies, RaspberryMatic gets re-deployed. Currently I have to plug USB PCBs into each of the nodes, and with the HB-RF-ETH I can deploy just one (and another spare), independent of the location and number of Kubernetes nodes. Sounds paranoid, but I do not want my home automation to be down for any reason :-)
Good morning. Identifying Homematic RF-Hardware: HmRF: none, HmIP: none, OK. No other issues in the docker logs. A successful start from yesterday looked like this in dmesg; today I see no such message at all:
I saw that there are two newer builds, so I manually updated to 3.55.5.20210114-67aab13. |
Hello angelnu, thanks for your support on the issue Problems with starting with the HmIP-RFUSB #36. As suggested, I tried to use this image (ghcr.io/jens-maus/raspberrymatic:snapshot), unfortunately without success. During the start of the container everything looks fine: the HmIP-RFUSB device was found and even updated. But in the RaspberryMatic console I only see the Virtual Remote Control, and with that I am not able to pair any HmIP device. Do you have any suggestions? Or do you need further information? Please tell me which logs you need. Thanks! Regards ROBOlogo
From your screenshots I can only tell that the HmIP-RFUSB is correctly identified and seems to work. If the pairing dialog counts down in time, the HmIP-RFUSB is working fine.
Well, if your HmIP-RFUSB is not able to find any HmIP device during pairing, this can have several different reasons. One is that the HmIP-RFUSB is a suboptimal rf module device in general, and I would suggest using a RPI-RF-MOD connected to a HB-RF-USB/HB-RF-ETH instead. Or you should use a USB adapter cable rather than connecting the HmIP-RFUSB directly to the device you are running Docker on. So make sure the HmIP-RFUSB is located 1-2 meters away from the system you run Docker on, and then try again.
The boot sequence looks good. @robologo1 - just confirming - you did click on Learning Devices, you can see HmIP there and start the learning process. Then during the 60 seconds you put a device in learning mode, but it does not get discovered. Correct? If that is the case then, as @jens-maus says, the most likely problem has to do with suboptimal reception for your adapter. Another reason would be that your host is overloaded or being throttled (example: a Raspberry Pi 4 without any cooling), so the container experiences too much lag.
@angelnu, @jens-maus - Thanks for your quick response. :) I tried a lot of things and found out that the issue was not related to the installation of the USB adapter, but only to the way I tried to reset the sensors to factory defaults. All my window contacts were still paired to my old installation and needed to be reset before the new pairing. Now it works perfectly! Thanks again!!! 👍
Good to hear :-) For next time (or for those reading this): backup/restore works fine, so it is possible to back up from another system - either a HW RaspberryMatic or my previous angelnu/ccu container - and restore into the new RaspberryMatic OCI without having to re-pair any devices. I also debugged a problem related to 64-bit kernels and remote filesystems in #903. If anyone runs this combination, you should be aware of it. For glusterfs I added a workaround in my Kubernetes Helm chart (WIP) at https://github.com/angelnu/helm/tree/master/raspberrymatic
@angelnu Thanks for the hint. BTW: Would be great if you could finish/complete your K8s documentation in the wiki anytime soon :) |
Haha, now that you mention it. I first thought of documenting it the "classic way" with an example YAML to deploy, but then I considered directly providing a Helm chart, and then I wanted to add it to your CI so it could be updated automatically, so I started investigating the available GitHub Actions... This weekend :-) But seriously - if anyone here is interested in testing my Helm chart, and perhaps would also like to test with GlusterFS for High Availability, I would appreciate it. It does not feel right releasing something that has been tested only by me. If not here, I would ask the Home Assistant community - I also contributed to their Helm chart in the past.
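For anyone willing to test the WIP chart mentioned above, a values override is the usual Helm entry point. The keys below are illustrative guesses, not documented chart parameters (only the image reference appears earlier in this thread):

```yaml
# Hypothetical values.yaml for the WIP raspberrymatic Helm chart;
# keys are assumptions for illustration, not documented parameters.
image:
  repository: ghcr.io/jens-maus/raspberrymatic
  tag: snapshot
persistence:
  storageClass: glusterfs   # for the High-Availability setup discussed above
  size: 1Gi
```

The actual key names would need to be checked against the chart's own values.yaml once it is published.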
Moving from a stable FHEM + HM-CFG-LAN adapter setup to a dockerized CCU + HM-CFG-LAN. Initially I had issues getting the adapter to work, but after a while and a lot of patience it connected. I have no encryption key, though. The adapter has a stable IP. Yet the CCU loses the connection to it every now and then. It used to be stable for a day or two, but now it is constantly disconnecting, raising the "RF-Gateway-Alarm" alert. Nothing in the logs. And I did not deploy the kernel modules, as my adapter is LAN. Any tips?
@yoogie27 I would suggest first trying a CCU+HM-CFG-LAN setup without Docker to rule out a general problem with the HM-CFG-LAN in combination with a CCU. You are here in the alpha/test issue regarding the dockerization of a CCU, and this is still alpha/beta quality. My suspicion is that the HM-CFG-LAN simply does not work well with a CCU. That is also why, BTW, it is not listed in the documentation as a supported rf gateway device (https://github.com/jens-maus/RaspberryMatic/wiki/Einleitung#vorraussetzungen). It has never been tested, and if you are the first one, try it with a real CCU system before you go the Docker route.
@jens-maus @yoogie27 I have been running my virtual CCU environments successfully with 3 HM-CFG-LAN gateways for years now (coming from the original CCU2, trying Homegear for a few months, then more than a year using Debmatic, before I switched to the docker CCU about a month ago) without any issues. Therefore, I cannot confirm jens-maus's suspicion at all; quite the opposite. ;)
@nicx Are these really HM-CFG-LAN gateways, or are these HM-LGW-O-TW-W-EU devices?
I am really talking about the very old HM-CFG-LAN-Gateways, the small round ones. Never had any problems with them. |
Thanks @nicx and @jens-maus. The oCCU page lists the HM-LAN-CFG as supported as well, at least without OTA firmware updates. So I would suspect that rfd supports it. I actually found that rfd logs to /var/log/messages, and there it says that rfd could not connect to the host... I will continue investigating. Hopefully I can get it working.
Then why did nobody tell me that they work with RaspberryMatic? ;-) Then it's probably your job now to help @yoogie27 :)
@jens-maus @nicx Hm.. In the end it's an issue with my HM-CFG-LAN... When I unplug the power, it works for a while. I will continue digging. Great project, by the way! |
I think I know what's wrong. I have watchtower deployed to keep my containers up to date. Whenever there is a new snapshot, it goes ahead and updates everything. After that, the old HM-CFG-LAN bug kicks in and renders the puck unusable. Only reflashing the latest firmware gets it out of that state; power-cycling is not enough. I am fed up, so I have ordered the bundled Raspberry Pi with the radio module and a clock module to run RaspberryMatic on it.
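One way to keep watchtower from auto-updating a container that reacts badly to restarts is watchtower's per-container opt-out label. A docker-compose sketch (service name and image tag taken from this thread, everything else assumed):

```yaml
# docker-compose sketch: exclude the CCU container from watchtower updates
# via watchtower's standard opt-out label, so only manual updates occur.
services:
  raspberrymatic:
    image: ghcr.io/jens-maus/raspberrymatic:snapshot
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
```

With this label set, watchtower keeps updating the other containers while leaving this one alone.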
@yoogie27 just fyi: I cannot confirm these problems. I am using watchtower too, and all 3 of my HM-CFG-LANs are working without these problems. So maybe yours is just defective ;)
Please note that I will close this ticket/issue now, since the general implementation of the docker/container integration is done and seems to work flawlessly. All recent discussions in here (note that this is no discussion forum!) are not really related to the docker/container integration, but are specific to using the old/obsolete HM-CFG-LAN devices with RaspberryMatic. So please allow me to close this issue/ticket and to thank all the contributors, especially @angelnu for starting the whole docker implementation in the first place. I am sure this will be appreciated by a lot of other users who have already been waiting a long time for such an additional virtualization opportunity. So thanks again! And last but not least: anyone who contributed to the initial bounty (https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository), e.g. @regnets, @ProfDrYoMan, @mpietruschka, should please allow @angelnu to mark this bounty as "solved" and thus receive the money for it!
@jens-maus fyi: the docker tag "latest" is still missing, so following the install documentation will currently not work ;) |
@nicx I know. But you do know that this hasn't been released yet, right? So simply wait for the final release version to appear sometime next week. And then you should of course stop using the snapshot tag for your productive environment.
@jens-maus has been working on this at least as much as me, polishing my first draft - not sure if the bounty rules allow it, but @regnets, @ProfDrYoMan, @mpietruschka, you should consider @jens-maus as the target for your donations, for all the work he did here and does in general to run this project.
Is your feature request related to a problem? Please describe.
I would like to run RaspberryMatic alongside Phoscon (a Zigbee bridge alternative from Dresden Elektronik) on the same Raspberry Pi.
Describe the solution you'd like
Ideally there would be a Docker integration for this; then both could be updated easily and they would not get in each other's way.
Describe alternatives you've considered
None so far. I am simply waiting for the integration, and would set up a bounty for this ticket as an incentive.
Additional context
I am aware that there have already been several tickets on this: #192, #248, #357. The last one enabled the integration of RaspberryMatic on the x86 platform and in virtualized environments (see also https://homematic-forum.de/forum/viewtopic.php?f=65&t=54055#p538104). However, the fact that there is still no Docker integration fell by the wayside there.