mDNS packet flood when running multiple instances #50695
What is interesting is that the flood disappears sometimes for several minutes and then starts again. After further investigation it seems that hostnames are not really the issue here, but the fact that I am using the same instance names.
I have a similar issue. I also have 2 instances using the same name, but I encountered a problem which started with this error:

```
2021-05-07 00:04:21 WARNING (zeroconf-Engine-242) [zeroconf] Choked at offset 32942 while unpacking b'\x00\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x001homeassistant3 [46ebcbd293bf45359d585de0706dd53c]\x0c_workstation\x04_tcp\x05local\x00\x00\xff\x00\x01\x0ehomeassistant3\xc0P\x00\xff\x00\x010homeassistant [46ebcbd293bf45359d585de0706dd53c]\xc0>\x00\xff\x00\x01\xc0\x0c\x00\x10\x80\x01\x00\x00\x00x\x00\x01\x00\xc0\x0c\x00!\x80\x01\x00\x00\x00x\x00\x1c\x00\x00\x00\x00\x00\x00\x0ehomeassistant3\x05local\x00\xc0[\x00\x01\x80\x01\x00\x00\x00x\x00\x04\xc0\xa8\xc0d'
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/zeroconf/__init__.py", line 745, in __init__
    self.read_others()
  File "/usr/local/lib/python3.8/site-packages/zeroconf/__init__.py", line 816, in read_others
    domain = self.read_name()
  File "/usr/local/lib/python3.8/site-packages/zeroconf/__init__.py", line 877, in read_name
    length = self.data[off]
IndexError: index out of range
2021-05-07 00:04:21 DEBUG (zeroconf-Engine-242) [zeroconf] Received from '192.168.1.100':5353 (socket 16): (236 bytes)
```

The IP in the log is the IP of the HA OS VM, version 6.0.dev20210429.
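The `IndexError` comes from the name decoder walking off the end of a truncated packet. As an illustration only (a hedged sketch, not the actual `zeroconf` implementation; the function name and error strings are my own), a DNS name reader with explicit bounds checks and compression-pointer loop protection might look like:

```python
def read_name(data: bytes, offset: int) -> tuple[str, int]:
    """Decode a DNS name (with RFC 1035 §4.1.4 compression pointers).

    Hypothetical sketch: returns (name, offset_after_name) and raises
    ValueError instead of IndexError on truncated or malicious input.
    """
    labels = []
    jumped = False
    end_offset = offset
    seen = set()  # pointer targets already visited, to detect loops
    while True:
        if offset >= len(data):
            raise ValueError(f"truncated name at offset {offset}")
        length = data[offset]
        if length == 0:  # root label terminates the name
            if not jumped:
                end_offset = offset + 1
            break
        if length & 0xC0 == 0xC0:  # top two bits set: compression pointer
            if offset + 1 >= len(data):
                raise ValueError("truncated compression pointer")
            target = ((length & 0x3F) << 8) | data[offset + 1]
            if target in seen:
                raise ValueError("compression pointer loop")
            seen.add(target)
            if not jumped:
                end_offset = offset + 2
                jumped = True
            offset = target
            continue
        if offset + 1 + length > len(data):
            raise ValueError("label runs past end of packet")
        labels.append(data[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels) + ".", end_offset
```

With this shape of check, a packet truncated mid-label produces a clean parse error that the caller can log, rather than an unhandled `IndexError`.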
@agners on what HA OS version did this issue occur? Does it also reproduce on a 5.x release?
I am testing 6.0.rc1 (and a later dev release) right now. But I am pretty sure it happened in 5.x releases already.
We do have a guard against packet floods in zeroconf. Most mDNS implementations don't actually pass the Bonjour conformance test (https://developer.apple.com/bonjour/) when it comes to conflicting names, so the best way to avoid the problem, where possible, is not to use conflicting names. That seems difficult here, though, since we want the user to be able to hit homeassistant.local. This does seem like a problem with …
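For illustration only (this is not zeroconf's actual guard; the class name, rates, and per-source keying are assumptions), a flood guard is commonly built as a per-source token bucket that drops packets from any peer exceeding a sustained rate:

```python
import time


class PacketRateGuard:
    """Hypothetical per-source token bucket for dropping mDNS packet floods."""

    def __init__(self, rate=50.0, burst=100.0):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # bucket capacity (maximum burst size)
        self._buckets = {}  # source ip -> (tokens, last_update_time)

    def allow(self, source_ip, now=None):
        """Return True if a packet from source_ip should be processed."""
        if now is None:
            now = time.monotonic()
        tokens, last = self._buckets.get(source_ip, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1.0
        self._buckets[source_ip] = ((tokens - 1.0) if allowed else tokens, now)
        return allowed
```

A receive loop would call `guard.allow(addr)` before parsing each datagram; a flooding peer quickly exhausts its bucket while well-behaved peers are unaffected.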
Related to home-assistant/plugin-multicast#1 maybe?
@bdraco afaict, the packet flood is not about the host name; it's about the Home Assistant service announcement (see …). It does seem like …

@frenck hm, there are several reports; maybe it matches some of the reports, not sure. But it's not a loop between a router's mDNS forwarding and Home Assistant at the same time: in my case there is no mDNS forwarding between VLANs enabled (in fact the instances are just on the regular LAN; no VLAN is used for those).
I am more worried about the malformed mDNS packet sent by the HA OS VM (192.168.1.100 in the log above).
Looks like there are two places where we have a non-unique name. In theory we should get a non-unique-name exception when registering the second home._homeassistant._tcp.local.
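As a sketch of what avoiding such a conflict can look like (hypothetical helper, not Home Assistant's or zeroconf's actual code), RFC 6762 §9 suggests resolving instance-name conflicts by appending " (2)", " (3)", … to the base name until a free one is found:

```python
import re


def resolve_conflict(instance: str, taken: set) -> str:
    """Return `instance` or the first 'Name (N)' variant not already taken.

    Comparison is case-insensitive, since DNS names are.
    Illustrative sketch only; `taken` is assumed to hold names already
    seen on the network.
    """
    taken_lower = {t.lower() for t in taken}
    if instance.lower() not in taken_lower:
        return instance
    # Strip any existing " (N)" suffix before numbering.
    base = re.sub(r" \(\d+\)$", "", instance)
    n = 2
    while f"{base} ({n})".lower() in taken_lower:
        n += 1
    return f"{base} ({n})"
```

Under this scheme, two instances that both try to register the same service instance name never end up answering for the same record, which is the situation that produces the flood described here.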
There are a few unreleased zeroconf fixes that might help here, but they can't fix the DNS-SD conflict on homeassistant.local., since that's coming from the systemd service, @agners.
FWIW, @bdraco and I debugged the issue quite a bit here. The case I have been seeing is definitely related to duplicate instance names (in Configuration -> General). Once those are resolved, things work as expected. However, for quite a while now (since ~October 2020) Home Assistant Core should have been detecting duplicate instance names and logging a corresponding warning.

For some reason this warning consistently does not appear in my case. It is probably related to the number of Zeroconf answers which need to be processed, plus maybe also the fact that this happens during startup, where there is a high workload anyway. Increasing the wait time before registering the service to 10s+ (in …)

@thecode yeah, if the origin is …
Retested with today's nightly; now the duplicate instance gets properly detected.
This is most likely fixed by #50784, and maybe (also) by #50807.
The problem
It seems that multiple HA Core instances can cause an excessive amount of mDNS packets (peaking at 1k/s in my network). I think it also needs at least one instance using Home Assistant OS with the hostname `homeassistant`. From what I understand it is an interaction between `systemd-resolved` (running on Home Assistant OS as an mDNS resolver) and Home Assistant Core acting as an mDNS responder. I am not sure if this is a Core issue, but it looks like it. Discussing quickly with @bdraco, he suggested opening an issue.
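To quantify a flood like this on the affected network, a small diagnostic listener can join the mDNS multicast group and report the packet rate. A sketch, under the assumption that port 5353 can be bound on the test host (this tool is mine, not part of Home Assistant):

```python
import socket
import struct
import time

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353


def packets_per_second(timestamps, window=1.0):
    """Rate over the trailing window, given monotonic receive timestamps."""
    if not timestamps:
        return 0.0
    cutoff = timestamps[-1] - window
    recent = [t for t in timestamps if t > cutoff]
    return len(recent) / window


def watch(duration=10.0):
    """Join the mDNS group and print the observed packet rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MDNS_PORT))
    # ip_mreq: multicast group address + local interface (any).
    mreq = struct.pack("4s4s", socket.inet_aton(MDNS_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(0.5)
    stamps, deadline = [], time.monotonic() + duration
    while time.monotonic() < deadline:
        try:
            sock.recvfrom(9000)
            stamps.append(time.monotonic())
        except socket.timeout:
            continue
        print(f"\r{packets_per_second(stamps):6.1f} pkt/s", end="")


if __name__ == "__main__":
    watch()
```

During a flood like the one described above, the reported rate should sit in the hundreds of packets per second instead of the occasional announcement normal mDNS traffic produces.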
Note: For testing Home Assistant OS I often reinstall and end up with the default configuration, so this certainly is somewhat specific to my use case. But since `homeassistant` is the standard host name, and probably quite some people don't bother to change the hostname of a HAOS installation, somebody running a second installation (even only for a test) might run into the same problem as well.

What version of Home Assistant Core has the issue?
core-2021.5.3
What was the last working version of Home Assistant Core?
No response
What type of installation are you running?
Home Assistant OS
Integration causing the issue
zeroconf
Link to integration documentation on our website
https://www.home-assistant.io/integrations/zeroconf/
Example YAML snippet
No response
Anything in the logs that might be useful for us?