Skip link local while checking for redundant announcements or queries #235
Conversation
The issue becomes more prominent in an IPv6 link-local-only setup, as all link-local addresses are in the same subnet.
There is another occurrence of a similar subnet check in the code.
Tested this change in a pre-release version of https://github.com/hrzlgnm/mdns-browser in my setup, where the issue was very prominent and driving me crazy, and I could not observe it anymore.
Perhaps we only need to handle IPv6 link-local addresses in a special manner, so that the subnet optimization for other address kinds still works.
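To illustrate why the subnet optimization misfires here: a subnet key built by masking an address with its prefix length maps every IPv6 link-local address to the same value, since they all share the fe80:: prefix no matter which link they sit on. A minimal sketch (the subnet_key helper is hypothetical, not the crate's actual ifaddr_subnet):

use std::net::Ipv6Addr;

// Hypothetical helper: mask the address with its prefix length and
// return the result as a u128 "subnet key".
fn subnet_key(addr: Ipv6Addr, prefix_len: u32) -> u128 {
    let mask = if prefix_len == 0 {
        0
    } else {
        u128::MAX << (128 - prefix_len)
    };
    u128::from(addr) & mask
}

fn main() {
    // Two different links, both using the usual fe80::/64 link-local prefix.
    let eth0 = "fe80::1a2b:3c4d:5e6f:7a8b".parse::<Ipv6Addr>().unwrap();
    let wlan0 = "fe80::9988:7766:5544:3322".parse::<Ipv6Addr>().unwrap();

    // Both collapse to the same key, so a "same subnet, skip it" optimization
    // would silently skip one of the interfaces.
    assert_eq!(subnet_key(eth0, 64), subnet_key(wlan0, 64));
}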
Let's put it this way:
If I understand correctly, the problem is that link-local addresses (especially for IPv6, as every IPv6 interface has a link-local address) do not belong to the same network segment even though their netmasks (i.e. prefixes) match. In other words, we should not use the "subnet" to filter out any link-local addresses. But we can (probably should?) still filter regular non-link-local addresses of the same subnet (even though that might not be very common). From this perspective, I have a different take on the implementation of the fix, and came up with a diff like this (in both places where the subnet check is done):

diff --git a/src/service_daemon.rs b/src/service_daemon.rs
index 8ed4c8c..f06100b 100644
--- a/src/service_daemon.rs
+++ b/src/service_daemon.rs
@@ -1205,11 +1205,15 @@ impl Zeroconf {
     for (_, intf_sock) in self.intf_socks.iter() {
         let subnet = ifaddr_subnet(&intf_sock.intf.addr);
-        if subnet_set.contains(&subnet) {
-            continue; // no need to send again in the same subnet.
+        if !intf_sock.intf.is_link_local() {
+            if subnet_set.contains(&subnet) {
+                continue; // no need to send again in the same subnet.
+            }
         }
         if self.broadcast_service_on_intf(info, intf_sock) {
-            subnet_set.insert(subnet);
+            if !intf_sock.intf.is_link_local() {
+                subnet_set.insert(subnet);
+            }
             outgoing_addrs.push(intf_sock.intf.ip());
         }
     }
@@ -1400,11 +1404,13 @@ impl Zeroconf {
     let mut subnet_set: HashSet<u128> = HashSet::new();
     for (_, intf_sock) in self.intf_socks.iter() {
-        let subnet = ifaddr_subnet(&intf_sock.intf.addr);
-        if subnet_set.contains(&subnet) {
-            continue; // no need to send query the same subnet again.
+        if !intf_sock.intf.is_link_local() {
+            let subnet = ifaddr_subnet(&intf_sock.intf.addr);
+            if subnet_set.contains(&subnet) {
+                continue; // no need to send query the same subnet again.
+            }
+            subnet_set.insert(subnet);
         }
-        subnet_set.insert(subnet);
         broadcast_dns_on_intf(&out, intf_sock);
     }
 }

Does it make sense? Will it help address your case? (And thanks for your PR and investigations!)
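For reference, detecting link-local addresses needs nothing beyond the standard library. A minimal sketch of what such a predicate could look like (the ip_is_link_local helper is hypothetical; the diff above assumes the interface type itself exposes an equivalent is_link_local()):

use std::net::IpAddr;

// Hypothetical helper: true for 169.254.0.0/16 (IPv4) and fe80::/10 (IPv6).
fn ip_is_link_local(ip: &IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_link_local(),
        // Ipv6Addr::is_unicast_link_local() is not stable yet, so check fe80::/10 by hand.
        IpAddr::V6(v6) => (v6.segments()[0] & 0xffc0) == 0xfe80,
    }
}

fn main() {
    assert!(ip_is_link_local(&"169.254.1.2".parse().unwrap()));
    assert!(ip_is_link_local(&"fe80::1".parse().unwrap()));
    assert!(!ip_is_link_local(&"192.168.1.10".parse().unwrap()));
}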
Yes, that makes sense. I'll check it out on my branch and test whether it works for me.
Glad to use the library and happy to contribute too. ❤️
@@ -1205,11 +1205,13 @@ impl Zeroconf {
     for (_, intf_sock) in self.intf_socks.iter() {
         let subnet = ifaddr_subnet(&intf_sock.intf.addr);
-        if subnet_set.contains(&subnet) {
+        if !intf_sock.intf.is_link_local() && subnet_set.contains(&subnet) {
cargo clippy (or rust-analyzer) suggested collapsing this if.
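For context, that is clippy's collapsible_if lint; a tiny standalone illustration with placeholder names (not the crate's actual code):

// Placeholder names; this only shows the nested vs. collapsed forms.
fn should_skip(is_link_local: bool, subnet_already_covered: bool) -> bool {
    // Nested form that clippy flags as collapsible_if:
    //     if !is_link_local {
    //         if subnet_already_covered {
    //             return true;
    //         }
    //     }
    //     false
    // Collapsed form:
    !is_link_local && subnet_already_covered
}

fn main() {
    assert!(should_skip(false, true));  // regular address, subnet already covered: skip
    assert!(!should_skip(true, true));  // link-local address: never skip
}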
@keepsimple1 I've tested my latest change again with another pre-release version of mdns-browser and it works with your suggested changes.
Thanks for updating and verifying! Will merge now.
…or query packets (#235)

Co-authored-by: keepsimple1 <keepsimple@gmail.com>
Otherwise, when multiple network interfaces are present, not all interfaces may be queried, and thus services will be removed too early.
I was seeing service removals for services shown as online by other queriers like Avahi or python-zeroconf, and I was able to track it down to this.
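For anyone who wants to reproduce the symptom, a small watcher along these lines can surface premature removals while other queriers still see the service. This is only a sketch assuming mdns-sd's browse API; the service type "_http._tcp.local." is a placeholder:

use mdns_sd::{ServiceDaemon, ServiceEvent};

fn main() {
    let daemon = ServiceDaemon::new().expect("failed to create mDNS daemon");
    let receiver = daemon.browse("_http._tcp.local.").expect("failed to start browsing");

    while let Ok(event) = receiver.recv() {
        match event {
            ServiceEvent::ServiceResolved(info) => {
                println!("resolved: {}", info.get_fullname());
            }
            ServiceEvent::ServiceRemoved(service_type, fullname) => {
                // With the bug, removals show up here even though other queriers
                // (Avahi, python-zeroconf) still see the service as online.
                println!("removed: {fullname} ({service_type})");
            }
            other => println!("event: {other:?}"),
        }
    }
}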