MPTCP in multihoming doesn't announce all endpoints #331
Hello, thank you for the bug report!
Is it always the IP that is not in the same subnet that is being announced? (eth0 and eth1 share the same subnet.) Still, it is strange; I didn't check the code, but I don't see why we would add this restriction.
Sorry, I didn't get that. If you want to have a fullmesh topology, you need to add the |
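The name of the flag was cut off in this copy of the thread. Assuming it refers to the endpoint fullmesh flag (my assumption, not confirmed by the text above), adding it on a client endpoint would look roughly like this; the address and device are placeholders, not the reporter's setup:

```
# Assumption: the missing word is the 'fullmesh' endpoint flag.
# Address and device below are placeholders for illustration.
sudo ip mptcp endpoint add 2001:db8:1::2 dev eth1 subflow fullmesh

# Verify the endpoint flags.
ip mptcp endpoint show
```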
Hello @matttbe, thank you for your time
Indeed, it is always the same IP that is announced; unless I remove it from the endpoints, in which case the other IP is announced instead.
If I'm not mistaken, in order to configure the path manager, the user can set the manager limits using:
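The exact command was lost in this copy of the thread. As a sketch of what setting the in-kernel path manager limits typically looks like (the values below are arbitrary examples, not the reporter's configuration):

```
# Allow up to 4 additional subflows and accept up to 4 ADD_ADDR
# announcements per MPTCP connection (example values only).
sudo ip mptcp limits set subflow 4 add_addr_accepted 4

# Display the limits currently in effect.
ip mptcp limits show
```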
Mmh, strange. So only one is announced?
That's correct, but I'm not sure I understand your issue. Could you explain what you expect and what you get?
No, I didn't use namespaces.
Sorry for the confusion. So I want the path manager to create a full mesh; however, I only observe two connections from the same interface. I'm going to try adding the option suggested above.
OK, to be investigated then. From what I see in the code, it is quite possible that we announce one IP and that's it. We should certainly loop there: mptcp_net-next/net/mptcp/pm_netlink.c, lines 565 to 587 in 1786d5c.
Could you build a kernel if we send a patch? Or do you want to try modifying the function referenced above into a while-loop (see the code just below this chunk)?
I suggest using one issue per ticket: if you still have an issue with the |
I prefer modifying the function. Please provide a step-by-step guideline ;). I don't want to end up messing with the kernel; I have learned enough from previous mistakes.
We discussed this issue at our weekly meeting yesterday and there is a technical limitation that doesn't let us loop over all the addresses. Currently, the behaviour is to send an ADD_ADDR after each subflow establishment. So when the connection is established, a first ADD_ADDR is sent, then a new one is sent when a new subflow is established, etc. The current behaviour is probably not solid enough (e.g. if it is not possible to reach the first announced address) and the PM should probably try to send more ADD_ADDRs later, e.g. when the ADD_ADDR ECHO has been received.
Is this the current behavior in the kernel, or the patch that you were going to send me? With this behavior, I should be able to receive all endpoints (eth1, eth2), so it's fine for me. For the sake of the study, can I please get a brief description of the technical limitation? Thank you for the support once again.
The above configuration is incorrect. The endpoint should be either 'signal' or 'subflow'. For the intended scenario it must be 'signal'. The kernel PM is still misbehaving, since it's supposed to announce both addresses anyway. I can reproduce the issue with a simplified setup, and dropping the bogus 'subflow' flag resolves it, i.e. the server announces all the configured addresses. There is still a later issue, as sometimes HMAC checking fails on the additional subflow creation; still to be investigated.
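For reference, a minimal sketch of the server-side configuration this comment describes, using 'signal' endpoints only. The addresses and devices are placeholders (IPv6 documentation prefix), not the reporter's actual ones:

```
# Server side: announce the extra local addresses to the peer.
# 2001:db8::/32 addresses are placeholders for illustration.
sudo ip mptcp endpoint add 2001:db8:10::1 dev eth1 signal
sudo ip mptcp endpoint add 2001:db8:11::1 dev eth2 signal

# Note: no 'subflow' flag here; per the comment above, that flag is
# what prevented both addresses from being announced.
```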
Both kernel-side problems (missing announcement with the bogus config, sporadic subflow creation failure with the correct config) have the same root cause: HMAC check failure.
Good catch, I hadn't even noticed! The client is configured like that as well.
Indeed, we don't want to change
Thank you for having checked!
Hello @pabeni, indeed, by removing the 'subflow' flag at the server I get all the endpoints, thank you very much. So at the client the endpoints should be set as |
Yes, that was how it was designed and how it is tested: the client only sets |
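The truncated part presumably refers to 'subflow' endpoints (an assumption based on the previous comments). A sketch of such a client-side setup, with placeholder addresses and devices:

```
# Client side: declare local addresses the PM may use as sources for
# additional subflows towards the server (placeholder addresses).
sudo ip mptcp endpoint add 2001:db8:20::1 dev eth1 subflow
sudo ip mptcp endpoint add 2001:db8:21::1 dev eth2 subflow

# Make sure the limits allow the extra subflows and announced addresses
# (example values only).
sudo ip mptcp limits set subflow 4 add_addr_accepted 4
```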
I have to take back the last part of the above sentence. The kernel PM is actually working as expected: signal one local addr and then try to create additional subflows using the available local 'subflow' endpoint as source. Such additional subflow creation tries to connect to the peer (client) address and port [as the DENY_JOIN_ID0 flag is cleared at MPC handshake time] and is not successful, lacking a TCP listener on the (client) end. Still, such an attempt marks the relevant endpoint id as used (the attempt is started; mptcp can't easily diagnose the failure), making it unavailable for a later 'signal'. We could make the scenario easier to understand by adding 'subflow creation attempt' MIBs and/or setting the DENY_JOIN_ID0 flag on the client side by default. TL;DR: not a bug.
The above was due to a setup issue on my side: I used 'nc' in the background. The process closed one end of the mptcp socket, moving it out of the established status. The HMAC fail counter is increased both on HMAC failures and on mptcp-state-based failures. TL;DR: not a bug even there.
@pabeni Thank you for this clarification!
Indeed, good idea. We could also say that the in-kernel PM should not let the listener socket create new subflows. For such a particular need, people can use the userspace PM, no?
That reminds me of #203 :-)
Copying here from IRC for future memory. In the general case we may want the server socket to be able to create subflows towards the client, to cope with some protocol weirdness. And we should expose a more consistent deny_join_id0 to avoid possible interoperability issues.
All the new MIBs will be accounted for in the MPTCP code. This specific one should land here: https://elixir.bootlin.com/linux/latest/source/net/mptcp/subflow.c#L701. We should increment MPTCP_MIB_JOINACKMAC only on HMAC failures and increment different MIBs on other test failures.
@vanyingenzi I suggest closing this ticket now that 3 new ones have been created: #333 #334 #335. If I'm not mistaken, everything has been covered. If not, feel free to re-open this ticket or create a new one. A quick summary of the situation:
"it doesn't create a fullmesh, however the limits set with ip mptcp allows the kernel path manager to do so" → maybe not an issue? If yes, a new ticket will be created. See below.
Hello,
I'm currently trying to study the behavior of MPTCP servers on multihomed hosts using only IPv6.
So the setup is as follows:
Issues
The execution on the server:
./iperf3 -s -p 80 -m
The execution on the client:
./iperf3 -c 2001:6a8:308f:9:0:82ff:fe68:e519 -p 80 -t 60 -m
When the client receives an announced subflow from the server, it doesn't create a fullmesh, even though the limits set with ip mptcp allow the kernel path manager to do so.
This is the output of ip mptcp monitor when I run iperf3 with mptcpize (I also tried with the iperf3 implementation that supports MPTCP), and I never reach 4 subflows; on certain executions with the same configuration I get 3 subflows created.
$ sudo ip mptcp monitor
[ CREATED] token=4349e85c remid=0 locid=0 saddr6=2001:6a8:308f:7:56e1:adff:fe69:1e34 daddr6=2001:6a8:308f:9:0:82ff:fe68:e519 sport=52312 dport=80
[ ESTABLISHED] token=4349e85c remid=0 locid=0 saddr6=2001:6a8:308f:7:56e1:adff:fe69:1e34 daddr6=2001:6a8:308f:9:0:82ff:fe68:e519 sport=52312 dport=80
[ ANNOUNCED] token=4349e85c remid=3 daddr6=2001:6a8:308f:10:0:83ff:fe00:2 dport=80
[SF_ESTABLISHED] token=4349e85c remid=3 locid=0 saddr6=2001:6a8:308f:7:56e1:adff:fe69:1e34 daddr6=2001:6a8:308f:10:0:83ff:fe00:2 sport=49039 dport=80 backup=0
[ CLOSED] token=4349e85c
Server Configuration
On the server, here's the output of certain commands giving more information about the setup:
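The command output itself did not survive in this copy of the issue. Commands along these lines, not necessarily the reporter's exact ones, are what is typically used to capture that information:

```
# Addresses configured on each interface.
ip -6 addr show

# MPTCP endpoints and their flags (signal/subflow/fullmesh).
ip mptcp endpoint show

# Path manager limits currently in effect.
ip mptcp limits show
```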
Client configuration
I'm really open to disclosing more information about the issue.
Thank you in advance.