
The best way to analyse MPTCPv1 performance? [QUESTION] #251

Closed
vandit86 opened this issue Dec 15, 2021 · 7 comments

@vandit86

I've been experimenting with MPTCPv0 (out-of-tree) and MPTCPv1 (upstream (RFC 8684)) for a while now.
However, I can't find a good application-layer analysis tool for the upstream MPTCP protocol. None of the existing tools are suitable for MPTCPv1, since the upstream version is not compatible with MPTCPv0. For now, I keep using the same set of standard tools, like Wireshark (version 3.4.8) and tcptrace, to analyse MPTCPv1 performance.

Can someone advise me on the best way to examine the MPTCPv1 protocol? For example, to see the data sequence number evolution across subflows, delays, re-injections, data mappings and so on. I note that Wireshark works well with MPTCPv0, but it can't perfectly interpret (translate) the upstream version of the protocol. Please correct me if I'm wrong.

Also, I can't find documentation other than the RFC. Many aspects of the upstream protocol's operation are not clear from the RFC (e.g., congestion control, scheduling mechanism) and can only be identified by examining traffic. It would be nice to have a set of configuration/usage examples to help one better understand the operation of MPTCPv1.

Thank you!

@RuiCunhaM

RuiCunhaM commented Dec 16, 2021

I note that wireshark works well with MPTCPv0, but it can't interpret (translate) perfectly the upstream version of the protocol.

Hi! Just out of curiosity, why exactly do you find that Wireshark does not perfectly translate the upstream/v1 version of the protocol? I ask because it is what I've been using as well, and so far I haven't found anything wrong; I wonder if I missed something.

@matttbe
Member

matttbe commented Dec 16, 2021

Hello,

If you compile the 'mptcp_v0.96' branch of the out-of-tree version, you can turn on a sysctl to use MPTCPv1.
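(As a sketch, the switch could look like the following; the exact sysctl name, assumed here to be `net.mptcp.mptcp_version`, should be checked against the out-of-tree branch's documentation:)

```shell
# On a kernel built from the out-of-tree 'mptcp_v0.96' branch:
sysctl net.mptcp.mptcp_enabled              # confirm MPTCP is enabled
sudo sysctl -w net.mptcp.mptcp_version=1    # assumed knob: select MPTCPv1 (RFC 8684)
```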

Latest version of Wireshark should properly interpret MPTCPv1. If not, there is a bug :)

tcptrace has never been MPTCP aware.

Also, I can't find documentation other than RFC. Many aspects of the upstream protocol operation are not clear from the RFC (i.e., congestion control, scheduling mechanism, etc) and can only be identified by examining traffic. It would be nice to have a set of configuration/usage examples that will help one to understand better the operation of MPTCPv1.

Indeed, but for the moment the configuration is quite simple: there is one path manager that can be configured with ip mptcp (there is a dedicated man page), one packet scheduler, and no MPTCP-specific congestion control.
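(For reference, a typical upstream-kernel configuration with `ip mptcp` looks something like the sketch below; the interface name and address are placeholders:)

```shell
# Allow up to 2 extra subflows and 2 accepted ADD_ADDRs per connection
sudo ip mptcp limits set subflow 2 add_addr_accepted 2

# Register a second interface's address as an additional subflow endpoint
sudo ip mptcp endpoint add 10.0.2.1 dev eth1 subflow

# Inspect the current path-manager configuration
ip mptcp limits show
ip mptcp endpoint show
```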

@vandit86
Author

@RuiCunhaM Actually, I was confused by the Data Sequence Number (DSN) and the way it is represented in Wireshark for the upstream version. There are three different fields: tcp.options.mptcp.rawdataseqno, mptcp.dsn, and mptcp.dss.dsn. For the out-of-tree protocol version I looked only at mptcp.dsn (second DSN column in the image), which was a nicely increasing relative number.., but I guess for the upstream version tcp.options.mptcp.rawdataseqno makes more sense (first DSN column in the image).

[Screenshot: wireshark-mptcp-v1-dsn-differance]

Another thing I noted is that Wireshark could not recognise data reinjection at the MPTCP level (mptcp.reinjection_of and mptcp.reinjected_in). Maybe this is because upstream MPTCPv1 uses a mapping for a range of data, rather than per-packet signalling.
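(One way to compare those DSN fields side by side is a tshark field extraction over a capture; the field names are the ones Wireshark's MPTCP dissector exposes, and `trace.pcap` is a placeholder for your own capture file:)

```shell
# Dump raw and relative DSN fields for every packet carrying a DSS option
# (MPTCP option subtype 2 = DSS) so the two numberings can be compared
tshark -r trace.pcap -Y "tcp.options.mptcp.subtype == 2" \
  -T fields -e frame.number -e tcp.stream \
  -e tcp.options.mptcp.rawdataseqno -e mptcp.dsn -e mptcp.dss.dsn
```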

@vandit86
Author

@matttbe ,

If you compile 'mptcp_v0.96' branch in the out-of-tree version, you can turn on a sysctl to use MPTCPv1.

I've been experimenting with the 5.15.0-rc7 kernel from the export branch of this (mptcp_net-next) repo.

Indeed but for the moment, the configuration is quite simple: there is one path-manager that can be modified with ip mptcp (there is a dedicated man page), one packet scheduler and no MPTCP-specific Congestion Control.

As I understand it, MPTCPv1 implements a BLEST-like packet scheduler and the coupled congestion control??

Thanks,

@matttbe
Member

matttbe commented Dec 17, 2021

@vandit86 I think we are talking about different things:

  • MPTCPv1 is the protocol defined in RFC8684
  • MPTCPv0: RFC6824 (deprecated by MPTCPv1)

There are two major implementations of MPTCP in the Linux kernel:

The out-of-tree version has supported MPTCPv0 from the beginning and MPTCPv1 since v0.96 (kernel 5.4).
The upstream version only supports MPTCPv1.

As I understand it, MPTCP Upstream implements a BLEST-like packet scheduler and the coupled congestion control??

For the moment, the upstream implementation has only one packet scheduler, inspired by the BLEST model.
No coupled congestion controls are supported (yet); regular ones are used per subflow.
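(Since each subflow uses a regular TCP congestion control, the standard knobs apply; a small sketch of how one might inspect or change it:)

```shell
# List the congestion control modules available to regular TCP (and thus
# to each MPTCP subflow), and pick one system-wide
sysctl net.ipv4.tcp_available_congestion_control
sudo sysctl -w net.ipv4.tcp_congestion_control=cubic

# ss -ti reports the congestion control in use on each established flow
ss -ti state established
```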

@vandit86
Author

Ok, now everything is much clearer to me than before. Thank you, Matthieu @matttbe, for the explanation)
I switched from the older kernel to the upstream version in order to run experiments with MPTCPv1. Now I see that I could instead have compiled the 'mptcp_v0.96' branch of the out-of-tree version (kernel 5.4) and switched to version 1 via sysctl. (I guess..)

Furthermore, as far as I understand, the out-of-tree implementation currently offers more configuration flexibility, i.e., it is possible to select different available path managers, congestion controls and schedulers.. In addition, the ip mptcp tool should work as well if MPTCPv1 is enabled.. Am I right??

Thank you !!!

@matttbe
Member

matttbe commented Dec 17, 2021

Great!

In addition, ip mptcp tool should work as well if MPTCPv1 is enabled..

No, ip mptcp is specific to the Upstream implementation.

I hope you don't mind if I close this ticket.

As for what you saw with Wireshark, it looks like a bug on their side if it cannot properly track the DSN. It would probably be best to open a bug report there. (But because it might be specific to MPTCPv1, maybe someone else should look at that → it might also be good to open a new dedicated ticket in this project?)

@matttbe matttbe closed this as completed Dec 17, 2021