Problem with IPROTO #411
Hi, this issue mixes a few different topics; let's try to separate them.
So the tool works, even if it's unable to fetch some info (get_transport_info: getsockopt: errno 95), but that is a warning, not a failure. Note that the warning is caused by the MPTCP protocol not supporting the TCP_MAXSEG socket option. We could add such support, but first we should clarify the meaning of that option when multiple subflows are active. Should we pick the min, the max, or the first subflow's maxseg? I'm wild-guessing 'min' would be a good pick. In any case netperf is expected to work; please report your setup details and the exact error message if it does not.
TL;DR: netperf should work with mptcpize; please provide more info if you see failures.
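For reference, a minimal sketch of how such a test could look (the host address is a placeholder, and this assumes mptcpize and netperf are installed on both ends of an MPTCP-capable setup):

```shell
# On the server side, start the netperf daemon with MPTCP forced on:
mptcpize run netserver

# On the client, run a bulk-transfer test; mptcpize makes netperf's
# sockets use the MPTCP protocol instead of plain TCP:
mptcpize run netperf -H 192.0.2.1 -t TCP_STREAM -l 10
```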
But if I use mptcpize with netperf, it doesn't take multiple subflows (with the default scheduler), while mptcpize with iperf3 does. I have checked that my systemtap is working fine, and I still keep getting the same error.
Hi,
I guess you really mean that "mptcpized" netperf does not create multiple subflows, while iperf3 does. That is quite unexpected, as the mptcpize tool only forces the specified application to use the MPTCP protocol instead of TCP. Additional subflow creation is always in charge of the Path Manager (so even the scheduler is irrelevant). The Path Manager can be configured e.g. via the iproute2 tool, see: If the number of additional subflows created by iperf3 and netperf is different, probably something changed in your environment between the 2 tests. As already asked, please report more information, such as:
Additionally, a pcap capture on the relevant interface(s), limited to TCP SYN/FIN/RST packets, could help.
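Such a capture could be taken as follows (a sketch; the interface name is an example and the command needs root):

```shell
# Capture only SYN/FIN/RST segments: enough to see the MP_CAPABLE and
# MP_JOIN handshakes on each subflow without storing the data stream.
tcpdump -i eth0 -w mptcp-setup.pcap \
    'tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) != 0'
```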
Still, the error you report above looks more related to the systemtap environment than to the specific MPTCP stap script. In any case systemtap will provide the same results as mptcpize, and it is an obsolete method. Please instead provide the info requested above, thanks!
In theory, if the default scheduler is used, should the same data be sent through all subflows, like the redundant scheduler? Or different data on each one? (Using mptcpize with netperf.)
The default scheduler should use all available paths in a way to reach the best throughput.
mptcpize and systemtap only force an application to create MPTCP connections instead of "plain" TCP ones. They don't influence the path manager or the packet scheduler, which are configured on the system via sysctl. EDIT: you can have a situation where multiple paths are created (or the stack tries to create multiple ones) but only one of them is used for some reason. It is always important to check the counters, limits and endpoints to know what's happening, as mentioned by Paolo.
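As a concrete illustration of checking "counters, limits and endpoints" with iproute2 (a sketch; needs root on an MPTCP-capable kernel, and the address and device below are example values for a secondary interface):

```shell
# Path-manager limits: how many extra subflows per connection are
# allowed, and how many ADD_ADDR announcements from the peer are accepted.
ip mptcp limits show
ip mptcp limits set subflow 2 add_addr_accepted 2

# Endpoints the in-kernel path manager may use for additional subflows
# (10.0.1.2 / eth1 are placeholders):
ip mptcp endpoint show
ip mptcp endpoint add 10.0.1.2 dev eth1 subflow

# MPTCP MIB counters; e.g. the MPJoin* counters show whether additional
# subflows were actually attempted and established.
nstat -as | grep -i mptcp
```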
So, it's like the redundant scheduler?
No, redundant is supposed to duplicate the data over the different paths in order to minimise delay in case of issues with one link. (It is probably not a good idea to use this "redundant" technique with MPTCP, except maybe for very specific use-cases, coupled with a dedicated path manager recreating bad paths very quickly.)
Hello @AlejandraOliver, I lost track of whether there are still some open questions here. Can we close this issue?
Hi @matttbe, sorry I didn't reply to you earlier. You can close the issue, all ok! Thanks!!
Hi, I am trying to get my Linux machine to use MPTCP sockets instead of TCP (I'm using a kernel from the export branch). So far I have been running
mptcpize run iperf3
to test for MPTCP. Now I want to run netperf, but I think this tool doesn't work with mptcpize. Does anyone have the .stap script I need in order to get MPTCP sockets without using mptcpize? I have used
But I get an error: