
Fast path changes #45

Open
kouske opened this issue Nov 23, 2015 · 8 comments

@kouske

kouske commented Nov 23, 2015

Hi,
I have noticed that if an MP has sent a PREQ to two other MPs, the following happens:

  1. If the path calculated from the first reply is better than the current path, the MP immediately switches to it without waiting for the reply from the second MP.
  2. If the second MP's reply contains a better path, the path is immediately changed again.

This creates a situation where, on every path selection, I see jitter in the throughput because of the constant path changes.

Is this how the mesh should work? Or is there some setting that can prevent this situation?

Best Regards

@chunyeow
Contributor

You can try to use net_traversal_jiffies to limit this.

But please check whether this patch is available:
http://git.kernel.org/cgit/linux/kernel/git/jberg/mac80211-next.git/commit/?id=bc3ce0b0be6b85e738e80ed25b52ad940f34b921

If so, maybe you can try reverting it and see how it behaves.

@kouske
Author

kouske commented Nov 24, 2015

It seems that the patch is available.

Even so, shouldn't the mesh path selection algorithm consider the other responses before deciding (without reverting the patch)?

@kouske kouske closed this as completed Nov 24, 2015
@kouske kouske reopened this Nov 24, 2015
@bcopeland
Contributor

I'd have to reread what the standard says on this -- but yes, the code as currently written will take the first response, and then any after that which have a better metric, for any given path refresh. So this implies a path swap if the first response is not the best. So, supposing we do something different here, I guess we'd need to buffer the responses for some amount of time before committing to one.

@chunyeow
Contributor

I thought the introduction of net_traversal_jiffies was mainly for this, no?

On Sat, Nov 28, 2015 at 5:32 AM, Bob Copeland notifications@github.com
wrote:

> I'd have to reread what the standard says on this -- but yes, the code as
> currently written will take the first response, and then any after that
> which have a better metric, for any given path refresh. So this implies a
> path swap if the first response is not the best. So, supposing we do
> something different here, I guess we'd need to buffer the responses for
> some amount of time before committing to one.



@bcopeland
Contributor

On Fri, Nov 27, 2015 at 05:42:19PM -0800, Chun-Yeow wrote:

> I thought that the introduction of net_traversal_jiffies is mainly for
> this, nope?


According to the standard there are two uses of net_traversal_jiffies
(13.10.8 5,6):

  1. limit the PREQs sent for a single target to 2X net traversal time
  2. restrict SN increments to net traversal time

#2 is what we are doing in PREQ response, but it just means we send
a PREP with the same SN for all the PREQs we get. So if we get multiple
PREPs all with the same (target) SN then we'll still take the best one
here:

    if (SN_GT(mpath->sn, orig_sn) ||
        (mpath->sn == orig_sn &&
         new_metric >= mpath->metric)) {
            /* don't process */
    }
    /* do process: orig_sn >= ours, or new_metric < ours */

That will happen as soon as we get the PREP. The standard doesn't say
anything about buffering the replies though to only pick the best one,
as far as I can tell (13.10.10.4.3).

Bob Copeland %% http://bobcopeland.com/

@chunyeow
Contributor

chunyeow commented Dec 1, 2015

I went through the paper "A joint experimental and simulation study of the IEEE 802.11s HWMP protocol and airtime link metric", and it mentions that the original behavior (without Bob's patch) was marked optional in one of the early IEEE 802.11s drafts implemented by open80211s. The paper also recommends a few approaches to improve this. Maybe it is worth a look.

@sritam2

sritam2 commented Jul 4, 2017

I am trying to form a mesh network using SAE authentication on a very busy channel (channel 6, 2437 MHz). Yet the authentication time (calculated from the time stamps in wpa_supplicant) is not affected by the channel being busy: it takes the same time as when the channel was free (nearby access points, routers, and other devices all switched off).

How is this possible? Even when the channel is heavily loaded with traffic, the time required for a new node to authenticate and join the MBSS is unaffected. Is there a reason for that? Is authentication traffic given higher priority than normal data traffic?

Please help me clear up this doubt.

Thanks and Regards,
Sritam.

@zhejunli

@sritam2: I believe it is because Management/Control packets have higher priority than Data packets. For example, ath9k supports 10 queues, and data packets use Q0, which has the lowest priority. This gives Mgmt/Ctl packets a higher chance of being sent, both inside the chip and in the air, via the DCF mechanism.
