swss: flush g_asicState after each event is done #570

Merged
Changes from 1 commit
1 change: 1 addition & 0 deletions orchagent/orchdaemon.cpp
Expand Up @@ -313,5 +313,6 @@ void OrchDaemon::start()
for (Orch *o : m_orchList)
o->doTask();

flush(); //flush after each event is handled, don't wait
qiluo-msft (Contributor) commented on Aug 10, 2018:

> //flush after e

Please follow the coding style of the other comments. #Closed

Collaborator (Author) replied:

Sure. Will do

qiluo-msft (Contributor) commented on Aug 10, 2018:

> flush

About the performance claim "route performance improved by 200~300 routes/sec": could you provide more details?

  1. What are the test environment and test steps?
  2. What was the performance before this PR, and after?
  3. If it is a small-scope test, could you add a unit test or vs test case to automate it?

#Closed

Collaborator (Author) replied:

  1. The environment and test steps are listed on the slides we discussed about two weeks ago. I already sent the slides internally at that time; if you didn't get them, I can resend them to you.
  2. The performance is listed in the slides as well. Looking at only one PR in isolation doesn't make sense; the work involved many PRs, one for each optimization. We need to look at them together.
  3. We tested the routing performance on a physical switch, and it matched what I listed on the slides.


qiluo-msft (Contributor) commented on Aug 10, 2018:

OK. I just want to know the performance before this PR and after. You only mentioned that it improved by 200~300 routes/sec. Rough numbers are OK. #Closed

Collaborator (Author) replied:

Before the changes it was 1300 routes/sec. The platform is Broadcom (brcm), SAI is 3.1, and the CPU is an Intel(R) Atom(TM) CPU C2558 @ 2.40GHz. After enabling the pipeline changes only, it is about 1500-1600 routes/sec; adding the syncd buffer changes from the other PR on top, it is about 1700-1800 routes/sec.

}
}