
Macros for running flow table lookup #147

Merged · 2 commits into sdnfv:develop · Jun 28, 2019
Conversation

kevindweb
Contributor

@kevindweb kevindweb commented Jun 18, 2019

Referencing issue #62, this PR adds a macro FLOW_LOOKUP to onvm_pkt.c to enable/disable our flow table lookup.

Summary:

Although this is labeled a "good first issue", it has been open for a while, and @koolzz and I found it very useful for testing the pktgen functionality in CI. Here is what I measured with this change: with ENABLE_FLOW_LOOKUP=1, the max RX was 13331727 pps and the max TX was 14811688 pps. With the flow table lookup portion of onvm_pkt_process_rx_batch disabled, we got a max RX of 14783964 pps and a max TX of 14785566 pps. That is over 1 million pps faster on RX with the flow table disabled, which makes sense, but it is good to have it confirmed.

Usage:
By default, per @twood02's request in the original issue, flow lookups happen. Setting ENABLE_FLOW_LOOKUP=0 in /onvm/onvm_mgr/Makefile disables them, which is useful if only one port is running pktgen.

This PR includes
Resolves issue #62
Breaking API changes
Internal API changes
Usability improvements
Bug fixes
New functionality 👍
New NF/onvm_mgr args
Changes to starting NFs
Dependency updates
Web stats updates

Merging notes:

  • Dependencies: None
    I want to merge this into the upcoming CI pktgen update to leverage this functionality. We saw a 6 million pps increase by using this on the cluster nodes the other day (no strict dependencies, though).

TODO before merging :

  • PR is ready for review

Test Plan:

Make sure pktgen still works.

Review:

@koolzz you were there on Saturday talking about this
@dennisafa I know you work a lot with Pktgen; if you could test this, that would be appreciated.

@kevindweb
Contributor Author

I purposely turned off CI in testing, I will definitely run CI before merge.

Member

@koolzz koolzz left a comment


Speed is good

@@ -35,6 +35,9 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# set this to 1 if onvm is using flow table lookup for incoming packets
Member


Set this to 0 to disable flow table lookup for incoming packets
If the default is 1, we should describe the alternative

@koolzz
Member

koolzz commented Jun 18, 2019

Won't merge before CI approves this though 😛

@koolzz
Member

koolzz commented Jun 18, 2019

Resolves issue #62 I think? Not 64


@kevindweb
Contributor Author

@onvm will you be kind?

@onvm

onvm commented Jun 24, 2019

@onvm will you be kind?

CI Message

Your results will arrive shortly


@onvm onvm left a comment


@onvm will you be kind?

CI Message

Run successful see results:
✔️ PR submitted to develop branch
✔️ Speed tester performance check passed
✔️ Linter passed

[Results from nimbnode17]

  • Median TX pps for Pktgen: 6472504
  • Performance rating - 107.88% (compared to 6000000 average)

@kevindweb kevindweb mentioned this pull request Jun 24, 2019
1 task
dennisafa pushed a commit to dennisafa/openNetVM that referenced this pull request Jun 25, 2019
This code introduces **EXPERIMENTAL** support to allow NFs to efficiently run on **shared** CPU cores.  NFs wait on semaphores when idle and are signaled by the manager when new packets arrive. Once the NF is in wake state, no additional notifications will be sent until it goes back to sleep.  Shared cpu variables for mgr are in the `nf_wakeup_info` structs, the NF shared cpu vars were moved to the `onvm_nf` struct.

 - The scaling/pthread model required modifications to the signal handling process. To accommodate proper signal handling and fix other issues, children block all signals and only the parent handles them. When terminating, the parent NF tells all its children to stop running, which in turn tell their children to stop running, and so on; this ensures that the parent thread blocks until absolutely all descendants have exited (this is required because, even though a pthread child is detached, it will be abruptly killed if the main process exits)
 - Creates an `onvm_nf_context` that is malloced before nflib starts and holds info about that specific NF (shutdown information plus the nf and nf_info structs). This struct replaces `onvm_nf_info` in most API calls. _In the next PR, `onvm_nf_info` will be reworked into an NF initialization structure; more on that later_
 - Running the `onvm_nflib_start_signal_handler(nf_context, NULL);` call before `onvm_nflib_init` to ensure proper signal handling even on NF startup (pretty sure that resolves sdnfv#93)
 - `onvm_nflib_stop` is now executed from main() instead of inside `onvm_nflib_run`. Also, `onvm_nflib_stop` will be automatically called for the children spawned by the scaling API.
  
Other related changes:

 - Stats `-v` mode gets an update with the shared cpu information if it is available
 - onvm gets a config structure which contains onvm configuration flags for now but might be expanded later. This will be stored in a memzone and shared with the NFs.

**API additions:**

 - `onvm_nflib_start_signal_handler(struct onvm_nf_context *nf_context, handle_signal_func signal_hanlder);`

**API changes:**

 - `onvm_nflib_init(int argc, char *argv[], const char *nf_tag, struct onvm_nf_info **nf_info_p);` -> `onvm_nflib_init(int argc, char *argv[], const char *nf_tag, struct onvm_nf_context *nf_context);`
 - `onvm_nflib_run_callback(struct onvm_nf_info *info, pkt_handler_func pkt_handler, callback_handler_func callback_handler);` -> `onvm_nflib_run_callback(struct onvm_nf_context *nf_context, pkt_handler_func pkt_handler, callback_handler_func callback_handler);`
 - `onvm_nflib_run(struct onvm_nf_info *info, pkt_handler_func pkt_handler);` -> `onvm_nflib_run(struct onvm_nf_context *nf_context, pkt_handler_func pkt_handler);`
 - `onvm_nflib_stop(struct onvm_nf_info *nf_info);` -> `onvm_nflib_stop(struct onvm_nf_context *nf_context);`

All code changes are featurized using INTERRUPT_SEM macro.

See the ONVM README.md file for more information and warnings.

Commit log:

* Shared CPU NFs (sdnfv#150)

* merges rebased to master with sdnfv#147

* reverting num_clients update.

* replace tabs with spaces

* Readme describing experimental shared CPU support

* Add usage and known limitations for shared CPU code

* Bug Fix: Fixes instance ID assignment race condition (sdnfv#183) (sdnfv#185)

Fixes a race condition in NF instance ID assignment.  Before, if two NFs were started at the same time, they would be assigned the same instance ID.  This change modifies logic in `onvm_nf_next_instance_id` to avoid assigning the same ID to two starting NFs.

* Bug Fix: Fixes instance ID assignment race condition
  - Fixes sdnfv#178 
  - Changed incrementation of next_instance_id in onvm_mgr/onvm_nf.c
such that the ID can't be reused before an NF becomes active.

* partial refactoring of api w accordance of nf_info

* onvm mgr compiles

* flow table compiles -- all examples compile

* changed onvm_get_rx and onvm_get_tx to get rx_q and tx_q associated with onvm_nf struct

* Did requested changes. Also updated newer NFs (such as arp_response and flow_tracker). Also did some small style changes in  some other documentation in onvm_nflib.h

* Corrected some incorrect references to nflib functions in NF_Dev.md

* removed typing error

* Minor cleanup

* Stats cleanup

 - Some stats were unused and broken(?); nuked them
 - added stats that are possibly useful to stdout (slightly ugly)

* Add functionality to scale same NF instances

* Fixed a few global vars for multithreading to work

* Code clean up, comments and style fixes

* Added scale return value

* Fixed nf_mode for running advanced rings

* Refactored the onvm_nf/onvm_nf_info passing

- Keeping onvm_nf_info small, moved vars to onvm_nf
- Added a callback function for the scaling child thread
- Cleaned up code

* Adv rings NF scaling, setup func for spawned NFS

- Adds ability to scale NFs with advanced rings
- Adds a setup function for NFs
- More docs and cleanup

* Fixed build errors, changed NF typedef func names

* Launching NFs with different state/pkt processing

 -  Moving nf_info as passed arg to packet_handler -> allows for spawned NF to access their own nf_info
 -  Launching new NFs now accepts arguments for packet_handler function, setup function, callback function, advanced rings function
 -  Add a data pointer to onvm_nflib_info to maintain state
 -  Working on a scaling example (currently testing code, not in usable state) will showcase different features of spawning NFs

* Major changes, init breakdown, argv for spawned nf

- This is a major change: a lot of nflib initialization is refactored to allow spawned NFs to launch with their own service id
- There is a bug: when we spawn a lot of new NFs, we can no longer retrieve the nf_info from nf_info_mp; this needs to be fixed as it breaks stuff

* Fixing bugs, cleaning up code, updating example NF

- Fixed the nf_info bug where the mempool would get messed up as children weren't cleared properly.
- Updated the example scaling NF
- Removed most of the debugging code

* Clean up of shared cpu branch

Cleaned up messy bits, replacing macro branches with if statements, encapsulating code in functions, and getting rid of unnecessary code.

* The code cleaning commit has arrived

 - Style fixes
 - Removing debug code
 - Making code prettier
 - Adding better docs
 - Implementing requested changes

* Minor cleanup/whitespace fixes

* Fix Large ServiceNum port crash bug

* Implementing requested changes

* Update scaling  docs

* Doc typos fixed

* Doc fixes

* Performance fixes

 - Changed MAX_NFS back to 128, improves performance
 - Changed the order inside of the onvm_nf struct, due to struct packing this reduced the overall size of onvm_nf down to 168 from 176

* Fixed memzone init size issue

* Update Releases.md

* Fix space

* initial pthread testing

* Small fixes

* small fixes

* More minor fixes

* More tiny fixes

* Scaling nf fixes

* Small fixes

* Logging, README fixes

* Added Manual/MGR core assignment mode, cleanup

* Added dedicated core option, error codes

* Combining shared cpu & pthreads

 - Dedicated pthread to watch for signals; this ensures that when the user sends a SIGINT the threads are properly woken up and cleaned up. This was not possible with the previous signal handler, as it didn't have NF information

* Implemented shared cpu mutex logic for adv rings

* Add hacky shared cpu fix

* Fix some unused/old code

* Minor fixes, cleanup

* Remove testing print statement

* Remove unused func

* Fix shutdown thread join and cleanup

* Shared CPU mode as an argument

 - Add shared cpu '-c' flag passed to onvm_mgr
 - Add custom flags memzone
 - Surround all shared cpu code with if(shared_cpu)
 - Overall cleanup

* Update README.md

* Update NF_Dev.md

* Fix onvm_manager README

* Scaling nf share cpu flag

* Style fixes

* Cleanup

* Code cleanup

* Cleanup

* Onvm config struct for flags

* Refactoring and things

* Remove unused vars, rename var

* Fix stats & remove unrelated code

* Style fixes

* Fix imports in speed tester

* Fix race by properly killing grandchildren

* Rename vars, cleanup, comments

* Final cleanup

* Minor style fixes

* Minor style fixes

* Only cleanup shm when in shared cpu mode

* Fix unused macro

* Fix flag_p name to sleep_state

* Fix comment about signal thread

* Fix some style issues

* Fix scaling adv ring children termination

* Fix sleep issue with shared cpu scaling

* Fix cleanup, logical redesign

* The bug is strong with this one

* Fix type

* Pre-init signaling thread

* Better pre-init termination

* Working on combining signal handlers

* Remove unused dpdk_init_finished var

* Rewrote some termination logic

* Add defines in header file

* Fix signal handling, add nf context (api changes)

 - For proper termination of both advanced rings and normal NFs create
   nf context which is malloced as opposed to rte_malloc and is created
   prior to nflib init.
 - To properly handle signals and simplify the process instead of using
   a dedicated pthread, use signal() + global termination context
 - Involves a bunch of API changes to handle the new context variable,
   some API changes are still missing

* Cleanup comments, remove unused vars

* Further cleanup

* Cleanup

* More cleanup/updates

* Resolve needed signaling issues

* Update NFs to new API

* Fix fake getopts hack

* Add custom nf specific sig handling

* Update docs with paper description

* Update NF_Dev.md

* Update NF_Dev.md

* update references to research papers

* Remove debug print

* Make children_cnt atomic

* Fix docs and comments

* Fix style issues

* Redo termination cleanup

* Removed ugly flag, proposed by the one and only @nks5295

* Fixed API calls for all example NFs

* Remove extra onvm_nflib_stop call in SpeedTester

* Add time based stop

* Fix Basic Monitor NF

* Fixed iter cnt for shutdown

* Fix comment

* Make keep_running atomic
Member

@dennisafa dennisafa left a comment


I like the usage of FLOW_LOOKUP in the makefile, looks clean.

@koolzz
Member

koolzz commented Jun 28, 2019

@onvm you're next

@onvm

onvm commented Jun 28, 2019

@onvm you're next

CI Message

Another CI run in progress, adding request to the end of the list

@onvm

onvm commented Jun 28, 2019

@onvm you're next

CI Message

Your results will arrive shortly


@onvm onvm left a comment


@onvm you're next

CI Message

Run successful see results:
✔️ PR submitted to develop branch
✔️ Pktgen performance check passed
✔️ Speed Test performance check passed
✔️ Linter passed

[Results from nimbnode17]

  • Median TX pps for Pktgen: 6327939

  • Performance rating - 105.47% (compared to 6000000 average)

  • Median TX pps for Speed Tester: 38447214

  • Performance rating - 109.85% (compared to 35000000 average)

Member

@koolzz koolzz left a comment


Implement this thingy for our pktgen testing as we discussed, but this is good on its own so good work and merging

@koolzz koolzz merged commit 6bf1c9f into sdnfv:develop Jun 28, 2019
@onvm

onvm commented Jun 29, 2019

qqq

CI Message

Your results will arrive shortly

@onvm

onvm commented Jun 29, 2019

qqq

CI Message

Error: ERROR: Script failed on nimbnode30

@onvm

onvm commented Jul 1, 2019

Testing

CI Message

Your results will arrive shortly


@onvm onvm left a comment


Testing

CI Message

Run successful see results:
✔️ PR submitted to develop branch
✔️ Pktgen performance check passed
❌ PR drops Speed Test performance below minimum requirement
✔️ Linter passed

[Results from nimbnode30]

  • Median TX pps for Pktgen: 2629066

  • Performance rating - 100.35% (compared to 2620000 average)

  • Median TX pps for Speed Tester: 33952720

  • Performance rating - 89.35% (compared to 38000000 average)

@koolzz koolzz added this to the ONVM v19.07 milestone Jul 31, 2019
@kevindweb kevindweb deleted the flow_table branch March 21, 2020 20:46