Shared core functionality for messages #149

Merged: 10 commits, Jul 27, 2019
4 changes: 2 additions & 2 deletions docs/NF_Dev.md
@@ -43,7 +43,7 @@ NFs can scale by running multiple threads. For launching more threads the main NF
An example of the multithreading NF scaling functionality can be seen in the scaling_example NF.

### Shared core mode
-This is an **EXPERIMENTAL** mode for OpenNetVM. It allows multiple NFs to run on a shared core. In "normal" OpenNetVM, each NF will poll its RX queue for packets, monopolizing the CPU even if it has a low load. This branch adds a semaphore-based communication system so that NFs will block when there are no packets available. The NF Manager will then signal the semaphore once one or more packets arrive.
+This is an **EXPERIMENTAL** mode for OpenNetVM. It allows multiple NFs to run on a shared core. In "normal" OpenNetVM, each NF polls both its RX queue for packets and its message queue for messages, monopolizing the CPU even if it has a low load. This branch adds a semaphore-based communication system so that NFs block when there are no packets or messages available. The NF Manager will then signal the semaphore once one or more packets or messages arrive.

This code allows you to evaluate resource management techniques for NFs that share cores; however, it has not been fully tested with complex NFs, so if you encounter any bugs please create an issue or a pull request with a proposed fix.
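
To make the semaphore handshake concrete, here is a minimal, self-contained sketch of the hybrid-polling pattern this mode implements. Every name here (`rx_count`, `mgr_loop`, the plain POSIX semaphore) is an illustrative stand-in for the OpenNetVM equivalents (`rte_ring_count()`, the manager's wakeup path, the per-NF named semaphore), not the project's API; it compiles with `gcc sketch.c -pthread`:

```c
/* Illustrative sketch of the hybrid-polling handshake, NOT OpenNetVM code. */
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static sem_t wakeup_sem;                /* stand-in for the per-NF semaphore */
static atomic_int sleep_state;          /* 1 while the "NF" is blocked */
static atomic_int rx_count, msg_count;  /* stand-ins for rte_ring_count() */

/* NF side: poll; if both "rings" are empty, advertise sleep and block. */
static void *nf_loop(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
                while (atomic_load(&rx_count) == 0 && atomic_load(&msg_count) == 0) {
                        atomic_store(&sleep_state, 1);
                        sem_wait(&wakeup_sem);
                }
                printf("NF woke: rx=%d msg=%d\n",
                       atomic_exchange(&rx_count, 0), atomic_exchange(&msg_count, 0));
        }
        return NULL;
}

/* Manager side: when new work arrives, wake the NF only if it is asleep. */
static void *mgr_loop(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
                sleep(1);                       /* pretend a packet arrives */
                atomic_fetch_add(&rx_count, 1);
                if (atomic_exchange(&sleep_state, 0) == 1)
                        sem_post(&wakeup_sem);
        }
        return NULL;
}

int main(void) {
        pthread_t nf, mgr;
        sem_init(&wakeup_sem, 0, 0);
        pthread_create(&nf, NULL, nf_loop, NULL);
        pthread_create(&mgr, NULL, mgr_loop, NULL);
        pthread_join(nf, NULL);
        pthread_join(mgr, NULL);
        sem_destroy(&wakeup_sem);
        return 0;
}
```

The key ordering is that the NF advertises `sleep_state` before blocking, so the signaling side can always tell whether a post is needed.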

@@ -52,7 +52,7 @@ The code is based on the hybrid-polling model proposed in [_Flurries: Countless
Usage / Known Limitations:
- To enable this mode, pass the `-c` flag to the onvm_mgr, and use the `-s` flag when starting an NF to specify that it wants to share a core
- All code for sharing CPUs is within `if (ONVM_NF_SHARE_CORES)` blocks
-- When enabled, you can run multiple NFs on the same CPU core with much less interference than if they are polling for packets
+- When enabled, you can run multiple NFs on the same CPU core with much less interference than if they are polling for packets and messages
- This code does not provide any particular intelligence for how NFs are scheduled or when they wake up/sleep
- Note that all manager threads still use polling

3 changes: 2 additions & 1 deletion onvm/onvm_nflib/onvm_common.h
100644 → 100755
@@ -75,6 +75,7 @@
#define ONVM_NF_ACTION_OUT 3 // send the packet out the NIC port set in the argument field

#define PKT_WAKEUP_THRESHOLD 1 // for shared core mode, how many packets are required to wake up the NF
+#define MSG_WAKEUP_THRESHOLD 1 // for shared core mode, how many messages on an NF's ring are required to wake up the NF

/* Used in setting bit flags for core options */
#define MANUAL_CORE_ASSIGNMENT_BIT 0
@@ -469,7 +470,7 @@ get_sem_name(unsigned id) {

static inline int
whether_wakeup_client(struct onvm_nf *nf, struct nf_wakeup_info *nf_wakeup_info) {
-        if (rte_ring_count(nf->rx_q) < PKT_WAKEUP_THRESHOLD)
+        if (rte_ring_count(nf->rx_q) < PKT_WAKEUP_THRESHOLD && rte_ring_count(nf->msg_q) < MSG_WAKEUP_THRESHOLD)
                return 0;

        /* Check if it's already woken up */
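
For context on how the manager delivers the wakeup described in the docs above, here is a hedged sketch of a caller for `whether_wakeup_client()`; the helper name `check_and_wakeup_nf` and the `sem` field on `nf_wakeup_info` are assumptions for illustration, not the verbatim onvm_mgr code:

```c
/* Hypothetical manager-side helper (not the repo's actual code): wake a
 * sleeping NF once whether_wakeup_client() reports that either ring has
 * crossed its wakeup threshold. */
static inline void
check_and_wakeup_nf(struct onvm_nf *nf, struct nf_wakeup_info *nf_wakeup_info) {
        /* Returns 0 when both rings are below their thresholds, or (per the
         * check above) when the NF is already awake. */
        if (!whether_wakeup_client(nf, nf_wakeup_info))
                return;

        /* Clear the sleep flag before posting so the NF does not resume
         * with a stale sleep_state. */
        rte_atomic16_set(nf->shared_core.sleep_state, 0);
        sem_post(nf_wakeup_info->sem);  /* the 'sem' field is an assumption */
}
```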
13 changes: 8 additions & 5 deletions onvm/onvm_nflib/onvm_nflib.c
100644 → 100755
@@ -555,6 +555,14 @@ onvm_nflib_thread_main_loop(void *arg) {

        start_time = rte_get_tsc_cycles();
        for (; rte_atomic16_read(&nf_local_ctx->keep_running) && rte_atomic16_read(&main_nf_local_ctx->keep_running);) {
+                /* Possibly sleep if in shared core mode, otherwise continue */
+                if (ONVM_NF_SHARE_CORES) {
+                        if (unlikely(rte_ring_count(nf->rx_q) == 0) && likely(rte_ring_count(nf->msg_q) == 0)) {
+                                rte_atomic16_set(nf->shared_core.sleep_state, 1);
+                                sem_wait(nf->shared_core.nf_mutex);
+                        }
+                }
+
                nb_pkts_added =
                    onvm_nflib_dequeue_packets((void **)pkts, nf_local_ctx, nf->function_table->pkt_handler);
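
Because the empty-ring check and `sem_wait()` in the added block are not atomic, a packet or message can arrive in between; the scheme still avoids a lost wakeup because the manager threads keep polling (see docs/NF_Dev.md) and re-test `sleep_state` against the thresholds on every pass. A rough timeline, as a comment sketch rather than repo code:

```c
/* Why the non-atomic check-then-sleep is safe (illustrative, not repo code):
 *
 *   NF thread                              manager wakeup thread (polling)
 *   ---------------------------------     ---------------------------------
 *   rx_q and msg_q both empty? -> yes
 *                                          packet enqueued on rx_q
 *   sleep_state = 1
 *   sem_wait(nf_mutex)    ...blocked
 *                                          next pass: ring count crosses its
 *                                          threshold and NF is asleep
 *                                          -> sem_post()
 *
 * A packet or message that slips in between the NF's ring check and its
 * sem_wait() is observed on the manager's next polling pass, so the NF is
 * woken at most one pass later instead of sleeping forever.
 */
```

This also explains why the sleep logic moved here from `onvm_nflib_dequeue_packets()` (removed below): that path only examined `rx_q`, so an NF could have blocked with messages still pending on `msg_q`.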

@@ -908,12 +916,7 @@ onvm_nflib_dequeue_packets(void **pkts, struct onvm_nf_local_ctx *nf_local_ctx,
        /* Dequeue all packets in ring up to max possible. */
        nb_pkts = rte_ring_dequeue_burst(nf->rx_q, pkts, PACKET_READ_SIZE, NULL);

-        /* Possibly sleep if in shared core mode, otherwise return */
        if (unlikely(nb_pkts == 0)) {
-                if (ONVM_NF_SHARE_CORES) {
-                        rte_atomic16_set(nf->shared_core.sleep_state, 1);
-                        sem_wait(nf->shared_core.nf_mutex);
-                }
                return 0;
        }
