Optimize x86 atomic_fence (#328)
* Added optimized x86 atomic_fence for gcc-compatible compilers.

On x86 (32- and 64-bit), any lock-prefixed instruction provides sequential
consistency guarantees for atomic operations and is more efficient than
mfence.

We are choosing a "lock not" on a dummy byte on the stack for the following
reasons:

 - The "not" instruction does not affect flags or clobber any registers.
   The memory operand is presumably accessible through esp/rsp.
 - The dummy byte variable is at the top of the stack, which is likely
   hot in cache.
 - The dummy variable does not alias any other data on the stack, which
   means the "lock not" instruction won't introduce any false data
   dependencies with prior or following instructions.

To avoid complaints from various sanitizers and valgrind, we have to
initialize the dummy variable to zero prior to the operation.
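
As an illustration of the above, here is the fence as a minimal standalone
sketch (the function name is ours; the same pattern appears in the
_machine.h hunk below). It assumes an x86/x86-64 target and a gcc-compatible
compiler:

    // Sketch: seq_cst fence via a lock-prefixed RMW on a stack byte.
    static inline void seq_cst_fence_via_lock_not() {
        // Zero-initialize so sanitizers/valgrind do not flag the read of an
        // uninitialized byte performed by the read-modify-write below.
        unsigned char dummy = 0u;
        // Any lock-prefixed instruction is a full barrier on x86; "not" is
        // used because it neither writes flags nor clobbers registers, and
        // the dummy byte at the top of the stack aliases no other data.
        __asm__ __volatile__ ("lock; notb %0" : "+m" (dummy) : : "memory");
    }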

Additionally, memory orders weaker than seq_cst need no special instructions,
only a compiler fence. The relaxed memory order does not even need that.
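
As a sketch of that claim (illustrative only; the code actually added by this
commit drops the order argument altogether, see below):

    #include <atomic>

    // Assumes x86 and a gcc-compatible compiler; the function name is ours.
    static inline void fence_sketch(std::memory_order order) {
        if (order == std::memory_order_relaxed)
            return;                                  // relaxed: nothing at all
        if (order != std::memory_order_seq_cst) {
            __asm__ __volatile__ ("" ::: "memory");  // weaker orders: compiler fence only
            return;
        }
        std::atomic_thread_fence(std::memory_order_seq_cst); // seq_cst: full fence
    }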

This optimization is only enabled for gcc versions before 11, since gcc 11
implements a similar optimization for std::atomic_thread_fence itself.
Compilers compatible with gcc (namely, clang up to 13 and icc up to 2021.3.0,
inclusive) identify themselves as gcc < 11 and also benefit from this
optimization, as they would otherwise generate mfence for
std::atomic_thread_fence(std::memory_order_seq_cst).
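
For comparison, a sketch of the call being worked around (the described code
generation is typical, not guaranteed, and depends on compiler version and
flags):

    #include <atomic>

    void seq_cst_fence_via_std() {
        // gcc before 11, and clang/icc versions reporting __GNUC__ < 11,
        // typically compile this to mfence on x86; gcc 11+ instead emits a
        // lock-prefixed RMW on the stack (e.g. "lock or" on the stack top),
        // which is the cheaper form this patch reproduces for older compilers.
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }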

Signed-off-by: Andrey Semashev <andrey.semashev@gmail.com>

* Removed explicit mfence in atomic_fence on Windows.

std::atomic_thread_fence should already generate the necessary instructions
for the given memory order argument.
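
A sketch of what remains on Windows after the removal (the function name is
ours; not code from the patch):

    #include <atomic>

    void fence_on_windows() {
        // Per the reasoning above, MSVC's std::atomic_thread_fence should
        // already emit the full barrier required for seq_cst, making the
        // removed explicit _mm_mfence() call redundant.
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }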

Signed-off-by: Andrey Semashev <andrey.semashev@gmail.com>

* Removed memory order argument from atomic_fence.

The code uses memory_order_seq_cst at all call sites of atomic_fence, so
remove the argument and simplify the implementation a bit. Also, rename
the function to make the memory order it implements apparent.

Signed-off-by: Andrey Semashev <andrey.semashev@gmail.com>
Lastique authored Dec 22, 2021
1 parent 74b7fc7 commit 8a87469
Showing 3 changed files with 14 additions and 23 deletions.
23 changes: 7 additions & 16 deletions include/oneapi/tbb/detail/_machine.h
@@ -76,25 +76,16 @@ using std::this_thread::yield;
 #endif
 
 //--------------------------------------------------------------------------------------------------
-// atomic_fence implementation
+// atomic_fence_seq_cst implementation
 //--------------------------------------------------------------------------------------------------
 
-#if _MSC_VER && (__TBB_x86_64 || __TBB_x86_32)
-#pragma intrinsic(_mm_mfence)
+static inline void atomic_fence_seq_cst() {
+#if (__TBB_x86_64 || __TBB_x86_32) && defined(__GNUC__) && __GNUC__ < 11
+    unsigned char dummy = 0u;
+    __asm__ __volatile__ ("lock; notb %0" : "+m" (dummy) :: "memory");
+#else
+    std::atomic_thread_fence(std::memory_order_seq_cst);
 #endif
 
-static inline void atomic_fence(std::memory_order order) {
-#if _MSC_VER && (__TBB_x86_64 || __TBB_x86_32)
-    if (order == std::memory_order_seq_cst ||
-        order == std::memory_order_acq_rel ||
-        order == std::memory_order_acquire ||
-        order == std::memory_order_release )
-    {
-        _mm_mfence();
-        return;
-    }
-#endif /*_MSC_VER && (__TBB_x86_64 || __TBB_x86_32)*/
-    std::atomic_thread_fence(order);
 }
 
 //--------------------------------------------------------------------------------------------------
4 changes: 2 additions & 2 deletions src/tbb/arena.h
@@ -494,7 +494,7 @@ void arena::advertise_new_work() {
     };
 
     if( work_type == work_enqueued ) {
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
         if ( my_market->my_num_workers_soft_limit.load(std::memory_order_acquire) == 0 &&
             my_global_concurrency_mode.load(std::memory_order_acquire) == false )
@@ -508,7 +508,7 @@ void arena::advertise_new_work() {
         // Starvation resistant tasks require concurrency, so missed wakeups are unacceptable.
     }
     else if( work_type == wakeup ) {
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
     }
 
     // Double-check idiom that, in case of spawning, is deliberately sloppy about memory fences.
10 changes: 5 additions & 5 deletions src/tbb/concurrent_monitor.h
@@ -220,7 +220,7 @@ class concurrent_monitor_base {
 
         // Prepare wait guarantees Write Read memory barrier.
        // In C++ only full fence covers this type of barrier.
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
     }
 
     //! Commit wait if event count has not changed; otherwise, cancel wait.
@@ -272,7 +272,7 @@ class concurrent_monitor_base {
 
     //! Notify one thread about the event
     void notify_one() {
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
         notify_one_relaxed();
     }
 
@@ -301,7 +301,7 @@ class concurrent_monitor_base {
 
     //! Notify all waiting threads of the event
     void notify_all() {
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
         notify_all_relaxed();
     }
 
@@ -337,7 +337,7 @@ class concurrent_monitor_base {
     //! Notify waiting threads of the event that satisfies the given predicate
     template <typename P>
     void notify( const P& predicate ) {
-        atomic_fence(std::memory_order_seq_cst);
+        atomic_fence_seq_cst();
         notify_relaxed( predicate );
     }
 
@@ -409,7 +409,7 @@ class concurrent_monitor_base {
 
     //! Abort any sleeping threads at the time of the call
     void abort_all() {
-        atomic_fence( std::memory_order_seq_cst );
+        atomic_fence_seq_cst();
         abort_all_relaxed();
     }
 
