
fix the jetson allocator strategy, test=develop (#32932)
Shixiaowei02 authored May 19, 2021
1 parent aa4a56f commit 1e1600e
Showing 1 changed file with 9 additions and 7 deletions.
16 changes: 9 additions & 7 deletions paddle/fluid/inference/api/analysis_predictor.cc
@@ -650,20 +650,22 @@ std::unique_ptr<PaddlePredictor> CreatePaddlePredictor<
     gflags.push_back("--cudnn_deterministic=True");
   }
 
-  if (config.thread_local_stream_enabled()) {
-    gflags.push_back("--allocator_strategy=thread_local");
-    process_level_allocator_enabled = false;
-  } else {
-    process_level_allocator_enabled = true;
-  }
-
   // TODO(wilber): jetson tx2 may fail to run the model due to insufficient memory
   // under the native_best_fit strategy. Modify the default allocation strategy to
   // auto_growth. todo, find a more appropriate way to solve the problem.
 #ifdef WITH_NV_JETSON
   gflags.push_back("--allocator_strategy=auto_growth");
 #endif
 
+  // TODO(Shixiaowei02): Add a mandatory scheme to use the thread local
+  // allocator when multi-stream is enabled.
+  if (config.thread_local_stream_enabled()) {
+    gflags.push_back("--allocator_strategy=thread_local");
+    process_level_allocator_enabled = false;
+  } else {
+    process_level_allocator_enabled = true;
+  }
+
   if (framework::InitGflags(gflags)) {
     VLOG(3) << "The following gpu analysis configurations only take effect "
             "for the first predictor: ";
