modify infer gpu memory strategy (#41427)
* modify infer gpu memory strategy
JZZ-NOTE authored Apr 7, 2022
1 parent 53409bc commit 56e72b2
Showing 2 changed files with 0 additions and 11 deletions.
7 changes: 0 additions & 7 deletions paddle/fluid/inference/api/analysis_predictor.cc
@@ -1061,13 +1061,6 @@ std::unique_ptr<PaddlePredictor> CreatePaddlePredictor<
gflags.push_back("--cudnn_deterministic=True");
}

// TODO(wilber): jetson tx2 may fail to run the model due to insufficient memory
// under the native_best_fit strategy. Modify the default allocation strategy to
// auto_growth. todo, find a more appropriate way to solve the problem.
#ifdef WITH_NV_JETSON
gflags.push_back("--allocator_strategy=auto_growth");
#endif

// TODO(Shixiaowei02): Add a mandatory scheme to use the thread local
// allocator when multi-stream is enabled.
if (config.thread_local_stream_enabled()) {
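For readers who relied on the old Jetson behaviour: with the hard-coded override gone, the allocator strategy can still be requested explicitly before the predictor is created. A minimal caller-side sketch follows (not part of this commit; the header path, model paths, and GPU settings are placeholders, and FLAGS_allocator_strategy is the flag defined in paddle/fluid/platform/flags.cc, which Paddle can also pick up from the environment):

// Caller-side sketch, assuming the C++ inference API and env-var flag support.
#include <cstdlib>

#include "paddle_inference_api.h"  // header name may differ per install layout

int main() {
  // Request the auto_growth allocator explicitly; Paddle reads this when
  // its flags are initialized during predictor creation.
  setenv("FLAGS_allocator_strategy", "auto_growth", /*overwrite=*/1);

  paddle_infer::Config config;
  config.SetModel("model.pdmodel", "model.pdiparams");  // placeholder paths
  config.EnableUseGpu(/*memory_pool_init_size_mb=*/100, /*device_id=*/0);

  auto predictor = paddle_infer::CreatePredictor(config);
  return predictor != nullptr ? 0 : 1;
}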
4 changes: 0 additions & 4 deletions paddle/fluid/platform/flags.cc
@@ -364,11 +364,7 @@ PADDLE_DEFINE_EXPORTED_double(
 * Example:
 * Note: For selecting allocator policy of PaddlePaddle.
 */
-#ifdef PADDLE_ON_INFERENCE
-static constexpr char kDefaultAllocatorStrategy[] = "naive_best_fit";
-#else
 static constexpr char kDefaultAllocatorStrategy[] = "auto_growth";
-#endif
 PADDLE_DEFINE_EXPORTED_string(
 allocator_strategy, kDefaultAllocatorStrategy,
 "The allocation strategy, enum in [naive_best_fit, auto_growth]. "
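Net effect of the flags.cc deletion: the PADDLE_ON_INFERENCE special case is gone, so inference builds now share the auto_growth default. Reconstructed from the surviving context lines above (the tail of the help string is cut off in this view), the definition reduces to roughly:

// Post-change shape, reconstructed from the context lines of this diff.
static constexpr char kDefaultAllocatorStrategy[] = "auto_growth";
PADDLE_DEFINE_EXPORTED_string(
    allocator_strategy, kDefaultAllocatorStrategy,
    "The allocation strategy, enum in [naive_best_fit, auto_growth]. "
    /* remainder of the help string, unchanged by this commit */);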
