
Core dump during prediction (预测出core) #22948

Closed

Angus07 opened this issue Mar 11, 2020 · 5 comments

Comments

@Angus07 commented Mar 11, 2020

The error message is as follows:

terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
what():


C++ Call Stacks (More useful to developers):

0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::memory::detail::MetadataCache::load(paddle::memory::detail::MemoryBlock const*) const
2 paddle::memory::detail::MemoryBlock::total_size(paddle::memory::detail::MetadataCache const&) const
3 paddle::memory::detail::MemoryBlock::split(paddle::memory::detail::MetadataCache*, unsigned long)
4 paddle::memory::detail::BuddyAllocator::SplitToAlloc(std::_Rb_tree_const_iterator<std::tuple<unsigned long, unsigned long, void*> >, unsigned long)
5 paddle::memory::detail::BuddyAllocator::Alloc(unsigned long)
6 void* paddle::memory::legacy::Alloc<paddle::platform::CPUPlace>(paddle::platform::CPUPlace const&, unsigned long)
7 paddle::memory::allocation::NaiveBestFitAllocator::AllocateImpl(unsigned long)
8 paddle::memory::allocation::AllocatorFacade::Alloc(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace> const&, unsigned long)
9 paddle::memory::allocation::AllocatorFacade::AllocShared(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace> const&, unsigned long)
10 paddle::memory::AllocShared(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace> const&, unsigned long)
11 paddle::framework::Tensor::mutable_data(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace>, paddle::framework::proto::VarType_Type, unsigned long)
12 paddle::AnalysisPredictor::SetFeed(std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> > const&, paddle::framework::Scope*)
13 paddle::AnalysisPredictor::Run(std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> > const&, std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> >*, int)


Error Message Summary:

PaddleCheckError: Expected desc->check_guards() == true, but received desc->check_guards():0 != true:1.
at [baidu/paddlepaddle/paddle/paddle/fluid/memory/detail/meta_cache.cc:33]

Aborted (core dumped)

The core dump is at:
#18 0x00007f4a98ffb620 in paddle::AnalysisPredictor::Run(std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> > const&, std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> >*, int) () from libernie-inference.so

@Shixiaowei02
Contributor

Hello, this error indicates an illegal pointer access. Please check whether the pointers in your application code are correct.

@LihangLiu

I hit the same problem: the model loads fine, but the crash happens when prediction Run() is called. Specifically, when creating the paddle::PaddleTensor before prediction, tensor.data.Resize(size * sizeof(float)) runs fine with size=500 but crashes with size=10000; all other code and the environment are identical.
Does Paddle inference impose a limit on tensor size?

@Shixiaowei02
Contributor

> I hit the same problem: the model loads fine, but the crash happens when prediction Run() is called. Specifically, when creating the paddle::PaddleTensor before prediction, tensor.data.Resize(size * sizeof(float)) runs fine with size=500 but crashes with size=10000; all other code and the environment are identical.
> Does Paddle inference impose a limit on tensor size?

There is no limit. Whenever the error
PaddleCheckError: Expected desc->check_guards() == true, but received desc->check_guards():0 != true:1. at [baidu/paddlepaddle/paddle/paddle/fluid/memory/detail/meta_cache.cc:33]
appears, you should suspect an out-of-bounds memory access in your own code.

@LihangLiu

OK, I will look into it. Thanks!

@paddle-bot-old

Since you haven't replied for more than a year, we have closed this issue/PR.
If the problem is not solved or there is a follow-up, please reopen it at any time and we will continue to follow up.

3 participants