Error when setting the GPU for inference #3
Comments
Tested on several platforms and the problem occurs on all of them: Windows with MXNet-cu100, and a Jetson TX2 running Ubuntu 18.04 + CUDA 10.0. The error is identical in both cases.
Looks like a bug.
I also got an error on Ubuntu with CUDA 9.0, but I didn't record the error message.
I did some preliminary debugging. I first changed part of the NDArray initialization code so that it initializes according to the context, but the problem remained. It is in feature.py; the exception is thrown in the hybrid_forward function of net.py, and it turns out the error occurs when the input is fed into FPNFeatureExpander, i.e. the self.feature(x) line. I confirmed that the context of the input x is on the GPU, but I couldn't figure out why the Feature was initialized on the CPU, since the parameters were all passed correctly. Later I figured it should be possible to copy it to the GPU, but I'm really not familiar with MXNet. Calling net.collect_params().reset_ctx() directly inside FPNFeatureExpander's init function doesn't work; it tells me the network hasn't been initialized. At that point I gave up QAQ~~
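For anyone hitting the same wall, here is a minimal sketch of the Gluon context handling involved, using a toy HybridSequential network instead of the real FPNFeatureExpander (so this is not the actual fix in the repo, just an illustration of why calling reset_ctx() inside __init__ fails): reset_ctx() can only be called after the parameters have been initialized or loaded.

```python
import mxnet as mx
from mxnet.gluon import nn

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

# A stand-in for the feature extractor; names here are placeholders.
net = nn.HybridSequential()
net.add(nn.Conv2D(16, kernel_size=3), nn.Activation('relu'))

# Either initialize directly on the target context ...
net.initialize(ctx=ctx)

# ... or, if the parameters were created or loaded on the CPU first,
# move them afterwards. This must happen *after* initialization /
# load_parameters(), otherwise Gluon raises "not initialized".
net.collect_params().reset_ctx(ctx)

# Inputs and parameters now live on the same context, so the
# "CachedOp requires all inputs to live on the same context" error goes away.
x = mx.nd.random.uniform(shape=(1, 3, 32, 32), ctx=ctx)
y = net(x)
```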
breezedeus/CnOCR#117 has the fix.
Released v0.1.1, which fixes this issue.
The error message is: CachedOp requires all inputs to live on the same context. But data is on gpu(0) while _mobilenetv30_first-3x3-conv-conv2d_weight is on cpu(0)
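A quick way to see which parameters stayed on the CPU (a diagnostic sketch; `net` and `x` are placeholders for the loaded detection model and the input batch):

```python
# Print the context of every parameter; anything still on cpu(0) here
# (e.g. _mobilenetv30_first-3x3-conv-conv2d_weight) will trigger the
# CachedOp context-mismatch error when the input is on gpu(0).
for name, param in net.collect_params().items():
    print(name, param.list_ctx())
print('input context:', x.context)
```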