Deploying cascade_rcnn with Docker on Ubuntu 16.04: inference works on CPU, but errors out on GPU. Steps below.

1. CPU
Server:
python3 -m paddle_serving_server_gpu.serve --model serving_server --port 9292
Output:
W0100 00:00:00.000000 445 fluid_cpu_engine.cpp:53] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidCpuNativeDirWithSigmoidCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_NATIVE_DIR_SIGMOID in macro!
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
--- Running analysis [ir_graph_to_program_pass]
Client: run the following script
from paddle_serving_client import Client
from paddle_serving_app.reader import *
import numpy as np

preprocess = Sequential([
    File2Image(), BGR2RGB(), Div(255.0),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
    Resize(800, 1333), Transpose((2, 0, 1)), PadStride(32)
])
postprocess = RCNNPostprocess("label_list.txt", "output")
client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(['127.0.0.1:9292'])
im = preprocess('000000570688.jpg')
fetch_map = client.predict(
    feed={"image": im,
          "im_info": np.array(list(im.shape[1:]) + [1.0]),
          "im_shape": np.array(list(im.shape[1:]) + [1.0])},
    fetch=["multiclass_nms_0.tmp_0"])
fetch_map["image"] = '000000570688.jpg'
print(fetch_map)
postprocess(fetch_map)
print(fetch_map)
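A note on the feed above: Resize(800, 1333) appears to scale the shorter side to 800 (capped at 1333), and PadStride(32) then pads both spatial dims up to the next multiple of 32, which the FPN backbone requires. A minimal sketch of that padded-shape computation (pure Python; the function name is ours, not part of paddle_serving_app):

```python
import math

def pad_to_stride(height, width, stride=32):
    """Round each spatial dimension up to the next multiple of `stride`."""
    pad = lambda d: int(math.ceil(d / float(stride))) * stride
    return pad(height), pad(width)

# e.g. a resized 800x1066 image is padded to 800x1088 -> (800, 1088)
print(pad_to_stride(800, 1066))
```

This matches the `float_data: 800 float_data: 1088` seen in the request dump from the failing GPU run further down.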
Output:
python3 test_client.py
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1102 17:33:43.807981 16940 naming_service_thread.cpp:209] brpc::policy::ListNamingService("127.0.0.1:9292"): added 1
{'multiclass_nms_0.tmp_0': array([[1.00000000e+00, 3.89992058e-01, 0.00000000e+00, 5.76865784e+02,
1.54522171e+01, 6.53119812e+02],
[1.00000000e+00, 3.77118677e-01, 7.63976440e+02, 5.79579468e+02,
[5.70000000e+01, 1.32285058e-01, 1.91921631e+02, 6.49584534e+02,
2.85223114e+02, 6.92049500e+02]], dtype=float32), 'image': '000000570688.jpg', 'multiclass_nms_0.tmp_0.lod': array([ 0, 100], dtype=int32)}
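Each row of `multiclass_nms_0.tmp_0` is, as far as Paddle's multiclass_nms output goes, `[class_id, score, xmin, ymin, xmax, ymax]`. A small sketch (the helper name and the 0.3 threshold are our choices) that filters the returned rows by confidence:

```python
def filter_detections(rows, score_thresh=0.3):
    """Keep detection rows [class_id, score, xmin, ymin, xmax, ymax]
    whose score is at or above the threshold."""
    return [r for r in rows if r[1] >= score_thresh]

# two rows lifted from the CPU output above
dets = [[1.0, 0.3899, 0.0, 576.87, 15.45, 653.12],
        [57.0, 0.1323, 191.92, 649.58, 285.22, 692.05]]
print(filter_detections(dets))  # only the class-1 row at score 0.39 survives
```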
But on GPU it errors out.

2. GPU
Server: python3 -m paddle_serving_server_gpu.serve --model serving_server --port 9292 --gpu_id 0
Output:
mkdir: cannot create directory ‘workdir_0’: File exists
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
Going to Run Comand
/usr/local/python3.5.1/lib/python3.5/site-packages/paddle_serving_server_gpu/serving-gpu-cuda9-0.3.2/serving -enable_model_toolkit -inferservice_path workdir_0 -inferservice_file infer_service.prototxt -max_concurrency 0 -num_threads 2 -port 9292 -reload_interval_s 10 -resource_path workdir_0 -resource_file resource.prototxt -workflow_path workdir_0 -workflow_file workflow.prototxt -bthread_concurrency 2 -gpuid 0 -max_body_size 536870912
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralDistKVInferOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralTextReaderOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralCopyOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralDistKVQuantInferOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralReaderOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralInferOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralTextResponseOp
I0100 00:00:00.000000 487 op_repository.h:65] RAW: Succ regist op: GeneralResponseOp
I0100 00:00:00.000000 487 service_manager.h:61] RAW: Service[LoadGeneralModelService] insert successfully!
I0100 00:00:00.000000 487 load_general_model_service.pb.h:299] RAW: Success regist service[LoadGeneralModelService][PN5baidu14paddle_serving9predictor26load_general_model_service27LoadGeneralModelServiceImplE]
I0100 00:00:00.000000 487 service_manager.h:61] RAW: Service[GeneralModelService] insert successfully!
I0100 00:00:00.000000 487 general_model_service.pb.h:1473] RAW: Success regist service[GeneralModelService][PN5baidu14paddle_serving9predictor13general_model23GeneralModelServiceImplE]
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_ANALYSIS, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:27] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_ANALYSIS in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_ANALYSIS_DIR, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:33] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidGpuAnalysisDirCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_ANALYSIS_DIR in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_ANALYSIS_DIR_SIGMOID, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:39] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidGpuAnalysisDirWithSigmoidCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_ANALYSIS_DIR_SIGMOID in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_NATIVE, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:44] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_NATIVE in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_NATIVE_DIR, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:49] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_NATIVE_DIR in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_GPU_NATIVE_DIR_SIGMOID, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_gpu_engine.cpp:55] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidGpuNativeDirWithSigmoidCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_GPU_NATIVE_DIR_SIGMOID in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_ANALYSIS, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:25] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_ANALYSIS in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_ANALYSIS_DIR, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:31] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidCpuAnalysisDirCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_ANALYSIS_DIR in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_ANALYSIS_DIR_SIGMOID, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:37] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidCpuAnalysisDirWithSigmoidCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_ANALYSIS_DIR_SIGMOID in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_NATIVE, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:42] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_NATIVE in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_NATIVE_DIR, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:47] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_NATIVE_DIR in macro!
I0100 00:00:00.000000 487 factory.h:121] RAW: Succ insert one factory, tag: FLUID_CPU_NATIVE_DIR_SIGMOID, base type N5baidu14paddle_serving9predictor11InferEngineE
W0100 00:00:00.000000 487 fluid_cpu_engine.cpp:53] RAW: Succ regist factory: ::baidu::paddle_serving::predictor::FluidInferEngine< FluidCpuNativeDirWithSigmoidCore>->::baidu::paddle_serving::predictor::InferEngine, tag: FLUID_CPU_NATIVE_DIR_SIGMOID in macro!
I1102 09:35:04.759968 487 analysis_predictor.cc:138] Profiler is deactivated, and no profiling report will be generated.
I1102 09:35:04.765663 487 analysis_predictor.cc:875] MODEL VERSION: 1.7.2
I1102 09:35:04.765684 487 analysis_predictor.cc:877] PREDICTOR VERSION: 1.8.4
I1102 09:35:04.765931 487 analysis_predictor.cc:474] ir_optim is turned off, no IR pass will be executed
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I1102 09:35:04.870309 487 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I1102 09:35:04.983223 487 memory_optimize_pass.cc:223] Cluster name : rpn_cls_logits_fpn5.tmp_1 size: 12
I1102 09:35:04.983239 487 memory_optimize_pass.cc:223] Cluster name : im_shape size: 12
I1102 09:35:04.983242 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_2.tmp_1 size: 4
I1102 09:35:04.983245 487 memory_optimize_pass.cc:223] Cluster name : image size: 12
I1102 09:35:04.983247 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_0.tmp_1 size: 4
I1102 09:35:04.983249 487 memory_optimize_pass.cc:223] Cluster name : im_info size: 12
I1102 09:35:04.983253 487 memory_optimize_pass.cc:223] Cluster name : rpn_cls_logits_fpn6.tmp_1 size: 12
I1102 09:35:04.983255 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_0.tmp_0 size: 16
I1102 09:35:04.983259 487 memory_optimize_pass.cc:223] Cluster name : roi_align_2.tmp_0 size: 50176
I1102 09:35:04.983263 487 memory_optimize_pass.cc:223] Cluster name : roi_align_11.tmp_0 size: 50176
I1102 09:35:04.983265 487 memory_optimize_pass.cc:223] Cluster name : roi_align_9.tmp_0 size: 50176
I1102 09:35:04.983269 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_3.tmp_1 size: 4
I1102 09:35:04.983271 487 memory_optimize_pass.cc:223] Cluster name : fpn_res2_sum.tmp_1 size: 1024
I1102 09:35:04.983274 487 memory_optimize_pass.cc:223] Cluster name : concat_2.tmp_0 size: 50176
I1102 09:35:04.983276 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_1.tmp_1 size: 4
I1102 09:35:04.983279 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_1.tmp_0 size: 16
I1102 09:35:04.983281 487 memory_optimize_pass.cc:223] Cluster name : generate_proposals_2.tmp_0 size: 16
I1102 09:35:04.983284 487 memory_optimize_pass.cc:223] Cluster name : roi_align_8.tmp_0 size: 50176
I1102 09:35:04.983289 487 memory_optimize_pass.cc:223] Cluster name : res3d.add.output.5.tmp_0 size: 2048
I1102 09:35:04.983291 487 memory_optimize_pass.cc:223] Cluster name : fpn_res3_sum.tmp_1 size: 1024
I1102 09:35:04.983294 487 memory_optimize_pass.cc:223] Cluster name : fpn_res4_sum.tmp_1 size: 1024
--- Running analysis [ir_graph_to_program_pass]
I1102 09:35:05.009222 487 analysis_predictor.cc:496] ======= optimize end =======
W1102 09:35:05.010625 487 infer.h:487] Succ load common model[0x2ef43e70], path[serving_server].
W1102 09:35:05.010643 487 infer.h:185] Succ load model_data_pathserving_server
Client: run the following script (same as the CPU case):
from paddle_serving_client import Client
from paddle_serving_app.reader import *
import numpy as np

preprocess = Sequential([
    File2Image(), BGR2RGB(), Div(255.0),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
    Resize(800, 1333), Transpose((2, 0, 1)), PadStride(32)
])
postprocess = RCNNPostprocess("label_list.txt", "output")
client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(['127.0.0.1:9292'])
im = preprocess('000000570688.jpg')
fetch_map = client.predict(
    feed={"image": im,
          "im_info": np.array(list(im.shape[1:]) + [1.0]),
          "im_shape": np.array(list(im.shape[1:]) + [1.0])},
    fetch=["multiclass_nms_0.tmp_0"])
fetch_map["image"] = '000000570688.jpg'
print(fetch_map)
postprocess(fetch_map)
print(fetch_map)
The error, on the client side:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1102 17:36:08.139057 17125 naming_service_thread.cpp:209] brpc::policy::ListNamingService("127.0.0.1:9292"): added 1
W1102 17:36:09.307957 17201 socket.cpp:1739] Fail to keep-write into fd=7 SocketId=8589934594@127.0.0.1:9292@33894: Connection reset by peer [104]
W1102 17:36:09.308286 17203 socket.cpp:1739] Fail to keep-write into fd=3 SocketId=226@127.0.0.1:9292@33898: Connection reset by peer [104]
W1102 17:36:09.308418 17125 predictor.hpp:129] inference call failed, message: [E1014]1/1 channels failed, fail_limit=1 [C0][E1014]Got EOF of fd=3 SocketId=1@127.0.0.1:9292@33890 [R1][E1014]Got EOF of fd=7 SocketId=8589934594@127.0.0.1:9292@33894 [R2][E1014]Got EOF of fd=3 SocketId=226@127.0.0.1:9292@33898
E1102 17:36:10.739296 17125 general_model.cpp:567] failed call predictor with req: insts { tensor_array { float_data: 800 float_data: 1088 float_data: 1 elem_type: 1 shape: 3 } tensor_array { float_data: 0.77617955 float_data: 0.78302467 float_data: 0.79329634 float_data: 0.79330432 float_data: 0.78989387 float_data: 0.77962214 float_data: 0.76935053 float_data: 0.7590788 float_data: 0.7864207 float_data: 0.79669237 float_data: 0.80696404 float_data: 0.76198548 float_data: 0.738412 float_data: 0.75895524 float_data: 0.77270168 float_data: 0.78297329 float_data: 0.76668471 float_data: 0.7556299 float_data: 0.7556299 float_data: 0.77054787 float_data: 0.79314542 float_data: 0.80350375 float_data: 0.80975974 float_data: 0.80770552 float_data: 0.81106305 float_data: 0.817226 float_data: 0.80709565 float_data: 0.80015421 float_data: 0.80015421 float_data: 0.806903 float_data: 0.81717467 float_data: 0.8233794 float_data: 0.82755381 float_data: 0.82755381 float_data: 0.82755387 float_data: 0.82755381 float_data: 0.81740254 float_data: 0.80581164 float_data: 0.79143137 float_data: 0.79988784 float_data: 0.82043105 float_data: 0.824758 float_data: 0.82886666 float_data: 0.83297533 float_data: 0.83038336 float_data: 0.82422036 float_data: 0.80793822 float_data: 0.79542285 float_data: 0.79131418 float_data: 0.77249944 float_data: 0.7457931 float_data: 0.75141692 float_data: 0.75952989 float_data: 0.77185583 float_data: 0.78951657 float_data: 0.81005991 float_data: 0.8104291 float_data: 0.810429 float_data: 0.8104291 float_data: 0.79313254 float_data: 0.76642632 float_data: 0.75180531 float_data: 0.73999935 float_data: 0.73383635 float_data: 0.73563707 float_data: 0.74180007 float_data: 0.76204014 float_data: 0.78066218 float_data: 0.79504251 float_data: 0.80015421 float_data: 0.80015421 float_data: 0.79413086 float_data: 0.79306519 float_data: 0.8033368 float_data: 0.80039978 float_data: 0.79012817 float_data: 0.6700061 float_data: 0.6700061 float_data: 0.67635369 float_data: 
0.68662524 float_data: 0.68908411 float_data: 0.70046008 float_data: 0.73538339 float_data: 0.74624741 float_data: 0.74213874 float_data: 0.73608005 float_data: 0.72875828 float_data: 0.71848661 float_data: 0.71074116 float_data: 0.70457816 float_data: 0.69646847 float_data: 0.69055581 float_data: 0.69055581 float_data: 0.68299657 float_data: 0.67067051 float_data: 0.67389327 float_data: 0.67742896 float_data: 0.67948329 float_data: 0.67776763 float_data: 0.67365897 float_data: 0.68119258 float_data: 0.69339806 float_data: 0.71599579 float_data: 0.72480536 float_data: 0.72480536 float_data: 0.72674251 float_data: 0.72653055 float_data: 0.72036767 float_data: 0.70670307 float_data: 0.6882143 float_data: 0.681329 float_data: 0.675166 float_data: 0.669003 float_data: 0.66284 float_data: 0.656677 float_data: 0.65244484 float_data: 0.64945638 float_data: 0.64945638 float_data: 0.65318787 float_data: 0.65935087 float_data: 0.65973127 float_data: 0.66697121 float_data: 0.69367754
float_data: 0.69805253 float_data: 0.68778086 float_data: 0.68713081 float_data: 0.68546975 float_data: 0.67930675 float_data: 0.664482 float_data: 0.64393866 float_data: 0.6426065 float_data: 0.644258 float_data: 0.650421 float_data: 0.65288138 float_data: 0.65288138 float_
Traceback (most recent call last):
File "test_client.py", line 17, in <module>
fetch_map["image"] = '000000570688.jpg'
TypeError: 'NoneType' object does not support item assignment
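The TypeError is a symptom, not the cause: client.predict returned None because the server process died mid-request (the "Connection reset by peer" warnings above), so the item assignment on line 17 fails. A defensive wrapper sketch (safe_predict is our name, not a Paddle Serving API) makes the real failure visible instead:

```python
def safe_predict(predict_fn, feed, fetch):
    """Run a predict call and fail loudly if the server dropped the request."""
    fetch_map = predict_fn(feed=feed, fetch=fetch)
    if fetch_map is None:
        raise RuntimeError(
            "predict returned None; the serving process likely crashed -- "
            "check the server-side log for the real error")
    return fetch_map

# usage with the client above:
#   fetch_map = safe_predict(client.predict, feed={...}, fetch=["multiclass_nms_0.tmp_0"])
```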
On the server side:
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
what():
C++ Call Stacks (More useful to developers):
Python Call Stacks (More useful to users):
File "/home/users/wangjiawei04/paddle_release_home/python/lib64/python2.7/site-packages/paddle/fluid/framework.py", line 2525, in append_op
attrs=kwargs.get("attrs", None))
File "/home/users/wangjiawei04/paddle_release_home/python/lib64/python2.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/users/wangjiawei04/paddle_release_home/python/lib64/python2.7/site-packages/paddle/fluid/layers/nn.py", line 1403, in conv2d
"data_format": data_format,
File "/home/users/wangjiawei04/PaddleDetection/ppdet/modeling/backbones/resnet.py", line 181, in _conv_norm
name=_name + '.conv2d.output.1')
File "/home/users/wangjiawei04/PaddleDetection/ppdet/modeling/backbones/resnet.py", line 452, in c1_stage
name=_name)
File "/home/users/wangjiawei04/PaddleDetection/ppdet/modeling/backbones/resnet.py", line 473, in __call__
res = self.c1_stage(res)
File "/home/users/wangjiawei04/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn.py", line 98, in build
body_feats = self.backbone(im)
File "/home/users/wangjiawei04/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn.py", line 335, in test
return self.build(feed_vars, 'test')
File "tools/export_serving_model.py", line 79, in main
test_fetches = model.test(feed_vars)
File "tools/export_serving_model.py", line 98, in <module>
main()
Error Message Summary:
ExternalError: Cudnn error, CUDNN_STATUS_EXECUTION_FAILED at (/paddle/paddle/fluid/operators/conv_cudnn_op.cu:300)
[operator < conv2d > error]
Aborted (core dumped)
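One likely culprit for CUDNN_STATUS_EXECUTION_FAILED at the very first conv2d: the server log above shows MODEL VERSION: 1.7.2 but PREDICTOR VERSION: 1.8.4, and the serving binary is the cuda9 build (serving-gpu-cuda9-0.3.2). Exporting with one Paddle version and serving with another, or running a cuda9 build against a container whose CUDA/cuDNN differs, can fail exactly like this on GPU while CPU still works. A small sanity-check sketch (version strings taken from the log above):

```python
def parse_version(v):
    """Split a dotted version string like '1.7.2' into an integer tuple."""
    return tuple(int(p) for p in v.split("."))

model_v = parse_version("1.7.2")      # MODEL VERSION from the server log
predictor_v = parse_version("1.8.4")  # PREDICTOR VERSION from the server log

if model_v[:2] != predictor_v[:2]:
    print("model exported with Paddle %s but served by %s; "
          "consider re-exporting with the serving-side version" %
          (".".join(map(str, model_v)), ".".join(map(str, predictor_v))))
```

Also worth verifying that the CUDA toolkit inside the container matches the cuda9 serving package, e.g. via nvcc --version and nvidia-smi.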