Hi Tongcheng,
Thank you for your implementation of DenseNet in Caffe; I can already train it on the CIFAR-10 dataset. However, when it comes to my own dataset, training gets stuck for a long time here:
I0111 00:30:41.302551 8065 layer_factory.hpp:77] Creating layer Data1
I0111 00:30:41.337936 8065 db_lmdb.cpp:35] Opened lmdb /home/gaia/Dev/caffe/examples/Apollo/lmdb_test/apollo_general18_train_lmdb
I0111 00:30:41.356104 8065 net.cpp:86] Creating Layer Data1
I0111 00:30:41.356127 8065 net.cpp:382] Data1 -> Data1
I0111 00:30:41.356150 8065 net.cpp:382] Data1 -> Data2
I0111 00:30:41.357967 8065 data_layer.cpp:48] output data size: 1,3,512,512
After a long time (about 1 hour), the process is killed without any error message.
My input images are 512*512. I use "train_test_BCBN_C10plusNoBias.prototxt" for my network architecture, and I only modified the input and the "num_output" of the InnerProduct1 layer. Do you think the input is too large, or have I made some other mistake? Can you give me some advice? Thanks a lot!
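For reference, a minimal sketch of the two modifications described above, written in Caffe prototxt. The layer and blob names Data1, Data2, and InnerProduct1 and the LMDB path come from the log; the batch size, crop_size, bottom blob name, and num_output value are assumptions for illustration only, not the exact contents of the file:

layer {
  name: "Data1"
  type: "Data"
  top: "Data1"                # image blob
  top: "Data2"                # label blob
  include { phase: TRAIN }
  transform_param {
    crop_size: 512            # must not exceed the 512x512 images stored in the LMDB
  }
  data_param {
    source: "/home/gaia/Dev/caffe/examples/Apollo/lmdb_test/apollo_general18_train_lmdb"
    backend: LMDB
    batch_size: 1             # matches the logged "output data size: 1,3,512,512"
  }
}

layer {
  name: "InnerProduct1"
  type: "InnerProduct"
  bottom: "Pooling_last"      # hypothetical bottom blob; use the actual final pooling blob from the prototxt
  top: "InnerProduct1"
  inner_product_param {
    num_output: 18            # assumed: set this to the number of classes in your dataset
  }
}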