Train with other dataset #14

Open
VincentGu11 opened this issue Jan 11, 2018 · 0 comments

@VincentGu11

Hi Tongcheng,

Thank you for your implementation of DenseNet in Caffe; I can already train on the CIFAR-10 dataset. However, when I try to train on my own dataset, it gets stuck for a long time here:

I0111 00:30:41.302551 8065 layer_factory.hpp:77] Creating layer Data1
I0111 00:30:41.337936 8065 db_lmdb.cpp:35] Opened lmdb /home/gaia/Dev/caffe/examples/Apollo/lmdb_test/apollo_general18_train_lmdb
I0111 00:30:41.356104 8065 net.cpp:86] Creating Layer Data1
I0111 00:30:41.356127 8065 net.cpp:382] Data1 -> Data1
I0111 00:30:41.356150 8065 net.cpp:382] Data1 -> Data2
I0111 00:30:41.357967 8065 data_layer.cpp:48] output data size: 1,3,512,512
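
For reference, a Data layer that produces the log above would look roughly like the following sketch. The LMDB path and the Data1/Data2 top names come from the log, and batch_size 1 matches the reported output size 1,3,512,512; the include phase and the rest are assumptions about how the prototxt is set up, not a copy of the actual file:

layer {
  name: "Data1"
  type: "Data"
  top: "Data1"              # image blob (shape 1,3,512,512 per the log)
  top: "Data2"              # label blob
  include {
    phase: TRAIN            # assumed, since this is the *_train_lmdb source
  }
  data_param {
    source: "/home/gaia/Dev/caffe/examples/Apollo/lmdb_test/apollo_general18_train_lmdb"
    batch_size: 1           # matches the leading 1 in "output data size: 1,3,512,512"
    backend: LMDB
  }
}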

After a long time (about 1 hour), the process is killed without any error message.
My input images are 512×512. I use "train_test_BCBN_C10plusNoBias.prototxt" as my network architecture, and I only modified the input and the "num_output" of the InnerProduct1 layer. Do you think the input is too large, or have I made some other mistake? Can you give me some advice? Thanks a lot!
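For concreteness, the num_output change would be something like the sketch below. The bottom blob name and the class count are guesses, not values taken from the actual prototxt; the bottom should stay whatever train_test_BCBN_C10plusNoBias.prototxt already uses, and num_output should equal the number of classes in the new dataset:

layer {
  name: "InnerProduct1"
  type: "InnerProduct"
  bottom: "PoolingGlobal"   # hypothetical bottom name; keep the one from the original prototxt
  top: "InnerProduct1"
  inner_product_param {
    num_output: 18          # number of classes in the custom dataset (18 is only a guess from the LMDB name)
  }
}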
