Any simple example? #550
Hi rmanor, I'm not very knowledgeable as I just got started using Caffe as well, so folks should feel free to jump in and correct me. The documentation for the general procedure of training with your own data is here: http://caffe.berkeleyvision.org/imagenet_training.html, and you will be able to do all your training by copying and modifying the files from that example. To summarize, the steps I followed to train Caffe were:
where picture-foo belongs to category 0 and picture-foo1 belongs to category 1.
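The listing referred to above is the plain "filename label" text file that Caffe's convert_imageset tool reads. A small Python sketch of generating one, reusing the picture-foo names from the example (the filenames and labels here are illustrative assumptions):

```python
# Build the "<path> <label>" lines that convert_imageset expects:
# one image per line, label as an integer class index.
labeled_images = [
    ("picture-foo.jpg", 0),   # belongs to category 0
    ("picture-foo1.jpg", 1),  # belongs to category 1
]

def make_listing(pairs):
    """Return train.txt-style content: one 'filename label' per line."""
    return "\n".join("%s %d" % (path, label) for path, label in pairs)

listing = make_listing(labeled_images)
```

You would write `listing` out to train.txt (and build a similar val.txt) before running convert_imageset.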
Best of luck!
Thanks, I read the imagenet example, it's a bit clearer now.
Hi rmanor, I would recommend against writing your own code to convert your data into a leveldb.
I need to write my own code because my data isn't images... but if convert_imageset doesn't do any image processing then maybe I can use it anyway.
What kind of data do you have? Please disregard all the instructions above if you aren't using images / this isn't for computer vision purposes. I am curious because I was under the impression that convolutional neural nets were designed to recognize visual patterns. Are you re-purposing caffe for something else?
I think convnets were designed for images, but they have had success in recent years on other kinds of data as well.
You could also use the HDF5 layer to save and read data. Sergio
@sguada I see that I can create HDF5 files from MATLAB, cool!
@sergeyk could you post a simple example using an HDF5 data layer?
Thanks. Specifically, I would like to know how the hdf5 should be built.
Okay, will PR an example to master soon.
Thanks.
Correct, it can be anything.
Thank you, I appreciate the help from all of you.
@rmanor sorry i haven't got to packaging up a notebook example, but please consider https://github.com/BVLC/caffe/blob/master/src/caffe/test/test_data/generate_sample_data.py. Running it creates a simple dataset that is used in https://github.com/BVLC/caffe/blob/master/src/caffe/test/test_hdf5data_layer.cpp. I am not sure how to create this example in Matlab, but it should be equally easy.
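For readers following along: the HDF5 files produced by a script like generate_sample_data.py are consumed by an HDF5 data layer in the net prototxt. A sketch of such a layer in the newer prototxt syntax, where the layer name and source path are assumptions for illustration:

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    # a plain text file listing the path of each .h5 file, one per line
    source: "train_h5_list.txt"
    batch_size: 64
  }
}
```

Note that the `source` is not an .h5 file itself but a text file listing the HDF5 files to read; each .h5 file must contain "data" and "label" datasets.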
@sergeyk Thanks!
Will be resolved by #691
The documentation link in @dennis-chen's first post is broken. I think it should be http://caffe.berkeleyvision.org/gathered/examples/imagenet.html
Thanks a lot @dennis-chen for your post. It was really helpful! Do you have any similar post for testing the data? I want to test an image with the learned model using the python wrapper. I am editing the classifier.py file in CAFFE_ROOT/python to classify the test image, but there are some strange errors. Any help in this regard would be really useful.
@pulkit1991, I'm very glad you found it helpful! Below are instructions I wrote on testing the learned model with the python wrapper when I was documenting this earlier this summer, hope it helps!

How do I use the python wrapper?

Compiling the python wrapper on futuregrid is an uphill battle that you will have to fight alone, brave warrior. I got the wrapper working on my personal machine. That said, if you import numpy and add "CAFFE_HOME_DIR/python" to your system path, you should be able to import and use caffe without a problem in your python programs. Initiating a caffe classifier looks like this:

self.net = caffe.Classifier(DEPLOY_PROTOTXT, TRAINED_NET)

As stated previously, your DEPLOY_PROTOTXT, TRAINED_NET, and IMAGE_MEAN should've been generated by training. Just plug in their file paths and caffe does the rest of the magic. To do classification on a specific image, call:

scores = self.net.predict([caffe.io.load_image(img_path)])

where scores is a vector of length 1000. The index of a score indicates caffe's confidence that the image is of that class. For example, if scores looks like [.999, .1, ...], then caffe has high confidence that the image is of class 0. You defined the classes and labels earlier on in a text file when generating the leveldbs for training.

But I trained on 2 classes, not 1000. What's going on?
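Given a scores vector like the one described above, picking the predicted class is just an argmax. A small sketch, with values invented for illustration:

```python
import numpy as np

# Hypothetical scores vector of the kind net.predict() returns;
# the numbers here are made up, not real model output.
scores = np.array([0.999, 0.001])

predicted_class = int(np.argmax(scores))  # index of the highest score
```

Here predicted_class is 0, matching the [.999, .1, ...] example in the text.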
Hey @dennis-chen! Thanks a lot for this help! I have a good idea now as to how to proceed. There is one small thing I would like to ask. For the MEAN_FILE you need a .npy file, but I don't have one yet for my data. I am using my own data for training and testing. What should I do about the mean file issue?
Just found out! |
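For anyone else stuck on the same point: a mean image can be computed with numpy alone and saved as .npy. A sketch in which random arrays stand in for real training images, and the shapes and path are assumptions:

```python
import os
import tempfile
import numpy as np

# Stack of N training images in N x C x H x W layout (random stand-ins).
images = np.random.rand(10, 3, 256, 256)

mean_image = images.mean(axis=0)  # per-pixel mean, shape C x H x W

# Save it where the python wrapper can load it as the mean file.
mean_path = os.path.join(tempfile.gettempdir(), "image_mean.npy")
np.save(mean_path, mean_image)
```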
Thanks for the detailed tutorial! You're awesome!
btw, can you tell more about how you got your DEPLOY_PROTOTXT? I copied one from caffe/models/bvlc_reference_caffenet/deploy.prototxt and tried to adjust it for use with the python wrapper you mentioned above, but ended up with some strange errors.
To write a deploy.prototxt:
1. Copy the input definition into a new file called deploy.prototxt.
2. Append to that all the layers in train_val.prototxt.
3. Change the value of the "num_output" field in the layer that contains it to match your number of classes.
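The steps above can be sketched as a minimal deploy.prototxt skeleton in the old input-field syntax; the net name and dimensions here are assumptions for illustration:

```
name: "MyNet"
input: "data"
input_dim: 1    # batch size
input_dim: 3    # channels
input_dim: 227  # height
input_dim: 227  # width
# ... followed by the layers copied from train_val.prototxt, with the
# final layer's num_output set to your own number of classes ...
```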
@zacharyhorvitz thank you very much for the heads up. I was looking for the dataset lmdb files and found that when I ran CreateImagenet.sh I had executed it as root, so the lmdb files for the validation images were saved in a totally different location. After copying those lmdb files to the right path, everything worked just fine. Thank you for the help.
Hello, I am new to caffe and my goal is to download a pre-trained network (the MIT Places database trained network). I just want to supply a few test images to this network to see the results before I dive into training it on my own, and other stuff. Is there a document or some source I can look into so that I can do this quickly? I am running Ubuntu.
hi, also the make_imagenet_mean.sh script only generated the data/ilsvrc12/imagenet_mean.binaryproto file for me. Is that all it should generate, or am I missing something? thanks
If I remember correctly, they should be in models/bvlc_reference_net (or something of the like)
thanks, found it.
hi, I would appreciate it if someone could reply to my previous question. What is wrong with what I have done?
i was able to train cifarnet with my own dataset (with the example steps provided by dennis-chen). Now i have a trained network with .caffemodel and .solverstate files. How do i call test_net.bin to find out how well the network is trained? Can anyone tell me what arguments i have to pass to this script? Thanks
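In later Caffe versions, the test_net.bin functionality lives in the unified `caffe` binary. A sketch of the invocation, where the model and weights paths are assumptions for illustration:

```
# Score a trained net over 100 batches of the TEST-phase data layer.
./build/tools/caffe test \
    --model=models/mynet/train_val.prototxt \
    --weights=models/mynet/mynet_iter_10000.caffemodel \
    --iterations=100
```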
dennis chen , |
you need to call the net through the python wrapper to categorize the images; i use the python script below (parts of the original script were lost, so placeholder paths and a standard natural-sort body are filled in):

```python
import os
import re
import sys
import numpy as np
import matplotlib.pyplot as plt

# Make sure that caffe is on the python path:
caffe_root = '/home/sharath/caffe/'  # this file is expected to be in {caffe_root}/examples
sys.path.insert(0, caffe_root + 'python')
import caffe

plt.rcParams['figure.figsize'] = (10, 10)

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'trained.caffemodel', caffe.TEST)  # your own files

# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})

# set net to batch size of 50
net.blobs['data'].reshape(50, 3, 64, 32)

rootdir = '/path/to/test/images'  # directory holding the images to classify
l = os.listdir(rootdir)

def sort_nicely(l):
    """Sort filenames in natural (human) order: img2 before img10."""
    convert = lambda s: int(s) if s.isdigit() else s
    l.sort(key=lambda key: [convert(c) for c in re.split('([0-9]+)', key)])

sort_nicely(l)

text_file = open("predict.txt", "w")
# ... loop over the sorted images, run the net forward, and write each
# predicted label to predict.txt ...
text_file.close()
```
Hey guys, could you please tell me if the way I am generating HDF5 from images is correct or not? I have written the following script based on the demo.m from the hdf5creation directory. What I am not 100% sure about are the lines between asterisks. After reading an image I scale the pixel values down by 255 so that they fall between 0 and 1. Then I change the order of channels from RGB to BGR, and finally switch the order of rows and columns before passing to the "store2hdf5" function. I am not subtracting the mean like we do in the LMDB format because the data has to fall between 0 and 1. Am I correct?

```matlab
%------------------------------------------------------
chunksz = 500;
for batchno = 1:ceil(num_total_samples/chunksz)
    batchdata = zeros(IMAGE_DIM, IMAGE_DIM, 3, Adap_chunksz);
    % ... (read images, scale by 255, RGB -> BGR, permute rows/columns) ...
    % store to hdf5
end
```
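The preprocessing described above (scale by 255, RGB to BGR, reorder axes) can be sketched in Python with numpy; here a random array stands in for a real image, and the target layout is Caffe's usual C x H x W:

```python
import numpy as np

# Random stand-in for an 8-bit RGB image in H x W x C layout.
rgb = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.uint8)

scaled = rgb.astype(np.float32) / 255.0  # pixel values now in [0, 1]
bgr = scaled[:, :, ::-1]                 # swap channel order RGB -> BGR
chw = bgr.transpose(2, 0, 1)             # H x W x C -> C x H x W
```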
You might find this stackoverflow thread relevant.
when i am running caffe on imagenet, the imagenet_mean.binaryproto file is not getting generated; what could be the possible reason behind that?
Hi y'all, I have followed http://caffe.berkeleyvision.org/imagenet_training.html and prepared an image set with 20 images, 18 for training and 2 for validation (2 classes: apples and tomatoes, images converted to 256x256). Now when I start to train, it takes around 2-3 hours just for "iteration 0". What might be my issue?

solver: net: "models/bvlc_reference_caffenet/train_val.prototxt"
train_val: data_param {

Thanks for any ideas in advance!
I don't know how big the image files are, but it seems the problem is the batch size. Carlos
Thank you Sir. Carlos, you solved my issue with batch size=4. But now I have accuracy and training time questions:
machine: ubuntu64, ram=5GB, CPU mode, i7. Thank you in advance
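For anyone else hitting the same slowdown, the fix amounts to lowering batch_size in the data layer of train_val.prototxt; a sketch, where the source path is an assumption:

```
data_param {
  source: "examples/imagenet/ilsvrc12_train_lmdb"
  backend: LMDB
  batch_size: 4  # a tiny dataset on a CPU-only machine needs a small batch
}
```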
FYI http://caffe.berkeleyvision.org/imagenet_training.html referenced above is a broken link. |
How do I convert my own data set into leveldb format and then feed it into the Siamese network in caffe? (If anyone here speaks Chinese, please get in touch; I would be most grateful!)
Guys, i am new to deep learning and the caffe framework. My prof. asked me to do a project: download at least 20 images for each of 10 different cities, train the system, and try to recognize them. i have tried to read the documentation and some tutorials on deep learning but i don't know how to start. i managed to install caffe linked with python on mac, and i already have the images (training and validation). i need someone to guide me through :)
hi guys... i'm very new to deep learning... I have a small subset of a large video dataset with 15 categories and 500 images per category, for performing video event detection... i'm using caffe and the caffe reference model for training... first I extracted 5 key frames from each short video and trained the network with these training images along with their labels. my question is: how do I refine this model structure to fit it to my data? is it enough to set the last fully connected layer to 15 outputs, or must the convolution, pooling and other layers be refined too? for the next step I want to extract the last convolution layer's output for clustering... how do I do this? thanks a lot friends.
I wrote a python script to convert image data into hdf5, below. Caffe accepts my hdf5 dataset, but I always get low accuracy and high loss, so I suspect my data conversion script did not convert properly. Can anyone find problems in my script?
Hi guys. |
To new caffe users with questions:
(from https://raw.githubusercontent.com/BVLC/caffe/master/CONTRIBUTING.md)
Hello Everyone, |
I am working on a data set which has 10 classes, and during training each image has 2 or more classes present in it. Is it possible to use the imagenet example to train for such multi-label, multi-instance classification?
How do I prepare train.txt and val.txt from images? Please explain.
To all users with questions about how to use caffe, please visit the tutorial page or ask questions on the caffe-users mailing list. I am locking this conversation because it is generating noise in the tracker/notifications, and because the users posting here aren't actually being helped. |
Hi,
I started with Caffe and the mnist example ran well.
However, I cannot understand how I am supposed to use this for my own data for a classification task.
What should be the data format? Where should I specify the files?
How do I see the results for a test set?
None of these are mentioned at all in the documentation.
Any pointers will be appreciated, thanks.