
Some questions about RCF training #50

Open
piaobuliao opened this issue Aug 6, 2018 · 20 comments

@piaobuliao
@yun-liu Hi, on top of the caffe code from https://github.com/happynear/caffe-windows I added image_labelmap_data_layer and the other custom layers, but not the AutoCrop layer; following #24 I used a Crop layer instead. I successfully built libcaffe.lib and caffe.exe (without the caffe Python bindings), and I am currently training on the data from http://mftp.mmcheng.net/liuyun/rcf/data/HED-BSDS.tar.gz.
Since I did not build the Python bindings, I am not using solve.py; instead I run directly:
"caffe.exe" train --solver=/solver.prototxt -gpu=0

Some logs follow; fuse_loss fluctuates between a few thousand and tens of thousands:
I0805 23:26:13.153472 20696 solver.cpp:336] Iteration 0, Testing net (#0)
I0805 23:26:16.867991 20696 solver.cpp:224] Iteration 0 (-2.73941e-35 iter/s, 3.73482s/50 iters), loss = 542030
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #0: dsn1_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #1: dsn2_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #2: dsn3_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #3: dsn4_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #4: dsn5_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 solver.cpp:243] Train net output #5: fuse_loss = 125365 (* 1 = 125365 loss)
I0805 23:26:16.867991 20696 sgd_solver.cpp:137] Iteration 0, lr = 0.0001
I0805 23:26:17.039825 20696 sgd_solver.cpp:200] weight diff/data:nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan 0.000000 nan nan 0.000000 nan nan nan 0.000000 nan nan nan 0.000000 nan nan nan 0.000000 0.000000
I0805 23:28:52.964601 20696 solver.cpp:224] Iteration 50 (0.320336 iter/s, 156.086s/50 iters), loss = 394900
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #0: dsn1_loss = 4003.08 (* 1 = 4003.08 loss)
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #1: dsn2_loss = 18340.7 (* 1 = 18340.7 loss)
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #2: dsn3_loss = 18340.7 (* 1 = 18340.7 loss)
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #3: dsn4_loss = 18340.7 (* 1 = 18340.7 loss)
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #4: dsn5_loss = 18340.7 (* 1 = 18340.7 loss)
I0805 23:28:52.964601 20696 solver.cpp:243] Train net output #5: fuse_loss = 3454.18 (* 1 = 3454.18 loss)
......
......
......
I0806 08:36:02.148245 20696 solver.cpp:224] Iteration 10900 (0.338876 iter/s, 147.546s/50 iters), loss = 372257
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #0: dsn1_loss = 2914.58 (* 1 = 2914.58 loss)
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #1: dsn2_loss = 18340.7 (* 1 = 18340.7 loss)
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #2: dsn3_loss = 18340.7 (* 1 = 18340.7 loss)
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #3: dsn4_loss = 18340.7 (* 1 = 18340.7 loss)
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #4: dsn5_loss = 18340.7 (* 1 = 18340.7 loss)
I0806 08:36:02.148245 20696 solver.cpp:243] Train net output #5: fuse_loss = 2914.54 (* 1 = 2914.54 loss)
I0806 08:36:02.148245 20696 sgd_solver.cpp:137] Iteration 10900, lr = 1e-05
I0806 08:36:02.288869 20696 sgd_solver.cpp:200] weight diff/data:nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan 0.001091 nan nan 0.000000 nan nan nan 0.000000 nan nan nan 0.000000 nan nan nan 0.000000 0.000010
I0806 08:38:34.147261 20696 solver.cpp:224] Iteration 10950 (0.328973 iter/s, 151.988s/50 iters), loss = 385102
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #0: dsn1_loss = 12236.3 (* 1 = 12236.3 loss)
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #1: dsn2_loss = 51659.6 (* 1 = 51659.6 loss)
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #2: dsn3_loss = 51659.6 (* 1 = 51659.6 loss)
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #3: dsn4_loss = 51659.6 (* 1 = 51659.6 loss)
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #4: dsn5_loss = 51659.6 (* 1 = 51659.6 loss)
I0806 08:38:34.147261 20696 solver.cpp:243] Train net output #5: fuse_loss = 12235.3 (* 1 = 12235.3 loss)
I0806 08:38:34.147261 20696 sgd_solver.cpp:137] Iteration 10950, lr = 1e-05

Then I used the model trained for 10,000 iterations to extract edges in OpenCV, and the result was an all-black image. Does this mean I haven't trained for enough iterations, or is something else wrong? Extracting edges with your rcf_pretrained_bsds.caffemodel works very well.
So I would like to ask you a few questions:

  1. Is training via solve.py mandatory? As far as I can tell, solve.py mainly does two things.
    One is initializing the deconvolution layers with the interp_surgery function (a sketch follows this list). Is that required? Can a deconvolution layer learn its weights through training like an ordinary convolution? According to https://www.zhihu.com/question/63890195/answer/214223863, a deconvolution layer can also use weight_filler: { type: "bilinear" }; does that serve the same purpose as interp_surgery?
    The other is loading the weights from '5stage-vgg.caffemodel' for fine-tuning. Is it feasible to skip fine-tuning and train directly on the HED-BSDS data? And if I modify the model structure so that '5stage-vgg.caffemodel' cannot be used for fine-tuning at all, what then?

  2. I also noticed that caffe-windows's sigmoid_cross_entropy_loss_layer_caffe.cpp is identical to official caffe's, but different from the sigmoid_cross_entropy_loss_layer_caffe.cpp in your project. I missed this difference at first. Is your version required for training the RCF model?

  3. Could it simply be that I haven't trained for enough iterations?

  4. Since I am not fine-tuning, I increased base_lr from 1e-6 to 1e-4. Does that matter much?
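
For context, here is a minimal sketch of the bilinear initialization that interp_surgery performs in HED/RCF's solve.py (assuming pycaffe is available; the layer selection and error handling are simplified):

import numpy as np

def upsample_filt(size):
    # 2D bilinear interpolation kernel of the given spatial size
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)

def interp_surgery(net, layers):
    # overwrite each deconvolution kernel with a fixed bilinear filter
    for l in layers:
        m, k, h, w = net.params[l][0].data.shape
        if m != k and k != 1:
            raise ValueError('input and output channels must match, or output must be 1')
        if h != w:
            raise ValueError('filters must be square')
        net.params[l][0].data[range(m), range(k), :, :] = upsample_filt(h)

# solve.py applies this once before training, roughly:
# interp_surgery(solver.net, [l for l in solver.net.params if 'up' in l])

Since weight_filler: { type: "bilinear" } fills the blob with the same kernel at initialization, it should serve the same purpose; in both cases it is lr_mult: 0 on the deconvolution layer that keeps the kernel fixed during training.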

@yun-liu
Owner

yun-liu commented Aug 7, 2018

I haven't run experiments quite like this, so I'll just share what I do know.

  1. Initializing the deconvolution layers with weight_filler: { type: "bilinear" } works;
    5stage-vgg.caffemodel is not used to restore weights for fine-tuning but to initialize the network (there is a difference in caffe); you can pass it with -weights, as in the command below;
    without 5stage-vgg.caffemodel initialization, according to the experiments in Table 2 of the paper "Unsupervised Learning of Edges", the results will be somewhat worse, but not that bad;
  2. sigmoid_cross_entropy_loss_layer_caffe.cpp has been modified; you must use the one from RCF, the stock caffe version will not work. It also calls functions in data_transformer.cpp, so the corresponding functions have to be added there as well;
  3. Ten thousand iterations should not look this bad;
  4. I'm not sure about this one.
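
For example (paths are placeholders), initializing with -weights just extends the command you used earlier:

"caffe.exe" train --solver=/solver.prototxt --weights=5stage-vgg.caffemodel -gpu=0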

@piaobuliao
Author

piaobuliao commented Aug 7, 2018

@yun-liu Thanks a lot!

  1. After initializing all the Deconvolution layers with bilinear, I can finally get some edges out, although the results are still no match for your rcf_pretrained_bsds. I will try initializing from 5stage-vgg.caffemodel to see whether it helps. Also, what is the reasoning behind the lr_mult and decay_mult settings? The earlier layers are mostly 1 and 0, while later ones are sometimes 100 and sometimes 0.001.

layer { name: "upsample_8" type: "Deconvolution" bottom: "score-dsn4" top: "score-dsn4-up"
  param { lr_mult: 0 decay_mult: 0 }
  convolution_param { kernel_size: 16 stride: 8 num_output: 1 weight_filler: { type: "bilinear" } bias_term: false } }
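
(As an aside, kernel_size: 16 with stride: 8 follows the usual convention for bilinear upsampling deconvolutions: for an upsampling factor f, use stride f and kernel size 2f.)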

Results with rcf_pretrained_bsds:
[image: rcfmat_trainbymyself rcf_pretrained_bsds]

My current results:
[image: point2 rcf_deepedgedetection_train_cudnn_initv1_iter_4000]

  2. I will try swapping in your sigmoid_cross_entropy_loss_layer_caffe.cpp; I suspect it is the key reason my extracted edges are poor. One caffe question, though: if I train in GPU mode but there is no .cu implementation (only the .cpp), does caffe automatically fall back to the CPU implementation?
    Current caffe has both a .cpp and a .cu version of this layer, while yours only has the .cpp, so I should delete the .cu and use your .cpp, right?

  3. I tried initializing all the convolution layers with xavier; then base_lr has to be as small as 1e-8 to converge, otherwise loss = nan:
    I0806 15:20:08.245276 4492 solver.cpp:214] Iteration 18, loss = nan, ignore and continue....
    Have you ever run into this?
    layer { name: "conv1_1" type: "Convolution" bottom: "data" top: "conv1_1"
    convolution_param { engine: CUDNN num_output: 64 pad: 1 kernel_size: 3 weight_filler: { type: "xavier" } bias_filler { type: "constant" } } }

  4. How should I read this loss? In my previous classification work the loss would gradually shrink to nearly 0; does this one really just fluctuate between a few thousand and tens of thousands? Could you send me one of your training logs to look at? Thanks.

@yun-liu
Owner

yun-liu commented Aug 7, 2018

  1. For lr_mult and decay_mult: they are larger on conv5 because conv5 is high-level, and edges need high-level information; the layers with smaller learning rates have relatively few parameters, which makes the detection results prone to oscillation, so their learning rates are set smaller. (Honestly this is all alchemy, with no theoretical basis; p.s., I copied these settings from HED (ICCV'15).)
  2. In GPU mode, if there is no .cu file, caffe automatically falls back to the .cpp implementation;
  3. Without a proper initialization, the learning rate simply has to be smaller; nan usually means the learning rate is too large;
  4. The loss is indeed large; training only on BSDS it ends up at a bit over one thousand, and with BSDS + PASCAL it ends up around ten thousand.

@piaobuliao
Author

@yun-liu Haha, with RCF's sigmoid_cross_entropy_loss_layer_caffe.cpp the extracted edges are indeed much sharper. I then reduced the channel counts of the later convolution layers (conv1 and conv2 keep the original counts, conv3: 144, conv4: 160, conv5: 160), which shrinks the final model to only 8.39 MB, but it also seems to pick up more noise edges. More training may help; the 20,000-iteration model looks a bit better than the 6,000-iteration one. Most likely the reduced channel counts in the high-level convolutions hurt global edge extraction. I will also try lowering the weights of all losses except fuse_loss and dsn5_loss to see whether that helps.

Results after 6,000 iterations:
[image: rcfmat_trainbymyself rcf_deepedgedetection_train_cudnn_simplemodelv2_iter_6000]

Results after 20,000 iterations:
[image: rcfmat_trainbymyself rcf_deepedgedetection_train_cudnn_simplemodelv2_iter_20000]

@yun-liu
Owner

yun-liu commented Aug 10, 2018

Congratulations!

@piaobuliao
Author

@yun-liu The results keep improving, but I still have a few questions.

  1. Same structure as above: conv1 and conv2 channels unchanged (initialized from 5stage-vgg.caffemodel), conv3: 144, conv4: 160, conv5: 160, final model 8.39 MB. I then changed the loss weights:
    dsn1_loss 0.1, dsn2_loss 0.2, dsn3_loss 0.5, dsn4_loss 1.0, dsn5_loss 2.0, fuse_loss 1.0.
    Judging from the result images, noise edges are suppressed better and better, but the edges still don't have the quality of your pretrained rcf_pretrained_bsds. Is increasing the channel counts of the high-level convolutions the only option?

After 38,000 iterations:
[image: rcfmat_trainbymyself rcf_deepedgedetection_train_cudnn_simplemodelv2_lossweightchange_iter_38000 2]

After 64,000 iterations:
[image: rcfmat_trainbymyself rcf_deepedgedetection_train_cudnn_simplemodelv2_lossweightchange_iter_64000]

After 322,000 iterations:
[image: rcfmat_trainbymyself simplemodelv2_lossweightchange_iter_322000]

  2. I also tried a new structure where conv1 and conv5 are the same as yours and the other layers have reduced channel counts, i.e. the conv1 and conv5 weights are initialized from 5stage-vgg.caffemodel, but then I get the error below. Have you seen this before? Strangely, if I do not initialize from 5stage-vgg.caffemodel and instead train entirely from scratch, the error does not occur, so it seems to be caused by the weight initialization. Searching online did not turn up a fix:
    https://www.cnblogs.com/superxiaoying/p/9001714.html
    https://www.zhihu.com/question/54396262

F0810 15:14:40.643846 12080 math_functions.cu:79] Check failed: error == cudaSuccess (77 vs. 0) an illegal memory access was encountered

I0810 15:14:40.632021 12080 solver.cpp:224] Iteration 0 (0 iter/s, 0.790295s/50 iters), loss = 2907.62
I0810 15:14:40.632021 12080 solver.cpp:243] Train net output #0: dsn1_loss = 773.976 (* 0.1 = 77.3976 loss)
I0810 15:14:40.633007 12080 solver.cpp:243] Train net output #1: dsn2_loss = 773.976 (* 0.2 = 154.795 loss)
I0810 15:14:40.633007 12080 solver.cpp:243] Train net output #2: dsn3_loss = 773.976 (* 0.5 = 386.988 loss)
I0810 15:14:40.633199 12080 solver.cpp:243] Train net output #3: dsn4_loss = 773.976 (* 1 = 773.976 loss)
I0810 15:14:40.633297 12080 solver.cpp:243] Train net output #4: dsn5_loss = 773.976 (* 2 = 1547.95 loss)
I0810 15:14:40.633399 12080 solver.cpp:243] Train net output #5: fuse_loss = 773.976 (* 1 = 773.976 loss)
I0810 15:14:40.633493 12080 sgd_solver.cpp:137] Iteration 0, lr = 1e-06
F0810 15:14:40.643846 12080 math_functions.cu:79] Check failed: error == cudaSuccess (77 vs. 0) an illegal memory access was encountered
*** Check failure stack trace: ***

@yun-liu
Owner

yun-liu commented Aug 11, 2018

  1. With fewer channels the network is smaller, so the results get worse;
  2. The number of weights in a convolution layer = output channels * input channels * kernel height * kernel width. Your conv5 channel count did not change, but conv4's did, which means conv5's input channel count changed, and with it conv5's number of weights; see the sketch below.
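
To make this concrete, a hypothetical illustration with VGG16-style shapes (the exact channel counts in 5stage-vgg.caffemodel are assumed here):

# Conv weight blobs have shape (out_channels, in_channels, kH, kW).
# VGG16's conv5_1 consumes the 512-channel output of the conv4 stage:
pretrained_conv5_1 = (512, 512, 3, 3)  # shape stored in the pretrained model
# Shrinking conv4 to 160 channels while keeping conv5 at 512 outputs
# changes the shape the modified net expects for conv5_1:
modified_conv5_1 = (512, 160, 3, 3)
# caffe copies pretrained blobs by layer name; a shape mismatch like this
# cannot be copied cleanly, which fits the crash shown above.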

@piaobuliao
Author

@yun-liu

  1. I think it is more than just the channel count. Today I dumped the intermediate outputs (sigmoid-dsn1 through sigmoid-dsn5) of the model trained for 300,000 iterations and found that sigmoid-dsn1 and sigmoid-dsn2 produce edge maps, while the higher-level ones are completely black. Since conv1 and conv2 are initialized from 5stage-vgg.caffemodel and everything else is trained from scratch, does that mean the high layers learned nothing meaningful even after 300,000 iterations? It feels very strange.

I uploaded my trained model and solver.prototxt to Baidu Cloud; could you take a look when you have time?
https://pan.baidu.com/s/1cLgobYoIZ_2qD0RMrSHz8Q

  2. You are right. At first I assumed that as long as a layer's structure matched the original, it could be initialized from the original model.

@yun-liu
Owner

yun-liu commented Aug 13, 2018

Since your high layers cannot be initialized from 5stage-vgg.caffemodel, did you use random initialization for them?

@piaobuliao
Author

@yun-liu

I just left it unspecified; doesn't that default to all zeros? In principle, with enough training it should still learn something even if initialized to zero:
layer { name: "conv5_1_new" type: "Convolution" bottom: "pool4" top: "conv5_1_new"
param { lr_mult: 100 decay_mult: 1 } param { lr_mult: 200 decay_mult: 0 }
convolution_param { num_output: 256 pad: 2 kernel_size: 3 dilation: 2 } }

@yun-liu
Owner

yun-liu commented Aug 16, 2018

@piaobuliao
In caffe you cannot zero-initialize two consecutive convolution layers: if both layers are zero, the gradients after differentiation are zero too, and nothing can be learned. The sketch below shows why.
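
A minimal numpy sketch of the effect, with plain linear layers standing in for the two convolutions:

import numpy as np

x = np.random.randn(8)                        # input activations
W1 = np.zeros((8, 8)); W2 = np.zeros((8, 8))  # two consecutive zero-initialized layers
h = W1 @ x                                    # all zeros, since W1 == 0
y = W2 @ h
g = np.ones_like(y)                           # pretend upstream gradient from the loss

dW2 = np.outer(g, h)                          # zero, because h == 0
dh = W2.T @ g                                 # zero, because W2 == 0
dW1 = np.outer(dh, x)                         # zero, because dh == 0
print(dW1.any(), dW2.any())                   # False False: neither layer can ever move

A single zero layer escapes this trap because its input activations are nonzero, so its own weight gradient is nonzero and training can pull it away from zero.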

@piaobuliao
Author

piaobuliao commented Aug 17, 2018

@yun-liu

  1. Oh, so a single zero-initialized convolution layer is fine? I see that layers like conv1_1_down and conv2_1_down in your train_val.prototxt also do not specify an initialization method, i.e. they are zero-initialized.
  2. A few days ago I tried a model I call RCF_DeepEdgeDetection_SimpleV5_1, which keeps one convolution layer per stage with the same channel counts and other settings as the original, and initialized its weights from your pretrained rcf_pretrained_bsds.caffemodel. My model's sigmoid-dsn1 through sigmoid-dsn3 are fairly close to yours, but sigmoid-dsn4 onwards look quite bad; is cascading multiple convolutions per stage really that important? Also, when I initialize this RCF_DeepEdgeDetection_SimpleV5_1 model from 5stage-vgg.caffemodel instead, only sigmoid-dsn1 outputs edges and the rest are pitch black. It feels very strange.

This is my model structure:
[image: rcf_deepedgedetection_simplev5_1]

Results after 345,000 iterations, with every sigmoid-dsn output dumped:
[image: rcf_deepedgedetection_simplev5_1_resumefromrcfpretrain_iter_345000]

Results from your rcf_pretrained_bsds.caffemodel:
[image: rcf_pretrained_bsds]

@piaobuliao
Author

piaobuliao commented Aug 17, 2018

@yun-liu

I'd like to ask: in "(1.90468 iter/s, 26.2511s/50 iters), loss = 15294.9", is this the loss averaged over every 50 iterations? The curve I plotted is shown below; the loss gradually decreases, so it should be heading toward convergence, right?
And are the subsequent dsn1_loss values etc. just single-iteration losses? They seem to fluctuate constantly.

[plot: avg_loss]

[plot: dsn1_loss]
I0815 07:27:52.943397 16228 solver.cpp:224] Iteration 93600 (1.90468 iter/s, 26.2511s/50 iters), loss = 15294.9
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #0: dsn1_loss = 1417.2 (* 0.1 = 141.72 loss)
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #1: dsn2_loss = 1077.96 (* 0.2 = 215.592 loss)
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #2: dsn3_loss = 1273.73 (* 0.5 = 636.863 loss)
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #3: dsn4_loss = 1805.61 (* 1 = 1805.61 loss)
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #4: dsn5_loss = 1806.44 (* 2 = 3612.88 loss)
I0815 07:27:52.943397 16228 solver.cpp:243] Train net output #5: fuse_loss = 1064.82 (* 1 = 1064.82 loss)
I0815 07:27:52.943397 16228 sgd_solver.cpp:137] Iteration 93600, lr = 6.256e-07
I0815 07:27:52.958998 16228 sgd_solver.cpp:200] weight diff/data:0.001846 0.001935 0.001858 nan nan 0.000817 0.000030 0.001402 0.000064 0.000620 0.000036 nan 0.000000 nan 0.000000 0.000001
I0815 07:28:19.366746 16228 solver.cpp:224] Iteration 93650 (1.89269 iter/s, 26.4174s/50 iters), loss = 13938.6
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #0: dsn1_loss = 574.267 (* 0.1 = 57.4267 loss)
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #1: dsn2_loss = 475.058 (* 0.2 = 95.0117 loss)
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #2: dsn3_loss = 493.207 (* 0.5 = 246.604 loss)
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #3: dsn4_loss = 656.917 (* 1 = 656.917 loss)
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #4: dsn5_loss = 656.883 (* 2 = 1313.77 loss)
I0815 07:28:19.366746 16228 solver.cpp:243] Train net output #5: fuse_loss = 451.792 (* 1 = 451.792 loss)

@yun-liu
Owner

yun-liu commented Aug 22, 2018

  1. Leaving one conv layer uninitialized is fine; you just cannot leave two consecutive layers uninitialized;
  2. I'm not sure about that situation.

@yun-liu
Owner

yun-liu commented Aug 22, 2018

The total loss is the averaged loss; dsn1_loss is the loss of each individual iteration, without averaging.

@zzzzzz0407

Hi, I'd like to know what the input size for training is. The data augmentation generates images of many different sizes, and directly shuffling and batching them gives mismatched sizes, which does not seem trainable.

@yun-liu
Owner

yun-liu commented Sep 21, 2018

@zHanami Hi, sorry, I don't quite understand. Why would the sizes need to be the same? And what do you mean by shuffling and batching?

@zzzzzz0407

@yun-liu Doesn't the data use [0.5, 1, 1.5] scale augmentation? My understanding is that each batch is an NCHW tensor; how can training work when the sizes differ?

@yun-liu
Owner

yun-liu commented Sep 22, 2018

@zHanami Besides batch_size, caffe has another parameter called iter_size. Here batch_size is set to 1 and iter_size to 10, which has the same effect as batch_size = 10. That way the problem never arises; see the sketch below.
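
A runnable toy illustration of what iter_size does (the names and the loss are made up for the demo; this is not caffe's actual code): gradients from iter_size single-sample passes are accumulated, then one update is applied, so the samples never have to share a spatial size.

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4)               # stand-in for the network parameters
lr, iter_size = 0.1, 10

for step in range(3):
    grad = np.zeros_like(w)
    for _ in range(iter_size):           # batch_size = 1 per forward/backward pass
        x = rng.standard_normal(4)       # stand-in for one image of any size
        grad += 2.0 * (w @ x - 1.0) * x  # gradient of the toy loss (w.x - 1)^2
    w -= lr * grad / iter_size           # one update, same effect as batch_size = 10
    print(step, w)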

@zzzzzz0407

@yun-liu I see, thank you so much! My mistake; I had simply resized the images and labels to a fixed size, while wondering whether that defeated the purpose of the rescale augmentation... and in the end it did not converge. Many thanks, I'll go try it.
