Fine-tuning is the process of training specific sections of a pre-trained network to improve results. To stop a layer from learning further, you can set its `param` attributes in your prototxt, giving the layer a learning-rate multiplier (`lr_mult`) of zero.
For example:
```
layer {
  name: "example"
  type: "example"
  ...
  param {
    lr_mult: 0    # learning rate of the weights
    decay_mult: 1
  }
  param {
    lr_mult: 0    # learning rate of the bias
    decay_mult: 0
  }
}
```
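To make the pattern concrete, here is a hedged sketch of a frozen convolution layer; the layer name, blob names, and shape parameters are illustrative assumptions, not taken from any particular model:

```
layer {
  name: "conv1"              # hypothetical layer name
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 0               # freeze the weights
    decay_mult: 0
  }
  param {
    lr_mult: 0               # freeze the bias
    decay_mult: 0
  }
  convolution_param {
    num_output: 64           # illustrative values
    kernel_size: 3
  }
}
```

When fine-tuning, the frozen layers are typically initialized from a pre-trained model rather than from scratch, for example with `caffe train --solver=solver.prototxt --weights=pretrained.caffemodel` (the file names here are placeholders). Layers whose `lr_mult` is nonzero will then continue to learn while the frozen layers keep their pre-trained values.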