
Enable special/nonexistent regularisation of the first layer #72

Open
korsbo opened this issue May 18, 2022 · 2 comments

@korsbo (Member) commented May 18, 2022

Sometimes it can be hard to know the scale of the input data, which makes the inputs hard to standardise (as in a UDE, for example). It might then make sense to leave the parameters of the first layer unregularised, or only weakly regularised, so that they can better compensate for differences in scale between the inputs. Something like a FrontMiddleLastPenalty could work, although that name is getting a bit verbose.
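For concreteness, here is a rough sketch of how that might look, building on the existing FrontLastPenalty(chain, frontpen, lastpen) wrapper. FrontMiddleLastPenalty is hypothetical and does not exist in SimpleChains.jl; the rest follows the current API:

```julia
using SimpleChains

# A small network whose inputs may come at very different scales (e.g. a UDE).
model = SimpleChain(
  static(2),
  TurboDense{true}(tanh, 8),
  TurboDense{true}(tanh, 8),
  TurboDense{false}(identity, 2),
)
y = rand(Float32, 2, 100)  # dummy targets, for illustration only

# Hypothetical FrontMiddleLastPenalty: like FrontLastPenalty, but with a
# separate slot for the first layer so it can stay unregularised and
# absorb differences in input scale.
loss = FrontMiddleLastPenalty(
  SimpleChains.add_loss(model, SquaredLoss(y)),
  NoPenalty(),     # first layer: no regularisation
  L2Penalty(0.1),  # middle layers
  L2Penalty(0.1),  # last layer
)
```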

@chriselrod (Contributor)

I think I can add a PerLayer penalty that lets you pass a tuple of penalties, as well as a NonBiasPenalty that doesn't get applied to the bias.
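Sketching the usage (PerLayerPenalty and NonBiasPenalty are the proposed names, not an existing API; the call shapes here are assumptions mirroring FrontLastPenalty):

```julia
using SimpleChains

# Same toy setup as in the sketch above.
model = SimpleChain(static(2), TurboDense{true}(tanh, 8),
                    TurboDense{true}(tanh, 8), TurboDense{false}(identity, 2))
y = rand(Float32, 2, 100)

# Hypothetical PerLayerPenalty: one penalty per layer, passed as a tuple
# whose length matches the number of layers.
loss = PerLayerPenalty(
  SimpleChains.add_loss(model, SquaredLoss(y)),
  (NoPenalty(), L2Penalty(0.1), L2Penalty(0.1)),
)

# Hypothetical NonBiasPenalty: wraps another penalty so that it is only
# applied to weights, never to bias parameters.
weights_only = NonBiasPenalty(L2Penalty(0.1))
```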

@korsbo (Member, Author) commented May 18, 2022

> a NonBiasPenalty that doesn't get applied to the bias

Would it be better to have penalty wrappers like the existing FrontLastPenalty, or would it be more natural to make toggling of the bias regularisation a type parameter of L1Penalty and L2Penalty?
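To make the two options concrete (both are hypothetical sketches, neither exists in SimpleChains.jl):

```julia
using SimpleChains

# Option 1: a wrapper, in the style of FrontLastPenalty (hypothetical).
loss = NonBiasPenalty(L2Penalty(0.1))

# Option 2: a type parameter on the penalty itself (hypothetical),
# e.g. a Bool toggling whether the bias is regularised.
loss = L2Penalty{false}(0.1)
```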
