I believe the solutions are all shifted by one degree of freedom.
For a), the function should be 0 everywhere; otherwise the regularization term is infinite. For b), the first derivative has to be 0, so the function must be constant, and so on.
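For reference, this is the quantity being minimized in the exercise (assuming the standard formulation referenced as (7.11) in the text, where $g^{(m)}$ denotes the $m$-th derivative and $m$ varies across the sub-parts):

```latex
\hat g = \arg\min_g \left( \sum_{i=1}^n \bigl(y_i - g(x_i)\bigr)^2
       + \lambda \int \bigl[ g^{(m)}(x) \bigr]^2 \, dx \right)
```

As $\lambda \to \infty$, any $g$ with $g^{(m)} \not\equiv 0$ incurs an infinite penalty, so the minimizer must satisfy $g^{(m)} \equiv 0$, i.e. be a polynomial of degree at most $m - 1$ (and, among those, the one closest to the data in squared error).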
As additional evidence for this answer: part c) is answered directly in the text on page 278, "When λ → ∞, g will be perfectly smooth—it will just be a straight line that passes as closely as possible to the training points. In fact, in this case, g will be the linear least squares line, since the loss function in (7.11) amounts to minimizing the residual sum of squares."
Another error: for e), when λ = 0 the penalty term drops out, so g will interpolate the training data exactly, since the minimization is over all (sufficiently smooth) curves. In particular, it won't be the linear least squares line, as the current solution claims.
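The limits discussed above can be illustrated numerically with a discrete finite-difference analogue of the penalized objective (a sketch, not the actual spline solver; `discrete_smoother`, `t`, and `y` are illustrative names I made up). Penalizing the m-th finite difference of the fitted values plays the role of the integrated squared m-th derivative:

```python
import numpy as np

def discrete_smoother(y, m, lam):
    """Minimize ||y - g||^2 + lam * ||D_m g||^2 over fitted values g,
    where D_m is the m-th order finite-difference matrix (a discrete
    stand-in for the integrated squared m-th derivative penalty).
    The closed-form minimizer solves (I + lam * D_m^T D_m) g = y."""
    n = len(y)
    D = np.diff(np.eye(n), m, axis=0)  # m-th difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(y, float))

t = np.arange(8, dtype=float)
y = np.sin(t) + 0.1 * np.cos(3 * t)

# lam = 0: the penalty drops out and the fit interpolates the data (part e).
print(np.allclose(discrete_smoother(y, 3, 0.0), y))

# lam -> infinity, m = 0: the fit is driven to 0 everywhere (part a).
print(np.allclose(discrete_smoother(y, 0, 1e8), 0.0, atol=1e-6))

# lam -> infinity, m = 1: the fit approaches the constant mean (part b).
print(np.allclose(discrete_smoother(y, 1, 1e8), y.mean(), atol=1e-4))

# lam -> infinity, m = 2: the fit approaches the least squares line (part c).
line = np.polyval(np.polyfit(t, y, 1), t)
print(np.allclose(discrete_smoother(y, 2, 1e8), line, atol=1e-4))
```

Each limit matches the shifted-by-one-degree-of-freedom reading: with a penalty on the m-th derivative, λ → ∞ leaves a polynomial of degree m − 1, and λ = 0 leaves an exact interpolant.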