Hi, thanks for the library. From initial benchmarks on my ML pipelines it seems to be faster than L-BFGS, but the accuracy for logistic regression is worse. It would be great if StochasticGradientDescent could handle the regularization parameter the same way Spark's minibatch SGD does.
Thanks for asking. When L-2 regularization is incorporated into AdaGrad, there is no easy way to handle sparse features: the per-sample complexity becomes proportional to the feature dimension rather than to the number of non-zero entries. If you need a regularization term, you can encode it into the Gradient function (sketched below), or stop the algorithm after several passes over the dataset. For AdaGrad SGD, early stopping is roughly equivalent to L-2 regularization.
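As a rough illustration of encoding the penalty into the gradient, here is a minimal sketch for logistic regression. The names (RegularizedLogisticGradient, computeGradient, regParam) are hypothetical and not part of this library's API; it just shows where the regParam * w term would enter, and why it densifies the per-sample update.

```scala
// Hypothetical sketch: fold an L2 penalty into a logistic-regression
// gradient function so the optimizer itself needs no regParam.
object RegularizedLogisticGradient {
  // Gradient of log-loss + (regParam / 2) * ||w||^2 for one example (y in {0, 1}).
  def computeGradient(w: Array[Double],
                      x: Array[Double],
                      y: Double,
                      regParam: Double): Array[Double] = {
    val margin = w.indices.map(i => w(i) * x(i)).sum
    val pred = 1.0 / (1.0 + math.exp(-margin))
    val err = pred - y
    // Adding regParam * w(i) makes the update dense: every coordinate of w is
    // touched, so the cost is O(d) per sample even if x is sparse.
    w.indices.map(i => err * x(i) + regParam * w(i)).toArray
  }
}
```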
On the Examples page there is a traditional SGD implementation that supports regParam, but traditional SGD is not as fast as AdaGrad in practice.
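For comparison, a traditional SGD step that applies regParam inside the optimizer (the behavior the original request describes for Spark's minibatch SGD) might look like the following sketch; again, the names are illustrative and not this library's actual API.

```scala
// Hypothetical sketch of a Spark-style minibatch SGD update with L2 weight decay.
object MiniBatchSgdStep {
  def step(w: Array[Double],
           grad: Array[Double],   // average gradient over the minibatch
           stepSize: Double,
           regParam: Double): Array[Double] = {
    // w <- w - stepSize * (grad + regParam * w)
    w.indices.map(i => w(i) - stepSize * (grad(i) + regParam * w(i))).toArray
  }
}
```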
We plan to implement more stochastic algorithms in the future, such as SVRG, which are fast and handle regularization more easily.