Hedgehog Oscar - Online ML inspired by Vorpal Rabbit

“A nice, online, stand-alone estimator of logistic regression”

Features (planned, at least)

  • Logistic regression through a REST API (with dynamic learning rate and regularization)

  • Anomaly detection and drift detection

  • Model rollbacks, version handling and stop-learning mode

  • Classification and probability estimation by repeated testing
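Since the estimator itself is still planned, here is a minimal sketch (illustrative names only, not the repo's actual API) of the core update the first feature describes: one online logistic-regression step with a decaying learning rate and L2 regularization.

```python
import math

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def sgd_step(weights, x, y, t, eta0=0.5, l2=0.01):
    """One online SGD update for logistic regression.

    weights: current weight vector, x: feature vector, y: label in {0, 1},
    t: 1-based example counter driving the decaying learning rate.
    All names here are hypothetical, for illustration only.
    """
    eta = eta0 / math.sqrt(t)            # dynamic (decaying) learning rate
    p = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
    err = p - y                          # gradient of log-loss w.r.t. the logit
    return [w - eta * (err * xi + l2 * w) for w, xi in zip(weights, x)]

# one pass over a tiny stream: bias feature + one signal feature
w = [0.0, 0.0]
data = [([1.0, 2.0], 1), ([1.0, -2.0], 0), ([1.0, 3.0], 1), ([1.0, -1.0], 0)]
for t, (x, y) in enumerate(data, start=1):
    w = sgd_step(w, x, y, t)
```

Each example is seen once and discarded, which is the whole point of the online setup: state is just the weight vector plus a step counter.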

Background and inspirational quotes

Beloved logistic regression

Logistic regression (its roots go back to the logistic function of the 1830s) is one of the most widely used ML tools for classification, with good reason.

The world is divided into two classes, those who believe the incredible, and those who do the improbable.

Oscar Wilde

Less is more

Stochastic gradient descent (with some finesse) might not be optimal, but we can try to do one thing well. As a consequence, the hedgehog will adapt to parameter changes.

A fox knows many things, but a hedgehog one important thing

Ancient Greek proverb (attributed to Archilochus)
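The "finesse" above presumably means adaptive per-parameter step sizes; the Adam paper is in the reference list below. As a sketch (illustrative names, not the repo's code), one Adam update for a single parameter:

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba, arXiv:1412.6980) for one parameter.

    m, v: running first- and second-moment estimates; t: 1-based step count.
    Hypothetical helper for illustration only.
    """
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# with a constant gradient, the bias-corrected step is ~lr per iteration,
# regardless of the gradient's magnitude -- that is the adaptivity
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 4):
    w, m, v = adam_step(w, 100.0, m, v, t)
```

This is why an adaptive rule suits an online estimator: feature scales in a live stream are unknown in advance, and Adam normalizes the step per parameter.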

Be (mindfully) simple

Even simple models (like logistic regression), applied with curiosity, give insight into a problem. I call on both hedgehogs and foxes to help me, now and in the future.

I meant it as a kind of enjoyable intellectual game, but it was taken seriously. Every classification throws light on something.

Isaiah Berlin

References

  • Adam, original paper: https://arxiv.org/abs/1412.6980
  • https://link.medium.com/QyWDEMtAQX
  • Gradient descent cheat sheet: https://www.kdnuggets.com/2019/06/gradient-descent-algorithms-cheat-sheet.html
  • L2 regularization: https://developers.google.com/machine-learning/crash-course/regularization-for-simplicity/l2-regularization
  • Scaling of L2 regularization: https://towardsdatascience.com/understanding-the-scaling-of-l2-regularization-in-the-context-of-neural-networks-e3d25f8b50db
  • Adam and optimization trends: https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c
  • Adam walkthrough: https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/
  • Overview of gradient descent variants: http://ruder.io/optimizing-gradient-descent/
  • Logistic regression cost function: https://www.internalpointers.com/post/cost-function-logistic-regression
  • https://arxiv.org/pdf/1704.04289.pdf
  • Gradient descent cheat sheet: https://ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html
  • Using an MCMC posterior as a prior: http://doingbayesiandataanalysis.blogspot.com/2014/08/how-to-use-mcmc-posterior-as-prior-for.html?m=1
  • https://link.medium.com/ZXR9Gd0JlX
  • SGD in Go: https://github.com/jasoncapehart/go-sgd/blob/master/sgd.go
  • Outlier detection (PyOD): https://www.kdnuggets.com/2019/06/overview-outlier-detection-methods-pyod.html
  • TOML instead of YAML for input data? https://github.com/toml-lang/toml/blob/master/README.md
