various updates
stanton119 committed Jul 18, 2024
1 parent 6a3cb3d commit 11c8870
Showing 5 changed files with 273 additions and 229 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -194,6 +194,8 @@ Use the `settings.json` file in the repo
* Beta-Bernoulli bandit vs logistic regression with no features (see the Thompson-sampling sketch after this list)
* NN multi-row vs multi-column - do they perform similarly?
* Multi-horizon forecasting with the direct method and a shared NN architecture - compare separate models for each horizon against a NN that shares layers, and against sequence-to-sequence models.
* Gaussian process from scratch (a minimal posterior sketch follows this list)
* ref - https://www.youtube.com/watch?v=HA-VHNVbvwQ&list=WL&index=26
* Probabilistic neural networks
* Normalizing flows - model complex distributions with transformations of Gaussians
* Can we train an output layer as a Gaussian mixture to model complex distributions via gradient descent?
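
For the Beta-Bernoulli bandit item, a minimal Thompson-sampling sketch; the arm count, priors, and `true_rates` are illustrative assumptions, not values from this repo:

```python
# Thompson sampling for a Beta-Bernoulli bandit (illustrative values throughout).
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.05, 0.10, 0.15]       # hypothetical per-arm reward probabilities
alpha = np.ones(len(true_rates))      # Beta posterior: prior successes + 1
beta = np.ones(len(true_rates))      # Beta posterior: prior failures + 1

for _ in range(10_000):
    draws = rng.beta(alpha, beta)     # one posterior sample per arm
    arm = int(np.argmax(draws))       # play the arm with the highest sample
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward              # conjugate Bernoulli update
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))         # posterior mean reward rate per arm
```

With no features, this is the natural baseline to compare against a feature-free logistic regression fit to the same rewards.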
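
For the Gaussian process item, a minimal from-scratch posterior with an RBF kernel; the training points and lengthscale are toy assumptions:

```python
# GP regression posterior from scratch with a squared-exponential kernel.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # kernel matrix between two 1-D input arrays
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

x_train = np.array([-2.0, 0.0, 1.5])
y_train = np.sin(x_train)                                 # toy targets
x_test = np.linspace(-3.0, 3.0, 7)

K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))   # jitter for stability
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

post_mean = K_s.T @ np.linalg.solve(K, y_train)           # posterior mean
post_cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)         # posterior covariance
post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None)) # clip numerical noise
print(post_mean, post_std)
```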

2 changes: 1 addition & 1 deletion nlp/unfinished-text_embeddings/text_embedding.ipynb

Large diffs are not rendered by default.

8 changes: 7 additions & 1 deletion paper_list.md
@@ -243,4 +243,10 @@ Read 2024/03
* Using a learning rate decay of $\alpha_t=\alpha/\sqrt{t}$ is common (a minimal sketch follows this list).
* Results
* Performs similarly to SGD with momentum on dense MNIST
* Performs similarly to AdaGrad on sparse problems (best of both worlds)
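
A minimal sketch of that $\alpha_t=\alpha/\sqrt{t}$ decay, applied here to plain gradient descent on a toy quadratic rather than Adam itself; all values are illustrative:

```python
# Learning rate decay alpha_t = alpha / sqrt(t) on f(w) = w**2.
import math

alpha = 0.1                      # base learning rate (illustrative)
w = 10.0                         # parameter, minimising f(w) = w**2
for t in range(1, 101):          # t starts at 1 so the decay is defined
    grad = 2 * w                 # f'(w)
    alpha_t = alpha / math.sqrt(t)
    w -= alpha_t * grad

print(w)                         # approaches the minimiser at 0
```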

### The LambdaLoss Framework for Ranking Metric Optimization [2018]
https://dl.acm.org/doi/10.1145/3269206.3271784
Read 2024/03
* Summary
* Formulates a general framework for optimising learning-to-rank problems
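
One concrete instance the framework generalises is the LambdaRank-style pairwise logistic loss weighted by the NDCG change of swapping each pair. A minimal sketch; the scores, labels, and gain/discount choices are illustrative assumptions:

```python
# LambdaRank-style pairwise loss with |delta NDCG| weights (toy example).
import numpy as np

def lambda_weighted_loss(scores, labels):
    n = len(scores)
    order = np.argsort(-scores)               # current ranking by model score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(n)               # rank position of each item
    gains = 2.0 ** labels - 1                 # standard NDCG gain
    disc = 1.0 / np.log2(np.arange(n) + 2)    # discount 1/log2(rank + 1)
    ideal_dcg = np.sum(np.sort(gains)[::-1] * disc)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue                      # only pairs where i should outrank j
            # |NDCG change| if items i and j swapped rank positions
            delta = abs((gains[i] - gains[j]) * (disc[ranks[i]] - disc[ranks[j]])) / ideal_dcg
            # pairwise logistic loss, weighted by the metric change
            loss += delta * np.log1p(np.exp(-(scores[i] - scores[j])))
    return loss

print(lambda_weighted_loss(np.array([0.2, 1.3, 0.1]), np.array([1, 0, 2])))
```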
