# Autoencoder

- An autoencoder is the combination of an encoder function that converts the input data into a different representation, and a decoder function that converts the new representation back into the original format.
- Can be used for data visualization (compressing data into 2D or 3D).
- Autoencoders work because the data may occupy a manifold of lower dimensionality than the ambient input dimension n.
- One-class autoencoders are used for anomaly detection (see the sketch after this list):
  - A well-trained autoencoder will accurately reconstruct any new data coming from the normal state of the process (as it has the same pattern or distribution).
  - Therefore, the reconstruction error will be small.
  - However, if we try to reconstruct data from a rare event, the autoencoder will struggle, making the reconstruction error high during the rare event.
  - We can catch such high reconstruction errors and label them as **a rare-event prediction**.
- Is it possible to find a linear decision boundary with them? Probably yes: if you train an autoencoder on very simple images from two classes, a single threshold may be enough to "classify" them. The decision boundary is nonlinear in the original data space, but linear in the feature space into which the data are mapped.
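
A minimal sketch of the thresholding scheme above, assuming a fitted `autoencoder` object with a Keras-style `predict` method and a `threshold` chosen from reconstruction errors on normal validation data (both names are hypothetical):

```python
import numpy as np

def flag_rare_events(autoencoder, x, threshold):
    """Flag samples whose reconstruction error exceeds a threshold."""
    x_hat = autoencoder.predict(x)              # reconstruct the inputs
    errors = np.mean((x - x_hat) ** 2, axis=1)  # per-sample reconstruction MSE
    return errors > threshold                   # True => rare-event prediction
```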

VAEs are generative, but what about autoencoders?

- The latent space can be repurposed for something else, such as interpolation (see the sketch after this list).
- We usually consider autoencoders not to be generative, since there are no distributional assumptions about how the data (in latent space) is generated.
- In probabilistic-reasoning lingo, there are no assumptions on the data-generating process.
- (Ian Goodfellow): the autoencoder doesn’t give direct explicit access to an estimate of the density, or the ability to sample directly.
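
A minimal sketch of latent-space interpolation, assuming trained `encoder`/`decoder` callables and two inputs `x_a`, `x_b` (all hypothetical names):

```python
import numpy as np

z_a, z_b = encoder(x_a), encoder(x_b)  # encode the two endpoints
steps = np.linspace(0.0, 1.0, num=8)
# Decode evenly spaced blends of the two latent codes.
frames = [decoder((1 - t) * z_a + t * z_b) for t in steps]
```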
Bottleneck dimensionality vs. reconstruction loss:

![](./autoencoders.png)
# Variational Autoencoders

![](vae2.png)

![](vae-gaussian.png)

- We add a constraint on the encoding network that forces it to generate latent vectors that roughly follow a unit Gaussian distribution.
- Generating new images is then easy: sample a latent vector from the unit Gaussian and pass it through the decoder (see the sketch below).
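
A minimal sketch of that sampling step, assuming TensorFlow 2, a trained `decoder` callable, and a latent dimensionality `n_z` (the last two are hypothetical names):

```python
import tensorflow as tf

num_samples = 16
z = tf.random.normal([num_samples, n_z])  # latent vectors from the unit Gaussian prior
generated_images = decoder(z)             # decode them into new images
```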

```python
# VAE loss: pixel-wise reconstruction error plus the closed-form
# KL( N(z_mean, z_stddev^2) || N(0, 1) ) of the encoder outputs.
image_loss = tf.reduce_mean(tf.square(generated_image - real_image))
latent_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1 + 2.0 * tf.math.log(z_stddev) - tf.square(z_mean) - tf.square(z_stddev), axis=1))
loss = image_loss + latent_loss
```

![](vae.png)

- The KL divergence of two Gaussians is easy to compute in closed form.
- To optimize the KL divergence, we apply a simple reparameterization trick: instead of the encoder generating a vector of real values, it generates a vector of means and a vector of standard deviations.

```python
# Draw eps ~ N(0, 1), then shift and scale so gradients flow through z_mean and z_stddev.
samples = tf.random.normal([batchsize, n_z], mean=0.0, stddev=1.0, dtype=tf.float32)
sampled_z = z_mean + z_stddev * samples
```
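
For reference, the closed-form KL term used in the loss above, for a diagonal Gaussian posterior against the unit Gaussian prior:

$$
D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \sigma^{2}) \,\|\, \mathcal{N}(0, 1)\big) = -\frac{1}{2} \sum_{j=1}^{n_z} \left(1 + \log \sigma_{j}^{2} - \mu_{j}^{2} - \sigma_{j}^{2}\right)
$$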
## More

- <https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/>
- <https://hameddaily.blogspot.com/2018/12/yet-another-tutorial-on-variational.html>
- <https://jaan.io/what-is-variational-autoencoder-vae-tutorial/>
- <https://towardsdatascience.com/generating-images-with-autoencoders-77fd3a8dd368>
- <https://wiseodd.github.io/techblog/2017/01/24/vae-pytorch/>
- <https://jhui.github.io/2017/03/06/Variational-autoencoders/>
- <https://www.jeremyjordan.me/variational-autoencoders/>
- <http://kvfrans.com/variational-autoencoders-explained/>
- <https://miro.medium.com/max/2255/1*ejNnusxYrn1NRDZf4Kg2lw@2x.png>
- <https://www.assemblyai.com/blog/variational-autoencoders-for-dummies/>
- Invariant Representations without Adversarial Training: <https://dcmoyer.github.io/selfhosted/blag.html>