A repo with some experiments with the Transformer architecture


Transformers experiments

Aim of this repository

This repository aims to provide a detailed yet simple explanation of the Transformer architecture. It also contains experiments and bits of code for building and training Transformer networks.

At the moment the repo contains explanations of vanilla Transformers, but it will also cover spin-off architectures such as Vision Transformers.

Explanations on embeddings and positional encodings

  • A Jupyter notebook displaying the workflow for obtaining embeddings from sentences, including tokenization of the sentence and computation of the positional encodings

  • The README detailing the theory behind embeddings, tokenization, and positional encodings
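As a companion to the notebook's workflow, here is a minimal sketch of the sinusoidal positional encoding from the original Transformer paper ("Attention Is All You Need"), assuming NumPy. The function name and shapes are illustrative, not taken from the repository's code:

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings of shape (seq_len, d_model)."""
    positions = np.arange(seq_len)[:, np.newaxis]   # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]        # (1, d_model)
    # Each pair of dimensions (2i, 2i+1) shares a frequency 1 / 10000^(2i/d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])           # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])           # odd dimensions use cosine
    return pe

pe = positional_encoding(seq_len=8, d_model=16)
print(pe.shape)  # (8, 16)
```

These encodings are simply added to the token embeddings before the first attention layer, giving the model information about token order.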

Resources to understand transformers

Seminal papers:

Other cool papers:

Pre-trained transformers for acoustic data:

Tutorials related to transformers:

Other cool resources on transformers:
