
The lightweight PyTorch wrapper for high-performance AI research.
Scale your models, not the boilerplate.


Website • Key Features • How To Use • Docs • Examples • Community • Grid AI • Licence


*Codecov coverage is above 90%, but build delays may show less.

PyTorch Lightning is just organized PyTorch

Lightning disentangles PyTorch code to decouple the science from the engineering.


Lightning Philosophy

Lightning is designed with these principles in mind:

Principle 1: Enable maximal flexibility.
Principle 2: Abstract away unnecessary boilerplate, but make it accessible when needed.
Principle 3: Systems should be self-contained (i.e. optimizers, computation code, etc.).
Principle 4: Deep learning code should be organized into 4 distinct categories.

  • Research code (the LightningModule).
  • Engineering code (you delete this; it is handled by the Trainer).
  • Non-essential research code (logging, etc.; this goes in Callbacks).
  • Data (use PyTorch DataLoaders or organize them into a LightningDataModule, sketched below).

Once you do this, you can train on multiple GPUs, TPUs, CPUs, and even in 16-bit precision without changing your code!
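As an illustration of the last category, a LightningDataModule groups the dataloaders for a dataset in one reusable class. A minimal sketch, assuming the MNIST imports listed under Step 0 below (the class name and batch size are illustrative):

class MNISTDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # download and split the dataset once per run
        dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
        self.mnist_train, self.mnist_val = random_split(dataset, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=32)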

Get started with our 2-step guide


Inference

Lightning is also designed for the fast inference that AI researchers and production teams need to scale up models like BERT and self-supervised learning systems. For those cases, Lightning can automatically export your model to ONNX or TorchScript.
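A minimal sketch, using the LitAutoEncoder defined below (in practice you would export a trained model; the file names here are placeholders):

autoencoder = LitAutoEncoder()

# ONNX export traces the model, so it needs an example input
autoencoder.to_onnx('autoencoder.onnx', input_sample=torch.randn(1, 28 * 28))

# TorchScript export
script = autoencoder.to_torchscript()
torch.jit.save(script, 'autoencoder.pt')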




Continuous Integration

System / PyTorch ver.         | 1.3 (min. req.)*    | 1.4                 | 1.5             | 1.6 (latest)        | 1.7 (nightly)
Conda py3.7 [linux]           | PyTorch & Conda     | PyTorch & Conda     | PyTorch & Conda | PyTorch & Conda     | PyTorch & Conda
Linux py3.7 [GPUs**]          | -                   | -                   | Build Status    | -                   | -
Linux py3.7 [TPUs***]         | -                   | -                   | -               | TPU tests           | -
Linux py3.6 / py3.7 / py3.8   | CI complete testing | -                   | -               | CI complete testing | -
OSX py3.6 / py3.7             | -                   | CI complete testing | -               | CI complete testing | -
Windows py3.6 / py3.7 / py3.8 | CI complete testing | -                   | -               | CI complete testing | -
  • * torch>=1.4 is the minimal PyTorch version for Python 3.8
  • ** tests run on two NVIDIA K80s
  • *** tests run on Google GKE TPUv2/3
  • TPU w/ py3.6/py3.7 means we support Colab and Kaggle environments.

How To Use

Step 0: Install

Simple installation from PyPI

pip install pytorch-lightning

From Conda

conda install pytorch-lightning -c conda-forge

Install bleeding-edge (no guarantees)

pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade

Step 0: Add these imports

import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl

Step 1: Define a LightningModule (nn.Module subclass)

A LightningModule defines a full system (i.e. a GAN, an autoencoder, BERT, or a simple image classifier).

class LitAutoEncoder(pl.LightningModule):

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))
    
    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the training loop. It is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer
Note: training_step defines the training loop; forward defines how the LightningModule behaves during inference/prediction.
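For example, once the module is trained, calling it like a regular nn.Module runs forward and returns the embeddings (a minimal sketch with a random placeholder batch):

autoencoder = LitAutoEncoder()
x = torch.randn(4, 28 * 28)   # placeholder batch of flattened 28x28 images
embeddings = autoencoder(x)   # invokes forward()
print(embeddings.shape)       # torch.Size([4, 3])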

Step 2: Train!

dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])

autoencoder = LitAutoEncoder()
trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
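When fit() finishes, Lightning has already saved checkpoints for you (by default under lightning_logs/). A minimal sketch of restoring one; the path is a placeholder:

# reload a trained model from a saved checkpoint
model = LitAutoEncoder.load_from_checkpoint('path/to/checkpoint.ckpt')
model.eval()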

And without changing a single line of code, you could run on GPUs

# 8 GPUs
trainer = pl.Trainer(max_epochs=1, gpus=8)

# 256 GPUs (8 GPUs per node * 32 nodes)
trainer = pl.Trainer(max_epochs=1, gpus=8, num_nodes=32)

Or TPUs

# Distributed training across 8 TPU cores
trainer = pl.Trainer(tpu_cores=8)

# Training on a single, specific TPU core (core 1)
trainer = pl.Trainer(tpu_cores=[1])
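Or in 16-bit precision, as mentioned above (a minimal sketch, assuming a GPU with mixed-precision support):

# 16-bit (mixed) precision on a single GPU
trainer = pl.Trainer(gpus=1, precision=16)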

For advanced users, you can still own complex training loops

class LitAutoEncoder(pl.LightningModule):
    def training_step(self, batch, batch_idx, opt_idx):
        # grab the optimizers returned by configure_optimizers
        (opt_a, opt_b) = self.optimizers()

        # update the first loss/optimizer pair
        loss_a = ...
        self.manual_backward(loss_a, opt_a)
        opt_a.step()
        opt_a.zero_grad()

        # update the second pair; retain_graph=True keeps the graph alive
        # so backward can be called on the same loss a second time
        loss_b = ...
        self.manual_backward(loss_b, opt_b, retain_graph=True)
        self.manual_backward(loss_b, opt_b)
        opt_b.step()
        opt_b.zero_grad()
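Note: depending on your Lightning release, manual optimization like the above may need to be enabled explicitly (for example by turning off automatic optimization on the Trainer or the LightningModule); check the docs for your version.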

Key Features

  • Scale your models to run on any hardware (CPU, GPUs, TPUs) without changing your model
  • More readable code, by decoupling the research code from the engineering
  • Easier to reproduce
  • Less error prone by automating most of the training loop and tricky engineering
  • Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
  • Lightning has out-of-the-box integration with the popular logging/visualizing frameworks (TensorBoard, MLflow, Neptune.ai, Comet.ml, Wandb); see the sketch after this list.
  • Tested rigorously with every new PR. We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
  • Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).
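A minimal sketch of wiring up one of those loggers (TensorBoard here; the directory and experiment names are placeholders):

from pytorch_lightning.loggers import TensorBoardLogger

# everything recorded with self.log(...) goes to the configured logger
logger = TensorBoardLogger('tb_logs', name='autoencoder')
trainer = pl.Trainer(logger=logger)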

Lightning automates 40+ parts of DL/ML research

  • GPU training
  • Distributed GPU (cluster) training
  • TPU training
  • EarlyStopping
  • Logging/Visualizing
  • Checkpointing
  • Experiment management
  • Full list here
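For example, checkpointing is on by default, and early stopping is a one-line callback (a minimal sketch; it assumes a 'val_loss' metric is logged in a validation step):

from pytorch_lightning.callbacks import EarlyStopping

# stop training when the logged 'val_loss' stops improving
trainer = pl.Trainer(callbacks=[EarlyStopping(monitor='val_loss')])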

Examples

Hello world

MNIST hello world
MNIST on TPUs

Contrastive Learning

BYOL
CPC v2
Moco v2
SIMCLR

NLP

BERT
GPT-2

Reinforcement Learning

DQN
Dueling-DQN
Reinforce

Vision

GAN

Classic ML

Logistic Regression
Linear Regression


Community

The lightning community is maintained by

  • 16 core contributors: a mix of professional engineers, research scientists, and Ph.D. students from top AI labs.
  • 280+ community contributors.

Lightning is also part of the PyTorch ecosystem, which requires projects to have solid testing, documentation, and support.

Asking for help

If you have any questions please:

  1. Read the docs.
  2. Look it up in our forum (or add a new question)
  3. Search through the issues.
  4. Join our Slack.
  5. Ask on Stack Overflow with the tag pytorch-lightning.

Funding

Building open-source software with only a few part-time people is hard!

We're venture funded and backed by some of the top VC funds in the world: Index Ventures, Bain Capital Ventures, and First Minute Capital.

Their funding ensures we can continue to build awesome tooling like Grid, give you around-the-clock support, hire a full-time staff, attend conferences, and move faster on implementing the features you request.

To supercharge your research and production work, visit our Grid.ai platform


Grid AI

Grid AI is our native platform for training models at scale on the cloud!

Sign up for early access here

To use Grid, take your regular command:

    python my_model.py --learning_rate 1e-6 --layers 2 --gpus 4

And change it to use the grid train command:

    grid train --grid_gpus 4 my_model.py --learning_rate 'uniform(1e-6, 1e-1, 20)' --layers '[2, 4, 8, 16]'

The above command will launch 20 * 4 = 80 experiments (20 learning-rate samples times 4 layer settings), each running on 4 GPUs, for 320 GPUs in total, with ZERO changes to your code.


Licence

Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is patent pending.

BibTeX

If you want to cite the framework, feel free to use this (but only if you loved it 😊):

@article{falcon2019pytorch,
  title={PyTorch Lightning},
  author={Falcon, WA},
  journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning},
  volume={3},
  year={2019}
}
