# TensorFlow Models

This repository contains machine learning models implemented in TensorFlow. The models are maintained by their respective authors. To propose a model for inclusion, please submit a pull request.

Currently, the models are compatible with TensorFlow 1.0 or later. If you are running TensorFlow 0.12 or earlier, please upgrade your installation.
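If you are unsure which TensorFlow version is installed, a quick check like the following works. This is a minimal sketch; it only relies on `tf.__version__`, which is part of the public TensorFlow API, and uses `LooseVersion` from the standard library for the comparison.

```python
# Check the installed TensorFlow version before running the models.
from distutils.version import LooseVersion

import tensorflow as tf

installed = tf.__version__
print("TensorFlow version:", installed)

# The models in this repository expect TensorFlow 1.0 or later.
if LooseVersion(installed) < LooseVersion("1.0"):
    raise RuntimeError(
        "TensorFlow %s detected; please upgrade to 1.0 or later." % installed)
```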

## Models

- adversarial_text: semi-supervised sequence learning with adversarial training.
- attention_ocr: a model for real-world image text extraction.
- autoencoder: various autoencoders.
- cognitive_mapping_and_planning: implementation of a spatial memory based mapping and planning architecture for visual navigation.
- compression: compressing and decompressing images using a pre-trained Residual GRU network.
- differential_privacy: privacy-preserving student models from multiple teachers.
- domain_adaptation: domain separation networks.
- im2txt: image-to-text neural network for image captioning.
- inception: deep convolutional networks for computer vision.
- learning_to_remember_rare_events: a large-scale life-long memory module for use in deep learning.
- lm_1b: language modeling on the one billion word benchmark.
- namignizer: recognize and generate names.
- neural_gpu: highly parallel neural computer.
- neural_programmer: neural network augmented with logic and mathematical operations.
- next_frame_prediction: probabilistic future frame synthesis via cross convolutional networks.
- real_nvp: density estimation using real-valued non-volume preserving (real NVP) transformations.
- resnet: deep and wide residual networks.
- skip_thoughts: recurrent neural network sentence-to-vector encoder.
- slim: image classification models in TF-Slim.
- street: identify the name of a street (in France) from an image using a Deep RNN.
- swivel: the Swivel algorithm for generating word embeddings.
- syntaxnet: neural models of natural language syntax.
- textsum: sequence-to-sequence with attention model for text summarization.
- transformer: spatial transformer network, which allows the spatial manipulation of data within the network.
- tutorials: models described in the TensorFlow tutorials.
- video_prediction: predicting future video frames with neural advection.