
Announcement

Open postdoc position at LIMSI combining machine learning, NLP, speech processing, and computer vision.

pyannote-video

a toolkit for face detection, tracking, and clustering in videos

Installation

Create a new conda environment:

$ conda create -n pyannote python=3.6 anaconda
$ source activate pyannote

Install pyannote-video and its dependencies:

$ pip install pyannote-video
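
As an optional sanity check (not part of the original instructions), the package should now be importable from the pyannote environment, assuming it exposes the pyannote.video namespace as the tutorial below suggests:

# Run inside the "pyannote" conda environment.
# The import should succeed if pyannote-video and its dependencies installed correctly.
import pyannote.video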

Download dlib models:

$ git clone https://github.com/pyannote/pyannote-data.git
$ git clone https://github.com/davisking/dlib-models.git
$ bunzip2 dlib-models/dlib_face_recognition_resnet_model_v1.dat.bz2
$ bunzip2 dlib-models/shape_predictor_68_face_landmarks.dat.bz2
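
These two files are the pretrained dlib models the installation asks for: a 68-point facial landmark predictor and a ResNet face-embedding network. The sketch below uses dlib's own API (a pyannote-video dependency), not pyannote-video's API, to show what each decompressed file provides; the image path is a placeholder.

# Illustration only: loading the downloaded models with dlib directly.
import dlib

# 68-point facial landmark predictor, used for face alignment.
shape_predictor = dlib.shape_predictor(
    "dlib-models/shape_predictor_68_face_landmarks.dat")

# ResNet that maps an aligned face to a 128-dimensional embedding,
# used to compare and cluster faces.
face_encoder = dlib.face_recognition_model_v1(
    "dlib-models/dlib_face_recognition_resnet_model_v1.dat")

# Detect a face in an image, align it, and compute its embedding.
detector = dlib.get_frontal_face_detector()
image = dlib.load_rgb_image("frame.jpg")  # placeholder image path
for detection in detector(image, 1):
    landmarks = shape_predictor(image, detection)
    embedding = face_encoder.compute_face_descriptor(image, landmarks)
    print(len(embedding))  # 128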

Tutorial

To execute the "Getting started" notebook locally, download the example video and pyannote.video source code:

$ git clone https://github.com/pyannote/pyannote-data.git
$ git clone https://github.com/pyannote/pyannote-video.git
$ pip install jupyter
$ jupyter notebook --notebook-dir="pyannote-video/doc"

Documentation

No proper documentation for the time being...
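
Until documentation exists, the "Getting started" notebook is the main reference. To give a rough idea of what the clustering stage does, here is a generic sketch with scipy, not pyannote-video's actual implementation: each face track is represented by a 128-dimensional dlib embedding, and tracks whose embeddings are close are grouped as the same person.

# Generic face-track clustering sketch (NOT pyannote-video's code).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Placeholder data: one 128-D embedding per face track.
embeddings = np.random.rand(20, 128)

# Average-linkage hierarchical clustering on Euclidean distances.
Z = linkage(embeddings, method="average", metric="euclidean")

# Cut the dendrogram at an illustrative distance threshold;
# each face track gets a cluster label, i.e. a person identity.
labels = fcluster(Z, t=0.6, criterion="distance")
print(labels)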
