
Introduction to Minerva

Minerva is a fast and flexible tool for deep learning. It provides an NDArray programming interface similar to NumPy, with both Python and C++ bindings available, and the resulting code can run on either CPUs or GPUs. Multi-GPU support is straightforward; please refer to the examples to see how a multi-GPU setup is used.
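The snippet below is a minimal sketch of what this NDArray style could look like through the Python binding. The module name owl and the calls create_gpu_device, set_device, randn, zeros, and to_numpy are assumptions made for illustration; consult the bundled examples for the exact API.

```python
# Hypothetical sketch of the NDArray interface via the Python binding;
# the names below (owl, create_gpu_device, set_device, randn, zeros,
# to_numpy) are assumptions, not a verified API reference.
import owl

gpu0 = owl.create_gpu_device(0)   # target the first GPU
owl.set_device(gpu0)              # later NDArray operations run on this device

x = owl.randn([784, 256], 0.0, 0.01)  # Gaussian-initialized NDArray
b = owl.zeros([784, 256])             # zero-initialized NDArray
y = x + b                             # expressed just like NumPy arrays
print(y.to_numpy().sum())             # pull the result back into NumPy
```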

Features

  • Matrix programming interface
  • Easy interaction with NumPy
  • Multi-GPU, multi-CPU support
  • Good performance: ImageNet AlexNet training achieves 213 and 403 images/s with one and two Titan GPUs, respectively. Numbers for four GPU cards are coming soon.

Overview of Minerva Design

Minerva achieves both programmability and efficiency by using DAG (Directed Acyclic Graph) execution.

Minerva provides an NDArray programming interface, as used in many other programming frameworks such as NumPy. The NDArray interface makes it easy for machine learning experts to write new algorithms. In the backend, Minerva converts the NDArray operations into a dependency graph, represented as a DAG, and then executes the operations following their dependencies. As a result, independent operations can be carried out in parallel. DAG execution makes it easy to fully exploit the parallelism inside an algorithm, yielding high performance.
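As an illustration of the idea only (not Minerva's actual engine), the sketch below records NDArray-style operations lazily as nodes of a DAG and, at evaluation time, dispatches every node whose inputs are ready in one batch, so independent operations run in parallel. All names in it are invented for this example.

```python
# Conceptual sketch of DAG execution: operations are recorded as nodes
# instead of running immediately; nodes whose inputs are all available
# are dispatched together, so independent operations run in parallel.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

class Node:
    def __init__(self, fn, inputs):
        self.fn = fn          # the operation to run
        self.inputs = inputs  # parent Nodes or constant arrays
        self.value = None     # filled in when the node is executed

def op(fn, *inputs):
    """Record an operation lazily as a DAG node."""
    return Node(fn, list(inputs))

def evaluate(target):
    # Collect every node reachable from the target (the dependency DAG).
    nodes, stack, seen = [], [target], set()
    while stack:
        n = stack.pop()
        if isinstance(n, Node) and id(n) not in seen:
            seen.add(id(n))
            nodes.append(n)
            stack.extend(n.inputs)

    def ready(n):
        return all(not isinstance(i, Node) or i.value is not None
                   for i in n.inputs)

    def run(n):
        args = [i.value if isinstance(i, Node) else i for i in n.inputs]
        n.value = n.fn(*args)

    remaining = nodes
    with ThreadPoolExecutor() as pool:
        while remaining:
            batch = [n for n in remaining if ready(n)]  # independent, runnable now
            list(pool.map(run, batch))                  # executed concurrently
            remaining = [n for n in remaining if n not in batch]
    return target.value

# The two matrix products a and b are independent and can run concurrently;
# the final addition waits for both of them.
x = np.random.rand(512, 512)
a = op(np.dot, x, x)
b = op(np.dot, x, x.T)
c = op(np.add, a, b)
print(evaluate(c).shape)
```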

The current Minerva design derives from an older version, which is described in our NIPS workshop paper: NIPS workshop paper