Brusta-Go

  • Language-agnostic PyTorch model serving
  • Serve JIT-compiled PyTorch models in a production environment

This project is an extension of Brusta, the original project with Scala/Java support.

Requirements

  • docker == 18.09.1
  • go >= 1.13
  • your JIT-traced PyTorch model (if you are not familiar with JIT tracing, please refer to the JIT Tutorial; a minimal tracing sketch follows this list)
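
If you have not traced a model before, the following is a minimal tracing sketch; MyModel, the input shape, and the file name model.pt are illustrative placeholders, not names used by this repository:

import torch

# Placeholder model; substitute your own network.
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, x):
        return self.linear(x)

model = MyModel().eval()
example_input = torch.ones(1, 3)  # shape must match your model's expected input

# Trace the model with a representative input and save the TorchScript file,
# which is what the model server will load.
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")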

Process Flow

  1. Run "make" to build your PyTorch model server binary (libtorch must be pre-installed)
  2. Load your traced PyTorch model file onto the model server
  3. Run the model server

Details On Build Server

  • TBD

Details On Model Server

  • TBD

Request Example

Send a request to the model server as follows (suppose your input dimension is 3):

curl -X POST -d '{"input":[1.0, 1.0, 1.0]}' localhost:8080/predict
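
The same request can be issued from code. Below is a sketch using Python's requests library; the endpoint and JSON body follow the curl example above, while the response format is not documented here, so the sketch simply prints the raw body:

import requests

# POST a JSON body with an "input" array of length 3 to the /predict
# endpoint of a locally running model server (matches the curl example).
payload = {"input": [1.0, 1.0, 1.0]}

response = requests.post("http://localhost:8080/predict", json=payload)
response.raise_for_status()

# The response format is not documented here, so just print the raw body.
print(response.text)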

Contributors to the original repository

Author