Add a guide for multi-model endpoints #986
Labels: docs, enhancement, good first issue
Description
Multi-model endpoints are possible with the Python Predictor, but we don't yet have a guide or example showing how to set one up.

#619 tracks adding support for a model cache, so that not all models need to fit in memory at the same time.
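For reference, a minimal sketch of the pattern a guide could document, assuming a Cortex-style Python Predictor interface (`__init__(config)` / `predict(payload, query_params)`). The `load_model` helper, the config layout, and the `?model=` query parameter are illustrative stand-ins, not part of the actual API:

```python
def load_model(path):
    # Stand-in loader: a real predictor would deserialize a model
    # (e.g. ONNX, TensorFlow, PyTorch) from this path.
    return lambda text: f"{path}:{text}"


class PythonPredictor:
    def __init__(self, config):
        # Load every model up front; they must all fit in memory at once
        # (#619 tracks a model cache that would lift this restriction).
        self.models = {
            name: load_model(path) for name, path in config["models"].items()
        }

    def predict(self, payload, query_params):
        # Dispatch the request to the model named in the query string,
        # e.g. ?model=sentiment
        name = query_params.get("model")
        if name not in self.models:
            raise ValueError(f"unknown model: {name!r}")
        return self.models[name](payload["text"])
```

A guide would also want to cover how the model paths get into `config` (e.g. from the API spec) and what error response to return for an unknown model name.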