Is your feature request related to a problem? Please describe.
Can we consider integrating a set of pre-trained models into monai-deploy-app-sdk?
Describe the solution you'd like
We should have something like MMAR integrated into MONAI Deploy. In this scenario, the user would download a Docker image (or model) directly from the NVIDIA cloud for immediate use. The downloaded model would contain all the relevant information (i.e., input/output node names, data types, and input/output sizes) required to write the config.pbtxt for the Triton Inference Server.
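To make this concrete, a Triton config.pbtxt generated from such packaged metadata might look like the sketch below. The model name, tensor names, and dimensions are hypothetical placeholders; the actual values would come from the metadata shipped with the downloaded model.

```protobuf
# Hypothetical Triton model configuration for a segmentation model.
# name, tensor names, and dims are illustrative, not from a real MMAR.
name: "brain_mask_segmentation"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 2, 224, 224 ]
  }
]
```

If the packaged model already records the node names, data types, and shapes, a file like this could be emitted automatically, which is exactly the manual step the proposal aims to remove.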
Describe alternatives you've considered
As of now, the model is hosted on a Google Drive.
Additional context
We could have this as a feature where models from a model zoo can be integrated directly into the MONAI Deploy App SDK. There are many ubiquitous problems in deep learning for radiology, for example breast cancer detection, brain mask segmentation, lung nodule detection, fracture detection, and tumor segmentation (BraTS). Though different groups have different models, these are problems people have been working on for a while, and models with reasonably good accuracy already exist. We should integrate them with MONAI Deploy, so a new lab or research team can download both the models and the infrastructure (through the Deploy SDK) to run inference. Otherwise, people may get the model but still face a lot of work to make it run.
We are definitely working on that. We've had a look at MMAR and at solutions from Hugging Face and others, and we're in the process of developing a prototype repo for a shared model, to discuss what its contents should be and what information we need in such an archive. MMAR is a good starting point for what the structure will likely be.