[FEA] Multi-model support in an application built with MONAI Deploy App SDK #244
Both the existing and new (MONAI Bundle) inference operators have been enhanced to make use of and request uniquely named models from the app execution context. Applications, namely the Spleen Seg and the Liver and Tumor Seg, have been tested successfully with multiple models loaded from a defined folder structure and with the inference operator requesting a named model. An example containing multiple inference operators, each using a different model in the app context, will be provided in a later release once the models are ready, e.g. segmentation followed by classification in series, or multi-model AI with each model consuming the same input image.
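For illustration only, a multi-operator app using two named models might be composed roughly as sketched below. The operator classes are from the SDK, but the constructor arguments (in particular model_name), the model names, and the port names in the flow mappings are assumptions based on the description above, not confirmed API:

```python
# Hypothetical sketch: an app wiring two inference operators, each bound
# to a differently named model in the app's model folder. Constructor
# arguments (e.g. model_name) and port names are illustrative assumptions.
from monai.deploy.core import Application
from monai.deploy.operators import (
    DICOMSeriesToVolumeOperator,
    MonaiBundleInferenceOperator,
)


class MultiModelApp(Application):
    def compose(self):
        # Upstream DICOM loading/series-selection operators omitted for brevity.
        to_volume = DICOMSeriesToVolumeOperator()
        seg_op = MonaiBundleInferenceOperator(model_name="spleen_ct_seg")     # placeholder name
        cls_op = MonaiBundleInferenceOperator(model_name="tumor_classifier")  # placeholder name

        self.add_flow(to_volume, seg_op, {"image": "image"})
        # Segmentation output feeds the downstream classification model.
        self.add_flow(seg_op, cls_op, {"pred": "image"})
```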
Is it possible to provide an example of loading multiple models into one MONAI Deploy app? I don't think any of the current demo apps include this.
Hi @linhandev, yes, we will work on such an example. Thanks for pointing this out.
@linhandev Thanks for the question. Yes, I'm planning to do a good example with, e.g., Seg and Classification models, though I have not yet found a good set in the MONAI Model Zoo. I can potentially have an app with both the existing Liver Tumor and the Spleen Seg models, a mixture of plain TorchScript and MONAI Bundle compliant TorchScript, but I first need to tweak the DICOM Seg writer to save the DICOM Seg instance file with the Series Instance UID as the unique file name. In the meantime, one can already provide multiple models in an app, with the model files in a defined folder structure, as shown in the illustrative layout below, where each model is identified by the name of its containing subfolder,
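(The model names in this layout are placeholders; the convention sketched here assumes one subfolder per model, named after that model, each containing its TorchScript file.)

```
models
├── spleen_ct_seg
│   └── model.ts
└── liver_tumor_seg
    └── model.ts
```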
and to access a named model from within the app, the operator requests it by name from the app execution context.
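A minimal sketch of that access pattern, assuming the execution context exposes loaded models through context.models.get(name) and that the model object exposes the loaded TorchScript module as its predictor (the operator class and model name below are placeholders):

```python
from monai.deploy.core import ExecutionContext, InputContext, Operator, OutputContext


class MyInferenceOperator(Operator):
    def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
        # Request the model by the name of its subfolder in the models directory.
        # "spleen_ct_seg" is a placeholder matching the layout sketched above.
        model = context.models.get("spleen_ct_seg")
        predictor = model.predictor  # the loaded TorchScript module (assumed attribute)
        # ... run pre-processing, predictor(...), and post-processing here ...
```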
Hope this helps.
@linhandev I have created a WIP pull request demonstrating the use of multiple models within the same app. It is WIP for a couple of reasons, one being that one of the MONAI Bundle TorchScripts fails to load, and fails even with a plain torch.jit.load() call on its own; see the issue created for the Model Zoo.
Is your feature request related to a problem? Please describe.
There are cases where multiple AI models are needed in the same application to produce the final inference result; typically one model provides the image ROI for another model, for example, a segmentation model whose output defines the region passed to a downstream classification model.
The ROI image can be generated using a non-DL, computer-vision-based algorithm, but it is becoming common to use DL models for this step as well.
Describe the solution you'd like
Support loading multiple named models in a single application and allow operators to request a specific model by name, e.g. multiple inference operators each supporting a specific named model.

Alternative Solution
Additional context
App SDK standardizes the in-memory image representation, ensuring consistency and correctness in passing image objects among operators within the same app; see Make DICOMSeriesToVolumeOperator consistent with ITK in serving NumPy array #238.