
[FEA] Multi-model support in an application built with MONAI Deploy App SDK #244

Closed
MMelQin opened this issue Jan 21, 2022 · 5 comments · Fixed by #303
Labels: enhancement (New feature or request)

Comments
MMelQin (Collaborator) commented Jan 21, 2022

Is your feature request related to a problem? Please describe.
There are cases where multiple AI models are needed in the same application to produce the final inference result; typically, one model provides the image ROI for another model. For example:

  • a prostate tumor segmentation model requires an image of the prostate itself as input, which is the output of a prostate segmentation model taking a DICOM CT series as input
  • a COVID classification model requires the ROI of the lung only, which is segmented by a lung segmentation model

The ROI image can be generated using a non-DL, computer-vision-based algorithm, but it is becoming common to use DL models.

Describe the solution you'd like

  • The App SDK has been designed to support multiple processing steps/algorithms, called operators, e.g. multiple inference operators, each supporting a specific named model
  • The App SDK supports sharing results, as in-memory objects, from one operator to downstream operators; for example, a segmentation inference operator can generate an organ segment image object, which is linked as the input to the tumor segmentation operator by creating the operator network within the application
  • The App SDK inference operator needs to load a named model and have model-specific transformation and inference logic. As of now, the segmentation inference operator loads the default model, and needs to be passed a model name/UID on instantiation
  • Multiple models can already be loaded and made available in the execution context by the base Application class
  • The App SDK packager supports packaging multiple models in the MONAI App Package (Docker image), accessible to the app for loading at start-up
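The operator/named-model pattern in the bullets above can be sketched in plain Python. The registry and operator classes below are hypothetical stand-ins for illustration, not the actual App SDK API:

```python
# Hypothetical sketch: two inference operators chained so the first produces
# an ROI consumed by the second, each requesting a uniquely named model.

class ModelRegistry:
    """Holds uniquely named models, mimicking the app execution context."""
    def __init__(self):
        self._models = {}
    def add(self, name, model):
        self._models[name] = model
    def get(self, name):
        return self._models[name]

class InferenceOperator:
    """An operator bound to a named model; compute() runs that model."""
    def __init__(self, registry, model_name):
        self.model = registry.get(model_name)
    def compute(self, image):
        return self.model(image)

# Toy "models": callables standing in for TorchScript networks.
registry = ModelRegistry()
registry.add("organ_seg_model", lambda img: [px for px in img if px > 0])  # crop ROI
registry.add("tumor_seg_model", lambda roi: [px * 2 for px in roi])        # segment tumor

organ_seg_op = InferenceOperator(registry, "organ_seg_model")
tumor_seg_op = InferenceOperator(registry, "tumor_seg_model")

# The app links the output of one operator to the input of the next.
roi = organ_seg_op.compute([-1, 0, 3, 5])
result = tumor_seg_op.compute(roi)
print(result)  # [6, 10]
```

The key point of the design is that the operators are generic; only the model name binds each one to a specific network at app-assembly time.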

Alternative Solution

  • Create an operator to encapsulate the complete inference logic and the model itself, i.e. a "black box" operator. For example, for an organ segmentation operator, the input would be the in-memory volumetric image (converted from a DICOM series), and the output an in-memory organ segmentation image. The disadvantage is that the model file needs to be embedded in the operator code itself.
  • Build model-specific applications, each as a MAP (a current limitation of the MAP spec: a single Docker image), and then use an external orchestrator or platform to manage the execution of each MAP. The disadvantages include that shared memory is unlikely, and the need for external orchestration.
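A minimal sketch of the first ("black box") alternative, showing why the embedded model file is a drawback. The class, path, and loader below are hypothetical, not the SDK API:

```python
# Hypothetical "black box" operator: it owns its model end-to-end, so the
# model file path is hard-wired into the operator code itself.

class OrganSegBlackBoxOperator:
    # Embedded model location: swapping the model means editing this code.
    MODEL_PATH = "operators/organ_seg/model.ts"

    def __init__(self, loader=None):
        # 'loader' stands in for torch.jit.load; injectable here so the
        # sketch runs without a real TorchScript file on disk.
        loader = loader or (lambda path: (lambda image: [px for px in image if px > 0]))
        self.model = loader(self.MODEL_PATH)

    def compute(self, volume_image):
        # In-memory volumetric image in, in-memory organ seg image out.
        return self.model(volume_image)

op = OrganSegBlackBoxOperator()
print(op.compute([-2, 0, 4, 7]))  # [4, 7]
```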
Additional context

The App SDK standardizes the in-memory image representation, ensuring consistency and correctness when passing image objects among operators within the same app: Make DICOMSeriesToVolumeOperator consistent with ITK in serving NumPy array #238
@MMelQin MMelQin added the enhancement New feature or request label Jan 21, 2022
@MMelQin MMelQin modified the milestone: v0.4 Jan 21, 2022
@vikashg vikashg self-assigned this Jan 21, 2022
MMelQin (Collaborator, Author) commented Jun 27, 2022

Both the existing and the new (MONAI Bundle) inference operators have been enhanced to make use of, and request, uniquely named models from the app execution context.

Applications, namely the Spleen Seg and the Liver and Tumor Seg, have been tested successfully with multiple models loaded from a defined folder structure and with the inference operator requesting a named model.

An example containing multiple inference operators, each using a different model in the app context, will be provided in later releases once the models are ready, e.g. segmentation followed by classification in series, or multi-(model)-AI with each consuming the same input image.

@MMelQin MMelQin self-assigned this Jun 27, 2022
@MMelQin MMelQin added this to the v0.4 milestone Jun 27, 2022
@MMelQin MMelQin linked a pull request Jun 27, 2022 that will close this issue
@gigony gigony mentioned this issue Jul 12, 2022
linhandev commented

Is it possible to provide an example of loading multiple models into one MONAI Deploy app? I don't think any of the current demo apps include this.

vikashg (Collaborator) commented Jul 16, 2022

Hi @linhandev, yes, we will work on such an example. Thanks for pointing it out.

MMelQin (Collaborator, Author) commented Jul 17, 2022

> Is it possible to provide an example of loading multiple models into one MONAI Deploy app? I don't think any of the current demo apps include this.

@linhandev Thanks for the question.

Yes, I'm planning to create a good example with, e.g., Seg and Classification models, though I have not yet found a good set in the MONAI Model Zoo. I could potentially build an app with both the existing Liver Tumor and the Spleen Seg models, a mixture of plain TorchScript and MONAI Bundle compliant TorchScript, but I first need to tweak the DICOM Seg writer to save each DICOM Seg instance file with the Series Instance UID as the unique file name.

In the meantime, one can already provide multiple models in an app, with the model files in a defined folder structure, as shown in the example below: one model is identified by the name spleen_model and another by liver_tumor_model, while the path to the folder app_models is used as the value of the model arg on the CLI commands.

app_models
├── liver_tumor_model
│   └── model.ts
└── spleen_model
    └── model.ts

To access a model from within the app, the model_name arg is used to pass the model name to the MONAI Bundle Inference operator and/or the base Segmentation Inference operator constructor, e.g.

        bundle_spleen_seg_op = MonaiBundleInferenceOperator(
            input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
            output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
            model_name="spleen_model",
        )
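The folder layout above lends itself to name-based model discovery. A minimal sketch of such scanning logic, for illustration only (this is not the SDK's actual model loader):

```python
import os
import tempfile

def find_named_models(models_root):
    """Map each model subfolder name to its TorchScript file path."""
    models = {}
    for name in sorted(os.listdir(models_root)):
        ts_path = os.path.join(models_root, name, "model.ts")
        if os.path.isfile(ts_path):
            models[name] = ts_path
    return models

# Recreate the example layout in a temporary folder and scan it.
root = tempfile.mkdtemp()
for name in ("liver_tumor_model", "spleen_model"):
    os.makedirs(os.path.join(root, name))
    open(os.path.join(root, name, "model.ts"), "w").close()

print(sorted(find_named_models(root)))  # ['liver_tumor_model', 'spleen_model']
```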

Hope this helps.

MMelQin (Collaborator, Author) commented Jul 17, 2022

@linhandev I have created a WIP pull request demonstrating the use of multiple models within the same app. It is WIP for a couple of reasons, one being that one of the MONAI Bundle TorchScripts fails to load, failing even with a plain torch.jit.load() on its own; see the issue created for the Model Zoo.
