Wsinfer local model doc #118

Merged: 4 commits, Sep 23, 2024
`docs/deep/wsinfer.md` (36 additions, 10 deletions)
(wsinfer-extension)=
# WSInfer

The [WSInfer QuPath extension](https://github.com/qupath/qupath-extension-wsinfer/) makes it possible to do patch-based deep learning inference for digital pathology, without any need for scripting.

It's a collaboration between Stony Brook University ([WSInfer](https://wsinfer.readthedocs.io/en/latest/)) and the QuPath group (the [extension](https://github.com/qupath/qupath-extension-wsinfer/) that brings WSInfer models to QuPath).
A GPU is not required but can dramatically speed up processing.
If you have an NVIDIA GPU and want to use it with WSInfer, you will need to install a version of CUDA compatible with PyTorch - please see {doc}`gpu`.
:::

## Set-up

With QuPath installed and running, drag and drop the WSInfer extension into the application and restart QuPath.
However, the most (potentially) exciting additional option is {guilabel}`Preferred device`.

The options available will depend upon your computer's capabilities (at least as far as they could be discerned by Deep Java Library):

- **CPU**: This is generally the safest - and slowest - option, because it should be supported on all computers.
- **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Apple silicon - it is the Mac version of GPU acceleration.
- **GPU**: This should appear if you have an NVIDIA GPU, CUDA... and a little bit of luck.

If either MPS or GPU works for you, it should reduce the time required for inference by a *lot*.
However, configuration for GPU can be tricky, as it will depend upon other hardware and software on your computer - CUDA in particular.
It won't automatically find any existing PyTorch you might have installed: Deep Java Library downloads its own copy.
If you have a compatible GPU, and want CUDA support, you'll need to ensure you have an appropriate CUDA installed *before* PyTorch is downloaded.
:::

## Using your own models

In addition to models downloaded from the WSInfer zoo, you can use your own models by placing them in a local models directory named `user` or `local`. This directory should sit inside the WSInfer model directory (the path is shown in the WSInfer dialog), as a sibling of the `kaczmarj` directory that contains the downloaded models.

Inside the local models directory, each model lives in its own subdirectory; the subdirectory name becomes the model name in the model selection menu.
Each model subdirectory contains:

- the model in TorchScript format, named `torchscript_model.pt`, and
- a `config.json` file, as explained in the [WSInfer user guide](https://wsinfer.readthedocs.io/en/latest/user_guide.html#use-your-own-model). An example config file is shown [here](https://github.com/SBU-BMI/wsinfer/issues/221).
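For orientation only, a `config.json` might look roughly like the following. This is a hedged sketch: the key names are taken from the WSInfer model spec linked above, and every value here (architecture, class names, patch size, spacing, normalization statistics) is a placeholder rather than a real model's settings - consult the linked user guide for the authoritative schema.

```json
{
  "spec_version": "1.0",
  "architecture": "resnet34",
  "num_classes": 2,
  "class_names": ["notumor", "tumor"],
  "patch_size_pixels": 350,
  "spacing_um_px": 0.25,
  "transform": [
    {"name": "Resize", "arguments": {"size": 224}},
    {"name": "ToTensor"},
    {"name": "Normalize",
     "arguments": {"mean": [0.485, 0.456, 0.406],
                   "std": [0.229, 0.224, 0.225]}}
  ]
}
```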

Thus, the structure of the models directory could be as follows:

```text
wsinfer
kaczmarj
breast-tumor-resnet34.tcga-brca
main
README.md
config.json
lfs-pointer.txt
torchscript_model.pt
local
my_model
config.json
torchscript_model.pt
wsinfer-zoo-registry.json
```
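To make the layout above concrete, here is a minimal Python sketch that prepares a local-model folder. The names `my_model`, and the use of a temporary directory in place of the real WSInfer model directory, are illustrative assumptions; in practice you would create the `local` folder under the path shown in the WSInfer dialog, and copy in your real `config.json` and TorchScript export.

```python
import tempfile
from pathlib import Path

# Stand-in for the WSInfer model directory; substitute the real path
# shown in the WSInfer dialog on your machine.
wsinfer_dir = Path(tempfile.mkdtemp())

# Create local/<model-name>/ alongside the downloaded 'kaczmarj' models
model_dir = wsinfer_dir / "local" / "my_model"
model_dir.mkdir(parents=True)

# The two required files (empty placeholders here; use your real
# config.json and TorchScript model file in practice)
(model_dir / "config.json").touch()
(model_dir / "torchscript_model.pt").touch()

print(sorted(p.name for p in model_dir.iterdir()))
# → ['config.json', 'torchscript_model.pt']
```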

## Scripting

where the `selectAnnotations()` line was added when I pressed the {guilabel}`Annotation` button.
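Such a script is, in outline, a short Groovy snippet of the kind the WSInfer dialog generates. Treat this as a sketch: the model name is illustrative, and you should copy the actual script from the dialog rather than typing this by hand.

```groovy
// Select the annotations to process, then run inference with a chosen model.
// The model name below is an example; use the one shown in your dialog.
selectAnnotations()
qupath.ext.wsinfer.WSInfer.runInference("kaczmarj/breast-tumor-resnet34.tcga-brca")
```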

To process in batch, I would need to

- Add my images to a QuPath project
- Annotate the regions of interest in the images (and save the data)
- Open the above script in QuPath's script editor
- Choose {menuselection}`Run --> Run for project`, and select the images I want to process

## Identifying TILs (overlaying predictions of two models)
