
0.6 release notes #158


Open · wants to merge 18 commits into base: main
2 changes: 1 addition & 1 deletion docs/deep/djl.md
@@ -46,7 +46,7 @@ Instead, you should use the {menuselection}`Manage DJL Engines` to explicitly re

To use an NVIDIA GPU with either TensorFlow or Pytorch, you will need to have a *compatible* version of CUDA installed *before* downloading the engine.

See {ref}`gpu-support` for more details.
See [the GPU support page](gpu-support) for more details.
:::

If downloading the engine is successful, the indicator beside the engine should switch to green.
Binary file modified docs/deep/images/instanseg_fl_results.jpg
Binary file removed docs/deep/images/instanseg_running.jpg
6 changes: 0 additions & 6 deletions docs/deep/instanseg.md
@@ -105,12 +105,6 @@ When you click `Run`, InstanSeg will check for PyTorch.
If this is not on your machine it will download it for you (this could be > 100 MB, so may take a while).
Once this is done, the model will run and you will see the results in the viewer.

:::{figure} images/instanseg_running.jpg
:class: shadow-image large-image

Running InstanSeg
:::

### 5. Viewing Results

The results will be displayed in the viewer.
Binary file added docs/reference/images/omero-overview.png
Binary file added docs/reference/images/toolbar.png
4 changes: 1 addition & 3 deletions docs/reference/index.md
@@ -4,13 +4,11 @@
:maxdepth: 2

faqs
<!-- tips_and_tricks -->
shortcuts
commands
config
building
styling
projects_structure
<!-- release_notes -->
<!-- release_notes_template -->
release_notes
```
195 changes: 87 additions & 108 deletions docs/reference/release_notes.md

Large diffs are not rendered by default.

7 changes: 2 additions & 5 deletions docs/reference/release_notes_template.md
@@ -1,8 +1,4 @@
# Release Notes Template ***TODO***

Welcome to the expanded release notes for QuPath ***TODO***.
This version was released on ***TODO***.
The aim of this document is to provide a more detailed overview of the changes in this version than the original github [release notes](https://github.com/qupath/qupath/blob/main/CHANGELOG.md) ***TODO***.
# Release Notes Template

1. [Major features](#major-features) - Spotlight changes
2. [Enhancements](#enhancements) - Additions to make existing features better
@@ -28,6 +24,7 @@ Example image
```{image} https://github.com/user-attachments/assets/ecd1d6a7-9b49-4a93-b635-2298d43abf09
:width: 48%
```

Another example image format for in-line images.

## ✨ Enhancements
6 changes: 6 additions & 0 deletions docs/starting/annotating.md
@@ -119,6 +119,12 @@ You can set these quickly for a selected annotation by pressing the {kbd}`Enter`

The name can be shown or hidden in the viewer using {menuselection}`View --> Show names`, or the shortcut {kbd}`N`.

:::{figure} images/annotating_names.jpg
:class: shadow-image mid-image

Annotations with names displayed
:::
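Annotation names can also be set from a script, which is handy when preparing many regions at once. A minimal sketch, assuming a single annotation is currently selected (the name used here is only an example):

```groovy
// Rename the currently selected annotation (example name, not from the tutorial)
def annotation = getSelectedObject()
annotation.setName("Region 1")
fireHierarchyUpdate()   // refresh the annotation list and the viewer
```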

## {{ tool_selection_mode }} Selection mode

One toolbar button that lives *beside* the annotation tools is not actually used to draw new annotations -- but it is closely related.
2 changes: 1 addition & 1 deletion docs/starting/essential_tips.md
@@ -33,7 +33,7 @@ The *Command List* also now includes a brief help text description for most comm
If you find yourself wanting to run the same command repeatedly, uncheck the {guilabel}`Auto close` box to keep the command list open.
:::

:::{figure} images/tips_command.jpg
:::{figure} images/tips_command.png
:class: shadow-image full-image

QuPath's 'Command list'
4 changes: 2 additions & 2 deletions docs/starting/first_steps.md
@@ -355,7 +355,7 @@ Note that this table remains connected to the image, and allows you to select in
Each measurement can also be viewed in a histogram by clicking on {guilabel}`Show Histogram` with various viewing options available.
For further data analysis, the table can be saved as a CSV file or copied to the clipboard for pasting into another application, e.g. Excel.
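If you prefer to script the export, the same measurements can be written directly to a file. A minimal sketch, where the output path is only an example:

```groovy
// Save all detection measurements for the current image to a file
// (choose whatever path suits your project)
saveDetectionMeasurements('/path/to/detection_measurements.csv')
```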

:::{figure} images/steps_table.jpg
:::{figure} images/steps_table.png
:class: shadow-image full-image

A 'detection' measurement table containing details of all the detected cells
@@ -391,4 +391,4 @@ In such cases, the `.qpdata` file alone doesn't contain enough information - tha

If you got this far, great! You've seen many of the main features of QuPath, and had your first encounter with the fundamental idea of working with objects.

Even if not everything is clear yet, hopefully it gives enough motivation to read on through the documentation and see how powerful these ideas can become when put together.
Even if not everything is clear yet, hopefully it gives enough motivation to read on through the documentation and see how powerful these ideas can become when put together.
20 changes: 17 additions & 3 deletions docs/starting/help.md
@@ -15,16 +15,30 @@ If your needs for instruction are modest, it's always worthwhile to try hovering

There are two other commands that can help - which both also have toolbar buttons.

### {{ tool_pin_point }} QuPath tour

When uncertain about the QuPath interface or a feature, the **QuPath tour** under {menuselection}`Help --> QuPath Tour` may help. It guides you through the interface by highlighting each element and offering a brief explanation of its function.

:::{figure} images/steps_tour.jpg
:class: shadow-image full-image

QuPath Tour
:::

### {{ tool_help }} Context help

Starting with v0.5.0, QuPath has a new command {menuselection}`Help --> Show interactive help`.
The context help is a great way to find out more about a tool or a parameter within QuPath. It can be found under {menuselection}`Help --> Show interactive help`.

This aims to include tips and explain things that may have gone wrong.
It also shows help text associated with any command or button under the cursor.
This provides additional information when you hover over items in QuPath such as tools or parameters. It also warns you about potential issues that could disrupt your image analysis workflow—making it a valuable tool to keep open as you work.

If you find yourself stuck or confused, it's worthwhile clicking {{ icon_help }} first to see if it can help.
And if the interactive help has something it really thinks you should know, a small badge will be displayed on the toolbar button.

:::{figure} images/steps_context_help.png
:class: shadow-image mini-image

Context Help
:::

### {{ tool_log }} Log

Binary file added docs/starting/images/annotating_names.jpg
Binary file modified docs/starting/images/counting_convex.jpg
Binary file modified docs/starting/images/counting_grid.jpg
Binary file modified docs/starting/images/counting_image.jpg
Binary file modified docs/starting/images/counting_manual_continued.jpg
Binary file modified docs/starting/images/counting_manual_start.jpg
Binary file modified docs/starting/images/counting_region.jpg
Binary file modified docs/starting/images/steps_annotation_panel.jpg
Binary file modified docs/starting/images/steps_annotations.jpg
Binary file modified docs/starting/images/steps_cells_detected.jpg
Binary file modified docs/starting/images/steps_cells_display.jpg
Binary file added docs/starting/images/steps_context_help.png
Binary file modified docs/starting/images/steps_image.jpg
Binary file modified docs/starting/images/steps_image_pixels.png
Binary file modified docs/starting/images/steps_image_tab.jpg
Binary file modified docs/starting/images/steps_overview.jpg
Binary file removed docs/starting/images/steps_table.jpg
Binary file added docs/starting/images/steps_table.png
Binary file added docs/starting/images/steps_tour.jpg
Binary file modified docs/starting/images/steps_welcome.jpg
Binary file removed docs/starting/images/tips_command.jpg
Binary file added docs/starting/images/tips_command.png
8 changes: 4 additions & 4 deletions docs/tutorials/cell_classification.md
@@ -124,16 +124,16 @@ Some commands that enable this are found in the {menuselection}`Analyze --> Calc
One approach is to calculate textures from the image surrounding each cell.
This can be very effective, although computationally quite demanding whenever there are very large numbers of cells.

A much faster alternative, which can give very good results, is to simply 'smooth' the existing measurements with the {menuselection}`Analyze --> Calculate features --> Add smoothed features` command.
This will supplement the existing measurements with new measurements calculated by taking a weighted average of the corresponding measurements of neighboring cells.
A much faster alternative, which can give very good results, is to simply 'smooth' the existing measurements with the {menuselection}`Analyze --> Calculate features --> Add smoothed features` command.
This will supplement the existing measurements with new measurements calculated by taking a weighted average of the corresponding measurements of neighboring cells. A pop-up dialog will ask which regions to smooth; select all annotations.

The weighting depends on distance, i.e. cells that are further away have less contribution to the result.
Technically, distance is based on centroids and the weighting is calculated from a Gaussian function, where the parameter required in the dialog box is the [full-width-at-half-maximum](https://en.wikipedia.org/wiki/Full_width_at_half_maximum) of the Gaussian function.
Less technically, putting higher numbers into the dialog box results in more smoothing.
This reduces the noisiness of the measurements more effectively, but also makes it more difficult to distinguish smaller areas containing particular cell types.
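For reference, the weight given to a neighboring cell at centroid distance *d* has the standard Gaussian form, with the Gaussian width derived from the full-width-at-half-maximum (FWHM) entered in the dialog (shown here up to normalization; the exact implementation may differ slightly):

```{math}
w(d) = \exp\left(-\frac{d^2}{2\sigma^2}\right), \qquad \sigma = \frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}} \approx \frac{\mathrm{FWHM}}{2.355}
```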

:::{figure} images/ki67_auto_smooth_features.jpg
:class: shadow-image mid-image
:class: shadow-image small-image

Smooth features dialog
:::
@@ -173,7 +173,7 @@ Training cell classification
Continue creating annotations and assigning their classes.
Right-clicking on the image after drawing the annotation can offer an easier way to set the class, without needing to move the mouse to the other side of the screen and press the {guilabel}`Set class` button on the left.

:::{figure} images/ki67_auto_training_tumor.jpg
:::{figure} images/ki67_auto_training_stroma.jpg
:class: shadow-image full-image

Training cell classification with right-click
5 changes: 2 additions & 3 deletions docs/tutorials/cell_detection.md
@@ -123,7 +123,7 @@ Another way to view all the measurements of all the cells is by selecting {menus
This should open up a results table with the measurements of all cells.
From this, it is possible to generate histograms, sort columns, select individual cells (which will then be selected on the image) and to export the measurements to a CSV file for use elsewhere.

:::{figure} images/ki67_detecting_results_detections.jpg
:::{figure} images/ki67_detecting_results_detections.png
:class: shadow-image full-image

Cell detection results table
@@ -146,7 +146,6 @@ However, it is important to note that when the stain estimates are improved then
If necessary, it is possible to then proceed to draw further annotations around areas of interest.
These can be processed one-by-one by running *Positive cell detection* on an annotation when it is selected, or else they can be processed all together (in parallel).
The easiest way to do the latter is to ensure that no annotations are selected (e.g. double-click a background area with the *Move* tool {{ icon_move }} selected), and then press the {guilabel}`Run` button in the *Positive cell detection* dialog window.
QuPath will then prompt you to confirm if you want to run the detection for all *Annotations*.
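If you prefer scripting, the same parallel processing can be reproduced by first selecting every annotation; a minimal sketch (the detection call itself, with its full parameter string, is best copied from the {menuselection}`Workflow` tab after running the command once through the dialog):

```groovy
// Select every annotation so that detection runs on all of them in parallel
selectAnnotations()
// Then paste the runPlugin(...) line recorded in the Workflow tab, e.g.
// runPlugin('qupath.imagej.detect.cells.PositiveCellDetection', '{ ...parameters copied from the Workflow tab... }')
```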

:::{figure} images/ki67_detecting_multiple_parallel_annotations.jpg
:class: shadow-image full-image
@@ -168,7 +167,7 @@ Whenever you have multiple annotations, it can be helpful to generate a results
This is similar to creating a results table for detections, but requires the {menuselection}`Measure --> Show annotation measurements` command instead.
You can also access this command from the *Measurement table* icon in the toolbar {{ icon_table }}.
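Annotation measurements can be exported from a script in the same way as detections; a minimal sketch, with an example output path:

```groovy
// Save measurements for all annotations in the current image to a file
saveAnnotationMeasurements('/path/to/annotation_measurements.csv')
```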

:::{figure} images/ki67_detecting_results_annotations.jpg
:::{figure} images/ki67_detecting_results_annotations.png
:class: shadow-image full-image

Annotation results table
Binary file added docs/tutorials/images/class_list.png
Binary file removed docs/tutorials/images/density_map_stroma.jpg
Binary file modified docs/tutorials/images/ki67_auto_cells_detected.jpg
Binary file removed docs/tutorials/images/ki67_auto_final_local.jpg
Binary file modified docs/tutorials/images/ki67_auto_final_markup.jpg
Binary file modified docs/tutorials/images/ki67_auto_map_raw.jpg
Binary file modified docs/tutorials/images/ki67_auto_map_smoothed.jpg
Binary file modified docs/tutorials/images/ki67_auto_original.jpg
Binary file modified docs/tutorials/images/ki67_auto_parallel.jpg
Binary file modified docs/tutorials/images/ki67_auto_results_detections.jpg
Binary file modified docs/tutorials/images/ki67_auto_smooth_features.jpg
Binary file modified docs/tutorials/images/ki67_auto_training_first.jpg
Binary file modified docs/tutorials/images/ki67_auto_training_intensity.jpg
Binary file modified docs/tutorials/images/ki67_auto_training_ring.jpg
Binary file removed docs/tutorials/images/ki67_auto_training_tumor.jpg
Binary file modified docs/tutorials/images/ki67_auto_training_updated.jpg
Binary file modified docs/tutorials/images/ki67_detecting_annotation.jpg
Binary file modified docs/tutorials/images/ki67_detecting_final_markup.jpg
Binary file modified docs/tutorials/images/ki67_detecting_multiple_rois.jpg
Binary file modified docs/tutorials/images/ki67_detecting_positive_dialog.jpg
Binary file modified docs/tutorials/images/measurement_table.png
Binary file modified docs/tutorials/images/multiplex_all_classified.jpg
Binary file modified docs/tutorials/images/multiplex_cell_measurements.jpg
Binary file modified docs/tutorials/images/multiplex_cells.jpg
Binary file modified docs/tutorials/images/multiplex_centroids.jpg
Binary file modified docs/tutorials/images/multiplex_channels.jpg
Binary file modified docs/tutorials/images/multiplex_ck.jpg
Binary file modified docs/tutorials/images/multiplex_duplicating.jpg
Binary file modified docs/tutorials/images/multiplex_foxp3.jpg
Binary file modified docs/tutorials/images/multiplex_load.jpg
Binary file modified docs/tutorials/images/multiplex_load_sequentially.jpg
Binary file modified docs/tutorials/images/multiplex_populate_channels.jpg
Binary file modified docs/tutorials/images/multiplex_project.jpg
Binary file modified docs/tutorials/images/multiplex_single_ck.jpg
Binary file modified docs/tutorials/images/multiplex_single_pdl1.jpg
Binary file removed docs/tutorials/images/multiplex_train_dialog.jpg
Binary file added docs/tutorials/images/multiplex_train_dialog.png
56 changes: 38 additions & 18 deletions docs/tutorials/multiplex_analysis.md
@@ -49,9 +49,9 @@ The *Fluorescence* type here tells QuPath that 'high pixel values mean more of s
Choosing *Brightfield* conveys the opposite message, which would cause problems because cell detection would then switch to looking for dark nuclei on a light background.
:::
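The image type can also be set from a script (for example when importing many images into a project); a minimal sketch:

```groovy
// Tell QuPath this is a fluorescence image, i.e. high pixel values mean more signal
setImageType('FLUORESCENCE')
```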

:::{sidebar} Accurate cell detection
Good cell segmentation is really *essential* for accurate multiplexed analysis.
New and improved methods of segmenting cells in QuPath are being actively explored...
:::{sidebar} Precise cell detection
> **Reviewer comment (Member):** I don't see a reason to change this, I think 'Accurate' more accurately describes what is intended here - https://en.wikipedia.org/wiki/Accuracy_and_precision

Good cell segmentation is really *essential* for higher quality multiplexed analysis.
InstanSeg is a new cell detection method available in QuPath via an extension. More information can be found in the {doc}`InstanSeg <../deep/instanseg>` tutorial.
:::

### Set up the channel names
@@ -72,9 +72,9 @@ Adjusting the channel names in the Brightness & Contrast dialog

::::{tip}
Setting all the channel names individually can be very laborious.
Two tricks can help.
Three tricks can help.

1\. Outside QuPath (or in the *Script editor*) create a list of the channel names you want, with a separate line for each name.
1. Outside QuPath (or in the *Script editor*) create a list of the channel names you want, with a separate line for each name.
Copy this list to the clipboard, and then select the corresponding channels in the *Brightness/Contrast* dialog and press {kbd}`Ctrl + V` to paste them.

:::{figure} images/multiplex_channel_names.jpg
@@ -95,6 +95,16 @@ setChannelNames(
'CK'
)
```

1. Save your Brightness & Contrast settings
Brightness and contrast viewing settings can now be saved and re-loaded by entering a name into the {guilabel}`Settings` field and pressing {guilabel}`Save`. To reload, simply select the name from the drop-down list.

:::{figure} images/multiplex_brightness_profile.png
:class: shadow-image mini-image

Saved brightness and contrast settings
:::

::::
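For reference, a complete call for the scripting trick above might look like the following; the channel names are placeholders for whatever your own panel contains:

```groovy
// Set all channel names in one call - the order must match the channel order in the image
// (these names are examples only)
setChannelNames(
    'DAPI',
    'PDL1',
    'CD8',
    'FoxP3',
    'CK'
)
```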

:::{tip}
Expand All @@ -108,7 +118,7 @@ This allows you to reset all the image metadata to whatever was read originally
We now want to make the channel names available as *classifications*.

The classifications currently available are shown under the *Annotations* tab.
You can either right-click this list or select the {guilabel}`` button and choose {menuselection}`Populate from image channels` to quickly set these.
You can either right-click this list or select the {guilabel}`` button and choose {menuselection}`Populate from image channels` to quickly set these.

:::{figure} images/multiplex_populate_channels.jpg
:class: shadow-image full-image
@@ -142,7 +152,7 @@ Exploring the detection results using measurement maps

The next step involves finding a way to identify whether cells are positive or negative *for each marker independently* based upon the detections and measurements made during the previous step.

Since QuPath v0.2.0 there are two different ways to do this:
There are two different ways to do this:

1. Threshold a single measurement (e.g. mean nucleus intensity)
2. Train a machine learning classifier to decide based upon multiple measurements
@@ -152,8 +162,7 @@ You do not have to choose the same method for every marker, but can switch betwe

### Option #1. Simple thresholding

QuPath v0.2.0 introduced a new command, {menuselection}`Classify --> Object classification --> Create single measurement classifier`.
This gives us a quick way to classify based on the value of one measurement.
A quick way to classify based on one measurement is the command {menuselection}`Classify --> Object classification --> Create single measurement classifier`.

As usual, you can consider the options in the dialog box in order from top to bottom, and hover the cursor over each for a short description of what it means.

@@ -163,6 +172,10 @@ As usual, you can consider the options in the dialog box in order from top to bo
Creating a single measurement classifier for PDL1
:::

:::{note}
Since PDL1 is the red channel, which is also the default detection color, there isn't a quick visual indicator of which cells are PDL1 positive. To resolve this, change the default detection color to something else (e.g. blue) via {menuselection}`Preferences --> Objects --> Default object color`. This was done for the figure above, and the color was then returned to the default red.
:::

In this case, we can ignore the **Object filter** (all our detections are cells, so no need to distinguish between them).

The **Channel filter** will be helpful, because it will help us quickly set sensible defaults for the options below.
@@ -199,30 +212,26 @@ This process is a bit more involved, but the effort is often worth it.
It is very difficult and confusing to try to train multiple classifiers by annotating the same image.

The process is made easier by creating duplicate images within the project for each channel that needs a classifier.
To do this, choose {menuselection}`Classify --> Training Images --> Create duplicate channel training images`.
We recommend having cell detections as detailed above, and saving the image and its annotations, **before** creating training images via {menuselection}`Classify --> Training Images --> Create duplicate channel training images`.

:::{figure} images/multiplex_duplicating.jpg
:class: shadow-image full-image

Creating duplicate training images for each channel
:::

:::{Note}
It's useful to run cell detection **before** duplicating the images so the detections match!
:::

:::{tip}
It is a good idea to turn the **Initialize Points annotation** option *on*... it might help us later.
:::

#### Train & save classifiers

Now you should have multiple duplicate images in your project, with names derived from the original channel names.
Because you ran this after cell detection (right?!), these duplicate images will bring across all the original cells.
Because you ran this after cell detection and saving (right?!), these duplicate images will bring across all the original cells.

We can then proceed with {menuselection}`Classify --> Object classification --> Train object classifier`.

:::{figure} images/multiplex_train_dialog.jpg
:::{figure} images/multiplex_train_dialog.png
:class: shadow-image small-image

The dialog box for training an object classifier
@@ -266,7 +275,18 @@ We shouldn't use any other classes in the training annotations.
Training an object classifier for FoxP3 by selecting individual cells
:::

Once you are done with one marker, choose {menuselection}`Save & Apply` and enter a name to identify your classifier.
::::{tip}
To only see the cells that are currently classified, click on the eye next to 'None' in the {guilabel}`Class list` in the {guilabel}`Annotations` pane to hide the unclassified cells.

:::{figure} images/class_list.png
:class: shadow-image mini-image

The Class list showing the 'None' class hidden
:::

::::

Once you are done with one marker, enter a name to identify your classifier and then select {menuselection}`Save`.
Then save the image data and open the image associated with the next marker of interest, repeating the process as many times as necessary.
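Saved classifiers can also be applied from a script, which is useful when you return to the original (non-duplicated) image; a minimal sketch, assuming classifiers were saved with the names shown (replace them with your own):

```groovy
// Apply several saved object classifiers, one after the other, to the current image
// (the names are examples - use the names you entered when saving)
runObjectClassifier("PDL1", "FoxP3", "CK")
```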

:::{figure} images/multiplex_ck.jpg
@@ -324,7 +344,7 @@ A few things can help:

- The box in the bottom right corner of the viewer now shows not only the mouse location, but also the classification of the object under the cursor.
- {menuselection}`View --> Show channel viewer` makes it possible to see all channels side-by-side. Right-click on the channel viewer to customize its display.
- Right-clicking on the *Classifications* list under the *Annotations* tab, you can now use {menuselection}`Populate from existing objects --> All classes` to create a list of all classifications present within the image. The filter box below this list enables quickly finding classifications including specific text. You can then select these, and toggle their visibility by right-clicking or pressing the {kbd}`spacebar`.
- Right-clicking on the *Classifications* list under the *Annotations* tab, you can now use {menuselection}`Populate from existing objects --> All classes` to create a list of all classifications present within the image. The filter box below this list enables quickly finding classifications including specific text. You can then select these, and toggle their visibility by clicking on the eye or pressing the {kbd}`spacebar`.
- Right-click on the image and choose {menuselection}`Cells --> Centroids only` to have another view of the classified cells. Now, the shape drawn for each cell relates to the 'number of components' of its classification, while its color continues to depict the specific class. This makes similar-but-not-the-same classifications to be spotted more easily than using (often subtle) color differences alone.

:::{figure} images/multiplex_centroids.jpg