)
+* Color spaces (see [`skimage.color`](http://scikit-image.org/docs/dev/api/skimage.color.html) package)
+ - `rgb2hsv`
+ - `rgb2luv`
+ - `rgb2xyz`
+ - `rgb2lab`
+* [Histogram equalization](http://scikit-image.org/docs/dev/auto_examples/plot_equalize.html) and [local histogram equalization](http://scikit-image.org/docs/dev/auto_examples/plot_local_equalize.html)
diff --git a/modules/2_feature_detection.md b/modules/2_feature_detection.md
new file mode 100644
index 0000000..c0f0f48
--- /dev/null
+++ b/modules/2_feature_detection.md
@@ -0,0 +1,317 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.11.2
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell}
+---
+tags: [remove-input, remove-output]
+---
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# Feature detection
+
+Feature detection is often the end result of image processing. We'll detect some basic features (edges, points, and circles) in this section, but there is a wealth of other feature detectors available.
+
+## Edge detection
+
+Before we start, let's set the default colormap to grayscale and turn off pixel interpolation.
+
+```{code-cell}
+import matplotlib.pyplot as plt
+import numpy as np
+
+plt.rcParams['image.cmap'] = 'gray'
+plt.rcParams['image.interpolation'] = 'none'
+```
+
+We've already discussed edge filtering, using the Sobel filter, in the last section.
+
+```{code-cell}
+import skdemo
+from skimage import data
+from skimage import filters
+
+image = data.camera()
+pixelated = image[::10, ::10]
+gradient = filters.sobel(pixelated)
+skdemo.imshow_all(pixelated, gradient)
+```
+
+With the Sobel filter, however, we get back a grayscale image, which essentially tells us the likelihood that a pixel is on the edge of an object.
+
+We can apply a bit more logic to *detect* an edge; i.e. we can use that filtered image to make a *decision* whether or not a pixel is on an edge. The simplest way to do that is with thresholding:
+
+```{code-cell}
+skdemo.imshow_all(gradient, gradient > 0.4)
+```
+
+That approach doesn't do a great job. It's noisy and produces thick edges. Furthermore, it doesn't use our *knowledge* of how edges work: They should be thin and tend to be connected along the direction of the edge.
+
+### Canny edge detector
+
+The Canny edge detector combines the Sobel filter with a few other steps to give a binary edge image. The steps are as follows:
+* Gaussian filter
+* Sobel filter
+* Non-maximal suppression
+* Hysteresis thresholding
+
+Let's go through these steps one at a time.
+
+### Step 1: Gaussian filter
+
+As discussed earlier, gradients tend to enhance noise. To combat this effect, we first smooth the image using a Gaussian filter:
+
+```{code-cell}
+from skimage import img_as_float
+
+sigma = 1  # Standard deviation of the Gaussian; larger values smooth more.
+pixelated_float = img_as_float(pixelated)
+smooth = filters.gaussian(pixelated_float, sigma)
+skdemo.imshow_all(pixelated_float, smooth)
+```
+
+### Step 2: Sobel filter
+
+Next, we apply our edge filter:
+
+```{code-cell}
+gradient_magnitude = filters.sobel(smooth)
+skdemo.imshow_all(smooth, gradient_magnitude)
+```
+
+### Step 3: Non-maximal suppression
+
+Goal: Suppress gradients that aren't on an edge
+
+Ideally, an edge is thin: In some sense, an edge is infinitely thin, but images are discrete so we want edges to be a single pixel wide. To accomplish this, we thin the image using "non-maximal suppression". This takes the edge-filtered image and thins the response *across* the edge direction; i.e. in the direction of the maximum gradient:
+
+```{code-cell}
+zoomed_grad = gradient_magnitude[15:25, 5:15]
+maximal_mask = np.zeros_like(zoomed_grad)
+# This mask is made up for demo purposes
+maximal_mask[range(10), (7, 6, 5, 4, 3, 2, 2, 2, 3, 3)] = 1
+grad_along_edge = maximal_mask * zoomed_grad
+skdemo.imshow_all(zoomed_grad, grad_along_edge, limits='dtype')
+```
+
+Obviously, this is a faked version of non-maximal suppression: Pixels are *manually* masked here. The actual algorithm detects the direction of edges, and keeps a pixel only if it has a locally maximal gradient-magnitude in the direction *normal to the edge direction*. It doesn't mask pixels *along* the edge direction since an adjacent edge pixel will be of comparable magnitude.
+
+The result of the filter is that an edge is only possible if there are no better edges near it.
+
+### Step 4: Hysteresis thresholding
+
+Goal: Prefer pixels that are connected to edges
+
+The final step is the actual decision-making process. Here, we have two parameters: The low threshold and the high threshold. The high threshold sets the gradient value that you *know* is definitely an edge. The low threshold sets the gradient value that could be an edge, but only if it's connected to a pixel that we know is an edge.
+
+These two thresholds are displayed below:
+
+```{code-cell}
+from skimage import color
+
+low_threshold = 0.2
+high_threshold = 0.3
+label_image = np.zeros_like(pixelated)
+# This uses `gradient_magnitude` which has NOT gone through non-maximal-suppression.
+label_image[gradient_magnitude > low_threshold] = 1
+label_image[gradient_magnitude > high_threshold] = 2
+demo_image = color.label2rgb(label_image, gradient_magnitude,
+ bg_label=0, colors=('yellow', 'red'))
+plt.imshow(demo_image)
+```
+
+The **red points** here are above `high_threshold` and are seed points for edges. The **yellow points** are edges if connected (possibly by other yellow points) to seed points; i.e. isolated groups of yellow points will not be detected as edges.
+
+Note that the demo above is on the edge image *before* non-maximal suppression, but in reality, this would be done on the image *after* non-maximal suppression. There isn't currently an easy way to get at the intermediate result.
+
+### The Canny edge detector
+
+Now we're ready to look at the actual result:
+
+```{code-cell}
+import ipywidgets as widgets
+from skimage import data, feature
+
+image = data.coins()
+
+def canny_demo(**kwargs):
+ edges = feature.canny(image, **kwargs)
+ plt.imshow(edges)
+ plt.show()
+# As written, the following doesn't actually interact with the
+# `canny_demo` function. Figure out what you need to add.
+widgets.interact(canny_demo); # <-- add keyword arguments for `canny`
+```
+
+Play around with the demo above. Make sure to add any keyword arguments to `interact` that might be necessary. (Note that keyword arguments passed to `interact` are passed to `canny_demo` and forwarded to `feature.canny`. So you should be looking at the docstring for `feature.canny` or the discussion above to figure out what to add.)
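+
+In case you get stuck, here is one possible way to hook up the sliders. The exact ranges are only reasonable guesses for this 8-bit image, not the single right answer:
+
+```{code-cell}
+# Keyword arguments passed to `interact` become sliders whose values are
+# forwarded to `canny_demo`, and from there to `feature.canny`.
+widgets.interact(canny_demo,
+                 sigma=(0.5, 5.0, 0.1),
+                 low_threshold=(0, 100, 5),
+                 high_threshold=(0, 200, 5));
+```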
+
+Can you describe how the low threshold makes a decision about a potential edge, as compared to the high threshold?
+
+## Aside: Feature detection in research
+
+When taking image data for an experiment, the end goal is often to detect some sort of feature. Here are a few examples from some of my own research.
+
+### Feature detection: case study
+
+I got started with image processing when I needed to detect the position of a device I built to study swimming in viscous environments (low-Reynolds numbers):
+
+```{code-cell}
+from IPython.display import Image, YouTubeVideo
+
+YouTubeVideo('1Pjlj9Pymx8')
+```
+
+### Particle detection
+
+For my post-doc, I ended up investigating the collection of fog from the environment and built the apparatus displayed below:
+
+```{code-cell}
+from IPython.display import Image, YouTubeVideo
+
+Image(filename='images/fog_tunnel.png')
+```
+
+The resulting experiments looked something like this:
+
+```{code-cell}
+YouTubeVideo('14qlyhnyjT8')
+```
+
+The water droplets in the video can be detected using some of the features in `scikit-image`. In particular, `peak_local_max` from the `feature` package is useful here. (We'll play with this function in the Hough transform section.) There's a bit more work to get subpixel accuracy, but that function can get you most of the way there:
+
+```{code-cell}
+Image(filename='images/particle_detection.png')
+```
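+
+The droplet frames themselves aren't bundled with this tutorial, but to give a feel for how `peak_local_max` works, here is a minimal sketch on a built-in image that also has bright, blob-like features (the parameters are rough guesses):
+
+```{code-cell}
+from skimage.feature import peak_local_max
+
+# Smooth a built-in image and find its well-separated bright local maxima
+spots = color.rgb2gray(data.hubble_deep_field())
+spots_smooth = filters.gaussian(spots, sigma=2)
+coordinates = peak_local_max(spots_smooth, min_distance=10, threshold_rel=0.2)
+
+plt.imshow(spots_smooth)
+plt.plot(coordinates[:, 1], coordinates[:, 0], 'r.')
+plt.show()
+```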
+
+### Circle detection
+
+If we look at the previous video over a longer period of time (time-lapsed), then we can see droplets accumulating on the surface:
+
+```{code-cell}
+YouTubeVideo('_qeOggvx5Rc')
+```
+
+To allow easier measurement, the microscope objective was moved to a top-down view on the substrate in the video below:
+
+```{code-cell}
+YouTubeVideo('8utP9Ju_rdc')
+```
+
+At the time, I was interested in figuring out how droplet sizes evolved. To accomplish that, we could open up each frame in some image-analysis software and manually measure each drop size. That becomes pretty tedious, pretty quickly. Using feature-detection techniques, we can get a good result with significantly less effort.
+
+```{code-cell}
+Image(filename='images/circle_detection.png')
+```
+
+In case you're wondering about the differences between the flat (lower) and textured (upper) surfaces pictured above, the circle measurements were used to describe the difference in growth, which is summarized below:
+
+```{code-cell}
+Image(filename='images/power_law_growth_regimes.png')
+```
+
+## Hough transforms
+
+Hough transforms are a general class of operations that make up a step in feature detection. Just like we saw with edge detection, Hough transforms produce a result that we can use to detect a feature. (The distinction between the "filter" that we used for edge detection and the "transform" that we use here is a bit arbitrary.)
+
+### Circle detection
+
+To explore the Hough transform, let's take the *circular* Hough transform as our example. Let's grab an image with some circles:
+
+```{code-cell}
+image = data.coins()[0:95, 180:370]
+plt.imshow(image);
+```
+
+We can use the Canny edge filter to get a pretty good representation of these circles:
+
+```{code-cell}
+edges = feature.canny(image, sigma=3, low_threshold=10, high_threshold=60)
+plt.imshow(edges);
+```
+
+While it looks like we've extracted circles, this doesn't give us much if what we want to do is *measure* these circles. For example, what are the size and position of the circles in the above image? The edge image doesn't really tell us much about that.
+
+We'll use the Hough transform to extract circle positions and radii:
+
+```{code-cell}
+from skimage.transform import hough_circle
+
+hough_radii = np.arange(15, 30, 2)
+hough_response = hough_circle(edges, hough_radii)
+```
+
+Here, the circular Hough transform actually uses the edge image from before. We also have to define the radii that we want to search for in our image.
+
+So... what's the actual result that we get back?
+
+```{code-cell}
+print(edges.shape)
+print(hough_response.shape)
+```
+
+We can see that the last two dimensions of the response are exactly the same as the original image, so the response is image-like. Then what does the first dimension correspond to?
+
+As always, you can get a better feel for the result by plotting it:
+
+```{code-cell}
+# Use max value to intelligently rescale the data for plotting.
+h_max = hough_response.max()
+
+def hough_responses_demo(i):
+ # Use `plt.title` to add a meaningful title for each index.
+ plt.imshow(hough_response[i, :, :], vmax=h_max*0.5)
+ plt.show()
+widgets.interact(hough_responses_demo, i=(0, len(hough_response)-1));
+```
+
+Playing around with the slider should give you a pretty good feel for the result.
+
+This Hough transform simply counts the pixels in a thin (as opposed to filled) circular mask. Since the input is an edge image, the response is strongest when the center of the circular mask lies at the center of a circle with the same radius.
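+
+As a quick sanity check (and a hint for the exercise below), the single strongest value in the whole accumulator already locates one circle. This is only a sketch of the idea, not a full solution:
+
+```{code-cell}
+# The accumulator is indexed as (radius index, row, column)
+r_idx, row, col = np.unravel_index(hough_response.argmax(), hough_response.shape)
+print('Strongest circle: radius %d centered near (%d, %d)'
+      % (hough_radii[r_idx], row, col))
+```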
+
++++
+
+## Exercise:
+
+Use the response from the Hough transform to **detect the position and radius of each coin**.
+
+```{code-cell}
+from skimage.feature import peak_local_max
+from skimage.draw import circle_perimeter
+
+centers = []
+likelihood = []
+# Your code here
+```
+
+The same concept described in this section can be applied to line detection, ellipse detection, and various other features of interest.
+
+## Further reading
+
+### Interest point detection
+
+We've only skimmed the surface of what might be classified as "feature detection". One major area that we haven't covered is called [interest point detection](http://en.wikipedia.org/wiki/Interest_point_detection). Here, we don't even need to know what we're looking for; we just identify small patches (centered on a pixel) that are "distinct". (The definition of "distinct" varies by algorithm; e.g., the Harris corner detector defines interest points as corners.) These distinct points can then be used to, e.g., compare with distinct points in other images.
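+
+As a quick taste, here is a minimal sketch using the Harris corner detector on the coins crop from above (the parameters are just reasonable guesses):
+
+```{code-cell}
+from skimage.feature import corner_harris, corner_peaks
+
+# Compute the Harris response and keep well-separated local maxima
+corners = corner_peaks(corner_harris(image), min_distance=5)
+
+plt.imshow(image)
+plt.plot(corners[:, 1], corners[:, 0], 'r.')
+plt.show()
+```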
+
+One common use of interest point detection is for image registration, in which we align (i.e. "register") images based on interest points. Here's an example of the [CENSURE feature detector from the gallery](http://scikit-image.org/docs/dev/auto_examples/plot_censure.html):
+
+```{code-cell}
+Image(filename='images/censure_example.png')
+```
+
+* [Probabilistic Hough transform](http://scikit-image.org/docs/dev/auto_examples/plot_line_hough_transform.html)
+* [Circular and elliptical Hough transforms](http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html)
+* [Template matching](http://scikit-image.org/docs/dev/auto_examples/plot_template.html)
+* [Histogram of Oriented Gradients](http://scikit-image.org/docs/dev/auto_examples/plot_hog.html)
+* [BRIEF](http://scikit-image.org/docs/dev/auto_examples/plot_brief.html), [CENSURE](http://scikit-image.org/docs/dev/auto_examples/plot_censure.html), and [ORB](http://scikit-image.org/docs/dev/auto_examples/plot_orb.html) feature detectors/descriptors
+* [Robust matching using RANSAC](http://scikit-image.org/docs/dev/auto_examples/plot_matching.html)
diff --git a/modules/3_morphological_operations.md b/modules/3_morphological_operations.md
new file mode 100644
index 0000000..24f8b5c
--- /dev/null
+++ b/modules/3_morphological_operations.md
@@ -0,0 +1,114 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+%matplotlib inline
+```
+
+# Morphological operations
+
+Morphology is the study of shapes. In image processing, some simple operations can get you a long way. The first things to learn are *erosion* and *dilation*. In erosion, we look at a pixel’s local neighborhood and replace the value of that pixel with the minimum value of that neighborhood. In dilation, we instead choose the maximum.
+
+```{code-cell} python
+import numpy as np
+from matplotlib import pyplot as plt, cm
+import skdemo
+```
+
+```{code-cell} python
+image = np.array([[0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
+plt.imshow(image);
+```
+
+The documentation for scikit-image's morphology module is
+[here](http://scikit-image.org/docs/0.10.x/api/skimage.morphology.html).
+
+Importantly, we must use a *structuring element*, which defines the local
+neighborhood of each pixel. To get every neighbor (up, down, left, right, and
+diagonals), use `morphology.square`; to avoid diagonals, use
+`morphology.diamond`:
+
+```{code-cell} python
+from skimage import morphology
+sq = morphology.square(width=3)
+dia = morphology.diamond(radius=1)
+print(sq)
+print(dia)
+```
+
+The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors: a 1 value means that pixel counts as a neighbor, while a 0 value does not. So:
+
+```{code-cell} python
+skdemo.imshow_all(image, morphology.erosion(image, sq), shape=(1, 2))
+```
+
+and
+
+```{code-cell} python
+skdemo.imshow_all(image, morphology.dilation(image, sq))
+```
+
+and
+
+```{code-cell} python
+skdemo.imshow_all(image, morphology.dilation(image, dia))
+```
+
+Erosion and dilation can be combined into two slightly more sophisticated operations, *opening* and *closing*. Here's an example:
+
+```{code-cell} python
+image = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
+ [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], np.uint8)
+plt.imshow(image);
+```
+
+What happens when we run an erosion followed by a dilation of this image?
+
+What about the reverse?
+
+Try to imagine the operations in your head before trying them out below.
+
+```{code-cell} python
+skdemo.imshow_all(image, morphology.opening(image, sq)) # erosion -> dilation
+```
+
+```{code-cell} python
+skdemo.imshow_all(image, morphology.closing(image, sq)) # dilation -> erosion
+```
+
+**Exercise**: use morphological operations to remove noise from a binary image.
+
+```{code-cell} python
+from skimage import data, color
+hub = color.rgb2gray(data.hubble_deep_field()[350:450, 90:190])
+plt.imshow(hub);
+```
+
+Remove the smaller objects to retrieve the large galaxy.
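+
+If you get stuck, here is one possible sketch (not the only reasonable answer): a grayscale opening with a structuring element larger than the small bright specks removes them while roughly preserving the large galaxy. The disk size below is a guess you may want to tune.
+
+```{code-cell} python
+# Erosion followed by dilation (an opening) removes objects smaller
+# than the structuring element.
+large_only = morphology.opening(hub, morphology.disk(9))
+skdemo.imshow_all(hub, large_only)
+```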
diff --git a/modules/4_segmentation.md b/modules/4_segmentation.md
new file mode 100644
index 0000000..516a72e
--- /dev/null
+++ b/modules/4_segmentation.md
@@ -0,0 +1,494 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+:tags: [remove-input, remove-output]
+
+%matplotlib inline
+%config InlineBackend.figure_format = 'retina'
+```
+
+# Segmentation
+
+--------------
+
+## Separating an image into one or more regions of interest.
+
+Everyone has heard or seen Photoshop or a similar graphics editor take a person from one image and place them into another. The first step of doing this is *identifying where that person is in the source image*.
+
+In popular culture, the Terminator's vision segments humans:
+
+
+
+### Segmentation contains two major sub-fields
+
+* **Supervised** segmentation: Some prior knowledge, possibly from human input, is used to guide the algorithm. Supervised algorithms currently included in scikit-image include
+ * Thresholding algorithms which require user input (`skimage.filters.threshold_*`)
+ * `skimage.segmentation.random_walker`
+ * `skimage.segmentation.active_contour`
+ * `skimage.segmentation.watershed`
+* **Unsupervised** segmentation: No prior knowledge. These algorithms attempt to subdivide the image into meaningful regions automatically. The user may be able to tweak settings like the number of regions.
+ * Thresholding algorithms which require no user input.
+ * `skimage.segmentation.slic`
+ * `skimage.segmentation.chan_vese`
+ * `skimage.segmentation.felzenszwalb`
+ * `skimage.segmentation.quickshift`
+
++++
+
+First, some standard imports and a helper function to display our results
+
+```{code-cell} python
+import numpy as np
+import matplotlib.pyplot as plt
+
+import skimage.data as data
+import skimage.segmentation as seg
+from skimage import filters
+from skimage import draw
+from skimage import color
+from skimage import exposure
+
+
+def image_show(image, nrows=1, ncols=1, cmap='gray', **kwargs):
+    fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 16))
+    ax.imshow(image, cmap=cmap)
+    ax.axis('off')
+    return fig, ax
+```
+
+## Thresholding
+
+In some images, global or local contrast may be sufficient to separate regions of interest. Simply choosing all pixels above or below a certain *threshold* may be sufficient to segment such an image.
+
+Let's try this on an image of a textbook.
+
+```{code-cell} python
+text = data.page()
+
+image_show(text);
+```
+
+### Histograms
+
+A histogram plots how often each range of data values occurs in the image. It is a powerful tool for getting to know your data - or for deciding where you would like to threshold.
+
+```{code-cell} python
+fig, ax = plt.subplots(1, 1)
+ax.hist(text.ravel(), bins=256, range=[0, 255])
+ax.set_xlim(0, 256);
+```
+
+### Experimentation: supervised thresholding
+
+Try simple NumPy methods and a few different thresholds on this image. Because *we* are setting the threshold, this is *supervised* segmentation.
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+text_segmented = ... # Your code here
+
+image_show(text_segmented);
+```
+
+Not ideal results! The shadow on the left creates problems; no single global value really fits.
+
+What if we don't want to set the threshold every time? There are several published methods which look at the histogram and choose what should be an optimal threshold without user input. These are unsupervised.
+
++++
+
+### Experimentation: unsupervised thresholding
+
+Here we will experiment with a number of automatic thresholding methods available in scikit-image. Because these require no input beyond the image itself, this is *unsupervised* segmentation.
+
+These functions generally return the threshold value(s), rather than applying it to the image directly.
+
+Try `otsu` and `li`, then take a look at `local` or `sauvola`.
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+text_threshold = filters.threshold_ # Hit tab with the cursor after the underscore, try several methods
+
+image_show(text < text_threshold);
+```
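+
+For instance, Otsu's method is a common starting point (just one possible choice among the automatic methods):
+
+```{code-cell} python
+text_threshold = filters.threshold_otsu(text)
+
+image_show(text < text_threshold);
+```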
+
+```{code-cell} python
+
+```
+
+## Supervised segmentation
+
+Thresholding can be useful, but it is rather basic, and its usefulness is mostly limited to high-contrast images. For doing more fun things - like removing part of an image - we need more advanced tools.
+
+For this section, we will use the `astronaut` image and attempt to segment Eileen Collins' head using supervised segmentation.
+
+```{code-cell} python
+# Our source image
+astronaut = data.astronaut()
+image_show(astronaut);
+```
+
+The contrast is pretty good in this image for her head against the background, so we will simply convert to grayscale with `rgb2gray`.
+
+```{code-cell} python
+astronaut_gray = color.rgb2gray(astronaut)
+image_show(astronaut_gray);
+```
+
+We will use two methods, which segment using very different approaches:
+
+* **Active Contour**: Initializes using a user-defined contour or line, which then is attracted to edges and/or brightness. Can be tweaked for many situations, but mixed contrast may be problematic.
+* **Random walker**: Initialized using any labeled points, fills the image with the label that seems least distant from the origin point (on a path weighted by pixel differences). Tends to respect edges or step-offs, and is surprisingly robust to noise. Only one parameter to tweak.
+
++++
+
+### Active contour segmentation
+
+We must have a set of initial parameters to 'seed' our segmentation with. Let's draw a circle around the astronaut's head to initialize the snake.
+
+This could be done interactively, with a GUI, but for simplicity we will start at the point [100, 220] and use a radius of 100 pixels. Just a little trigonometry in this helper function...
+
+```{code-cell} python
+def circle_points(resolution, center, radius):
+ """Generate points defining a circle on an image."""
+ radians = np.linspace(0, 2*np.pi, resolution)
+
+ c = center[1] + radius*np.cos(radians)
+ r = center[0] + radius*np.sin(radians)
+
+ return np.array([c, r]).T
+
+# Exclude last point because a closed path should not have duplicate points
+points = circle_points(200, [100, 220], 100)[:-1]
+```
+
+```{code-cell} python
+snake = seg.active_contour(astronaut_gray, points)
+```
+
+```{code-cell} python
+fig, ax = image_show(astronaut)
+ax.plot(points[:, 0], points[:, 1], '--r', lw=3)
+ax.plot(snake[:, 0], snake[:, 1], '-b', lw=3);
+```
+
+```{code-cell} python
+
+```
+
+### Random walker
+
+One good analogy for random walker uses graph theory.
+
+* The distance from each pixel to its neighbors is weighted by how similar their values are; the more similar, the lower the cost is to step from one to another
+* The user provides some seed points
+* The algorithm finds the cheapest path from each pixel to each seed point.
+* Each pixel is labeled with the seed it can reach most cheaply.
+
+We will re-use the seed values from our previous example.
+
+```{code-cell} python
+astronaut_labels = np.zeros(astronaut_gray.shape, dtype=np.uint8)
+```
+
+The random walker algorithm expects a label image as input. Any label above zero is treated as a seed; all zero-valued locations are unlabeled and will be filled in with one of the seed labels.
+
+There is also a masking feature where anything labeled -1 will never be labeled or traversed, but we will not use it here.
+
+```{code-cell} python
+indices = draw.circle_perimeter(100, 220, 25)
+
+astronaut_labels[indices] = 1
+astronaut_labels[points[:, 1].astype(int), points[:, 0].astype(int)] = 2
+
+image_show(astronaut_labels);
+```
+
+```{code-cell} python
+astronaut_segmented = seg.random_walker(astronaut_gray, astronaut_labels)
+```
+
+```{code-cell} python
+# Check our results
+fig, ax = image_show(astronaut_gray)
+ax.imshow(astronaut_segmented == 1, alpha=0.3);
+```
+
+```{code-cell} python
+
+```
+
+## Flood fill
+
+A basic but effective segmentation technique was recently added to scikit-image: `segmentation.flood` and `segmentation.flood_fill`. These algorithms take a seed point and iteratively find and fill adjacent points which are equal to or within a tolerance of the initial point. `flood` returns the region; `flood_fill` returns a modified image with those points changed to a new value.
+
+This approach is most suited for areas which have a relatively uniform color or gray value, and/or high contrast relative to adjacent structures.
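+
+As a tiny illustration of `flood_fill` (a sketch on a toy array; `flood` works the same way but returns the boolean region instead of a modified image):
+
+```{code-cell} python
+toy = np.array([[0, 0, 1],
+                [0, 1, 1],
+                [1, 1, 1]])
+
+# Fill the connected region of zeros around the seed (0, 0) with the value 5
+seg.flood_fill(toy, (0, 0), 5)
+```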
+
+Can we accomplish the same task with flood fill?
+
+```{code-cell} python
+seed_point = (100, 220) # Experiment with the seed point
+flood_mask = seg.flood(astronaut_gray, seed_point, tolerance=0.3) # Experiment with tolerance
+```
+
+```{code-cell} python
+fig, ax = image_show(astronaut_gray)
+ax.imshow(flood_mask, alpha=0.3);
+```
+
+Not ideal! The flood runs away into the background through the right earlobe.
+
+Let's think outside the box.
+* What if instead of segmenting the head, we segmented the background around it and the collar?
+* Is there any way to increase the contrast between the background and skin?
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+seed_bkgnd = (100, 350) # Background
+seed_collar = (200, 220) # Collar
+
+better_contrast = # Your idea to improve contrast here
+tol_bkgnd = # Experiment with tolerance for background
+tol_collar = # Experiment with tolerance for the collar
+
+flood_background = seg.flood(better_contrast, seed_bkgnd, tolerance=tol_bkgnd)
+flood_collar = seg.flood(better_contrast, seed_collar, tolerance=tol_collar)
+```
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+fig, ax = image_show(better_contrast)
+
+# Combine the two floods with binary OR operator
+ax.imshow(flood_background | flood_collar, alpha=0.3);
+```
+
+```{code-cell} python
+flood_mask2 = seg.flood(astronaut[..., 2], (200, 220), tolerance=40)
+fig, ax = image_show(astronaut[..., 2])
+ax.imshow(flood_mask | flood_mask2, alpha=0.3);
+```
+
+## Unsupervised segmentation
+
+Sometimes, human input is not possible or practical - or your images may be so large that it is not feasible to consider all pixels simultaneously. Unsupervised segmentation can then break the image down into several sub-regions, so instead of millions of pixels you have tens to hundreds of regions.
+
+### SLIC
+
+There are many analogies to machine learning in unsupervised segmentation. Our first example directly uses a common machine learning algorithm under the hood - K-Means.
+
+```{code-cell} python
+# SLIC works in color, so we will use the original astronaut
+astronaut_slic = seg.slic(astronaut)
+```
+
+```{code-cell} python
+# label2rgb replaces each discrete label with the average interior color
+image_show(color.label2rgb(astronaut_slic, astronaut, kind='avg'));
+```
+
+We've reduced this image from 512 * 512 = 262,144 pixels down to roughly 100 regions!
+
+And most of these regions make some logical sense.
+
++++
+
+### Chan-Vese
+
+This algorithm iterates a level set, which allows it to capture complex and even disconnected features. However, its result is binary - there will only be one region - and it requires a grayscale image.
+
+This algorithm takes a few seconds to run.
+
+```{code-cell} python
+chan_vese = seg.chan_vese(astronaut_gray)
+```
+
+```{code-cell} python
+fig, ax = image_show(astronaut_gray)
+ax.imshow(chan_vese == 0, alpha=0.3);
+```
+
+Chan-Vese has a number of parameters, which you can try out! In the interest of time, we may move on.
+
+```{code-cell} python
+
+```
+
+### Felzenszwalb
+
+This method oversegments an RGB image (it requires color, unlike Chan-Vese) using another machine learning technique, minimum-spanning-tree clustering. The number of segments is not guaranteed and can only be controlled indirectly via the `scale` parameter.
+
+```{code-cell} python
+astronaut_felzenszwalb = seg.felzenszwalb(astronaut) # Color required
+```
+
+```{code-cell} python
+image_show(astronaut_felzenszwalb);
+```
+
+Whoa, lots of regions! How many is that?
+
+```{code-cell} python
+# Find the number of unique labels
+```
+
+Let's see if they make sense; label them with the region average (see above with SLIC)
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+astronaut_felzenszwalb_colored = # Your code here
+
+image_show(astronaut_felzenszwalb_colored);
+```
+
+These are actually reasonable small regions. If we wanted fewer regions, we could change the `scale` parameter (try it!) or start here and combine them.
+
+This approach is sometimes called **oversegmentation**.
+
+But when there are too many regions, they must be combined somehow.
+
+```{code-cell} python
+
+```
+
+## Combining regions with a Region Adjacency Graph (RAG)
+
+Remember how the concept behind random walker was functionally looking at the difference between each pixel and its neighbors, then figuring out which were most alike? A RAG is essentially the same, except between regions.
+
+We have RAGs now in scikit-image, but we have to import them *from the future*: the functionality is exposed in the `future.graph` submodule, meaning it is stable and robust enough to ship, but the API may still change.
+
+```{code-cell} python
+import skimage.future.graph as graph
+
+rag = graph.rag_mean_color(astronaut, astronaut_felzenszwalb + 1)
+```
+
+Now we show just one application of a very useful tool - `skimage.measure.regionprops` - to determine the centroid of each labeled region and pass that to the graph.
+
+`regionprops` has many, many other uses; see the API documentation for all of the features that can be quantified per-region!
+http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops
+
+```{code-cell} python
+import skimage.measure as measure
+
+# Regionprops ignores zero, but we want to include it, so add one
+regions = measure.regionprops(astronaut_felzenszwalb + 1)
+
+# Pass centroid data into the graph
+for region in regions:
+ rag.nodes[region['label']]['centroid'] = region['centroid']
+```
+
+`display_edges` is a helper function to assist in visualizing the graph.
+
+```{code-cell} python
+def display_edges(image, g, threshold):
+ """Draw edges of a RAG on its image
+
+    Returns a modified image with the edges drawn. Edges are drawn in green
+ and nodes are drawn in yellow.
+
+ Parameters
+ ----------
+ image : ndarray
+ The image to be drawn on.
+ g : RAG
+ The Region Adjacency Graph.
+ threshold : float
+ Only edges in `g` below `threshold` are drawn.
+
+    Returns
+    -------
+    out : ndarray
+        Image with the edges drawn.
+ """
+ image = image.copy()
+ for edge in g.edges():
+ n1, n2 = edge
+
+ r1, c1 = map(int, rag.nodes[n1]['centroid'])
+ r2, c2 = map(int, rag.nodes[n2]['centroid'])
+
+ line = draw.line(r1, c1, r2, c2)
+ circle = draw.circle(r1,c1,2)
+
+ if g[n1][n2]['weight'] < threshold :
+ image[line] = 0,255,0
+ image[circle] = 255,255,0
+
+ return image
+```
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+# All edges are drawn with threshold at infinity
+edges_drawn_all = display_edges(astronaut_felzenszwalb_colored, rag, np.inf)
+image_show(edges_drawn_all);
+```
+
+Try a range of thresholds out, see what happens.
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+threshold = ... # Experiment
+
+edges_drawn_few = display_edges(astronaut_felzenszwalb_colored, rag, threshold)
+image_show(edges_drawn_few);
+```
+
+#### Finally, cut the graph
+
+Once you are happy with the (dis)connected regions above, the graph can be cut to merge the regions which are still connected.
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+final_labels = graph.cut_threshold(astronaut_felzenszwalb + 1, rag, threshold)
+final_label_rgb = color.label2rgb(final_labels, astronaut, kind='avg')
+
+image_show(final_label_rgb);
+```
+
+How many regions exist now?
+
+```{code-cell} python
+:tags: [raises-exception, remove-output]
+
+np.unique(final_labels).size
+```
+
+## Exercise: Cat picture
+
+The data directory also has an excellent image of Stéfan's cat, Chelsea. With what you've learned, can you segment the cat's nose? How about the eyes? Why is the eye particularly challenging?
+
+Hint: the cat's nose is located close to [240, 270] and the right eye center is near [110, 172] in row, column notation.
+
+```{code-cell} python
+fig, ax = image_show(data.chelsea())
+
+ax.plot(270, 240, marker='o', markersize=15, color="g")
+ax.plot(172, 110, marker='o', markersize=15, color="r");
+```
+
+```{code-cell} python
+
+```
+
+```{code-cell} python
+
+```
diff --git a/modules/5_ransac.md b/modules/5_ransac.md
new file mode 100644
index 0000000..f7c1e6f
--- /dev/null
+++ b/modules/5_ransac.md
@@ -0,0 +1,237 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+%matplotlib inline
+import matplotlib
+matplotlib.rcParams['image.cmap'] = 'gray'
+```
+
+# RANSAC
+
+From Wikipedia:
+
+> Random sample consensus (RANSAC) is an iterative method to estimate
+> parameters of a mathematical model from a set of observed data which
+> contains outliers. Therefore, it also can be interpreted as an
+> outlier detection method.
+
+[Gallery example](http://scikit-image.org/docs/dev/auto_examples/plot_matching.html)
+
+```{code-cell} python
+import numpy as np
+from matplotlib import pyplot as plt
+
+from skimage.measure import ransac
+```
+
+Let's set up some random data points:
+
+```{code-cell} python
+np.random.seed(seed=1)
+
+# generate coordinates of line
+x = np.arange(-200, 200)
+y = 0.2 * x + 20
+data = np.column_stack([x, y])
+
+# add faulty data
+faulty = np.array(30 * [(180., -100)])
+faulty += 5 * np.random.normal(size=faulty.shape)
+data[:faulty.shape[0]] = faulty
+
+# add gaussian noise to coordinates
+noise = np.random.normal(size=data.shape)
+data += 0.5 * noise
+data[::2] += 5 * noise[::2]
+data[::4] += 20 * noise[::4]
+
+plt.plot(data[:, 0], data[:, 1], 'b.');
+```
+
+Now, fit a line to the data. We start with our model:
+
+$$\mathbf{y} = m \mathbf{x} + c$$
+
+Or, in matrix notation:
+
+$$\mathbf{y} = \left[ \begin{array}{c} \vdots \\ \mathbf{x} \\ \vdots \end{array}
+ \ \begin{array}{c} \vdots \\ \mathbf{1} \\ \vdots \end{array} \right]
+ \left[ \begin{array}{c} m \\ c \end{array} \right]
+ = X \mathbf{p}$$
+
+Since we have an over-determined system, we use least squares to solve:
+
+```{code-cell} python
+x = data[:, 0]
+y = data[:, 1]
+
+X = np.column_stack((x, np.ones_like(x)))
+
+p, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
+p
+```
+
+With those parameters in hand, let's plot the resulting line:
+
+```{code-cell} python
+m, c = p
+plt.plot(x, y, 'b.')
+
+xx = np.arange(-250, 250)
+plt.plot(xx, m * xx + c, 'r-');
+```
+
+Scikit-image provides an N-dimensional line model, `LineModelND`, that encapsulates the above:
+
+```{code-cell} python
+from skimage.measure import ransac, LineModelND
+
+model = LineModelND()
+model.estimate(data)
+model.params
+```
+
+Instead of ``m`` and ``c``, it parameterizes the line by ``origin``
+and ``direction`` --- much safer when dealing with vertical lines,
+e.g.!
+
+```{code-cell} python
+origin, direction = model.params
+plt.plot(x, y, 'b.')
+plt.plot(xx, model.predict_y(xx), 'r-');
+```
+
+Now, we robustly fit the line using inlier data selected with the RANSAC algorithm:
+
+```{code-cell} python
+model_robust, inliers = ransac(data, LineModelND, min_samples=2,
+ residual_threshold=10, max_trials=1000)
+outliers = ~inliers
+
+yy = model_robust.predict_y(xx)
+
+fig, ax = plt.subplots()
+
+ax.plot(data[inliers, 0], data[inliers, 1], '.b', alpha=0.6, label='Inlier data')
+ax.plot(data[outliers, 0], data[outliers, 1], '.r', alpha=0.6, label='Outlier data')
+ax.plot(xx, yy, '-b', label='Robust line model')
+
+plt.legend(loc='lower left')
+plt.show()
+```
+
+## Exercise: Going interplanetary
+
+The sun is one of the most spherical objects in our solar system.
+According to an [article in Scientific American](http://www.scientificamerican.com/gallery/well-rounded-sun-stays-nearly-spherical-even-when-it-freaks-out/):
+
+> Earth's closest star is one of the roundest objects humans have
+> measured. If you shrank the sun down to beach ball size, the
+> difference between its north-south and the east-west diameters would
+> be thinner than the width of a human hair, says Jeffery Kuhn, a
+> physicist and solar researcher at the University of Hawaii at
+> Manoa. "Not only is it very round, but it's too round," he adds. The
+> sun is more spherical and more invariable than theories predict.
+
+If the sun is spherical, we should be able to fit a circle to a 2D
+slice of it! Your task is to do just that, using RANSAC and scikit-image's CircleModel.
+
+Let's start by loading an example image:
+
+```{code-cell} python
+from skimage import io
+
+image = io.imread('../images/superprom_prev.jpg')
+
+f, ax = plt.subplots(figsize=(8, 8))
+ax.imshow(image);
+```
+
+In this specific image, we got a bit more than we bargained for in the
+form of magnificently large solar flares. Let's see if some *Canny
+edge detection* will help isolate the sun's boundaries.
+
+```python
+from skimage import feature, color
+
+# Step 1: convert the image from color to gray, using `color.rgb2gray`
+
+...
+
+# Step 2: do edge detection on the image, using `feature.canny`. Play around with the `sigma`
+# parameter until you get a reasonable set of edges.
+
+f, ax = plt.subplots(figsize=(10, 10))
+ax.imshow(my_result, cmap='gray')
+```
+
+The edges look good, but there's a lot going on inside the sun. We
+use RANSAC to fit a robust circle model.
+
+```python
+from skimage.measure import CircleModel
+
+points = ... # Let points be an array with coordinate positions of edge pixels found above, shape (N, 2)
+
+model_robust, inliers = ransac(...)
+```
+
+The parameters of the circle are center x, y and radius:
+
+```python
+model_robust.params
+```
+
+Let's visualize the results, drawing a circle on the sun, and also
+highlighting inlier vs outlier edge pixels:
+
+```python
+from skimage import draw
+
+cy, cx, r = model_robust.params
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 8))
+
+ax0.imshow(image)
+ax1.imshow(image)
+
+ax1.plot(points[inliers, 1], points[inliers, 0], 'b.', markersize=1)
+ax1.plot(points[~inliers, 1], points[~inliers, 0], 'g.', markersize=1)
+ax1.axis('image')
+
+circle = plt.Circle((cx, cy), radius=r, facecolor='none', linewidth=2)
+ax0.add_patch(circle);
+```
+
+## Exercise: CardShark
+
+Your small start-up, CardShark, that you run from your garage over nights and
+evenings, takes photos of credit cards and turns them into machine
+readable information.
+
+The first step is to identify where in a photo the credit card is
+located.
+
+1. Load the photo `../images/credit_card.jpg`
+2. Using RANSAC and LineModelND shown above, find the first most
+ prominent edge of the card
+3. Remove the edge points belonging to the most prominent edge, and
+ repeat the process to find the second, third, and fourth
+
+```{code-cell} python
+f, ax = plt.subplots()
+
+image = io.imread('../images/credit_card.jpg')
+ax.imshow(image);
+```
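+
+If you want a starting point for steps 2 and 3 above, here is one possible outline (a sketch only; the parameter values are rough guesses you will need to tune):
+
+```python
+from skimage import feature, color
+
+# Edge detection, then collect the edge-pixel coordinates as an (N, 2) array
+card_edges = feature.canny(color.rgb2gray(image), sigma=3)
+points = np.array(np.nonzero(card_edges)).T
+
+# Fit the most prominent line with RANSAC
+model, inliers = ransac(points, LineModelND, min_samples=2,
+                        residual_threshold=1, max_trials=1000)
+
+# Drop that line's inliers and repeat for the next most prominent edge
+points = points[~inliers]
+```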
diff --git a/modules/6_sand_grain_counting.md b/modules/6_sand_grain_counting.md
new file mode 100644
index 0000000..63b7227
--- /dev/null
+++ b/modules/6_sand_grain_counting.md
@@ -0,0 +1,139 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+## Counting grains and bubbles
+
+This scanning electron microscopy (SEM) image shows a glass sample
+(light gray matrix) with some bubbles (black) and unmolten
+sand grains (dark gray). We wish to determine the fraction
+of the sample covered by these three phases,
+and to estimate the number of sand grains and bubbles,
+their average sizes, etc.
+
++++
+
+### Loading the slide
+
+```{code-cell} python
+%matplotlib inline
+import numpy as np
+
+import matplotlib
+matplotlib.rcParams['image.interpolation'] = 'nearest'
+matplotlib.rcParams['image.cmap'] = 'viridis'
+matplotlib.rcParams['figure.figsize'] = (10, 7)
+```
+
+```{code-cell} python
+from skimage import io
+from matplotlib import pyplot as plt
+
+img = io.imread('../images/bubbles.jpg')
+
+plt.imshow(img, cmap=plt.cm.gray, interpolation='nearest');
+```
+
+### Remove banner
+
+```{code-cell} python
+img_clean = img[:880, :]
+plt.imshow(img_clean, cmap=plt.cm.gray, interpolation='nearest');
+```
+
+### Filter to get rid of speckles
+
+```{code-cell} python
+import scipy.ndimage as ndi
+img_med = ndi.median_filter(img_clean, size=5)
+plt.imshow(img_med, cmap=plt.cm.gray, interpolation='nearest');
+```
+
+### Find threshold values
+
+```{code-cell} python
+plt.hist(img_med.flatten(), bins=40, range=(0, 150));
+```
+
+### Separate layers by thresholding
+
+```{code-cell} python
+bubbles = (img_med <= 50)
+sand = (img_med > 50) & (img_med <= 120)
+glass = (img_med > 120)
+
+def plot_images(cmap=plt.cm.gray):
+ for n, (name, image) in \
+ enumerate([('Original', img_med),
+ ('Bubbles', bubbles),
+ ('Sand', sand),
+ ('Glass', glass)]):
+
+ plt.subplot(2, 2, n + 1)
+ plt.imshow(image, cmap=cmap)
+ plt.title(name)
+ plt.axis('off')
+
+plot_images();
+```
+
+### Visualise layers
+
+```{code-cell} python
+def plot_color_overlay():
+ all_layers = np.zeros((img_clean.shape[0],
+ img_clean.shape[1], 3)) # Color image
+
+    # You shouldn't run this if `bubbles` isn't a boolean mask --
+    # otherwise this would be fancy indexing instead of masking
+    assert bubbles.dtype == bool
+
+ all_layers[bubbles] = (1, 0, 0)
+ all_layers[sand] = (1, 1, 0)
+ all_layers[glass] = (0, 0, 1)
+
+ plt.imshow(all_layers)
+
+plot_color_overlay()
+```
+
+### Clean up shapes found
+
+```{code-cell} python
+for img in (sand, bubbles, glass):
+ img[:] = ndi.binary_opening(img, np.ones((5, 5)))
+ img[:] = ndi.binary_closing(img, np.ones((5, 5)))
+
+plot_images()
+```
+
+### Label connected components
+
+```{code-cell} python
+# Convert to int so we can store the labels
+bubbles = bubbles.astype(int)
+sand = sand.astype(int)
+glass = glass.astype(int)
+
+for name, img in [('Sand', sand),
+ ('Bubbles', bubbles),
+ ('Glass', glass)]:
+ labels, count = ndi.label(img)
+ print('%s regions found in %s' % (count, name))
+ img[:] = labels
+
+    obj_areas = [np.sum(labels == i)
+                 for i in range(1, labels.max() + 1)]
+ print("Mean obj area %d" % np.mean(obj_areas))
+
+plot_images(cmap=plt.cm.nipy_spectral)
+```
diff --git a/modules/adv0_chromosomes.md b/modules/adv0_chromosomes.md
new file mode 100644
index 0000000..30ee83e
--- /dev/null
+++ b/modules/adv0_chromosomes.md
@@ -0,0 +1,94 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# Measuring chromatin fluorescence
+
+Goal: we want to quantify the amount of a particular protein (red fluorescence) localized on the centromeres (green) versus the rest of the chromosome (blue).
+
+
+
+The main challenge here is the uneven illumination, which makes isolating the chromosomes a struggle.
+
+```{code-cell} python
+import numpy as np
+from matplotlib import cm, pyplot as plt
+import skdemo
+plt.rcParams['image.cmap'] = 'cubehelix'
+plt.rcParams['image.interpolation'] = 'none'
+```
+
+```{code-cell} python
+from skimage import io
+image = io.imread('../images/chromosomes.tif')
+skdemo.imshow_with_histogram(image);
+```
+
+Let's separate the channels so we can work on each individually.
+
+```{code-cell} python
+protein, centromeres, chromosomes = image.transpose((2, 0, 1))
+```
+
+Getting the centromeres is easy because the signal is so clean:
+
+```{code-cell} python
+from skimage.filters import threshold_otsu
+centromeres_binary = centromeres > threshold_otsu(centromeres)
+skdemo.imshow_all(centromeres, centromeres_binary)
+```
+
+But getting the chromosomes is not so easy:
+
+```{code-cell} python
+chromosomes_binary = chromosomes > threshold_otsu(chromosomes)
+skdemo.imshow_all(chromosomes, chromosomes_binary, cmap='gray')
+```
+
+Let's try using an adaptive threshold:
+
+```{code-cell} python
+from skimage.filters import threshold_local
+chromosomes_adapt = threshold_local(chromosomes, block_size=51)
+# Question: how did I choose this block size?
+skdemo.imshow_all(chromosomes, chromosomes_adapt)
+```
+
+Not only is the uneven illumination a problem, but there seem to be some artifacts due to the illumination pattern!
+
+**Exercise:** Can you think of a way to fix this?
+
+(Hint: in addition to everything you've learned so far, check out [`skimage.morphology.remove_small_objects`](http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.remove_small_objects))
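+
+One possible sketch (not the only answer): threshold against the local threshold surface, then drop the small artifact specks. The `min_size` below is just a guess to tune.
+
+```{code-cell} python
+from skimage import morphology
+
+chromosomes_binary_local = chromosomes > chromosomes_adapt
+chromosomes_clean = morphology.remove_small_objects(chromosomes_binary_local,
+                                                    min_size=256)
+skdemo.imshow_all(chromosomes, chromosomes_clean)
+```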
+
+Now that we have the centromeres and the chromosomes, it's time to do the science: get the distribution of intensities in the red channel using both centromere and chromosome locations.
+
+```python
+# Replace "None" below with the right expressions!
+centromere_intensities = None
+chromosome_intensities = None
+all_intensities = np.concatenate((centromere_intensities,
+ chromosome_intensities))
+minint = np.min(all_intensities)
+maxint = np.max(all_intensities)
+bins = np.linspace(minint, maxint, 100)
+plt.hist(centromere_intensities, bins=bins, color='blue',
+ alpha=0.5, label='centromeres')
+plt.hist(chromosome_intensities, bins=bins, color='orange',
+ alpha=0.5, label='chromosomes')
+plt.legend(loc='upper right')
+plt.show()
+```
diff --git a/modules/adv1_lesion_quantification.md b/modules/adv1_lesion_quantification.md
new file mode 100644
index 0000000..784dca0
--- /dev/null
+++ b/modules/adv1_lesion_quantification.md
@@ -0,0 +1,90 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} ipython3
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# Quantifying spinal cord regeneration in zebrafish
+
+We want to quantify the amount of fluorescent cells in a wounded zebrafish embryo spinal column:
+
+
+
+The key steps are:
+
+- estimating the position and width of the cord
+- estimating the average fluorescence along the length of the cord
+
+```{code-cell} ipython3
+from matplotlib import pyplot as plt, cm
+from skimage import io
+image = io.imread('../images/zebrafish-spinal-cord.png')
+```
+
+# SciPy to estimate coordinates
+
+First, we get just the top and bottom rows of pixels, and use a 1D Gaussian filter to smooth the signal.
+
+```{code-cell} ipython3
+from scipy import ndimage as nd
+top, bottom = image[[0, -1], :]
+
+fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(8, 3))
+
+top_smooth = nd.gaussian_filter1d(top, sigma=20)
+ax0.plot(top, color='blue', lw=2)
+ax0.plot(top_smooth, color='orange', lw=2)
+ax0.set_title('top')
+
+bottom_smooth = nd.gaussian_filter1d(bottom, sigma=20)
+ax1.plot(bottom, color='blue', lw=2)
+ax1.plot(bottom_smooth, color='orange', lw=2)
+ax1.set_title('bottom')
+```
+
+With smooth curves, we can get the mode (the position of the center) and width of the signal.
+
+```{code-cell} ipython3
+top_mode = top_smooth.argmax()
+top_max = top_smooth[top_mode]
+top_width = (top_smooth > float(top_max) / 2).sum()
+
+bottom_mode = bottom_smooth.argmax()
+bottom_max = bottom_smooth[bottom_mode]
+bottom_width = (bottom_smooth > float(bottom_max) / 2).sum()
+
+width = max(bottom_width, top_width)
+
+print(top_mode, top_width, bottom_mode, bottom_width)
+```
+
+# scikit-image to trace the profile
+
+Now, use `measure.profile_line` to trace from (0, `top_mode`) to (-1, `bottom_mode`).
+
+```python
+from skimage import measure
+trace = measure.profile_line(None) # Replace `None` with correct args
+```
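+
+One possible call looks like this (a sketch: `src` and `dst` are (row, column) pairs, and `linewidth` averages across the cord width estimated above):
+
+```python
+trace = measure.profile_line(image,
+                             src=(0, top_mode),
+                             dst=(image.shape[0] - 1, bottom_mode),
+                             linewidth=width)
+```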
+
+Finally, plot the trace.
+
+```python
+plt.plot(trace, color='black', lw=2)
+plt.xlabel('position along embryo')
+plt.ylabel('mean fluorescence intensity')
+```
+
+From this trace, we can compute various summary statistics (e.g. min/max, gap width, slope, etc), and plot these over time as the wound recovers.
diff --git a/modules/adv2_microarray.md b/modules/adv2_microarray.md
new file mode 100644
index 0000000..c2066c6
--- /dev/null
+++ b/modules/adv2_microarray.md
@@ -0,0 +1,203 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.11.2
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} ipython3
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# DNA microarray processing
+
+### Data in this example
+
+*Yeast microarrays for genome wide parallel genetic and gene
+expression analysis*
+
+
+
+Two-color fluorescent scan of a yeast microarray containing 2,479 elements
+(ORFs). The center-to-center distance between elements is 345 μm. A probe
+mixture consisting of cDNA from yeast extract/peptone (YEP) galactose (green
+pseudocolor) and YEP glucose (red pseudocolor) grown yeast cultures was
+hybridized to the array. Intensity per element corresponds to ORF expression,
+and pseudocolor per element corresponds to relative ORF expression between the
+two cultures.
+
+by Deval A. Lashkari, http://www.pnas.org/content/94/24/13057/F1.expansion
+
+
+
+Learn more about microarrays:
+
+- [Tutorial on how to analyze microarray data](http://www.hhmi.org/biointeractive/how-analyze-dna-microarray-data)
+- [Complementary DNA](http://en.wikipedia.org/wiki/Complementary_DNA)
+
+More example data:
+
+- [MicroArray Genome Imaging & Clustering Tool](http://www.bio.davidson.edu/projects/MAGIC/MAGIC.html) by Laurie Heyer & team, Davidson College
+
+```{code-cell} ipython3
+import matplotlib.pyplot as plt
+
+import numpy as np
+
+from skimage import io, img_as_float
+```
+
+```{code-cell} ipython3
+microarray = io.imread('../images/microarray.jpg')
+
+# Scale between zero and one
+microarray = img_as_float(microarray)
+
+plt.figure(figsize=(10, 5))
+plt.imshow(microarray[:500, :1000], cmap='gray', interpolation='nearest');
+```
+
+```{code-cell} ipython3
+from skimage import color
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 10))
+
+red = microarray[..., 0]
+green = microarray[..., 1]
+
+red_rgb = np.zeros_like(microarray)
+red_rgb[..., 0] = red
+
+green_rgb = np.zeros_like(microarray)
+green_rgb[..., 1] = green
+
+ax0.imshow(green_rgb, interpolation='nearest')
+ax1.imshow(red_rgb, interpolation='nearest')
+plt.suptitle('\n\nPseudocolor plots of red and green channels', fontsize=16);
+```
+
+```{code-cell} ipython3
+from skimage import filters
+
+mask = (green > 0.1)
+plt.imshow(mask[:1000, :1000], cmap='gray');
+```
+
+```{code-cell} ipython3
+z = red.copy()
+z /= green
+z[~mask] = 0
+
+print(z.min(), z.max())
+
+plt.imshow(z[:500, :500], cmap=plt.cm.gray, vmin=0, vmax=2);
+```
+
+### Locating the grid
+
+```{code-cell} ipython3
+both = (green + red)
+
+plt.imshow(both, cmap='gray');
+```
+
+```{code-cell} ipython3
+from skimage import feature
+
+sum_down_columns = both.sum(axis=0)
+sum_across_rows = both.sum(axis=1)
+
+dips_columns = feature.peak_local_max(sum_down_columns.max() - sum_down_columns, min_distance=5)
+dips_columns = np.sort(dips_columns.ravel())
+
+M = len(dips_columns)
+column_distance = np.mean(np.diff(dips_columns))
+
+dips_rows = feature.peak_local_max(sum_across_rows.max() - sum_across_rows, min_distance=5)
+dips_rows = np.sort(dips_rows.ravel())
+
+N = len(dips_rows)
+row_distance = np.mean(np.diff(dips_rows))
+
+print('Columns are a mean distance of %.2f apart' % column_distance)
+print('Rows are a mean distance of %.2f apart' % row_distance)
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 5))
+
+ax0.plot(sum_down_columns)
+ax0.scatter(dips_columns, sum_down_columns[dips_columns])
+ax0.set_xlim(0, 200)
+ax0.set_title('Column gaps')
+
+ax1.plot(sum_across_rows)
+ax1.scatter(dips_rows, sum_across_rows[dips_rows])
+ax1.set_xlim(0, 200)
+ax1.set_title('Row gaps');
+```
+
+```{code-cell} ipython3
+P, Q = 500, 500
+
+plt.figure(figsize=(15, 10))
+plt.imshow(microarray[:P, :Q])
+
+for i in dips_rows[dips_rows < P]:
+ plt.plot([0, Q], [i, i], 'm')
+
+for j in dips_columns[dips_columns < Q]:
+ plt.plot([j, j], [0, P], 'm')
+
+plt.axis('image');
+```
+
+```{code-cell} ipython3
+out = np.zeros(microarray.shape[:2])
+M, N = len(dips_rows), len(dips_columns)
+
+for i in range(M - 1):
+ for j in range(N - 1):
+ row0, row1 = dips_rows[i], dips_rows[i + 1]
+ col0, col1 = dips_columns[j], dips_columns[j + 1]
+
+ r = microarray[row0:row1, col0:col1, 0]
+ g = microarray[row0:row1, col0:col1, 1]
+
+ ratio = r / g
+ mask = ~np.isinf(ratio)
+
+ mean_ratio = np.mean(ratio[mask])
+ if np.isnan(mean_ratio):
+ mean_ratio = 0
+
+ out[row0:row1, col0:col1] = mean_ratio
+```
+
+```{code-cell} ipython3
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 10))
+
+ax0.imshow(microarray)
+ax0.grid(color='magenta', linewidth=1)
+
+ax1.imshow(out, cmap='gray', interpolation='nearest', vmin=0, vmax=3);
+ax1.grid(color='magenta', linewidth=1)
+```
+
+### Transform the intensity to spot outliers
+
+```{code-cell} ipython3
+from skimage import exposure
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 10))
+
+ax0.imshow(microarray)
+ax0.grid(color='magenta', linewidth=1)
+
+ax1.imshow(exposure.adjust_log(out, gain=0.4), cmap='gray', interpolation='nearest', vmin=0, vmax=3);
+ax1.grid(color='magenta', linewidth=1)
+```
diff --git a/modules/adv3_panorama_stitching.md b/modules/adv3_panorama_stitching.md
new file mode 100644
index 0000000..6d8ba63
--- /dev/null
+++ b/modules/adv3_panorama_stitching.md
@@ -0,0 +1,862 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# scikit-image advanced panorama tutorial
+
+Enhanced from the original demo as featured in [the scikit-image paper](https://peerj.com/articles/453/).
+
+Multiple overlapping images of the same scene, combined into a single image, can yield amazing results. This tutorial will illustrate how to accomplish panorama stitching using scikit-image, from loading the images to cleverly stitching them together.
+
++++
+
+### First things first
+
+Import NumPy and matplotlib, then define a utility function to compare multiple images
+
+```{code-cell} python
+import numpy as np
+import matplotlib.pyplot as plt
+
+def compare(*images, **kwargs):
+ """
+ Utility function to display images side by side.
+
+ Parameters
+ ----------
+    image0, image1, image2, ... : ndarray
+ Images to display.
+ labels : list
+ Labels for the different images.
+ """
+    # Pop `labels` before forwarding the remaining kwargs to plt.subplots
+    labels = kwargs.pop('labels', None)
+    if labels is None:
+        labels = [''] * len(images)
+
+    f, axes = plt.subplots(1, len(images), **kwargs)
+    axes = np.array(axes, ndmin=1)
+
+ for n, (image, label) in enumerate(zip(images, labels)):
+ axes[n].imshow(image, interpolation='nearest', cmap='gray')
+ axes[n].set_title(label)
+ axes[n].axis('off')
+
+ f.tight_layout()
+```
+
+### Load data
+
+The ``ImageCollection`` class provides an easy and efficient way to load and represent multiple images. Images in the ``ImageCollection`` are not read from disk until accessed.
+
+Load a series of images into an ``ImageCollection`` with a wildcard, as they share similar names.
+
+```{code-cell} python
+from skimage import io
+
+pano_imgs = io.ImageCollection('../images/pano/JDW_03*')
+```
+
+Inspect these images using the convenience function `compare()` defined earlier
+
+```{code-cell} python
+# compare(...)
+```
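+
+If you get stuck, a minimal call might look like the sketch below (keyword arguments such as `figsize` are passed straight through to `plt.subplots`):
+
+```{code-cell} python
+# One possible way to inspect the collection (a sketch, not the only answer)
+compare(*pano_imgs, figsize=(12, 6))
+```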
+
+Credit: Images of Private Arch and the trail to Delicate Arch in Arches National Park, USA, taken by Joshua D. Warner.
+License: CC-BY 4.0
+
+---
+
++++
+
+# 0. Pre-processing
+
+This stage usually involves one or more of the following:
+* Resizing, often downscaling with fixed aspect ratio
+* Conversion to grayscale, as some feature descriptors are not defined for color images
+* Cropping to region(s) of interest
+
+For convenience, our example data has already been resized to a smaller size, and we won't bother cropping. However, the images are still in color, so conversion to grayscale with `skimage.color.rgb2gray` is appropriate.
+
+```{code-cell} python
+from skimage.color import rgb2gray
+
+# Make grayscale versions of the three color images in pano_imgs
+# named pano0, pano1, and pano2
+```
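+
+One possible solution sketch is shown below; the names `pano0`, `pano1`, and `pano2` are the ones the rest of the notebook expects:
+
+```{code-cell} python
+# Convert each color image in the collection to grayscale (sketch)
+pano0, pano1, pano2 = [rgb2gray(im) for im in pano_imgs]
+```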
+
+```{code-cell} python
+# View the results using compare()
+```
+
+---
+
++++
+
+# 1. Feature detection and matching
+
+We need to estimate a projective transformation that relates these images together. The steps will be
+
+1. Define one image as a _target_ or _destination_ image, which will remain anchored while the others are warped
+2. Detect features in all three images
+3. Match features from left and right images against the features in the center, anchored image.
+
+In this three-shot series, the middle image `pano1` is the logical anchor point.
+
+We detect "Oriented FAST and rotated BRIEF" (ORB) features in all three images.
+
+**Note:** For efficiency, in this tutorial we're finding 800 keypoints. The results are good but small variations are expected. If you need a more robust estimate in practice, run multiple times and pick the best result _or_ generate additional keypoints.
+
+```{code-cell} python
+from skimage.feature import ORB
+
+# Initialize ORB
+# This number of keypoints is large enough for robust results,
+# but low enough to run within a few seconds.
+orb = ORB(n_keypoints=800, fast_threshold=0.05)
+
+# Detect keypoints in pano0
+orb.detect_and_extract(pano0)
+keypoints0 = orb.keypoints
+descriptors0 = orb.descriptors
+
+# Detect keypoints in pano1 and pano2
+```
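+
+A sketch of the remaining detections is below (it assumes `pano1` and `pano2` were created in the pre-processing step); the names `keypoints1`, `descriptors1`, `keypoints2`, and `descriptors2` are used by the matching step that follows:
+
+```{code-cell} python
+# Detect keypoints in pano1 (sketch)
+orb.detect_and_extract(pano1)
+keypoints1 = orb.keypoints
+descriptors1 = orb.descriptors
+
+# Detect keypoints in pano2 (sketch)
+orb.detect_and_extract(pano2)
+keypoints2 = orb.keypoints
+descriptors2 = orb.descriptors
+```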
+
+Match features from images 0 <-> 1 and 1 <-> 2.
+
+```{code-cell} python
+from skimage.feature import match_descriptors
+
+# Match descriptors between left/right images and the center
+matches01 = match_descriptors(descriptors0, descriptors1, cross_check=True)
+matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
+```
+
+Inspect these matched features side-by-side using the convenience function ``skimage.feature.plot_matches``.
+
+```{code-cell} python
+from skimage.feature import plot_matches
+fig, ax = plt.subplots(1, 1, figsize=(12, 12))
+
+# Best match subset for pano0 -> pano1
+plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01)
+
+ax.axis('off');
+```
+
+Most of these line up similarly, but it isn't perfect. There are a number of obvious outliers or false matches.
+
+```{code-cell} python
+fig, ax = plt.subplots(1, 1, figsize=(12, 12))
+
+# Best match subset for pano2 -> pano1
+plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12)
+
+ax.axis('off');
+```
+
+Similar to above, decent signal but numerous false matches.
+
+---
+
++++
+
+# 2. Transform estimation
+
+To filter out the false matches, we apply RANdom SAmple Consensus (RANSAC), a powerful method of rejecting outliers available in ``skimage.transform.ransac``. The transformation is estimated using an iterative process based on randomly chosen subsets, finally selecting the model which corresponds best with the majority of matches.
+
+We need to do this twice, once each for the transforms left -> center and right -> center.
+
+```{code-cell} python
+from skimage.transform import ProjectiveTransform
+from skimage.measure import ransac
+
+# Select keypoints from
+# * source (image to be registered): pano0
+# * target (reference image): pano1, our middle frame registration target
+src = keypoints0[matches01[:, 0]][:, ::-1]
+dst = keypoints1[matches01[:, 1]][:, ::-1]
+
+model_robust01, inliers01 = ransac((src, dst), ProjectiveTransform,
+ min_samples=4, residual_threshold=1, max_trials=300)
+
+# Select keypoints from
+# * source (image to be registered): pano2
+# * target (reference image): pano1, our middle frame registration target
+src = keypoints2[matches12[:, 1]][:, ::-1]
+dst = keypoints1[matches12[:, 0]][:, ::-1]
+
+model_robust12, inliers12 = ransac((src, dst), ProjectiveTransform,
+ min_samples=4, residual_threshold=1, max_trials=300)
+```
+
+The `inliers` returned from RANSAC select the best subset of matches. How do they look?
+
+```{code-cell} python
+# Use plot_matches as before, but select only good matches with fancy indexing
+# e.g., matches01[inliers01]
+```
+
+```{code-cell} python
+# Use plot_matches as before, but select only good matches with fancy indexing
+# e.g., matches12[inliers12]
+```
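+
+For reference, one way to fill in the two cells above (a sketch using the same `plot_matches` call as before, restricted to the RANSAC inliers):
+
+```{code-cell} python
+fig, ax = plt.subplots(1, 1, figsize=(12, 12))
+plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01[inliers01])
+ax.axis('off');
+
+fig, ax = plt.subplots(1, 1, figsize=(12, 12))
+plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12[inliers12])
+ax.axis('off');
+```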
+
+Most of the false matches are rejected!
+
+---
+
++++
+
+# 3. Warping
+
+Next, we produce the panorama itself. We must _warp_, or transform, two of the three images so they will properly align with the stationary image.
+
+### Extent of output image
+The first step is to find the shape of the output image to contain all three transformed images. To do this we consider the extents of all warped images.
+
+```{code-cell} python
+from skimage.transform import SimilarityTransform
+
+# Shape of middle image, our registration target
+r, c = pano1.shape[:2]
+
+# Note that transformations take coordinates in (x, y) format,
+# not (row, column), in order to be consistent with most literature
+corners = np.array([[0, 0],
+ [0, r],
+ [c, 0],
+ [c, r]])
+
+# Warp the image corners to their new positions
+warped_corners01 = model_robust01(corners)
+warped_corners12 = model_robust12(corners)
+
+# Find the extents of both the reference image and the warped
+# target image
+all_corners = np.vstack((warped_corners01, warped_corners12, corners))
+
+# The overall output shape will be max - min
+corner_min = np.min(all_corners, axis=0)
+corner_max = np.max(all_corners, axis=0)
+output_shape = (corner_max - corner_min)
+
+# Ensure integer shape with np.ceil and dtype conversion
+output_shape = np.ceil(output_shape[::-1]).astype(int)
+```
+
+### Apply estimated transforms
+
+Warp the images with `skimage.transform.warp` according to the estimated models. A shift, or _translation_, is needed to place our middle image in the middle of the output - it isn't truly stationary.
+
+Values outside the input images are initially set to -1 to distinguish the "background", which is identified for later use.
+
+**Note:** ``warp`` takes the _inverse_ mapping as an input.
+
+```{code-cell} python
+from skimage.transform import warp
+
+# This in-plane offset is the only necessary transformation for the middle image
+offset1 = SimilarityTransform(translation= -corner_min)
+
+# Translate pano1 into place
+pano1_warped = warp(pano1, offset1.inverse, order=3,
+ output_shape=output_shape, cval=-1)
+
+# Acquire the image mask for later use
+pano1_mask = (pano1_warped != -1) # Mask == 1 inside image
+pano1_warped[~pano1_mask] = 0 # Return background values to 0
+```
+
+Warp left panel into place
+
+```{code-cell} python
+# Warp pano0 to pano1
+transform01 = (model_robust01 + offset1).inverse
+pano0_warped = warp(pano0, transform01, order=3,
+ output_shape=output_shape, cval=-1)
+
+pano0_mask = (pano0_warped != -1) # Mask == 1 inside image
+pano0_warped[~pano0_mask] = 0 # Return background values to 0
+```
+
+Warp right panel into place
+
+```{code-cell} python
+# Warp pano2 to pano1
+transform12 = (model_robust12 + offset1).inverse
+pano2_warped = warp(pano2, transform12, order=3,
+ output_shape=output_shape, cval=-1)
+
+pano2_mask = (pano2_warped != -1) # Mask == 1 inside image
+pano2_warped[~pano2_mask] = 0 # Return background values to 0
+```
+
+Inspect the warped images:
+
+```{code-cell} python
+compare(pano0_warped, pano1_warped, pano2_warped, figsize=(12, 10));
+```
+
+---
+
++++
+
+# 4. Combining images the easy (and bad) way
+
+This method simply
+
+1. sums the warped images
+2. tracks how many images overlapped to create each point
+3. normalizes the result.
+
+```{code-cell} python
+# Add the three warped images together. This could create dtype overflows!
+# We know they are floating point images after warping, so it's OK.
+merged = ## Sum warped images
+```
+
+```{code-cell} python
+# Track the overlap by adding the masks together
+overlap = ## Sum masks
+```
+
+```{code-cell} python
+# Normalize through division by `overlap` - but ensure the minimum is 1
+normalized = merged / ## Divisor here
+```
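+
+If you get stuck, here is one possible way to fill in the three cells above (a sketch; the masks are boolean, so summing them counts how many images cover each pixel):
+
+```{code-cell} python
+# Sum the three warped (floating point) images
+merged = pano0_warped + pano1_warped + pano2_warped
+
+# Count how many images contributed to each output pixel
+overlap = (pano0_mask * 1.0 + pano1_mask + pano2_mask)
+
+# Average, avoiding division by zero where no image contributed
+normalized = merged / np.maximum(overlap, 1)
+```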
+
+Finally, view the results!
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+ax.imshow(normalized, cmap='gray')
+
+fig.tight_layout()
+ax.axis('off');
+```
+
+
+---
+
+
+
++++
+
+**What happened?!** Why are there nasty dark lines at boundaries, and why does the middle look so blurry?
+
+
+The **lines are artifacts (boundary effect) from the warping method**. When the image is warped with interpolation, edge pixels containing part image and part background combine these values. We would have bright lines if we'd chosen `cval=2` in the `warp` calls (try it!), but regardless of choice there will always be discontinuities.
+
+...Unless you use `order=0` in `warp`, which is nearest neighbor. Then edges are perfect (try it!). But who wants to be limited to an inferior interpolation method?
+
+Even then, it's blurry! Is there a better way?
+
+---
+
++++
+
+# 5. Stitching images along a minimum-cost path
+
+Let's step back a moment and consider: Is it even reasonable to blend pixels?
+
+Take a look at a _difference image_, which is just one image subtracted from the other.
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# Generate difference image and inspect it
+difference_image = pano0_warped - pano1_warped
+ax.imshow(difference_image, cmap='gray')
+
+ax.axis('off');
+```
+
+The surrounding flat gray is zero. _A perfect overlap would show no structure!_
+
+Instead, the overlap region matches fairly well in the middle... but off to the sides, where things start to look a little embossed, a simple average blurs the result. This caused the blurring in the previous method (look again). _Unfortunately, this is almost always the case for panoramas!_
+
+How can we fix this?
+
+Let's attempt to find a vertical path through this difference image which stays as close to zero as possible. If we use that to build a mask, defining a transition between images, the result should appear _seamless_.
+
+---
+
++++
+
+# Seamless image stitching with Minimum-Cost Paths and `skimage.graph`
+
+Among other things, `skimage.graph` allows you to
+* start at any point on an array
+* find the path to any other point in the array
+* such that the path found _minimizes_ the sum of the values it crosses.
+
+
+The array is called a _cost array_, while the path found is a _minimum-cost path_ or **MCP**.
+
+To accomplish this we need
+
+* Starting and ending points for the path
+* A cost array (a modified difference image)
+
+This method is so powerful that, with a carefully constructed cost array, the seed points are essentially irrelevant. It just works!
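+
+As a tiny, stand-alone illustration (not part of the stitching pipeline itself), here is `route_through_array` on a toy cost array whose middle column is cheap; the returned path runs down that column:
+
+```{code-cell} python
+from skimage.graph import route_through_array
+
+toy_costs = np.array([[9., 1., 9.],
+                      [9., 1., 9.],
+                      [9., 1., 9.]])
+
+# Cheapest route from the top-left to the bottom-right corner
+path, cost = route_through_array(toy_costs, (0, 0), (2, 2), fully_connected=True)
+print(path)
+print(cost)
+```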
+
++++
+
+### Define seed points
+
+```{code-cell} python
+rmax = output_shape[0] - 1
+cmax = output_shape[1] - 1
+
+# Start anywhere along the top and bottom, left of center.
+mask_pts01 = [[0, cmax // 3],
+ [rmax, cmax // 3]]
+
+# Start anywhere along the top and bottom, right of center.
+mask_pts12 = [[0, 2*cmax // 3],
+ [rmax, 2*cmax // 3]]
+```
+
+### Construct cost array
+
+This utility function exists to give a "cost break" for paths from the edge to the overlap region.
+
+We will visually explore the results shortly. Examine the code later - for now, just use it.
+
+```{code-cell} python
+from skimage.morphology import flood_fill
+
+def generate_costs(diff_image, mask, vertical=True, gradient_cutoff=2.):
+ """
+ Ensures equal-cost paths from edges to region of interest.
+
+ Parameters
+ ----------
+ diff_image : ndarray of floats
+ Difference of two overlapping images.
+ mask : ndarray of bools
+ Mask representing the region of interest in ``diff_image``.
+ vertical : bool
+ Control operation orientation.
+ gradient_cutoff : float
+ Controls how far out of parallel lines can be to edges before
+ correction is terminated. The default (2.) is good for most cases.
+
+ Returns
+ -------
+ costs_arr : ndarray of floats
+ Adjusted costs array, ready for use.
+ """
+    if vertical is not True:
+        # Transpose, handle the vertical case, then transpose back
+        return generate_costs(diff_image.T, mask.T, vertical=True,
+                              gradient_cutoff=gradient_cutoff).T
+
+ # Start with a high-cost array of 1's
+ costs_arr = np.ones_like(diff_image)
+
+ # Obtain extent of overlap
+ row, col = mask.nonzero()
+ cmin = col.min()
+ cmax = col.max()
+ shape = mask.shape
+
+ # Label discrete regions
+ labels = mask.copy().astype(np.uint8)
+ cslice = slice(cmin, cmax + 1)
+ submask = np.ascontiguousarray(labels[:, cslice])
+ submask = flood_fill(submask, (0, 0), 2)
+ submask = flood_fill(submask, (shape[0]-1, 0), 3)
+ labels[:, cslice] = submask
+
+ # Find distance from edge to region
+ upper = (labels == 2).sum(axis=0).astype(np.float64)
+ lower = (labels == 3).sum(axis=0).astype(np.float64)
+
+ # Reject areas of high change
+ ugood = np.abs(np.gradient(upper[cslice])) < gradient_cutoff
+ lgood = np.abs(np.gradient(lower[cslice])) < gradient_cutoff
+
+ # Give areas slightly farther from edge a cost break
+ costs_upper = np.ones_like(upper)
+ costs_lower = np.ones_like(lower)
+ costs_upper[cslice][ugood] = upper[cslice].min() / np.maximum(upper[cslice][ugood], 1)
+ costs_lower[cslice][lgood] = lower[cslice].min() / np.maximum(lower[cslice][lgood], 1)
+
+ # Expand from 1d back to 2d
+ vdist = mask.shape[0]
+ costs_upper = costs_upper[np.newaxis, :].repeat(vdist, axis=0)
+ costs_lower = costs_lower[np.newaxis, :].repeat(vdist, axis=0)
+
+ # Place these in output array
+ costs_arr[:, cslice] = costs_upper[:, cslice] * (labels[:, cslice] == 2)
+ costs_arr[:, cslice] += costs_lower[:, cslice] * (labels[:, cslice] == 3)
+
+ # Finally, place the difference image
+ costs_arr[mask] = diff_image[mask]
+
+ return costs_arr
+```
+
+Use this function to generate the cost array.
+
+```{code-cell} python
+# Start with the absolute value of the difference image.
+# np.abs necessary because we don't want negative costs!
+costs01 = generate_costs(np.abs(pano0_warped - pano1_warped),
+ pano0_mask & pano1_mask)
+```
+
+Allow the path to "slide" along top and bottom edges to the optimal horizontal position by setting top and bottom edges to zero cost.
+
+```{code-cell} python
+# Set top and bottom edges to zero in `costs01`
+# Remember (row, col) indexing!
+```
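+
+A sketch of the intended edit (the same operation is spelled out for `costs12` further below):
+
+```{code-cell} python
+costs01[0, :] = 0
+costs01[-1, :] = 0
+```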
+
+Our cost array now looks like this
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(15, 12))
+
+ax.imshow(costs01, cmap='gray', interpolation='none')
+
+ax.axis('off');
+```
+
+The tweak we made with `generate_costs` is subtle but important. Can you see it?
+
++++
+
+### Find the minimum-cost path (MCP)
+
+Use `skimage.graph.route_through_array` to find an optimal path through the cost array
+
+```{code-cell} python
+from skimage.graph import route_through_array
+
+# Arguments are:
+# cost array
+# start pt
+# end pt
+# can it traverse diagonally
+pts, _ = route_through_array(costs01, mask_pts01[0], mask_pts01[1], fully_connected=True)
+
+# Convert list of lists to 2d coordinate array for easier indexing
+pts = np.array(pts)
+```
+
+Did it work?
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# Plot the difference image
+ax.imshow(pano0_warped - pano1_warped, cmap='gray')
+
+# Overlay the minimum-cost path
+ax.plot(pts[:, 1], pts[:, 0])
+
+plt.tight_layout()
+ax.axis('off');
+```
+
+That looks like a great seam to stitch these images together - the path looks very close to zero.
+
++++
+
+### Irregularities
+
+Due to the random element in the RANSAC transform estimation, everyone will have a slightly different blue path. **Your path will look different from mine, and different from your neighbor's.** That's expected! _The awesome thing about MCP is that everyone just calculated the best possible path to stitch together their unique transforms!_
+
++++
+
+### Filling the mask
+
+Turn that path into a mask, which will be 1 where we want the left image to show through and zero elsewhere. We need to fill the left side of the mask, up to the path, with ones.
+
+**Note**: This is the inverse of NumPy masked array conventions (``numpy.ma``), which specify a negative mask (mask == bad/missing) rather than a positive mask as used here (mask == good/selected).
+
+Place the path into a new, empty array.
+
+```{code-cell} python
+# Start with an array of zeros and place the path
+mask0 = np.zeros_like(pano0_warped, dtype=np.uint8)
+mask0[pts[:, 0], pts[:, 1]] = 1
+```
+
+Ensure the path appears as expected
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# View the path in black and white
+ax.imshow(mask0, cmap='gray')
+
+ax.axis('off');
+```
+
+Fill everything to the left of the path with ones, using `skimage.morphology.flood_fill`; a flood fill started from the top-left corner stops at the path.
+
+```{code-cell} python
+from skimage.morphology import flood_fill
+
+# Flood fill from the top-left corner; the fill stops at the path,
+# so everything to its left becomes 1
+mask0 = flood_fill(mask0, (0, 0), 1, connectivity=1)
+
+# The result
+plt.imshow(mask0, cmap='gray');
+```
+
+Looks great!
+
+
+### Rinse and repeat
+
+Apply the same principles to images 1 and 2: first, build the cost array
+
+```{code-cell} python
+# Start with the absolute value of the difference image.
+# np.abs is necessary because we don't want negative costs!
+costs12 = generate_costs(np.abs(pano1_warped - pano2_warped),
+ pano1_mask & pano2_mask)
+
+# Allow the path to "slide" along top and bottom edges to the optimal
+# horizontal position by setting top and bottom edges to zero cost
+costs12[0, :] = 0
+costs12[-1, :] = 0
+```
+
+**Add an additional constraint this time**, to prevent this path crossing the prior one!
+
+```{code-cell} python
+costs12[mask0 > 0] = 1
+```
+
+Check the result
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(8, 8))
+ax.imshow(costs12, cmap='gray');
+```
+
+Your results may look slightly different.
+
+Compute the minimal cost path
+
+```{code-cell} python
+# Arguments are:
+# cost array
+# start pt
+# end pt
+# can it traverse diagonally
+pts, _ = route_through_array(costs12, mask_pts12[0], mask_pts12[1], fully_connected=True)
+
+# Convert list of lists to 2d coordinate array for easier indexing
+pts = np.array(pts)
+```
+
+Verify a reasonable result
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# Plot the difference image
+ax.imshow(pano1_warped - pano2_warped, cmap='gray')
+
+# Overlay the minimum-cost path
+ax.plot(pts[:, 1], pts[:, 0]);
+
+ax.axis('off');
+```
+
+Initialize the mask by placing the path in a new array
+
+```{code-cell} python
+mask2 = np.zeros_like(pano0_warped, dtype=np.uint8)
+mask2[pts[:, 0], pts[:, 1]] = 1
+```
+
+Fill the right side this time, using `skimage.measure.label` - the label of interest is 3
+
+```{code-cell} python
+from skimage.measure import label
+
+mask2 = (label(mask2, connectivity=1, background=-1) == 3)
+
+# The result
+plt.imshow(mask2, cmap='gray');
+```
+
+### Final mask
+
+The last mask for the middle image is one of exclusion - it will be displayed everywhere `mask0` and `mask2` are not.
+
+```{code-cell} python
+mask1 = ~(mask0.astype(bool) | mask2.astype(bool))
+```
+
+Define a convenience function to place masks in alpha channels
+
+```{code-cell} python
+def add_alpha(img, mask=None):
+ """
+ Adds a masked alpha channel to an image.
+
+ Parameters
+ ----------
+ img : (M, N[, 3]) ndarray
+ Image data, should be rank-2 or rank-3 with RGB channels
+ mask : (M, N[, 3]) ndarray, optional
+ Mask to be applied. If None, the alpha channel is added
+ with full opacity assumed (1) at all locations.
+ """
+ from skimage.color import gray2rgb
+ if mask is None:
+ mask = np.ones_like(img)
+
+ if img.ndim == 2:
+ img = gray2rgb(img)
+
+ return np.dstack((img, mask))
+```
+
+Obtain final, alpha blended individual images and inspect them
+
+```{code-cell} python
+pano0_final = add_alpha(pano0_warped, mask0)
+pano1_final = add_alpha(pano1_warped, mask1)
+pano2_final = add_alpha(pano2_warped, mask2)
+
+compare(pano0_final, pano1_final, pano2_final, figsize=(15, 15))
+```
+
+What we have here is the world's most complicated and precisely-fitting jigsaw puzzle...
+
+Plot all three together and view the results!
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# This is a perfect combination, but matplotlib's interpolation
+# makes it appear to have gaps. So we turn it off.
+ax.imshow(pano0_final, interpolation='none')
+ax.imshow(pano1_final, interpolation='none')
+ax.imshow(pano2_final, interpolation='none')
+
+fig.tight_layout()
+ax.axis('off');
+```
+
+Fantastic! Without the black borders, you'd never know this was composed of separate images!
+
+---
+
++++
+
+# Bonus round: now, in color!
+
+We converted to grayscale for ORB feature detection, back in the initial **preprocessing** steps. Since we stored our transforms and masks, adding color is straightforward!
+
+Transform the colored images
+
+```{code-cell} python
+# Identical transforms as before, except
+# * Operating on original color images
+# * filling with cval=0 as we know the masks
+pano0_color = warp(pano_imgs[0], (model_robust01 + offset1).inverse, order=3,
+ output_shape=output_shape, cval=0)
+
+pano1_color = warp(pano_imgs[1], offset1.inverse, order=3,
+ output_shape=output_shape, cval=0)
+
+pano2_color = warp(pano_imgs[2], (model_robust12 + offset1).inverse, order=3,
+ output_shape=output_shape, cval=0)
+```
+
+Apply the custom alpha channel masks
+
+```{code-cell} python
+pano0_final = add_alpha(pano0_color, mask0)
+pano1_final = add_alpha(pano1_color, mask1)
+pano2_final = add_alpha(pano2_color, mask2)
+```
+
+View the result!
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(12, 12))
+
+# Turn off matplotlib's interpolation
+ax.imshow(pano0_final, interpolation='none')
+ax.imshow(pano1_final, interpolation='none')
+ax.imshow(pano2_final, interpolation='none')
+
+fig.tight_layout()
+ax.axis('off');
+```
+
+Save the combined, color panorama locally as `'./pano-advanced-output.png'`
+
+```{code-cell} python
+from skimage.color import gray2rgb
+
+# Start with empty image
+pano_combined = np.zeros_like(pano0_color)
+
+# Place the masked portion of each image into the array
+# masks are 2d, they need to be (M, N, 3) to match the color images
+pano_combined += pano0_color * gray2rgb(mask0)
+pano_combined += pano1_color * gray2rgb(mask1)
+pano_combined += pano2_color * gray2rgb(mask2)
+
+
+# Save the output - precision loss warning is expected
+# moving from floating point -> uint8
+io.imsave('./pano-advanced-output.png', pano_combined)
+```
+
+
+---
+
+
+
++++
+
+
+
++++
+
+# Once more, from the top
+
+I hear what you're saying. "But Josh, those were too easy! The panoramas had too much overlap! Does this still work in the real world?"
+
++++
+
+**Go back to the top. Under "Load data" replace the string `'../images/pano/JDW_03*'` with `'../images/pano/JDW_9*'`, and re-run all of the cells in order.**
+
++++
+
+
+---
+
+
+
+```{code-cell} python
+%reload_ext load_style
+%load_style ../themes/tutorial.css
+```
diff --git a/modules/adv4_warping.md b/modules/adv4_warping.md
new file mode 100644
index 0000000..a7d857e
--- /dev/null
+++ b/modules/adv4_warping.md
@@ -0,0 +1,471 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+from __future__ import division, print_function
+import numpy as np
+%matplotlib inline
+```
+
+# Overview
+
+- http://scikit-image.org/docs/stable/api/skimage.transform.html
+- http://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.warp
+- http://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.AffineTransform (and other similar classes)
+
++++
+
+# Image rotation from scratch
+
+The following code shows how to rotate an image using the skimage (scikit-image) library.
+
+```{code-cell} python
+import matplotlib.pyplot as plt
+from skimage import transform, data
+
+camera = data.camera()
+rotated = transform.rotate(camera, 30)
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 5))
+ax0.imshow(camera, cmap='gray')
+ax1.imshow(rotated, cmap='gray');
+```
+
+**Exercise:** Write an algorithm from scratch that will
+do the same (i.e., take an input image as an ndarray, and rotate it).
+
+If you feel creative, you can also write code to magnify (zoom) the image.
+
+You may need: http://en.wikipedia.org/wiki/Polar_coordinate_system
+
+A (bad) solution is given below--but try it yourself before looking!
+
++++
+
+### A problematic approach
+
+```{code-cell} python
+from skimage import color
+```
+
+```{code-cell} python
+def rotate(image, theta):
+ theta = np.deg2rad(theta)
+
+ height, width = image.shape[:2]
+ out = np.zeros_like(image)
+
+ centre_x, centre_y = width / 2., height / 2.
+
+ for x in range(width):
+ for y in range(height):
+
+ x_c = x - centre_x
+ y_c = y - centre_y
+
+ # Determine polar coordinate of pixel
+ radius = np.sqrt(x_c**2 + y_c**2)
+ angle = np.arctan2(y_c, x_c)
+
+ new_angle = angle + theta
+
+ new_x = radius * np.cos(new_angle)
+ new_y = radius * np.sin(new_angle)
+
+ new_x = new_x + centre_x
+ new_y = new_y + centre_y
+
+ if (new_x >= width) or (new_x < 0) or\
+ (new_y >= height) or (new_y < 0):
+ continue
+ else:
+ out[int(new_y), int(new_x)] = image[y, x]
+
+ return out
+
+rotated = rotate(camera, 40)
+
+plt.imshow(rotated, cmap='gray', interpolation='nearest');
+```
+
+## And while we can attempt to fix the problem...
+
+...this is not an optimal approach
+
+```{code-cell} python
+# Attempt at fixing the holes using a median filter
+# -- it works, sort of, but it's not the best approach.
+
+height, width = rotated.shape[:2]
+
+out = rotated.copy()
+
+for x in range(1, width - 1):
+ for y in range(1, height - 1):
+ if out[y, x] == 0:
+ out[y, x] = np.median([out[y, x-1],
+ out[y, x+1],
+ out[y+1, x],
+ out[y-1, x]])
+
+plt.imshow(out, cmap='gray', interpolation='nearest');
+```
+
+```{code-cell} python
+A = np.array([[4, 2], [1, 6]])
+print(A)
+```
+
+```{code-cell} python
+plt.imshow(A, cmap='gray', interpolation='nearest');
+```
+
+# For later discussion: interpolation
+
+## Bi-linear interpolation
+
+
+
+
+Also see [bilinear interpolation on Wikipedia](http://en.wikipedia.org/wiki/Bilinear_interpolation)
+
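+As a rough illustration (this helper is not part of the original notebook), bilinear interpolation estimates the value at a fractional position by blending the four surrounding pixels, weighted by how close the query point lies to each of them:
+
+```{code-cell} python
+def bilinear(image, y, x):
+    """Bilinearly interpolate `image` at the fractional position (y, x)."""
+    y0, x0 = int(np.floor(y)), int(np.floor(x))
+    y1, x1 = y0 + 1, x0 + 1
+    dy, dx = y - y0, x - x0
+
+    # Interpolate along x on the two bracketing rows, then along y
+    top = (1 - dx) * image[y0, x0] + dx * image[y0, x1]
+    bottom = (1 - dx) * image[y1, x0] + dx * image[y1, x1]
+    return (1 - dy) * top + dy * bottom
+
+# The centre of the small array A above is the average of its four values
+print(bilinear(A.astype(float), 0.5, 0.5))  # 3.25
+```
+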
++++
+
+# Some warping experiments!
+
+## Fish-eye
+
+```{code-cell} python
+from skimage import transform, data, io
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Load face
+face = io.imread('../images/stefan.jpg')
+
+# Get the eye nicely in the middle
+face = face[:185, 15:]
+```
+
+```{code-cell} python
+plt.imshow(face)
+plt.plot([face.shape[1]/2.], [face.shape[0]/2.], 'or', markersize=14, alpha=0.4)
+plt.axis('image');
+```
+
+```{code-cell} python
+# Define a transformation on the x-y coordinates
+
+def fisheye(xy):
+ center = np.mean(xy, axis=0)
+ xc, yc = (xy - center).T
+
+ # Polar coordinates
+ r = np.sqrt(xc**2 + yc**2)
+ theta = np.arctan2(yc, xc)
+
+ r = 0.8 * np.exp(r**(1/2.1) / 1.8)
+
+ return np.column_stack((r * np.cos(theta), r * np.sin(theta))) + center
+```
+
+```{code-cell} python
+# Warp and display
+
+out = transform.warp(face, fisheye)
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 5))
+ax0.imshow(face)
+ax0.set_axis_off()
+
+ax1.imshow(out)
+ax1.set_axis_off()
+
+plt.title('Knock! Knock!')
+plt.show()
+```
+
+## Run the following scripts for fun:
+
+(Open up the terminal in the "scripts" directory first)
+
+- **deswirl.py** (run using: ``python deswirl.py``)
+
+ In the UK, a criminal tried to hide his identity by posting
+ swirled pictures of his face online. Here, we use the
+ Mona Lisa to illustrate what he did. Can you restore
+ her face back to normal? (Note that you can adjust the
+ position of the red dot, as well as move the sliders.)
+
+
+- **clock_deblur.py**
+
+ I took a picture of a wall clock while moving the camera. Or perhaps the clock moved.
+ Either way, now I cannot read the time! I've implemented a deblurring
+ algorithm--can you adjust its parameters to help me pin-point
+ the time?
+
++++
+
+# Here's code for a swirl transform:
+
+```{code-cell} python
+from skimage import transform
+
+def swirl(xy, center=[0, 0], strength=1, radius=100, rotation=0):
+ """Compute the coordinate mapping for a swirl transformation.
+
+ """
+ x, y = xy.T
+ x0, y0 = center
+ rho = np.sqrt((x - x0)**2 + (y - y0)**2)
+
+ # Ensure that the transformation decays to approximately 1/1000-th
+ # within the specified radius.
+ radius = radius / 5 * np.log(2)
+
+ theta = rotation + strength * \
+ np.exp(-rho / radius) + \
+ np.arctan2(y - y0, x - x0)
+
+ xy[..., 0] = x0 + rho * np.cos(theta)
+ xy[..., 1] = y0 + rho * np.sin(theta)
+
+ return xy
+
+
+h, w = face.shape[:2]
+
+parameters = {'center': [w/2., h/2.],
+ 'strength': 8,
+ 'radius': 90,
+ 'rotation': 0}
+
+out = transform.warp(face, swirl, parameters)
+```
+
+```{code-cell} python
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
+
+ax0.imshow(face)
+ax1.imshow(out);
+```
+
+# Can you come up with an even better distortion?
+
+## Start with this template:
+
+```{code-cell} python
+def my_warp(xy):
+    x = xy[:, 0]
+    y = xy[:, 1]
+
+    x = x + 1.5 * np.sin(y / 3)
+
+    # Return an (N, 2) coordinate array, as `warp` expects
+    return np.column_stack((x, y))
+
+image = plt.imread('../images/stefan.jpg')
+out = transform.warp(image, my_warp)
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
+ax0.imshow(image)
+ax1.imshow(out);
+```
+
+## Composing Transformations
+
+scikit-image allows you to compose several transformations. For example:
+
+```{code-cell} python
+from skimage import data
+
+cat = data.chelsea()
+horizontal_shift = transform.SimilarityTransform(translation=[20, 0])
+
+multiple_shifts = horizontal_shift + horizontal_shift + horizontal_shift
+
+f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(15, 5))
+ax0.imshow(cat)
+ax1.imshow(transform.warp(cat, horizontal_shift.inverse)) # Note the inverse!
+ax2.imshow(transform.warp(cat, multiple_shifts.inverse));
+```
+
+The `transform` module allows us to rotate images. The inner workings are something like this:
+
+```{code-cell} python
+def my_rotate(image, angle):
+ rotation_tf = transform.SimilarityTransform(rotation=np.deg2rad(angle))
+ return transform.warp(image, rotation_tf.inverse)
+
+plt.imshow(my_rotate(cat, 30))
+```
+
+Note that this rotates the cat around the origin (top-left).
+
+**Can you modify `my_rotate` to rotate the image around the center?**
+
+*Hint:*
+
+1. Shift the image (see above) so that the center of the image lies at (0, 0)
+2. Rotate the image
+3. Shift the image back---the opposite of what you did in step 1
+
+All of this can be achieved by composing transformations and calling `warp` once.
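+
+If you get stuck, one possible sketch is below (try it yourself first); the helper name `my_rotate_center` is new here:
+
+```{code-cell} python
+def my_rotate_center(image, angle):
+    height, width = image.shape[:2]
+
+    # 1. shift the image centre to (0, 0); 2. rotate; 3. shift back
+    shift = transform.SimilarityTransform(translation=[-width / 2, -height / 2])
+    rotation = transform.SimilarityTransform(rotation=np.deg2rad(angle))
+    shift_back = transform.SimilarityTransform(translation=[width / 2, height / 2])
+
+    # Compose the three transforms and call warp once (note the inverse!)
+    tf = shift + rotation + shift_back
+    return transform.warp(image, tf.inverse)
+
+plt.imshow(my_rotate_center(cat, 30));
+```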
+
++++
+
+# Advanced challenge: rectifying an image
+
+
+
++++
+
+We know the above tiles are laid out in a square--can you transform
+the image so that the tiles are displayed as if you were viewing them from above?
+
+The centre-points of the corner circles are, given as (row, column) coordinates:
+
+```
+(72, 129) -- top left
+(76, 302) -- top right
+(185, 90) -- bottom left
+(193, 326) -- bottom right
+```
+
+Hint: there is a linear transformation matrix, $H$, such that
+
+$H \mathbf{x} = \mathbf{x}'$
+
+where $\mathbf{x}$ is the *homogeneous* coordinate in the original image and
+$\mathbf{x}'$ is the *homogeneous* coordinate in the rectified image (with *homogeneous*
+we simply mean that we add an extra 1 at the end, e.g. (72, 129) becomes (72, 129, 1)).
+The values for $\mathbf{x}$ and their new values, $\mathbf{x}'$,
+are therefore:
+
+```
+x = (72, 129, 1), x' = (0, 0, 1)
+x = (76, 302, 1), x' = (0, 400, 1)
+x = (185, 90, 1), x' = (400, 0, 1)
+x = (193, 326, 1) x' = (400, 400, 1)
+```
+
+(You can choose any output size you like--I chose $400 \times 400$)
+
+Why do we need homogeneous coordinates? It allows us to have *translation* as part of H:
+
+$
+\left[\begin{array}{ccc}
+H_{00} & H_{01} & H_{02}\\
+H_{10} & H_{11} & H_{12}\\
+H_{20} & H_{21} & H_{22}
+\end{array}\right]\left[\begin{array}{c}
+x\\
+y\\
+1
+\end{array}\right]=\left[\begin{array}{c}
+H_{00}x+H_{01}y+H_{02}\\
+H_{10}x+H_{11}y+H_{12}\\
+H_{20}x+H_{21}y+H_{22}
+\end{array}\right]
+$
+
+Note that each element of the output coordinate is of the form $ax + by + c$. Without the 1 in the last position of the coordinate, there would have been no $+ c$ and therefore no translation!
+
+The question on how to determine $H$ is left for another day. If you are curious,
+the [answer can be found here](homography.pdf).
+
+In the meantime, I provide some code to calculate $H$:
+
+```{code-cell} python
+from skimage.transform import estimate_transform
+
+source = np.array([(129, 72),
+ (302, 76),
+ (90, 185),
+ (326, 193)])
+
+target = np.array([[0, 0],
+ [400, 0],
+ [0, 400],
+ [400, 400]])
+
+tf = estimate_transform('projective', source, target)
+H = tf.params
+print(H)
+```
+
+Using the code in the cell above, you can compute the target coordinate of any position in the original image.
+
+```{code-cell} python
+# Verify that the top left corner maps to (0, 0)
+
+x = np.array([[129, 72, 1]])
+
+z = np.dot(H, x.T)
+z /= z[2]
+
+print(z)
+```
+
+### Here's a template solution:
+
+```{code-cell} python
+def rectify(xy):
+ x = xy[:, 0]
+ y = xy[:, 1]
+
+ # We need to provide the backward mapping, from the target
+ # image to the source image.
+ HH = np.linalg.inv(H)
+
+ # You must fill in your code here to take
+ # the matrix HH (given above) and to transform
+ # each coordinate to its new position.
+ #
+ # Hint: handy functions are
+ #
+ # - np.dot (matrix multiplication)
+ # - np.ones_like (make an array of ones the same shape as another array)
+ # - np.column_stack
+ # - A.T -- type .T after a matrix to transpose it
+ # - x.reshape -- reshapes the array x
+
+ # ... your code
+ # ... your code
+ # ... your code
+ # ... your code
+
+ return ...
+
+
+image = plt.imread('images/chapel_floor.png')
+out = transform.warp(image, rectify, output_shape=(400, 400))
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
+ax0.imshow(image)
+ax1.imshow(out)
+```
+
+
+
+
+The solution to the above problem is provided as [solutions/tile_rectify.py](solutions/tile_rectify.py). Only look at it after you've attempted the problem yourself!
+
+
++++
+
+# For more fun examples see http://scikit-image.org/docs/dev/auto_examples
+
+```{code-cell} python
+
+```
diff --git a/modules/adv5_pores.md b/modules/adv5_pores.md
new file mode 100644
index 0000000..ce8c6d0
--- /dev/null
+++ b/modules/adv5_pores.md
@@ -0,0 +1,324 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+# Diatom analysis
+
+See https://www.nature.com/articles/s41524-019-0202-3:
+
+**Deep data analytics for genetic engineering of diatoms linking genotype to phenotype via machine learning**, Artem A. Trofimov, Alison A. Pawlicki, Nikolay Borodinov, Shovon Mandal, Teresa J. Mathews, Mark Hildebrand, Maxim A. Ziatdinov, Katherine A. Hausladen, Paulina K. Urbanowicz, Chad A. Steed, Anton V. Ievlev, Alex Belianinov, Joshua K. Michener, Rama Vasudevan, and Olga S. Ovchinnikova.
+
+```{code-cell} python
+%matplotlib inline
+%config InlineBackend.figure_format = 'retina'
+import matplotlib.pyplot as plt
+import numpy as np
+```
+
+```{code-cell} python
+# Set up matplotlib defaults: larger images, gray color map
+import matplotlib
+matplotlib.rcParams.update({
+ 'figure.figsize': (10, 10),
+ 'image.cmap': 'gray'
+})
+```
+
+```{code-cell} python
+from skimage import io
+image = io.imread('../data/diatom-wild-032.jpg')
+
+plt.imshow(image);
+```
+
+```{code-cell} python
+pores = image[:690, :]
+
+plt.imshow(pores);
+```
+
+```{code-cell} python
+from scipy import ndimage as ndi
+from skimage import util
+
+denoised = ndi.median_filter(util.img_as_float(pores), size=3)
+```
+
+```{code-cell} python
+plt.imshow(denoised);
+```
+
+```{code-cell} python
+from skimage import exposure
+
+pores_gamma = exposure.adjust_gamma(denoised, 0.7)
+plt.imshow(pores_gamma);
+```
+
+```{code-cell} python
+pores_inv = 1 - pores_gamma
+plt.imshow(pores_inv);
+```
+
+```{code-cell} python
+# This is the problematic part of the manual pipeline: you need
+# a good segmentation. There are algorithms for automatic thresholding,
+# such as `filters.threshold_otsu` and `filters.threshold_li`, but they don't always get the
+# result you want.
+
+t = 0.325
+thresholded = (pores_gamma <= t)
+
+plt.imshow(thresholded);
+```
+
+```{code-cell} python
+from skimage import filters
+
+filters.try_all_threshold(pores_gamma, figsize=(15, 20));
+```
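+
+For comparison, an automatic threshold can also be computed directly (a sketch; the value will generally differ from the hand-picked one above):
+
+```{code-cell} python
+t_li = filters.threshold_li(pores_gamma)
+print('Li threshold: %.3f (manual choice: %.3f)' % (t_li, t))
+
+plt.imshow(pores_gamma <= t_li);
+```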
+
+```{code-cell} python
+from skimage import segmentation, morphology, color
+```
+
+```{code-cell} python
+distance = ndi.distance_transform_edt(thresholded)
+
+plt.imshow(exposure.adjust_gamma(distance, 0.5))
+plt.title('Distance to background map');
+```
+
+```{code-cell} python
+local_maxima = morphology.local_maxima(distance)
+```
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(20, 20))
+
+maxi_coords = np.nonzero(local_maxima)
+
+ax.imshow(pores);
+plt.scatter(maxi_coords[1], maxi_coords[0]);
+```
+
+```{code-cell} python
+# This is a utility function that we'll use for display in a while;
+# you can ignore it for now and come and investigate later.
+
+def shuffle_labels(labels):
+ """Shuffle the labels so that they are no longer in order.
+ This helps with visualization.
+ """
+ indices = np.unique(labels[labels != 0])
+ indices = np.append(
+ [0],
+ np.random.permutation(indices)
+ )
+ return indices[labels]
+```
+
+```{code-cell} python
+markers = ndi.label(local_maxima)[0]
+labels = segmentation.watershed(denoised, markers)
+```
+
+```{code-cell} python
+f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(20, 5))
+ax0.imshow(thresholded)
+ax1.imshow(np.log(1 + distance))
+ax2.imshow(shuffle_labels(labels), cmap='magma');
+```
+
+```{code-cell} python
+labels_masked = segmentation.watershed(thresholded, markers, mask=thresholded, connectivity=2)
+```
+
+```{code-cell} python
+f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(20, 5))
+ax0.imshow(thresholded)
+ax1.imshow(np.log(1 + distance))
+ax2.imshow(shuffle_labels(labels_masked), cmap='magma');
+```
+
+```{code-cell} python
+from skimage import measure
+
+contours = measure.find_contours(labels_masked, level=0.5)
+plt.imshow(pores)
+for c in contours:
+ plt.plot(c[:, 1], c[:, 0])
+```
+
+```{code-cell} python
+regions = measure.regionprops(labels_masked)
+```
+
+```{code-cell} python
+f, ax = plt.subplots(figsize=(10, 3))
+ax.hist([r.area for r in regions], bins=100, range=(0, 200));
+```
+
+```{code-cell} python
+from keras import models, layers
+from keras.layers import Conv2D, MaxPooling2D, UpSampling2D
+
+M = 76
+N = int(23 / 76 * M) * 2
+
+model = models.Sequential()
+model.add(
+ Conv2D(
+ 32,
+ kernel_size=(2, 2),
+ activation='relu',
+ input_shape=(N, N, 1),
+ padding='same'
+ )
+)
+model.add(MaxPooling2D(pool_size=(2, 2)))
+model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
+model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
+model.add(UpSampling2D(size=(2, 2)))
+model.add(
+ Conv2D(
+ 1,
+ kernel_size=(2, 2),
+ activation='sigmoid',
+ padding='same'
+ )
+)
+model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
+
+# Load pre-trained weights from disk
+model.load_weights('../data/keras_model-diatoms-pores.h5')
+```
+
+```{code-cell} python
+shape = np.array(pores.shape)
+padded_shape = (np.ceil(shape / 46) * 46).astype(int)
+delta_shape = padded_shape - shape
+
+padded_pores = np.pad(
+ pores,
+ pad_width=[(0, delta_shape[0]), (0, delta_shape[1])],
+ mode='symmetric'
+)
+
+blocks = util.view_as_blocks(padded_pores, (46, 46))
+```
+
+```{code-cell} python
+B_rows, B_cols, _, _ = blocks.shape
+
+tiles = blocks.reshape([-1, 46, 46])
+
+# `predict` wants input of shape (N, 46, 46, 1)
+tile_masks = model.predict_classes(tiles[..., np.newaxis])
+
+print(tile_masks.shape)
+tile_masks = tile_masks[..., 0].astype(bool)
+print(tile_masks.shape)
+```
+
+```{code-cell} python
+nn_mask = util.montage(tile_masks, grid_shape=(B_rows, B_cols))
+nn_mask = nn_mask[:shape[0], :shape[1]]
+```
+
+```{code-cell} python
+plt.imshow(nn_mask);
+```
+
+```{code-cell} python
+contours = measure.find_contours(nn_mask, level=0.5)
+plt.imshow(pores)
+for c in contours:
+ plt.plot(c[:, 1], c[:, 0])
+```
+
+```{code-cell} python
+nn_regions = measure.regionprops(
+ measure.label(nn_mask)
+)
+```
+
+```{code-cell} python
+f, ax = plt.subplots(figsize=(10, 3))
+ax.hist([r.area for r in regions], bins='auto', range=(0, 200), alpha=0.4, label='Classic')
+ax.hist([r.area for r in nn_regions], bins='auto', range=(0, 200), alpha=0.4, label='NN')
+ax.legend();
+```
+
+## Bonus round: region filtering
+
+```{code-cell} python
+def is_circular(regions, eccentricity_threshold=0.1, area_threshold=10):
+ """Calculate a boolean mask indicating which regions are circular.
+
+ Parameters
+ ----------
+ eccentricity_threshold : float, >= 0
+        Regions with an eccentricity less than or equal to this value are
+ considered circular. See `measure.regionprops`.
+ area_threshold : int
+ Only regions with an area greater than this value are considered
+ circular.
+ """
+ return np.array([
+ (r.area > area_threshold) and
+ (r.eccentricity <= eccentricity_threshold)
+ for r in regions
+ ])
+```
+
+```{code-cell} python
+def filtered_mask(mask, regions, eccentricity_threshold, area_threshold):
+ mask = mask.copy()
+ suppress_regions = np.array(regions)[
+ ~is_circular(
+ regions,
+ eccentricity_threshold=eccentricity_threshold,
+ area_threshold=area_threshold
+ )
+ ]
+
+ for r in suppress_regions:
+ mask[tuple(r.coords.T)] = 0
+
+ return mask
+```
+
+```{code-cell} python
+plt.imshow(filtered_mask(nn_mask, nn_regions,
+ eccentricity_threshold=0.8,
+ area_threshold=20));
+```
+
+```{code-cell} python
+contours = measure.find_contours(
+ filtered_mask(nn_mask, nn_regions,
+ eccentricity_threshold=0.8,
+ area_threshold=20),
+ level=0.5
+)
+plt.imshow(pores)
+for c in contours:
+ plt.plot(c[:, 1], c[:, 0])
+```
+
+```{code-cell} python
+filtered_regions = np.array(nn_regions)[is_circular(nn_regions, 0.8, 20)]
+
+f, ax = plt.subplots(figsize=(10, 3))
+ax.hist([r.area for r in filtered_regions], bins='auto', range=(0, 200), alpha=0.4);
+```
diff --git a/modules/example_pano.md b/modules/example_pano.md
new file mode 100644
index 0000000..811874e
--- /dev/null
+++ b/modules/example_pano.md
@@ -0,0 +1,256 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+**Note:** This example has been significantly expanded and enhanced. The new, recommended version is located [here](solutions/adv3_panorama-stitching-solution.ipynb). We retain this version intact as it was the exact example used in [the scikit-image paper](https://peerj.com/articles/453/).
+
++++
+
+# Panorama stitching
+
+```{code-cell} python
+import numpy as np
+import matplotlib.pyplot as plt
+from skimage import io, transform
+from skimage.color import rgb2gray
+from skdemo import imshow_all
+```
+
+```{code-cell} python
+ic = io.ImageCollection('../images/pano/DFM_*')
+```
+
+The ``ImageCollection`` class provides an easy way of
+loading and representing multiple images. Images are not
+read from disk until accessed.
+
+```{code-cell} python
+imshow_all(ic[0], ic[1])
+```
+
+Credit: Photographs taken in Petra, Jordan by François Malan
+License: CC-BY
+
+```{code-cell} python
+image0 = rgb2gray(ic[0][:, 500:500+1987, :])
+image1 = rgb2gray(ic[1][:, 500:500+1987, :])
+
+image0 = transform.rescale(image0, 0.25)
+image1 = transform.rescale(image1, 0.25)
+```
+
+```{code-cell} python
+imshow_all(image0, image1)
+```
+
+For this demo, we estimate a projective transformation
+that relates the two images. Since the outer
+parts of these photographs do not conform well to such
+a model, we select only the central parts. To
+further speed up the demonstration, images are downscaled
+to 25% of their original size.
+
++++
+
+# 1. Feature detection and matching
+
+"Oriented FAST and rotated BRIEF" features are detected in both images:
+
+```{code-cell} python
+from skimage.feature import ORB, match_descriptors
+
+orb = ORB(n_keypoints=1000, fast_threshold=0.05)
+
+orb.detect_and_extract(image0)
+keypoints1 = orb.keypoints
+descriptors1 = orb.descriptors
+
+orb.detect_and_extract(image1)
+keypoints2 = orb.keypoints
+descriptors2 = orb.descriptors
+
+matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
+```
+
+```{code-cell} python
+from skimage.feature import plot_matches
+
+fig, ax = plt.subplots(1, 1, figsize=(10, 10))
+plot_matches(ax, image0, image1, keypoints1, keypoints2, matches12)
+ax.axis('off');
+```
+
+Each feature yields a binary descriptor; those are used to find
+the putative matches shown. Many false matches are observed.
+
++++
+
+# 2. Transform estimation
+
+To filter matches, we apply RANdom SAmple Consensus (RANSAC),
+a common method of rejecting outliers. This iterative process
+estimates transformation models based on
+randomly chosen subsets of matches, finally selecting the
+model which corresponds best with the majority of matches.
+
+```{code-cell} python
+from skimage.transform import ProjectiveTransform
+from skimage.measure import ransac
+from skimage.feature import plot_matches
+
+# Select keypoints from the source (image to be registered)
+# and target (reference image)
+src = keypoints2[matches12[:, 1]][:, ::-1]
+dst = keypoints1[matches12[:, 0]][:, ::-1]
+
+model_robust, inliers = ransac((src, dst), ProjectiveTransform,
+ min_samples=4, residual_threshold=2)
+```
+
+```{code-cell} python
+fig, ax = plt.subplots(1, 1, figsize=(15, 15))
+plot_matches(ax, image0, image1, keypoints1, keypoints2, matches12[inliers])
+ax.axis('off');
+```
+
+Note how most of the false matches have now been rejected.
+
++++
+
+# 3. Warping
+
+Next, we want to produce the panorama itself. The first
+step is to find the shape of the output image by
+considering the extents of all warped images.
+
+```{code-cell} python
+from skimage.transform import SimilarityTransform
+
+r, c = image1.shape[:2]
+
+# Note that transformations take coordinates in (x, y) format,
+# not (row, column), in order to be consistent with most literature
+corners = np.array([[0, 0],
+ [0, r],
+ [c, 0],
+ [c, r]])
+
+# Warp the image corners to their new positions
+warped_corners = model_robust(corners)
+
+# Find the extents of both the reference image and the warped
+# target image
+all_corners = np.vstack((warped_corners, corners))
+
+corner_min = np.min(all_corners, axis=0)
+corner_max = np.max(all_corners, axis=0)
+
+output_shape = (corner_max - corner_min)
+output_shape = np.ceil(output_shape[::-1])
+```
+
+Warp the images according to the estimated transformation model.
+Values outside the input images are set to -1 to distinguish the
+"background".
+
+A shift is added to make sure that both images are visible in their
+entirety. Note that ``warp`` takes the inverse mapping
+as an input.
+
+```{code-cell} python
+from skimage.color import gray2rgb
+from skimage.exposure import rescale_intensity
+from skimage.transform import warp
+
+offset = SimilarityTransform(translation=-corner_min)
+
+image0_ = warp(image0, offset.inverse,
+ output_shape=output_shape, cval=-1)
+
+image1_ = warp(image1, (model_robust + offset).inverse,
+ output_shape=output_shape, cval=-1)
+```
+
+An alpha channel is now added to the warped images
+before they are merged together:
+
+```{code-cell} python
+def add_alpha(image, background=-1):
+ """Add an alpha layer to the image.
+
+ The alpha layer is set to 1 for foreground and 0 for background.
+ """
+ return np.dstack((gray2rgb(image), (image != background)))
+
+image0_alpha = add_alpha(image0_)
+image1_alpha = add_alpha(image1_)
+
+merged = (image0_alpha + image1_alpha)
+alpha = merged[..., 3]
+
+# The summed alpha layers give us an indication of how many
+# images were combined to make up each pixel. Divide by the
+# number of images to get an average.
+merged /= np.maximum(alpha, 1)[..., np.newaxis]
+merged = merged[..., :3]
+```
+
+```{code-cell} python
+imshow_all(image0_alpha, image1_alpha, merged)
+```
+
+Note that, while the columns are well aligned, the color
+intensity is not well matched between images.
+
++++
+
+# 4. Blending
+
+To blend images smoothly we make use of the open source package
+[Enblend](http://enblend.sf.net), which in turn employs multi-resolution splines and
+Laplacian pyramids [1, 2].
+
+[1] P. Burt and E. Adelson. ["A Multiresolution Spline With Application to Image Mosaics"](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.690). ACM Transactions on Graphics, Vol. 2, No. 4, October 1983. Pg. 217-236.
+[2] P. Burt and E. Adelson. ["The Laplacian Pyramid as a Compact Image Code"](https://doi.org/10.1109/TCOM.1983.1095851). IEEE Transactions on Communications, April 1983.
+
+```{code-cell} python
+plt.imsave('/tmp/frame0.tif', image0_alpha)
+plt.imsave('/tmp/frame1.tif', image1_alpha)
+```
+
+```{code-cell} python
+%%bash
+
+enblend /tmp/frame*.tif -o /tmp/pano.tif
+```
+
+```{code-cell} python
+pano = io.imread('/tmp/pano.tif')
+
+plt.figure(figsize=(10, 10))
+plt.imshow(pano)
+plt.axis('off');
+```
+
+---
+
+
+
+```{code-cell} python
+%reload_ext load_style
+%load_style ../themes/tutorial.css
+```
diff --git a/modules/images b/modules/images
new file mode 120000
index 0000000..5e67573
--- /dev/null
+++ b/modules/images
@@ -0,0 +1 @@
+../images
\ No newline at end of file
diff --git a/modules/machine_learning.md b/modules/machine_learning.md
new file mode 100644
index 0000000..1031ee4
--- /dev/null
+++ b/modules/machine_learning.md
@@ -0,0 +1,560 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+%matplotlib inline
+import matplotlib
+matplotlib.rcParams['image.interpolation'] = 'nearest'
+import numpy as np
+import matplotlib.pyplot as plt
+```
+
+# Image processing and machine learning
+
+Some image processing numerical techniques are very specific to image processing, such as mathematical morphology or anisotropic diffusion segmentation. However, it is also possible to adapt generic machine learning techniques for image processing.
+
++++
+
+## A short introduction to machine learning
+
+This section is adapted from the [quick start tutorial](http://scikit-learn.org/stable/tutorial/basic/tutorial.html) from the scikit-learn documentation.
+
+In general, a learning problem considers a set of N samples of data and then tries to predict properties of unknown data. If each sample is more than a single number and, for instance, a multi-dimensional entry (aka multivariate data), it is said to have several attributes or features.
+
+Typical machine learning tasks are :
+- **classification**: samples belong to two or more classes and we want to learn from already labeled data how to predict the class of unlabeled data. For example, given examples of pixels belonging to an object of interest and to the background, we want the algorithm to label all the other pixels of the image. Or, given images of cats and dogs, we want to automatically label new images according to whether they show a cat or a dog.
+- **clustering**: grouping together similar samples. For example, given a set of pictures, can we group them automatically by subject (e.g. people, monuments, animals...)?
+
+In image processing, a sample can either be
+- a whole image, its features being pixel values, or sub-regions of an image (e.g. for face detection)
+- a pixel, its features being intensity values in colorspace, or statistical information about a neighbourhood centered on the pixel,
+- a labeled region, e.g. for classifying particles in an image of labels
+
+The only requirement is to create a dataset composed of N samples, of m features each, which can be passed to the **estimators** of scikit-learn.
+
+Let us start with an example, using the **digits dataset** from scikit-learn.
+
+```{code-cell} python
+from sklearn import datasets
+```
+
+```{code-cell} python
+digits = datasets.load_digits()
+print(digits)
+```
+
+The dataset is a dictionary-like object that holds all the data and some metadata about the data. The data is stored in the ``.data`` member, which is an ``(n_samples, n_features)`` array. Response variables (if available, as here) are stored in the ``.target`` member.
+
+```{code-cell} python
+print(digits.data.shape)
+print(digits.target.shape)
+```
+
+From the shape of the ``data`` array, we see that there are 1797 samples, each having 64 features. In fact, these 64 features are the raveled values of an 8x8 image. For convenience, the 2D images are also provided in the ``.images`` member. In a machine learning problem, a sample always consists of a **flat array** of features, which sometimes requires reshaping the data.
+
+```{code-cell} python
+print(digits.images.shape)
+np.all(digits.data[0].reshape((8, 8)) == digits.images[0])
+```
+
+```{code-cell} python
+plt.imshow(digits.images[0], cmap='gray')
+print("target: ", digits.target[0])
+```
+
+We now use one of scikit-learn's estimators classes in order to predict the digit from an image.
+
+Here we use an SVC (support vector machine classification) classifier, which uses a part of the dataset (the **training set**) to find the best way to separate the different classes. Even without knowing the details of the SVC, we can use it as a black box thanks to the common estimator API of scikit-learn. An estimator is created by initializing an estimator object:
+
+```{code-cell} python
+from sklearn import svm
+clf = svm.SVC(gamma=0.001, C=100.)
+```
+
+The estimator is trained from the learning set using its ``.fit`` method.
+
+```{code-cell} python
+clf.fit(digits.data[:-10], digits.target[:-10])
+```
+
+Then the target value of new data is predicted using the ``.predict`` method of the estimator.
+
+```{code-cell} python
+print(clf.predict(digits.data[-2:]))
+fig, axes = plt.subplots(1, 2)
+axes[0].imshow(digits.images[-2], cmap='gray')
+axes[1].imshow(digits.images[-1], cmap='gray')
+```
+
+So far, so good? We completed our first machine learning example!
+
+In the following, we will see how to use machine learning for image processing. We will use different kinds of samples and features, starting from low-level pixel-based features (e.g. RGB color), to mid-level features (e.g. corner, patches of high contrast), and finally to properties of segmented regions.
+
+**Outline**
+
+- Image segmentation using pixel-based features (color and texture)
+- Panorama stitching / image registration based on mid-level features
+- Classifying labeled objects using their properties
+
+**What we will not cover**
+
+- computer vision: automatic detection / recognition of objects (faces, ...)
+
+**A follow-up by Stéfan after this part** : image classification using deep learning with Keras.
+
++++
+
+## Thresholding and vector quantization
+
+Image binarization is a common operation. For grayscale images, finding the best threshold for binarization can be a manual operation. Alternatively, algorithms can select a threshold value automatically, which is convenient for computer vision or for batch-processing a series of images.
+
+The Otsu algorithm is the most famous thresholding algorithm. It maximizes the variance between the two segmented groups of pixels. Therefore, it can be interpreted as a **clustering** algorithm. Samples are pixels and have a single feature, which is their grayscale value.
+
+```{code-cell} python
+from skimage import data, exposure, filters
+camera = data.camera()
+```
+
+```{code-cell} python
+hi = exposure.histogram(camera)
+```
+
+```{code-cell} python
+val = filters.threshold_otsu(camera)
+```
+
+```{code-cell} python
+fig, axes = plt.subplots(1, 2)
+axes[0].imshow(camera, cmap='gray')
+axes[0].contour(camera, [val], colors='y')
+axes[1].plot(hi[1], hi[0])
+axes[1].axvline(val, ls='--')
+```
+
+How can we transpose the idea of Otsu thresholding to RGB or multichannel images? We can use the k-means algorithm, which aims to partition samples into k clusters, where each sample belongs to the cluster with the nearest mean.
+
+Below we show a simple example of k-means clustering, based on the Iris dataset of ``scikit-learn``. Note that the ``KMeans`` estimator
+uses an API similar to the SVC we used for digits classification, with the ``.fit`` method.
+
+```{code-cell} python
+# From http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_iris.html
+from mpl_toolkits.mplot3d import Axes3D
+
+from sklearn.cluster import KMeans
+
+np.random.seed(5)
+
+iris = datasets.load_iris()
+X = iris.data
+y = iris.target
+
+clf = KMeans(n_clusters=3)
+
+fig = plt.figure(figsize=(4, 3))
+ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
+clf.fit(X)
+labels = clf.labels_
+
+ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(float), cmap='jet')
+
+ax.w_xaxis.set_ticklabels([])
+ax.w_yaxis.set_ticklabels([])
+ax.w_zaxis.set_ticklabels([])
+ax.set_xlabel('Petal width')
+ax.set_ylabel('Sepal length')
+ax.set_zlabel('Petal length')
+```
+
+k-means clustering uses the Euclidean distance in feature space to cluster samples. If we want to cluster together pixels of similar color, the RGB space is not well suited since it mixes together information about color and light intensity. Therefore, we first transform the RGB image into [Lab colorspace](https://en.wikipedia.org/wiki/Lab_color_space), and only use the color channels (a and b) for clustering.
+
+```{code-cell} python
+from skimage import io, color
+im = io.imread('../images/round_pill.jpg')
+```
+
+```{code-cell} python
+im_lab = color.rgb2lab(im)
+data = np.array([im_lab[..., 1].ravel(), im_lab[..., 2].ravel()])
+```
+
+Then we create a ``KMeans`` estimator for two clusters.
+
+```{code-cell} python
+from sklearn.cluster import KMeans
+kmeans = KMeans(n_clusters=2, random_state=0).fit(data.T)
+segmentation = kmeans.labels_.reshape(im.shape[:-1])
+```
+
+```{code-cell} python
+plt.imshow(im)
+plt.contour(segmentation, colors='y')
+```
+
+Of course we can generalize this method to more than two clusters.
+
+```{code-cell} python
+im = io.imread('../images/chapel_floor.png')
+im_lab = color.rgb2lab(im)
+data = np.array([im_lab[..., 0].ravel(),
+ im_lab[..., 1].ravel(),
+ im_lab[..., 2].ravel()])
+
+kmeans = KMeans(n_clusters=4, random_state=0).fit(data.T)
+segmentation = kmeans.labels_.reshape(im.shape[:-1])
+```
+
+```{code-cell} python
+color_mean = color.label2rgb(segmentation, im, kind='mean')
+fig, axes = plt.subplots(1, 2)
+axes[0].imshow(im)
+axes[0].axis('off')
+axes[1].imshow(color_mean)
+axes[1].axis('off')
+```
+
+### Exercise:
+
+For the chapel floor image, cluster the image in 3 clusters, using only the color channels (not the lightness one). What happens?
+
++++
+
+## SLIC algorithm: clustering using color and spatial features
+
+In the thresholding / vector quantization approach presented above, pixels are characterized only by their color features. However, in most images neighboring pixels correspond to the same object. Hence, information on spatial proximity between pixels can be used in addition to color information.
+
+SLIC (Simple Linear Iterative Clustering) is a segmentation algorithm which clusters pixels in both space and color. Therefore, regions of space that are similar in color will end up in the same segment.
+
+```{code-cell} python
+spices = io.imread('../images/spices.jpg')
+plt.imshow(spices)
+```
+
+Let us try to segment the different spices using the previous k-means approach. One problem is that there is a lot of texture coming from the relief and shading.
+
+```{code-cell} python
+im_lab = color.rgb2lab(spices)
+data = np.array([im_lab[..., 1].ravel(),
+ im_lab[..., 2].ravel()])
+
+kmeans = KMeans(n_clusters=10, random_state=0).fit(data.T)
+labels = kmeans.labels_.reshape(spices.shape[:-1])
+color_mean = color.label2rgb(labels, spices, kind='mean')
+plt.imshow(color_mean)
+```
+
+```{code-cell} python
+from skimage import segmentation
+plt.imshow(segmentation.mark_boundaries(spices, labels))
+```
+
+SLIC is a superpixel algorithm, which segments an image into patches (superpixels) of neighboring pixels with a similar color. SLIC also works in the Lab colorspace. The ``compactness`` parameter controls the relative importance of the distance in image- and color-space.
+
+```{code-cell} python
+from skimage import segmentation
+segments = segmentation.slic(spices, n_segments=200, compactness=20)
+```
+
+```{code-cell} python
+plt.imshow(segmentation.mark_boundaries(spices, segments))
+```
+
+```{code-cell} python
+result = color.label2rgb(segments, spices, kind='mean')
+plt.imshow(result)
+```
+
+After the superpixel segmentation (which is also called oversegmentation, because we end up with more segments than we want), we can add a second clustering step to join superpixels belonging to the same spice heap.
+
+```{code-cell} python
+im_lab = color.rgb2lab(result)
+data = np.array([im_lab[..., 1].ravel(),
+ im_lab[..., 2].ravel()])
+
+kmeans = KMeans(n_clusters=5, random_state=0).fit(data.T)
+labels = kmeans.labels_.reshape(spices.shape[:-1])
+color_mean = color.label2rgb(labels, spices, kind='mean')
+plt.imshow(segmentation.mark_boundaries(spices, labels))
+```
+
+Note that other superpixel algorithms are available, such as **Felzenszwalb** segmentation.
+
+```{code-cell} python
+result = segmentation.felzenszwalb(spices, scale=100)
+plt.imshow(color.label2rgb(result, spices, kind='mean'))
+```
+
+```{code-cell} python
+plt.imshow(segmentation.mark_boundaries(spices, result))
+```
+
+### Exercise
+
+Repeat the same operations (SLIC superpixel segmentation, followed by K-Means clustering on the average color of superpixels) on the astronaut image. Vary the following parameters
+- slic: n_segments and compactness
+- KMeans: n_clusters (start with 8 for example)
+
+```{code-cell} python
+from skimage import data
+astro = data.astronaut()
+```
+
+```{code-cell} python
+# solution goes here
+```
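+
+One possible solution sketch (the parameter values below are just starting points to vary):
+
+```{code-cell} python
+# A possible solution: SLIC superpixels, then k-means on the mean superpixel colors
+segments = segmentation.slic(astro, n_segments=200, compactness=20)
+
+mean_colors = color.label2rgb(segments, astro, kind='mean')
+lab = color.rgb2lab(mean_colors)
+features = np.column_stack([lab[..., 1].ravel(), lab[..., 2].ravel()])
+
+kmeans = KMeans(n_clusters=8, random_state=0).fit(features)
+labels = kmeans.labels_.reshape(astro.shape[:-1])
+plt.imshow(color.label2rgb(labels, astro, kind='mean'))
+```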
+
+## Increasing the number of low-level features: trained segmentation using Gabor filters and random forests
+
+In the examples above, a small number of features per pixel was used: either a color triplet only, or a color triplet and its (x, y) position. However, it is possible to use other features, such as the local texture. Texture features can be obtained using Gabor filters, which are Gaussian kernels modulated by a sinusoidal wave.
+
+```{code-cell} python
+# From http://scikit-image.org/docs/dev/auto_examples/features_detection/plot_gabor.html
+from skimage import data, img_as_float
+from skimage.filters import gabor_kernel
+import scipy.ndimage as ndi
+
+shrink = (slice(0, None, 3), slice(0, None, 3))
+brick = img_as_float(data.load('brick.png'))[shrink]
+grass = img_as_float(data.load('grass.png'))[shrink]
+wall = img_as_float(data.load('rough-wall.png'))[shrink]
+image_names = ('brick', 'grass', 'wall')
+images = (brick, grass, wall)
+
+
+def power(image, kernel):
+ # Normalize images for better comparison.
+ image = (image - image.mean()) / image.std()
+ return np.sqrt(ndi.convolve(image, np.real(kernel), mode='wrap')**2 +
+ ndi.convolve(image, np.imag(kernel), mode='wrap')**2)
+
+# Plot a selection of the filter bank kernels and their responses.
+results = []
+kernel_params = []
+for frequency in (0.1, 0.4):
+ kernel = gabor_kernel(frequency, theta=0)
+ params = 'frequency=%.2f' % (frequency)
+ kernel_params.append(params)
+ # Save kernel and the power image for each image
+ results.append((kernel, [power(img, kernel) for img in images]))
+
+fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(5, 4))
+plt.gray()
+
+fig.suptitle('Image responses for Gabor filter kernels', fontsize=12)
+
+axes[0][0].axis('off')
+
+# Plot original images
+for label, img, ax in zip(image_names, images, axes[0][1:]):
+ ax.imshow(img)
+ ax.set_title(label, fontsize=9)
+ ax.axis('off')
+
+for label, (kernel, powers), ax_row in zip(kernel_params, results, axes[1:]):
+ # Plot Gabor kernel
+ ax = ax_row[0]
+ ax.imshow(np.real(kernel), interpolation='nearest')
+ ax.set_ylabel(label, fontsize=7)
+ ax.set_xticks([])
+ ax.set_yticks([])
+
+ # Plot Gabor responses with the contrast normalized for each filter
+ vmin = np.min(powers)
+ vmax = np.max(powers)
+ for patch, ax in zip(powers, ax_row[1:]):
+ ax.imshow(patch, vmin=vmin, vmax=vmax)
+ ax.axis('off')
+```
+
+We define a segmentation algorithm which:
+- computes, for every pixel, a set of features from Gabor filters of different scales and angles
+- trains a **RandomForest** classifier from user-labeled data, which are given as a mask of labels
+- and predicts the labels of the remaining non-labeled pixels
+
+The RandomForest algorithm automatically chooses thresholds along the different feature directions, and also decides which features are the most significant for discriminating between the different classes. This is very useful when we don't know whether all features are relevant.
+
+```{code-cell} python
+from sklearn.ensemble import RandomForestClassifier
+from skimage import filters
+from skimage import img_as_float
+
+def _compute_features(im):
+ gabor_frequencies = np.logspace(-3, 1, num=5, base=2)
+ thetas = [0, np.pi/2]
+ nb_fq = len(gabor_frequencies) * len(thetas)
+ im = np.atleast_3d(im)
+ im_gabor = np.empty((im.shape[-1], nb_fq) + im.shape[:2])
+ for ch in range(im.shape[-1]):
+ img = img_as_float(im[..., ch])
+ for i_fq, fq in enumerate(gabor_frequencies):
+ for i_th, theta in enumerate(thetas):
+ tmp = filters.gabor(img, fq, theta=theta)
+ im_gabor[ch, len(thetas) * i_fq + i_th] = \
+ np.abs(tmp[0] + 1j * tmp[1])
+ return im_gabor
+
+
+def trainable_segmentation(im, mask):
+ """
+ Parameters
+ ----------
+
+ im : ndarray
+ 2-D image (grayscale or RGB) to be segmented
+
+ mask : ndarray of ints
+ Array of labels. Non-zero labels are known regions that are used
+ to train the classification algorithm.
+ """
+ # Define features
+ im_gabor = _compute_features(im)
+ nb_ch, nb_fq, sh_1, sh2 = im_gabor.shape
+ # Training data correspond to pixels labeled in mask
+ training_data = im_gabor[:, :, mask>0]
+ training_data = training_data.reshape((nb_ch * nb_fq,
+ (mask>0).sum())).T
+ training_labels = mask[mask>0].ravel()
+ # Data are from the remaining pixels
+ data = im_gabor[:, :, mask == 0].reshape((nb_ch * nb_fq,
+ (mask == 0).sum())).T
+ # classification
+ clf = RandomForestClassifier()
+ clf.fit(training_data, training_labels)
+ labels = clf.predict(data)
+ result = np.copy(mask)
+ result[mask == 0] = labels
+ return result
+```
+
+```{code-cell} python
+# Image from https://fr.wikipedia.org/wiki/Fichier:Bells-Beach-View.jpg
+beach = io.imread('../images/Bells-Beach.jpg')
+```
+
+```{code-cell} python
+# Define mask of user-labeled pixels, which will be used for training
+mask = np.zeros(beach.shape[:-1], dtype=np.uint8)
+mask[700:] = 1
+mask[:550, :650] = 2
+mask[400:450, 1000:1100] = 3
+plt.imshow(beach)
+plt.contour(mask, colors='y')
+```
+
+```{code-cell} python
+result = trainable_segmentation(beach, mask)
+plt.imshow(color.label2rgb(result, beach, kind='mean'))
+```
+
+## Using mid-level features
+
+
+```{code-cell} python
+from skimage import data
+camera = data.camera()
+from skimage import feature
+corner_camera = feature.corner_harris(camera)
+coords = feature.corner_peaks(corner_camera)
+plt.imshow(camera, cmap='gray')
+plt.plot(coords[:, 1], coords[:, 0], 'o')
+plt.xlim(0, 512)
+plt.ylim(512, 0)
+```
+
+[Panorama stitching](example_pano.ipynb)
+
+[A longer example](adv3_panorama-stitching.ipynb)
+
+### Exercise
+
+Represent the ORB keypoints of the camera-man image.
+
+```{code-cell} python
+# solution goes here
+```
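+
+One possible solution sketch, using ``feature.ORB`` (the number of keypoints is an arbitrary choice):
+
+```{code-cell} python
+# A possible solution: detect ORB keypoints on the camera image
+orb = feature.ORB(n_keypoints=200)
+orb.detect_and_extract(camera)
+keypoints = orb.keypoints
+
+plt.imshow(camera, cmap='gray')
+plt.plot(keypoints[:, 1], keypoints[:, 0], '.r')
+plt.xlim(0, 512)
+plt.ylim(512, 0)
+```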
+
+## Clustering or classifying labeled objects
+
+We have already seen how to use ``skimage.measure.regionprops`` to extract the properties (area, perimeter, ...) of labeled objects. These properties can be used as features in order to cluster the objects in different groups, or to classify them if given a training set.
+
+In the example below, we use ``skimage.data.binary_blobs`` to generate a binary image. We use several properties to generate features: the area, the ratio between squared perimeter and area, and the solidity (the area fraction of the object compared to its convex hull). We would like to separate the big convoluted particles from the smaller round ones. Here we did not want to bother with a training set, so we will just use clustering instead of classification.
+
+```{code-cell} python
+from skimage import measure
+from skimage import data
+im = data.binary_blobs(length=1024, blob_size_fraction=0.05,
+ volume_fraction=0.2)
+labels = measure.label(im)
+props = measure.regionprops(labels)
+
+data = np.array([(prop.area,
+ prop.perimeter**2/prop.area,
+ prop.solidity) for prop in props])
+```
+
+```{code-cell} python
+plt.imshow(labels, cmap='nipy_spectral')
+```
+
+Once again we use the KMeans algorithm to cluster the objects. We visualize the result as an array of labels.
+
+```{code-cell} python
+clf = KMeans(n_clusters=2)
+
+clf.fit(data)
+
+
+def reshape_cluster_labels(cluster_labels, image_labels):
+ """
+ Some NumPy magic
+ """
+ cluster_labels = np.concatenate(([0], cluster_labels + 1))
+ return cluster_labels[image_labels]
+
+
+object_clusters = reshape_cluster_labels(clf.labels_, labels)
+plt.imshow(object_clusters, cmap='nipy_spectral')
+```
+
+However, our features were not carefully designed. Since the ``area`` property can take much larger values than the other properties, it dominates the other ones. To correct this effect, we can normalize the area to its maximal value.
+
+```{code-cell} python
+data[:, 0] /= data[:, 0].max()
+
+clf.fit(data)
+
+object_clusters = reshape_cluster_labels(clf.labels_, labels)
+plt.imshow(object_clusters, cmap='nipy_spectral')
+```
+
+A better way to do the rescaling is to use one of the scaling methods provided by ``sklearn.preprocessing``. The ``StandardScaler`` makes sure that every feature has zero mean and unit standard deviation.
+
+```{code-cell} python
+from sklearn import preprocessing
+scaler = preprocessing.StandardScaler()
+data_scaled = scaler.fit_transform(data)
+
+clf = KMeans(n_clusters=2)
+
+clf.fit(data_scaled)
+
+object_clusters = reshape_cluster_labels(clf.labels_, labels)
+plt.imshow(object_clusters, cmap='nipy_spectral')
+```
+
+### Exercise
+
+Replace the area property by the eccentricity, so that clustering separates compact and convoluted particles, regardless of their size.
+
+```{code-cell} python
+
+```
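+
+One possible solution sketch (swapping the area for the eccentricity, then rescaling as before):
+
+```{code-cell} python
+# A possible solution
+features = np.array([(prop.eccentricity,
+                      prop.perimeter**2/prop.area,
+                      prop.solidity) for prop in props])
+features_scaled = preprocessing.StandardScaler().fit_transform(features)
+
+clf = KMeans(n_clusters=2)
+clf.fit(features_scaled)
+
+object_clusters = reshape_cluster_labels(clf.labels_, labels)
+plt.imshow(object_clusters, cmap='nipy_spectral')
+```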
diff --git a/modules/numpy_scipy_quiz.md b/modules/numpy_scipy_quiz.md
new file mode 100644
index 0000000..fac7dd6
--- /dev/null
+++ b/modules/numpy_scipy_quiz.md
@@ -0,0 +1,251 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+# NumPy / SciPy Quiz
+
+Please do explore beyond the problems given, and feel free to ask questions at any time.
+
+If you need a quick refresher on NumPy, have a look at
+[this introduction](http://mentat.za.net/numpy/intro/intro.html). Also see the [SciPy lectures](http://scipy-lectures.org/).
+
+If you run out of exercises, there are [100 more here!](http://www.loria.fr/~rougier/teaching/numpy.100/)
+
+```{code-cell} python
+import numpy as np
+
+%matplotlib inline
+import matplotlib.pyplot as plt
+
+import scipy
+```
+
+## The NumPy array
+
++++
+
+- Create a `3x4` array containing the numbers from 0 through 11 (using `np.arange` and `np.reshape`), and call it `x`.
+
++++
+
+- Predict whether `x` changes after executing the following:
+
+```python
+y = x[2]
+y[0] = 3
+```
+
++++
+
+- Compute the sums of the columns of x, using `np.sum`.
+
++++
+
+- Construct the array ``x = np.array([0, 1, 2, 3], dtype=np.uint8)``. Predict the value of ``x - 1``.
+
++++
+
+## Broadcasting
+
++++
+
+- Consider two arrays, `x = np.arange(5); y = np.array([0.5, 1.5])`.
+
+ Construct a `5x2` matrix such that
+ `A[i, j] = x[i] * y[j]`, without using for-loops.
+
++++
+
+- Given a list of XYZ-coordinates, ``p``,
+
+ ```
+ [[1.0, 2.0, 10],
+ [3.0, 4.0, 20],
+ [5.0, 6.0, 30],
+ [7.0, 8.0, 40]]
+ ```
+
+ Normalise each coordinate by dividing with its Z (3rd) element. For example,
+ the first row becomes:
+
+ ```
+ [1/10, 2/10, 10/10]
+ ```
+
+ **Hint:** extract the last column into a variable ``z``, and then change its dimensions so
+ that ``p / z`` works.
+
++++
+
+## Indexing
+
++++
+
+- Create a ``3x3`` ndarray, `A = np.array([[0, 1, 2], [1, 1, 3], [2, 3, 2]])`. Find an indexing expression for extracting the diagonal elements, i.e.
+
+ `delems = A[...]`
+
++++
+
+- Generate a 10 x 3 array of random numbers (all between 0 and 1). From each row, pick
+the number closest to 0.75. Make use of ``np.abs`` and ``np.argmin`` to find the
+column ``j`` which contains the closest element in each row.
+
++++
+
+- Predict and verify the shape of the following slicing operation.
+
+```python
+x = np.empty((10, 8, 6))
+
+idx0 = np.zeros((3, 8)).astype(int)
+idx1 = np.zeros((3, 1)).astype(int)
+idx2 = np.zeros((1, 1)).astype(int)
+
+x[idx0, idx1, idx2]
+```
+
++++
+
+## Optimization
+
++++
+
+Consider the Rosenbrock test function. You can visualize it by executing the following cell:
+
+```{code-cell} python
+# From https://commons.wikimedia.org/wiki/File:Rosenbrock_function.svg
+
+from mpl_toolkits.mplot3d import Axes3D
+from matplotlib import cm
+from matplotlib.colors import LogNorm
+import matplotlib.pyplot as plt
+import numpy as np
+
+def rosenbrock(x, y):
+ return (1 - x)**2 + 100 * (y - x**2)**2
+
+fig = plt.figure()
+ax = Axes3D(fig, azim = -128, elev = 43)
+s = .05
+X = np.arange(-2, 2.+s, s)
+Y = np.arange(-1, 3.+s, s)
+X, Y = np.meshgrid(X, Y)
+Z = rosenbrock(X, Y)
+# ax.plot_surface(X, Y, Z, rstride = 1, cstride = 1, norm = LogNorm(), cmap = cm.jet)
+# Without `linewidth=0, edgecolor='none'`, the plot may show wide black edges,
+# which make the surface look much darker than intended.
+ax.plot_surface(X, Y, Z, rstride = 1, cstride = 1, norm = LogNorm(), cmap = cm.viridis, linewidth=0, edgecolor='none')
+
+# Set the axis limits so that they are the same as in the figure above.
+ax.set_xlim([-2, 2.0])
+ax.set_ylim([-1, 3.0])
+ax.set_zlim([0, 2500])
+
+plt.xlabel("x")
+plt.ylabel("y");
+```
+
+Find the *minimum* of the test function `rosenbrock` defined above. Use `scipy.optimize.minimize`, and the following template:
+
+```{code-cell} python
+from scipy import optimize
+
+def cost_f(p):
+ x, y = p
+ return rosenbrock(x, y)
+```
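+
+One possible way to complete the template (the starting point ``x0`` is an arbitrary guess):
+
+```{code-cell} python
+# The Rosenbrock minimum is at (x, y) = (1, 1)
+res = optimize.minimize(cost_f, x0=[2, -1])
+print(res.x)
+```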
+
+## Plotting
+
++++
+
+Generate some random data using:
+
+```
+np.random.seed(0)
+
+mu = 200
+sigma = 25
+n_bins = 50
+x = np.random.normal(mu, sigma, size=100)
+```
+
+Now, try to reproduce this plot:
+
+![matplotlib hist example](../images/mpl_hist.png)
+
++++
+
+## ndimage
+
++++
+
+Use `scipy.ndimage.distance_transform_edt` to calculate the distance map of the image generated in the next cell:
+
+```{code-cell} python
+from skimage import draw
+
+N = 200
+S = 10
+
+np.random.seed(0)
+image = np.ones((N, N), dtype=bool)
+
+coords = np.random.randint(low=0, high=N, size=4 * S).reshape(-1, 4)
+
+for line in coords:
+ image[draw.line(*line)] = 0
+```
+
+Display the distance map; can you interpret what you see?
+
++++
+
+## interpolation
+
++++
+
+An image is sampled `T` times by the code below. Take the coordinates and values of the samples, and
+try to reconstruct the original image. **Hint:** Use `scipy.interpolate.griddata`, and provide it with the coordinates of the full image, generated using `np.indices`.
+
+```{code-cell} python
+from skimage import data, color, filters
+image = color.rgb2gray(data.astronaut())
+image = filters.gaussian(image, sigma=2)
+
+M, N = image.shape
+
+T = 5000 # nr of randomly drawn samples
+coords = np.column_stack([
+ np.random.randint(0, M, size=T),
+ np.random.randint(0, N, size=T)
+])
+
+
+# Use `coords` and `samples` to reconstruct the image
+coords = tuple(coords.T)
+samples = image[coords]
+```
+
+We can visualize the data below, to see how densely the image was sampled:
+
+```{code-cell} python
+# Visualize sampling
+
+sampled_image = np.zeros_like(image)
+sampled_image[coords] = image[coords]
+
+f, (ax0, ax1) = plt.subplots(1, 2, figsize=(20, 10))
+ax0.imshow(image, cmap='gray')
+ax1.imshow(sampled_image, cmap='gray');
+```
diff --git a/modules/other_libraries.md b/modules/other_libraries.md
new file mode 100644
index 0000000..5bb45f9
--- /dev/null
+++ b/modules/other_libraries.md
@@ -0,0 +1,334 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+# Interaction with other libraries
+
++++
+
+## Keras
+
+- It's a very romantic notion to think that we can come up with the best features
+ to model our world. That notion has now been dispelled.
+- Most *object detection/labeling/segmentation/classification* tasks now have
+ neural network equivalent algorithms that perform on-par with or better than
+ hand-crafted methods.
+- One library that gives Python users particularly easy access to deep learning is Keras: https://github.com/fchollet/keras/tree/master/examples (it works with both Theano and TensorFlow).
+- **At SciPy2017:** "Fully Convolutional Networks for Image Segmentation", Daniil Pakhomov, SciPy2017 (Friday 2:30pm)
+ - Particularly interesting, because such networks can be applied to images of any size
+ - ... and because Daniil is a scikit-image contributor ;)
+
++++
+
+### Configurations
+
+From http://www.asimovinstitute.org/neural-network-zoo/:
+
+
+
+E.g., see how to fine tune a model on top of InceptionV3:
+
+
+
+- https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
+
+
+- https://github.com/fchollet/keras/tree/master/examples
+- https://keras.io/scikit-learn-api/
+
+
+- In the Keras docs, you may read about `image_data_format`. By default, this is `channels_last`, which is
+compatible with scikit-image's storage of `(rows, cols, ch)` (see the quick check below).
+
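+A quick way to check this (a minimal sketch, assuming Keras is installed with its default configuration):
+
+```{code-cell} python
+from keras import backend as K
+from skimage import data
+
+print(K.image_data_format())   # typically 'channels_last'
+
+image = data.chelsea()
+print(image.shape)             # (rows, cols, channels), i.e. channels last
+
+# Keras models additionally expect a leading batch dimension:
+print(image[None, ...].shape)
+```
+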
+```{code-cell} python
+import numpy as np
+from keras.models import Sequential
+from keras.layers import Dense, Dropout
+
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+## Generate dummy data
+#X_train = np.random.random((1000, 2))
+#y_train = np.random.randint(2, size=(1000, 1))
+#X_test = np.random.random((100, 2))
+#y_test = np.random.randint(2, size=(100, 1))
+
+## Generate dummy data with some structure
+
+from sklearn import datasets
+from sklearn.model_selection import train_test_split
+
+X, y = datasets.make_classification(n_features=2, n_samples=2000, n_redundant=0, n_informative=1,
+ n_clusters_per_class=1, random_state=42)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
+
+model = Sequential()
+model.add(Dense(64, input_dim=2, activation='relu'))
+model.add(Dropout(0.5))
+model.add(Dense(64, activation='relu'))
+model.add(Dropout(0.5))
+model.add(Dense(1, activation='sigmoid'))
+
+model.compile(loss='binary_crossentropy',
+ optimizer='rmsprop',
+ metrics=['accuracy'])
+
+model.fit(X_train, y_train,
+ epochs=20,
+ batch_size=128)
+score = model.evaluate(X_test, y_test, batch_size=128)
+
+print('\n\nAccuracy:', score[1]);
+```
+
+```{code-cell} python
+from sklearn.ensemble import RandomForestClassifier
+```
+
+```{code-cell} python
+rf = RandomForestClassifier()
+rf.fit(X_train, y_train)
+rf.score(X_test, y_test)
+```
+
+```{code-cell} python
+f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(15, 5))
+
+mask = (y_train == 0)
+ax0.plot(X_train[mask, 0], X_train[mask, 1], 'b.')
+ax0.plot(X_train[~mask, 0], X_train[~mask, 1], 'r.')
+ax0.set_title('True Labels')
+
+y_nn = model.predict_classes(X_test).flatten()
+mask = (y_nn == 0)
+ax1.plot(X_test[mask, 0], X_test[mask, 1], 'b.')
+ax1.plot(X_test[~mask, 0], X_test[~mask, 1], 'r.')
+ax1.set_title('Labels by neural net')
+
+y_rf = rf.predict(X_test)
+mask = (y_rf == 0)
+ax2.plot(X_test[mask, 0], X_test[mask, 1], 'b.')
+ax2.plot(X_test[~mask, 0], X_test[~mask, 1], 'r.');
+ax2.set_title('Labels by random forest')
+```
+
+```{code-cell} python
+from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
+net = InceptionV3()
+```
+
+```{code-cell} python
+from skimage import transform
+
+def inception_predict(image):
+ # Rescale image to 299x299, as required by InceptionV3
+ image_prep = transform.resize(image, (299, 299, 3), mode='reflect')
+
+ # Scale image values to [-1, 1], as required by InceptionV3
+ image_prep = (img_as_float(image_prep) - 0.5) * 2
+
+ predictions = decode_predictions(
+ net.predict(image_prep[None, ...])
+ )
+
+ plt.imshow(image, cmap='gray')
+
+ for pred in predictions[0]:
+ (n, klass, prob) = pred
+ print(f'{klass:>15} ({prob:.3f})')
+```
+
+```{code-cell} python
+from skimage import data, img_as_float
+inception_predict(data.chelsea())
+```
+
+```{code-cell} python
+inception_predict(data.camera())
+```
+
+```{code-cell} python
+inception_predict(data.coffee())
+```
+
+You can fine-tune Inception to classify your own classes, as described at
+
+https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
+
++++
+
+## SciPy: LowLevelCallable
+
+https://ilovesymposia.com/2017/03/12/scipys-new-lowlevelcallable-is-a-game-changer/
+
+```{code-cell} python
+import numpy as np
+image = np.random.random((512, 512))
+
+footprint = np.array([[0, 1, 0],
+ [1, 1, 1],
+ [0, 1, 0]], dtype=bool)
+```
+
+```{code-cell} python
+from scipy import ndimage as ndi
+%timeit ndi.grey_erosion(image, footprint=footprint)
+```
+
+```{code-cell} python
+%timeit ndi.generic_filter(image, np.min, footprint=footprint)
+```
+
+```{code-cell} python
+# Timings measured on the presenter's machine; substitute the numbers you observed above
+f'Slowdown is {825 / 2.85} times'
+```
+
+```{code-cell} python
+%load_ext Cython
+```
+
+```{code-cell} python
+%%cython --name=test9
+
+from libc.stdint cimport intptr_t
+from numpy.math cimport INFINITY
+
+cdef api int erosion_kernel(double* input_arr_1d, intptr_t filter_size,
+ double* return_value, void* user_data):
+
+ cdef:
+ double[:] input_arr
+ ssize_t i
+
+ return_value[0] = INFINITY
+
+ for i in range(filter_size):
+ if input_arr_1d[i] < return_value[0]:
+ return_value[0] = input_arr_1d[i]
+
+ return 1
+```
+
+```{code-cell} python
+from scipy import LowLevelCallable, ndimage
+import sys
+
+def erosion_fast(image, footprint):
+ out = ndimage.generic_filter(
+ image,
+ LowLevelCallable.from_cython(sys.modules['test9'], name='erosion_kernel'),
+ footprint=footprint
+ )
+ return out
+```
+
+```{code-cell} python
+np.sum(
+ np.abs(
+ erosion_fast(image, footprint=footprint)
+ - ndi.generic_filter(image, np.min, footprint=footprint)
+ )
+)
+```
+
+```{code-cell} python
+%timeit erosion_fast(image, footprint=footprint)
+```
+
+```{code-cell} python
+!pip install numba
+```
+
+```{code-cell} python
+# Taken from Juan Nunez-Iglesias's blog post:
+# https://ilovesymposia.com/2017/03/12/scipys-new-lowlevelcallable-is-a-game-changer/
+
+import numba
+from numba import cfunc, carray
+from numba.types import intc, CPointer, float64, intp, voidptr
+from scipy import LowLevelCallable
+
+def jit_filter_function(filter_function):
+ jitted_function = numba.jit(filter_function, nopython=True)
+
+ @cfunc(intc(CPointer(float64), intp, CPointer(float64), voidptr))
+ def wrapped(values_ptr, len_values, result, data):
+ values = carray(values_ptr, (len_values,), dtype=float64)
+ result[0] = jitted_function(values)
+ return 1
+
+ return LowLevelCallable(wrapped.ctypes)
+```
+
+```{code-cell} python
+@jit_filter_function
+def fmin(values):
+ result = np.inf
+ for v in values:
+ if v < result:
+ result = v
+ return result
+```
+
+```{code-cell} python
+%timeit ndi.generic_filter(image, fmin, footprint=footprint)
+```
+
+## Parallel and batch processing
+
++++
+
+[Joblib](https://pythonhosted.org/joblib/) (developed by scikit-learn) is used for:
+
+
+1. transparent disk-caching of the output values and lazy re-evaluation (memoize pattern)
+2. easy simple parallel computing
+3. logging and tracing of the execution
+
+```{code-cell} python
+from joblib import Memory
+mem = Memory(cachedir='/tmp/joblib')
+```
+
+```{code-cell} python
+from skimage import segmentation
+
+@mem.cache
+def cached_slic(image):
+ return segmentation.slic(image)
+```
+
+```{code-cell} python
+from skimage import io
+large_image = io.imread('../images/Bells-Beach.jpg')
+```
+
+```{code-cell} python
+%time segmentation.slic(large_image)
+```
+
+```{code-cell} python
+%time cached_slic(large_image)
+```
+
+```{code-cell} python
+%time cached_slic(large_image)
+```
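+
+Joblib's parallel interface is just as small. A minimal sketch (the number of jobs and the sigma values are arbitrary choices):
+
+```{code-cell} python
+from joblib import Parallel, delayed
+from skimage import filters
+
+# Smooth the same image at several scales, spreading the work over two processes
+sigmas = [1, 2, 4, 8]
+smoothed = Parallel(n_jobs=2)(
+    delayed(filters.gaussian)(large_image, sigma=s, multichannel=True)
+    for s in sigmas
+)
+len(smoothed)
+```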
+
+[Dask](https://dask.pydata.org) is a parallel computing library. It has two components:
+
+- Dynamic task scheduling optimized for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads.
+- “Big Data” collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of the dynamic task schedulers.
+- See Matt Rocklin's [blogpost](http://matthewrocklin.com/blog/work/2017/01/17/dask-images) for a more detailed example
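+
+A minimal sketch of the array interface (assuming ``dask`` is installed; the chunk size and ``depth`` are arbitrary choices):
+
+```{code-cell} python
+import dask.array as da
+from skimage import color, filters
+
+# Work on a grayscale copy so that the chunks are 2-D
+gray = color.rgb2gray(large_image)
+dask_image = da.from_array(gray, chunks=(512, 512))
+
+# map_overlap shares a small margin between chunks, so the filter
+# behaves sensibly near chunk boundaries
+smoothed = dask_image.map_overlap(filters.gaussian, depth=8, boundary='reflect', sigma=3)
+
+result = smoothed.compute()
+result.shape
+```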
diff --git a/modules/skdemo b/modules/skdemo
new file mode 120000
index 0000000..36104eb
--- /dev/null
+++ b/modules/skdemo
@@ -0,0 +1 @@
+../lectures/skdemo
\ No newline at end of file
diff --git a/modules/solutions_012.md b/modules/solutions_012.md
new file mode 100644
index 0000000..fd6c122
--- /dev/null
+++ b/modules/solutions_012.md
@@ -0,0 +1,418 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+from __future__ import division, print_function
+%matplotlib inline
+```
+
+# Color and exposure
+
+```{code-cell} python
+import matplotlib.pyplot as plt
+import numpy as np
+```
+
+## Exercise:
+
++++
+
+Create three images; each should display a red, green, or blue channel of the original image.
+
+```{code-cell} python
+import skdemo
+from skimage import data
+color_image = data.chelsea()
+
+red_image = np.zeros_like(color_image)
+green_image = np.zeros_like(color_image)
+blue_image = np.zeros_like(color_image)
+
+red_image[:, :, 0] = color_image[:, :, 0]
+green_image[:, :, 1] = color_image[:, :, 1]
+blue_image[:, :, 2] = color_image[:, :, 2]
+
+skdemo.imshow_all(red_image, green_image, blue_image)
+```
+
+Alternatively, we can "unpack" our RGB image into red, green, and blue channels using `np.rollaxis`
+
+```{code-cell} python
+np.rollaxis(color_image, axis=-1).shape
+```
+
+The `axis` argument tells `rollaxis` which axis to... ahem... roll to the front.
+
+```{code-cell} python
+red_image = np.zeros_like(color_image)
+green_image = np.zeros_like(color_image)
+blue_image = np.zeros_like(color_image)
+
+r, g, b = np.rollaxis(color_image, axis=-1)
+red_image[:, :, 0] = r
+green_image[:, :, 1] = g
+blue_image[:, :, 2] = b
+
+skdemo.imshow_all(red_image, green_image, blue_image)
+```
+
+## Histograms
+
++++
+
+## Question:
+
++++
+
+The following attempt to plot a histogram does not work:
+
+```python
+plt.hist(color_image);
+```
+
+The code above raises the following error:
+
+> ValueError: x must be 1D or 2D
+
+As the error suggests, a 3D color image will not work directly with `plt.hist`; we first need to flatten it into a 1D array of pixel values:
+
+```{code-cell} python
+
+plt.hist(color_image.flatten());
+```
+
+These bins are really coarse, so let's use the `bins` argument to `plt.hist` to get a better result:
+
+```{code-cell} python
+plt.hist(color_image.flatten(), bins=100);
+```
+
+Why does that look so strange? (Think about it before continuing.)
+
+The pixel values are integers, and we're dividing up the data into 100 bins. If the number of bins doesn't evenly divide the range of integer values, then some bins will contain more integer values than others. Let's take a look at the range of values in the image:
+
+```{code-cell} python
+np.ptp(color_image)
+```
+
+`np.ptp` gives the difference between the min and max values. If we used 115 bins, every bin would cover approximately 2 integer values, and we should get a good-looking histogram. If we use a bit fewer bins, then some bins will cover 3 integer values instead of 2; if we go a bit higher, then some bins will cover fewer integer values than others.
+
+```{code-cell} python
+fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
+
+for ax, bins in zip(axes, (100, 115, 130)):
+ ax.hist(color_image.flatten(), bins=bins);
+ ax.set_ylim(ymax=8000)
+```
+
+## Exercise:
+
++++
+
+Create a function to **plot a histogram for each color channel** in a single plot.
+
+```{code-cell} python
+for color, channel in zip('rgb', np.rollaxis(color_image, axis=-1)):
+ plt.hist(channel.flatten(), color=color, alpha=0.5)
+```
+
+Bonus: Instead of using `plt.hist`, use `plt.plot` or `plt.fill_between` in combination with `histogram` from `skimage.exposure`.
+
+```{code-cell} python
+from skimage import exposure
+exposure.histogram?
+```
+
+```{code-cell} python
+for color, channel in zip('rgb', np.rollaxis(color_image, axis=-1)):
+ hist, bin_centers = exposure.histogram(channel)
+ plt.fill_between(bin_centers, hist, color=color, alpha=0.3)
+```
+
+## Color spaces
+
++++
+
+## Exercise:
+
++++
+
+Use the CIELAB color space to **isolate the eyes** in the `chelsea` image. Plot the L, a, b components to get a feel for what they do, and see the [wikipedia page](https://en.wikipedia.org/wiki/Lab_color_space) for more info. Bonus: **Isolate the nose** too.
+
+```{code-cell} python
+from skimage import color
+
+lab_image = color.rgb2lab(color_image)
+```
+
+```{code-cell} python
+luminance, a, b = np.rollaxis(lab_image, axis=-1)
+titles = ['luminance', 'a-component', 'b-component']
+skdemo.imshow_all(luminance, a, b, titles=titles)
+```
+
+Notice the eyes are really dark in the middle image, i.e. the "A"-component of the LAB color space. We should use that for thresholding. We don't want to just guess values for thresholding; instead, let's look at a histogram:
+
+```{code-cell} python
+skdemo.imshow_with_histogram(lab_image);
+```
+
+I've cheated here and reused our RGB histogram for LAB space. The displayed image is wrong because the image isn't RGB, but the histograms are still valid. Note that we're plotting in RGB order, so the "A" component here is green. Since the eyes are very dark in the "A" image above, we'd like to select the left tail of the green histogram---roughly at 0:
+
+```{code-cell} python
+plt.imshow(a < 0)
+```
+
+To get the nose, notice that it's quite bright in the "A" component, so let's grab the right tail of the green histogram (> 30).
+
+```{code-cell} python
+plt.imshow(a > 30)
+```
+
+Combining those two, plus an extra bit of thresholding to blacken the pupil, gives:
+
+```{code-cell} python
+eyes = (a < 0) & (b > 5)
+nose = a > 30
+plt.imshow(eyes | nose)
+```
+
+Note that we want pixels that are either part of the eyes or part of the nose; you might be tempted to write `eyes & nose`, but that would only keep pixels that belong to both the eyes and the nose, which is an empty set.
+
++++
+
+# Image filtering
+
+```{code-cell} python
+# Replicate image from demo
+image = data.camera()
+pixelated = image[::10, ::10]
+```
+
+In the image filtering section, our first attempt at applying a difference filter didn't produce good results
+
+```{code-cell} python
+horizontal_edges = pixelated[1:, :] - pixelated[:-1, :]
+vertical_edges = pixelated[:, 1:] - pixelated[:, :-1]
+skdemo.imshow_all(horizontal_edges, vertical_edges)
+```
+
+The reason for the noise is that these images are unsigned, 8-bit integers:
+
+```{code-cell} python
+print(pixelated.dtype, horizontal_edges.dtype, vertical_edges.dtype)
+```
+
+If you subtract two values and the result is negative, then that value wraps around (underflow); thus, -1 becomes 255. Similarly, values exceeding 255 wrap around (overflow), so that 256 becomes 0.
+
+To fix this, we'll convert to floating-point images:
+
+```{code-cell} python
+from skimage import img_as_float
+pixelated_float = img_as_float(pixelated)
+horizontal_edges = pixelated_float[1:, :] - pixelated_float[:-1, :]
+vertical_edges = pixelated_float[:, 1:] - pixelated_float[:, :-1]
+skdemo.imshow_all(horizontal_edges, vertical_edges)
+```
+
+## Exercise:
+
++++
+
+Create a simple difference filter to **find the horizontal or vertical edges** of an image. Try to ensure that the filtering operation doesn't shift the edge position preferentially. (Don't use slicing to produce the difference image; use convolution.)
+
+```{code-cell} python
+def imshow_edges(image, horizontal_edge_kernel, vertical_edge_kernel):
+ horizontal_edges = convolve(image, horizontal_edge_kernel)
+ vertical_edges = convolve(image, vertical_edge_kernel)
+ skdemo.imshow_all(horizontal_edges, vertical_edges)
+```
+
+We can emulate the simple differencing operation that we did earlier with the following edge kernels:
+
+```{code-cell} python
+from skimage import img_as_float
+from scipy.ndimage import convolve
+
+horizontal_edge_kernel = np.array([[1], [-1]])
+vertical_edge_kernel = np.array([[1, -1]])
+
+# As discussed earlier, use a floating-point image to prevent overflow
+image = img_as_float(pixelated)
+imshow_edges(image, horizontal_edge_kernel, vertical_edge_kernel)
+```
+
+But, as we said earlier, this tends to skew the edges toward one corner of the image. To preserve the position of edge points, we can create a kernel that's centered on each pixel:
+
+```{code-cell} python
+horizontal_edge_kernel = np.array([[1], [0], [-1]])
+vertical_edge_kernel = np.array([[1, 0, -1]])
+
+imshow_edges(image, horizontal_edge_kernel, vertical_edge_kernel)
+```
+
+We can verify that this doesn't skew the edges by looking at the bright square from earlier
+
+```{code-cell} python
+# Replicate image from demo
+bright_square = np.zeros((7, 7), dtype=float)
+bright_square[2:5, 2:5] = 1
+
+image = bright_square
+horizontal_edges = convolve(image, horizontal_edge_kernel)
+vertical_edges = convolve(image, vertical_edge_kernel)
+skdemo.imshow_all(horizontal_edges, vertical_edges)
+```
+
+## Exercise:
+
++++
+
+Using a couple of the filters in the `filters` module, **find the direction of the maximum gradient** in an image.
+
++++
+
+**NOTE:** When developing this exercise, it was assumed that `vsobel` and `hsobel` from `skimage.filters` returned the response from the vertical-edge and horizontal-edge Sobel kernels, pictured in the links below:
+
+* http://scikit-image.org/docs/dev/api/skimage.filters.html#vsobel
+* http://scikit-image.org/docs/dev/api/skimage.filters.html#hsobel
+
+As described in those docs, however, the **absolute value** of the response is returned. While this result is typically preferred, it's not in this case, since the sign of the response contributes to the gradient angle.
+
+To get around this oversight, we'll copy the edge kernels from the documentation:
+
+```{code-cell} python
+from skimage import filters
+
+# Kernel copied from `vsobel` docstring.
+# Vertical edge-response is the *horizontal* gradient.
+dx_kernel = np.array([
+ [1, 0, -1],
+ [2, 0, -2],
+ [1, 0, -1],
+])
+# Rotate array by 90 degrees to get y-gradient kernel
+dy_kernel = np.rot90(dx_kernel)
+```
+
+```{code-cell} python
+image_45 = np.tril(-np.ones([7, 7]))
+image_135 = np.rot90(image_45)
+skdemo.imshow_all(image_45, image_135)
+```
+
+```{code-cell} python
+def angle_image(image):
+ image = img_as_float(image)
+ dy = convolve(image, dy_kernel)
+ dx = convolve(image, dx_kernel)
+
+ angle = np.arctan2(dy, dx)
+ return np.degrees(angle)
+```
+
+```{code-cell} python
+%precision 0
+angle_image(image_45)
+```
+
+```{code-cell} python
+angle_image(image_135)
+```
+
+# Feature detection
+
++++
+
+## Exercise:
+
++++
+
+Use the response from the Hough transform to **detect the position and radius of each coin**.
+
++++
+
+First, let's replicate the data from the tutorial
+
+```{code-cell} python
+from skimage.transform import hough_circle
+from skimage import feature
+
+image = data.coins()[0:95, 180:370]
+edges = feature.canny(image, sigma=3, low_threshold=10, high_threshold=60)
+hough_radii = np.arange(15, 30, 2)
+hough_response = hough_circle(edges, hough_radii)
+```
+
+```{code-cell} python
+from skimage.feature import peak_local_max
+
+centers = []
+likelihood = []
+
+for h in hough_response:
+ peaks = peak_local_max(h)
+ centers.extend(peaks)
+ likelihood.extend(h[peaks[:, 0], peaks[:, 1]])
+
+for i in np.argsort(likelihood)[-3:]:
+ row, column = centers[i]
+ plt.plot(column, row, 'ro')
+ plt.imshow(image)
+```
+
+```{code-cell} python
+from skimage.feature import peak_local_max
+
+radii = []
+centers = []
+likelihood = []
+for h, r in zip(hough_response, hough_radii):
+ peaks = peak_local_max(h)
+ radii.extend([r] * len(peaks))
+ centers.extend(peaks)
+ likelihood.extend(h[peaks[:, 0], peaks[:, 1]])
+```
+
+```{code-cell} python
+from skimage.draw import circle_perimeter
+from skimage.color import gray2rgb
+
+drawn_image = gray2rgb(image)
+plt.imshow(drawn_image)
+for i in np.argsort(likelihood)[-3:]:
+ row, column = centers[i]
+ rr, cc = circle_perimeter(row, column, radii[i])
+ drawn_image[rr, cc, 1:] = 0
+ plt.plot(column, row, 'ro')
+```
+
+```{code-cell} python
+circle_perimeter?
+```
+
+```{code-cell} python
+
+```
diff --git a/modules/stackoverflow_challenges.md b/modules/stackoverflow_challenges.md
new file mode 100644
index 0000000..640d506
--- /dev/null
+++ b/modules/stackoverflow_challenges.md
@@ -0,0 +1,146 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+# StackOverflow Problems
+
++++
+
+### Real-world problems to test your skills on!
+
+```{code-cell} python
+%matplotlib inline
+```
+
+```{code-cell} python
+import matplotlib.pyplot as plt
+import numpy as np
+from skimage import (filters, io, color, exposure, feature,
+ segmentation, morphology, img_as_float)
+```
+
+# Parameters of a pill
+
+(Based on StackOverflow http://stackoverflow.com/questions/28281742/fitting-a-circle-to-a-binary-image)
+
+
+Consider a pill from the [NLM Pill Image Recognition Pilot](http://pir.nlm.nih.gov/pilot/instructions.html) (``../images/round_pill.jpg``). Fit a circle to the pill outline and compute its area.
+
+
+
+*Hints:*
+
+1. Equalize (``exposure.equalize_*``)
+2. Detect edges (``filter.canny`` or ``feature.canny``--depending on your version)
+3. Fit the ``CircleModel`` using ``measure.ransac``.
+
++++
+
+### Alternative: morphological snakes
+
+**NOTE**: this is expensive to compute, so may take a while to execute
+
+```{code-cell} python
+from skimage import restoration
+
+# Load the pill image and convert to grayscale
+image = io.imread('../images/round_pill.jpg')
+pill = color.rgb2gray(image)
+pill = restoration.denoise_nl_means(pill, multichannel=False)
+
+# Initial level set for the morphological snake
+level_set = segmentation.circle_level_set(pill.shape, radius=200)
+ls = segmentation.morphological_chan_vese(pill, 80, init_level_set=level_set, smoothing=3)
+
+fig, ax = plt.subplots(1, 1, figsize=(8, 8))
+
+ax.imshow(pill, cmap="gray")
+ax.set_axis_off()
+ax.contour(ls, [0.5], colors='r');
+```
+
+# Counting coins
+
+Based on StackOverflow http://stackoverflow.com/questions/28242274/count-number-of-objects-using-watershed-algorithm-scikit-image
+
+Consider the coins image from the scikit-image example dataset, shown below.
+Write a function to count the number of coins.
+
+The procedure outlined here is a bit simpler than in the notebook lecture (and works just fine!)
+
+
+
+*Hint:*
+
+1. Equalize
+2. Threshold (``filters.threshold_otsu``)
+3. Remove objects touching boundary (``segmentation.clear_border``)
+4. Apply morphological closing (``morphology.closing``)
+5. Remove small objects (``measure.regionprops``)
+6. Visualize (potentially using ``color.label2rgb``)
+
+```{code-cell} python
+from skimage import data
+fig, ax = plt.subplots()
+ax.imshow(data.coins(), cmap='gray');
+```
+
+# Snakes
+
+Based on https://stackoverflow.com/q/8686926/214686
+
+
+
+Consider the zig-zaggy snakes on the left (``../images/snakes.png``). Write some code to find the begin- and end-points of each.
+
+
+
+*Hints:*
+
+1. Threshold the image to turn it into "black and white"
+2. Not all lines are a single pixel thick. Use skeletonization to thin them out (``morphology.skeletonize``)
+3. Locate all snake endpoints (I used a combination of ``scipy.signal.convolve2d`` [find all points with only one neighbor], and ``np.logical_and`` [which of those points lie on the snake?] to do that, but there are many other ways).
+
++++
+
+# M&Ms
+
+How many blue M&Ms are there in this image (`../images/mm.jpg`)?
+
+
+
+Steps:
+
+1. Denoise the image (using, e.g., `restoration.denoise_nl_means`)
+2. Calculate how far each pixel is away from pure blue
+3. Segment this distance map to give a "pill mask"
+4. Fill in any holes in that mask, using `scipy.ndimage.binary_fill_holes`
+5. Use watershed segmentation to split apart any M&Ms that were joined, as described in http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_watershed.html
+
+*Alternative approach:*
+
+- http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
+
++++
+
+# Viscous fingers
+
+Based on StackOverflow: http://stackoverflow.com/questions/23121416/long-boundary-detection-in-a-noisy-image
+
+
+
+Consider the fluid experiment on the right (`../images/fingers.png`). Determine any kind of meaningful boundary in this noisy image.
+
+
+
+*Hints:*
+
+1. Convert to grayscale
+2. Try edge detection (``feature.canny``)
+3. If edge detection fails, denoising is needed (try ``restoration.denoise_tv_bregman``)
+4. Try edge detection (``feature.canny``)
diff --git a/modules/tour_of_skimage.md b/modules/tour_of_skimage.md
new file mode 100644
index 0000000..d708867
--- /dev/null
+++ b/modules/tour_of_skimage.md
@@ -0,0 +1,1105 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.10.3
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{code-cell} python
+%matplotlib inline
+
+import numpy as np
+import matplotlib.pyplot as plt
+```
+
+# scikit-image: a tour
+
+There are many tools and utilities in the package, far too many to cover in a tutorial. This notebook is designed as a road map, to guide you as you explore or search for additional tools for your applications. *This is intended as a guide, not an exhaustive list*.
+
+Each submodule of scikit-image has its own section, which you can navigate to below in the table of contents.
+
++++
+
+## Table of Contents
+
+* [skimage.color](#color)
+* [skimage.data](#data)
+* [skimage.draw](#draw)
+* [skimage.exposure](#exposure)
+* [skimage.feature](#feature)
+* [skimage.filters](#filters)
+* [skimage.future](#future)
+* [skimage.graph](#graph)
+* [skimage.io](#io)
+* [skimage.measure](#measure)
+* [skimage.morphology](#morphology)
+* [skimage.restoration](#restoration)
+* [skimage.segmentation](#segmentation)
+* [skimage.transform](#transform)
+* [skimage.util](#util)
+
++++
+
+## [skimage.color](https://scikit-image.org/docs/stable/api/skimage.color.html) - color conversion
+
+The `color` submodule includes routines to convert to and from common color representations. For example, RGB (Red, Green, and Blue) can be converted into many other representations.
+
+```{code-cell} python
+import skimage.color as color
+```
+
+```{code-cell} python
+# Tab complete to see available functions in the color submodule
+color.rgb2
+color.
+```
+
+### Example: conversion to grayscale
+
+```{code-cell} python
+from skimage import data
+from skimage import color
+
+original = data.astronaut()
+grayscale = color.rgb2gray(original)
+
+# Plot the results
+fig, axes = plt.subplots(1, 2, figsize=(8, 4))
+ax = axes.ravel()
+
+ax[0].imshow(original)
+ax[0].set_title("Original")
+ax[0].axis('off')
+ax[1].imshow(grayscale, cmap='gray')
+ax[1].set_title("Grayscale")
+ax[1].axis('off')
+
+fig.tight_layout()
+plt.show();
+```
+
+### Example: conversion to HSV
+
++++
+
+Usually, objects in images have distinct colors (hues) and luminosities, so that these features can be used to separate different areas of the image. In the RGB representation the hue and the luminosity are expressed as a linear combination of the R,G,B channels, whereas they correspond to single channels of the HSV image (the Hue and the Value channels). A simple segmentation of the image can then be effectively performed by a mere thresholding of the HSV channels. See below link for additional details.
+
+https://en.wikipedia.org/wiki/HSL_and_HSV
+
+We first load the RGB image and extract the Hue and Value channels:
+
+```{code-cell} python
+from skimage import data
+from skimage.color import rgb2hsv
+
+rgb_img = data.coffee()
+hsv_img = rgb2hsv(rgb_img)
+hue_img = hsv_img[:, :, 0]
+value_img = hsv_img[:, :, 2]
+
+fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(8, 2))
+
+ax0.imshow(rgb_img)
+ax0.set_title("RGB image")
+ax0.axis('off')
+ax1.imshow(hue_img, cmap='hsv')
+ax1.set_title("Hue channel")
+ax1.axis('off')
+ax2.imshow(value_img)
+ax2.set_title("Value channel")
+ax2.axis('off')
+
+fig.tight_layout();
+```
+
+The cup and saucer have a Hue distinct from the remainder of the image, which can be isolated by thresholding
+
+```{code-cell} python
+hue_threshold = 0.04
+binary_img = hue_img > hue_threshold
+
+fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(8, 3))
+
+ax0.hist(hue_img.ravel(), 512)
+ax0.set_title("Histogram of the Hue channel with threshold")
+ax0.axvline(x=hue_threshold, color='r', linestyle='dashed', linewidth=2)
+ax0.set_xbound(0, 0.12)
+ax1.imshow(binary_img)
+ax1.set_title("Hue-thresholded image")
+ax1.axis('off')
+
+fig.tight_layout();
+```
+
+An additional threshold in the value channel can remove most of the shadow:
+
+```{code-cell} python
+fig, ax0 = plt.subplots(figsize=(4, 3))
+
+value_threshold = 0.10
+binary_img = (hue_img > hue_threshold) | (value_img < value_threshold)
+
+ax0.imshow(binary_img)
+ax0.set_title("Hue and value thresholded image")
+ax0.axis('off')
+
+fig.tight_layout()
+plt.show();
+```
+
+#### Additional color conversion examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/#manipulating-exposure-and-color-channels).
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+## [skimage.data](https://scikit-image.org/docs/stable/api/skimage.data.html) - test images
+
++++
+
+The `data` submodule includes standard test images useful for examples and testing the package. These images are shipped with the package.
+
+There are scientific images, general test images, and a stereoscopic image.
+
+```{code-cell} python
+from skimage import data
+```
+
+```{code-cell} python
+# Explore with tab completion
+example_image = data.camera()
+```
+
+```{code-cell} python
+fig, ax = plt.subplots(figsize=(6, 6))
+ax.imshow(example_image)
+ax.axis('off');
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+-----------------------
+
++++
+
+## [skimage.draw](https://scikit-image.org/docs/stable/api/skimage.draw.html) - drawing primitives on an image
+
+The majority of functions in this submodule return the *coordinates* of the specified shape/object in the image, rather than drawing it on the image directly. The coordinates can then be used to index into the image and set pixel values, or passed together with the image to the convenience function `draw.set_color`.
+
+Lines and circles can be drawn with antialiasing (these functions end with the suffix ``_aa``).
+
+At the current time text is not supported; other libraries including matplotlib have robust support for overlaying text.
+
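+For example, ``draw.set_color`` applies returned coordinates to an image directly. A minimal sketch:
+
+```{code-cell} python
+from skimage import draw
+
+canvas = np.zeros((100, 100, 3), dtype=np.uint8)
+
+# Most draw functions return coordinates rather than modifying the image
+rr, cc = draw.circle_perimeter(50, 50, 30)
+
+# set_color paints those coordinates onto the image
+draw.set_color(canvas, (rr, cc), color=(255, 0, 0))
+plt.imshow(canvas);
+```
+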
+```{code-cell} python
+from skimage import draw
+```
+
+```{code-cell} python
+# Tab complete to see available options
+draw.
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+## Example: drawing shapes
+
+```{code-cell} python
+fig, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(10, 6))
+
+img = np.zeros((500, 500, 3), dtype=np.float64)
+
+# draw line (values of a float image should lie in [0, 1])
+rr, cc = draw.line(120, 123, 20, 400)
+img[rr, cc, 0] = 1
+
+# fill polygon
+poly = np.array((
+ (300, 300),
+ (480, 320),
+ (380, 430),
+ (220, 590),
+ (300, 300),
+))
+rr, cc = draw.polygon(poly[:, 0], poly[:, 1], img.shape)
+img[rr, cc, 1] = 1
+
+# fill circle
+rr, cc = draw.circle(200, 200, 100, img.shape)
+img[rr, cc, :] = (1, 1, 0)
+
+# fill ellipse
+rr, cc = draw.ellipse(300, 300, 100, 200, img.shape)
+img[rr, cc, 2] = 1
+
+# circle
+rr, cc = draw.circle_perimeter(120, 400, 15)
+img[rr, cc, :] = (1, 0, 0)
+
+# Bezier curve
+rr, cc = draw.bezier_curve(70, 100, 10, 10, 150, 100, 1)
+img[rr, cc, :] = (1, 0, 0)
+
+# ellipses
+rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=np.pi / 4.)
+img[rr, cc, :] = (1, 0, 1)
+rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=-np.pi / 4.)
+img[rr, cc, :] = (0, 0, 1)
+rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=np.pi / 2.)
+img[rr, cc, :] = (1, 1, 1)
+
+ax1.imshow(img)
+ax1.set_title('No anti-aliasing')
+ax1.axis('off')
+
+
+img = np.zeros((100, 100), dtype=np.double)
+
+# anti-aliased line
+rr, cc, val = draw.line_aa(12, 12, 20, 50)
+img[rr, cc] = val
+
+# anti-aliased circle
+rr, cc, val = draw.circle_perimeter_aa(60, 40, 30)
+img[rr, cc] = val
+
+
+ax2.imshow(img, cmap=plt.cm.gray, interpolation='nearest')
+ax2.set_title('Anti-aliasing')
+ax2.axis('off');
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+-----------------------------------------
+
++++
+
+## [skimage.exposure](https://scikit-image.org/docs/stable/api/skimage.exposure.html) - evaluating or changing the exposure of an image
+
+One of the most common tools to evaluate exposure is the *histogram*, which plots the number of pixels at each value, ordered from lowest (dark) to highest (light). The function `exposure.histogram` differs from `numpy.histogram` in that there is no rebinning; each value along the x-axis is preserved.
+
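+A small comparison makes the difference concrete (a minimal sketch):
+
+```{code-cell} python
+from skimage import data, exposure
+
+camera = data.camera()
+
+counts, bin_centers = exposure.histogram(camera)
+np_counts, np_edges = np.histogram(camera, bins=10)
+
+# exposure.histogram keeps one bin per integer value; np.histogram rebins
+print(len(bin_centers), "bins from exposure.histogram")
+print(len(np_edges) - 1, "bins from np.histogram")
+```
+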
++++
+
+### Example: Histogram equalization
+
+```{code-cell} python
+from skimage import data, img_as_float
+from skimage import exposure
+
+
+def plot_img_and_hist(image, axes, bins=256):
+ """Plot an image along with its histogram and cumulative histogram.
+
+ """
+ image = img_as_float(image)
+ ax_img, ax_hist = axes
+ ax_cdf = ax_hist.twinx()
+
+ # Display image
+ ax_img.imshow(image, cmap=plt.cm.gray)
+ ax_img.set_axis_off()
+
+ # Display histogram
+ ax_hist.hist(image.ravel(), bins=bins, histtype='step', color='black')
+ ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
+ ax_hist.set_xlabel('Pixel intensity')
+ ax_hist.set_xlim(0, 1)
+ ax_hist.set_yticks([])
+
+ # Display cumulative distribution
+ img_cdf, bins = exposure.cumulative_distribution(image, bins)
+ ax_cdf.plot(bins, img_cdf, 'r')
+ ax_cdf.set_yticks([])
+
+ return ax_img, ax_hist, ax_cdf
+
+
+# Load an example image
+img = data.moon()
+
+# Contrast stretching
+p2, p98 = np.percentile(img, (2, 98))
+img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
+
+# Equalization
+img_eq = exposure.equalize_hist(img)
+
+# Adaptive Equalization
+img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
+
+# Display results
+fig = plt.figure(figsize=(8, 5))
+axes = np.zeros((2, 4), dtype=object)
+axes[0, 0] = fig.add_subplot(2, 4, 1)
+for i in range(1, 4):
+ axes[0, i] = fig.add_subplot(2, 4, 1+i,
+ sharex=axes[0,0], sharey=axes[0,0])
+for i in range(0, 4):
+ axes[1, i] = fig.add_subplot(2, 4, 5+i)
+
+ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
+ax_img.set_title('Low contrast image')
+
+y_min, y_max = ax_hist.get_ylim()
+ax_hist.set_ylabel('Number of pixels')
+ax_hist.set_yticks(np.linspace(0, y_max, 5))
+
+ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
+ax_img.set_title('Contrast stretch')
+
+ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
+ax_img.set_title('Histogram eq')
+
+ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
+ax_img.set_title('Adaptive eq')
+
+ax_cdf.set_ylabel('Fraction of total intensity')
+ax_cdf.set_yticks(np.linspace(0, 1, 5))
+
+# prevent overlap of y-axis labels
+fig.tight_layout();
+```
+
+```{code-cell} python
+# Explore with tab completion
+exposure.
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+#### Additional examples available in the [example gallery](https://scikit-image.org/docs/stable/auto_examples/#manipulating-exposure-and-color-channels)
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+----------------------
+
++++
+
+## [skimage.feature](https://scikit-image.org/docs/stable/api/skimage.feature.html) - extract features from an image
+
++++
+
+This submodule presents a diverse set of tools to identify or extract certain features from images, including tools for
+
+* Edge detection
+ * `feature.canny`
+* Corner detection
+ * `feature.corner_kitchen_rosenfeld`
+ * `feature.corner_harris`
+ * `feature.corner_shi_tomasi`
+ * `feature.corner_foerstner`
+ * `feature.corner_subpix`
+ * `feature.corner_moravec`
+ * `feature.corner_fast`
+ * `feature.corner_orientations`
+* Blob detection
+ * `feature.blob_dog`
+ * `feature.blob_doh`
+ * `feature.blob_log`
+* Texture
+ * `feature.greycomatrix`
+ * `feature.greycoprops`
+ * `feature.local_binary_pattern`
+ * `feature.multiblock_lbp`
+* Peak finding
+ * `feature.peak_local_max`
+* Object detection
+ * `feature.hog`
+ * `feature.match_template`
+* Stereoscopic depth estimation
+ * `feature.daisy`
+* Feature matching
+ * `feature.ORB`
+ * `feature.BRIEF`
+ * `feature.CENSURE`
+ * `feature.match_descriptors`
+ * `feature.plot_matches`
+
+```{code-cell} python
+from skimage import feature
+```
+
+```{code-cell} python
+# Explore with tab completion
+feature.
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+This is a large submodule. For brevity, here is a short example illustrating ORB feature matching; additional examples can be explored in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html#detection-of-features-and-objects).
+
+```{code-cell} python
+from skimage import data
+from skimage import transform as tf
+from skimage import feature
+from skimage.color import rgb2gray
+
+# Import the astronaut then warp/rotate the image
+img1 = rgb2gray(data.astronaut())
+img2 = tf.rotate(img1, 180)
+tform = tf.AffineTransform(scale=(1.3, 1.1), rotation=0.5,
+ translation=(0, -200))
+img3 = tf.warp(img1, tform)
+
+# Build ORB extractor and extract features
+descriptor_extractor = feature.ORB(n_keypoints=200)
+
+descriptor_extractor.detect_and_extract(img1)
+keypoints1 = descriptor_extractor.keypoints
+descriptors1 = descriptor_extractor.descriptors
+
+descriptor_extractor.detect_and_extract(img2)
+keypoints2 = descriptor_extractor.keypoints
+descriptors2 = descriptor_extractor.descriptors
+
+descriptor_extractor.detect_and_extract(img3)
+keypoints3 = descriptor_extractor.keypoints
+descriptors3 = descriptor_extractor.descriptors
+
+# Find matches between the extracted features
+matches12 = feature.match_descriptors(descriptors1, descriptors2, cross_check=True)
+matches13 = feature.match_descriptors(descriptors1, descriptors3, cross_check=True)
+
+# Plot the results
+fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10, 10))
+
+plt.gray()
+
+feature.plot_matches(ax[0], img1, img2, keypoints1, keypoints2, matches12)
+ax[0].axis('off')
+ax[0].set_title("Original Image vs. Transformed Image")
+
+feature.plot_matches(ax[1], img1, img3, keypoints1, keypoints3, matches13)
+ax[1].axis('off')
+ax[1].set_title("Original Image vs. Transformed Image");
+```
+
+#### Additional feature detection and extraction examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html#detection-of-features-and-objects).
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+---------------------------
+
++++
+
+## [skimage.filters](https://scikit-image.org/docs/stable/api/skimage.filters.html) - apply filters to an image
+
++++
+
+Filtering applies whole-image modifications such as sharpening or blurring. Thresholding methods also live in this submodule.
+
+Notable functions include (links to relevant gallery examples)
+
+* [Thresholding](https://scikit-image.org/docs/stable/auto_examples/applications/plot_thresholding.html)
+ * filters.threshold_* (multiple different functions with this prefix)
+ * skimage.filters.try_all_threshold to compare various methods
+* [Edge finding/enhancement](https://scikit-image.org/docs/stable/auto_examples/edges/plot_edge_filter.html)
+ * filters.sobel
+ * filters.prewitt
+ * filters.scharr
+ * filters.roberts
+ * filters.laplace
+ * filters.hessian
+* [Ridge filters](https://scikit-image.org/docs/stable/auto_examples/edges/plot_ridge_filter.html)
+ * filters.meijering
+ * filters.sato
+ * filters.frangi
+* Inverse filtering (see also [skimage.restoration](#restoration))
+ * filters.wiener
+ * filters.inverse
+* [Directional](https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_gabor.html)
+ * filters.gabor
+* Blurring/denoising
+ * filters.gaussian
+ * filters.median
+* [Sharpening](https://scikit-image.org/docs/stable/auto_examples/filters/plot_unsharp_mask.html)
+ * filters.unsharp_mask
+* Define your own
+ * filters.LPIFilter2D
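+
+As a quick, minimal illustration of the thresholding functions, here is an Otsu sketch (the camera test image is just a convenient choice):
+
+```{code-cell} python
+from skimage import data, filters
+
+camera = data.camera()
+# Compute a global threshold with Otsu's method, then binarize the image
+thresh = filters.threshold_otsu(camera)
+binary = camera > thresh
+```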
+
+```{code-cell} python
+from skimage import filters
+```
+
+```{code-cell} python
+# Explore with tab completion
+filters.
+```
+
+```{code-cell} python
+
+```
+
+```{code-cell} python
+
+```
+
+### Rank filters
+There is a sub-submodule, `skimage.filters.rank`, which contains rank filters. These filters are nonlinear and operate on the local histogram.
+
+To learn more about the rank filters, see the comprehensive [gallery example for rank filters](https://scikit-image.org/docs/stable/auto_examples/applications/plot_rank_filters.html).
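+
+A minimal rank-filter sketch, the local mean over a disk-shaped neighborhood (the disk radius is just an illustrative choice):
+
+```{code-cell} python
+from skimage import data
+from skimage.filters import rank
+from skimage.morphology import disk
+from skimage.util import img_as_ubyte
+
+camera = img_as_ubyte(data.camera())      # rank filters expect uint8/uint16 input
+local_mean = rank.mean(camera, disk(5))   # local mean within a disk of radius 5
+```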
+
++++
+
+#### Additional filtering examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html).
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+---------------------------
+
++++
+
+## [skimage.future](https://scikit-image.org/docs/stable/api/skimage.future.html) - stable code with unstable API
+
++++
+
+Bleeding-edge features that already work well and will be moved from here into the main package in future releases. Until then, their API may change.
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+------------------------------
+
++++
+
+## [skimage.graph](https://scikit-image.org/docs/stable/api/skimage.graph.html) - graph theory, minimum cost paths
+
++++
+
+Graph theory. Currently this submodule primarily deals with a constructed "cost" image, and how to find the minimum cost path through it, with constraints if desired.
+
+[The panorama tutorial lecture illustrates a real-world example.](./solutions/adv3_panorama-stitching-solution.ipynb)
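+
+As a minimal sketch (the tiny cost array below is purely illustrative), `graph.route_through_array` finds the cheapest path between two points in a cost image:
+
+```{code-cell} python
+import numpy as np
+from skimage import graph
+
+# A small synthetic cost image: value-1 cells are cheap to cross,
+# the value-1000 column acts as a costly "wall"
+costs = np.ones((5, 5))
+costs[1:4, 2] = 1000
+
+# Minimum-cost path from the top-left to the bottom-right corner
+path, total_cost = graph.route_through_array(costs, (0, 0), (4, 4))
+print(path, total_cost)
+```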
+
++++
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+------------------------
+
++++
+
+## [skimage.io](https://scikit-image.org/docs/stable/api/skimage.io.html) - utilities to read and write images in various formats
+
++++
+
+Reading your image and writing the results back out. There are multiple plugins available, which support multiple formats. The most commonly used functions include
+
+* io.imread - Read an image to a numpy array.
+* io.imsave - Write an image to disk.
+* io.imread_collection - Read multiple images matching a pattern (e.g. a glob such as `*.png`) into a collection
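+
+A minimal read/write round trip; the filename below is only a hypothetical, illustrative choice:
+
+```{code-cell} python
+from skimage import data, io
+
+camera = data.camera()
+# Write the image to disk, then read it back (illustrative filename)
+io.imsave('camera_copy.png', camera)
+round_trip = io.imread('camera_copy.png')
+```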
+
++++
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+------------------------------
+
++++
+
+## [skimage.measure](https://scikit-image.org/docs/stable/api/skimage.measure.html) - measuring image or region properties
+
++++
+
+Multiple algorithms to label images, or obtain information about discrete regions of an image.
+
+* Label an image
+ * measure.label
+
+
+* In a labeled image (image with discrete regions identified by unique integers, as returned by `label`), find various properties of the labeled regions; a short sketch follows this list. [**`regionprops` is extremely useful**](https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html)
+ * measure.regionprops
+
+
+* Finding paths from a 2D image, or isosurfaces from a 3D image
+ * measure.find_contours
+ * measure.marching_cubes (unified interface; formerly measure.marching_cubes_lewiner and measure.marching_cubes_classic)
+ * measure.mesh_surface_area (surface area of 3D mesh from marching cubes)
+
+
+* Quantify the difference between two whole images (often used in denoising or restoration)
+ * the former measure.compare_* functions, which now live in skimage.metrics (e.g. metrics.mean_squared_error, metrics.peak_signal_noise_ratio, metrics.structural_similarity)
+
+
+**RANDom Sample Consensus fitting (RANSAC)** - a powerful, robust approach to fitting a model to data. It exists here because its initial use was for fitting shapes, but it can also fit transforms.
+* measure.ransac
+* measure.CircleModel
+* measure.EllipseModel
+* measure.LineModelND
+
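+A minimal `label` / `regionprops` sketch on the coins test image (the Otsu threshold is just a convenient choice here):
+
+```{code-cell} python
+from skimage import data, filters, measure
+
+coins = data.coins()
+# Threshold (Otsu), label connected regions, then measure each region's properties
+binary = coins > filters.threshold_otsu(coins)
+labels = measure.label(binary)
+regions = measure.regionprops(labels)
+print(len(regions), 'regions; first region area:', regions[0].area)
+```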
+
+```{code-cell} python
+from skimage import measure
+```
+
+```{code-cell} python
+# Explore with tab completion
+measure.
+```
+
+```{code-cell} python
+# Room to explore
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+---------------------
+
++++
+
+## [skimage.morphology](https://scikit-image.org/docs/stable/api/skimage.morphology.html) - binary and grayscale morphology
+
++++
+
+Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image, such as boundaries, skeletons, etc. In any given technique, we probe an image with a small shape or template called a structuring element, which defines the region of interest or neighborhood around a pixel.
+
+```{code-cell} python
+from skimage import morphology as morph
+```
+
+```{code-cell} python
+# Explore with tab completion
+morph.
+```
+
+### Example: Flood filling
+
+Flood fill is an algorithm to iteratively identify and/or change adjacent values in an image based on their similarity to an initial seed point. The conceptual analogy is the ‘paint bucket’ tool in many graphic editors.
+
+The `flood` function returns the binary mask of the flooded area, while `flood_fill` returns a modified image. Both accept a `tolerance` keyword argument: adjacent pixels whose values lie within `tolerance` of the seed value are included in the fill.
+
++++
+
+Here we will experiment a bit on the cameraman, turning his coat from dark to light.
+
+```{code-cell} python
+from skimage import data
+from skimage import morphology as morph
+
+cameraman = data.camera()
+
+# Change the cameraman's coat from dark to light (255). The seed point is
+# chosen as (200, 100), a pixel inside the coat.
+light_coat = morph.flood_fill(cameraman, (200, 100), 255, tolerance=10)
+
+fig, ax = plt.subplots(ncols=2, figsize=(10, 5))
+
+ax[0].imshow(cameraman, cmap=plt.cm.gray)
+ax[0].set_title('Original')
+ax[0].axis('off')
+
+ax[1].imshow(light_coat, cmap=plt.cm.gray)
+ax[1].plot(100, 200, 'ro') # seed point
+ax[1].set_title('After flood fill')
+ax[1].axis('off');
+```
+
+### Example: Binary and grayscale morphology
+
+Here we outline the following basic morphological operations:
+
+1. Erosion
+2. Dilation
+3. Opening
+4. Closing
+5. White Tophat
+6. Black Tophat
+7. Skeletonize
+8. Convex Hull
+
+To get started, let’s load an image using `io.imread`. Note that morphology functions only work on gray-scale or binary images, so we set `as_gray=True`.
+
+```{code-cell} python
+import os
+from skimage.data import data_dir
+from skimage.util import img_as_ubyte
+from skimage import io
+
+orig_phantom = img_as_ubyte(io.imread(os.path.join(data_dir, "phantom.png"),
+ as_gray=True))
+fig, ax = plt.subplots(figsize=(5, 5))
+ax.imshow(orig_phantom, cmap=plt.cm.gray)
+ax.axis('off');
+```
+
+```{code-cell} python
+def plot_comparison(original, filtered, filter_name):
+
+ fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4), sharex=True,
+ sharey=True)
+ ax1.imshow(original, cmap=plt.cm.gray)
+ ax1.set_title('original')
+ ax1.axis('off')
+ ax2.imshow(filtered, cmap=plt.cm.gray)
+ ax2.set_title(filter_name)
+ ax2.axis('off')
+```
+
+### Erosion
+
+Morphological `erosion` sets a pixel at (i, j) to the minimum over all pixels in the neighborhood centered at (i, j). *Erosion shrinks bright regions and enlarges dark regions.*
+
+The structuring element, `selem`, passed to erosion is a boolean array that describes this neighborhood. Below, we use `disk` to create a circular structuring element, which we use for most of the following examples.
+
+```{code-cell} python
+from skimage import morphology as morph
+
+selem = morph.disk(6)
+eroded = morph.erosion(orig_phantom, selem)
+plot_comparison(orig_phantom, eroded, 'erosion')
+```
+
+### Dilation
+
+Morphological `dilation` sets a pixel at (i, j) to the maximum over all pixels in the neighborhood centered at (i, j). *Dilation enlarges bright regions and shrinks dark regions.*
+
+```{code-cell} python
+dilated = morph.dilation(orig_phantom, selem)
+plot_comparison(orig_phantom, dilated, 'dilation')
+```
+
+Notice how the white boundary of the image thickens, or gets dilated. Also notice the decrease in size of the two black ellipses in the center, and the thickening of the light grey circle in the center and the three patches in the lower part of the image.
+
+### Opening
+
+Morphological `opening` on an image is defined as an erosion followed by a dilation. *Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks.*
+
+```{code-cell} python
+opened = morph.opening(orig_phantom, selem)
+plot_comparison(orig_phantom, opened, 'opening')
+```
+
+Since opening an image starts with an erosion operation, light regions that are smaller than the structuring element are removed. The dilation operation that follows ensures that light regions that are larger than the structuring element retain their original size. Notice how the light and dark shapes in the center retain their original thickness, but the three lighter patches at the bottom get completely eroded. The size dependence is highlighted by the outer white ring: the parts of the ring thinner than the structuring element were completely erased, while the thicker region at the top retains its original thickness.
+
+### Closing
+
+Morphological `closing` on an image is defined as a dilation followed by an erosion. *Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks.*
+
+To illustrate this more clearly, let’s add a small crack to the white border:
+
+```{code-cell} python
+phantom = orig_phantom.copy()
+phantom[10:30, 200:210] = 0
+
+closed = morph.closing(phantom, selem)
+plot_comparison(phantom, closed, 'closing')
+```
+
+Since closing an image starts with a dilation operation, dark regions that are smaller than the structuring element are removed. The erosion operation that follows ensures that dark regions that are larger than the structuring element retain their original size. Notice how the white ellipses at the bottom get connected because of dilation, but other dark regions retain their original sizes. Also notice how the crack we added is mostly removed.
+
+### White tophat
+
+The `white_tophat` of an image is defined as the image minus its morphological opening. *This operation returns the bright spots of the image that are smaller than the structuring element.*
+
+To make things interesting, we’ll add bright and dark spots to the image:
+
+```{code-cell} python
+phantom = orig_phantom.copy()
+phantom[340:350, 200:210] = 255
+phantom[100:110, 200:210] = 0
+
+w_tophat = morph.white_tophat(phantom, selem)
+plot_comparison(phantom, w_tophat, 'white tophat')
+```
+
+As you can see, the 10-pixel wide white square is highlighted since it is smaller than the structuring element. Also, the thin, white edges around most of the ellipse are retained because they’re smaller than the structuring element, but the thicker region at the top disappears.
+
+### Black tophat
+
+The `black_tophat` of an image is defined as its morphological closing minus the original image. *This operation returns the dark spots of the image that are smaller than the structuring element.*
+
+```{code-cell} python
+b_tophat = morph.black_tophat(phantom, selem)
+plot_comparison(phantom, b_tophat, 'black tophat')
+```
+
+As you can see, the 10-pixel wide black square is highlighted since it is smaller than the structuring element.
+
+#### Duality
+
+As you should have noticed, many of these operations are simply the reverse of another operation. This duality can be summarized as follows:
+
+* Erosion <-> Dilation
+* Opening <-> Closing
+* White tophat <-> Black tophat
+
+
+### Skeletonize
+
+Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. It is important to note that this is performed on binary images only.
+
+```{code-cell} python
+horse = io.imread(os.path.join(data_dir, "horse.png"), as_gray=True)
+
+sk = morph.skeletonize(horse == 0)
+plot_comparison(horse, sk, 'skeletonize')
+```
+
+As the name suggests, this technique reduces the image to a one-pixel-wide skeleton by applying thinning successively.
+
+### Convex hull
+
+The `convex_hull_image` is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image. Again, note that this is performed on binary images only.
+
+```{code-cell} python
+hull1 = morph.convex_hull_image(horse == 0)
+plot_comparison(horse, hull1, 'convex hull')
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+-----------------------------------
+
++++
+
+## [skimage.restoration](https://scikit-image.org/docs/stable/api/skimage.restoration.html) - restoration of an image
+
++++
+
+This submodule includes routines to restore images. Currently these routines fall into four major categories. Links lead to topical gallery examples.
+
+* [Reducing noise](https://scikit-image.org/docs/stable/auto_examples/filters/plot_denoise.html)
+ * restoration.denoise_*
+* [Deconvolution](https://scikit-image.org/docs/stable/auto_examples/filters/plot_deconvolution.html), or reversing a convolutional effect which applies to the entire image. For example, lens correction. This can be done [unsupervised](https://scikit-image.org/docs/stable/auto_examples/filters/plot_restoration.html).
+ * restoration.wiener
+ * restoration.unsupervised_wiener
+ * restoration.richardson_lucy
+* [Inpainting](https://scikit-image.org/docs/stable/auto_examples/filters/plot_inpaint.html), or filling in missing areas of an image
+ * restoration.inpaint_biharmonic
+* [Phase unwrapping](https://scikit-image.org/docs/stable/auto_examples/filters/plot_phase_unwrap.html)
+ * restoration.unwrap_phase
+
+```{code-cell} python
+from skimage import restoration
+```
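+
+As a quick illustration of the denoising routines, here is a minimal total-variation denoising sketch; the noise variance and the weight are just illustrative choices:
+
+```{code-cell} python
+from skimage import data, restoration
+from skimage.util import random_noise
+
+noisy = random_noise(data.camera(), var=0.01)   # add synthetic Gaussian noise
+# Total-variation (Chambolle) denoising; larger weights denoise more aggressively
+denoised = restoration.denoise_tv_chambolle(noisy, weight=0.1)
+```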
+
+```{code-cell} python
+# Explore with tab completion
+restoration.
+```
+
+```{code-cell} python
+# Space to experiment with restoration techniques
+```
+
+```{code-cell} python
+
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+---------------------------------
+
++++
+
+## [skimage.segmentation](https://scikit-image.org/docs/stable/api/skimage.segmentation.html) - identification of regions of interest
+
++++
+
+One of the key image analysis tasks is identifying regions of interest. These could be a person, an object, certain features of an animal, regions of a microscopy image, or stars. Segmenting an image is the process of determining where these things of interest are located in your images.
+
+Segmentation has two overarching categories: Supervised and Unsupervised.
+
+**Supervised** - must provide some guidance (seed points or initial conditions)
+
+* segmentation.random_walker
+* segmentation.active_contour
+* segmentation.watershed
+* segmentation.flood_fill
+* segmentation.flood
+* some thresholding algorithms in `filters`
+
+
+**Unsupervised** - no human input
+
+* segmentation.slic
+* segmentation.felzenszwalb
+* segmentation.chan_vese
+* some thresholding algorithms in `filters`
+
+
+There is a [segmentation lecture](./4_segmentation.ipynb) ([and solution](./solutions/4_segmentation.ipynb)) you may peruse, as well as many [gallery examples](https://scikit-image.org/docs/stable/auto_examples/index.html#segmentation-of-objects) which illustrate all of these segmentation methods.
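+
+As a minimal unsupervised example, SLIC superpixels can be computed in a couple of lines (the number of segments is just an illustrative choice):
+
+```{code-cell} python
+from skimage import data, segmentation
+
+astro = data.astronaut()
+# Unsupervised SLIC superpixels over the RGB image
+labels = segmentation.slic(astro, n_segments=100, start_label=1)
+# Overlay the superpixel boundaries on the original image
+overlay = segmentation.mark_boundaries(astro, labels)
+```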
+
+```{code-cell} python
+from skimage import segmentation
+```
+
+```{code-cell} python
+# Explore with tab completion
+segmentation.
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+---------------------------
+
++++
+
+## [skimage.transform](https://scikit-image.org/docs/stable/api/skimage.transform.html) - transforms & warping
+
++++
+
+This submodule has multiple features which fall under the umbrella of transformations.
+
+Forward (`radon`) and inverse (`iradon`) radon transforms, as well as some variants (`iradon_sart`) and the finite versions of these transforms (`frt2` and `ifrt2`). These are used for [reconstructing medical computed tomography (CT) images](https://scikit-image.org/docs/stable/auto_examples/transform/plot_radon_transform.html).
+
+Hough transforms for identifying lines, circles, and ellipses.
+
+Changing image size, shape, or resolution with `resize`, `rescale`, or `downscale_local_mean`.
+
+`warp` and `warp_coords` take an image or a set of coordinates and push them through one of the `*Transform` classes defined in this submodule; `estimate_transform` can assist in estimating the transform parameters.
+
+[Numerous gallery examples are available](https://scikit-image.org/docs/stable/auto_examples/index.html#geometrical-transformations-and-registration) illustrating these functions. [The panorama tutorial also includes warping](./solutions/adv3_panorama-stitching-solution.ipynb) via `SimilarityTransform` with parameter estimation via `measure.ransac`.
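+
+A minimal sketch of the resizing and rotation helpers (the scale factor, target shape, and angle are illustrative choices):
+
+```{code-cell} python
+from skimage import data, transform
+
+camera = data.camera()                         # 512 x 512 grayscale image
+smaller = transform.rescale(camera, 0.25)      # scale by a factor
+exact = transform.resize(camera, (100, 200))   # resize to an exact output shape
+rotated = transform.rotate(camera, angle=30)   # rotate by 30 degrees
+```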
+
+```{code-cell} python
+from skimage import transform
+```
+
+```{code-cell} python
+# Explore with tab completion
+transform.
+```
+
+```{code-cell} python
+# Room for experimentation
+```
+
+```{code-cell} python
+
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+--------------------------
+
++++
+
+## [skimage.util](https://scikit-image.org/docs/stable/api/skimage.util.html) - utility functions
+
++++
+
+These are generally useful functions which have no definite other place in the package.
+
+`util.img_as_*` are convenience functions for datatype conversion.
+
+`util.invert` is a convenient way to invert any image, accounting for its datatype.
+
+`util.random_noise` is a comprehensive function to apply many different types and amounts of noise to images. The seed may be set, resulting in reproducible pseudo-random noise for testing.
+
+`util.view_as_*` allows for overlapping views into the same memory array, which is useful for elegant local computations with minimal memory impact.
+
+`util.apply_parallel` uses Dask to apply a function across subsections of an image. This can yield dramatic performance or memory improvements, but, depending on the algorithm, edge effects or the lack of knowledge of the rest of the image may produce unexpected results.
+
+`util.pad` and `util.crop` pad or crop the edges of images. `util.pad` is now a direct wrapper for `numpy.pad`.
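+
+A minimal sketch of a few of these utilities on the camera test image:
+
+```{code-cell} python
+from skimage import data, util
+
+camera = data.camera()                 # uint8 image
+as_float = util.img_as_float(camera)   # convert to float in [0, 1]
+inverted = util.invert(camera)         # datatype-aware inversion
+noisy = util.random_noise(camera)      # add Gaussian noise (float output)
+```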
+
+```{code-cell} python
+from skimage import util
+```
+
+```{code-cell} python
+# Explore with tab completion
+util.
+```
+
+```{code-cell} python
+# Room to experiment
+```
+
+```{code-cell} python
+
+```
+
+#### [Back to the Table of Contents](#Table-of-Contents)
+
++++
+
+----------------------------
+
+```{code-cell} python
+
+```
diff --git a/requirements.txt b/requirements.txt
index ea8af47..09914c1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,15 +1,12 @@
scikit-image[data] >= 0.18
-numpy >= 1.12
-scipy >= 1.0
-matplotlib >= 2.1
+numpy >= 1.20
+scipy >= 1.7
+matplotlib >= 3.0
notebook >= 4.0
scikit-learn >= 0.18
-jupyter-book >= 0.10.2
napari[all]
-jupytext >=1.10.3
-sphinx_autodoc_typehints>=1.11.0
-ghp-import
-pytest
-pytest-qt
-pooch
-furo
\ No newline at end of file
+jupytext >= 1.10.3
+sphinx-book-theme
+myst-nb
+sphinx-copybutton
+imagecodecs
diff --git a/workshops/2014-scipy/images/censure_example.png b/workshops/2014-scipy/images/censure_example.png
deleted file mode 100644
index f941c52..0000000
Binary files a/workshops/2014-scipy/images/censure_example.png and /dev/null differ
diff --git a/workshops/2014-scipy/images/censure_example.png b/workshops/2014-scipy/images/censure_example.png
new file mode 120000
index 0000000..5b748d7
--- /dev/null
+++ b/workshops/2014-scipy/images/censure_example.png
@@ -0,0 +1 @@
+../../../images/censure_example.png
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/circle_detection.png b/workshops/2014-scipy/images/circle_detection.png
deleted file mode 100644
index 6161c9b..0000000
Binary files a/workshops/2014-scipy/images/circle_detection.png and /dev/null differ
diff --git a/workshops/2014-scipy/images/circle_detection.png b/workshops/2014-scipy/images/circle_detection.png
new file mode 120000
index 0000000..85fa5a6
--- /dev/null
+++ b/workshops/2014-scipy/images/circle_detection.png
@@ -0,0 +1 @@
+../../../images/circle_detection.png
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/fog_tunnel.png b/workshops/2014-scipy/images/fog_tunnel.png
deleted file mode 100644
index 2776015..0000000
Binary files a/workshops/2014-scipy/images/fog_tunnel.png and /dev/null differ
diff --git a/workshops/2014-scipy/images/fog_tunnel.png b/workshops/2014-scipy/images/fog_tunnel.png
new file mode 120000
index 0000000..0328142
--- /dev/null
+++ b/workshops/2014-scipy/images/fog_tunnel.png
@@ -0,0 +1 @@
+../../../images/fog_tunnel.png
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/particle_detection.png b/workshops/2014-scipy/images/particle_detection.png
deleted file mode 100644
index 1aacf28..0000000
Binary files a/workshops/2014-scipy/images/particle_detection.png and /dev/null differ
diff --git a/workshops/2014-scipy/images/particle_detection.png b/workshops/2014-scipy/images/particle_detection.png
new file mode 120000
index 0000000..652dc19
--- /dev/null
+++ b/workshops/2014-scipy/images/particle_detection.png
@@ -0,0 +1 @@
+../../../images/particle_detection.png
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/power_law_growth_regimes.png b/workshops/2014-scipy/images/power_law_growth_regimes.png
deleted file mode 100644
index 3acb7e0..0000000
Binary files a/workshops/2014-scipy/images/power_law_growth_regimes.png and /dev/null differ
diff --git a/workshops/2014-scipy/images/power_law_growth_regimes.png b/workshops/2014-scipy/images/power_law_growth_regimes.png
new file mode 120000
index 0000000..0e03daf
--- /dev/null
+++ b/workshops/2014-scipy/images/power_law_growth_regimes.png
@@ -0,0 +1 @@
+../../../images/power_law_growth_regimes.png
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/spice_1.jpg b/workshops/2014-scipy/images/spice_1.jpg
index 9340c83..bb5645a 120000
--- a/workshops/2014-scipy/images/spice_1.jpg
+++ b/workshops/2014-scipy/images/spice_1.jpg
@@ -1 +1 @@
-../../images/spice_1.jpg
\ No newline at end of file
+../../../images/spice_1.jpg
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/spice_1_credits.txt b/workshops/2014-scipy/images/spice_1_credits.txt
index 96abc59..3de029e 120000
--- a/workshops/2014-scipy/images/spice_1_credits.txt
+++ b/workshops/2014-scipy/images/spice_1_credits.txt
@@ -1 +1 @@
-../../images/spice_1_credits.txt
\ No newline at end of file
+../../../images/spice_1_credits.txt
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/spices.jpg b/workshops/2014-scipy/images/spices.jpg
index f797105..faa449e 120000
--- a/workshops/2014-scipy/images/spices.jpg
+++ b/workshops/2014-scipy/images/spices.jpg
@@ -1 +1 @@
-../../images/spices.jpg
\ No newline at end of file
+../../../images/spices.jpg
\ No newline at end of file
diff --git a/workshops/2014-scipy/images/spices_credits.txt b/workshops/2014-scipy/images/spices_credits.txt
index 80e18c9..dd5fb1f 120000
--- a/workshops/2014-scipy/images/spices_credits.txt
+++ b/workshops/2014-scipy/images/spices_credits.txt
@@ -1 +1 @@
-../../images/spices_credits.txt
\ No newline at end of file
+../../../images/spices_credits.txt
\ No newline at end of file