
Gaussian Blur not shown to be effective against noise #221

Closed
CaptainSifff opened this issue Aug 19, 2022 · 13 comments · Fixed by #292
Labels
help wanted Looking for Contributors type:clarification Suggest change for make lesson clearer type:enhancement Propose enhancement to the lesson

Comments

@CaptainSifff
Contributor

We just tested episode 6 and were somewhat puzzled by the key point:
"Applying a low-pass blurring filter smooths edges and removes noise from an image"
While I agree that we have seen the blurring of edges, an example showing that blurring is effective against small-scale noise, and hence can be used for denoising, would be nice.
Why don't we give the example image gaussian-original.png some random noise right from the start,
or let the learners apply some random noise via numpy.random?
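For illustration, applying random noise via numpy.random might look like the following sketch. The array shape and noise level here are arbitrary assumptions for demonstration, not values from the lesson, and a flat synthetic image stands in for gaussian-original.png:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# a flat mid-gray "image" stands in for gaussian-original.png
image = np.full((64, 64), 0.5)

# additive Gaussian noise; scale (sigma) of 0.05 is an arbitrary choice
noise = rng.normal(loc=0.0, scale=0.05, size=image.shape)

# clip back into the valid [0, 1] float-image range
noisy = np.clip(image + noise, 0.0, 1.0)
```

The noisy image could then be blurred with skimage.filters.gaussian to show that the speckle largely disappears.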

@tobyhodges
Member

This is a great idea, and IMO will more clearly illustrate the motivation for including blurring in an image processing pipeline.

@tobyhodges tobyhodges added help wanted Looking for Contributors type:enhancement Propose enhancement to the lesson type:clarification Suggest change for make lesson clearer labels Aug 22, 2022
@chbrandt
Contributor

Maybe the following can help, @CaptainSifff?

Recently I used parts of this lesson in an overview of image processing, and I used 3D views of the Petri dish image to show the effect of filtering.

  • the image used:
import imageio.v3 as iio
import matplotlib.pyplot as plt
import skimage.color

image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)

fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')


  • 3D view of (original) image:
import numpy as np
import matplotlib.pyplot as plt
from skimage.util import img_as_ubyte

size = min(image_gray.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X, Y = np.meshgrid(x, y)

img = image_gray[:size,:size]
img = img_as_ubyte(img)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X,Y,img, cmap='viridis')

ax.view_init(elev=215, azim=-60)

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')


  • 3D view of blurred image:
import numpy as np
import matplotlib.pyplot as plt
from skimage.util import img_as_ubyte
from skimage.filters import gaussian

image_blur = gaussian(image_gray, sigma=3)

size = min(image_blur.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X,Y = np.meshgrid(x,y)

img = image_blur[:size,:size]
img = img_as_ubyte(img)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X,Y,img, cmap='viridis')

ax.view_init(elev=215, azim=-60)

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')


@bobturneruk
Contributor

Thanks for the engagement @chbrandt. Looks like the Petri dish image has the kind of noise @CaptainSifff mentions. I wonder whether the visualisation might be better in 2D, as that's what we use in most of the lesson? The 3D one looks nice, though, and I can clearly see the denoising effect.

@CaptainSifff
Contributor Author

Great idea! If we stick to 2D we should find a way to make the small-scale noise visible to the audience, either by pixel-peeping or maybe with some type of gradient filter?
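A gradient filter would indeed expose small-scale noise: over a smooth region the derivatives are near zero, so any noise dominates the gradient image. A minimal numpy-only sketch of that idea, using synthetic data rather than the lesson's image (the ramp shape and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# a smooth horizontal ramp, and the same ramp with faint additive noise
smooth = np.tile(np.linspace(0.0, 1.0, 100), (100, 1))
noisy = smooth + rng.normal(scale=0.02, size=smooth.shape)

# gradient magnitude: nearly uniform for the clean ramp,
# but visibly speckled for the noisy version
gy, gx = np.gradient(noisy)
grad_mag = np.hypot(gx, gy)
```

Displaying grad_mag with imshow would make the noise jump out even where the original image looks smooth to the eye; skimage.filters.sobel would serve the same purpose.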

@chbrandt
Contributor

Then, I think the best way to show it would be a transversal cut: a plot of the pixel intensities along, say, the row Y=150.

@tobyhodges
Member

tobyhodges commented Jun 13, 2023

a transversal cut, showing the intensity of the pixels, say, along Y=150.

I like this idea, and would definitely welcome a pull request to add it. However, I think the 3D images are very effective at illustrating the denoising effect and I would propose to accompany this 2D 'slice' with the 3D images - only without the code that was used to generate them.

My rationale for omitting the 3D plot code is to avoid increasing the cognitive load of the episode/lesson (meshgrid, img_as_ubyte, plot_surface, and view_init are all new functions/methods that learners may have questions about). On the other hand, it would be a shame to deprive interested learners of an opportunity to learn more about how they can create cool 3D plots with matplotlib... 😆

@chbrandt would you be willing to create a public gist of the code you used to generate those 3D plots, and include a link to that gist in the captions to the 3D images? e.g.

![
A 3D plot of pixel intensities across the whole Petri dish image before blurring. 
[Explore how this plot was created with matplotlib](LINKTOGIST). 
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_before_blurring.png){
alt='3D surface plot showing pixel intensities across the whole example Petri dish image before blurring'
}

and

![
A 3D plot of pixel intensities after Gaussian blurring of the Petri dish image. 
Note the 'smoothing' effect on the pixel intensities of the colonies in the image, 
and the 'flattening' of the background noise at relatively low pixel intensities throughout the image. 
[Explore how this plot was created with matplotlib](LINKTOGIST). 
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_before_blurring.png){
alt='3D surface plot illustrating the smoothing effect on pixel intensities across the whole example Petri dish image after blurring'
}

[Edit: fixed the alternative text description for the second 3D plot.]

@chbrandt
Contributor

Sure @tobyhodges , I can do that. I think you found the right balance.

Before I create the gist and push a PR, let me dump the code for the alternatives discussed (i.e., the transversal cut and pixel-peeping):

Transversal cut/slice

Where are we slicing?

import matplotlib.pyplot as plt
import imageio.v3 as iio
import skimage.color

image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)

xmin, xmax = (0, image_gray.shape[1])
ymin = ymax = 150

fig,ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
ax.plot([xmin,xmax], [ymin,ymax], color='red')


What does it (i.e., the intensity of those pixels) look like?

from skimage.util import img_as_ubyte

# convert from float [0, 1] to 8-bit [0, 255] so the y-axis reads as familiar gray levels
image_gray_pixels_slice = image_gray[150, :]
image_gray_pixels_slice = img_as_ubyte(image_gray_pixels_slice)

fig = plt.figure()
ax = fig.add_subplot()

ax.plot(image_gray_pixels_slice, color='red')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')


Equivalently, the same pixels/slice from the smoothed image:

image_blur_pixels_slice = image_blur[150,:]
image_blur_pixels_slice = img_as_ubyte(image_blur_pixels_slice)

fig = plt.figure()
ax = fig.add_subplot()

ax.plot(image_blur_pixels_slice, 'red')

ax.set_ylim(255, 0)

ax.set_ylabel('L')
ax.set_xlabel('X')
ax.set_title('Slice "Y=150" from blurred image')


Pixel-peeping

Where are we zooming?

import matplotlib.patches as patches

fig,ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')

xini = 10
yini = 130
dx = dy = 40

rect = patches.Rectangle((xini,yini), dx, dy, edgecolor='red', facecolor='none')
ax.add_patch(rect)


What does it look like before smoothing (original image)?

fig,ax = plt.subplots()
ax.imshow(image_gray[yini:yini+dy, xini:xini+dx], cmap='gray')


What does it look like after smoothing?

fig,ax = plt.subplots()
ax.imshow(image_blur[yini:yini+dy, xini:xini+dx], cmap='gray')


@tobyhodges
Member

Excellent, thanks @chbrandt. Both seem very effective for illustrating the blurring effect. My vote would be to use the transversal cut, because I think the code would be marginally easier for learners to understand.

Does anyone else have a strong opinion one way or the other? @datacarpentry/image-processing-maintainers-workbench

When you prepare the PR, please add a comment to the code block to explain the use of img_as_ubyte.
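For reference, a minimal sketch of the conversion in question (assuming scikit-image is installed): img_as_ubyte rescales a float image in [0, 1] to 8-bit integers in [0, 255], so the intensity axis of the slice plots shows the familiar 0-255 gray levels rather than fractions:

```python
import numpy as np
from skimage.util import img_as_ubyte

# rgb2gray produces floats in [0, 1]; img_as_ubyte maps that range onto uint8 [0, 255]
float_img = np.array([[0.0, 0.25, 1.0]])
ubyte_img = img_as_ubyte(float_img)
```

Here 0.0 maps to 0 and 1.0 to 255; intermediate values are scaled and rounded accordingly.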

@bobturneruk
Contributor

This is great work. Is the idea to extend the lesson by having learners write the code to generate the cuts, or to only use the cuts as visualisations?

@tobyhodges
Member

I think it's a bit of both, @bobturneruk - include the 3D plots as illustration only, but perhaps include the code for at least one of the methods above, as extra practice with visualising intensities. Learners should be pretty familiar with that plotting by this point in the lesson, after the previous episode about creating histograms.

It would theoretically add time to the lesson, but in practice I suspect that increase to teaching time will be limited because a good illustrative example will reduce the time spent on questions and clarification.

@bobturneruk
Contributor

As something of an aside (sorry), I think some of the code used to generate figures was removed from the main branch when the repo was upgraded to Workbench. It may be worth opening a separate issue about where such code should sit in the current repo structure, e.g. instructor notes. This would be relevant to any code added to resolve this issue that is not meant to be run by learners.

@tobyhodges
Member

Agreed, and I've created #286 to track that as a separate discussion.

chbrandt added a commit to chbrandt/image-processing that referenced this issue Aug 11, 2023
Add section about visualizing image blur effects as discussed in datacarpentry#221
chbrandt added a commit to chbrandt/image-processing that referenced this issue Aug 11, 2023
@chbrandt
Contributor

@tobyhodges came back to this. A first version is in #292 .
