Gaussian Blur not shown to be effective against noise #221
This is a great idea, and IMO it will more clearly illustrate the motivation for including blurring in an image processing pipeline.
Maybe the following can help, @CaptainSifff? A while ago I used parts of this lesson in an overview about image processing, and I used 3D views of the petri-dish image to show the effect of filtering.
```python
import matplotlib.pyplot as plt
import imageio.v3 as iio
import skimage.color

image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)

fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
```
```python
import numpy as np
from skimage.util import img_as_ubyte

size = min(image_gray.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X, Y = np.meshgrid(x, y)

img = image_gray[:size, :size]
img = img_as_ubyte(img)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, img, cmap='viridis')
ax.view_init(elev=215, azim=-60)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')
```
```python
from skimage.filters import gaussian

image_blur = gaussian(image_gray, sigma=3)

size = min(image_blur.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X, Y = np.meshgrid(x, y)

img = image_blur[:size, :size]
img = img_as_ubyte(img)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, img, cmap='viridis')
ax.view_init(elev=215, azim=-60)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')
```
Thanks for the engagement @chbrandt. Looks like the petri dish image has the kind of noise @CaptainSifff mentions. I wonder whether the visualisation might be better in 2D, as that's what we use in most of the lesson? The 3D one looks nice, though, and I can clearly see the denoising effect.
Great idea! If we stick to 2D, we should find a way to make the small-scale noise visible to the audience: either by pixel-peeping, or maybe with some type of gradient filter?
Then, I think the best way to show it would be through a transversal cut, plotting the intensity of the pixels along, say, Y=150.
I like this idea, and would definitely welcome a pull request to add it. However, I think the 3D images are very effective at illustrating the denoising effect, and I would propose to accompany this 2D 'slice' with the 3D images, only without the code that was used to generate them. My rationale for omitting the 3D plot code is to avoid increasing the cognitive load of the episode/lesson. @chbrandt, would you be willing to create a public gist of the code you used to generate those 3D plots, and include a link to that gist in the captions to the 3D images? e.g. ![
A 3D plot of pixel intensities across the whole Petri dish image before blurring.
[Explore how this plot was created with matplotlib](LINKTOGIST).
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_before_blurring.png){
alt='3D surface plot showing pixel intensities across the whole example Petri dish image before blurring'
} and ![
A 3D plot of pixel intensities after Gaussian blurring of the Petri dish image.
Note the 'smoothing' effect on the pixel intensities of the colonies in the image,
and the 'flattening' of the background noise at relatively low pixel intensities throughout the image.
[Explore how this plot was created with matplotlib](LINKTOGIST).
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_after_blurring.png){
alt='3D surface plot illustrating the smoothing effect on pixel intensities across the whole example Petri dish image after blurring'
} [Edit: fixed the alternative text description for the second 3D plot.]
Sure @tobyhodges, I can do that. I think you found the right balance. Before I create the gist and push a PR, let me dump the code for the alternatives discussed (i.e., the transversal cut and pixel-peeping):

**Transversal cut/slice**

Where are we slicing?

```python
import matplotlib.pyplot as plt
import imageio.v3 as iio
import skimage.color

image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)

xmin, xmax = (0, image_gray.shape[1])
ymin = ymax = 150

fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
ax.plot([xmin, xmax], [ymin, ymax], color='red')
```

What does it (i.e., the intensity of those pixels) look like?

```python
from skimage.util import img_as_ubyte

image_gray_pixels_slice = image_gray[150, :]
image_gray_pixels_slice = img_as_ubyte(image_gray_pixels_slice)

fig = plt.figure()
ax = fig.add_subplot()
ax.plot(image_gray_pixels_slice, color='red')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')
```

Equivalently, the same pixels/slice from the smoothed image:

```python
from skimage.filters import gaussian

image_blur = gaussian(image_gray, sigma=3)

image_blur_pixels_slice = image_blur[150, :]
image_blur_pixels_slice = img_as_ubyte(image_blur_pixels_slice)

fig = plt.figure()
ax = fig.add_subplot()
ax.plot(image_blur_pixels_slice, color='red')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')
ax.set_title('Slice "Y=150" from blurred image')
```

**Pixel-peeping**

Where are we zooming?

```python
import matplotlib.patches as patches

fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')

xini = 10
yini = 130
dx = dy = 40

rect = patches.Rectangle((xini, yini), dx, dy, edgecolor='red', facecolor='none')
ax.add_patch(rect)
```

What does it look like before smoothing (original image)?

```python
fig, ax = plt.subplots()
ax.imshow(image_gray[yini:yini+dy, xini:xini+dx], cmap='gray')
```

What does it look like after smoothing?

```python
fig, ax = plt.subplots()
ax.imshow(image_blur[yini:yini+dy, xini:xini+dx], cmap='gray')
```
Excellent, thanks @chbrandt. Both seem very effective for illustrating the blurring effect. My vote would be to use the transversal cut, because I think the code would be marginally easier for learners to understand. Does anyone else have a strong opinion one way or the other? @datacarpentry/image-processing-maintainers-workbench When you prepare the PR, please add a comment to the code block to explain the use of
This is great work. Is the idea to extend the lesson by having learners write the code to generate the cuts, or to only use the cuts as visualisations?
I think it's a bit of both, @bobturneruk: include the 3D plots as illustration only, but perhaps include the code for at least one of the methods above, as extra practice with visualising intensities. Learners should be pretty familiar with that plotting by this point in the lesson, after the previous episode about creating histograms. It would theoretically add time to the lesson, but in practice I suspect that increase to teaching time will be limited, because a good illustrative example will reduce the time spent on questions and clarification.
As something of an aside (sorry), I think some of the code used to generate figures was taken out of the
Agreed, and I've created #286 to track that as a separate discussion.
Add section about visualizing image blur effects as discussed in datacarpentry#221
@tobyhodges came back to this. A first version is in #292.
We just tested episode 6 and were kind of puzzled by the key points:

"Applying a low-pass blurring filter smooths edges and removes noise from an image"

While I agree that we have seen the blurring of edges, I think some example showing that blurring is effective against small-scale noise, and hence can be used for denoising, would be nice. Why don't we give the example image gaussian-original.png some random noise right from the start, or let the learners apply some random noise via numpy.random?
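A minimal sketch of that suggestion (using a synthetic stand-in for the lesson image; the noise level 0.05 and sigma=3 are arbitrary choices):

```python
import numpy as np
from skimage.filters import gaussian

rng = np.random.default_rng(seed=42)

# Synthetic stand-in for the lesson image: a bright square on a dark background
image_gray = np.zeros((200, 200))
image_gray[50:150, 50:150] = 0.8

# Add zero-mean Gaussian noise, then clip back into the valid [0, 1] range
noisy = np.clip(image_gray + rng.normal(0, 0.05, image_gray.shape), 0, 1)

# Gaussian blur averages neighbouring pixels, suppressing the noise
denoised = gaussian(noisy, sigma=3)

# Compare the noise in a flat region (away from the square's edges):
# the standard deviation drops sharply after blurring
print(noisy[80:120, 80:120].std(), denoised[80:120, 80:120].std())
```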