Image sharpening is dependent on image size #418

Open
danielmwatkins opened this issue Jun 4, 2024 · 8 comments

Comments

@danielmwatkins
Contributor

The default settings of IceFloeTracker.imsharpen include the parameters rblocks and cblocks, which divide an image into a fixed number of row and column blocks. For a small image, the default values produce many blocks with few pixels each, while a large image may be undersharpened. For consistent results, rblocks and cblocks should depend on the image size and should yield approximately square blocks.

@cpaniaguam
Member

@danielmwatkins Have you thought of a specific way to do this? An easy approach would be to take g = gcd(height, width) of the image's height and width and choose rblocks, cblocks = height/g, width/g, but this requires a careful choice of the image size.
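
A minimal sketch of that gcd idea, with illustrative names (blocks_from_gcd is not part of IceFloeTracker):

function blocks_from_gcd(height::Int, width::Int)
    g = gcd(height, width)         # greatest common divisor of the two dimensions
    return height ÷ g, width ÷ g   # rblocks, cblocks proportional to the aspect ratio
end

julia> blocks_from_gcd(400, 600)   # blocks of 200 × 200 pixels
(2, 3)

julia> blocks_from_gcd(401, 600)   # gcd is 1, so every block is a single pixel
(401, 600)

The second call shows why the image size would need to be chosen carefully: a nearly coprime height and width makes the block counts explode.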

@danielmwatkins
Contributor Author

@cpaniaguam That's basically the approach I was thinking about, depending on what gcd stands for in this case. There is likely some best-performing partition size that depends on the spatial scale of the features we are trying to see. We'll want some tolerance for the image size not being exactly divisible by the block size. It could also be important to ensure that if a segment is all ice, we don't stretch the histogram as much as we would for a mix of ice and water.

I wonder if the block-based method sometimes results in discontinuities in the level of sharpening?

@cpaniaguam
Member

@danielmwatkins gcd = greatest common divisor

@danielmwatkins
Contributor Author

Gotcha. Actually, I was thinking more along the lines of making the parameter something like block_size=200 and having something that estimates the number of blocks that would bring the block size closest to the desired value.
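
A sketch of that interface, under the assumption that we simply round the per-dimension block counts toward the target block_size (the function name is hypothetical):

function blocks_from_size(height::Int, width::Int; block_size::Int=200)
    rblocks = max(1, round(Int, height / block_size))
    cblocks = max(1, round(Int, width / block_size))
    return rblocks, cblocks
end

julia> blocks_from_size(1000, 1500)   # blocks of ~200 × 188 pixels
(5, 8)

julia> blocks_from_size(350, 350)     # blocks of ~175 × 175 pixels
(2, 2)

This tolerates image sizes that are not exact multiples of block_size, at the cost of blocks that are only approximately square.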

@tdivoll tdivoll added this to the IFT Data Process milestone Jul 19, 2024
@danielmwatkins
Contributor Author

@cpaniaguam Monica, Minki, and I spent some time looking through the Matlab code. We determined that the Matlab version uses the approach I was suggesting: it takes a set pixel dimension, then determines the number of blocks for the adaptive equalization.

There’s also a step that calculates entropy within a block to determine whether there is enough variability in pixel brightness to apply the adaptive equalization. I think with these two adjustments we could eliminate much of the oversharpening issue.

@cpaniaguam
Member

@danielmwatkins Thanks for this! Could you point to the relevant blocks of code where these operations are performed?

@danielmwatkins
Contributor Author

Yes, in the version that's on the Wilhelmus lab git repo, look at the code near line 435 of MASTER.m. It calculates entropy for each tile, then applies the histogram equalization only if the entropy is larger than a threshold.
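
For illustration, an entropy-gated step along those lines might look like the following in Julia; this is a sketch of the described logic, not a translation of MASTER.m, and entropy_threshold is an assumed parameter name (entropy comes from Images.jl, adjust_histogram and Equalization from ImageContrastAdjustment.jl):

using Images, ImageContrastAdjustment

# Equalize a tile only when its Shannon entropy indicates enough
# brightness variability; low-entropy tiles (e.g. all ice) pass through.
function equalize_tile(tile; entropy_threshold=4.0)
    entropy(tile) > entropy_threshold || return tile
    return adjust_histogram(tile, Equalization(nbins=256))
end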

@cpaniaguam
Member

@danielmwatkins @mmwilhelmus @hollandjg @mirestrepo @mkkim400
Revisiting the tile size as a function of the number of desired pixels per tile: the Julia package that provides this functionality requires the tile size (m, n) as input, just as Matlab does. If square tiles are desired, the side length of the square can be computed as round(sqrt(num_pixels_in_tile)):

julia> num_pixels_tile = 200;

julia> tile_size = round(Int, sqrt(num_pixels_tile)) # 196 pixels/square is best here
14

julia> num_pixels_tile = 220;

julia> tile_size = round(Int, sqrt(num_pixels_tile)) # 225 pixels is the best fit
15

If tiles of the chosen size don't cover the whole image, one gets clippings; the package provides these leftover tiles as well:

julia> A = rand(5, 5);

julia> tile_size = (2, 2);

julia> tiles = collect(TileIterator(axes(A), tile_size))
3×3 Matrix{Tuple{UnitRange{Int64}, UnitRange{Int64}}}:
 (1:2, 1:2)  (1:2, 3:4)  (1:2, 5:5)
 (3:4, 1:2)  (3:4, 3:4)  (3:4, 5:5)
 (5:5, 1:2)  (5:5, 3:4)  (5:5, 5:5)

Matlab seems to have the same behavior:

>> A = rand(3, 3);
>> mat2tiles(A, [2,2])

ans =

  3×3 cell array

    {2×2 double     }    {2×2 double     }    {2×1 double}
    {2×2 double     }    {2×2 double     }    {2×1 double}
    {[0.5681 0.7043]}    {[0.9093 0.5946]}    {[  0.5030]}

Questions

How should we handle the potential clippings? @danielmwatkins mentioned the possibility of making the edge tiles slightly bigger than the rest (I am guessing by splicing the adjacent clippings onto the whole tiles, as in the sketch below). How should we handle the corner tile? Perhaps if the clippings are large enough, nothing needs to be done?
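
One possible answer for the splicing option, as a sketch: compute the tile ranges per dimension and fold a short leftover range into the last full tile, so edge tiles are slightly larger and no clipping remains. Applying this independently to rows and columns would take care of the corner tile as well. tile_ranges is a hypothetical helper, not part of any package mentioned here:

function tile_ranges(len::Int, tile::Int)
    ranges = [e:min(e + tile - 1, len) for e in 1:tile:len]
    # Merge a trailing clipping into the previous tile.
    if length(ranges) > 1 && length(last(ranges)) < tile
        ranges[end-1] = first(ranges[end-1]):len
        pop!(ranges)
    end
    return ranges
end

julia> tile_ranges(5, 2)   # the 5:5 clipping folds into the last tile
2-element Vector{UnitRange{Int64}}:
 1:2
 3:5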
