First of all, I really appreciate the author's open-source spirit. Great idea!
I know that the low and high frequencies are divided according to a given ratio, but how are they split along the channel dimension? Are the channels divided randomly according to that ratio, or is there some other pre-processing involved?
Thank you very much.
@ys-dpc Low frequency and high frequency are just concepts. In practice, you simply apply average pooling to get the background, which corresponds to the low-frequency data, and convolve the original image with a small kernel to get the high-frequency information of the image.
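To make the split concrete, here is a minimal PyTorch sketch of the idea (not taken from this repository; the class name `FrequencySplit` and the parameter `alpha` are my own placeholders for the channel-split ratio). The point is that the split is a fixed channel slice, not a random selection: the first (1 − α)·C channels stay at full resolution as the high-frequency branch, and the remaining α·C channels are average-pooled to half resolution as the low-frequency branch. Which features end up in which branch is then learned by the convolutions that follow.

```python
import torch
import torch.nn as nn


class FrequencySplit(nn.Module):
    """Split a feature map into high- and low-frequency branches by channel ratio.

    The channels are not chosen randomly: the first (1 - alpha) * C channels
    are kept at full resolution (high frequency), and the remaining
    alpha * C channels are average-pooled with stride 2 (low frequency).
    """

    def __init__(self, channels: int, alpha: float = 0.5):
        super().__init__()
        self.low_channels = int(alpha * channels)          # alpha * C channels -> low frequency
        self.high_channels = channels - self.low_channels  # the rest -> high frequency
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)  # 2x2 average pool gives the low-frequency map

    def forward(self, x: torch.Tensor):
        # High-frequency branch: a plain channel slice at full spatial resolution.
        x_high = x[:, : self.high_channels]
        # Low-frequency branch: the remaining channels, downsampled by average pooling.
        x_low = self.pool(x[:, self.high_channels :])
        return x_high, x_low


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    high, low = FrequencySplit(64, alpha=0.25)(x)
    print(high.shape, low.shape)  # (1, 48, 32, 32) and (1, 16, 16, 16)
```

In a full octave-convolution layer the two branches would then be processed by their own convolutions (e.g. the 3×3 convolutions mentioned below) and exchange information via upsampling/pooling; the sketch above only illustrates how the channel dimension is partitioned.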
A 3×3 convolution is used to acquire the high-frequency features, and a 2×2 average pool is used to acquire the low-frequency features. Is a 2×2 pool enough for the low frequency, or would a 4×4 or 6×6 pool size give a better low-frequency representation?