Hi. It's exciting to see this great work.
I was exploring the possibility of applying MCR to segmentation tasks.
One way is to treat every pixel as a sample and fold all pixels into the batch dimension to compute the two losses, discriminative and compressive.
However, the coding rate operation is O(n^2), where n is the number of samples in a mini-batch, and the pixels in a single image can number from thousands to millions (2D to 3D images), so this operation may exceed the memory of a typical commercial GPU.
I was wondering whether you have studied this direction and what suggestions you might have.
Many thanks.
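One partial workaround, sketched below under my own assumptions (this is not the authors' method), exploits Sylvester's determinant identity: log det(I_d + a·ZZᵀ) = log det(I_n + a·ZᵀZ). When pixels are treated as samples, n (pixels) usually far exceeds the feature dimension d, so the coding rate can be computed from the d×d covariance instead of the n×n Gram matrix. The function name `coding_rate` and the NumPy formulation are illustrative choices, not the repository's API:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T)
    for Z of shape (d, n): d feature dims, n samples (e.g. pixels).

    By Sylvester's determinant identity,
        logdet(I_d + a * Z @ Z.T) == logdet(I_n + a * Z.T @ Z),
    so we always form the smaller of the two matrices. With pixels
    as samples (n >> d), this keeps the cost at O(d^2 * n) time and
    O(d^2) memory for the log-determinant, instead of O(n^2).
    """
    d, n = Z.shape
    alpha = d / (n * eps ** 2)
    if d <= n:
        G = np.eye(d) + alpha * (Z @ Z.T)   # d x d covariance side
    else:
        G = np.eye(n) + alpha * (Z.T @ Z)   # n x n Gram side
    sign, logdet = np.linalg.slogdet(G)     # numerically stable logdet
    return 0.5 * logdet
```

Note this only addresses the global (compressive-side) coding rate term; the class-conditional terms in the full MCR objective would need the same trick applied per segment mask, and the O(d²·n) matrix product can still be large for 3D volumes.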
You have asked a really important question about scalability. Sadly I don't have a good answer for you right now, but our group is currently working on it. We will publish our work on scalability once we have more stable results. Thanks for tuning in to our paper and work!