Description
Hello everyone,
I am using your very nice package to solve a semantic segmentation problem.
As a first step, I am applying the RAPS classification algorithm at the pixel level.
As you can guess, since a segmentation model is essentially a collection of per-pixel classification models, the tensors involved can become very large.
In my case, the calibration set is composed of 10,000 (64x64) images. Torch is designed to match the behavior of NumPy and other numerical packages (see pytorch/pytorch#67592), and its quantile algorithms are limited to tensors of at most 2^24 elements.
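For reference, 10,000 × 64 × 64 = 40,960,000 scores, well above 2^24 ≈ 16.8M. A minimal sketch of what I am hitting (the exact error message may vary across PyTorch versions):

```python
import torch

# 10,000 calibration images of 64x64 pixels -> 40,960,000 scores,
# which exceeds torch.quantile's 2^24 (~16.7M) element limit.
scores = torch.rand(10_000 * 64 * 64)
try:
    torch.quantile(scores, 0.9)
except RuntimeError as e:
    print(e)  # e.g. "quantile() input tensor is too large"
```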
I was wondering what the best way to handle this would be.
Should we wait for a new PyTorch release that handles bigger tensors, work around it on this package's side, or maybe my analysis is off and nothing needs fixing...
As a workaround, I added a line of code that computes the quantile on each sub-tensor of 16,000,000 elements and returns the mean of those quantiles. I am far from an expert and still discovering conformal prediction.
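Here is a minimal sketch of that workaround (the function name and chunk size are just my choices, not anything from the package):

```python
import torch

def chunked_quantile(scores: torch.Tensor, q: float,
                     chunk_size: int = 16_000_000) -> torch.Tensor:
    # Split the flattened scores into chunks small enough for
    # torch.quantile (each below the 2^24 element limit), compute the
    # quantile of each chunk, then average the per-chunk quantiles.
    # This is an approximation, not the exact global quantile.
    chunks = scores.flatten().split(chunk_size)
    return torch.stack([torch.quantile(c, q) for c in chunks]).mean()
```

With i.i.d. scores and equal-sized chunks the average should stay close to the true quantile, but it is not exact, which is part of why I am unsure whether this is the right fix.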
Have a nice day guys :)