Hi,
I tried to use AccuracyCalculator.get_accuracy() to calculate MAP@R for my model's predictions on the Stanford Online Products (SOP) dataset. However, since this dataset is relatively large, the computation doesn't fit into GPU memory when AccuracyCalculator.get_accuracy() runs. I tried two different strategies to work around this (listed below, after a sketch of my baseline call), and both failed to resolve it.
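For reference, this is roughly how I'm calling it. It's a sketch rather than my exact script: the tensor shapes are random stand-ins for my real SOP test embeddings, and the get_accuracy() argument order is what I understand from the docs (it may also depend on which library version you're on, and whether it expects torch tensors or numpy arrays).

```python
import torch
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

# Random stand-ins for my real SOP test embeddings; the SOP test split has
# roughly 60k images, which is what makes the k-NN search run out of memory.
embeddings = torch.randn(60502, 512, device="cuda")
labels = torch.randint(0, 11316, (60502,), device="cuda")

calculator = AccuracyCalculator(include=("mean_average_precision_at_r",))
accuracies = calculator.get_accuracy(
    embeddings, embeddings,   # query set and reference set are the same
    labels, labels,
    embeddings_come_from_same_source=True,
)
print(accuracies["mean_average_precision_at_r"])
```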
1. Use k = "max_bin_count": I set k="max_bin_count" when initializing the AccuracyCalculator object (see the sketch after this list), but I got the following error:
RuntimeError: Error in virtual void* faiss::gpu::StandardGpuResourcesImpl::allocMemory(const faiss::gpu::AllocRequest&) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/gpu/StandardGpuResources.cpp:443: Error: 'err == cudaSuccess' failed: Failed to cudaMalloc 1610612736 bytes on device 0 (error 2 out of memory
Outstanding allocations:
2. Transfer the whole dataset from GPU to CPU: I applied detach().cpu() to the embedding tensors to bring them to the CPU before calling get_accuracy() (second part of the sketch below). However, this didn't resolve the issue either.
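Concretely, the two attempts looked roughly like this. Again a sketch: k="max_bin_count" is the option documented for AccuracyCalculator, and the second part simply moves my tensors off the GPU before the call.

```python
# Strategy 1: let the library cap k at the largest label bin count.
# This is the configuration that raised the cudaMalloc error quoted above.
calculator = AccuracyCalculator(
    include=("mean_average_precision_at_r",),
    k="max_bin_count",
)

# Strategy 2: move everything to the CPU before calling get_accuracy().
# This still failed, presumably because the k-NN search inside
# get_accuracy() is still executed by faiss on the GPU.
embeddings_cpu = embeddings.detach().cpu()
labels_cpu = labels.detach().cpu()
accuracies = calculator.get_accuracy(
    embeddings_cpu, embeddings_cpu,
    labels_cpu, labels_cpu,
    embeddings_come_from_same_source=True,
)
```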
Interestingly, the "A Metric Learning Reality Check" paper reports MAP@R for the SOP dataset, so I'm wondering how the authors performed the calculation. Could anyone tell me what I need to do to resolve this issue?
Best,
Farshad