Feature Request, adaptive argK #391

Open
MH-limarco opened this issue Oct 22, 2024 · 0 comments

MH-limarco commented Oct 22, 2024

Hi KeOps team, I am working on an adaptive KNN based on a probabilistic sparsity mechanism.
Since my sample size N is very large, I would really like to port my original PyTorch algorithm onto your architecture.

import torch

N = 1000   # sample size, 1000 ~ 100000
dim = 64   # feature dimension, 64 ~ 1500
x = torch.randn(N, dim)
a = torch.nn.Parameter(torch.tensor([0.5]))  # param 1
b = torch.nn.Parameter(torch.tensor([0.5]))  # param 2

similarity = torch.cdist(x, x)  # some distance formula, shape (N, N)
similarity -= torch.log(-torch.log(torch.rand_like(similarity) + 1e-6))  # add Gumbel noise

probs = a[0] * torch.relu(similarity - b[0])  # adaptive formula

res = (((probs + probs.t()) / 2) > 0) * 1  # set all values > 0 to 1, else to 0

similarity_idx = probs.nonzero().T  # output 1, shape (2, L) where L is the adaptive neighbour count

loss_probs = probs.sum(dim=1)  # output 2, shape (N,)

This is my own attempt with KeOps:

from pykeops.torch import LazyTensor
import torch

N = 1000  # sample size
dim = 64
x = torch.randn(N, dim)
a = torch.nn.Parameter(torch.tensor([0.5]))  # param 1
b = torch.nn.Parameter(torch.tensor([0.5]))  # param 2

# some distance formula
G_i = LazyTensor(x[:, None, :])
X_j = LazyTensor(x[None, :, :])
similarity = ((G_i - X_j) ** 2).sum(-1)

# similarity -= torch.log(-torch.log(torch.rand_like(similarity) + 1e-6))  # Gumbel noise, cannot be done on a LazyTensor

# adaptive formula
probs = a[0] * (similarity - b[0]).relu()
probs = (((probs + probs.t()) / 2) > 0)  # set all values > 0 to 1, else to 0

indices = probs.nonzero().T  # output 1 -- Error: LazyTensor does not support nonzero()
loss_probs = probs.sum(dim=1)  # output 2
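
As a partial workaround for the commented-out noise line, the only thing I could express symbolically is per-point Gumbel noise attached as extra i- and j-variables. This is just a sketch: the resulting pairwise term (g_i + g_j) / 2 is not i.i.d. across pairs the way torch.rand_like noise is, and the 1/2 scaling is my own choice.

import torch
from pykeops.torch import LazyTensor

N, dim = 1000, 64
x = torch.randn(N, dim)

# Per-point Gumbel(0, 1) samples, drawn once per sample instead of once per pair.
u = torch.rand(N, 1)
g = -torch.log(-torch.log(u + 1e-6) + 1e-6)

Eps_i = LazyTensor(g[:, None, :])  # (N, 1, 1) noise variable attached to the i axis
Eps_j = LazyTensor(g[None, :, :])  # (1, N, 1) noise variable attached to the j axis

X_i = LazyTensor(x[:, None, :])
X_j = LazyTensor(x[None, :, :])
similarity = ((X_i - X_j) ** 2).sum(-1)              # symbolic (N, N) squared distances
similarity_noisy = similarity - (Eps_i + Eps_j) / 2  # per-point noise injected symbolically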

Key points I encountered:
How can I add genuinely per-pair random noise to a LazyTensor? (The per-point sketch above is the closest I could get.)
How can I get an adaptive number K of indices (an adaptive argKmax) from KeOps?

I am new to C++/KeOps, and I wonder whether there is a KeOps method that can handle this code.
Thanks!
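
For reference, the closest workaround I have found for the adaptive-K question is to pick a fixed upper bound K_max by hand (a hypothetical cap, not a KeOps feature), let argKmin return the K_max nearest candidates, and then apply the adaptive threshold densely on that small (N, K_max) set in plain PyTorch:

import torch
from pykeops.torch import LazyTensor

N, dim, K_max = 1000, 64, 256  # K_max: hand-picked upper bound on the neighbour count
x = torch.randn(N, dim)
a = torch.nn.Parameter(torch.tensor([0.5]))
b = torch.nn.Parameter(torch.tensor([0.5]))

X_i = LazyTensor(x[:, None, :])
X_j = LazyTensor(x[None, :, :])
similarity = ((X_i - X_j) ** 2).sum(-1)     # symbolic (N, N) squared distances

# KeOps handles the O(N^2) part: indices of the K_max smallest distances per row.
knn_idx = similarity.argKmin(K_max, dim=1)  # (N, K_max) LongTensor

# PyTorch handles the adaptive part on the small (N, K_max) candidate set.
d_knn = ((x[:, None, :] - x[knn_idx]) ** 2).sum(-1)  # (N, K_max) dense distances
probs = a[0] * torch.relu(d_knn - b[0])              # adaptive formula, candidates only

keep = probs > 0
rows = torch.arange(N)[:, None].expand_as(knn_idx)
similarity_idx = torch.stack([rows[keep], knn_idx[keep]])  # (2, L), L adapts with the threshold
loss_probs = probs.sum(dim=1)                              # (N,), like output 2 above

This keeps the quadratic part inside KeOps, but it silently truncates any neighbourhood larger than K_max and does not cover the symmetrisation or the noise term, which is why a native adaptive argK reduction would be very useful.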

@MH-limarco MH-limarco changed the title Feature Request, adaptive KNN Feature Request, adaptive argK Oct 22, 2024
@MH-limarco MH-limarco changed the title Feature Request, adaptive argK Feature Request, adaptive argKmax Oct 22, 2024
@MH-limarco MH-limarco changed the title Feature Request, adaptive argKmax Feature Request, adaptive argK Oct 23, 2024