Hi KeOps team, I am working on adaptive KNN based on a probabilistic sparsity mechanism.
Since my sample size N is very large, I really need to speed up my original PyTorch algorithm by porting it to your architecture. Here is the original PyTorch version:
```python
import torch

N = 1000   # sample number, 1000 ~ 100000
dim = 64   # 64 ~ 1500
x = torch.randn(N, dim)

a = torch.nn.Parameter(torch.tensor([0.5]))  # param 1
b = torch.nn.Parameter(torch.tensor([0.5]))  # param 2

similarty = torch.cdist(x, x)                                          # some distance formula
similarty -= torch.log(-torch.log(torch.rand_like(similarty) + 1e-6))  # add Gumbel noise
probs = a[0] * torch.relu(similarty - b[0])                            # adaptive formula

res = (((probs + probs.t()) / 2) > 0) * 1   # change all values > 0 to 1, else to 0
similarty_idx = probs.nonzero().T           # output 1, shape (2, L), L is the adaptive neighbor number
loss_probs = probs.sum(dim=1)               # output 2, shape (N,)
```
This is my own attempt with KeOps:
```python
from pykeops.torch import LazyTensor
import torch

N = 1000   # sample number
dim = 64
x = torch.randn(N, dim)

a = torch.nn.Parameter(torch.tensor([0.5]))  # param 1
b = torch.nn.Parameter(torch.tensor([0.5]))  # param 2

# some distance formula
G_i = LazyTensor(x[:, None, :])
X_j = LazyTensor(x[None, :, :])
similarty = ((G_i - X_j) ** 2).sum(-1)

# similarty -= torch.log(-torch.log(torch.rand_like(similarty) + 1e-6))  # Gumbel noise: how to do this on a LazyTensor?

# adaptive formula
probs = a[0] * (similarty - b[0]).relu()
probs = ((probs + probs.t()) / 2) > 0   # change all values > 0 to 1, else to 0
indices = probs.nonzero().T             # output 1 -- Error: LazyTensor does not support nonzero()
loss_probs = probs.sum(dim=1)           # output 2
```
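If I understand the docs correctly, the piece that does seem to fit a KeOps reduction is the differentiable row-wise sum (output 2), provided `a` and `b` are wrapped as 1D tensors so that KeOps treats them as parameters. Below is a minimal sketch of just that part, under that assumption; Gumbel noise, the symmetrization step, and the adaptive indices are left out, and the names `x_i`, `x_j`, `a_p`, `b_p` are mine:

```python
import torch
from pykeops.torch import LazyTensor

N, dim = 1000, 64
x = torch.randn(N, dim)
a = torch.nn.Parameter(torch.tensor([0.5]))
b = torch.nn.Parameter(torch.tensor([0.5]))

x_i = LazyTensor(x[:, None, :])    # (N, 1, dim) symbolic rows
x_j = LazyTensor(x[None, :, :])    # (1, N, dim) symbolic columns
a_p = LazyTensor(a)                # 1D tensor -> treated as a parameter
b_p = LazyTensor(b)                # 1D tensor -> treated as a parameter

D_ij = ((x_i - x_j) ** 2).sum(-1)  # symbolic (N, N) squared distances
P_ij = a_p * (D_ij - b_p).relu()   # adaptive formula, still symbolic
loss_probs = P_ij.sum(dim=1)       # dense (N, 1) result, reduction done by KeOps
```

This part should stay differentiable with respect to `a` and `b`, but it still gives me no way to extract the adaptive neighbor indices (output 1).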
Key points I encountered:

- How can I add random (Gumbel) noise inside a LazyTensor formula?
- How can I get the indices for an adaptive number of neighbors K in KeOps?

I am new to C++/KeOps, and I wonder if there is any KeOps method to handle this code. (A rough chunked PyTorch sketch of the two outputs I am after is included below for reference.)
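For reference, here is a rough chunked pure-PyTorch sketch (not KeOps) of what I am ultimately trying to compute; it just processes the rows in blocks so that the full N x N matrix is never held in memory at once. The helper name and block size are only illustrative, and the symmetrization step is omitted:

```python
import torch

def adaptive_neighbors(x, a, b, block=256):
    """Chunked version of the dense PyTorch code above (illustrative only)."""
    N = x.shape[0]
    idx_blocks, loss_blocks = [], []
    for start in range(0, N, block):
        xb = x[start:start + block]                        # (B, dim) block of rows
        sim = torch.cdist(xb, x)                           # (B, N) distances
        sim = sim - torch.log(-torch.log(torch.rand_like(sim) + 1e-6))  # Gumbel noise
        probs = a[0] * torch.relu(sim - b[0])              # adaptive formula
        loss_blocks.append(probs.sum(dim=1))               # output 2, per-row sums
        nz = probs.nonzero().T                             # (2, L_b) indices within the block
        nz[0] += start                                     # shift row indices to global ones
        idx_blocks.append(nz)
    return torch.cat(idx_blocks, dim=1), torch.cat(loss_blocks)

similarty_idx, loss_probs = adaptive_neighbors(x, a, b)    # outputs 1 and 2
```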
Thanks!!!~