
Eigendecomposition of torch LazyTensor #375

Open
huguesva opened this issue Jul 2, 2024 · 1 comment
huguesva commented Jul 2, 2024

Hello,
Thanks for the great library!

I am trying to perform the eigendecomposition of a torch LazyTensor. The following is a slight adaptation of the code in https://www.kernel-operations.io/keops/_auto_tutorials/backends/plot_scipy.html: I simply converted the input data to torch so as to obtain a pykeops.torch.LazyTensor.

import numpy as np
from pykeops.torch import LazyTensor
import pykeops.config
import torch

from scipy.sparse.linalg import aslinearoperator
from scipy.sparse.linalg import eigsh

dtype = "float32"  # No need for double precision here!

N = 10000 if pykeops.config.gpu_available else 1000
t = np.linspace(0, 2 * np.pi, N + 1)[:-1]
x = np.stack((0.4 + 0.4 * (t / 7) * np.cos(t), 0.5 + 0.3 * np.sin(t)), 1)
x = x + 0.01 * np.random.randn(*x.shape)
x = x.astype(dtype)
x = torch.Tensor(x)

sigma = 0.05
x_ = x / sigma
x_i, x_j = LazyTensor(x_[:, None, :]), LazyTensor(x_[None, :, :])
K_xx = (-((x_i - x_j) ** 2).sum(2) / 2).exp()  # Symbolic (N,N) Gaussian kernel matrix

K = aslinearoperator(K_xx)

eigenvalues, eigenvectors = eigsh(K, k=5)  # Largest 5 eigenvalues/vectors

print("Largest eigenvalues:", eigenvalues)
print("Eigenvectors of shape:", eigenvectors.shape)

This does not seem to work and throws an error from scipy/sparse/linalg/_eigen/arpack/arpack.py:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
     [21] K_xx = (-((x_i - x_j) ** 2).sum(2) / 2).exp()  # Symbolic (N,N) Gaussian kernel matrix
     [23] K = aslinearoperator(K_xx)
---> [25] eigenvalues, eigenvectors = eigsh(K, k=5)  # Largest 5 eigenvalues/vectors
     [27] print("Largest eigenvalues:", eigenvalues)
     [28] print("Eigenvectors of shape:", eigenvectors.shape)

File /venv/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py:1700, in eigsh(A, k, M, sigma, which, v0, ncv, maxiter, tol, return_eigenvectors, Minv, OPinv, mode)
   [1698] with _ARPACK_LOCK:
   [1699]     while not params.converged:
-> [1700]         params.iterate()
   [1702]     return params.extract(return_eigenvectors)

File /venv/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py:549, in _SymmetricArpackParams.iterate(self)
    [546] elif self.ido == 1:
    [547]     # compute y = Op*x
    [548]     if self.mode == 1:
--> [549]         self.workd[yslice] = self.OP(self.workd[xslice])
    [550]     elif self.mode == 2:
    [551]         self.workd[xslice] = self.OPb(self.workd[xslice])

File /venv/lib/python3.11/site-packages/scipy/sparse/linalg/_interface.py:236, in LinearOperator.matvec(self, x)
    [233] if x.shape != (N,) and x.shape != (N,1):
...
     [76] @staticmethod
     [77] def view(x, s):
---> [78]     return x.view(s)

TypeError: Tuple must have size 2, but has size 3

If you have any guidance, it would be very helpful.

Thanks a lot.
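For reference, the TypeError suggests that ARPACK's internal work vectors (1-D NumPy arrays) reach the LazyTensor's matvec without being converted to torch tensors of the shape the KeOps backend expects. One possible workaround, sketched below under that assumption, is to build the scipy LinearOperator by hand so that shape and dtype normalization happen explicitly inside the matvec. A dense NumPy Gaussian kernel stands in for the symbolic K_xx so the sketch runs without KeOps; with KeOps, the body of matvec would instead apply K_xx to a torch tensor and convert the result back to NumPy.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Dense NumPy Gaussian kernel standing in for the symbolic K_xx, so the
# wrapping pattern below is self-contained and runnable without KeOps.
rng = np.random.default_rng(0)
x = rng.standard_normal((200, 2)).astype("float32")
sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K_dense = np.exp(-sq_dists / 2)

def matvec(v):
    # ARPACK hands the operator 1-D (or (N, 1)) work vectors; normalize
    # shape and dtype here. With KeOps, this body would instead be e.g.
    # (K_xx @ torch.from_numpy(v).reshape(-1, 1)).cpu().numpy().ravel()
    v = np.asarray(v, dtype="float32").ravel()
    return K_dense @ v

K = LinearOperator((200, 200), matvec=matvec, dtype=np.float32)
eigenvalues, eigenvectors = eigsh(K, k=5)  # largest 5 eigenpairs
```

The key point is that eigsh never needs the matrix itself, only a consistent matvec, so the same wrapper shape should carry over to the symbolic kernel.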

huguesva (Author) commented

After discussion with @jeanfeydy this morning (thanks a lot!), one option would be to use the linear_operator library (https://linear-operator.readthedocs.io/en/latest/index.html) instead of aslinearoperator from scipy.sparse.linalg.

Sadly, the function to_linear_operator does not seem to support pykeops torch LazyTensors. Maybe there is another way in linear_operator that I am not aware of.

Thanks in advance!
