I gave the arXiv paper of Calandriello et al. 2017 a try, but I failed to understand the link between the actual formula in Section 3 (Sequential RLS Sampling) of the paper and the code implementation of compute_tau 👇
```python
def compute_tau(centers_dict: CentersDictionary,
                X: np.ndarray,
                similarity_func: callable,
                lam_new: float,
                force_cpu=False):
    ...
    diag_norm = np.asarray(similarity_func.diag(X))

    # (m x n) kernel matrix between samples in dictionary and dataset X
    K_DU = xp.asarray(similarity_func(centers_dict.X, X))

    # The estimator proposed in Calandriello et al. 2017 is
    # diag(XX' - XX'S(SX'XS + lam*I)^(-1)SXX')/lam
    # Here for efficiency we collect an S inside the inverse and compute
    # diag(XX' - XX'(X'X + lam*S^(-2))^(-1)XX')/lam
    # note that in the second term, we take care of dropping the rows/columns of X associated
    # with 0 entries in S
    U_DD, S_DD, _ = np.linalg.svd(xp.asnumpy(similarity_func(centers_dict.X, centers_dict.X)
                                             + lam_new * np.diag(centers_dict.probs)))

    U_DD, S_root_inv_DD = __stable_invert_root(U_DD, S_DD)

    E = xp.asarray(S_root_inv_DD * U_DD.T)

    # compute (X'X + lam*S^(-2))^(-1/2)XX'
    X_precond = E.dot(K_DU)

    # the diagonal entries of XX'(X'X + lam*S^(-2))^(-1)XX' are just the squared
    # ell-2 norm of the columns of (X'X + lam*S^(-2))^(-1/2)XX'
    tau = (diag_norm - xp.asnumpy(xp.square(X_precond, out=X_precond).sum(axis=0))) / lam_new  # 👈
```
Specifically: does X'X here correspond to the kernel matrix $\mathbf{K}_t=\boldsymbol{\Phi}_t^{\top}\boldsymbol{\Phi}_t$? And what about $XX'$?
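To make the question concrete, here is a toy check I wrote myself (plain NumPy, not code from this repo), assuming the paper's $\boldsymbol{\Phi}_t$ has the feature vectors $\phi_i$ as columns, so that X'X would be the $n \times n$ Gram/kernel matrix and XX' the $d \times d$ covariance:

```python
# My own toy check (not from the repo): for an explicit feature map Phi (d x n),
# the two formulations of ridge leverage scores coincide:
#   tau_i = phi_i' (Phi Phi' + lam I_d)^(-1) phi_i       (covariance / "XX'" side)
#         = [K (K + lam I_n)^(-1)]_ii,  with K = Phi'Phi (Gram / "X'X" side)
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 8, 0.1
Phi = rng.standard_normal((d, n))   # columns phi_i are the feature vectors

K = Phi.T @ Phi                     # n x n Gram matrix ("X'X" in the code comments?)
C = Phi @ Phi.T                     # d x d covariance  ("XX'" in the code comments?)

tau_cov = np.einsum('ij,ij->j', Phi, np.linalg.solve(C + lam * np.eye(d), Phi))
tau_gram = np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))

assert np.allclose(tau_cov, tau_gram)  # both give the exact RLS tau_i(lam)
```

If that reading is right, the `(diag_norm - ...) / lam_new` form in the code would follow from the identity $K - K(K+\lambda I)^{-1}K = \lambda K(K+\lambda I)^{-1}$, but please correct me if I'm mixing things up.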
I also couldn't understand what X_precond is here, or why an SVD decomposition is needed.
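Writing it out for myself, this is the naive dense version I think compute_tau is equivalent to (a hypothetical sketch under my own reading; `naive_tau` and the `1e-12` threshold are my inventions, not the repo's):

```python
# A naive dense re-derivation to check my understanding (hypothetical, not the
# repo's code). With D the dictionary points and S^(-2) = diag(probs), the
# estimator is  tau = diag(K_nn - K_nD (K_DD + lam*S^(-2))^(-1) K_Dn) / lam.
import numpy as np

def naive_tau(K_DD, K_DU, diag_norm, probs, lam):
    # SVD of the symmetric PSD m x m matrix gives a numerically stable inverse
    # square root: A^(-1/2) = U S^(-1/2) U', with near-zero singular values
    # dropped rather than amplified (my guess at what __stable_invert_root does).
    A = K_DD + lam * np.diag(probs)
    U, s, _ = np.linalg.svd(A)
    keep = s > s.max() * 1e-12              # threshold near-singular directions
    E = (U[:, keep] / np.sqrt(s[keep])).T   # E'E = A^(-1) on the kept subspace

    # X_precond = A^(-1/2) K_DU: the squared ell-2 norm of its i-th column is
    # exactly the quadratic form k_i' A^(-1) k_i, the second term above.
    X_precond = E @ K_DU
    return (diag_norm - (X_precond ** 2).sum(axis=0)) / lam
```

So my guess is that the SVD is there purely for numerical stability of the inverse square root, and X_precond is just an intermediate whose column norms give the quadratic forms; is that correct?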
In Section 3 there is the definition of a dictionary: "we redefine a dictionary as a collection $\mathcal{I}=\{(i,\widetilde{p}_i,q_i)\}$, where $i$ is the index of the point $x_i$ stored in the dictionary, $\widetilde{p}_i$ tracks the probability used to sample it, and $q_i$ is the number of copies (multiplicity) of $i$." Here I couldn't understand the role of $q_i$.
Overall, I feel I didn't intuitively understand the EXPAND and SHRINK steps of Algorithm 1; my current (possibly wrong) mental model is sketched below. It would be a great help if you could comment on this.
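For reference, this is the toy mental model I have so far (entirely my own hypothetical sketch of how I read Algorithm 1, with `expand`, `shrink`, and `q_bar` as illustrative names, not the repo's code):

```python
# My toy mental model of EXPAND/SHRINK (hypothetical sketch, not the repo's
# implementation). As I read it, q_i counts how many of the q_bar independent
# Bernoulli copies of point i are still alive; storing q_i instead of a single
# bit lets SHRINK lower p_i later without ever revisiting the raw data.
import numpy as np

rng = np.random.default_rng(0)
q_bar = 4  # number of independent copies per point (the paper's q-bar?)

def expand(dictionary, i, p_new):
    # New point: flip q_bar coins with the (over)estimated probability p_new.
    q = rng.binomial(q_bar, p_new)
    if q > 0:
        dictionary[i] = (p_new, q)

def shrink(dictionary, new_probs):
    # Probabilities only decrease over time; each existing copy survives with
    # probability p_new / p_old, so q_i ~ Binomial(q_i, p_new / p_old).
    for i, (p_old, q_old) in list(dictionary.items()):
        p_new = min(new_probs[i], p_old)
        q_new = rng.binomial(q_old, p_new / p_old)
        if q_new > 0:
            dictionary[i] = (p_new, q_new)
        else:
            del dictionary[i]  # all copies died: drop the point for good
```

If this reading is right, the tau values from compute_tau are what feed new_probs, and keeping $q_i$ (rather than a single in/out bit) is what makes the survival probabilities compose correctly across successive SHRINK rounds; please tell me where this picture breaks down.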
It would be greatly appreciated if you could assist me in resolving this matter.