N-d indexing
To gain indexing functionality rapidly, N-dimensional indexing can be implemented in terms of repeated slicing:
```julia
function ref(A::Tensor, I...)
    # Slice from the last dimension to the first, so that dimensions
    # dropped by scalar indices don't shift the remaining dimension numbers.
    for i = length(I):-1:1
        A = slice(A, i, I[i])
    end
    return A
end
```
The `slice(A,n,i)` function takes a tensor, the dimension to slice, and the index to slice it with. For example, `slice(A,2,1:3)` is equivalent to `A[:,1:3,:]` when `A` is 3-d.
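As a quick sanity check, here is a minimal runnable sketch of the same scheme using plain Julia `Array`s in place of the hypothetical `Tensor` type; `baseslice` expresses the intended semantics of `slice` via Base indexing, and `baseslice`/`myref` are illustrative names, not part of the proposal:

```julia
# slice(A, n, i) expressed with Base indexing: index dimension n with i,
# take everything (:) in the other dimensions.
baseslice(A, n, i) = A[ntuple(d -> d == n ? i : Colon(), ndims(A))...]

function myref(A::AbstractArray, I...)
    for d = length(I):-1:1      # last-to-first, as in ref() above
        A = baseslice(A, d, I[d])
    end
    return A
end

A = reshape(1:24, 2, 3, 4)
@assert baseslice(A, 2, 1:3) == A[:, 1:3, :]   # the example from the text
@assert myref(A, 2, 1:3, :) == A[2, 1:3, :]    # N-d indexing via repeated slicing
```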
Example implementation for a scalar index:
```julia
function slice(A::Tensor, n::Int, i::Int)
    d_in  = dims(A)
    d_out = [d_in[1:(n-1)]; d_in[(n+1):end]]   # drop dimension n
    M = prod(d_out[1:(n-1)])      # elements in the leading dimensions
    N = M * prod(d_out[n:end])    # total elements in the output
    stride = M * d_in[n]          # linear distance between trailing blocks
    B = make_similar(A, dims=d_out)
    for j = 1:N
        B[j] = A[1 + (i-1)*M + (j-1)%M + div(j-1,M)*stride]
    end
    return B
end
```
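To verify the index arithmetic, here is the same loop translated to plain `Array`s, with `size` and an `Array` constructor standing in for the hypothetical `dims` and `make_similar` (`slice_scalar` is an illustrative name):

```julia
function slice_scalar(A::Array, n::Int, i::Int)
    d_in  = size(A)
    d_out = (d_in[1:n-1]..., d_in[n+1:end]...)   # drop dimension n
    M = prod(d_in[1:n-1]; init=1)   # init=1 covers the n == 1 case
    N = prod(d_out; init=1)         # total elements in the output
    stride = M * d_in[n]            # linear distance between trailing blocks
    B = Array{eltype(A)}(undef, d_out)
    for j = 1:N
        B[j] = A[1 + (i-1)*M + (j-1)%M + div(j-1, M)*stride]
    end
    return B
end

A = reshape(collect(1:24), 2, 3, 4)
@assert slice_scalar(A, 2, 2) == A[:, 2, :]
@assert slice_scalar(A, 1, 1) == A[1, :, :]
```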
Here is an alternate implementation that might be faster and easier to adapt to other kinds of indexing:
```julia
function slice(A::Tensor, n::Int, i::Int)
    d_in  = dims(A)
    d_out = [d_in[1:(n-1)]; d_in[(n+1):end]]   # drop dimension n
    M = prod(d_in[1:(n-1)])   # elements in the leading dimensions
    N = prod(d_in)            # total elements in the input
    stride = M * d_in[n]      # distance between consecutive blocks
    B = make_similar(A, dims=d_out)
    index_offset = 1 + (i-1)*M
    l = 1
    for j = 0:stride:(N-stride)   # one iteration per trailing index
        offs = j + index_offset
        for k = 0:(M-1)           # contiguous copy of one block
            B[l] = A[offs + k]
            l += 1
        end
    end
    return B
end
```
This code views the data as "pseudo-2d": the inner loop copies the leading dimensions (those before the indexed one), while the outer loop steps through the trailing dimensions. Another way to see it is that the inner loop does the contiguous ("memcpy") copying. When `n == 1`, `M == 1` as well and the inner loop degenerates to copying single elements; this could be handled as a special case.
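To illustrate that view, here is a hedged sketch on plain `Array`s in which the inner loop becomes a single block copy; `copyto!` stands in for the memcpy, and `slice_pseudo2d` is an illustrative name rather than the proposed API:

```julia
function slice_pseudo2d(A::Array, n::Int, i::Int)
    d_in = size(A)
    M = prod(d_in[1:n-1]; init=1)   # contiguous block length (1 when n == 1)
    stride = M * d_in[n]            # distance between consecutive blocks
    B = Array{eltype(A)}(undef, (d_in[1:n-1]..., d_in[n+1:end]...))
    src = 1 + (i-1)*M
    for dest = 1:M:length(B)        # one contiguous block per iteration
        copyto!(B, dest, A, src, M) # the "memcpy" inner loop
        src += stride
    end
    return B
end

A = reshape(collect(1:24), 2, 3, 4)
@assert slice_pseudo2d(A, 2, 2) == A[:, 2, :]
@assert slice_pseudo2d(A, 3, 4) == A[:, :, 4]
```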