slices of matrices should be vectors #231
This is fairly significant. Is this the rule we originally wanted way back when: "the rank of the result is the sum of the ranks of the indexes"? Or is it squeezing singleton dimensions? Or just trimming trailing singleton dimensions?
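For what it's worth, here is how the three candidate rules would differ on a small example (illustrative semantics only, not a claim about what any particular version does):

```julia
A = reshape(1:12, 3, 4)   # a 3x4 matrix

# Rule 1: result rank = sum of the ranks of the indexes (APL-style).
#   A[2, :]    -> one scalar + one range index, so rank 1: a length-4 vector
#   A[2:2, :]  -> two range indexes, so rank 2: a 1x4 matrix

# Rule 2: squeeze every singleton dimension of the result.
#   A[2:2, 3:3] -> 1x1 collapses all the way to rank 0

# Rule 3: trim only trailing singleton dimensions.
#   A[:, 2]  -> 3x1 trims to a length-3 vector
#   A[2, :]  -> 1x4 has no trailing singleton, so it stays a 1x4 matrix
```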
As in it's a significant question as to how this should behave, or that it's a significant bug?
I think the current behavior is intentional, and settling this question is a big deal.
Yes, that's why I assigned this to Viral — I think it's intentional too. I don't care for it myself, but I guess let's have that discussion again now.
We need a consistent rule. Usually, when you do that, you do want a vector. -viral
For a 3-tensor, would slicing along one scalar index then give a matrix?
Yes. What else would it be? Another 3-tensor? Keep in mind that if you want that, you can still do it explicitly.
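Presumably the explicit way is to index with a length-1 range, which keeps the dimension; in current Julia:

```julia
A = rand(3, 4, 5)    # a 3-tensor

size(A[2, :, :])     # (4, 5): the scalar-indexed dimension is dropped
size(A[2:2, :, :])   # (1, 4, 5): a length-1 range keeps it
```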
We have to beware the type inference implications of this. I'm not sure how to deal with a tuple (or integer) whose length depends on the types of a sequence of values. It seems possible, but it's probably at the outer edge of what we're capable of. We might even need to add a special feature for it. Here's roughly how the rank counting has to go:
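(A sketch of the kind of rank-counting loop in question; the function name is made up, and this is not the actual implementation.)

```julia
# The result rank is the sum of the ranks of the indexes:
# scalar indexes contribute 0, ranges and index vectors contribute their rank.
function result_rank(indexes...)
    r = 0
    for i in indexes
        if !isa(i, Real)      # a non-scalar index keeps at least one dimension
            r += ndims(i)
        end
    end
    return r
end

result_rank(2, 1:4)     # 1: a matrix row sliced down to a vector
result_rank(2:2, 1:4)   # 2: still a matrix
```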
It's not too realistic to expect to run that whole loop at compile time.
Would it help if we were stricter about indexing behaviors? As in you can only index into an array with the number of indices equal to the number of dimensions of the array. In other words, would eliminating linear indexing help?
No, because even if you have the right number of indexes, you still need something like that rank-counting loop above.
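(For reference, the linear indexing mentioned above is addressing a matrix with a single index in column-major order:)

```julia
A = reshape(1:12, 3, 4)   # columns are 1:3, 4:6, 7:9, 10:12

A[5]      # 5: one index walks down the columns
A[2, 2]   # 5: the same element with one index per dimension
```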
Hmm. So what are you thinking? If slices of tensors are always the same rank as the tensor being sliced, then how do you get a vector from a matrix? Or a matrix from a 3-tensor?
I guess one approach could be that slicing always preserves the rank, making inference easy since it's completely stable. But then you need a way to explicitly drop the singleton dimensions afterwards.
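Something like vec would be one way to drop back down explicitly under that rule (sketch; vec collapses any array to a vector of its elements in column-major order):

```julia
A = rand(3, 4)

row = A[2:2, :]   # a rank-preserving slice: 1x4
v = vec(row)      # explicitly collapsed to a length-4 vector
size(v)           # (4,)
```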
We have |
That seems like a reasonable thing to have. It's a special case, but we know those are necessary to make things like this work.
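If what's wanted is a helper that strips only trailing singleton dimensions, a sketch might look like this (hypothetical name, not a Base function):

```julia
# Drop trailing dimensions of size 1 and nothing else.
function drop_trailing_singletons(A::AbstractArray)
    d = ndims(A)
    while d > 1 && size(A, d) == 1
        d -= 1
    end
    return reshape(A, size(A)[1:d])
end

size(drop_trailing_singletons(ones(3, 1, 1)))   # (3,)
size(drop_trailing_singletons(ones(1, 4)))      # (1, 4): leading singleton untouched
```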
Maybe the rule we want is to drop dimensions for trailing scalars?
That might actually work, but it does feel like a bit of a hack.
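To make the trailing-scalar rule concrete (hypothetical semantics, spelled out in comments):

```julia
A = rand(3, 4)

# Under "drop dimensions for trailing scalars":
#   A[:, 2]  -> the last index is a scalar, so its dimension drops: a length-3 vector
#   A[2, :]  -> the scalar index is not trailing, so nothing drops: a 1x4 matrix
#   A[2, 3]  -> both indexes are trailing scalars: a plain scalar
```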
Here's another case to cover which can come up when range indexing with a variable:
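(An illustrative sketch of such a case; the names are made up.)

```julia
A = rand(5, 5)
v = rand(2)

n = 1                # a range bound that happens to be 1 on the first iteration
s = A[1:n, 1:n]      # a 1x1 matrix, not a scalar
size(s)              # (1, 1)

0.5 * v              # fine: a true scalar scales any vector
# s * v              # DimensionMismatch: the 1x1 matrix doesn't act like a scalar
```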
The problem is that a 1x1 array isn't dimensionally compatible with anything, even though it should be--it's really a scalar. There's not really a good workaround, as squeeze()ing leaves you with a 0-dimensional Array, which is even less useful.
That probably can't change, since in general you might do A[1:m,1:n] and expect to get an m-by-n matrix back, even when m or n happens to be 1.
In general, yes. The situation I was working with had the end of the range set by a variable and this hit on the first iteration. On further review I needed to fix indexing on the other side of the operation--I got screwed up when converting to preallocating the other array. I still think the behavior of squeeze() is wrong here. I'll file a separate issue for that.
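For reference, the squeeze() behavior being objected to; squeeze has since become dropdims in current Julia, but the result is the same kind of thing:

```julia
s = ones(1, 1)

z = dropdims(s, dims=(1, 2))   # the modern spelling of squeezing out both dims
ndims(z)                       # 0: a zero-dimensional array
z[]                            # 1.0: the value is there, but z is still not a scalar
```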
We now consistently treat arrays so that trailing singletons don't matter, so we should be ok here.
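One place the trailing-singleton convention is visible in current Julia is size queries and indexing past the last dimension:

```julia
A = rand(3, 4)

size(A, 3)      # 1: dimensions beyond ndims(A) report size 1
A[2, 3, 1]      # allowed: trailing indexes equal to 1 are accepted
A[2, 3, 1, 1]   # also allowed
```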