3 dimensional arrays don't do dot product of sub-matrices correctly #212

Open
mihbor opened this issue Nov 21, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

@mihbor (Contributor) commented Nov 21, 2024

3-dimensional arrays don't compute the dot product of their sub-matrices correctly (unless the array has only one top-level element). Since b[0] and b[1] below are identical to a[0], all three products should print the same matrix.
Reproducer:

import org.jetbrains.kotlinx.multik.api.linalg.dot
import org.jetbrains.kotlinx.multik.api.mk
import org.jetbrains.kotlinx.multik.api.ndarray
import org.jetbrains.kotlinx.multik.ndarray.data.get

fun main() {

  // a has shape (1, 2, 1): a single 2x1 sub-matrix
  val a = mk.ndarray(
    mk[
      mk[
        mk[1.0],
        mk[0.0],
      ],
    ],
  )
  // b has shape (2, 2, 1): two 2x1 sub-matrices, each identical to a[0]
  val b = mk.ndarray(
    mk[
      mk[
        mk[1.0],
        mk[0.0],
      ],
      mk[
        mk[1.0],
        mk[0.0],
      ],
    ],
  )
  println("a[0] dot a[0].T: (CORRECT)")
  println(a[0] dot a[0].transpose())

  println("\nb[0] dot b[0].T: (WRONG)")
  println(b[0] dot b[0].transpose())

  println("\nb[1] dot b[1].T: (WRONG)")
  println(b[1] dot b[1].transpose())
}

prints

a[0] dot a[0].T: (CORRECT)
[[1.0, 0.0],
[0.0, 0.0]]

b[0] dot b[0].T: (WRONG)
[[0.0, 0.0],
[0.0, 0.0]]

b[1] dot b[1].T: (WRONG)
[[0.0, 0.0],
[0.0, 0.0]]
devcrocod added the bug (Something isn't working) label on Nov 21, 2024
@mihbor (Contributor, Author) commented Nov 23, 2024

A couple of observations:
- The problem goes away if I set the engine explicitly with mk.setEngine(KEEngineType).
- With the default engine, the dot product is also sometimes incorrect for D2 x D1 shapes.
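A minimal sketch of that workaround, assuming the pure-Kotlin engine artifact (multik-kotlin) is on the classpath; the exact imports for KEEngineType and setEngine may vary between multik versions, so a wildcard import of the api package is used here:

import org.jetbrains.kotlinx.multik.api.*
import org.jetbrains.kotlinx.multik.api.linalg.dot
import org.jetbrains.kotlinx.multik.ndarray.data.get

fun main() {
  // Force the pure-Kotlin engine instead of the default (native) one
  // before any linear-algebra call.
  mk.setEngine(KEEngineType)

  val b = mk.ndarray(
    mk[
      mk[mk[1.0], mk[0.0]],
      mk[mk[1.0], mk[0.0]],
    ],
  )

  // With KEEngineType both products print [[1.0, 0.0], [0.0, 0.0]],
  // matching the correct a[0] dot a[0].T result from the reproducer.
  println(b[0] dot b[0].transpose())
  println(b[1] dot b[1].transpose())
}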

@devcrocod (Collaborator) commented:

Thank you very much for the additional information.

I have reproduced the error. It indeed occurs with the native implementation, which is selected by default for tensor multiplication.

As far as I can see, this error is related to flags being passed incorrectly to the native implementation, together with the copying of data into a contiguous (sequential) memory layout.

Unfortunately, it won’t be possible to quickly release an artifact with a fix for the bug, as there is a blocker due to linker errors on macOS.
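If that diagnosis is right (strided views being handed to the native routine with the wrong flags), another possible stopgap, purely speculative and untested here, is to materialize each slice into its own dense array before multiplying, e.g. via deepCopy(); switching to KEEngineType remains the only confirmed workaround:

import org.jetbrains.kotlinx.multik.api.*
import org.jetbrains.kotlinx.multik.api.linalg.dot
import org.jetbrains.kotlinx.multik.ndarray.data.get

fun main() {
  val b = mk.ndarray(
    mk[
      mk[mk[1.0], mk[0.0]],
      mk[mk[1.0], mk[0.0]],
    ],
  )

  // Speculative: copy the views into standalone arrays so the engine
  // receives plain matrices rather than strided slices of the 3D array.
  val left = b[0].deepCopy()
  val right = b[0].transpose().deepCopy()
  println(left dot right) // expected [[1.0, 0.0], [0.0, 0.0]]
}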
