The issue is pretty self-explanatory: I wrote a simple job based on the demo code, verified that my GPU is recognized, and confirmed that the context is properly initialized, but when I execute the code the computation fails with a segfault. My intuition is that the problem happens around the time the kernel is compiled, but I don't know for sure.
Any hints on how to debug this further?
object Woah extends App {
  import scalacl._

  // Pick the best available OpenCL context and print which device was chosen.
  implicit val context = Context.best
  println(context.context)

  println("stage 0")
  val a = CLArray[Int](1, 2, 3)
  val v = 10

  println("stage 1")
  // Kernel function captures the local value v.
  val clFunction: CLFunction[Int, Float] = (x: Int) => {
    x * 2.0f * v
  }

  println("stage 2-- expect FAIL")
  // Mapping over the CLArray triggers kernel compilation and execution.
  val clResult = a.map(clFunction).toArray
  println(" we will probably not get to stage 3 : (")
}
CLContext(platform = NVIDIA CUDA; devices = Quadro K1100M)
stage 0
stage 1
stage 2-- expect FAIL
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f62df464aea, pid=2059, tid=140062637090560
#
# JRE version: OpenJDK Runtime Environment (7.0_79-b14) (build 1.7.0_79-b14)
# Java VM: OpenJDK 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.5.6
# Distribution: Ubuntu 14.04 LTS, package 7u79-2.5.6-0ubuntu1.14.04.1
# Problematic frame:
# C [libc.so.6+0x88aea] strlen+0x2a
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
REDACTED
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
# http://icedtea.classpath.org/bugzilla
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Process finished with exit code 134
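To try to confirm the kernel-compilation theory, one thing I might do is compile an equivalent hand-written kernel directly through JavaCL (the library ScalaCL builds on), bypassing ScalaCL's macro-generated code entirely. Below is a rough sketch, assuming JavaCL's createBestContext/createProgram/build API; the kernel source and the names KernelCompileCheck and times_two are my own for illustration, not what ScalaCL actually generates:

import com.nativelibs4java.opencl._

object KernelCompileCheck extends App {
  // Create a context directly with JavaCL (presumably similar to what
  // ScalaCL's Context.best does under the hood).
  val context = JavaCL.createBestContext()
  println(context)

  // Hand-written kernel roughly equivalent to (x: Int) => x * 2.0f * v.
  val source =
    """__kernel void times_two(__global const int* in, __global float* out, int v) {
      |  int i = get_global_id(0);
      |  out[i] = in[i] * 2.0f * v;
      |}""".stripMargin

  val program = context.createProgram(source)
  program.build() // if the native OpenCL compiler is the culprit, the crash should happen here
  val kernel = program.createKernel("times_two")
  println("kernel compiled without crashing")
}

If this compiles cleanly, the problem is more likely in the source ScalaCL generates (or in how it hands that source to the driver) than in the NVIDIA OpenCL compiler itself.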