I have just migrated from Maven to SBT. When I ran my application on Spark, I received this error:
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.jena.riot.system.RiotLib
at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$1.apply(NTripleReader.scala:135)
at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$1.apply(NTripleReader.scala:118)
at net.sansa_stack.rdf.spark.io.NonSerializableObjectWrapper.instance$lzycompute(NTripleReader.scala:207)
at net.sansa_stack.rdf.spark.io.NonSerializableObjectWrapper.instance(NTripleReader.scala:207)
at net.sansa_stack.rdf.spark.io.NonSerializableObjectWrapper.get(NTripleReader.scala:209)
at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:148)
at net.sansa_stack.rdf.spark.io.NTripleReader$$anonfun$load$1.apply(NTripleReader.scala:140)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
As you can see, my application is trying to use NTripleReader when the error occurs.
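For context, the call site is essentially the standard SANSA loading pattern; the following is a simplified sketch, not my exact code (the SparkSession setup and the input path are placeholders):

import java.net.URI

import net.sansa_stack.rdf.spark.io.NTripleReader
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("HareSpark")
  .getOrCreate()

// NTripleReader.load parses the N-Triples file into an RDD of Jena Triples;
// the RiotLib initialization error above is thrown inside this call on the executors.
val triples = NTripleReader.load(spark, URI.create("hdfs:///data/input.nt"))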
Here's my build.sbt:
name :="HareSpark"ThisBuild/ scalaVersion :="2.11.8"valvarScalaVersion="2.11.8"valsansaVersion="0.6.1-SNAPSHOT"valsparkVersion="2.2.1"
resolvers ++= Seq(
  "AKSW Maven Releases" at "https://maven.aksw.org/archiva/repository/internal",
  "AKSW Maven Snapshots" at "https://maven.aksw.org/archiva/repository/snapshots",
  "oss-sonatype" at "https://oss.sonatype.org/content/repositories/snapshots/",
  "Apache repository (snapshots)" at "https://repository.apache.org/content/repositories/snapshots/",
  "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/",
  "NetBeans" at "https://bits.netbeans.org/nexus/content/groups/netbeans/",
  "gephi" at "https://raw.github.com/gephi/gephi/mvn-thirdparty-repo/",
  Resolver.defaultLocal,
  Resolver.mavenLocal,
  "Local Maven Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository",
  "Apache Staging" at "https://repository.apache.org/content/repositories/staging/"
)
libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-library" % varScalaVersion,
  "com.intel.analytics.bigdl" % "bigdl-SPARK_2.2" % "0.4.0-SNAPSHOT",
  "javax.ws.rs" % "javax.ws.rs-api" % "2.1" artifacts(Artifact("javax.ws.rs-api", "jar", "jar"))
)
libraryDependencies ++= Seq(
  "com.databricks" %% "spark-csv" % "1.5.0",
  "com.univocity" % "univocity-parsers" % "2.1.2",
  "org.apache.commons" % "commons-math3" % "3.6.1"
)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-mllib" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-graphx" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-hive" % sparkVersion % "provided"
)
libraryDependencies ++= Seq(
  "net.sansa-stack" %% "sansa-rdf-spark" % sansaVersion
)
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
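Before blaming sbt-assembly itself, one thing I still want to rule out is a Jena version clash between what sansa-rdf-spark resolves and the other dependencies, since "Could not initialize class" means a static initializer has already failed once. If sbt's evicted report showed conflicting org.apache.jena artifacts, pinning them would look roughly like this (hypothetical sketch; the version string is a placeholder and should match whatever sansa-rdf-spark 0.6.1-SNAPSHOT actually declares):

dependencyOverrides ++= Seq(
  // Placeholder version: use the Jena version that sansa-rdf-spark itself depends on.
  "org.apache.jena" % "jena-core" % "3.x.y",
  "org.apache.jena" % "jena-arq" % "3.x.y"
)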
I use Spark 2.2.1 (prebuilt for Scala 2.11.8) with Java 8.
I have seen this bug fixed in the SANSA Maven template, but it seems to persist with SBT.
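As far as I can tell, the Maven template handles it through the shade plugin's ServicesResourceTransformer, which merges META-INF/services files instead of dropping them, whereas my assembly settings above discard everything under META-INF, including the ServiceLoader registrations that Jena's initialization (RiotLib and friends) depends on. The sbt-assembly equivalent would presumably be something along these lines (untested sketch):

assemblyMergeStrategy in assembly := {
  // Keep ServiceLoader registrations and merge duplicate entries line by line,
  // so Jena's subsystems can still initialize from the fat jar.
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.filterDistinctLines
  // Everything else under META-INF (manifests, signature files, ...) can be dropped.
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}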