Job fails due to NoSuchMethodError exception #103
I'm not sure if it matters, but the cluster runs in standalone mode.

Also, adding …
For further reference, the solution was adding the right versions of guice and guava to the executor's classpath. I added this line to the properties object of the ingestion spec:

```json
"properties": {
  "spark.executor.extraClassPath": "guice-4.1.0.jar:guava-16.0.1.jar"
}
```

So I'm not sure if it's a bug or not, but I expected it to work out of the box, as I'm using a pretty standard version of everything.
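For context, here is roughly where that line sits in the task spec — the `index_spark` type comes from this thread, every other field is trimmed, and the `/opt/druid/lib` path is illustrative. Note that bare jar names resolve against each executor's working directory, so absolute paths that exist on every worker are safer:

```json
{
  "type": "index_spark",
  "properties": {
    "spark.executor.extraClassPath": "/opt/druid/lib/guice-4.1.0.jar:/opt/druid/lib/guava-16.0.1.jar"
  }
}
```

If the driver side hits the same conflict, the analogous `spark.driver.extraClassPath` property can be set the same way.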
Class path problems are very nasty and hard to track down, especially once you distribute stuff out to the cluster. Thank you a ton for reporting your workaround. This ticket will remain open until a more sustainable solution is available.
Sharing my solution: we run in cluster mode and provide a "spark.executor.uri". No matter what I tried, I either ended up with the wrong version of guice on the executor or the wrong protobuf. I ended up building a modified Spark dist to provide to "spark.executor.uri", with the same guice, guava, and protobuf jars as Druid.
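A sketch of that kind of rebuild, assuming a Spark 2.x binary dist (which keeps its dependencies as individual jars under `jars/`); every path and version number below is illustrative and should be matched to what your Druid actually ships:

```sh
# Unpack a stock Spark binary distribution.
tar -xzf spark-2.1.0-bin-hadoop2.7.tgz
cd spark-2.1.0-bin-hadoop2.7

# Drop the bundled guice/guava/protobuf and copy in the versions Druid ships.
rm jars/guice-*.jar jars/guava-*.jar jars/protobuf-java-*.jar
cp /opt/druid/lib/guice-4.1.0.jar \
   /opt/druid/lib/guava-16.0.1.jar \
   /opt/druid/lib/protobuf-java-*.jar jars/

# Repack and host the tarball somewhere the executors can fetch it,
# then point spark.executor.uri at that URL.
cd .. && tar -czf spark-2.1.0-bin-druid.tgz spark-2.1.0-bin-hadoop2.7
```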
@bendoerr: if you're building your own druid after 0.11.0 you can also use the Spark 2.x profile.
@drcrallen built using the Spark 2.x profile, but submitting jobs to Spark still fails with the same NoSuchMethodError.
Hi, if I add only the guava ref I get the same NoSuchMethodError exception. If I add both, then I get … Any idea to help?
I'm trying to execute the `index_spark` job on a Spark 1.6.2 cluster pre-built for Hadoop 2.4.0. Each time I submit the job, I get this exception:

…

which causes my job to fail.

I used the

```sh
java -classpath "lib/*" io.druid.cli.Main tools pull-deps -c io.druid.extensions:druid-spark-batch_2.10:0.9.2.14 -h org.apache.spark:spark-core_2.10:1.6.2
```

command to install my package. I tried to manually add the guice jar to my Spark classpath, but it didn't help. I also noticed that executing the job works with a local Spark master (`local[*]`). I read this page because I found similar errors for the `index_hadoop` job, but I couldn't really apply those tips to my case. Any help would be really appreciated.

update: I'm using Imply 2.0.0.
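To track down which jar actually wins on a classpath, a shell check like this can help (run it against whatever `lib/` directory the driver or executors see; the class below is Guice's entry point, and the same check works for Guava classes such as `com/google/common/base/Stopwatch.class`):

```sh
# Print every jar that bundles the Guice entry point;
# more than one hit means the JVM may load the wrong version.
for j in lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'com/google/inject/Guice.class' && echo "$j"
done
```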