spark-worker default_cmd Script not Working as Expected #43
Comments
- I am sorry, it is replacing the tag as it is supposed to. I was looking at
- I have created another issue and am closing this one.
- Hey, I have the same problem. Were you able to figure it out?
- @marcinjurek I had to edit the
- SDN*
I am using the spark 1.0.0 docker images. It appears to me that the script `default_cmd` in `spark-worker` is not working as it should. This script calls `prepare_spark $1` from `/root/spark_files/configure_spark.sh`. I have debugged it a lot; I have even called `configure_spark.sh` from the spark-base image using `docker run -it`.

The problem is that these scripts do not replace the `__MASTER__` tag in `core_site.xml` (in `/root/hadoop_files/`) with the argument provided. Instead, the worker expects the master to be `master`; that is, it is static.

Please, can someone help me out with this, as I need it to create clusters on different machines? If I cannot specify the master like this, then I cannot create a cluster across machines, because the worker nodes will not know about the master. It does work on a single machine, but only because I have installed the `docker-dns` service.
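For context, here is a minimal sketch of what a `prepare_spark`-style substitution is usually expected to do: replace the `__MASTER__` placeholder in the config file with the hostname passed as `$1`. The function name, placeholder, and paths mirror the issue; the `sed` invocation and the example hostname are assumptions for illustration, not the repository's actual code.

```shell
#!/bin/sh
# Hypothetical sketch of the expected behavior of prepare_spark $1:
# substitute the __MASTER__ placeholder with the given master hostname.
prepare_spark() {
    master="$1"           # master hostname passed as the first argument
    conf="core_site.xml"  # stand-in for /root/hadoop_files/core_site.xml
    # Replace every occurrence of __MASTER__ in place (GNU sed).
    sed -i "s/__MASTER__/${master}/g" "$conf"
}

# Demo: create a minimal config containing the placeholder, then substitute.
printf '<value>hdfs://__MASTER__:8020</value>\n' > core_site.xml
prepare_spark spark-master.example.com
cat core_site.xml
```

If the script behaved this way, a worker started with a master hostname argument would end up with that hostname in its config instead of the static `master` name the issue describes.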