Merged
2 changes: 1 addition & 1 deletion README.md
@@ -42,7 +42,7 @@ Here is a summary of the main steps in the script:
1. The lambda_handler function is the entry point for the Lambda function. It receives an event object and a context object as parameters.
2. The s3_bucket_script and input_script variables are used to specify the Amazon S3 bucket and object key where the Spark script is located.
3. The boto3 module is used to download the Spark script from Amazon S3 to a temporary file on the Lambda function's file system.
4. The os.environ dictionary is used to set the PYSPARK_SUBMIT_ARGS environment variable, which is required by the Spark application to run.
4. The os.environ dictionary is used to expose any arguments passed in the Lambda event as environment variables for the Spark application.
5. The subprocess.run method is used to execute the spark-submit command, passing in the path to the temporary file where the Spark script was downloaded. The event payload received by the Lambda is passed on to the Spark application via the event argument.
Overall, this script enables you to execute a Spark script in AWS Lambda by downloading it from an S3 bucket and running it using the spark-submit command. The script can be configured by setting environment variables, such as the PYSPARK_SUBMIT_ARGS variable, to control the behavior of the Spark application. </p>
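The forwarding flow described above can be sketched with a small helper (the function name `build_spark_submit_cmd` is hypothetical and for illustration only; the real handler shells out directly via `subprocess.run`):

```python
import json
import os


def build_spark_submit_cmd(script_path: str, event: dict) -> list:
    """Illustrative sketch: export each event key as an environment variable
    and forward the full payload as JSON via the --event flag."""
    for key, value in event.items():
        os.environ[key] = str(value)  # environment values must be strings
    return ["spark-submit", script_path, "--event", json.dumps(event)]


cmd = build_spark_submit_cmd(
    "/tmp/spark_script.py",
    {"INPUT_PATH": "s3://bucket/in", "OUTPUT_PATH": "s3://bucket/out"},
)
```

The Spark job can then read its configuration either from the environment or from the JSON payload, whichever is more convenient.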

13 changes: 8 additions & 5 deletions sparkLambdaHandler.py
@@ -34,14 +34,17 @@ def spark_submit(s3_bucket_script: str,input_script: str, event: dict)-> None:
Submits a local Spark script using spark-submit.
"""
# Set the environment variables for the Spark application
pyspark_submit_args = event.get('PYSPARK_SUBMIT_ARGS', '')
# Source input and output if available in event
input_path = event.get('INPUT_PATH','')
output_path = event.get('OUTPUT_PATH', '')
# pyspark_submit_args = event.get('PYSPARK_SUBMIT_ARGS', '')
# # Source input and output if available in event
# input_path = event.get('INPUT_PATH','')
# output_path = event.get('OUTPUT_PATH', '')

    for key, value in event.items():
        os.environ[key] = str(value)  # environment values must be strings
    # Run the spark-submit command on the local copy of the script
try:
logger.info(f'Spark-Submitting the Spark script {input_script} from {s3_bucket_script}')
subprocess.run(["spark-submit", "/tmp/spark_script.py", "--event", json.dumps(event)], check=True)
subprocess.run(["spark-submit", "/tmp/spark_script.py", "--event", json.dumps(event)], check=True, env=os.environ)
except Exception as e :
logger.error(f'Error Spark-Submit with exception: {e}')
raise e
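On the receiving side, the downloaded Spark script can recover the payload from the `--event` flag that spark-submit forwards. A minimal sketch, assuming the script parses its arguments with `argparse` (the parsing code below is illustrative, not part of this PR):

```python
import argparse
import json

# Parse the --event argument passed by the Lambda handler's spark-submit call.
parser = argparse.ArgumentParser()
parser.add_argument("--event", default="{}")

# In the real script this would be parser.parse_args(); an explicit
# argument list is used here so the sketch is self-contained.
args = parser.parse_args(["--event", json.dumps({"INPUT_PATH": "s3://bucket/in"})])
event = json.loads(args.event)
```

Because the handler also copies every event key into `os.environ` before calling spark-submit, the same values are reachable via `os.environ.get("INPUT_PATH")` as an alternative.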