An open data engineering challenge based on real location data. Each entry in the dataset is a "location event", and the idfa field uniquely identifies the user.
The expectation for this exercise is that you use Spark 2.x with Scala, Python, or Java. You can use the RDD or DataFrame APIs as you see fit, but please be ready to explain your choices. You must do your work over the entire dataset.
Instructions:
- Fork this repo under your own ID for our review.
- Download the dataset here: https://s3.amazonaws.com/freckle-dataeng-challenge/location-data-sample.tar.gz
- Answer: What are the max, min, average, and standard deviation of the number of location events per IDFA? We define a location event to be one record in the sample file. (A sketch of this aggregation follows the list.)
- Produce geohashes for all coordinates in a new RDD or DataFrame.
- Using the geohashes, determine whether there are clusters of people at any point in this dataset. If so, how many people are in each cluster, and how close together are they? (See the geohash sketch after this list.)
- Write any findings to a local Parquet file for later use (see the final sketch after this list).
- Bonus: Conduct any additional analysis that might give a hint about the behaviour of the IDFAs in the dataset.
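
The following is a minimal PySpark sketch of the per-IDFA statistics, assuming the sample archive has been extracted locally and is readable as CSV with an idfa column; the path, input format, and column name are assumptions and should be adjusted to the actual schema.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("location-events").getOrCreate()

# Assumed path and format; adjust to match the extracted sample files.
events = spark.read.csv("location-data-sample/*", header=True, inferSchema=True)

# One record = one location event, so counting rows per idfa gives
# the number of location events per IDFA.
per_idfa = events.groupBy("idfa").count()

# Summarise the distribution of per-IDFA event counts.
per_idfa.agg(
    F.max("count").alias("max_events"),
    F.min("count").alias("min_events"),
    F.avg("count").alias("avg_events"),
    F.stddev("count").alias("stddev_events"),
).show()
```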
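
For the geohash and clustering steps, one possible sketch reuses the events DataFrame above together with the third-party pygeohash package (an assumed dependency available on the executors); the lat/lon column names and the precision-7 cell size are likewise assumptions.

```python
import pygeohash
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Geohash precision 7 gives cells of roughly 150 m x 150 m, so two IDFAs
# sharing a cell are within a couple of hundred metres of each other.
geohash_udf = F.udf(
    lambda lat, lon: pygeohash.encode(lat, lon, precision=7)
    if lat is not None and lon is not None else None,
    StringType(),
)

with_geohash = events.withColumn("geohash", geohash_udf(F.col("lat"), F.col("lon")))

# Treat any geohash cell containing more than one distinct IDFA as a
# candidate cluster; the cell size bounds how close the people are.
clusters = (
    with_geohash
    .groupBy("geohash")
    .agg(F.countDistinct("idfa").alias("people"))
    .filter(F.col("people") > 1)
    .orderBy(F.col("people").desc())
)
clusters.show(20)
```

Grouping on a fixed-precision geohash is a coarse but cheap proxy for spatial clustering; a follow-up pass (for example, merging neighbouring cells or running a density-based clustering step on the candidate cells) could refine it.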
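
Finally, the findings can be persisted locally as Parquet; the output paths below are assumptions.

```python
# Write both the enriched events and the cluster summary for later use.
with_geohash.write.mode("overwrite").parquet("output/events_with_geohash.parquet")
clusters.write.mode("overwrite").parquet("output/cluster_findings.parquet")
```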
Please complete as much of the assignment as you have time for. How much time you had available for the challenge and your level of experience will both be considered. Have some fun with it!