First of all, clone this repository:

```bash
git clone git@github.com:getlinkfire/bigdata-devops-assignment.git
```
Now, given this Docker Compose file, stand up a Druid cluster.
The compose file will launch:
- 1 zookeeper node
- 1 MySQL database
- 1 Kafka message broker
and the following Druid services:
- 1 broker
- 1 middlemanager
- 1 historical
- 1 coordinator/overlord
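Bringing it up should be a single command (a minimal sketch, assuming Docker and Docker Compose are installed and you are in the repository root, where the compose file lives):

```bash
# From the repository root, where the compose file lives
docker-compose up -d

# Follow the logs while everything starts; the Druid services take a while
docker-compose logs -f
```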
This will take a couple of minutes, but once it is up, you will have:
- The Druid cluster dashboard on: http://localhost:3001/#/
- The Druid indexing console on: http://localhost:3001/console.html
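To check that everything actually came up, something along these lines should do (a sketch, assuming port 3001 maps straight through to the coordinator/overlord's HTTP port):

```bash
# All containers should be reported as "Up"
docker-compose ps

# Druid nodes expose a /status endpoint with version and memory info
curl -s http://localhost:3001/status
```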
Now that you have a running cluster, ingest the data found in data/wikiticker-2015-09-12-sampled.json.gz.
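One way to do it is a native batch `index` task POSTed to the overlord, much like the quickstart does. Below is a rough sketch, not a ready-made answer: it assumes the sample file is mounted at `/data` inside the middlemanager container and that port 3001 proxies the overlord API, so adjust the path, port, dimensions and metrics to match what you actually find in the setup.

```bash
# Write a minimal batch ingestion spec for the wikiticker sample
# (baseDir and the dimension list are assumptions -- adapt them)
cat > wikiticker-index.json <<'EOF'
{
  "type": "index",
  "spec": {
    "dataSchema": {
      "dataSource": "wikiticker",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": { "column": "time", "format": "iso" },
          "dimensionsSpec": {
            "dimensions": ["channel", "cityName", "countryName", "isRobot", "namespace", "page", "user"]
          }
        }
      },
      "metricsSpec": [
        { "type": "count", "name": "count" },
        { "type": "longSum", "name": "added", "fieldName": "added" },
        { "type": "longSum", "name": "deleted", "fieldName": "deleted" }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "day",
        "queryGranularity": "none",
        "intervals": ["2015-09-12/2015-09-13"]
      }
    },
    "ioConfig": {
      "type": "index",
      "firehose": {
        "type": "local",
        "baseDir": "/data",
        "filter": "wikiticker-2015-09-12-sampled.json.gz"
      }
    },
    "tuningConfig": { "type": "index" }
  }
}
EOF

# Submit the task to the overlord; its progress shows up in the indexing console
curl -s -X POST -H 'Content-Type: application/json' \
  -d @wikiticker-index.json \
  http://localhost:3001/druid/indexer/v1/task
```

The overlord answers with the task id as JSON, and you can follow the task in the indexing console mentioned above.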
Once your ingestion job has successfully executed, perform a simple query on the freshly created datasource.
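A native topN over the new datasource makes a reasonable smoke test. A sketch, assuming the broker is reachable on port 8082 of localhost (check the port mapping in the compose file):

```bash
# Top 10 edited pages in the sample day, by row count
cat > wikiticker-top-pages.json <<'EOF'
{
  "queryType": "topN",
  "dataSource": "wikiticker",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "all",
  "dimension": "page",
  "metric": "count",
  "threshold": 10,
  "aggregations": [ { "type": "longSum", "name": "count", "fieldName": "count" } ]
}
EOF

# Queries go to the broker's /druid/v2 endpoint
curl -s -X POST -H 'Content-Type: application/json' \
  -d @wikiticker-top-pages.json \
  'http://localhost:8082/druid/v2/?pretty'
```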
The dataset is kindly borrowed from the Druid quickstart tutorial, so you may find some inspiration there.
Read a little about the Druid indexing service if you like ;)
Piece of cake, wasn't it?
Alright. While the above should get us off the ground talking about troubleshooting skills and JVM heap configuration (hint), if you feel courageous, go ahead and set up realtime ingestion through Kafka into the wikiticker datasource, and query your data back out again with a select query.
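If you go down that road, the Kafka indexing service is the most direct route: POST a supervisor spec to the overlord and it keeps realtime tasks reading from the topic. The sketch below assumes the `druid-kafka-indexing-service` extension is loaded, that the Kafka service is named `kafka` and reachable at `kafka:9092` from inside the cluster, and that the topic is called `wikiticker`; the console-producer command also depends on which Kafka image the compose file uses.

```bash
# Supervisor spec: topic name, broker address and dimensions are assumptions
cat > wikiticker-kafka-supervisor.json <<'EOF'
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "wikiticker",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": { "column": "time", "format": "iso" },
        "dimensionsSpec": { "dimensions": ["channel", "page", "user"] }
      }
    },
    "metricsSpec": [ { "type": "count", "name": "count" } ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "hour",
      "queryGranularity": "none"
    }
  },
  "ioConfig": {
    "topic": "wikiticker",
    "consumerProperties": { "bootstrap.servers": "kafka:9092" },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT10M"
  },
  "tuningConfig": { "type": "kafka" }
}
EOF

curl -s -X POST -H 'Content-Type: application/json' \
  -d @wikiticker-kafka-supervisor.json \
  http://localhost:3001/druid/indexer/v1/supervisor

# Pipe a few sample records into the topic
# (the producer command name/path depends on the Kafka image)
zcat data/wikiticker-2015-09-12-sampled.json.gz | head -n 100 | \
  docker-compose exec -T kafka kafka-console-producer \
    --broker-list localhost:9092 --topic wikiticker

# ...and read them back with the select query the assignment asks for
curl -s -X POST -H 'Content-Type: application/json' -d '{
  "queryType": "select",
  "dataSource": "wikiticker",
  "intervals": ["2015-09-12/2015-09-14"],
  "granularity": "all",
  "pagingSpec": { "pagingIdentifiers": {}, "threshold": 5 }
}' 'http://localhost:8082/druid/v2/?pretty'
```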
Also, the attentive ones will notice that we have configured Druid to emit metrics into Kafka. Now create a new Druid datasource and ingest those metrics back into Druid, so that we can query them. You may find a bit of inspiration in this blog post.
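The kafka-emitter writes each metric as a plain JSON record with fields such as `timestamp`, `service`, `host`, `metric` and `value`, so a second Kafka supervisor pointed at the metrics topic does the job. A sketch, assuming the topic is called `druid-metrics` (check `druid.emitter.kafka.metric.topic` in the Druid configuration for the real name):

```bash
# Supervisor spec for the metrics topic; topic name and broker address are assumptions
cat > druid-metrics-supervisor.json <<'EOF'
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "druid-metrics",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": ["service", "host", "metric"] }
      }
    },
    "metricsSpec": [
      { "type": "count", "name": "count" },
      { "type": "doubleSum", "name": "value", "fieldName": "value" }
    ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "hour",
      "queryGranularity": "none"
    }
  },
  "ioConfig": {
    "topic": "druid-metrics",
    "consumerProperties": { "bootstrap.servers": "kafka:9092" },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT10M"
  },
  "tuningConfig": { "type": "kafka" }
}
EOF

curl -s -X POST -H 'Content-Type: application/json' \
  -d @druid-metrics-supervisor.json \
  http://localhost:3001/druid/indexer/v1/supervisor
```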
Well, a good puzzle is hard to resist, isn't it?
If you found your way through our little built-in quirks and gotchas out of pure curiosity, we would absolutely love to hear from you. Please write us an email at devops@linkfire.com and tell us what we misconfigured and what you did to make this setup shine.