10 Minute Guide
This guide helps you get Blueflood running and make some API calls to it so you can see it working.
Quickly start a Blueflood instance that you can interact with via API. This just lets you get a feel for how it works. Note that this uses a Docker Hub Blueflood image that hasn't been updated in a while. Maintainers needed!
git clone https://github.com/rackerlabs/blueflood.git
docker-compose -f blueflood/contrib/blueflood-docker-compose/docker-compose.yml up
You can stop the docker containers by pressing Ctrl+C in the terminal where they're running, or by sending the `down` command in the same way you sent the `up` command. Delete them with the `rm` command, like
docker-compose -f blueflood/contrib/blueflood-docker-compose/docker-compose.yml rm
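Once the containers are up, you can check that Blueflood is answering. This is a quick sketch that assumes the compose file maps the default query port (20000) to localhost; a fresh instance should return an empty list:

curl -i "http://localhost:20000/v2.0/100/metrics/search?query=*"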
Not quite as simple as docker-compose, but building from source is the surest way to get Blueflood running, and it lets you easily change configuration or code and restart it.
Clone the project, and build it. Blueflood currently requires JDK 8 to build.
git clone https://github.com/rackerlabs/blueflood.git
cd blueflood
mvn package -P skip-unit-tests,skip-integration-tests
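If the build fails right away, the most likely culprit is the JDK version. A quick check (the exact version string varies by vendor):

java -version    # should report something like "1.8.0_xxx"
mvn -version     # confirms which JDK Maven is actually using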
Blueflood requires a Cassandra database. In another terminal, switch to the `blueflood` directory, and create a temporary database with docker.
docker run --name bf-cass -p 9042:9042 -v $(pwd)/src/cassandra/cli/load.cdl:/load.cdl \
-e 'JVM_EXTRA_OPTS=-Dcassandra.skip_wait_for_gossip_to_settle=0' cassandra:4
Cassandra takes a little time to start the first time. Wait for the line `Starting listening for CQL clients on /0.0.0.0:9042` to show up. This means Cassandra is ready to accept connections. Then create the initial schema.
docker exec bf-cass cqlsh -f /load.cdl
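To confirm the schema was created, you can list the keyspaces; you should see the keyspace defined in `load.cdl` alongside Cassandra's system keyspaces:

docker exec bf-cass cqlsh -e "DESCRIBE KEYSPACES"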
Blueflood also requires Elasticsearch. In yet another terminal, switch to the `blueflood` directory, and start an Elasticsearch node.
docker run --name bf-es -p "9200:9200" -p "9300:9300" elasticsearch:1.7
Once you see the "started" log line from Elasticsearch, initialize it with
blueflood-elasticsearch/src/main/resources/init-es.sh
You should see `{"acknowledged":true}` printed out several times.
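You can also list the indices the script created, assuming Elasticsearch is listening on localhost:9200 as the port mapping above suggests:

curl "http://localhost:9200/_cat/indices?v"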
Now you can start Blueflood.
bin/blueflood-start.sh
After a few moments, you should see `All blueflood services started`.
Ctrl+C will stop Blueflood. Stop Cassandra and Elasticsearch with `docker stop bf-cass` and `docker stop bf-es`. You can start the same containers again later with `docker start <name>`. Delete them with `docker rm <name>`.
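If you just want everything gone in one step, force-remove both containers. Note that this deletes their data, so your schema and any ingested metrics go with them:

docker rm -f bf-cass bf-es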
These are ways of running Blueflood that were done in the past and may or may not work anymore. Maintainers needed!
There's a Blueflood Vagrant box, but it hasn't been updated in some time and needs maintenance. There's a Packer build in the Blueflood repo that's likely the source of the box.
mkdir blueflood_demo; cd blueflood_demo
vagrant init blueflood/blueflood
vagrant up
Now that you have Blueflood running, send it some metrics.
now="$(date +%s)000"
before=$((now-10000))
after=$((now+10000))
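A note on the arithmetic: Blueflood timestamps are in epoch milliseconds. `date +%s` gives seconds, so appending `000` converts to milliseconds, and `before`/`after` bracket a ±10-second window around now for the range queries below. You can sanity-check the values:

echo "now=$now before=$before after=$after"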
curl -i -H 'Content-Type: application/json' 'http://localhost:19000/v2.0/100/ingest' -d "
[
{
\"collectionTime\": $((now-1000)),
\"ttlInSeconds\": 172800,
\"metricValue\": 1337,
\"metricName\": \"example.metric.one\"
},
{
\"collectionTime\": $((now+1000)),
\"ttlInSeconds\": 172800,
\"metricValue\": 1338,
\"metricName\": \"example.metric.one\"
},
{
\"collectionTime\": $((now+1000)),
\"ttlInSeconds\": 172800,
\"metricValue\": 90210,
\"metricName\": \"example.metric.two\"
}
]"
You can query the metrics you just ingested like this:
curl -i "http://localhost:20000/v2.0/100/views/example.metric.one?from=$before&to=$after&resolution=full"
curl -i "http://localhost:20000/v2.0/100/views/example.metric.two?from=$before&to=$after&resolution=full"
You should see some JSON returned with the values for the metrics you sent to the ingestion endpoint. The curl for `example.metric.one`, for example, should return this:
HTTP/1.1 200 OK
Content-Length: 321
{
"unit": "unknown",
"values": [
{
"numPoints": 1,
"timestamp": <a recent time value in milliseconds>,
"average": 1337
},
{
"numPoints": 1,
"timestamp": <a more recent time value in milliseconds>,
"average": 1338
}
],
"metadata": {
"limit": null,
"next_href": null,
"count": 2,
"marker": null
}
}
Next, try the search endpoint:
curl -X GET "http://localhost:20000/v2.0/100/metrics/search?query=*"
This should return the list of all the metrics you ingested. There should be two so far:
[{"metric":"example.metric.one"},{"metric":"example.metric.two"}]
You've just sent a microscopically small number of metrics to Blueflood! This is good, but you're here because you want more.
There are a couple different paths you could take now:
- Play around with more and different requests to the server. You can find more information on our Query and Ingest In Depth page.
- Start sending metrics to Blueflood from your existing infrastructure.
- Set up Grafana to visualize your metrics.
- Set up a local Blueflood development environment. Check out the Installing from Source page.
- Set up a multi-server Blueflood environment. Something with Cassandra and Elasticsearch clusters, perhaps, so you can run this thing at scale.
- Debug Blueflood using IntelliJ.
You can find all configuration options and their default values in the implementations of `ConfigDefaults`. Configuration values can be set in the given `blueflood.config` or as a Java system property on the command line: `-DCONFIG_OPTION=NEW_VALUE`
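For example, to point Blueflood at a Cassandra node other than localhost, you could override `CASSANDRA_HOSTS`, one of the options defined via `ConfigDefaults` in the core module. This is a sketch assuming that option name and a made-up host address:

# In your blueflood.config file:
CASSANDRA_HOSTS=10.0.0.5:9042

# Or as a system property when launching the JVM yourself:
java -DCASSANDRA_HOSTS=10.0.0.5:9042 ...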