# Sparkline SQL extensions
## Clear Druid Cache
Druid Historical Server segment assignments, and other metadata about the Druid segments and servers, are cached in the Driver (thriftserver or spark-shell). This metadata is exposed via the Runtime Views. Any changes to the historical servers in the Druid cluster are automatically picked up and the caches are rebuilt. Use this command to explicitly clear the cached metadata; the caches will be rebuilt the next time a query is executed.
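A minimal sketch of issuing the command from the spark-shell, assuming a `sqlContext` that already has the Sparklinedata extensions registered; the table name in the follow-up query is illustrative:

```scala
// Run from the spark-shell (the Driver); `sqlContext` is assumed to have the
// Sparklinedata extensions registered.

// Explicitly drop the cached Druid segment/server metadata.
sqlContext.sql("clear druid cache")

// The next query against a Druid-backed table re-reads the metadata from the
// Druid cluster and rebuilds the caches. The table name here is illustrative.
sqlContext.sql("select count(*) from orderLineItemPartSupplier").show()
```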
## Execute a Druid Query against a Druid DataSource
Use this command to execute an arbitrary Druid query against a Druid DataSource. Note that the JSON must include 'jsonClass' fields so that it can be converted to a Druid Query Spec. The easiest way to get started is to cut-and-paste a query from the d$druidquery view.
```
on druiddatasource orderLineItemPartSupplier_historical execute query {
  "jsonClass" : "GroupByQuerySpec",
  "queryType" : "groupBy",
  "dataSource" : "tpch",
  "dimensions" : [ {
    "jsonClass" : "ExtractionDimensionSpec",
    "type" : "extraction",
    "dimension" : "__time",
    "outputName" : "alias-9",
    "extractionFn" : {
      "jsonClass" : "TimeFormatExtractionFunctionSpec",
      "type" : "timeFormat",
      "format" : "YYYY-MM-dd 00:00:00",
      "timeZone" : "UTC",
      "locale" : "en_US"
    }
  } ],
  "granularity" : "all",
  "aggregations" : [ {
    "jsonClass" : "FunctionAggregationSpec",
    "type" : "doubleSum",
    "name" : "alias-10",
    "fieldName" : "l_extendedprice"
  } ],
  "intervals" : [ "1993-01-01T00:00:00.000Z/1997-12-31T00:00:01.000Z" ]
}
```
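The same statement can also be submitted programmatically; a minimal sketch, assuming the spark-shell `sqlContext` from above, with the full query spec pasted in place of the elided JSON:

```scala
// A sketch of submitting the statement from the spark-shell; `sqlContext`
// is assumed to have the Sparklinedata extensions registered.
// Paste the full query spec shown above in place of the elided JSON.
val druidQuerySpec =
  """{ "jsonClass" : "GroupByQuerySpec", ...rest of the spec shown above... }"""

val df = sqlContext.sql(
  s"on druiddatasource orderLineItemPartSupplier_historical execute query $druidQuerySpec")

// The result comes back as a DataFrame; the column names should follow the
// output names in the query spec ("alias-9" and "alias-10" in this example).
df.show()
```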