This repository has been archived by the owner on Mar 30, 2021. It is now read-only.

Sparkline SQLContext Options

hbutani edited this page Sep 2, 2016 · 1 revision

All options begin with the prefix `spark.sparklinedata.druid` and can be set on the SparkContext.

| Name | Description | Default (if any) |
| --- | --- | --- |
| `cache.tables.tocheck` | A comma-separated list of table names that should be checked to see if they are cached. For star schemas with associated Druid indexes, we attempt to rewrite the query to Druid even if the tables are cached; to do this we need to convert an InMemoryRelation operator along with its underlying logical plan. This value restricts the check to the listed tables. | No tables are checked. |
| `debug.transformations` | When set to `true`, each plan transformation is logged. | `false` |
| `selectquery.pagesize` | Number of rows fetched on each invocation of a Druid Select query. | `10000` |
| `max.connections` | Maximum number of HTTP connections to the Druid cluster(s). | `100` |
| `max.connections.per.route` | Maximum number of HTTP connections to each server in the Druid cluster. | `20` |
| `option....` | Use this to override Druid DataSource options; see Druid-Datasource-Options for which ones can be overridden this way. | none |
| `querycostmodel…` | See Druid Cost Model. | see cost model page |
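Since these options are set on the SparkContext, a minimal sketch of configuring a few of them programmatically might look like the following. This assumes the Spark 1.x-era `SparkConf`/`SQLContext` API that was current for this project; the app name and the option values shown are illustrative placeholders, not recommendations.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Illustrative values only; every key carries the
// spark.sparklinedata.druid prefix described above.
val conf = new SparkConf()
  .setAppName("sparkline-example") // placeholder app name
  .set("spark.sparklinedata.druid.debug.transformations", "true")
  .set("spark.sparklinedata.druid.selectquery.pagesize", "5000")
  .set("spark.sparklinedata.druid.max.connections", "200")

val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
```

The same keys can also be passed on the command line, e.g. `spark-shell --conf spark.sparklinedata.druid.debug.transformations=true`.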