JDBC Connector


This page covers the JDBC connector properties used in dataSource configuration and the nuances associated with the JDBC connector.

JDBC Connector Properties

  • url [Required].
    • The JDBC URL used to connect to the target database instance endpoint.
  • driver [Required].
    • The JDBC driver class used to connect to the target database instance. Only org.apache.hive.jdbc.HiveDriver is supported.
  • username [Optional].
    • The username for basic authentication.
  • password [Optional].
    • The password for basic authentication.

No Auth

[{
    "name" : "myspark",
    "connector": "jdbc",
    "properties" : {
        "url" : "jdbc:hive2://localhost:10000/default",
        "driver" : "org.apache.hive.jdbc.HiveDriver"
    }
}]

Basic Auth

[{
    "name" : "myspark",
    "connector": "jdbc",
    "properties" : {
        "url" : "jdbc:hive2://localhost:10000/default",
        "driver" : "org.apache.hive.jdbc.HiveDriver",
        "username" : "username",
        "password" : "password"
    }
}]
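For reference, these properties map directly onto the standard JDBC API. The sketch below is illustrative only (it is not the connector's actual implementation) and shows how a plain Java client would use the same url, driver, username, and password values to open a connection and run a query; it assumes the hive-jdbc driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcConnectionSketch {
    public static void main(String[] args) throws Exception {
        // "driver" property: only org.apache.hive.jdbc.HiveDriver is supported
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        String url = "jdbc:hive2://localhost:10000/default"; // "url" property
        String username = "username";                        // "username" property (optional)
        String password = "password";                        // "password" property (optional)

        try (Connection conn = DriverManager.getConnection(url, username, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}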

JDBC Table Function

The JDBC datasource can execute SQL directly against the target database. The SQL must be supported by the target database.

Example:

os> source = myspark.jdbc('SHOW DATABASES');
fetched rows / total rows = 1/1
+-------------+
| namespace   |
|-------------|
| default     |
+-------------+
  • PPL commands other than source are not supported. For example, if a user runs source = myspark.jdbc('SHOW DATABASES') | fields namespace, the query engine will throw an exception.