Metabase Athena Driver

Note: As of v0.45, Metabase supports Amazon Athena as an official data source! πŸŽ‰
This driver is no longer actively maintained. Future issues can be raised on the Metabase forum or filed as a detailed bug report.



Installation

Beginning with Metabase 0.32, drivers must be stored in a plugins directory in the same directory where metabase.jar is, or you can specify the directory by setting the environment variable MB_PLUGINS_DIR. There are a few options to get up and running with a custom driver.
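
For example, a minimal sketch of pointing Metabase at a plugins directory elsewhere on disk (the /opt paths below are placeholders, not anything this repository requires):

# Assumes metabase.jar lives in /opt/metabase and the driver jar was copied to /opt/metabase-plugins
MB_PLUGINS_DIR=/opt/metabase-plugins java -jar /opt/metabase/metabase.jar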

Docker

This repository has an example Dockerfile you can use to build the Amazon Athena Metabase driver and run the most recent supported version of Metabase:

git clone https://github.com/dacort/metabase-athena-driver.git
cd metabase-athena-driver
docker build -t metabase/athena .
docker run --name metabase-athena -p 3000:3000 metabase/athena

Then open http://localhost:3000 and skip down to the Configuring section below.

Download Metabase Jar and Run

  1. Download a fairly recent Metabase binary release (jar file) from the Metabase distribution page.
  2. Download the Athena driver jar from this repository's "Releases" page
  3. Create a directory and copy the metabase.jar to it.
  4. In that directory create a sub-directory called plugins.
  5. Copy the Athena driver jar to the plugins directory.
  6. Make sure you are in the directory where your metabase.jar lives.
  7. Run java -jar metabase.jar.
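
A condensed shell sketch of those steps (the ~/Downloads paths and the metabase-athena directory name are placeholders; use wherever you actually saved the jars):

mkdir -p metabase-athena/plugins
cp ~/Downloads/metabase.jar metabase-athena/
cp ~/Downloads/athena.metabase-driver.jar metabase-athena/plugins/
cd metabase-athena
java -jar metabase.jar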

In either case, you should see a message on startup similar to:

04-15 06:14:08 DEBUG plugins.lazy-loaded-driver :: Registering lazy loading driver :athena...
04-15 06:14:08 INFO driver.impl :: Registered driver :athena (parents: [:sql-jdbc]) 🚚

Configuring

Once you've started up Metabase, go to add a database and select "Amazon Athena".

You'll need to provide the AWS region, an access key and secret key, and an S3 bucket and prefix where query results will be written.

Please note:

  • The provided bucket must be in the same region you specify.
  • If you do not provide an access key, the default credentials chain will be used.
  • The initial sync can take some time depending on how many databases and tables you have.
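
For reference, a hypothetical set of connection values (the region, key, and bucket below are made-up placeholders, and the field labels are approximate):

Region:               us-east-1
Access key:           AKIAEXAMPLEKEY
Secret key:           <your secret key>
S3 staging directory: s3://example-athena-results/metabase/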

If you need an example IAM policy that provides read-only access for your users, check out the Example IAM Policy below.

You can provide additional options if necessary. For example, to disable result set streaming and enable TRACE-level debugging, use UseResultsetStreaming=0;LogLevel=6.

Result set streaming is a performance optimization that streams results from Athena rather than paginating through them. However, it requires outbound access to TCP port 444, which not all organizations allow.

Other options can be found in the "Driver Configuration Options" section of the Athena JDBC Driver Installation and Configuration Guide.

Contributing

Prerequisites

Build from source

The entire jar can now be built from the included Dockerfile.

  1. Build the project and copy the jar from the export stage
docker build --output jars --target stg_export .

You should now have an athena.metabase-driver.jar file in the jars/ directory.

  2. Download a fairly recent Metabase binary release (jar file) from the Metabase distribution page.

  3. Let's assume we downloaded metabase.jar to ~/metabae/ and built the project above. Copy the built jar to the Metabase plugins directory and run Metabase from there!

    TARGET_DIR=~/metabae
    mkdir ${TARGET_DIR}/plugins/
    cp jars/athena.metabase-driver.jar ${TARGET_DIR}/plugins/
    cd ${TARGET_DIR}/
    java -jar metabase.jar

You should see a message on startup similar to:

2019-05-07 23:27:32 INFO plugins.lazy-loaded-driver :: Registering lazy loading driver :athena...
2019-05-07 23:27:32 INFO metabase.driver :: Registered driver :athena (parents: #{:sql-jdbc}) 🚚

Testing

There are two different sets of tests in the project.

  1. Unit tests, located in the test_unit/ directory
  2. Integration tests, located in the standard test/ directory

They're split out because the integration tests require linking the driver into the core Metabase code and running the full suite of tests there. I wanted some lightweight unit tests that could be run without that overhead, so those live in the test_unit/ directory.

Running the tests requires the Metabase source for the version you're building against. To make this easier, you can also run the tests from the Dockerfile.

docker build -t metabase/athena-test --target stg_test .
docker run --rm --name mb-test metabase/athena-test

Resources

Example IAM Policy

This policy provides read-only access. Note that you need to list any buckets you want users to be able to query, as well as the S3 bucket specified in the configuration where results are written.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Athena",
      "Effect": "Allow",
      "Action": [
        "athena:BatchGetNamedQuery",
        "athena:BatchGetQueryExecution",
        "athena:GetNamedQuery",
        "athena:GetQueryExecution",
        "athena:GetQueryResults",
        "athena:GetQueryResultsStream",
        "athena:GetWorkGroup",
        "athena:ListDatabases",
        "athena:ListDataCatalogs",
        "athena:ListNamedQueries",
        "athena:ListQueryExecutions",
        "athena:ListTagsForResource",
        "athena:ListWorkGroups",
        "athena:ListTableMetadata",
        "athena:StartQueryExecution",
        "athena:StopQueryExecution",
        "athena:CreatePreparedStatement",
        "athena:DeletePreparedStatement",
        "athena:GetPreparedStatement"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Glue",
      "Effect": "Allow",
      "Action": [
        "glue:BatchGetPartition",
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:GetPartition",
        "glue:GetPartitions",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetTableVersion",
        "glue:GetTableVersions"
      ],
      "Resource": "*"
    },
    {
      "Sid": "S3ReadAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2",
        "arn:aws:s3:::bucket2/*"
      ]
    },
    {
      "Sid": "AthenaResultsBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": ["arn:aws:s3:::bucket2", "arn:aws:s3:::bucket2/*"]
    }
  ]
}

If your users need to create tables for whatever reason, they will need additional AWS Glue permissions. Here is an example policy granting them. Note that the Resource is *, so this grants delete and update permissions on any table.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "glue:BatchCreatePartition",
        "glue:UpdateDatabase",
        "glue:DeleteDatabase",
        "glue:CreateTable",
        "glue:CreateDatabase",
        "glue:UpdateTable",
        "glue:BatchDeletePartition",
        "glue:BatchDeleteTable",
        "glue:DeleteTable",
        "glue:CreatePartition",
        "glue:DeletePartition",
        "glue:UpdatePartition"
      ],
      "Resource": "*"
    }
  ]
}

Known Issues

  • Cannot specify a single database to sync
  • Only native SQL queries are supported
    • Native SQL Queries must not end with a semi-colon (;)
    • Basic aggregations seem to work in the query builder
    • Parameterized queries are not supported
  • Sometimes, the initial database verification can time out
    • If this happens, configure a higher timeout value with the MB_DB_CONNECTION_TIMEOUT_MS environment variable
  • Heavily nested fields can result in a StackOverflowError
    • If this happens, increase the -Xss JVM parameter
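
As a sketch, both of those workarounds can be applied when launching Metabase; the 60-second timeout and 4 MB stack size below are illustrative values, not recommendations:

MB_DB_CONNECTION_TIMEOUT_MS=60000 java -Xss4m -jar metabase.jar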

Updated Dockerfile

Test image

docker build -t metabase/athena-test --target stg_test .
docker run -it --rm --name mb-test metabase/athena-test

Copy jars

docker build --output jars --target stg_export .

Run Metabase

docker build -t metabase/athena .
docker run --rm --name metabase-athena -p 3000:3000 metabase/athena

If you have an existing Metabase database you'd like to reuse, run the following command.

docker run --rm  -p 3000:3000 \
  -v ~/metabase-data:/metabase-data \
  -e MB_DB_FILE=/metabase-data/metabase.db \
  --name metabase-athena metabase/athena