# SQL DB plugin for Nextflow

This plugin provides support for interacting with SQL databases in Nextflow scripts.

The following databases are currently supported:

* [AWS Athena](https://aws.amazon.com/athena/) (Setup guide [here](docs/aws-athena.md))
* [DuckDB](https://duckdb.org/)
* [Google BigQuery](https://cloud.google.com/bigquery) (Setup guide [here](docs/google-bigquery.md))
* [H2](https://www.h2database.com)
* [MySQL](https://www.mysql.com/)
* [MariaDB](https://mariadb.org/)
* [PostgreSQL](https://www.postgresql.org/)
* [SQLite](https://www.sqlite.org/index.html)

NOTE: THIS IS A PREVIEW TECHNOLOGY, FEATURES AND CONFIGURATION SETTINGS CAN CHANGE IN FUTURE RELEASES.

This repository only holds plugin artefacts. The source code is available in the [Nextflow repository](https://github.com/nextflow-io/nextflow/tree/master/plugins/nf-sqldb).
## Getting started

This plugin requires Nextflow `22.08.1-edge` or later. You can enable the plugin by adding the following snippet to your `nextflow.config` file:

```groovy
plugins {
    id 'nf-sqldb'
}
```

Support for BigQuery is provided in a separate plugin, `nf-bigquery`:

```groovy
plugins {
    id 'nf-bigquery'
}
```

## Configuration

You can configure any number of databases under the `sql.db` configuration scope. For example:

```groovy
sql {
    db {
        foo {
            url = 'jdbc:mysql://localhost:3306/demo'
            user = 'my-user'
            password = 'my-password'
        }
    }
}
```

The above example defines a database named `foo` that connects to a MySQL server running locally at port 3306,
using the `demo` schema, with `my-user` and `my-password` as credentials.

The following options are available:

`sql.db.'<DB-NAME>'.url`
: The database connection URL based on the [JDBC standard](https://docs.oracle.com/javase/tutorial/jdbc/basics/connecting.html#db_connection_url).

`sql.db.'<DB-NAME>'.driver`
: The database driver class name (optional).

`sql.db.'<DB-NAME>'.user`
: The database connection user name.

`sql.db.'<DB-NAME>'.password`
: The database connection password.
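
For example, a database that requires an explicit driver class might be configured as follows (a sketch; the database name `bar` and the file path are hypothetical, while `org.sqlite.JDBC` is the standard SQLite JDBC driver class):

```groovy
sql {
    db {
        bar {
            // hypothetical SQLite database with an explicit driver class
            url    = 'jdbc:sqlite:/path/to/data.db'
            driver = 'org.sqlite.JDBC'
        }
    }
}
```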

## Dataflow Operators

This plugin provides the following dataflow operators for querying from and inserting into database tables.

### fromQuery

The `fromQuery` factory method queries a SQL database and creates a channel that emits a tuple for each row in the corresponding result set. For example:

```nextflow
include { fromQuery } from 'plugin/nf-sqldb'

channel.fromQuery('select alpha, delta, omega from SAMPLE', db: 'foo').view()
```

The following options are available:

`db`
: The database handle. It must be defined under `sql.db` in the Nextflow configuration.

`batchSize`
: Query the data in batches of the given size. This option is recommended for queries that may return a large result set, so that the entire result set is not loaded into memory at once.
: *NOTE:* this feature requires that the underlying SQL database supports `LIMIT` and `OFFSET`.

`emitColumns`
: When `true`, the column names in the `SELECT` statement are emitted as the first tuple in the resulting channel.
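
For example, a large result set can be streamed in batches, with the column names emitted first (a sketch, reusing the `SAMPLE` table and `foo` database assumed above):

```nextflow
include { fromQuery } from 'plugin/nf-sqldb'

// Fetch rows in batches of 100; the first item emitted is the
// list of column names from the SELECT statement.
channel
    .fromQuery('select alpha, delta, omega from SAMPLE', db: 'foo', batchSize: 100, emitColumns: true)
    .view()
```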

### sqlInsert

The `sqlInsert` operator collects the items in a source channel and inserts them into a SQL database. For example:

```nextflow
include { sqlInsert } from 'plugin/nf-sqldb'

channel
    .of('Hello', 'world!')
    .map( it -> tuple(it, it.size()) )
    .sqlInsert( into: 'SAMPLE', columns: 'NAME, LEN', db: 'foo' )
```

The above example executes the following SQL statements against the database `foo` (as defined in the Nextflow configuration):

```sql
INSERT INTO SAMPLE (NAME, LEN) VALUES ('Hello', 5);
INSERT INTO SAMPLE (NAME, LEN) VALUES ('world!', 6);
```

*NOTE:* the target table (e.g. `SAMPLE` in the above example) must be created beforehand.

The following options are available:

`db`
: The database handle. It must be defined under `sql.db` in the Nextflow configuration.

`into`
: The target table for inserting the data.

`columns`
: The database table column names to be filled with the channel data. The order and cardinality of the column names must match the tuple values emitted by the channel. The columns can be specified as a list or as a string of comma-separated values.

`statement`
: The SQL `INSERT` statement to execute, using `?` as a placeholder for the actual values, for example: `insert into SAMPLE(X,Y) values (?,?)`. The `into` and `columns` options are ignored when this option is provided.

`batchSize`
: Insert the data in batches of the given size (default: `10`).

`setup`
: A SQL statement that is executed before inserting the data, e.g. to create the target table.
: *NOTE:* the underlying database should support the *create table if not exist* idiom, as the plugin will execute this statement every time the script is run.
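
As an illustration of these options, the earlier insert example can be rewritten with an explicit `setup` and `statement` (a sketch, assuming the same `foo` database; the column types in the `setup` statement are illustrative):

```nextflow
include { sqlInsert } from 'plugin/nf-sqldb'

channel
    .of('Hello', 'world!')
    .map( it -> tuple(it, it.size()) )
    .sqlInsert(
        db: 'foo',
        // create the target table on first use (illustrative column types)
        setup: 'create table if not exists SAMPLE (NAME varchar(64), LEN int)',
        // explicit insert statement; `into` and `columns` are ignored when provided
        statement: 'insert into SAMPLE (NAME, LEN) values (?,?)',
        batchSize: 100
    )
```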

## Querying CSV files

This plugin supports the [H2](https://www.h2database.com/html/main.html) database engine, which can query CSV files like database tables using SQL statements.

For example, create a CSV file using the snippet below:

```bash
cat <<EOF > test.csv
foo,bar
1,hello
2,ciao
3,bonjour
4,halo
EOF
```

Then query it in a Nextflow script:

```nextflow
include { fromQuery } from 'plugin/nf-sqldb'

channel
    .fromQuery("SELECT * FROM CSVREAD('test.csv') where foo>=2;")
    .view()
```

The `CSVREAD` function provided by the H2 database engine allows you to query any CSV file in your filesystem. As shown in the example, you can use standard SQL clauses like `SELECT` and `WHERE` to define your query.

## Caveats

Like all dataflow operators in Nextflow, the operators provided by this plugin are executed asynchronously.

In particular, data inserted using the `sqlInsert` operator is *not* guaranteed to be available to any subsequent queries using the `fromQuery` operator, as it is not possible to make a channel factory operation dependent on some upstream operation.
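
For example, in the following sketch (hypothetical, reusing the `SAMPLE` table and `foo` database assumed above), the query is *not* guaranteed to see the row inserted by the first operation, because the two operations run independently:

```nextflow
include { fromQuery; sqlInsert } from 'plugin/nf-sqldb'

// The insert and the query are independent dataflow operations:
// the query may execute before the insert has completed.
channel
    .of( tuple('alpha', 5) )
    .sqlInsert( into: 'SAMPLE', columns: 'NAME, LEN', db: 'foo' )

channel
    .fromQuery('select NAME, LEN from SAMPLE', db: 'foo')
    .view()
```
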
# AWS Athena integration

## Pre-requisites


## Usage

The following example uses the [NCBI SRA Metadata](https://www.ncbi.nlm.nih.gov/sra/docs/sra-athena/) as the data source. Refer to the [tutorial from NCBI](https://www.youtube.com/watch?v=_F4FhcDWSJg&ab_channel=TheNationalLibraryofMedicine) for setting up the AWS resources correctly.

### Configuration

Adjust the following configuration to match your setup:

```nextflow config
params {
    aws_glue_db = 'sra-glue-db'
    aws_glue_db_table = 'metadata'
}

plugins {
    id 'nf-sqldb'
}

sql {
    db {
        athena {
            // JDBC url, user, and password for your Athena instance
        }
    }
}
```

### Pipeline

Execute the following Nextflow pipeline:

```nextflow
include { fromQuery } from 'plugin/nf-sqldb'

def sqlQuery = """
    SELECT *
    FROM \"${params.aws_glue_db}\".${params.aws_glue_db_table}
    WHERE organism = 'Mycobacterium tuberculosis'
    LIMIT 10;
    """

Channel.fromQuery(sqlQuery, db: 'athena').view()
```

### Output

The pipeline script will print the query results to the console:

```console
[SRR6797500, WGS, SAN RAFFAELE, public, SRX3756197, 131677, Illumina HiSeq 2500, PAIRED, RANDOM, GENOMIC, ILLUMINA, SRS3011891, SAMN08629009, Mycobacterium tuberculosis, SRP128089, 2018-03-02, PRJNA428596, 165, null, 201, 383, null, 131677_WGS, Pathogen.cl, null, uncalculated, uncalculated, null, null, null, bam, sra, s3, s3.us-east-1, {k=assemblyname, v=GCF_000195955.2}, {k=bases, v=383901808}, {k=bytes, v=173931377}, {k=biosample_sam, v=MTB131677}, {k=collected_by_sam, v=missing}, {k=collection_date_sam, v=2010/2014}, {k=host_disease_sam, v=Tuberculosis}, {k=host_sam, v=Homo sapiens}, {k=isolate_sam, v=Clinical isolate18}, {k=isolation_source_sam_ss_dpl262, v=Not applicable}, {k=lat_lon_sam, v=Not collected}, {k=primary_search, v=131677}, {k=primary_search, v=131677_210916_BGD_210916_100.gatk.bam}, {k=primary_search, v=131677_WGS}, {k=primary_search, v=428596}, {k=primary_search, v=8629009}, {k=primary_search, v=PRJNA428596}, {k=primary_search, v=SAMN08629009}, {k=primary_search, v=SRP128089}, {k=primary_search, v=SRR6797500}, {k=primary_search, v=SRS3011891}, {k=primary_search, v=SRX3756197}, {k=primary_search, v=bp0}, {"assemblyname": "GCF_000195955.2", "bases": 383901808, "bytes": 173931377, "biosample_sam": "MTB131677", "collected_by_sam": ["missing"], "collection_date_sam": ["2010/2014"], "host_disease_sam": ["Tuberculosis"], "host_sam": ["Homo sapiens"], "isolate_sam": ["Clinical isolate18"], "isolation_source_sam_ss_dpl262": ["Not applicable"], "lat_lon_sam": ["Not collected"], "primary_search": "131677"}]

```