Release next #210

Merged 15 commits on Jan 11, 2024
2 changes: 1 addition & 1 deletion .vscode/settings.json
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
{
"cSpell.words": ["glaredb"],
"editor.codeActionsOnSave": {
"source.fixAll.markdownlint": false
"source.fixAll.markdownlint": "never"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
8 changes: 1 addition & 7 deletions _cloud/data-sources/add-data-source.md
@@ -30,10 +30,6 @@ Submitting this form will:

The exact SQL being run and its output (including errors) are displayed.

![Add pg data source output]

![Add pg data source output error]

## SQL reference

If you prefer to use SQL for adding data sources, see the following SQL
@@ -43,9 +39,7 @@ reference:
- [CREATE EXTERNAL TABLE]

[Add data source button]: /assets/images/cloud/data-sources/add-datasource-button.png
[Data sources dialog]: /assets/images/cloud/data-sources/data-sources-dialog.png
[Data sources dialog]: /assets/images/cloud/data-sources/data_sources_dialog.png
[Add pg data source]: /assets/images/cloud/data-sources/add-pg-data-source.png
[Add pg data source output]: /assets/images/cloud/data-sources/add-pg-data-source-output.png
[Add pg data source output error]: /assets/images/cloud/data-sources/add-pg-data-source-output-error.png
[CREATE EXTERNAL DATABASE]: /glaredb/sql-commands/create-external-database/
[CREATE EXTERNAL TABLE]: /glaredb/sql-commands/create-external-table/
4 changes: 2 additions & 2 deletions _cloud/data-sources/query-your-data.md
@@ -16,7 +16,7 @@ for a deployment.
The SQL workspace contains various helpful features for exploring your data,
querying your data and producing basic reports:

- Schema explorer (available from the sidebar)
- Schema explorer and search (available from the sidebar)
- Recent queries (available from the sidebar)
- Completion hints for tables and functions (available in the editor)
- Exporting results (available from the results panel)
@@ -52,7 +52,7 @@ such as `psql`. For more information on connection strings and passwords, refer
to [Connection Details] and [Managing Passwords].

[Deployment list]: /assets/images/cloud/data-sources/deployments-list.png
[SQL workspace]: /assets/images/cloud/data-sources/sql-workspace.png
[SQL workspace]: /assets/images/cloud/data-sources/sql_workspace.png
[Hybrid Execution]: /glaredb/hybrid-execution
[GlareDB Python library]: /glaredb/python/
[Working with your data]: /docs/working-with-your-data/
6 changes: 3 additions & 3 deletions _glaredb/hybrid-execution.md
@@ -36,8 +36,8 @@ is quick and easy.

Once you have a deployment ready to go, open the **Connect** dialog to get a set
of credentials for connecting to your deployment. This dialog provides the
commands needed to connect to your deployment using either the CLI or Python
library.
commands needed to connect to your deployment using either the CLI, Python or
Node.js library.

![Connect dialog]

@@ -127,6 +127,6 @@ external system.
[Postgres]: /docs/data-sources/supported/postgres.html
[Snowflake]: /docs/data-sources/supported/snowflake.html
[Deployment]: /cloud/deployments/
[Connect dialog]: /assets/images/glaredb/hybrid-execution/connect-dialog.png
[Connect dialog]: /assets/images/glaredb/hybrid-execution/connect_dialog.png
[Python Library]: /glaredb/python/
[CLI]: /glaredb/local/
8 changes: 4 additions & 4 deletions _glaredb/integrations/fabric.md
@@ -74,7 +74,7 @@ will automatically be enabled.
[glaredb python library]: https://pypi.org/project/glaredb/
[GlareDB Cloud]: https://console.glaredb.com
[Hybrid Execution]: /glaredb/hybrid-execution/
[create]: /assets/images/fabric/create.png
[cloud]: /assets/images/fabric/cloud.png
[success]: /assets/images/fabric/success.png
[connect]: /assets/images/fabric/connect.png
[create]: /assets/images/glaredb/fabric/create.png
[cloud]: /assets/images/glaredb/fabric/cloud.png
[success]: /assets/images/glaredb/fabric/success.png
[connect]: /assets/images/glaredb/fabric/connect_python.png
2 changes: 1 addition & 1 deletion _glaredb/integrations/hyperquery.md
@@ -77,5 +77,5 @@ dataframes can be joined with data in GlareDB Cloud.
[success]: /assets/images/glaredb/hyperquery/success.png
[GlareDB Cloud]: https://console.glaredb.com
[connect button]: /assets/images/glaredb/hyperquery/connect-button.png
[connect python]: /assets/images/glaredb/hyperquery/connect-python.png
[connect python]: /assets/images/glaredb/hyperquery/connect_python.png
[Hybrid Execution]: /glaredb/hybrid-execution/
30 changes: 15 additions & 15 deletions _glaredb/sql-commands/copy-to.md
@@ -51,21 +51,21 @@ COPY [(<query>) | <table>] TO '<s3_url>' [FORMAT format]
CREDENTIALS s3_credentials ( region '<aws_region>' );
```

| Field | Destination | Description |
| ------------------------- | ----------- | ----------------------------------------------------------------- |
| `aws_region` | S3 | The region of the bucket. |
| `aws_access_key_id` | S3 | ID of AWS access key with permissions to write the bucket. |
| `aws_secret_access_key` | S3 | Secret associated with the AWS access key. |
| `bucket` | GCS or S3 | Name of the bucket. |
| `format` | All | Output format. One of **csv** (default), **json** or **parquet**. |
| `gcp_credentials` | GCS | A database object containing GCP credentials. |
| `gcp_service_account_key` | GCS | A JSON-encoded GCP service account key with access to the bucket. |
| `gcs_url` | GCS | A url in the format gs://bucket/location |
| `location` | All | A path to copy to. |
| `query` | All | The query to execute, of which the results will be copied. |
| `s3_credentials` | S3 | A database object containing S3 credentials. |
| `s3_url` | S3 | A url in the format s3://bucket/location |
| `table` | All | A fully-qualified table name. |
| Field | Destination | Description |
| ------------------------- | ----------- | ---------------------------------------------------------------------------- |
| `aws_region` | S3 | The region of the bucket. |
| `aws_access_key_id` | S3 | ID of AWS access key with permissions to write the bucket. |
| `aws_secret_access_key` | S3 | Secret associated with the AWS access key. |
| `bucket` | GCS or S3 | Name of the bucket. |
| `format` | All | Output format. One of **csv** (default), **json**, **bson**, or **parquet**. |
| `gcp_credentials` | GCS | A database object containing GCP credentials. |
| `gcp_service_account_key` | GCS | A JSON-encoded GCP service account key with access to the bucket. |
| `gcs_url` | GCS | A url in the format gs://bucket/location |
| `location` | All | A path to copy to. |
| `query` | All | The query to execute, of which the results will be copied. |
| `s3_credentials` | S3 | A database object containing S3 credentials. |
| `s3_url` | S3 | A url in the format s3://bucket/location |
| `table` | All | A fully-qualified table name. |
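
For instance, a minimal sketch combining these fields (the bucket name, credentials object, and region are illustrative assumptions, not values from a real deployment):

```sql
-- Copy query results to S3 as Parquet, using a stored credentials object.
-- All identifiers below are placeholders.
COPY (SELECT id, name FROM users) TO 's3://my-bucket/exports/users.parquet'
FORMAT parquet
CREDENTIALS my_s3_creds ( region 'us-east-1' );
```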

## Usage

12 changes: 6 additions & 6 deletions _glaredb/sql-commands/create-external-table.md
@@ -17,12 +17,12 @@ CREATE EXTERNAL TABLE [IF NOT EXISTS] <table-name>
OPTIONS (<data-source-options>);
```

| Field | Description |
| --------------------- | -------------------------------------------------------------------------------------------------------------------- |
| `table-name` | Name of the database as it appears in GlareDB. |
| `data-source-type` | The type of data source: \[`bigquery`, `delta`, `gcs`, `iceberg`, `mongo`, `mysql`, `postgres`, `s3`, `snowflake`\]. |
| `tunnel-name` | [SSH tunnel] to connect with. |
| `data-source-options` | Options specific to this data source type. |
| Field | Description |
| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| `table-name` | Name of the database as it appears in GlareDB. |
| `data-source-type` | The type of data source: \[`bigquery`, `delta`, `gcs`, `iceberg`, `mongo`, `mysql`, `postgres`, `s3`, `snowflake`, `bson`, `lance`, `azure`\]. |
| `tunnel-name` | [SSH tunnel] to connect with. |
| `data-source-options` | Options specific to this data source type. |

`table-name` may optionally be qualified with a schema.
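
As a hedged sketch of how these pieces fit together (the connection string, option names, and table names below are illustrative assumptions; see the page for each data source type for its exact options):

```sql
-- Hypothetical Postgres-backed external table; all values are placeholders.
CREATE EXTERNAL TABLE IF NOT EXISTS my_schema.external_users
FROM postgres
OPTIONS (
  connection_string = 'postgres://user:password@my.postgres.host:5432/mydb',
  schema = 'public',
  table = 'users'
);
```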

54 changes: 54 additions & 0 deletions _glaredb/sql-functions/read_bson.md
@@ -0,0 +1,54 @@
---
layout: default
title: read_bson
parent: SQL functions
---

# `read_bson`

Reads one or more BSON files from the local filesystem or a supported object store.

## Syntax

```sql
-- Location of the file or files:
read_bson(<url>);
-- Explicitly set the number of documents considered to infer the
-- schema (default is 100):
read_bson(<url>, schema_sample_size => 250);
```

Like other functions that produce table objects in queries,
`read_bson` works with all supported cloud providers for remote
storage: GCS, S3, Azure, and compatible APIs.

```sql
-- Using a cloud credentials object.
read_bson(<url>, <credential_object>);
-- Required named argument for S3 buckets.
read_bson(<url>, <credentials_object>, region => '<aws_region>');
-- Pass S3 credentials using named arguments.
read_bson(<url>, access_key_id => '<aws_access_key_id>', secret_access_key => '<aws_secret_access_key>', region => '<aws_region>');
-- Pass GCS credentials using named arguments.
read_bson(<url>, service_account_key => '<gcp_service_account_key>');
```

## Behavior

### Multiple Files

`read_bson` will expand glob patterns in the `<url>` argument and will
treat the resulting list of files as partitions of the same table.

### Schema Inference

By default, `read_bson` sorts the files lexicographically and scans the
first 100 documents to infer the schema. Every field that appears is
added to the schema, in order of appearance, as a nullable field. The
first type observed becomes the field's type.

After inferring a schema, the remaining data and files may be read
in any order.

The `schema_sample_size` option allows you to change the number of
documents considered when inferring the schema.
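
For instance, a minimal sketch combining globbing with a wider inference window (the bucket path is an illustrative assumption):

```sql
-- Treat all matching files as partitions of one table, and sample
-- 500 documents (instead of the default 100) for schema inference.
SELECT * FROM read_bson('gs://my-bucket/events/*.bson', schema_sample_size => 500);
```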
37 changes: 37 additions & 0 deletions _glaredb/sql-functions/read_clickhouse.md
@@ -0,0 +1,37 @@
---
layout: default
title: read_clickhouse
parent: SQL functions
---

# `read_clickhouse`

Read a ClickHouse table. The table does not need to be registered as a data
source in GlareDB.

## Syntax

```sql
read_clickhouse(<connection_string>, <table>)
```

| Field | Description |
| ------------------- | ------------------------------- |
| `connection_string` | A ClickHouse connection string. |
| `table` | The name of the table to query. |

## Examples

The following example queries a `users` table located in a ClickHouse
database.

```sql
select * from read_clickhouse(
'clickhouse://my_user:my_password@my.clickhouse.host:9000/default',
'users'
);
```

Refer to the [documentation on ClickHouse data sources] for more information.

[documentation on ClickHouse data sources]: /docs/data-sources/supported/clickhouse
39 changes: 39 additions & 0 deletions _glaredb/sql-functions/read_sqlserver.md
@@ -0,0 +1,39 @@
---
layout: default
title: read_sqlserver
parent: SQL functions
---

# `read_sqlserver`

Read a SQL Server table. The table does not need to be registered as a data
source in GlareDB.

## Syntax

```sql
read_sqlserver(<connection_string>, <schema>, <table>)
```

| Field | Description |
| ------------------- | -------------------------------------------- |
| `connection_string` | A SQL Server connection string. |
| `schema` | The name of the schema containing the table. |
| `table` | The name of the table to query. |

## Examples

The following example queries a `users` table located in a SQL Server
database.

```sql
select * from read_sqlserver(
'server=tcp:my.sqlserver.host,1433;user=SA;password=Password123;TrustServerCertificate=true',
'dbo',
'users'
);
```

Refer to the [documentation on SQL Server data sources] for more information.

[documentation on SQL Server data sources]: /docs/data-sources/supported/sql-server
Binary file removed assets/images/cloud/admin/billing-panel.png
Binary file removed assets/images/cloud/admin/manage-plan.png
Binary file removed assets/images/cloud/data-sources/sql-workspace.png
Binary file removed assets/images/data-sources/ssh-tunnels-sidebar.png
Binary file removed assets/images/fabric/connect.png
Binary file added assets/images/glaredb/fabric/connect_python.png
Binary file removed assets/images/glaredb/hyperquery/connect-python.png
Binary file modified assets/images/signin.png
4 changes: 4 additions & 0 deletions cspell.json
@@ -37,7 +37,11 @@
],
"words": [
"arrowtypeof",
"bson",
"BSON",
"BYTEA",
"clickhouse",
"ClickHouse",
"dataframes",
"datasource",
"datasources",
2 changes: 1 addition & 1 deletion docs/data-sources/securing-connections.md
@@ -60,6 +60,6 @@ You can create SSH tunnels from the **SQL workspace**:

![SSH tunnel public key]

[SSH tunnel settings]: /assets/images/data-sources/ssh-tunnels-sidebar.png
[SSH tunnel settings]: /assets/images/data-sources/ssh_tunnels_sidebar.png
[Create SSH tunnel]: /assets/images/data-sources/create_ssh_tunnel.png
[SSH tunnel public key]: /assets/images/data-sources/public-key.png