
Commit

feat(client-timestream-write): This release adds the ability to ingest batched historical data or migrate data in bulk from S3 into Timestream using CSV files.

awstools committed Feb 27, 2023
1 parent 58828d8 commit 0435738
Showing 30 changed files with 4,523 additions and 942 deletions.
32 changes: 19 additions & 13 deletions clients/client-timestream-write/README.md
@@ -11,12 +11,18 @@ AWS SDK for JavaScript TimestreamWrite Client for Node.js, Browser and React Native

<fullname>Amazon Timestream Write</fullname>

<p>Amazon Timestream is a fast, scalable, fully managed time series database service that makes it easy to store and analyze trillions of time series data points per day.
With Timestream, you can easily store and analyze IoT sensor data to derive insights from your IoT applications.
You can analyze industrial telemetry to streamline equipment management and maintenance.
You can also store and analyze log data and metrics to improve the performance and availability of your applications.
Timestream is built from the ground up to effectively ingest, process,
and store time series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs.</p>
<p>Amazon Timestream is a fast, scalable, fully managed time-series database service
that makes it easy to store and analyze trillions of time-series data points per day. With
Timestream, you can easily store and analyze IoT sensor data to derive insights
from your IoT applications. You can analyze industrial telemetry to streamline equipment
management and maintenance. You can also store and analyze log data and metrics to improve
the performance and availability of your applications. </p>
<p>Timestream is built from the ground up to effectively ingest, process, and
store time-series data. It organizes data to optimize query processing. It automatically
scales based on the volume of data ingested and on the query volume to ensure you receive
optimal performance while inserting and querying data. As your data grows over time,
Timestream’s adaptive query processing engine spans across storage tiers to
provide fast analysis while reducing costs.</p>

## Installing

@@ -33,16 +39,16 @@ using your favorite package manager:

The AWS SDK is modulized by clients and commands.
To send a request, you only need to import the `TimestreamWriteClient` and
the commands you need, for example `CreateDatabaseCommand`:
the commands you need, for example `CreateBatchLoadTaskCommand`:

```js
// ES5 example
const { TimestreamWriteClient, CreateDatabaseCommand } = require("@aws-sdk/client-timestream-write");
const { TimestreamWriteClient, CreateBatchLoadTaskCommand } = require("@aws-sdk/client-timestream-write");
```

```ts
// ES6+ example
import { TimestreamWriteClient, CreateDatabaseCommand } from "@aws-sdk/client-timestream-write";
import { TimestreamWriteClient, CreateBatchLoadTaskCommand } from "@aws-sdk/client-timestream-write";
```

### Usage
@@ -61,7 +67,7 @@ const client = new TimestreamWriteClient({ region: "REGION" });
const params = {
/** input parameters */
};
const command = new CreateDatabaseCommand(params);
const command = new CreateBatchLoadTaskCommand(params);
```

#### Async/await
@@ -140,15 +146,15 @@ const client = new AWS.TimestreamWrite({ region: "REGION" });

// async/await.
try {
const data = await client.createDatabase(params);
const data = await client.createBatchLoadTask(params);
// process data.
} catch (error) {
// error handling.
}

// Promises.
client
.createDatabase(params)
.createBatchLoadTask(params)
.then((data) => {
// process data.
})
@@ -157,7 +163,7 @@ client
});

// callbacks.
client.createDatabase(params, (err, data) => {
client.createBatchLoadTask(params, (err, data) => {
// process err and data.
});
```
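The README's examples stop at constructing the command. As a sketch of what the new batch load input might look like, the object below follows the `CreateBatchLoadTaskRequest` shape (a CSV source in S3, plus an S3 location for the error/event report); the database, table, bucket names, and prefixes are hypothetical placeholders, not values from this commit, and the exact field names should be checked against the generated types:

```javascript
// Hypothetical input sketch for CreateBatchLoadTaskCommand. All names and
// prefixes below are illustrative placeholders.
const params = {
  TargetDatabaseName: "myDatabase",
  TargetTableName: "myTable",
  DataSourceConfiguration: {
    DataFormat: "CSV",
    DataSourceS3Configuration: {
      BucketName: "my-source-bucket",
      ObjectKeyPrefix: "historical/",
    },
  },
  // Errors and events for the task are written to a report at an S3 location.
  ReportConfiguration: {
    ReportS3Configuration: { BucketName: "my-report-bucket" },
  },
};

// The params would then be sent the same way as the other examples above:
//   const command = new CreateBatchLoadTaskCommand(params);
//   const response = await client.send(command);
```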
4 changes: 3 additions & 1 deletion clients/client-timestream-write/package.json
@@ -53,12 +53,14 @@
"@aws-sdk/util-user-agent-browser": "*",
"@aws-sdk/util-user-agent-node": "*",
"@aws-sdk/util-utf8": "*",
"tslib": "^2.3.1"
"tslib": "^2.3.1",
"uuid": "^8.3.2"
},
"devDependencies": {
"@aws-sdk/service-client-documentation-generator": "*",
"@tsconfig/node14": "1.0.3",
"@types/node": "^14.14.31",
"@types/uuid": "^8.3.0",
"concurrently": "7.0.0",
"downlevel-dts": "0.10.1",
"rimraf": "3.0.2",
405 changes: 274 additions & 131 deletions clients/client-timestream-write/src/TimestreamWrite.ts

Large diffs are not rendered by default.

39 changes: 33 additions & 6 deletions clients/client-timestream-write/src/TimestreamWriteClient.ts
@@ -54,23 +54,36 @@ import {
UserAgent as __UserAgent,
} from "@aws-sdk/types";

import {
CreateBatchLoadTaskCommandInput,
CreateBatchLoadTaskCommandOutput,
} from "./commands/CreateBatchLoadTaskCommand";
import { CreateDatabaseCommandInput, CreateDatabaseCommandOutput } from "./commands/CreateDatabaseCommand";
import { CreateTableCommandInput, CreateTableCommandOutput } from "./commands/CreateTableCommand";
import { DeleteDatabaseCommandInput, DeleteDatabaseCommandOutput } from "./commands/DeleteDatabaseCommand";
import { DeleteTableCommandInput, DeleteTableCommandOutput } from "./commands/DeleteTableCommand";
import {
DescribeBatchLoadTaskCommandInput,
DescribeBatchLoadTaskCommandOutput,
} from "./commands/DescribeBatchLoadTaskCommand";
import { DescribeDatabaseCommandInput, DescribeDatabaseCommandOutput } from "./commands/DescribeDatabaseCommand";
import {
DescribeEndpointsCommand,
DescribeEndpointsCommandInput,
DescribeEndpointsCommandOutput,
} from "./commands/DescribeEndpointsCommand";
import { DescribeTableCommandInput, DescribeTableCommandOutput } from "./commands/DescribeTableCommand";
import { ListBatchLoadTasksCommandInput, ListBatchLoadTasksCommandOutput } from "./commands/ListBatchLoadTasksCommand";
import { ListDatabasesCommandInput, ListDatabasesCommandOutput } from "./commands/ListDatabasesCommand";
import { ListTablesCommandInput, ListTablesCommandOutput } from "./commands/ListTablesCommand";
import {
ListTagsForResourceCommandInput,
ListTagsForResourceCommandOutput,
} from "./commands/ListTagsForResourceCommand";
import {
ResumeBatchLoadTaskCommandInput,
ResumeBatchLoadTaskCommandOutput,
} from "./commands/ResumeBatchLoadTaskCommand";
import { TagResourceCommandInput, TagResourceCommandOutput } from "./commands/TagResourceCommand";
import { UntagResourceCommandInput, UntagResourceCommandOutput } from "./commands/UntagResourceCommand";
import { UpdateDatabaseCommandInput, UpdateDatabaseCommandOutput } from "./commands/UpdateDatabaseCommand";
@@ -85,33 +98,41 @@ import {
import { getRuntimeConfig as __getRuntimeConfig } from "./runtimeConfig";

export type ServiceInputTypes =
| CreateBatchLoadTaskCommandInput
| CreateDatabaseCommandInput
| CreateTableCommandInput
| DeleteDatabaseCommandInput
| DeleteTableCommandInput
| DescribeBatchLoadTaskCommandInput
| DescribeDatabaseCommandInput
| DescribeEndpointsCommandInput
| DescribeTableCommandInput
| ListBatchLoadTasksCommandInput
| ListDatabasesCommandInput
| ListTablesCommandInput
| ListTagsForResourceCommandInput
| ResumeBatchLoadTaskCommandInput
| TagResourceCommandInput
| UntagResourceCommandInput
| UpdateDatabaseCommandInput
| UpdateTableCommandInput
| WriteRecordsCommandInput;

export type ServiceOutputTypes =
| CreateBatchLoadTaskCommandOutput
| CreateDatabaseCommandOutput
| CreateTableCommandOutput
| DeleteDatabaseCommandOutput
| DeleteTableCommandOutput
| DescribeBatchLoadTaskCommandOutput
| DescribeDatabaseCommandOutput
| DescribeEndpointsCommandOutput
| DescribeTableCommandOutput
| ListBatchLoadTasksCommandOutput
| ListDatabasesCommandOutput
| ListTablesCommandOutput
| ListTagsForResourceCommandOutput
| ResumeBatchLoadTaskCommandOutput
| TagResourceCommandOutput
| UntagResourceCommandOutput
| UpdateDatabaseCommandOutput
@@ -278,12 +299,18 @@ export interface TimestreamWriteClientResolvedConfig extends TimestreamWriteClie

/**
* <fullname>Amazon Timestream Write</fullname>
* <p>Amazon Timestream is a fast, scalable, fully managed time series database service that makes it easy to store and analyze trillions of time series data points per day.
* With Timestream, you can easily store and analyze IoT sensor data to derive insights from your IoT applications.
* You can analyze industrial telemetry to streamline equipment management and maintenance.
* You can also store and analyze log data and metrics to improve the performance and availability of your applications.
* Timestream is built from the ground up to effectively ingest, process,
* and store time series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs.</p>
* <p>Amazon Timestream is a fast, scalable, fully managed time-series database service
* that makes it easy to store and analyze trillions of time-series data points per day. With
* Timestream, you can easily store and analyze IoT sensor data to derive insights
* from your IoT applications. You can analyze industrial telemetry to streamline equipment
* management and maintenance. You can also store and analyze log data and metrics to improve
* the performance and availability of your applications. </p>
* <p>Timestream is built from the ground up to effectively ingest, process, and
* store time-series data. It organizes data to optimize query processing. It automatically
* scales based on the volume of data ingested and on the query volume to ensure you receive
* optimal performance while inserting and querying data. As your data grows over time,
* Timestream’s adaptive query processing engine spans across storage tiers to
* provide fast analysis while reducing costs.</p>
*/
export class TimestreamWriteClient extends __Client<
__HttpHandlerOptions,
clients/client-timestream-write/src/commands/CreateBatchLoadTaskCommand.ts (new file)

@@ -0,0 +1,126 @@
// smithy-typescript generated code
import { EndpointParameterInstructions, getEndpointPlugin } from "@aws-sdk/middleware-endpoint";
import { getEndpointDiscoveryPlugin } from "@aws-sdk/middleware-endpoint-discovery";
import { getSerdePlugin } from "@aws-sdk/middleware-serde";
import { HttpRequest as __HttpRequest, HttpResponse as __HttpResponse } from "@aws-sdk/protocol-http";
import { Command as $Command } from "@aws-sdk/smithy-client";
import {
FinalizeHandlerArguments,
Handler,
HandlerExecutionContext,
HttpHandlerOptions as __HttpHandlerOptions,
MetadataBearer as __MetadataBearer,
MiddlewareStack,
SerdeContext as __SerdeContext,
} from "@aws-sdk/types";

import {
CreateBatchLoadTaskRequest,
CreateBatchLoadTaskRequestFilterSensitiveLog,
CreateBatchLoadTaskResponse,
CreateBatchLoadTaskResponseFilterSensitiveLog,
} from "../models/models_0";
import {
deserializeAws_json1_0CreateBatchLoadTaskCommand,
serializeAws_json1_0CreateBatchLoadTaskCommand,
} from "../protocols/Aws_json1_0";
import { ServiceInputTypes, ServiceOutputTypes, TimestreamWriteClientResolvedConfig } from "../TimestreamWriteClient";

export interface CreateBatchLoadTaskCommandInput extends CreateBatchLoadTaskRequest {}
export interface CreateBatchLoadTaskCommandOutput extends CreateBatchLoadTaskResponse, __MetadataBearer {}

/**
* <p>Creates a new Timestream batch load task. A batch load task processes data from
* a CSV source in an S3 location and writes to a Timestream table. A mapping from
* source to target is defined in a batch load task. Errors and events are written to a report
* at an S3 location. For the report, if the KMS key is not specified, the
* batch load task will be encrypted with a Timestream managed KMS key
* located in your account. For more information, see <a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk">Amazon Web Services managed
* keys</a>. <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html">Service quotas apply</a>. For
* details, see <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.create-batch-load.html">code
* sample</a>.</p>
* @example
* Use a bare-bones client and the command you need to make an API call.
* ```javascript
* import { TimestreamWriteClient, CreateBatchLoadTaskCommand } from "@aws-sdk/client-timestream-write"; // ES Modules import
* // const { TimestreamWriteClient, CreateBatchLoadTaskCommand } = require("@aws-sdk/client-timestream-write"); // CommonJS import
* const client = new TimestreamWriteClient(config);
* const command = new CreateBatchLoadTaskCommand(input);
* const response = await client.send(command);
* ```
*
* @see {@link CreateBatchLoadTaskCommandInput} for command's `input` shape.
* @see {@link CreateBatchLoadTaskCommandOutput} for command's `response` shape.
* @see {@link TimestreamWriteClientResolvedConfig | config} for TimestreamWriteClient's `config` shape.
*
*/
export class CreateBatchLoadTaskCommand extends $Command<
CreateBatchLoadTaskCommandInput,
CreateBatchLoadTaskCommandOutput,
TimestreamWriteClientResolvedConfig
> {
// Start section: command_properties
// End section: command_properties

public static getEndpointParameterInstructions(): EndpointParameterInstructions {
return {
UseFIPS: { type: "builtInParams", name: "useFipsEndpoint" },
Endpoint: { type: "builtInParams", name: "endpoint" },
Region: { type: "builtInParams", name: "region" },
UseDualStack: { type: "builtInParams", name: "useDualstackEndpoint" },
};
}

constructor(readonly input: CreateBatchLoadTaskCommandInput) {
// Start section: command_constructor
super();
// End section: command_constructor
}

/**
* @internal
*/
resolveMiddleware(
clientStack: MiddlewareStack<ServiceInputTypes, ServiceOutputTypes>,
configuration: TimestreamWriteClientResolvedConfig,
options?: __HttpHandlerOptions
): Handler<CreateBatchLoadTaskCommandInput, CreateBatchLoadTaskCommandOutput> {
this.middlewareStack.use(getSerdePlugin(configuration, this.serialize, this.deserialize));
this.middlewareStack.use(
getEndpointPlugin(configuration, CreateBatchLoadTaskCommand.getEndpointParameterInstructions())
);
this.middlewareStack.use(
getEndpointDiscoveryPlugin(configuration, { clientStack, options, isDiscoveredEndpointRequired: true })
);

const stack = clientStack.concat(this.middlewareStack);

const { logger } = configuration;
const clientName = "TimestreamWriteClient";
const commandName = "CreateBatchLoadTaskCommand";
const handlerExecutionContext: HandlerExecutionContext = {
logger,
clientName,
commandName,
inputFilterSensitiveLog: CreateBatchLoadTaskRequestFilterSensitiveLog,
outputFilterSensitiveLog: CreateBatchLoadTaskResponseFilterSensitiveLog,
};
const { requestHandler } = configuration;
return stack.resolve(
(request: FinalizeHandlerArguments<any>) =>
requestHandler.handle(request.request as __HttpRequest, options || {}),
handlerExecutionContext
);
}

private serialize(input: CreateBatchLoadTaskCommandInput, context: __SerdeContext): Promise<__HttpRequest> {
return serializeAws_json1_0CreateBatchLoadTaskCommand(input, context);
}

private deserialize(output: __HttpResponse, context: __SerdeContext): Promise<CreateBatchLoadTaskCommandOutput> {
return deserializeAws_json1_0CreateBatchLoadTaskCommand(output, context);
}

// Start section: command_body_extra
// End section: command_body_extra
}
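A batch load task created by this command runs asynchronously, so a caller would typically poll `DescribeBatchLoadTask` until the task leaves its in-progress states. A minimal polling sketch, assuming a `describeTask` function that wraps the describe call (stubbed here with canned statuses, and using status names that are illustrative rather than taken from this commit):

```javascript
// Minimal polling sketch. `describeTask` stands in for a wrapper around a
// DescribeBatchLoadTaskCommand round trip; the status names are illustrative.
async function waitForBatchLoadTask(describeTask, taskId, delayMs = 1000) {
  for (;;) {
    const { TaskStatus } = await describeTask(taskId);
    if (TaskStatus !== "CREATED" && TaskStatus !== "IN_PROGRESS") {
      // Terminal state, e.g. SUCCEEDED or FAILED.
      return TaskStatus;
    }
    // Back off briefly before asking again.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}

// Stub that reports success on the third call, for illustration only.
const statuses = ["CREATED", "IN_PROGRESS", "SUCCEEDED"];
const stubDescribe = async () => ({ TaskStatus: statuses.shift() });
```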
clients/client-timestream-write/src/commands/CreateDatabaseCommand.ts

@@ -30,12 +30,9 @@ export interface CreateDatabaseCommandInput extends CreateDatabaseRequest {}
export interface CreateDatabaseCommandOutput extends CreateDatabaseResponse, __MetadataBearer {}

/**
* <p>Creates a new Timestream database. If the KMS key is not specified, the database will be encrypted with a Timestream managed KMS
* key located in your account.
* Refer to <a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk">Amazon Web Services managed KMS keys</a> for more info.
* <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html">Service quotas apply</a>.
* See
* <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.create-db.html">code sample</a> for details.
* <p>Creates a new Timestream database. If the KMS key is not
* specified, the database will be encrypted with a Timestream managed KMS key located in your account. For more information, see <a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk">Amazon Web Services managed keys</a>. <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html">Service quotas apply</a>. For
* details, see <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.create-db.html">code sample</a>.
* </p>
* @example
* Use a bare-bones client and the command you need to make an API call.
Expand Down
clients/client-timestream-write/src/commands/CreateTableCommand.ts

@@ -30,15 +30,12 @@ export interface CreateTableCommandInput extends CreateTableRequest {}
export interface CreateTableCommandOutput extends CreateTableResponse, __MetadataBearer {}

/**
* <p>The CreateTable operation adds a new table to an existing database in your account. In an Amazon Web Services account,
* table names must be at least unique within each Region if they are in the same database.
* You may have identical table names in the same Region if the tables are in separate databases.
* While creating the table, you must specify the table name, database name,
* and the retention properties.
* <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html">Service quotas apply</a>.
* See
* <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.create-table.html">code sample</a> for details.
* </p>
* <p>Adds a new table to an existing database in your account. In an Amazon Web Services account, table names must be at least unique within each Region if they are in the same
* database. You might have identical table names in the same Region if the tables are in
* separate databases. While creating the table, you must specify the table name, database
* name, and the retention properties. <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html">Service quotas apply</a>. See
* <a href="https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.create-table.html">code
* sample</a> for details. </p>
* @example
* Use a bare-bones client and the command you need to make an API call.
* ```javascript
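The reworded `CreateTable` doc comment above notes that the table name, database name, and retention properties must all be specified at creation. A hypothetical params sketch (names and retention values are illustrative placeholders, not from this commit):

```javascript
// Hypothetical CreateTableCommand input. Names and values are placeholders.
const createTableParams = {
  DatabaseName: "myDatabase",
  TableName: "myTable",
  RetentionProperties: {
    // How long recent data stays in the fast memory store before being
    // moved to the magnetic store.
    MemoryStoreRetentionPeriodInHours: 24,
    // How long data is kept in the magnetic store before deletion.
    MagneticStoreRetentionPeriodInDays: 365,
  },
};
```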
