# EDU-709 - GCP features for AFTER REPLAY #3080

Merged 18 commits on Jan 15, 2025.
98 changes: 98 additions & 0 deletions docs/evaluate/temporal-cloud/gcp-private-service-connect.mdx
@@ -0,0 +1,98 @@
---
id: gcp-private-service-connect
title: Private Communication - GCP Private Service Connect
sidebar_label: GCP Private Service Connect
description: Secure your Temporal Cloud connections using GCP Private Service Connect.
slug: /cloud/security/gcp-private-service-connect
toc_max_heading_level: 4
keywords:
- private service connect
- private connectivity
- security
- temporal cloud
- gcp
tags:
- security
- temporal-cloud
- gcp
- private service connect
- private-connectivity
---

#### GCP Private Service Connect

[GCP Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) lets you open a path to Temporal Cloud without allowing public egress.
It establishes a private connection between your Google Virtual Private Cloud (VPC) and Temporal Cloud.
The connection is one-way: Temporal cannot establish a connection back to your service.
This is useful if you normally block egress traffic as part of your security protocols.
If you use a private environment that does not allow external connectivity, your environment remains isolated.

Set up Private Service Connect with Temporal Cloud by following these steps:

1. Open the Google Cloud console.
2. Navigate to **Network Services**, then **Private Service Connect**. If you haven't used **Network Services** recently, you might need to find it by clicking **View All Products** at the bottom of the left sidebar.

![GCP console showing Network Services, and the View All Products button](/img/gcp-private-service-connect/gcp-console.png)

3. Go to the **Endpoints** section and click **Connect endpoint**.

![GCP console showing the endpoints, and the Connect endpoint button](/img/gcp-private-service-connect/connect-endpoint-button.png)

4. Under **Target**, select **Published service**. This changes the form so that you can fill in the remaining fields as described below.

![GCP console showing the Connect endpoint form](/img/gcp-private-service-connect/connect-endpoint.png)

- For **Target service**, fill in the **Service name** with the Private Service Connect Service Name for the region you’re trying to connect to:

| Region | Private Service Connect Service Name |
| ---------------------- | ------------------------------------------------------------------------------- |
| `us-central1` | `projects/PROJECT/regions/us-central1/serviceAttachments/temporal-api` |
| `australia-southeast1` | `projects/PROJECT/regions/australia-southeast1/serviceAttachments/temporal-api` |

- For **Endpoint name**, enter a unique identifier for this endpoint, for instance `temporal-api`, or `temporal-api-<namespace>` if you want a separate endpoint per Namespace.
- For **Network** and **Subnetwork**, choose the network and subnetwork where you want to create your endpoint.
- For **IP address**, open the dropdown and select **Create IP address** to create an internal IP address from your subnet dedicated to the endpoint, then select that IP.
- Check **Enable global access** if you intend to connect to the endpoint from virtual machines outside the selected region. We recommend regional connectivity over global access, as it typically provides lower latency for your Workers.

5. Click the **Add endpoint** button at the bottom of the screen.
   If successful, the status of your new endpoint appears as **Accepted**.
   Take note of the **IP address** assigned to your endpoint; you will use it to connect to Temporal Cloud.

6. You can now use GCP Private Service Connect.
   Use the **IP address** from the previous step to connect to Temporal Cloud on port 7233.
   To establish a valid mTLS session, you must override the TLS server name used for the connection to `<namespace>.<account>.tmprl.cloud`.
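
If you prefer to create the endpoint from the command line instead of the console, the following is a rough gcloud sketch of steps 3 through 5. The network, subnetwork, address, and endpoint names are placeholders; adjust the region and the `PROJECT` portion of the service attachment to match the table above.

```bash
# Reserve an internal IP address in your subnet for the endpoint (placeholder names).
gcloud compute addresses create temporal-psc-ip \
  --region=us-central1 \
  --subnet=my-subnet

# Create the Private Service Connect endpoint targeting Temporal's service attachment.
gcloud compute forwarding-rules create temporal-api \
  --region=us-central1 \
  --network=my-vpc \
  --address=temporal-psc-ip \
  --target-service-attachment=projects/PROJECT/regions/us-central1/serviceAttachments/temporal-api

# Look up the IP address assigned to the endpoint; connect to it on port 7233.
gcloud compute addresses describe temporal-psc-ip \
  --region=us-central1 \
  --format="value(address)"
```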


:::tip

GCP Private Service Connect services are regional.
Individual Namespaces do not use separate services.

:::

Once set up, you can test your Private Service Connect connectivity using the following methods.
When connecting, you must override the TLS server name to target your Namespace’s individual hostname (`<namespace>.<account>.tmprl.cloud`) to establish a valid mTLS session:

- The Temporal CLI, using the `--tls-server-name` parameter to override the TLS server name. For example:

```bash
temporal workflow count \
--address <IP address of the PSC endpoint>:7233 \
--tls-cert-path /path/to/client.pem \
--tls-key-path /path/to/client.key \
--tls-server-name <namespace>.<account>.tmprl.cloud \
--namespace <namespace>
```

- Non-Temporal tools like grpcurl, useful for testing from environments that restrict tool usage, using the `-servername` parameter to override the TLS server name. For example:

```bash
grpcurl \
-servername <name>.<account>.tmprl.cloud \
-cert /path/to/client.pem \
-key /path/to/client.key \
<IP address of the PSC endpoint>:7233 \
temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo
```

- Temporal SDKs, by setting the endpoint server address argument to the Private Service Connect endpoint (`<IP address of the PSC endpoint>:7233`) and using the TLS configuration options to override the TLS server name.
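
For example, with the Go SDK, a minimal connection sketch might look like the following. The endpoint IP, Namespace, and certificate paths are placeholders.

```go
package main

import (
	"crypto/tls"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Placeholder client certificate paths; use your Namespace's mTLS credentials.
	cert, err := tls.LoadX509KeyPair("/path/to/client.pem", "/path/to/client.key")
	if err != nil {
		log.Fatalf("failed to load client certificate: %v", err)
	}

	c, err := client.Dial(client.Options{
		// Placeholder PSC endpoint IP; the gRPC port is 7233.
		HostPort:  "10.0.0.5:7233",
		Namespace: "<namespace>.<account>",
		ConnectionOptions: client.ConnectionOptions{
			TLS: &tls.Config{
				Certificates: []tls.Certificate{cert},
				// Override the TLS server name so the mTLS handshake targets
				// your Namespace's hostname rather than the endpoint IP.
				ServerName: "<namespace>.<account>.tmprl.cloud",
			},
		},
	})
	if err != nil {
		log.Fatalf("failed to connect to Temporal Cloud: %v", err)
	}
	defer c.Close()
}
```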
169 changes: 169 additions & 0 deletions docs/production-deployment/cloud/audit-logging-gcp.mdx
@@ -0,0 +1,169 @@
---
id: audit-logging-gcp
title: Audit Logging - GCP Pub/Sub
sidebar_label: GCP Pub/Sub
description: Audit Logging in Temporal Cloud provides forensic information, integrating with GCP Pub/Sub for secure data handling and supporting key Admin and API Key operations. This streamlines audit and compliance processes.
slug: /cloud/audit-logging-gcp
toc_max_heading_level: 4
keywords:
- audit logging
- explanation
- how-to
- operations
- temporal cloud
- term
- troubleshooting
- gcp
- pubsub
tags:
- audit-logging
- explanation
- how-to
- operations
- temporal-cloud
- term
- troubleshooting
- gcp
- pubsub
---

## Prerequisites

Before configuring the Audit Log sink, complete the following steps in Google Cloud:

1. Create a Pub/Sub topic and take note of its topic name, for example, "test-auditlog".
   1. If you wish to enable customer-managed encryption keys (CMEK), do so.
2. Record the GCP Project ID that owns the topic.
3. Set up a service account in the same project that trusts the Temporal internal service account, so that Temporal can write audit log information to your topic. Follow the instructions in the Temporal Cloud UI, where you input the service account ID, GCP Project ID, and Pub/Sub topic name. There are two ways to set up this service account:
   1. Follow the instructions and manually set up a new service account.
   2. Use the [Terraform template](https://github.com/temporalio/terraform-modules/tree/main/modules/gcp-sink-sa) to create the service account.
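
As a rough sketch, the topic and service account from these prerequisites can be created with gcloud as shown below. The project, topic, and service account names are placeholders, and the final trust binding that lets the Temporal internal service account impersonate yours should follow the exact instructions or Terraform template shown in the Temporal Cloud UI.

```bash
# Create the Pub/Sub topic (Prerequisite 1); placeholder project and topic names.
gcloud pubsub topics create test-auditlog --project=YOUR_PROJECT_ID

# Create a service account in the same project (Prerequisite 3).
gcloud iam service-accounts create temporal-auditlog-sink --project=YOUR_PROJECT_ID

# Allow that service account to publish to the topic.
gcloud pubsub topics add-iam-policy-binding test-auditlog \
  --project=YOUR_PROJECT_ID \
  --member="serviceAccount:temporal-auditlog-sink@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"

# Finally, grant the Temporal internal service account (shown in the Cloud UI) access to
# impersonate this service account, per the UI instructions or the Terraform module.
```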

## Temporal Cloud UI

![Temporal Cloud UI Setup for Audit Logging with GCP Pub/Sub](/img/audit-logging-pub-sub-gcp.png)

1. In the Cloud UI, navigate to **Settings** → **Integration** page → **Audit Log** and confirm that you see Pub/Sub as a sink option.
2. Configure the Audit Log:
   1. Choose Pub/Sub as the Sink type.
   2. Provide the following information:
      1. Service account ID (from Prerequisite 3)
      2. GCP Project ID (from Prerequisite 2)
      3. Pub/Sub topic name (from Prerequisite 1)
3. Once you have filled in the necessary values, click **Create** to configure the Audit Log.
4. Check back in a few minutes to confirm that everything was set up successfully.

## More information

More details are available in our public-facing documentation: https://docs.temporal.io/cloud/audit-logging

### Example of consuming an Audit Log

The following Go code is an example of deserializing exported Workflow Histories. A minimal sketch of consuming Audit Log messages from a Pub/Sub subscription follows it.

```go
package main

import (
	"fmt"
	"os"

	"github.com/gogo/protobuf/jsonpb"
	// TODO: change path to your generated proto
	export "generated/exported_workflow"
	"go.temporal.io/api/common/v1"
	enumspb "go.temporal.io/api/enums/v1"
	// TODO: change path to temporal repo
	ossserialization "go.temporal.io/server/common/persistence/serialization"
)

// extractWorkflowHistoriesFromFile reads a serialized export file and decodes it
// into a slice of exported Workflows.
func extractWorkflowHistoriesFromFile(filename string) ([]*export.Workflow, error) {
	bytes, err := os.ReadFile(filename)
	if err != nil {
		return nil, fmt.Errorf("error reading from file: %w", err)
	}
	blob := &common.DataBlob{
		EncodingType: enumspb.ENCODING_TYPE_PROTO3,
		Data:         bytes,
	}
	result := &export.ExportedWorkflows{}
	err = ossserialization.ProtoDecodeBlob(blob, result)
	if err != nil {
		return nil, fmt.Errorf("failed to decode file: %w", err)
	}
	workflows := result.Workflows
	for _, workflow := range workflows {
		if workflow.History == nil {
			return nil, fmt.Errorf("history is nil")
		}
	}
	return workflows, nil
}

// printWorkflow pretty-prints a single workflow history as indented JSON.
func printWorkflow(workflow *export.Workflow) {
	marshaler := jsonpb.Marshaler{
		Indent:       "\t",
		EmitDefaults: true,
	}
	str, err := marshaler.MarshalToString(workflow.History)
	if err != nil {
		fmt.Println("error printing workflow history: ", err)
		os.Exit(1)
	}
	fmt.Println(str)
}

// printAllWorkflows prints the history of every workflow in the export.
func printAllWorkflows(workflowHistories []*export.Workflow) {
	for _, workflow := range workflowHistories {
		printWorkflow(workflow)
	}
}

// printWorkflowHistory prints the history of the workflow whose started event
// matches the given workflow ID.
func printWorkflowHistory(workflowID string, workflowHistories []*export.Workflow) {
	if workflowID == "" {
		fmt.Println("invalid workflow ID")
		os.Exit(1)
	}
	for _, workflow := range workflowHistories {
		if workflow.History.Events[0].GetWorkflowExecutionStartedEventAttributes().WorkflowId == workflowID {
			fmt.Println("Printing workflow history for workflow ID: ", workflowID)
			printWorkflow(workflow)
			return
		}
	}
	fmt.Println("No workflow found with workflow ID: ", workflowID)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("Please provide a path to a file")
		os.Exit(1)
	}
	filename := os.Args[1]
	fmt.Println("Deserializing export workflow history from file: ", filename)
	workflowHistories, err := extractWorkflowHistoriesFromFile(filename)
	if err != nil {
		fmt.Println("error extracting workflow histories: ", err)
		os.Exit(1)
	}
	fmt.Println("Successfully deserialized workflow histories")
	fmt.Println("Total number of workflow histories: ", len(workflowHistories))
	fmt.Println("Choose an option:")
	fmt.Println("1. Print out all the workflows")
	fmt.Println("2. Print out the workflow history of a specific workflow")
	var option int
	fmt.Print("Enter your choice: ")
	_, err = fmt.Scanf("%d", &option)
	if err != nil {
		fmt.Println("invalid input.")
		os.Exit(1)
	}
	switch option {
	case 1:
		printAllWorkflows(workflowHistories)
	case 2:
		fmt.Println("Please provide a workflow ID:")
		var workflowID string
		_, err = fmt.Scanf("%s", &workflowID)
		if err != nil {
			fmt.Println("invalid input for workflow ID.")
			os.Exit(1)
		}
		printWorkflowHistory(workflowID, workflowHistories)
	default:
		fmt.Println("only options 1 and 2 are supported.")
		os.Exit(1)
	}
}
```
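
To read Audit Log entries as they arrive on your topic, you can attach a subscription and pull messages with the Google Cloud Pub/Sub Go client. The following is a minimal sketch; the project and subscription names are placeholders, and it assumes each message body carries one Audit Log entry as JSON.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// Placeholder values: your GCP project and a subscription attached to the audit log topic.
	projectID := "YOUR_PROJECT_ID"
	subscriptionID := "test-auditlog-sub"

	client, err := pubsub.NewClient(ctx, projectID)
	if err != nil {
		log.Fatalf("failed to create Pub/Sub client: %v", err)
	}
	defer client.Close()

	sub := client.Subscription(subscriptionID)

	// Receive blocks until the context is cancelled or an unrecoverable error occurs.
	err = sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		// Each message body is assumed to contain one Audit Log entry as JSON.
		fmt.Printf("audit log entry: %s\n", string(msg.Data))
		msg.Ack()
	})
	if err != nil {
		log.Fatalf("error receiving from subscription: %v", err)
	}
}
```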
2 changes: 1 addition & 1 deletion docs/production-deployment/cloud/audit-logging.mdx
@@ -121,7 +121,7 @@ The following example shows the contents of an Audit Log.

## How to configure Audit Logging {#configure-audit-logging}

Audit logging can be configured in AWS Kinesis

- [AWS Kinesis Instructions](/cloud/audit-logging-aws)

6 changes: 6 additions & 0 deletions docs/production-deployment/cloud/aws-export-s3.mdx
@@ -126,3 +126,9 @@ The following is an example of the output:
"lastHealthCheckTime": "2023-08-14T21:30:02Z"
}
```

### Next Steps

* [Verify export setup](/cloud/export#verify)
* [Monitor export progress](/cloud/export#monitor)
* [Work with exported files](/cloud/export#working-with-exported-files)
9 changes: 7 additions & 2 deletions docs/production-deployment/cloud/export.mdx
@@ -20,7 +20,6 @@ tags:
:::tip Support, stability, and dependency info

- Workflow History Export is in a Public Preview release status for Temporal Cloud.

:::

@@ -57,6 +56,7 @@ Before configuring the Export Sink, ensure you have the following:
You can configure your Workflow History Export in AWS or GCP.

- [AWS S3 Instructions](/cloud/export/aws-export-s3)
- [GCP GCS Instructions](/cloud/export/gcp-export-gcs)

## Verify export setup {#verify}

@@ -105,7 +105,12 @@ Once you've finalized the setup, here's how to monitor the export progress:
- Actions from the Export Job are included in the Usage UI.
- **Metrics**: Export related metrics are available in `temporal_cloud_v0_total_action_count` with the label `is_background="true"`. For more information, see [Cloud metrics](/cloud/metrics/).

For optimal results, review the S3 or GCS bucket for new exported files and refer to the UI insights.
Checking both keeps you informed of export progress and any potential issues.

## Working with exported files

The proto schema is defined in https://github.com/temporalio/api/blob/master/temporal/api/export/v1/message.proto. You can build your own custom deserializer for it.
You can also reference the sample Go code for deserializing an export file in the Appendix section of this document.
> **Reviewer comment (Contributor):** Anchor?


{/* starting on `2024/03/01` UTC actions are charged. */}