To install Tanzu Application Platform (commonly known as TAP) within the Amazon Web Services (AWS) ecosystem, you must create several AWS resources. Use this topic to learn how to create:
- An Amazon Elastic Kubernetes Service (EKS) cluster to install Tanzu Application Platform.
- Identity and Access Management (IAM) roles to allow authentication and authorization to read and write from Amazon Elastic Container Registry (ECR).
- ECR repositories for the Tanzu Application Platform container images. You must create these because ECR does not support automatically creating container repositories on initial push. For more information, see the AWS repository in GitHub.
Creating these resources enables Tanzu Application Platform to authenticate with an IAM role bound to a Kubernetes service account, rather than the typical strategy of storing a user name and password in a Kubernetes secret. For more information, see the AWS documentation.
This is important when using ECR because authenticating to ECR is a two-step process:
1. Retrieve a token using your AWS credentials.
2. Use the token to authenticate to the registry.
For security, the token has a lifetime of 12 hours, which makes storing it as a secret for a service impractical: it has to be refreshed every 12 hours.
Using an IAM role on a service account removes the need to retrieve the token at all, because credential helpers within the services handle it.
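For reference, the manual two-step flow that the service account role replaces looks like the following. This is only an illustration, assuming Docker is installed as the registry client; you do not need to run it for this walkthrough.
# Step 1: retrieve a short-lived token. Step 2: pass it to the registry client.
aws ecr get-login-password --region $AWS_REGION | \
  docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com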
There are numerous methods to manage AWS cloud resources and create EKS clusters. The method presented in the following guide was chosen for simplicity.
Before installing Tanzu Application Platform on AWS, you need:
- An AWS account. You need to create all of your resources within Amazon Web Services, so you need an Amazon account. For more information, see How do I create and activate a new AWS account?. You need your account ID for this walkthrough.
- AWS CLI. This walkthrough uses the AWS CLI to both query and configure resources in AWS, such as IAM roles. For more information, see the AWS documentation.
- eksctl command line. The eksctl command line helps you manage the life cycle of EKS clusters. This guide uses it to create clusters. To install eksctl, see the eksctl documentation.
Variables are used throughout this guide. To simplify the process and minimize the opportunity for errors, export these variables:
export AWS_ACCOUNT_ID=012345678901
export AWS_REGION=us-west-2
export EKS_CLUSTER_NAME=tap-on-aws
Where:
| Variable | Description |
| --- | --- |
| AWS_ACCOUNT_ID | Your AWS account ID |
| AWS_REGION | The AWS region you are going to deploy to |
| EKS_CLUSTER_NAME | The name of your EKS cluster |
To create an EKS cluster in the specified region, run:
eksctl create cluster --name $EKS_CLUSTER_NAME --managed --region $AWS_REGION --instance-types t3.xlarge --version 1.28 --with-oidc -N 5
Creating the control plane and node group can take 30 to 60 minutes.
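When the cluster is ready, you can optionally confirm access to it. This quick check assumes you have kubectl installed:
# eksctl normally updates your kubeconfig for you; this command makes sure it points at the new cluster.
aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $AWS_REGION
# List the worker nodes to confirm the cluster is reachable.
kubectl get nodes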
Note This step is optional if you already have an existing EKS cluster v1.23 or later with OpenID Connect (OIDC) authentication enabled. For more information about how to enable the OIDC provider, see the AWS documentation.
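If you are reusing an existing cluster that does not yet have an IAM OIDC provider, one way to associate one is with eksctl. This is only a sketch; the exact approach depends on how you manage your clusters:
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve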
Tanzu Application Platform requires stateful services. Starting from EKS v1.23, the EBS CSI driver is no longer installed by default. For more information about how to install the EBS CSI driver, see the AWS documentation.
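As an example, the following sketch installs the EBS CSI driver as an EKS managed add-on with its own IAM role. The role name AmazonEKS_EBS_CSI_DriverRole is only an example; see the AWS documentation for the supported options.
# Create an IAM role for the EBS CSI controller service account (role name is an example).
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster $EKS_CLUSTER_NAME \
  --region $AWS_REGION \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve

# Install the managed add-on and bind it to the role created above.
eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster $EKS_CLUSTER_NAME \
  --region $AWS_REGION \
  --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole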
ECR requires that the container repositories are already created for images to be pushed to them. For Tanzu Application Platform, you must create the following two repositories:
- A repository to store the Tanzu Application Platform service container images
- A repository to store Tanzu Build Service generated Base OS and Builder container images
To create these repositories, run:
aws ecr create-repository --repository-name tap-images --region $AWS_REGION
aws ecr create-repository --repository-name tap-build-service --region $AWS_REGION
Depending on your installation choices, you might also require the following additional system-related repositories:
- A repository to store Tanzu Build Service full dependencies container images
- A repository to store Tanzu Application Platform's Local Source Proxy container images
- A repository to store Tanzu Cluster Essentials container images
To create these repositories, run:
aws ecr create-repository --repository-name full-deps --region $AWS_REGION
aws ecr create-repository --repository-name tap-lsp --region $AWS_REGION
aws ecr create-repository --repository-name tanzu-cluster-essentials --region $AWS_REGION
You can give the repositories any names you want, but remember them for when you build the configuration later.
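To confirm the repositories were created and to see their names again, you can list them:
aws ecr describe-repositories --region $AWS_REGION --query "repositories[].repositoryName" --output table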
As with the platform container repositories, you must create repositories for each workload before you create the workload, so that a repository is available for uploading container images and workload bundles.
When installing Tanzu Application Platform, you must specify a prefix for all workload registries. This topic uses tanzu-application-platform as the default value, but you can customize this value in the profile configuration created in Install Tanzu Application Platform package and profiles on AWS.
To use the default value, create two repositories for each workload with the following format:
tanzu-application-platform/WORKLOADNAME-NAMESPACE
tanzu-application-platform/WORKLOADNAME-NAMESPACE-bundle
For example, to create these repositories for the sample workload tanzu-java-web-app in the default namespace, run the following ECR commands:
aws ecr create-repository --repository-name tanzu-application-platform/tanzu-java-web-app-default --region $AWS_REGION
aws ecr create-repository --repository-name tanzu-application-platform/tanzu-java-web-app-default-bundle --region $AWS_REGION
Note The default Supply Chain Choreographer method of storing Kubernetes configuration is RegistryOps, which requires the bundle repository. If you enabled the GitOps capability, this repository is not required. For more information about the differences between RegistryOps and GitOps, see Use GitOps or RegistryOps with Supply Chain Choreographer.
By default, the EKS cluster is provisioned with an EC2 instance profile that provides read-only access for the entire EKS cluster to the ECR registry within your AWS account. For more information, see this AWS documentation.
However, some of the services within Tanzu Application Platform require write access or batch read access to the container repositories. To provide that access, create IAM roles and add the role ARNs to the Kubernetes service accounts that those services use. This ensures that only the required services can write container images to ECR and use batch read access, rather than applying a blanket policy to the entire cluster.
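For context, this binding (IAM Roles for Service Accounts) works through an eks.amazonaws.com/role-arn annotation on the service account. The following kubectl command is only an illustration of that mechanism, using the kpack controller service account as an example; in this walkthrough you do not annotate service accounts by hand, because the role ARNs are typically supplied later through the profile configuration described in Install Tanzu Application Platform package and profiles on AWS.
# Illustration only: bind an IAM role to a Kubernetes service account with the IRSA annotation.
kubectl annotate serviceaccount controller \
  --namespace kpack \
  eks.amazonaws.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/tap-build-service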
Create the following IAM roles:
- Tanzu Build Service: Gives write access to the repository to allow the service to automatically upload new images. Also provides elevated batch read access to the tap-images and full-deps repositories. This is limited in scope to the service accounts for kpack and the dependency updater.
- Workload: Gives write access to the entire ECR registry with a prepended path. Also provides elevated batch read access to the full-deps repository if you use Tanzu Build Service full dependencies. This prevents you from updating the policy for each new workload created.
- Local Source Proxy: Gives write access to the repository to allow the service to automatically upload new images. This is limited in scope to the service account for Local Source Proxy.
To create the roles, you must establish two policies:
- Trust policy: Limits the scope to the OIDC endpoint for the Kubernetes cluster and the Kubernetes service account you attach the role to.
- Permission policy: Limits the scope of actions the role can take on resources.
Note These policies attempt to achieve a least privilege model. Review them to confirm they adhere to your organization's policies.
To simplify this walkthrough, use a script to create these policy documents and the roles. The script writes the policy documents to files and then creates the IAM roles from them, and it uses jq to parse the cluster description. If Local Source Proxy is not in your installation plan, you can omit the associated commands.
Run:
# Retrieve the OIDC endpoint from the Kubernetes cluster and store it for use in the policy.
export OIDCPROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --region $AWS_REGION --output json | jq '.cluster.identity.oidc.issuer' | tr -d '"' | sed 's/https:\/\///')
cat << EOF > build-service-trust-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDCPROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDCPROVIDER}:aud": "sts.amazonaws.com"
},
"StringLike": {
"${OIDCPROVIDER}:sub": [
"system:serviceaccount:kpack:controller",
"system:serviceaccount:build-service:dependency-updater-controller-serviceaccount"
]
}
}
}
]
}
EOF
cat << EOF > build-service-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:DescribeRegistry",
"ecr:GetAuthorizationToken",
"ecr:GetRegistryPolicy",
"ecr:PutRegistryPolicy",
"ecr:PutReplicationConfiguration",
"ecr:DeleteRegistryPolicy"
],
"Resource": "*",
"Effect": "Allow",
"Sid": "TAPEcrBuildServiceGlobal"
},
{
"Action": [
"ecr:DescribeImages",
"ecr:ListImages",
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:BatchGetRepositoryScanningConfiguration",
"ecr:DescribeImageReplicationStatus",
"ecr:DescribeImageScanFindings",
"ecr:DescribeRepositories",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryScanningConfiguration",
"ecr:GetRepositoryPolicy",
"ecr:ListTagsForResource",
"ecr:TagResource",
"ecr:UntagResource",
"ecr:BatchDeleteImage",
"ecr:BatchImportUpstreamImage",
"ecr:CompleteLayerUpload",
"ecr:CreatePullThroughCacheRule",
"ecr:CreateRepository",
"ecr:DeleteLifecyclePolicy",
"ecr:DeletePullThroughCacheRule",
"ecr:DeleteRepository",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:PutImageScanningConfiguration",
"ecr:PutImageTagMutability",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryScanningConfiguration",
"ecr:ReplicateImage",
"ecr:StartImageScan",
"ecr:StartLifecyclePolicyPreview",
"ecr:UploadLayerPart",
"ecr:DeleteRepositoryPolicy",
"ecr:SetRepositoryPolicy"
],
"Resource": [
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/full-deps",
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/tap-build-service",
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/tap-images"
],
"Effect": "Allow",
"Sid": "TAPEcrBuildServiceScoped"
}
]
}
EOF
cat << EOF > workload-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:DescribeRegistry",
"ecr:GetAuthorizationToken",
"ecr:GetRegistryPolicy",
"ecr:PutRegistryPolicy",
"ecr:PutReplicationConfiguration",
"ecr:DeleteRegistryPolicy"
],
"Resource": "*",
"Effect": "Allow",
"Sid": "TAPEcrWorkloadGlobal"
},
{
"Action": [
"ecr:DescribeImages",
"ecr:ListImages",
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:BatchGetRepositoryScanningConfiguration",
"ecr:DescribeImageReplicationStatus",
"ecr:DescribeImageScanFindings",
"ecr:DescribeRepositories",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryScanningConfiguration",
"ecr:GetRepositoryPolicy",
"ecr:ListTagsForResource",
"ecr:TagResource",
"ecr:UntagResource",
"ecr:BatchDeleteImage",
"ecr:BatchImportUpstreamImage",
"ecr:CompleteLayerUpload",
"ecr:CreatePullThroughCacheRule",
"ecr:CreateRepository",
"ecr:DeleteLifecyclePolicy",
"ecr:DeletePullThroughCacheRule",
"ecr:DeleteRepository",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:PutImageScanningConfiguration",
"ecr:PutImageTagMutability",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryScanningConfiguration",
"ecr:ReplicateImage",
"ecr:StartImageScan",
"ecr:StartLifecyclePolicyPreview",
"ecr:UploadLayerPart",
"ecr:DeleteRepositoryPolicy",
"ecr:SetRepositoryPolicy"
],
"Resource": [
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/full-deps",
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/tanzu-application-platform/*"
],
"Effect": "Allow",
"Sid": "TAPEcrWorkloadScoped"
}
]
}
EOF
cat << EOF > workload-trust-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDCPROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"${OIDCPROVIDER}:sub": "system:serviceaccount:*:default",
"${OIDCPROVIDER}:aud": "sts.amazonaws.com"
}
}
}
]
}
EOF
cat << EOF > local-source-proxy-trust-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDCPROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDCPROVIDER}:aud": "sts.amazonaws.com"
},
"StringLike": {
"${OIDCPROVIDER}:sub": [
"system:serviceaccount:tap-local-source-system:proxy-manager"
]
}
}
}
]
}
EOF
cat << EOF > local-source-proxy-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:GetAuthorizationToken"
],
"Resource": "*",
"Effect": "Allow",
"Sid": "TAPLSPGlobal"
},
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:ListTagsForResource",
"ecr:DescribeImageScanFindings",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
],
"Resource": [
"arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/tap-lsp"
],
"Sid": "TAPLSPScoped"
}
]
}
EOF
# Create the Tanzu Build Service Role.
aws iam create-role --role-name tap-build-service --assume-role-policy-document file://build-service-trust-policy.json
# Attach the Policy to the Build Role.
aws iam put-role-policy --role-name tap-build-service --policy-name tapBuildServicePolicy --policy-document file://build-service-policy.json
# Create the Workload Role.
aws iam create-role --role-name tap-workload --assume-role-policy-document file://workload-trust-policy.json
# Attach the Policy to the Workload Role.
aws iam put-role-policy --role-name tap-workload --policy-name tapWorkload --policy-document file://workload-policy.json
# Create the TAP Local Source Proxy Role.
aws iam create-role --role-name tap-local-source-proxy --assume-role-policy-document file://local-source-proxy-trust-policy.json
# Attach the Policy to the tap-local-source-proxy Role created earlier.
aws iam put-role-policy --role-name tap-local-source-proxy --policy-name tapLocalSourcePolicy --policy-document file://local-source-proxy-policy.json
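To confirm the roles exist before you continue, you can query them:
# Print the ARN of each role to confirm it was created.
aws iam get-role --role-name tap-build-service --query "Role.Arn"
aws iam get-role --role-name tap-workload --query "Role.Arn"
aws iam get-role --role-name tap-local-source-proxy --query "Role.Arn"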
After you create the AWS resources, continue with:
- [Deploy Cluster Essentials](https://{{ vars.staging_toggle }}.vmware.com/en/Cluster-Essentials-for-VMware-Tanzu/{{ vars.url_version }}/cluster-essentials/deploy.html)
  Important When you use a VMware Tanzu Kubernetes Grid cluster, you do not need to install Cluster Essentials because the contents of Cluster Essentials are already installed on your cluster.
- Install Tanzu Application Platform package and profiles on AWS