Prerequisites and planning for installing Tanzu Application Platform

The following are required to install Tanzu Application Platform (commonly known as TAP):

Installation planning

Before you begin a Tanzu Application Platform installation:

  1. Review the Tanzu Application Platform planning and architecture documentation. For more information, see Planning and architecture reference.

  2. (Optional) To gain an understanding of Tanzu Application Platform, experiment with a Tanzu Application Platform sandbox. For more information, see Access an experimental developer sandbox environment.

Installation prerequisites

Installation requires:

VMware Tanzu Network and container image registry requirements

  • Access to VMware Tanzu Network to download Tanzu Application Platform packages.

  • Cluster-specific registry:

    • A container image registry, such as Harbor or Docker Hub, for application images, base images, and runtime dependencies. When available, VMware recommends using a paid registry account to avoid potential rate-limiting associated with some free registry offerings.

    • Recommended storage space for container image registry:

      • 1 GB of available storage if installing Tanzu Build Service with the lite set of dependencies.
      • 10 GB of available storage if installing Tanzu Build Service with the full set of dependencies, which are suitable for offline environments.

      Note: For production environments, full dependencies are recommended to optimize security and performance. For more information about Tanzu Build Service dependencies, see About lite and full dependencies.

  • Registry credentials with read and write access, so that Tanzu Application Platform can store images. For a sketch of how these values appear in tap-values.yaml, see the example after this list.

  • Network access to your chosen container image registry.
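
For orientation, the sketch below shows how registry details are typically supplied in tap-values.yaml. The kp_default_repository keys follow the pattern used for Tanzu Build Service in the installation values; the registry server, repository path, and credentials shown are placeholders, and the exact keys can vary between Tanzu Application Platform versions.

```yaml
# tap-values.yaml (excerpt): a minimal sketch of the registry configuration.
# REGISTRY-SERVER, REPO-NAME, and the credential values are placeholders.
buildservice:
  kp_default_repository: "REGISTRY-SERVER/REPO-NAME/build-service"
  kp_default_repository_username: "REGISTRY-USERNAME"   # needs read and write access
  kp_default_repository_password: "REGISTRY-PASSWORD"
```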

DNS Records

The following DNS records are optional but recommended. Allocate them if you decide to use the corresponding components (an example follows this list):

  • Cloud Native Runtimes (Knative): Allocate a wildcard subdomain for your developers' applications. This is specified in the shared.ingress_domain key of the tap-values.yaml configuration file that you input with the installation. Point this wildcard at the external IP address of the envoy service in tanzu-system-ingress. For more information about tanzu-system-ingress, see Access with the shared Ingress method.

  • Tanzu Developer Portal: If you decide to implement the shared ingress and include Tanzu Developer Portal, allocate a fully qualified domain name (FQDN) that can be pointed at the tanzu-system-ingress service. The default host name consists of tap-gui and the shared.ingress_domain value. For example, tap-gui.example.com.

  • Supply Chain Security Tools - Store: Similar to Tanzu Developer Portal, allocate an FQDN that can be pointed at the tanzu-system-ingress service. The default host name consists of metadata-store and the shared.ingress_domain value. For example, metadata-store.example.com.

  • Artifact Metadata Repository: Similar to Supply Chain Security Tools (SCST) - Store, allocate an FQDN that can be pointed at the tanzu-system-ingress service. The default host name consists of amr-graphql and the shared.ingress_domain value. For example, amr-graphql.example.com.

  • Application Live View: If you select the ingressEnabled option, allocate a corresponding FQDN that can be pointed at the tanzu-system-ingress service. The default host name consists of appliveview and the shared.ingress_domain value. For example, appliveview.example.com.
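
To make the relationship between these records concrete, the sketch below pairs the shared.ingress_domain value from tap-values.yaml with the DNS entries it implies. The domain and IP address are placeholders; 203.0.113.10 stands in for the external IP of the envoy service in tanzu-system-ingress.

```yaml
# tap-values.yaml (excerpt): the domain from which the host names derive.
shared:
  ingress_domain: "example.com"

# Corresponding DNS records (placeholder values), all pointing at the
# external IP of the envoy service in tanzu-system-ingress:
#   *.example.com               A   203.0.113.10   # Cloud Native Runtimes workloads
#   tap-gui.example.com         A   203.0.113.10   # Tanzu Developer Portal
#   metadata-store.example.com  A   203.0.113.10   # SCST - Store
#   amr-graphql.example.com     A   203.0.113.10   # Artifact Metadata Repository
#   appliveview.example.com     A   203.0.113.10   # Application Live View
```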

Supply Chain Security Tools - Store

The default database deployment does not support many enterprise production requirements, including scaling, redundancy, or failover. For more information about setting up the database for production, see Database configuration.

Tanzu Developer Portal

For Tanzu Developer Portal, you must have:

  • The latest version of Chrome, Firefox, or Edge. Tanzu Developer Portal does not currently support the Safari browser.
  • A Git repository for Tanzu Developer Portal's software catalogs, with a token that allows read access. For more information about how to use your Git repository, see Create an application accelerator. Supported Git infrastructure includes:
    • GitHub
    • GitLab
    • Azure DevOps
  • Tanzu Developer Portal Blank Catalog from the Tanzu Application section of VMware Tanzu Network. The Blank Catalog serves as a foundation for your customization, allowing you to populate it with your own content. For more information about formatting your own catalog, see Catalog operations.
    • To install, navigate to Tanzu Network. Under the list of available files to download, there is a folder titled tanzu-developer-portal-catalogs-latest. Inside that folder is a compressed archive titled Tanzu Developer Portal Blank Catalog. Extract that catalog to the Git repository of your choice described earlier. This repository serves as the configuration location for your organization's catalog inside Tanzu Developer Portal.
  • The Tanzu Developer Portal catalog supports two approaches to storing catalog information (both are sketched after this list):
    • The default option uses an in-memory database and is suitable for test and development scenarios. It reads the catalog data from the Git URLs that you specify in the tap-values.yaml file. This data is temporary, and any operation that causes the server pod in the tap-gui namespace to be re-created also causes this data to be rebuilt from the Git location. This can cause issues when you manually register entities by using the UI, because they exist only in the database and are lost when the in-memory database is rebuilt.
    • For production use cases, use a PostgreSQL database that exists outside the Tanzu Application Platform packaging. The PostgreSQL database persistently stores all the catalog data, both from the Git locations and from manual entity registrations in the UI. For more information, see Configure the Tanzu Developer Portal database.
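
The sketch below shows where both approaches live in tap-values.yaml: the catalog locations that the default in-memory database reads from, and the backend database block that switches the portal to an external PostgreSQL instance. The URL, host name, and credentials are placeholders; confirm the exact keys against your Tanzu Application Platform version.

```yaml
# tap-values.yaml (excerpt): catalog storage options (values are placeholders).
tap_gui:
  app_config:
    catalog:
      locations:
        - type: url
          target: https://GIT-CATALOG-URL/catalog-info.yaml  # read by the in-memory default
    backend:
      database:                    # omit this block to keep the in-memory default
        client: pg                 # external PostgreSQL for production use
        connection:
          host: PG-SQL-HOSTNAME
          port: 5432
          user: PG-SQL-USERNAME
          password: PG-SQL-PASSWORD
```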

Kubernetes cluster requirements

Installation requires a Kubernetes cluster running v1.26, v1.27, v1.28, or v1.29 on one of the following Kubernetes providers:

  • Azure Kubernetes Service.
  • Amazon Elastic Kubernetes Service.
    • containerd must be used as the Container Runtime Interface (CRI). Some versions of EKS default to Docker as the container runtime and must be changed to containerd.
    • EKS clusters on Kubernetes v1.23 and later require the Amazon EBS CSI Driver, because CSIMigrationAWS is enabled by default in those versions.
      • Users currently on EKS Kubernetes v1.22 must install the Amazon EBS CSI Driver before upgrading to Kubernetes v1.23 or later. For more information, see the AWS documentation.
    • AWS Fargate is not supported.
  • Google Kubernetes Engine.
    • GKE Autopilot clusters do not have the required features enabled.
    • GKE clusters that are set up in zonal mode might experience Kubernetes API errors when the GKE control plane is resized after traffic increases. To mitigate this, create a regional cluster with three control-plane nodes from the start.
  • Red Hat OpenShift Container Platform v4.13, v4.14, or v4.15, running on:
    • vSphere
    • Bare metal
  • Tanzu Kubernetes Grid (commonly called TKG) with Standalone Management Cluster. For more information, see the Tanzu Kubernetes Grid documentation.
  • vSphere with Tanzu v8.0 Update 1c or later, or v7.0 Update 3p or later.
  • Tanzu Kubernetes Grid Integrated Edition with vSphere (commonly called TKGi) v1.17 and later.
    • For TKGi with NSX, the total number of Kubernetes object labels and other tags created by both TKGi and Tanzu Application Platform can exceed the number allowed by NSX. To avoid this, create or update your network profile by setting the cni_configurations parameter extensions.ncp.k8s.label_filtering_regex_list, as sketched after this list. For more information, see the VMware Tanzu Kubernetes Grid Integrated Edition documentation.
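
As a sketch of the label-filtering change for TKGi with NSX, the JSON below shows one plausible shape for a network profile that sets the parameter path named above. The profile name, description, and regular expression are placeholders, and the exact nesting can differ by TKGi version; confirm the schema in the VMware Tanzu Kubernetes Grid Integrated Edition documentation before use.

```json
{
  "name": "tap-label-filtering",
  "description": "Limit which Kubernetes labels NCP copies to NSX tags",
  "parameters": {
    "cni_configurations": {
      "type": "ncp",
      "parameters": {
        "extensions": {
          "ncp": {
            "k8s": {
              "label_filtering_regex_list": ["REGEX-TO-FILTER"]
            }
          }
        }
      }
    }
  }
}
```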

For more information about the supported Kubernetes versions, see Kubernetes version support for Tanzu Application Platform.

Resource requirements

  • To deploy the Tanzu Application Platform full profile, your cluster must have at least:

    • 8 GB of RAM available per node to Tanzu Application Platform.
    • 16 vCPUs available across all nodes to Tanzu Application Platform.
    • 100 GB of disk space available per node.

    Important: Tanzu Application Platform requires a minimum of 100 GB per node of ephemeral storage. If you do not allocate at least this amount of ephemeral storage for kubelet on all cluster nodes, you receive the error "minDiskPerNode: some cluster nodes don't meet minimum disk space requirement of '100Gi'." For a configuration sketch, see the example at the end of this section. For more information about configuring the storage for a TKG cluster on Supervisor, see v1alpha3 Example: TKC with Default Storage and Node Volumes and v1beta1 Example: Custom Cluster Based on the Default ClusterClass.

  • To deploy the Tanzu Application Platform build, run, or iterate (shared) profiles, your cluster must have at least:

    • 8 GB of RAM available per node to Tanzu Application Platform.
    • 12 vCPUs available across all nodes to Tanzu Application Platform.
    • 100 GB of disk space available per node.
  • To deploy the Tanzu Application Platform view profile, your cluster must have at least:

    • 8 GB of RAM available per node to Tanzu Application Platform.
    • 8 vCPUs available across all nodes to Tanzu Application Platform.
    • 100 GB of disk space available per node.
  • For the full profile, or to use Supply Chain Security Tools - Store, your cluster must have a configured default StorageClass.

  • Pod security policies must be configured so that Tanzu Application Platform controller pods can run as root in the following optional configurations:

    • Tanzu Build Service, in which CustomStacks require root privileges. For more information, see Tanzu Build Service documentation.
    • Supply Chain, in which Kaniko usage requires root privileges to build containers.

    For more information about pod security policies, see Kubernetes documentation.
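
To illustrate the ephemeral storage note for the full profile, the manifest below is a minimal sketch of a vSphere with Tanzu cluster that gives each worker node a 100 GiB containerd volume, assuming the v1alpha3 TanzuKubernetesCluster API that the note's examples reference. The cluster name, VM classes, storage class, and Tanzu Kubernetes release are placeholders.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tap-full-profile               # placeholder name
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large       # placeholder VM class
      storageClass: STORAGE-CLASS-NAME # placeholder storage class
      tkr:
        reference:
          name: TKR-NAME               # placeholder Tanzu Kubernetes release
    nodePools:
      - name: workers
        replicas: 3
        vmClass: best-effort-2xlarge   # placeholder VM class
        storageClass: STORAGE-CLASS-NAME
        tkr:
          reference:
            name: TKR-NAME
        volumes:
          - name: containerd
            mountPath: /var/lib/containerd
            capacity:
              storage: 100Gi           # meets the 100 GB per-node ephemeral minimum
```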

Tools and CLI requirements

Installation requires:

  • The Kubernetes CLI (kubectl) v1.26, v1.27, v1.28, or v1.29, installed and authenticated with admin rights for your target cluster. See Install Tools in the Kubernetes documentation.

Next steps