TL;DR
Skip the theory? Go here to spin up your Humanitec Google Cloud Reference Architecture Implementation.
Follow this learning path to master your Internal Developer Platform.
Building an Internal Developer Platform (IDP) comes with many challenges. To give you a head start, we’ve created a set of reference architectures based on hundreds of real-world setups. These architectures, described in code, provide a starting point to build your own IDP within minutes, along with customization capabilities to ensure your platform meets the unique needs of your users (developers).
The initial version of this reference architecture was presented by Marco Marulli, Principal Delivery Lead II at McKinsey & Company, at PlatformCon 2023.
An Internal Developer Platform (IDP) is the sum of all the tech and tools that a platform engineering team binds together to pave golden paths for developers. IDPs lower cognitive load across the engineering organization and enable developer self-service, without abstracting away context from developers or making the underlying tech inaccessible. Well-designed IDPs follow a Platform as a Product approach, where a platform team builds, maintains, and continuously improves the IDP, following product management principles and best practices.
When McKinsey originally published the reference architecture, they proposed five planes that describe the different parts of a modern Internal Developer Platform (IDP).
The Developer Control Plane is the primary configuration layer and interaction point for the platform users. It harbors the following components:
- A Version Control System. GitHub is a prominent example, but this can be any system that contains two types of repositories:
  - Application Source Code
  - Platform Source Code, e.g. using Terraform
- Workload specifications. The reference architecture uses Score (a minimal example follows this list).
- A portal for developers to interact with. It can be the Humanitec Portal, but you might also use Backstage or any other portal on the market.
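Score describes a Workload's runtime requirements in a platform-agnostic way. Here is a minimal sketch of a Score file; the workload name, image, and resource names are illustrative and not part of the reference architecture:

```sh
# Write a minimal score.yaml (all names and the image are placeholders)
cat > score.yaml <<'EOF'
apiVersion: score.dev/v1b1
metadata:
  name: hello-world
containers:
  main:
    image: ghcr.io/example/hello-world:latest
resources:
  db:
    type: postgres
EOF
```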
The Integration and Delivery Plane is about building and storing the image, creating app and infra configs from the abstractions provided by the developers, and deploying the final state. It’s where the domains of developers and platform engineers meet.
This plane usually contains four different tools:
- A CI pipeline. It can be GitHub Actions or any CI tooling on the market.
- The image registry holding your container images. Again, this can be any registry on the market.
- An orchestrator, which in our example is the Humanitec Platform Orchestrator.
- The CD system, which can be the Platform Orchestrator’s deployment pipeline capabilities, an external system triggered by the Orchestrator using a webhook, or a setup in tandem with GitOps operators like ArgoCD.
The Monitoring and Logging Plane is not a focus of the reference architecture, as the integration of monitoring and logging systems varies greatly depending on the setup.
The Security Plane of the reference architecture is focused on the secrets management system. The secrets manager stores configuration information such as database passwords, API keys, or TLS certificates needed by an Application at runtime. It allows the Platform Orchestrator to reference the secrets and inject them into the Workloads dynamically. You can learn more about secrets management and integrations with other secrets managers here.
The reference architecture sample implementations use the secrets store attached to the Humanitec SaaS system.
The Resource Plane is where the actual infrastructure exists, including clusters, databases, storage, or DNS services. The configuration of the Resources is managed by the Platform Orchestrator, which dynamically creates app and infrastructure configurations with every deployment and creates, updates, or deletes dependent Resources as required.
This repo contains an implementation of part of the Humanitec Reference Architecture for an Internal Developer Platform, including Backstage as an optional portal solution.
This repo covers the base layer of the implementation for Google Cloud (GCP).
By default, the following will be provisioned:
- VPC
- GKE Autopilot Cluster
- Google Service Account to access the cluster
- Ingress NGINX in the cluster
- Resource Definitions in Humanitec for:
  - Kubernetes Cluster
Prerequisites

- A Humanitec account with the `Administrator` role in an Organization. Get a free trial if you are just starting.
- A GCP project
- gcloud CLI installed locally
- Terraform installed locally
Note: Using this Reference Architecture Implementation will incur costs for your GCP project.
It is recommended that you fully review the code before you run it to ensure you understand the impact of provisioning this infrastructure. Humanitec does not take responsibility for any costs incurred or damage caused when using the Reference Architecture Implementation.
This reference architecture implementation uses Terraform. You will need to do the following:
- Fork this GitHub repo, clone it to your local machine, and navigate to the root of the repository.
- Set the required input variables (see Required input variables).
- Ensure you are logged in with `gcloud` (see: `gcloud auth application-default login`). You will need to ensure your Google Cloud account has appropriate permissions on the project you wish to provision in. A minimal login sequence is sketched below.
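  For example (the project ID is a placeholder):

  ```sh
  # Authenticate Application Default Credentials, which Terraform's Google provider picks up
  gcloud auth application-default login

  # Point gcloud at the project you want to provision in (placeholder ID)
  gcloud config set project my-gcp-project
  ```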
- Set the `HUMANITEC_TOKEN` environment variable to an appropriate Humanitec API token with the `Administrator` role on the Humanitec Organization. For example:

  ```sh
  export HUMANITEC_TOKEN="my-humanitec-api-token"
  ```

- Run Terraform:

  ```sh
  terraform init
  terraform plan
  terraform apply
  ```
If you're recreating the reference architecture and run into a `WorkloadIdentityPool already exists` error, run the following commands to undelete the workload identity pool and its provider and re-import them into the Terraform state:

```sh
gcloud iam workload-identity-pools undelete humanitec-wif-pool --location=global
gcloud iam workload-identity-pools providers undelete humanitec-wif --workload-identity-pool=humanitec-wif-pool --location=global
terraform import module.base.module.credentials.google_iam_workload_identity_pool.pool humanitec-wif-pool
terraform import module.base.module.credentials.google_iam_workload_identity_pool_provider.pool_provider humanitec-wif-pool/humanitec-wif
```
Terraform reads variables by default from a file called `terraform.tfvars`. You can create your own file by renaming the `terraform.tfvars.example` file in the root of the repo and then filling in the missing values.
The following variables are required and so need to be set:
| Variable | Type | Description | Example |
|---|---|---|---|
| `project_id` | string | The GCP project to provision the infrastructure in. | `"my-gcp-project"` |
| `region` | string | The GCP region to provision the infrastructure in. | `"us-west1"` |
| `humanitec_org_id` | string | The ID of the Humanitec Organization the cluster should be associated with. | `"my-org"` |
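For example, a minimal `terraform.tfvars` using the example values from the table above (replace them with your own):

```sh
# Create terraform.tfvars with the three required inputs (values are the table's examples)
cat > terraform.tfvars <<'EOF'
project_id       = "my-gcp-project"
region           = "us-west1"
humanitec_org_id = "my-org"
EOF
```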
There are many other optional inputs that can be set. The full list is described in Inputs.
Check for the existence of key elements of the reference architecture. This covers only a subset of all elements. For a complete list of what was installed, review the Terraform code.
- Set the `HUMANITEC_ORG` environment variable to the ID of your Humanitec Organization (must be all lowercase):

  ```sh
  export HUMANITEC_ORG="my-humanitec-org"
  ```
- Verify the existence of the Resource Definition for the GKE cluster in your Humanitec Organization:

  ```sh
  curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/resources/defs/htc-ref-arch-cluster \
    --header "Authorization: Bearer ${HUMANITEC_TOKEN}" \
    | jq .id,.type
  ```

  This should output:

  ```
  "htc-ref-arch-cluster"
  "k8s-cluster"
  ```
- Verify the existence of the newly created GKE cluster:

  ```sh
  gcloud container clusters list --filter "name=htc-ref-arch-cluster"
  ```

  This should output cluster data like this:

  ```
  NAME                  LOCATION       MASTER_VERSION   MASTER_IP    MACHINE_TYPE    NODE_VERSION     NUM_NODES  STATUS
  htc-ref-arch-cluster  <your-region>  xx.xx.xx-gke.xx  xx.xx.xx.xx  n2d-standard-4  xx.xx.xx-gke.xx  3          RUNNING
  ```
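Optionally, you can also connect to the cluster and check that the bundled Ingress NGINX controller is running. The `ingress-nginx` namespace is the chart default and an assumption here:

```sh
# Fetch kubeconfig credentials for the new cluster (replace the region placeholder)
gcloud container clusters get-credentials htc-ref-arch-cluster --region <your-region>

# List the ingress controller pods; the namespace is assumed to be the chart default
kubectl get pods --namespace ingress-nginx
```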
Backstage requires a GitHub connection, which in turn needs:
- A GitHub organization and permission to create new repositories in it. Go to https://github.com/account/organizations/new to create a new org (the "Free" option is fine). Note: it has to be an organization; a free personal account is not sufficient.
- Create a classic GitHub personal access token with the `repo`, `workflow`, `delete_repo`, and `admin:org` scopes here.
- Set the `GITHUB_TOKEN` environment variable to your token.

  ```sh
  export GITHUB_TOKEN="my-github-token"
  ```
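  To sanity-check the token, you can inspect the scopes GitHub reports for it; classic tokens return their granted scopes in the `X-OAuth-Scopes` response header:

  ```sh
  # Call the GitHub API and print only the response headers,
  # then pick out the scopes granted to the token
  curl -s -o /dev/null -D - https://api.github.com/user \
    --header "Authorization: Bearer ${GITHUB_TOKEN}" \
    | grep -i x-oauth-scopes
  ```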
- Set the `GITHUB_ORG_ID` environment variable to your GitHub organization ID.

  ```sh
  export GITHUB_ORG_ID="my-github-org-id"
  ```

- Install the GitHub App for Backstage into your GitHub organization:
  - Run `docker run --rm -it -e GITHUB_ORG_ID -v $(pwd):/pwd -p 127.0.0.1:3000:3000 ghcr.io/humanitec-architecture/create-gh-app` (image source) and follow the instructions:
    - “All repositories” ~> Install
    - “Okay, […] was installed on the […] account.” ~> You can close the window and stop the server.
- Enable `with_backstage` inside your `terraform.tfvars` and configure the additional variables that are required for Backstage.
- Perform another `terraform apply`.
- Fetch the DNS entry of the Humanitec Application `backstage`, Environment `development` (one way to do this via the API is sketched after this list).
- Open the host in your browser.
- Click the "Create" button and scaffold your first application.
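For the "fetch the DNS entry" step, one possible approach is to query the active resources of the Environment through the Humanitec API. The endpoint, the `jq` filter, and the `host` output field are assumptions about the response shape; verify them against the Humanitec API documentation:

```sh
# Query active resources of the backstage app's development Environment
# and pick the host of the DNS resource (response fields are assumptions)
curl -s "https://api.humanitec.io/orgs/${HUMANITEC_ORG}/apps/backstage/envs/development/resources" \
  --header "Authorization: Bearer ${HUMANITEC_TOKEN}" \
  | jq -r '.[] | select(.type == "dns") | .resource.host'
```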
Once you are finished with the reference architecture, you can remove all provisioned infrastructure and the resource definitions created in Humanitec with the following:
- Delete all Humanitec Applications scaffolded using the Portal, if you used one, but not the `backstage` app itself.
- Ensure you are (still) logged in with `gcloud`.
- Ensure you still have the `HUMANITEC_TOKEN` environment variable set to an appropriate Humanitec API token with the `Administrator` role on the Humanitec Organization. You can verify this in the UI if you log in with an Administrator user, go to Resource Management, and check the "Usage" of each Resource Definition with the prefix set in `humanitec_prefix` (by default `htc-ref-arch-`). An API-based check is sketched below.
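  You can also list the Resource Definitions carrying the prefix via the API (the same endpoint family as the verification step above; the `jq` filter is illustrative):

  ```sh
  # List all Resource Definitions whose ID starts with the default prefix
  curl -s "https://api.humanitec.io/orgs/${HUMANITEC_ORG}/resources/defs" \
    --header "Authorization: Bearer ${HUMANITEC_TOKEN}" \
    | jq -r '.[] | select(.id | startswith("htc-ref-arch-")) | .id'
  ```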
- Run Terraform:

  ```sh
  terraform destroy
  ```
Requirements

| Name | Version |
|---|---|
| terraform | >= 1.3.0 |
| github | ~> 5.38 |
| google | ~> 5.1 |
| helm | ~> 2.12 |
| humanitec | ~> 1.0 |
| kubernetes | ~> 2.25 |
| random | ~> 3.5 |
Providers

| Name | Version |
|---|---|
| humanitec | ~> 1.0 |
Modules

| Name | Source | Version |
|---|---|---|
| base | ./modules/base | n/a |
| github | ./modules/github | n/a |
| github_app | github.com/humanitec-architecture/shared-terraform-modules | v2024-06-12//modules/github-app |
| portal_backstage | ./modules/portal-backstage | n/a |
Resources

| Name | Type |
|---|---|
| humanitec_service_user_token.deployer | resource |
| humanitec_user.deployer | resource |
Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| gar_repository_region | Region of the Google Artifact Registry repository. | string | n/a | yes |
| project_id | GCP Project ID to provision resources in. | string | n/a | yes |
| region | GCP Region to provision resources in. | string | n/a | yes |
| gar_repository_id | Google Artifact Registry repository ID. | string | "htc-ref-arch" | no |
| github_org_id | GitHub org ID (required for Backstage). | string | null | no |
| humanitec_org_id | Humanitec Organization ID. | string | null | no |
| humanitec_prefix | A prefix that will be attached to all IDs created in Humanitec. | string | "htc-ref-arch-" | no |
| with_backstage | Deploy Backstage. | bool | false | no |
Expand your knowledge by heading over to our learning path, and discover how to:
- Deploy the Humanitec reference architecture using a cloud provider of your choice
- Deploy and manage Applications using the Humanitec Platform Orchestrator and Score
- Provision additional Resources and connect to them
- Achieve standardization by design
- Deal with special scenarios
Master your Internal Developer Platform
- Introduction
- Design principles
- Structure and integration points
- Dynamic Configuration Management
- Tutorial: Set up the reference architecture in your cloud
- Theory on developer workflows
- Tutorial: Scaffold a new Workload and create staging and prod Environments
- Tutorial: Deploy an Amazon S3 Resource to production
- Tutorial: Perform daily developer activities (debug, rollback, diffs, logs)
- Tutorial: Deploy ephemeral Environments
- Theory on platform engineering workflows
- Resource management theory
- Tutorial: Provision a Redis cluster on AWS using Terraform
- Tutorial: Update Resource Definitions for related Applications