This project will help get a VMware VCF management domain up and running quickly on Equinix Metal

VMware VCF Cluster on Equinix Metal


terraform-equinix-metal-vcf is a minimal Terraform module that uses the Terraform provider for Equinix to provision and configure an Equinix Metal hardware environment with the prerequisites for a VCF 5.1.1 installation.

Network Architecture Diagram

(Diagram: target Metal architecture featuring Metal VRF for underlay routing.)

Usage

This project is supported by the user community. Equinix does not provide support for this project.

This project may be forked, cloned, or downloaded and modified as needed as the basis for your integrations and deployments.

This project may also be used as a Terraform module.

To use this module in a new project, create a file such as examples/vcf_management_domain/main.tf.
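A minimal sketch of such a file, consuming the module directly from GitHub. The input values shown are placeholders, and only a few of the required inputs are listed; see the Inputs section below for the full set:

```hcl
# examples/vcf_management_domain/main.tf -- minimal sketch; all values are placeholders
module "vcf" {
  source = "github.com/equinix-labs/terraform-equinix-metal-vcf"

  metal_auth_token = var.metal_auth_token # sensitive; prefer passing via environment
  metal_project_id = "00000000-0000-0000-0000-000000000000"
  metro            = "da"

  # ...plus the remaining required inputs documented in the Inputs section below
}
```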

Prerequisites

Please note the following requirement specifics from VCF Cloud Builder:

Physical Network

  • DHCP with an appropriate scope size (one IP per physical NIC per host) is configured for the ESXi Host Overlay (TEP) network. Providing a static IP pool is also supported, but some Day-N operations, such as stretching a cluster, are not allowed when static IPs are used.

    • Note: Equinix Metal doesn't currently provide DHCP or DHCP relay. If DHCP is desired for TEP port IP assignment, this must be provided by some other instance(s) with the Overlay VLANs assigned.

Physical Hardware and ESXi Host

  • Hardware and firmware (including HBA and BIOS) is configured for vSAN.

    • Note: The Equinix Support team can assist with ensuring that BIOS configuration is brought into compliance with vSAN recommendations should it be discovered that this is not already the case.
  • Physical hardware health status is 'healthy' without any errors.

    • Note: The Equinix Support team can assist with ensuring hardware is brought into a healthy state should it be discovered otherwise.

Supporting Infrastructure

  • DNS server for name resolution. The management IP of each host is registered and queryable as both a forward (hostname-to-IP) and reverse (IP-to-hostname) entry.
    • Note: While this module does configure the user-provided DNS server details, the provided DNS server IP must be reachable by the Metal instances through VRF Interconnection, and the provided DNS server is meant for demo or proof-of-concept purposes only.
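The forward/reverse DNS requirement above can be spot-checked from any Linux host that uses the deployment's DNS server. A minimal sketch (the commented hostname is the hypothetical example used later in this README; localhost is used here only as a stand-in):

```shell
#!/bin/sh
# Spot-check that forward (name -> IP) and reverse (IP -> name) DNS agree.
# Replace host_name with each ESXi management hostname in your deployment,
# e.g. sfo01-m01-esx01.sfo.rainpole.io
host_name="localhost"

fwd_ip=$(getent ahostsv4 "$host_name" | awk 'NR==1 {print $1}')   # forward lookup
rev_name=$(getent hosts "$fwd_ip" | awk '{print $2}')             # reverse lookup
echo "forward: $host_name -> $fwd_ip"
echo "reverse: $fwd_ip -> $rev_name"
```

Cloud Builder's validation performs equivalent checks, so fixing mismatches before bring-up saves a failed validation round.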

This module provides the following VCF infrastructure configuration required by Cloud Builder:

Network

  • Top of Rack switches are configured. Each host and NIC in the management domain must have the same network configuration. No ethernet link aggregation technology (LAG/VPC/LACP) is being used.

  • IP ranges, subnet mask, and a reliable L3 (default) gateway for each VLAN are provided.

  • Jumbo Frames (MTU 9000) are recommended on all VLANs. At a minimum, MTU of 1600 is required on the NSX Host Overlay VLAN and must be enabled end to end through your environment.

  • VLANs for management, vMotion, vSAN and NSX Host Overlay networks are created and tagged to all host ports. Each VLAN is 802.1q tagged.

  • Management IP is VLAN backed and configured on the host. vMotion & vSAN IP ranges are configured during the bring-up process.
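The end-to-end MTU requirement above can be verified by pinging with the don't-fragment bit set, so the ping fails if any hop would need to fragment. A sketch using Linux iputils syntax (the loopback target is a placeholder; point it at a host on the NSX Host Overlay VLAN):

```shell
# 1572 = 1600 bytes minus 28 bytes of IP + ICMP headers; use 8972 to test MTU 9000.
# -M do sets the don't-fragment bit; the ping fails if any hop must fragment.
ping -M do -s 1572 -c 1 127.0.0.1   # replace 127.0.0.1 with an Overlay VLAN host
```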

Hardware and ESXi Hosts

  • All servers are vSAN compliant and certified on the VMware Hardware Compatibility Guide, including but not limited to BIOS, HBA, SSD, HDD, etc.

  • Identical hardware (CPU, Memory, NICs, SSD/HDD, etc.) within the management cluster is highly recommended. Refer to vSAN documentation for minimal configuration.

  • One physical NIC is configured and connected to the vSphere Standard switch. The second physical NIC is not configured.

  • ESXi is freshly installed on each host. The ESXi version matches the build listed in the Cloud Foundation Bill of Materials.

  • All hosts are configured with a central time server (NTP). The NTP service policy is set to 'Start and stop with host'.

  • Each ESXi host is running a non-expired license - initial evaluation license is accepted. The bring-up process will configure the permanent license provided.

Other Infrastructure

  • All hosts are configured with a DNS server for name resolution.

Module Customization Overview

Download the Cloud Builder Deployment Parameter Guide spreadsheet from Broadcom:

Broadcom Support Site - VCF 5.1.1 Downloads

Clone the repo

  • git clone git@github.com:equinix-labs/terraform-equinix-metal-vcf.git

Copy and modify the tfvars file

Customize the tfvars file and keep its values aligned with the Cloud Builder Deployment Parameter Guide spreadsheet:

  • Copy the terraform.tfvars.example file to terraform.tfvars

    • Fill in Metal API Key, Project ID, and deployment Metro variables on lines 2, 3, and 6 respectively.

      • Note: there are more secure methods of providing the API key, but those are out of scope for this README
    • If interconnecting VRF Fabric VCs to BGP neighbor(s), fill in the eBGP peering details for the VRF on lines 8-21 of the tfvars file.

    • Fill in the same values you used in the vcf-ems-deployment-parameter_X.X.X spreadsheet.

      • Note: the variable descriptions in the variables.tf file indicate the spreadsheet cell that each value should align with. Default values in the terraform.tfvars.example file align with the defaults in the Deployment Parameter spreadsheet.

      • For more about the "Cloud Builder Deployment Parameter Guide" spreadsheet file and its configuration, see About the Deployment Parameter Workbook on docs.vmware.com (requires a login and entitlements).

Generating a custom root password

To generate a password hash of your desired ESXi root password, run the mkpasswd command on a Linux system with the whois package installed:

mkpasswd --method=SHA-512

You'll be prompted to enter the password string you wish to hash, then press Enter.


Alternatively, you can use mkpasswd.net to generate a password hash. Be sure to select crypt-sha512 in the Type dropdown.


The output is the string to use for the esxi_password variable (line 143 of the terraform.tfvars.example file).
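If mkpasswd is not available, OpenSSL (1.1.1 or later) can produce the same SHA-512 crypt format; a minimal sketch, with a placeholder password:

```shell
# Generate a SHA-512 crypt hash suitable for the esxi_password variable.
# Output starts with $6$ and embeds a random salt, so it differs on every run.
openssl passwd -6 'example-password'   # replace with your desired root password
```

Note that passing the password as a command-line argument exposes it to the shell history; omit it and openssl will prompt interactively instead.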

Terraform Deployment Workflow

Deployment

  • Deploy this Terraform module by running terraform init -upgrade and terraform apply.

  • Note the following values that you'll need later:

    • The public IP address of the bastion host: terraform output -raw bastion_public_ip
    • The private IP address of the bastion host: terraform output -raw bastion_private_ip
    • The public IP address of the management host: terraform output -raw windows_management_rdp_address
    • The password for the management host: terraform output -raw windows_management_password
    • The DNS address of the ESX01 host: terraform output -raw esx01_vmk0_address
  • The following steps should be run on the management host.

    • RDP to the public IP of the management host you noted earlier.

      • Username: SYSTEM\Admin
      • Password: Use the password you noted earlier. (terraform output -raw windows_management_password)
    • Download the Cloud Builder OVA from VMware

    • Log in to one of the ESXi hosts

      • Use the DNS address of the ESX01 host you noted earlier. (Our example uses: https://sfo01-m01-esx01.sfo.rainpole.io)
      • Username: root
      • Password: the custom root password you used to generate the hash earlier.
    • Deploy the Cloud Builder OVA to one of the ESXi devices provisioned by Terraform; we recommend following VMware's documentation for this: https://docs.vmware.com/en/VMware-Cloud-Foundation/5.1/vcf-deploy/GUID-78EEF782-CF21-4228-97E0-37B8D2165B81.html

      • You will need to use the bastion host private IP you noted earlier as the DNS and NTP server during the OVA deployment.
    • Log in to Cloud Builder at the address and username/password specified during OVA deployment.

    • Upload the vcf-ems-deployment-parameter spreadsheet to Cloud Builder when prompted.

    • Fix any issues Cloud Builder finds.

    • Push the deploy button and wait while VCF deploys.

      • Note: This process can take more than an hour to complete
      • If the deploy fails, we recommend deleting the ESXi devices and re-deploying. Depending on where the process failed, Cloud Builder can leave the ESXi devices' configuration in a state that requires significant manual effort to clean up before a subsequent attempt would pass the Cloud Builder pre-checks. Redeploying may be faster even if Cloud Builder must be re-deployed as well.
    • Create interconnections to the VRF and NSX-T Edge uplinks by logging into Equinix Fabric and redeeming the Fabric Service Tokens generated by this module.

Known issues

  • terraform destroy can time out on the "bastion" Metal Gateway resource as well as the Metal VLANs.
    • This is a known issue caused by the null resource used to configure the BGP Dynamic Neighbor range on this Metal Gateway via a curl command.
    • If the destroy operation times out, simply run terraform destroy again to successfully remove these resources.
      • Note: the "bastion" Metal Gateway resource is not required for a production deployment, only for PoCs where the "bastion" device is required to fulfill Cloud Builder prerequisites. Consider removing the "bastion" device, gateway, and VLAN from the deployment if these prerequisites are satisfied by services accessible from the VRF Interconnection uplinks.

Cloud Builder deployment tips

Example ovftool invocation, run from PowerShell on the Windows management host (passwords and IPs are the sample values used throughout this deployment; the backtick is PowerShell's line-continuation character):

.\ovftool.exe --name=cloudbuilder --X:injectOvfEnv --acceptAllEulas --noSSLVerify `
  --diskMode=thin --datastore="datastore1" --net:'Network 1=VM Network' --powerOn `
  --prop:guestinfo.ROOT_PASSWORD=VMwareDemo123! --prop:guestinfo.ADMIN_PASSWORD=VMwareDemo123! `
  --prop:guestinfo.ip0=172.16.10.2 --prop:guestinfo.netmask0=255.255.255.0 `
  --prop:guestinfo.gateway=172.16.10.1 --prop:guestinfo.hostname=cloudbuilder `
  --prop:guestinfo.DNS=172.16.9.2 --prop:guestinfo.ntp=172.16.9.2 `
  .\VMware-Cloud-Builder-5.1.1.0-23480823_OVF10.ova vi://root:VMwareDemo123!@172.16.11.101

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.5 |
| equinix | >= 1.35 |
| random | >= 3 |

Providers

| Name | Version |
|------|---------|
| equinix | >= 1.35 |
| random | >= 3 |

Modules

| Name | Source | Version |
|------|--------|---------|
| metal_vrf | ./modules/metal_vrf_w_interconnection_service_tokens | n/a |
| metal_vrf_gateways_w_dynamic_neighbor | ./modules/metal_vrf_gateway_w_dynamic_neighbor | n/a |
| ssh | ./modules/ssh/ | n/a |
| vcf_metal_devices | ./modules/vcf_metal_device | n/a |

Resources

| Name | Type |
|------|------|
| equinix_metal_device.bastion | resource |
| equinix_metal_device.management | resource |
| equinix_metal_port.bastion_bond0 | resource |
| equinix_metal_port.management_bond0 | resource |
| random_password.management | resource |

Inputs

Name Description Type Default Required
bastion_ip IP address for the Bastion host string n/a yes
cloudbuilder_ip IP address for the Cloudbuilder appliance string n/a yes
esxi_devices Map containing individual ESXi device details for each Metal Instance
map(object({
name = string # Short form hostname of system (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > I6:L6)
mgmt_ip = string # Management Network IP address for VMK0 (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > I7:L7)
reservation_id = optional(string, "") # Hardware reservation IDs to use for the VCF nodes. Each item can be a reservation UUID or next-available.
}))
n/a yes
esxi_management_gateway Management Network Gateway for ESXi default TCP/IP Stack (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > F8) string n/a yes
esxi_management_subnet Management Network Subnet Mask for VMK0 (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > E8) string n/a yes
esxi_network_space Overall Network space for the VCF project string n/a yes
esxi_password mkpasswd Pre-hashed root password to be set for ESXi instances (Hash the password from vcf-ems-deployment-parameter.xlsx > Credentials Sheet > C8 using 'mkpasswd --method=SHA-512' from Linux whois package) string n/a yes
esxi_plan Slug for target hardware plan type. The only officially supported server plan for ESXi/VCF is the 'n3.xlarge.opt-m4s2' https://deploy.equinix.com/product/servers/n3-xlarge-opt-m4s2/ string n/a yes
esxi_version_slug Slug for ESXi OS version to be deployed on Metal Instances https://github.com/equinixmetal-images/changelog/blob/main/vmware-esxi/x86_64/8.md string n/a yes
metal_auth_token API Token for Equinix Metal API interaction https://deploy.equinix.com/developers/docs/metal/identity-access-management/api-keys/ string n/a yes
metal_project_id Equinix Metal Project UUID, can be found in the General Tab of the Organization Settings https://deploy.equinix.com/developers/docs/metal/identity-access-management/organizations/#organization-settings-and-roles string n/a yes
metal_vrf_asn ASN to be used for Metal VRF https://deploy.equinix.com/developers/docs/metal/networking/vrf/ string n/a yes
metro Equinix Metal Metro where Metal resources are going to be deployed https://deploy.equinix.com/developers/docs/metal/locations/metros/#metros-quick-reference string n/a yes
nsx_devices Map containing NSX Cluster host and IP details
map(object({
name = string # Short form hostname of system (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > I6:L6)
ip = string # Management Network IP address for VMK0 (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > I7:L7)
}))
n/a yes
sddc_manager_ip IP address for the SDDC Manager string n/a yes
sddc_manager_name Hostname for the SDDC Manager string n/a yes
vcenter_ip IP address for the vCenter Server string n/a yes
vcenter_name Hostname for the vCenter Server string n/a yes
vcf_vrf_networks Map of Objects representing configuration specifics for various network segments required for VCF Management and Underlay Networking
map(object({
vlan_id = string # (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > C7:C10) 802.1q VLAN number
vlan_name = string # (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > D7:D10) Preferred Description of Metal VLAN
subnet = string # (vcf-ems-deployment-parameter.xlsx > Hosts and Networks Sheet > E7:E10) CIDR Subnet to be used within this Metal VLAN
enable_dyn_nei = optional(bool, false) # Whether or not to configure BGP Dynamic Neighbor functionality on the gateway; only use for NSX-T Edge uplink VLANs if NSX-T will peer with the Metal VRF
dyn_nei_range = optional(string, "") # CIDR Range of IPs that the Metal VRF should expect BGP Peering from
dyn_nei_asn = optional(string, "") # ASN that the Metal VRF should expect BGP Peering from
}))
n/a yes
vrf_bgp_customer_peer_ip_pri IP of BGP Neighbor on Primary Interconnection that Metal VRF should expect to peer with string n/a yes
vrf_bgp_customer_peer_ip_sec IP of BGP Neighbor on Secondary Interconnection that Metal VRF should expect to peer with string n/a yes
vrf_bgp_md5_pri MD5 Shared Password for BGP session authentication string n/a yes
vrf_bgp_md5_sec MD5 Shared Password for BGP session authentication string n/a yes
vrf_bgp_metal_peer_ip_pri IP of Metal VRF on Primary Interconnection for peering with BGP Neighbor string n/a yes
vrf_bgp_metal_peer_ip_sec IP of Metal VRF on Secondary Interconnection for peering with BGP Neighbor string n/a yes
vrf_peer_asn ASN that will establish BGP Peering with the Metal VRF across the interconnections string n/a yes
vrf_peer_subnet Subnet used for both Metal VRF interconnections (/29 or larger) string n/a yes
vrf_peer_subnet_pri Subnet used for point to point Metal VRF BGP Neighbor connection across the Primary interconnection string n/a yes
vrf_peer_subnet_sec Subnet used for point to point Metal VRF BGP Neighbor connection across the Secondary interconnection string n/a yes
windows_management_ip IP address for the Windows management host string n/a yes
zone_name DNS Zone name to use for deployment (vcf-ems-deployment-parameter.xlsx > Deploy Parameters Sheet > J6:K6) string n/a yes
bastion_name Hostname for the Bastion host string "bastion" no
bastion_plan Which plan to use for the ubuntu based bastion host. string "m3.small.x86" no
cloudbuilder_name Hostname for the Cloudbuilder appliance string "cloudbuilder" no
windows_management_name Hostname for the Windows management host string "management" no
windows_management_plan Which plan to use for the windows management host. string "m3.small.x86" no

Outputs

| Name | Description |
|------|-------------|
| bastion_public_ip | The public IP address of the bastion host. Used for troubleshooting. |
| cloudbuilder_default_gateway | Cloudbuilder Default Gateway to use during OVA deployment. |
| cloudbuilder_hostname | Cloudbuilder Hostname to use during OVA deployment. |
| cloudbuilder_ip | Cloudbuilder IP to use during OVA deployment. |
| cloudbuilder_subnet_mask | Cloudbuilder Subnet Mask to use during OVA deployment. |
| cloudbuilder_web_address | Cloudbuilder Web Address |
| dns_domain_name | DNS Domain Name to use during OVA deployment. |
| dns_domain_search_paths | DNS Domain Search Paths to use during OVA deployment. |
| dns_server | DNS Server to use during OVA deployment. |
| esx01_web_address | The web address of the first ESXi host to use in a browser on the management host. |
| ntp_server | NTP Server to use during OVA deployment. |
| ssh_private_key | Path to the SSH private key to use to connect to bastion and management hosts over SSH. |
| windows_management_password | Randomly generated password used for the Admin accounts on the management host. |
| windows_management_rdp_address | The public IP address of the Windows management host. |

Contributing

If you would like to contribute to this module, see the CONTRIBUTING page.

License

Apache License, Version 2.0. See LICENSE.
