multi-arch-builders/tofu: Add PowerVS configuration #933

Open · wants to merge 1 commit into main
20 changes: 16 additions & 4 deletions multi-arch-builders/coreos-ppc64le-builder.bu
@@ -6,10 +6,6 @@
#
Member
I guess we need to delete the "Merge in the builder-common.ign Ignition file" line above (and should probably also remove it from the multi-arch-builders/coreos-aarch64-builder.bu file too), which should have been done when that file was changed originally.

Member Author
Yes, multi-arch-builders/coreos-aarch64-builder.bu is already updated; I'm just making the same change here.

Member

still need to delete line 3 above

Member
and also line 3 from multi-arch-builders/coreos-aarch64-builder.bu if you don't mind

variant: fcos
version: 1.4.0
ignition:
config:
merge:
- local: builder-common.ign
passwd:
users:
- name: builder
@@ -23,3 +19,19 @@ storage:
overwrite: true
contents:
inline: coreos-ppc64le-builder
# This is a workaround for the IP/route issue in PowerVS
# See the ppc64le README for more details
- path: /etc/NetworkManager/system-connections/env2.nmconnection
mode: 0600
contents:
inline: |
[connection]
id=en
type=ethernet
interface-name=env2
[ipv4]
address1=10.130.1.149/25,10.130.1.129
dns=127.0.0.53;
dns-search=
may-fail=false
method=manual
ravanelli marked this conversation as resolved.
58 changes: 58 additions & 0 deletions multi-arch-builders/provisioning/ppc64le/README.md
@@ -0,0 +1,58 @@
# OpenTofu

OpenTofu, a Terraform fork, is an open-source infrastructure as code (IaC) tool
that lets you define both cloud and on-prem resources in human-readable configuration
files that you can version, reuse, and share.

To proceed with the next steps, ensure that `tofu` is installed on your system.
See: https://github.com/opentofu/opentofu/releases

## Before starting

### PowerVS credentials

- Ensure that you have access to our account (a quick sanity check is sketched below).
- Verify that the Fedora CoreOS image has been uploaded to the designated bucket.
- TODO: Add bucket creation and image upload to tofu
- See the documentation on how to upload the image manually:
https://cloud.ibm.com/docs/power-iaas?topic=power-iaas-deploy-custom-image
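
As a quick sanity check of account access (a sketch; assumes the `ibmcloud` CLI is installed and
you are logged in to the shared account):

```bash
# List resource service instances; the GUID column is the value used for
# var.power_instance_id (TF_VAR_power_instance_id).
ibmcloud resource service-instances --long
```
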
### PowerVS issues

- PowerVS seems to encounter a problem in creating the default local IP with the default route,
  resulting in issues SSHing to the server post-boot.
  To mitigate this, we've incorporated networking configuration into the Ignition file. However,
  we still hit one issue during the Splunk Butane configuration, where the CA certificate couldn't be
  downloaded during provisioning. If you encounter this issue, comment out the Red Hat CA download step
  and perform it manually on the machine after provisioning. A quick way to verify the networking
  workaround after boot is shown below.

- Additionally, it's important to note that PowerVS lacks a user data field in the web interface for
  providing the Ignition config, so it is passed through tofu via `pi_user_data` instead.
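
Once the builder is reachable, the networking workaround can be sanity-checked over SSH (a sketch;
assumes `nmcli` and `iproute2` on the host, as on Fedora CoreOS):

```bash
# Confirm the static profile from the Ignition config is applied to env2
# and that a default route exists.
nmcli device show env2
ip route show default
```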

### TF vars via environment variables

If you'd like to override the target distro (defaults to `fcos`) you
can:

```bash
export TF_VAR_distro=rhcos
```

If you are deploying RHCOS you'll need to define variables for the Splunk configuration:

```bash
export TF_VAR_splunk_hostname=...
export TF_VAR_splunk_sidecar_repo=...
export TF_VAR_itpaas_splunk_repo=...
```

## Running tofu
```bash
# To begin using it, run 'init' within this directory.
tofu init
# If you don't intend to make any changes to the code, simply apply it:
tofu apply
# If you changed the code or its modules/plugins, upgrade first:
tofu init -upgrade
# To destroy the builder instance, run:
tofu destroy -target ibm_pi_instance.pvminstance
```
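
Before applying, a dry run is a cheap way to confirm the variables and credentials are picked up
(a sketch; `plan` makes no changes to the PowerVS instance):

```bash
# Show what `tofu apply` would create or change, without doing it.
tofu plan
```
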
101 changes: 101 additions & 0 deletions multi-arch-builders/provisioning/ppc64le/main.tf
@@ -0,0 +1,101 @@
data "ibm_pi_network" "network" {
pi_network_name = var.network
pi_cloud_instance_id = var.power_instance_id
}

data "ibm_pi_image" "power_images" {
pi_image_name = var.image_name
pi_cloud_instance_id = var.power_instance_id
}

provider "ct" {}

variable "project" {
type = string
default = "coreos-ppc64le-builder"
}

# Which distro are we deploying a builder for? Override the
# default by setting the env var: TF_VAR_distro=rhcos
variable "distro" {
type = string
default = "fcos"
}

check "health_check_distro" {
assert {
condition = anytrue([
var.distro == "fcos",
var.distro == "rhcos"
])
error_message = "Distro must be 'fcos' or 'rhcos'"
}
}

# Variables used for splunk deployment, which is only
# for RHCOS builders. Define them in the environment with:
# export TF_VAR_splunk_hostname=...
# export TF_VAR_splunk_sidecar_repo=...
# export TF_VAR_itpaas_splunk_repo=...
variable "splunk_hostname" {
type = string
default = ""
}
variable "splunk_sidecar_repo" {
type = string
default = ""
}
variable "itpaas_splunk_repo" {
type = string
default = ""
}

# Check that if we are deploying a RHCOS builder the splunk
# variables have been defined.
check "health_check_rhcos_splunk_vars" {
assert {
condition = !(var.distro == "rhcos" && anytrue([
var.splunk_hostname == "",
var.splunk_sidecar_repo == "",
var.itpaas_splunk_repo == ""
]))
error_message = "Must define splunk env vars for RCHOS builders"
}
}

locals {
fcos_snippets = [
file("../../coreos-ppc64le-builder.bu"),
]
rhcos_snippets = [
file("../../coreos-ppc64le-builder.bu"),
templatefile("../../builder-splunk.bu", {
SPLUNK_HOSTNAME = var.splunk_hostname
SPLUNK_SIDECAR_REPO = var.splunk_sidecar_repo
ITPAAS_SPLUNK_REPO = var.itpaas_splunk_repo
})
]
}
data "ct_config" "butane" {
strict = true
content = file("../../builder-common.bu")
snippets = var.distro == "rhcos" ? local.rhcos_snippets : local.fcos_snippets
}



resource "ibm_pi_instance" "pvminstance" {
pi_memory = var.memory
pi_processors = var.processors
pi_instance_name = "${var.project}-${formatdate("YYYYMMDD", timestamp())}"
pi_proc_type = var.proc_type
pi_image_id = data.ibm_pi_image.power_images.id
pi_network {
network_id = data.ibm_pi_network.network.id
}
pi_key_pair_name = var.ssh_key_name
pi_sys_type = var.system_type
pi_cloud_instance_id = var.power_instance_id
pi_user_data = base64encode(data.ct_config.butane.rendered)

}
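
The two `check` blocks above validate the distro and the Splunk variables; a minimal RHCOS
invocation that satisfies both would look like this (a sketch; values elided as in the README):

```bash
export TF_VAR_distro=rhcos
export TF_VAR_splunk_hostname=...        # checked by health_check_rhcos_splunk_vars
export TF_VAR_splunk_sidecar_repo=...
export TF_VAR_itpaas_splunk_repo=...
tofu apply
```
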
20 changes: 20 additions & 0 deletions multi-arch-builders/provisioning/ppc64le/outputs.tf
@@ -0,0 +1,20 @@

output "status" {
value = ibm_pi_instance.pvminstance.status
}

output "min_proc" {
value = ibm_pi_instance.pvminstance.min_processors
}

output "health_status" {
value = ibm_pi_instance.pvminstance.health_status
}

output "addresses" {
value = ibm_pi_instance.pvminstance.pi_network
}

output "progress" {
value = ibm_pi_instance.pvminstance.pi_progress
}
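
After `tofu apply` completes, these outputs are the quickest way to check the instance state and
the attached network/IP details (a usage sketch):

```bash
# Read individual outputs defined above.
tofu output status
tofu output addresses
```
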
18 changes: 18 additions & 0 deletions multi-arch-builders/provisioning/ppc64le/provider.tf
@@ -0,0 +1,18 @@
terraform {
required_providers {
ct = {
source = "poseidon/ct"
version = "0.13.0"
}
ibm = {
source = "IBM-Cloud/ibm"
version = ">= 1.12.0"
}
}
}

provider "ibm" {
ibmcloud_api_key = var.ibmcloud_api_key
region = "us-south"
zone = var.ibmcloud_zone
}
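
The provider reads `var.ibmcloud_api_key`; one way to supply it without writing it to disk is the
same `TF_VAR_` mechanism used for the other variables (a sketch; the key value is elided):

```bash
export TF_VAR_ibmcloud_api_key=...
```
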
87 changes: 87 additions & 0 deletions multi-arch-builders/provisioning/ppc64le/variables.tf
@@ -0,0 +1,87 @@

variable "ibmcloud_api_key" {
description = "Denotes the IBM Cloud API key to use"
default = ""
}

variable "ibmcloud_region" {
description = "Denotes which IBM Cloud region to connect to"
default = "us-south"
}

# Inserted for multi-zone regions such as Frankfurt

variable "ibmcloud_zone" {
description = "Denotes which IBM Cloud zone to connect to - .i.e: eu-de-1 eu-de-2 us-south etc."
default = "us-south"
}

dustymabe marked this conversation as resolved.
# Got the ID from the `ibmcloud resource service-instances --long` command; refer to the GUID field for the instance
variable "power_instance_id" {
description = "Power Virtual Server instance ID associated with your IBM Cloud account (note that this is NOT the API key)"
default = "556eb201-32bf-4ae2-8ab5-dfd7bbe97789"
}


# PowerVS costs are high; check the price before adding
# more processors and memory. These numbers may change
# due to PowerVS availability.
Comment on lines +26 to +28
Member

FTR I think our fcos ppc64le server only has 32G of memory, so we could probably go lower here if the costs are really high.


variable "memory" {
description = "Amount of memory (GB) to be allocated to the VM"
default = "50"
}
Comment on lines +30 to +33
Member

Is this just any number we want? For PowerVS do they not have fixed instance sizes?

Member Author
@ravanelli Nov 27, 2023
same case for below


variable "processors" {
description = "Number of virtual processors to allocate to the VM"
default = "15"
}
Comment on lines +35 to +38
Member

15 seems like a random number - would have expected a power of 2 (like 8 or 16) here.

Member Author

Each server model has a maximum allowable number, which may vary depending on utilization (interesting).

On the e880, the maximum core availability is now 12.04, reduced from the 15 I originally added here.
The e980 now has 31.5 available. I will add a note on how to check this beforehand, so we can set the maximum we can use.


# The s922 model is the cheapest model
Member

Optional: for system_type and proc_type it might be useful to add a hyperlink to a place where there is a description of the options.

variable "system_type" {
description = "Type of system on which the VM should be created - s922/e880/e980"
default = "s922"
}

variable "proc_type" {
description = "Processor type for the LPAR - shared/dedicated"
default = "capped"
Comment on lines +47 to +48
Member

The description mentions shared/dedicated as options but we have set it to capped?

}

variable "ssh_key_name" {
description = "SSH key name in IBM Cloud to be used for SSH logins"
default = ""
}

variable "shareable" {
description = "Should the data volume be shared or not - true/false"
default = "true"
ravanelli marked this conversation as resolved.
}

# TODO: We need to add the network creation via tofu for fcos
# This config is for rhcos only
variable "network" {
description = "List of networks that should be attached to the VM - Create this network before running terraform"
Member

Any instructions on how to create that network?

Member Author

There are internal docs, since it is for VPC. Not sure it is worth adding a general one here; maybe we won't need to create it without using VPC.

Member

if we ever wanted to do this for FCOS we'd need networks created? Can that be defined in tofu configuration? Let's add a comment somewhere to mention it and that it's a TODO item.

default = "redhat-internal-rhcos"
}


variable "image_name" {
description = "Name of the image from which the VM should be deployed - IBM image name"
default = "fedora-coreos-39-2023110110"
}

variable "replication_policy" {
description = "Replication policy of the VM"
default = "none"
}

variable "replication_scheme" {
description = "Replication scheme for the VM"
default = "suffix"
}

variable "replicants" {
description = "Number of VM instances to deploy"
default = "1"
}
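
Per the sizing discussion in the review comments above, `memory` and `processors` can also be
overridden per run without editing this file (a sketch; the numbers are illustrative, not
recommendations):

```bash
# Smaller footprint for an FCOS builder; check current PowerVS pricing and the
# chosen model's maximum core availability before picking values.
export TF_VAR_memory=32
export TF_VAR_processors=8
```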