diff --git a/website/src/content/docs/clustertool/csi/topolvm.md b/website/src/content/docs/clustertool/csi/topolvm.md
index d0c27aa1c801..44a17cfae13b 100644
--- a/website/src/content/docs/clustertool/csi/topolvm.md
+++ b/website/src/content/docs/clustertool/csi/topolvm.md
@@ -128,12 +128,9 @@ The following example can be used and adjust where necesarry.
 
 ## Snapshots
 
 TBD
 
-## Optional: Non-ClusterTool only
-The following steps are already included in clustertool by default.
-
-### Kernel Modules
+## Kernel Modules
 
 Add these two kernel modules. Use modprobe for typical linux installs or add them to your talconfig.yaml if using TalHelper or ClusterTool as shown below:
 ```yaml
@@ -168,7 +165,7 @@ Create a Thin Pool
 lvcreate -l 100%FREE --chunksize 256 -T -A n -n topolvm_thin topolvm_vg
 ```
 
-### Create Privilaged Namespace
+## Create Privileged Namespace
 
 Create the namespace with these labels:
 ```yaml
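For orientation, since the hunk above ends at the opening fence and truncates the label list: a privileged namespace manifest of this kind is typically built from the standard Kubernetes Pod Security Admission labels. A minimal sketch, assuming those labels and TopoLVM's conventional `topolvm-system` namespace name (defer to the full patched file for the authoritative values):

```yaml
# Sketch only: assumes the standard Pod Security Admission labels and the
# conventional TopoLVM namespace name; the exact values live in the patched
# file, which this hunk truncates.
apiVersion: v1
kind: Namespace
metadata:
  name: topolvm-system
  labels:
    # Pod Security Admission: allow privileged pods in this namespace,
    # which the TopoLVM node components need for device access.
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```

Applying a manifest like this with `kubectl apply -f namespace.yaml` before installing the CSI chart would be the usual order.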
diff --git a/website/src/content/docs/clustertool/virtual-machines/systemrequirements.md b/website/src/content/docs/clustertool/virtual-machines/systemrequirements.md
index deeec8bf2afc..dd2a736af604 100644
--- a/website/src/content/docs/clustertool/virtual-machines/systemrequirements.md
+++ b/website/src/content/docs/clustertool/virtual-machines/systemrequirements.md
@@ -111,10 +111,9 @@ These include, but are not limited to
 
 ### Storage Recommendations
 
-The file created on your host's storage device to be used by the VM is almost always a single continuous file. So whilst an SSD will obviously greatly improve the speed at which this file can be accessed by the VM, a HDD is adequate.
-
-Additionally, the storage backend we are using on Talos **requires** the presence of two separate "disks" to be presented to the Talos VM. As noted above however, these are [sparsely allocated](https://en.wikipedia.org/wiki/Sparse_file). This means that whilst you'd want to have the entirety of the space able to be occupied available, it will not all be used immediately.
+An SSD, an HDD+METADATA ZFS pool, and/or disabling sync-writes will greatly improve performance and should be considered a requirement.
 
+Sparse allocation is advised.
 For example: A 512GB "sparsely allocated" disk for the Talos VM, housed on a 1TB disk in the host system, will not immediately/always take up 512GB of space. 512GB is the maximum amount of space the file *could* occupy if needed.
 
 ### GPU Recommendations
 
@@ -122,21 +121,3 @@ For example: A 512GB "sparsely allocated" disk for the Talos VM, housed on a 1TB
 
 Unfortunately, AMD (i)GPUs continue to be rather lacklustre in the Kubernetes world. AMD GPUs are *supposed* to work under Kubernetes, but suffer limitations such as only being able to be used by 1 app/chart at a time, which makes them hard to recommend. Nvidia, and to some extent Intel, GPUs by comparison will almost always work "out of the box".
-
-### SCALE VM Host Caveats
-
-Users running the Talos VM atop a TrueNAS SCALE host system that want to also take advantage of GPU passthrough to the VM will require a **minimum** of 2 *different* GPUs to be present in the system.
-
-The GPU desired to be passed through to the Talos VM will need to be [isolated](/clustertool/virtual-machines/truenas-scale/#gpu-isolation) within SCALE.
-
-This could include any of the following combinations:
-
-**GPU1:** Dedicated Nvidia GPU isolated within SCALE for VM passthrough
-
-**GPU2:** Intel/AMD iGPU
-
-or
-
-**GPU1:** Motherboard IPMI GPU
-
-**GPU2:** Intel iGPU or dedicated Nvidia GPU isolated within SCALE for VM passthrough
diff --git a/website/src/content/docs/clustertool/virtual-machines/truenas-scale.mdx b/website/src/content/docs/clustertool/virtual-machines/truenas-scale.mdx
index 56f91bf02171..b0d951c01ca1 100644
--- a/website/src/content/docs/clustertool/virtual-machines/truenas-scale.mdx
+++ b/website/src/content/docs/clustertool/virtual-machines/truenas-scale.mdx
@@ -134,8 +134,6 @@ Go back to the "preparation" section and make sure the IP you are trying to move
 7. Hit `save` and wait for the system to create the Zvol. The GUI should refresh and then show your Zvol in the list of datasets like so
 
 ![ZVOL Creation 2](./img/vm_zvol2.png)
 
-8. As you can see, I have 2 Zvols in my dataset. Multiple storage devices/drives aren't required for Talos, but if you would like them then you can repeat the above steps, however this time set `Size for this zvol` to between `768GiB` and `2TiB` or however large you desire it to be. The first Zvol you created will be the "system" disk for Talos, and the second Zvol will be the "data" disk for Talos. Instructions on how to attach the second Zvol to the Talos VM will follow below.
-
 ## GPU Isolation
 
@@ -189,9 +187,9 @@ Minimum recommended amount of RAM: `32GB`
 
 ![VM CPU And Memory](./img/vm_cpu_memory.png)
 
-### Disks
+### Disk
 
-Select the **first** previously created Zvol for your VM as shown below:
+Select the previously created Zvol for your VM as shown below:
 
 ![VM Disks](./img/vm_disks.png)
 
@@ -229,6 +227,8 @@ If you followed this guide correctly, the options shown should look similar to t
 
 You can skip this step if you don't have multiple disks configured for usage with Talos
 
+Please be warned: we do NOT actively provide support for multi-disk setups, and this *will* require modifications to the default CSI setup.
+
 :::
 
 Now that we have created the Talos VM, we need to attach the second Zvol we created earlier to it.
@@ -326,3 +326,22 @@ Workernodes can be pretty basic and should "just work".
     installDiskSelector:
       size: <= 600GB
 ```
+
+
+### GPU Passthrough Caveats
+
+Users running the Talos VM atop a TrueNAS SCALE host system who also want to take advantage of GPU passthrough to the VM will require a **minimum** of 2 *different* GPUs to be present in the system.
+
+The GPU to be passed through to the Talos VM will need to be [isolated](/clustertool/virtual-machines/truenas-scale/#gpu-isolation) within SCALE.
+
+This could include any of the following combinations:
+
+**GPU1:** Dedicated Nvidia GPU isolated within SCALE for VM passthrough
+
+**GPU2:** Intel/AMD iGPU
+
+or
+
+**GPU1:** Motherboard IPMI GPU
+
+**GPU2:** Intel iGPU or dedicated Nvidia GPU isolated within SCALE for VM passthrough
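For context on the `installDiskSelector` snippet in the final hunk: in a TalHelper/ClusterTool talconfig.yaml it sits inside a node entry. A minimal sketch of such an entry, where the hostname and address are hypothetical placeholders and only the `installDiskSelector` block is taken from the hunk above:

```yaml
# Sketch of a talconfig.yaml worker node entry; hostname and ipAddress are
# hypothetical placeholders, only installDiskSelector comes from the hunk above.
nodes:
  - hostname: worker-01        # hypothetical
    ipAddress: 192.168.1.50    # hypothetical
    controlPlane: false
    installDiskSelector:
      # Match the smaller "system" disk so the Talos installer does not
      # land on the larger "data" disk in a multi-disk VM.
      size: <= 600GB
```

Bounding the selector below the size of the data Zvol is what keeps the Talos installer off the data disk.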