diff --git a/docs/drivers/installation.md b/docs/drivers/installation.md
index ead38e4d..890da553 100644
--- a/docs/drivers/installation.md
+++ b/docs/drivers/installation.md
@@ -18,19 +18,12 @@ Before installing the AMD GPU driver:
Before installing the out-of-tree AMD GPU driver, you must blacklist the inbox AMD GPU driver:
-- These commands need to either be run as `root` or by using `sudo`
- Create blacklist configuration file on worker nodes:
```bash
echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-amdgpu.conf
```
-- After blacklist configuration file, you need to rebuild the initramfs for the change to take effect:
-
-```bash
-echo update-initramfs -u -k all
-```
-
- Reboot the worker node to apply the blacklist
- Verify the blacklisting:
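The blacklist-and-verify flow above can be sketched end to end. This is a minimal sketch that writes to a scratch directory rather than `/etc/modprobe.d`, so it is safe to try anywhere; on a real worker node, substitute `/etc/modprobe.d` and run as root:

```bash
# Create the blacklist entry in a scratch dir (use /etc/modprobe.d on a real node).
conf_dir=$(mktemp -d)
echo "blacklist amdgpu" > "$conf_dir/blacklist-amdgpu.conf"

# Verify the entry was written.
grep -q '^blacklist amdgpu$' "$conf_dir/blacklist-amdgpu.conf" && echo "blacklist entry present"

# Verify the inbox module is not loaded; no match from lsmod means it is not loaded.
lsmod 2>/dev/null | grep -w amdgpu || echo "amdgpu not loaded"
```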
diff --git a/docs/index.md b/docs/index.md
index 9348b933..3a8340ea 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -13,46 +13,8 @@ The AMD GPU Operator simplifies the deployment and management of AMD Instinct GP
## Compatibility
-### Supported Hardware
-
-| **GPUs** | |
-| --- | --- |
-| AMD Instinct™ MI300X | ✅ Supported |
-| AMD Instinct™ MI250 | ✅ Supported |
-| AMD Instinct™ MI210 | ✅ Supported |
-
-### OS & Platform Support Matrix
-
-Below is a matrix of supported Operating systems and the corresponding Kubernetes version that have been validated to work. We will continue to add more Operating Systems and future versions of Kubernetes with each release of the AMD GPU Operator and Metrics Exporter.
-
-
-| Operating System | Kubernetes | Red Hat OpenShift |
-| --- | --- | --- |
-| Ubuntu 22.04 LTS | 1.29—1.31 | |
-| Ubuntu 24.04 LTS | 1.29—1.31 | |
-| Red Hat Core OS (RHCOS) | | 4.16—4.17 |
-
-Please refer to the [ROCM documentaiton](https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html) for the compatability matrix for the AMD GPU DKMS driver.
+- **Kubernetes**: 1.29.0
+- Please refer to the [ROCm documentation](https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html) for the compatibility matrix for the AMD GPU DKMS driver.
## Prerequisites
diff --git a/docs/metrics/ecc-error-injection.md b/docs/metrics/ecc-error-injection.md
deleted file mode 100644
index f3f17926..00000000
--- a/docs/metrics/ecc-error-injection.md
+++ /dev/null
@@ -1,199 +0,0 @@
-## ECC Error Injection Testing
-
-The Metrics Exporter can check for unhealthy GPUs by monitoring the ECC errors that occur when a GPU is not functioning as expected. When an ECC error is detected, the Metrics Exporter marks the offending GPU as unhealthy and adds a node label indicating which GPU on the node is unhealthy. The Kubernetes Device Plugin also listens to the health metrics coming from the Metrics Exporter to determine GPU status, marking GPUs as schedulable if healthy and unschedulable if unhealthy.
-
-This health check workflow runs automatically on every node the Device Metrics Exporter is running on: the Metrics Exporter polls GPUs every 30 seconds and the device plugin checks health status at the same interval, ensuring updates within one minute. Users can customize the default ECC error threshold (set to 0) via the `HealthThresholds` field in the metrics exporter ConfigMap. As part of this workflow, healthy GPUs are made available for Kubernetes job scheduling, while no new jobs are scheduled on unhealthy GPUs.
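The `HealthThresholds` override mentioned above might look like the following ConfigMap fragment. This is a sketch only: the `HealthThresholds` field and the ECC metric names come from this page, but the surrounding keys and JSON layout are assumptions, so check the ConfigMap shipped with your deployment for the actual schema.

```yaml
# Hypothetical layout: only "HealthThresholds" and the ECC metric names
# are taken from this document; everything else is illustrative.
data:
  config.json: |
    {
      "GPUConfig": {
        "HealthThresholds": {
          "GPU_ECC_UNCORRECT_SEM": 0,
          "GPU_ECC_UNCORRECT_FUSE": 0
        }
      }
    }
```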
-
-## To perform error injection, follow these steps
-
-We have added a new `metricsclient` to the Device Metrics Exporter pod that can be used to inject ECC errors into an otherwise healthy GPU for testing the above health check workflow. This is safe: the injected errors are debugging entries, not real hardware errors, so they do not harm your GPU. The steps are outlined below:
-
-### 1. Set Node Name
-
-Use an environment variable to set the Kubernetes node name to indicate which node you want to test error injection on:
-
-```bash
-NODE_NAME=
-```
-
-Replace the value with the name of the node you want to test. If you are running this from the node you want to test, you can grab the hostname using:
-
-```bash
-NODE_NAME=$(hostname)
-```
-
-### 2. Set Metrics Exporter Pod Name
-
-Since the `metricsclient` must be executed from within the Device Metrics Exporter pod, first get the name of the Metrics Exporter pod running on the node:
-
-```bash
-METRICS_POD=$(kubectl get pods -n kube-amd-gpu --field-selector spec.nodeName=$NODE_NAME --no-headers -o custom-columns=":metadata.name" | grep '^gpu-operator-metrics-exporter-' | head -n 1)
-```
-
-### 3. Check Metrics Client to see GPU Health
-
-Now that you have the name of the Metrics Exporter pod, you can use `metricsclient` to check the current health of all GPUs on the node:
-
-```bash
-kubectl exec -n kube-amd-gpu $METRICS_POD -c metrics-exporter-container -- metricsclient
-```
-
-You should see a list of all the GPUs on that node along with their corresponding status. In most cases all GPUs should report as being `healthy`.
-
-```bash
-ID Health Associated Workload
-------------------------------------------------
-1 healthy []
-0 healthy []
-7 healthy []
-6 healthy []
-5 healthy []
-4 healthy []
-3 healthy []
-2 healthy []
-------------------------------------------------
-```
-
-### 4. Inject ECC Errors on GPU 0
-
-To simulate errors on a GPU, we use a JSON file that specifies a GPU ID along with counters for several ECC uncorrectable error fields monitored by the Device Metrics Exporter. In the example below we specify `GPU 0` and inject 1 `GPU_ECC_UNCORRECT_SEM` error and 2 `GPU_ECC_UNCORRECT_FUSE` errors. We use `metricsclient -ecc-file-path` to specify the JSON file we want to inject into the metrics table. To create the JSON file and execute the `metricsclient` command all in one go, run the following:
-
-```bash
-kubectl exec -n kube-amd-gpu $METRICS_POD -c metrics-exporter-container -- sh -c 'cat > /tmp/ecc.json < /tmp/delete_ecc.json <