updating
kquinn1204 committed Dec 18, 2024
1 parent 49f01b7 commit 875e681
Showing 5 changed files with 62 additions and 11 deletions.
42 changes: 41 additions & 1 deletion content/patterns/multicloud-gitops/mcg-managed-cluster.adoc
@@ -30,4 +30,44 @@ include::modules/comm-designate-cluster-as-managed-cluster-site.adoc[leveloffset

== Verification

. Go to your managed cluster (edge) OpenShift console and verify that the `open-cluster-management-agent` pod has launched.

[NOTE]
====
It might take a while for the RHACM agent and `agent-addons` to launch.
====
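
You can also verify from the CLI. A minimal sketch, assuming the `oc` client is logged in to the managed cluster:

[source,terminal]
----
$ oc get pods -n open-cluster-management-agent
----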

. Check that the *Red Hat OpenShift GitOps Operator* is installed.

. Launch the *Group-One OpenShift ArgoCD* console from the nine-dots application launcher at the top right of the OpenShift console.
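+
If the console link is not visible yet, you can list the GitOps routes from the CLI. A minimal sketch, assuming the `oc` client is logged in to the managed cluster:
+
[source,terminal]
----
$ oc get route -A | grep gitops
----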

Verify that the *hello-world* application deployed successfully as follows:

. Navigate to *Networking* -> *Routes* on your managed cluster (edge) OpenShift console.

. From the *Project:* drop-down, select the *hello-world* project.

. Click the *Location URL*. This should reveal output similar to the following:
+
[source,terminal]
----
Hello World!
Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org'
Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org'
----
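
Alternatively, retrieve the route from the CLI. A minimal sketch, assuming the `oc` client is logged in to the managed cluster; the same check with `-n config-demo` applies to the next application:

[source,terminal]
----
$ oc get route -n hello-world
----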

Verify that the *config-demo* application deployed successfully as follows:

. Navigate to *Networking* -> *Routes* on your managed cluster (edge) OpenShift console.

. From the *Project:* drop-down, select the *config-demo* project.

. Click the *Location URL*. This should reveal output similar to the following:
+
[source,terminal]
----
Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org'
Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org'
The secret is `secret`
----
12 changes: 8 additions & 4 deletions modules/mcg-deploying-managed-cluster-using-rhacm.adoc
@@ -8,7 +8,7 @@

* An OpenShift cluster
** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
** Select *OpenShift* -> *Create cluster*.

* Red Hat Advanced Cluster Management (RHACM) web console to join the managed cluster to the management hub
+
@@ -21,6 +21,10 @@ After RHACM is installed, a message regarding a *Web console update is available*

. In the left navigation panel of the web console, click *local-cluster* and select *All Clusters*. The RHACM web console is displayed with *Clusters* in the left navigation panel.
. On the *Managed clusters* tab, click *Import cluster*.
+
Now that RHACM is no longer deploying the managed cluster applications everywhere, you must indicate that the new cluster has the managed cluster role.
. On the *Import an existing cluster* page:
.. Enter the cluster name. You can derive it from the API URL in your login token, for example `https://api.<cluster-name>.<domain>:6443`.
.. You can leave the *Cluster set* field blank.
.. In the *Additional labels* dialog box, enter the key-value pair `clusterGroup=group-one`.
.. Choose *KubeConfig* as the import mode.
.. In the *KubeConfig* window, paste your kubeconfig content.
. Click *Import*.
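
If the cluster is already imported, you can add the label from the hub CLI instead. A minimal sketch, assuming the `oc` client is logged in to the hub cluster; replace `<cluster-name>` with your managed cluster's name:

[source,terminal]
----
$ oc label managedcluster <cluster-name> clusterGroup=group-one
----
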
4 changes: 2 additions & 2 deletions modules/mcg-deploying-mcg-pattern.adoc
@@ -27,7 +27,7 @@ Other patterns build upon these concepts, making this an ideal starting point fo

* An OpenShift cluster
** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
** Select *OpenShift \-> Create cluster*.
** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. Verify that a dynamic `StorageClass` exists before creating one by running the following command:
+
[source,terminal]
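----
# A typical check; assumes the standard oc CLI. Any dynamic provisioner listed
# in the output satisfies the requirement.
$ oc get storageclass
----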
@@ -256,7 +256,7 @@ $ ./pattern.sh make install
. Verify that the Operators have been installed.
.. To verify, in the OpenShift Container Platform web console, navigate to the *Operators → Installed Operators* page.
.. Check that *{rh-gitops} Operator* is installed in the `openshift-operators` namespace and its status is `Succeeded`.
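+
Alternatively, check from the CLI. A minimal sketch, assuming the `oc` client is logged in to the hub cluster:
+
[source,terminal]
----
$ oc get csv -n openshift-operators
----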
. Verify that all applications are synchronized. Under *Networking \-> Routes*, select the *Location URL* associated with the *hub-gitops-server*. All applications report a status of `Synced`.
+
image::multicloud-gitops/multicloud-gitops-argocd.png[Multicloud GitOps Hub]

4 changes: 2 additions & 2 deletions modules/mcg-understanding-rhacm-requirements.adoc
@@ -17,7 +17,7 @@ Add a `managedClusterGroup` for each cluster or group of clusters that you want
$ git checkout -b my-branch main
----

. In the `value-hub.yaml` file, add a `managedClusterGroup` for each cluster or group of clusters that you want to manage as one. An example `group-one` is provided.
+
[source,yaml]
----
@@ -32,7 +32,7 @@ managedClusterGroups:
value: false
----
+
The above YAML file segment deploys the `clusterGroup` applications on managed clusters with the label `clusterGroup=group-one`. Specific subscriptions, Operators, applications, and projects for that `clusterGroup` are then managed in a `value-group-one.yaml` file.
+
For example, the following sketch shows what might be defined for `clusterGroup=group-one`; the exact contents depend on your fork:
+
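[source,yaml]
----
# A sketch of a value-group-one.yaml clusterGroup definition. The namespaces,
# projects, applications, and chart paths shown here are assumptions based on
# the pattern's defaults; adjust them to match your fork.
clusterGroup:
  name: group-one
  isHubCluster: false
  namespaces:
  - hello-world
  - config-demo
  projects:
  - hello-world
  - config-demo
  applications:
    hello-world:
      name: hello-world
      namespace: hello-world
      project: hello-world
      path: charts/all/hello-world
    config-demo:
      name: config-demo
      namespace: config-demo
      project: config-demo
      path: charts/all/config-demo
----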
11 changes: 9 additions & 2 deletions modules/mcg-using-imperative-actions.adoc
@@ -4,7 +4,14 @@
[id="mcg-using-kubernetes-cronjob-imperative-actions"]
= Using Kubernetes CronJobs to apply imperative actions

There is currently no way within Argo CD to apply an imperative action against a cluster. However, you can apply changes to a cluster declaratively by using Kubernetes CronJob resources.

Within the Patterns framework, we mainly use jobs to:

* Schedule recurring imperative tasks, such as keeping the Vault unsealed
* Run Ansible playbooks

Customers can use their Ansible playbooks to take action against a cluster if necessary.

[WARNING]
====
@@ -15,7 +22,7 @@ Adding your playbooks to the pattern requires the following:

. Move your Ansible configurations to the appropriate directory under Ansible in your forked repository.
. Define your job as a list, for example:
+
[source,yaml]
----
imperative:
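  # A sketch of the job list; the schema (name, playbook) and the playbook
  # path are assumptions based on the common imperative framework.
  jobs:
  - name: hello-world
    playbook: ansible/playbooks/hello-world/hello-world.yaml
----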
