
Implemented doc edits
Fixed typos, punctuation, wording, grammar, style according to style
guide and documentation policies.
chabowski committed Sep 11, 2024
1 parent 3dd7f94 commit 30757e5
Showing 4 changed files with 19 additions and 17 deletions.
2 changes: 1 addition & 1 deletion adoc/SAP-EIC-ImagePullSecrets.adoc
@@ -14,7 +14,7 @@ Then run:
$ kubectl -n <namespace> create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<yourUser> --docker-password=<yourPassword>
----

As secrets are namespace sensitive, you'll need to create this for every namespace needed.
As secrets are namespace-sensitive, you need to create the secret in every namespace where it is needed.
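Because the secret must exist in every namespace that pulls from the registry, a small loop can help. The following is a minimal sketch; the namespace names and credentials are placeholders, and the commands are only echoed for review (remove the leading `echo` to actually run them):

```shell
# Create the pull secret in each namespace that needs it.
# Namespaces and credentials below are illustrative placeholders.
# Commands are echoed for review; remove "echo" to execute them.
for ns in dev qa prod; do
  echo kubectl -n "$ns" create secret docker-registry application-collection \
    --docker-server=dp.apps.rancher.io \
    --docker-username="yourUser" --docker-password="yourPassword"
done
```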

ifdef::eic[]
The related secret can then be used for the components:
24 changes: 13 additions & 11 deletions adoc/SAP-EIC-Main.adoc
@@ -43,8 +43,8 @@ It will guide you through the steps of:
NOTE: This guide does not contain information about sizing your landscapes. Visit
https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edge Integration Cell Sizing Guide".

NOTE: In this guide we'll use $ and # for shell commands, where # means that the command needs to be executed as a root user and
$ that the command can be run by any user.
NOTE: In this guide, we use $ and # for shell commands, where # means that the command needs to be executed as a root user and
$ means that the command can be run by any user.

++++
<?pdfpagebreak?>
@@ -85,7 +85,7 @@ Other versions of {metallb} or {cm} can be used but may not have been tested.
** {lh}
** {sle_ha} *

+++*+++ Only needed if you want to setup {rancher} in a high available setup.
+++*+++ Only needed if you want to set up {rancher} in a high availability setup.

Additionally,

@@ -128,7 +128,7 @@ image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded]
We will use this graphic overview in the guide to illustrate what the next step is and what it is for.


Starting with installing the operating system of each machine or Kubernetes node, we will walk you through all the steps you need to take to get a fully set up Kubernetes landscape for deploying {eic}.
Starting with installing the operating system of each machine or Kubernetes node, we will walk you through all the steps you need to take to get a fully set-up Kubernetes landscape for deploying {eic}.

++++
<?pdfpagebreak?>
@@ -159,7 +159,7 @@ include::SAPDI3-Rancher.adoc[Rancher]

== Installing RKE2 using {rancher}

After installing the {rancher} cluster, we can now facilitate this one to create the {rke} clusters for {eic}.
After having installed the {rancher} cluster, we can now make use of it to create the {rke} clusters for {eic}.
SAP recommends setting up not only a production landscape, but also QA/Dev systems for {eic}. Both can be set up the same way using {rancher}.
How to do this is covered in this chapter.
Looking at the landscape overview again, we will now deal with setting up the lower part of the given graphic:
@@ -352,8 +352,8 @@ $ kubectl -n <namespace> create secret generic <certName> --from-file=./root.pem
NOTE: All applications expect the secret they use to reside in the same namespace as the application.

==== Using cert-manager
cert-manager needs to be available in your Downstream Cluster. To install cert-manager in your downstream cluster you can use the same installation steps which are described in the Rancher Prime installation.
First we need to create a selfsigned-issuer.yaml file:
`cert-manager` needs to be available in your Downstream Cluster. To install `cert-manager` in your downstream cluster, you can use the same installation steps that are described in the Rancher Prime installation section.
First, create a _selfsigned-issuer.yaml_ file:

[source,yaml]
----
@@ -365,7 +365,7 @@ spec:
selfSigned: {}
----

Then we create the a Certificate Ressource for the CA calles my-ca-cert.yaml:
Then create a Certificate resource for the CA called _my-ca-cert.yaml_:
[source,yaml]
----
apiVersion: cert-manager.io/v1
@@ -385,7 +385,7 @@ spec:
- "*.<cluster-name>.cluster.local"
----
For creating a ClusterIssuer using the Generated CA we create the my-ca-issuer.yaml file
To create a _ClusterIssuer_ using the generated CA, create the _my-ca-issuer.yaml_ file:
[source,yaml]
----
apiVersion: cert-manager.io/v1
@@ -396,7 +396,7 @@ spec:
ca:
secretName: my-ca-secret
----
The last ressource which we need to create is the certificate itself. This certificate is signed by our created CA. You can name the yaml file application-name-certificate.yaml
The last resource you need to create is the certificate itself. This certificate is signed by the CA you created. You can name the YAML file _application-name-certificate.yaml_.
[source,yaml]
----
kind: Certificate
@@ -425,13 +425,15 @@ $ kubectl apply -f my-ca-issuer.yaml
$ kubectl apply -f application-name-certificate.yaml
----

When you deploy your applications via Helm Charts you can use the generated certificate. In the Kubernetes Secret Certificate are 3 files stored. The tls.crt, tls.key and ca.crt which you cann use in the values.yaml file of your application.
When you deploy your applications via Helm Charts, you can use the generated certificate.
The Kubernetes Secret for the certificate stores three files: _tls.crt_, _tls.key_ and _ca.crt_, which you can use in the _values.yaml_ file of your application.
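Secret data is stored base64-encoded, so the three files must be decoded when you extract them manually. A minimal sketch, assuming the secret is named `application-name-certificate` as in the example above:

```shell
# Extract one file from the certificate Secret (names are placeholders):
#   kubectl -n <namespace> get secret application-name-certificate \
#     -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# Secret values are base64-encoded, so they must be decoded, for example:
printf 'dGxzLWNlcnQtZGF0YQ==' | base64 -d   # prints "tls-cert-data"
```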

++++
<?pdfpagebreak?>
++++

:leveloffset: 0

// Standard SUSE Best Practices includes
== Legal notice
include::common_sbp_legal_notice.adoc[]
6 changes: 3 additions & 3 deletions adoc/SAP-EIC-SLEMicro.adoc
@@ -52,7 +52,7 @@ $ transactional-update

=== Disabling automatic reboot

Per default {slem} runs a timer for `transactional-update` in the background which could automatically reboot your system.
By default, {slem} runs a timer for `transactional-update` in the background, which could automatically reboot your system.
Disable it with the following command:

[source, bash]
@@ -61,9 +61,9 @@ $ systemctl --now disable transactional-update.timer
----

=== Preparing for {lh}
For {lh} you need to do some preparation steps. First, install some addional packages on all worker nodes. Then attach a second disk to the worker nodes, create a filesystem ontop of it and mount it to the longhorn default location. The size of the second disk depends on your use case.
For {lh}, you need to perform some preparation steps. First, install some additional packages on all worker nodes. Then attach a second disk to the worker nodes, create a file system on top of it and mount it to the Longhorn default location. The size of the second disk depends on your use case.

Install some packages as a requirement for longhorn and Logical Volume Management for adding a file system to longhorn.
Install some packages required by Longhorn, plus Logical Volume Management (LVM) for adding a file system for Longhorn.
[source, bash]
----
$ transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi
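The disk preparation described above can be sketched as follows. This is a hedged sketch: `/dev/sdb`, the volume group and logical volume names, and the `ext4` file system are assumptions to adjust to your environment; `/var/lib/longhorn` is Longhorn's default data path. The commands are echoed for review and must be run as root when executed for real:

```shell
# Prepare the second disk for Longhorn (sketch; names are assumptions).
# Commands are echoed for review; remove "echo" to execute them as root.
DISK=/dev/sdb
echo pvcreate "$DISK"
echo vgcreate vg_longhorn "$DISK"
echo lvcreate -l 100%FREE -n lv_longhorn vg_longhorn
echo mkfs.ext4 /dev/vg_longhorn/lv_longhorn
echo mkdir -p /var/lib/longhorn
echo mount /dev/vg_longhorn/lv_longhorn /var/lib/longhorn
```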
4 changes: 2 additions & 2 deletions adoc/SAPDI3-Rancher.adoc
@@ -3,13 +3,13 @@
=== Preparation

To have a highly available {rancher} setup, you need a load balancer for your {rancher} nodes.
This section describes how to set up a custom load balancer using `haproxy`. If you already have a load balancer, you can make use of that to make {rancher} highly available.
This section describes how to set up a custom load balancer using `haproxy`. If you already have a load balancer, you can use that to make {rancher} highly available.

If you do not plan to set up a highly available {rancher} cluster, you can skip this section.

==== Installing an `haproxy`-based load balancer

Set up a virtual machine or a bare metal server with {sles} and the SUSE Linux Enterprise High Availability or use {sles4sap}.
Set up a virtual machine or a bare metal server with {sles} and SUSE Linux Enterprise High Availability or use {sles4sap}.
Install the `haproxy` package.

[source, bash]
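A minimal `haproxy` configuration for TCP-passthrough load balancing across three {rancher} nodes might look like the following sketch. The node addresses are placeholders, and a real setup may need further tuning (timeouts, port 80 for HTTP-01 challenges, health-check options):

```
# /etc/haproxy/haproxy.cfg -- minimal sketch; node addresses are placeholders
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend rancher_https
    bind *:443
    default_backend rancher_nodes

backend rancher_nodes
    balance roundrobin
    server node1 192.0.2.10:443 check
    server node2 192.0.2.11:443 check
    server node3 192.0.2.12:443 check
```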
