EX280 - Red Hat Certified Specialist in OpenShift Administration

Cluster Administration

Check Cluster Upgrade Status

oc get clusterversion
oc get clusteroperators

Get Node Logs

oc adm node-logs <name>
oc adm node-logs -u crio <name>
oc adm node-logs -u kubelet <name>

Get Node Shell

oc debug node/<name>
chroot /host  # Run inside pod to access host binaries

Upgrade Cluster

Through the web console, or:

oc adm upgrade --to=<version>
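
Running oc adm upgrade with no arguments shows the current version and the available update targets:

oc adm upgrade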

Authentication

HTPasswd OAuth Initial Configuration

Create an HTPasswd credential file:

htpasswd -c -B -b htpasswd <user> <password>

Create a secret using that file:

oc create secret generic htpasswd \
    --from-file htpasswd \
    -n openshift-config

Open oauth/cluster for editing:

oc edit oauth/cluster

Then, add:

...
spec:
  identityProviders:
  - htpasswd:
      fileData:
        name: htpasswd
    mappingMethod: claim
    name: htpasswd_provider
    type: HTPasswd

Add User to HTPassword Auth

Pull htpasswd file to local machine:

oc extract secret/htpasswd --confirm -n openshift-config

Add user to htpasswd file:

htpasswd -B -b htpasswd <user> <password>

Replace the htpasswd file in secret:

oc set data secret/htpasswd \
    --from-file=htpasswd \
    -n openshift-config

Replacing the secret will trigger a redeploy of the OAuth deployment.
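
The rollout can be watched in the openshift-authentication project:

oc get pods -n openshift-authentication -w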

Deleting HTPasswd User

Follow the same process as adding a user: extract the htpasswd file, remove the user's entry, and replace the secret.
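
The entry can be removed from the extracted file with htpasswd's -D flag:

htpasswd -D htpasswd <user>

After replacing the secret, also delete the user and identity records from OpenShift: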

oc delete user <name>
oc delete identity htpasswd_provider:<name>

Delete kubeadmin user

oc delete secret kubeadmin -n kube-system

Roles

Get available cluster roles:

oc get clusterroles

Good roles to know:

  • admin
  • basic-user
  • cluster-admin
  • cluster-status
  • edit
  • self-provisioner
  • view

Apply Role to User

oc policy add-role-to-user <role> <user> -n <project>
oc adm policy add-cluster-role-to-user <role> <user>
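
For example, with hypothetical user and project names, grant edit in a single project and cluster-admin cluster-wide:

oc policy add-role-to-user edit developer1 -n myproject
oc adm policy add-cluster-role-to-user cluster-admin admin-user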

Creating ConfigMaps and Secrets

Create a ConfigMap from name=value pairs

oc create configmap <name> \
    --from-literal key1=value1 \
    --from-literal key2=value2

Create a ConfigMap from a file

oc create configmap <name> --from-file <path-to-file>

Create a Secret from name=value pairs

oc create secret generic <name> \
    --from-literal key1=value1 \
    --from-literal key2=value2

Create a Secret from YAML

apiVersion: v1
kind: Secret
metadata:
  name: <name>
type: Opaque
stringData:
  key1: value1
  key2: value2

NOTE: stringData is converted to base64 encoded data when applied.
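
For example, the Secret above is stored with base64-encoded values under data (encodings of value1 and value2):

data:
  key1: dmFsdWUx
  key2: dmFsdWUy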

Update ConfigMap or Secret Data

oc set data secret/<name> --from-file <local-file>
oc set data configmap/<name> --from-file <local-file>

Using Secrets and ConfigMaps in Deployments

Set as environment variable

oc set env deployment/<name> --from configmap/<name> # ConfigMap
oc set env deployment/<name> --from secret/<name>    # Secret

Set ConfigMap as volume

oc set volume deployment/<name> --add \
    -t configmap \
    -m <path-on-container> \
    --name <name> \
    --configmap-name <name>

Set Secret as volume

oc set volume deployment/<name> --add \
    -t secret \
    -m <path-on-container> \
    --name <name> \
    --secret-name <name>

Security Context Constraints (SCCs)

Get available SCCs with oc get scc. Get additional info with oc describe scc <name>.

Apply SCC to Service Account

oc adm policy add-scc-to-user <scc-name> -z <account-name>

Determine SCC Required

oc get pod <name> -o yaml | oc adm policy scc-subject-review -f -

Add Service Account to Deployment

oc create serviceaccount <service-account>
oc set serviceaccount deployment/<name> <service-account>
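
Putting the pieces together, a sketch with hypothetical names (myapp-sa, myapp) that lets a deployment run with the anyuid SCC:

oc create serviceaccount myapp-sa
oc adm policy add-scc-to-user anyuid -z myapp-sa
oc set serviceaccount deployment/myapp myapp-sa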

Networking

DNS Operator

oc describe dns.operator

The operator deploys CoreDNS and provides name resolution for the cluster.local domain.

Service DNS

A service is available via internal DNS at <service-name>.<project>.svc.cluster.local. Inside the cluster the .cluster.local suffix can usually be dropped, and pods in the same project can use just <service-name>.

NOTE: This information isn't provided in an oc describe service <name> response. Remember the format so you can point applications at each other!!
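
For example, with hypothetical names, a service called api in the payments project resolves at:

api.payments.svc.cluster.local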

Network Operator

oc get network/cluster

Assign Network Policy

NOTE: Look up in documentation. It's a bunch of YAML that is hard to remember. Network policy is under Networking section of OpenShift documentation.
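
A minimal sketch for reference (the names, labels, and port are assumptions): allow ingress to pods labeled app=api on TCP 8080, only from the frontend namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:          # pods this policy protects
    matchLabels:
      app: api
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
    ports:
    - protocol: TCP
      port: 8080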

Certificate Management

Generate a private key:

openssl genrsa -out tls.key 2048

Generate a certificate signing request:

openssl req -new -subj "/CN=hostname" -key tls.key -out tls.csr

Add TLS certificate and key to OpenShift:

oc create secret tls <name> --cert tls.crt --key tls.key

Scheduling

Label a Node

oc label node <node-name> key=value

Schedule Deployment on Labeled Nodes

spec:
  template: # Pod Template
    spec:
      nodeSelector:
        key: value
...

Restrict Project to Specific Nodes

oc adm new-project <project-name> --node-selector key=value
oc annotate namespace <project-name> openshift.io/node-selector="key=value"

Scale Deployment

oc scale --replicas <#-of-replicas> deployment/<name>

Requests vs Limits

Requests are the resources reserved for a container; the scheduler uses them to decide where a pod can run.

Limits are the maximum a container is allowed to consume.

spec:
  containers:
    - ...
      resources:
        requests:
          cpu: "100m"
          memory: 200Mi
        limits:
          cpu: "800m"
          memory: 1000Mi
      ...

Resource Quotas

oc create quota <name> --hard cpu=1000,memory=10Gi,services=10
oc create clusterresourcequota <name> \
    --hard cpu=1000,memory=10Gi,services=10 \
    --project-annotation-selector key=value

Update Default Project Template

NOTE: This is in the documentation under Applications > Projects > Configuring Project Creation.

oc adm create-bootstrap-project-template -o yaml > project-template.yaml

Edit the file, then:

oc create -f project-template.yaml -n openshift-config

Update the project.config.openshift.io/cluster resource (oc edit project.config.openshift.io/cluster):

spec:
  projectRequestTemplate:
    name: project-request

Limit Ranges

NOTE: Look up in documentation. Limit ranges are a bunch of YAML. They are located under Nodes > Working with Clusters > Setting Limit Ranges in the documentation.
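
A minimal sketch (name and values are illustrative) that sets container defaults and a maximum for a project:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    defaultRequest:   # applied when a container sets no request
      cpu: 100m
      memory: 256Mi
    default:          # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
    max:
      cpu: "1"
      memory: 1Gi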

Pod Replicas

NOTE: Look up in documentation under Applications > Deployments > Understanding Deployments and DeploymentConfigs.
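
The short version: the replica count lives in the Deployment spec (sketch):

spec:
  replicas: 3
...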

Pod Autoscaling

NOTE: Look up in documentation under Nodes > Working with Pods > Automatically Scaling Pods.
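
The CLI equivalent creates a HorizontalPodAutoscaler (values are illustrative):

oc autoscale deployment/<name> --min 2 --max 10 --cpu-percent 80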

OpenShift CLI

Get Current API Token

oc whoami -t

oc Command Logging

oc <some-command> --loglevel 5
oc <some-command> --loglevel 10

Shell Into Pod

oc rsh pod/<name>

Forward Pod Port to Local Machine

oc port-forward <pod> <local-port>:<remote-port>

Persistent Storage

Create New PVC for Deployment

oc set volumes deployment/<deployment-name> \
    --add \
    --name <name> \
    --type pvc \
    --claim-class <storageclass> \
    --claim-mode rwx \
    --claim-size 5Gi \
    --mount-path <container-mount-path> \
    --claim-name <name>
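
Confirm the new claim was created and is bound:

oc get pvc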