Red Hat OpenShift Container Platform (OCP 3.11)

Reference: OpenShift cheatsheet

Show Pod

$ oc get pod -n default -o wide

OpenShift docker/kubelet status-stop-start-status

$ systemctl status docker atomic-openshift-node 
$ systemctl stop docker atomic-openshift-node
$ systemctl start docker atomic-openshift-node
$ systemctl status docker atomic-openshift-node 

Export all resources to yaml

$ oc get all --all-namespaces --export -o yaml > export-file.yaml
# --export strips cluster-specific fields such as timestamps
# Show the current SCC
$ oc get scc

# Delete the anyuid and restricted SCC
$ oc delete scc anyuid
$ oc delete scc restricted

$ oc adm policy reconcile-sccs 
$ oc adm policy reconcile-sccs --confirm

Get pod-name and pod-scc

$ oc get pods <pod-name> -o=jsonpath='{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}'
# output will be as below
# <pod-name> map[openshift.io/scc:anyuid]

Get name and groups from SCC (Security Context Constraint)

$ oc get scc --no-headers | awk '{print "oc get scc "$1" -o jsonpath=@{.metadata.name}{.groups}@; echo \n"}' | sed 's/@/"/g' | sh
# .metadata.name = SCC name
# .groups = groups allowed to use the SCC
# sed changes the @ to " in the jsonpath expression
# sample result: 
anyuid[system:cluster-admins]
hostaccess[]
restricted[system:authenticated]

Open Remote Shell session to a container

# Enter into a container, and execute the "id" command
$ oc rsh pod/<pod-name> id

# See the configuration of your internal registry
$ oc rsh dc/docker-registry cat config.yaml

Check certificate built on OCP

$ ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/certificate_expiry/easy-mode.yaml
# Check the result on the output html/json file 
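# A single certificate can also be checked directly with openssl
# (a quick sketch; the master certificate path below is an example and may differ in your environment)
$ openssl x509 -noout -enddate -in /etc/origin/master/master.server.crt
# prints notAfter=<expiry date>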

How to get a full id of a certain container

$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
ad6d5d32576a        nginx:latest        "nginx -g 'daemon of   About a minute ago   Up About a minute   80/tcp, 443/tcp     nostalgic_sammet
9bab1a42d6a9        nginx:latest        "nginx -g 'daemon of   About a minute ago   Up About a minute   80/tcp, 443/tcp     mad_kowalevski

$ docker ps -q --no-trunc | grep ad6d5d32576a
ad6d5d32576ad3cb1fcaa59b564b8f6f22b079631080ab1a3bbac9199953eb7d

$ ls -l /var/lib/docker/containers/ad6d5d32576ad3cb1fcaa59b564b8f6f22b079631080ab1a3bbac9199953eb7d
# The container's directory contents will be shown

Cordon and Uncordon

Cordon

# Cordon 1 node
$ oc adm manage-node <hostname-of-node-to-cordon> --schedulable=false

# Cordon nodes with node-selector (compute, infra, master)
$ oc adm manage-node --selector=node-role.kubernetes.io/compute=true --schedulable=false
$ oc adm manage-node --selector=node-role.kubernetes.io/infra=true --schedulable=false
$ oc adm manage-node --selector=node-role.kubernetes.io/master=true --schedulable=false

$ oc get nodes
# The status of cordoned nodes will be : Ready,SchedulingDisabled

Uncordon

# Uncordon 1 node
$ oc adm manage-node <hostname-of-node-to-uncordon> --schedulable=true

# Uncordon nodes with node-selector (compute, infra, master)
$ oc adm manage-node --selector=node-role.kubernetes.io/compute=true --schedulable=true
$ oc adm manage-node --selector=node-role.kubernetes.io/infra=true --schedulable=true
$ oc adm manage-node --selector=node-role.kubernetes.io/master=true --schedulable=true

$ oc get nodes
# The status of uncordoned nodes will be : Ready
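
To move workloads off a cordoned node before maintenance, the node can also be drained; a short sketch (the older manage-node form also exists on OCP 3.x):

# Drain (evacuate) the pods from a node
$ oc adm drain <hostname-of-node> --ignore-daemonsets --delete-local-data

# Older OCP 3.x syntax
$ oc adm manage-node <hostname-of-node> --evacuate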

OpenShift

===== Image Stream =====
# oc get is -n openshift      is=imagestream
# oc new-app php~https://github.com/sandervanvugt/simpleapp --name=simple-app -o yaml > simple-app.yaml


===== Deployment Config =====
# oc get dc       dc=deploymentconfig
# oc get rc       rc=replicationcontroller
# oc get pods
# oc get pods -o wide
# oc get pods --show-labels

# oc describe dc <dc-name>
# oc describe rc <rc-name>
# oc describe pods <podname>
# oc logs <podname>
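
-Rollout handling for a deployment config (companion commands; <dc-name> is a placeholder)
# oc rollout latest dc/<dc-name>
# oc rollout status dc/<dc-name>
# oc rollout history dc/<dc-name>
# oc rollback dc/<dc-name>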




===== Templates =====
-List templates
# oc get templates --namespace openshift
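
-Inspect and instantiate a template (a sketch; <template-name> and PARAM are placeholders, the real parameter names come from the describe output)
# oc describe template <template-name> -n openshift
# oc new-app --template=<template-name> -p PARAM1=value1 -p PARAM2=value2
# oc process openshift//<template-name> -p PARAM1=value1 | oc create -f -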




===== Persistent Storage (PV & PVC) =====
-Access modes for persistent storage:
ReadWriteOnce (RWO)
    The volume can be mounted as read/write by a single node.
ReadOnlyMany (ROX)
    The volume can be mounted as read-only by many nodes.
ReadWriteMany (RWX)
    The volume can be mounted as read/write by many nodes.

# oc get pvc

【1】Setting up the NFS server:
# yum install nfs-server
# mkdir /storage
# chown nfsnobody.nfsnobody /storage
# chmod 7000 /storage
# echo "/storage *(rw,async,all_squash)" >> /etc/exports
# systemctl enable --now nfs-server
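# Optional check that the export is visible before using it from OpenShift (assumes the /storage export above)
# showmount -e localhost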
# ufw status

【2】Create Persistent Volume (PV)
-Create yaml(nfs-pv.yml) file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
   - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /storage
    server: 172.17.0.1
    readOnly: false

【3】Adding Persistent Volume (PV)
# oc login -u system:admin -p anything
# oc create -f nfs-pv.yml
# oc get pv | grep nfs
# oc describe pv nfs-pv

【4】Creating Persistent Volume Claim (PVC)
-Create yaml(nfs-pvc.yml) file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv-claim
spec:
  accessModes:
   - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

【5】Creating PVC
# oc whoami
# oc login -u developer -p anything
# oc create -f nfs-pvc.yml
# oc describe pvc nfs-pv-claim
# oc get pvc

【6】Creating the pod
# oc create -f nfs-pv-pod.yaml
# oc describe pod nfs-pv-pod
(check the Volume section, also check Events)
# oc logs pod/nfs-pv-pod

-Create nfs-pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pv-pod
spec:
  volumes:
    - name: nfsvolume   # this name has nothing to do with the PV name
      persistentVolumeClaim:
        claimName: nfs-pv-claim
  containers:
    - name: nfs-client1
      image: toccoag/openshift-nginx
      ports:
        - containerPort: 8081
          name: "http-server1"
      volumeMounts:
        - mountPath: "/mnt"
          name: nfsvolume
      resources: {}
    - name: nfs-client2
      image: toccoag/openshift-nginx
      ports:
        - containerPort: 8082
          name: "http-server2"
      volumeMounts:
        - mountPath: "/nfsshare"
          name: nfsvolume
      resources: {}

【7】Verifying current configuration
# oc describe pod <podname>
# oc get pvc
# oc logs <podname>
# oc exec <podname> -it -- sh
   - mount | grep nfs      (check that the NFS volume is mounted)
# oc logs pod/nfs-pv-pod -c nfs-client1
# oc logs pod/nfs-pv-pod -c nfs-client2




===== ConfigMaps =====
ConfigMaps can be used to separate Dynamic Data from Static Data in a Pod
ConfigMaps can be used in 3 different ways:
1. make variables available within a Pod
2. provide command line arguments
3. mount them on the location where the application expects to find a configuration file (a volume-mount sketch follows the Pod example below)

# vim variables
VAR_1=Hello
VAR_2=World
esc : wq!
# oc create cm variables --from-env-file=variables
# oc get cm
# oc describe cm variables
# oc create -f test-pod1.yml
# oc get pods
# oc logs pod/example

-Create test-pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: example
      image: cirros
      command: ["/bin/sh", "-c", "env"]
      envFrom:
        - configMapRef:
            name: variables





===== OpenShift Troubleshoot =====
-Show recent events
# oc get events

-Show what has happened to specific pod
# oc logs <podname>

-Show pod details
# oc describe pod <podname>

-Show the current working project
# oc project

-Delete everything in the current project
# oc delete all --all

-Follow the build logs
# oc logs -f bc/<app-name>
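
-A few more commands that often help (oc adm diagnostics is specific to OCP 3.x)
# oc get events --sort-by='.lastTimestamp'
# oc adm diagnostics
# oc debug dc/<dc-name>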




===== Demo1 =====
oc login -u developer -p anything
oc new-project firstproject
oc new-app --docker-image=nginx:latest --name=nginx
oc status (use it repeatedly to trace the process)
oc get pods
oc describe pod <podname>
oc get svc
oc describe service nginx
oc port-forward <podname> 33080:80
curl -s http://localhost:33080
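
(Alternatively, expose the service as a route instead of port-forwarding; the hostname depends on your cluster's wildcard domain)
oc expose svc/nginx
oc get route nginx
curl -s http://<route-host>/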


===== Demo2 =====
oc whoami
oc new-project mysql
oc new-app --docker-image=mysql:latest --name=mysql-openshift -e MYSQL_USER=myuser -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydb -e MYSQL_ROOT_PASSWORD=password
oc status -v
oc get all
oc get pods -o=wide
Log in to the web console and see the new application:
https://127.0.0.1:8443
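
(To verify the database from inside the pod; the credentials are the ones passed with -e above)
oc rsh <podname>
mysql -umyuser -ppassword -e "SHOW DATABASES;" mydb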

Creating new app (sample app: open-liberty)

# Save the open-liberty image to a tar archive (on a host that already has it)
$ docker save open-liberty > open-liberty.tar
$ ls -l open-liberty.tar

# Load the open-liberty image with docker load (on the target host)
$ docker load -i open-liberty.tar
Loading layer ...
Loading layer ...
Loaded image: docker.io/openliberty/open-liberty:latest

# Or, you may just pull open-liberty image to your environment
$ docker pull open-liberty
Using default tag: latest
latest: Pulling from library/open-liberty
5bed26d33875: Pull complete
f11b29a9c730: Pull complete
930bda195c84: Pull complete
78bf9a5ad49e: Pull complete
bf7abfeb0680: Pull complete
72e3fc2f84f3: Pull complete
22e18c7d3d21: Pull complete
c6a94ffbb4bd: Pull complete
b3d5728c0015: Pull complete
42a91fcb5bbf: Pull complete
3fc5267dabfe: Pull complete
17655dfe9734: Pull complete
Digest: sha256:d0690a004189913f4471e4440c49e7a4360882ef73984132205300067e023f1a
Status: Downloaded newer image for open-liberty:latest
docker.io/library/open-liberty:latest

# Check the open-liberty images
$ docker images | grep -E "REPO|open-liberty"
REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
open-liberty                                              latest              f9215bfcd756        5 days ago          429MB


# Login to OpenShift console
$ oc login -u [user]

# Create project for our new-app
$ oc new-project test1-project

# Login to OpenShift internal registry
$ oc registry info
$ docker login -u `oc whoami` -p `oc whoami -t` `oc registry info`
     Login Succeeded

# Tag and push image to internal registry
$ docker tag docker.io/open-liberty:latest `oc registry info`/`oc project -q`/open-liberty:latest
$ docker images | grep open-liberty
$ docker push `oc registry info`/`oc project -q`/open-liberty:latest
  # You may replace `oc project -q` with any project name you like
  # This pushes the image to the internal registry and creates the image stream

# Check the image stream
$ oc get is | grep open-liberty
NAME           DOCKER REPO                              TAGS                           UPDATED
open-liberty   172.30.1.1:5000/openshift/open-liberty   latest                         2 minutes ago

# Check the new-app list
$ oc new-app --list | grep -A 2 open-liberty
open-liberty
  Project: project-name
  Tags:    latest

# Create new app (using "oc new-app" command)
$ oc new-app --image-stream=`oc project -q`/open-liberty:latest --name=liberty01

--> Found image f9215bf (5 days old) in image stream "openshift/open-liberty" under tag "latest" for "openshift/open-liberty:latest"
    * This image will be deployed in deployment config "liberty01"
    * Ports 9080/tcp, 9443/tcp will be load balanced by service "liberty01"
      * Other containers can access this service through the hostname "liberty01"
--> Creating resources ...
    imagestreamtag.image.openshift.io "liberty01:latest" created
    deploymentconfig.apps.openshift.io "liberty01" created
    service "liberty01" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/liberty01'
    Run 'oc status' to view your app.
    
$ oc status --suggest
In project sample01 on server https://192.168.99.102:8443

http://liberty01-sample01.192.168.99.102.nip.io to pod port 9080-tcp (svc/liberty01)
  dc/liberty01 deploys openshift/open-liberty:latest
    deployment #1 deployed 11 minutes ago - 1 pod

Info:
  * dc/liberty01 has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/liberty01 --readiness ...
  * dc/liberty01 has no liveness probe to verify pods are still running.
    try: oc set probe dc/liberty01 --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
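
# Probes can be added as the Info section suggests; a minimal sketch against the open-liberty HTTP port
# (the path and delay values are assumptions, adjust them for your application)
$ oc set probe dc/liberty01 --readiness --get-url=http://:9080/ --initial-delay-seconds=30
$ oc set probe dc/liberty01 --liveness --get-url=http://:9080/ --initial-delay-seconds=60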


$ oc get all
NAME                    READY     STATUS    RESTARTS   AGE
pod/liberty01-1-mr66r   1/1       Running   0          2m

NAME                                DESIRED   CURRENT   READY     AGE
replicationcontroller/liberty01-1   1         1         1         3m

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/liberty01   ClusterIP   172.30.193.12   <none>        9080/TCP,9443/TCP   3m

NAME                                           REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/liberty01   1          1         1         config,image(open-liberty:latest)

NAME                                       DOCKER REPO                          TAGS      UPDATED
imagestream.image.openshift.io/liberty01   172.30.1.1:5000/sample01/liberty01   latest


# Expose service as route
$ oc expose service liberty01
route.route.openshift.io/liberty01 exposed

$ oc get route
NAME        HOST/PORT                                  PATH      SERVICES    PORT       TERMINATION   WILDCARD
liberty01   liberty01-sample01.192.168.99.102.nip.io             liberty01   9080-tcp                 None


# Open browser and access the app from the above route url
Go to liberty01-sample01.192.168.99.102.nip.io

# Test connection from internal pod: 
oc rsh `oc get pod -o jsonpath={.items[*].metadata.name}` curl liberty01:9080
# liberty01 = servicename
# 9080 = service port number


# 2. Create new app (using yaml)
--inprogress--
  • YAML file

User, Group, Project, RoleBinding (grant project-specific authority)

Scenario:
  1. Group: devgroup01, user: developer01, project: sample01
  2. Group: devgroup02, user: developer02, project: sample02
  3. Each group only has access to their own project. (devgroup01 only has access to sample01)
  4. To achieve No. 3, we need to create a RoleBinding per project that grants project-specific authority.
# Create group
$ oc adm groups new devgroup01
$ oc adm groups new devgroup02
$ oc get group devgroup01 devgroup02

# Create user
$ oc create user developer01
$ oc create user developer02
$ oc get user developer01 developer02

# Add user to the specific group
$ oc adm groups add-users devgroup01 developer01
$ oc adm groups add-users devgroup02 developer02
$ oc get group devgroup01 devgroup02

# If you want to remove users from group: 
# oc adm groups remove-users devgroup01 developer01

# (MINISHIFT only) Enable and apply the htpasswd identity provider addon
$ minishift addon enable htpasswd-identity-provider
$ minishift addon apply htpasswd-identity-provider
$ minishift addon list

# Set HTTP password to developer01 and developer02 user (htpasswd)
# (do this procedure on all Master nodes)
$ htpasswd -b /etc/origin/master/htpasswd developer01 [password-for-developer01]
$ htpasswd -b /etc/origin/master/htpasswd developer02 [password-for-developer02]
$ cat /etc/origin/master/htpasswd

# Login test with the created user account
$ oc login -u developer01
$ oc logout
$ oc login -u developer02
$ oc logout

# Create Project
$ oc new-project sample01
$ oc new-project sample02
$ oc projects | grep sample0
  • Create a RoleBinding to give group members access to a specific project
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata: 
  name: sample01-devgroup       # [projectName]-[groupName]
  namespace: sample01           # the namespace (project) this group should access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                   # the ClusterRole to grant (e.g. admin, edit, view)
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: devgroup01              # the group you want to give access to
  • Create the same yaml for the sample02-devgroup02 RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sample02-devgroup
  namespace: sample02
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: devgroup02
  • Create the RoleBinding
$ oc apply -f sample01-devgroup01.yaml
rolebinding.rbac.authorization.k8s.io/sample01-devgroup created

$ oc apply -f sample02-devgroup02.yaml
rolebinding.rbac.authorization.k8s.io/sample02-devgroup created
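
# The same access can also be granted without YAML, using oc adm policy
# (equivalent in effect; the generated RoleBinding names will differ from the ones above)
$ oc adm policy add-role-to-group admin devgroup01 -n sample01
$ oc adm policy add-role-to-group admin devgroup02 -n sample02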

$ oc get rolebinding -n sample01
NAME                    AGE
admin                   2d
sample01-devgroup       2m      -> This one
system:deployers        2d
system:image-builders   2d
system:image-pullers    2d

$ oc get rolebinding -n sample02
NAME                    AGE
admin                   2d
sample02-devgroup       2m      -> This one
system:deployers        2d
system:image-builders   2d
system:image-pullers    2d
  • Log in as each user and check that they can only view their own project.
$ oc login -u developer01
  Logged into "https://192.168.99.102:8443" as "developer01" using existing credentials.
  You have one project on this server: "sample01"
  Using project "sample01".
  
$ oc new-app --image-stream=openshift/open-liberty:latest --name=liberty1
--> Found image f9215bf (8 days old) in image stream "openshift/open-liberty" under tag "latest" for "openshift/open-liberty:latest"
    * This image will be deployed in deployment config "liberty1"
    * Ports 9080/tcp, 9443/tcp will be load balanced by service "liberty1"
      * Other containers can access this service through the hostname "liberty1"
--> Creating resources ...
    imagestreamtag.image.openshift.io "liberty1:latest" created
    deploymentconfig.apps.openshift.io "liberty1" created
    service "liberty1" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/liberty1'
    Run 'oc status' to view your app.
    
$ oc get pod -n sample01
  NAME               READY     STATUS    RESTARTS   AGE
  liberty1-1-rj94p   1/1       Running   0          43s

$ oc get pod -n sample02
  No resources found.
  Error from server (Forbidden): pods is forbidden: User "developer01" cannot list pods in the namespace "sample02": no RBAC  policy matched
  
$ oc login -u developer02
  Logged into "https://192.168.99.102:8443" as "developer02" using existing credentials.
  You have one project on this server: "sample02"
  Using project "sample02".
  
$ oc get pod -n sample02
  No resources found.       -> We didn't create any pod
  
$ oc get pod -n sample01
  No resources found.
  Error from server (Forbidden): pods is forbidden: User "developer02" cannot list pods in the namespace "sample01": no RBAC policy matched

-> This proves the concept: each user can only access their own project

Copy a specific file from inside a container to the local host

oc cp <namespace>/<podname>:/path/to/file.txt /localhost/directory/path/file.txt
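
# The reverse direction and multi-container pods work the same way (-c selects the container)
oc cp /localhost/directory/path/file.txt <namespace>/<podname>:/path/to/file.txt
oc cp <namespace>/<podname>:/path/to/file.txt /localhost/directory/path/file.txt -c <container-name>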

Collecting OpenShift Cluster, Master, Node, and Project information for the Support Team

  1. OpenShift Cluster
$ oc get node,hostsubnet -o wide
$ oc describe nodes 
$ oc get all,events -o wide -n default
  2. OpenShift Master
$ sosreport
$ oc get --raw /metrics --server https://`hostname` --config=/etc/origin/master/admin.kubeconfig
$ oc get --raw /metrics --server https://`hostname`:8444 --config=/etc/origin/master/admin.kubeconfig
$ ovs-ofctl -O OpenFlow13 dump-flows br0
  3. OpenShift Node
$ sosreport 
$ ovs-ofctl -O OpenFlow13 dump-flows br0
  • To gather metrics from a node, run the commands below on a Master host
$ NODE_NAME=<NODE_HOSTNAME>
$ oc get --raw /metrics --server https://$NODE_NAME:10250 --config=/etc/origin/master/admin.kubeconfig
$ oc get --raw /metrics/cadvisor --server https://$NODE_NAME:10250 --config=/etc/origin/master/admin.kubeconfig
$ oc describe node $NODE_NAME
  4. OpenShift Project
$ oc get all,events -o wide -n <project>
$ oc get all,events -o yaml -n <project>
  • Gather logs from specific pod
$ oc logs <pod-name> -n <project>
  5. OpenShift 3.11 diagnostics
  • OpenShift kubelet logs are managed through a systemd service rather than through oc. On enterprise OpenShift they can be gathered with journalctl -u atomic-openshift-node, and on OKD with journalctl -u origin-node. Docker or CRI-O logs can also be collected on each node if necessary.
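
For example, to capture the last hour of node and container-runtime logs on an enterprise OpenShift node (adjust the unit name to origin-node on OKD):

# journalctl -u atomic-openshift-node --since "1 hour ago" > node.log
# journalctl -u docker --since "1 hour ago" > docker.log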

This script will gather information from the system namespaces (kube, openshift, infra, and default):

export MGDIR=openshift-diag-$(date -Ihours)
export LOGLIMIT="--tail=1000"
mkdir -p $MGDIR
oc get nodes > $MGDIR/node-list.txt
oc describe nodes > $MGDIR/node-describe.txt
oc get namespaces > $MGDIR/namespaces.txt
oc get pods --all-namespaces -o wide > $MGDIR/all-pods-list.txt
for NS in `oc get ns | awk 'NR>1 && (/openshift/ || /kube/ || /infra/){ORS=" "; print $1}'` default; do
export NS=$NS; mkdir $MGDIR/$NS; echo gathering info from namespace $NS
oc get pods,svc,route,ing,secrets,cm,events -n $NS -o wide &> $MGDIR/$NS/all-list.txt
oc get pods -n $NS | awk 'NR>1{print "oc -n $NS describe pod "$1" > $MGDIR/$NS/"$1"-describe.txt && echo described "$1}' | bash
oc get pods -n $NS -o go-template='{{range $i := .items}}{{range $c := $i.spec.containers}}{{println $i.metadata.name $c.name}}{{end}}{{end}}' > $MGDIR/$NS/container-list.txt
awk '{print "oc -n $NS logs "$1" -c "$2" $LOGLIMIT -p > $MGDIR/$NS/"$1"_"$2"_previous.log && echo gathered previous logs of "$1"_"$2}' $MGDIR/$NS/container-list.txt | bash
awk '{print "oc -n $NS logs "$1" -c "$2" $LOGLIMIT > $MGDIR/$NS/"$1"_"$2".log && echo gathered logs of "$1"_"$2}' $MGDIR/$NS/container-list.txt | bash
oc get svc -n $NS | awk 'NR>1{print "oc -n $NS describe svc "$1" > $MGDIR/$NS/svc-describe-"$1".txt && echo described service "$1}' | bash
done
tar czf $MGDIR.tgz $MGDIR/

There will be some error messages regarding "previous terminated container ... not found"; these do not indicate any issues.

Targeted Mustgather (3.11)
  • Collect a sosreport from a master node and any other nodes that have specific issues:
# You may ignore the yum update
yum update sos 
sosreport
  • Log in to OpenShift as a cluster administrator. If you cannot log in, you may be able to use the installer's credentials from the "boot" node with this command:
export KUBECONFIG=/etc/origin/master/admin.kubeconfig

You can verify you are logged in correctly with oc whoami.

Create the initial directory and get some general information:

export SUPPDIR=support-mustgather-$(date -Ihours)
mkdir $SUPPDIR
oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig > $SUPPDIR/oc-raw-metrics.log
oc get nodes > $SUPPDIR/node-list.txt
oc describe nodes > $SUPPDIR/node-describe.txt
  • Gather information from the kube-system namespace, using the template below. Depending on the issue, information for other namespaces should also be collected (this list is incomplete):

    Problem                          Namespace to collect
    Any                              kube-system
    OpenShift router                 default
    OpenShift private registry       default
    OpenShift networking             openshift-sdn
    Multicloud Management            multicluster-endpoint
    IBM Cloud Automation Manager     services
export NSPACE=<namespace>
mkdir $SUPPDIR/$NSPACE
oc -n $NSPACE get all,events,secrets,cm,ing -o wide &> $SUPPDIR/$NSPACE/all-list.txt
oc get pods -n $NSPACE | awk 'NR>1{print "oc -n $NSPACE describe pod "$1" > $SUPPDIR/$NSPACE/"$1"-describe.txt && echo described "$1}' | bash
oc get pods -n $NSPACE -o go-template='{{range $i := .items}}{{range $c := $i.spec.containers}}{{println $i.metadata.name $c.name}}{{end}}{{end}}' > $SUPPDIR/$NSPACE/container-list.txt
awk '{print "oc -n $NSPACE logs "$1" -c "$2" --tail=1000 > $SUPPDIR/$NSPACE/"$1"_"$2".log && echo gathered logs of "$1"_"$2}' $SUPPDIR/$NSPACE/container-list.txt | bash

After the information is collected, you may pack the folder for easier transfer with this command (you will still need to also collect the sosreport separately):

tar czf $SUPPDIR.tgz $SUPPDIR/