oc can explain each object type definition using oc explain <object>.
Drill down into the object using JSON dot notation: spec.containers.
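For example, to drill into a Pod's container definition:
oc explain pod.spec.containers
oc explain pod.spec.containers.resources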
Create pull secret in OpenShift:
oc create secret generic <name> \
--from-file '.dockerconfigjson=<path-to-auth.json>' \
--type kubernetes.io/dockerconfigjson
Link service account to pull secret:
oc secrets link default <name> --for pull
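A worked example, assuming podman is already logged in to the registry and
using a hypothetical secret name quay-pull:
oc create secret generic quay-pull \
--from-file ".dockerconfigjson=${XDG_RUNTIME_DIR}/containers/auth.json" \
--type kubernetes.io/dockerconfigjson
oc secrets link default quay-pull --for pull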
Parent image:
FROM registry.access.redhat.com/rhscl/nodejs-6-rhel7
RUN mkdir -p /opt/app-root/
WORKDIR /opt/app-root
ONBUILD COPY package.json /opt/app-root
CMD [ "npm", "start" ]
Child image:
FROM mynodejs-base
RUN echo "Started Node.js server..."
On build, the child image will copy ./package.json from the child's source
context to /opt/app-root in the container (via the parent's ONBUILD COPY).
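A minimal sketch of the two-step build, assuming hypothetical tags and
Containerfile names:
podman build -t mynodejs-base -f Containerfile.base .
podman build -t mynodejs-app -f Containerfile.child . # ONBUILD COPY fires here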
Expose service metadata with an image label:
LABEL io.openshift.expose-services="8080:http"
Add anyuid to service account:
oc adm policy add-scc-to-user anyuid -z myserviceaccount
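If the service account doesn't exist yet, create it and attach it to the
workload first:
oc create serviceaccount myserviceaccount
oc set serviceaccount deployment/<name> myserviceaccount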
Create a ConfigMap from literals:
oc create configmap <name> \
--from-literal key1=value1 \
--from-literal key2=value2
Create a ConfigMap from a file:
oc create configmap <name> --from-file <path-to-file>
Create a secret from literals:
oc create secret generic <name> \
--from-literal key1=value1 \
--from-literal key2=value2
Create a secret declaratively:
apiVersion: v1
kind: Secret
metadata:
  name: <name>
type: Opaque
stringData:
  key1: value1
  key2: value2
NOTE: stringData is converted to base64-encoded data when applied.
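To confirm the encoding after apply (key1 is from the example above):
oc get secret <name> -o jsonpath='{.data.key1}' | base64 -d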
Inject a ConfigMap or secret as environment variables:
oc set env deployment/<name> --from configmap/<name> # ConfigMap
oc set env deployment/<name> --from secret/<name> # Secret
Mount a ConfigMap as a volume:
oc set volume deployment/<name> --add \
-t configmap \
-m <path-on-container> \
--name <name> \
--configmap-name <name>
Mount a secret as a volume:
oc set volume deployment/<name> --add \
-t secret \
-m <path-on-container> \
--name <name> \
--secret-name <name>
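Verify what was injected (both subcommands support listing):
oc set env deployment/<name> --list
oc set volume deployment/<name>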
Deploy from an existing container image:
oc new-app \
--name <name> \
--docker-image quay.io/image:latest
Common flags:
oc new-app \
--name <name> \
--as-deployment-config \
--build-env KEY=VALUE \
--env KEY=VALUE \
--context-dir <dir-path>
# --as-deployment-config: use a DeploymentConfig instead of a Deployment
# --build-env: environment variable for build time only
# --env: environment variable for build and run time
# --context-dir: directory inside the Git repo with the app code
FORMAT: image:tag~repo#branch
Example: php:7.3~https://github.com/repo#branch
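A concrete source-build invocation, with hypothetical repo details:
oc new-app --name myapp php:7.3~https://github.com/<user>/<repo>#<branch> --context-dir src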
oc scale dc/<name> --replicas=3
NOTE: Might be easier to oc edit dc/<name> and edit the number of replicas.
DeploymentConfig supports custom deployment methods. Look for .spec.strategy
in the DeploymentConfig object. After an update, force a rollout with
oc rollout latest dc/<name>.
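A sketch of the relevant stanza (Rolling is the default; the parameter values
shown are illustrative):
spec:
  strategy:
    type: Rolling # or Recreate
    rollingParams:
      maxSurge: 25%
      maxUnavailable: 25%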
Rollout latest deployment version:
oc rollout latest dc/<name>
Rollback to last successful deployment:
oc rollback dc/<name>
Reset deployment triggers after rollback:
oc set triggers dc/<name> --auto
Expose the internal registry with a default route:
oc edit config.imageregistry cluster -n openshift-image-registry
# Set spec.defaultRoute to true
NOTE: There is a patch command to do this, but patch is awkward outside of scripting.
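For scripting, the patch looks like:
oc patch config.imageregistry cluster --type merge -p '{"spec":{"defaultRoute":true}}'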
For minimum capabilities to push and pull (more than likely you want this):
oc policy add-role-to-user system:image-puller <username> -n <project>
oc policy add-role-to-user system:image-pusher <username> -n <project>
For enterprise use cases (grants more than above):
oc policy add-role-to-user registry-viewer <username> -n <project>
oc policy add-role-to-user registry-editor <username> -n <project>
Can also add these roles to groups, specifically for service accounts in a project:
oc policy add-role-to-group \
system:image-puller \
system:serviceaccounts:<project> \
-n <project>
oc import-image imagestream-name:latest \
--from quay.io/image:latest \
--confirm
NOTE: Omitting the tag on both the image stream and the external image will import all tags of that image from the external registry.
Image streams are namespace scoped. Sometimes image streams need a project
prefix. The full format is project/imagestream:tag.
In the BuildConfig manifest, add:
...
spec:
  successfulBuildsHistoryLimit: 5
  failedBuildsHistoryLimit: 5
...
oc set env bc/<name> BUILD_LOGLEVEL="4"
oc set triggers bc/<name> --from-image=<imagestream>:tag
NOTE: S2I-created containers will be rebuilt when the S2I builder image
changes. Example: when the php:latest image stream is updated, S2I
applications built from php:latest will be triggered.
oc set triggers bc/<name> --from-gitlab
oc set build-hook bc/<name> \
--post-commit \
--script="echo 'Hello World!'"
NOTE: This requires the base image to have /bin/sh. Probably safe to
assume most images have this.
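The hook can also run a binary directly instead of a shell script:
oc set build-hook bc/<name> --post-commit --command -- <command> <args>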
oc set env bc/<name> KEY="VALUE"
Scripts in the .s2i/bin directory of the source Git repo will override the
builder image scripts. This is typically done to wrap the builder image scripts
to perform some action at a given point in the build.
Set S2I scripts directory using Containerfile LABEL directive:
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
Scaffold a new S2I builder image:
s2i create <image-name> <directory>
Build an application image locally from source and a builder image:
s2i build <repo> <builder-image> <output-image-tag>
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: <template-name>
  labels:
    mylabel: value
parameters:
- description: <description>
  name: KEY
  value: VALUE # default value for the parameter
  required: true
objects:
- ... # K8s object 1
- ... # K8s object 2
Preferred:
oc new-app --file <template-file> \
-p KEY1=VALUE1 \
-p KEY2=VALUE2
Optional:
oc process -f <template-file> \
-p KEY1=VALUE1 \
-p KEY2=VALUE2 | oc create -f -
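To list a template's parameters before processing:
oc process -f <template-file> --parameters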
helm create <name> gets you 80% of the way to your goal.
Chart.yaml:
...
dependencies:
- name: <chart-name>
  version: <chart-version>
  repository: <repo-url>
...
Then install/update dependencies:
helm dependency update
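Then install or upgrade the release (hypothetical release name myrelease):
helm install myrelease ./<chart-dir>
helm upgrade myrelease ./<chart-dir>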
./templates/deployment.yaml:
...
env:
{{- range .Values.env }}
- name: {{ .name }}
  value: {{ .value }}
{{- end }}
...
values.yaml:
...
env:
- name: KEY1
  value: VALUE1
...
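To check that the range loop renders as intended without installing:
helm template myrelease ./<chart-dir>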
Directory layout:
./base
./base/kustomization.yaml
./base/<manifest>.yaml
./overlays
./overlays/dev
./overlays/dev/kustomization.yaml
./overlays/dev/<manifest>.yaml
./overlays/prod
./overlays/prod/kustomization.yaml
./overlays/prod/<manifest-patches>.yaml
./base/kustomization.yaml:
resources:
- deployment.yaml
- service.yaml
commonLabels:
  key1: value1
./overlays/{dev,prod}/kustomization.yaml:
bases:
- ../../base
patchesStrategicMerge:
- <manifest-patch>.yaml
oc apply -k <kustom-directory>
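To preview the rendered manifests before applying (on recent clients):
oc apply -k <kustom-directory> --dry-run=client -o yaml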
Liveness Probe:
oc set probe deployment <name> \
--liveness \
--get-url=http://:<port>
Readiness Probe:
oc set probe deployment <name> \
--readiness \
--get-url=http://:<port>
Use oc set probe --help to see other options.
A service is available via internal DNS at
<service-name>.<project>.svc.cluster.local. (You might be able to omit
.cluster.local.)
NOTE: This information isn't provided in an oc describe service <name>
response. Remember the format so you can point applications at each other!
To create a service that points to an external dependency:
oc create service externalname <service-name> --external-name <external-fqdn>
This can also be used to create aliases for internal services:
oc create svc externalname <alias-service-name> \
--external-name=<original-service-name>.<project>.svc.cluster.local
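A quick resolution check from any running pod (hypothetical pod name; assumes
getent is present in the image):
oc exec <pod> -- getent hosts <service-name>.<project>.svc.cluster.local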