Learn OpenShift

# Deploy app from GitHub

$ oc new-app --name version https://github.com/fahmifahim/DO101-apps#update-app --context-dir version

$ oc get all
        NAME                           READY   STATUS      RESTARTS   AGE
        pod/version-1-build            0/1     Completed   0          12m
        pod/version-6886df6f44-b6d6v   1/1     Running     0          10m

        NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
        service/version   ClusterIP   172.30.203.24   <none>        8080/TCP   12m

        NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
        deployment.apps/version   1/1     1            1           12m

        NAME                                 DESIRED   CURRENT   READY   AGE
        replicaset.apps/version-6886df6f44   1         1         1       10m
        replicaset.apps/version-b66c8c69c    0         0         0       12m

        NAME                                     TYPE     FROM             LATEST
        buildconfig.build.openshift.io/version   Source   Git@update-app   1

        NAME                                 TYPE     FROM          STATUS     STARTED          DURATION
        build.build.openshift.io/version-1   Source   Git@6459e50   Complete   12 minutes ago   1m37s

        NAME                                     IMAGE REPOSITORY                                                                             TAGS     UPDATED
        imagestream.image.openshift.io/version   default-route-openshift-image-registry.apps.ocp-ap3.prod.nextcle.com/fahmi-version/version   latest   10 minutes ago

$ oc get svc
        NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
        version   ClusterIP   172.30.203.24   <none>        8080/TCP   23m

$ oc expose svc/version
        route.route.openshift.io/version exposed

$ oc get routes
        NAME      HOST/PORT                                             PATH   SERVICES   PORT       TERMINATION   WILDCARD
        version   version-fahmi-version.apps.ocp-ap3.prod.nextcle.com          version    8080-tcp                 None

# Configuring the Horizontal Pod Autoscaler

  • Some applications receive a large number of concurrent requests only during certain periods, which makes it very difficult to size the number of pods up front before running the application. However, there are extra costs associated with running more pods than required when traffic is not at its peak.

  • Red Hat OpenShift Container Platform refers to the action of changing the number of pods for an application as scaling. Scaling up refers to increasing the number of pods for an application. Scaling down refers to decreasing that number. Scaling up allows the application to handle more client requests, and scaling down provides cost savings when the load goes down.

  • When scaling up an application, the OpenShift platform first deploys a new pod and then waits for the pod to be ready. Only after the new pod becomes available does the OpenShift platform configure the route to also send traffic to the new pod.

  • When scaling down, OpenShift reconfigures the route to stop sending traffic to the pod, and then deletes the pod.

$ oc autoscale dc/frontend --min=1 --max=5 --cpu-percent=80

The options are as follows:

dc/frontend
    Name of the application deployment configuration resource 

--min=1
    Minimum number of pods 

--max=5
    Maximum number of pods. HPA does not scale up the application beyond this limit, even if the load continues to increase. 

--cpu-percent=80
    Ideal average CPU utilization for each pod. If the global average CPU utilization is above that value, HPA starts new pods; if it is below that value, HPA deletes pods. See the example below.
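Once created, the autoscaler can be inspected with oc get hpa; a minimal sketch (the resource name and output values are illustrative):

$ oc get hpa frontend
        NAME       REFERENCE                   TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
        frontend   DeploymentConfig/frontend   2%/80%    1         5         1          3m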

# Creating Containerized Services

# Provisioning Containerized Services

# Managing Containers with Podman

  • Podman is an open source tool for managing containers and container images and for interacting with image registries. Key features include:
    • It uses the image format specified by the Open Container Initiative (OCI), which defines a standard, community-driven, non-proprietary image format.
    • It stores local images in the local file system, avoiding an unnecessary client/server architecture and daemons running on the local machine.
    • It follows the same command patterns as the Docker CLI, so there is no need to learn a new toolset.
    • Podman is compatible with Kubernetes.

# Fetching Container Images with Podman

$ sudo podman search rhel
		INDEX      NAME                            DESCRIPTION  STARS OFFICIAL AUTOMATED
		redhat.com registry.access.redhat.com/rhel This plat... 0
		
$ sudo podman pull rhel
		Trying to pull registry.access.redhat.com/rhel...Getting image source signatures
		Copying blob sha256: ...output omitted...
		 72.25 MB / 72.25 MB [======================================================] 8s
		Writing manifest to image destination
		Storing signatures
		699d44bc6ea2b9fb23e7899bd4023d3c83894d3be64b12e65a3fe63e2c70f0ef

$ sudo podman images
        REPOSITORY                        TAG      IMAGE ID       CREATED       SIZE
        registry.access.redhat.com/rhel   latest   699d44bc6ea2   4 days ago    214MB
$ docker search rhel
$ docker pull rhel
$ docker images

# Running Containers

  • Podman command options (see the interactive example below):
    • -t or --tty: allocates a pseudo-tty (pseudo-terminal) for the container.
    • -i or --interactive: keeps standard input open into the container.
    • -d or --detach: runs the container in the background (detached). Podman prints the container ID.
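For example, -i and -t combine to give an interactive shell inside a container; a minimal sketch using the rhel image pulled earlier (the prompt is illustrative):

$ sudo podman run -it rhel:latest /bin/bash
        bash-4.2# exit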
$ sudo podman run rhel:latest echo 'Hello!'
        Hello!

$ sudo podman run -d rhscl/httpd-24-rhel7:2.4-36.5
        ff4ec6d74e9b2a7b55c49f138e56f8bc46fe2a09c23093664fea7febc3dfa1b2
  • Exercise

    1. Creating a MySQL database instance
    $ sudo podman run --name mysql-basic \
            > -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
            > -e MYSQL_DATABASE=items -e MYSQL_ROOT_PASSWORD=r00tpa55 \
            > -d rhscl/mysql-57-rhel7:5.7-3.14
            Trying to pull ...output omitted...
            Copying blob sha256:e373541...output omitted...
            69.66 MB / 69.66 MB [===================================================] 8s
            Writing manifest to image destination
            Storing signatures
            92eaa6b67da0475745b2beffa7e0895391ab34ab3bf1ded99363bb09279a24a0
    • Verify containers
    $ sudo podman ps --format "{{.ID}} {{.Image}} {{.Names}}"
            92eaa6b67da0 registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7-3.14 mysql-basic
    • Access the container
    $ sudo podman exec -it mysql-basic /bin/bash
            bash-4.2$
    • Access the mysql database and put some entries
    bash-4.2$ mysql -uroot
            Welcome to the MySQL monitor.  Commands end with ; or \g.
            ...output omitted...
    mysql> 
    
    mysql> show databases;
            +--------------------+
            | Database           |
            +--------------------+
            | information_schema |
            | items              |
            | mysql              |
            | performance_schema |
            | sys                |
            +--------------------+
            5 rows in set (0.01 sec)
    
    mysql> use items;
            Database changed
    
    mysql> CREATE TABLE Projects (id int(11) NOT NULL,
        -> name varchar(255) DEFAULT NULL,
        -> code varchar(255) DEFAULT NULL,
        -> PRIMARY KEY (id));
            Query OK, 0 rows affected (0.01 sec)
    
    mysql> show tables;
            +---------------------+
            | Tables_in_items     |
            +---------------------+
            | Projects            |
            +---------------------+
            1 row in set (0.00 sec)
    
    mysql> insert into Projects (id, name, code) values (1,'DevOps','AAO180');
            Query OK, 1 row affected (0.02 sec)
    
    mysql> select * from Projects;
            +----+--------+--------+
            | id | name   | code   |
            +----+--------+--------+
            |  1 | DevOps | AAO180 |
            +----+--------+--------+
            1 row in set (0.00 sec)
    
    mysql> exit
            Bye
    bash-4.2$ exit
            exit
    2. Creating a web service app
    # Start HTTP-Apache container 
    $ sudo podman run -d -p 8080:80 --name httpd-basic redhattraining/httpd-parent:2.4
    
    # Test the container
    $ curl http://localhost:8080
    
    # Change the display message to HOLA!!
    $ sudo podman exec -it httpd-basic /bin/bash
        bash-4.4# 
        bash-4.4# echo "HOLA!!" > /var/www/html/index.html
        bash-4.4# exit
    
    # Test the container
    $ curl http://localhost:8080
    

# Managing Containers

# Managing the Life Cycle of Containers

# Container Life Cycle Management with Podman

  • Podman provides subcommands for container life cycle management (see the Container-Life-Cycle diagram)

  • Podman provides query subcommands (see the Podman-subcommand diagram)

# Creating container

$ sudo podman run rhscl/httpd-24-rhel7
        Trying to pull regist...httpd-24-rhel7:latest...Getting image source signatures
        Copying blob sha256:23113...b0be82
        72.21 MB / 72.21 MB [======================================================] 7s
        ...output omitted...AH00094: Command line: 'httpd -D FOREGROUND'

$ sudo podman run --name my-httpd-container rhscl/httpd-24-rhel7
        ...output omitted...AH00094: Command line: 'httpd -D FOREGROUND'

$ sudo podman run --name my-httpd-container -d rhscl/httpd-24-rhel7
        77d4b7b8ed1fd57449163bcb0b78d205e70d2314273263ab941c0c371ad56412

# Running command inside a container

$ sudo podman exec my-httpd-container cat /etc/hostname
        7ed6e671a600

$ sudo podman exec -l cat /etc/hostname
        7ed6e671a600

# Managing Containers

  • podman ps: Lists running containers
$ sudo podman ps
        CONTAINER ID  IMAGE                 COMMAND     CREATED  STATUS  PORTS   NAMES
        77d4b7b8ed1f  rhscl/httpd-24-rhel7  "httpd..."  ...ago   Up...   80/tcp  my-htt...

# List all containers including the stopped ones
$ sudo podman ps -a
        CONTAINER ID  IMAGE        COMMAND     CREATED  STATUS        PORTS  NAMES
        4829d82fbbff  rhscl/httpd-24-rhel7  "httpd..."  ...ago   Exited (0)...        my-httpd...
  • podman inspect: lists metadata about a running or stopped container.
$ sudo podman inspect my-httpd-container
        [
        {
        "Id": "980e45...76c8be",
        ...output omitted...
        "NetworkSettings": {
                "Bridge": "",
                "EndpointID": "483fc9...5d801a",
                "Gateway": "172.17.42.1",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "HairpinMode": false,
                "IPAddress": "172.17.0.9",
        ...output omitted...

$ sudo podman inspect -f '{{ .NetworkSettings.IPAddress }}' my-httpd-container
        172.17.0.9
  • podman stop: stops a running container gracefully.
    podman stop -a: stops all running containers.

  • podman kill: sends a signal (SIGKILL by default) to the main process of a running container.

  • podman restart: restarts a stopped or running container.

  • podman rm: deletes a stopped container from local storage.
    podman rm -a: deletes all stopped containers.

# Managing Container Images

# Accessing Registries

# Public Registries

  • A public registry is available to all developers for pulling the provided container images

# Private Registries

  • Organization privacy and secret protection
  • Legal restrictions and laws
  • Avoidance of publishing images in development

# Configuring Registries in Podman

  • The registry configuration file for Podman is located at /etc/containers/registries.conf
[registries.search]
registries = ["registry.access.redhat.com", "quay.io"]

Use an FQDN and port number to identify a registry. The default port number for a registry is 5000; if the registry uses a different port, it must be specified. Indicate port numbers by appending a colon (:) and the port number after the FQDN, as in the example below.
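For example, a registry listening on a non-default port (the hostname is hypothetical):

[registries.search]
registries = ["registry.example.com:5000", "registry.access.redhat.com"]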

  • Secure connections to a registry require a trusted certificate.
  • To enable insecure connections, add the registry name to the registries entry in [registries.insecure] section of /etc/containers/registries.conf :
[registries.insecure]
registries = ['localhost:5000']

# Accessing Registries

  • Searching for Images in Registries
$ sudo podman search [OPTIONS] <term>
$ sudo podman search docker.io/nginx -f is-official=true

[OPTIONS]
--limit <number> : Limits the number of listed images per registry.
--filter <filter=value>
    stars=<number> : Show only images with at least this number of stars
    is-automated=<true|false> : Show only images automatically built.
    is-official=<true|false> : Show only images flagged as official.
--tls-verify <true|false> : Enables or disables HTTPS certificate validation for all used registries. Defaults to true.

# Registry Authentication

  • Some container image registries require access authorization.
$ sudo podman login -u username -p password registry.access.redhat.com
        Login Succeeded!

# Pulling Images

$ sudo podman pull [OPTIONS] [REGISTRY[:PORT]/]NAME[:TAG]
$ sudo podman pull quay.io/bitnami/nginx

If the image name does not include a registry name, Podman searches for a matching container image using the registries listed in /etc/containers/registries.conf. Podman searches the registries in the same order they appear in the configuration file; see the example below.
If no tag is specified, latest is used as the default tag.
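For example, assuming registry.access.redhat.com is listed first in registries.conf, the following two commands can resolve to the same image:

$ sudo podman pull rhscl/mysql-57-rhel7
$ sudo podman pull registry.access.redhat.com/rhscl/mysql-57-rhel7:latest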

# Manipulating Container Images

  • Developers can save a container image to a .tar file
  • Developers can publish (push) a container image to an image registry

# Saving and Loading Images

  • Existing images from the Podman local storage can be saved to a .tar file
  • The generated file is not a regular TAR archive; it contains image metadata and preserves the original image layers.
$ sudo podman save [-o FILE_NAME] IMAGE_NAME[:TAG]
$ sudo podman save -o mysql.tar registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
  • Load or restore a container image with the following command.
$ sudo podman load [-i FILE_NAME]
$ sudo podman load -i mysql.tar

# Deleting Images

  • Downloaded images are kept in local storage. However, images can become outdated and should be replaced regularly.
  • Images are not updated automatically.
  • Delete an image from local storage by running:
$ sudo podman rmi [OPTIONS] IMAGE [IMAGE...]
  • Deleting all Images
$ sudo podman rmi -a

# Modifying Images

# Tagging Images

$ sudo podman tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
$ sudo podman tag mysql-custom devops/mysql
$ sudo podman tag mysql-custom devops/mysql:snapshot
$ sudo podman tag mysql-custom:1.1 devops/mysql:1.2

Podman assumes the latest tag if the tag value is not specified.

  • Removing Tags from Images
$ sudo podman rmi devops/mysql:snapshot

# Publishing Images to a Registry

$ sudo podman push [OPTIONS] IMAGE [DESTINATION]
$ sudo podman push quay.io/bitnami/nginx

# Exercise

Review your knowledge, now!
  1. Use the podman search command to locate the docker.io/nginx official image and pull it into your local file system. Ensure that the image has been successfully retrieved.
$ sudo podman search docker.io/nginx -f is-official=true
        INDEX      NAME                     DESCRIPTION               STARS  OFFICIAL...
        docker.io  docker.io/library/nginx  Official build of Nginx.  12022  [OK]    ...

$ sudo podman pull docker.io/nginx:1.17
        Trying to pull docker.io/nginx:1.17...
        ...output omitted...
        Storing signatures
        42b4762643dcc9bf492b08064b55fef64942f055f0da91289a8abf93c6d6b43c

$ sudo podman images nginx
        REPOSITORY                TAG    IMAGE ID       CREATED      SIZE
        docker.io/library/nginx   1.17   42b4762643dc   8 days ago   130MB
  2. Start a new container using the Nginx image, according to the following specifications.
    - Name: official-nginx
    - Run as daemon: yes
    - Container image: nginx
    - Port forward: from host port 8080 to container port 80.
$ sudo podman run --name official-nginx -d -p 8080:80 docker.io/nginx:1.17
        12dbc348c7dcf8560604a44b11926712f018b0ac44063d34b05704fb8447316f

# Check the running container
$ sudo podman ps 
  3. Log in to the container using the exec subcommand. Replace the contents of the index.html file with "HELLO WORLD". The web server directory is located at /usr/share/nginx/html.
    After the file has been updated, exit the container and use the curl command to access the web page.
$ sudo podman exec -it official-nginx /bin/bash
        root@12dbc348c7dc:/#

# Update the index.html file located at /usr/share/nginx/html.
root@12dbc348c7dc:/# echo 'HELLO WORLD' > /usr/share/nginx/html/index.html

# Exit the container.
root@12dbc348c7dc:/# exit

# Use the curl command to ensure that the index.html file is updated.
$ curl localhost:8080
or
$ curl 127.0.0.1:8080
        HELLO WORLD

# You may also access the container IP directly
$ sudo podman inspect official-nginx | grep IPAddress
        "IPAddress": "10.88.0.3"
$ curl 10.88.0.3:80
        HELLO WORLD
  4. Stop the running container and commit your changes to create a new container image. Give the new image a name of test/mynginx and a tag of v1.0-SNAPSHOT. Use the following specifications:
  • Image name: test/mynginx
  • Image tag: v1.0-SNAPSHOT
  • Author name: your name
# Stop the official-nginx container.
$ sudo podman stop official-nginx
        12dbc348c7dcf8560604a44b11926712f018b0ac44063d34b05704fb8447316f

# Check container status
$ sudo podman ps -a
        # The container status should be Exited

# Commit your changes to a new container image. Use your name as the AUTHOR of the changes.
$ sudo podman commit -a 'YOUR NAME' official-nginx test/mynginx:v1.0-SNAPSHOT
        Getting image source signatures
        ...output omitted...
        Storing signatures
        4a13dd08d175a6095e6462e52431be1577ca931fcd1aea139b71346bfc7f9c76

# List the available container images to locate your newly created image.
$ sudo podman images
        REPOSITORY                TAG             IMAGE ID      CREATED        SIZE
        localhost/test/mynginx    v1.0-SNAPSHOT   4a13dd08d175  5 minutes ago  130MB
        docker.io/nginx           1.17            42b4762643dc  8 days ago     130MB
  5. Start a new container using the updated Nginx image, according to the following specifications.
  • Name: official-nginx-dev
  • Run as daemon: yes
  • Container image: test/mynginx:v1.0-SNAPSHOT
  • Port forward: from host port 8080 to container port 80.
$ sudo podman run --name official-nginx-dev -d -p 8080:80 test/mynginx:v1.0-SNAPSHOT
$ sudo podman ps
  6. Log in to the container using the exec subcommand to introduce a final change. Replace the contents of the /usr/share/nginx/html/index.html file with "UNDER CONSTRUCTION".
    After the file has been updated, exit the container and use the curl command to verify the changes.
# Log in to the container
$ sudo podman exec -it official-nginx-dev /bin/bash
        root@12dbc348c7dc:/#

# Update the index.html file located at /usr/share/nginx/html. The file should read UNDER CONSTRUCTION.
        root@12dbc348c7dc:/# echo 'UNDER CONSTRUCTION' > /usr/share/nginx/html/index.html

# Exit the container.
        root@12dbc348c7dc:/# exit

# Use the curl command to ensure that the index.html file is updated.
$ curl localhost:8080
or
$ curl 127.0.0.1:8080
        UNDER CONSTRUCTION

# Access the container directly
$ sudo podman inspect official-nginx-dev | grep IPAddress
$ curl [IPAddress]:80
        UNDER CONSTRUCTION
  7. Stop the running container and commit your changes to create the final container image. Give the new image a name of test/mynginx and a tag of v1.0. Use the following specifications:
  • Image name: test/mynginx
  • Image tag: v1.0
  • Author name: your name
# Stop the official-nginx-dev container.
$ sudo podman stop official-nginx-dev
        e169c5fc8c3ed5c024af94aec752fa565650f9d07b95bb009329874801d859a1

# Check container status. The container should be Exited
$ sudo podman ps
$ sudo podman ps -a

# Commit your changes to a new container image. Use your name as the author of the changes.
$ sudo podman commit -a 'Your Name' official-nginx-dev test/mynginx:v1.0
        Getting image source signatures
        ...output omitted...
        Storing signatures
        4a13dd08d175a6095e6462e52431be1577ca931fcd1aea139b71346bfc7f9c76

# List the available container images in order to locate your newly created image.
$ sudo podman images
        REPOSITORY                     TAG            IMAGE ID      CREATED        SIZE
        localhost/test/mynginx        v1.0           892569a87e3f  7 seconds ago  130MB
        localhost/test/mynginx        v1.0-SNAPSHOT  0857d81f5a4b  4 minutes ago  130MB
        docker.io/nginx                1.17           42b4762643dc  8 days ago     130MB
  8. Remove the stopped container and the development image test/mynginx:v1.0-SNAPSHOT from local image storage.
# Despite being stopped, the official-nginx-dev container is still present. Display it with the podman ps command and the -a flag.
$ sudo podman ps -a --format="{{.ID}} {{.Names}} {{.Status}}"
        e169c5fc8c3e   official-nginx-dev   Exited (0) 9 minutes ago
        ccf046c2f87d   official-nginx       Exited (0) 12 minutes ago

# Remove the container.
$ sudo podman rm official-nginx-dev 
        e169c5fc8c3ed5c024af94aec752fa565650f9d07b95bb009329874801d859a1

# Verify that the container is deleted.
$ sudo podman ps -a --format="{{.ID}} {{.Names}} {{.Status}}"
        ccf046c2f87d   official-nginx       Exited (0) 12 minutes ago

# Remove the test/mynginx:v1.0-SNAPSHOT image.
$ sudo podman rmi test/mynginx:v1.0-SNAPSHOT
        Untagged: localhost/test/mynginx:v1.0-SNAPSHOT

# Verify that the image is no longer present by listing all images using the podman images command.
$ sudo podman images
        REPOSITORY                TAG      IMAGE ID       CREATED          SIZE
        localhost/test/mynginx    v1.0     892569a87e3f   13 minutes ago   130MB
        docker.io/library/nginx   1.17     42b4762643dc   8 days ago       130MB
  9. Use the image tagged test/mynginx:v1.0 to create a new container with the following specifications:
  • Container name: my-nginx
  • Run as daemon: yes
  • Container image: test/mynginx:v1.0
  • Port forward: from host port 8280 to container port 80
    On the workstation, use the curl command to access the web server on port 8280.
$ sudo podman run -d --name my-nginx -p 8280:80 test/mynginx:v1.0
        c1cba44fa67bf532d6e661fc5e1918314b35a8d46424e502c151c48fb5fe6923

# Use the curl command to ensure that the index.html page is available and returns the custom content.
$ curl localhost:8280
$ curl 127.0.0.1:8280
        UNDER CONSTRUCTION

# Creating Custom Container Images

# Designing Custom Container Images

# Dockerfile and S2I (Source to Image)

  • A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build (or podman build), users can create an automated build that executes several command-line instructions in succession.

  • S2I
    - S2I provides an alternative to Dockerfiles. S2I uses the following process to build a custom container image:
    1. Start a container from a base container image called the builder image, which includes a programming language runtime and essential development tools such as compilers and package managers.
    2. Fetch the application source code, usually from a Git server, and send it to the container.
    3. Build the application binary files inside the container.
    4. Save the container, after some clean up, as a new container image, which includes the programming language runtime and the application binaries.

# Building Custom Container Images with Dockerfiles

# Building Base Containers

  1. Create a Working Directory
    - This is a directory containing all files needed to build the image. For security reasons, avoid using the root directory (/) as the working directory.
  2. Write Dockerfile
# Comment
INSTRUCTION
  3. Build the image with Podman
$ podman build -t NAME:TAG DIR
  • DIR : path to the working directory, which must include the Dockerfile. It can be the current directory, designated by a dot (.)
  • NAME:TAG is the name and tag given to the new image. If TAG is not specified, the default is latest. See the example below.
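For example, building an image from the Dockerfile in the current directory (name and tag are illustrative):

$ sudo podman build -t test/myimage:1.0 .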

# Write Dockerfile

Sample of Dockerfile:

# This is a comment line        -->1
FROM ubi7/ubi:7.7               -->2
LABEL description="This is a custom httpd container image" -->3
MAINTAINER Tim Robert <trobert@xyz.com>         -->4
RUN yum install -y httpd                        -->5
EXPOSE 80                                       -->6
ENV LogLevel "info"                             -->7
ADD http://someserver.com/filename.pdf /var/www/html -->8
COPY ./src/ /var/www/html/                      -->9
USER apache                                     -->10
ENTRYPOINT ["/usr/sbin/httpd"]  -->11
CMD ["-D", "FOREGROUND"]        -->12

1 : Lines that begin with # are comments.
2 : The FROM instruction declares that the new container image extends ubi7/ubi:7.7 container base image.
3 : The LABEL is responsible for adding generic metadata to an image. A LABEL is a simple key-value pair.
4 : MAINTAINER indicates the Author field of the generated container image's metadata.
5 : RUN executes commands in a new layer on top of the current image. The shell to execute commands is /bin/sh.
6 : EXPOSE indicates that the container listens on the specified network port at runtime. The EXPOSE instruction defines metadata only; it does not make ports accessible from the host. The -p option in the podman run command exposes container ports from the host.
7 : ENV is responsible for defining environment variables that are available in the container.
8 : ADD instruction copies files or folders from a local or remote source and adds them to the container's file system. If used to copy local files, those must be in the working directory. ADD instruction unpacks local .tar files to the destination image directory.
9 : COPY copies files from the working directory and adds them to the container's file system. It is not possible to copy a remote file using its URL with this Dockerfile instruction.
10 : USER specifies the username or the UID to use when running the container image for the RUN, CMD, and ENTRYPOINT instructions.
11 : ENTRYPOINT specifies the default command to execute when the image runs in a container. If omitted, the default ENTRYPOINT is /bin/sh -c.
12 : CMD provides the default arguments for the ENTRYPOINT instruction.
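Because CMD only supplies default arguments to ENTRYPOINT, any arguments given on the command line replace CMD; a sketch, assuming the Dockerfile above was built as test/httpd:

# Runs the default command: /usr/sbin/httpd -D FOREGROUND
$ sudo podman run test/httpd

# Replaces CMD, running /usr/sbin/httpd -X (single-process debug mode) instead
$ sudo podman run test/httpd -X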

# Layering Image

  • Each instruction in a Dockerfile creates a new image layer. Having too many instructions in a Dockerfile causes too many layers, resulting in large images.
  • Sample of heavy layers Dockerfile:
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms"
RUN yum update -y
RUN yum install -y httpd
  • Simplified into one instruction
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms" && yum update -y && yum install -y httpd
  • Improve readability with line continuations
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms" && \
    yum update -y && \
    yum install -y httpd
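You can verify the effect of consolidating instructions with the podman history subcommand, which lists one row per image layer (the image name is illustrative):

$ sudo podman history test/apache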

# Exercise

Review your knowledge, now!
  1. Create a new Dockerfile with the following specs:
  • Use UBI 7.7 as a base image by adding the following FROM instruction at the top of the new Dockerfile
  • Include the MAINTAINER instruction to set the Author field in the new image. Replace the values to include your name and email address
  • Add the following LABEL instruction to add description metadata to the new image
  • Add a RUN instruction with a yum install command to install Apache on the new container
  • Add a RUN instruction to replace contents of the default HTTPD home page
  • EXPOSE port 80, so that the container will listen on this port

NOTE
The EXPOSE instruction does not actually make the specified port available to the host; rather, the instruction serves as metadata about which ports the container is listening on.

  • Use ENTRYPOINT instruction to set httpd as the default entry point:
# Create a new folder and the Dockerfile
mkdir /test/mydocker
vi /test/mydocker/Dockerfile

# Create your Dockerfile
FROM ubi7/ubi:7.7
MAINTAINER Tim Robert <trobert@tmail.com>
LABEL description="This is a custom image for Ubi"
RUN yum install -y httpd && \
    yum clean all
RUN echo "Hello! This is a customized docker" > /usr/share/httpd/noindex/index.html
EXPOSE 80
ENTRYPOINT ["httpd", "-D", "FOREGROUND"]
  2. Build and verify the Apache container image.
  • Use the following commands to create a basic Apache container image using the newly created Dockerfile:
$ sudo podman build --layers=false -t test/apache /test/mydocker/

        STEP 1: FROM ubi7/ubi:7.7
        Getting image source signatures
        Copying blob sha256:...output omitted...
        71.46 MB / 71.46 MB [=====================================================] 18s
        ...output omitted...
        Storing signatures
        STEP 2: MAINTAINER Tim Robert <trobert@tmail.com>
        STEP 3: LABEL description="This is a custom image for Ubi"
        STEP 4: RUN yum install -y httpd &&     yum clean all
        Loaded plugins: ovl, product-id, search-disabled-repos, subscription-manager
        ...output omitted...
        Complete!
        STEP 5: RUN echo "Hello! This is a customized docker" > /usr/share/httpd/noindex/index.html
        STEP 6: EXPOSE 80
        STEP 7: ENTRYPOINT ["httpd", "-D", "FOREGROUND"]
        ERRO[0109] HOSTNAME is not supported for OCI image format...output omitted...
        STEP 8: COMMIT ...output omitted... localhost/test/apache:latest
        Getting image source signatures
        ...output omitted...
        Storing signatures
        --> 190a...95c5

$ sudo podman images
        REPOSITORY                          TAG     IMAGE ID      CREATED        SIZE
        localhost/test/apache              latest  c69affe9d93b  33 seconds ago 247MB
        registry.access.redhat.com/ubi7/ubi latest  6fecccc91c83  4 weeks ago    215MB

NOTE Podman creates many anonymous intermediate images during the build process. They are not listed unless the -a option is used. Use the --layers=false option of the build subcommand to instruct Podman to delete intermediate images.

  3. Run the Apache Container
$ sudo podman run --name test-apache -d -p 10080:80 test/apache
$ sudo podman ps
$ curl localhost:10080
        Hello! This is a customized docker

# Lab Exercise

Review your knowledge, now!
  1. Create a new Dockerfile with the following specs:
  • The base image is ubi7/ubi:7.7
  • Sets the desired author name and email in the MAINTAINER part
  • Sets the environment variable PORT as 8080
  • Install Apache (httpd)
  • Change the Apache configuration file /etc/httpd/conf/httpd.conf to listen on port 8080 instead of the default 80
  • Change ownership of /etc/httpd/logs and /run/httpd to the apache user and group
  • Expose the value set in the PORT environment variable
  • Copy the contents of the src/ folder to the Apache Document Root (/var/www/html) inside the container
  • Start the Apache httpd daemon in the foreground using:
httpd -D FOREGROUND
# Create a new folder and the Dockerfile
mkdir /test/myapache
vi /test/myapache/Dockerfile

# Create your Dockerfile
FROM ubi7/ubi:7.7
MAINTAINER TestUser <testuser@mail.com>
ENV PORT 8080
RUN yum install -y httpd && \
    yum clean all
RUN sed -ri -e "/^Listen 80/c\Listen ${PORT}" /etc/httpd/conf/httpd.conf && \
    chown -R apache:apache /etc/httpd/logs && \
    chown -R apache:apache /run/httpd
USER apache
EXPOSE ${PORT}
COPY ./src/ /var/www/html
ENTRYPOINT ["httpd", "-D", "FOREGROUND"]
  2. Build the custom Apache image with the name test/custom-apache
$ sudo podman build -t test/custom-apache /test/myapache/
$ sudo podman images
  3. Run the Apache Container
  • Name: app-container
  • Container image: test/custom-apache
  • Port forward: from host port 20080 to container port 8080
  • Run as daemon: yes
$ sudo podman run --name app-container -d -p 20080:8080 test/custom-apache
$ sudo podman ps
$ curl localhost:20080

# Red Hat OpenShift: Containers & Kubernetes

# Creating Kubernetes Resources

# RHOCP Command-line

  • Login
oc login <clusterURL>
  • Port forwarding
    Sometimes developers and system administrators need special network access to a container that application users would not need.
    OpenShift provides the oc port-forward command for forwarding a local port to a pod port. This is different from accessing a pod through a service resource:
  1. The port-forwarding mapping exists only in the workstation where the oc client runs, while a service maps a port for all network users.
  2. A service load-balances connections to potentially multiple pods, whereas a port-forwarding mapping forwards connections to a single pod.
$ oc port-forward [pod-name] [local-port]:[pod-port]
$ oc port-forward mysql-openshift-1-glqrp 3306:3306

When running this command, be sure to leave the terminal window open. Closing the window or canceling the process stops the port mapping.

  • Creating New Applications
# Create new app. Create deployment (in OCP4.5)
$ oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql

# Create new app. Create deployment config
$ oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql --as-deployment-config

# Specify image from private Docker image registry
$ oc new-app --docker-image=myregistry.com/mycompany/myapp --name=myapp --as-deployment-config
$ oc new-app --docker-image=registry.access.redhat.com/rhscl/mysql-57-rhel7:latest --name=mysql-openshift 

# Creates application based on source code from a Git repository
$ oc new-app https://github.com/openshift/ruby-hello-world --name=ruby-hello --as-deployment-config
  • Get all resources
$ oc get all
        NAME       DOCKER REPO                              TAGS      UPDATED
        is/nginx   172.30.1.1:5000/basic-kubernetes/nginx   latest    About an hour ago

        NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
        dc/nginx   1          1         1         config,image(nginx:latest)

        NAME         DESIRED   CURRENT   READY     AGE
        rc/nginx-1   1         1         1         1h

        NAME        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
        svc/nginx   172.30.72.75   <none>        80/TCP,443/TCP   1h

        NAME               READY     STATUS    RESTARTS   AGE
        po/nginx-1-ypp8t   1/1       Running   0          1h
  • Get as yaml
$ oc get RESOURCE_TYPE RESOURCE_NAME -o yaml
$ oc get pod pod-name-1 -o yaml
  • Describe resources
$ oc describe RESOURCE_TYPE RESOURCE_NAME
  • Edit resource
$ oc edit RESOURCE_TYPE RESOURCE_NAME
$ oc edit pod pod-name-1
  • Delete resource
$ oc delete RESOURCE_TYPE RESOURCE_NAME
$ oc delete pod pod-name-1
  • Execute commands inside a container
$ oc exec POD_NAME [options] -- COMMAND
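For example, listing the environment variables inside a running pod (the pod name is hypothetical):

$ oc exec mysql-openshift-1-glqrp -- env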

# Exercise: Deploy a database server on OpenShift

Here is what you are going to do (see the port-forward diagram):

Let's get it done!
  1. Create a new application from the rhscl/mysql-57-rhel7:
  • Create your own project
  • Application name = mysql-openshift
  • Specify mysql basic environment variable (user, pwd, and rootpwd)
  • Deploy as deployment config
  2. Verify that the MySQL pod was successfully created and view some details about it.
  • Expose the service so that outside clients can access the pod
  3. Connect to the MySQL database server and verify that the database was created successfully. Use local port 3306 and container port 3306.
1. Create a new application from the rhscl/mysql-57-rhel7:  

# Login to OCP
$ oc login -u OCP-USER -p OCP-PWD

# Create project
$ oc new-project sample01

# Create new application
$ oc new-app --as-deployment-config \
> --docker-image=registry.access.redhat.com/rhscl/mysql-57-rhel7:latest \
> --name=mysql-openshift \
> -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 -e MYSQL_DATABASE=testdb \
> -e MYSQL_ROOT_PASSWORD=r00tpa55
        --> Found Docker image b48e700 (5 weeks old) from registry.access.redhat.com for "registry.access.redhat.com/rhscl/mysql-57-rhel7:latest"
        ...output omitted...
        --> Creating resources ...
        imagestream.image.openshift.io "mysql-openshift" created
        deploymentconfig.apps.openshift.io "mysql-openshift" created
        service "mysql-openshift" created
        --> Success
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
        'oc expose svc/mysql-openshift'
        Run 'oc status' to view your app.

$ oc status
        In project youruser-mysql-openshift on server https://api.cluster.lab.example.com:6443
        ...output omitted...

$ oc get pods -o=wide
        NAME                      READY  STATUS   ...  NODE
        mysql-openshift-1-glqrp   1/1    Running  ...  ip-10-0-148-115.ec2.internal

$ oc describe pod mysql-openshift-1-glqrp
        Name:               mysql-openshift-1-glqrp
        Namespace:          youruser-mysql-openshift
        Priority:           0
        PriorityClassName:  <none>
        Node:               ip-10-0-148-115.ec2.internal/10.0.148.115
        Start Time:         Fri, 15 Feb 2019 02:14:34 +0000
        Labels:             app=mysql-openshift
                        deployment=mysql-openshift-1
                        deploymentconfig=mysql-openshift
        Annotations:        openshift.io/deployment-config.latest-version: 1
                        openshift.io/deployment-config.name: mysql-openshift
                        openshift.io/deployment.name: mysql-openshift-1
                        openshift.io/generated-by: OpenShiftNewApp
                        openshift.io/scc: restricted
        Status:             Running
        IP:             10.129.0.85
        ...output omitted...

$ oc get svc
        NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
        mysql-openshift   ClusterIP   172.30.114.39   <none>        3306/TCP   6m

$ oc describe service mysql-openshift
        Name:              mysql-openshift
        Namespace:         youruser-mysql-openshift
        Labels:            app=mysql-openshift
        Annotations:       openshift.io/generated-by: OpenShiftNewApp
        Selector:          app=mysql-openshift,deploymentconfig=mysql-openshift
        Type:              ClusterIP
        IP:                172.30.114.39
        Port:              3306-tcp  3306/TCP
        TargetPort:        3306/TCP
        Endpoints:         10.129.0.85:3306
        Session Affinity:  None
        Events:            <none>

$ oc describe dc mysql-openshift
        Name:              mysql-openshift
        Namespace:         youruser-mysql-openshift
        Created:           15 minutes ago
        Labels:            app=mysql-openshift
        ...output omitted...
        Deployment #1 (latest):
                Name:		mysql-openshift-1
                Created:	15 minutes ago
                Status:		Complete
                Replicas:	1 current / 1 desired
                Selector:	app=mysql-openshift,deployment=mysql-openshift-1,deploymentconfig=mysql-openshift
                Labels:		app=mysql-openshift,openshift.io/deployment-config.name=mysql-openshift
                Pods Status:	1 Running / 0 Waiting / 0 Succeeded / 0 Failed
        ...output omitted...

$ oc expose service mysql-openshift
        route.route.openshift.io/mysql-openshift exposed

$ oc get routes
        NAME            HOST/PORT                                   ... PORT
        mysql-openshift mysql-openshift-youruser-mysql-openshift... ... 3306-tcp
  • Connect to the MySQL container
# Configure port forwarding between the workstation and the database pod running on OpenShift, using port 3306. The terminal hangs after the command executes.
$ oc get pod
        mysql-openshift-1-glqrp

$ oc port-forward mysql-openshift-1-glqrp 3306:3306
        Forwarding from 127.0.0.1:3306 -> 3306
        Forwarding from [::1]:3306 -> 3306

# Connect to MySQL server using MySQL client
$ mysql -uuser1 -pmypa55 --protocol tcp -h localhost
        Welcome to the MariaDB monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.6.34 MySQL Community Server (GPL)

        Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        MySQL [(none)]>
        MySQL [(none)]> show databases;
                +--------------------+
                | Database           |
                +--------------------+
                | information_schema |
                | testdb             |
                +--------------------+
                2 rows in set (0.00 sec)


        MySQL [(none)]> exit
                Bye


# Creating Routes

# Working with Routes

Routes

  • A route connects a public-facing IP address and DNS host name to an internal-facing service IP.
  • A route uses the Service resource to find the endpoints (containers); that is, the ports exposed by the service.
  • The router service uses HAProxy as the default implementation.
  • Unlike Service resources, which use selectors to link to pod resources containing specific labels, a Route links directly to the service resource name.

# Creating Routes

$ oc expose service [service-name] --name [route-name]
$ oc expose service [service-name]
$ oc create -f [route-create-yaml-file]
  • By default, routes created by oc expose generate DNS names of the form:
RouteName-ProjectName.DefaultDomain

For example: 
routename: mysqlroute
projectname: sample01
DefaultDomain: example.com
Route = mysqlroute-sample01.example.com
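For oc create -f, a minimal Route manifest might look like this (a sketch; names, host, and port are illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysqlroute
spec:
  host: mysqlroute-sample01.example.com
  to:
    kind: Service
    name: mysql-openshift
  port:
    targetPort: 3306-tcp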

# Leveraging the Default Routing Service

  • Inspect the router pod
$ oc get pod --all-namespaces -l app=router
        NAMESPACE          NAME                            READY  STATUS   RESTARTS  AGE
        openshift-ingress  router-default-746b5cfb65-f6sdm 1/1    Running  1         4d
  • The default domain is configured in the ROUTER_CANONICAL_HOSTNAME environment variable inside the router pod
$ oc describe pod router-default-746b5cfb65-f6sdm
        Name:               router-default-746b5cfb65-f6sdm
        Namespace:          openshift-ingress
        ...output omitted...
        Containers:
        router:
        ...output omitted...
        Environment:
        STATS_PORT:                 1936
        ROUTER_SERVICE_NAMESPACE:   openshift-ingress
        DEFAULT_CERTIFICATE_DIR:    /etc/pki/tls/private
        ROUTER_SERVICE_NAME:        default
        ROUTER_CANONICAL_HOSTNAME:  apps.cluster.lab.example.com
        ...output omitted...

# Exercise: Expose service as a Route

  • Here is what you are going to do (see the routes diagram):
Let's get it done!
  1. Create a new PHP application using Source-to-Image from the php-helloworld directory in the Git repository at http://github.com/yourgituser/DO180-apps/
  2. Expose the service as a publicly available Route
  3. Access the service from a host external to the cluster, or try to access the route via a browser.
  4. Add a new route that accesses the same service as the app you have just created.
  5. Verify that you can also access the newly created route.
$ oc login -u OCP-USER -p OCP-PWD
$ oc new-project sample01
$ oc new-app --as-deployment-config \
     php:7.3~https://github.com/git-user/DO180-apps \
     --context-dir php-helloworld --name sampleapp

$ oc get pod
$ oc describe svc/sampleapp
        Name:              sampleapp
        Namespace:         sample01
        Labels:            app=php-sampleapp
        Annotations:       openshift.io/generated-by=OpenShiftNewApp
        Selector:          app=sampleapp,deploymentconfig=sampleapp
        Type:              ClusterIP
        IP:                172.30.200.65
        Port:              8080-tcp  8080/TCP
        TargetPort:        8080/TCP
        Endpoints:         10.129.0.31:8080
        Port:              8443-tcp  8443/TCP
        TargetPort:        8443/TCP
        Endpoints:         10.129.0.31:8443
        Session Affinity:  None
        Events:            <none>

$ oc expose svc/sampleapp
        route.route.openshift.io/sampleapp exposed

$ oc describe route
        Name:             sampleapp
        Namespace:        sample01
        Created:          4 minutes ago
        Labels:           app=sampleapp
        Annotations:      openshift.io/host.generated=true
        Requested Host:   sampleapp-sample01.your_wildcard_domain
        exposed on router default (host your_wildcard_domain) 4 minutes ago
        Path:             <none>
        TLS Termination:  <none>
        Insecure Policy:  <none>
        Endpoint Port:    8080-tcp

        Service:    sampleapp
        Weight:     100 (100%)
        Endpoints:  10.130.0.48:8443, 10.130.0.48:8080  

$ curl sampleapp-sample01.your_wildcard_domain
        Hello, World! php version is 7.3.11

# Create a new route 
$ oc expose svc/sampleapp --name=sampleapp02

$ curl sampleapp02-sample01.your_wildcard_domain
        Hello, World! php version is 7.3.11

# Creating Application with Source-to-Image (S2I)

# S2I Process

Diagram of how S2I works (see the s2i-diagram figure). Pay attention to BuildConfig (bc), ImageStream (is), and DeploymentConfig (dc).

# Relation between BuildConfig and DeploymentConfig

  • The BuildConfig pod
    - responsible for creating the images in OpenShift and pushing them to the internal container registry.
    - in the build step, it is responsible for compiling source code, downloading library dependencies, and packaging the application as a container image.
    - a BC creates or updates a container image.
    - any source code or content update typically requires a new build to guarantee the image is updated.

  • The DeploymentConfig pod
    - responsible for deploying pods to OpenShift.
    - a DC reacts to a new or updated image event and creates pods from the container image.
    - the outcome of a DeploymentConfig pod execution is the creation of pods with the images deployed in the internal container registry.
    - any existing running pod may be destroyed, depending on how the DeploymentConfig resource is set.

Advantages of using S2I:

  • User efficiency: Developers do not need to understand Dockerfiles and OS commands such as yum install. They focus on standard programming language tools.

  • Patching: S2I allows for rebuilding all the applications consistently if a base image needs a patch due to a security issue.

  • Speed: With S2I, the assembly process can perform a large number of complex operations without creating a new layer at each step, resulting in faster builds.

  • Ecosystem: S2I encourages a shared ecosystem of images where base images and scripts can be customized and reused across multiple types of applications.

# What are Image Streams?

  • The image stream resource is a configuration that names specific container images associated with image stream tags, which are aliases for those container images.
  • OpenShift builds applications against an image stream.
  • The OpenShift installer populates several image streams by default during installation.
  • To determine available image streams, use the oc get command, as follows:
$ oc get is -n openshift
        NAME           IMAGE REPOSITORY                      TAGS
        dotnet         ...svc:5000/openshift/dotnet          2.0,2.1,latest
        dotnet-runtime ...svc:5000/openshift/dotnet-runtime  2.0,2.1,latest
        httpd          ...svc:5000/openshift/httpd           2.4,latest
        jenkins        ...svc:5000/openshift/jenkins         1,2
        mariadb        ...svc:5000/openshift/mariadb         10.1,10.2,latest
        mongodb        ...svc:5000/openshift/mongodb         2.4,2.6,3.2,3.4,3.6,latest
        mysql          ...svc:5000/openshift/mysql           5.5,5.6,5.7,latest
        nginx          ...svc:5000/openshift/nginx           1.10,1.12,1.8,latest
        nodejs         ...svc:5000/openshift/nodejs          0.10,10,11,4,6,8,latest
        perl           ...svc:5000/openshift/perl            5.16,5.20,5.24,5.26,latest
        php            ...svc:5000/openshift/php             5.5,5.6,7.0,7.1,latest
        postgresql     ...svc:5000/openshift/postgresql      10,9.2,9.4,9.5,9.6,latest
        python         ...svc:5000/openshift/python          2.7,3.3,3.4,3.5,3.6,latest
        redis          ...svc:5000/openshift/redis           3.2,latest
        ruby           ...svc:5000/openshift/ruby            2.0,2.2,2.3,2.4,2.5,latest

# Build an App with S2I and the CLI

$ oc new-app --as-deployment-config php~http://my.git.server.com/my-app --name=myapp
$ oc new-app --as-deployment-config -i php http://my.git.server.com/my-app --name=myapp

# Build app from local source repository
$ oc new-app --as-deployment-config /path/to/local/repository

# Build with Git repository and context subdirectory
$ oc new-app --as-deployment-config https://github.com/openshift/sti-ruby.git \
  --context-dir=2.0/test/my-test-app

# Build with Git repository and specific branch
$ oc new-app --as-deployment-config \
  https://github.com/openshift/ruby-hello-world.git#beta4
  • After creating a new application, the build process starts.
$ oc get builds
        NAME               TYPE     FROM          STATUS    STARTED          DURATION
        php-helloworld-1   Source   Git@9e17db8   Running   13 seconds ago

$ oc get buildconfig
        myapp

$ oc logs build/myapp-1
        ...logs...
  • Trigger a new build
$ oc get buildconfig
        NAME           TYPE      FROM      LATEST
        myapp          Source    Git       1

$ oc start-build myapp
        build "myapp-2" started

# Deeper into Build

  • The S2I image creation process is composed of two major steps:
  1. Build step: Responsible for compiling source code, downloading library dependencies, and packaging the application as a container image. Furthermore, the build step pushes the image to the OpenShift registry for the deployment step. The BuildConfig (BC) OpenShift resources drive the build step.
    Pod name for the build step = [application-name]-[number]-build (for example, version-1-build)

  2. Deployment step: Responsible for starting a pod and making the application available for OpenShift. This step executes after the build step, but only if the build step succeeded. The DeploymentConfig (DC) OpenShift resources drive the deployment step.
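To watch the build step as it runs, you can follow the logs of the latest build for a build configuration (the resource name is illustrative):

$ oc logs -f bc/myapp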

# Exercise: Creating a containerized application with Source-to-Image

Let's get it done! (not yet written)

# Exercise: Deploying the Web Application and MySQL application

Let's get it done!
  1. Build the MySQL image
  • A custom MySQL 5.7 image is used for this exercise. It is configured to automatically run any scripts in the /var/lib/mysql/init directory. The scripts load the schema and some sample data into the database for the To Do List application when a container starts.
  • Review the Dockerfile (multicontainer-design/images/mysql/Dockerfile)
  • Use podman to Build the MySQL database image.
  • Build image as test/mysql-57-rhel7
# Inspect your Dockerfile
$ cat multicontainer-design/images/mysql/Dockerfile

        FROM rhscl/mysql-57-rhel7
        # Volumes:
        #  * /var/lib/mysql/data - Datastore for MySQL
        #    /var/lib/mysql/init - Folder to load *.sql scripts
        # Environment:
        #  * $MYSQL_USER - Database user name
        #  * $MYSQL_PASSWORD - User's password
        #  * $MYSQL_DATABASE - Name of the database to create
        #  * $MYSQL_ROOT_PASSWORD (Optional) - Password for the 'root' MySQL account

        ADD root /
# Build the container image from the Dockerfile
# Dockerfile location = multicontainer-design/images/mysql/Dockerfile

$ sudo podman build -t test/mysql-57-rhel7 --layers=false /multicontainer-design/images/mysql/

$ sudo podman images
        REPOSITORY                           TAG     IMAGE ID      CREATED         SIZE
        localhost/test/mysql-57-rhel7       latest  8dc111531fce  21 seconds ago  444MB
  2. Build the Node.js parent image using the provided Dockerfile.
    2.1. Review the Dockerfile (multicontainer-design/images/nodejs/Dockerfile)
    2.2. Build the parent image using podman. Build the image as test/nodejs
# Check the dockerfile
$ cat multicontainer-design/images/nodejs/Dockerfile

        FROM    ubi7/ubi:7.7

        MAINTAINER TESTUSER <testuser@example.com>

        ENV     NODEJS_VERSION=8.0 \
                HOME=/opt/app-root/src

        # Setting tsflags=nodocs helps create a leaner container
        # image, as documentation is not needed in the container.
        RUN yum install -y --setopt=tsflags=nodocs rh-nodejs8 make && \
                yum clean all --noplugins -y && \
                mkdir -p /opt/app-root && \
                groupadd -r appuser -f -g 1001 && \
                useradd -u 1001 -r -g appuser -m -d ${HOME} -s /sbin/nologin \
                -c "Application User" appuser && \
                chown -R appuser:appuser /opt/app-root && \
                chmod -R 755 /opt/app-root

        ADD	./enable-rh-nodejs8.sh /etc/profile.d/

        USER	appuser
        WORKDIR	${HOME}

        CMD	["echo", "You must create your own container from this one."]
$ sudo podman build -t test/nodejs --layers=false ./multicontainer-design/images/nodejs/
        STEP 1: FROM ubi7/ubi:7.7
        Getting image source signatures
        ...output omitted...
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
        Package                    Arch   Version       Repository                Size
        ================================================================================
        Installing:
        rh-nodejs8                x86_64 3.0-5.el7     ubi-server-rhscl-7-rpms   7.3 k
        ...output omitted...
        Writing manifest to image destination
        Storing signatures
        --> 5e61...30de


# Check the images that has just been built
$ sudo podman images test/nodejs
  3. Build the To Do List application child image using the provided Dockerfile.
    3.1. Check the Dockerfile
    3.2. Inspect the environment variables that allow the Node.js REST API container to communicate with the MySQL container.
    3.3. Build the child image.
# Check the Dockerfile 
$ cat multicontainer-design/deploy/nodejs/Dockerfile
        FROM	test/nodejs
        ARG NEXUS_BASE_URL
        MAINTAINER TESTUSERNAME <testusername@example.com>

        COPY run.sh build ${HOME}/
        RUN scl enable rh-nodejs8 'npm install --registry=http://$NEXUS_BASE_URL/repository/nodejs/'
        EXPOSE	30080

        CMD	["scl","enable","rh-nodejs8","./run.sh"]

Explore the environment variables

$ cat multicontainer-design/deploy/nodejs/nodejs-source/models/db.js
        module.exports.params = {
                dbname: process.env.MYSQL_DATABASE,
                username: process.env.MYSQL_USER,
                password: process.env.MYSQL_PASSWORD,
                params: {
                host: "10.88.100.101",
                port: "3306",
                dialect: 'mysql'
                }
        };

The host and port details of the MySQL container are embedded in the REST API application. The host, as shown above in the db.js file, is the IP address of the mysql container; see the sketch below.
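Under these assumptions, the two containers could be wired together as follows (a sketch; image names, credentials, and the inspected IP are illustrative):

# Start the database container
$ sudo podman run -d --name mysql \
    -e MYSQL_DATABASE=items -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
    test/mysql-57-rhel7

# Confirm the IP address the REST API expects to find in db.js
$ sudo podman inspect -f '{{ .NetworkSettings.IPAddress }}' mysql
        10.88.100.101

# Start the Node.js REST API container with the same database parameters
$ sudo podman run -d --name todoapi \
    -e MYSQL_DATABASE=items -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypa55 \
    -p 30080:30080 test/todonodejs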

# Troubleshooting Containerized Applications

# Common Problems

1. Troubleshooting Permission Issues

  • Case 1:
    Some containers may require a specific user ID, whereas S2I is designed to run containers using a random user, as per the default OpenShift security policy.

The following Dockerfile creates a Nexus container. Note the USER instruction indicating the nexus user should be used:

FROM ubi7/ubi:7.7
...contents omitted...
RUN chown -R nexus:nexus ${NEXUS_HOME}

USER nexus
WORKDIR ${NEXUS_HOME}

VOLUME ["/opt/nexus/sonatype-work"]
...contents omitted...

Trying to use the image generated by this Dockerfile without addressing volume permissions leads to errors when the container starts:

$ oc logs nexus-1-wzjrn
        ...output omitted...
        ... org.sonatype.nexus.util.LockFile - Failed to write lock file
        ...FileNotFoundException: /opt/nexus/sonatype-work/nexus.lock (Permission denied)
        ...output omitted...
        ... org.sonatype.nexus.webapp.WebappBootstrap - Failed to initialize
        ...lStateException: Nexus work directory already in use: /opt/nexus/sonatype-work
        ...output omitted...

To solve this issue, relax the OpenShift project security with the command oc adm policy.

$ oc adm policy add-scc-to-user anyuid -z default

To avoid file system permission issues, local folders used for container volume mounts must satisfy the following:

  • The user executing the container processes must be the owner of the folder, or have the necessary rights. Use the chown command to update folder ownership.

  • The local folder must satisfy SELinux requirements to be used as a container volume. Assign the container_file_t context type to the folder by using the command below:

$ semanage fcontext -a -t container_file_t <folder> 

Then apply the new context with the restorecon -R <folder> command.
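
Putting the ownership and SELinux requirements together, a minimal sketch for preparing a host folder as a container volume (the folder path, the UID/GID 27, and the mysql image are assumptions for illustration):

$ sudo mkdir -p /var/local/mysql
$ sudo chown -R 27:27 /var/local/mysql   # UID/GID of the container process (assumed)
$ sudo semanage fcontext -a -t container_file_t '/var/local/mysql(/.*)?'
$ sudo restorecon -R /var/local/mysql
$ sudo podman run -d -v /var/local/mysql:/var/lib/mysql/data rhscl/mysql-57-rhel7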

2. Troubleshooting Invalid Parameters

Multi-container applications may share parameters, such as login credentials. Ensure that the same values for parameters reach all containers in the application.

A good practice for centralizing shared parameters is to store them in ConfigMaps. Those ConfigMaps can then be injected into containers as environment variables through the deployment configuration:

apiVersion: v1
kind: Pod
...
spec:
  containers:
    - name: test-container
...
      env:
        - name: ENV_1
          valueFrom:
            configMapKeyRef:
              name: configMap_name_1
              key: configMap_key_1
...
      envFrom:
        - configMapRef:
            name: configMap_name_2
...
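
The same injection can be done from the command line; a sketch, assuming a ConfigMap named app-config and a deployment configuration named frontend (both hypothetical):

$ oc create configmap app-config --from-literal=MYSQL_USER=user1
$ oc set env dc/frontend --from=configmap/app-config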

3. Troubleshooting Volume Mount Errors

When redeploying an application that uses a persistent volume on a local file system, a pod might not be able to allocate a persistent volume claim even though the persistent volume indicates that the claim is released. To resolve the issue, delete the persistent volume claim and then the persistent volume. Then recreate the persistent volume.
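
A minimal sketch of that recovery sequence, assuming a claim named nexus and a volume named nexus-volume defined in nexus-volume.yml (all hypothetical names):

$ oc delete pvc/nexus
$ oc delete pv/nexus-volume
$ oc create -f nexus-volume.yml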

4. Troubleshooting Obsolete Images

OpenShift pulls images from the source indicated in an image stream unless it locates a locally cached image on the node where the pod is scheduled to run. If you push a new image to the registry with the same name and tag, you must remove the cached image with the podman rmi command from each node on which the pod might be scheduled.

Run the oc adm prune command for an automated way to remove obsolete images and other resources.
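
For example (the image name is an assumption; run podman rmi on each node that cached the old image, and run the prune command as a cluster administrator):

$ sudo podman rmi quay.io/myuser/myapp:latest
$ oc adm prune images --confirm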

5. Forwarding Ports for Troubleshooting
The command below maps host port 30306 to port 3306 in the db container, which is created from the mysql image.

$ sudo podman run --name db -p 30306:3306 mysql

The command below forwards port 30306 on the developer machine to port 3306 on the db pod, where a MySQL server (inside a container) accepts network connections.

$ oc port-forward db 30306:3306
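
Once the tunnel is up, a client on the developer machine reaches the database through the local port; a sketch with assumed credentials and database name:

$ mysql -uuser1 -pmypa55 -h127.0.0.1 -P30306 items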

6. Accessing container logs

$ podman logs
$ docker logs
$ oc logs [podName] [-c containerName]

7. OpenShift Events

$ oc get events
$ oc describe pod [podName]
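
Sorting events by time often makes the relevant failure easier to spot; a sketch using the standard JSONPath sort key:

$ oc get events --sort-by='.lastTimestamp'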

8. Overriding Container Binaries

  • Many container images do not contain all of the troubleshooting commands users expect to find in regular OS installations, such as telnet, netcat, ip, or traceroute. Stripping basic utilities and binaries from the image keeps it slim, and thus allows running many containers per host.
  • The following command starts a container, and overrides the image's /bin folder with the one from the host. It also starts an interactive shell inside the container.
$ sudo podman run -it -v /bin:/bin image /bin/bash

9. Transferring files into and out of containers

  • Volume mounts
    One option for copying files from the host into a container is to use volume mounts. You can mount a local directory to make its contents available inside the container.
$ sudo podman run -v /sourceDirectoryFile:/destinationDirectoryFile -d [containerImage]
$ sudo podman run -v /conf:/etc/httpd/conf -d do180/apache
  • podman cp
    Copy files from the host to a container:
$ sudo podman cp [filesToCopy] [containerName]:[destinationCopy]
$ sudo podman cp documentSetting.conf todoapicontainer:/opt/jboss/doc.conf

Copy from container to host

$ sudo podman cp todoapicontainer:/opt/jboss/doc.conf ./documentSetting.conf

# Exercise: Comprehensive Lab

Let's get it done!
  1. Create a container image that starts an instance of a Nexus server:
  • Write a Dockerfile that containerizes the Nexus server. The Dockerfile must be located in the ~/labs/comprehensive-review/image directory and must satisfy the following requirements:

  • Use a base image of ubi7/ubi:7.7 and set an arbitrary maintainer.

  • Set the environment variable NEXUS_VERSION to 2.14.3-02, and set NEXUS_HOME to /opt/nexus.

  • Install the java-1.8.0-openjdk-devel package

  • The RPM repositories are configured in the provided training.repo file. Be sure to add this file to the container in the /etc/yum.repos.d directory.

  • Run a command to create the nexus user and group, both with a UID and GID of 1001.

  • Unpack the nexus-2.14.3-02-bundle.tar.gz file to the ${NEXUS_HOME}/ directory. Add the nexus-start.sh script to the same directory.

  • Run a command, ln -s ${NEXUS_HOME}/nexus-${NEXUS_VERSION} ${NEXUS_HOME}/nexus2, to create a symlink in the container. Run a command to recursively change the ownership of the Nexus home directory to nexus:nexus.

  • Make the container run as the nexus user, and set the working directory to /opt/nexus.

  • Define a volume mount point for the /opt/nexus/sonatype-work container directory. The Nexus server stores data in this directory.

  • Set the default container command to nexus-start.sh.

  • There are two *.snippet files in the /home/student/DO180/labs/comprehensive-review/images directory that provide the commands needed to create the nexus account and install Java. Use the files to assist you in writing the Dockerfile.

  • Build the container image with the name nexus.

$ cd ~/labs/comprehensive-review/image
$ cat ./get-nexus-bundle.sh
        #!/bin/bash
        if curl -L --progress-bar -O https://download.sonatype.com/nexus/oss/nexus-2.14.3-02-bundle.tar.gz
        then
        echo "Nexus bundle download successful"
        else
        echo "Download failed"
        fi

$ ./get-nexus-bundle.sh
        Nexus bundle download successful

$ vi Dockerfile
        FROM ubi7/ubi:7.7

        MAINTAINER username <username@example.com>

        ENV NEXUS_VERSION=2.14.3-02 \
        NEXUS_HOME=/opt/nexus

        # Add the provided RPM repository definition (lab requirement)
        ADD training.repo /etc/yum.repos.d/training.repo

        RUN yum install -y --setopt=tsflags=nodocs java-1.8.0-openjdk-devel && \
        yum clean all -y

        RUN groupadd -r nexus -f -g 1001 && \
        useradd -u 1001 -r -g nexus -m -d ${NEXUS_HOME} -s /sbin/nologin \
                -c "Nexus User" nexus && \
        chown -R nexus:nexus ${NEXUS_HOME} && \
        chmod -R 755 ${NEXUS_HOME}

        USER nexus

        ADD nexus-${NEXUS_VERSION}-bundle.tar.gz ${NEXUS_HOME}
        ADD nexus-start.sh ${NEXUS_HOME}/

        RUN ln -s ${NEXUS_HOME}/nexus-${NEXUS_VERSION} \
                ${NEXUS_HOME}/nexus2

        WORKDIR ${NEXUS_HOME}

        VOLUME ["/opt/nexus/sonatype-work"]

        CMD ["sh", "nexus-start.sh"]


$ sudo podman build -t nexus .
        STEP 1: FROM ubi7/ubi:7.7
        Getting image source signatures
        ...output omitted...
        STEP 25: COMMIT nexus
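
To confirm the image is available locally, list it (mirroring the earlier podman images usage):

$ sudo podman images nexus
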
  2. Build and test the container image using Podman with a volume mount:
  • Use the script ~/labs/comprehensive-review/deploy/local/run-persistent.sh to start a new container with a volume mount.

  • Review the container logs to verify that the server is started and running.

  • Test access to the container service using the URL: http://<container IP>:8081/nexus.

  • Remove the test container.

$ cd ~/labs/comprehensive-review
$ cd deploy/local
$ cat ./run-persistent.sh
        #!/bin/bash
        if [ ! -d /tmp/docker/work ]; then
        mkdir -p /tmp/docker/work
        sudo semanage fcontext -a -t container_file_t '/tmp/docker/work(/.*)?'
        sudo restorecon -R /tmp/docker/work
        sudo chown 1001:1001 /tmp/docker/work
        fi

        sudo podman run -d -v /tmp/docker/work:/opt/nexus/sonatype-work nexus

$ ./run-persistent.sh
        80970007036bbb313d8eeb7621fada0ed3f0b4115529dc50da4dccef0da34533


# Review the container logs to verify that the server is started and running.
$ sudo podman ps --format="table {{.ID}} {{.Names}} {{.Image}}"
        CONTAINER ID   NAMES                IMAGE
        81f480f21d47   inspiring_poincare   localhost/nexus:latest
        
$ sudo podman logs -f inspiring_poincare
        ...output omitted...
        ... INFO  [jetty-main-1] ...jetty.JettyServer - Running
        ... INFO  [main] ...jetty.JettyServer - Started
        Ctrl+C

# Inspect the running container to determine its IP address. Provide this IP address to the curl command to test the container.

$ sudo podman inspect -f '{{.NetworkSettings.IPAddress}}' inspiring_poincare
10.88.0.12

$ curl -v 10.88.0.12:8081/nexus/
        About to connect() to 10.88.0.12 port 8081 (#0)
        *   Trying 10.88.0.12...
        * Connected to 10.88.0.12 (10.88.0.12) port 8081 (#0)
        > GET /nexus/ HTTP/1.1
        > User-Agent: curl/7.29.0
        > Host: 10.88.0.12:8081
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < Date: Tue, 05 Mar 2019 16:59:30 GMT
        < Server: Nexus/2.14.3-02
        ...output omitted...
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
        <title>Nexus Repository Manager</title>
        ...output omitted...

# Remove the test container
$ sudo podman kill inspiring_poincare
  3. Deploy the Nexus server container image to the OpenShift cluster. You must:
  • Tag the Nexus server container image as quay.io/{QUAY_USER}/nexus:latest, and push it to the registry.

  • Create an OpenShift project with a name of final-review.

  • Process the deploy/openshift/resources/nexus-template.json template and create the Kubernetes resources.

  • Create a route for the Nexus service. Verify that you can access http://nexus-final-review.${WILDCARD_DOMAIN}/nexus/ from your browser.

# Login to your Quay account
$ sudo podman login -u ${QUAY_USER} quay.io
        Password: your_quay_password
        Login Succeeded!

# Publish the Nexus server container image to your quay.io
$ sudo podman push localhost/nexus:latest quay.io/${QUAY_USER}/nexus:latest
        Getting image source signatures
        ...output omitted...
        Writing manifest to image destination
        Storing signatures

NOTES: 
Repositories created by pushing images to quay.io are private by default. 
Don't forget to make the repository public so that others (including the OpenShift cluster) can pull it.

# Create openshift project
$ cd ~/labs/comprehensive-review/deploy/openshift

$ oc login -u ${OCP4_DEV_USER} -p ${OCP4_DEV_PASSWORD} ${OCP4_MASTER_API}
        Login successful.
        ...output omitted...

$ oc new-project final-review
        Now using project ...output omitted...

# Process the template and create the Kubernetes resources:
$ oc process -f resources/nexus-template.json -p RHT_OCP4_QUAY_USER=${QUAY_USER} | oc create -f -
        service/nexus created
        persistentvolumeclaim/nexus created
        deploymentconfig.apps.openshift.io/nexus created
        
$ oc get pods
        NAME             READY   STATUS      RESTARTS   AGE
        nexus-1-wk8rv    1/1     Running     1          1m
        nexus-1-deploy   0/1     Completed   0          2m


# Expose the service by creating a route:
$ oc expose svc/nexus
        route.route.openshift.io/nexus exposed.
        
$ oc get route -o yaml
        apiVersion: v1
        items:
        - apiVersion: route.openshift.io/v1
          kind: Route
          ...output omitted...
          spec:
            host: nexus-final-review.your_wildcard_domain
          ...output omitted...


Use a browser to connect to the Nexus server web application at http://nexus-final-review.${RHT_OCP4_WILDCARD_DOMAIN}/nexus/.
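
You can also verify the route from the command line; a sketch, reusing the host shown in the route output above:

$ curl -s http://nexus-final-review.${RHT_OCP4_WILDCARD_DOMAIN}/nexus/ | head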
