
Store inventory demonstration with IBM MQ and Event Streams

This is a GitOps repository that lets you install, with a minimal set of commands, a simple MQ to Kafka integration demonstration using the following components:

  • Store sale simulator
  • IBM MQ as part of IBM Cloud Pak for Integration
  • IBM Event Streams as part of IBM Cloud Pak for Integration
  • Kafka Connector as part of Event Streams
  • A simple Kafka consumer or the Event Streams console to view the content of the items topic.
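
For example, once the demonstration is running, you can inspect the items topic with a plain Kafka console consumer. This is a minimal sketch: the bootstrap address and the client.properties file (SASL/TLS credentials taken from your Event Streams instance) are placeholders.

    kafka-console-consumer.sh \
        --bootstrap-server $BOOTSTRAP_ADDRESS \
        --topic items --from-beginning \
        --consumer.config client.properties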

There are two ways to use this demonstration:

  • You have only an OpenShift 4.7+ cluster.
  • You already have Cloud Pak for Integration deployed on an OpenShift 4.7+ cluster.

Audience

  • Architects and developers who want to understand Event Streams, Kafka connectors, and the MQ source connector

Option 1: From a new OpenShift Cluster

In this option we start from a newly created OpenShift cluster on IBM Cloud (ROKS cluster) with a minimum of three nodes (8 CPUs and 32 GB of memory each), use a few manual steps to bootstrap the GitOps process, and then start the ArgoCD deployment, which deploys all the components automatically.

  • Add the IBM product catalog, so IBM products are visible in the OperatorHub. It may take a few seconds for the products to become visible.

    ./bootstrap/scripts/addIBMCatalog.sh
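
    The script essentially applies a CatalogSource resource similar to the sketch below (the exact name and catalog image used by the script may differ):

      oc apply -f - <<EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: IBM Operator Catalog
        publisher: IBM
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
      EOF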
  • Deploy the OpenShift GitOps operator, so we can use ArgoCD to deploy the demo in one command:

    oc apply -k bootstrap/openshift-gitops/operator/overlays/stable

    The operator manages all namespaces, and also takes a few seconds to be up and running.
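
    You can verify its progress with:

      oc get pods -n openshift-gitops

    The ArgoCD server, repo-server, and application-controller pods should eventually reach the Running state.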

  • Deploy the IBM Event Streams Operator in the openshift-operators project to watch all namespaces. As a first installation it will take some time, as it also installs the Cloud Pak foundational services.

      oc apply -k bootstrap/eventstreams

    This should add one Event Streams operator pod in the openshift-operators project, and three common services pods in ibm-common-services.

  • Get your entitlement key from the IBM site and use the following script to define a secret, so the MQ and Event Streams images can be downloaded from the IBM entitled registry:

    ./bootstrap/scripts/defineEntitlementSecret.sh your_long_entitlement_key 

    This should add one secret-sharing pod in the ibm-common-services project.
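
    Under the covers the script boils down to creating a pull secret for the IBM entitled registry, along the lines of this sketch (the secret name and target namespace are assumptions, check the script for the real values):

      oc create secret docker-registry ibm-entitlement-key \
          --docker-server=cp.icr.io \
          --docker-username=cp \
          --docker-password=your_long_entitlement_key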

  • You can verify the operator pods with

    oc get pods -n openshift-operators
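
    The output should look something like this (illustrative only, names and versions will differ):

      NAME                                    READY   STATUS    RESTARTS   AGE
      eventstreams-cluster-operator-...       1/1     Running   0          5m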
  • Deploy the IBM MQ Operator in the openshift-operators project; it adds a new pod for the MQ operator.

      oc apply -k bootstrap/mq
  • Create an ArgoCD project named smq to isolate this solution from others:

    oc apply -k bootstrap/argocd-project
  • Get ArgoCD admin password

     oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=- 
  • Get the ArgoCD URL and verify the configuration:

    oc get routes openshift-gitops-server -n openshift-gitops

    Log in at the URL, which looks like: openshift-gitops-server-openshift-gitops.........appdomain.cloud

    In the ArgoCD Settings, verify the project smq is present.

    In the ArgoCD Applications view, if other projects are present, filter on the project named smq; you should see a "No applications yet" message.
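
    Alternatively, if you have the argocd CLI installed, you can log in from the terminal (a sketch; substitute the admin password extracted earlier, and add --insecure if the route uses a self-signed certificate):

      argocd login $(oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}') \
          --username admin --password <admin-password>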

  • Let's go! Start GitOps:

     oc apply -k config/argocd 

    Now in the ArgoCD console you should see the ArgoCD applications defined, and after some time they should all become green.

It will take some time for everything to be running: if they are not already present, the Event Streams cluster creation forces the common services to be created.
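
You can also follow the synchronization from the command line, as each ArgoCD application is a custom resource in the openshift-gitops namespace:

    oc get applications.argoproj.io -n openshift-gitops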

  • Go to the smq-dev project in the OpenShift console, or with oc project smq-dev.

  • Try the following commands to assess the state of the different deployments:

    # For event streams
     oc get eventstreams
     # For MQ
     oc get QueueManager
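
    You can also list all the pods of the demonstration with:

      oc get pods -n smq-dev

    Once everything converges you should see pods for the Kafka cluster, the MQ queue manager, the Kafka Connect environment, and the store simulator.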
  • Access the MQ console:

    • First get the console URL with:

      oc get routes store-mq-ibm-mq-web  -n smq-dev
    • Get admin password

      oc get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' -n ibm-common-services | base64 --decode && echo ""
    • Go to the console and verify that the QM1 queue manager with the DEV.QUEUE.1 queue is up and running.

ArgoCD outcome

Option 2: Using existing Cloud Pak for Integration

In this option you have an OpenShift cluster with the needed resources and have already installed the Cloud Pak for Integration common services and operators; the Event Streams and MQ operators should be deployed to watch all namespaces.

If you want to use the OpenShift, Event Streams, and MQ administrator consoles, follow the instructions in this EDA MQ connector lab.

If you want to use a minimal set of commands, do the following.

Demonstration script

The demonstration script has moved to this website page.

Maintenance

You can contribute via Pull Request once you have forked this repository.

This project was built using the KAM CLI, and pruned to remove the pipeline and the staging environment.

This project needs the following dependencies to run:

  • The MQ source connector jar, which can be downloaded from the release page of the IBM messaging kafka-connect-mq-source git repo and saved into the environments/smq-dev/apps/services/kafkaconnect/my-plugins folder.
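
    A download sketch, assuming a hypothetical v1.3.0 release (check the releases page for the current version and the exact asset name):

      curl -L -o environments/smq-dev/apps/services/kafkaconnect/my-plugins/kafka-connect-mq-source.jar \
          https://github.com/ibm-messaging/kafka-connect-mq-source/releases/download/v1.3.0/kafka-connect-mq-source-1.3.0-jar-with-dependencies.jar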

  • The MQ Java client jar:

    curl -s https://repo1.maven.org/maven2/com/ibm/mq/com.ibm.mq.allclient/9.2.2.0/com.ibm.mq.allclient-9.2.2.0.jar -o com.ibm.mq.allclient-9.2.2.0.jar
  • The Store simulator from this repo

The following images are already built and ready to run:

  • quay.io/ibmcase/demomqconnect
  • quay.io/ibmcase/eda-store-simulator
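
To try the store simulator on its own before deploying the full demonstration, you can run its image locally (a sketch; the exposed port is an assumption, check the simulator repository for its actual configuration):

    docker run -ti -p 8080:8080 quay.io/ibmcase/eda-store-simulator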

One potential maintenance task is updating the MQ source connector jars.
