The OpenShift organization currently maintains several plugins related to integration between V3 OpenShift and V2 Jenkins.
| Plugin | OpenShift Repo | Jenkins Repo | Wiki |
| --- | --- | --- | --- |
| OpenShift Client Plugin | OpenShift Repo | Jenkins Repo | Wiki |
| OpenShift Sync Plugin | OpenShift Repo | Jenkins Repo | Wiki |
| OpenShift Login Plugin | OpenShift Repo | Jenkins Repo | Wiki |
| OpenShift Pipeline Plugin | OpenShift Repo | Jenkins Repo | Wiki |
The OpenShift Pipeline Plugin is only in maintenance mode for the 3.x stream and is not supported in 4.x. The message to use the client plugin instead seems to have been received, and it has been quite a while since we have seen any GitHub or Bugzilla activity for it.
As the development process for each of these plugins is similar in many ways, we are documenting the process here under the repository for the OpenShift Jenkins images, where each of the plugins will reference this document in their respective READMEs.
You will need an environment with the following tools installed:
- Maven (the `mvn` command)
- Git
- Java (need v8)
- an IDE or editor of your choosing
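A quick way to sanity check that toolchain before you start (nothing plugin specific here, just the commands for the tools listed above):

```sh
mvn -version    # Maven
git --version   # Git
java -version   # should report a 1.8.x (Java 8) runtime
```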
The Jenkins open source project already has a bunch of useful links on setting up your development environment, including launching Jenkins from the IDE (though there are pros and cons for developing this way vs. developing against a Jenkins server running in an OpenShift pod).
Here are a few of those links for reference:
- https://wiki.jenkins.io/display/JENKINS/Extend+Jenkins (note, there are many useful child pages under this wiki page)
- https://wiki.jenkins.io/display/JENKINS/Plugin+tutorial (one of the key child pages under Extend Jenkins)
- https://wiki.jenkins.io/display/JENKINS/Setting+up+Eclipse+to+build+Jenkins (the concepts undoubtedly carry over to other IDEs)
Our plugins are constructed such that if they are running in an OpenShift pod, they can determine how to connect to the associated OpenShift master automatically, and no configuration of the plugin from the Jenkins console is needed.
If you choose to run an external Jenkins server and you would like to test interaction with an OpenShift master, you will need to manually configure the plugin. See each plugin's README or the OpenShift documentation for the specifics.
An example flow when running in an OpenShift pod:
- Clone this git repository: `git clone https://github.com/openshift/<plugin in question>-plugin.git`
- In the root of the local repository, run maven: `cd <plugin in question>-plugin` and then `mvn`
- Maven will build `target/<plugin name>.hpi` (the Jenkins plugin binary)
- Open Jenkins in your browser, log in as an administrator, and navigate as follows:
- Manage Jenkins > Manage Plugins.
- Select the "Advanced" Tab.
- Find the "Upload Plugin" HTML form and click "Browse".
- Find the .hpi built in the previous steps.
- Submit the file.
- Check the option to restart Jenkins after the upload completes; the new plugin version is only picked up after a restart.
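For reference, the build half of that flow as one minimal sketch; the explicit `clean package` goals are an assumption here (the list above just runs `mvn`), and the placeholders mirror the clone command above:

```sh
git clone https://github.com/openshift/<plugin in question>-plugin.git
cd <plugin in question>-plugin
mvn clean package      # assumption: explicit goals; produces the plugin binary under target/
ls target/*.hpi        # this is the file to upload via Manage Jenkins > Manage Plugins > Advanced
```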
Aside from updating the plugin via the Jenkins plugin manager, OpenShift provides various means for building a new image containing your plugin and pointing your Jenkins deployment at that image. For example, consider templates such as this one.
Or you can ...
What follows in "Actual PR Testing" is a complete description of the build and test flows for PRs against the OpenShift/Jenkins images repo and the three plugins we provide to facilitate OpenShift/Jenkins integration. The end of that chapter details the tests run for a PR and how you can run them against your local clusters.
Each plugin repository, and the images repository, under https://github.com/openshift is under the umbrella of the Prow based OpenShift CI/CD infrastructure. As such, there are:
- an `openshift-ci-robot` bot that accepts these commands from within a PR
- a code review process that leverages the aforementioned commands
- OpenShift CI/CD's specific Prow configuration for the various repositories of the project, detailed below
If you look at the CI workflow configuration summary, the `ci-operator/config`, `ci-operator/jobs`, and `ci-operator/templates` bullets there pertain in some fashion to the Jenkins related repositories. Specifics within that context:
- For the plugins, we've managed to only need a `master` branch to date, and only `ci-operator/config` and `ci-operator/jobs` artifacts have had to be created for the three plugins.
- For the master and 4.x branches of the Jenkins images out of the OpenShift/Jenkins repo, we now only need `ci-operator/config` and `ci-operator/jobs` artifacts.
- For 3.11, we also had to compose a special template under `ci-operator/templates`.
- For branches prior to 3.11, the only tests performed are some basic image startup and `s2i` invocations that are the equivalent of running `make test` from the local copy of the repository on your workstation. They are still Jenkins based and not Prow based, and no tests are executed in an actual OpenShift environment. At this point, only changes to address support cases are going into branches older than 3.11.
Quick preamble ... some cool facts about OpenShift CI:
- it runs on OpenShift
- it leverages development features like Builds and ImageStreams extensively
The ci-operator based configuration, the `ci-operator/config`, for PR testing in 3.11 is at https://github.com/openshift/release/blob/master/ci-operator/config/openshift/jenkins/openshift-jenkins-openshift-3.11.yaml.
The parts at the beginning and end of that file are more generic setup needs for ci-operator.
What makes 3.11 a bit more complicated than 4.0:
- During the 3.11 timeframe, `ci-operator` was brand new. In particular, updating image streams on the test cluster was not yet baked into the system; enhancements that helped there came during 4.0.
- 3.x did not create image streams for the slave/agent example images we ship.
With that background, let's examine the relevant parts of https://github.com/openshift/release/blob/master/ci-operator/config/openshift/jenkins/openshift-jenkins-openshift-3.11.yaml.
First, we make a copy of the test container image and store it as the ImageStreamTag `cluster-tests`:

```yaml
cluster-tests:
  cluster: https://api.ci.openshift.org
  name: origin-v3.11
  namespace: openshift
  tag: tests
```
Then, we are going to build a new test container image with:

```yaml
- from: cluster-tests
  inputs:
    src:
      paths:
      - destination_dir: .
        source_path: /go/src/github.com/openshift/jenkins/test-e2e/.
  to: tests
```
which employs an OpenShift Docker Strategy Build, using this Dockerfile for the docker build. The build adds a script that will tag the Jenkins images created during the PR's build into the test cluster's jenkins imagestream in the openshift namespace. The `to: tests` line updates the existing test image. It is literally storing the output image from the OpenShift Docker Strategy Build into the `tests` ImageStreamTag in the CI system's internal ImageStreams.
Next, this stanza is for the main jenkins image:
```yaml
- dockerfile_path: Dockerfile
  from: base
  inputs:
    src:
      paths:
      - destination_dir: .
        source_path: /go/src/github.com/openshift/jenkins/2/.
  to: 2-centos
```
Let's dive into this stanza:
- As with updating the test image, this yaml is shorthand for defining an OpenShift Docker Build Strategy build, and it does a Docker build where:
  - `from: base` is an imagestreamtag pointing to the latest 3.11.x OpenShift CLI image ... that gets substituted into the Dockerfile's FROM clause for the Docker build
  - `dockerfile_path: Dockerfile` means we will literally use the `Dockerfile` at the `source_path`, where `/go/src/github.com/openshift/jenkins/2/.` corresponds to the `git checkout` for the git branch we are testing
  - `to: 2-centos` tells `ci-operator` to set the output of the Docker Build Strategy build to an imagestreamtag named `2-centos` in the test artifacts. Then, `ci-operator` takes all such imagestreamtag names, converts them to upper case, and sets `IMAGE_<TAG NAME>` environment variables in the test system
- There are similar stanzas for the `slave-base`, `agent-maven-3.5`, and `agent-nodejs-8` images.
But again, since we do not have imagestreams defined for those slave/agent images in 3.x, we have to do some more manipulation of the Prow setup. With that, let's move on to the `ci-operator/jobs` definitions for 3.11.
The key two files are the presubmit definition and the postsubmit definition.
The most relevant pieces of the presubmit (aside from a bunch of Prow and ci-operator gorp you can dismiss):
- Whether a PR related job is Prow based or Jenkins based is indicated by the `agent` setting. In the presubmits, they are all Prow based:

```yaml
- agent: kubernetes
```
- There are unique Prow definitions for each branch of a given repo. So this stanza signifies that this handles PRs for the `openshift-3.11` branch of the jenkins repo:

```yaml
branches:
- openshift-3.11
```
- If the building of the image fails for some reason within a PR (a yum mirror flake during RPM install, a Jenkins update center flake during plugin download, a bug in your PR), you can re-run the OpenShift Docker Strategy build defined in the config via a `/test images` comment made to the PR, which maps to the ci-operator definition with the stanza:

```yaml
rerun_command: /test images
```
- To kick off another run of the PR tests, you can type `/test e2e-gcp` as a PR comment because of:

```yaml
rerun_command: /test e2e-gcp
```
- To trigger the tagging of the PR's newly built Jenkins image into the test cluster, this stanza sets up an environment variable that allows code down the line to call the script we mounted into the test container earlier:

```yaml
- name: PREPARE_COMMAND
  value: tag-in-image.sh
```
- As an optimization, we only run the Jenkins extended tests defined in https://github.com/openshift/origin. This is achieved via:
```yaml
- name: TEST_COMMAND
  value: TEST_FOCUS='openshift pipeline' TEST_PARALLELISM=3 run-tests
```
OpenShift extended tests are based on the Golang Ginkgo test framework. We leverage the focus feature of Ginkgo to limit the tests executed to the ones pertaining to our OpenShift/Jenkins integrations. The `run-tests` executable actually launches the Ginkgo tests against the test cluster, and will leverage `TEST_FOCUS`. The `TEST_PARALLELISM=3` setting is also worth noting, as it is unique to the Jenkins e2e. We discovered that the amount of memory Jenkins needed to run Pipelines was such that we could overload the 3.11 based GCP clusters used for CI. Setting this environment variable restricted the number of Jenkins based tests that would run concurrently.
The most relevant pieces of the postsubmit:
- There is 1 Prow based and 1 Jenkins based task in the postsubmits.
- The Prow based one gathers artifacts from the various image and test jobs for analysis. Aside from debugging failures, the artifacts will prove helpful when we are updating the versions of any of the plugins the image depends on (more on that below).
- The Jenkins based job is a legacy from our pre-Prow, pre-4.x days that still runs on the OpenShift CI Jenkins server at https://ci.openshift.redhat.com/jenkins. It pushes new versions of the 3.11 images to docker.io. With 4.0, we are only pushing the community versions of the images to quay.io.
OK, now with `ci-operator/config` and `ci-operator/jobs` covered for 3.11, `ci-operator/templates` is the next item to cover. Reminder: the need for jenkins to define a template is unique to 3.11.
The template is at https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/openshift-ansible/cluster-launch-e2e-openshift-jenkins.yaml. And by template, yes, we mean an OpenShift template used to create API objects, where parameters are defined to vary the specific settings of those API objects. The template to a large degree is a clone of https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/openshift-ansible/cluster-launch-e2e.yaml, but with additions to:
- Tag in the jenkins image from the PR's build into the test cluster's jenkins imagestream
- Change the image pull spec of our example maven and nodejs agent images used to create our default k8s plugin pod templates configuration
To tag in the Jenkins image, we added a `prepare` container into the e2e pod. It leverages the `PREPARE_COMMAND` and `IMAGE_2_CENTOS` environment variables noted above to call that script from the Jenkins repo that was added to the base test container from the CI system.
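For orientation only, here is a rough sketch of the idea behind that script; this is not the actual contents of `tag-in-image.sh`, just the tagging step it performs as described above:

```sh
# conceptual sketch, not the real tag-in-image.sh: tag the PR's freshly built image
# (whose pull spec ci-operator exposes as IMAGE_2_CENTOS) into the test cluster's
# jenkins imagestream in the openshift namespace
oc tag --source=docker "${IMAGE_2_CENTOS}" openshift/jenkins:2
```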
To update the agent images used, the `IMAGE_MAVEN_AGENT` and `IMAGE_NODEJS_AGENT` environment variables, which the `ci-operator/config` noted above sets to the pull spec of the newly built images from the PR, are read in by the extended tests in https://github.com/openshift/origin. Those values in turn are used to set the environment variables that the Jenkins image recognizes for overriding the default images used for the k8s plugin configuration.
So, by comparison, the `ci-operator/config` for the master branch is here. As versions of 4.x accumulate, you'll see more `openshift-jenkins-<branch name>.yaml` files in that directory. As of this writing, the initial GA of 4.x has not occurred.
Key differences from 3.11:
- The Prow based infra evolved in many ways, like leveraging AWS vs. GCP for example. It also got to the point where the mapping to imagestreams running in the test cluster was more direct. So in a stanza like:
```yaml
- dockerfile_path: Dockerfile.rhel7
  from: base
  inputs:
    src:
      paths:
      - destination_dir: .
        source_path: /go/src/github.com/openshift/jenkins/2/.
  to: jenkins
```
that still executed an OpenShift Docker Strategy build against the `git checkout` from the PR, but now the `to: jenkins` line meant the system would tag our resulting image directly into the test cluster's `jenkins` imagestream.
So there is no more need to add the tagging script into the test container or to do special test cluster setup in the template.
Also, with the agent images now having imagestreams in a 4.x cluster as part of the installed payload, we can leverage the same pattern for those as well, simply tagging the PR's versions of those images into the test cluster. Hence, there is no need for the `IMAGE_MAVEN_AGENT` and `IMAGE_NODEJS_AGENT` environment variables.
With both those removed, we no longer needed a special template under `ci-operator/templates` for jenkins. We now use the default one in 4.x.
You will also notice the use of `Dockerfile.rhel7` vs. `Dockerfile` for the `dockerfile_path`. This stems from the move in 4.x to the UBI and the end of CentOS based content for OpenShift.
The extended tests defined in OpenShift origin were also reworked in 4.x. The use of Ginkgo focuses was moved to only within the test executables, and focuses can no longer be specified from the command line. "Test suites" were defined for the most prominent focuses, including one for Jenkins called `openshift/jenkins-e2e`.
This simplification, along with other rework, makes it easier to run the extended tests against existing clusters (including ones you stand up for your development ... more on that later). The tests defined for jenkins repo PRs are the specific jenkins e2e's, as well as the generic OpenShift conformance regression bucket:
```yaml
tests:
- as: e2e-aws
  commands: TEST_SUITE=openshift/conformance/parallel run-tests
  openshift_installer:
    cluster_profile: aws
- as: e2e-aws-jenkins
  commands: TEST_SUITE=openshift/jenkins-e2e run-tests
  openshift_installer:
    cluster_profile: aws
```
Moving on to the `ci-operator/jobs` data, which is not as different from 3.11 as the `ci-operator/config`: the presubmits are the ci-operator and Prow related definitions for the `e2e-aws` and `e2e-aws-jenkins` test jobs noted above, as well as the job to build the images. Each can be re-run via `/test e2e-aws`, `/test e2e-aws-jenkins`, or `/test images`.
Likewise, the postsubmit has the Prow job defined to collect the test artifacts as with 3.11. The Jenkins based job to push the resulting images has been removed as part of the move off of docker.io and onto quay.io. The CI system in general will push the updates to quay.io for all relevant images, and we no longer need special jobs for our Jenkins images on this front.
For the plugins, we've managed to only need a `master` branch to date in providing support against 3.x and 4.x. The OpenShift features needed to support the various OpenShift/Jenkins integrations landed in 3.4, and we are only actively supporting OpenShift/Jenkins images back to 3.6. An experimental branch `v3.6-fixes` was created in https://github.com/jenkinsci/openshift-sync-plugin a while back to confirm that, if need be, we could backport specific fixes to older versions and craft versions like `1.0.24.1`. But for simplicity that will be a last resort sort of thing.
Also note, we have not been updating older versions of the OpenShift/Jenkins image with newer versions of our 3 plugins to pull in fixes unless a support case comes in through bugzilla that dictates so.
With that background as context:
- During the 4.0 time frame, PR testing for the supported plugins has migrated fully to Prow (and away from the OpenShift CI Jenkins server).
- Only `ci-operator/config` and `ci-operator/jobs` artifacts are needed for the three plugins, similar to what exists for the jenkins repo.
- The deprecated plugin remains on jobs defined at https://github.com/openshift/aos-cd-jobs that run on the OpenShift CI Jenkins server ... note that since its EOL notice on Aug 3 2018 there have been no further changes needed to that plugin from a support perspective.
The `ci-operator/config` files for each plugin are very similar. For reference, here are the locations for the sync plugin, the client plugin, and the login plugin.
The 3 key elements are:
- We define a "copy" of the latest jenkins imagestream to serve as the basis of the new image build that contains the new plugin binary stemming from the PR's changes:
```yaml
base_images:
  original_jenkins:
    name: '4.0'
    namespace: ocp
    tag: jenkins
```
- With that as the base image, like before, we define an OpenShift Docker Strategy build against the given plugin's checked out repo, leveraging the new 4.x specific `Dockerfile` files present for each. The `ci-operator/config` for that build is:
```yaml
images:
- dockerfile_path: Dockerfile
  from: original_jenkins
  inputs:
    src:
      paths:
      - destination_dir: .
        source_path: /go/src/github.com/openshift/jenkins-openshift-login-plugin/.
  to: jenkins
```
Where `source_path` and `dockerfile_path` correspond to the `git checkout` of the plugin repo for the PR's branch, and `to: jenkins` says to take the resulting image and tag it into the jenkins imagestream.
The `Dockerfile` files look like:
```dockerfile
FROM quay.io/openshift/origin-jenkins-agent-maven:v4.0 AS builder
WORKDIR /java/src/github.com/openshift/jenkins-login-plugin
COPY . .
USER 0
RUN export PATH=/opt/rh/rh-maven35/root/usr/bin:$PATH && mvn clean package

FROM quay.io/openshift/origin-jenkins:v4.0
RUN rm /opt/openshift/plugins/openshift-login.jpi
COPY --from=builder /java/src/github.com/openshift/jenkins-login-plugin/target/openshift-login.hpi /opt/openshift/plugins
RUN mv /opt/openshift/plugins/openshift-login.hpi /opt/openshift/plugins/openshift-login.jpi
```
Where the second `FROM ...` is replaced with the image pull spec for `from: original_jenkins`, and we use `AS builder` and our maven image to compile the plugin using the PR's branch, and then copy the resulting hpi file into the Jenkins image's plugin directory, so it is picked up when the image is started up.
- And lastly, we leverage just the jenkins-e2e suite for plugin testing with:
```yaml
tests:
- as: e2e-aws-jenkins
  commands: TEST_SUITE=openshift/jenkins-e2e run-tests
  openshift_installer:
    cluster_profile: aws
```
The use of the generic conformance regression bucket is omitted for plugin testing, as it will be covered when we attempt to update the openshift/jenkins image with any new version of the plugin.
(A subtle reminder that what we officially support is the openshift/jenkins images. New releases of the plugin not yet incorporated into the image are considered "pre-release function").
The `ci-operator/jobs` files are very similar to the ones for the jenkins image itself for 4.x, in the Prow jobs they define, etc. For reference, here are the locations of the client plugin, login plugin, and sync plugin.
First, the extended tests in OpenShift Origin that we've made some references to ... where are the Jenkins ones specifically? There are two golang files:
- The reduced jenkins e2e suite run in the OpenShift Build's regression bucket
- In addition to the minimal suite, these tests get run in the jenkins e2e for the image and each plugin
The structure of those golang files is a bit unique from a Ginkgo perspective, and as compared to the other tests in OpenShift origin. Given the memory demands of Jenkins, as well as intermittent issues with Jenkins trying to contact the Update Center, we've gone to some pains:
- To minimize the number of Jenkins instances running concurrently during the e2e
- To minimize the number of times we have to bring Jenkins up
So you'll see less use of `g.Context(..)` and `g.It(...)`, as well as cleaning up of resources between logical tests. Currently the divisions in the tests that result in concurrent test runs are between:
- The client plugin and sync plugin
- The ephemeral storage template and persistent storage template
The `openshift-tests` binary in the 4.x branches of OpenShift Origin includes those tests (and as it turns out, can be run against both 3.11 and 4.x clusters). Once you have a cluster up, and the `openshift-tests` binary built (run `hack/build-go.sh cmd/openshift-tests` from your clone of origin), you can:
- set and export KUBECONFIG to the location of the admin.kubeconfig for the cluster
- run `openshift-tests run openshift/jenkins-e2e --include-success` against the cluster ... `openshift/jenkins-e2e` is considered a "suite" in `openshift-tests`, and under the covers it leverages Ginkgo focuses to pick up the tests from those two golang files
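Putting those two steps together (the kubeconfig path is an example; `openshift-tests` is assumed to be on your PATH after the build above):

```sh
export KUBECONFIG=/path/to/admin.kubeconfig     # the admin.kubeconfig for your cluster
openshift-tests run openshift/jenkins-e2e --include-success
```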
To run the extended tests against one of your clusters using a set of local changes to a plugin, from the plugin repo's top directory you can:
- run `docker build -f ./Dockerfile -t <a publicly accessible docker registry spec, like docker.io/gmontero/plugin-tests:latest> .`
- run `docker push <a publicly accessible docker registry spec, like docker.io/gmontero/plugin-tests:latest>`
- run `oc tag --source=docker <a publicly accessible docker registry spec, like docker.io/gmontero/plugin-tests:latest> openshift/jenkins:2`
- run `openshift-tests run openshift/jenkins-e2e --include-success` ... the imagestream controller in OpenShift will pull the publicly accessible image (like docker.io/gmontero/plugin-tests:latest) when the standard jenkins template is provisioned
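As one consolidated sketch, using a placeholder registry spec that you should replace with a repository you can push to and your cluster can pull from:

```sh
IMAGE=docker.io/<your id>/plugin-tests:latest    # placeholder pull spec
docker build -f ./Dockerfile -t "${IMAGE}" .
docker push "${IMAGE}"
oc tag --source=docker "${IMAGE}" openshift/jenkins:2
openshift-tests run openshift/jenkins-e2e --include-success
```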
The Dockerfiles and scripts in these folders were used in the pre-Prow days, where the Jenkins jobs would build Docker images with the updated plugin local to the test nodes, and then, via environment variables, the extended tests would provision Jenkins either using the standard templates or specialized ones that would leverage the local test image.
We can most likely remove these, but are holding on to them in case we have to create non-master branches for backporting fixes to older versions of the plugins in corresponding older openshift/jenkins images.
Once we've merged changes into one of the OpenShift org GitHub repositories for a given plugin, we need to transfer the associated commit to the corresponding JenkinsCI org GitHub repository and follow the upstream Jenkins project release process when we have deemed changes suitable for inclusion into the non-subscription OpenShift Jenkins image (the CentOS7 based one hosted on docker.io for 3.x, the UBI based one hosted on quay.io for 4.x).
Some key specifics from the upstream Jenkins project release process:
- You need a login/account via https://accounts.jenkins.io/ .... by extension it should also give you access to https://issues.jenkins-ci.org. See https://wiki.jenkins-ci.org/display/JENKINS/User+Account+on+Jenkins.
- You should add this account to your `~/.m2/settings.xml`. The release process noted above has details on how to do that, as well as workarounds for potential hiccups. Read it thoroughly.
- Someone on the OpenShift Developer Experience team who already has access will need to give you the necessary permissions in the files for each plugin at https://github.com/jenkins-infra/repository-permissions-updater/tree/master/permissions. Existing users will need to construct a PR that adds your ID to the list of folks.
- Similarly, for the https://github.com/jenkinsci repositories for each plugin, we'll need to update each repository's administrator lists, etc. with your GitHub ID. We currently have GitHub teams defined for this; you will need to be made a member of those teams.
For our Jenkins image repository to include particular versions of our plugins in the image, the plugin versions in question need to be available at these locations, depending on the particular plugin of course. These are the official landing spots for a newly minted version of a particular plugin.
- https://updates.jenkins.io/download/plugins/openshift-client
- https://updates.jenkins.io/download/plugins/openshift-sync
- https://updates.jenkins.io/download/plugins/openshift-login
We have not yet had to pay attention to them, but the CI jobs over on CloudBees' Jenkins server for our plugins are:
- https://jenkins.ci.cloudbees.com/job/plugins/job/openshift-client-plugin/
- https://jenkins.ci.cloudbees.com/job/plugins/job/openshift-sync-plugin/
- https://jenkins.ci.cloudbees.com/job/plugins/job/openshift-login-plugin/
These kick in when we cut the official version at the Jenkins Update Center for a given plugin.
To cut a new release of any of our plugins, you will set up a local clone of the https://github.com/jenkinsci repository for the plugin in question, like https://github.com/jenkinsci/openshift-client-plugin, and then transfer the necessary commits from the corresponding https://github.com/openshift repository, like https://github.com/openshift/jenkins-client-plugin.
Transfer commits from the https://github.com/openshift repositories to the https://github.com/jenkinsci repositories ... prior to generating a new plugin release
In your clone of `https://github.com/jenkinsci/<plugin dir>`, set up your git remotes so origin is the `https://github.com/jenkinsci/<plugin dir>` repository, and upstream is the `https://github.com/openshift/<plugin dir>/` repository. Using `openshift-client` as an example (substitute the other plugin names if working with those plugins):
- From the parent directory you've chosen for your local repository, clone it ... for example, run `git clone git@github.com:jenkinsci/openshift-client-plugin.git` for the client plugin
- Change into the resulting directory, again for example `openshift-client-plugin`, and add a git upstream link for the corresponding repo under openshift ... for example, run `git remote add upstream git://github.com/openshift/jenkins-client-plugin` for the client plugin
- Then pull and rebase the latest changes from https://github.com/openshift/jenkins-client-plugin with the following:
```
$ git checkout master
$ git fetch upstream
$ git fetch upstream --tags
$ git cherry-pick <commit id>   # for each commit that needs to be migrated
$ git push origin master
$ git push origin --tags
```
After pushing the desired commits to the https://github.com/jenkinsci repository for the plugin in question, you can now actually initiate the process to create a new version of the plugin in the Jenkins Update Center.
Prerequisite: your Git ID should have push access to the https://github.com/jenkinsci repositories for this plugin; your Jenkins ID (again see https://wiki.jenkins-ci.org/display/JENKINS/User+Account+on+Jenkins) is listed in the permission file for the plugin, like https://github.com/jenkins-infra/repository-permissions-updater/blob/master/permissions/plugin-openshift-pipeline.yml. Given these assumptions:
- Then run `mvn release:prepare release:perform`
- You'll minimally be prompted for the `release version`, `release tag`, and the `new development version`. Default choices will be provided for each, and the defaults are acceptable, so you can just hit the enter key for all three prompts. As an example, if we are currently at v1.0.36, it will provide 1.0.37 for the new `release version` and `release tag`. For the `new development version` it will provide 1.0.38-SNAPSHOT, which is again acceptable.
- The `mvn release:prepare release:perform` command will take a few minutes to build the plugin and go through various verifications, followed by a push of the built artifacts up to the Jenkins Artifactory server. This typically works without further involvement, but has failed for various reasons in the past. If so, to retry with the same release version, you will need to call `git reset --hard HEAD~2` to back off the two commits created as part of publishing the release (the "release commits", where the pom.xml is updated to reflect the new version and the next snapshot version), as well as use `git tag` to delete both the local and remote versions of the corresponding tag. After deleting the commits and tags, use `git push -f` to update the commits in the jenkinsci GitHub org repo in question (a hedged sketch of this backout appears after the example commits below). Address whatever issues you have (you might have to solicit help on the Jenkins developer group at https://groups.google.com/forum/#!forum/jenkinsci-dev or on the #jenkins channel on freenode, where Daniel Beck from CloudBees has been helpful and returned messages), then try again.
- If `mvn release:prepare release:perform` completes successfully, those "release commits" will look something like this if you ran `git log -2`:
```
commit 1c6dabc66c24c4627941cfb9fc2a53ccb0de59b0
Author: gabemontero <gmontero@redhat.com>
Date:   Thu Oct 26 14:18:52 2017 -0400

    [maven-release-plugin] prepare for next development iteration

commit e040110d466249dd8c6f559e343a1c6b4b5f19a8
Author: gabemontero <gmontero@redhat.com>
Date:   Thu Oct 26 14:18:48 2017 -0400

    [maven-release-plugin] prepare release openshift-login-1.0.0
```
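And the backout sketch referenced in the retry bullet above; the tag name here is just an example (match whatever tag `mvn release:prepare` actually created), so double check before deleting anything:

```sh
git reset --hard HEAD~2                             # drop the two "release commits"
git tag -d openshift-login-1.0.0                    # delete the local tag (example name)
git push origin :refs/tags/openshift-login-1.0.0    # delete the remote tag
git push -f origin master                           # force the jenkinsci repo back to the pre-release state
```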
Transfer commits from the https://github.com/jenkinsci repositories to the https://github.com/openshift repositories ... after generating a new plugin release
Keeping the commit lists between the openshift and jenkinsci repositories as close as possible helps with general sanity, as we do our development work on the openshift side, but have to cut the plugin releases on the jenkinsci side. So we want to pick the 2 version commits back into our corresponding openshift repositories.
First, from your clone on the openshift side, say for example the clone for the sync plugin, run `git fetch git@github.com:jenkinsci/openshift-sync-plugin.git`.
Second, create a new branch.
Third, pick the two version commits via `git cherry-pick`.
Lastly, push the branch and create a PR against the https://github.com/openshift repo in question. Quick FYI, the plugin repos are now set up with the right github rebase configuration options such that we do not get that extra, empty "merge commit".
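Using the sync plugin as the example, that sequence might look like the following; the branch name and commit ids are placeholders:

```sh
git fetch git@github.com:jenkinsci/openshift-sync-plugin.git   # make the release commits available locally
git checkout -b pick-release-commits                           # placeholder branch name
git cherry-pick <release commit id> <next development iteration commit id>
git push origin pick-release-commits                           # then open the PR against the openshift repo
```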
As described above, we can employ our CI server and extended test framework for repositories in https://github.com/openshift. We cannot for https://github.com/jenkinsci. For this reason we maintain the separate repositories, where we run OpenShift CI against incoming changes before publishing them to the "official" https://github.com/jenkinsci repositories.
We can continue development of new features for the next release in the https://github.com/openshift repositories. But if a new bug for the current release pops up in the interim, we can make the change in https://github.com/jenkinsci first, then cherry-pick into https://github.com/openshift (employing our regression testing), and then when ready cut the new version of the plugin in https://github.com/jenkinsci.
Through various scenarios and human error, those "release commits" (where the version in the pom.xml is updated) have sometimes landed in https://github.com/openshift repositories after the fact, or out of order with how they landed in https://github.com/jenkinsci repositories. The https://github.com/jenkinsci repositories are the master in this case. Which plugin version a change ends up in is expressed by the order of the commits relative to the pom.xml release commits in the various https://github.com/jenkinsci repositories.
As referred to previously, the new plugin version will land at `https://updates.jenkins.io/download/plugins/<plugin-name>`. Monitor that page for the existence of the new version of the plugin. Warning: the link for the new version can show up before the file is actually available. Click the link to confirm you can download the new version of the plugin. When you can download the new version file, the new release is available.
At this point, we are back to the steps articulated in the base plugin installation section of this repository's README. You'll modify the text file with the new version for whatever OpenShift plugin you have cut a new version for, and create a new PR. The image build and e2e extended test runs we detailed in "Actual PR Testing" will commence.
If the PR passes all tests and merges, the api.ci system will promote the jenkins images to quay.io for 4.x, and we have the separate jenkins job in the openshift ci jenkins server to push the 3.11 images to docker.io.
The image build from the PR is of particular interest when it comes to plugin versions within our openshift/jenkins image, and what we have to do in creating the RPM based images hosted on registry.redhat.io/registry.access.redhat.com that we provide subscription level support for. The PR will have a link to the `ci/prow/images` job. If you click that link, then the `artifacts` link, then the next `artifacts` link, then `build-logs`, you'll see gzipped output from each of the image builds. Click the one for the main jenkins image. If you search for the string `Installed plugins:` you'll find the complete list of every plugin that was installed. Copy that output to the clipboard and paste it into the PR that just merged. See openshift#829 (comment) as an example.
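If you prefer the command line for that last part, a hypothetical example of pulling the list out of a downloaded build log (the file name here is made up; use whatever the artifacts page gives you):

```sh
# hypothetical file name; print the "Installed plugins:" section from the gzipped build log
zcat jenkins-build-log.gz | grep -A 300 'Installed plugins:'
```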
Step 2: updating OSBS/Brew for generating the officially supported images available with Red Hat subscriptions
First, some background: for all Red Hat officially supported content, to ensure protection from outside hackers, all content is built in a quarantined system with no access to the external Internet. As such, we have to inject all content into OpenShift's Brew server (see links like https://pagure.io/koji/ and https://osbs.readthedocs.io/en/latest/ if you are interested in the details/histories of this infrastructure), which is then scrubbed before official builds are run with it. The injection is specifically the creation of an RPM which contains all the plugin binaries.
The team responsible for all this build infrastructure for OpenShift, the Automated Response Team or ART, not surprisingly has their own, separate Jenkins instance that helps manage some of their dev flows.
They have provided us a Jenkins pipeline (fitting, I know) that facilitates the building of the plugin RPM and its injection into the Brew pipeline that ultimately results in an image getting built. They have also provided a pipeline for updating the version of Jenkins core we have installed. The pipeline for updating the version of the Jenkins core in our image is at https://github.com/openshift/aos-cd-jobs/tree/master/jobs/devex/jenkins-bump-version and the pipeline for updating the set of plugins installed is at https://github.com/openshift/aos-cd-jobs/tree/master/jobs/devex/jenkins-plugins.
Now, we used to be able to log onto their Jenkins server and initiate runs of those 2 pipelines, but towards the end of the 4.1 cycle, corporate processes and guidelines changed such that only members of the ART are allowed to access it.
So now, you need to open a Jira bug on the ART board at https://jira.coreos.com/secure/RapidBoard.jspa?rapidView=85 to inform them of what is needed. Supply these parameters:
- The jenkins core version to base off of (just supply what we are shipping with 3.11 or 4.x)
- The "OCP_RELEASE" is the OpenShift release (OCP == OpenShift Container Platform) ... so either 3.11, 4.0, 4.1, etc.
- The plugin list is the list you saved from the image build in the PR. Remove the "Installed plugins" header, but include the `<plugin name>:<plugin version>` lines.
An example of such a request is at https://jira.coreos.com/browse/ART-673.
The job typically takes 10 to 15 minutes to succeed. Flakes with upstream Jenkins when downloading plugins are the #1 barrier to success; just retry until the Jenkins update center stabilizes. Once in a while there is a dist-git hiccup (dist-git is the git server used by brew). Again, just try again until it settles down.
When the job succeeds, an email is sent to the folks in the mailing lists. Add yours to the list when submitting the job if it is not listed there. Submit a PR to update https://github.com/openshift/aos-cd-jobs/blob/master/jobs/devex/jenkins-plugins/Jenkinsfile to add your email if you'll be doing this long term.
The email sent will contain links that will point you to the dist git commit for the new RPM.
Store the job link and dist git link in the original PR, like this comment.
AND THE LAST PIECE !!!!! .... the openshift/jenkins images produced by OpenShift's Brew are listed at https://brewweb.engineering.redhat.com/brew/packageinfo?packageID=67183. Make sure the brew registry is in your docker config's insecure registry list (it is `brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888`).
If you are on the Red Hat network or VPN, you can download the images and try them out. Also, QA/QE goes to this registry to verify bugs/features.
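One common way to satisfy the insecure registry requirement on a docker host is `/etc/docker/daemon.json`; this is just an example, and the exact file and restart mechanism depend on your setup:

```sh
# example /etc/docker/daemon.json contents (merge with any existing settings):
# {
#   "insecure-registries": ["brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888"]
# }
sudo systemctl restart docker    # restart the daemon to pick up the change
```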
NOTE: when they build those images in brew, they actually modify the `Dockerfile.rhel7` files to facilitate:
- some magic with respect to the yum repositories used that is particular to running in this quarantined environment
- switching `INSTALL_JENKINS_VIA_RPMS` so our build scripts do not attempt to download plugins, but rather install the RPMs
The file which triggers these things for the main image is at https://github.com/openshift/ocp-build-data/blob/openshift-4.0/images/openshift-jenkins-2.yml for the 4.0 release. There are branches for the other releases. For the agent images you have https://github.com/openshift/ocp-build-data/blob/openshift-4.0/images/jenkins-slave-base-rhel7.yml, https://github.com/openshift/ocp-build-data/blob/openshift-4.0/images/jenkins-agent-maven-35-rhel7.yml, and https://github.com/openshift/ocp-build-data/blob/openshift-4.0/images/jenkins-agent-nodejs-8-rhel7.yml.