Contributing to OpenShift

The OpenShift architecture builds upon the flexibility and scalability of Docker and Kubernetes to deliver a powerful new Platform-as-a-Service system. This article explains how to set up a development environment and get involved with this latest version of OpenShift. Kubernetes is included in this repo for ease of development, and the version we include is periodically updated.

To get started, you can either download prebuilt binaries (see Download from GitHub below) or, if you are interested in development, start with the OpenShift Development section.

Download from GitHub

The OpenShift team periodically publishes binaries to GitHub on the Releases page. These are 64-bit binaries for Linux, Windows, or Mac OS X (note that the Mac and Windows binaries are client only). You’ll need Docker installed on your local system (see the installation page if you’ve never installed Docker before).

The tar file for each platform contains a single binary, openshift, which is the all-in-one OpenShift installation.

  • Use sudo openshift start to launch the server. Root access is required because creating services modifies iptables rules. See issue kubernetes/kubernetes#1859.

  • Use oc login <server> … to connect to an OpenShift server

  • Use openshift help to see more about the commands in the binary

OpenShift Development

To get started, fork the origin repo.

Develop locally on your host

You can develop OpenShift 3 on Windows, Mac, or Linux, but you’ll need Docker installed on Linux to actually launch containers.

  • For OpenShift 3 development, install the Go programming language

  • To launch containers, install the Docker platform

Here’s how to get set up:

  1. Install Go and Git, and optionally also Docker, following each tool’s installation documentation.

  2. Next, create a Go workspace directory:

    $ mkdir $HOME/go
  3. In your .bashrc file or .bash_profile file, set a GOPATH and update your PATH:

    export GOPATH=$HOME/go
    export PATH=$PATH:$GOPATH/bin
  4. Open up a new terminal or source the changes in your current terminal. Then clone this repo:

    $ mkdir -p $GOPATH/src/github.com/openshift
    $ cd $GOPATH/src/github.com/openshift
    $ git clone git://github.com/<forkid>/origin  # Replace <forkid> with your GitHub id
    $ cd origin
    $ git remote add upstream git://github.com/openshift/origin
  5. From here, you can generate the OpenShift binaries by running:

    $ make clean build
  6. Next, assuming you have not changed the kubernetes/openshift service subnet from its default value of 172.30.0.0/16, instruct the Docker daemon to trust any Docker registry on that subnet. If you are running Docker as a service via systemd, add the --insecure-registry 172.30.0.0/16 argument to the options value in /etc/sysconfig/docker and restart the Docker daemon. Otherwise, add "--insecure-registry 172.30.0.0/16" to the Docker daemon invocation, e.g.:

    $ docker -d --insecure-registry 172.30.0.0/16
  7. The OpenShift firewalld rules are still a work in progress, so for now it is easiest to disable firewalld altogether:

    $ sudo systemctl stop firewalld
  8. Firewalld will start again on your next reboot, but you can manually restart it with this command when you are done running OpenShift:

    $ sudo systemctl start firewalld
  9. Now change into the directory with the OpenShift binaries, and start the OpenShift server:

    $ cd _output/local/bin/linux/amd64
    $ sudo ./openshift start
    Note
    Replace "linux/amd64" with the appropriate value for your platform/architecture.
  10. Launch another terminal, change into the same directory from which you started OpenShift, and deploy the private Docker registry within OpenShift with the following commands. The --credentials option enables secure communication between the internal OpenShift Docker registry and the OpenShift server, and the --config option provides your identity (in this case, cluster-admin) to the OpenShift server:

    $ sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
    $ sudo chmod +r openshift.local.config/master/admin.kubeconfig
    $ ./oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
  11. If it is not there already, add the current directory to the $PATH, so you can leverage the OpenShift commands elsewhere.

  12. You are now ready to edit the source, rebuild and restart OpenShift to test your changes.

  13. NOTE: to properly stop OpenShift and clean up, so that you can start a fresh instance of OpenShift, execute:

    $ sudo pkill -x openshift
    $ docker ps | awk 'index($NF,"k8s_")==1 { print $1 }' | xargs -l -r docker stop
    $ mount | grep "openshift.local.volumes" | awk '{ print $3}' | xargs -l -r sudo umount
    $ cd <to the dir you ran openshift start> ; sudo rm -rf openshift.local.*

Develop on virtual machine using Vagrant

To facilitate rapid development we’ve put together a Vagrantfile you can use to stand up a development environment.

  1. Install Vagrant

  2. Install VirtualBox (e.g., yum install VirtualBox from the RPM Fusion repository)

  3. In your .bashrc file or .bash_profile file, set a GOPATH:

    export GOPATH=$HOME/go
  4. Clone the project and change into the directory:

    $ mkdir -p $GOPATH/src/github.com/openshift
    $ cd $GOPATH/src/github.com/openshift
    $ git clone git://github.com/<forkid>/origin  # Replace <forkid> with your GitHub id
    $ cd origin
    $ git remote add upstream git://github.com/openshift/origin
  5. Bring up the VM. If you are new to Vagrant, see the Vagrant docs for help with items like provider selection, and make sure your hardware’s virtualization extensions are enabled (on RHEL, for example). Note also that for the make clean build in step 7 to work, the VM needs more memory than the default, which is sized for simply running openshift rather than compiling it:

    $ export OPENSHIFT_MEMORY=4192
    $ vagrant up
    Tip
    To ensure you get the latest image, first run vagrant box remove fedora_inst. If you later employ a dev cluster, also run vagrant box remove fedora_deps.
  6. SSH in:

    $ vagrant ssh
  7. Run a build:

    $ cd /data/src/github.com/openshift/origin
    $ make clean build
  8. You are now ready to edit the source, rebuild and restart OpenShift to test your changes. At this point you may want to update your $PATH:

    # back to /home/vagrant
    $ cd
    # update path to include binaries for oc, oadm, etc
    # this is temporary, to make it persistent add it to .bash_profile
    $ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:$PATH
  9. Now start the OpenShift server:

    # redirect the logs to /home/vagrant/openshift.log for easier debugging
    $ sudo `which openshift` start --public-master=localhost &> openshift.log &
    Note
    This will generate three directories in /home/vagrant (openshift.local.config, openshift.local.etcd, openshift.local.volumes) as well as create the openshift.log file.
    Note
    By default your origin directory (on your host machine) will be mounted as a vagrant synced folder into /data/src/github.com/openshift/origin.
  10. Deploy the private Docker registry within OpenShift with the following commands. The --credentials option enables secure communication between the internal OpenShift Docker registry and the OpenShift server, and the --config option provides your identity (in this case, cluster-admin) to the OpenShift server:

    $ sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
    $ sudo chmod +r openshift.local.config/master/admin.kubeconfig
    $ oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
  11. At this point it may be helpful to load some image streams and templates. These commands make use of fixtures from the openshift/origin/examples directory:

    # load image stream
    $ oc create -f /data/src/github.com/openshift/origin/examples/image-streams/image-streams-centos7.json -n openshift --config=openshift.local.config/master/admin.kubeconfig
    # load templates
    $ oc create -f /data/src/github.com/openshift/origin/examples/sample-app/application-template-stibuild.json -n openshift --config=openshift.local.config/master/admin.kubeconfig
    $ oc create -f /data/src/github.com/openshift/origin/examples/db-templates --config=openshift.local.config/master/admin.kubeconfig
  12. At this point you can open a browser on your host system and navigate to https://localhost:8443/console to view the web console.

  13. NOTE: to properly stop OpenShift and clean up, so that you can start a fresh instance of OpenShift, execute:

    # shut down openshift
    $ sudo pkill openshift
    # stop the docker containers
    $ docker ps | awk 'index($NF,"k8s_")==1 { print $1 }' | xargs -l -r docker stop
    # delete all the internal config files, etcd data, etc., so openshift starts fresh
    $ sudo rm -rf openshift.local.*
    # if you used the --volume-dir=/home/vagrant/volumes flag, also unmount its volumes:
    $ mount | grep "/home/vagrant/volumes" | awk '{ print $3 }' | xargs -l -r sudo umount

Ensure VirtualBox interfaces are not managed by NetworkManager

If you are developing on a Linux host, you need to ensure that NetworkManager ignores the VirtualBox interfaces; otherwise they cause issues with multi-VM networking.

Follow these steps to ensure that the VirtualBox interfaces are unmanaged:

  1. Check the status of Network Manager devices:

    $ nmcli d
  2. If any device whose name starts with vboxnet is not unmanaged, add it to the NetworkManager configuration so that it is ignored.

    $ cat /etc/NetworkManager/NetworkManager.conf
    [keyfile]
    unmanaged-devices=mac:0a:00:27:00:00:00;mac:0a:00:27:00:00:01;mac:0a:00:27:00:00:02
  3. You can use the following command to help generate the unmanaged-devices configuration:

    $ ip link list | grep vboxnet  -A 1 | grep link/ether | awk '{print "mac:" $2}' |  paste -sd ";" -
  4. Reload the Network Manager configuration:

    $ sudo nmcli con reload

Develop and test using a docker-in-docker cluster

It’s possible to run an OpenShift multinode cluster on a single host via docker-in-docker (dind). Cluster creation is cheaper since each node is a container instead of a VM. This was implemented primarily to support multinode network testing, but may prove useful for other use cases.

To run a dind cluster in a VM, follow steps 1-3 of the Vagrant instructions and then execute the following:

# Extra memory is required to build the extended tests
$ export OPENSHIFT_MEMORY=3072
$ export OPENSHIFT_DIND_DEV_CLUSTER=true
$ vagrant up

Bringing up the VM for the first time will take a while due to the overhead of package installation, building docker images, and building openshift. Assuming the 'vagrant up' command completes without error, a dind OpenShift cluster should now be running on the VM. To access the cluster, login to the VM:

$ vagrant ssh

Once on the VM, the 'oc' and 'openshift' commands can be used to interact with the cluster:

$ oc get nodes

It’s also possible to login to the participating containers (openshift-master, openshift-node-1, openshift-node-2, etc) via docker exec:

$ docker exec -ti openshift-master bash

While it is possible to manage the OpenShift daemon in the containers, dind cluster management is fast enough that the suggested approach is to manage at the cluster level instead.

Invoking the dind-cluster.sh script without arguments will provide a usage message:

Usage: hack/dind-cluster.sh {start|stop|restart|...}

Additional documentation of how a dind cluster is managed can be found at the top of the dind-cluster.sh script.

Attempting to start a cluster when one is already running will result in an error message from docker indicating that the named containers already exist. To redeploy a cluster after making changes, use the 'start' and 'stop' or 'restart' commands. OpenShift is always built as part of the dind cluster deployment initiated by 'start' or 'restart'.

By default the cluster consists of a master and 2 nodes. Set the OPENSHIFT_NUM_MINIONS environment variable to override the default of 2 nodes.

Containers are torn down on stop and restart, but the root of the origin repo is mounted to /data in each container to allow for a persistent installation target.

While it is possible to run a dind cluster on any host (not just a vagrant VM), it is recommended to consider the warnings at the top of the dind-cluster.sh script.

Testing networking with docker-in-docker

It is possible to run networking tests against a running docker-in-docker cluster (i.e. after 'hack/dind-cluster.sh start' has been invoked):

$ OPENSHIFT_CONFIG_ROOT=dind test/extended/networking.sh

Since a cluster can only be configured with a single network plugin at a time, this method of invoking the networking tests will only validate the active plugin. It is possible to target all plugins by invoking the same script in 'ci mode', i.e. without setting a config root:

$ test/extended/networking.sh

In ci mode, for each networking plugin, networking.sh will create a new dind cluster, run the tests against that cluster, and tear down the cluster. The test dind clusters are isolated from any user-created clusters, and test output and artifacts of the most recent test run are retained in /tmp/openshift-extended-tests/networking.

It’s possible to override the default test regexes via the NETWORKING_E2E_FOCUS and NETWORKING_E2E_SKIP environment variables. These variables set the '-focus' and '-skip' arguments supplied to the ginkgo test runner.

To debug a test run with delve, make sure the dlv executable is installed in your path and run the tests with NETWORKING_DEBUG set to true:

$ NETWORKING_DEBUG=true test/extended/networking.sh

Running networking tests against any cluster

It’s possible to run networking tests against any cluster. To target the default vm dev cluster:

$ OPENSHIFT_CONFIG_ROOT=dev test/extended/networking.sh

To target an arbitrary cluster, the config root (parent of openshift.local.config) can be supplied instead:

$ OPENSHIFT_CONFIG_ROOT=[cluster config root] test/extended/networking.sh

See the script’s inline documentation for further details.

Running Kubernetes e2e tests

It’s possible to target the Kubernetes e2e tests against a running OpenShift cluster. From the root of an origin repo:

$ pushd ..
$ git clone http://github.com/kubernetes/kubernetes/
$ pushd kubernetes/build
$ ./run hack/build-go.sh
$ popd && popd
$ export KUBE_ROOT=../kubernetes
$ hack/test-kube-e2e.sh --ginkgo.focus="[regex]"

The previous sequence of commands will target a vagrant-based OpenShift cluster whose configuration is stored in the default location in the origin repo. To target a dind cluster, an additional environment variable needs to be set before invoking test-kube-e2e.sh:

$ export OS_CONF_ROOT=/tmp/openshift-dind-cluster/openshift

Development: What’s on the Menu?

Right now you can see what’s happening with OpenShift development at:

Ready to play with some code? Hop down and read up on our roadmap for ideas on where you can contribute.

If you are interested in contributing to Kubernetes directly:
Join the Kubernetes community and check out the contributing guide.

Troubleshooting

If you run into difficulties running OpenShift, start by reading through the troubleshooting guide.

The Roadmap

The OpenShift project roadmap lives on Trello, along with a summary of the roadmap, releases, and other info.

Stay in Touch

Reach out to the OpenShift team and other community contributors through IRC and our mailing list: