How to run CAP cached VM
# Save this file to somewhere libvirt can access it
wget https://s3.amazonaws.com/cf-opensusefs2/vagrant/cap.scf-opensuse-2.7.0.cf1.9.0.0.g2d95fcb5.console-1.0.0.qcow2
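The image needs to live somewhere the libvirt/qemu user can read it. A minimal sketch, assuming an ~/images directory, which the later commands here also use:
# Hypothetical storage location; any libvirt-readable path works
mkdir -p ~/images
mv cap.scf-opensuse-2.7.0.cf1.9.0.0.g2d95fcb5.console-1.0.0.qcow2 ~/images/
# Depending on your distribution, the qemu user may also need search
# permission on the path components:
chmod o+x ~ ~/images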
It's usually a good idea to create a disk clone to work from, so you don't actually modify the VM image that's been provided.
# An example of using an images storage directory in a home directory
# to create a disk image that's backed by the image you downloaded.
#
# Providing the absolute path to the original file is required. Newer
# qemu versions also require the backing format to be given explicitly (-F).
qemu-img create -f qcow2 -b ~/images/cap.scf-opensuse-2.7.0.cf1.9.0.0.g2d95fcb5.console-1.0.0.qcow2 -F qcow2 ~/images/scf-ephemeral.qcow2
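To sanity-check that the overlay really points at the downloaded image, qemu-img info shows the backing chain:
# Look for a "backing file:" line referencing the original qcow2
qemu-img info ~/images/scf-ephemeral.qcow2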
# Ensure the network has been started:
virsh net-start default
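If you're not sure whether the default network exists or is already active, a quick check (net-start simply errors if it's already running):
# Show all libvirt networks and their state
virsh net-list --all
# Optionally have the default network start at host boot
virsh net-autostart default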
# Start the instance
# Adjust the amount of RAM depending on your host system; 10G is the
# minimum, and 16G is used here.
virt-install --connect=qemu:///system --name=scf --ram=$((16*1024)) --vcpus=2 --disk path=~/images/scf-ephemeral.qcow2,format=qcow2 --import
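Once virt-install returns, a couple of optional checks to confirm the domain came up:
# The scf domain should be listed as running
virsh list --all
# Attach to the serial console if you want to watch it boot (exit with Ctrl+])
virsh console scf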
Find the IP address of the VM:
sudo virsh domifaddr <name-of-vm>
If the domifaddr command isn't available, an alternative is to get the VM's MAC address and then look it up in the DHCP lease information:
MAC=$(virsh dumpxml scf | grep "mac address" | sed "s/.*'\(.*\)'.*/\1/g")
# Reading the lease file may require root
sudo grep ${MAC} /var/lib/libvirt/dnsmasq/default.leases
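On reasonably recent libvirt there is also a direct query that skips both steps; a hedged alternative, assuming the VM sits on the default network:
# Lists the MAC address, IP, and hostname for every active lease
virsh net-dhcp-leases default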
If you're using bridged mode, libvirt won't know the VM's IP; you'll have to log in via the console and find it with ip -4 -o a.
ssh scf@<ip-address>
Credentials are:
username: scf
password: changeme
echo '{"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"persistent"},"provisioner":"kubernetes.io/host-path"}' | kubectl create -f -
Put the following configuration in values.yaml, and change every occurrence of VMIPADDRESS to the address of the VM (run ip -4 -o a in the VM to find it); a substitution one-liner follows the block:
env:
  # Password for the cluster
  CLUSTER_ADMIN_PASSWORD: changeme
  # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not the
  # master's) external IP. This must match the value passed to the
  # cert-generator.sh script.
  DOMAIN: VMIPADDRESS.nip.io
  # Password for SCF to authenticate with UAA
  UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret
  # UAA host/port that SCF will talk to. If you have a custom UAA, provide
  # its host and port here. If you are using the UAA that comes with the
  # SCF distribution, simply use the two values below, substituting your
  # DOMAIN from above for cf-dev.io.
  UAA_HOST: uaa.VMIPADDRESS.nip.io
  UAA_PORT: 2793
kube:
  # The IP address assigned to the kube node pointed to by the domain. The
  # example value here is what the vagrant setup assigns; you will likely
  # need to change it.
  external_ip: VMIPADDRESS
  storage_class:
    # Make sure to change this value to whatever storage class you use
    persistent: persistent
  # The next line is needed for CaaSP 2, but should _not_ be there for CaaSP 1
  auth: rbac
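Rather than editing by hand, the placeholder can be replaced in one pass. A minimal sketch; the address below is only an example value:
# Assumes VMIP holds the address found earlier with ip -4 -o a
VMIP=192.168.122.100
sed -i "s/VMIPADDRESS/${VMIP}/g" values.yaml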
helm install helm/uaa-opensuse/ -n uaa --namespace uaa --values values.yaml
Ensure the UAA pod is Ready (1/1) before continuing; kubectl get pods --namespace uaa can show you this.
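To block until it's up rather than polling by hand:
# Watch the uaa namespace until the pod shows READY 1/1, then Ctrl+C
kubectl get pods --namespace uaa --watch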
# Put this in a variable for the next statement
CA_CERT="$(kubectl get secret secret --namespace uaa -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
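It's worth sanity-checking that the lookup actually returned a certificate; a quick check, assuming openssl is available on the host:
# Should print the CA subject and validity dates rather than an error
echo "${CA_CERT}" | openssl x509 -noout -subject -dates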
# The cert below has been renamed recently, so we'll set both values to ensure
# these instructions stay good for longer.
helm install helm/cf-opensuse/ -n scf --namespace scf --values values.yaml --set "env.UAA_CA_CERT=${CA_CERT}" --set "env.HCP_CA_CERT=${CA_CERT}"
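SCF brings up considerably more pods than UAA and takes a while; the same readiness check applies:
# Wait until all pods in the scf namespace are Ready
kubectl get pods --namespace scf --watch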
helm install console --name console --namespace ui --values values.yaml
Note: with NAT networking this only works from within the VM; external access is only possible with a bridged address or some custom SSH tunneling.
cf api --skip-ssl-validation https://api.VMIPADDRESS.nip.io
cf login -u admin -p changeme
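Once logged in, a quick smoke test; the org and space names here are arbitrary placeholders:
# Create and target a throwaway org/space to confirm the API works end to end
cf create-org demo
cf create-space demo -o demo
cf target -o demo -s demo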
Note: with NAT networking this only works from within the VM; external access is only possible with a bridged address or some custom SSH tunneling.
First we need to get the port that the console is running on.
# Retrieve the list of services for the UI
kubectl get svc --namespace ui
# Sample output
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# console-mariadb ClusterIP 10.254.30.178 <none> 3306/TCP 3m
# console-ui-ext NodePort 10.254.30.135 10.9.170.14 8443:31174/TCP 3m
Look for the console-ui-ext service; we need its external IP and node port. In this example, the URL to enter into a web browser would be the following:
CONSOLE_PORT=$(kubectl get svc console-ui-ext --namespace ui -o jsonpath='{.spec.ports[0].nodePort}')
CONSOLE_IP=$(kubectl get svc console-ui-ext --namespace ui -o jsonpath='{.spec.externalIPs[0]}')
echo https://${CONSOLE_IP}:${CONSOLE_PORT}
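Before opening a browser you can verify the endpoint answers (subject to the NAT note above); the console uses a self-signed certificate, hence -k:
curl -k -I "https://${CONSOLE_IP}:${CONSOLE_PORT}"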
TCP ports used by the deployment: 22, 80, 443, 2222, 2793, 4443, 20000-20008
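If a firewall sits between you and the VM, these are the ports to open. A hedged firewalld example; adjust to whatever firewall your host actually runs:
for p in 22 80 443 2222 2793 4443 20000-20008; do
  sudo firewall-cmd --permanent --add-port=${p}/tcp
done
sudo firewall-cmd --reload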