With this step-by-step guide, you have everything you need to provision a bare-metal server using the Tinkerbell project.
- Get two machines: one is the provisioner, which can be a VM; the other is the bare-metal server you would like Tinkerbell to provision, referred to here as the worker node.
Use the Tinkerbell Terraform module to set up a single provisioner and worker machine.
You will need a Packet account and a personal user access token, not a project-level token.
- You need to set up the Tinkerbell provisioning engine before working on the workflow.
curl -sLS https://raw.githubusercontent.com/tinkerbell/tink/master/setup.sh | sh
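If you prefer not to pipe a remote script straight into your shell, the same setup can be done in two steps so you can review it first:
curl -sLS https://raw.githubusercontent.com/tinkerbell/tink/master/setup.sh -o setup.sh
# Review the script, then run it
sh setup.sh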
# Prevent Docker from interfering with NAT forwarding
# https://docs.docker.com/network/iptables/
# src_if and dst_if are placeholders: substitute your internal and external interfaces
iptables -I DOCKER-USER -i src_if -o dst_if -j ACCEPT
# Now setup NAT from the internal network to the public network
# https://www.revsys.com/writings/quicktips/nat.html
iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE
iptables -A FORWARD -i bond0 -o enp1s0f1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i enp1s0f1 -o bond0 -j ACCEPT
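These iptables rules do not survive a reboot on their own. A minimal sketch for persisting them, assuming a Debian/Ubuntu provisioner with the iptables-persistent package installed:
# Save the current rules so netfilter-persistent restores them at boot
iptables-save > /etc/iptables/rules.v4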
Customise the cloud-init stage with an SSH key from the provisioner.
Run ssh-keygen on the provisioner, then press Enter at each prompt.
Now run cat ~/.ssh/id_rsa.pub and paste the value into generate.sh.
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8TlZp6SMhZ3OCKxWbRAwOsuk8alXapXb7GQV4DPwZ+ug1AtkDCSSzPGZI6PP3rFILfobQdw6/t/GT3TKwQ1HY2vYqikWXG7YjT6r5IlsaaZ6y3KAuestYx2lG8I+MCbLmvcjo4k2qeJuf2yj331izRkeNRlRx/VWFUAtoCw2Kr2oZK+LbV8Ewv+x6jMVn9+NgxmMj+fHj9ajVtDacVvyJ8cStmRmOyIGd+rPKDb8txJT4FYXIsy5URhioni7QQuJcXN/qqy4TSY+EaYkGUo2j91MuDJZbdQYniOV4ODS8At/a/Ua51x+ia6Y51pCHMvPsm7DFhK13EQUXhIGdPVY3 root@tf-provisioner
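To double-check that you are authorising the key you expect, you can print its fingerprint on the provisioner before pasting it in (standard OpenSSH tooling):
ssh-keygen -lf ~/.ssh/id_rsa.pub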
Each image from 00-07 will be created as a Docker image and then pushed to the registry.
./create_images.sh
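Each of the 00-07 images is built locally before being pushed, so a quick sanity check after the script finishes is to list them with plain Docker:
# Confirm the action images were built before they were pushed to the registry
docker images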
Archives will be required for:
- rootfs
- kernel
- modules
- initrd
Since we are using Packet's infrastructure, we can also use their image builder and custom repository.
A Docker build is run to produce the tar.gz files, which then need to be copied into Nginx's root so that OSIE can serve them to the worker.
The initial Terraform uses the c3.small.x86 worker type, so use the following parameters to configure Ubuntu 18.04.
apt update && apt install -qy git git-lfs fakeroot jq
git clone https://github.com/packethost/packet-images
cd packet-images
git lfs install
# This will take a few minutes
./tools/build.sh -d ubuntu_18_04 -p c3.small.x86 -a x86_64 -b ubuntu_18_04-c3.small.x86
# Now copy the output so that it's available to be served over HTTP
mkdir -p /var/tinkerbell/nginx/misc/osie/current/ubuntu_18_04
cp *.tar.gz /var/tinkerbell/nginx/misc/osie/current/ubuntu_18_04/
# ls -l /var/tinkerbell/nginx/misc/osie/current/ubuntu_18_04/
total 397756
-rw-r--r-- 1 root root 278481368 May 19 08:54 image.tar.gz
-rw-r--r-- 1 root root 25380938 May 19 08:54 initrd.tar.gz
-rw-r--r-- 1 root root 7896480 May 19 08:54 kernel.tar.gz
-rw-r--r-- 1 root root 65386698 May 19 08:54 modules.tar.gz
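The four archives map to the list above (image.tar.gz is the root filesystem). A small shell check, assuming the same destination path used in the copy step, to confirm none are missing:
for f in image initrd kernel modules; do
  test -f /var/tinkerbell/nginx/misc/osie/current/ubuntu_18_04/$f.tar.gz || echo "missing: $f.tar.gz"
done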
Alternatively run:
#1. Install git-lfs
apt-get install git-lfs
#2. Download the get-ubuntu-image script
wget https://raw.githubusercontent.com/packethost/packet-images/master/tools/get-ubuntu-image
#3. Make get-ubuntu-image executable
chmod +x get-ubuntu-image
#4. Download the packet-save2image script
wget https://raw.githubusercontent.com/packethost/packet-images/master/tools/packet-save2image
#5. Make packet-save2image executable
chmod +x packet-save2image
#6. Download Dockerfile
wget https://raw.githubusercontent.com/packethost/packet-images/ubuntu_18_04-base/x86_64/Dockerfile
#7. Download Image:
./get-ubuntu-image 16.04 x86_64 .
#8. Build:
docker build -t custom-ubuntu-16 .
#9. Save
docker save custom-ubuntu-16 > custom-ubuntu-16.tar
#10. Package:
./packet-save2image < custom-ubuntu-16.tar > image.tar.gz
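Whichever route you take, the resulting image.tar.gz still needs to end up in the Nginx root used earlier so that OSIE can serve it to the worker (the kernel, initrd and modules archives are still required alongside it):
cp image.tar.gz /var/tinkerbell/nginx/misc/osie/current/ubuntu_18_04/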
- Download this repo to your provisioner.
- Use vim to modify the generate.sh file to create the hardware.json file for your bare-metal server.
#!/bin/bash
export UUID=$(uuidgen|tr "[:upper:]" "[:lower:]") #UUID will be generated by uuidgen
export MAC=b8:59:9f:e0:f6:8c # Change this MAC address to your worker node's PXE port MAC address; it has to match.
cat hardware.json | envsubst > hw1.json
echo wrote hw1.json - $UUID
Now run ./generate.sh to create the hw1.json file, which contains the MAC address and a unique UUID.
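Since jq was installed during the image-build step, you can pretty-print the generated file and confirm the UUID and MAC were substituted before pushing it:
jq . hw1.json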
- Log in to the tink-cli container and push hw1.json and ubuntu.tmpl into tink. You need to copy both the hw1.json and ubuntu.tmpl files into the tink-cli container before you can push them into tink.
3.1 Create hardware
# Run the CLI from within Docker
docker exec -it deploy_tink-cli_1 sh
# push the hardware information to tink database
/tmp # tink hardware push --file /tmp/hw1.json
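Depending on the version of the tink CLI, you can usually confirm the record landed in the database by looking the hardware up by MAC address; for example, from the same shell:
/tmp # tink hardware mac b8:59:9f:e0:f6:8c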
3.2 Create workflow template
# Save ubuntu.tmpl to a file /tmp/ubuntu.tmpl
# Create a template based upon the output
tink template create -n 'ubuntu' -p /tmp/ubuntu.tmpl
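The create command returns the new template's ID. You can also list the templates at any time to retrieve it, which is what the next step relies on:
tink template list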
3.3 Create workflow
# See the output from Terraform
export MAC="<MAC of your worker PXE port>"
# See tink template list
export TEMPLATE_ID="<template-uuid>"
tink workflow create -t "$TEMPLATE_ID" -r '{"device_1": "'$MAC'"}'
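tink workflow create prints the ID of the new workflow; keep hold of it, since the monitoring commands below take that ID. As a convenience you can export it (placeholder UUID, fill in your own):
# The workflow ID is printed by the create command above
export WORKFLOW_ID="<workflow-uuid>"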
Rebooting the worker node will trigger the workflow to start, and you can monitor the workflow events.
/tmp # tink workflow events f588090f-e64b-47e9-b8d0-a3eed1dc5439
+--------------------------------------+-----------------+-----------------+----------------+---------------------------------+--------------------+
| WORKER ID | TASK NAME | ACTION NAME | EXECUTION TIME | MESSAGE | ACTION STATUS |
+--------------------------------------+-----------------+-----------------+----------------+---------------------------------+--------------------+
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | disk-wipe | 0 | Started execution | ACTION_IN_PROGRESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | disk-wipe | 7 | Finished Execution Successfully | ACTION_SUCCESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | disk-partition | 0 | Started execution | ACTION_IN_PROGRESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | disk-partition | 12 | Finished Execution Successfully | ACTION_SUCCESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | install-root-fs | 0 | Started execution | ACTION_IN_PROGRESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | install-root-fs | 8 | Finished Execution Successfully | ACTION_SUCCESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | install-grub | 0 | Started execution | ACTION_IN_PROGRESS |
| 90e16ddd-a4ce-4591-bb91-3ec1eddd0e2b | os-installation | install-grub | 5 | Finished Execution Successfully | ACTION_SUCCESS |
+--------------------------------------+-----------------+-----------------+----------------+---------------------------------+--------------------+
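While the workflow is running you can poll the events instead of re-running the command by hand; plain watch works if it is available in the tink-cli container (it is part of busybox on most Alpine-based images):
watch -n 5 tink workflow events f588090f-e64b-47e9-b8d0-a3eed1dc5439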
Important note: if you need to re-run the provisioning workflow, you need to run tink workflow create again.
You now need to stop the machine from netbooting.
Go to the Packet dashboard and click "Server Actions" -> "Disable Always PXE boot". This setting can be toggled as required, for example if you need to reprovision the machine.
Now reboot the worker machine, and it should show GRUB before booting Ubuntu.
The username and password are both ubuntu, and the password must be changed on first logon. To change or remove the password, edit ./05-cloud-init/cloud-init.sh.
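If you would rather have cloud-init set its own password instead of the default, the standard chpasswd module in user-data can do it. This fragment is only illustrative and is not taken from this repo's cloud-init.sh; adapt it to however that script assembles its user-data:
#cloud-config
chpasswd:
  expire: false
  list: |
    ubuntu:<your-new-password>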
You can connect with the Packet SOS console, or over SSH to the worker from the provisioner; the worker's IP should be 192.168.1.5.
ssh ubuntu@192.168.1.5
Please direct queries to #tinkerbell on Packet's Slack channel
This work is derived from a sample by Packet and Infracloud
- Xin Wang - Initial set of fixes and adding cloud-init - tink-workflow
- Alex Ellis - Fixed networking and other bugs, user experience & README
License: Apache 2.0
Copyright: tink-workflow authors