Merge pull request #3 from sown/tds/document-things
document dns, copy pxe docs from old wiki
TimStallard authored Dec 1, 2024
2 parents ee026bc + 303edd0 commit c2bcfd1
Showing 2 changed files with 46 additions and 0 deletions.
22 changes: 22 additions & 0 deletions docs/infrastructure/dns.md
# DNS

We have three-way split DNS: an external zone that's visible to the world, a university-internal zone that's only visible inside the university network, and a SOWN-internal zone that's only visible inside SOWN.

## External and university-internal DNS

Our domains `suws.org.uk` and `sown.org.uk` both have DNS hosted by the University, with DNS managed through their Infoblox system. Certain SOWN members have access to the web interface to update the records.

The university-internal zone is also managed through Infoblox.

## SOWN Internal DNS

Our internal DNS for `sown.org.uk` is hosted on our legacy server `auth2` running BIND. This also hosts reverse DNS for `10.5.0.0/16` and `2001:630:d0:f700::/56`. The DNS zone is built hourly by `/etc/cron.hourly/updatednszones`.

Parts of the zones are built from the legacy admin system, `node_control`, which is invoked through a PHP script and writes out temporary zonefiles. This is what generates the DNS records used for our nodes.

These are combined with newer parts of the zonefile generated from Netbox, using Netbox export templates. This is what generates the DNS records for our servers and infrastructure.

The script then concatenates the zonefiles together, so the final zone is the combination of these two.
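The build step can be pictured with a minimal sketch along these lines (this is not the real `/etc/cron.hourly/updatednszones`; all paths and record contents here are illustrative assumptions):

```shell
#!/bin/sh
# Minimal sketch of the hourly zone build -- the real script differs;
# paths and record contents are stand-ins.
set -e

workdir=$(mktemp -d)

# Fragment 1: node records, as written out by node_control's PHP script
# (stand-in content).
printf 'node1\tIN A\t10.5.1.1\n' > "$workdir/nodes.zone"

# Fragment 2: server/infrastructure records, as generated by the Netbox
# export templates (stand-in content).
printf 'gw1\tIN A\t10.5.0.1\n' > "$workdir/netbox.zone"

# Concatenate the fragments; the final zone is the combination of both.
cat "$workdir/nodes.zone" "$workdir/netbox.zone" > "$workdir/sown.org.uk.zone"
```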

## Resolvers
Servers within SOWN should use our floating gateway addresses (`10.5.0.254` and `2001:630:d0:f700::254`) as DNS resolvers. These run BIND, and also hold our internal zones, AXFR'd from auth2. This means our internal DNS still works when auth2 is down.
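On the resolvers, a zone transferred from auth2 would be configured roughly like this illustrative `named.conf` fragment (the file path and primary address are placeholders, not the real configuration):

```
zone "sown.org.uk" {
    type slave;
    masters { 10.5.0.1; };  // placeholder, not auth2's real address
    file "/var/cache/bind/sown.org.uk.zone";
};
```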
24 changes: 24 additions & 0 deletions docs/infrastructure/management/pxe.md
# PXE
PXE is a standard for booting machines over the network. In SOWN, this is used to allow servers to be remotely recovered or reinstalled, avoiding the need to go to campus with a USB stick!

On boot, the NIC's PXE ROM gets an address via DHCP, then downloads and boots iPXE via TFTP. iPXE then chainloads a small script (`sown.ipxe`) which fetches a kernel and initrd over HTTP.
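The chainloaded script is a short iPXE script along these lines (a hypothetical sketch; the real sown.ipxe, its hostname, paths, and kernel arguments will differ):

```
#!ipxe
# Hypothetical sketch of sown.ipxe -- URLs are assumptions.
dhcp
kernel http://pxe.example.sown.org.uk/vmlinuz ip=dhcp
initrd http://pxe.example.sown.org.uk/initrd
boot
```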

The DHCP, TFTP and HTTP servers run on both of our GW servers. See the [ansible role](https://github.com/sown/ansible/tree/main/roles/pxe) for details. This also downloads an Ubuntu ISO and extracts the parts of the ISO needed for PXE boot.

## How?
See [our IPMI/iDRAC docs](idrac.md) for how to remotely get a console on servers.

During boot, press `<Escape><@>` repeatedly to get the machine to PXE boot.

## Building iPXE
We build our iPXE as follows:
```shell
apt install git build-essential liblzma-dev
git clone https://git.ipxe.org/ipxe.git
cd ipxe/src
echo "#define DIGEST_CMD" > config/local/general.h # enable md5sum+sha1sum
echo "#define CONSOLE_SERIAL" > config/local/console.h # enable serial console
make bin/undionly.kpxe
```

The DIGEST_CMD option isn't strictly needed any more; we used to use it to validate image checksums by hand (when booting off public mirror servers).
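For reference, DIGEST_CMD enables the `md5sum` and `sha1sum` commands in the iPXE shell, which were used on downloaded images roughly like this (an illustrative transcript; the URL and image name are assumptions):

```
iPXE> initrd http://mirror.example.org/initrd.img
iPXE> md5sum initrd.img
```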
