Build sacrificial VM image with Packer #7

Merged: 8 commits, May 19, 2022
2 changes: 2 additions & 0 deletions packer/.gitignore
@@ -0,0 +1,2 @@
gcp.key.json

45 changes: 45 additions & 0 deletions packer/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,45 @@
# Packer

We use [Packer](https://www.packer.io/) to create a [VM image on GCP](https://cloud.google.com/compute/docs/images) with the latest software and Docker installed.
The image can be used to create secure VMs.

## How it works

Packer will:

1. create a VM on Google Cloud Platform (GCP)
2. run our scripts to update software and install Docker on the VM
3. take a snapshot of the VM and store it as an image called `sacrificial-vm` on GCP
4. delete the VM

## Getting Started

### Prerequisites

What you need:

- `gcloud` installed locally
- A GCP project
- [Packer installed](https://www.packer.io/downloads) locally

### Set up Packer

1. Set up a GCP service account for Packer following [Packer - Running outside of Google Cloud](https://www.packer.io/plugins/builders/googlecompute#running-outside-of-google-cloud)

2. Move the downloaded service account key file to `./gcp.key.json`

> Note: if you want to use a different file name or location, change `account_file` in [`./main.pkr.hcl`](./main.pkr.hcl) accordingly

3. Update `project_id` in `main.pkr.hcl` to match your GCP project ID

### Build the image

Run

```bash
packer init . && packer build -force .
```

The image should now be available in your GCP project.

Note: `-force` overwrites a previously built image with the same name.
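To confirm the build succeeded, you can look the image up with `gcloud`. This is a sketch, assuming `gcloud` is authenticated against the same project; the image name comes from `image_name` in `main.pkr.hcl`, and the project ID placeholder is yours to fill in:

```shell
# Show the image Packer just created (fails with a 404-style error if absent)
gcloud compute images describe sacrificial-vm-image \
  --project "<your_GCP_project_ID>" \
  --format="value(name,creationTimestamp)"
```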
28 changes: 28 additions & 0 deletions packer/main.pkr.hcl
@@ -0,0 +1,28 @@
packer {
  required_plugins {
    googlecompute = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/googlecompute"
    }
  }
}

source "googlecompute" "ubuntu-2204" {
  project_id          = "containerssh"
  source_image_family = "ubuntu-pro-2204-lts"
  ssh_username        = "root"
  zone                = "europe-west3-c"
  account_file        = "./gcp.key.json"
  image_name          = "sacrificial-vm-image"
}

build {
  name    = "ubuntu-2204-with-docker"
  sources = ["source.googlecompute.ubuntu-2204"]

  provisioner "shell" {
    scripts = ["./scripts/update.sh", "./scripts/install_docker.sh"]
  }
}

28 changes: 28 additions & 0 deletions packer/scripts/install_docker.sh
@@ -0,0 +1,28 @@
#!/bin/bash

set -euxo pipefail

[ -f ./util_fn ] && source ./util_fn

export DEBIAN_FRONTEND=noninteractive

apt-get update
apt-get upgrade -y
apt-get install -y \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

# add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# set up a stable repo
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# install docker engine
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

41 changes: 41 additions & 0 deletions packer/scripts/update.sh
@@ -0,0 +1,41 @@
#!/bin/bash -eux
# This script is adapted from
# https://github.com/chef/bento/blob/main/packer_templates/ubuntu/scripts/update.sh

[ -f ./util_fn ] && source ./util_fn

export DEBIAN_FRONTEND=noninteractive

echo "disable release-upgrades"
sed -i.bak 's/^Prompt=.*$/Prompt=never/' /etc/update-manager/release-upgrades;

echo "disable systemd apt timers/services"
systemctl stop apt-daily.timer;
systemctl stop apt-daily-upgrade.timer;
systemctl disable apt-daily.timer;
systemctl disable apt-daily-upgrade.timer;
systemctl mask apt-daily.service;
systemctl mask apt-daily-upgrade.service;
systemctl daemon-reload;

# Disable periodic activities of apt to be safe
cat <<EOF >/etc/apt/apt.conf.d/10periodic;
APT::Periodic::Enable "0";
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
EOF

echo "remove the unattended-upgrades and ubuntu-release-upgrader-core packages"
rm -rf /var/log/unattended-upgrades;
apt-get -y purge unattended-upgrades ubuntu-release-upgrader-core;

echo "update the package list"
apt-get -y update;

echo "upgrade all installed packages incl. kernel and kernel headers"
apt-get -y dist-upgrade -o Dpkg::Options::="--force-confnew";

reboot

11 changes: 11 additions & 0 deletions packer/scripts/util_fn
@@ -0,0 +1,11 @@
#!/bin/bash

# This apt-get wrapper waits until the apt lists lock is released
# https://github.com/geerlingguy/packer-boxes/issues/7#issuecomment-425641793
function apt-get() {
  while fuser -s /var/lib/apt/lists/lock; do
    echo 'apt-get is waiting for the lock release ...'
    sleep 1
  done
  /usr/bin/apt-get "$@"
}
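The wait-then-run pattern used by the wrapper can be sketched in isolation with a plain sentinel file instead of `fuser`; the function and file names below are illustrative, not part of this repo:

```shell
# Run a command only once the given lock file is gone,
# polling just like the apt-get wrapper above.
locked_run() {
  local lock_file=$1
  shift
  while [ -e "$lock_file" ]; do
    echo 'waiting for the lock release ...'
    sleep 1
  done
  "$@"
}

lock=$(mktemp -u)              # a path with no file behind it, so the lock is free
locked_run "$lock" echo "ran"  # prints "ran" immediately
```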
40 changes: 40 additions & 0 deletions terraform/Deploy ContainerSSH.md
@@ -0,0 +1,40 @@
# Deploy ContainerSSH

## Goal

Implement steps 1-3 of https://containerssh.io/guides/honeypot/

## Components and Requirements

> Extracted from https://containerssh.io/guides/honeypot/

### Gateway VM x1

- [ ] have sufficient disk space to hold audit logs and containers
- [ ] firewall rules
- [ ] Port 22 should be open to the Internet.
- [ ] Ports 9100 and 9101 should be open from your Prometheus instance. These will be used by the Prometheus node exporter and the ContainerSSH metrics server respectively.
- [ ] Outbound rules to your S3-compatible object storage.

### Sacrificial VM x1

- [ ] Use a prebuilt VM image with Docker installed to keep the host up to date.
- [ ] Use tools like [Packer](https://www.packer.io/) to keep the VM image updated
- [ ] run on its own dedicated physical hardware
- [ ] have sufficient disk space to hold audit logs and containers
- [ ] Firewall rules
- [ ] Only allows connection with the gateway host
- [ ] Only allow inbound connections on TCP port 2376 from the gateway host
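The inbound rule for the sacrificial VM could be sketched with `gcloud` as below; the rule name, network name, and gateway IP are illustrative placeholders, not values from this repo:

```shell
# Allow Docker's TLS port (2376) only from the gateway host
gcloud compute firewall-rules create sacrificial-allow-docker \
  --network="<sacrificial_vpc>" \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:2376 \
  --source-ranges="<gateway_host_ip>/32"
```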

### S3-compatible object storage x1

This store will hold the uploaded audit logs. Maybe set up MinIO on GCP?

- [ ] decide which S3-compatible object store to use

### Prometheus x1

For monitoring metrics from the gateway (node exporter and ContainerSSH metrics server).

- [ ] get familiar with Prometheus
18 changes: 14 additions & 4 deletions terraform/README.md
@@ -7,21 +7,31 @@ Install it as follows:

2. Install [Terraform CLI](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/gcp-get-started)

3. Create and download a GCP _service account key_ (in JSON) following [Terraform - Set Up GCP](https://learn.hashicorp.com/tutorials/terraform/google-cloud-platform-build?in=terraform/gcp-get-started).\
Terraform will use it to manage your GCP resources. Move the key file to the current folder as `./gcp-key.json`

4. Update `terraform/terraform.tfvars` file with the following content

```bash
project          = "<your_GCP_project_ID>"
credentials_file = "gcp-key.json"
```

5. Verify that Terraform is set up successfully.

```bash
cd terraform
terraform init # initialize the working directory
terraform plan # preview the changes
```

You should not see any error message in the output.

## Troubleshooting

1. `terraform apply` fails with `Error creating Network: googleapi: Error 403: Required 'compute.networks.create' permission for '<project-id>', forbidden`

   Possible causes:

   1. The `project` value in `terraform.tfvars` might be wrong. Check step 4 above.
   2. The service account might be missing the _Project Editor_ role. Check step 3.
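If the service account lacks the _Project Editor_ role, it can be granted with `gcloud`. A sketch, assuming you are authenticated as a project owner; the service account email below is a placeholder:

```shell
gcloud projects add-iam-policy-binding "<your_GCP_project_ID>" \
  --member="serviceAccount:<terraform_sa>@<your_GCP_project_ID>.iam.gserviceaccount.com" \
  --role="roles/editor"
```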
20 changes: 18 additions & 2 deletions terraform/main.tf
@@ -11,6 +11,22 @@ resource "google_compute_network" "main" {
auto_create_subnetworks = false
}

resource "google_compute_firewall" "benchmark_vpc_rules" {
  name    = "benchmark-vpc-rules"
  network = google_compute_network.main.self_link

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_subnetwork" "gateway_subnet" {
  name          = "gateway-subnet"
  ip_cidr_range = "10.0.0.0/24"
@@ -43,12 +59,12 @@ resource "google_compute_instance" "gateway_vm" {
}

resource "google_compute_instance" "sacrificial_vm" {
  name         = "sacrificial-vm"
  machine_type = "e2-micro"

  boot_disk {
    initialize_params {
      image = "sacrificial-vm-image"
    }
  }
