
capacity and size settings of volume created from external source seem not to work #52

Closed
minhuw opened this issue Apr 8, 2023 · 3 comments


@minhuw

minhuw commented Apr 8, 2023

I tried to build an image with some pre-installed packages and a slightly larger disk, using the following template:

packer {
  required_plugins {
    sshkey = {
      version = ">= 1.0.1"
      source  = "github.com/ivoronin/sshkey"
    }
    libvirt = {
      version = ">= 0.4.4"
      source  = "github.com/thomasklein94/libvirt"
    }
  }
}

data "sshkey" "install" {
}

source "libvirt" "example" {
  libvirt_uri = "qemu:///system"

  vcpu   = 4
  memory = 8192

  network_interface {
    type  = "managed"
    alias = "communicator"
  }

  # https://developer.hashicorp.com/packer/plugins/builders/libvirt#communicators-and-network-interfaces
  communicator {
    communicator         = "ssh"
    ssh_username         = "ubuntu"
    ssh_private_key_file = data.sshkey.install.private_key_path
  }
  network_address_source = "lease"

  volume {
    alias = "artifact"

    source {
      type = "external"
      # With newer releases, the URL and the checksum can change.
      urls     = ["https://cloud-images.ubuntu.com/releases/22.04/release-20230302/ubuntu-22.04-server-cloudimg-amd64-disk-kvm.img"]
      checksum = "3b11d66d8211a8c48ed9a727b9a74180ac11cd8118d4f7f25fc7d1e4a148eddc"
    }

    name       = "jammy-base"
    pool       = "default"
    capacity   = "32G"
    size       = "32G"
    target_dev = "sda"
    bus        = "sata"
    format     = "qcow2"
  }

  volume {
    source {
      type = "cloud-init"
      user_data = format("#cloud-config\n%s", jsonencode({
        ssh_authorized_keys = [
            data.sshkey.install.public_key,
        ]
      }))
    }

    pool       = "default"
    target_dev = "sdb"
    bus        = "sata"
  }
  shutdown_mode = "acpi"
}

build {
  sources = ["source.libvirt.example"]
  provisioner "shell" {
    inline = [
      "echo The domain has started and became accessible",
      "echo The domain has the following addresses",
      "ip -br a",
      "echo if you want to connect via SSH use the following key: ${data.sshkey.install.private_key_path}",
    ]
  }
  provisioner "breakpoint" {
    note = "You can examine the created domain with virt-manager, virsh or via SSH"
  }
}

But when I log in to the domain at the breakpoint and check the disk space, the root filesystem is still 2.0G; the capacity and size settings appear to be ignored.

ubuntu@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           797M  556K  796M   1% /run
/dev/sda1       2.0G  1.4G  597M  71% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           797M  4.0K  797M   1% /run/user/1000

Do I need any other settings to increase the disk size of the generated image?

@thomasklein94
Owner

thomasklein94 commented Apr 8, 2023

Both the capacity and size arguments affect the volume, but not the partition table of that volume or the filesystems on those partitions. You can check the size of the virtual disk with lsblk.
What you need to do is resize the partition with growpart and then grow the filesystem with resize2fs. You can do that in a provisioning step or with cloud-init.
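
For example, a provisioning step along these lines could work (a sketch only, assuming the root partition is /dev/sda1 with an ext4 filesystem and that growpart from cloud-guest-utils is available in the image):

  # Sketch: grow partition 1 of /dev/sda to fill the disk, then grow the
  # ext4 filesystem on it to match. Assumes /dev/sda1 is the ext4 root
  # partition and that growpart (cloud-guest-utils) is installed in the image.
  provisioner "shell" {
    inline = [
      "sudo growpart /dev/sda 1",
      "sudo resize2fs /dev/sda1",
    ]
  }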

@minhuw
Author

minhuw commented Apr 8, 2023

Sorry that I need to re-open the issue, but after adding the required cloud-init configuration (unless I made a mistake somewhere), I still could not get a larger disk.

The updated volume is:

  volume {
    source {
      type = "cloud-init"
      user_data = format("#cloud-config\n%s", jsonencode({
        resize_rootfs = true
        growpart = {
          mode                     = "auto"
          devices                  = ["/"]
          ignore_growroot_disabled = false
        }
        ssh_authorized_keys = [
            data.sshkey.install.public_key,
        ]
      }))
    }

    pool       = "default"
    target_dev = "sdb"
    bus        = "sata"
  }

The output of lsblk is

NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0  63.3M  1 loop /snap/core20/1822
loop1     7:1    0  49.8M  1 loop /snap/snapd/18357
loop2     7:2    0 111.9M  1 loop /snap/lxd/24322
sda       8:0    0   2.2G  0 disk 
├─sda1    8:1    0   2.1G  0 part /
├─sda14   8:14   0     4M  0 part 
└─sda15   8:15   0   106M  0 part /boot/efi
sr0      11:0    1   364K  0 rom

Are there any other things I need to check?

@thomasklein94
Owner

You are absolutely right, I'm sorry.

I've pushed a commit fixing your issue and started a release pipeline for v0.4.5. It will be available in a few minutes.
Once you upgrade to this version, Packer will resize the volume after the upload if you specify a capacity for the volume.
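
For example (a sketch based on the template above, not an authoritative configuration), after bumping the libvirt plugin requirement to >= 0.4.5, the artifact volume should only need capacity for the uploaded image to end up at 32G:

  volume {
    alias = "artifact"

    source {
      type     = "external"
      urls     = ["https://cloud-images.ubuntu.com/releases/22.04/release-20230302/ubuntu-22.04-server-cloudimg-amd64-disk-kvm.img"]
      checksum = "3b11d66d8211a8c48ed9a727b9a74180ac11cd8118d4f7f25fc7d1e4a148eddc"
    }

    name       = "jammy-base"
    pool       = "default"
    capacity   = "32G"   # with v0.4.5 the volume is resized to this after upload
    target_dev = "sda"
    bus        = "sata"
    format     = "qcow2"
  }

The in-guest growpart / resize_rootfs handling (as in the updated cloud-init volume above) is still what grows the partition and filesystem onto the larger disk.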
