Proxmox VE #268

rogerxu opened this issue May 14, 2022 · 12 comments

rogerxu commented May 14, 2022

Install

Upgrade

https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
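The wiki's procedure condenses to the following sketch; run the built-in checker first and read the full guide before upgrading:

```sh
# preflight checklist shipped with PVE 7
pve7to8 --full

# point the Debian and PVE repositories at bookworm, then upgrade
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update && apt dist-upgrade
```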

rogerxu commented May 14, 2022

Configuration

xiangfeidexiaohuo/ProxmoxVE-7.0-DIY: Proxmox VE 7.x tutorials on switching mirrors, disabling the subscription notice, passthrough, and more (github.com)

Proxmox VE 6/7: configuring sources and disabling the subscription reminder - inSilen Studio

ivanhao/pvetools: proxmox ve tools script (github.com)

Mirror

/etc/apt/sources.list

deb https://mirrors.ustc.edu.cn/debian bullseye main contrib

deb https://mirrors.ustc.edu.cn/debian bullseye-updates main contrib

deb https://mirrors.ustc.edu.cn/debian-security bullseye-security main contrib

deb https://mirrors.ustc.edu.cn/proxmox/debian/pve bullseye pve-no-subscription
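The enterprise repository (which requires a subscription) should be disabled as well, e.g. by commenting it out:

```sh
# /etc/apt/sources.list.d/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise
```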

Subscription Alert

/usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js

checked_command: function(orig_cmd) {
    orig_cmd(); // skip the subscription check and run the command directly
},

$ systemctl restart pveproxy

rogerxu commented May 14, 2022

Storage

Storage - Proxmox VE

Proxmox VE Storage

LVM

Choosing virtual machine disks in Proxmox VE (buduanwang.vip)

CIFS

CIFS Backend - Proxmox VE Storage

Storage pool type: cifs

What is CIFS (Common Internet File System)? (techtarget.com)

  • server: server
  • share: temp
  • username: share
  • password: share

/etc/pve/storage.cfg

cifs: nas-temp
        path /mnt/pve/nas-temp
        server 192.168.31.104
        share temp
        content images
        options gid=100,file_mode=0664,dir_mode=0775
        prune-backups keep-all=1
        username nas
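The same storage can also be created from the CLI with pvesm; a sketch mirroring the config above (the flags map 1:1 to storage.cfg keys):

```sh
pvesm add cifs nas-temp --server 192.168.31.104 --share temp \
    --username nas --password 'share' --content images \
    --options gid=100,file_mode=0664,dir_mode=0775
```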

Restart PVE and check the mounted path /mnt/pve/nas-temp

$ ll /mnt/pve/nas-temp
drwxrwxr-x 2 root users  0 Sep 26 20:26 dir
-rw-rw-r-- 1 root users  6 Oct  3 10:40 file.txt

rogerxu commented May 14, 2022

Linux Container

Linux Container - Proxmox VE

Proxmox Container Toolkit

Container Template Mirror

Changing the PVE container (CT) template mirror (buduanwang.vip)

Proxmox - USTC Mirror Help

sed -i.bak 's|http://download.proxmox.com|https://mirrors.ustc.edu.cn/proxmox|g' /usr/share/perl5/PVE/APLInfo.pm

TurnKey Linux - USTC Mirror Help

sed -i.bak 's|https://releases.turnkeylinux.org|https://mirrors.ustc.edu.cn/turnkeylinux/metadata|g' /usr/share/perl5/PVE/APLInfo.pm
$ systemctl restart pvedaemon
$ pveam update
$ pveam available
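A template from the refreshed index can then be downloaded into a storage; `local` and the placeholder name are examples:

```sh
$ pveam available --section system     # list downloadable system templates
$ pveam download local <template-name>
```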

The downloaded template index (aplinfo.dat) is cached under /var/lib/pve-manager/apl-info/, in files named after each source host (e.g. releases.turnkeylinux.org below).

mkdir -p /etc/systemd/system/pve-daily-update.service.d/
touch /etc/systemd/system/pve-daily-update.service.d/update-turnkey-releases.conf

/etc/systemd/system/pve-daily-update.service.d/update-turnkey-releases.conf

```ini
[Service]
ExecStopPost=/bin/sed -i 's|http://mirror.turnkeylinux.org|https://mirrors.ustc.edu.cn|' /var/lib/pve-manager/apl-info/releases.turnkeylinux.org
```

$ systemctl daemon-reload
$ systemctl start pve-daily-update.service

LXC - Index of /images (canonical.com)

Network

/etc/systemd/network/eth0.network

DHCP IPv4

[Match]
Name = eth0

[Network]
Description = Interface eth0 autoconfigured by PVE
DHCP = v4
IPv6AcceptRA = true

Static IPv4

[Match]
Name = eth0

[Network]
Description = Interface eth0 autoconfigured by PVE
Address = 192.168.31.101/24
Gateway = 192.168.31.1
DHCP = no
IPv6AcceptRA = true

mDNS

$ cp /etc/systemd/network/eth0.network /etc/systemd/network/10-eth0-mdns.network 
$ echo 'MulticastDNS = true' >> /etc/systemd/network/10-eth0-mdns.network
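Restart networkd inside the container so the new .network file takes effect (standard systemd-networkd workflow):

```sh
lxc$ systemctl restart systemd-networkd
lxc$ networkctl status eth0
```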

Bind Mount Points

Local devices or local directories can be mounted directly using bind mounts. This gives access to local resources inside a container with practically zero overhead. Bind mounts can be used as an easy way to share data between containers.

Bind mounts allow you to access arbitrary directories from your Proxmox VE host inside a container. Some potential use cases are:

  • Accessing your home directory in the guest
  • Accessing a USB device directory in the guest
  • Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you cannot make snapshots or deal with quotas from inside the container.

Unprivileged LXC containers - Proxmox VE

With unprivileged containers you might run into permission problems caused by the user mapping and cannot use ACLs.

$ pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
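The same mount point can also be written directly into the container config; this is equivalent to the pct command above:

```sh
# /etc/pve/lxc/100.conf
mp0: /mnt/bindmounts/shared,mp=/shared
```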

However, you will soon realise that every file and directory is mapped to "nobody" (uid 65534).

All UIDs (user IDs) and GIDs (group IDs) are mapped to a different number range than on the host machine: root (uid 0) becomes uid 100000, uid 1 becomes 100001, and so on.
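A specific host uid/gid can be passed through unchanged with an idmap; a sketch for uid/gid 1000, following the scheme from the Proxmox wiki (adjust the ranges to your setup, and allow the mapping in /etc/subuid and /etc/subgid with `root:1000:1`):

```sh
# /etc/pve/lxc/100.conf — shift everything by 100000 except uid/gid 1000,
# which maps straight through to the host
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```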

FS Trim

$ pct fstrim 100

Docker

Minimal Docker deployment in a PVE LXC container, plus a roundup of all LXC issues (leiyanhui.com)

Unprivileged LXC

Running docker inside an unprivileged LXC container on Proxmox - du.nkel.dev

Privileged LXC

/etc/pve/lxc/100.conf

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

FS Share

Proxmox: Mounting a remote share in LXC - Unix Samurai

rogerxu commented May 14, 2022

QEMU VM

QEMU Guest Agent

Qemu-guest-agent - Proxmox VE

$ sudo apt install qemu-guest-agent
$ sudo systemctl start qemu-guest-agent
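The agent must also be enabled on the VM (Options > QEMU Guest Agent) or via the CLI; VMID 100 is an example:

```sh
pve$ qm set 100 --agent enabled=1
```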

Display

VirGL GPU (virtio-gl) is a virtual 3D GPU for use inside VMs that can offload workloads to the host GPU without requiring special (expensive) models and drivers, and without binding the host GPU completely, allowing reuse between multiple guests and/or the host.

VirGL support needs some extra libraries that aren’t installed by default, since they are relatively big and not available as open source for all GPU models/vendors. For most setups:

$ apt install libgl1 libegl1
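Then switch the VM's display type to VirGL, either in Hardware > Display or via the CLI; VMID 100 is an example:

```sh
pve$ qm set 100 --vga virtio-gl
```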

rogerxu commented May 14, 2022

Hardware Passthrough

Qemu/KVM Virtual Machines (proxmox.com)

How to configure PCI(e) passthrough on Proxmox VE | Matthew DePorter

Enabling hardware passthrough on PVE (buduanwang.vip)

Proxmox VE: enabling hardware passthrough - ZIMRI`Blog (zimrilink.com)

GPU

PVE 8.1 all-in-one installation tutorial for beginners - Xiaochen's tinkering diary (geekxw.top)

Check devices in PVE host

pve$ ls -l /dev/dri
crw-rw---- 1 root video  226,   0 Oct  2 17:48 card0
crw-rw---- 1 root render 226, 128 Oct  2 17:48 renderD128

Query the ids of the render group on the host system

pve$ getent group render | cut -d: -f3
103

LXC

/etc/pve/lxc/100.conf

# video
# lxc.cgroup2.devices.allow: c 226:0 rwm
# lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file

# render
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

Check devices in LXC guest

lxc$ ls -l /dev/dri
total 0
crw-rw---- 1 root kvm   226, 128 Oct  2 17:48 renderD128

Check the VA-API codecs in LXC guest

lxc$ apt install vainfo
lxc$ vainfo
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Mesa Gallium driver 22.3.6 for AMD Radeon Vega 3 Graphics (raven2, LLVM 15.0.6, DRM 3.57, 6.8.12-2-pve)
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc

Verify GPU usage

lxc$ apt install radeontop
lxc$ radeontop

Jellyfin in Docker

AMD GPU | Jellyfin

Query the ids of the kvm (render device) group on the LXC guest

lxc$ getent group kvm | cut -d: -f3
103
docker-compose.yml

name: jellyfin
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    user: 1000:1000
    group_add:
      - 103 # Change this to match your "render" host group id and remove this comment
    network_mode: host
    volumes:
      - /home/nas/.config/jellyfin:/config
      - /home/nas/.cache/jellyfin:/cache
      - /mnt/media:/media
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    restart: unless-stopped
    environment:
      - TZ=Asia/Shanghai
      - JELLYFIN_PublishedServerUrl=http://example.com
    # Optional - may be necessary for docker healthcheck to pass if running in host network mode
    extra_hosts:
      - 'host.docker.internal:host-gateway'

Check devices in Jellyfin container

root@jellyfin:/$ ls -l /dev/dri
total 0
crw-rw---- 1 root 103 226, 128 Oct  2 14:52 renderD128

Check the VA-API codecs in Jellyfin container

root@jellyfin:/$ /usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128

Trying display: drm
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_22
amdgpu: os_same_file_description couldn't determine if two DRM fds reference the same file description.
If they do, bad things may happen!
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.22 (libva 2.22.0)
vainfo: Driver version: Mesa Gallium driver 24.0.9 for AMD Radeon Vega 3 Graphics (radeonsi, raven2, LLVM 16.0.6, DRM 3.57, 6.8.12-2-pve)
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc

Install Chinese fonts in Jellyfin container

Fixing boxes (missing CJK glyphs) in Jellyfin - Chancel's blog

root@jellyfin:/$ apt update
root@jellyfin:/$ apt install fonts-noto-cjk-extra
root@jellyfin:/$ ls -l /usr/share/fonts/opentype/
total 4
drwxr-xr-x 2 root root 4096 Oct  3 18:12 noto

IOMMU

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

For Intel CPUs on kernels older than 6.8, append intel_iommu=on here; AMD IOMMU is enabled by default. Then regenerate the config:

$ update-grub

Kernel Modules

You have to make sure the following modules are loaded.

/etc/modules

 vfio
 vfio_iommu_type1
 vfio_pci
 vfio_virqfd

List the PCI devices to find the IDs to pass through:

$ lspci -nn
05:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Raven Ridge [Radeon Vega Series / Radeon Vega Mobile Series] [1002:15dd] (rev cb)
05:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Raven/Raven2/Fenghuang HDMI/DP Audio Controller [1002:15de]
05:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor [1022:15df]
05:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Raven2 USB 3.1 [1022:15e5]
05:00.6 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller [1022:15e3]

/etc/modprobe.d/vfio.conf

options vfio-pci ids=1002:15dd,1002:15de

/etc/modprobe.d/kvm.conf

options kvm ignore_msrs=1

Update the initramfs

$ update-initramfs -u -k all
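On hosts that boot via proxmox-boot-tool (e.g. ZFS on UEFI), sync the kernels onto the ESPs as well:

```sh
$ proxmox-boot-tool refresh
```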

Reboot

$ dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
AMD-Vi: Interrupt remapping enabled

rogerxu commented May 14, 2022

Network

IP Gateway

PVE Node > System > Network

  • Name: vmbr0
  • IPv4/CIDR: 192.168.31.53/24
  • Gateway (IPv4): 192.168.31.1
  • Autostart: true
  • Bridge ports: enp1s0

/etc/network/interfaces

auto vmbr0
iface vmbr0 inet static
    address 192.168.31.53/24
    gateway 192.168.31.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
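PVE ships ifupdown2, so the bridge configuration can be applied without rebooting:

```sh
$ ifreload -a
```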

DNS

PVE Node > System > DNS

  • Search domain: local
  • DNS server 1: 192.168.31.101
  • DNS server 2: 192.168.31.2
  • DNS server 3: 192.168.31.1

/etc/resolv.conf

search local
nameserver 192.168.31.101
nameserver 192.168.31.2
nameserver 192.168.31.1

IPv6

Configuring a public IPv6 address on a Proxmox bridge via SLAAC (haiyun.me)

Connecting to Proxmox VE over IPv6 – Ferrets' WordPress

Dual-stacking Proxmox Web UI (pveproxy) - Simon Mott

/etc/sysctl.conf

# SLAAC IPv6
net.ipv6.conf.all.accept_ra=2
net.ipv6.conf.default.accept_ra=2
net.ipv6.conf.vmbr0.accept_ra=2
net.ipv6.conf.all.autoconf=1
net.ipv6.conf.default.autoconf=1
net.ipv6.conf.vmbr0.autoconf=1
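Apply the settings without a reboot (plain sysctl usage):

```sh
$ sysctl -p /etc/sysctl.conf
```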
Verify:

$ cat /proc/sys/net/ipv6/conf/vmbr0/accept_ra
2
$ cat /proc/sys/net/ipv6/conf/vmbr0/autoconf
1
$ cat /proc/sys/net/ipv6/conf/vmbr0/forwarding
1

rogerxu commented May 16, 2022

Bootloader

Host Bootloader - Proxmox VE

$ proxmox-boot-tool status

Check EFI boot status

$ efibootmgr -v

GRUB

/etc/default/grub

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

/etc/grub.d/

$ ls -1 /etc/grub.d/
000_proxmox_boot_header
00_header
05_debian_theme
10_linux
20_linux_xen
20_memtest86+
30_os-prober
30_uefi-firmware
40_custom
41_custom

/etc/grub.d/40_custom

#!/bin/sh
exec tail -n +3 $0

menuentry 'Microsoft Windows' --class windows --class os --id 'win' {
    insmod part_gpt
    insmod fat
    insmod chain
    
    # search --label --set root --no-floppy EFI
    # search --fs-uuid --set root --no-floppy DEC3-2445
    search --file --set root --no-floppy /EFI/Microsoft/Boot/bootmgfw.efi

    # set root=(hd0,1)

    echo 'Start Windows...'
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}

Generate grub config

$ update-grub

/usr/sbin/update-grub

#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"

/boot/grub/grub.cfg

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os --id 'gnulinux-simple-xxx' {

}
submenu 'Advanced options for Proxmox VE GNU/Linux' --id 'gnulinux-advanced-xxx' {

}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry 'Memory test (memtest86+)' {

}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'System setup' $menuentry_id_option 'uefi-firmware' {
    fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

rogerxu commented May 16, 2022

Sensors

$ apt install lm-sensors
$ sensors-detect
$ sensors
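The Nodes.pm patch below embeds the JSON form of this output; an abridged sketch of its shape, using the k10temp keys the renderer reads later (the temperature value is illustrative):

```sh
$ sensors -j
{
   "k10temp-pci-00c3": {
      "Tctl": {
         "temp1_input": 45.000
      }
   }
}
```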

/usr/share/perl5/PVE/API2/Nodes.pm

$res->{pveversion} = PVE::pvecfg::package() . "/" .
    PVE::pvecfg::version_text();

$res->{thermalstate} = `sensors -j`;

$res->{thermal_hdd} = `hddtemp /dev/sd?`;

my $dinfo = df('/', 1); # output is bytes

/usr/share/pve-manager/js/pvemanagerlib.js

Ext.define('PVE.node.StatusView', {
    extend: 'PVE.panel.StatusView',
    alias: 'widget.pveNodeStatus',
  
    height: 400,
    bodyPadding: '20 15 20 15',
}
{
    title: gettext('PVE Manager Version'),
    textField: 'pveversion',
    value: '',
},
{
    itemId: 'thermal',
    colspan: 2,
    printBar: false,
    title: gettext('Thermal'),
    textField: 'thermalstate',
    renderer: function(value) {
        const obj = JSON.parse(value);
        const cpu = obj['k10temp-pci-00c3']['Tctl']['temp1_input'];
        const ssd1 = obj['nvme-pci-0100']['Sensor 2']['temp3_input'];
        const ssd2 = obj['nvme-pci-0200']['Composite']['temp1_input'];

        return `CPU: ${cpu} ℃ || SSD1: ${ssd1} ℃ | SSD2: ${ssd2} ℃`;
    },
},
An alternative renderer for an Intel CPU (coretemp):

{
    itemId: 'thermal',
    colspan: 2,
    printBar: false,
    title: gettext('Thermal'),
    textField: 'thermalstate',
    renderer: function(value) {
        const obj = JSON.parse(value);
        // 'package' is a reserved word in strict-mode JS, so name it pkg
        const pkg = obj['coretemp-isa-0000']['Package id 0']['temp1_input'];
        const core0 = obj['coretemp-isa-0000']['Core 0']['temp2_input'];
        const core1 = obj['coretemp-isa-0000']['Core 1']['temp3_input'];

        return `CPU Package: ${pkg} ℃ || Core 0: ${core0} ℃ | Core 1: ${core1} ℃`;
    },
},
$ systemctl restart pveproxy

rogerxu commented Jun 10, 2022

LVM

Chapter 7: Using RAID and LVM disk array technologies | Linuxprobe (linuxprobe.com)

Complete Beginner's Guide to LVM in Linux [With Hands-on] (linuxhandbook.com)

PVE's local and local-lvm storage, and deleting them (buduanwang.vip)

All-in-one mini PC reinstall notes: Proxmox VE 6.3 installation and configuration | D2O (d2okkk.net)

  • pve-root as the root filesystem
  • pve-swap as swap space
  • pve-data as storage for virtual disk images

Within LVM, a thin pool named data is created:

$ lvs
  LV                        VG  Attr       LSize  Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                      pve twi-aotz-- 59.66g                    27.26  2.40                            
  root                      pve -wi-ao---- 27.75g                                                           
  snap_vm-100-disk-0_docker pve Vri---tz-k 32.00g data vm-100-disk-0                                        
  swap                      pve -wi-ao----  8.00g                                                           
  vm-100-disk-0             pve Vwi-aotz-- 32.00g data               40.88                                  
  vm-100-state-docker       pve Vwi-a-tz-- <4.49g data               27.20                                  
  vm-101-disk-0             pve Vwi-aotz--  8.00g data               17.46                  

Storage configuration /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

List LVM vg (volume group)

$ pvesm scan lvm
pve

List LVM thin pool for a vg (volume group)

$ pvesm scan lvmthin pve
data

PV

Create LVM partition

$ gdisk /dev/sda

Create PV

$ pvcreate /dev/sda2

List

$ pvs

VG

List

$ vgs

Create VG

$ vgcreate vg /dev/sda2

Add PV to VG

$ vgextend vg /dev/sda2

LV

List

$ lvs

Display

$ lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/vm-113-disk-0
  LV Name                vm-113-disk-0
  VG Name                pve
  LV UUID                lOyUiE-MPK0-lRxz-62me-mBBK-frmY-KstAdx
  LV Write Access        read/write
  LV Creation host, time pve, 2022-05-22 18:42:16 +0800
  LV Pool name           data
  LV Thin origin name    base-112-disk-0
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Mapped size            48.76%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:29

Create LVM Thin Pool

$ lvcreate -n pool --type thin-pool -l 100%FREE vg
/dev/vg/pool

Create Thin LV in Thin Pool

$ lvcreate -n lvol1 -V 10G --thin-pool pool vg
$ lvcreate -n lvol1 -V 10G vg/pool
/dev/vg/lvol1

Expand Disk Size

Resize disks - Proxmox VE

LXC

Extend LV size

root@host$ lvextend -r -v -L +1G pve/vm-100-disk-1
root@host$ qm rescan

Restore LXC with the specified rootfs disk size

root@host$ pct restore 100 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10

VM

Extend LV size of the VM disk in the host.

root@host$ lvextend -v -L +1G pve/vm-104-disk-1
root@host$ qm rescan
rescan volumes...
VM 104 (scsi0): size of disk 'local-lvm:vm-104-disk-1' updated from 15G to 16G

Extend the partition table in the guest VM

user@guest$ sudo parted /dev/sda
(parted) print free
(parted) resizepart 2 100%

Check and fix file system error

user@guest$ sudo e2fsck -f /dev/sda2

Resize to file system

user@guest$ sudo resize2fs -p /dev/sda2

Shrink LV size

LXC

SysOps | How to reduce/shrink an LVM volume in Linux (logical volume resizing)

Trim

root@host$ pct fstrim 100

Check file system error

root@host$ e2fsck -f /dev/pve/vm-100-disk-0

Resize the file system to shrink it; the file system must be unmounted.

root@host$ resize2fs -p /dev/pve/vm-100-disk-0 2G

Reduce LV size

root@host$ lvreduce -r -v -L -1G pve/vm-100-disk-0
root@host$ qm rescan

Restore LXC with the specified rootfs disk size

root@host$ pct restore 100 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10

Change rootfs size

/etc/pve/lxc/100.conf

rootfs: local-lvm:vm-100-disk-0,size=2G

VM

Trim

How to: Shrink/Reclaim free virtual disk space from Virtual Machines on Proxmox VE (PVE) (Windows/Linux/Debian/Ubuntu/Kali Linux/RHEL/CentOS/Fedora etc.) > Blog-D without Nonsense (dannyda.com)

user@guest$ sudo fstrim -av

Unmount partition in the VM

user@guest$ sudo umount /dev/sdb1

Check and fix file system error

user@guest$ sudo e2fsck -f /dev/sdb1

Resize the file system to shrink it; the file system must be unmounted.

user@guest$ sudo resize2fs -p /dev/sdb1 16G

Reduce the partition size in the guest VM with parted.

user@guest$ sudo parted /dev/sdb
(parted) resizepart
Partition number: 1
End? [18.0GB]? 16G
(parted) print free

The unused space shown should be larger than the amount you want to reduce.

Check and fix file system error

user@guest$ sudo e2fsck -f /dev/sdb1

Resize the file system

user@guest$ sudo resize2fs -p /dev/sdb1

Reduce VM disk size in the host

root@host$ qemu-img info /dev/pve/vm-104-disk-1
root@host$ qemu-img resize --shrink /dev/pve/vm-104-disk-1 -1G

Shrink LV size of the VM disk in the host.

root@host$ lvreduce -v -L -1G pve/vm-104-disk-1
root@host$ qm rescan
rescan volumes...
VM 104 (scsi0): size of disk 'local-lvm:vm-104-disk-1' updated from 16G to 15G

Fix the partition table in the guest VM

user@guest$ sudo gdisk /dev/sdb
Command (? for help): v    # verify disk
Command (? for help): x    # enter expert mode
Command (? for help): e    # relocate backup data structures to the end of the disk
Command (? for help): w    # write the table to disk and exit
Command (? for help): q    # quit without saving (if you did not write)

user@guest$ sudo parted
(parted) print free

rogerxu commented Jul 17, 2022

OpenWrt

Firmware

OpenWrt Downloads

$ wget https://downloads.openwrt.org/releases/21.02.3/targets/x86/64/openwrt-21.02.3-x86-64-rootfs.tar.gz

KoolCenter firmware download server

$ wget https://fw.koolcenter.com/LEDE_X64_fw867/LXC%20CT%E6%A8%A1%E6%9D%BF/openwrt-koolshare-router-v3.2-r19470-2f7d60f0e5-x86-64-generic-rootfs.tar.gz

OpenWrt firmware downloads and online custom builds (supes.top)

$ wget https://op.supes.top/releases/targets/x86/64/openwrt-08.02.2022-x86-64-generic-rootfs.tar.gz

LXC

OpenWrt Wiki - OpenWrt in LXC containers

Dual-NIC PVE 8.0.3: running OpenWrt in LXC as the main router, the most reliable and simple tutorial (leiyanhui.com)

Proxmox VE 7.0: building an OpenWrt soft router in LXC - kangzeru's blog (CSDN)

Proxmox VE 7.0: building an OpenWrt soft router in an LXC environment - 4XU

The simplest workable OpenWrt soft router in PVE LXC, fully usable as the main router - right.com.cn wireless forum

pct create 101 local:vztmpl/openwrt-02.01.2024-x86-64-generic-rootfs.tar.gz \
	--rootfs local-lvm:1 \
	--ostype unmanaged \
	--hostname openwrt \
	--arch amd64 \
	--cores 2 \
	--memory 512 \
	--swap 0 \
	--net0 bridge=vmbr0,name=eth0

/etc/pve/lxc/101.conf

# openwrt.common.conf is the OpenWrt config template shipped with PVE; it contains some basic settings
lxc.include: /usr/share/lxc/config/openwrt.common.conf

# Assign the host NIC enp3s0 to the container; adjust to your actual hardware
lxc.net.1.type: phys
lxc.net.1.link: enp3s0
lxc.net.1.flags: up
lxc.net.1.name: eth1

# Mount PPPoE (/dev/ppp) into the LXC
lxc.cgroup2.devices.allow: c 108:0 rwm
lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file

# Mount tun into the LXC
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

# Remove the capability restrictions from openwrt.common.conf; otherwise OpenClash cannot run
lxc.cap.drop:

$ pct start 101

/etc/config/network

config interface 'loopback'
        option device 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'fd1e:6bd7:3a36::/48'

config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'eth0'

config interface 'lan'
        option device 'br-lan'
        option proto 'static'
        option ipaddr '192.168.1.101'
        option netmask '255.255.255.0'
        option gateway '192.168.1.1'
        option ip6assign '60'
        list dns '192.168.1.1'

Open the LuCI web UI at http://192.168.1.101

Mirror

/etc/opkg/distfeeds.conf

$ sed -i 's|downloads.openwrt.org|mirrors.ustc.edu.cn/openwrt|g' /etc/opkg/distfeeds.conf
$ opkg update
$ opkg list-upgradable
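To upgrade everything in that list, a commonly used one-liner (safe enough in an LXC rootfs, though mass opkg upgrades are discouraged on flash-constrained devices):

```sh
$ opkg list-upgradable | cut -f 1 -d ' ' | xargs opkg upgrade
```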
