Friday, January 23, 2026

Adjust the size of the default VMDK disk in a Vagrant box (you can use this procedure to relocate + resize as well)

 

The steps below resize the default VMDK disk that a Vagrant box ships with by cloning it to VDI; the same procedure also lets you relocate the disk at the same time:


1) Shut down the VM
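Since the VM is managed by Vagrant, it can be shut down cleanly from the directory that holds the Vagrantfile, for example:

vagrant halt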


2) VBoxManage.exe clonehd box-disk001.vmdk E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --format VDI



C:\VirtualBox_VMs\voracle9x-docker-sa1_default_1763076112652_13435>"C:\Program Files\Oracle\VirtualBox"\VBoxManage.exe clonehd box-disk001.vmdk E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --format VDI

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Clone medium created in format 'VDI'. UUID: f09bc1a5-fdab-44b0-814b-cb599464562f




3) VBoxManage.exe modifyhd E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --resize 76800
(the --resize value is in MB; 76800 MB = 75 GB)

C:\VirtualBox_VMs\voracle9x-docker-sa1_default_1763076112652_13435>"C:\Program Files\Oracle\VirtualBox"\VBoxManage.exe modifyhd E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --resize 76800

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%


C:\VirtualBox_VMs\voracle9x-docker-sa1_default_1763076112652_13435>
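Note: clonehd and modifyhd are legacy aliases in current VirtualBox releases; the equivalent clonemedium/modifymedium calls would look roughly like this (same paths as above, --resize still in MB):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk box-disk001.vmdk E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --format VDI

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifymedium disk E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi --resize 76800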




4) Add the new VDI to the VirtualBox media registry (Virtual Media Manager)


5) In VirtualBox, attach the new VDI disk in place of the existing VMDK (the only disk, on SATA port 0) attached to the VM.
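Steps 4 and 5 can also be scripted with "VBoxManage storageattach". The VM name and the controller name "SATA" below are assumptions taken from this setup, so confirm them first with "VBoxManage showvminfo <vm-name>":

VBoxManage storageattach "voracle9x-docker-sa1_default_1763076112652_13435" --storagectl "SATA" --port 0 --device 0 --type hdd --medium E:\Virtualbox_VMs\voracle9x-docker-sa1\voracle9x-docker-sa1-box-disk001.vdi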


6) Boot the VM using "vagrant up"
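The remaining steps run as root inside the guest; with Vagrant that is typically:

vagrant ssh

sudo -i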


7) Run "fdisk -l" to list the partition table; the expanded disk size is now visible. In my case it grew from 32 GB to 75 GB.
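As a quick cross-check, lsblk should show the enlarged /dev/sda alongside the still 32 GB LVM root volume:

lsblk /dev/sda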


8) Run "cfdisk"; in my case there is only one disk, so it picked up all the partitions from that disk.


9) Choose /dev/sda3 (which backs the / filesystem), select Resize with the arrow keys, and enter the new size (74 GB in my case); then Write and accept the partition modification with "yes".
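If cfdisk is not available, growpart (from the cloud-utils-growpart package, if installed) can grow partition 3 to fill the disk non-interactively; this is an alternative to step 9, not part of the original run:

growpart /dev/sda 3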


10) Check the pvs, lvs, and vgdisplay output to confirm the current sizes backing the / filesystem before growing anything.
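The short-form commands for that check are:

pvs

vgs vg_main

lvs vg_main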


11) Run "pvresize /dev/sda3" [partition corresponds to / FS]


12) Run "pvdisplay /dev/sda3" [partition corresponds to / FS] -- it should now show the extra space as free space.


13) Run "lvextend -l +100%FREE /dev/mapper/vg_main-lv_root" [vg corresponds to the / FS]


14) Run "xfs_growfs /" so the XFS filesystem grows to accommodate the new space.
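xfs_growfs applies because this box uses XFS for /; if the root filesystem were ext4 instead, the equivalent online grow would be resize2fs:

resize2fs /dev/mapper/vg_main-lv_root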


15) Finally, review the allocated size using df -Th.


**********************Sample output:


Before:


(base) [root@localhost ~]# pvdisplay

  --- Physical volume ---

  PV Name               /dev/sda3

  VG Name               vg_main

  PV Size               <36.00 GiB / not usable 0

  Allocatable           yes (but full)

  PE Size               4.00 MiB

  Total PE              9215

  Free PE               0

  Allocated PE          9215

  PV UUID               6X5HFj-GRnV-Idjh-BO95-cLUf-biD4-dFPP3z


(base) [root@localhost ~]# pvdisplay /dev/sda3

  --- Physical volume ---

  PV Name               /dev/sda3

  VG Name               vg_main

  PV Size               <36.00 GiB / not usable 0

  Allocatable           yes (but full)

  PE Size               4.00 MiB

  Total PE              9215

  Free PE               0

  Allocated PE          9215

  PV UUID               6X5HFj-GRnV-Idjh-BO95-cLUf-biD4-dFPP3z


(base) [root@localhost ~]# vgdisplay vg_main

  --- Volume group ---

  VG Name               vg_main

  System ID

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  5

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                2

  Open LV               2

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               <36.00 GiB

  PE Size               4.00 MiB

  Total PE              9215

  Alloc PE / Size       9215 / <36.00 GiB

  Free  PE / Size       0 / 0

  VG UUID               7VoDZZ-Qqta-yTHp-UDUc-Ovju-UFID-ms8Riy


(base) [root@localhost ~]# lvdisplay /dev/vg_main/lv_root

  --- Logical volume ---

  LV Path                /dev/vg_main/lv_root

  LV Name                lv_root

  VG Name                vg_main

  LV UUID                btd7vi-Qtwj-nsIw-0tqf-2ury-a160-SA966o

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2025-08-28 12:38:32 +0000

  LV Status              available

  # open                 1

  LV Size                <32.00 GiB

  Current LE             8191

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           252:0


(base) [root@localhost ~]# df -Th

Filesystem                  Type      Size  Used Avail Use% Mounted on

devtmpfs                    devtmpfs  4.0M     0  4.0M   0% /dev

tmpfs                       tmpfs     2.9G     0  2.9G   0% /dev/shm

tmpfs                       tmpfs     1.2G   17M  1.2G   2% /run

/dev/mapper/vg_main-lv_root xfs        32G   24G  8.9G  73% /

/dev/sda2                   xfs       960M  217M  744M  23% /boot

vagrant                     vboxsf    466G  404G   63G  87% /vagrant

tmpfs                       tmpfs     593M  4.0K  593M   1% /run/user/1000

tmpfs                       tmpfs     593M  4.0K  593M   1% /run/user/0

(base) [root@localhost ~]



After:

(base) [root@localhost ~]# pvresize /dev/sda3

  Physical volume "/dev/sda3" changed

  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

(base) [root@localhost ~]# pvdisplay /dev/sda3

  --- Physical volume ---

  PV Name               /dev/sda3

  VG Name               vg_main

  PV Size               <74.00 GiB / not usable 16.50 KiB

  Allocatable           yes

  PE Size               4.00 MiB

  Total PE              18943

  Free PE               9728

  Allocated PE          9215

  PV UUID               6X5HFj-GRnV-Idjh-BO95-cLUf-biD4-dFPP3z


(base) [root@localhost ~]# lvextend -l +100%FREE /dev/mapper/vg_main-lv_root

  Size of logical volume vg_main/lv_root changed from <32.00 GiB (8191 extents) to <70.00 GiB (17919 extents).

  Logical volume vg_main/lv_root successfully resized.

(base) [root@localhost ~]# df -Th /

Filesystem                  Type  Size  Used Avail Use% Mounted on

/dev/mapper/vg_main-lv_root xfs    32G   24G  8.9G  73% /

(base) [root@localhost ~]# xfs_growfs /

meta-data=/dev/mapper/vg_main-lv_root isize=512    agcount=4, agsize=2096896 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=1, sparse=1, rmapbt=0

         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0

         =                       exchange=0

data     =                       bsize=4096   blocks=8387584, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0, ftype=1, parent=0

log      =internal log           bsize=4096   blocks=16384, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

data blocks changed from 8387584 to 18349056

(base) [root@localhost ~]# df -Th /

Filesystem                  Type  Size  Used Avail Use% Mounted on

/dev/mapper/vg_main-lv_root xfs    70G   24G   47G  34% /

(base) [root@localhost ~]#

****************


