Wednesday, October 3, 2018

How to create software RAID1 mirroring in Proxmox PVE

The instructions below will show you how to create a software RAID1 (mirroring) array in your Proxmox PVE. The example uses 2 hard drives at /dev/sdb and /dev/sdc (yours may be different).


apt-get update 
apt-get upgrade
apt-get dist-upgrade

apt-get install mdadm

modprobe raid1
(you do not need to add this to /etc/modules)


cat /proc/mdstat


Prepare the partitions as type Linux RAID autodetect:

fdisk -l
fdisk /dev/sdb
    inside fdisk, create the partition with 'n' if the drive is blank, set the type with 't' and enter 'fd' (Linux raid autodetect), then write the changes with 'w'




Copy the partition table from one drive to the other:

sfdisk -d /dev/sdb | sfdisk --force /dev/sdc
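
To double-check that the copy worked, you can dump both partition tables and compare them (just a sanity check; the two outputs should match):

sfdisk -d /dev/sdb
sfdisk -d /dev/sdc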



Remove any previous RAID metadata, if present:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1


Create the array with one disk missing (use this if you can only attach the second drive later):
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1


Or create the array with both disks at once:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
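
One extra step I would suggest here (not strictly required, but it avoids surprises on reboot): save the array definition and refresh the initramfs so the array is assembled automatically at boot. A minimal sketch, assuming the Debian/Proxmox default config path /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # record the array definition
update-initramfs -u                               # rebuild initramfs so it knows about the new array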

-----------------------------------  (if using LVM stop here - continue to: https://sites.google.com/a/datafeedfile.com/code/proxmox/proxmoxrecreatelocalmd0lvm2basedsharedlocalfilesystem)

Create the filesystem
mkfs.ext4 /dev/md0

Double-check to make sure everything is okay:
cat /proc/mdstat
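
On a healthy mirror the output should look roughly like this (the block count below is made up for illustration; [UU] means both members are up, and while the mirror is still syncing you will see a progress bar instead):

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      976630464 blocks super 1.2 [2/2] [UU]

unused devices: <none>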

Create mount target
mkdir /mnt/md0
chmod -R 777 /mnt/md0


Edit fstab. First run blkid to find the UUID of /dev/md0:

blkid

nano /etc/fstab
UUID=<UUID> /mnt/md0 ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,errors=remount-ro 0 2
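
If you would rather not copy the UUID by hand, something like this should append the same line for you, assuming the array is /dev/md0 as above (double-check /etc/fstab afterwards before rebooting):

echo "UUID=$(blkid -s UUID -o value /dev/md0) /mnt/md0 ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,errors=remount-ro 0 2" >> /etc/fstab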

mount -a

reboot


Proxmox install boot options

linux ext4 hdsize=8
linux ext4 hdsize=10
linux ext4 hdsize=12
linux ext4 hdsize=16

For Proxmox 5.x I have been using 'linux ext4 hdsize=16', which gives me 16GB for the PVE root capacity. After Proxmox has been completely installed I usually have about 1.4GB free. You can combine several of these options on one boot line, as shown in the example after the list below.

  • linux ext4 – sets the partition format to ext4. The default is ext3.
  • hdsize=nGB – this sets the total amount of hard disk to use for the Proxmox installation. This should be smaller than your disk size.
  • maxroot=nGB – sets the maximum size to use for the root partition. This is the max size so if the disk is too small, the partition may be smaller than this.
  • swapsize=nGB – sets the swap partition size in gigabytes.
  • maxvz=nGB – sets the maximum size in gigabytes that the data partition will be. Again, this is similar to maxroot and the final partition size may be smaller.
  • minfree=nGB – sets the amount of free space to remain on the disk after the Proxmox installation.
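
For example, a combined boot line might look like this (the sizes here are just an illustration, not a recommendation):

linux ext4 hdsize=16 maxroot=8 swapsize=4 minfree=2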

Tuesday, October 2, 2018

Restart all Proxmox services in PVE 5.x

Sometimes you just need to restart the Proxmox PVE services on your hardware nodes because they froze or are not functioning properly. The commands below will help you restart all your Proxmox services:

killall -9 corosync
systemctl restart pve-cluster
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
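
After the restart I like to confirm everything came back up cleanly:

systemctl status pve-cluster pvedaemon pveproxy pvestatd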


Stopping ALL your proxmox services in Proxmox PVE 5.x

Sometimes you just need to STOP the Proxmox PVE services on your hardware nodes because they froze or are not functioning properly. The commands below will help you stop all your Proxmox services:

killall -9 corosync
systemctl stop pve-cluster
systemctl stop pvedaemon
systemctl stop pveproxy
systemctl stop pvestatd


Friday, September 7, 2018

CHOWN Permission Denied Errors during VZDUMP container backup from Proxmox GUI

When using Proxmox I have always backed up my containers to shared NFS storage. Usually I use Open Media Vault (OMV) to host the NFS service. It is a bit slow sometimes, but it worked well in version 4. When I tried to do the same in Proxmox 5.x I was surprised to find these errors during backup:
INFO: starting new backup job: vzdump 110 --storage omvbak1_sdb1tb --compress lzo --mode snapshot --node e4 --remove 0
INFO: Starting Backup of VM 110 (lxc)
INFO: status = running
INFO: CT Name: posidev
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: posidev
INFO: temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf
INFO: starting first sync /proc/27345/root// to /mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/." failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/bin" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/bin/bzcmp" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/bin/bzegrep" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/bin/bzfgrep" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/omvbak1_sdb1tb/dump/vzdump-lxc-110-2018_09_07-08_22_21.tmp/bin/bzless" failed: Operation not permitted (1)

When I read the error message carefully, it complained about the temporary directory being on NFS:

INFO: temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf

So I did some research and found out that was indeed NOT GOOD! So I edited /etc/vzdump.conf (it must be edited manually on each hardware node)

from:

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID

to:

tmpdir: /mnt/md0/tmp
#dumpdir: DIR
#storage: STORAGE_ID

SEE HOW THE FIRST LINE is different? Instead of the commented-out DIR placeholder, tmpdir now points to my local RAID1 array mounted on /mnt/md0/tmp.

That basically fixed my problem! I can now back up my Proxmox containers to my NFS shared storage just like before.
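
As a side note, vzdump also accepts the temporary directory on the command line, so you should be able to test the fix on a single backup before touching the config file. A sketch using the same container and storage names from the log above:

vzdump 110 --storage omvbak1_sdb1tb --compress lzo --mode snapshot --tmpdir /mnt/md0/tmp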

I hope this article helps someone to solve this issue.

If you like these articles about proxmox issues and solutions I have encountered please subscribe to my blog! 

Thanks

--Andrew

Tuesday, August 14, 2018

What you need to know about Proxmox VE 5.2

On May 16, 2018 the Proxmox VE team in Vienna released their newest Proxmox Virtual Environment (Proxmox VE), version 5.2, coincidentally a month after their 10th anniversary in April 2018. Happy birthday Proxmox VE! I love this software!

Here are the most important points you need to know about Proxmox VE 5.2:

Cloud-Init

Proxmox VE 5.2 now includes cloud-init built in! Cloud-init is a software package that helps automate the setup and provisioning of virtual machines.

Cloud-init features include:
  • Deploy VM based on templates
  • Pre configure host names, SSH keys, mount points
  • Run post install scripts
  • Enables automation tools such as Ansible, Puppet, Chef, and Salt
  • Access to pre-installed disk images and copy servers from pre-created server images
Best of all, all of the above features are now available in Proxmox VE 5.2's graphical user interface (web-based GUI). Just awesome! Thanks Proxmox Team!
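
To give you a taste of the shell side of this, here is a minimal sketch of preparing a VM for cloud-init with the qm tool. The VM ID 9000, the storage name local-lvm, the user name, and the IP addresses are just assumptions for the example; check man qm on your node for the exact options in your version:

qm set 9000 --ide2 local-lvm:cloudinit                    # attach the cloud-init config drive
qm set 9000 --ciuser andrew                               # default user created in the guest
qm set 9000 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1 # static network config for the first NIC
qm set 9000 --serial0 socket --vga serial0                # many cloud images expect a serial console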


SMB/CIFS Storage Plug-in

One of the strengths of Proxmox VE is that it comes pre-installed with many useful storage plug-ins such as local directory, NFS, and Ceph. SMB/CIFS had been missing until now. SMB is obviously very important and popular because it is the default file-sharing protocol preferred by Windows / Microsoft. It is also known to be faster and lighter on data communication compared to NFS. The addition of SMB/CIFS storage support to Proxmox VE 5.2 is definitely a huge improvement!
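
For the command-line folks, adding a CIFS storage should look roughly like this with pvesm (the storage name, server IP, share name, and credentials below are made up for the example):

pvesm add cifs omvbak2_smb --server 192.168.1.20 --share backups --username backupuser --password secret
pvesm status    # verify the new storage shows up and is active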


Let's Encrypt Certificate Management via GUI



Are you tired of seeing your browser's certificate warning every time you open the Proxmox GUI? Or having to spend the extra 5 seconds to add an exception to allow yourself to see your own server? Yeah, me too!

With Proxmox VE 5.2, setting up SSL on your Proxmox VE hardware nodes becomes a breeze thanks to the built-in GUI support for installing Let's Encrypt SSL certificates.

In case you are not aware, the Let's Encrypt project allows anyone to get a FREE SSL certificate for any purpose (such as protecting your Proxmox VE server).
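
If you prefer the shell over the GUI, the same thing is exposed through the pvenode tool. A rough sketch, assuming your node is reachable from the internet and the hostname and email below are placeholders:

pvenode acme account register default admin@example.com   # create an ACME account
pvenode config set --acme domains=pve1.example.com        # tell the node which domain to use
pvenode acme cert order                                   # order and install the certificate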

I have created a short video - How to get your own Free Let's Encrypt Certificate in less than 5 min:



Added features in Proxmox VE 5.2:

  • Creation of clusters via the graphical user interface. This feature makes creating and joining nodes to a Proxmox cluster extremely simple and intuitive even for novice users.
  • Expanded functionality of LXC: Creating templates or moving disks from one storage to another now also work for LXC. The move-disk function can be used for stopped/paused containers and instead of backup/restore.
  • If the QEMU guest agent is installed, the IP address of a virtual machine is displayed on the GUI.
  • Administrators can now easily create and edit new roles via the GUI.
  • Setting I/O limits for restore operations is possible (globally or more fine-grained per storage) to avoid I/O load getting too high while restoring a backup.
  • Configuration of ebtables in the Proxmox VE Firewall.

I just want to once again say Thank You to the Proxmox VE team for making such great software!

Sunday, April 15, 2018

Preparation commands I do for every LXC Container

Every Proxmox LXC container I configure should have a good base. A good base for any type of server includes having its time zone and locales set correctly. Another thing that is usually required is support for the Ubuntu PPA software repository system.

Here are the commands I usually execute for every Ubuntu server I create:

UPDATE AND UPGRADE

apt-get update
apt-get -y dist-upgrade


SETTING TIME ZONE

dpkg-reconfigure tzdata
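
If you script your container setup, dpkg-reconfigure's interactive prompt gets in the way; a noninteractive equivalent looks something like this (America/Chicago is just an example zone):

ln -fs /usr/share/zoneinfo/America/Chicago /etc/localtime
dpkg-reconfigure -f noninteractive tzdata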


SETTING LOCALES

locale-gen en_US en_US.UTF-8
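
You may also want to make the generated locale the default (on some images this is already set):

update-locale LANG=en_US.UTF-8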

INSTALL PPA REPOSITORY SUPPORT

apt-get install software-properties-common python-software-properties


UNINSTALL POSTFIX (if you don't need to send email)

service postfix stop
apt-get remove postfix
apt-get purge postfix
apt-get autoclean





Sunday, February 18, 2018

Creating EXT4 partition from unused partition with LVM

This is a quick guide to creating a usable EXT4 partition from an unused disk partition with LVM on your Proxmox server.

In this example my UNUSED DISK PARTITION is on /dev/sda4

*** THIS WILL ERASE ALL CONTENT IN /dev/sda4 ***


STEP 1 - PREPARE THE PARTITION AS TYPE: Linux LVM

fdisk /dev/sda

change the partition type by entering 't', '4', '31', and finally 'w' to write the changes

't' is for changing the partition type
'4' selects partition #4
'31' selects the Linux LVM partition type (this numeric code is for GPT disks; on an MBR disk the type would be '8e')
'w' writes the changes to disk and exits fdisk

before writing, you can double-check that the type is set properly by pressing 'p' then ENTER; you should see something like this (partition 4 now shows Linux LVM):

/dev/sda1      2048      4095      2048     1M BIOS boot
/dev/sda2      4096    528383    524288   256M EFI System
/dev/sda3    528384  33554432  33026049  15.8G Linux LVM
/dev/sda4  33556480 586072334 552515855 263.5G Linux LVM


STEP 2 - CREATE PV, then VG, then LV 


Every LVM volume involves creating these 3 layers, in order:

1. Physical volume (PV)
2. Volume group (VG) (I named mine vg_ssd_data)
3. Logical volume (LV) (I named mine lv_ssd_data)

Type the following commands:

pvcreate /dev/sda4

vgcreate vg_ssd_data /dev/sda4

lvcreate -l 100%FREE -n lv_ssd_data vg_ssd_data

lvs   (this command is optional - it just displays your logical volumes - see below)

  LV          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data        pve         twi-a-tz--   8.25g             0.00   0.55
  root        pve         -wi-ao----   3.75g
  swap        pve         -wi-ao----   1.88g
  lv_ssd_data vg_ssd_data -wi-a----- 263.46g


STEP 3 - GET YOUR DEVICE MAPPER NAME

Enter the following command to display all your logical volumes and find your device mapper path.
(My new logical volume is the first one below; its LV Path is the device mapper path.)

lvdisplay

  --- Logical volume ---
  LV Path                /dev/vg_ssd_data/lv_ssd_data
  LV Name                lv_ssd_data
  VG Name                vg_ssd_data
  LV UUID                dDqnRP-1wt2-UHcI-gE3T-WYle-HtNB-cWbfis
  LV Write Access        read/write
  LV Creation host, time e1, 2018-02-18 10:18:07 -0600
  LV Status              available
  # open                 0
  LV Size                263.46 GiB
  Current LE             67445
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                QzAL07-mdzD-RoH5-ulXa-gwGt-FtTN-XRhA4q
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-12-02 20:53:30 -0600
  LV Status              available
  # open                 2
  LV Size                1.88 GiB
  Current LE             480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                2LuMFi-XzY9-TWNf-uiH3-qdAc-Y5Xa-r6ajdz
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-12-02 20:53:30 -0600
  LV Status              available
  # open                 1
  LV Size                3.75 GiB
  Current LE             960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                hBfLjZ-Zqib-TMwJ-7dQh-qBrJ-7XHZ-D1G2lQ
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-12-02 20:53:30 -0600
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                8.25 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.55%
  Current LE             2112
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4


STEP 4 - MAKE THE EXT4 PARTITION

The command below will create the EXT4 partition:

mkfs.ext4 /dev/vg_ssd_data/lv_ssd_data

the result should look similar to this:

mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 69063680 4k blocks and 17268736 inodes
Filesystem UUID: 123a79fb-226e-45b2-ab97-bdd8df335538
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done


STEP 5 - CREATE THE DIRECTORY WHERE WE WILL MOUNT THE PARTITION

The following 2 commands will create a directory and set its permissions to 777 (all access - this is just for my example - you probably want to restrict your permissions)

mkdir /mnt/ssd_data
chmod -R 777 /mnt/ssd_data


STEP 6 - GET UUID USING COMMAND BLKID

The following command, blkid, will show you the UUID which you will need to mount your newly created partition.

blkid

My result looks like:

/dev/sda2: UUID="72B0-472C" TYPE="vfat" PARTUUID="43003f1e-9cbd-461c-ba1f-b288196eaf8d"
/dev/sda3: UUID="ceYVkY-WxbN-UQME-i6Gt-frKB-rDeW-StrMMz" TYPE="LVM2_member" PARTUUID="ac2082b2-aed1-432d-8102-86246ab4d738"
/dev/mapper/pve-swap: UUID="d44d4cfc-c6b2-4f75-b24a-a6ec99e6d748" TYPE="swap"
/dev/mapper/pve-root: UUID="d41800f7-bf3a-42fb-bbd2-9f4519d99bd6" TYPE="ext4"
/dev/sda1: PARTUUID="491629a5-bea8-4a61-84a7-b4610e8b4607"
/dev/sda4: UUID="Iv347n-th9t-fC3v-UpUc-bHFN-LKOA-shD5Eg" TYPE="LVM2_member" PARTUUID="bfc4c91b-148b-429c-a9eb-b6d80a1dd685"
/dev/sdb: PTUUID="05ca81c9-9745-4fb2-bd15-ced9e1525169" PTTYPE="gpt"
/dev/sdc: PTUUID="8e2952c8-96a6-4c41-a9a8-0ae9cde93182" PTTYPE="gpt"
/dev/mapper/vg_ssd_data-lv_ssd_data: UUID="123a79fb-226e-45b2-ab97-bdd8df335538" TYPE="ext4"


STEP 7 - ADDING YOUR PARTITION TO FSTAB

In this step you will edit your fstab (filesystem table) to add a line which tells your computer to AUTOMATICALLY MOUNT the partition every time the computer starts (cold / warm start)

nano /etc/fstab

add the following line (substituting your own UUID and mount directory):

UUID=123a79fb-226e-45b2-ab97-bdd8df335538 /mnt/ssd_data ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,errors=remount-ro 0 2



STEP 8 - FINALLY! MOUNT AND CHECK YOUR PARTITION'S FREE DISK SPACE

The following command will mount all your partitions by reading your settings from the filesystem table (/etc/fstab)

mount -a

(mount -a does not have any output when successful)

The following command will show FREE DISK SPACE for all your MOUNTED PARTITIONS:

df -h

Filesystem                           Size  Used Avail Use% Mounted on
udev                                  24G     0   24G   0% /dev
tmpfs                                4.8G  9.2M  4.8G   1% /run
/dev/mapper/pve-root                 3.7G  1.8G  1.7G  52% /
tmpfs                                 24G   45M   24G   1% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                 24G     0   24G   0% /sys/fs/cgroup
/dev/fuse                             30M   24K   30M   1% /etc/pve
tmpfs                                4.8G     0  4.8G   0% /run/user/0
/dev/mapper/vg_ssd_data-lv_ssd_data  259G   61M  246G   1% /mnt/ssd_data






Monday, February 5, 2018

Upgrading Proxmox PVE from version 5.1.x to 5.1.43 and installing the community repo and GPG key


UPGRADING PROXMOX PVE version 5.1 to a newer version!

This is the solution / fix if you encounter this error during 'apt-get update':

The repository 'https://enterprise.proxmox.com/debian/pve stretch Release' does not have a Release file.

The steps below will upgrade your Proxmox PVE; mine was updated:

from: pve-manager/5.1-35/722cc488 (running kernel: 4.13.4-1-pve)
to:     pve-manager/5.1-43/bdb08029 (running kernel: 4.13.13-5-pve)

STEP 1:  Remove subscription REPO

cd /etc/apt/sources.list.d/
rm pve-enterprise.list

STEP 2:  Add regular REPO

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

STEP 3:  Download and Install GPG Key

wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

After the pre-requisite steps above you can then execute the following standard distribution upgrade:

apt-get update
apt-get dist-upgrade

OPTIONAL (clean up - to free up disk space):

apt-get autoremove --purge
apt-get clean
apt-get autoclean


Check your new PVE version using:

pveversion