Monday, November 23, 2015

Proxmox v4.x mini home server with less than 10 watts of power consumption

I have just built a home server using an Intel Celeron J1900 CPU (Bay Trail) and an ASRock Q1900 mini-ITX motherboard, loaded with the Proxmox 4.x virtualization environment.  I picked this hardware configuration for low power consumption and low maintenance.  The Celeron J1900 is soldered onto the ASRock Q1900 motherboard; it has four cores, runs at 2.0 GHz, and consumes less than 10 watts (about 8.9 W according to Intel's specifications).  Awesome.

I bought the motherboard on eBay for only $80 including shipping, and it came with an 8GB memory module (what a deal!).

For storage I am using a single Intel 320 80GB SSD and 2 x 4TB Western Digital hard drives in software RAID1 (mirroring) mode.  I am betting on the Intel SSD's long lifetime to give me trouble-free operation.

NO moving parts!  Well, actually there is a fan in the power supply, and obviously the two 4TB hard drives spin. Haha... nothing I can do there, but at least the CPU and SSD have no moving parts.

SILENT!  I cannot hear anything from this new server. It runs almost completely silently, which I love, and it makes my wife happy. :-)

Okay, enough with the introduction. Let me give the specifications and describe how I managed to install Proxmox and which virtual servers I have loaded on this new mini home server.

SPECIFICATIONS:

Motherboard: ASRock Q1900

  1. Intel® Quad-Core Processor J1900 + Mini-ITX Motherboard
  2. All Solid Capacitor design
  3. Supports DDR3/DDR3L 1333 memory, 2 x SO-DIMM slots
  4. 1 PCIe 2.0 x1, 1 mini-PCIe
  5. Graphics Output Options : D-Sub, DVI-D, HDMI
  6. Built-in Intel® 7th generation (Gen 7) graphics, DirectX 11.0, Pixel Shader 5.0
  7. 7.1 CH HD Audio with Content Protection (Realtek ALC892 Audio Codec)
  8. 2 SATA3, 2 SATA2, 4 USB 3.0 (2 Front, 2 Rear), 4 USB 2.0 (2 Front, 2 Rear)
  9. 1 x Print Port Header, 1 x COM Port Header
  10. Supports A-Tuning, XFast LAN, XFast RAM, USB Key

Extra NIC (gigabit LAN card) for the PFSense WAN port:



As I mentioned, I bought the parts above from eBay, and the board came with an 8GB SO-DIMM already included. Nice!


OPERATING SYSTEM:

I tried to install Proxmox 4 by downloading its ISO directly from the Proxmox download page.  But it did not work well with this motherboard because the installer failed to enter the VESA graphics mode that the Proxmox installer requires.  After some googling, I decided to follow the advice of other users with the same issue and install Proxmox via Debian.

So I downloaded the Debian Jessie minimal (netinst) ISO, wrote it to a USB thumb drive, and installed it on my SSD boot drive.  Make sure you choose LVM partition management.

This is the guide I followed, after installing Debian, to install Proxmox flawlessly:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
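
In short, the wiki boils down to making sure the hostname resolves to the server's static IP in /etc/hosts, pointing apt at the Proxmox repository, and installing the proxmox-ve meta-package on top of Jessie. A condensed sketch (double-check the wiki for the exact repository line, key URL, and package list):

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon open-iscsi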

Once Proxmox was installed, I started installing the following software:

1.  Software RAID

The image below shows my RAID1 partition mounted and being used as Proxmox storage:


2.  Download templates from Turnkey Linux



3. File Server  (Samba)
    Even though I am not a fan of Samba, it is one of the most popular and widely used network file sharing systems, so I decided to use the Turnkey Linux appliance called "File Server".  It works quite well and installs as an LXC (Linux container), which saves a lot of resources. Nice!

4. Plex Media Server
     I also installed the free Plex Media Server and let it scan through all my home videos and family photos.  Plex Media Server is free to install on your own server.  I was able to install it inside a Debian LXC with just one install command.  Very easy: just sign up for a free Plex account and follow the instructions to download the Plex server package and install it into a Debian LXC.  Works perfectly.


5. Torrent Server
    I don't use torrents much, but sometimes it is handy to have a torrent server that doesn't require a computer to stay on during long transfers. So the ruTorrent appliance from Turnkey Linux was perfect. And again, it is an LXC!


6. My family likes to use Roku, so I used to have roConnect (a web appliance to serve and play movies via the browser) running on my MacBook, which was annoying because it required a LAMP stack running on my MacBook.  This mini server finally frees me from worrying about that.  I simply installed a Turnkey Linux LAMP appliance, loaded roConnect, configured it, and voila: roConnect 24/7!


7. LXDE failed to install. I wanted a simple and lightweight Linux desktop (headless), something I can remote into using VNC or TeamViewer, but it did not work.  I have not had time to troubleshoot why; maybe later.  I had Xfce working on OpenVZ before, and I think LXDE on LXC should work too.  I will write a post later about why it did not work and how I worked around it.


8. PFSense firewall. I would like to install PFSense next, using KVM (QEMU).  This will replace my Tomato USB Asus router (which has served me well for 5 years).   Having a real firewall like PFSense at home is overkill, but what the heck, I have a server for it anyway.  Haha.
I will post another article later about how this goes.


SUMMARY:

Overall I am very happy with my mini server plan. I have 5 LXC containers running so far:
    
      a.  File Server
      b.  Torrent Server
      c.  roConnect
      d.  LXDE (running, but cannot startx yet)
      e.  Plex Server

Plus 3.5TB of mirrored storage for my family, all for less than $450.00.   The 4TB hard drives cost the most at about $140 each.  What I love most about this server is that it will cost me only about $10 in electricity to keep it powered on 24/7 for a full year.  Yes, $10 / year!  WOW!

I am waiting to install another 8GB memory module, but CPU and memory utilization have been modest and very reasonable; see the image below:


Thanks for reading! I will post more how-to articles about PFSense and LXDE later.

--Andrew



Install Software RAID to existing Proxmox PVE installation

These are my notes of the commands I executed on the Proxmox hardware node to add software RAID capability to an existing Proxmox PVE installation:


apt-get update 
apt-get upgrade
apt-get dist-upgrade

apt-get install mdadm

modprobe raid1
                        (no need to add it to /etc/modules)


cat /proc/mdstat


Prepare the partitions as Linux RAID autodetect:

fdisk -l
fdisk /dev/sdb
    set the partition type to 'fd'  (Linux raid autodetect)


Copy the partition table from one drive to the other:

sfdisk -d /dev/sdb | sfdisk --force /dev/sdc



Remove previous RAID metadata, if any:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1


Create the array with one disk missing:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1


Create the array with both disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1


To add the second drive later:
mdadm --manage /dev/md0 --add /dev/sdc1
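
Once the array is created, it is worth persisting it in mdadm.conf so it keeps its md0 name across reboots (the md127 post further down shows the manual fix if this step is skipped). A minimal sketch:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf     (appends an ARRAY line for the new array)
update-initramfs -u                                (so the initrd picks up the new mdadm.conf)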

-----------------------------------  (if using LVM use this section below) -------------------------------


umount /mnt/md0
cat /proc/mdstat

pvcreate /dev/md0
vgcreate vglocalmd0 /dev/md0

lvcreate -n localmd0 -l 90%FREE vglocalmd0
-or-
lvcreate -n localmd0 -l 100%FREE vglocalmd0   (allocates 100% of the VG, but vzdump snapshot backups will then fail because LVM snapshots need free space in the volume group)

mkfs.ext4 /dev/vglocalmd0/localmd0

blkid      (to get and copy the uuid)

nano /etc/fstab

UUID=<uuid> /mnt/md0 ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,errors=remount-ro 0 2

rm -rf /mnt/md0/*
chmod 777 /mnt/md0

mount -a

df -h /mnt/md0      (just to check and make sure the mount is there and okay)


-----------------------------------  (if using NON-LVM (just ext4) use this section below) -------------------------------

Create the filesystem
mkfs.ext4 /dev/md0

Double check to make sure all okay:
cat /proc/mdstat

Create mount target
mkdir /mnt/md0
chmod -R 777 /mnt/md0


----------------------  end of LVM / ext4 configuration  -----------------

Edit fstab
blkid

nano /etc/fstab
UUID=<UUID> /mnt/md0 ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,errors=remount-ro 0 2

mount -a

reboot
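
With the mirror mounted at /mnt/md0, it can be registered as directory storage so Proxmox can keep container root filesystems, templates, and backups on it. A rough sketch, where the storage ID 'raid1-md0' is just an example name (the same thing can be done via Datacenter > Storage in the web UI):

pvesm add dir raid1-md0 --path /mnt/md0 --content images,rootdir,vztmpl,backup,iso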

Saturday, November 21, 2015

Proxmox boot failure - missing proxmox entry from grub menu - caused by answering YES during upgrade

Proxmox boot failure - missing proxmox entry from grub - can not boot

I ran into a problem during a Proxmox upgrade.  I was trying to upgrade Proxmox from 3.1 to 3.5.
I have performed this Proxmox upgrade many times, but this time I answered the wrong question during the GRUB upgrade. The question was “Upgrade GRUB 2 (continue without upgrading GRUB 2)“. I should have answered NO, but I made a mistake and answered YES.

Answering yes means the GRUB configuration will not be updated, which is a huge mistake. I did not realize it until I rebooted the server and found that Proxmox was missing from the GRUB menu; it booted straight into memtest.

I started googling and found that many people have experienced this problem. Most of them hit the same issue through a bug Proxmox had, and some got errors during the upgrade; nevertheless, the same solution may work for many Proxmox GRUB installation / upgrade issues.

Here are the steps I took to make my server boot properly again:

  1. BOOT TO EXISTING PROXMOX INSTALLATION USING PROXMOX CD / USB live drive

    on the Proxmox boot screen, type 'pveboot' and press Enter.

    pveboot will boot you into your existing Proxmox installation (yes, the one you cannot boot into automatically)
  2. FIX GRUB CONFIGURATION AND RE-INSTALL GRUB TO BOOT DEVICE

    nano /boot/grub/grub.cfg

    look for the first entry, similar to this:


    menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 2.6.32-43-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-2.6.32-43-pve-advanced-' {
    load_video
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 699b8223-64d7-4963-82b2-1b0c14db974a
    else
    search --no-floppy --fs-uuid --set=root 699b8223-64d7-4963-82b2-1b0c14db974a
    fi
    echo 'Loading Linux 2.6.32-43-pve ...'
    linux /vmlinuz-2.6.32-43-pve root= ro quiet
    echo 'Loading initial ramdisk ...'
    initrd /initrd.img-2.6.32-43-pve
    }


    replace whatever is necessary with something that works (I compared against and copied from another Proxmox server I have; it does not have to be the same version).
    The menu entry below is the correct one, copied from another hardware node:

    menuentry 'Proxmox Virtual Environment GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-' {
    load_video
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='(hd0,gpt2)'
    search --no-floppy --fs-uuid --set=root 699b8223-64d7-4963-82b2-1b0c14db974a
    echo 'Loading Linux 2.6.32-43-pve ...'
    linux /vmlinuz-2.6.32-43-pve root=/dev/mapper/pve-root ro quiet gfxpayload=text nomodeset
    echo 'Loading initial ramdisk ...'
    initrd /initrd.img-2.6.32-43-pve
    }
    The parts I noticed to be different, and corrected, were: the partition module (part_gpt instead of part_msdos), the root device (set root='(hd0,gpt2)'), the kernel root= parameter (root=/dev/mapper/pve-root), and the extra kernel options (gfxpayload=text nomodeset).
    Please note:
    - your Linux kernel version may be different, mine is 2.6.32-43-pve
    - your root UUID will be different; mine is 699b8223-64d7-4963-82b2-1b0c14db974a. If you don't know your UUID, run the 'blkid' command to get a list of your devices' UUIDs.
    - my root device path is /dev/mapper/pve-root because I use LVM partitioning; yours may be different.

    Then install the corrected configuration to your boot device:


    grub-install /dev/sda

    Then we are done and ready to reboot:


    reboot
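
    Once the server boots normally again, it is worth regenerating grub.cfg the proper way instead of keeping the hand-edited copy. A minimal sketch:

    update-grub              (rebuilds /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d)
    grub-install /dev/sda    (re-installs the boot loader to the boot device)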



The steps above took me hours to figure out. I hope this helps and saves somebody some time.

Friday, October 16, 2015

Proxmox version 4 removes support for OpenVZ, but adds LXC (Linux Containers)

I have just read Proxmox's press release and watched the video about Proxmox Version 4.

One major shock was the team's decision to move away from OpenVZ and replace it with LXC.

Both OpenVZ and LXC are good Linux VPS (virtual private server) technologies. I believe OpenVZ was more mature to start with, but LXC is more efficient and faster.

In my opinion the Proxmox team's decision is the correct one, because OpenVZ is tied too closely to its sponsoring company Odin (formerly SWsoft), whereas LXC started as a true open source collaboration and eventually got support from Canonical (the Ubuntu people).

As far as I know their features are very similar, but OpenVZ requires a modified kernel and LXC does NOT, which is a huge difference for a DevOps admin.

Anyhow, I am now thinking about moving from OpenVZ to LXC as well, and realizing I will need to re-learn everything I know about OpenVZ and look up how to do all the basic things like backup, stop, suspend, start, etc... Haha... here we go again.
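
As a head start on that re-learning, the everyday vzctl operations map to the new 'pct' tool in Proxmox 4. A rough cheat sheet (CT ID 104 is just an example):

pct list                      (list containers)
pct start 104
pct stop 104
pct suspend 104
pct enter 104                 (get a shell inside the container)
vzdump 104 --mode snapshot    (backups still go through vzdump)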

Sunday, June 28, 2015

Enable automatic time synchronization for OpenVZ container in Proxmox (for Ubuntu / Debian)

apt-get update
apt-get -y install ntpdate

locale-gen en_US en_US.UTF-8
dpkg-reconfigure locales

dpkg-reconfigure tzdata

/usr/sbin/vzctl stop  <ctid>
/usr/sbin/vzctl set <ctid>  --capability sys_time:on  --save
/usr/sbin/vzctl start  <ctid>
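
To verify the capability took effect, set the time from inside the container (pool.ntp.org is just an example NTP server):

/usr/sbin/vzctl enter <ctid>
ntpdate pool.ntp.org      (should adjust the clock without an 'Operation not permitted' error)
exit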

Rejoin hardware node to Proxmox cluster

service pvestatd stop
service pvedaemon stop

service cman stop
killall -9 corosync cman dlm_controld fenced

service pve-cluster stop

rm /etc/cluster/cluster.conf
rm -rf /var/lib/pve-cluster/* /var/lib/pve-cluster/.*
rm /var/lib/cluster/*

// check versions - make sure this node is running the same kernel version as the other nodes
uname -a
pveversion -v   (look for Running kernel: ... )

reboot (you have to reboot!)



After rebooting, you can add the node as usual:
pvecm add <IP address of one of the nodes already in the cluster>
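
Afterwards, verify that the node actually joined:

pvecm status     (shows quorum and membership)
pvecm nodes      (lists the cluster nodes)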


Backup OpenVZ container CT to another hardware node for automatic scheduled remote backup

Proxmox has a nice user interface that lets you back up any OpenVZ container (even while it is live, using a snapshot).  Did you know you can also do the same from the command line?

You can use vzdump!

For example:

vzdump 104 --mode snapshot --compress lzo --stdout | ssh 10.0.1.1 "cat > /mnt/backup/vz/104/vzdump-openvz-104-2013_05_18-11_00_00.tar.lzo"


The above command will back up CT 104:

    - using 'snapshot' mode (live, without needing to shut it down)
    - with LZO compression
    - streaming the dump over SSH to another server (10.0.1.1 in this example)
    - into the remote directory and filename:

    /mnt/backup/vz/104/vzdump-openvz-104-2013_05_18-11_00_00.tar.lzo


Why is this useful? Well, for daily / routine backups of course!  Imagine never having to worry about backing up your VM again once you put this command into your crontab. :-)
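
For example, a nightly crontab entry on the hardware node could look like this (the 02:30 schedule and paths are just examples; note that % must be escaped as \% inside crontab):

30 2 * * * vzdump 104 --mode snapshot --compress lzo --stdout | ssh 10.0.1.1 "cat > /mnt/backup/vz/104/vzdump-openvz-104-$(date +\%Y_\%m_\%d).tar.lzo"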

Removing an OpenVZ container (CT) manually from Proxmox

/etc/vz/conf/<ctid>.*
     there may be .mount files here; just back them up to /root/, but don't worry, vzdump backs up these configuration files too,
     and vzrestore will restore the .mount files as well.

/var/lib/vz/root/<ctid>   may contain a fastboot file; move this to /root/ as well, just in case.
     This is the directory used if local storage is used for the container.

where the actual CT was:

/mnt/md0/private/


notes:
vzdump also backs up the <ctid> configuration file and the <ctid>.mount files, and vzrestore restores the conf and .mount files just fine.

Increase the open file limit (nofile) on the HN hardware node and in VMs / CTs

DO THIS ON THE HN (hardware node):

nano /etc/security/limits.conf

# wildcard does not work for root, but for all other users
*               soft     nofile           65536
*               hard     nofile           65536
# settings should also apply to root
root            soft     nofile           65536
root            hard     nofile           65536

ulimit -n 65536

do not need to modify pam limits
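
To confirm the new limit is in effect, a couple of quick checks (replace <pid> with whatever process you care about):

ulimit -n                                  (in a fresh login shell; should print 65536)
grep "Max open files" /proc/<pid>/limits   (for an already-running process)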

-----------------------------------------------

For each CT we also need to do this:

Fix / repair / replace a broken or failed mirrored RAID hard drive on a Proxmox hardware node (HN, md0)

** REBOOT NOT REQUIRED FOR HOT-SWAP DRIVE BAY **

cat /proc/mdstat

FROM SDB TO SDC:
dd if=/dev/zero of=/dev/sdc bs=512 count=1
sfdisk -d /dev/sdb | sfdisk --force /dev/sdc
mdadm --manage /dev/md0 --add /dev/sdc1

FROM SDC TO SDB:
dd if=/dev/zero of=/dev/sdb bs=512 count=1
sfdisk -d /dev/sdc | sfdisk --force /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb1

cat /proc/mdstat
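
If the dead drive is still listed as a member of the array, it may need to be failed and removed before the replacement partition is added. For example, assuming /dev/sdc1 is the failed member:

mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1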

Proxmox: repair / fix the md127 issue (software RAID1 renamed from md0 to md127 after some time)

first try to examine the array:

cat /proc/mdstat

examine each array:

mdadm --detail /dev/md0
mdadm --detail /dev/md127

Take note of the UUID of each array; they should be the SAME.

example: 02068dc1:63b677bb:e3f0cfcc:8ccc0d3b

next, examine the current mdadm scan result:

mdadm --detail --scan

To fix this problem we basically need to do the following:

add the ARRAY line to /etc/mdadm/mdadm.conf, update the initrd image so it reads and includes the new setting from mdadm.conf, then reboot


nano /etc/mdadm/mdadm.conf 

add this line under the "# definitions of existing MD arrays" comment:

ARRAY /dev/md0 UUID=02068dc1:63b677bb:e3f0cfcc:8ccc0d3b

update the initrd image:

update-initramfs -u

reboot

After the reboot, examine the array again; you may need to re-add drives that were not included, like this:

mdadm --manage /dev/md0 --add /dev/sdb1

monitor rebuilding progress:

cat /proc/mdstat


That is all!

Proxmox hardware node and OpenVZ container: increase shared memory (SHMMAX, SHMMNI) limits

This example uses CT 118 (OpenVZ container ID 118).

Log in to your hardware node

vzctl stop 118

vzctl set 118 --kmemsize unlimited --save
vzctl set 118 --lockedpages unlimited --save
vzctl set 118 --privvmpages unlimited --save
vzctl set 118 --shmpages unlimited --save
vzctl set 118 --numproc unlimited --save
vzctl set 118 --numtcpsock unlimited --save
vzctl set 118 --numflock unlimited --save
vzctl set 118 --numpty unlimited --save
vzctl set 118 --numsiginfo unlimited --save
vzctl set 118 --tcpsndbuf unlimited --save
vzctl set 118 --tcprcvbuf unlimited --save
vzctl set 118 --othersockbuf unlimited --save
vzctl set 118 --dgramrcvbuf unlimited --save
vzctl set 118 --numothersock unlimited --save
vzctl set 118 --dcachesize unlimited --save
vzctl set 118 --numfile unlimited --save
vzctl set 118 --numiptent unlimited --save

(for elasticsearch... set memlock to unlimited by:   vzctl set <CT_NUM> --memlock unlimited --save )


nano /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144

/sbin/sysctl -p
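
For reference, assuming the usual 4 KB page size, those values work out to:

kernel.shmmax = 536870912 bytes = 512 MB      (largest single shared memory segment)
kernel.shmall = 2097152 pages x 4 KB = 8 GB   (total shared memory allowed system-wide)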


vzctl start 118

vzctl enter 118

nano /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128

/sbin/sysctl -p

exit


vzctl stop 118

vzctl start 118

Add a second mirrored hard drive to software RAID1 /dev/md0 on the hardware node

# SDB is the first hard drive
# SDC is the second hard drive
# only one partition for each drive
sfdisk -d /dev/sdb | sfdisk --force /dev/sdc  (copies partition table from sdb to sdc)

-----------------------

cat /proc/mdstat

dd if=/dev/zero of=/dev/sdc bs=512 count=1
sfdisk -d /dev/sdb | sfdisk --force /dev/sdc
mdadm --manage /dev/md0 --add /dev/sdc1

cat /proc/mdstat

-----------------------

Example of how it should look after partitioning:

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
81 heads, 63 sectors/track, 191411 cylinders
Units = cylinders of 5103 * 512 = 2612736 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x31961bff

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      191412   488385560   fd  Linux raid autodetect

-------------

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       60802   488385560   fd  Linux raid autodetect



///////////////////////////////////////

if SDC is the source and SDB is the target:

dd if=/dev/zero of=/dev/sdb bs=512 count=1
sfdisk -d /dev/sdc | sfdisk --force /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb1
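
After the partition is added, the mirror resyncs in the background; progress can be watched with:

watch -n 5 cat /proc/mdstat     (refreshes the resync percentage every 5 seconds)
mdadm --detail /dev/md0         (shows the array state and rebuild status)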



Saturday, January 24, 2015

Mount a partition from a Proxmox hardware node into OpenVZ container(s)

Sometimes we need to access another partition from outside our OpenVZ container. In Proxmox PVE you can do this easily, but it requires manual configuration (it cannot be done from the Proxmox admin UI) and a reboot of your OpenVZ container.

[STEP 1] create the destination directory in the container:

in this example, I am mounting /mnt/<name> from the hardware node to /data_ssd in the container

mkdir /data_ssd
chmod -R 777 /data_ssd

shutdown the container:

shutdown -h now

[STEP 2] prepare the mount to whatever partition you would like to access (ON HARDWARE NODE)

on the proxmox HN:

mkdir /mnt/<name>    (name example 'ssd_shared')
chmod -R 777 /mnt/<name>

in this example I am mounting an LVM partition called lvm_ssd.
Let's make sure the partition will be mounted automatically when the HN boots.

blkid            (copy the UUID of the newly created LVM partition)

Then make sure your fstab file has the following entry:

nano /etc/fstab

UUID=<uuid> /mnt/<name> ext4 defaults,noatime,nodiratime,noacl,data=writeback,barrier=0,nobh,discard,errors=remount-ro 0 2

To make sure the fstab settings work, execute the command below to mount all fstab entries:

mount -a

[STEP 3] setup the container's configuration file to allow container to access the shared mount

cd /etc/pve/openvz
nano <ctid>.mount                 (ctid is the container ID for example 112)

#!/bin/bash
# per-container mount script; vzctl runs this on the HN every time the CT starts
. /etc/vz/vz.conf          # global OpenVZ config
. ${VE_CONFFILE}           # this container's config (provides VE_ROOT)
SRC=/mnt/<name>            # source directory on the hardware node
DST=/data_ssd              # target path inside the container
if [ ! -e ${VE_ROOT}${DST} ]; then mkdir -p ${VE_ROOT}${DST}; fi
mount -o noatime -n -t simfs ${SRC} ${VE_ROOT}${DST} -o ${SRC}


[STEP 4] Test by starting the container

Once the container has started back up, execute the 'mount' command to see all the mounted partitions in your container. Mine looks like this:

/mnt/md0/private/112 on / type simfs (rw,relatime)
/mnt/ssd_shared on /data_ssd type simfs (rw,noatime)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
none on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
none on /run type tmpfs (rw,nosuid,noexec,relatime,size=419432k,mode=755)
none on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
none on /run/shm type tmpfs (rw,relatime)

The second entry, /mnt/ssd_shared on /data_ssd, is the HN's /mnt/ssd_shared already mounted to /data_ssd inside container 112.

You can also use the 'df' (disk free) command to check the available disk space; mine looks like this:

Filesystem      1K-blocks      Used Available Use% Mounted on
/dev/simfs      167772160 113525032  54247128  68% /
/dev/simfs       16513960   4398600  11276500  29% /data_ssd
none               419432       996    418436   1% /run
none                 5120         0      5120   0% /run/lock
none              2097152         0   2097152   0% /run/shm


You are done!

Add SMART drive monitoring (smartmontools / smartd) to a Proxmox PVE hardware node

I recommend always installing smartmontools on any server with physical hard drive(s). If you have a spinning hard drive (not an SSD), you will eventually have to replace it, because it will fail sooner or later.

I hate being surprised when it is too late to replace a failing hard drive. smartmontools (SMART monitoring tools) queries your hard drives for their health status.  If you do this regularly and set up email alerts, you will most likely avoid a bad surprise in the future.

I highly recommend installing and using this smartmontools monitoring and alert for any server.

Here is how I have deployed it on each of my servers:

THIS CAN BE INSTALLED ON ANY BARE METAL SERVER; FOR PROXMOX PVE, THIS MEANS YOUR HARDWARE NODE.


1. install smartmontools

aptitude update && aptitude -y install smartmontools


2. edit default daemon start configuration:

nano /etc/default/smartmontools

uncomment and set the following lines:

enable_smart="/dev/sda /dev/sdb /dev/sdc"
start_smartd=yes
smartd_opts="--interval=28800"

3. edit smartd.conf  (in this example I have 3 SATA drives: sda, sdb, sdc)

nano /etc/smartd.conf

/dev/sda -d sat -a -s L/../../7/4 -m john@smith.com,jack@jill.com
/dev/sdb -d sat -a -s L/../../7/5 -m john@smith.com,jack@jill.com
/dev/sdc -d sat -a -s L/../../7/6 -m john@smith.com,jack@jill.com

The above example will do the following:
1. run a long self-test on sda at 4am on Sunday
2. run a long self-test on sdb at 5am on Sunday
3. run a long self-test on sdc at 6am on Sunday

Email alert will be sent to john@smith.com and jack@jill.com if there is something wrong.
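
To confirm the email path actually works, smartd supports a one-shot test directive; add it temporarily to /etc/smartd.conf, restart smartd, then remove it again:

/dev/sda -d sat -a -m john@smith.com -M test     (sends a single test email when smartd starts)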

NOTE about the -s parameter:

The second-to-last field is the day-of-week parameter:

            Monday is day # 1
            Tuesday is day # 2
            ...
            Sunday is day # 7


4. restart smartmontools

/etc/init.d/smartmontools restart

5. check current HEALTH status:

smartctl -H /dev/sda
smartctl -H /dev/sdb
smartctl -H /dev/sdc
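
For more detail than the simple PASSED/FAILED health flag, the full report is handy:

smartctl -a /dev/sda     (prints the SMART attributes, error log, and self-test history)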

DONE!