Monday, December 2, 2019

Clean up logs, cache, docs, man files from Proxmox pve-root to free up disk space

My main pve-root partition was getting full (I only reserved 4GB for it; I should have reserved more space here, at least 8GB). I searched around and found several areas where I could remove files to gain back some space.
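To see how full the root file system is and which directories are worth cleaning, a couple of standard commands help (the paths listed are simply the ones covered in this post):

df -h /
du -sh /usr/share/doc /usr/share/man /var/cache/apt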

DELETING UNNEEDED FILES IN /usr/share/doc

cd /usr/share/doc
rm -rf *


DELETING UNNEEDED FILES IN /usr/share/man

cd /usr/share/man
rm -rf *


CLEAN UP JOURNAL LOGS FROM /run/log/journal

journalctl --disk-usage
nano /etc/systemd/journald.conf
     add or edit this line:  SystemMaxUse=50M
Signal or restart systemd-journald service:
     systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald.service
       -or-
     systemctl restart systemd-journald.service
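You can also shrink the journal files that already exist instead of waiting for the new limit to take effect, using journalctl's built-in vacuum option (the 50M target simply matches the SystemMaxUse setting above):

journalctl --vacuum-size=50M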


CLEAN UP /var/cache/apt

apt clean
apt autoremove --purge
apt autoclean
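To confirm the package cache is now empty, check its size again (du is a standard tool; the path is the one from the heading above):

du -sh /var/cache/apt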

SUMMARY

The commands above freed up about 650MB of space on my Proxmox VE 5.1 node.
I hope this article has been useful for you.

Tuesday, September 10, 2019

[EASY 5 MIN] Install Ubiquiti Unifi Controller on Proxmox VE LXC Container





This is a detailed video showing you how to set up and install the Ubiquiti UniFi Controller in a Proxmox VE LXC container. I am using Ubuntu 16.04 / 18.04 in this video. This method is very fast; it only takes about 5 minutes.

Help support my channel: if you are looking to buy Ubiquiti products, please use my Amazon link below:

https://amzn.to/2N8Ihro

#unifi #ubiquiti #unificontroller


Monday, September 9, 2019

Installing Ubiquiti UNIFI controller as a Proxmox VE LXC Container

This guide will help you install a Ubiquiti UniFi controller as a Proxmox LXC container.

There are many benefits to having an always-online UniFi controller; the one I am most interested in is the hotspot / captive portal landing page customization.

Since I already have a Proxmox VE server running, I figured this would be the best way for me to keep a Ubiquiti UniFi controller running 24/7.


STEP 1 - Download Template for Ubuntu 18.04

In the Proxmox VE web GUI, go to your storage and make sure you have downloaded a template for Ubuntu 18.04 Standard.
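If you prefer the command line, the template can be downloaded with pveam (a sketch; the exact version suffix in the template filename changes over time, so use the name printed by 'pveam available', and 'local' is assumed to be your template storage):

pveam update
pveam available --section system | grep ubuntu-18.04
pveam download local ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz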

STEP 2 - Create the LXC Container

Create an LXC container using the following settings (a pct command-line equivalent is sketched after this list):
Hostname: unifi
Disk size: 32GB
Memory: 2GB (but 1GB is also okay)
Network:  192.168.8.4/24
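Here is a rough pct command-line equivalent of those settings (a sketch only; the container ID 200, the template filename, the storages 'local' and 'local-lvm', the gateway, and the password are placeholders you should adjust for your environment):

pct create 200 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
  --hostname unifi \
  --cores 2 \
  --memory 2048 \
  --rootfs local-lvm:32 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.8.4/24,gw=192.168.8.1 \
  --password 'ChangeMe123'
pct start 200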

STEP 3 - Log in to your container and update APT

sudo apt update

STEP 4 - Installing CA Certificates

sudo apt-get install ca-certificates wget -y

STEP 5 - Download installation script


wget https://get.glennr.nl/unifi/install/unifi-5.11.39.sh

STEP 6 - Make the script executable (change the filename to match the script version you downloaded)

chmod +x unifi-5.11.39.sh

STEP 7 - Run the Install Script

./unifi-5.11.39.sh

STEP 8 - When the installation finishes, go to your browser and open:

https://ip.of.your.server:8443
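You can also check from inside the container that the controller service came up (assuming the standard 'unifi' service name used by the UniFi Debian package):

systemctl status unifi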

----------------------------------------

Thank you for using this guide.

Please help me support this blog by purchasing your Ubiquiti products from Amazon using my link below:

https://amzn.to/2A4WdtW

Much appreciated! :-)




Wednesday, July 10, 2019

[SOLVED] Mounting external USB drive to Proxmox LXC Container

I needed to mount an external USB drive directly to an LXC container. On initial research, this process seemed simple enough.

I am using Ubuntu / Debian for this guide.

Basically, I did the following to get it mounted on the Proxmox host node first:

mkdir /mnt/mountpoint
chmod -R 777 /mnt/mountpoint (this is optional)
mount /dev/bus/usb/001/004 /mnt/mountpoint
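If the mount command complains that this is not a block device, find the correct device node first: lsusb shows the bus/device numbers (like 001:004), while lsblk shows the actual block device and its partitions, which is what mount expects (e.g. /dev/sdb1; that name is only an example):

lsusb
lsblk -f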

But when I tried to use it in the LXC container, I got 'read only' permission issues, with an error message like this:

Read only filesystem!

So I did a lot more research and found that many people have similar issues. I wasted 30 minutes trying different suggestions until I found one that worked. The root cause was that Linux was mounting the partition as FAT32/exFAT instead of NTFS, so it could not be read from and written to properly. Here is the solution that worked for me...

STEP 1 - INSTALL ntfs-3g

sudo apt-get update
sudo apt-get install ntfs-3g

STEP 2 - MOUNT using the NTFS file system type (my USB drive is device 001:004 in lsusb)

mkdir /mnt/mountpoint
chmod -R 777 /mnt/mountpoint (this is optional)
mount -t ntfs-3g /dev/bus/usb/001/004 /mnt/mountpoint

STEP 3 - CHECK IF YOU CAN READ & WRITE

cd /mnt/mountpoint
touch test

If step 3 did not produce any error and you can clearly see the file 'test' being created, then you are good to go.

Next, you just need to add one line to your LXC container configuration to set the mount point (see the example line below).

nano /etc/pve/lxc/123.conf   (assuming 123 is your CT ID)
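The line to add uses the same mp0 bind-mount syntax covered in my NFS article below. A minimal sketch, assuming the host mount point /mnt/mountpoint from above and /mnt/usb as the path you want inside the container (the container path is just an example):

mp0: /mnt/mountpoint,mp=/mnt/usb

Then restart the container (pct shutdown 123 followed by pct start 123) and the USB drive will appear at /mnt/usb inside it.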

Tuesday, May 21, 2019

How to mount NFS share to Proxmox LXC container

This is a quick how-to guide for mounting an NFS share into your Proxmox VE LXC container.

If you are experiencing permission issues, for example:

May 21 12:42:31 e7 systemd[1]: Failed to start PVE LXC Container: 164.

you may want to read the second part of this post, 'Enabling NFS sharing (mount bind) for mounting NFS shares into LXC container', which covers the required AppArmor change.

STEP 1 - EDIT YOUR LXC CONTAINER CONFIGURATION

nano /etc/pve/lxc/[CTID].conf


STEP 2 - ADD THE FOLLOWING LINE TO THE END OF THE CONFIGURATION FILE


mp0: [/host/nfs/shared_path],mp=[/lxc/container/mount_point_path]

for example:

mp0: /mnt/pve/nfs_share/,mp=/data_shared


(note: please make sure you have created the /data_shared mount point directory inside your LXC container first; see the command below)
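One quick way to create that directory from the Proxmox host without logging into the container (pct exec is a standard Proxmox command; the CTID and path follow the example above):

pct exec [CTID] -- mkdir -p /data_shared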

STEP 3 - RESTART YOUR LXC


pct shutdown [CTID]
pct start [CTID]


Unlike OpenVZ, you do not have to edit or add anything to your fstab file. The next time you enter your container, the NFS share will be mounted automatically.

Enabling NFS sharing (mount bind) for mounting NFS shares into LXC container

If you are running Proxmox 5.x and you are trying to mount an NFS share to your LXC container, you may encounter 'permission denied' issues.

such as:
May 21 12:42:31 e7 systemd[1]: Failed to start PVE LXC Container: 164.


This issue is caused by the AppArmor security profile applied to the container. We just need to reconfigure it so that it allows NFS mounts.

This article describes how to overcome this permission issue.

STEP 1 - COPY APPARMOR CONFIGURATION (on Proxmox Host)

cd /etc/apparmor.d/lxc
cp lxc-default-cgns lxc-container-default-with-nfs
nano lxc-container-default-with-nfs

In that file, change the profile name lxc-container-default-cgns to lxc-container-default-with-nfs.

Add the following lines right before the } (closing curly brace)

  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
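
After those two edits, the copied profile should look roughly like this (a sketch only; your copy of lxc-default-cgns may contain additional rules, and you should keep those as they are):

profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  # ... keep the existing deny/mount rules from lxc-default-cgns here ...
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}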


STEP 2 - RELOAD APPARMOR

systemctl reload apparmor



The two steps above create and load the new AppArmor profile that allows NFS mounts.
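Finally, point each affected container at the new profile by adding this line to its /etc/pve/lxc/[CTID].conf (you can see it in place in the sample configuration at the end of this post):

lxc.apparmor.profile: lxc-container-default-with-nfs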

Next, you may want to know how to bind-mount your NFS share into your LXC container; please see the companion article How to mount NFS share to Proxmox LXC container.



SAMPLE OF MY LXC CONFIGURATION FILE:


arch: amd64
cores: 4
hostname: blahblah
memory: 8192
nameserver: 10.0.0.1 8.8.8.8 4.4.4.4
net0: name=eth0,bridge=vmbr0,gw=10.0.0.1,hwaddr=_______________,ip=10.0.110.42/16,type=veth
ostype: ubuntu
rootfs: local_md0:164/vm-164-disk-1.raw,size=64G
searchdomain: localhost
swap: 8192
lxc.apparmor.profile: lxc-container-default-with-nfs
mp0: /mnt/pve/vepublicb1/tf/data_shared,mp=/data_shared

Debugging a proxmox LXC container that will not start

Sometimes, after making changes to your LXC configuration file in /etc/pve/lxc, your LXC container may have problems starting. You will get a message like this:

Job for pve-container@165.service failed because the control process exited with error code.
See "systemctl status pve-container@165.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@165' failed: exit code 1

It then recommends checking the status of the service by typing...

systemctl status pve-container@165.service

And its output is...

● pve-container@165.service - PVE LXC Container: 165
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-05-21 08:13:01 CDT; 10s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 3560220 ExecStart=/usr/bin/lxc-start -n 165 (code=exited, status=1/FAILURE)

May 21 08:12:59 e2 systemd[1]: Starting PVE LXC Container: 165...
May 21 08:13:01 e2 lxc-start[3560220]: lxc-start: 165: lxccontainer.c: wait_on_daemonized_start: 865 Received container state "ABORTING" instead of "RUNNING"
May 21 08:13:01 e2 lxc-start[3560220]: lxc-start: 165: tools/lxc_start.c: main: 330 The container failed to start
May 21 08:13:01 e2 lxc-start[3560220]: lxc-start: 165: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
May 21 08:13:01 e2 lxc-start[3560220]: lxc-start: 165: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
May 21 08:13:01 e2 systemd[1]: pve-container@165.service: Control process exited, code=exited status=1
May 21 08:13:01 e2 systemd[1]: pve-container@165.service: Killing process 3560226 (3) with signal SIGKILL.
May 21 08:13:01 e2 systemd[1]: Failed to start PVE LXC Container: 165.
May 21 08:13:01 e2 systemd[1]: pve-container@165.service: Unit entered failed state.

May 21 08:13:01 e2 systemd[1]: pve-container@165.service: Failed with result 'exit-code'.


As you can see, the output still does not give me enough information.

So I recommend using the following command to start the container with logging enabled, writing the log to a temporary file:


lxc-start --logfile /tmp/lxc-start.log -n [CTID]
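Then view the captured log (add -F and -l DEBUG, as shown in the post below, if you want even more verbosity):

cat /tmp/lxc-start.log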



If the log above is still not enough, the next command will produce an extremely detailed trace log:

strace -f lxc-start -l trace -o /tmp/trace.log -n [CTID]

Wednesday, May 15, 2019

Debugging (verbose logging) LXC startup process in Proxmox VE

Sometimes you run into a problem and your LXC container does not start. The Proxmox VE graphical UI is very good, but it does not provide enough information about what went wrong.

To get the most detailed information, you should try to start the LXC container from the command line with verbose logging. Here is the command you should use:


(replace [ID] with your actual LXC container ID)

lxc-start -n [ID] -F -l DEBUG -o /tmp/lxc-[ID].log


Then, to view the log use this command:

cat /tmp/lxc-[ID].log
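On a long log it helps to filter for the interesting lines first (grep is standard; the log path matches the command above):

grep -iE "error|fail|denied" /tmp/lxc-[ID].log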

Upgrading Proxmox VE

To upgrade your Proxmox Virtual Environment to the latest version, do the following steps.

Step 1 - Get latest APT updates

apt-get update


Step 2 - Perform distribution upgrade

apt-get dist-upgrade


You may be asked whether you would like to keep your existing configuration file or install the package maintainer's newer version. I usually answer N (no) to keep the currently installed configuration, which is usually safer.
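After the upgrade completes (and after a reboot, if a new kernel was installed), you can confirm the package versions with the standard pveversion tool:

pveversion -v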

Cannot start LXC container with error: unsupported Ubuntu version '18.04'

I restored an LXC container that I had backed up on a newer Proxmox VE node to an older node. To my surprise, it would not start. This is the error message I got when I debugged the LXC start-up process.


lxc pre-start with output: unsupported Ubuntu version '18.04'


When I press the Start button in the GUI, it quickly fails and displays the same 'unsupported Ubuntu version' error.
I thought it might be caused by file system (LXC image) corruption, so I tried to mount and unmount it first using:

pct mount 165
pct unmount 165

Both worked perfectly, so that was not the issue.

Then I made sure the file system was clean by running fsck against it, using this command:

pct fsck 165

That also ran perfectly clean!

Still puzzled, I did more research and found people with a similar issue. It turned out that the Proxmox LXC setup script checks the Ubuntu version against a list of known releases, and my older node's list did not yet include 18.04 LTS (Bionic Beaver). So I updated this file:

nano /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm

look for this section:
my $known_versions = {
    '17.10' => 1, # artful
    '17.04' => 1, # zesty
    '16.10' => 1, # yakkety
    '16.04' => 1, # xenial
    '15.10' => 1, # wily
    '15.04' => 1, # vivid
    '14.04' => 1, # trusty LTS
    '12.04' => 1, # precise LTS
};

and replace it with this:
my $known_versions = {
    '22.04' => 1, # Jammy Jellyfish LTS
    '20.04' => 1, # Focal Fossa LTS
    '18.04' => 1, # Bionic Beaver LTS
    '17.10' => 1, # artful
    '17.04' => 1, # zesty
    '16.10' => 1, # yakkety
    '16.04' => 1, # xenial
    '15.10' => 1, # wily
    '15.04' => 1, # vivid
    '14.04' => 1, # trusty LTS
    '12.04' => 1, # precise LTS
};


This SOLVED my issue and I was able to start the LXC container.

If you don't feel comfortable editing /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm, you can also simply upgrade your Proxmox VE node to get the latest updates from Proxmox, which already include the newer Ubuntu versions.

Proxmox LXC Backup issue: INFO: mode failure - some volumes do not support snapshots

I am getting this notice when performing an LXC container backup on Proxmox 5.x:

INFO: mode failure - some volumes do not support snapshots




At first I thought there was something wrong with the container. I checked for unusual things I might have done, such as mounting an external NFS share, but I did not find anything strange.

Then I realized that the disk partition the container is stored on is not an LVM type of partition, so it does not support snapshots.

Because of this I can NOT do a 'snapshot' mode backup. For some reason, 'suspend' mode does not work either.

But the backup did work after a full shutdown. Alternatively, you can just start the backup right away in 'stop' (shutdown) mode.
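If you prefer the command line, the equivalent is vzdump in stop mode (replace [CTID] with your container ID and [backup_storage] with the name of a storage that is enabled for backups):

vzdump [CTID] --mode stop --storage [backup_storage]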

Hope this helps someone with the same issue.

Sunday, May 12, 2019

What is new in Proxmox VE 5.4? is it worth upgrading?

New Wizard for Installing Ceph (even easier)

Ceph has been integrated into Proxmox VE since 2014 as its primary distributed storage technology. Ceph configuration has been available in the Proxmox VE GUI for a while, but some steps still had to be done from the command line. Proxmox VE 5.4 eliminates the command line requirements and makes Ceph fully configurable from the Proxmox VE web-based GUI. For those of you who are not familiar with Ceph, it is a very robust and stable distributed storage architecture which allows you to add cheap, scalable storage using inexpensive disks from multiple nodes within your Proxmox cluster.

Better HA (High Availability) features

Proxmox VE 5.4 improves the data center-wide HA policy, changing the way guests are treated when a node is shut down or rebooted. This brings greater flexibility and choice to the user.
These new HA policy choices are:
- Freeze: always freeze services—independently of the shutdown type (reboot, poweroff).
- Fail-over: never freeze services—this means a service will get recovered to another node if possible and if the current node doesn’t come back up in the grace period of one minute.
- Default: this is the current behavior—freeze on reboot but do not freeze on poweroff.

Qemu/KVM guests can now Suspend to disk/hibernation

You can now hibernate QEMU guests independently of the guest OS and have them resumed properly on the next restart. Hibernating a virtual machine saves the RAM contents and the internal state to disk. This allows users to preserve the running state of their QEMU guests across upgrades and reboots of the PVE node.

Support for U2F (Universal 2nd Factor) Authentication (optional)

Proxmox VE 5.4 now supports the U2F (Universal 2nd Factor) protocol, which can be used in the web-based user interface as an additional method of two-step verification for users. U2F is an open authentication standard that simplifies two-factor authentication. Since it is required in certain domains and environments, this is an important improvement to security practices.

More options and features in the QEMU guest creation wizard

As often requested by the Proxmox community, some options can now be selected directly in the VM creation wizard, for example:
- Machine type (q35, pc-i440fx)
- Firmware (SeaBIOS, UEFI)
- SCSI controller

What is new in Proxmox 5.3 - CEPHFS is finally integrated in GUI

CephFS finally easy to use in Proxmox VE 5.3

Ceph has been integrated in Proxmox VE for a few years already. However, it was difficult to use because it was only available as a block device (not as a mountable file system).
Now Proxmox has finally integrated the mountable Ceph file system (CephFS) directly into the Proxmox VE web GUI. The Proxmox release notes claim administrators can now create a CephFS within just a few clicks.
Being able to create and mount (use) CephFS is a true game changer. I will test whether it can be used for backups and mounted inside LXC containers.

I have been using another file system, NFS via OMV (OpenMediaVault), for a while, and I am not happy with its performance. I hope CephFS will be much faster.

Stay tuned for my next blog post, where I will review how CephFS performs in my real-world scenario.

What is new in Proxmox VE (virtual environment) 5.2 - Should I upgrade?

Cloud-Init - automatic virtual machine provisioning

Cloud-Init helps you perform the initial setup of your virtual machines after their first boot. Cloud-Init is multi-distribution automation software, similar to Puppet, Ansible, Chef, etc.

SMB/CIFS Storage Plug-in

SMB/CIFS (aka Samba) is a network storage protocol usually used on Windows platforms. SMB/CIFS is typically faster than NFS and is also simpler and easier to learn thanks to its straightforward username/password authentication.
This is a huge feature and one that I have been waiting for. Currently I only use NFS for mounting shared network storage inside LXC containers.

Let’s Encrypt Certificate Management via GUI

Let's Encrypt has changed the SSL industry: it offers completely free, renewable SSL certificates. I am using Let's Encrypt for almost all my websites. Having Let's Encrypt integration in Proxmox will be useful in cases where I cannot install certbot or another ACME client, for example because the server is too old.

Proxmox VE 5.2 also adds many new improvements for better usability, scalability, and security:


+ Cluster creation via the web GUI. You can now create a Proxmox cluster and join nodes to it in a simple and effective way.
+ LXC management improvement: Creating templates or moving disks from one storage to another now also work for LXC.
+ If the QEMU guest agent is installed, the IP address of a virtual machine is displayed on the GUI
+ Administrators can create and edit new roles via the GUI.
+ Setting I/O limits for restore operations is now possible (globally or more fine-grained per storage) to avoid I/O load getting too high while restoring a backup.