Saturday, November 13, 2021

AVERMEDIA LIVE STREAMER WEBCAM

A quick and honest review of the AVerMedia Live Streamer Webcam.

This AVerMedia webcam is one of the best-value webcams around. It offers all the standard features you will ever need for just under $40: 1080p Full HD video with a built-in microphone. It is a great camera for Zoom, web conferences, and online meetings. If you want to buy the AVerMedia Live Streamer Webcam, you can use my affiliate link: https://amzn.to/3w1aofu

Sunday, November 7, 2021

How to restart Proxmox cluster services (this also reconnects the node to the cluster)

If your node has been disconnected from your Proxmox cluster (PVE cluster), you may want to rejoin it without rebooting.

Here are the commands to execute (no pause needed between commands):

killall -9 corosync
systemctl restart pve-cluster
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
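The same sequence can be written as a small script. The order matters: pve-cluster should come back up before the daemons that depend on it. This is just a sketch of the commands above, to be run as root on the affected node:

```shell
# Restart sequence from above as a script (run as root on the node).
# pve-cluster must restart before the daemons that depend on it.
services="pve-cluster pvedaemon pveproxy pvestatd"
killall -9 corosync 2>/dev/null || true
for svc in $services; do
    systemctl restart "$svc" || echo "restart of $svc failed"
done
```

Afterwards, 'pvecm status' should show the node as part of the cluster again.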



Friday, November 5, 2021

How to fix this error: E: Repository 'http://security.debian.org buster/updates InRelease' changed its 'Suite' value from 'stable' to 'oldstable'

If you are getting this error:

E: Repository 'http://security.debian.org buster/updates InRelease' changed its 'Suite' value from 'stable' to 'oldstable'

You can fix it by running the following commands:

apt-get --allow-releaseinfo-change update

apt-get update
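If you would rather make this permanent instead of passing the flag once, apt also accepts it as a configuration option. A sketch (the filename below is my own choice; the option name is standard apt):

```
# /etc/apt/apt.conf.d/99releaseinfochange  (example filename)
Acquire::AllowReleaseInfoChange::Suite "true";
```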


Wednesday, November 3, 2021

How to troubleshoot / debug "lxc_init failed to run lxc.hook.pre-start for container ###"?

Are you seeing this error while starting your Proxmox container?


lxc_init: 797 Failed to run lxc.hook.pre-start for container "###"

__lxc_start: 1896 Failed to initialize container "###"



Here is the command to log debug output so you can find out what is going on:


lxc-start -n ### -F -lDEBUG -o your_log_file.log


Replace the ### with your container ID, then examine the content of 'your_log_file.log':

cat your_log_file.log


In my case, the container failed to start because I ran out of disk space (disk quota exceeded). Here is the content of my log file:


lxc-start: 157: conf.c: run_buffer: 323 Script exited with status 1

lxc-start: 157: start.c: lxc_init: 797 Failed to run lxc.hook.pre-start for container "157"

lxc-start: 157: start.c: __lxc_start: 1896 Failed to initialize container "157"

lxc-start: 157: tools/lxc_start.c: main: 308 The container failed to start

lxc-start: 157: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options

root@e8:/etc/pve/lxc# ls

157.conf  164.conf  168.conf  176.conf lxc-157.log

root@e8:/etc/pve/lxc# cat lxc-157.log

lxc-start 157 20211103125836.780 INFO     lsm - lsm/lsm.c:lsm_init:29 - LSM security driver AppArmor

lxc-start 157 20211103125836.788 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "157", config section "lxc"

lxc-start 157 20211103125837.708 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 157 lxc pre-start produced output: unable to open file '/fastboot.tmp.32694' - Disk quota exceeded


lxc-start 157 20211103125837.714 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 157 lxc pre-start produced output: error in setup task PVE::LXC::Setup::pre_start_hook


lxc-start 157 20211103125837.723 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1

lxc-start 157 20211103125837.731 ERROR    start - start.c:lxc_init:797 - Failed to run lxc.hook.pre-start for container "157"

lxc-start 157 20211103125837.739 ERROR    start - start.c:__lxc_start:1896 - Failed to initialize container "157"

lxc-start 157 20211103125837.747 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "157", config section "lxc"

lxc-start 157 20211103125838.259 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "157", config section "lxc"

lxc-start 157 20211103125839.156 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start

lxc-start 157 20211103125839.163 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can
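Since the real fix here was freeing disk space, it is worth checking usage before retrying lxc-start. A generic check ('/' is just an example path; point it at whatever backs your container storage, and on ZFS-backed storage 'zfs list' gives the per-dataset view):

```shell
# Check how full the filesystem is before retrying the container
df -h /
```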

Sunday, February 14, 2021

Basic network commands in Proxmox PVE

If you use Proxmox PVE, you may have noticed that popular Ethernet / IP networking commands do not work in Proxmox by default. I am talking about basic commands like ifconfig.

So I just want to create this simple post as a reminder for myself and for others using Proxmox: use the following commands instead.


ip a

The ip command essentially replaces the ifconfig command. The command 'ip a' lists all detected network adapters and shows their state (up/down).


ifup

The ifup command brings up (activates) a particular network adapter.


ifdown

The ifdown command brings down (deactivates) a particular network adapter.


The commands above work on virtual adapters too, such as the bridge vmbr0.
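For completeness, the up/down operations can also be done with the ip command itself ('vmbr0' is just an example interface name; note that ifup/ifdown additionally apply the settings from /etc/network/interfaces, so "equivalent" is only rough):

```shell
# Bring an interface up or down with 'ip' itself ('vmbr0' is an example name):
#   ip link set vmbr0 up     # rough equivalent of: ifup vmbr0
#   ip link set vmbr0 down   # rough equivalent of: ifdown vmbr0
# And a brief one-line-per-interface overview of all adapters:
ip -br addr show
```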


LOCATION OF THE NETWORK INTERFACES CONFIGURATION

If you want to edit the interfaces configuration, you can find it here:

/etc/network/interfaces

To edit it, simply use the nano command like this:

nano /etc/network/interfaces
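As an illustration, a minimal bridge entry in /etc/network/interfaces looks something like this. All names and addresses below are examples, not values from my machine, and the exact bridge option spelling can differ between ifupdown and ifupdown2 (bridge_ports vs bridge-ports):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```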


INSTALL NET-TOOLS

If you don't want to deal with new commands like 'ip a' and would rather use the networking commands you are used to, such as ifconfig, you can install them with the commands below:

apt-get update

apt-get install net-tools

Thursday, August 27, 2020

How to replace a bad hard drive in ZFS Raid

How to replace a bad hard drive in a ZFS RAID and start the rebuild (resilvering) process.


STEP 1 - INFORMATION OF THE FAILED DRIVE

Get GUID of the failed drive:

root@localhost# zdb
raid1:
    version: 5000
    name: 'raid1'
    state: 0
    txg: 1178836
    pool_guid: 8019483820723122312
    errata: 0
    hostid: 3155752912
    hostname: 'a6'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 8019483820723122312
        create_txg: 4
        children[0]:
            type: 'mirror'
            id: 0
            guid: 11864727355575360377
            metaslab_array: 256
            metaslab_shift: 34
            ashift: 12
            asize: 2000384688128
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 129
            children[0]:
                type: 'disk'
                id: 0
                guid: 15304200656844780564
                path: '/dev/sdb1'
                devid: 'ata-HITACHI_HUA723020ALA640_YGKU6BBG-part1'
                phys_path: 'pci-0000:00:1f.2-ata-2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 130
            children[1]:
                type: 'disk'
                id: 1
                guid: 980353070042574228
                path: '/dev/sdc1'
                devid: 'ata-HITACHI_HUA723020ALA640_YGKT7U6G-part1'
                phys_path: 'pci-0000:00:1f.2-ata-3'
                whole_disk: 1
                DTL: 384
                create_txg: 4
                com.delphix:vdev_zap_leaf: 131
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

In this example we will pretend /dev/sdc is the bad drive. From the output above, the GUID for /dev/sdc is 980353070042574228.

Get serial #:

root@localhost# smartctl -a /dev/sdc | grep Serial
Serial Number:    YGKT7U6G


STEP 2 - REMOVE THE FAILED DRIVE

zpool offline raid1 980353070042574228


STEP 3 - REPLACE THE HARD DRIVE PHYSICALLY

Please replace the broken hard drive with a new hard drive.


STEP 4 - COPY PARTITION TABLE

Please note the first device in the command below is the TARGET and the second device is the SOURCE.

sgdisk --replicate=[TARGET] [SOURCE]
sgdisk --replicate=/dev/sdc /dev/sdb


STEP 5 - GENERATE RANDOM GUID

sgdisk --randomize-guids /dev/sdc


STEP 6 - ADD NEW HARD DRIVE TO ZFS POOL

zpool replace raid1 /dev/sdc


FINAL STEP - CHECK AND MONITOR THE RESILVERING PROCESS

watch zpool status raid1 -v
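Putting it all together, here is the whole procedure collected into one shell function you can read top to bottom. The pool name 'raid1' and the device names are from this example; double-check yours before running it, especially sgdisk's TARGET-first argument order:

```shell
# The full replacement procedure from the steps above, as one function.
# Values are from this example; verify yours before calling it.
replace_failed_disk() {
    pool="raid1"
    bad_guid="980353070042574228"      # GUID of the failed disk (from zdb)
    new_disk="/dev/sdc"                # the replacement disk
    good_disk="/dev/sdb"               # the surviving mirror member

    zpool offline "$pool" "$bad_guid"
    # ...physically swap the failed drive here...
    sgdisk --replicate="$new_disk" "$good_disk"   # TARGET first, SOURCE second
    sgdisk --randomize-guids "$new_disk"          # avoid duplicate partition GUIDs
    zpool replace "$pool" "$new_disk"
    zpool status "$pool" -v                       # then watch the resilver
}
```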




Saturday, June 6, 2020

Proxmox PVE APT UPDATE "not authorized" Error Message - How to configure No-Subscription APT Repository

Proxmox PVE is a great and mature virtualization environment. I am very thankful to the team and grateful for the software. I wish I could afford to pay for a subscription, but so far I am not profitable enough, so I am still not a subscriber :-(  However, I plan to be, and I recommend that anyone who can, subscribe.

Having said that, I would like to show you the steps necessary to disable the Enterprise repository and enable the No-Subscription repository. This is for the new Proxmox PVE 6.x on Debian Buster.

STEP 1 - ADD NO-SUBSCRIPTION REPOSITORY

nano /etc/apt/sources.list

Add the following lines at the end of your /etc/apt/sources.list

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

Your new sources.list file should now look like this:

deb http://ftp.us.debian.org/debian buster main contrib

deb http://ftp.us.debian.org/debian buster-updates main contrib

# security updates
deb http://security.debian.org buster/updates main contrib

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription


STEP 2 - COMMENT OUT THE ENTERPRISE REPOSITORY FILE


nano /etc/apt/sources.list.d/pve-enterprise.list

Put a pound sign (#) in front of the single line in that file, so it looks like this:

# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise



STEP 3 - TEST BY EXECUTING APT UPDATE

You should now be able to execute 'apt update' without errors.






STEP 4 - CONGRATULATIONS! YOU CAN NOW UPDATE PROXMOX

Remember to Subscribe to Proxmox when you can!