If you log in to Proxmox’s shell and see an error stating that “setting locale failed”, then execute the following command to get rid of that message:
dpkg-reconfigure locales
Not quite sure why it happens, but I see it almost all the time.
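If you would rather fix it without the interactive dialog, the same can be done by hand; a rough sketch, assuming en_US.UTF-8 is the locale you want:
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen  # uncomment the locale
locale-gen                      # regenerate the locale files
update-locale LANG=en_US.UTF-8  # set the system-wide default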
Imagine a hypothetical scenario: you have two 512 GB drives and want to use Proxmox’s ZFS VM replication onto a second server with a single 1 TB drive. The solution is quite simple. Using fdisk, create two primary partitions on the bigger drive, then go to Disks > ZFS > Create: ZFS and you will be able to select a partition for the particular pool. One downside of such a setup is that ZFS likes to have the whole drive for its own, so please keep in mind that performance may vary.
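A rough sketch of the partitioning step, assuming the 1 TB drive shows up as /dev/sdb and using sgdisk as a scriptable alternative to interactive fdisk (the device name, sizes and pool name are all made up, so adjust them to your setup):
sgdisk -n 1:0:+465G -n 2:0:0 /dev/sdb  # two partitions, roughly half of the disk each
zpool create tank1 /dev/sdb1           # or select the partition in Disks > ZFS > Create: ZFS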
Having LVM on a disk causes the system to automatically activate its volume groups. I installed a used drive with Ubuntu on it into my Proxmox server. Using Proxmox’s UI you will not be able to wipe this drive, because the volume group has already been auto-activated. You need to log into the shell and then:
vgs                  # pick the VG you are interested in
vgchange -a n vgname # pass your VG name to deactivate it
After this, you are ready to wipe the drive from the UI.
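The wipe itself can also be done from the shell if you prefer; a sketch, assuming the disk in question is /dev/sdX:
vgchange -a n vgname  # deactivate the auto-activated volume group first
wipefs -a /dev/sdX    # then wipe all filesystem, LVM and partition signatures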
I own a pre-owned HP Z800 workstation. As purchased, it contained an HDD: a Western Digital WD5000AAKX-75U6AA0, SATA III, 500 GB, 7200 RPM, 16 MB of cache. Unfortunately it passes SMART and does not show a wearout metric. However, going into the detailed information we get:

Raw_Read_Error_Rate has a positive value of 11, while the threshold is set to 51. With 11,685 hours of runtime it should be understandable that the drive might break, and it actually did. There is only one Current_Pending_Sector, which means a sector is waiting to be remapped or reallocated. But will that happen anytime soon?
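The same attributes can be read from the shell with smartctl; the device path /dev/sdd below is just an assumption, use whatever your drive is called:
smartctl -a /dev/sdd | grep -E 'Raw_Read_Error_Rate|Current_Pending_Sector|Power_On_Hours'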
I’m unable to clone, migrate or replicate a VM to another server. ZFS states that the pool is in a degraded state; you can see this by using the zpool status -v command. It says the error is unrecoverable, and most probably it is. I’ve tried zpool scrub river to no avail. There is a problem with the VM-104 disk. Still, the aforementioned VM is accessible by console and works just fine.
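For reference, this is what I run to check the pool; river is simply the name of my pool:
zpool status -v river  # shows pool health and lists files with permanent errors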

This VM is a Redash installation, and it is the last VM left on that drive, waiting for better times. As this is a home lab setup, it makes use of any devices I have available, and not all of them are fully functional, as you can see. You always need to have backups, replicated VMs and a redundantly configured RAID.
I was unable to migrate the VM:

I was unable to replicate the VM:

I was unable to back up the VM:

I ended up identifying 4 or more bad blocks, but because this is ZFS there are few tools for filesystem checks. Should a ZFS pool, in theory, be able to recover from such a failure? Only if you use a mirrored drive setup, which is not the case here. I was thinking about overwriting these blocks, but left everything as it was. The drive is not decommissioned.
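With redundancy (a mirror or raidz vdev) ZFS could have repaired these blocks from the other copy on its own; on a single-disk pool about all you can do is clear the error counters and scrub again to see whether the errors come back. A sketch, using the river pool from above:
zpool clear river      # reset the pool’s error counters
zpool scrub river      # re-read everything and verify checksums
zpool status -v river  # check whether the errors returned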
If you happen to use Proxmox’s LXC, which stands for Linux Containers, then you might be interested in this one. Proxmox has TurnKey containers available to download from its UI, and among them there is WordPress. If you create a container using that template, remember to select nesting=on while creating it, because without this option Apache2 will fail to start with a permission denied error. I’m not quite familiar with the nesting option, but it is necessary to run this container.
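Nesting can also be switched on for an existing container from the host’s shell; the container ID 101 below is just an example:
pct set 101 --features nesting=1  # enable nesting for the container
pct stop 101 && pct start 101     # restart it so the change takes effect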
I already have over 10 virtual machines in my home virtual environment. That is a number I find large enough to start using an automation tool like Ansible. Installing Ansible on Ubuntu or CentOS systems is straightforward: just use the regular apt or yum commands. With Ansible you can run ad-hoc commands or create playbooks. First you should start with the hosts definition file, which by default is located at /etc/ansible/hosts:
[servers]
myhostname ansible_host=192.168.1.1
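A quick way to confirm that the inventory and SSH access work is the ping module; this assumes key-based SSH to the host is already set up:
ansible all -m ping -o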
You can use a different hosts file, but then you need to point to it while running commands. This one is the default, so there is no need to do anything special here. Now quickly on to Proxmox’s qemu-guest-agent. What is it for? With it, Proxmox knows what’s going on inside your VM; for example, it will know the IP addresses. Install the package using the apt or yum modules:
ansible all -m apt -a "name=qemu-guest-agent" --become -K -o
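On CentOS hosts the same can be done with the yum module; the centos group below is hypothetical and assumes you keep those hosts in a separate inventory group:
ansible centos -m yum -a "name=qemu-guest-agent state=present" --become -K -o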
Go to each and every VM in Proxmox’s UI and enable the QEMU Guest Agent option. Then stop and start all the VMs. You will know that it works when an IP address is shown on the Summary page. You can also check whether the service is running using Ansible:
ansible all -m shell -a "service qemu-guest-agent status | grep Active" --become -K -o -f 4
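Instead of clicking through the UI, the agent option can also be enabled from the Proxmox host’s shell; the VM IDs 100 through 110 below are just an example:
for id in $(seq 100 110); do qm set $id --agent enabled=1; done  # then stop and start each VM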