Encrypt an unencrypted LVM Ubuntu 22 Server with LUKS, without reinstalling the system

Keep your data safe. Device loss or unauthorized access can be mitigated by encrypting the drives in your servers and workstations.

So you may have an Ubuntu Linux installation on bare metal or in a virtual machine. Does it have an encrypted drive? If the answer is no, you could be in trouble when the device is stolen or lost, or when someone simply gains unauthorized access to your hardware. In this short step-by-step article you will see what steps you should take to encrypt your unencrypted drives without reinstalling the system.

For a workstation there is much less concern about reinstallation. Just move your data and configuration out at a convenient time and proceed with a clean system installation, this time with proper drive encryption configured. But hold on a second: if there is an option to encrypt without reinstalling your system, why not try it?

It is especially important for server installations running production software that handles customer data. You can opt for a system replacement in a maintenance window and redo all your work, but sometimes that is not an option. And it does not really matter what kind of unmaintained, obsolete software runs on your unencrypted servers; I think most of us know at least one example of such a thing. With such problematic software it is better to encrypt the system as-is, with no additional steps required.

How to migrate data?

Here you will learn how to encrypt existing data on LVM-based drives in an Ubuntu 22 server virtual machine. To get started you need to add an additional drive to your system with at least as much space as the unencrypted drive. Let's say your source is at /dev/sda and your spare drive is at /dev/sdb. I assume a default setup with three partitions: the first is a GRUB spacer, the second holds /boot and the third holds the root filesystem.
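
For reference, the assumed starting point looks roughly like this (a hypothetical lsblk view; device names and sizes will differ on your machine):

lsblk /dev/sda /dev/sdb
# NAME                      SIZE TYPE MOUNTPOINTS
# sda                        20G disk
# |-sda1                      1M part              <- GRUB spacer
# |-sda2                      2G part /boot
# `-sda3                     18G part              <- LVM PV in ubuntu-vg
#   `-ubuntu--vg-ubuntu--lv  18G lvm  /
# sdb                        20G disk              <- new spare drive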

Boot your VM from a GRML ISO. On Proxmox, press Esc while the VM is starting and select the disc drive with the ISO mounted.

Once booted into GRML, create a PV and extend the existing VG onto the new drive:

pvcreate /dev/sdb
vgextend ubuntu-vg /dev/sdb

Move your data to the new drive:

pvmove /dev/sda3 /dev/sdb

Remove the source unencrypted partition from the VG and remove its PV:

vgreduce ubuntu-vg /dev/sda3
pvremove /dev/sda3

Now it’s time to wipe the existing unencrypted partition:

cryptsetup open --type plain -d /dev/urandom /dev/sda3 to_be_wiped
dd if=/dev/zero of=/dev/mapper/to_be_wiped bs=1M status=progress
cryptsetup close to_be_wiped

Now the most important aspect of the procedure: create a mount point for the boot partition and write a LUKS header there. It is critical to keep this header on permanent storage. If the header with its keys is lost, your data is lost as well. Keep this in mind:

mkdir /mnt/boot
mount /dev/sda2 /mnt/boot

Encrypt the partition and open the container:

cryptsetup -y luksFormat /dev/sda3 --header /mnt/boot/luksheader.img
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img
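
Before moving any data back, it is worth verifying that the detached header is valid and readable; luksDump should print the LUKS2 metadata and keyslots:

cryptsetup luksDump /dev/sda3 --header /mnt/boot/luksheader.img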

Create a new PV on the opened container and include it in the existing VG:

pvcreate /dev/mapper/lvmcrypt
vgextend ubuntu-vg /dev/mapper/lvmcrypt

Move your data from the spare drive into the newly created lvmcrypt container:

pvmove /dev/sdb /dev/mapper/lvmcrypt

And finally, remove the spare drive from the VG and remove its PV:

vgreduce ubuntu-vg /dev/sdb
pvremove /dev/sdb

How to update initramfs with a detached LUKS2 header

So what is the deal with the LUKS2 header being detached? In this format, the first 16 MiB of the device is reserved for the header. If the header stayed on /dev/sda3, the partition would lose that much usable space and the data moved back from the spare drive might no longer fit. The second reason for a detached header is a modest increase in security: without the header, the drive looks like pure random data. But remember that if you lose the device or filesystem holding the header, you permanently lose your data as well. So…
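
You can see this reserved space yourself: while the boot partition from the earlier steps is still mounted, the detached header file created with the defaults above should be about 16 MiB in size:

ls -lh /mnt/boot/luksheader.img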

If you rebooted and are booting into GRML once again, mount the boot partition holding the header first, then open the container, activate the VG and mount the root LV:

mkdir -p /mnt/boot /mnt/luks
mount /dev/sda2 /mnt/boot
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img
vgchange -ay
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

If you continued without rebooting, you can just create the mount point and mount the root LV:

mkdir /mnt/luks
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

Mount and bind the necessary special filesystems, mount the boot partition, and then chroot into the system:

mount -t proc proc /mnt/luks/proc
mount -t sysfs sys /mnt/luks/sys
mount -o bind /dev /mnt/luks/dev
mount --bind /run /mnt/luks/run
mount /dev/sda2 /mnt/luks/boot
chroot /mnt/luks /bin/bash

Now you are back in your Ubuntu, on your encrypted drive. Is it over? No. We still need to tell the system at boot time where the LUKS2 header is stored. Copy your header onto any additional drive: in a VM it can be a tiny (even 0.1 GB) virtual disk, on a workstation a USB pen drive (assumed here to show up as /dev/sdb):

dd if=/boot/luksheader.img of=/dev/sdb

Edit your /etc/crypttab file with the following:

lvmcrypt PARTUUID=A none luks,header=/dev/disk/by-uuid/B

where A is the PARTUUID reported by blkid /dev/sda3 and B is the UUID reported by blkid /dev/sdb.
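
For illustration only (the values below are made up; run blkid yourself and substitute your own):

blkid /dev/sda3
# /dev/sda3: PARTUUID="2a1b3c4d-03"                                        <- use as A
blkid /dev/sdb
# /dev/sdb: UUID="c0ffee11-2222-3333-4444-555566667777" TYPE="crypto_LUKS" <- use as B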

Finally, regenerate initramfs:

update-initramfs -c -k all
exit
reboot

You’re good to go. Your drive is now encrypted and you will be asked for the passphrase you set earlier every time you boot the system. To be clear, you need to keep the additional drive holding the LUKS2 header safe. After the system has booted, the drive or pen drive can be removed, but it needs to be inserted again on every further reboot.
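
Since losing that header means losing your data, an extra backup is cheap insurance; a quick sketch assuming the header drive is /dev/sdb (copy the resulting image somewhere off the machine afterwards):

# The LUKS2 header occupies the first 16 MiB of the header drive:
dd if=/dev/sdb of=/root/luksheader-backup.img bs=1M count=16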

Further reading

https://unix.stackexchange.com/questions/444931/is-there-a-way-to-encrypt-disk-without-formatting-it
https://www.michelebologna.net/2020/encrypt-an-existing-linux-installation-with-zero-downtime-luks-on-lvm/
https://dev.to/goober99/encrypt-an-existing-linux-installation-online-with-the-magic-of-lvm-1mjc
https://linuxconfig.org/how-to-use-luks-with-a-detached-header
https://medium.com/@privb0x23/lose-your-head-attempting-to-boot-from-luks-without-a-header-2d61174df360
https://askubuntu.com/questions/1351911/what-does-regenerate-your-initramfs-mean
https://superuser.com/questions/111152/whats-the-proper-way-to-prepare-chroot-to-recover-a-broken-linux-installation
https://unix.stackexchange.com/questions/720202/detached-luks-header-on-debian-based-gnu-linux

Compatibility of Suricata IPS on Proxmox

For those who use neither Proxmox nor Suricata: the former is a virtualization platform which runs virtual machines as well as LXC containers, and the latter is a network traffic security system which can identify malicious traffic (IDS mode) or even block it (IPS mode). Suricata works just fine on Proxmox, which is based on Debian Linux, but sometimes there are hardware/software compatibility issues, which I'm going to tell you about right now...

Having a Proxmox server exposed on the public internet is far from ideal. However, if there is no chance of dedicated hardware, hiding your box from the world is the only reasonable way. There is of course the possibility of setting up a Proxmox cluster with only one server exposed and the rest reachable only through a private link (e.g. VLANs on a vSwitch at Hetzner). But you will still be left with at least one server that needs to be accessible from outside.

Note: without dedicated networking hardware you can try setting everything up offline over a KVM console (with the private link used only for cluster communication), but this way, if something goes wrong, you will be left waiting in a queue to access it, as such resources are often limited, just as they are at Hetzner. Usually KVM access is granted within 15–60 minutes of the request.

So in case you have your box exposed, you need to hide it somehow from malicious traffic. I prefer to disable RPC and SSH, enable 2FA for UI authentication, and, last of all, install Suricata IPS directly on the Debian host. Of course, if you have VMs inside Proxmox (and you will have), you can install Suricata on them too, for example on pfSense where it is conveniently prepackaged. Installation is straightforward but… it relies on Linux kernel features which need proper drivers and hardware compatibility. That means it will run on almost all modern hardware, but not on all of it: there are issues with some onboard NICs and the Proxmox custom kernel picking up different drivers. The official documentation states that we should load the nfnetlink_queue module, but on problematic hardware the most important step is to enable and start the nftables service. Without it, Suricata will pick af-packet mode, while we are interested in nfqueue mode, as it supports blocking by default. Both steps are sketched below.
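
A short sketch of those two steps on the Debian-based Proxmox host (assuming systemd and the stock nftables package; adapt to your setup):

# Load the kernel module required for NFQUEUE support
# and make it persistent across reboots:
modprobe nfnetlink_queue
echo nfnetlink_queue >> /etc/modules

# Enable and start nftables, the step that matters on problematic hardware:
systemctl enable --now nftables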

The Suricata system service is configured to run in af-packet mode by default; you can check the service unit files. If the startup scripts recognize that there are overrides, the service starts in a different mode, for instance nfqueue. On problematic setups you will most probably need to customize it a little, for example as sketched below. And basically that's all, but you are not going to read it anywhere in the official documentation. So hopefully, with these small things, you will get big benefits.
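
As an illustration only, on Debian-based systems the packaged service typically reads its mode from /etc/default/suricata; this is a hedged sketch, so verify the exact file and variable names shipped with your package version:

# /etc/default/suricata  (Debian packaging; names may vary by version)
# Switch from the default af-packet mode to nfqueue so Suricata can
# drop malicious packets (IPS) instead of only flagging them (IDS):
LISTENMODE=nfqueue

# Apply and verify which mode was picked up:
systemctl restart suricata
grep -i nfq /var/log/suricata/suricata.log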