E-mail Service Desk with self-hosted GitLab EE and iRedMail

To turn incoming e-mail messages into issues registered in GitLab, you need a catch-all mailbox and a GitLab instance configured to handle it. However, there are a few quirks to know about when setting it up.

This guide assumes you already have both GitLab and iRedMail installed.

iRedMail

First, create a new mailbox in the iRedMail admin panel. Then manually create a forwarding record (you can skip this step if you purchased iRedMail Pro):

INSERT INTO forwardings 
  (address, forwarding, domain, dest_domain)
VALUES 
  ('domain.com', 'catchall@domain.com', 'domain.com', 'domain.com');

At this point you can check whether the catch-all feature works by sending mail to a non-existent mailbox user. It is especially important to verify that messages also arrive when sent to an address like catchall+something@domain.com. At first they will land in separate mailbox folders, which is no good from GitLab's perspective. However, this is easy to fix by removing the extension parameter from the dovecot delivery transport (in /etc/postfix/master.cf). So, change this:

dovecot unix    -       n       n       -       -      pipe
    flags=DRh ... -m ${extension}

to this one:

dovecot unix    -       n       n       -       -      pipe
    flags=DRh ...

And run /etc/init.d/postfix restart

GitLab

As for GitLab itself, the setup is straightforward. Just edit the incoming_email* parameters in /etc/gitlab/gitlab.rb. You need to use IMAP, so choose between ports 143 and 993: the first is intended for STARTTLS, the second for TLS. With a default iRedMail installation there will be a self-signed certificate and incoming email will not work, which, by the way, you can verify with the following command:

sudo gitlab-rake gitlab:incoming_email:check
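For reference, the incoming_email fragment of /etc/gitlab/gitlab.rb might look roughly like this; the hostname, addresses and password below are placeholders, not values from this setup:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative placeholder values
gitlab_rails['incoming_email_enabled'] = true
# Sub-addressed catch-all; GitLab substitutes a per-project key for %{key}
gitlab_rails['incoming_email_address'] = "catchall+%{key}@domain.com"
gitlab_rails['incoming_email_email'] = "catchall@domain.com"
gitlab_rails['incoming_email_password'] = "secret"
gitlab_rails['incoming_email_host'] = "mail.domain.com"
gitlab_rails['incoming_email_port'] = 993
gitlab_rails['incoming_email_ssl'] = true   # port 993/TLS; for 143 use start_tls instead
gitlab_rails['incoming_email_mailbox_name'] = "inbox"
```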

Once you fill in all the required parameters:

sudo gitlab-ctl reconfigure

Remember that any manual changes you have made to the PostgreSQL configuration (/var/opt/gitlab/postgresql/data/pg_hba.conf and /var/opt/gitlab/postgresql/data/postgresql.conf) will be discarded on reconfigure, so take care of them either by templating or by placing the settings in GitLab's main configuration.

Next, grab the iRedMail certificate (a browser will do). Save it as a .crt file in the /etc/gitlab/trusted-certs directory. After that, reconfigure and check connectivity once again; you should be good to go.
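If you prefer the command line to a browser for grabbing the certificate, one possible way is openssl s_client; the hostname below is a placeholder for your iRedMail server:

```shell
# Dump the certificate presented on IMAPS (993) into GitLab's trusted certs
echo | openssl s_client -connect mail.domain.com:993 -showcerts 2>/dev/null \
  | openssl x509 -outform PEM > /etc/gitlab/trusted-certs/mail.domain.com.crt
```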

How to operate?

In GitLab, Service Desk is enabled automatically in every project. You can find it under Monitor – Service Desk, along with the target e-mail address to which service desk messages should be sent. Once a mail is sent, it will appear in this section as an issue. You can comment and reply directly through your e-mail client, without needing to visit GitLab itself.

Encrypt an unencrypted LVM Ubuntu 22 Server with LUKS, without system reinstallation

Keep your data safe. Device loss or unauthorized access can be mitigated by encrypting the drives in your servers and workstations.

So you may have an Ubuntu Linux installation on bare metal or in a virtual machine. Does it have an encrypted drive? If the answer is no, you could be in trouble when the device is stolen or lost, or when someone simply gains unauthorized access to your hardware. This short step-by-step article shows what steps to take to encrypt your unencrypted drives without reinstalling the system.

For a workstation there is much less concern about reinstallation: move your data and configuration out at a convenient time and proceed with a clean system installation, this time with proper drive encryption configured. But hold on a second. If there is a way to encrypt without reinstalling the system, why not just try it?

It is especially important for server installations running production software that handles customer data. You can opt for a system replacement in a maintenance window and redo all your work, but sometimes that is not an option. It does not really matter what kind of unmaintained and obsolete software runs on your unencrypted servers; most of us know at least one example of such a thing. With software that problematic, it is better to encrypt it as-is, without any additional steps.

How to migrate data?

Here you will learn how to encrypt existing data on LVM-based drives in an Ubuntu 22 server virtual machine. To get started, add an additional drive to your system with at least as much space as the unencrypted drive. Let's say your source is at /dev/sda and your spare drive is at /dev/sdb. I assume a default setup with three partitions: the first is a spacer for GRUB, the second is for /boot, and the third holds the root filesystem.
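For orientation, lsblk on such a default install shows something roughly like the sketch below; device names and sizes are illustrative only:

```shell
lsblk /dev/sda
# Illustrative layout (your sizes will differ):
# NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
# sda      8:0    0  20G  0 disk
# |-sda1   8:1    0   1M  0 part              (GRUB spacer)
# |-sda2   8:2    0   2G  0 part /boot
# `-sda3   8:3    0  18G  0 part              (LVM PV of ubuntu-vg)
```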

Boot your VM with a GRML ISO. On Proxmox, press Esc while the VM is starting and select the disc drive with the ISO mounted.

Once booted into GRML…

Create a PV and extend the existing VG with the new drive:

pvcreate /dev/sdb
vgextend ubuntu-vg /dev/sdb

Move your data to the new drive:

pvmove /dev/sda3 /dev/sdb

Remove the now-empty unencrypted partition from the VG and remove the PV:

vgreduce ubuntu-vg /dev/sda3
pvremove /dev/sda3

Now it’s time to wipe the existing unencrypted partition:

cryptsetup open --type plain -d /dev/urandom /dev/sda3 to_be_wiped
dd if=/dev/zero of=/dev/mapper/to_be_wiped bs=1M status=progress
cryptsetup close to_be_wiped

Now for the most important part of the procedure. Create a mountpoint for boot and write a LUKS header there. It is critical to save this header on permanent storage: if the header with its keys is lost, your data is lost too. Keep this in mind:

mkdir /mnt/boot
mount /dev/sda2 /mnt/boot

Encrypt the partition and open the LUKS container:

cryptsetup -y luksFormat /dev/sda3 --header /mnt/boot/luksheader.img
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img

Create a new PV and include it in the existing VG:

pvcreate /dev/mapper/lvmcrypt
vgextend ubuntu-vg /dev/mapper/lvmcrypt

Move your data from the additional spare drive into the newly created lvmcrypt container:

pvmove /dev/sdb /dev/mapper/lvmcrypt

And finally, remove the spare drive from the VG and remove the PV itself:

vgreduce ubuntu-vg /dev/sdb
pvremove /dev/sdb

How to update the initramfs with a detached LUKS2 header

So what is the deal with the detached LUKS2 header? In this format, the first 16MB of the device is reserved for the header. If the original partition had to hold that header, it would no longer have enough space for the data that needs to be moved back onto it. The second reason for a detached header is a somewhat higher security level. But remember: if the device or filesystem holding the header is lost, you permanently lose your data as well. So…

If you rebooted into GRML again, mount the boot partition holding the header first, then reopen the container and mount the root LV:

mkdir -p /mnt/boot /mnt/luks
mount /dev/sda2 /mnt/boot
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img
vgchange -ay
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

If you continued without rebooting, just create the mountpoint directory and mount the LV:

mkdir /mnt/luks
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

Mount and bind necessary special directories and then chroot into the system:

mount -t proc proc /mnt/luks/proc
mount -t sysfs sys /mnt/luks/sys
mount -o bind /dev /mnt/luks/dev
mount --bind /run /mnt/luks/run
mount /dev/sda2 /mnt/luks/boot
chroot /mnt/luks /bin/bash

Now you are back in your Ubuntu, on your encrypted drive. Is it over? No. The system still needs to know at boot time where the LUKS2 header is stored. Copy your header onto any additional drive: in the case of a VM it can be a tiny 0.1GB drive; for a workstation, a USB pendrive:

dd if=/boot/luksheader.img of=/dev/sdb

Edit your /etc/crypttab file with the following:

lvmcrypt PARTUUID=A none luks,header=/dev/disk/by-uuid/B

where A is the PARTUUID reported by blkid for /dev/sda3 and B is the UUID reported by blkid for /dev/sdb.
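You can pull both identifiers with blkid; /dev/sda3 and /dev/sdb follow the naming assumed throughout this article:

```shell
# A: PARTUUID of the encrypted data partition
blkid -s PARTUUID -o value /dev/sda3
# B: UUID that blkid reports for the drive holding the raw LUKS header
blkid -s UUID -o value /dev/sdb
```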

Finally, regenerate initramfs:

update-initramfs -c -k all
exit
reboot

You’re good to go. Your drive is now encrypted and you will be asked for the password you set earlier every time you boot the system. To be clear, you must keep the additional drive holding the LUKS2 header safe. After the system has booted, the drive or pendrive can be removed, but it needs to be inserted again on every subsequent reboot.

Further reading

https://unix.stackexchange.com/questions/444931/is-there-a-way-to-encrypt-disk-without-formatting-it
https://www.michelebologna.net/2020/encrypt-an-existing-linux-installation-with-zero-downtime-luks-on-lvm/
https://dev.to/goober99/encrypt-an-existing-linux-installation-online-with-the-magic-of-lvm-1mjc
https://linuxconfig.org/how-to-use-luks-with-a-detached-header
https://medium.com/@privb0x23/lose-your-head-attempting-to-boot-from-luks-without-a-header-2d61174df360
https://askubuntu.com/questions/1351911/what-does-regenerate-your-initramfs-mean
https://superuser.com/questions/111152/whats-the-proper-way-to-prepare-chroot-to-recover-a-broken-linux-installation
https://unix.stackexchange.com/questions/720202/detached-luks-header-on-debian-based-gnu-linux

External and redundant Azure VM backups with Veeam to a remote site

Backup is a must. Primary hardware fails. Local backups can also fail or become inaccessible. Remote backups can fail too, but with two, three or even more backup copies in different places and on various media, the chances are high that you will survive a major incident without data loss or too much downtime.

This article is about the Microsoft Azure public cloud platform, but whatever your infrastructure environment, you should have working, verified backup tools. Azure has its own. To keep those backups in a secure remote place (in the context of a Storage Account) you can use Veeam Backup for Microsoft Azure, which is free for up to 10 instances, besides the cost of storage and of the VM running Veeam itself, of course.

Source: Veeam Backup for Microsoft Azure Free Edition

To deploy Veeam you can use the VM template from the Azure marketplace, called “Veeam Backup for Microsoft Azure Free Edition”. You also need a storage account; I recommend setting it up with the firewall enabled, allowing your remote public IP address. This is where the VM backups made by Veeam will go.

Unlike Veeam Backup and Replication Community Edition, this one comes with a browser-based user interface, which also looks quite different from the desktop-based version. The first thing to do is define a backup policy (Management – Policies), add virtual machines and run it. That’s all at this point.

Resources covered by this policy can be found under Management – Protected Data. During backup, Veeam spins up an additional VM from an Ubuntu template to take the backups. After the backup or snapshot job is completed, this temporary VM is gone.

As mentioned earlier, the free license has 10 slots, but you need to configure license usage manually, which is a little annoying of course. Keep in mind that even a single backup or snapshot occupies a license seat; you need to remove it to free the seat up.

You could use Veeam as a replacement for the native Azure backups. In the scenario proposed here, Veeam backups are the first step towards redundant, remote backups in case the environment becomes inaccessible.

Remote: Veeam Backup and Replication Community Edition

To move backups/snapshots out of the Azure Storage Account created by Veeam Backup for Microsoft Azure, you need the Community Edition of Veeam installed at a remote site. For the sake of compliance it should be a physically separate place and, in my opinion, must not be the same service provider. So your remote site could also be in a public cloud, but from a different provider.

To install Veeam Community you need a Windows license for your virtual machine. Install Windows from the official Microsoft ISO and buy the license directly from the Microsoft Store. This way you can purchase an electronic license even for Windows 10, which is sometimes preferable over Windows 11. The Veeam installation itself is rather straightforward.

There is a variety of choices for where you can copy your backups from, which means a similar setup can be done in other public clouds like AWS or GCP. In the case of Microsoft Azure you need to copy the access token for the Storage Account with backups from the Azure Portal. Adding the external repository is done at Backup Infrastructure – External Repositories.

You also need a local repository, which can be a virtual hard drive added to your Veeam Community VM and initialized with a drive letter in Windows.

There is a choice of what to back up and how to transfer it to the remote place. In this scenario the optimum is to create a Backup Copy job, which will copy backups from the source as soon as they appear there. Other scenarios are possible too, provided additional requirements are met.

Once you have defined the Backup Copy Job, run it. When it completes, your source backups are secured at the remote site. Now you can copy those backups to yet another medium.

How to restore backups to remote Proxmox server?

Now your source backups are secured at a remote site. The question arises: how do you restore such a backup? You could run Instant Recovery, but that requires one of the commercial virtualization platforms Veeam supports, and Proxmox is not on that list. However, you can use Export content as virtual disk, which produces VMDK files with disk descriptors.

There is, however, one quirk you need to fix before continuing. The disk descriptors exported by Veeam are incompatible with disk import in Proxmox: the createType value must be surrounded with quotes.

createType="monolithicFlat"
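Editing the descriptor by hand works, or a quick sed one-liner can do it; the filename disk.vmdk is a placeholder, and the pattern assumes the value appears exactly as createType=monolithicFlat:

```shell
# Wrap the bare createType value in double quotes, in place
sed -i 's/^createType=monolithicFlat$/createType="monolithicFlat"/' disk.vmdk
```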

Copy the exported disks to the Proxmox server. Now create an empty VM, i.e. without disk drives and possibly, at first, even without network adapters. Import the disks into this newly created VM with the qm utility, then attach the drive to the VM and change its boot order. You are good to go.
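Assuming a hypothetical VM ID of 100 and a storage named local-lvm, the import and attach steps might look like this:

```shell
# Import the exported VMDK as an unused disk of VM 100
qm importdisk 100 disk.vmdk local-lvm
# Attach the imported disk and make it the boot device
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```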

To recap the procedure:

  • Export content as virtual disks
  • Fix createType variable in disk descriptor
  • Copy disk to Proxmox server
  • Create empty VM
  • Import disks into new VM
  • Configure VM and run it

Keep in mind that a redundant backup is a must.