Make Opera browser great again!

Get rid of ads and stop sending your data for free to Opera

A little history

Opera was my favourite browser for many years. Between 2000 and 2005 it was adware showing, well… ads. In 2005 the ads were removed, as financing came from Google, Opera’s default search engine. In 2013 Opera dropped its own rendering engine in favor of Chromium. In 2023 Opera got some AI features.

What is this all about?

I still like Opera.

It has this great multi-workspace feature and a battery saving mode, and in general it is much better at running plenty of tabs compared to other major browsers like Firefox or Chrome. However…

Opera has tons of “features” like shopping, Booking.com, promotional offers, AI services etc. Most of those features, including wallet, address data, spelling and payment options, are enabled by default. Imagine how much data you share with Opera this way. Imagine how many of these features can be used against you. Fortunately you can disable all of them, which makes Opera a great browser again.

Start with a blank configuration page

Click on the Opera logo and select Settings. You will land on the configuration page, which contains multiple sections like Basic, Advanced, Privacy & security, Features and Browser. Remember that in some cases the configuration page navigation is not linear.

Privacy & Security

In this section you find settings which concern suggestions and diagnostics, but also promotional notifications and promotional Speed Dials, bookmarks and campaigns. As you can see, it is all mixed together. This is the main issue with Opera’s settings: they are mixed up to confuse you, so that you cannot easily tell whether you have already disabled all unwanted features.

In this section you can find:

  • Improve search suggestions
  • Automatically send crash reports to Opera
  • Fetch images for suggested sources in News, based on history
  • Display promotional notifications
  • Receive promotional Speed Dials, bookmarks, and campaigns

Change search engine to non-big-tech

Instead of using Google and feeding big tech with loads of your search data, you can use DuckDuckGo as your primary engine. Fortunately there is an option to change the default search engine.

In this section you can find:

  • Search engine used in the address bar – set to DuckDuckGo 🙂

Password manager

I prefer to manually enter passwords, which I keep in a secure, encrypted place where I know how they are protected. Saving passwords in any other form could be dangerous, as you do not know to whom you hand those passwords over and in what form. And there are several examples of similar tools that have been hacked in the past.

In this section you can find:

  • Offer to save passwords
  • Sign in automatically

Payment methods

I think that anything (i.e. any information) remembered or saved about my person, location or browsing patterns could potentially be monetized by companies like Opera or Google which offer browsers. You may say that things like payment types or passwords are probably saved locally. Maybe, but what about future upgrades? Will anyone give me a guarantee about this? I’m not so sure.

In this section you can find:

  • Save and fill payment methods
  • Allow sites to check if you have payment method saved

Address forms

In the case of form data it is more about malicious websites stealing data than about Opera as such. There are known attacks which use hidden form elements that get auto-filled even though you cannot see them. Keeping this option on may expose you to similar issues in the future. And it does not really matter whether Opera is vulnerable to this kind of “attack” today or not; it is all about the approach.

In this section you can find:

  • Save and fill addresses

Crypto wallet

If you own some crypto you may wonder if this option is a safe place for your crypto wallets. I am not so sure about it. As far as I remember, it all comes down to holding a private key. So keep your private key private. Keeping any keys or IDs in such a place is, from my perspective, not a good idea. You may see this differently and keep using it, but this is my opinion.

In this section you can find:

  • Enable Wallet Selector

AI services

There is nothing inherently bad about having AI features in a browser. I do not see any major issues with this one, as I do not think that Opera would send all your traffic and data to those machine learning pipelines. So with that crossed out, the only thing left to consider is your battery life as you enable more and more features. Please note that I did not conduct any tests, so this is only my opinion.

In this section you can find:

  • Aria in the sidebar
  • AI Prompts in text highlight popup

My Flow

My files on my computer and phone at the same time? That sounds like sending my data outside of my device. I would not do this, as I do not use OneDrive or Dropbox, and whenever I find such software on my device, it gets uninstalled immediately. If I want to send some files somewhere else, I send them myself and on my own rules. You may choose differently; this is my approach, the secure way.

In this section you can find:

  • Enable My Flow
  • Enable Pinboards

Start page

So here you have suggestions, which are based on your data. You have Booking.com options. It should be self-explanatory that these are commercial contracts based either on data or on affiliation, which may still identify you as a person making a purchase somewhere else.

In this section you can find:

  • Hide search box
  • Hide Speed Dial
  • Show Continue Shopping section
  • Show Continue on Booking.com section
  • Show weather widget

Spell check

This feature itself is not harmful, but it consumes battery. You may leave it enabled if you want.

In this section you can find:

  • Check for spelling errors when you type text on web page

Social media

Messenger and WhatsApp are the most popular ways of communicating nowadays, but having Telegram here… well, I have heard there are some issues with it, so be sure you know what you are doing. WhatsApp works just fine. Messenger is just a little bit less crippled than the whole Facebook thing.

In this section you can find:

  • You can disable Telegram 🙂

What’s next?

With the aforementioned adjustments you can start using Opera in a far more secure way than with its default settings, which are stupendous but still somehow understandable. Opera is a commercial company which would like to make money, and they make money through various channels: ads, affiliations, “by-defaulting” things, data/diagnostics, and features included as services. With just a little time spent on this configuration you get a great and efficient workspace. I think it is worth spending this time.

NIS 2: anti-rootkit & anti-virus installation and scanning with Ansible

If you run a digital services platform or critical infrastructure, then most probably you are covered by NIS 2 and its requirements, including those concerning information security. Even if you are not covered by NIS 2, you may still benefit from its regulations, which seem similar to those coming from ISO 27001. In this article I show how to automatically deploy anti-rootkit and anti-virus software to your Linux workstations and servers.

TLDR

By using the rkhunter anti-rootkit and the ClamAV anti-virus you are closer to NIS 2 and ISO 27001, and farther away from threats like cryptocurrency miners and ransomware. You can automate the deployment with Ansible.

Course of action

  • Prepare Proxmox virtualization host server
  • Create 200 LXC containers
  • Start and configure containers
  • Install rkhunter and scan systems
  • Install ClamAV and scan systems

What is NIS 2?

The NIS 2 Directive (Directive (EU) 2022/2555) is a legislative act that aims to achieve a high common level of cybersecurity across the European Union. Member States must ensure that essential and important entities take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems, and to prevent or minimise the impact of incidents on recipients of their services and on other services. The measures must be based on an all-hazards approach.

source: https://www.nis-2-directive.com/

Aside from being an EU regulation, NIS 2 can be beneficial from a security point of view. Moreover, not complying with NIS 2 regulations can cause significant damage to an organization’s budget.

Non-compliance with NIS2 can lead to significant penalties. Essential entities may face fines of up to €10 million or 2% of global turnover, while important entities could incur fines of up to €7 million or 1.4%. There’s also a provision that holds corporate management personally liable for cybersecurity negligence.

source: https://metomic.io/resource-centre/a-complete-guide-to-nis2

What are the core concepts of NIS 2?

To implement NIS 2 you will need to cover various topics concerning technology and its operations, such as:

  • Conduct risk assessment
  • Implement security measures
  • Set up supply chain security
  • Create incident response plan
  • Perform regular cybersecurity awareness and training
  • Perform regular monitoring and reporting
  • Plan and perform regular audits
  • Document processes (including DRS, BCP, etc.)
  • Maintain compliance by reviewing & improving to achieve completeness

Who should be interested?

As implementing the NIS 2 requirements impacts the business as a whole, interest should come from various departments, not only IT but technology in general, as well as business and operations. From the employees’ perspective, they will be required to participate in trainings concerning cybersecurity awareness. In other words, NIS 2 impacts the whole organization.

How to define workstation and server security

We can define a workstation as a desktop or laptop computer which is physically available to its user. On the other hand, we can define a server as a computing entity which is intended to offload workstation tasks as well as provide multi-user capabilities. So we can also describe a server as a virtual machine or a system container instance (such as LXC).

The security concepts for workstations and servers are basically the same, as they share many similarities. Both run an operating system with some kind of kernel inside. Both run system-level software along with user-level software. Both are vulnerable to malicious traffic, software and incoming data, especially in the form of websites. There is one major difference, however, and it impacts workstation users the most: the higher variability of tasks done on the computer. On the other hand, even with the less variable character of server tasks, the hidden nature of server instances can lead to a lack of visibility of obvious threats.

So, both workstations and servers should run either EDR (Endpoint Detection and Response) software, or anti-virus as well as anti-rootkit software. Computer drives should be encrypted with LUKS (or BitLocker in the case of Windows). Users should work on least-privileged accounts, not connect to unknown wireless networks, and not insert unknown devices into computer input ports (USB devices could be keyloggers, for instance).

Prepare 200 LXC containers on Proxmox box

Below you will find how to create 200 LXC containers for testing purposes and then, using Ansible, how to install and execute anti-rootkit and anti-virus software, rkhunter and ClamAV respectively. Why test on that many containers, you may ask? With automation it is necessary to verify both the performance impact on remote hosts and how we identify the automation results on our side. In our case those 200 containers will be placed on a single Proxmox node, so it is critically important to check whether it can handle that many of them.

Ansible gives us the ability to automate work by defining “playbooks”, which are groups of tasks using various integration components. Aside from running playbooks you can also run commands without file-based definitions: you can use the shell module, for instance, to send commands to remote hosts. There is a wide variety of Ansible extensions available.
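
For example, a quick ad-hoc command looks like this (a minimal sketch; the inventory file name is an assumption, it is created later in this article):

ansible all -i hosts.txt -m shell -a "uptime" -u root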

System preparation

In order to start using Ansible with Proxmox you need to install the “proxmoxer” Python package. To do this, Python pip is required.

apt update
apt install pip
pip install proxmoxer

To install Ansible (on Ubuntu):

sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

Then, in /etc/ansible/ansible.cfg, set the following option, which skips the host key check during SSH connections.

[defaults]
host_key_checking = False

Container creation

Next, define a playbook for container creation. You need to pass the Proxmox API details, your network configuration and disk storage, and the name of the OS template of your choice. I have used Ubuntu 22.04, which resides on the storage named “local”. My choice for the target container storage is “vms1”, with 1GB of storage for each container. I loop through the range from 20 up to (but not including) 220, which gives exactly 200 containers.

The inventory for this playbook should contain only the Proxmox box on which we are going to create the 200 LXC containers.

---
- name: Proxmox API
  hosts: proxmox-box
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'
  serial: 1
  tasks:
  - name: Create new container with minimal options
    community.general.proxmox:
      node: lab
      api_host: 192.168.2.10:8006
      api_user: root@pam
      api_token_id: root-token
      api_token_secret: TOKEN-GOES-HERE
      password: PASSWORD-GOES-HERE
      hostname: "container-{{ item }}"
      ostemplate: 'local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst'
      force: true
      disk: "vms1:1"
      netif:
        net0: "name=eth0,gw=192.168.1.1,ip=192.168.2.{{item}}/22,bridge=vmbr0"
      cores: 2
      memory: 4000
    loop: "{{ range(20, 220) | list }}"

Then run this playbook to create the containers:

ansible-playbook containers-create.yml -i inventory.ini -u root
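
Once the playbook finishes, you can sanity-check the result on the Proxmox box itself; the expected count below assumes the 200-container range used above:

pct list | tail -n +2 | wc -l    # should print 200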

Start and configure containers

In order to start the newly created containers, use a shell loop with the pct command (run it on the Proxmox box):

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct start $i; 
  echo $i; 
done

To generate the list of IP addresses of your containers for the Ansible inventory, you can use “prips”:

apt install prips
prips 192.168.2.0/24 > hosts.txt
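
Note that the command above writes out the whole /24, including addresses with no container behind them. prips also accepts explicit start and end addresses, so you can limit the file to the container range used earlier (a sketch assuming the 20–219 range):

prips 192.168.2.20 192.168.2.219 > hosts.txt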

For demo purposes only: next, enable root SSH login, as root is our only user so far and it cannot log in. On a daily basis you should use an unprivileged user. Again, use a shell loop with the “pct” command:

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config; 
  echo $i; 
done

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- service ssh restart; 
  echo $i; 
done

Checkpoint: so far we have created, started and configured 200 LXC containers, ready for further software installation.
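
Before deploying anything, it may be worth confirming that Ansible can actually reach all containers over SSH; a minimal check could be:

ansible all -i hosts.txt -m ping -u root -f 12 -o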

rkhunter: anti-rootkit software deployment

You may ask whether this anti-rootkit is a real-world use case. It definitely is. From my personal experience I can say that even when using well-known brands for your systems layer (or especially then), like public cloud operators, you can face the risk of open vulnerabilities. Cloud operators, and any other digital service providers, often rely on content from third-party providers, so effectively the quality and security level is only as good as what those third parties deliver. You can expect to receive outdated and unpatched software, open user accounts, etc. This can lead to system breaches, which in turn can lead to data theft, ransomware, spyware, cryptocurrency mining and much more.

There are similarities between anti-rootkit and anti-virus software. rkhunter is much more targeted at specific use cases: instead of checking hundreds of thousands of virus signatures, it looks for hundreds of well-known signs of rootkits being present in your system. You could say it is a specialized form of anti-virus software.

Installation of anti-rootkit

First install rkhunter with the following playbook:

---
- name: Install rkhunter
  hosts: all
  tasks:
    - name: Install rkhunter Ubuntu
      when: ansible_distribution == "Ubuntu"
      ansible.builtin.apt:
        name: rkhunter
        state: present
    - name: Install epel-release CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: epel-release
        state: present
    - name: Install rkhunter CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: rkhunter
        state: present

Execute it with Ansible:

ansible-playbook rkhunter-install.yml -i hosts.txt -u root

Scanning systems with anti-rootkit

And then scan with rkhunter:

---
- name: Run rkhunter
  hosts: all
  tasks:
    - name: Run rkhunter
      ansible.builtin.command: rkhunter -c --sk -q
      register: rkrun
      ignore_errors: true
      failed_when: "rkrun.rc not in [ 0, 1 ]"

Execute it with Ansible:

ansible-playbook rkhunter-run.yml -i hosts.txt -u root

To verify the results, it is much easier to run a one-off check with the ansible command instead of ansible-playbook, which runs playbooks:

ansible all -i hosts.txt -m shell -a "cat /var/log/rkhunter.log | grep Possible | wc -l" -u root -f 12 -o

Results interpretation and reaction

What if you see some “Possible rootkits”? First of all, calm down and follow your incident management procedure, if you have one. A clean result looks like this:

192.168.2.23 | CHANGED | rc=0 | (stdout) 0
192.168.2.31 | CHANGED | rc=0 | (stdout) 0
192.168.2.26 | CHANGED | rc=0 | (stdout) 0
192.168.2.29 | CHANGED | rc=0 | (stdout) 0
192.168.2.24 | CHANGED | rc=0 | (stdout) 0
192.168.2.27 | CHANGED | rc=0 | (stdout) 0
192.168.2.22 | CHANGED | rc=0 | (stdout) 0
192.168.2.28 | CHANGED | rc=0 | (stdout) 0
192.168.2.21 | CHANGED | rc=0 | (stdout) 0
192.168.2.20 | CHANGED | rc=0 | (stdout) 0
192.168.2.25 | CHANGED | rc=0 | (stdout) 0

If you do not have a proper procedure, then follow the basic escalation path within your engineering team. Before isolating the possibly infected system, first check whether it is a false-positive alert. There are plenty of situations in which tools like rkhunter will detect something unusual: it can be a Zabbix proxy process with unusual memory alignment, or a script replacing a basic system utility such as wget. However, if rkhunter finds a well-known rootkit, you should start shutting the system down, or at least isolate it, or take whatever other action you have planned for such situations.
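
Confirmed false positives can be whitelisted in rkhunter.conf; a sketch (the paths are examples only, adjust them to your own findings, and refresh the file-properties database afterwards):

# /etc/rkhunter.conf
SCRIPTWHITELIST=/usr/bin/egrep
SCRIPTWHITELIST=/usr/bin/which

rkhunter --propupd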

If you find a single infection within your environment, there is a high chance that other systems might be infected too, and you should be ready to scan everything accessible from there, especially if you have password-less connections between your servers. For more about possible scenarios, look at the MITRE ATT&CK knowledge base and framework.

ClamAV: anti-virus deployment

What is the purpose of having an anti-virus in your systems? Similar to anti-rootkit software, an anti-virus utility keeps your system safe and away from common threats like malware, adware, keyloggers etc. However, it has many more signatures and scans everything, so a complete scan takes a lot longer than with anti-rootkit software.

Installation of anti-virus

First, install ClamAV with the following playbook:

---
- name: ClamAV
  hosts: all
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'

  tasks:
    - name: Install ClamAV
      when: ansible_distribution == "Ubuntu"
      ansible.builtin.apt:
        name: clamav
        state: present
    - name: Install epel-release CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: epel-release
        state: present
    - name: Install ClamAV CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: clamav
        state: present

Then execute this playbook:

ansible-playbook clamav-install.yml -i hosts.txt -u root

Each host with ClamAV runs the clamav-freshclam service, which updates the virus signature databases locally. The official mirrors enforce rate limits, so it is suggested to set up a private mirror using the “cvdupdate” tool. If you leave things as they are, all hosts may ask the mirror at the same time, you will hit the limits, and you will be blocked for some period of time. If your infrastructure spans various providers, you should go for multiple private mirrors.
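
A rough sketch of such a mirror with cvdupdate (the mirror address and port are assumptions; cvd serve is a test server only, so put a proper web server in front of the database directory for production use):

pip install cvdupdate
cvd update        # download the signature databases locally
cvd serve         # built-in test HTTP server

Then point each host at the mirror in /etc/clamav/freshclam.conf:

PrivateMirror http://192.168.2.10:8000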

Scanning systems with anti-virus

You can scan either a particular directory or the complete filesystem. You could run the scan from a playbook, but you can also run it promptly with the ansible command, without writing a playbook. It seems that the ClamAV anti-virus, contrary to rkhunter, returns fewer warnings, so it is much easier to manually interpret the results without relying on return codes.

ansible all -i hosts.txt -m shell -a "clamscan --infected -r /usr | grep Infected" -v -f 24 -u root -o

You can also run ClamAV while skipping the /proc and /sys folders, which hold virtual filesystems used for kernel/hardware communication.

clamscan --exclude-dir=/proc/* --exclude-dir=/sys/* -i -r /

It is possible to install ClamAV as a system service (daemon); however, it is much harder to accomplish, as there might be difficulties with AppArmor (or a similar solution) and file permissions. It will also put load on your systems at random times, which is not exactly what we would like to experience. You may prefer to put the scans on a cron schedule instead.
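
A cron-based sketch (the file path, schedule and log location are assumptions; this runs a weekly scan on Sunday at 3 a.m.):

# /etc/cron.d/clamscan-weekly
0 3 * * 0 root clamscan --infected --recursive --exclude-dir=/proc --exclude-dir=/sys / >> /var/log/clamscan-weekly.log 2>&1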

Please note: I am not going to tell you to disable AppArmor, as that would conflict with NIS 2. Quite the opposite: I encourage you to learn how to deal with AppArmor and SELinux, as they are required by various standards like DISA STIG.

To run the ClamAV daemon it is required to have the main virus database present in your system. Missing it prevents the service from starting, and it is directly linked to the freshclam service.

○ clamav-daemon.service - Clam AntiVirus userspace daemon
     Loaded: loaded (/lib/systemd/system/clamav-daemon.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/clamav-daemon.service.d
             └─extend.conf
     Active: inactive (dead)
  Condition: start condition failed at Mon 2024-09-02 15:24:40 CEST; 1s ago
             └─ ConditionPathExistsGlob=/var/lib/clamav/main.{c[vl]d,inc} was not met
       Docs: man:clamd(8)
             man:clamd.conf(5)
             https://docs.clamav.net/

Results interpretation and reaction

Running clamscan gives us these sample results:

Using /etc/ansible/ansible.cfg as config file
192.168.2.33 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.23 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.28 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.42 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.26 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.29 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.38 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.45 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.40 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.32 | CHANGED | rc=0 | (stdout) Infected files: 0

As it is a manual scan, it is straightforward to identify possible threats. In case of an automatic scan or an integration with Zabbix, you will need to learn what clamscan could possibly output, same as with rkhunter.
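
For such an integration, extracting the infected-file count from the scan summary is usually enough as a first step; a minimal sketch:

clamscan --infected -r /usr 2>/dev/null | awk -F': ' '/^Infected files/ {print $2}'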

Conclusion

Automation in the form of Ansible can greatly help with anti-rootkit and anti-virus software deployment, rkhunter and ClamAV respectively. These tools will surely increase the level of security in your environment if they cover all the systems you have up and running. Automation itself is not required by NIS 2 directly; however, it positively impacts future work.

Making Satel Opal Plus motion sensor wireless with Fibaro Smart Implant

It is possible to make a wired motion sensor from Satel wireless, using a Z-Wave network. Satel Opal, Opal Plus and Opal Pro have a similar case which can hold the additional Fibaro Smart Implant module. In the picture below it sits on the bottom right side of the case, just below the sensor module. It fits quite well there.

Both the sensor and the implant are powered by 12V DC, so you only need 2 wires coming to the sensor instead of 3 or more. This is especially important if the wiring is already laid (e.g. buried in the ground) and you would not like to change it.

The Fibaro Smart Implant has 2 binary switch-like inputs as well as 2 resistive outputs, which can be left disconnected if not needed. In the picture above you can see these outputs unused. Import the FSI into your system of choice; mine is a Fibaro HC3. But be aware that there is one quirk here: after adding it you will not see the binary inputs as separate devices, and moreover you need to manually add an association from those inputs to the controller. It is counter-intuitive.

There is a binary switch on the list, but no binary input:

Once you have this configured, you still will not be able to use it as a security device, because it is not one: it is just a binary input and switch, as the name states. So Fibaro HC3 will not include it in an alarm zone. You can change this by using a QuickApp with the motion sensor type set. As the binary input toggles the binary switch, you can read that state and rewrite it from one device onto the other. As simple as that:

function QuickApp:onInit()
    self:debug("onInit")
    self:loop()
end

function QuickApp:loop()
    fibaro.setTimeout(500, function()
        -- 111 is the device ID of the Smart Implant binary switch
        local state = hub.getValue(111, "state")
        self:debug(tostring(state))
        -- mirror the switch state onto this motion sensor QuickApp
        self:updateProperty("value", state)
        if state == true then
            -- hold the "motion detected" state for 5 seconds
            hub.sleep(5000)
        end
        self:loop()
    end)
end

Be aware that this granularity is enough; it has been tested. Satel Opal motion sensors are powerful, so there is no need to increase the polling frequency. By default this motion sensor raises an alarm for 2 seconds, and here I have extended it to 5 seconds just for testing’s sake. In the case of an alarm zone it does not matter how long the sensor reports; once it reports, the job is done.

Encrypt with LUKS an unencrypted LVM Ubuntu 22 Server without system reinstallation

Keep your data safe. Device loss or unauthorized access can be mitigated by encrypting the drives in your servers and workstations.

So you may have an Ubuntu Linux installation on your bare metal or virtual machine. Does it have an encrypted drive? If the answer is no, then you could be in trouble when the device is stolen or lost, or when someone gains unauthorized access to your hardware. In this short step-by-step article you can see what steps you should take to encrypt your unencrypted drives without the need to reinstall the system.

For a workstation there is much less concern about system reinstallation: just move out your data and configuration at a convenient time and proceed with a clean system installation, now with proper drive encryption configured. But hold on a second. If there is an option to encrypt without the need to reinstall your system, then why not just try it?

It is especially important when talking about server installations running production software handling customer data. You can opt for system replacement in a maintenance window and redo all your work, but sometimes that is not an option. Actually, it does not matter what kind of unmaintained and obsolete software you run on your unencrypted servers; I think most of us know at least one example of such a thing. With such problematic software it is better to encrypt it as-is, without additional steps required.

How to migrate data?

Here you can learn how to encrypt your existing data on LVM-based drives in an Ubuntu 22 server virtual machine. To get started you need to add an additional drive to your system with equal or greater capacity than the unencrypted drive. Let’s say your source is at /dev/sda and your spare drive is at /dev/sdb. I assume it is a default setup with 3 partitions: the first one for GRUB, the second for /boot and the third for the root filesystem.

Boot your VM with a GRML ISO. On Proxmox, when the VM is starting, press Esc and select the disc drive with the ISO mounted.

Once booted into GRML…

Create a PV and extend the existing VG with the new drive:

pvcreate /dev/sdb
vgextend ubuntu-vg /dev/sdb

Move your data to the new drive:

pvmove /dev/sda3 /dev/sdb

Remove the now-empty unencrypted drive from the VG and remove the PV:

vgreduce ubuntu-vg /dev/sda3
pvremove /dev/sda3

Now it’s time to wipe the existing unencrypted drive:

cryptsetup open --type plain -d /dev/urandom /dev/sda3 to_be_wiped
dd if=/dev/zero of=/dev/mapper/to_be_wiped bs=1M status=progress
cryptsetup close to_be_wiped

Now comes the most important aspect of the procedure. Create a mountpoint for boot and write a LUKS header over there. It is critical to save this header on permanent storage: if the header with the keys is lost, your data will be lost too. Keep this in mind:

mkdir /mnt/boot
mount /dev/sda2 /mnt/boot

Encrypt and open the drive container:

cryptsetup -y luksFormat /dev/sda3 --header /mnt/boot/luksheader.img
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img

Create a new PV and include it in the existing VG:

pvcreate /dev/mapper/lvmcrypt
vgextend ubuntu-vg /dev/mapper/lvmcrypt

Move your data from the additional spare drive into the newly created lvmcrypt container:

pvmove /dev/sdb /dev/mapper/lvmcrypt

And finally, remove the spare drive from the VG and remove the PV itself:

vgreduce ubuntu-vg /dev/sdb
pvremove /dev/sdb

How to update initramfs with detached LUKS2 header

So what is the deal with this detached LUKS2 header? In this format, the first 16MB of space is used for the header. If the original drive had to include this space, it would not have enough room for the data which needs to be moved back again. The second reason to have a detached header is to somewhat increase the security level. But remember that if you lose the device or filesystem holding the header, you permanently lose your data as well. So…

If you are booting once again into GRML, mount the boot partition first so the header is reachable:

mkdir -p /mnt/boot /mnt/luks
mount /dev/sda2 /mnt/boot
cryptsetup luksOpen /dev/sda3 lvmcrypt --header /mnt/boot/luksheader.img
vgchange -ay
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

If you continue without rebooting, you can just create the mountpoint directory and mount it:

mkdir /mnt/luks
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt/luks

Mount and bind the necessary special directories and then chroot into the system:

mount -t proc proc /mnt/luks/proc
mount -t sysfs sys /mnt/luks/sys
mount -o bind /dev /mnt/luks/dev
mount --bind /run /mnt/luks/run
mount /dev/sda2 /mnt/luks/boot
chroot /mnt/luks /bin/bash

Now you are back in your Ubuntu, on your encrypted drive. Is it over? No. We need to tell the system at boot time where the LUKS2 header is stored. Copy your header onto any additional drive: in the case of a VM it could be a 0.1GB drive; in the case of a workstation it can be a USB pendrive:

dd if=/boot/luksheader.img of=/dev/sdb

Edit your /etc/crypttab file with the following:

lvmcrypt PARTUUID=A none luks,header=/dev/disk/by-uuid/B

where A is the PARTUUID reported by blkid /dev/sda3 and B is the UUID reported by blkid /dev/sdb.
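
To print exactly those values (standard blkid output filters):

blkid -s PARTUUID -o value /dev/sda3
blkid -s UUID -o value /dev/sdb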

Finally, regenerate initramfs:

update-initramfs -c -k all
exit
reboot

You’re good to go. Now your drive is encrypted and you will be asked for the password set earlier every time you boot the system. To be clear: you need to keep the additional drive holding the LUKS2 header safe. After you have booted the system, the drive or pendrive can be removed, but it needs to be inserted again on every further reboot.

Further reading

https://unix.stackexchange.com/questions/444931/is-there-a-way-to-encrypt-disk-without-formatting-it
https://www.michelebologna.net/2020/encrypt-an-existing-linux-installation-with-zero-downtime-luks-on-lvm/
https://dev.to/goober99/encrypt-an-existing-linux-installation-online-with-the-magic-of-lvm-1mjc
https://linuxconfig.org/how-to-use-luks-with-a-detached-header
https://medium.com/@privb0x23/lose-your-head-attempting-to-boot-from-luks-without-a-header-2d61174df360
https://askubuntu.com/questions/1351911/what-does-regenerate-your-initramfs-mean
https://superuser.com/questions/111152/whats-the-proper-way-to-prepare-chroot-to-recover-a-broken-linux-installation
https://unix.stackexchange.com/questions/720202/detached-luks-header-on-debian-based-gnu-linux

External and redundant Azure VM backups with Veeam to a remote site

Backup is a must. Primary hardware fails. Local backups can also fail or become inaccessible. Remote backups can fail too, but if you have 2, 3 or even more backup copies in different places and on various media, chances are high enough that you will survive major incidents without data loss or too much downtime.

This article is about the Microsoft Azure public cloud platform, but in any infrastructure environment you should have working and verified backup tools. Azure has its own. To keep those backups in a secure remote place (in the context of a Storage Account) you can use Veeam Backup for Microsoft Azure, which can be used with up to 10 instances for free, besides the costs of storage and of the VM running Veeam itself, of course.

Source: Veeam Backup for Microsoft Azure Free Edition

To deploy Veeam you can use the VM template from Azure’s marketplace. It’s called “Veeam Backup for Microsoft Azure Free Edition”. You also need a storage account. I recommend setting it up with the firewall enabled, allowing your remote public IP address. This is the place where your VM backups made by Veeam will go.

Unlike Veeam Backup and Replication Community Edition, this one comes with a browser-based user interface. It also looks quite different from the desktop-based version. What you need to do first is define a backup policy (Management – Policies), add virtual machines and run it. That’s all at this point.

Resources covered by this policy can be found in Management – Protected Data. During a backup, Veeam spins up an additional VM from an Ubuntu template to take the backups. After the backup or snapshot job is completed, this temporary VM is gone.

As mentioned earlier, there are 10 slots within the free license. But you need to manually configure license usage, which is a little bit annoying, of course. Keep in mind that even a single backup or snapshot of a VM uses a license seat; you need to remove them to free it up.

You could use Veeam as a replacement for the native backups coming from Azure. In the proposed scenario, Veeam backups are the first step toward having redundant and remote backups in case the environment becomes inaccessible.

Remote: Veeam Backup and Replication Community Edition

In order to move backups/snapshots out of the Azure Storage Account created by Veeam for Microsoft Azure, you need Community Edition of Veeam installed in a remote place. For the sake of compliance it should be a physically separate place, and in my opinion it must not be the same service provider. So your remote site could also be on a public cloud, but from a different provider.

In order to install Veeam Community you need to obtain a Windows license for your virtual machine. Install Windows from an official Microsoft ISO and buy the license directly from the Microsoft Store. This way you can purchase an electronic license even for Windows 10, which is sometimes preferable over Windows 11. The Veeam installation is rather straightforward.

There is a variety of choices for where you can copy your backups from, which means that a similar setup can be done with other public clouds like AWS or GCP. In the case of Microsoft Azure you need to copy your access key for the Storage Account with backups from the Azure Portal. Adding an external repository is done at Backup Infrastructure – External Repositories.

You also need a local repository, which can be a virtual hard drive added to your Veeam Community VM and initialized with a drive letter in Windows.

There is a choice of what to back up and how to transfer it to the remote place. In this scenario the optimum is to create a Backup Copy job, which will immediately copy backups from the source as soon as they appear there. Other scenarios are also possible, but only when additional requirements are met.

Once you have defined the Backup Copy job, run it. When it completes, you will have your source backups secured in a remote place. Now you can copy those backups to a different medium.

How to restore backups to remote Proxmox server?

Now you have your source backups secured and placed in a remote site. The question arises: how to restore such a backup? You could run instant recovery, but to do this you need a commercial virtualization platform set up; Proxmox is on that list. However, you can also Export content as virtual disk, which will produce VMDK files with disk descriptors.

There is, however, one quirk you need to fix before continuing. Disk descriptors exported by Veeam are incompatible with the disk import in Proxmox. Surround the createType variable with quotes:

createType="monolithicFlat"

Copy the exported disks to the Proxmox server. Now you can create an empty VM, i.e. without disk drives and possibly even without network adapters at first. Import the disks into this newly created VM with the qm utility. Then add the drive to the VM and change its boot order. You are good to go.
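
A sketch of those qm steps (the VM ID 9001, storage name “vms1” and file name are assumptions; qm importdisk prints the exact volume name to use in qm set):

qm create 9001 --name restored-vm --memory 4096 --cores 2
qm importdisk 9001 exported-disk.vmdk vms1 --format qcow2
qm set 9001 --scsi0 vms1:vm-9001-disk-0 --boot order=scsi0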

To recap the whole procedure:

  • Export content as virtual disks
  • Fix createType variable in disk descriptor
  • Copy disk to Proxmox server
  • Create empty VM
  • Import disks into new VM
  • Configure VM and run it

Keep in mind that redundant backup is a must.

Recovering Proxmox VM from failed HDD

Due to the previous failure of a Goodram SSD drive I was forced to use a brand new 1TB HDD from Toshiba. That was not a problem, because the system running on it has mainly been doing writes with not too many reads. My SSD drive had some performance drops, possibly because it ran off the same power socket shared with some DIY tools in the garage. Now there is no power socket sharing, but I think I may have closed the server lid with too much force, so even the brand new HDD failed.

Proxmox reported failure of disk access directly on the virtual machine:

The drive disappeared from the server. I remounted it and rebooted, still to no avail. I cleaned the connections a little bit and blew on the vent hole of the drive. All that without success. So I used my LogiLink adapter to connect the drive to my workstation. The drive spun up, which means it is at least mechanically working.

I connected the drive to another Proxmox server through USB, and then it magically popped up, available again.

A quick look at the SMART values showed no disaster: in particular, no read errors and no reallocations. So it might be that the drive itself is fine, although the filesystem inside is struggling.

So the plan is to use the testdisk utility to read raw files from the drive regardless of whatever problems there are with the partition table. We can check the partition scheme with parted, fdisk and a few other similar tools.

Just run testdisk (or install it with apt install testdisk):

Select failed drive:

Select partition table type:

Analyze:

Press P to list files:

Now you can navigate through the filesystem. In my case this is possible because the drive itself seems to be almost fine and the problem lies within the filesystem. In other cases, like when the drive drops out and makes weird noises, your mileage may vary.

Having qcow files holding your VM disk image, you can then import them into another VM created without a disk:

qm importdisk VMID FILE DATASTORE

Remember to run the datastore in at least a mirrored setup (with md or zfs) and have proper backups. Although I had some backups here, I decided to give recovering those files a try, as it might be a lesson learned for future real-world cases.
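
A minimal mirrored-pool sketch with ZFS (the pool and device names are assumptions):

zpool create -f tank mirror /dev/sdb /dev/sdc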

Germany BSI abuse message prevention

In case of an abuse message from the BSI (in Germany) about an exposed rpcbind/portmapper, the following units must be stopped and disabled:

systemctl disable rpcbind.target
systemctl disable rpcbind.socket
systemctl disable rpcbind.service
systemctl stop rpcbind.target
systemctl stop rpcbind.socket
systemctl stop rpcbind.service

Verify in 2 ways:

rpcinfo -T udp -p 192.168.45.67

telnet 192.168.45.67 111

Simply dropping the traffic with Suricata IPS is not sufficient, because the signatures do not cover all possible traffic on port 111, and in most cases the port can still be enumerated.
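
You can additionally confirm on the host itself that nothing listens on port 111 anymore:

ss -tlnup | grep ':111'     # no output means rpcbind is really gone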

Dual WAN with failover in pfSense

Once in a while there is an outage of my main internet connection. In order to keep everything up and running we can (fortunately) set up a secondary WAN over an LTE connection. Start with connecting your device (e.g. an LTE router) over an ethernet cable to the pfSense box. Then, in pfSense itself, go to System – Routing – Gateways and add a secondary one. Be sure to first activate your secondary interface under Interfaces. You cannot have the same monitor IP on both gateways, so point each at a different well-known address. Then go to Gateway Groups and configure as follows:

At this point you should have both gateways up and running. You can try to simulate an outage either by unplugging the cable from the box or by selecting “Mark gateway as down” on either of the gateways.

Compatibility of Suricata IPS on Proxmox

For non-users of either Proxmox or Suricata: the former is a virtualization appliance which helps fire up virtual machines as well as LXC containers, and the latter is a network traffic security system which is able to identify (IDS mode) or even block (IPS mode) malicious traffic. Suricata works just fine on Proxmox, which is usually installed on Debian Linux, but sometimes there are hardware/software compatibility issues, which I'm going to tell you about right now...

Having a Proxmox server exposed in public space is really not the best option. However, if there is no chance of getting dedicated network hardware, then hiding your box from the world is the only reasonable way. There is of course the possibility to set up a Proxmox cluster with only one server exposed and the rest reachable only through a private link (e.g. VLANs on a vSwitch at Hetzner). But you will still be left with at least one server which needs to be accessible from the outside.

Note: without dedicated networking hardware you can try setting everything up offline with a KVM console (with a private link used only for cluster communication), but this way, if something goes wrong, you will be left waiting in a queue to access it, as resources are often limited, just as they are at Hetzner. Usually KVM access is granted within 15–60 minutes from request time.

So in case you have your box exposed, you need to hide it somehow from malicious traffic. I prefer to disable RPC and SSH, enable 2FA for UI authentication, and finally install Suricata IPS directly on Debian. Of course, if you have some VMs inside Proxmox (and you will have), you can install Suricata on them too, for example on pfSense, where it is conveniently prepackaged. Installation is straightforward but… it relies on Linux kernel features which need proper drivers and hardware compatibility. That means it is going to run on almost all modern hardware, but not on all of it. There are some issues with onboard NICs and the Proxmox custom kernel picking up different drivers. The official documentation states that we should load the nfnetlink_queue module, but the most important thing on such problematic hardware is to enable and start the nftables service. Without it, Suricata will pick af-packet mode, but we are interested in nfqueue mode, as it supports blocking by default.
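
Not an official recipe, just a sketch of the moving parts under those assumptions (the queue number and the catch-all input rule are mine; shape the ruleset to your own policy):

modprobe nfnetlink_queue
systemctl enable --now nftables

# divert inbound traffic to NFQUEUE 0 so Suricata can issue verdicts;
# the bypass flag lets packets through when Suricata is not running
nft add table inet filter
nft 'add chain inet filter input { type filter hook input priority 0; policy accept; }'
nft add rule inet filter input counter queue flags bypass to 0

# run Suricata attached to that queue
suricata -q 0 -c /etc/suricata/suricata.yaml -D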

Suricata system service is configured to run in af-packet mode by default, you can check service system files. In case startup scripts recognize that there are overrides it will start in different mode, for instance in nfqueue mode. On problematic setups most probably you might need to customize it a little bit. And basically that’s all, but you are not going to read it anywhere in official documentation. So hopefully with these small things you will get big benefits.