Azure FreeBSD (pfSense) unable to pick boot partition after disk resize

Following my previous struggles with pfSense (FreeBSD) on Azure: after a disk resize, at some point Microsoft starts adding another drive, a temporary one, which I do not need at all. It also seems that the Netgate pfSense template is not quite compatible with cloud-init, as the platform tries to execute systemctl, which is not present on FreeBSD.

The problem is that FreeBSD then cannot pick the partition it should boot from. The fix is pretty simple: edit /etc/fstab and point the root entry at the real device instead of a virtual label; in my case it was /dev/da0s2a. You can find the proper one during the boot procedure by pressing the question mark, which makes the loader list all available partitions. Try your best; in my case it was the first one on the list.
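
As an illustration only, assuming the root filesystem sits on /dev/da0s2a as it did in my case, the corrected /etc/fstab entry would look something like this:

# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/da0s2a     /               ufs     rw      1       1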

What causes the problem? From my personal perspective, not being compatible with cloud-init leaves the Netgate template vulnerable to unpredictable changes. But here we are in a public cloud environment, and we should not be surprised by that.

Resize drive and partition on FreeBSD 12 (pfSense)

Recently I described how to increase the drive, partition and filesystem on FreeBSD in an Azure virtual machine. It turned out that this scenario is a little bit different for a virtual machine running on top of VMware. First of all, with or without vm-tools, I was not able to get the updated drive size with:

camcontrol rescan all

It reports that the rescan was successful, but it does not actually apply the new values. However, we can check the new drive size with the following command:

camcontrol readcap da0 -h    # daX

If you get the same old values on your system, then you need to reboot.

There is also a difference when swap space is present. First, disable it:

cat /etc/fstab               # pick swap device to use
swapoff /dev/label/swap0     # as an example

Then remove the swap partition, as the freebsd-ufs partition needs to be one contiguous space:

gpart delete -i 2 da0s1      # refer to gpart show 

Then, as previously, run the resizing commands:

gpart resize -i X da0        # X would be probably 1
service growfs onestart

Now you are done.

WordPress migration

After 2 years on DigitalOcean I’ve decided to move out. Not because it’s bad or expensive, but because I need one more feature: IDS/IPS. By default the WordPress installation on DO contains fail2ban, which is fine, but I find Suricata way more powerful.

To start with, I’ve created a backup using All-in-One WP Migration and Increase Max File Size. The former is responsible for dumping all of the site content, and the latter gives the ability to upload this dump file into the new WordPress installation.

But… you also need to edit one file in your new WP setup:

/etc/php/X.Y/apache2/php.ini

With the following:

post_max_size = 2048M
upload_max_filesize = 2048M

Then restart Apache, for instance:

sudo service apache2 restart
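
You can double-check the edited values with a quick grep (reusing the placeholder path from above):

grep -E '^(post_max_size|upload_max_filesize)' /etc/php/X.Y/apache2/php.ini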

On the plugin settings page select 2 GB. The migration plugin should be installed on both the source and the target system. Before uploading the dump file into the new location, be sure to check that the plugin version is the same as on the source system. Then upload the dump file and you should be all right with almost everything. I’ve disabled and deleted all the plugins except for NextGEN Gallery.

As I’ve been keeping all the backend things behind an NGINX proxy, with WP it’s the same story. Nothing special needs to be considered here; just remember to set the Host, X-Real-IP, X-Forwarded-For and X-Original-Request headers in your configuration file. TLS configuration on my NGINX is managed by certbot, so initially I provide only the configuration for port 80 and let it do the rest.
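
For reference, a minimal sketch of such a proxy block; the server name and upstream address are made-up examples, and the X-Original-Request mapping is my assumption, as it is not a standard header:

server {
    listen 80;
    server_name blog.example.com;                  # example name, certbot extends this later

    location / {
        proxy_pass http://10.0.0.5:80;             # example WordPress backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Original-Request $request_uri;   # assumed mapping
    }
}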

In this particular migration scenario I’ve encountered one problem: an HTTP 404 error on all posts and categories. To overcome it, go to Settings.Permalinks and hit the Save button. That’s all; maybe there is some issue with the backup tool.

Proxmox setting locale failed

If you log in to Proxmox’s shell and see an error stating that “setting locale failed”, then execute the following command to get rid of that message:

dpkg-reconfigure locales

I’m not quite sure why it happens, but I see it almost all the time.
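
If the message keeps coming back, an alternative on Debian-based systems is to enable and generate a default locale explicitly; en_US.UTF-8 below is just an example:

sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen   # enable the locale
locale-gen                                                            # generate it
update-locale LANG=en_US.UTF-8                                        # set as system default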

Recover pfSense from out of disk space and config.xml corruption

I use Netgate pfSense Plus version 22, based on FreeBSD 12-STABLE. I encountered the problem of running out of disk space because of packet logging enabled in Suricata: all of a sudden there was high traffic, and therefore heavy log production, and disk space utilization went over 100%. This corrupted the /conf/config.xml file: no VPN, Suricata or pf configuration was available from the UI, and the dashboard was broken as well. However, there is a /conf/backup folder with backed-up configuration files. I’ve taken the last known good one with a proper file size and put it in place. After a reboot it works just fine, although I’m not quite sure whether other issues remain.
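
A rough sketch of that restore; the timestamped file name is an example, pick yours by date and size:

ls -lh /conf/backup/                                    # pick the last known good file
cp /conf/backup/config-1663000000.xml /conf/config.xml  # example name
reboot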

To be sure you do not run out of space again, increase the disk size and reconfigure Suricata packet logging with file size limits that fit the available space. If you are on bare metal, then chances are slim to none that you put too small a drive in there. In the case of virtualization, go to your configuration panel and increase the drive size. Then go to the serial console or SSH:

gpart show
gpart resize -i 2 da0
gpart show
service growfs onestart
df -h

After this, both the partition and the file system should be extended.

Ruby on Rails vs PostgreSQL 14 partitioning primary keys

Starting from PostgreSQL 10 a new type of partitioning is available: declarative partitioning. Before that we had table inheritance, which is also good but has some limitations. With declarative partitioning the user does not need to specify the partition name while inserting data. To create a partitioned table:

CREATE TABLE tablename 
( 
  id BIGSERIAL NOT NULL, 
  created_at TIMESTAMP WITHOUT TIME ZONE NOT NULL,
  CONSTRAINT tablename_pkey PRIMARY KEY (id, created_at)
) PARTITION BY RANGE(created_at);

There are a few things worth explaining. The first one is the BIGSERIAL data type: it is the big-integer form of the serial type, with automatic sequence creation, which is very convenient. The second is PARTITION BY RANGE(columnname): it defines that the data will be spread across partitions based on the created_at date range, which positively affects query performance.

To create a partition:

CREATE TABLE tablename_2022_09 PARTITION OF tablename
    FOR VALUES FROM ('2022-09-01 00:00:00') TO ('2022-10-01 00:00:00');

Note that the TO bound is exclusive, so pointing it at the first moment of the next month avoids leaving a gap between 23:59:59.99 and midnight.
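
This is what “no partition name while inserting” means in practice: PostgreSQL routes the row to the proper partition on its own.

INSERT INTO tablename (created_at) VALUES ('2022-09-15 12:00:00');
-- id comes from the BIGSERIAL sequence; the row lands in tablename_2022_09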

With table inheritance I’ve used partitioning with over 1000 tables. Generally speaking, PostgreSQL, even in older versions, can easily handle 100 000 tables on a single server with a data volume of around 10 TB. However, with declarative partitioning you need to remember that a query executed on the master table takes a shared lock on each and every partition, and that costs memory. Although it might be a slight simplification, you need to increase the max_locks_per_transaction parameter to fit the number of partitions created and attached. If you do not need online access to some old data, or you created future partitions upfront, then either do not attach them or detach them when unused.
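
For reference, detaching an old partition is a single statement, and the lock budget sits in postgresql.conf (the value below is an example, not a recommendation):

-- postgresql.conf: max_locks_per_transaction = 1024
ALTER TABLE tablename DETACH PARTITION tablename_2022_09;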

The last and most important thing relates to the multi-column primary key. Ruby on Rails and its ActiveRecord library do not support it by default. You could use a gem to overcome this limitation, but all you really need is to point at the specific column in the model definition:

self.primary_key = :id

It works just fine.
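
For context, a minimal model sketch; the class name Tablename is made up to match the example table:

class Tablename < ApplicationRecord
  # the database constraint is composite (id, created_at), but
  # ActiveRecord is told to treat id alone as the primary key
  self.primary_key = :id
end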

Allow .local domain suffix to be resolved

Chances are you may need to configure some domains using the .local suffix. Unfortunately, this suffix is reserved by RFC 6762 (multicast DNS). On certain Linux distributions you will not be able to resolve such domains using the DNS server you set either on the machine or in the cloud settings (for instance in Azure).

How do you know that the DNS query is not reaching your desired DNS server? If you use named, first enable query logging with the following command:

rndc querylog on

Now DNS queries from the clients that use this particular DNS server as their resolver will be visible in the system log file, such as /var/log/messages. Then try a ping or curl against the desired domain with the .local suffix and check whether it appears in the DNS server log. You can also force a DNS query to reach that DNS server by using the dig command:

dig yourdomain.local @yourdnsserver
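
On the server side you can watch the queries arrive in real time (the log path may differ per distribution):

tail -f /var/log/messages | grep -i query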

If it’s present in the DNS server log file, then you can be confident that reconfiguring your client machine will make it work properly. To do this on Ubuntu 20.04, a few things are required. The first one is to have the IP configuration fixed (a YAML file in the /etc/netplan/ directory):

network:
    ethernets:
        eth0:
            dhcp4: no
            addresses:
            - 10.99.99.10/24
            gateway4: 10.99.99.1
            nameservers:
              addresses: [10.99.99.20]
            dhcp4-overrides:
                route-metric: 100
            dhcp6: false
            match:
                driver: hv_netvsc
                macaddress: 60:45:bd:94:4a:85
            set-name: eth0
    version: 2

Also, as suggested in the netplan configuration file, to disable the cloud-init network feature create a file at /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following content:

network: {config: disabled}

Then reapply network configuration:

sudo netplan try

For me, personally, it is sometimes overcomplicated compared to the CentOS distribution. Now, with fixed settings that should not be overwritten by other processes, you can proceed with disabling the local DNS stub listener:

cd /etc/
sudo ln -sf ../run/systemd/resolve/resolv.conf        # point /etc/resolv.conf at the real resolver configuration
cd /etc/systemd/
sudo sed -i -e 's/#DNSStubListener=yes/DNSStubListener=no/' resolved.conf   # turn off the 127.0.0.53 stub
sudo systemctl restart systemd-resolved

You can put your own desired DNS server in the /etc/systemd/resolved.conf file. After all of these operations, try ping, curl or wget with your .local domain and check whether the query appears in the DNS server log file.
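
For completeness, the relevant resolved.conf entries, reusing the example server 10.99.99.20 from the netplan file above:

# /etc/systemd/resolved.conf
[Resolve]
DNS=10.99.99.20
DNSStubListener=no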

Multiple ZFS pools on a single drive

Imagine a hypothetical scenario: you have two 512 GB drives and want to use Proxmox ZFS VM replication onto a second server with one 1 TB drive. The solution is quite simple. Using fdisk, create two primary partitions on the bigger drive, then go to Disks.ZFS.CreateZFS and you will be able to select a partition for each pool. One downside of such a setup is that ZFS likes to have a whole drive for its own; please keep in mind that performance may vary.
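
A rough sketch of the partitioning step, assuming the 1 TB drive shows up as /dev/sdb (the device name is an example):

fdisk /dev/sdb      # create two primary partitions, roughly 512 GB each
lsblk /dev/sdb      # verify /dev/sdb1 and /dev/sdb2 exist before creating the pools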

disk/partition ‘/dev/sdX’ has a holder (500)

Having LVM on a disk causes the system to automatically activate such volumes. I installed a used drive with Ubuntu on it into my Proxmox server. Using Proxmox’s UI you will not be able to wipe this drive, because the volume group has already been auto-activated. You need to log into the shell and then:

vgs # pick VG on your interest
vgchange -a n vgname # pass your VG to deactivate it

After this, you are ready to wipe the drive from the UI.