Disabling uncategorized internal Suricata rules in pfSense

I want to enable only particular rule categories. I do not want all of these internal Suricata rules, because they cover too broad a variety of cases and produce loads of false positives. If you are doing deep traffic analysis they are fine, but when you see “STUN Binding Request On Non-Standard High Port” and know it is just your P2P camera on the LAN, it is worth disabling all of them at once. Doing that on a pfSense installation is a little tricky.

Go to Interfaces and select the desired one. Make sure to uncheck all Snort or ET rule sets first. Then go to WAN Rules and select Active Rules in the category dropdown. Click Disable All. Now you are running without any of the default rules and can enable only the categories you are most interested in. For example, you can start with the following:

  • attach_response
  • botcc
  • 3coresec
  • ciarmy
  • compromised
  • deleted
  • dos
  • exploit / exploit_kit
  • hunting
  • malware
  • phishing
  • scan
  • shellcode
  • sql
  • threatview_CS_c2
  • tor
  • user_agent
  • web_client
  • web_server
  • web_specific_apps
  • worm
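Outside the pfSense UI, plain Suricata offers a similar mechanism via suricata-update. A hypothetical `disable.conf` (the SID and category names below are examples, not taken from a real ruleset) could look like this; the pfSense Suricata package exposes a comparable mechanism on its SID Mgmt tab:

```
# disable.conf for suricata-update -- illustrative entries only
2016149                      # disable a single rule by SID
group:emerging-info.rules    # disable a whole rules file
re:STUN Binding Request      # disable every rule whose message matches the regex
```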

Multiple ZFS pools on single drive

Imagine a hypothetical scenario: you have two 512 GB drives and want to use Proxmox ZFS VM replication onto a second server that has one 1 TB drive. The solution is quite simple. Using fdisk, create two primary partitions on the bigger drive, then go to Disks → ZFS → Create ZFS and you will be able to select a partition for each pool. One downside of such a setup is that ZFS likes to have a whole drive to itself, so keep in mind that performance may vary.
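A minimal sketch of the partitioning step, assuming the 1 TB disk is /dev/sdb and is empty (the device path and pool names are placeholders; sgdisk is used here instead of interactive fdisk so the commands can be shown non-interactively):

```
sgdisk -n 1:0:+512G -t 1:bf01 /dev/sdb   # first 512 GB partition, ZFS type code
sgdisk -n 2:0:0     -t 2:bf01 /dev/sdb   # second partition, rest of the disk
# The Proxmox UI (Disks → ZFS → Create ZFS) can now target each partition,
# or equivalently from the shell:
zpool create tank1 /dev/sdb1
zpool create tank2 /dev/sdb2
```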

disk/partition ‘/dev/sdX’ has a holder (500)

Having LVM on a disk causes the system to activate its volume groups automatically. I installed a used drive, which still had Ubuntu on it, into my Proxmox server. From the Proxmox UI you will not be able to wipe this drive, because the volume group has already been auto-activated. You need to log into a shell and then:

vgs                   # list volume groups, pick the one of interest
vgchange -a n vgname  # deactivate it (substitute your VG name)

After this, you are ready to wipe the drive from the UI.
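To confirm the holder is actually gone before wiping, you can check the block device tree (sdX is a placeholder for your drive):

```
lsblk /dev/sdX   # the LVM (dm-*) children should no longer be listed
dmsetup ls       # no device-mapper entries for the deactivated VG should remain
```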

Package is in a very bad inconsistent state

Aborting an Ubuntu package update run via Ansible left the libatk-wrapper-java-jni package in a weird state. My playbook installs default-jre, and it was painfully slow, so I thought there was a lockup, but there was not. The VM simply performed poorly at that moment, probably because it came from an HDD with bad blocks and got migrated to another drive. It works, but I am not 100% sure it is healthy. So, in case you have a broken package installation, try:

sudo apt-get install --reinstall libatk-wrapper-java-jni

Now you are good to go with autoclean, update and possibly upgrade, to check whether there is really no problem still dangling on your system. On mine, everything is fine now.
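If the reinstall alone does not help, the standard recovery steps for an interrupted dpkg/apt run are worth trying first:

```
sudo dpkg --configure -a   # finish configuring any half-installed packages
sudo apt-get install -f    # resolve broken dependencies
sudo apt-get autoclean && sudo apt-get update
```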

UnsupportedClassVersionError

Changing Java language level to run compiled code on older runtimes

I am working on a Java project. I use IntelliJ IDEA and decided to go with Oracle’s OpenJDK 18. But this runtime is available by default only during compilation in the IDE; on the system I have OpenJDK 11. Trying to run code compiled by JDK 18 on JDK 11 gives the following message:

java.lang.UnsupportedClassVersionError: Main has been compiled by a more recent version of the Java Runtime (class file version 62.0), this version of the Java Runtime only recognizes class file versions up to 55.

To overcome this, go to the module settings (or Project Structure) and change Language Level to a lower value. If you compile with JDK 18, the level will be set to 18. Switch it to 11 (“Local variable syntax for lambda parameters”) and compile again. Now you will be able to launch your application even with the older JDK. As a rule of thumb, the class file major version is the Java release plus 44, which is why the error message pairs version 62 with JDK 18 and version 55 with JDK 11.
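Outside the IDE, javac can target an older runtime directly with the `--release` flag (available since JDK 9). A small sketch, with the version arithmetic from the error message spelled out:

```shell
# Compile for an older runtime from the command line:
#   javac --release 11 Main.java
# Class file major version = Java release + 44:
echo $((18 + 44))   # 62 -- what JDK 18 emits by default
echo $((11 + 44))   # 55 -- the maximum JDK 11 accepts
```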

min.io server behind NGINX reverse-proxy

The most recent min.io server release requires one additional configuration step compared with releases from past years. Having min.io on one box and NGINX on another requires setting up a reverse proxy, which is a straightforward operation. You need to remember to add the proper headers to pass the hostname and scheme to the min.io box. This part is described in the documentation.

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
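For context, a minimal server block those headers would live in could look like this (the domain, certificate paths and upstream address are placeholders):

```
server {
    listen 443 ssl;
    server_name minio.example.com;

    ssl_certificate     /etc/nginx/certs/minio.pem;
    ssl_certificate_key /etc/nginx/certs/minio.key;

    location / {
        proxy_pass http://10.0.0.5:9000;   # the min.io box
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
    }
}
```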

But you are also required to put the following into the min.io configuration (environment) file:

MINIO_SERVER_URL="https://your.url/"

This should be printed in bold letters, because without it you can upload artifacts into buckets but will not be able to download them via a share link, due to a checksum error: that checksum depends on the domain name. A default min.io installation offers the console and the API on one port only in theory. If you open the console, you are redirected to a temporary port, valid only until the next restart of the server. So the API will work flawlessly on a public domain, but the console in a default installation will not. Keep that in mind.
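A sketch of the relevant environment file, assuming a systemd-style install that reads /etc/default/minio (the path, domain and port are assumptions; `--console-address` pins the console to a fixed port instead of a random temporary one):

```
# /etc/default/minio -- illustrative values
MINIO_SERVER_URL="https://minio.example.com/"
MINIO_OPTS="--console-address :9001"
```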

DEGRADED ZFS pool, hard disk failure

I own a pre-owned HP Z800 workstation. As purchased, it contained an HDD: a Western Digital WD5000AAKX-75U6AA0, SATA III, 500 GB, 7200 RPM, 16 MB of cache. It still passes SMART and does not report a wearout metric. However, looking at the detailed attributes, we get:

Raw_Read_Error_Rate has a normalized value of 11 while its threshold is set to 51, i.e. the attribute is below its failure threshold. With 11,685 hours of runtime it should be understandable that the drive might break, and it actually did. There is one Current_Pending_Sector, which means a sector is waiting to be remapped or reallocated. But will that happen anytime soon?
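To re-check those attributes yourself with smartmontools (sdX is a placeholder for the drive):

```
smartctl -A /dev/sdX | grep -E 'Raw_Read_Error_Rate|Current_Pending_Sector|Power_On_Hours'
```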

I am unable to clone, migrate or replicate the VM to another server. ZFS reports the pool as degraded; you can see this with the zpool status -v command. It says the error is unrecoverable, and most probably it is. I tried zpool scrub river to no avail. There is a problem with the VM-104 disk, yet the aforementioned VM is still accessible via console and works just fine.
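The commands involved, using the pool name from above (on a single-disk pool, the scrub can only detect the damage, not repair it):

```
zpool status -v river   # shows the DEGRADED state and the files with errors
zpool scrub river       # re-reads all data; repairs only if redundancy exists
zpool clear river       # resets the error counters once you accept the damage
```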

This VM is a Redash installation, and it is the last VM left on that drive, waiting for better times. As this is a home-lab setup, it makes use of whatever devices I have available, and not all of them are fully functional, as you can see. You always need backups, replicated VMs and redundantly configured RAID.

I was unable to migrate the VM, unable to replicate it, and unable to back it up.

I ended up identifying four or more bad blocks, but because this is ZFS, there are few tools for filesystem checks. Should a ZFS pool, in theory, be able to recover from such a failure? Only with redundancy: in a mirrored setup this would not be a problem, but a single-disk pool has no second copy to repair from. I was thinking about overwriting these blocks, but left everything as it was. The drive is now decommissioned.

Docker Registry on Ubuntu 20.04

A Docker Registry can be useful in various scenarios, the most common being container orchestration software such as Kubernetes, OpenShift/OKD or Nomad. In an example environment, you could have a GitLab CI pipeline configured to build a Docker image and push it to your registry, and then let the orchestration software deploy it to a cluster.

In order to set up an internal, private Docker Registry, you need to install the Docker package first.

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker

Then you need to obtain a certificate and start the Docker Registry container:

sudo docker run -d --restart=always --name registry \
  -v /home/user/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certfile.pem \
  -e REGISTRY_HTTP_TLS_KEY=/certs/keyfile.key \
  -p 443:443 \
  registry:2

This way we start a registry container that will always be up and running. We bind it on port 443, so we need a valid certificate and key for the domain. You can create an A record pointing at either a public or a private IP address; the former is easier, but the latter is more secure. Either way, there is no need to configure insecure registries, since the registry is served over TLS.

After setting up, just build, tag and push a container image to this registry.
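The build-tag-push cycle looks like this (registry.example.com and myapp are placeholders for your domain and image name):

```
docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
docker pull registry.example.com/myapp:1.0   # verify from another host
```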