min.io server behind NGINX reverse-proxy

The most recent min.io server release requires one additional configuration step compared to releases from past years. With min.io on one box and NGINX on another, you need to set up a reverse proxy, which is a straightforward operation. Just remember to add the proper headers so that the hostname and scheme are passed to the min.io box. This is all described in the documentation.

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
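
For context, a minimal sketch of where those headers fit in the NGINX site configuration (the server name, upstream address and certificate paths are placeholders, not taken from the original setup):

server {
    listen 443 ssl;
    server_name your.url;

    ssl_certificate     /etc/ssl/your.url/fullchain.pem;
    ssl_certificate_key /etc/ssl/your.url/privkey.pem;

    location / {
        proxy_pass http://minio-box:9000;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
    }
}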

But… you are required to put the following into a min.io configuration file:

MINIO_SERVER_URL="https://your.url/"

This should be put in bold letters, because without it you can upload artifacts into buckets but will not be able to download them with a share link, due to a checksum error. That checksum depends on the domain name. The default min.io installation offers the console and the API on one port only in theory. If you go for the console, you are redirected to a temporary port that is only valid until the next restart of the server. So the API will work flawlessly on a public domain, but the console in the default installation will not. Please keep that in mind.
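
One way to avoid the temporary console port is to pin it explicitly. A minimal sketch of the environment file, assuming a systemd-based installation that reads /etc/default/minio (the console URL and port are assumptions, not from the original setup):

# /etc/default/minio
MINIO_SERVER_URL="https://your.url/"
# pin the console to a fixed port instead of a random one picked at startup
MINIO_BROWSER_REDIRECT_URL="https://console.your.url/"
MINIO_OPTS="--console-address :9001"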

DEGRADED ZFS pool, hard disk failure

I own a preowned HP z800 workstation. As purchased, it contained an HDD – Western Digital WD5000AAKX-75U6AA0, SATA III, 500GB, 7200 RPM, 16MB of cache. Unfortunately it passes SMART and does not show a wearout metric. However, going into the detailed information we get:

Raw_Read_Error_Rate has a positive value of 11, with the threshold set to 51. With 11 685 hours of runtime it should be understandable that it might break, and it actually did. There is only one Current_Pending_Sector, which means it is waiting to be remapped or reallocated. But will that happen anytime soon?

I’m unable to clone, migrate or replicate the VM to another server. ZFS reports that the pool is in a degraded state; you can see this with the zpool status -v command. It says the error is unrecoverable, and most probably it is. I’ve tried zpool scrub river to no avail. There is a problem with the VM-104 disk. Still, the aforementioned VM is accessible by console and works just fine.
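
For reference, the commands involved (the pool name river comes from this setup; zpool clear is an extra step I would only run after accepting the errors, it is not something the original troubleshooting relied on):

zpool status -v river    # show pool health and the files affected by errors
zpool scrub river        # re-read all data and verify checksums
zpool clear river        # reset the error counters once you accept the damage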

This VM is a Redash installation, and it is the last VM left on that drive, waiting for better times. As this is a home lab setup, it makes use of any devices I have available. Not all of them are fully functional, as you can see. You always need to have backups, replicated VMs and redundantly configured RAID.

I was unable to migrate the VM, unable to replicate it, and unable to back it up.

I ended up identifying 4 or more bad blocks, but because it is ZFS there are few tools for filesystem checks. Should a ZFS pool, in theory, be able to recover from such a failure? Unless you use a mirrored drive setup, that is not the case. I was thinking about overwriting these blocks, but left it as it was. The drive is now decommissioned.

Docker Registry on Ubuntu 20.04

Docker Registry can be useful in various scenarios, but the most common is container orchestration software such as Kubernetes, OpenShift/OKD or Nomad. In an example environment you could have a GitLab CI pipeline configured to build a Docker image and push it to your registry, and then let the orchestration software deploy it to a cluster.

In order to install an internal, private Docker Registry you need to install the Docker package first.

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker

Then you need to grab a certificate and start the Docker Registry container:

sudo docker run -d --restart=always --name registry \
  -v /home/user/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certfile.pem \
  -e REGISTRY_HTTP_TLS_KEY=/certs/keyfile.key \
  -p 443:443 \
  registry:2

This way we start a registry container which will always be up and running. We bind it to port 443, so we need a valid certificate and key for the domain. You can create an A record for either the public or the private IP address. The former is easier, but the latter would be more secure. Either way there is no need to set up the insecure-registries configuration, as the registry is served over a TLS connection.

After setting it up, just build, tag and push a container image to this registry.
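
A quick sketch of that workflow (the registry domain registry.your.url and the image name are placeholders, not from the original post):

docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.your.url/myapp:1.0
docker push registry.your.url/myapp:1.0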

Ubuntu 21 (impish) updates unavailable

I was trying to add some new packages and found out that my Ubuntu 21 is out of date. Unfortunately, some time ago I unchecked all repositories in Software & Updates, so my /etc/apt/sources.list was empty. But I managed to get it working properly by adding this line:

deb http://old-releases.ubuntu.com/ubuntu impish main universe

Next

apt update

and I was more or less back on track. It was that trivial, and all just to be able to install the default-jre package, as I am learning how to build custom deb packages.

pg_stat_statements must be loaded via shared_preload_libraries

The latest ClusterControl – version 1.9.4 – installation somehow requires pg_stat_statements to be activated in PostgreSQL 14. I have never noticed it before, neither on 9.6, 12 nor 14. This is a simple fix. Change the postgresql.conf configuration file (which should live somewhere under /etc or /var/lib in standard package installations):

shared_preload_libraries = 'pg_stat_statements'

Restart the server so the library is loaded, then activate the extension in psql:

CREATE EXTENSION pg_stat_statements;

And you are done.
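
Put together, the sequence on a Debian/Ubuntu-style install might look like this (the config path, service name and database name are assumptions based on a standard PostgreSQL 14 package, not taken from the original post):

# enable the library (requires a server restart to take effect)
echo "shared_preload_libraries = 'pg_stat_statements'" | sudo tee -a /etc/postgresql/14/main/postgresql.conf
sudo systemctl restart postgresql
# create the extension in the target database
sudo -u postgres psql -d yourdb -c "CREATE EXTENSION pg_stat_statements;"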

Mirror Git repository

If you need to mirror a remote repository and push it to another remote repo, there is one thing you should do to be able to export all remote branches and not only master.

git remote add NAME URL
# create a local tracking branch for every remote branch (assumes the source remote is called origin)
for remote in $(git branch -r | grep -v '\->'); do git branch --track "${remote#origin/}" "$remote"; done
git push --all NAME

It will push all branches to the new remote, but behaviour varies from version to version, so better double-check the result.
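
If you do not need a working copy at all, a bare mirror clone is a shorter alternative (a different technique than the loop above; URL and NEW_URL are placeholders):

git clone --mirror URL repo.git
cd repo.git
git push --mirror NEW_URL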

Extending CentOS7 partition on Azure

Using CentOS7 templates on Azure can leave you without LVM or swap and with partitions that do not auto-extend. Some other templates have such a feature, but not all of them. In case you assigned a bigger disk in the Azure portal, you still need to do a few things. First delete the /dev/sda2 partition:

fdisk /dev/sda
d    # delete a partition
2    # pick partition number 2
w    # write the changes and exit

Then reboot to apply.

yum install cloud-utils-growpart -y
growpart /dev/sda 1
xfs_growfs /dev/sda1
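
A quick way to confirm the root filesystem really picked up the extra space (not part of the original procedure, just a sanity check):

lsblk /dev/sda     # partition sizes as the kernel sees them
df -h /            # filesystem size after xfs_growfs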

Zabbix Proxy 4.2 installation years later

An old Zabbix Proxy 4.2 installation can be quite confusing, as you cannot grab the latest proxy to work with an old Zabbix server. Fortunately there is an archive packages repository at repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix/. You need to keep track of which operating system release should be used. Refer to the server requirements, as it will be the same. So it’s Ubuntu 14, 16 or 18. First you should grab the repository package, which is named zabbix-release. For example:

zabbix-release_4.2-2+bionic_all.deb

After installing it with the dpkg command, you can install the particular proxy package with either MySQL, PostgreSQL or SQLite3 support.

zabbix-proxy-mysql_4.2.1-1+bionic_amd64.deb
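
Roughly, the download-and-install steps could look like this (the URLs are assembled from the repository path above; exact filenames and versions in the archive may differ, so treat this as a sketch):

wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix/zabbix-release_4.2-2+bionic_all.deb
sudo dpkg -i zabbix-release_4.2-2+bionic_all.deb
sudo apt update
wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix/zabbix-proxy-mysql_4.2.1-1+bionic_amd64.deb
sudo dpkg -i zabbix-proxy-mysql_4.2.1-1+bionic_amd64.deb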

If you choose MySQL then you need to manually install database package:

sudo apt-get install mysql-server

Then:

service mysql start
service mysql status
mysql
create database zabbix_proxy;
create user zabbix@localhost identified by 'yourpassword';
grant all privileges on zabbix_proxy.* to zabbix@localhost;
quit;

Now you are good to go with loading the database schema (the proxy package ships it as schema.sql.gz):

zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy

It could take a few minutes to complete. Now it’s time to set the configuration parameters in the /etc/zabbix/zabbix_proxy.conf file:

DBName=zabbix_proxy
DBUser=zabbix
DBPassword=yourpassword
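
The proxy also has to know which Zabbix server it reports to, so at minimum these two parameters need values as well (the server hostname and proxy name below are placeholders, not from the original post):

Server=zabbix-server.your.url
Hostname=zabbix-proxy-01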

Should something go wrong, check the logs at /var/log/zabbix/zabbix_proxy.log.