Configure outgoing mail on Redash

Installing Redash is straightforward if you use the https://github.com/getredash/setup repository. Just run the setup script to download, install and configure all the required packages. But there is one thing missing from the default setup: mail configuration. First install Redash with all the default settings and then edit the /opt/redash/env file:
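
A minimal sketch of that initial step, assuming a fresh Debian/Ubuntu host (the script is named setup.sh in the repository):

git clone https://github.com/getredash/setup.git
cd setup
sudo ./setup.sh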

REDASH_MAIL_SERVER
REDASH_MAIL_PORT
REDASH_MAIL_USERNAME
REDASH_MAIL_PASSWORD
REDASH_MAIL_DEFAULT_SENDER
REDASH_MAIL_USE_TLS=true
REDASH_MAIL_USE_SSL=false
REDASH_HOST
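
For instance, a filled-in fragment of /opt/redash/env might look like this (all values are placeholders):

REDASH_MAIL_SERVER=mail.example.com
REDASH_MAIL_PORT=587
REDASH_MAIL_USERNAME=redash@example.com
REDASH_MAIL_PASSWORD=secret
REDASH_MAIL_DEFAULT_SENDER=redash@example.com
REDASH_MAIL_USE_TLS=true
REDASH_MAIL_USE_SSL=false
REDASH_HOST=https://redash.example.com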

After setting those values run docker-compose up -d to recreate the containers. Use TLS and a local IP address if you host your mail server on premises. Now you should be able to send mail messages from your Redash instance.
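
If your Redash version ships the test helper, you can verify the configuration from the directory containing docker-compose.yml:

docker-compose run --rm server manage send_test_mail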

Private Docker Registry with pfSense-offloaded TLS connections

The benefits of running your own Docker registry are cost and privacy. But in order to use it from various places you need to secure its connection with SSL/TLS certificates. If you only have ZeroSSL or LE certificates, it can be difficult to maintain the certificate both at the ACME/HAProxy level in pfSense and a second time in the Docker registry running somewhere else.

Fortunately, there is a solution for that.

Add your Docker registry domain to the ACME certificate enrollment as usual. Run the Docker registry without giving it a certificate. Instead, configure the domain to point at pfSense, preferably at a non-WAN address. Next configure a proxy pass at the Nginx level:

server {
  listen 80; 
  server_name registry.somedomain.com;
  ignore_invalid_headers off;
  client_max_body_size 0;
  proxy_buffering off;
 
  location / {
    access_log  /var/log/nginx/registry-somedomain-com.log;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    chunked_transfer_encoding off;
    proxy_pass http://a.b.c.d:5000/;
  }
}

The most important part of it is setting X-Forwarded-Proto to https. Now you can use the Docker registry, which runs locally over plain HTTP, and forget about setting up certificate renewal at the registry level itself.
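
For completeness, the registry itself can then run without TLS on the internal host (a sketch; the volume path is just an example):

docker run -d --restart=always --name registry \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2

Clients simply push and pull against registry.somedomain.com over HTTPS, with pfSense terminating TLS in front of it.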

Proxmox LXC backup with exit code 11

If you have some LXC containers on your Proxmox server, then there is a high chance that you will get errors while backing them up. Some container templates may not support the snapshot or suspend modes. Instead you should use the stop mode. It is important to remember that during such a backup the container will be stopped, so be aware of that in case you have some encryption which could ask for a key during startup.
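
For example, backing up a single container in stop mode from the command line could look like this (the CT ID and storage name are placeholders):

vzdump 101 --mode stop --storage local --compress zstd

The same mode can be selected per backup job in the web UI.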

Proxmox 8 on Scaleway

Bare metal servers offered at Scaleway come at reasonable prices, comparable to those at Hetzner. You even get a Proxmox installation by default, contrary to Hetzner where you need to install Debian first. To set up Proxmox with one public IP and pfSense with another one, you need to order a Flexible IP and request a virtual MAC address for it. Remember that for Scaleway's flexible IPs the gateway is fixed and it is 62.210.0.1. Then for the network configuration:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address x.x.x.x/24
        gateway y.y.y.y
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        pointopoint y.y.y.y
        hwaddress a:b:c:d:e:f
        up route add -net a.a.a.a netmask b.b.b.b gw y.y.y.y dev vmbr0
        up ip route add z.z.z.z/32 dev vmbr0
# PUBLIC

auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/8
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
# LAN

x.x.x.x is the primary Proxmox public IP. y.y.y.y is its gateway (on the a.a.a.a network with the b.b.b.b netmask). z.z.z.z is then your secondary public IP (the flexible IP) for the pfSense firewall. As you can see, there are two network adapters, eno1 and eno2. The first one (eno1) is used for the public IPs and the second one (eno2) is for the internal network only. For some reason the same configuration as on Hetzner, concerning LAN networking, does not work, so there are a few changes here compared to that Hetzner configuration.
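
The virtual MAC requested for the flexible IP then has to be assigned to the pfSense VM's WAN interface attached to vmbr0, roughly like this (the VM ID and MAC are placeholders):

qm set 100 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0

Inside pfSense the WAN interface then gets z.z.z.z with 62.210.0.1 as its gateway.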

Almost private-only Proxmox cluster

If you wonder whether it is possible to have a private-only Proxmox cluster at Hetzner, then the answer is yes, almost. Of course you can order dedicated hardware to hide your boxes from public eyes, but if you are not going that way, then you can try another approach.

  • Install the first Proxmox box with a public IP, a VLAN and pfSense, as usual
  • The second step is to install another Proxmox box, also with a public IP, set up everything you need and then leave it with the VLAN only, deleting the public IP configuration. In /etc/hosts you need to set the VLAN address and reload the network interface. After this you need to work from the first box, as box number two is no longer reachable from outside
  • From the first box create the cluster and join the second box to it (from a sandbox VM with VLAN access); see the sketch after this list
  • In order to have internet connectivity from the private-only boxes you need to point their VLAN routing at the first box's VLAN address
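
The cluster creation and join steps are roughly the following (the cluster name and 10.x.x.x VLAN addresses are placeholders for your own values):

# on the first box
pvecm create mycluster

# on the second, private-only box, joining via the first box's VLAN address
pvecm add 10.x.0.1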

Configuration of the first box:

auto DEV
iface DEV inet manual

auto vmbr0
iface vmbr0 inet static
        address PUBLIC-IP/26
        gateway PUBLIC-GW
        bridge-ports DEV
        bridge-stp off
        bridge-fd 0
        pointopoint PUBLIC-GW
        up route add -net PUBLIC-NET netmask MASK gw PUBLIC-GW dev vmbr0
        up ip route add SECOND-PUBLIC-IP/32 dev vmbr0
#PUBLIC

iface DEV.4xxx inet manual
auto vmbr4xxx
iface vmbr4xxx inet static
        address 10.x.x.x/16
        bridge-ports DEV.4xxx
        bridge-stp off
        bridge-fd 0
        mtu 1400
#VLAN

Configuration of the second, private-only, box:

iface DEV.4xxx inet manual

iface DEV inet manual

auto vmbr4xxx
iface vmbr4xxx inet static
        address 10.x.x.x/16
        gateway PFSENSE-AT-1ST-BOX
        bridge-ports DEV.4xxx
        bridge-stp off
        bridge-fd 0
        mtu 1400
#VLAN

oc rsync takes down OKD master processes

It might sound a little weird, but that's the case. I was trying to set up an NFS mount in the OKD docker registry (from this tutorial). During oc rsync from inside the docker-registry container I found that the OKD master processes went down because the health check thought there was some connectivity problem. This arose because oc rsync does not have a rate limiting feature, and if it fully utilizes the local network then there is no bandwidth left for the cluster itself.

A few things taken out of the logs (/var/log/messages):

19,270,533,120  70%   57.87MB/s    0:02:19  The connection to the server okd-master:8443 was refused - did you specify the right host or port?
Liveness probe for "okd-master.local_kube-system (xxx):etcd" failed (failure): member xxx is unhealthy: got unhealthy result
okd-master origin-node: cluster is unhealthy

The transfer from the docker-registry container starts at around 200 MB/s. I'm not quite sure the network is actually capable of such speed. The problem is repeatable: after the liveness probe is triggered, the master, etcd and webconsole are restarted, which could lead to an unstable cluster. We should avoid it if possible. Unfortunately the docker-registry container is a very basic one, without ip, ifconfig, ssh, scp or any utilities which could help with transferring files out. But…

  • you can check the IP of the container in the webconsole
  • you can start an HTTP server with python -m SimpleHTTPServer on port 8000
  • you can then download the file with wget x.x.x.x:8000/file.tar --limit-rate=20000k

It is really funny that the container lacks basic tools but has Python. Set the rate in wget at a reasonable level so that the internal network will not be fully utilized. To sum up: I did not encounter such a problem on any other environment, either bare metal or virtualized, so it might be related specifically to the Microsoft Azure SDN and how it behaves under such traffic load.

Conditional Nginx logging

Logging all HTTP traffic is often unnecessary. This especially applies to websites which include not only text content but also all kinds of additional components, like JavaScript, stylesheets, images, fonts etc. You can select what you would like to log inclusively, but it is much easier to do this by conditional negative selection. First define a log format, then create a conditional mapping, and the last thing is to specify the logger with the decision variable. For instance:

log_format mylogformat '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';

map $request_uri $loggable {
    default                                             1;
    ~*\.(ico|css|js|gif|jpg|jpeg|png|svg|woff|ttf|eot)$ 0;
}

access_log /var/log/nginx/access.log mylogformat if=$loggable;

This way we are not going to log any of the additional assets and will keep only regular pages in the log. This is more useful for further traffic analysis than filtering those things out manually.
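
After adding the map and the access_log directive, a quick syntax check and reload is enough to apply it (assuming a systemd-based host):

nginx -t && systemctl reload nginx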

Mounting SSD drive as swap in VM

First of all, the SSD drive which I use here is somewhat faulty. It is a Goodram SSDPR-CX400-01T-G2 drive of 1 TB. It had been working fine for a few weeks until some construction worker caused an electric short, producing abnormal frequencies in the wires and resulting in faulty drives and memory sticks. One of the victims was this drive:

  • in CrystalDiskInfo it reports as good,
  • but in the Ubuntu Disks utility it supposedly has 1 bad block
  • badblocks -svt 0x00 /dev/sdX shows no bad blocks
  • zeroing with dd and verifying with cmp is fine (see the sketch after this list)
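
For reference, the zero-and-compare check from the last point can be done roughly like this (destructive; /dev/sdX is a placeholder for the actual device):

# overwrite the whole drive with zeros
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# cmp stops with "EOF on /dev/sdX" and reports no differing byte if every sector reads back as zero
cmp /dev/zero /dev/sdX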

This drive for sure has some issues, as at least one of the tools shows it as having a problematic bad block. Second of all, in regular use it fails to run a VM. It once switched the VM filesystem into read-only mode, and after formatting it, it refused to restore a VM from backup. So the last thing for which it might be useful is being storage for swap:

mkswap -c /dev/sdX
swapon /dev/sdX
    

Then in /etc/fstab:

/dev/sdX none swap sw 0 0
    

I set it up in the VM as a 256 GB drive. Why? Because I encountered some leaking Ruby libraries in my project and the program required way more memory than it should actually need, so the temporary solution is to increase the available memory by adding such a huge amount of swap.
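
To confirm the swap is actually picked up inside the VM:

swapon --show
free -h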

iRedMail mail server with SPF and DKIM

Having your own mail server can be useful but also sometimes dangerous. I am happy to see an appliance such as iRedMail which covers a variety of topics and forms a more or less complete solution. I picked Ubuntu 22 on Hetzner. First you create a DNS A record for your mail server, followed by an MX record pointing at that A record. Be sure to set a proper hostname in the system. You can check it with:

hostname -f
    

Ensure you have it set also in /etc/hosts and /etc/hostname. Next download the iRedMail installer and run the iRedMail.sh script. It will prompt for various things, but in my case I was missing the dialog package, so be sure to install it before running the installation. I chose the PostgreSQL and NGINX backends as I am more familiar with them than with MariaDB and Apache. Once the installer finishes, you are required to reboot your system to apply all the settings.
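
The download-and-run steps are roughly the following (the 1.6.8 version is only an example; check the project's releases page for the current one):

apt install dialog
wget https://github.com/iredmail/iRedMail/archive/refs/tags/1.6.8.tar.gz
tar xzf 1.6.8.tar.gz
cd iRedMail-1.6.8
bash iRedMail.sh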

The administration panel is available at mail.yourdomain.ext/iredadmin and webmail at mail.yourdomain.ext/mail. Now, if all went fine, you will be able to create additional user accounts and send messages between your own mailboxes. However, you will not be able to send to external mail servers such as Gmail, because you lack the security and antispam configuration on the DNS side. So…

To set up SPF, create a TXT DNS entry saying which servers are eligible to send messages for your domain. If you do not want to enumerate addresses, a record allowing your MX hosts to send is enough:

yourdomain.ext. 3600 IN TXT "v=spf1 mx -all"
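
Once DNS propagates you can verify the record:

dig +short TXT yourdomain.ext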
    

This is however not enough for several mail providers, and you also need to set up a DKIM record. It utilizes a digital signature with a public key. Without going too deep into the topic, setting it up is also quite easy:

amavisd showkeys
    

Grab the command output and create another DNS TXT entry:

v=DKIM1; p=verylongkeystringwithoutquotes
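
The record name comes from the amavisd showkeys output (with iRedMail defaults it is typically dkim._domainkey.yourdomain.ext). You can then verify the key from the mail server itself (on Debian/Ubuntu the binary may be called amavisd-new):

# each listed key should report "=> pass"
amavisd testkeys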
    

With these two DNS mail verification features in place you will be able to send messages to external servers without them complaining about your setup. The iRedMail documentation explains both the basic installation and this DNS configuration quite nicely, so be sure to check it out as well.