C++11 on MacOS 10.8.5 or even 10.4.11

I tried to compile my C++ code with C++11 features using GCC 4.2.1, which is the default on my MacBookPro3,1 with MacOS 10.8.5. I used the <random> library, so I needed a newer version of GCC, at least 4.7 to be more precise. I already have MacPorts on this machine (an installation guide can be found here). So:

/opt/local/bin/port install gcc5

Now I can compile:

/opt/local/bin/g++-mp-5 -std=c++11 main.cpp

Even though you are using GCC 5.5.0, you still need to pass the flag to enable C++11.
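
As a quick smoke test of the new toolchain, you can compile a one-liner that actually needs C++11; the /tmp paths below are just an example:

cat > /tmp/rngtest.cpp <<'EOF'
#include <iostream>
#include <random>
int main() {
    std::mt19937 gen(42);                          // C++11 <random> engine
    std::uniform_int_distribution<int> dist(1, 6); // C++11 distribution
    std::cout << dist(gen) << std::endl;
    return 0;
}
EOF
/opt/local/bin/g++-mp-5 -std=c++11 /tmp/rngtest.cpp -o /tmp/rngtest && /tmp/rngtest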

If someone thinks that this was too easy, then let's try to do the same on MacOS 10.4.11 on a PowerBook3,4. The first thing is to install Xcode 2.5, which is supposed to be the last one for the 10.4 operating system. You can grab it from xcodereleases.com after signing in with your Apple ID. Install Xcode and then download MacPorts 2.8.0. Next, install GCC 5 using the same command as previously. This time it will take forever to build, as this machine has far less powerful hardware than the newer MacBookPro3,1. During the build it got so hot that I could not even touch the keyboard. It was around 50 degrees Celsius while being cooled and over 65 degrees without cooling.

sudo /opt/local/bin/port -f deactivate libunwind-headers
sudo /opt/local/bin/port install apple-gcc42
sudo /opt/local/bin/port install gcc5

Unfortunately, GCC 5 turned out not to be supported there, so you need to install GCC 4.8, or 4.7 at least. Prepare one day for building libgcc6, another day for libgcc7, one more for GCC itself and half a day for other stuff.

SMB shares on Ubuntu 22 server

SMB shares can be created either with the UI on a desktop installation or with the command line on a server installation. You can use these shares on both Linux stations and Windows boxes. We are going to create an editors group and add the current user to it. The share location is the /data folder, which is owned by the editors group, whose members can read and write to the shared folders.

sudo apt-get install samba -y
# edit /etc/samba/smb.conf and set your workgroup name
# ...
sudo mkdir /data
sudo groupadd editors
sudo usermod -aG editors $USER # current user
sudo chgrp -R editors /data
sudo chmod -R g+rw /data

Now, edit the smb.conf file once again to add a share configuration:

[DATA]
path = /data
valid users = @editors
browsable = yes
writable = yes
read only = no

Restart smbd service:

sudo systemctl restart smbd

And the last thing is to map system users to SMB users:

sudo smbpasswd -a SOMEUSER
sudo smbpasswd -e SOMEUSER
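
To verify the share before pointing any real clients at it, smbclient can be used from the server itself or from any Linux box; SERVER_IP below is a placeholder for your server's address:

smbclient -L //SERVER_IP -U SOMEUSER           # list available shares
smbclient //SERVER_IP/DATA -U SOMEUSER -c 'ls' # open the DATA share and list its contents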

simpleproxy

In case you are behind an ever-changing outbound public IP, like in Microsoft Azure, you can create a machine with a static public IP and pass your local traffic to the remote site via simpleproxy. The -L switch is for the local side and -R is for the remote one.

simpleproxy -L x.x.x.x:YYYY -R someremotedomain.remote:ZZZZ -v

To make it persistent you can create a systemd service or keep it running in a screen session.
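
A minimal sketch of such a systemd unit, saved e.g. as /etc/systemd/system/simpleproxy.service (the binary path is an assumption, check `which simpleproxy` on your system):

[Unit]
Description=simpleproxy tunnel
After=network-online.target
Wants=network-online.target

[Service]
# Binary path is an assumption; adjust to your installation
ExecStart=/usr/bin/simpleproxy -L x.x.x.x:YYYY -R someremotedomain.remote:ZZZZ
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Then enable it with sudo systemctl enable --now simpleproxy.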

MikroTik basic reconfiguration

In case you have access to a MikroTik device with no UI available, only the CLI, you may want to know how to do some basic reconfiguration to gain access to the admin panel listening on the local network. So… to change the user password:

/user set admin password=mynewpassword

To see all the configuration:

export

To create PPTP VPN:

ip pool add name=PPTP-Pool ranges=192.168.99.10-192.168.99.200
ppp profile add change-tcp-mss=yes local-address=PPTP-Pool name=PPTP-Profile only-one=yes remote-address=PPTP-Pool use-encryption=yes dns-server=8.8.8.8,9.9.9.9
ppp secret add name=user1 password=YourPassword profile=PPTP-Profile
interface pptp-server server set authentication=chap,mschap1,mschap2 default-profile=PPTP-Profile enabled=yes

To adjust firewall for PPTP VPN:

ip firewall filter add chain=input comment="PPTP VPN" dst-port=1723 protocol=tcp
ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
interface pptp-server add name=pptp-user1 user=user1

To enable HTTP web configuration:

ip service print
ip service set www port=81
ip service set www address=0.0.0.0/0

Now, once you connect through the PPTP VPN to the remote host, you can access the UI. But keep in mind that if you set 0.0.0.0/0 as the address, the panel may also be reachable from the public address.
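
If you would rather not expose the admin panel publicly, one option is to restrict the www service to the VPN pool created above; the /24 below is simply the network covering that pool:

ip service set www address=192.168.99.0/24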

OpenShift 4.11 TLS handshake timeout on oc login

Finally, after OKD 3.11 support ended, I decided to try the 4.x releases. I found that there is quite a nice installation assistant available on console.redhat.com (Red Hat Hybrid Cloud Console). So I tried it and installed a new cluster on my dedicated hardware. I set up all the usual things: project, token and GitLab runner. Unfortunately, the oc login command failed with the error "TLS handshake timeout". The investigation was quite broad, including replacing Docker base images, downloading a custom oc binary, doing regular networking diagnostics, etc. In the end it turned out to be an MTU issue, and since the setup runs on a Hetzner vSwitch, this setting is a must-have. So… go to /lib/systemd/system/docker.service and edit it:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --mtu=1400

The crucial part is the --mtu flag at the end. After this, reload systemd and restart the Docker service (commands below). Now you should be able to log in using the oc binary, either the one provided by the regular origin-cli image or a manually downloaded binary on any other base system.
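
The reload and restart themselves, plus a quick sanity check; the alpine image here is just an example, any small image with ip from busybox will do:

sudo systemctl daemon-reload
sudo systemctl restart docker
docker run --rm alpine ip link show eth0   # eth0 inside the container should now report mtu 1400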

Proxmox Backup Server timeout on disk initialization

I have installed Proxmox Backup Server several times, both on bare metal and virtualized inside Proxmox VE. This time I did it as a virtual machine in PVE and encountered an issue while initializing a disk with GPT and creating a directory, LVM or ZFS datastore. In other words, I was unable to create any datastore at all. Fortunately, there is a CLI that can help:

proxmox-backup-manager disk list
proxmox-backup-manager disk initialize sdX
proxmox-backup-manager disk fs create mydatastore --disk sdX --filesystem ext4 --add-datastore true

And that’s all, it works fine. In the UI I got a timeout, no idea why. Fortunately there is an alternative path.
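
To double-check that the datastore actually got registered, the same CLI can list it:

proxmox-backup-manager datastore list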

Proxmox VE & pfSense on Hetzner dedicated servers

There is not much precise documentation available from Hetzner if you want to know exactly what to do to run dedicated servers with a primary and a secondary public IP, virtual machines and a vSwitch. There are some articles, but they are written in a rather uninformative way. However, their support has been on a very high level so far; they respond quickly.

Debian & Proxmox Installation

To go with Proxmox on Hetzner, you should know that there is a supported installation path. You restart your server into the rescue system (remember to power cycle your server) and Proxmox is one of the images to choose from, but it is said to come without support, just like the other systems… If you play around with a somewhat complex environment, you should be prepared to overcome all obstacles yourself rather than relying on third parties. So, once you are in the rescue system, instead of selecting Proxmox, choose the latest Debian installation.

For Debian, disable software RAID, as it might (and will) be incompatible with ZFS later on. Enter a domain name that is registered in public DNS; changing the domain later will be difficult, not to say impossible (from the Proxmox perspective). Depending on the disk configuration of your server, you may want to adjust the mount point configuration. I prefer to put Proxmox on the smaller disk and allocate all of the available space to the root mount instead of setting up various smaller mount points. Once you are done, save the changes (F10) and wait until it prompts you to reboot.

After the server reboots, you are going to install Proxmox on it:

echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg 
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
systemctl reboot
apt remove linux-image-amd64 'linux-image-5.10*'
update-grub
apt remove os-prober
systemctl reboot
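
Before moving on to the network configuration, it does not hurt to confirm that the Proxmox packages are actually in place:

pveversion -v   # should list pve-manager and the PVE kernel you just booted into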

Network Configuration

You will access your Proxmox VE UI at your public IP, port 8006. Next, go to the Hetzner panel, create a new vSwitch instance and add your server to it. Applying network configuration takes around a minute in Hetzner. In Proxmox, go to the node settings, navigate to System – Network and create two empty bridges, calling them vmbr0 and vmbr1. The first one is for the main public IP, which will be used for accessing Proxmox only. The second one is for the LAN, as every virtual machine and container will have its own IP within the server only. For the VLAN on the vSwitch we are going to create a bridge and VLAN manually in /etc/network/interfaces a little later. After creating these two bridges, apply the configuration and reboot the server.

In the interfaces file, remove IPv6 and any comments. In most cases you will not need IPv6; if your setup requires "public" IPv6, I will not try to advise anything in that matter for now. I think IPv6 is very useful in things like IoT or mobile networks, but not so much for regular consumer servers. You may have a different point of view and that is fine; I disable IPv6 out of habit, and my ISP does not offer IPv6 connectivity anyway.
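
If you want IPv6 gone system-wide, not just out of the interfaces file, one common way is via sysctl; this is a habit of mine, not a requirement of the Hetzner setup:

cat > /etc/sysctl.d/99-disable-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl --system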

So now we have two blank bridges and IPv6 removed. It is time to configure the main and additional public IPv4 addresses as well as the LAN and VLAN. Let's get started. Ethernet device names start with "en", followed by "p" and the PCI bus number, then "s" and the slot number. So for instance enp5s0 can be read as Ethernet, PCI bus 5, slot 0. There are also other naming conventions for WLAN and WWAN devices, as well as various sources for these names, such as BIOS or kernel based.

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

auto enpNs0
iface enpNs0 inet manual

The next part of the network configuration file defines the vmbr0 bridge used for the main and additional public IPs:

auto vmbr0
iface vmbr0 inet static
  address 65.109.x.x/MASK
  gateway 65.109.x.x
  pointopoint 65.109.x.x # same as gateway
  bridge-ports enpNs0 # device name
  bridge-stp off
  bridge-fd 0
  up route add -net 65.109.x.x netmask 255.255.255.x gw 65.109.x.x dev vmbr0 # main IP route
  up ip route add 65.109.x.x/32 dev vmbr0 # additional IP route
  post-up ip route add x.x.x.0/24 via 65.109.x.x dev vmbr0 # LAN network via additional IP

First, a few words of explanation. The address is your primary public IPv4. Gateway and pointopoint are the same here. The first added route is the default one, which comes from the installation process, so just copy it here (it should be the same as in the Hetzner admin portal). The second route defines the additional public IPv4 address. The last one is a LAN network of your choice, which is passed to the outside world through vmbr0 and the additional IPv4 address.

This LAN network is configured as a blank bridge; all the required configuration is done inside the gateway appliance (e.g. pfSense) and the VMs themselves:

auto vmbr1
iface vmbr1 inet manual
	bridge-ports none
	bridge-stp off
	bridge-fd 0

The last section of network configuration file is for VLAN:

iface enpNs0.400X inet manual

auto vmbr400X
iface vmbr400X inet static
	address 10.x.x.1/16 # VLAN gateway and network range
	bridge-ports enpNs0.400X
	bridge-stp off
	bridge-fd 0
	mtu 1400 # important to have

We create a Linux VLAN device and a bridge. The address we define will be used as the local gateway for reaching machines outside the box. It is required to set the MTU to 1400.
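
To apply the whole interfaces file without a reboot, you can use ifupdown2, which Proxmox VE 7 ships by default:

ifreload -a   # re-applies /etc/network/interfaces; fall back to a reboot if in doubt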

Virtual MAC at pfSense

For the LAN within a single server and connectivity to the outside world (internet) we use a pfSense gateway. The setup is straightforward: we give it two network interfaces. The first one is for WAN with the additional public IPv4; we need to request a virtual MAC for it in the Hetzner admin panel. The second interface is for the LAN and can have an automatically generated MAC address. All virtual machines within the LAN should be addressed within the network defined on vmbr1, with the gateway set to the local pfSense. For inter-server communication within the VLAN, we give a VM an additional network interface attached to the vmbr400X device and configure an address within the range defined on that bridge, using the bridge IP as the gateway.

Single gateway across several physical boxes

I mentioned before that you should use both LAN and VLAN for your machines, but you might decide to go differently. You can set up a single pfSense gateway with LAN and VLAN and point the other VMs' routes at this pfSense VLAN address. One thing to remember is that you must set the MTU to 1400. Any other value will give you weird results, like ping and DNS working but no bigger transfers, as they exceed the packet size allowed on the vSwitch. Setting the proper MTU value gives you the ability to route your outbound internet traffic via a single pfSense. It is a questionable setup, as it introduces a single point of failure. Another downside is that you need to keep track of the public IPv4 addresses you want to use at your single gateway, as each IP is bound to a particular server at Hetzner by its MAC address. Maybe there is a solution for this, but not for now.

Note: a setup like this requires adding a third network adapter to the pfSense gateway.
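
On a plain Debian/Ubuntu VM routed through such a single gateway over the vSwitch, setting the 1400 MTU can look like this (eth0 is a placeholder for the VLAN-facing interface):

ip link set dev eth0 mtu 1400   # takes effect immediately, lost on reboot
# persistent variant: add "mtu 1400" to the iface stanza in /etc/network/interfaces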

OKD Docker private Registry on NFS mount

If you use OKD/OpenShift, then most probably you also run an internal, private Docker registry for your builds. The cluster uses it to look up container images for further deployment. In a basic, default installation your Docker registry is located in the project called default. It also uses quasi-permanent storage which lasts only until the next redeployment of the registry container (pod). There is, however, a possibility to mount an NFS volume in the registry deployment configuration, so the images that have been pushed to the registry do not go away when you need to redeploy the registry itself. This need may arise when you run the certificate redeploy Ansible playbook: if you review that playbook, you will see a step in which the registry is redeployed, so you need permanent storage in your registry in such a scenario.

First, install an NFS server on a separate machine and configure the exported directory in /etc/exports. After that, restart the NFS server (service nfs-server restart).

/opt/PVdirectory		*(rw,root_squash,no_wdelay)

Next you need to create a PV (which stands for persistent volume) definition on the OKD master:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: PVname
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /opt/PVdirectory
    server: 192.168.1.2
  persistentVolumeReclaimPolicy: Recycle

Apply this configuration:

oc create -f filename.yaml

You have just created a PV definition which tells OKD to look for an NFS volume at 192.168.1.2 under /opt/PVdirectory with 10GiB of space, which will be recycled when unbound. Next you need to copy your current registry contents, i.e. the Docker images. There is no scp to copy files out of the pod, so pack them into a tar archive first:

cd /registry
tar -cf docker.tar docker

Now go to the master, locate your docker-registry pod name (replace abcdefg with the proper ID) and pull the archive out with oc rsync:

oc rsync docker-registry-abcdefg:/registry/docker.tar .

Move the archive to your NFS server and unpack it there. The top-level folder will come out owned by nfsnobody, while the internal contents keep the same ownership as the original; chown everything back to the registry UID:

sudo chown -R 1000000000:1000000000 docker

Now go to the OKD web console and bring the registry down (scale to 0 pods). Go to the deployment configuration, remove the default storage and add the NFS-backed volume in its place, passing /registry as the mount path. Bring the registry back online and test it. It should now use the NFS mount and you are free to redeploy the registry whenever you need.
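
If you prefer the CLI over the web console for that last step, a rough sketch is to create a PVC bound to the PV above and swap the registry volume with oc set volume; the claim name registry-pvc is my own placeholder:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply it and attach it to the registry deployment configuration:

oc create -f registry-pvc.yaml -n default
oc set volume dc/docker-registry --add --overwrite --name=registry-storage -t pvc --claim-name=registry-pvc -n default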

Redeploying OKD 3.11 certificates

Since the beginning of the 3.x line of OpenShift/OKD releases there have been various issues with internal certificates. TLS is used for communication inside the cluster in several places: router, registry, compute nodes, master nodes, etcd and so on. Unfortunately, having hundreds of developers across the globe produces, if not exactly chaos, then uncertainty and a lack of confidence from the user's perspective.

CSRs should be approved automatically; when they are not, approve them manually:

oc get csr -o name | xargs oc adm certificate approve

But in the worst case you also need to check the validity of the certificates. You can do this with an Ansible playbook from https://github.com/openshift/openshift-ansible. Remember that you should always check out the version you have deployed: use the tag or branch specific to your release and avoid running playbooks from master, as it contains the latest code, which may be incompatible with your cluster.
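
Getting a matching copy of the playbooks can look roughly like this (release-3.11 is the branch for the 3.11 line):

git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
git checkout release-3.11   # or the exact tag matching your deployed version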

To check validity run the following:

ansible-playbook playbooks/openshift-checks/certificate_expiry/easy-mode.yaml

To redeploy certificates run this one:

ansible-playbook playbooks/redeploy-certificates.yml

In case it fails on certificates that are already expired or expiring soon (yes…), you need to set the following in /etc/ansible/hosts or whichever file you use as the inventory:

openshift_certificate_expiry_warning_days=7

Then run the check or redeploy once again. In case your certificate expires today or tomorrow, use 0 as the value of this parameter. After the redeploy, use a value of 10000 to check whether any certificate still expires. There are a few bugs here preventing you from redeploying or even properly checking certificate validity, and no single real solution can be found. There might be one, but it requires a Red Hat subscription to access their closed forum.

After redeploying and checking that everything is fine, or at least a little better, there are sometimes problems with getting openshift-web-console up and running, for example an HTTP 502 error. The web console itself works fine but is unable to register its route in the HAProxy router. You can check this with:

oc get service webconsole -n openshift-web-console
curl -vk https://172.x.y.z/console/ # replace x, y and z with your webconsole IP

If you get a valid response, then you need to delete and recreate the web console objects manually. But first, try the basic solution, as it may work for you:

oc scale --replicas=0 deployment.apps/webconsole -n openshift-web-console
# wait around a minute
oc scale --replicas=1 deployment.apps/webconsole -n openshift-web-console

If you still have no web console:

oc delete secret webconsole-serving-cert -n openshift-web-console
oc delete svc/webconsole -n openshift-web-console
oc delete pod/webconsole-xxx -n openshift-web-console # xxx is your pod ID

OKD should automatically recreate the deleted web console configuration. But in case it still fails, try running the complete playbook that recreates the web console from scratch:

ansible-playbook playbooks/openshift-web-console/config.yml

For now, you should be able to get your web console back. I wonder whether the same low quality applies to OKD 4.x, but for 3.x the number of problems and quirks is quite high, way higher than I would have expected.