Load OSM map files into PostgreSQL

You can use OpenStreetMap on your own hardware. You need to grab the map files first; they can be found at https://download.geofabrik.de. Once you have downloaded them, install PostgreSQL and enable a few extensions:

CREATE DATABASE yourdbname;
-- connect to the new database so the extensions are created inside it
\c yourdbname
CREATE EXTENSION postgis;
CREATE EXTENSION hstore;

These extensions may be useful later on, especially if you would like to try building your own tile server. Next, install osm2pgsql, which is available in the Ubuntu 22 system packages. Then:

osm2pgsql -c -d yourdbname -U youruser -H yourdbhost somefile.osm.pbf -W

Now give it the database password and it will start loading. Depending on your hardware, it might take anywhere from a few minutes to tens of hours for large files such as the whole Europe extract. Afterwards you should have something like this:

geography_columns
geometry_columns
planet_osm_line
planet_osm_point
planet_osm_polygon
planet_osm_roads
spatial_ref_sys 

These are the tables with points, lines, polygons and roads. Most of the data comes from the PBF file itself, but a few fields are calculated on the fly while loading. There are plenty of options for working with such data: you can use it in desktop software like QGIS (which can also load the PBF file directly) or set up a complete tile server similar to the one behind the official OSM website.
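
Once the import finishes, the data can be queried like any other PostgreSQL table. A quick sanity check might look like this; the connection parameters are the same placeholders as above, and the amenity column comes from the default osm2pgsql import style:

# count pubs in the imported points (default osm2pgsql column set assumed)
psql -d yourdbname -U youruser -h yourdbhost -c "SELECT count(*) FROM planet_osm_point WHERE amenity = 'pub';"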

Install Redash

Data is valuable if consumed, or at least identified. For private and corporate use I suggest installing Redash, as it gives you options for saving queries, exporting data, creating visualizations and dashboards, and also setting up alerts. There are a few other interesting features, like creating dropdowns and inputs from saved queries, or joining results from different data sources through an in-memory SQLite instance.

To install Redash, clone the setup repository. I recommend running the Ubuntu 18 LTS server version, as the script is tested on this distribution. Then make the setup.sh file executable and run it, as shown below. It will ask for the sudo password and install all the required packages and containers. The setup is based on Docker, so for a production deployment it is better to tweak it a little, at least for the volumes.
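
A minimal sketch of those steps, assuming the setup script lives in the getredash/setup repository on GitHub:

git clone https://github.com/getredash/setup.git redash-setup
cd redash-setup
chmod +x setup.sh
./setup.sh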

That’s all; your Redash instance should be available to use right away.

WebGL performance

As already stated in my plans for 2023, there will be some time for graphics programming. Back in the day I was exploring OpenGL, but that was a long time ago. Nowadays I want to try something much simpler and more accessible, for instance WebGL. This API relies on JavaScript for the logic and OpenGL ES for the presentation layer, so it is hardware accelerated.

But there are some issues with this acceleration. On some computers you can select a particular GPU in the BIOS/UEFI, but not on every machine. To start with, I tried a Lenovo ThinkPad T420s with an NVIDIA NVS 4200M. Testing with the WebGL aquarium demo, I get a stable 37 FPS with 5k elements at around 95–99% load. There is occasional performance degradation where the FPS count drops as low as 10. GPU and CPU temperatures both show 95 degrees C, which is way too much. This computer has a great advantage over the others because it has NVIDIA Optimus, which gives me the ability to switch between graphics chips or let the hardware itself choose which GPU should be used at any moment.

My other machine, a Dell G15, lacks such a feature, and Ubuntu 22 picks the integrated 10th-gen Intel UHD GPU instead of the heavier NVIDIA GeForce RTX 3050 Ti. On this integrated chip I get a stable 45 FPS, without any performance drops. I am able to run Firefox on the discrete graphics because I have the proprietary driver installed, but it gives me only 50 FPS, just 5 more than the integrated one, and nvidia-smi shows only about 30% load, so it is not doing that much. Even more surprisingly, with 30k elements the integrated GPU gives more FPS (20 vs 15). I do not quite get what is going on here.
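
For reference, one way to push a single application onto the discrete GPU with the proprietary driver is PRIME render offload; a rough sketch, assuming the installed NVIDIA driver supports it:

# PRIME render offload switches documented by the NVIDIA driver
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia firefox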

I am still testing it on various other hardware configurations and it keeps giving more weird, unexpected results.

Plans for 2023

I’m looking forward to the new year’s technology opportunities. I have a few ongoing projects which I would like to finish by the end of the year. Here is a brief overview of them:

  • Data Mining chapter of Simple HPC series
  • News feed tool as a subproject for data mining
  • AI/ML project utilizing news feed, OpenCL processing and user-input training
  • Highly portable system monitoring tool for my day-shift
  • Video graphics… most probably WebGL/THREE.js

I will try to fit all of these into the roughly 300 hours available…

pg gem on Ubuntu 22

I thought that installing the pg gem on my clean Ubuntu 22 would be easy, but no. I got this odd message:

The following packages have unmet dependencies:
 libpq-dev : Depends: libpq5 (= 14.5-0ubuntu0.22.04.1) but 15.1-1.pgdg22.04+1 is to be installed
E: Unable to correct problems, you have held broken packages

So I tried to force the installation of the matching version:

sudo apt-get install libpq5=14.5-0ubuntu0.22.04.1

After this:

sudo apt install libpq-dev
sudo gem install pg

On this Ubuntu 22 release, installing the Ruby interpreter from packages gives you 3.0.2p107. The problem might be because of pgadmin4, which I had installed before; judging by the pgdg suffix, the conflicting libpq5 comes from the PostgreSQL apt repository rather than from Ubuntu.
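
To verify that the gem actually built and linked against libpq, a quick check like this should be enough (PG.library_version is part of the pg gem's API):

# load the gem and print the libpq version it was compiled against
ruby -r pg -e 'puts PG.library_version'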

SMB shares on Ubuntu 22 server

SMB shares can be created either with the UI on a desktop installation or from the command line on a server installation. You can use these shares from both Linux stations and Windows boxes. We are going to create an editors group and add the current user to it. The share lives under the /data folder, which is owned by the editors group, whose members can read and write the shared folders.

sudo apt-get install samba -y
# edit /etc/samba/smb.conf and set your workgroup name
# ...
sudo mkdir /data
sudo groupadd editors
sudo usermod -aG editors $USER # current user
sudo chgrp -R editors /data
sudo chmod -R g+rw /data

Now edit the smb.conf file once again to add a share configuration:

[DATA]
path = /data
valid users = @editors
browsable = yes
writable = yes
read only = no

Restart smbd service:

sudo systemctl restart smbd

And the last thing is to map system users to SMB users:

sudo smbpasswd -a SOMEUSER
sudo smbpasswd -e SOMEUSER
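
From a Linux client the share can then be mounted over CIFS; a rough example, with the server address and mount point as placeholders and cifs-utils installed on the client:

sudo mkdir -p /mnt/data
sudo mount -t cifs //192.168.1.10/DATA /mnt/data -o username=SOMEUSER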

simpleproxy

If you are on a connection with an ever-changing outbound public IP, as in Microsoft Azure, you can create a machine with a public IP and pass your local traffic to the remote site via simpleproxy. The -L switch is for the local side and -R for the remote one.

simpleproxy -L x.x.x.x:YYYY -R someremotedomain.remote:ZZZZ -v

To make it persistent, you can create a systemd service, as sketched below, or keep it running in a screen session.
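
A minimal sketch of such a unit, assuming simpleproxy lives at /usr/bin/simpleproxy and reusing the placeholder addresses from above:

sudo tee /etc/systemd/system/simpleproxy.service <<'EOF'
[Unit]
Description=simpleproxy forwarder
After=network-online.target
[Service]
# -L is the local endpoint, -R the remote one, as above
ExecStart=/usr/bin/simpleproxy -L x.x.x.x:YYYY -R someremotedomain.remote:ZZZZ
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now simpleproxy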

OpenShift 4.11 TLS handshake timeout on oc login

Finally, after OKD 3.11 support ended, I decided to try the 4.x releases. I found that there is quite a nice installation assistant available on console.redhat.com (Red Hat Hybrid Cloud Console), so I tried it and installed a new cluster on my dedicated hardware. I set everything up as usual: project, token and GitLab runner. Unfortunately, the oc login command failed with a “TLS handshake timeout” error. The investigation was quite broad and included replacing Docker base images, downloading a custom oc binary, doing regular network diagnostics, and so on. In the end it turned out to be an MTU issue: since the setup runs in Hetzner on a vSwitch, this setting is a must have. So go to /lib/systemd/system/docker.service and edit it:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --mtu=1400

The crucial part is the --mtu flag at the end. After this, reload systemd and restart the Docker service, as shown below. Now you should be able to log in using the oc binary, whether it is the one provided by the regular origin-cli image or a manually downloaded binary on any other base system.
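
The reload and restart steps are the usual ones:

sudo systemctl daemon-reload
sudo systemctl restart docker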

Proxmox Backup Server timeout on disk initialization

I have installed Proxmox Backup Server several times, both on bare metal and virtualized inside Proxmox VE. This time I did it in PVE as a virtual machine and ran into an issue while initializing the disk with GPT and while creating a directory, LVM or ZFS datastore. In other words, I was unable to create any datastore at all. Fortunately, there is a CLI that can help us:

proxmox-backup-manager disk list
proxmox-backup-manager disk initialize sdX
proxmox-backup-manager disk fs create mydatastore --disk sdX --filesystem ext4 --add-datastore true

And that’s all, it works fine. In the UI I got a timeout, no idea why, but fortunately there is an alternative path.
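
To confirm from the same CLI that the datastore is really there:

proxmox-backup-manager datastore list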