Aborting an Ubuntu package update run via Ansible left the libatk-wrapper-java-jni package in a weird state. My playbook installs default-jre and it was painfully slow, so I thought there was some lockup, but there was not. My VM performed poorly at that moment, probably because it came from an HDD with badblocks which got migrated to another one. It just works, but I’m not 100% sure if it is healthy. So… in case you have a broken package installation, try:
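# a sketch of the usual recovery steps; the right fix depends on where the run was aborted
sudo dpkg --configure -a   # finish configuring any half-installed packages
sudo apt install -f        # repair broken dependencies left behind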
And now you are good to go with autoclean, update and possibly upgrade, to check whether there is really no problem still dangling on your system. On mine, it is fine now.
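In apt terms, that boils down to:

sudo apt autoclean   # clear obsolete package archives from the local cache
sudo apt update      # refresh package lists
sudo apt upgrade     # if this runs clean, nothing is left dangling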
Changing Java language level to run compiled code on older runtimes
I’m working on a Java project. I use IntelliJ IDEA and decided to go with Oracle’s OpenJDK-18. But… this runtime is available by default only during compilation in the IDE. On the system I have OpenJDK-11. Trying to run code compiled by JDK-18 on JDK-11 gives me the following message:
java.lang.UnsupportedClassVersionError: Main has been compiled by a more recent version of the Java Runtime (class file version 62.0), this version of the Java Runtime only recognizes class file versions up to 55.
To overcome this, go to module settings (or Project Structure) and change Language Level to a lower value. If you compile with 18, it will be set to version 18. Switch to version 11 (local variable syntax for lambda parameters) and compile once again. Now you will be able to launch your application even with an older JDK.
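If you build outside the IDE, javac’s --release flag (available since JDK 9) achieves the same thing, compiling against the API and class file format of the given release:

javac --release 11 Main.java   # emits class file version 55, loadable by JDK 11
java Main                      # now runs on the older runtime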
The most recent min.io server release requires one additional thing in the configuration compared to versions from past years. Having min.io on one box and NGINX on another requires setting up a reverse proxy, which is a straightforward operation. You need to remember to add the proper headers to pass the hostname and scheme to the min.io box. This whole thing is described in the documentation.
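A minimal sketch of the NGINX side, assuming min.io listens on port 9000 on the other box (the upstream address is a placeholder):

location / {
    proxy_set_header Host $http_host;                              # pass the original hostname
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;                    # pass the original scheme
    proxy_pass http://minio.internal:9000;
}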
But… you are required to put the following into a min.io configuration file:
MINIO_SERVER_URL="https://your.url/"
This should be put in bold letters, because without it you can upload artifacts into buckets, but you will not be able to download them with a share link because of a checksum error. This checksum depends on the domain name. The default min.io installation offers a console and API on one port only in theory. If you go for the console, you are redirected to some temporary port, applicable until the next restart of the server. So the API will work flawlessly on the public domain, but the console in the default installation will not. Please keep that in mind.
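One way to make the console usable behind the proxy, assuming a systemd-style installation reading /etc/default/minio, is to pin it to a fixed port and point it at its public URL (both values below are placeholders):

MINIO_OPTS="--console-address :9001"
MINIO_BROWSER_REDIRECT_URL="https://console.your.url/"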
In case you would like to enable some PostgreSQL extensions, there is basically one thing you should do: install the postgresql14-contrib package and then fire CREATE EXTENSION from PostgreSQL itself.
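For example, with pg_trgm (the package name below is the RHEL-family one; Debian/Ubuntu name the contrib package differently):

yum install postgresql14-contrib
# then, connected to the target database (mydb is a placeholder):
psql -d mydb -c "CREATE EXTENSION pg_trgm;"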
I own a pre-owned HP z800 workstation. As purchased, it contained an HDD inside – a Western Digital WD5000AAKX-75U6AA0, SATA III, 500 GB, 7200 RPM, 16 MB of cache. Unfortunately it passes SMART and does not show a wearout metric. However, going into the detailed information we get:
Raw_Read_Error_Rate has a positive value of 11, while the threshold is set to 51. Having 11 685 hours of runtime, it should be understandable that it might break, and it actually did. There is only one Current_Pending_Sector, which means a single sector waits to be remapped or reallocated. But will it happen anytime soon?
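These attributes come from smartctl’s detailed output (the device path is an example):

smartctl -a /dev/sda | grep -E 'Raw_Read_Error_Rate|Power_On_Hours|Current_Pending_Sector'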
I’m unable to clone, migrate or replicate a VM to another server. ZFS states that this pool is in a degraded state. You can see this by using the zpool status -v command. It says that it is unrecoverable, and most probably it is. I’ve tried zpool scrub river to no avail. There is a problem with the VM-104 disk. Still, the aforementioned VM is accessible by console and it works just fine.
This VM is a Redash installation, and it is the last VM left on that drive, waiting for better times. As this is a home lab setup, it makes use of any devices I have available. Not all of them are fully functional, as you can see. You always need to have backups, replicated VMs and redundantly configured RAID.
I was unable to migrate VM:
I was unable to replicate VM:
I was unable to backup VM:
I ended up identifying 4 or more bad blocks, but because it is ZFS there are few tools for filesystem checks. Should a ZFS pool, in theory, be able to recover from such a failure? Only with a mirrored drive setup, which is not the case here. I was thinking about overwriting these blocks, but left it as it was. The drive is now decommissioned.
Docker Registry can be useful in various scenarios, but the most common would be container orchestration software such as Kubernetes, OpenShift/OKD or Nomad. In an example environment you could have a GitLab CI pipeline configured to build a Docker image and push it to your registry, and then let the orchestration software deploy it to a cluster.
In order to install an internal, private Docker Registry you need to install the Docker package first, then run the registry image itself.
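A sketch of the invocation, loosely following the upstream registry deployment docs; the /opt/certs path and the domain.crt/domain.key file names are assumptions:

docker run -d \
  --restart=always \
  --name registry \
  -v /opt/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 443:443 \
  registry:2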
This way we start a registry container which will always be up and running. We bind it to port 443, so we need a valid certificate and key for the domain. You can create an A record for either a public or a private IP address. The former is easier, but the latter is more secure. Either way, there is no need to set up the insecure-registries configuration, as the registry is served over a TLS connection.
After setting up, just build, tag and push a container image to this registry.
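For example (registry.example.com and the image name are placeholders):

docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0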
I was trying to add some new packages and figured out that my Ubuntu 21 is out of date. Unfortunately, some time ago I unchecked all repositories in Software & Updates, so my /etc/apt/sources.list was empty. But I managed to get it working properly by adding this one:
deb http://old-releases.ubuntu.com/ubuntu impish main universe
Next
apt update
and I was more or less back on track. Such a trivial thing, just to be able to install the default-jre package, as I am trying to learn how to build custom deb packages.
The latest Clustercontrol – version 1.9.4 – installation somehow requires pg_stat_statements to be activated in PostgreSQL 14. I have never noticed that before, neither on 9.6, 12 nor 14. This is a simple fix. Change the postgresql.conf configuration file (which should be in either /etc or /var/lib/ in standard package installations):
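# in postgresql.conf; a server restart is required for this setting to take effect
shared_preload_libraries = 'pg_stat_statements'

Then restart PostgreSQL and run CREATE EXTENSION pg_stat_statements; in the relevant database.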
If you need to mirror a remote repository and push it to another remote repo, there is one thing you should do to be able to export all remote branches and not only master.
git remote add NAME URL
for remote in `git branch -r | grep -v ' -> '`; do git branch --track "${remote#origin/}" "$remote"; done
git push --all NAME
This will push all branches to the new remote, but behavior varies from version to version, so better double-check the result.
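If you do not need a working copy at all, a bare mirror clone is a simpler alternative; it carries every ref (branches and tags) in one go (SOURCE_URL and DESTINATION_URL are placeholders):

git clone --mirror SOURCE_URL repo.git
cd repo.git
git push --mirror DESTINATION_URL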