Distributed, partitioned, multi-master, incrementally scalable NoSQL data management system with horizontal scale-out, for global mission-critical use cases handling petabyte-sized datasets. Dynamo, Amazon, Facebook, Apache… “Reliability at massive scale is one of the biggest challenges we face at Amazon” (source: “Dynamo: Amazon’s Highly Available Key-Value Store”). In 2004, Amazon’s e-commerce platform ran into performance issues under high traffic. By 2007, the concept of Dynamo had materialized in Amazon S3. Then, in 2008, Facebook, together with co-authors of Amazon’s Dynamo, developed its own distributed NoSQL system: Cassandra. In 2009, Cassandra became an Apache project. Amazon’s own DynamoDB, however, arrived only in 2012; it
Have you ever wondered how many tables we can create and use in a PostgreSQL database server? Shall we call them partitions or shards? Why not use the built-in “automatic” partitioning? Partitions or shards? Let’s first define the difference between partitions and shards: partitions are placed on the same server, while shards can be spread across various machines. We can use inheritance or the more recent “automatic” declarative partitioning. However, both of these solutions lead to tight coupling with the PostgreSQL RDBMS, which in some situations we would like to avoid. Imagine the prospect of migrating our schemas to a different RDBMS like Microsoft SQL
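For reference, the built-in declarative partitioning mentioned above takes only a couple of DDL statements. Below is a minimal sketch driven from Python with psycopg2; the table, columns and connection string are made up for illustration, not taken from the post.

```python
import psycopg2

# Hypothetical connection parameters -- adjust to your environment.
conn = psycopg2.connect("dbname=demo user=postgres")
conn.autocommit = True

with conn.cursor() as cur:
    # Parent table: rows are routed to partitions by the "created_at" range.
    cur.execute("""
        CREATE TABLE measurements (
            id         bigserial,
            created_at timestamptz NOT NULL,
            value      double precision
        ) PARTITION BY RANGE (created_at);
    """)
    # One partition per year; PostgreSQL picks the right one on INSERT.
    cur.execute("""
        CREATE TABLE measurements_2024 PARTITION OF measurements
            FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
    """)

conn.close()
```

Convenient, but note how both tables are pure PostgreSQL syntax: this is exactly the tight coupling the excerpt warns about.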
Ever wondered how a computer is built? And no, I’m not talking about unscrewing your laptop… but about how things actually happen inside the CPU. If so, then check out TINA from Texas Instruments and open my custom-made all-in-one computer. I spent a few weeks preparing this schematic. It contains a clock, program counter, memory address register, RAM, ALU, A and B registers, instruction register and microcode decoder. Well, that’s a lot of stuff you need to build an 8-bit data and 4-bit address computer, even in a simulator. A sample program in my assembly + binary representation, which needs to be
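The real project lives in the TINA schematic, but to give a feel for how those blocks cooperate, here is a rough software sketch of the fetch-decode-execute loop for a machine with 8-bit data and 4-bit addresses. The opcodes and the sample program are invented for illustration; they are not the author’s actual instruction set.

```python
# Toy model of an 8-bit data / 4-bit address machine: 16 bytes of RAM,
# each instruction packs a 4-bit opcode and a 4-bit operand address.
RAM = [0] * 16
LDA, ADD, OUT, HLT = 0x1, 0x2, 0xE, 0xF  # invented opcodes

# Sample program: load RAM[14], add RAM[15], output the result, halt.
RAM[0:4] = [(LDA << 4) | 14, (ADD << 4) | 15, (OUT << 4), (HLT << 4)]
RAM[14], RAM[15] = 28, 14

pc, a_reg = 0, 0            # program counter and A register
while True:
    ir = RAM[pc]            # fetch: instruction register <- RAM[address]
    pc = (pc + 1) & 0xF     # program counter wraps at 4 bits
    opcode, addr = ir >> 4, ir & 0xF   # decode (microcode decoder's job)
    if opcode == LDA:
        a_reg = RAM[addr]
    elif opcode == ADD:
        a_reg = (a_reg + RAM[addr]) & 0xFF  # ALU adds with 8-bit wrap-around
    elif opcode == OUT:
        print(a_reg)        # prints 42
    elif opcode == HLT:
        break
```

In hardware, each branch of that loop corresponds to a sequence of control signals the microcode decoder asserts over several clock ticks.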
Imagine you would like to run a 2 vCPU, 4 GB RAM virtual machine. Which service provider do you choose? Azure, AWS or Hetzner? With AWS, you pay 65 USD (a c5.large instance… and by the way, why is this called “large” at all?). If you pick Microsoft Azure, you pay 36 EUR (B2s). If you pick DigitalOcean, you pay 24 USD (a no-name “droplet”). Choosing Scaleway, you pay 19 EUR (a PLAY2-NANO compute instance in the Warsaw DC). However, with Hetzner Cloud you pay as little as 4.51 EUR (a CX22 virtual server). How is that even possible? So it goes like this
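To put those list prices side by side (figures as quoted above; USD and EUR are mixed, so treat the ratios as rough):

```python
# Monthly list prices quoted in the text (currencies differ, so this is rough).
prices = {
    "AWS c5.large":         ("USD", 65.00),
    "Azure B2s":            ("EUR", 36.00),
    "DigitalOcean droplet": ("USD", 24.00),
    "Scaleway PLAY2-NANO":  ("EUR", 19.00),
    "Hetzner CX22":         ("EUR", 4.51),
}
baseline = prices["Hetzner CX22"][1]
for name, (currency, price) in prices.items():
    print(f"{name:22s} {price:6.2f} {currency}  ~{price / baseline:4.1f}x Hetzner")
```

Even ignoring the exchange rate, the AWS instance comes out at roughly 14 times the Hetzner price for the same nominal vCPU and RAM figures.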
Ever wondered what the highest load average on a Unix-like system can be? Do we even know what this parameter actually tells us? It shows the average number of processes that are either actively running or waiting to run. It should be close to the number of logical processors present on the system; if it is greater than that, some tasks will have to queue before they can execute. So I was testing 1000 LXC containers on a 2 x 6-core Xeon system (totalling 24 logical processors) and left it for a while. Once I got back, I saw that
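A quick way to relate the load average to the core count is Python’s standard library; this works on any Unix-like system:

```python
import os

# 1-, 5- and 15-minute load averages as reported by the kernel (Unix only).
load1, load5, load15 = os.getloadavg()
cpus = os.cpu_count()  # logical processors, e.g. 24 on the 2 x 6-core Xeon box

print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} over {cpus} CPUs")
if load1 > cpus:
    # More runnable (or, on Linux, uninterruptibly waiting) tasks than CPUs:
    # some of them are queueing instead of executing.
    print("system is oversubscribed")
```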
I defined a Docker Swarm cluster with 20 nodes and created a service using the Nginx HTTP server Docker image. I scaled it to 1000 container instances, which took a while on my demo hardware. The containers are up and running, but getting such statistics from the Portainer CE UI is quite difficult, so I suggest using the CLI in such a case: I got exactly 1000 containers in my service named “nginx3”. The hardware is not heavily utilized: combined RAM usage across the 2 servers oscillates around 50 GB, and the load stays low as there is not much happening, so even with 20 VMs and Docker containers, we
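The same check can also be scripted instead of eyeballed. A minimal sketch with the docker-py SDK, assuming the service name “nginx3” from the excerpt and a Docker socket on the machine running the script:

```python
import docker

client = docker.from_env()                # talks to the local Docker daemon
service = client.services.get("nginx3")   # the Swarm service scaled to 1000

# Count only tasks whose desired state is "running".
tasks = service.tasks(filters={"desired-state": "running"})
print(f"{service.name}: {len(tasks)} running tasks")
```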
If you wonder how to automatically deploy a 20-node Docker Swarm and run 100 Docker containers on it, then continue reading. I will show how to achieve this using Terraform, Ansible and Portainer. Course of action: what is Docker Swarm, and why do I need 20 nodes of it? Docker is a container toolkit utilizing cgroups and namespaces, which allows controlling and sharing the resources of the CPU and operating system. Docker Swarm is a special runtime mode which allows running multiple clustered nodes that can be separate physical computers or virtual machines. It gives us scalability
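As a taste of what the tooling automates, creating a replicated Swarm service can also be done programmatically. A sketch with the docker-py SDK; the image and service name here are illustrative, not taken from the post:

```python
import docker
from docker.types import ServiceMode

client = docker.from_env()  # must point at a Swarm manager node

# Create a replicated service from the stock Nginx image with 100 replicas,
# roughly equivalent to: docker service create --replicas 100 nginx:alpine
service = client.services.create(
    image="nginx:alpine",
    name="web",
    mode=ServiceMode("replicated", replicas=100),
)
print(f"created service {service.name} ({service.id[:12]})")
```

The Swarm scheduler then spreads those 100 replicas across whatever nodes have joined the cluster, which is what makes the 20-node setup interesting.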
If you wonder how to automate Ubuntu virtual machine creation and then deploy it to Proxmox in multiple copies, then you are looking for Packer and Terraform. Side note: going with virtual machines in Proxmox is the proper way. I tried for several days to get LXC containers working, but in the end I must say it is not the best option, with a lot of things going wrong: cgroups, AppArmor, nesting, FUSE, ingress networking etc. There is simply too much to handle with LXC, and with VMs there is no such problem, so the discussion ends here in favour of Proxmox
The article explores the use of ArchiMate, a modeling language, for enterprise architecture, focusing on its application through the Archi tool. ArchiMate offers a structured approach to modeling complex IT systems by providing various layers such as motivation, strategy, business, and technology. The author explains how Archi facilitates the visualization of enterprise architecture, making it easier to manage and understand. Additionally, the article covers practical aspects like setting up a Git repository for team collaboration and managing model changes effectively. A practical example project demonstrates the application of these concepts in a real-world scenario. In the complex world of IT artifacts