NIS 2: anti-rootkit & anti-virus installation and scanning with Ansible

If you run a digital services platform or critical infrastructure, you are most probably covered by NIS 2 and its requirements, including those concerning information security. Even if you are not covered by NIS 2, you may still benefit from its regulations, which are similar to those coming from ISO 27001. In this article I show how to automatically deploy anti-rootkit and anti-virus software to your Linux workstations and servers.

TLDR

By using the rkhunter anti-rootkit and the ClamAV anti-virus you move closer to NIS 2 and ISO 27001 compliance and farther away from threats like cryptocurrency miners and ransomware. You can automate the deployment with Ansible.

Course of action

  • Prepare Proxmox virtualization host server
  • Create 200 LXC containers
  • Start and configure containers
  • Install rkhunter and scan systems
  • Install ClamAV and scan systems

What is NIS 2?

The NIS 2 Directive (Directive (EU) 2022/2555) is a legislative act that aims to achieve a high common level of cybersecurity across the European Union. Member States must ensure that essential and important entities take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems, and to prevent or minimise the impact of incidents on recipients of their services and on other services. The measures must be based on an all-hazards approach.

source: https://www.nis-2-directive.com/

Aside from being an EU regulation, NIS 2 can be beneficial from a security point of view. Not complying with it, however, can cause significant damage to an organization's budget.

Non-compliance with NIS2 can lead to significant penalties. Essential entities may face fines of up to €10 million or 2% of global turnover, while important entities could incur fines of up to €7 million or 1.4%. There’s also a provision that holds corporate management personally liable for cybersecurity negligence.

source: https://metomic.io/resource-centre/a-complete-guide-to-nis2

What are the core concepts of NIS 2?

To implement NIS 2 you will need to cover various topics concerning technology and its operations, such as:

  • Conduct a risk assessment
  • Implement security measures
  • Set up supply chain security
  • Create incident response plan
  • Perform regular cybersecurity awareness and training
  • Perform regular monitoring and reporting
  • Plan and perform regular audits
  • Document processes (including DRS, BCP etc)
  • Maintain compliance through continuous review and improvement

Who should be interested?

Since implementing the NIS 2 requirements impacts the business as a whole, interest should not be limited to IT: technology in general, as well as business and operations departments, are all affected. From the employees' perspective, they will be required to participate in cybersecurity awareness training. In other words, NIS 2 impacts the whole organization.

How to define workstation and server security

We can define a workstation as a desktop or laptop computer that is physically available to its user. A server, on the other hand, is a computing entity intended to offload workstation tasks and to provide multi-user capabilities. A server can therefore also be a virtual machine or a system container instance (such as LXC).

The security concepts for workstations and servers are basically the same, as the two share many similarities. Both run an operating system with some kind of kernel inside, and both run system-level software along with user-level software. Both are vulnerable to malicious traffic, software and incoming data, especially in the form of websites. There is one major difference, affecting workstation users the most: the higher variability of tasks performed on the machine. However, even with the less variable nature of server workloads, the hidden nature of server instances can lead to a lack of visibility of otherwise obvious threats.

So both workstations and servers should run either EDR (Endpoint Detection and Response) software, or anti-virus together with anti-rootkit software. Drives should be encrypted with LUKS (or BitLocker in the case of Windows). Users should work on least-privileged accounts, avoid connecting to unknown wireless networks, and avoid inserting unknown devices into computer ports (USB devices could be keyloggers, for instance).

Prepare 200 LXC containers on Proxmox box

Below you will find how to install 200 LXC containers for testing purposes and then, using Ansible, how to install and execute anti-rootkit and anti-virus software, rkhunter and ClamAV respectively. Why test on that many containers, you may ask? With automation it is necessary to verify both the performance impact on the remote hosts and how we interpret the automation results on our side. In our case those 200 containers will be placed on a single Proxmox node, so it is critically important to check whether it can handle that many of them.

Ansible lets us automate work by defining "playbooks", which are groups of tasks using various integration components. Aside from running playbooks you can also run ad-hoc commands without file-based definitions, for instance by using the shell module to send commands to remote hosts. There is a wide variety of Ansible extensions available.
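
For instance, an ad-hoc run of the shell module against every host in an inventory could look like the sketch below; hosts.txt is the inventory file we build later in this article and root SSH access is assumed.

# run "uptime" on all inventory hosts, 12 parallel forks, one line of output per host
ansible all -i hosts.txt -m shell -a "uptime" -u root -f 12 -o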

System preparation

In order to use Ansible with Proxmox you need to install the "proxmoxer" Python package, which requires Python pip:

apt update
apt install python3-pip
pip install proxmoxer

To install Ansible (in Ubuntu):

sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

Then, in /etc/ansible/ansible.cfg, set the following option, which skips the host key check during SSH connections.

[defaults]
host_key_checking = False

Containers creation

Next, define a playbook for container creation. You need to pass the Proxmox API details, your network configuration and disk storage, and the name of the OS template of your choice. I have used Ubuntu 22.04, which is placed on the storage named "local". My choice for the target container storage is "vms1", with 1 GB of storage for each container. I loop over the range from 20 to 220.

The inventory for this one should contain only the Proxmox box on which we are going to install 200 LXC containers.
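
A minimal inventory.ini for this step could look like the sketch below; the alias proxmox-box matches the hosts line of the playbook, and the address is assumed to be the same one used as api_host:

# inventory.ini - single entry for the Proxmox node
proxmox-box ansible_host=192.168.2.10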

---
- name: Proxmox API
  hosts: proxmox-box
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'
  serial: 1
  tasks:
  - name: Create new container with minimal options
    community.general.proxmox:
      node: lab
      api_host: 192.168.2.10:8006
      api_user: root@pam
      api_token_id: root-token
      api_token_secret: TOKEN-GOES-HERE
      password: PASSWORD-GOES-HERE
      hostname: "container-{{ item }}"
      ostemplate: 'local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst'
      force: true
      disk: "vms1:1"
      netif:
        net0: "name=eth0,gw=192.168.1.1,ip=192.168.2.{{item}}/22,bridge=vmbr0"
      cores: 2
      memory: 4000
    loop: "{{ range(20, 221) }}"

And then run this playbook to install containers:

ansible-playbook containers-create.yml -i inventory.ini -u root

Start and configure containers

In order to start those newly created containers, use a shell loop with the pct command (run it on the Proxmox box):

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct start $i; 
  echo $i; 
done

To generate the list of IP addresses for your containers you can use "prips":

apt install prips
prips 192.168.2.0/24 > hosts.txt

For demo purposes only: next, enable SSH login for the root user, as it is our only user so far and it cannot log in by default. In day-to-day operation you should use an unprivileged user instead. Use a shell loop with the pct command:

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config; 
  echo $i; 
done

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- service ssh restart; 
  echo $i; 
done

Checkpoint: so far we have created, started and configured 200 LXC containers, ready for further software installation.
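
Before installing anything, it is worth confirming that Ansible can actually reach all of the containers. A quick sanity check (a sketch, assuming the hosts.txt inventory generated with prips and the root login enabled above):

# every reachable container should answer with "pong"
ansible all -i hosts.txt -m ping -u root -f 20 -o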

rkhunter: anti-rootkit software deployment

You may ask whether anti-rootkit software is a real-world use case. It definitely is. From my personal experience I can say that even when using (or rather, especially when using) well-known brands for your systems layer, such as public cloud operators, you can face the risk of open vulnerabilities. Cloud operators and other digital service providers often rely on content from third parties, so the effective quality and security level is only as good as what those third parties deliver. You may well receive outdated and unpatched software, open user accounts and so on. This can lead to system breaches, which in turn can lead to data theft, ransomware, spyware, cryptocurrency mining and more.

There are similarities between anti-rootkit and anti-virus software. rkhunter is much more targeted at specific use cases: instead of checking hundreds of thousands of virus signatures, it looks for a few hundred well-known signs of rootkits being present in your system. You could say it is a specialized form of anti-virus software.

Installation of anti-rootkit

First install rkhunter with the following playbook:

---
- name: install rkhunter
  hosts: all
  tasks:
    - name: Install rkhunter Ubuntu
      when: ansible_distribution == "Ubuntu"
      ansible.builtin.apt:
        name: rkhunter
        state: present
    - name: Install epel-release CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: epel-release
        state: present
    - name: Install rkhunter CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: rkhunter
        state: present

Execute it with Ansible:

ansible-playbook rkhunter-install.yml -i hosts.txt -u root

Scanning systems with anti-rootkit

And then scan with rkhunter:

---
- name: Run rkhunter
  hosts: all
  tasks:
    - name: Run rkhunter
      ansible.builtin.command: rkhunter -c --sk -q
      register: rkrun
      ignore_errors: true
      failed_when: "rkrun.rc not in [ 0, 1 ]"

Execute it with Ansible:

ansible-playbook rkhunter-run.yml -i hosts.txt -u root

To verify the results it is much easier to run a separate ad-hoc check with the ansible command instead of ansible-playbook, which runs playbooks:

ansible all -i hosts.txt -m shell -a "cat /var/log/rkhunter.log | grep Possible | wc -l" -u root -f 12 -o

Results interpretation and reaction

What if you see some "Possible rootkits"? First of all, stay calm and follow your incident management procedure, if you have one.

192.168.2.23 | CHANGED | rc=0 | (stdout) 0
192.168.2.31 | CHANGED | rc=0 | (stdout) 0
192.168.2.26 | CHANGED | rc=0 | (stdout) 0
192.168.2.29 | CHANGED | rc=0 | (stdout) 0
192.168.2.24 | CHANGED | rc=0 | (stdout) 0
192.168.2.27 | CHANGED | rc=0 | (stdout) 0
192.168.2.22 | CHANGED | rc=0 | (stdout) 0
192.168.2.28 | CHANGED | rc=0 | (stdout) 0
192.168.2.21 | CHANGED | rc=0 | (stdout) 0
192.168.2.20 | CHANGED | rc=0 | (stdout) 0
192.168.2.25 | CHANGED | rc=0 | (stdout) 0

If you do not have such a procedure, then follow the basic escalation path within your engineering team. Before isolating the possibly infected system, first check whether it is a false-positive alert. There are plenty of situations in which tools like rkhunter will detect something unusual: it can be a Zabbix Proxy process with unusual memory alignment, or a script replacement of some basic system utility such as wget. However, if rkhunter finds a well-known rootkit, you should shut the system down, or at least isolate it, or take whatever other action you have planned for such situations.
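
If you confirm that a finding such as the wget script replacement is benign, rkhunter lets you whitelist it in its configuration. A minimal sketch for /etc/rkhunter.conf (the path is only an example and should match what the scan actually reported on your systems):

# whitelist a known-good script wrapper so future scans do not flag it
SCRIPTWHITELIST=/usr/bin/wget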

If you have found a single infection in your environment, there is a high chance that other systems are infected as well, and you should be ready to scan everything reachable from it, especially if you have password-less connections between your servers. For more about possible scenarios, look at the MITRE ATT&CK knowledge base and framework.

ClamAV: anti-virus deployment

What is the purpose of having anti-virus software on your systems? Similar to anti-rootkit software, an anti-virus utility keeps our systems safe from common threats like malware, adware and keyloggers. However, it has far more signatures and scans everything, so a complete scan takes a lot longer than with anti-rootkit software.

Installation of anti-virus

First, install ClamAV with the following playbook:

---
- name: ClamAV
  hosts: all
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'

  tasks:
     - name: Install ClamAV
       when: ansible_distribution == "Ubuntu"
       ansible.builtin.apt:
         name: clamav
         state: present
     - name: Install epel-release CentOS
       when: ansible_distribution == "CentOS"
       ansible.builtin.yum:
         name: epel-release
         state: present
     - name: Install ClamAV CentOS
       when: ansible_distribution == "CentOS"
       ansible.builtin.yum:
         name: clamav
         state: present

Then execute this playbook:

ansible-playbook clamav-install.yml -i hosts.txt -u root

Each host with ClamAV runs the clamav-freshclam service, a tool for updating the virus signature databases locally. The official mirrors enforce rate limits, so it is suggested to set up a private mirror using the "cvdupdate" tool. If you leave things as they are, all hosts may query the official mirrors at the same time, you will hit the rate limit, and you will be blocked for some period of time. If your infrastructure spans multiple providers, you should set up multiple private mirrors.
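
Once a private mirror is in place, freshclam can be pointed at it. A minimal sketch for /etc/clamav/freshclam.conf, assuming a mirror you populated with cvdupdate is served at updates.example.internal (a made-up hostname for illustration):

# fetch signature updates from the private mirror instead of the public CDN
PrivateMirror updates.example.internal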

Scanning systems with anti-virus

You can scan either a particular directory or the complete filesystem. You could run the scan from a playbook, but you can also run it promptly with the ansible command, without writing a playbook. ClamAV, contrary to rkhunter, seems to return fewer warnings, so it is much easier to interpret the results manually without relying on return codes.

ansible all -i hosts.txt -m shell -a "clamscan --infected -r /usr | grep Infected" -v -f 24 -u root -o

You can also run ClamAV on the whole filesystem while skipping the /proc and /sys folders, which hold virtual filesystems used for kernel and hardware communication.

clamscan --exclude-dir=/proc/* --exclude-dir=/sys/* -i -r /

It is possible to install ClamAV as a system service (daemon), however this is harder to accomplish, as there may be difficulties with AppArmor (or a similar solution) and file permissions. The daemon will also put load on your systems at unpredictable times, which is not exactly what we would like to experience. You may prefer to put the scans on a cron schedule instead.
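
Scheduling the scans with cron can itself be automated. A minimal sketch using the ansible.builtin.cron module (the schedule, scanned path and log file are arbitrary examples, not taken from the setup above):

---
- name: Schedule ClamAV scans
  hosts: all
  tasks:
    - name: Nightly clamscan of /usr via cron
      ansible.builtin.cron:
        name: "nightly clamscan"
        minute: "0"
        hour: "2"
        job: "clamscan --infected -r /usr >> /var/log/clamscan-cron.log 2>&1"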

Please note: I will not tell you to disable AppArmor, as doing so would conflict with NIS 2. On the contrary, I encourage you to learn how to deal with AppArmor and SELinux, as they are required by various standards like DISA STIG.

To run the ClamAV daemon, the main virus database must be present on your system. If it is missing, the service will not start; this is directly linked to the freshclam service.

○ clamav-daemon.service - Clam AntiVirus userspace daemon
     Loaded: loaded (/lib/systemd/system/clamav-daemon.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/clamav-daemon.service.d
             └─extend.conf
     Active: inactive (dead)
  Condition: start condition failed at Mon 2024-09-02 15:24:40 CEST; 1s ago
             └─ ConditionPathExistsGlob=/var/lib/clamav/main.{c[vl]d,inc} was not met
       Docs: man:clamd(8)
             man:clamd.conf(5)
             https://docs.clamav.net/

Results interpretation and reaction

Running clamscan gives results like these:

Using /etc/ansible/ansible.cfg as config file
192.168.2.33 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.23 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.28 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.42 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.26 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.29 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.38 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.45 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.40 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.32 | CHANGED | rc=0 | (stdout) Infected files: 0

As this is a manual scan, it is straightforward to identify possible threats. In the case of an automatic scan or an integration with Zabbix, you will need to learn what clamscan can possibly output, just as with the rkhunter output.

Conclusion

Automation in the form of Ansible can greatly help with deploying anti-rootkit and anti-virus software, rkhunter and ClamAV respectively. These tools will certainly increase the level of security in your environment, provided they cover all the systems you have up and running. Automation itself is not directly required by NIS 2, but it pays off for future use.

Further reading

Enterprise Architecture: modelling ArchiMate with Archi

The article explores the use of ArchiMate, a modeling language, for enterprise architecture, focusing on its application through the Archi tool. ArchiMate offers a structured approach to modeling complex IT systems by providing various layers such as motivation, strategy, business, and technology. The author explains how Archi facilitates the visualization of enterprise architecture, making it easier to manage and understand. Additionally, the article covers practical aspects like setting up a Git repository for team collaboration and managing model changes effectively. A practical example project demonstrates the application of these concepts in a real-world scenario.

In the complex world of IT artifacts it is good to introduce a standardized, well-built model for describing the important things. It will certainly help organize information and keep track of changes, and in a professional world it is important to describe things in a professional manner. So there is Enterprise Architecture, which can be defined as

conceptual blueprint that defines the structure and operation of organizations. The intent of enterprise architecture is to determine how an organization can effectively achieve its current and future objectives

https://www.techtarget.com/searchcio/definition/enterprise-architecture

You can imagine what happens when an organization does not practice enterprise architecture at all, but I would go a little further and ask what happens when an organization does model things, but in an unstructured way, without the benefits of a formal framework. You can easily end up with a mess and information chaos that will negatively impact your business, technology and operations.

ArchiMate language

Although ArchiMate is said to be a language, I would say it is more than a language, because it has the characteristics of a framework. It provides several layers of modelling: motivation, strategy, business, application, technology, and implementation and migration. Each layer consists of several elements which can be divided into three groups: active, behavioral and passive. As a result you get a matrix-like structure of grouped elements which can be used in diagrams called ArchiMate views.

So, to recap: it is a language with bonuses. It consists of six-ish concepts (layers), which then divide into three groups of various elements. Everything can be put on diagrams and views.

Modelling tools available

There are a few tools I would recommend when speaking about Enterprise Architecture and ArchiMate in particular. I would say Sparx Enterprise Architect is the most complete one, but it requires some minor investment. The second one is Archi, which is an open source tool. Both tools support collaborative work: Sparx EA by means of Pro Cloud Server, and Archi by using the coArchi plugin with a Git repository. For bigger projects with lots of people working on the model and lots of things happening, EA will surely be better, but for solo projects Archi should do the job.

Repository setup

For those who work from different places and in a team environment, having a repository is a huge benefit. It is not as flexible and dynamic as Miro, for instance, but Enterprise Architecture is more of a static thing than a brainstorming session. So start by setting up a Git repository. I prefer using GitLab: create a group if needed and an empty project. You are done for now.

Installation of Archi

Go to https://www.archimatetool.com/download/ and download the latest version of Archi. It is available for Linux, Windows and Mac. After downloading, you either install it or just unpack the archive. Then go to https://www.archimatetool.com/plugins/ and download the coArchi plugin, as it is needed to enable the collaboration feature with a Git repository. Plugins can be installed from Archi by going to Help – Manage Plug-ins. After relaunching Archi, the collaboration plugin is ready to use.

Model management

There are two scenarios for model management: you either start your own model locally and then publish it remotely, or you import someone else's model into your blank workspace. I think that covers most use cases. In the first case you create a blank model, then select Collaboration – Add local model to workspace and publish it. In the second case you select Collaboration – Import remote model to workspace. If you did not create the project yourself, Archi will create a so-called temp file for it in model-repository/project/.git. If you created the project locally, you can simply open it this way.

Changes with Git SCM

Team work requires basic knowledge of Git SCM: when you start, you should pull changes from the server (Collaboration – Refresh model). Once you are done with your changes, commit them and finally publish them. As with any other Git repository, you can switch to different branches, merge changes, amend commits and so on.

Sample project

To express ArchiMate visually, I decided to use my next technology project/demo as the basis for this model: NIS2 and its EDR, anti-virus and anti-rootkit requirements, which I will demonstrate on hundreds of containers in an automated form with Ansible. So there are various layers covering this project's motivation and strategy, going further down through business, application and technology, and finally implementation and migration.

Please note that colors are important here as they group concepts.

Motivation

The source of the project itself is the EU legislator, which sets several NIS2 drivers and requirements that imply goals and outcomes. All of those can be easily mapped to strategy elements such as courses of action, but also to business processes. You can quickly trace a path down to the bottom, to the implementation plateau describing the before and after states of the architecture.

Strategy

Different viewers require different viewpoints, so each view/diagram allows us to select the particular viewpoint of interest for that very diagram. This way, elements that are not directly related are blurred, and you can clearly see only the elements coming directly from this concept. The strategy level is a high-level way of describing what we can do and with what means. I help myself with a small addition of a motivation requirement as well as a technology process. Finally, there is the "coverage" requirement, which can be defined as a result of this strategy, but it could also be a value stream if needed.

Business

We define the business layer to show who does what, so there is Bob, an automation engineer who creates automation processes. We also put motivation and strategy elements here, as well as application elements, which are not blurred because this is a "business process cooperation" viewpoint. It also contains application elements such as "clamav" and "rkhunter", which implement and realize the anti-virus and anti-rootkit coverage respectively.

Application

It is really valuable to be able to show, apart from the obvious application components and their functions, why we do this. People often ask why we do something in technology, and this convention of mixing elements from different levels of abstraction gives them the answer.

Technology

As this is a "technology usage" viewpoint, I can pick both technology and application elements here, as well as add some other elements, like motivation ones. One new thing appears here: grouping, in a structural way and not only a visual one. By placing things inside a group, you will be asked about the relation between the group and its elements, which reduces the number of possibilities for additional relations. I think the technology layer is logically at the bottom of the artifact tree, while the implementation & migration layer placed below it actually is not.

Implementation and migration

It is the most complete viewpoint, as it contains business, application, technology, some motivation, and implementation elements. Here you define what is delivered, in what form, possibly by whom, and what the final result is. It is more like a project plan. Feel free to add whatever you need to describe things in order to plan changes, which baseline they refer to, and which new baseline they will produce.

Archi layout with Hints

Here you have a screenshot from Archi. Please note the "Hints" section, where you can learn more about the specifics of the various viewpoints, concepts and elements, as well as the relationships between them.