NIS 2: anti-rootkit & anti-virus installation and scanning with Ansible

If you run a digital services platform or critical infrastructure, you are most probably covered by NIS 2 and its requirements, including those concerning information security. Even if you are not covered by NIS 2, you may still benefit from its regulations, which are similar to those coming from ISO 27001. In this article I show how to automatically deploy anti-rootkit and anti-virus software to your Linux workstations and servers.

TLDR

By using the rkhunter anti-rootkit and the ClamAV anti-virus you get closer to NIS 2 and ISO 27001 compliance and farther away from threats like cryptocurrency miners and ransomware. You can automate the deployment with Ansible.

Course of action

  • Prepare Proxmox virtualization host server
  • Create 200 LXC containers
  • Start and configure containers
  • Install rkhunter and scan systems
  • Install ClamAV and scan systems

What is NIS 2?

The NIS 2 Directive (Directive (EU) 2022/2555) is a legislative act that aims to achieve a high common level of cybersecurity across the European Union. Member States must ensure that essential and important entities take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems, and to prevent or minimise the impact of incidents on recipients of their services and on other services. The measures must be based on an all-hazards approach.

source: https://www.nis-2-directive.com/

Aside from being an EU regulation, NIS 2 can be beneficial from a security point of view. Not complying with it, however, can cause significant damage to an organization's budget.

Non-compliance with NIS2 can lead to significant penalties. Essential entities may face fines of up to €10 million or 2% of global turnover, while important entities could incur fines of up to €7 million or 1.4%. There’s also a provision that holds corporate management personally liable for cybersecurity negligence.

source: https://metomic.io/resource-centre/a-complete-guide-to-nis2

What are the core concepts of NIS 2?

To implement NIS 2 you will need to cover various topics concerning technology and its operations, such as:

  • Conduct risk assessment
  • Implement security measures
  • Set up supply chain security
  • Create incident response plan
  • Perform regular cybersecurity awareness and training
  • Perform regular monitoring and reporting
  • Plan and perform regular audits
  • Document processes (including DRS, BCP etc)
  • Maintain compliance through continuous review and improvement

Who should be interested?

As implementing NIS 2 requirements impacts the business as a whole, interest should not be limited to IT: technology in general, as well as business and operations departments, are all affected. From the employees' perspective, they will be required to participate in cybersecurity awareness training. In other words, NIS 2 impacts the whole organization.

How to define workstation and server security

We can define a workstation as a desktop or laptop computer which is physically available to its user. A server, on the other hand, is a computing entity intended to offload workstation tasks and provide multi-user capabilities. A server can therefore also be a virtual machine or a system container instance (such as LXC).

The security concepts for workstations and servers are basically the same, as the two share many similarities. Both run an operating system with some kind of kernel inside. Both run system-level software along with user-level software. Both are vulnerable to malicious traffic, software and incoming data, especially in the form of websites. There is one major difference, affecting workstation users the most: the higher variability of tasks performed on the computer. On the other hand, even though server tasks are less variable, the hidden nature of server instances can lead to a lack of visibility of otherwise obvious threats.

So, both workstations and servers should run either an EDR (Endpoint Detection and Response) agent or anti-virus together with anti-rootkit software. Drives should be encrypted with LUKS (or BitLocker in the case of Windows). Users should work on least-privileged accounts, avoid connecting to unknown wireless networks and avoid inserting unknown devices into computer ports (such as USB devices, which could be keyloggers, for instance).

Prepare 200 LXC containers on Proxmox box

Below you will find how to create 200 LXC containers for testing purposes and then, using Ansible, how to install and run anti-rootkit and anti-virus software, rkhunter and ClamAV respectively. Why test on that many containers, you may ask? With automation it is necessary to verify how the remote hosts perform as well as how we identify the automation results on our side. In our case those 200 containers will be placed on a single Proxmox node, so it is critically important to check whether it can handle that many of them.

Ansible gives us the ability to automate work by defining “playbooks”, which are groups of tasks using various integration modules. Aside from running playbooks you can also run ad-hoc commands without file-based definitions, for instance by using the shell module to send commands to remote hosts, as shown below. There is a wide variety of Ansible extensions available.
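
For example, a quick ad-hoc command against an inventory file (the file name and the command are purely illustrative):

ansible all -i hosts.txt -m shell -a "uptime" -u root -o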

System preparation

In order to start using Ansible with Proxmox you need to install the “proxmoxer” Python package. To do this, pip (the Python package installer) is required.

apt update
apt install pip
pip install proxmoxer

To install Ansible (in Ubuntu):

sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible
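
The container-creation playbook below uses the community.general.proxmox module; if your Ansible installation does not already include the community.general collection, it can be installed with ansible-galaxy:

ansible-galaxy collection install community.general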

Then, in /etc/ansible/ansible.cfg, set the following option, which skips the host key check during SSH connections.

[defaults]
host_key_checking = False

Containers creation

Next, define a playbook for container creation. You need to pass the Proxmox API details, your network configuration, the disk storage and the name of the OS template of your choice. I have used Ubuntu 22.04, which is placed on the storage named “local”. My choice for the target container storage is “vms1”, with 1 GB of storage for each container. I loop through IDs using range(20, 221).

The inventory for this one should contain only the Proxmox box on which we are going to install 200 LXC containers.
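
A minimal inventory sketch for this step (the address is an example, match it to your Proxmox host):

# inventory.ini
proxmox-box ansible_host=192.168.2.10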

---
- name: Proxmox API
  hosts: proxmox-box
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'
  serial: 1
  tasks:
  - name: Create new container with minimal options
    community.general.proxmox:
      node: lab
      api_host: 192.168.2.10:8006
      api_user: root@pam
      api_token_id: root-token
      api_token_secret: TOKEN-GOES-HERE
      password: PASSWORD-GOES-HERE
      hostname: "container-{{ item }}"
      ostemplate: 'local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst'
      force: true
      disk: "vms1:1"
      netif:
        net0: "name=eth0,gw=192.168.1.1,ip=192.168.2.{{item}}/22,bridge=vmbr0"
      cores: 2
      memory: 4000
    loop: "{{ range(20, 221) }}"

Then run this playbook to create the containers:

ansible-playbook containers-create.yml -i inventory.ini -u root

Start and configure containers

To start the newly created containers, use a shell loop with the pct command (run it on the Proxmox box):

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct start $i; 
  echo $i; 
done

To help yourself with generating IP addresses for your containers (hosts.txt will later be used as the Ansible inventory), you can use “prips”:

apt install prips
prips 192.168.2.0/24 > hosts.txt

For demo purposes only: next, enable SSH login for the root user, as it is our only user so far and it cannot log in by default. In day-to-day operations you should use an unprivileged user instead. Use a shell loop and the “pct” command:

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config; 
  echo $i; 
done

for i in `pct list | grep -v "VMID" | cut -d " " -f1 `; 
do 
  pct exec $i -- service ssh restart; 
  echo $i; 
done

Checkpoint: so far we have created, started and configured 200 LXC containers, ready for further software installation.

rkhunter: anti-rootkit software deployment

You may ask whether anti-rootkit software is a real-world use case. Definitely it is. From my personal experience I can say that even when using (or rather, especially when using) well-known brands for your systems layer, such as public cloud operators, you can face the risk of open vulnerabilities. Cloud operators, like any other digital service providers, often rely on content from third-party providers, so effectively the quality and security level is only as good as what those third parties deliver. You may receive outdated and unpatched software, open user accounts and so on. This can lead to system breaches, which in turn can lead to data theft, ransomware, spyware, cryptocurrency mining and more.

There are similarities between anti-rootkit and anti-virus software. rkhunter is much more targeted at specific use cases: instead of checking hundreds of thousands of virus signatures, it looks for hundreds of well-known signs of rootkits being present in your system. You could say it is a specialized form of anti-virus software.

Installation of anti-rootkit

First install rkhunter with the following playbook:

---
- name: install rkhunter
  hosts: all
  tasks:
    - name: Install rkhunter Ubuntu
      when: ansible_distribution == "Ubuntu"
      ansible.builtin.apt:
        name: rkhunter
        state: present
    - name: Install epel-release CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: epel-release
        state: present
    - name: Install rkhunter CentOS
      when: ansible_distribution == "CentOS"
      ansible.builtin.yum:
        name: rkhunter
        state: present

Execute it with Ansible:

ansible-playbook rkhunter-install.yml -i hosts.txt -u root

Scanning systems with anti-rootkit

And then scan with rkhunter:

---
- name: Run rkhunter
  hosts: all
  tasks:
    - name: Run rkhunter
      ansible.builtin.command: rkhunter -c --sk -q
      register: rkrun
      ignore_errors: true
      failed_when: "rkrun.rc not in [ 0, 1 ]"

Execute it with Ansible:

ansible-playbook rkhunter-run.yml -i hosts.txt -u root

To verify the results, it is much easier to run a separate ad-hoc check using the ansible command instead of ansible-playbook, which runs playbooks:

ansible all -i hosts.txt -m shell -a "cat /var/log/rkhunter.log | grep Possible | wc -l" -u root -f 12 -o

Results interpretation and reaction

What if you see some “Possible rootkits”? First of all, stay calm and follow your incident management procedure, if you have one. For reference, here is the output of the check above on a clean environment:

192.168.2.23 | CHANGED | rc=0 | (stdout) 0
192.168.2.31 | CHANGED | rc=0 | (stdout) 0
192.168.2.26 | CHANGED | rc=0 | (stdout) 0
192.168.2.29 | CHANGED | rc=0 | (stdout) 0
192.168.2.24 | CHANGED | rc=0 | (stdout) 0
192.168.2.27 | CHANGED | rc=0 | (stdout) 0
192.168.2.22 | CHANGED | rc=0 | (stdout) 0
192.168.2.28 | CHANGED | rc=0 | (stdout) 0
192.168.2.21 | CHANGED | rc=0 | (stdout) 0
192.168.2.20 | CHANGED | rc=0 | (stdout) 0
192.168.2.25 | CHANGED | rc=0 | (stdout) 0

If you do not have such a procedure, then follow the basic escalation path within your engineering team. Before isolating the possibly infected system, first check whether it is a false-positive alert. There are plenty of situations in which tools like rkhunter will detect something unusual: it can be a Zabbix proxy process with an unusual memory layout, or a script replacing a basic system utility such as wget. However, if rkhunter finds a well-known rootkit, you should shut the system down, or at least isolate it, or take whatever other action you have planned for such situations.
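
If a finding turns out to be benign, rkhunter lets you whitelist it in its configuration. A minimal sketch (the entries below are illustrative assumptions; match them to the warnings you actually see in /var/log/rkhunter.log):

# /etc/rkhunter.conf
SCRIPTWHITELIST=/usr/bin/egrep
SCRIPTWHITELIST=/usr/bin/fgrep
ALLOWHIDDENFILE=/etc/.java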

If you find a single infection within your environment, there is a high chance that other systems are infected as well, and you should be ready to scan everything reachable from it, especially if you have password-less connections between your servers. For more about possible scenarios, look at the MITRE ATT&CK knowledge base and framework.

ClamAV: anti-virus deployment

What is the purpose of having anti-virus software on your systems? Similar to anti-rootkit software, an anti-virus utility keeps the system safe from common threats like malware, adware, keyloggers and so on. However, it has far more signatures and scans everything, so a complete scan takes a lot longer than with anti-rootkit software.

Installation of anti-virus

First, install ClamAV with the following playbook:

---
- name: ClamAV
  hosts: all
  vars:
    ansible_ssh_common_args: '-o ServerAliveInterval=60'

  tasks:
     - name: Install ClamAV
       when: ansible_distribution == "Ubuntu"
       ansible.builtin.apt:
         name: clamav
         state: present
     - name: Install epel-release CentOS
       when: ansible_distribution == "CentOS"
       ansible.builtin.yum:
         name: epel-release
         state: present
     - name: Install ClamAV CentOS
       when: ansible_distribution == "CentOS"
       ansible.builtin.yum:
         name: clamav
         state: present

Then execute this playbook:

ansible-playbook clamav-install.yml -i hosts.txt -u root

Each host with ClamAV installed also gets the clamav-freshclam service, which updates the virus signature databases locally. The official mirrors enforce rate limits, so it is suggested to set up a private mirror using the “cvdupdate” tool. If you leave the default configuration as it is, all hosts may ask the official mirrors at the same time and you will be blocked for some period of time. If your infrastructure spans multiple providers, you should go for multiple private mirrors.
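
A rough sketch of the private-mirror setup (the mirror hostname is an assumption, and you need to publish the downloaded files with a web server of your choice):

pip install cvdupdate
cvd update        # download the main/daily/bytecode databases locally
# serve the downloaded database directory over HTTP, then on every host point
# freshclam at it in /etc/clamav/freshclam.conf:
#   DatabaseMirror mirror.internal.example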

Scanning systems with anti-virus

You can scan either a particular directory or the complete filesystem. You could run the scan from a playbook, but you can also run it promptly using the ansible command without writing one. It seems that ClamAV, contrary to rkhunter, returns fewer warnings, so it is much easier to manually interpret the results without relying on return codes.

ansible all -i hosts.txt -m shell -a "clamscan --infected -r /usr | grep Infected" -v -f 24 -u root -o

You can also scan the whole filesystem while skipping the /proc and /sys directories, which hold virtual filesystems used for kernel and hardware communication.

clamscan --exclude-dir="^/proc" --exclude-dir="^/sys" -i -r /

It is possible to run ClamAV as a system service (daemon), however this is much harder to accomplish, as there may be difficulties with AppArmor (or a similar solution) and file permissions. It will also put load on your systems at unpredictable times, which is not exactly what we would like to experience. You may prefer to schedule scans with cron instead.
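
A minimal scheduling sketch using the ansible.builtin.cron module (the weekly schedule, scan scope and log path are assumptions, adjust them to your environment):

---
- name: Schedule weekly ClamAV scan
  hosts: all
  tasks:
    - name: Weekly clamscan via cron
      ansible.builtin.cron:
        name: "weekly clamscan"
        weekday: "0"
        hour: "3"
        minute: "30"
        job: 'clamscan --infected --recursive --exclude-dir="^/proc" --exclude-dir="^/sys" / >> /var/log/clamscan-weekly.log 2>&1'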

Please note: I will not tell you to disable AppArmor, as that would conflict with NIS 2. On the contrary, I encourage you to learn how to deal with AppArmor and SELinux, as they are required by various standards such as DISA STIG.

To run the ClamAV daemon you need the main virus database present on your system. If it is missing, the service will not start, which ties the daemon directly to the freshclam service:

○ clamav-daemon.service - Clam AntiVirus userspace daemon
     Loaded: loaded (/lib/systemd/system/clamav-daemon.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/clamav-daemon.service.d
             └─extend.conf
     Active: inactive (dead)
  Condition: start condition failed at Mon 2024-09-02 15:24:40 CEST; 1s ago
             └─ ConditionPathExistsGlob=/var/lib/clamav/main.{c[vl]d,inc} was not met
       Docs: man:clamd(8)
             man:clamd.conf(5)
             https://docs.clamav.net/
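
If the daemon refuses to start because of the missing database, one way to bootstrap it (assuming the host can reach your mirror or the official one) is to run freshclam manually once:

systemctl stop clamav-freshclam    # release the update lock
freshclam
systemctl start clamav-freshclam clamav-daemon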

Results interpretation and reaction

Running clamscan gives us sample results like these:

Using /etc/ansible/ansible.cfg as config file
192.168.2.33 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.23 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.28 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.42 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.26 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.29 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.38 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.45 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.40 | CHANGED | rc=0 | (stdout) Infected files: 0
192.168.2.32 | CHANGED | rc=0 | (stdout) Infected files: 0

As this is a manual scan, it is straightforward to identify possible threats. In case of an automatic scan or integration with Zabbix, you will need to learn what clamscan can output, just as with rkhunter.
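
For such integrations, one simple approach (just a sketch, with the scan path taken from the example above) is to extract the infected-file count from the clamscan summary:

clamscan --infected -r /usr | awk -F': ' '/^Infected files/ {print $2}'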

Conclusion

Automation with Ansible can greatly help in deploying anti-rootkit and anti-virus software, here rkhunter and ClamAV respectively. These tools will certainly increase the level of security in your environment, provided they cover all the systems you have up and running. Automation itself is not required by NIS 2 directly, however it pays off for future use.

Further reading

DNS privacy issues

Until recently I thought that having DNS subdomain entries provides enough obscurity to be reasonably secure. If your DNS server does not allow zone transfers, then any subdomains should be hidden from public sight. Transfers, if enabled (or rather misconfigured), can be made with:

dig -t axfr example.com

The second thing is querying with the ANY type, although it does not actually mean “all”:

dig example.com any

So, with transfers disabled and no private entries exposed when querying for ANY, you would think you are on the safe side. That is actually wrong. There are three options on the table:

  • Someone runs a crawler and scrapes websites for domain names; there are probably plenty of such systems, as I see them quite often in HTTP server logs
  • Someone hacked your network perimeter and replaced your DNS addresses with their own; this affects all the clients connected to such a network if the attacker can force such traffic. This is of course a malicious and intrusive procedure, not something happening every day.
  • You are using a public/private/provider DNS server and it is saving your requests and building a database. It could be a DNS forwarder, a resolver, or anything else in the DNS query chain with a similar configuration.

As far as I know, for most domains there is no possibility of transferring or exposing too much with “ANY”. Not every domain has ever appeared on another website where it could have been automatically crawled. So is everyone hacked, or are the most popular public DNS servers involved in building domain list databases? There are plenty of subdomains, or even domains, that are far too complicated to be guessed. I do not think this information leaks from domain registrars, but there is a chance.

Then there is domainsproject.org. They say they use crawling and DNS checks, but I do not even bother to check their code as it seems to be fugazi (fake). How on earth could they check for some random text put into subdomains? It surely comes from DNS queries that should have stayed safely on those servers. Fortunately the list does not include every configured subdomain.

Today this thinking should change a little. If you put something on the internet, it is not safe or hidden by default. Maybe I was just too naive and took for granted that people running public DNS servers care about privacy the same way I do.

MikroTik basic reconfiguration

In case you have access to a MikroTik device with no UI available, only CLI, you may want to know how to do some basic reconfiguration to gain access to the admin panel listening on the local network. So… to change the user password:

/user set admin password=mynewpassword

To see all the configuration:

export

To create PPTP VPN:

ip pool add name=PPTP-Pool ranges=192.168.99.10-192.168.99.200
ppp profile add change-tcp-mss=yes local-address=PPTP-Pool name=PPTP-Profile only-one=yes remote-address=PPTP-Pool use-encryption=yes dns-server=8.8.8.8,9.9.9.9
ppp secret add name=user1 password=YourPassword profile=PPTP-Profile
 interface pptp-server server set authentication=chap,mschap1,mschap2 default-profile=PPTP-Profile enabled=yes

To adjust firewall for PPTP VPN:

ip firewall filter add chain=input comment="PPTP VPN" dst-port=1723 protocol=tcp
ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
interface pptp-server add name=pptp-user1 user=user1

To enable HTTP web configuration:

ip service print
ip service set www port=81
ip service set www address=0.0.0.0/0

Now, once you connect through the PPTP VPN to the remote host, you can access the UI. But if you set 0.0.0.0/0 as the address, you may also be able to connect to it using the public address.
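
To avoid exposing the panel publicly, you may prefer to restrict the www service to the VPN pool instead (the subnet below is the pool from the example above):

ip service set www address=192.168.99.0/24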

KeeneticOS basic reconfiguration

Suppose you would like to have a PPTP VPN server, but no such component is installed. First things first: for such obsolete hardware and software, keep in mind that accessing it may require allowing old host key algorithms and ciphers in your SSH client. Then:

components install vpnserver
components commit
# now it will reflash software and reboot
service vpn-server
no isolate-private
# here you apply regular PPTP configuration (see my other posts)
# ...
system configuration save
copy running-config startup-config

If you have other VPN types enabled, it is better to disable them, as you will be using PPTP and not SSTP or OpenVPN/WireGuard. Now, for HTTP access: most of the time it will be configured for local network access only. We can change this:

ip http security-level public
ip http port 81

Changing the HTTP UI port to something other than the default gives us the ability to run another service on that port and do port forwarding into the local network.

Cowrie Honeypot

Until recently I did not know that the honeypots I encounter are based on Cowrie. It does not change much, to be honest. I have already seen almost 100 of these while randomly exploring things on the internet. My first thought, by pure intuition, was that they are honeypots. You enter a system, run ps to list processes, and see a broad range of things like:

  • ejabberd (erlang)
  • nginx worker process
  • VBoxService
  • mysqld

But on the other hand you are unable to run top or other commands which are normally available on real systems. Then it comes to your mind that there is something wrong. How naive to think that someone or something would be too dumb to notice it is fake…

Anyway, I started thinking: why on earth would someone deploy Cowrie or any other honeypot setup? In theory, by using honeypots you attract attackers to enter and play around. You can grab passwords, malware files and the commands they try to use, and this way you can build your anti-malware solutions. But it would require lots of such systems deployed, and that is surely the case.

The second use case is when such a honeypot is deployed on a private network and becomes the first target of a person or system entering your perimeter. The intruder spends time and effort hacking a fake system, losing interest in other things, which gives you time to identify the threat and properly mitigate it by applying patches and reconfiguring systems.

Further reading:

https://github.com/cowrie/cowrie

SSH access using Ruby to outdated servers

If for some reason you need to connect over SSH to an outdated server using Ruby code, you are in the right place. But first, a little piece of theory. There are four configurable things in OpenSSH concerning its security configuration:

  • KexAlgorithms: per-connection keys (key exchange)
  • HostkeyAlgorithms: public key to identify server to clients
  • Ciphers: connection encryption
  • MACs: message authentication codes

As time passes, various algorithms become deprecated and are known to be vulnerable. However, there are tons of servers on the internet and on private networks which still use these old algorithms. With a regular SSH client, as well as with net-ssh Ruby client code, you will not be able to connect to such servers; instead you will receive error messages saying the connection cannot be established.

There is a solution for that.

First:

:host_key => "+ssh-dss"

Then:

append_all_supported_algorithms: true

With these two additional options passed to the Net::SSH.start method, you will be able to connect to most of the servers running out there. There is still some chance you may need to adjust things further, for example for some old Cisco hardware, but you will cover 99.99% of cases. I hope you find it useful, whether you are administering an old networking rig or just playing around with various things…
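
A minimal sketch putting both options together (the host, user and credentials are placeholders; it assumes the net-ssh gem is installed):

require 'net/ssh'

Net::SSH.start('192.0.2.10', 'admin',
               password: 'secret',
               host_key: '+ssh-dss',
               append_all_supported_algorithms: true) do |ssh|
  # run a simple command to confirm the session works
  puts ssh.exec!('uname -a')
end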

KeeneticOS PPTP VPN server

If you own this router, you might be interested in how to set up a VPN server. I am personally interested in how to do this using the CLI only. It looks easy, however there is one odd setting you should be aware of. To start with, it will be a PPTP VPN server; as far as I know KeeneticOS can also set up OpenVPN and possibly WireGuard, but as always PPTP is the easiest to configure. Start with logging into the box. It might sound obvious, but there is plenty of hardware that requires now-unsupported algorithms and ciphers:

ssh  -o "HostKeyAlgorithms=+ssh-dss" admin@x.x.x.x -o"KexAlgorithms=+diffie-hellman-group1-sha1" -c aes256-ctr

Fortunately you can pass options to ssh client for HostKeyAlgorithms, KexAlgorithms and also ciphers (-c argument). This allows you to enter the box. Next:

vpn-server
mtu 1400
system configuration save
exit
service vpn-server
vpn-server interface YOURINTERFACE
vpn-server pool-range x.x.x.x y

Now we are close, but one odd thing:

no isolate-private
system configuration save
copy running-config startup-config

A word of explanation here. We just set up the server on an interface with a given pool-range, which is the client IP range used when connecting to the VPN. We connect using the same username and password used to access the box via SSH. The service command enables the VPN server (or whichever service you are working on). The last thing is to disable network isolation, which we do by specifying “no” before isolate-private. In the end we save the configuration and copy the running configuration into the startup configuration. We are done here.

PPTP VPN on Wive-NG router

On a Wive-NG router, when talking about VPN, you have either PPP or WireGuard. As PPP is listed first, I tried it before WireGuard. Here are my thoughts. Configuration is pretty simple: in the router menu, go to Services – VPN Server and select the PPTP protocol. In the Users section create a new user with a login and password, and that is all from the server side.

From the client side it is also pretty simple. First install pptp-linux:

sudo apt install pptp-linux
cd /etc/ppp/peers
touch PPTP

The next thing is to fill the configuration file with settings as follows:

pty "pptp x.x.x.x --nolaunchpppd --debug"
name USERNAME
password PASSWORD
remotename PPTP
require-mppe-128
require-mschap-v2
refuse-eap
refuse-pap
refuse-chap
refuse-mschap
noauth
debug
persist
maxfail 0
defaultroute
replacedefaultroute
usepeerdns

With this configuration it is time to activate connection:

chmod 600 /etc/ppp/peers/PPTP
pon PPTP

To disable connection:

poff PPTP

There is one downside to using it. It creates several routes, of course, but replacing the default route prevents accessing the client from outside, so if you set it up on a remote host you will instantly lose the connection. On a local machine this will not be a problem.
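
One possible workaround (a sketch only; the interface name ppp0 and the subnet are assumptions matching a typical pool) is to drop the defaultroute and replacedefaultroute options from the peers file and route only the VPN subnet:

pon PPTP
ip route add 192.168.99.0/24 dev ppp0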

EcoRouter 2008 change user password

The probability that you own or operate an EcoRouter 2008 with EcoBNGOS is very low. However, just in case you have to work with one, it is useful to know at least some basic commands. I chose the user password change, as it is the most important thing right after setup, so:

enable
configure terminal
username admin
password YOURPASSWORD
write

After this, log out and log in again to check that it works. On some routers there can be a problem with insufficient disk space to save the configuration change, which is why you should also verify that the change has been applied properly.