One problem that all security professionals face is staying ahead of attacks. Honeypots help in this regard by giving an insight into how ne'er-do-wells attack vulnerable systems. I therefore set out to create a honeypot system that could be deployed to any target in a secure and reproducible manner.
The two requirements were to automate the installation of the honeypot software and to integrate the logging into a central system where the logs could be analysed.
I therefore started building some Ansible playbooks and roles to do the automation, and used an ELK stack (Elasticsearch, Logstash and Kibana) for the centralised analysis. The following Ansible roles have been built so far:
- hardening - Sets up iptables and the logging chains; required by all other roles
- elasticsearch - Installs Elasticsearch and Kibana; depends on the nginx and hardening roles
- logstash-server - Installs Logstash and the translate plugin; depends on the hardening role
- nginx - Installs NGINX with a configuration that only serves Kibana; requires the hardening role
- filebeat - Installs Filebeat and configures it to send logs to Logstash; requires the hardening role
- metricbeat - Installs Metricbeat and configures it to send metrics to Logstash; requires the hardening role
- cowrie - Installs the Cowrie SSH and Telnet honeypot, configures Filebeat to send its logs to Logstash, and sets up iptables to redirect the privileged ports to unprivileged ones
- hp-telnet - Installs a simple Telnet honeypot which logs only login attempts, configures Filebeat to send its logs to Logstash, and configures iptables to forward the ports
The diagram below shows how this all hangs together:
From the diagram above you can see that Cowrie logs to Filebeat, which ships to Logstash, which in turn writes to Elasticsearch. The communication from Filebeat/Metricbeat to Logstash is encrypted with TLS. To encrypt and authenticate the connection, a PKI is needed, whereby the Logstash server has a valid certificate and each honeypot client also has one. All certificates are issued from a self-signed CA certificate, which the Ansible scripts also deploy.
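As an illustration, the Filebeat side of the TLS setup looks roughly like the fragment below. The hostname server.generic.local comes from the configs described later; the port and the /etc/filebeat/ssl paths are assumptions for this sketch, and the real values are templated by the filebeat role:

```yaml
# Sketch of a Filebeat output stanza for TLS to Logstash.
# Port 5044 and the /etc/filebeat/ssl paths are illustrative.
output.logstash:
  hosts: ["server.generic.local:5044"]
  # CA used to verify the Logstash server's certificate.
  ssl.certificate_authorities: ["/etc/filebeat/ssl/ca.crt"]
  # Client certificate and key used to authenticate this honeypot.
  ssl.certificate: "/etc/filebeat/ssl/honeypot1.crt"
  ssl.key: "/etc/filebeat/ssl/honeypot1.key"
```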
The diagram shows Cowrie as the honeypot software, but it can be replaced with any honeypot and the basic principles will stay the same: as long as Filebeat can read the output from the honeypot software, it can fit into the system. Hopefully the diagram also makes clear why the hardening role is required by almost all the other roles: it sets up iptables, which all the other roles depend on. iptables is especially important for running the honeypots under an unprivileged user, as it is used to redirect the privileged TCP ports to unprivileged ones. The hardening role also sets up logging, meaning all iptables blocks and allows are logged and sent back to the central server.
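As a hedged sketch, the port redirection and logging might look like the rules below. The port numbers, chain name and log prefix are illustrative; the real rules are applied by the hardening and honeypot roles:

```shell
# Illustrative only: redirect inbound SSH (22) to a honeypot listener on an
# unprivileged port (2222), so the honeypot can run without root.
iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222

# Illustrative logging chain: log packets so the events reach syslog,
# where Filebeat can pick them up and ship them to the central server.
iptables -N LOGGING
iptables -A INPUT -j LOGGING
iptables -A LOGGING -m limit --limit 5/min -j LOG --log-prefix "iptables: "
```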
Ansible version 2.1 or above is required due to the iptables redirect tasks.
Ansible roles and playbooks can be found here:
Creating the PKI
The PKI needs to be created so the nodes can talk to the central server. I’ve included some bash scripts that create the PKI (all scripts are located in the ca folder).
This creates the PKI structure under /root/ca
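The CA bootstrap that the ca scripts perform is roughly equivalent to the sketch below. The flags and the CA subject name are illustrative and may differ from the actual scripts; a relative directory is used here so the sketch can run unprivileged, whereas the real scripts work under /root/ca:

```shell
# Sketch of the root CA bootstrap; the real scripts use /root/ca.
CA_DIR="${CA_DIR:-./ca-demo}"
mkdir -p "$CA_DIR/certs"

# Generate a self-signed root CA key and certificate.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CA_DIR/root-ca.key" -out "$CA_DIR/root-ca.crt" \
  -days 3650 -subj "/CN=Honeypot Root CA"
```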
The CA certificate can be found at /root/ca/root-ca.crt and needs to be copied to:
./ansible/roles/filebeat/files/ssl/ca.crt
./ansible/roles/logstash-server/files/ssl/ca.crt
./ansible/roles/metricbeat/files/ssl/ca.crt
The server certificate needs to be created (currently the name is hardcoded to server.generic.local in all configs, and the certificate needs to match this CN).
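Issuing the server certificate might look roughly like the following sketch. The flags and relative paths are illustrative (the real scripts work under /root/ca), but the CN must be server.generic.local to match the configs:

```shell
# Sketch of issuing the server certificate; paths and flags are illustrative.
CA_DIR="${CA_DIR:-./ca-demo}"
mkdir -p "$CA_DIR/certs"

# Ensure a root CA exists (normally created beforehand by the ca scripts).
[ -f "$CA_DIR/root-ca.crt" ] || openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CA_DIR/root-ca.key" -out "$CA_DIR/root-ca.crt" \
  -days 3650 -subj "/CN=Honeypot Root CA"

# Key and signing request for the Logstash server; the CN must match
# the name hardcoded in the configs.
openssl req -newkey rsa:2048 -nodes \
  -keyout "$CA_DIR/certs/server.key" -out "$CA_DIR/certs/server.csr" \
  -subj "/CN=server.generic.local"

# Sign the request with the self-signed root CA.
openssl x509 -req -in "$CA_DIR/certs/server.csr" \
  -CA "$CA_DIR/root-ca.crt" -CAkey "$CA_DIR/root-ca.key" \
  -CAcreateserial -out "$CA_DIR/certs/server.crt" -days 825
```

The client certificates described below follow the same pattern, with the CN set to the honeypot's hostname instead.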
This creates /root/ca/certs/server.crt and /root/ca/certs/server.key which both need to be copied to:
All that remains is to create the individual client certificates. The name of each certificate needs to match the client's hostname, as the Ansible variable “ansible_hostname” is used to copy over these certs. In this case the hostname of the client is “honeypot1”.
This will create /root/ca/certs/honeypot1.crt and /root/ca/certs/honeypot1.key which need to be copied to:
./ansible/roles/filebeat/files/ssl/honeypot1.crt
./ansible/roles/filebeat/files/ssl/honeypot1.key
./ansible/roles/metricbeat/files/ssl/honeypot1.crt
./ansible/roles/metricbeat/files/ssl/honeypot1.key
Don’t forget that a new certificate should be created for each honeypot you deploy. The PKI is only used for encrypting and authenticating the communication between the honeypots and the central server, so no provision has been made for AIA or CRL components.
The hosts file first needs to be created with your own configuration; an example hosts file is located in ./ansible/hosts.
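As a sketch, a minimal inventory might look like the fragment below. The group and host names are hypothetical; use the example in ./ansible/hosts as the authoritative template:

```ini
# Hypothetical inventory layout; group and host names are illustrative.
[server]
server.generic.local

[honeypots]
honeypot1
```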
There are currently only two playbooks: ./ansible/server.yml, which deploys the server, and ./ansible/honeypot.yml, which deploys an SSH and Telnet honeypot.
First, deploy the server. You will be prompted for a username and password, which will be used to access Kibana:
ansible-playbook -i hosts -u ubuntu -k ansible/server.yml
What username to use for Elasticsearch access?: user
Password:
Once this is complete you can deploy the honeypot:
ansible-playbook -i hosts -u ubuntu -k ansible-honeypots/ansible/honeypot.yml
If using the Vagrantfile, you can deploy both from the control box by adding the “--private-key” parameter and pointing it at the generated SSH key.
Kibana can be accessed over HTTPS on the server. The username and password given when running the server playbook can be used at the Basic Auth prompt.
The following index patterns should be added to Kibana:
- logstash-* - Contains all log files received
- logstash-cowrie-* - Contains all logs created by the cowrie honeypot
- logstash-iptables-* - Contains all the iptables logs
- logstash-telnet-* - Contains all logs created by the hp-telnet role
- metricbeat-* - Contains all metricbeat logs and metrics
- nginx-* - Contains all nginx access and error logs generated on the server