Thursday, February 23, 2017

Setting up a Pentesting... I mean, a Threat Hunting Lab - Part 5

Up to this point, this setup might look familiar. However, what I believe takes any lab setup to the next level is having a central repository where logs generated during an attack can be stored, parsed, and analyzed. This is how you build real skills, because it lets you see exactly what to look for when hunting for adversaries in your network.

For the purpose of my next threat hunting series, I will be using an ELK stack to store native Windows and Sysmon logs from my compromised systems, and Winlogbeat to forward those logs to my basic stack. Later on, I will add other open-source projects such as Security Onion, ROCK NSM, or even AlienVault's OSSIM, and implement other applications, such as Kafka, to make my ingestion and distribution of data more robust.


I am confident that by now you know exactly how to create a new VM and add it to our Virtual LAN. For this post, we will need to stand up an Ubuntu Server 16.04 VM and install Elasticsearch, Logstash, and Kibana. In the past, I followed this tutorial to set up my ELK stack, and it has been a good reference since then. I also highly recommend following the Elastic Stack and product documentation.


Requirements


ISO

  • Ubuntu Server 16.04.1 LTS 
    • Set it up with only one network adapter and assign it to our Virtual LAN
    • At least 4GB RAM (Elasticsearch itself needs 2GB to start) - Thank you @Malwaresoup
    • Set your CD/DVD drive to your Ubuntu ISO
    • Boot it up and install your Ubuntu Server (pretty straightforward; go with the defaults)
    • A basic step-by-step here

Elastic Products




Setting up an ELK Stack


Elasticsearch


First of all, Elasticsearch and Logstash require Java, and, as you can see in figure 1 below, a fresh Ubuntu build does not come with Java packages installed by default. We can either install Java 8 from the official Oracle distribution or OpenJDK 8, which is available directly from Ubuntu's repositories. Do NOT install Java 9, since it is not supported.


Check your java version by typing:  java -version


Figure 1. Checking Java version.




Install openjdk-8-jre-headless : sudo apt-get install openjdk-8-jre-headless


Figure 2. Installing openjdk-8-jre-headless.




Figure 3. openjdk-8-jre-headless installed successfully. 




Check your Java version again by typing: java -version. You will see that you are now running openjdk version "1.8.0_121".



Figure 4. Checking Java version after installing OpenJDK.




Now, we can start installing our first application, Elasticsearch. Elasticsearch comes in different package formats such as zip/tar, deb, rpm, and docker. For this post, we are going to use the deb package, since it is recommended for Debian, Ubuntu, and other Debian-based systems according to the Elasticsearch installation guide.

To get started, download the public signing key and add it to our Ubuntu box's apt keyring. Elastic signs all of their packages with their own Elastic PGP signing key.


wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

  • -q (--quiet) : no output
  • -O (--output-document) : write the download to the given file; the trailing - means standard output, so the key can be piped straight into apt-key


Figure 5. Installing the Elastic public key.




Next, before installing Elasticsearch, we have to add the Elastic package definitions to our sources list. For this step, Elastic recommends having "apt-transport-https" installed already, or installing it before adding the Elasticsearch apt repository source list definition under /etc/apt/sources.list.d/

sudo apt-get install apt-transport-https


Figure 6. Installing apt-transport-https.




Add the Elastic package source list definition to your sources list. (This step will allow you to install Elasticsearch, Kibana, and Logstash directly.)

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list 


Figure 7. Adding Elastic packages to our source list.




We can now install the Elasticsearch Debian package, but only after updating our system first. (If you have any issues updating your system, make sure you run "sudo apt-get clean".)

sudo apt-get update && sudo apt-get install elasticsearch


Figure 8. Updating and installing Elasticsearch.




Figure 9. Updating and installing Elasticsearch.




For best practices, make sure you restrict outside access to your Elasticsearch instance (port 9200) so that no one can read the data stored in your database or shut down your Elasticsearch server through its HTTP API.

Start by editing your elasticsearch config located at /etc/elasticsearch/elasticsearch.yml


sudo nano /etc/elasticsearch/elasticsearch.yml

Figure 10. Editing elasticsearch config. 




Figure 11. Editing elasticsearch config.




Look for the "network.host" line in the Network Section and do the following:
  • Delete the # sign to enable the feature
  • Delete the IP address and type "localhost"
  • Type CTR+ X to Exit
  • Type Y to accept changes and ENTER to save it on the original config file. 
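
After the edit, the relevant line in /etc/elasticsearch/elasticsearch.yml should look something like this (a minimal sketch; the stock config's surrounding comments are omitted):

network.host: localhost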

Figure 12. Editing elasticsearch config.




Figure 13. Editing elasticsearch config.




Figure 14. Editing elasticsearch config.




Figure 15. Editing elasticsearch config.




Next, we will have to start the Elasticsearch service and set it to start automatically after reboots. We have two options:
  • SysV init
  • systemd

If you are installing this on a different distro, check what your system is using by default by running the following command:   ps -p 1


Figure 16. Checking whether you are using init or systemd.
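
On Ubuntu 16.04 this should point at systemd; the output typically looks something like:

  PID TTY          TIME CMD
    1 ?        00:00:01 systemd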




Let's set Elasticsearch to start automatically when our system boots with the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service


Figure 17. Setting elasticsearch service to start automatically after rebooting the system.




Start the elasticsearch service and check its status to confirm that it is running.


Figure 18. Starting the elasticsearch service and confirming that it is running.
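
Figure 18 shows this step; following the systemd pattern above, the commands are likely:

sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service

Once the service reports active (running), you can also sanity-check the HTTP API locally (an optional step, not shown in the original figures):

curl -XGET 'http://localhost:9200'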





Kibana


As mentioned earlier, we will be installing Elastic products by using their deb packages, and our sources list is already pointing to Elastic's package repository. Therefore, all we have to do is update our box (just in case) and install Kibana.

sudo apt-get update && sudo apt-get install kibana


Figure 19. Updating our system and installing Kibana.




Figure 20. Updating our system and installing Kibana.



Next, for best practices and future setups, I will show you how to set up a reverse proxy and create an account to access your Kibana web interface. 

We will have to apply the same approach we used for our Elasticsearch instance and bind the server to "localhost". Edit your Kibana config by doing the following:

sudo nano /etc/kibana/kibana.yml


Figure 21. Editing our kibana config file.




Look for the server.host: "localhost" line and do the following:
  • Delete the # sign to uncomment the line (the resulting line is shown below)
  • CTRL+X to exit
  • Type Y to confirm the changes and ENTER to save them to the original file.
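
After the edit, the line in /etc/kibana/kibana.yml should look something like this:

server.host: "localhost"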

Figure 22. Editing our kibana config file.




Figure 23. Editing our kibana config file.




As with Elasticsearch, let's set Kibana to start automatically when our system boots with the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service


Figure 24. Setting kibana service to start automatically after rebooting the system.




Start the kibana service and check its status to confirm that it is running.


Figure 25. Starting the kibana service and confirming that it is running. 
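
As with Elasticsearch, the commands behind figure 25 are likely:

sudo systemctl start kibana.service
sudo systemctl status kibana.service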




Install NGINX (Reverse Proxy)



sudo apt-get -y install nginx


Figure 26. Installing nginx.




Next, let's create an admin user to log on to our Kibana web interface.

sudo -v

echo "kibadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Type a password...


Figure 27. Creating an admin user. 




Figure 28. Creating an admin user.




We will have to create a new configuration for our nginx application, so first create a backup of the original one.

sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/original_backup_default


Figure 29. Backing up original nginx configuration. 




Check your system's IP address and note it down. You will need it for your new nginx configuration.


Figure 30. Getting the system's network interfaces information. 
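
Figure 30 appears to show this step; on Ubuntu 16.04 you can list your interfaces and addresses with either of these:

ifconfig
ip addr show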




Create a new nginx configuration

sudo nano /etc/nginx/sites-available/default


Figure 31. Creating a new nginx configuration. 




Copy the following text to your new configuration:

server {
    listen 80;

    server_name Your_own_Ubuntus_IPAddress;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}

Once you are done:
  • CTRL+X to exit
  • Type Y to confirm the changes and ENTER to save them to the new file.

Figure 32. Creating a new nginx configuration




Check the permissions of the file to make sure they match the original one (just in case). 

ls -la [config file]


Figure 33. Checking the new file's permissions.




Test your new configuration and restart your nginx service

sudo nginx -t

sudo systemctl restart nginx


Figure 34. Testing the new config and restarting the nginx service.
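
If the configuration is valid, nginx -t prints something like:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful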




Now just go to your favorite browser and type in the IP address of your Ubuntu system. Nginx will let you through to Kibana as long as you log on with the credentials you created earlier.


Figure 35. Logging on to Kibana.



After logging in, you will be presented with the "Configure an index pattern" page, which expects you to already have an index created or configured so you can start looking into the data stored in your Elasticsearch DB. However, we have not set that part up yet. We will have it ready after configuring Logstash and receiving logs from the computers in our domain. For now, you can just minimize or close that window.


Figure 36. Successfully logged on to kibana. 




Logstash


As we already know, in order to install another Elastic product, we just have to update our system and install it with the apt-get install command.

sudo apt-get update && sudo apt-get install logstash


Figure 37. Updating the system and installing logstash.




In order to secure the connection between our endpoints and our ELK stack, we need to generate SSL certificates. To get started, let's create the directories needed to store our certificate and its private key.

sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private


Figure 38. Creating directories to store the certificate and private key.




If you don't have a DNS setup that would let the servers you will gather logs from resolve your ELK server's IP address, you will have to add your ELK server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file. [Source]

sudo nano /etc/ssl/openssl.cnf


Figure 39. Opening openssl.cnf. 




Find the [ v3_ca ] section, add the new line shown in figure 41 (its likely form is sketched below), and save the file. Substitute the IP shown there with your own Ubuntu system's address.
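
Based on the source tutorial this step follows, the added line likely takes this form, placed under the existing [ v3_ca ] header (the value below is a placeholder for your ELK server's private IP):

[ v3_ca ]
subjectAltName = IP: your_ELK_server_private_IP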


Figure 40. Editing openssl.cnf




Figure 41. Editing openssl.cnf




Figure 42. Saving changes to openssl.cnf.




Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

cd /etc/pki/tls

sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


Figure 43. Generating the SSL certificate.




Next, let's create our custom Logstash configuration files. They live in /etc/logstash/conf.d and use Logstash's own JSON-like configuration syntax. The configuration consists of three sections: inputs, filters, and outputs.


To get started, we will create our input file. This file basically defines how Logstash will receive the logs being sent to our ELK stack.

Create the file: sudo nano /etc/logstash/conf.d/02-beats-input.conf


Figure 44. Creating our Input File.




Make sure your input file looks like figure 45 below. You can see that we are setting it to listen on port 5044 and to use our certificate and private key to handle the incoming traffic.


Figure 45. Creating our Input File.
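
A beats input with those settings would look something like this (a sketch matching the description above, using the certificate paths from the previous step):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}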




Create the Output file: sudo nano /etc/logstash/conf.d/50-elasticsearch-output.conf


Figure 46. Creating our Output File




Make sure your output file looks like figure 47 below. You can see that it starts with an if statement to validate that we are handling winlogbeat traffic before sending it to our Elasticsearch instance. One important thing to mention is the following line:

index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"

This line already creates an index for the data being sent to Elasticsearch, so there is no need to upload a winlogbeat template to our Elasticsearch instance. Save your file.


Figure 47. Creating our Output file.
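
An output section matching that description would look something like this (a sketch based on the text above; the conditional and the index line come from the post, the rest follows the standard beats-to-Elasticsearch pattern):

output {
  if [@metadata][beat] == "winlogbeat" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}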




Check if logstash is running. 


Figure 48. Checking if logstash is running.
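
Following the systemd pattern from earlier, figure 48 likely corresponds to:

sudo systemctl status logstash.service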




Start your logstash service. 


Figure 49. Starting the logstash service.
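
And figure 49 likely corresponds to enabling and starting the service, as we did for Elasticsearch and Kibana:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable logstash.service
sudo systemctl start logstash.service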



This is the perfect time to take a snapshot of your ELK Stack. It is a fresh install with all the applications running properly. 

Also, one important thing to remember is that if you go to any of your systems in your Virtual LAN and browse to your ELK stack's IP (as shown in figure 36 above), you still will not be able to create an index. This is because no logs are hitting your ELK stack yet. We have to configure a data shipper such as Winlogbeat or NXLog to send logs to our ELK stack, but most importantly, we need to generate meaningful and helpful logs on our endpoints. This is where Sysmon comes into play.

In our next post, I will show you how to install Sysmon on your endpoints with a custom configuration file that you can use to start out. The config will allow you to capture everything that Sysmon can, and you will just have to tweak it to filter out noise in your environment. In addition, I will show you how to install Winlogbeat on your endpoints to ship all your native Windows and Sysmon logs.



Feedback is greatly appreciated!  Thank you.




Update 3/1/2017: Added memory requirement for the ELK VM - at least 4GB RAM
Update 09/09/2017:

7 comments:

  1. Great post, thank you very much. Just got my lab setup!

     One thing I notice is after figure 28 the command you give references nginc instead of nginx.

     Reply: Thank you very much for pointing that out, J.Hall. I just fixed it. Also, I am glad you were able to get your lab set up too! That's awesome! I tried to provide as many details as I could. How long did it take you to follow all 6 parts of this series? Just curious. :) Thank you for the feedback!

  2. Excellent post! This is the most comprehensive and detailed post that I have found on the subject. A question: it was recommended to download Ubuntu Server 16.04 LTS, however the screenshots show Ubuntu Desktop. Are you using Desktop, or did you just install the desktop environment on the server?

     Reply: Thank you very much, RM Command. I am glad you find this series helpful. I would love to hear what you are testing in your lab. Also, yes, I installed the desktop environment on the server.

  3. This is very helpful, thank you very much!

     Reply: Glad to hear that it was helpful, thank you very much for the feedback :)