Getting Started with Threat Intelligence

Joe Partlow, CISO, is a guest blogger from ReliaQuest. He has been involved with InfoSec in some capacity for over 15 years, mostly on the defensive side, but he has always been fascinated by those cool kids on offense.

Current projects include mobile and memory forensics, SIEM optimization, disaster recovery and business continuity planning. Joe has experience in many different business verticals including e-commerce, healthcare, state/local government and the Department of Defense.

What is Threat Intelligence?

Threat intelligence seems to be the buzzword of the year, but what does it really mean? It’s much more than a list of bad IP addresses or URLs, and in most cases the definition depends on who you ask.

However, most will agree that effective, timely intelligence is critical to proactively hunting down potential attackers and researching incidents.

Using threat intelligence has been standard operating procedure for the military for years, but it is a relatively new concept for most IT security teams in their incident response process and procedures.

How to Get It

Most vendors are now building the capability to import threat feeds sourced from their own internal R&D teams as well as from commercial and open-source third parties. Both have their pros and cons, but companies should take advantage of both if possible.

Open-source intelligence (OSINT) can be defined as any free or low-cost feed of malicious IP addresses or URLs made publicly available.

Some reliable sources include Malware Domains, Emerging Threats and the SANS Internet Storm Center (ISC). These are a great low-cost way to get started. However, there are some drawbacks:

  • Some lists are not updated on a regular basis.
  • They’re not necessarily focused on threats for a particular vertical (retail, financial, health care, utilities, etc.).
  • There’s typically no way to determine source or age of the entries.
  • They are found in a variety of formats that the user will have to parse or import themselves.

Commercial feeds (vendor- or third-party-supplied) are usually subscription based. They typically offer more integration options and can be set up to pull automatically into the vendor’s security appliance or application.

These feeds are also typically more consistent and provide more data in the feed as far as source, aging, frequency, etc.
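Since OSINT feeds come in many ad-hoc formats, a bit of glue code is usually needed before they can be imported. Here is a minimal sketch of normalizing a plain-text blocklist; the feed layout (one entry per line, `#` comments) is an assumption, not any particular source's format:

```python
# Hypothetical normalizer for a plain-text OSINT blocklist feed.
# Assumes one entry per line with '#' comments -- real feeds vary,
# so adapt the parsing to each source.
def parse_ip_feed(text):
    """Return the non-comment, non-blank entries of a feed."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if line:
            entries.append(line)
    return entries
```

A parser like this is also the natural place to record source and retrieval date, which, as noted above, most open feeds don't supply on their own.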

More than Just Feeds!

Commercial and OSINT feeds are a great way to get your hunt or incident response teams started, but they are only part of a much bigger picture.

Next-level proactive intelligence can come from any of the following sources with a bit of integration work, research and imagination:

  • Generic honeypots can be set up around the world in key locations to look for overall trends in attacks, methods or bad actors. Firewall or IDS/IPS geo-locations are helpful for active sources.
  • Honeypots can be used in your actual IP or DNS space to get even more targeted information against your organization. This carries a much higher value, but also a higher risk. So make sure the network segment is isolated.
  • You can also mine social networks for keywords of early indicators or intentions of compromise. Many times, attacks or breaches are mentioned on sites such as Shodan, Full Disclosure, Pastebin or Twitter days before they are publicized in mainstream media.
  • IRC bots can scan known underground forums in categories the organization considers risky in order to look for keywords or intent of malicious activity.
  • You can also set up passive DNS monitors to catch potential outbound callouts or exfiltration tunneling attempts. This can generate a ton of logs, but new domains or overly long URLs are the usual suspects in tracking down a compromised machine.
  • Run your own Tor exit node to get intelligence on protocols being used or source/destination information.

And there are many others, with new sources being discovered all the time!
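As a toy illustration of the passive DNS idea above, a monitor might flag query names that are unusually long, a common trait of tunneling. The threshold and data shapes here are purely illustrative:

```python
# Toy version of the passive-DNS heuristic mentioned above: flag
# query names that are unusually long, a common sign of DNS
# tunneling. The 60-character threshold is an arbitrary example.
def flag_long_queries(queries, max_len=60):
    """Return the query names longer than max_len characters."""
    return [q for q in queries if len(q) > max_len]
```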

Most importantly, make sure whatever threat intelligence you consume is timely and in a usable format for the incident response and hunt teams.

Many security applications and appliances on the market offer some sort of integration, but some SIEM tools still need custom integration work to make them usable.

Threat intelligence is many things to many people, but security teams should take advantage of everything they have the ability to consume depending on budget, development resources and time.

You never know when one piece of obscure intelligence can help you correlate an event investigation or proactively protect the organization.


0 Comments | Security


Cybersecurity in Asia: Keep Your Castle Safe

When it comes to a cyber-attack, it is no longer a question of if your company will be hacked but when. Companies from 2 to 10,000 employees will get hacked. There’s no question.

If you think that’s bad news, then consider this: in Asia Pacific, enterprises were expected to spend US $230 billion in 2014 to deal with cyber breaches, and it wasn’t enough. Organized crime accounted for US $138 billion in enterprise losses in the region, according to an IDC/NUS study.

Sixty percent of security breaches are due to compromised credentials. The threat won’t lessen as time goes on—cybercrime is profitable. In fact, it’s now more profitable than the illicit drug trade—and it’s a powerful magnet, attracting unscrupulous individuals, organized crime rings and even nation states interested in doing “bad” for profit.

With the advent of the third platform—mobile, social, big data, and cloud—the attack surface for cyber criminals is only getting larger. Security measures of yesterday are struggling to cope with the speed of mobile adoption and the way devices are now interconnected. Meanwhile, hackers are already onto the next stage of the game: it’s a game of cat and mouse, and we, unfortunately, are the mice.

Governments are also getting in the game, with a number of countries in Asia ramping up measures to counter cybersecurity threats. For example, in Singapore the Infocomm Development Authority of Singapore (IDA) introduced the Singapore National Cybersecurity Master Plan 2018 to provide the strategic directions to guide Singapore’s national efforts in enhancing cyber security for public, private and people sectors.

Singapore is by no means the only nation worried about cybersecurity: Japan, Indonesia and other countries in the region have also put forward initiatives to bolster efforts to address the clear threat cybersecurity poses.

This is the current threat landscape. It’s scary, but by no means is it impossible to address. In fact, Asia Pacific is leading the world in digital security, according to PwC.

Companies in the region are more likely to have an IT security strategy that is aligned to the needs of the business and to have a senior executive who communicates the importance of security.

The Asia Pacific region remains a leader in implementing strategic processes and safeguards for information security, setting the pace in numerous practices. As with any potential threat, being prepared to deal with attacks is just as important, if not more so, than preventing attacks in the first place.

Let’s put it this way: if a company with firewalls, anti-spam and antivirus all in place is a castle, it is well prepared for an attack that castles would naturally expect: armies with arrows, a catapult, hordes that can be repelled by ensuring they do not enter the castle.

What happens when the enemy evolves? In order for the company to keep its castle safe, it needs to change its mindset and understand that it is no longer dealing with the same enemy as before.

Changing Mindsets on Cybersecurity to Match the Evolving Threat Landscape

Today, the focus is on preventive technology. In the castle analogy, this would be the equivalent of the strong walls, narrow windows and a moat—anything in place to ensure that intruders can’t get in.

While it’s absolutely necessary to make sure you have adequate defenses on the outside, none of these preventative measures are able to do anything about aggressors that have already gotten into the castle.

The mean time to detect a threat and the mean time to respond are currently measured in months—hundreds of days—long after the damage has taken place and is too large or severe to salvage.

Once the detection and response times are closer together—within weeks, days, or ideally even hours—companies can meet the challenge of seeing their environments in real time and knowing when there is an intruder.

How can this be done? By not holding compliance up as a shield. 2014 was a year of major breaches—and many of the major breaches that took place last year happened at companies that considered themselves compliant with security standards.

Being compliant with regulation is not the same thing as being protected. Enterprises can no longer be satisfied with a “check the box” mentality. Regulation is a good start, but by no means does it comprehensively cover a company’s security measures.

Once companies move past the “check the box” mentality towards cybersecurity, they realize that the threats their businesses are facing aren’t necessarily ones that fit into a checklist or a framework. That’s why preventative measures are not enough.

Let’s go back to the castle analogy: you have a well-fortified castle but you’re not dealing with an enemy that’s knocking on your gate anymore. You’re looking at enemies that are wearing your uniforms, or drones that are attacking from above – a more sophisticated intruder that you can’t prevent from getting in. What can you do? You need to identify them, and respond to them in a timely manner, before they can deal your castle significant damage.

The Way Forward: Early Detection and Response

Organizations need to baseline their environment and determine who has access to which areas and what information. Can you see your environment? Are you aware of where your servers are and what’s on your servers? Where is your sensitive information? Who is allowed to access what information?

We help organizations create that baseline so they can see, with the help of analytics, if something is going wrong in real time. For example, if an employee’s credentials are compromised, any new activity from that account should be a red flag.
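The baselining idea can be sketched in a few lines. This is a deliberately simplified illustration (real user analytics weigh many more signals than host membership):

```python
# Minimal sketch of account baselining: record which hosts each
# account normally uses, then flag activity from hosts outside that
# baseline. Account and host names here are hypothetical.
def flag_anomalies(baseline, events):
    """baseline: account -> set of usual hosts; events: (account, host) pairs.
    Return the events that fall outside the baseline."""
    return [(acct, host) for acct, host in events
            if host not in baseline.get(acct, set())]
```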

Today, organizations need to invest in security intelligence and a set of security analytics tools. As services become more interconnected and more data is generated, there is an increasing need to shift to a combination of machine and user analytics. Let the machines analyze the thousands or millions of logs generated and identify the unusual occurrences.

This way, we can reduce the time to detect and the time to respond. These are the critical factors when it comes to cyber security: once you accept that attackers are going to get in, you need to kick them out before they can do any real damage.



0 Comments | Security


LogRhythm Challenge: Black Hat 2015

{Collaboration between Thomas Hegel and Greg Foss}

For Black Hat this year, Labs decided to try something new and put together a packet capture analysis challenge for the conference. The goal of the challenge was to find the secret launch codes for the fictional company, “Missiles R Us.”

Below, you will find the solution to the puzzle along with details on the Easter eggs hidden throughout.

The PCAP’s Distractions

  • Mozilla FTP server browsing and various file downloading
  • Streaming of this YouTube video
  • Downloading the Dropbox application (not actually using it)
  • Uploading of a .txt file containing useless assembly code
  • One post containing the base64-encoded string “so close, yet so far”
  • Another post with the string “This is a test…hmmmm”
  • Playing Star Wars over telnet

Figure 1: Star Wars TCP Stream


Figure 2: Star Wars over Telnet

The Solution

There was one last paste in particular: a large string of binary text. Following the encoding/decoding trend of the challenge, we must convert that binary to ASCII. Doing so provides the following string:


Now we have some base64-encoded data! Decoding that, we get:

Secret Launch Code:

Getting closer! At this point, we have a hex-encoded string (&#xNN; entities, to be specific). As the final step, we decode this string to ASCII, which gives us the following:

Secret Launch Code: 2g389a34!0297#

Easter Eggs

Hidden throughout the challenge were some Easter eggs. The first was hidden in plain sight within a comment field at the bottom of the HTML on the first page.


Figure 3: Easter Egg

This was another basic encoding challenge, somewhat similar to the actual solution. However, the encoding was a bit different. To reverse it, bring up your favorite encoder/decoder and take it apart. The first string is octal JavaScript encoded.


Once you decode this, you are left with a Unicode string.


Figure 4: Unicode (renders in HTML, so we went with a screenshot)

Which decodes to base64.


Which finally gives you a key…

key = 9ughgjw9241110x41

That, when entered into the scoreboard, does nothing. Its sole purpose is to throw challengers off and send them down a rabbit hole.


Figure 5: PCAP Easter Egg

In addition to the red herring mentioned above, there was a hidden game that was available if keywords such as “LogRhythm” or “Labs” were entered into the scoreboard. You can still play the game here:


Overall, we had a great turnout and want to thank everyone who participated in the game!


Figure 6: Metrics

Until next year…


0 Comments | Digital Forensics


Network Monitor as a Programmatic Intrusion Detection System

Detect Threats, Passively Identify Devices and Selectively Capture Packets

Network Monitor release 2.7.1 adds the ability to create custom scripting rules that run on every packet or flow, allowing automatic analysis of network metadata. This capability enables advanced intrusion detection—not just checking for certain byte sequences, but direct access to metadata from a full programming language.

For example, a rule can be created to match instances where the sender of an email uses an email address with a different domain from the one it was actually sent from. Another could save PCAP for a certain IP address, when a certain protocol is used, or only during off-hours. Rules could determine whether the traffic includes a series of suspicious behaviors before sending alarms to both the Network Monitor interface and the SIEM for further correlation.

Programmatic handling of network traffic is fairly complex, and this post will give a quick overview of how to create new analytics. The examples above only touch the surface, but the ability to perform these tasks is built on only a few major features: selecting metadata, setting custom metadata, alarming and syslog, and saving PCAP.

These scripts are written in the Lua programming language, and these features are basically functions that allow the user to plug into Network Monitor. These will be described in the following sections.

Selecting Metadata

Network Monitor’s core feature is recognizing and parsing thousands of metadata fields from hundreds of network applications. With that metadata, users can search key/value pairs using Lucene Query Syntax to find pretty much anything happening on their network (Network Monitor Quick Tips). Combining this power with a scripting language opens many possibilities.

Metadata is divided into categories, and it’s important to know the category when trying to select specific fields. There are two major sections: general metadata and extended metadata. General metadata is information that should be found in most flows. It will have its own function to retrieve it.

Name                  Function                      Datatype  Use
Session ID/Unique ID  GetUuid(dpiMsg)               int       Find a specific flow
Application           GetLatestApplication(dpiMsg)  string    Protocol/known application (i.e., http, dns, gmail)
Source IP             GetSrcIP4String(dpiMsg)       string    Filter on IP
Destination IP        GetDstIP4String(dpiMsg)       string    Filter on IP
Start Time            GetStartTime(dpiMsg)          int       Beginning of flow
End Time              GetEndTime(dpiMsg)            int       End of flow
Source MAC            GetSrcMacString(dpiMsg)       string
Destination MAC       GetDstMacString(dpiMsg)       string

Extended metadata falls into three subcategories based on datatype: strings, integers and longs. To determine which datatype a field is, the Network Monitor Help tab has a document under Help/Content/Managing Network Monitor/Deep Packet Analytics/Network Monitor Metadata fields, or use this link with your Network Monitor’s hostname.

To get a specific value for a field, the following functions will make sure you are selecting the correct datatype:

GetString(dpiMsg, protocol, field)
GetInt(dpiMsg, protocol, field)
GetLong(dpiMsg, protocol, field)

For example, local port_dst = GetInt(dpiMsg, 'internal', 'destport') will return the destination port for a flow.

To get all parsed field names available to a flow (those fields that have been observed and parsed), use the following functions:


Setting and Getting Custom Fields

Although Network Monitor can identify thousands of fields, it certainly doesn’t parse everything. With Lua rules, a user can set their own searchable fields.

This might be useful for parsing headers for obscure protocols, identifying devices, and including that information in the flow or adding suspicious tags to flows that violate certain behaviors.

The function for this is very simple: SetCustomField(dpiMsg, key, value), where the key is the field name, and the value is the data being set. Note that when searching for the field name, Network Monitor appends “_NM” to the field name in order to avoid naming conflicts.

To retrieve a custom field in another flow, use: {GetCustomField(dpiMessage, key)}. Note the curly braces: the return type will be a Lua table (i.e., an array).

Alarming, Syslog and Packet Capture

Being able to see flows where the rule’s conditions are met is the ultimate goal. There are three main methods for doing so: sending an alarm to the Network Monitor Alarms tab, sending syslog to another device (probably a SIEM), and saving the raw packets as PCAP.

To trigger an alarm and send syslog to the pre-configured source (again, probably a SIEM), use the TriggerUserAlarm function:

TriggerUserAlarm(dpiMsg, ruleEngine, rule_severity)

The alarm will then be visible in the Alarms tab. Use the Session ID to then find the specific flow that triggered the alarm. Likewise, alarms will be searchable in the SIEM.

Alarming from Lua rules

Saving PCAP is done by invoking the VoteForPacketCapture() function in a packet-scoped script. This will set the selected packet for capture. To download the PCAP, use the Capture or Analyze tabs and click on the Captured icon (for the Analyze tab, first make sure to set “Captured” as a field).

Note that alarms can only be triggered from flows, and PCAP can only be saved for packets. The distinction between these will be described in the next section.

Putting It All Together: Simple Examples

First, let’s quickly cover how to upload a rule. The scripts can be uploaded and managed under the Configuration/Deep Packet Analytics tab.

Configure and add Lua rules here.

Add a rule by clicking the “Choose File…” button. Notice the dropdown menu under “Scope.”

Window that pops up when adding a new rule.

Until now, we haven’t addressed the topic of when the scripts will be run. This is the scope, and Network Monitor has two: packets and flows.


Packets are individual units of combined network layer data. Rules set to run on packets will run very, very frequently. For example, a medium-sized organization might generate 10,000-30,000 packets per second during business hours. A bad rule at this level can easily take down a Network Monitor appliance, so any rules running at this level should be very streamlined and minimal.

Rules at packet level do have one major, exclusive feature not available to flows, however, so they are still useful. This is access to the raw data. With access to the data, a packet level rule can save that data in PCAP format before that data is discarded.

This example will save PCAP when a certain IP is seen as the source or destination. Note the match against the flow’s general metadata and the vote to capture the packet.

function packet_capture_ip (msg, packet)
  -- create and cache the IP matcher on first run
  -- (the IP is purely an example: google dns server)
  if (myIpMatch == nil) then
    myIpMatch = IpMatch:new()
    myIpMatch:SetIP4Src("")
  end
  -- if the packet's source or destination matches, save it as PCAP
  if (myIpMatch:MatchIP4SrcOrDst(msg)) then
    VoteForPacketCapture()
    return true
  end
  return false
end


Flows are a series of packets that make up a transport layer communication—for example, the conversation between a client and a server to download a webpage. Rules that trigger on flows occur much less frequently than packet rules—a medium-sized organization might expect 75–150 flows per second—so to reduce the chance of negatively affecting Network Monitor performance, most Lua rules should be classified as flow rules.

For this example, we’ll use some simple scripting logic to find non-DNS traffic on port 53. This may be a sign of a covert channel (e.g., malware or a malicious actor attempting to hide among legitimate traffic).

53 is the destination port, so we’ll start by getting that value and stopping the script if the flow isn’t using it. Then we check if the application is identified as DNS. If not, we have a protocol mismatch.

To notify a user that the mismatch was detected, two actions are taken. First, a custom metadata value is set to make the session easy to find in the Analyze tab (again, remember that “_NM” is appended to custom field names, so the Lucene query would be “proto_mismatch_NM:53”). Second, an alarm is triggered that will be visible in both Network Monitor and the SIEM.

function flow_proto_mismatch_53 (dpiMsg, ruleEngine)
  -- stop early if the flow isn't using destination port 53
  local port_dst = GetInt(dpiMsg, 'internal', 'destport')
  if port_dst ~= 53 then
    return false
  end

  -- applications we expect to see on port 53
  local apps = {dns=true, krb5=true}
  local my_application = GetLatestApplication(dpiMsg)

  -- anything else on port 53 is a protocol mismatch
  if not apps[my_application] then
    TriggerUserAlarm(dpiMsg, ruleEngine, 'medium')
    SetCustomField(dpiMsg, "proto_mismatch", '53')
    return true
  end
  return false
end

Flow States

Network Monitor can classify a flow as being in one of three stages: Final, Intermediate or Intermediate Final.

The important element here is the intermediate cutoff time. This can be set in Configure/Engine/Report on Long Running Sessions. Shorter lengths will increase the update rate but may affect performance. Most installations will work well with values between 60-300s. Any flow that lasts longer than this value will become “Intermediate.” This prevents a long-running flow from evading detection by never ending.

Use these three functions to determine the flow’s stage:

Function                    Stage               Description
IsFinalShortFlow(dpiMsg)    Final               Shorter flows that last for less time than the intermediate cutoff time.
IsIntermediateFlow(dpiMsg)  Intermediate        Flows that are not finished but have lasted longer than the intermediate cutoff time.
IsFinalLongFlow(dpiMsg)     Intermediate Final  Long flows that have ended.

For many rules, flow state can be ignored. But to filter the script on flow states, use a typical Lua pattern of returning false if the flow isn’t in the desired state.

if not (IsFinalShortFlow(dpiMsg)) then
  return false
end

Long Example

This example shows an effective rule for finding potential phishing emails: when the email address used has a domain different from the actual domain sending the SMTP message. Malicious actors can claim to be sending email as anyone—even someone inside your organization—but they won’t be able to fake the sending domain.

-- the domain in the sender's email address does not match the domain sending the email.
-- possible indicator of a phishing attack, although additional indicators are needed to confirm
-- eg:
-- SenderEmail:
-- SenderDomain:

function flow_smtp_sender_domain_mismatch (dpiMsg, ruleEngine)
  -- get/verify current application
  local app = GetLatestApplication(dpiMsg)
  if app ~= "smtp" then
    return false
  end

  -- get/verify sender domain
  local sender_domain = GetString(dpiMsg, "smtp", "sender_domain")
  if (sender_domain == nil or sender_domain == '') then
    return false
  end
  sender_domain = string.lower(sender_domain)

  -- get/verify sender email
  local sender_email = GetString(dpiMsg, "smtp", "sender_email")
  if (sender_email == nil or sender_email == '') then
    return false
  end

  -- parse/verify/save the domain from the sender email
  local at_pos = string.find(sender_email, '@')
  if (at_pos == nil) then
    return false
  end
  local sender_email_domain = string.sub(sender_email, at_pos + 1)
  if (sender_email_domain == '') then
    return false
  end
  sender_email_domain = string.lower(sender_email_domain)
  SetCustomField(dpiMsg, "sender_email_domain", sender_email_domain)

  -- check if sender's real domain matches their claimed domain (exclude gmail)
  -- alarm on mismatch
  if not string.find(sender_domain, sender_email_domain) then
    if (string.find(sender_domain, 'gmail') or string.find(sender_domain, 'google')) then
      return false
    end
    SetCustomField(dpiMsg, "sender_domain_mismatch", 'true')
    TriggerUserAlarm(dpiMsg, ruleEngine, 'medium')
    return true
  end
  return false
end

Programming Notes

Function Name and Initial Argument

Primary function names must be unique—two scripts with the same function name cannot be uploaded to the same Network Monitor. Because the main function can work on either a flow or a packet, system functions provided by LogRhythm start their names with either “packet” or “flow,” but user-added functions can use any Lua-compliant naming scheme. Also, there must be a space between the function name and the parentheses that contain the arguments.

The names for the initial arguments are somewhat arbitrary, but it’s a good idea to standardize them. For packets, the first argument is the packet message (‘msg’) and the second is the packet functions object (‘packet’). For flows, the first argument is again the message, though it is the Deep Packet Inspection message (‘dpiMsg’) rather than just a packet; ‘ruleEngine’ is the set of functions that can be used.


Debugging

Getting rules to work can be tricky. Fortunately, debug messages can be printed to log files at any stage of a script.

To do so, include require 'LOG' in your code before places where you wish to print. Then use INFO(debug.getinfo(1, "S"), "Test: " .. variable) to write out strings, where “Test” is a static string and “variable” is the dynamic value you wish to print.

For packet rules, this will print to the ProbeReader log (/var/log/probe/ProbeReader.log); for flow rules, to ProbeLogger (/var/log/probe/ProbeLogger.log). These two log files are also where errors will be found.

Use standard CentOS/Linux commands for reading the files (like “tail -F /var/log/probe/ProbeReader.log” to see it live).


Performance

When processing network traffic, speed and volume are critical considerations. It would not be a good idea to perform complex actions on every packet. Even in flows, make sure that complex operations occur only on heavily filtered traffic. This can be done by eliminating broad swaths of traffic early using if statements.

For example, start by filtering on values that will exclude the most traffic right off the bat, like application. Or, in the SMTP example above, note how the script ends (i.e., returns false) whenever something makes the next step pointless.

Future Features

Network Monitor 2.7.1 is the first release to make Lua scripting functionality publicly available to customers. Features will be added in future versions to allow for even greater capabilities.

This includes thread-safe writing to save script output to files or databases, setting custom alarm field names to have multiple alarms in one script, and setting the organization’s internal IP space for filtering.

Getting Help

The Help tab includes a large section on the Lua scripting features. Click on the Help tab, Managing Network Monitor in the Contents list, and then Deep Packet Analytics. Or use this link, substituting in your Network Monitor hostname.


none | Digital Forensics, Security


PSRecon – PowerShell Forensic Data Acquisition

Live incident response and forensic data acquisition is often a very manual and time-consuming process that leaves significant room for error and can even result in the destruction of evidence. Many people are involved when investigating an incident, which makes process consistency difficult. Evidence can be tampered with and altered in the short time frame between the identification of an issue and the interception of the suspected host or user. For this reason, electronic evidence can sometimes be thrown out of court due to possible tampering or the inability to prove its integrity.

To help fill this gap, Labs developed a live incident response and forensic data acquisition PowerShell script that uses only native Windows tools to gather evidence and system data in its current state. The script also incorporates account lockout and lockdown functionality to take suspect hosts offline and/or disable accounts within Active Directory following data acquisition.

Download Here  =>


Figure 1: PSRecon Banner

PSRecon gathers data from a remote Windows host using built-in PowerShell (v2 or later), organizes the data into folders, hashes all extracted data, and sends the data off to the security team in the form of an HTML report. This can then either be pushed to a share, sent over email, or retained locally.
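PSRecon itself is a PowerShell script, but the evidence-hashing step it performs can be illustrated language-neutrally. This sketch (the names and data structure are assumptions, not PSRecon's actual code) fingerprints each artifact so post-acquisition tampering is detectable:

```python
# Illustrative only: fingerprint every acquired artifact with SHA-256
# so any later modification of the evidence can be detected by
# re-hashing and comparing digests.
import hashlib

def hash_evidence(artifacts):
    """artifacts: name -> raw bytes. Return name -> SHA-256 hex digest."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in artifacts.items()}
```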


Figure 2: PSRecon basic data acquisition

The ability to lock down an endpoint can be useful when investigating a system infected with malware, especially when there is a risk that the malware will spread to a share or other critical systems within the enterprise. Sometimes the quickest and most effective way to stop the spread of malware is to simply knock the host offline until IT/Security can respond.

As an alternative to quarantining the host, PSRecon allows you to disable an Active Directory account following the acquisition of live forensic data. These options can be combined to turn an often manual process into a streamlined, easy method for remotely obtaining forensic data from a target host and quarantining the system from the network.


Figure 3: Example Email with Attached HTML Report

Within the report, you have an accurate view of a significant portion of the target host. One of my favorite aspects of the report is that everything is self-contained, making it easy to share, as there is no reliance on a centralized server. Even the images are encoded directly into the report’s HTML.


Figure 4: Report HTML Image Encoding

What’s more, if you have a team of incident response professionals, you can push the report out to all of them at the same time. With more eyes on the evidence, it is easier and faster to spot the threat, especially when combined with the data already gathered by the SIEM. Not only are we mitigating the threat and kicking off an investigation within minutes, we have also already obtained somewhat ‘forensically sound’ data from the remote host that will help responders better understand the full picture.

While on the topic of SIEM, this script integrates easily into LogRhythm to allow for its execution as a SmartResponse™. This will work with various AIE™ alarms or can be run ad hoc against local/remote hosts. In particular, any case where malware is observed is a key opportunity to pull forensic data and then quarantine the host. The SmartResponse™ also uses its own included version of PowerShell, as the remote host is normally in a compromised state, so the host’s PowerShell executable itself should not be trusted—one of the limitations of the open-source version of the tool.


Figure 5: SmartResponse™ and AIE™ Rule Integration

The current available SmartResponse™ actions are:

  • Gather Local Data and Send Report via Email / Push to Share / Pass Additional Arguments
  • Gather Remote Data and Send Report via Email
  • Gather Remote Data, including client email and Send Report via Email
  • Remote Lockdown and Quarantine
  • Disable AD Account and Host Lockdown

This works well with LogRhythm’s Case Management workflow as well, helping not only in creating a timeline of events but in gathering forensic data and performing automated defensive actions. Speaking of timelines, PSRecon writes its own logs to track and verify its activities on the host, which helps you show exactly what the script did on the suspected host.


Figure 6: Host Logging

While on the topic of logging, PSRecon also logs attempted attacks against itself. Take an example scenario where someone tries to hijack another employee’s browser by way of a SmartResponse™: to do this, they would inject an XSS attack within a user-controllable field that is reflected in the HTML report. These attacks are detected and logged, allowing for additional actions to be taken. Of course, there are ways around this; it’s just a small added precaution for when the script is integrated with security infrastructure.


Figure 7: XSS Attack Example

PSRecon takes a long, cumbersome and inaccurate process that used to take days to complete and turns it into a quick, effective and powerful means to instantly acquire forensic data and respond to various threats directly from the LogRhythm SIEM, streamlining a significant portion of the incident response process.

The project is still very much in beta and I’m looking for feedback from the security community to help with ideas and code improvements going forward. So, check out the GitHub repository and let me know how we can make this better!


none | Digital Forensics, Security, SIEM