Posts tagged: 'malware'


Getting Started with Threat Intelligence

Guest blogger Joe Partlow is the CISO at ReliaQuest. He has been involved with InfoSec in some capacity for over 15 years, mostly on the defensive side, but he has always been fascinated by those cool kids on offense.

His current projects include mobile and memory forensics, SIEM optimization, and disaster recovery and business continuity planning. Joe has experience in many business verticals, including e-commerce, healthcare, state/local government and the Department of Defense.

What is Threat Intelligence?

Threat intelligence seems to be the buzzword of the year, but what does it really mean? It’s much more than a list of bad IP addresses or URLs, and in practice the definition varies from one practitioner to the next.

However, most will agree that effective and timely intelligence is critical to becoming more proactive in hunting down potential attackers and researching incidents.

Using threat intelligence has been standard operating procedure for the military for years, but it is a relatively new concept for most IT security teams in their incident response process and procedures.

How to Get It

Most vendors are now building the capability to import feeds sourced from their own internal R&D teams as well as from commercial and open-source third-party threat feed providers. Both have their pros and cons, but companies should take advantage of both if possible.

Open-source intelligence (OSINT) could be defined as any free or low-cost threat feed of malicious IP addresses or URLs made publicly available.

Some reliable sources include Malware Domains, Emerging Threats and the SANS Internet Storm Center (ISC). These are a great low-cost way to get started. However, there are some drawbacks:

  • Some lists are not updated on a regular basis.
  • They’re not necessarily focused on threats for a particular vertical (retail, financial, health care, utilities, etc.).
  • There’s typically no way to determine the source or age of the entries.
  • They are found in a variety of formats that the user will have to parse or import themselves.

Commercial feeds (vendor or third-party supplied) are usually subscription based. They typically have more options to integrate, and they can be set up to automatically pull into the vendor’s security appliance or application.

These feeds are also typically more consistent and provide more data in the feed as far as source, aging, frequency, etc.
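Because OSINT feeds arrive in many formats, a little glue code is usually needed before they can be imported. Below is a minimal sketch of normalizing one common shape, a plaintext IP blocklist with comment lines; the feed fragment shown is hypothetical, not from any real source.

```python
import ipaddress

def parse_feed(text):
    """Normalize a plaintext IP blocklist: skip comments and blanks,
    keep only entries that parse as valid IP addresses."""
    entries = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue
        token = line.split()[0]  # some feeds append metadata after the entry
        try:
            entries.add(str(ipaddress.ip_address(token)))
        except ValueError:
            pass  # not an IP; a domain/URL feed would need its own parser
    return entries

# A fragment in the common "comments plus one entry per line" shape:
sample = """# hypothetical feed, updated daily
198.51.100.7
203.0.113.25  ; first seen 2015-06-01
not-an-ip
"""
print(sorted(parse_feed(sample)))  # → ['198.51.100.7', '203.0.113.25']
```

Normalizing every feed into one canonical set like this also makes it easy to de-duplicate entries across multiple sources before importing them.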

More than Just Feeds!

Commercial and OSINT feeds are a great way to get your hunt or incident response teams started, but they are only part of a much bigger picture.

Next-level proactive intelligence can come from any of the following sources with a bit of integration work, research and imagination:

  • Generic honeypots can be set up in key locations around the world to look for overall trends in attacks, methods or bad actors. Firewall or IDS/IPS geolocation data is helpful for identifying active sources.
  • Honeypots can be used in your actual IP or DNS space to get even more targeted information about attacks against your organization. This carries a much higher value, but also a higher risk, so make sure the network segment is isolated.
  • You can also mine social networks and public sites for keywords that serve as early indicators of compromise or statements of intent. Many times, attacks or breaches are mentioned on sites such as Shodan, Full Disclosure, Pastebin or Twitter days before they are publicized in mainstream media.
  • IRC bots can scan known underground forums in categories the organization considers risky, looking for keywords or indications of malicious intent.
  • You can also set up passive DNS monitors to catch potential outbound callouts or exfiltration tunneling attempts. This can generate a ton of logs, but new domains or overly long URLs are the usual suspects in tracking down a compromised machine.
  • Run your own Tor exit node to get intelligence on protocols being used or source/destination information.

And there are many others, with new sources discovered all the time!
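The passive DNS triage mentioned above can be sketched with a simple length-and-entropy check over query names; the thresholds and domain names below are illustrative assumptions, not tuned values.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    total = len(s)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

def looks_suspicious(qname, max_len=50, max_entropy=4.0):
    """Crude triage: unusually long or high-entropy query names are
    common with DGA domains and DNS exfiltration tunnels."""
    name = qname.rstrip(".")
    return len(name) > max_len or shannon_entropy(name) > max_entropy

print(looks_suspicious("www.example.com"))  # → False
# A long base32-style label of the kind DNS tunnels produce:
print(looks_suspicious(
    "mzxw6ytboi2gk3tpn5xgk4tzmzxw6ytboi2gk3tpn5xgk4tz.tunnel.example"))  # → True
```

A check like this is only a first-pass filter; combining it with a list of newly registered domains cuts down false positives considerably.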

Most importantly, make sure whatever threat intelligence you consume is timely and in a usable format for the incident response and hunt teams.

Many security applications and appliances on the market offer some sort of integration, but some SIEM tools still need custom integration work to make them usable.

Threat intelligence is many things to many people, but security teams should take advantage of everything they have the ability to consume depending on budget, development resources and time.

You never know when one piece of obscure intelligence can help you correlate an event investigation or proactively protect the organization.



A Case of the Mondays: How a Routine Visit Discovered a Cyber Attack

Recently, I learned a valuable lesson from what appeared to be a regular Monday. My day started off routinely, but along the way some surprising events unfolded.

I was scheduled to go on-site with a federal customer for a “knowledge transfer” (aka OJT) as a new NOC/SOC team was coming online. When I got there, it started out as a rather typical meeting—get an understanding of the team’s knowledge of LogRhythm, assess their goals, and introduce them to any new features/products available.

Prior research led me to believe that this was an older deployment that had been neglected for some time. I was pleasantly surprised to find that this was not the case. Instead, it was a fairly new XM-6350 running LogRhythm 6.3.3 and the latest KB. Not having to perform a system upgrade was a relief.

However, as I listened to the customer’s requirements and dug deeper into the deployment, I quickly realized that the system was only being used for log collection, and people rarely logged in to monitor the data being collected by the XM. I’d say they were using about 30% of LogRhythm’s features and functionality. Moreover, the WebUI wasn’t installed and AIE was disabled.

After a quick install of the WebUI, initialization of AIE, addition of the basic LOW/LOW-LOW rules found in the Post-Install Guide for POCs, and setup of the third-party threat feeds (as well as some other customer-requested tweaks), the XM was back in fighting shape!

After a brief conversation, my sales rep and I started to walk the team through a quick demo using the customer’s XM, data and WebUI. Within 10 minutes, we happened to click on the Alarms tab and, to our surprise, found some interesting data.

A few red alarm cards with ratings of 90 appeared, denoting “Malware Found.” The team asked, “What’s that all about?” We began to pivot and drill down, and discovered the systems affected by the malware. Apparently this information had been reported by their Symantec and McAfee systems regularly, and for quite some time.

The NOC/SOC team quickly sprang into action to remedy the issue (albeit they were a bit annoyed). Moments later, we returned to the demo and another alarm fired with a score of 97. The team (almost in unison) said “What now!?”

After a quick drilldown, we discovered that their Fortinet deployment was reporting a breach coming from an “external IP” (outside the U.S.). Given that this was a government customer, we’d just tripped DEFCON-1 (and were called into the manager’s office to explain the situation). The customer thanked us for helping to spot the external breach and asked us to leave for the day, as they were about to get very busy.

What was the lesson learned? Make sure that you always look under the hood of an existing LogRhythm deployment to verify that the basics have been covered. It’s important to understand what LogRhythm is capable of and how to best leverage the system to your advantage.

As one of the NOC/SOC employees put it, “If we’d been paying attention and using LogRhythm more regularly, we could have caught this sooner. Who knows how long we’ve been under attack.”

A typical Monday, indeed…



Investigation Operational Security Tips

Operational security during an investigation is extremely important, and there are a couple of tips I’d like to share. While they may seem obvious at times, it’s imperative to keep them in mind during an emergency or a routine investigation, especially for organizations without a dedicated incident response team. Cutting corners to save time or ignoring some basic rules can compromise your investigation and tip off unwanted individuals to it.

  1. DNS Requests

A common practice in many investigations is to gather information on and observe a suspicious domain, if one is involved. While this preliminary information gathering can be beneficial, it can also compromise your investigation. Making DNS requests to an attacker’s server could inform them that research is being conducted or that their activity has been detected. It is extremely rare for an average foe to monitor inbound DNS requests, but it is a reasonable practice for a targeted and much more sophisticated adversary. In theory, such an adversary could monitor their own DNS servers for communication. Once irregular inbound requests are detected, the adversary has a head start to immediately back out, cover their tracks, and change their tactics.

This mistake is most common with traffic-capturing utilities that perform automatic name resolution inside a network. These tools can inadvertently reveal your monitoring capabilities to the attacker. Here are two common examples of disabling automatic name resolution:


In Wireshark, network name resolution is disabled by default, but it is good to double-check before loading a PCAP that may contain network traffic for an investigation or research. The setting can be found in Edit > Preferences > Name Resolution.

Figure 1


With tcpdump, use the -n option. As stated in the manual, this option disables name resolution.

Figure 2

  2. Communication Timing

This one is more on the theoretical side, and again it depends heavily on the sophistication and focus of the attacker. If you are examining a file that could be malicious, the odds are the file will want to ‘talk’ outbound to its control server or request further downloads onto the network. There are obvious best practices when it comes to examining and testing malicious binaries, such as working offline only, but that’s not always possible.

If you need the code to run, or perhaps you have identified the URL hosting the download, be careful with your timing. The hosting location you reach out to has a chance of spotting the odd behavior: if the attackers know all of their victim machines check in at a specific time, a request outside that window could inform them that you have identified their activity and are researching it.

  3. Upload and Sharing of Data

For many, the first step in discovering whether a file is malicious is to upload it to the free scanning websites. While this process will provide some quick details for an initial analysis, it can also inform the file’s creator that you have found and are researching their efforts. Some of the more popular destinations for this are VirusTotal, Malwr, and the new IBM X-Force Exchange. These are some of my favorites, and Lenny Zeltser also maintains a great list of options focused on automated malware analysis.

When it comes to using the numerous online scanning and analysis tools, avoid any form of uploading or sharing during the initial investigation. Instead, simply search for the file name or hash when possible. An attacker can automate queries against the various sites for their binary to see if anyone has found and uploaded it. Searching instead of uploading allows you to check for existing results while concealing the fact that you are analyzing the sample.
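To search by hash rather than uploading, first compute the digests the scanning sites index. A small sketch follows; the actual query against each site is left out, since their search APIs differ.

```python
import hashlib

def file_hashes(path):
    """Compute the digests most scanning sites accept as search terms,
    reading the file in chunks so large samples don't exhaust memory."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            for h in (md5, sha1, sha256):
                h.update(chunk)
    return {"md5": md5.hexdigest(),
            "sha1": sha1.hexdigest(),
            "sha256": sha256.hexdigest()}
```

Pasting any one of these digests into a site’s search box (rather than its upload form) returns existing analyses without exposing the sample.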

Sharing with the industry can be a great thing, but take caution in the timing and amount of information you decide to make public. VirusTotal does not currently provide an option to opt out of sharing uploaded data, while Malwr and X-Force Exchange do. Lastly, there is the option of setting up your own analysis environment with something like Cuckoo Sandbox, but that’s for another blog post!

Figure 3


  4. Defense On The Sea Begins On The Shore

We’ve all heard it: do not share anything private or confidential on social media. Yet many people still do. An interesting trend I’ve seen recently comes from LinkedIn in particular. It is common among professionals to connect with each other on LinkedIn shortly after meeting. Think about the timing from the viewpoint of individuals watching for this activity: they could find it rather helpful when the team at Company A begins to connect with people from Company B. For example, if you began connecting with multiple individuals from a particular security vendor, an outsider could then assume that vendor may be in use at your organization. LinkedIn is also one of the most widely used sources of information for the Social-Engineer Capture the Flag (SECTF) competition each year at DEF CON. The solution may be obvious: avoid connecting in patterns like this, or simply turn off the ability to publicize this information within your account settings.

Treat any other social media activity with the same paranoia. Tweeting your thoughts on how prone your users are to clicking links in phishing emails, or taking pictures of your office for your friends on Facebook, are the first bad practices that come to mind. The same awareness applies to TV broadcasts, such as the recent incident at a London rail station.

In conclusion, it all comes down to the basic policy of “need to know” for any investigation or activity taking place. Individuals or groups targeting you can use the methods above to gain the upper hand on your efforts. None of this is a new idea in operational security, but in an ever-changing world we must continually reassess our practices to ensure their integrity.



Kippo Honeypot – Log Replay Automation

Kippo is one of my favorite honeypots due to its sheer simplicity, portability and ease of use. It comes with a really neat feature that allows you to replay what the attacker did once they gained access to the honeypot, by way of the bundled playlog.py script. This is a somewhat lesser-known feature within Kippo that can be valuable, as it gives the analyst insight into how the attacker interacted with the server, what commands they ran, what services were installed, potentially what command-and-control (C2) server they are operating from, and much more.

While you can already pull this data into LogRhythm to gather important information using the extracted metadata, it’s another thing entirely to watch the attacker in action. This feature helps determine whether an actual person or a bot is interacting with the host. The problem, however, is that it takes time and effort to manually log in to each honeypot, run the replay script against each TTY session log, and review the results. Why not automate this process? Below is an example of a real attack and some valuable data observed in an SSH session with a Kippo honeypot (in this case, a malware payload was downloaded).


Figure 1: Kippo replay of basic C2 post exploitation activity

As you can see from the screenshot above, whenever an attacker successfully breaches the honeypot, a new TTY log file is generated; this is separate from the kippo.log files we are already pulling into the SIEM. So, the quickest way to automate this is to create a small bash script on the honeypot host(s). The script is pretty simple: it defines a few environment variables, runs the replay script, and writes the results to a text file in a separate directory. Once this is complete, it moves the replayed TTY session log to a separate folder, modifies the output so it can be viewed easily on Windows hosts, and sends a notification email to the SOC with the replay file attached.
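The steps the script performs can be sketched as follows. This is a rough Python equivalent for illustration, not the post’s actual bash script; the paths, addresses, SMTP relay, and directory names are assumptions, and playlog.py’s fast-playback flags vary by Kippo version.

```python
"""Sketch of the replay-automation steps described above (assumed values)."""
import shutil
import smtplib
import subprocess
from email.message import EmailMessage
from pathlib import Path

TTY_DIR = Path("/opt/kippo/log/tty")            # where Kippo writes TTY session logs
DONE_DIR = Path("/opt/kippo/log/tty-replayed")  # replayed logs are moved here
OUT_DIR = Path("/opt/kippo/log/replays")        # rendered transcripts land here
PLAYLOG = "/opt/kippo/utils/playlog.py"         # Kippo's bundled replay utility
SOC_ADDR = "soc@example.com"                    # hypothetical SOC mailbox

def to_crlf(data: bytes) -> bytes:
    """Convert line endings so the transcript opens cleanly on Windows hosts."""
    return data.replace(b"\n", b"\r\n")

def process_new_sessions():
    DONE_DIR.mkdir(parents=True, exist_ok=True)
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for tty in sorted(TTY_DIR.glob("*.log")):
        # Replay the session and capture the transcript (add whatever
        # fast-playback flag your version of playlog.py supports).
        result = subprocess.run(["python", PLAYLOG, str(tty)],
                                capture_output=True)
        out = OUT_DIR / (tty.name + ".txt")
        out.write_bytes(to_crlf(result.stdout))
        shutil.move(str(tty), str(DONE_DIR / tty.name))
        mail_transcript(out)

def mail_transcript(path: Path):
    """Email the rendered transcript to the SOC as an attachment."""
    msg = EmailMessage()
    msg["Subject"] = "Kippo session replay: " + path.name
    msg["From"] = "honeypot@example.com"
    msg["To"] = SOC_ADDR
    msg.set_content("Attacker TTY session replay attached.")
    msg.add_attachment(path.read_bytes(), maintype="text",
                       subtype="plain", filename=path.name)
    with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
        server.send_message(msg)
```

Moving each processed log out of the watched directory is what keeps the script idempotent, so it can be triggered repeatedly without re-sending old sessions.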

This script is available in my forked Kippo repository along with other small additions, here:


Figure 2: Kippo Log Replay Alert Script

We could easily set this up to run via cron; however, we would then have limited control over the script’s execution. Because cron runs on a set schedule, this has the potential to affect File Integrity Monitoring (FIM) notifications, and there may be times when you don’t want the script to execute. So the easiest approach is to use a LogRhythm SmartResponse™. First we need to set up a new FIM policy; for this use case I simply cloned the existing Linux FIM policy and added the /opt/kippo/log/tty/ directory, configuring it to check for any new log files every 10 minutes.


Figure 3: File Integrity Monitoring Policy Update

Now with the new FIM policy in place, a log file will be generated whenever changes are observed in this folder. Reviewing the log metadata allows us to see what attributes we can alert on.


Figure 4: Kippo FIM Event — Log Details

As you can see from the screenshot above, we can create a simple Advanced Intelligence Engine™ (AIE) rule that checks for the above common event, object, and log source…


Figure 5: Kippo Log Replay FIM Alert

Now for the fun part, we need to set up a SmartResponse™ to trigger whenever this rule fires. We can do this using any scripting language, but for this proof of concept, I chose PowerShell. All we need to do is authenticate to the server using Plink and run the aforementioned bash script remotely by way of a quick PowerShell one-liner that Andrew Hollister put together.

cmd /c "plink.exe -ssh -i key.ppk -noagent -m commands.txt $UserName@$TargetHost <yes.txt"

Using this script, we can build the SmartResponse™ and configure it to run whenever the associated AIE rule fires. At this point, it is possible to configure approvers if desired. With the new action in place, enable the AIE rule and test to ensure everything is working as intended.


Figure 6: SmartResponse Action Approval

If all goes well, you will receive an email notification with an attached log message displaying full details of the attacker’s replayed TTY session…


Figure 7: Auto Replay Email Alert

Using SmartResponses™ to automate TTY session replays, extract the session log, and forward it on within an email can help security analysts and researchers quickly analyse an attack and gather useful data. While the general honeypot metadata already being captured by the SIEM is excellent, it can be drastically improved by automating Kippo’s attack replay capabilities. Not only will this give analysts a better idea of who is attacking them and what methods they are using, but it can serve as an early-warning indicator, complementing the wealth of information already being extracted from the honeypot metadata.


Figure 8: Honeypot Analytics Dashboard



Professional Malware

There is an adage in physical security that criminals will go for the lowest-hanging fruit: if a car parked on the street has a security system (denoted by a blinking light), it will be fine when next to a model without one. So the cost-effective strategy is to stay just ahead of the weakest competition. Ten years ago, this held true in information security, when the field was still a large unknown for many organizations and criminals didn’t need to work particularly hard to compromise an unprotected one. But organizations slowly began catching up to security best practices with an expansion of security devices throughout networks: moving beyond just antivirus to full endpoint monitoring with virtualized malware sandboxes; beyond a single firewall to intrusion detection systems; and beyond disparate monitoring of multiple devices to centralized security information and event management.

But in this rapidly evolving environment, the attackers didn’t just give up — they became more professional. As a result, malware tools have reached commercial-grade quality. In particular, modularity, first introduced several years ago, now allows variants to be customized to the victim.

The Zeus Trojan, now one of the older pieces of malware (developed nearly 10 years ago), became one of the earliest to be customized in this way after its source code was released in 2010. At first, Zeus was primarily used to target financial institutions. But as those institutions ramped up security, attackers moved on to pilfering vulnerable business payrolls. Most recently, malware authors have gone after large retailers’ payment systems, stealing credit card information.

The introduction of malware loaders, with the ability to covertly pull down any program the attacker chooses, allowed malware authors to share their best features and gave criminals the ability to greatly accelerate their progress. Smoke Loader, now a few years old, is a prime example. While investigating a machine that was exfiltrating passwords a couple of years ago, I noticed the traffic pattern matched another piece of malware, Carberp. Further analysis showed that Carberp’s password-grabbing function had been placed into Smoke Loader. Just as exploit kits customized their payload depending on their victim’s vulnerabilities, botnet operators could customize their control over their bots.

Smoke Loader Interface

The latest assessment of BlackEnergy shows that it now includes modules for nearly any feature an attacker would want — the ability to target different operating systems (including routers), launch DoS attacks, read BIOS details from the motherboard, and decrypt or destroy data. More importantly, these tools are being used by attackers with significant infrastructure and skill to target organizations like NATO and government agencies. And most alarmingly, some BlackEnergy modules target critical infrastructure such as energy and water sectors.

Information security divisions will be facing increased complexity and professionalism in attacks. Simply buying and deploying a security device is not going to protect an organization; defending one requires utilizing the defense’s one major asset: home-field advantage. By understanding their own network, a security operations team can use Cyber Discovery techniques to find the unusual activity that criminals will inevitably generate. This requires significant effort, but defenders must evolve along with their adversaries.
