Posts tagged: 'logrhythm'

The following posts are associated with the tag you have selected. You may subscribe to the RSS feed for this tag to receive future updates relevant to the topic(s) of your interest.

http://blog.logrhythm.com/tags/logrhythm/feed
 

Last week, when Ajay sent an email to the company about the upcoming Tube to Work Day in Boulder, my first reaction was, “Is this for real?”  This was quickly followed by “What a crazy, unique, yet awesome idea that …


 
 

Closing the backdoor…

Today Mikko Hypponen of F-Secure announced that they had retrieved and analyzed the Excel file used to create a backdoor into RSA.

The attacker used a phishing email to send an Excel file loaded with an embedded Flash object to a recipient inside RSA.  Once opened, the Excel file used an advanced 0-day exploit, executed by the embedded Flash object, to create a backdoor that allows full remote access to the infected workstation.

Certainly this brings up the obvious warnings about not opening strange attachments.  However, the reality is that spear phishing attempts work.  This wasn’t even a case of a highly sophisticated spear phishing attack (you can see the original email in Mikko’s blog entry mentioned above).  Examples of successful spear phishing, like the IMF breach and the Epsilon breach, highlight that we should not only continue to educate end users to recognize the red flags and the dangers of opening attachments, but also be prepared for the inevitable human mistake.

If a user falls prey to a phishing attempt, and a 0-day exploit evades our endpoint security, what can we do?

  • In this case, the backdoor opened connections to servers at mincesure.com that had been used in previous espionage attacks.  Using an IP reputation list could have detected the initial connection, and action could have been taken to stop it.
  • What if they hadn’t used a known IP address?  If you can determine the destination IP’s geo-location, you can see that it is in Venezuela.  Geo-location could have detected irregular data traffic being sent to an uncommon destination (a minimal sketch of these first two checks follows this list).
  • Or what if the location wasn’t interesting?  Once the backdoor was established, having a mechanism to recognize which user accounts access files on the workstation could have identified abuse of the system or of other accounts.
  • Maybe the user accounts had permission to access the files.  In that case, it was the sequence of file accesses after the new connection to an unknown location was created that could have registered that an attack was occurring.
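As a rough illustration, here is a minimal sketch in Python of the first two checks. The reputation set, geo-lookup table, and expected-country list are all hypothetical placeholders; a real deployment would pull these from a threat intelligence feed and a geo-IP database.

```python
# Hypothetical feed of known-bad addresses and the countries we normally talk to.
KNOWN_BAD_IPS = {"203.0.113.50", "198.51.100.7"}
EXPECTED_COUNTRIES = {"US", "GB"}

def geo_lookup(ip):
    """Stand-in for a real geo-IP database lookup."""
    return {"203.0.113.50": "VE"}.get(ip, "US")

def check_connection(dest_ip):
    """Return an alert string for a suspicious destination, or 'ok'."""
    if dest_ip in KNOWN_BAD_IPS:
        return "ALERT: destination is on the IP reputation list"
    if geo_lookup(dest_ip) not in EXPECTED_COUNTRIES:
        return "ALERT: traffic to an uncommon geo-location"
    return "ok"

for ip in ("203.0.113.50", "192.0.2.10"):
    print(ip, "->", check_connection(ip))
```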

While it’s important to continually educate our end users on attacks they can help prevent, as security professionals we must have safeguards for when mistakes are made.  Although the attacks themselves may not be sophisticated, they can still bypass many of our traditional security solutions.  However, with behavior analysis and pattern recognition we should be able to detect and prevent these types of breaches.

 


 
 

Social Engineering Security Analysts

I just attended a briefing at Black Hat where Julia Wolf presented a topic titled “The Rustock Botnet Takedown.”  The presentation gave some very interesting details and examples of the ways the botnets operated to blend in with legitimate network traffic.  One that stood out to me was that the payloads compromised machines downloaded were password-protected RAR files with names such as “mybackup13.rar.”  The command and control (C&C) servers were placed in smaller cities within the United States, such as Kansas City or Scranton, PA.  And instead of sending out spam directly over SMTP, which is easily detectable (look for any non-SMTP servers sending out traffic on port 25), some of the botnets discussed would use a web-based e-mail service such as Hotmail, allowing the spamming to look like a normal user accessing their Hotmail account.

When a security analyst sees a RAR file named as described above being downloaded from a US-based server, the analyst might consider it legitimate traffic.  Even if there is a little suspicion and the analyst decides to investigate the RAR file itself, it will be password protected.  The analyst is likely to look at the name of the file, consider that a backup is something a user would plausibly password protect, and decide not to investigate further.  After a compromise, as the zombie starts to spam, it will simply look like a user accessing their Hotmail account, something many security analysts are unlikely to consider malicious.

This all reminds me of the premise behind social engineering.  The malware is designed so that, when looked at from several different angles, it paints a picture for the security analyst that there is no actual threat, and in some cases each additional piece reinforces the belief that things are okay.

So, how can these issues be addressed?  It’s unreasonable to expect a security analyst to investigate every time a Hotmail server is hit, or any time a password-protected file is downloaded.  Some automated correlation and/or alarming is in order.  One approach might be to simply alarm on any after-hours web-based e-mail access.  Another might be to build a correlation rule that looks for an indication of an exploit on the endpoint (a process crashing, for instance), followed very shortly by a new connection originating from the same host (from perimeter log data or flow data).  A threshold can be set around the average size of the downloaded payloads before the alarm fires, further limiting false positives.  To look for a possible outbreak, one can take a baseline of average Hotmail activity by determining roughly how many unique internal hosts access Hotmail each day, then build an alarm that looks for a higher-than-normal number of unique hosts hitting Hotmail.
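To make the second idea concrete, here is a minimal sketch of such a correlation rule in Python. The event shape, the 60-second window, and the size threshold are illustrative assumptions, not an actual SIEM rule definition.

```python
from datetime import timedelta

WINDOW = timedelta(seconds=60)  # assumed: how soon after a crash we still correlate
MIN_BYTES = 200_000             # assumed: rough size of the observed payloads

def correlate(events):
    """events: time-ordered dicts such as
    {"time": t, "host": "10.0.0.5", "type": "process_crash"} or
    {"time": t, "host": "10.0.0.5", "type": "new_connection", "bytes": 250000}."""
    last_crash = {}  # host -> time of its most recent process crash
    for e in events:
        if e["type"] == "process_crash":
            last_crash[e["host"]] = e["time"]
        elif e["type"] == "new_connection":
            crashed_at = last_crash.get(e["host"])
            if (crashed_at is not None
                    and e["time"] - crashed_at <= WINDOW
                    and e.get("bytes", 0) >= MIN_BYTES):
                yield f"ALARM: exploit indicator followed by large download on {e['host']}"
```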

Overall, the “Rustock Botnet Takedown” presentation was very interesting, and it really got my wheels turning on how to detect this kind of activity using correlation on enterprise log data.  I’m looking forward to attending more sessions tomorrow, and will be blogging on other topics I find interesting.


 
 

The Nuances of Advanced Correlation Rules for Authentication Logs

Using the Advanced Intelligence (AI) Engine with LogRhythm allows users to correlate across all the logs in a network and alert when there is anything unusual in the log patterns.  My team, the Knowledge Engineers, is tasked with creating rules for advanced correlation and pattern recognition.  In the early days of the AI Engine, however, we ran into many unexpected challenges when building our prepackaged correlation rules.

Let’s start with an example:

One of our AI Engine rules is designed to detect a brute force attempt followed by a successful login on the same account.  First, the rule looks for at least five authentication failures, followed by a successful logon by that same origin login.  Seems simple enough; however, in the AI Engine beta program we discovered that this alert fires far too often.

Why is this?  Investigating the issue showed us that the Windows logon process doesn’t produce logs exactly how we expected (or how we would like).  It turns out that when the domain controllers in an environment are connected, an authentication failure log is produced by every domain controller in the system.  This means that in a large environment, someone who mistypes their password just once can produce as many authentication failure logs as there are domain controllers running Active Directory.  As a result, this AI Engine rule, which is supposed to catch people logging into accounts that aren’t theirs, had a very high false positive rate during beta testing.
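As a rough illustration of one way to handle this, the sketch below collapses the per-domain-controller duplicates into a single logical failure before counting. The log shape, the five-failure threshold, and the dedup key are illustrative assumptions, not the actual AI Engine implementation.

```python
from collections import defaultdict

FAILURE_THRESHOLD = 5  # mirrors the rule described above

def brute_force_then_success(logs):
    """logs: time-ordered dicts such as
    {"user": "jsmith", "result": "failure", "time": t, "dc": "DC01"}."""
    failures = defaultdict(set)  # user -> distinct logical failures
    for log in logs:
        if log["result"] == "failure":
            # One bad password is replicated once per domain controller, so
            # dedupe on (user, time): N copies of the same event count as ONE
            # logical failure.  (Real logs would need a fuzzier key, since
            # replicated timestamps can differ slightly.)
            failures[log["user"]].add(log["time"])
        elif log["result"] == "success":
            if len(failures[log["user"]]) >= FAILURE_THRESHOLD:
                yield f"ALARM: brute force then success for account {log['user']}"
            failures[log["user"]].clear()
```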

That raised the question of other Windows authentication events, such as logons, so we decided to investigate further.  Testing logons just around our office yielded a wide variety of results.  When I logged in, I produced 12 logs classified as Authentication Success and even 4 logs classified as Logoffs.  Another employee produced 49 logs with a single logon!  Some of these logon logs are simply connections to shared resources like a network drive – not actual logons.  If a typical LogRhythm user wants to investigate a specific employee, they will want to find the exact time the employee in question logged on, what they accessed, and what that person did with the objects they accessed.  That is a daunting task for a user who doesn’t have a deep understanding of Windows authentication events.

With the creation of the AI Engine rules we realized just how noisy authentication events actually are in Windows.  To address this issue, we have fine-tuned our common events and AI Engine rules to filter out all of this extra noise.  Because AI Engine rules can leverage all of LogRhythm’s extensive normalization, data enrichment and log parsing, they can be quickly modified for much greater accuracy.  And a typical LogRhythm user can investigate incidents without having to understand every individual detail of the Windows authentication process.  That is what every SIEM product user is looking for – a simple way to understand exactly what is happening in their network without needing a detailed knowledge of the authentication process.

Not only did this analysis help us create more refined AI Engine rules with fewer false positives, it gave us more refined guidance for normalization, making LogRhythm easier for our customers to use.


 
 

FIM for Fonts: Using file integrity monitoring to protect against hidden threats

Almost everybody uses antivirus. Chances are that you have antivirus running right now on the computer you are using to read this article. Antivirus is great – it gives us that warm, cozy, protected feeling that we need when conducting our business on the internet. Let’s face it: the DMZ is not a safe place. The reason we all still use antivirus is the same reason why it’s not one hundred percent effective. New viruses, malware, Trojans, zero-day exploits and attacks that bypass existing AV scanners are discovered every day. The “bad guys” are usually one small step ahead of the “good guys”.

So what can we do? Well, we can start by brainstorming the possible scenarios and objectives that malware creators have in mind. Once a particular piece of malicious code has made its way into our environment, it needs to live somewhere – usually in hidden folders, or disguised as hidden or important system objects. Chances are that this malicious code will try to hide its tracks, delete audit trails, create default or super user accounts, modify the registry, modify linked libraries, lower file permissions, steal sensitive information, or perhaps clone itself. In all of these scenarios a specific signature is left behind: files are modified, created, deleted, or read, leaving evidence of the malware’s existence on your machine.

The second line of defense comes in the form of file integrity monitoring (FIM). FIM is a key component of maintaining a secure environment and is often a requirement for compliance standards such as PCI. Having FIM in place will frequently provide your first warning of a zero-day exploit or other stealthy attack. An effective FIM tool should provide integrity information on file creations, sizes, permission changes, deletions and modifications, and provide a means to track down the activity associated with each event.
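As a toy illustration of the core mechanism, here is a minimal baseline-and-compare sketch in Python. A real FIM product also tracks permissions, ownership, and the process responsible for each change; the monitored path in the usage comment is a placeholder.

```python
import hashlib
import os

def snapshot(root):
    """Record a SHA-256 hash and size for every file under root."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                state[path] = (digest, os.path.getsize(path))
            except OSError:
                pass  # locked or unreadable files are skipped in this sketch
    return state

def report_changes(baseline, current):
    """Compare two snapshots and print creations, deletions, and modifications."""
    for path in current.keys() - baseline.keys():
        print("CREATED: ", path)
    for path in baseline.keys() - current.keys():
        print("DELETED: ", path)
    for path in baseline.keys() & current.keys():
        if baseline[path] != current[path]:
            print("MODIFIED:", path)

# Usage: take a baseline now, re-run snapshot() later, and diff the two.
# baseline = snapshot("C:\\Windows\\System32")
# report_changes(baseline, snapshot("C:\\Windows\\System32"))
```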

Creating a useful and effective default file integrity monitoring policy requires a fair amount of research to ensure that all of the critical folders and files are covered. One of the first steps LogRhythm takes when creating a default monitoring policy for a Windows machine is to write a script that sweeps a default install of a given OS – Windows 2008 R2, for instance – for various important folders and file types. In the first sweep we want to find all executables. Simply looking for all .exe files on the host will not give you a complete list of the executable files on the box; some common file extensions that can easily be overlooked are .bat, .js, .ocx, and .vb. Instead, let’s take advantage of the fact that each file type on a Windows machine has a specific byte signature that identifies the nature of the file at the byte level and is easily viewable in a hex editor. A more comprehensive sweep opens each file on the host and looks for the “MZ” characters – the hex bytes “4D 5A” – at the start of the file. These bytes are the signature of an executable file.
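A simplified version of that sweep might look like the following Python sketch. The scan root is a placeholder, and a production script would also need to handle reparse points and very large directory trees.

```python
import os

def find_executables(root):
    """Yield every file whose first two bytes are the 'MZ' (0x4D 0x5A)
    signature of a Windows executable, regardless of extension."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(2) == b"MZ":
                        yield path
            except OSError:
                continue  # skip locked or unreadable files

# Collect the extensions that turn out to be executable on this host.
extensions = {os.path.splitext(p)[1].lower() or "(none)"
              for p in find_executables("C:\\")}
print(sorted(extensions))  # expect .exe, .dll, .ocx, .scr, .fon, and more
```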

Now we have a starting point for our FIM policy. On a default install of Windows 2008 R2, a script like this returns over 25 different file extensions – one of them is the .fon extension for bitmapped fonts. To the untrained eye these files can easily be overlooked. The .fon extension was created by Microsoft specifically for the native Windows 3.x library. Since files with the .fon extension are also executables, they are known to be a great spot for malware to hide.

A few months ago Microsoft released security bulletin MS10-091. This vulnerability in the OpenType Font driver can allow an attacker who has successfully modified a font to install programs; read, modify, or delete files; and create accounts with super user privileges.  Potential malware?

FIM can be a powerful tool for detecting stealthy attacks. Yes, it is important to use FIM to monitor high-profile objects such as essential system and startup files, system32 files, authentication records, and folders holding logon-rights information and user privilege data. But using FIM to monitor seemingly harmless or insignificant files can sometimes provide the first warning sign of an attack.
