Posts tagged: 'network security'
Closing the backdoor…
Today Mikko Hypponen from F-Secure announced that they had retrieved and analyzed the Excel file used to create a backdoor into RSA.
The attacker used a phishing email to send an Excel file containing an embedded Flash object to a recipient inside RSA. Once opened, the Excel file used the embedded Flash object to execute an advanced 0-day exploit and create a backdoor allowing full remote access to the infected workstation.
Certainly this brings up the obvious warnings about not opening strange attachments. However, the reality is that spear phishing attempts work. This wasn’t even a highly sophisticated spear phishing attack (you can see the original email in Mikko’s blog entry mentioned above). Examples of successful spear phishing, like the IMF breach and the Epsilon breach, highlight that we should not only continue to educate end users to recognize the red flags and the dangers of opening attachments, but also be prepared for the inevitable human mistake.
If a user falls prey to a phishing attempt, and a 0-day exploit evades our endpoint security, what can we do?
- In this case, the backdoor opened connections to servers at mincesure.com that have been used in previous espionage attacks. Using an IP reputation list could have detected the initial connection and actions could have been taken to stop it.
- What if they hadn’t used a known IP address? Resolving the destination IP’s geo-location shows that it is in Venezuela. Using geo-location could have detected irregular data traffic being sent to an uncommon destination.
- Or what if the location wasn’t interesting? Once the backdoor was established, having a mechanism to recognize which user accounts access files on the workstation could have identified abuse of the system or of other accounts.
- Maybe the user accounts had permission to access the files. In that case, it was the sequence of file accesses following the new connection to an unknown location that could have registered that an attack was occurring.
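The first two checks in the list above can be sketched as a simple screening function over outbound connections. This is a minimal illustration, not a real implementation: the blocklist entries, the expected-country set, and the stand-in geo-location lookup are all hypothetical placeholders (a production system would use a real threat feed and a GeoIP database).

```python
# Screen each new outbound connection against an IP reputation
# blocklist and a list of countries the host normally talks to.

KNOWN_BAD_IPS = {"203.0.113.7"}          # hypothetical reputation-feed entries
EXPECTED_COUNTRIES = {"US", "GB", "DE"}  # hypothetical "normal" destinations

def geolocate(ip):
    # Stand-in for a real GeoIP lookup; "VE" mirrors the Venezuela
    # destination seen in the RSA backdoor traffic.
    demo_db = {"203.0.113.7": "VE"}
    return demo_db.get(ip, "US")

def assess_connection(dst_ip):
    """Return a list of alerts raised for one outbound connection."""
    alerts = []
    if dst_ip in KNOWN_BAD_IPS:
        alerts.append("destination on IP reputation blocklist")
    if geolocate(dst_ip) not in EXPECTED_COUNTRIES:
        alerts.append("destination in uncommon country")
    return alerts
```

Either check alone would have flagged the initial backdoor connection; together they also cover the case where only one signal is available.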
While it’s important to continually educate our end users on attacks they can help prevent, as security professionals we must have safeguards for when mistakes are made. Although the attacks themselves may not be sophisticated, they can still bypass many of our traditional security solutions. However, with behavior analysis and pattern recognition we should be able to detect and prevent these types of breaches.
I just attended a briefing at Black Hat where Julia Wolf presented a topic titled “The Rustock Botnet Takedown.” The presentation included some very interesting details and examples of how the botnets operated to blend in with legitimate network traffic. One that stood out to me was that the payloads compromised machines downloaded were password-protected RAR files with names such as “mybackup13.rar.” The command and control (C&C) servers were placed in smaller cities within the United States, such as Kansas City or Scranton, PA. And instead of sending spam directly over SMTP, which is easily detectable (look for any non-mail-server hosts sending traffic on port 25), some of the botnets discussed would use a web-based e-mail service such as Hotmail, allowing the spamming to look like a normal user accessing their webmail account.

When a security analyst sees a RAR file named as described above being downloaded from a US-based server, the analyst might consider it legitimate traffic. Even with a little suspicion, if the analyst decides to investigate the RAR file itself, it will be password protected. The analyst is likely to look at the name of the file, consider that a backup is something a user would plausibly password protect, and decide not to investigate further. After a compromise, as the zombie starts to spam, it will simply look like a user accessing their Hotmail account, something many security analysts are unlikely to consider malicious. This all reminds me of the premise behind social engineering: the malware is designed so that, viewed from several different angles, it paints a picture for the analyst that there is no actual threat, and in some cases each additional piece reinforces the belief that things are okay.
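As a sketch, the port-25 heuristic mentioned above might look like the following. The record shape and the sanctioned-relay address are illustrative assumptions, not taken from any particular product.

```python
# Flag any internal host sending on TCP 25 that is not a known
# mail server, working over (src_ip, dst_port) pairs from flow data.

MAIL_SERVERS = {"10.0.0.25"}  # hypothetical sanctioned SMTP relays

def flag_rogue_smtp(flows):
    """flows: iterable of (src_ip, dst_port) tuples; returns offenders."""
    return sorted({src for src, dport in flows
                   if dport == 25 and src not in MAIL_SERVERS})
```

The point of the Rustock example is precisely that this easy check no longer fires once the bots switch to webmail over port 80/443, which is what forces the baselining approach described next.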
So, how can these issues be addressed? It’s unreasonable to assume a security analyst can investigate every time a Hotmail server is hit, or any time a password-protected file is downloaded. Some automated correlation and/or alarming is in order. One approach might be to simply alarm on any after-hours web-based e-mail access. Another might be to build a correlation rule that looks for an indication of an exploit on the endpoint (a process crashing, for instance), followed very shortly by a new connection originating from the same host (from perimeter log data or flow data). A threshold can be set around the average size of the downloaded payloads before the alarm fires, further limiting false positives. To look for a possible outbreak, one can take a baseline of average Hotmail activity by determining how many unique internal hosts access Hotmail each day, then build an alarm that fires when a higher-than-normal number of unique hosts hit Hotmail.
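The outbreak baseline described above can be sketched in a few lines. The 1.5x threshold factor is an arbitrary illustrative choice; in practice it would be tuned against the environment’s normal variance.

```python
# Alarm when today's count of unique internal hosts contacting
# webmail exceeds the historical daily average by a chosen factor.

from statistics import mean

def webmail_outbreak_alarm(daily_unique_hosts, today_count, factor=1.5):
    """daily_unique_hosts: list of per-day unique-host counts (baseline)."""
    baseline = mean(daily_unique_hosts)
    return today_count > baseline * factor
```

For example, with a baseline averaging 50 unique hosts per day, a day with 90 hosts hitting webmail would trip the alarm while 60 would not.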
Overall the “Rustock Botnet Takedown” presentation was very interesting, and really got my wheels turning on how to detect this activity using correlation on enterprise log-data. I’m looking forward to attending more sessions tomorrow, and will be blogging on other topics I find interesting.
The National Institute of Standards and Technology (NIST) has drafted a document that specifically addresses personally identifiable information (PII). The document will become Appendix J of SP 800-53. This means FISMA is likely to change to include these new privacy controls.
What does this mean to you? In the short term, nothing. The standard will be in a public comment period until September 2, 2011, and is not scheduled to be included in SP 800-53 until December 2011 when Revision 4 gets released. After that, it still needs to be updated in FISMA and other regulations derived from SP 800-53.
However, the controls outlined in the draft are significant, and will eventually add extra layers of complexity to an organization’s plan to become compliant. I’m still fully digesting the draft, but something that initially stands out is that NIST is treating PII data similarly to how PCI-DSS treats cardholder data, and to how NERC-CIP treats Critical Cyber Assets. For example, control SE-1 in the NIST draft states:
a. Establishes, maintains, and regularly updates a PII inventory that contains a listing of all programs and information systems identified as collecting, using, maintaining, or sharing PII;
b. Provides each update of the PII inventory to the CIO or other information security officials to support the establishment of appropriate information security requirements for all new or modified information systems containing PII.
This sounds very similar to the approaches in PCI-DSS for defining and securing the Cardholder Data Environment (CDE), or the NERC-CIP Critical Cyber Asset identification.
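To make the SE-1 requirement concrete, an inventory like the one it describes could be modeled as a simple registry of PII-handling systems, with each update reported onward. This is only a sketch under my own assumptions; the field names and the reporting string are illustrative, not prescribed by the NIST draft.

```python
# A minimal registry of systems identified as collecting, using,
# maintaining, or sharing PII, per the SE-1 control language.

from dataclasses import dataclass, field

@dataclass
class PiiSystem:
    name: str
    pii_categories: list  # e.g. ["name", "SSN"] (illustrative)
    purpose: str          # collecting / using / maintaining / sharing

@dataclass
class PiiInventory:
    systems: dict = field(default_factory=dict)

    def register(self, system: PiiSystem):
        # Each update is surfaced to the CIO or other security officials.
        self.systems[system.name] = system
        return f"report to CIO: inventory updated, {len(self.systems)} systems"
```

Even a lightweight structure like this captures the two halves of SE-1: maintaining the listing itself, and feeding every change to the officials who set security requirements for the affected systems.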
The draft also adds all the standard process and procedure requirements, auditing requirements, monitoring, roles and responsibilities, and so on, that are seen across the board in other compliance regulations.
While there will still be quite some time before organizations are mandated to adhere to this draft standard, getting a head-start now will save headaches in the future, and protecting PII is simply the right thing to do, regardless of whether it’s mandated by compliance.
If it didn’t come across as mind-bendingly smug, I might describe the Sega hack as ‘old news before it even broke’. But it is. Old news. Another global digital meganame falls prey to malicious, possibly mafia- or triad-backed ill-doers.
Recently I sat and watched a trusted colleague deliver a presentation to a roomful of security personnel and liken their industry to an air wreck. I believe his exact words were ‘if this were a plane, I’d be running up and down the aisles screaming that we’re all going to die!’. Needless to say this was not well received on the day, but I can’t help but think that he had a point.
Now, I work for an SIEM vendor – the best on the planet, in my opinion, but I’m not going to ambulance-chase this one. There are crucial issues raised here, questions about whose responsibility personal privacy actually IS. As I’ve said before, Amazon, Barclays Retail, Dell, Dabs – any of these guys could get hacked tomorrow and lose YOUR data. What then? It can take weeks to recover from a personal identity breach – resetting email accounts, changing card numbers and suppliers, and addressing the huge number of interconnected services and locations where your identity converges. And that’s not to mention the consequences if you actually lose money.
What more can individuals do? Most of us are getting it right: Don’t throw old business cards in the bin. Go for strong passwords, changed at least monthly. Don’t show identity badges in public places (watch out for my next blog on this!). Speak to everyone about the need for security. Educate the less technically literate about malware. Don’t respond to emails or phone calls about online matters unless you initiated the conversation. Keep one eye on the security blogs. Learn the language.
Can companies say the same thing? What about the people who I entrust my identity to? Invest in security – with all that entails. Infrastructure. Dedicated FTEs. Education. Compliance. Regular reviews. Fire drills. Specific executives whose job IS security. Clearly the people who take online privacy seriously are being let down by the companies who don’t, and the more companies that are breached, the more excusable it seems.
My own view on Sega and the bi-monthly additions to the ranks of large companies who didn’t make the grade, is that it’s time to think of security as a multi-partite affair. Your strategy should start with compliance, then loop through infrastructure best practice, via rigorous HR policies, and finish by directly addressing social engineering. The modern breach is a blended affair. Only a blended security strategy will work. One that centres around human factors.
RSA has been breached by an Advanced Persistent Threat, exposing critical crypto information that could undermine SecurID’s effectiveness and possibly allow forged tokens. SecurID, the security product compromised, is a popular two-factor authentication system with a wide distribution. Defense contractor Lockheed Martin (LMC) was subsequently attacked with the stolen codes, but repelled the attack.
LMC is much like RSA. They hold keys to many doors used by the U.S. Department of Defense, as well as other members and customers of the military-industrial complex. And while LMC is mostly a staffing firm, selling skills and labor for defense contracts, they also have various stove-piped facilities that exist to support projects. These include data centers, laboratories, cube farms, etc., many of which house classified data, such as engineering schematics for top secret projects.
Information of value that can be collected through an APT includes things like the specifics of the projects LMC is working on, human resources information about their employees, and the locations where work is performed. Access to this information can help open even more doors.
Unfortunately, the public has yet to be presented enough details to determine whether LMC was the second step in a master plan or just an opportunistic target, given the nature of the crypto material stolen from RSA. We also don’t know whether the attackers will pick a second target, or whether the worst-case scenario has been stopped cold by LMC’s incident response team.
So who would the next target be? We don’t have any information about the assailants except that they are skilled, criminal, and likely have access to significant resources. The U.S. Military might not be of any value to them at all – instead they might be after manipulating a commercial or political target.
The systemic failure the attacker(s) are exploiting is that organizations responsible for the security of their clients are themselves attractive targets. Sound familiar? It’s the same issue we are having with “The Cloud” nowadays, and the security problems at cloud providers that lead to significant breaches. The only real difference with this attack is that they are aiming for the highest-profile security systems and succeeding.
It goes without saying that these events are not being whitewashed by the U.S. Government. In the future there may be development of a more advanced system to find the source of APTs, presumably by advancing their monitoring capabilities and increasing the participants involved with CyberScope or related technologies.
And while there may be no foolproof system in place now (and there may never be one), there are three primary precautions you can take to protect your organization from APTs:
1) Ensure that user activities are monitored and that appropriate monitoring systems are in place. Users of SIEM technologies such as LogRhythm can find suspicious behaviors proactively across the enterprise, rather than waiting for post-incident forensics.
2) Educate employees so that they can identify suspicious activities, such as phishing e-mails, fake telephone solicitations, or other lapses in security enforcement such as tailgating through doors.
3) Isolate important information and add additional controls to prevent having a compromise by a single computer or employee from becoming a complete breach of company information.