Posts tagged: 'enterprise security'
Tricking users into copying different commands from what is displayed on a web page…
OK, maybe I’m late to this party, but I recently came across a very cool attack vector that I had not heard about until now. There’s an excellent write-up on this here, actually published in 2008, so I won’t go through the details of how it works. However, you can view an interactive demo of it in action here.
Essentially, this is a ruse that can be used to trick people into running a different command on their system than the one they thought they had copied from a website. Go ahead and try it out over at JSFiddle.net: just copy the text within the ‘result’ box and paste it into a text editor to review the full command. Neat, huh?!
The demo above shows an attempt to shovel a reverse Python shell back to the attacker’s system while making it appear that the command simply echoed “this is a test” to the screen as expected. This proof of concept is demonstrated below.
This is merely another vector that can be leveraged in social engineering attacks, and it demonstrates the risk of blindly copying and running commands from websites that you do not trust. Always re-type commands such as this, or paste them into a text editor before running them. Also, if you are cloning a repository from a resource such as GitHub, review the code before integrating it into your project. All too often, websites are backdoored by themes or modules downloaded from an untrusted repository and installed without code review. In general, you shouldn’t implicitly trust anything at face value; trust but verify…
Internationalized Domain Names (IDNs) are becoming more and more common in phishing attacks, command and control servers, malware propagation, advertisements, search engine optimization (SEO) poisoning, and other web-based attacks. Recently, LogRhythm Labs has intercepted many phishing messages utilizing IDNs and punycode in an attempt to mask the URI and potentially trick users into clicking their links — often masquerading as legitimate sites. Fortunately, their tactics are normally not very effective and often don’t work at all as intended. Granted, there are a few exceptions…
To demonstrate, let’s take a look at an example spam email that uses a punycode IDN so that the URI displays as a Russian domain name.
Translating the URI from punycode to Russian shows how the spammer wanted the URI to appear.
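This kind of translation is easy to do yourself. A minimal sketch using Python’s built-in ‘idna’ codec is shown below; the domain here is an illustrative example (пример.рф, the reserved Russian test name), not the actual domain from the spam message.

```python
# Decode a punycode (IDNA) domain to its Unicode form, and encode it back.
# 'xn--e1afmkfd.xn--p1ai' is an illustrative example, not the spam domain.
idna_domain = "xn--e1afmkfd.xn--p1ai"

# Python's built-in 'idna' codec converts each dot-separated label.
unicode_domain = idna_domain.encode("ascii").decode("idna")
print(unicode_domain)  # пример.рф

# Round-trip back to the ASCII-compatible form that resolvers actually use.
ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)  # xn--e1afmkfd.xn--p1ai
```

Browsers and resolvers only ever see the `xn--` form; the Unicode rendering is purely cosmetic, which is exactly what the spammer is counting on.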
Following three redirects brings you to (hxxp://www.bigdiscountmeds.com/). The domains it bounces through are listed below and the full report on this link can be obtained at anubis.iseclab.org.
Researching the primary domain, we can see that it is located in Ukraine.
The Russian domain name and Ukrainian hosting location are especially odd because the page has a Chinese signature at the bottom of its source code. This isn’t visible in the trace above, but it can be observed when the source is viewed directly or fetched with cURL.
Regardless, this is all just common spam, and millions of emails such as this are sent out every day. Personally, the really interesting aspect of IDN and punycode attacks is that many security tools are not currently set up to assess these domains in the same way as conventional (.com/.org/.net) domain names. Folks who are vigilant about security and assess links that they receive may run into roadblocks when it comes to easily analyzing IDNs. Many of the standard analysis tools fail to see these domains as valid and thus will not scan these links. The first example is a great free tool — Zulu URL Risk Analyzer by Zscaler — that unfortunately is not yet ready to handle punycode URIs.
When we ran a whois from dnsstuff.com, the punycode domain extension was not recognized.
We encountered similar errors when analyzing the URI with a variety of open-source first-step analysis tools; however, there are services available that can analyze and convert punycode IDNs. The fact that many major tools cannot yet handle these domains highlights a problem: attackers are advancing faster than our defenses. What’s worse, this is nothing new; DNS has supported non-ASCII domain names since 2009. If we as an industry are unable to keep up with the new techniques employed by our adversaries, we will surely lose this fight.
To that end, using a SIEM, web proxy, and/or firewall, these URIs are easy to detect, block, and alert on when accessed. It’s simple: if there is no business justification for folks within your organization to visit these domains, access to such URIs should be blocked, or alerted on at a minimum. Within LogRhythm, this activity can be detected using two basic AIE rules.
The first AIE rule checks for suspicious Top Level Domains (TLDs) and augments this with a whitelist of known-good IDN TLDs so that alarms do not fire when ‘approved’ domains are visited.
The second rule simply looks for any IDN domain.
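The logic behind both rules is straightforward. The sketch below illustrates it in Python; this is not AIE rule syntax, and the whitelist contents are a made-up example.

```python
# Illustrative sketch of the two detection rules described above.
# Not AIE syntax; the approved-TLD whitelist is a hypothetical example.

APPROVED_IDN_TLDS = {"xn--p1ai"}  # e.g. the Russian .рф ccTLD, if approved

def is_idn(hostname: str) -> bool:
    """Rule 2: flag any hostname containing a punycode (IDN) label."""
    return any(label.startswith("xn--") for label in hostname.lower().split("."))

def is_suspicious_idn(hostname: str) -> bool:
    """Rule 1: flag IDN hostnames whose TLD is not on the approved list."""
    tld = hostname.lower().rsplit(".", 1)[-1]
    return is_idn(hostname) and tld not in APPROVED_IDN_TLDS

print(is_idn("www.example.com"))                      # False
print(is_suspicious_idn("xn--e1afmkfd.xn--p1ai"))     # False (TLD whitelisted)
print(is_suspicious_idn("xn--e1afmkfd.xn--80asehdb")) # True
```

The same string match on `xn--` labels works in proxy logs, DNS logs, and firewall URL filtering alike.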
The National Institute of Standards and Technology (NIST) has drafted a document that specifically addresses Personally Identifiable Information (PII). The document will become Appendix J of SP 800-53. This means FISMA is likely to change to include these new privacy controls.
What does this mean to you? In the short term, nothing. The standard will be in a public comment period until September 2, 2011, and is not scheduled to be included in SP 800-53 until December 2011 when Revision 4 gets released. After that, it still needs to be updated in FISMA and other regulations derived from SP 800-53.
However, the controls outlined in the draft are significant, and will eventually add extra layers of complexity to an organization’s plan to become compliant. I’m still fully digesting the draft, but something that initially stands out is that NIST is treating PII data similar to how PCI-DSS treats card-holder data, and to how NERC-CIP treats Critical Cyber Assets. For example, control SE-1 in the NIST draft states:
a. Establishes, maintains, and regularly updates a PII inventory that contains a listing of all programs and information systems identified as collecting, using, maintaining, or sharing PII;
b. Provides each update of the PII inventory to the CIO or other information security officials to support the establishment of appropriate information security requirements for all new or modified information systems containing PII.
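As a loose illustration of what an SE-1-style inventory could look like in practice (the record fields and example entries here are my own invention, not anything prescribed by the draft):

```python
from dataclasses import dataclass, field

# Hypothetical record type for a PII inventory in the spirit of SE-1.
# Field names and sample systems are illustrative only.
@dataclass
class PiiSystem:
    name: str
    pii_categories: list = field(default_factory=list)  # e.g. ["name", "ssn"]
    shares_pii: bool = False

inventory = [
    PiiSystem("hr-portal", ["name", "ssn"], shares_pii=True),
    PiiSystem("build-server"),  # holds no PII
]

# The regularly updated report to the CIO covers systems that hold PII.
report = [s.name for s in inventory if s.pii_categories]
print(report)  # ['hr-portal']
```

Even a simple structured list like this makes the “regularly updates” and “provides each update to the CIO” requirements mechanical rather than a yearly scramble.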
This sounds very similar to the approaches in PCI-DSS for defining and securing the Cardholder Data Environment (CDE), or the NERC-CIP Critical Cyber Asset identification.
The draft also adds all the standard process and procedure requirements, auditing requirements, monitoring, roles and responsibilities, etc., that are seen across the board in other compliance regulations.
While there will still be quite some time before organizations are mandated to adhere to this draft standard, getting a head-start now will save headaches in the future, and protecting PII is simply the right thing to do, regardless of whether it’s mandated by compliance.
If it didn’t come across as mind-bendingly smug, I might describe the Sega hack as ‘old news before it even broke’. But it is. Old news. Another global digital meganame falls prey to malicious, possibly mafia- or triad-backed ill-doers.
Recently I sat and watched a trusted colleague deliver a presentation to a roomful of security personnel and liken their industry to an air wreck. I believe his exact words were ‘if this were a plane, I’d be running up and down the aisles screaming that we’re all going to die!’. Needless to say this was not well received on the day, but I can’t help but think that he had a point.
Now, I work for a SIEM vendor – the best on the planet, in my opinion, but I’m not going to ambulance-chase this one. This breach raises crucial questions about whose responsibility personal privacy actually IS. As I’ve said before, Amazon, Barclays Retail, Dell, Dabs – any of these guys could get hacked tomorrow and lose YOUR data. What then? It can take weeks to recover from a personal identity breach – resetting email accounts, changing card numbers and suppliers, and addressing the huge number of interconnected services and locations where your identity converges. And that’s not to mention the consequences if you actually lose money.
What more can individuals do? Most of us are getting it right: Don’t throw old business cards in the bin. Go for strong passwords, changed at least monthly. Don’t show identity badges in public places (watch out for my next blog on this!). Speak to everyone about the need for security. Educate the less technically literate about malware. Don’t respond to emails or phone calls about online matters unless you initiated the conversation. Keep one eye on the security blogs. Learn the language.
Can companies say the same thing? What about the people who I entrust my identity to? Invest in security – with all that entails. Infrastructure. Dedicated FTEs. Education. Compliance. Regular reviews. Fire drills. Specific executives whose job IS security. Clearly the people who take online privacy seriously are being let down by the companies who don’t, and the more companies that are breached, the more excusable it seems.
My own view on Sega, and on the bi-monthly additions to the ranks of large companies who didn’t make the grade, is that it’s time to think of security as a multi-partite affair. Your strategy should start with compliance, then loop through infrastructure best practice, via rigorous HR policies, and finish by directly addressing social engineering. The modern breach is a blended affair. Only a blended security strategy will work: one that centres on human factors.
Ten years ago, I was put into the position of having to figure out how to manage a serious gap in enterprise security for a vitally sensitive environment. The problem was introduced to me like this: “The team can read 3,000 pages of logs per day but they receive 55,000 pages.” Adding more heads to the problem wasn’t the right solution: they were losing context, inaccurate, under-trained, and bored. The success stories from this team were few and the needle-in-a-haystack factor was high. This process needed to be automated.
There was another serious problem besides log volume. I explained the systemic issues to my supervisors like this: “The security controls in our organization are like musical instruments in an orchestra. We have firewalls, anti-virus, intrusion detection devices, file integrity monitoring, content filters, anti-spam, host security, and application security. But right now each does security ‘solo’.”
Each control was managed by a different group. Reporting was all hand-constructed and carried to the security team. The firewall manager would deliver the firewall report daily, intrusion detection was handled by the incident handling team once a week, anti-virus reports were delivered monthly by the IT staff, and so on.
The whole of the system was as reliable as clockwork: each piece did its part exactly as planned and yet each was still ineffective. The whole process was too high-level to determine if a problem existed, too low-level to see the problem as it happened, incomplete because not all data was reviewed, and lacked the context to determine if a threat was real even if it was suspected.
The bottom line? All controls play different parts in the same symphony. They need to work together to turn noise into music.
The promise of SIEM was that computers can solve large, complex and tedious problems faster and more accurately than people; as such they are the ideal tool to police themselves and their users. We wanted the entire system of security controls in the enterprise working cooperatively, in harmony, and armed with the intelligence needed to protect the organization.
The emerging SIEM (SIM, SEM, or ‘master console’) technology was often focused on specific tools, such as intrusion detection or host security, that ‘extended’ into logging. Many critical systems had no logging whatsoever, and companies withheld such features in an attempt to keep their proprietary systems closed. It took most of the 2000s to get the message to vendors that third-party auditing was a requirement. Now SIEMs can be practical, and more creative means of using them can be applied.
Once the orchestra is together, the Conductor can lead. It’s of utmost importance that the managers, investigators, analysts, engineers, and operators hear the same song. What SIEM needed, and has achieved today, is the ability to connect logs to organizational regulations, policies, plans, procedures, organizational divisions, and even individual project requirements. Alerts can be tailored to correlate between identified events, known threats, and critical assets. Reports can be automated and customized to fit any manner of output requirements rather than being limited to hand-made spreadsheets. And valuable metrics can be mined from the official record of events that the SIEM establishes.
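The correlation idea can be sketched in simplified, hypothetical form (this is a toy, not how any particular SIEM implements it): an alert fires only when a known-bad indicator touches a critical asset. The event fields, IP, and asset names below are invented.

```python
# Toy correlation rule: alert only when a known-bad source touches a
# critical asset. Threat and asset lists are invented examples.
KNOWN_BAD_IPS = {"203.0.113.7"}
CRITICAL_ASSETS = {"db01", "hr-portal"}

events = [
    {"src": "203.0.113.7", "dest": "db01"},     # bad IP, critical asset
    {"src": "203.0.113.7", "dest": "kiosk42"},  # bad IP, low-value asset
    {"src": "198.51.100.2", "dest": "db01"},    # clean IP, critical asset
]

alerts = [e for e in events
          if e["src"] in KNOWN_BAD_IPS and e["dest"] in CRITICAL_ASSETS]
print(alerts)  # [{'src': '203.0.113.7', 'dest': 'db01'}]
```

Of the three events, only one crosses both lists, which is precisely the noise reduction that lets a small team handle enormous volumes.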
Thanks to SIEM technology, logs can be reviewed properly, fewer people are needed to review them, and volumes that may now exceed tens of millions of pages of data per day, rather than just 55,000, can be handled. Coverage now extends to the entire networked enterprise, and information can be presented in a timely manner and in ways that are useful to all stakeholders. There is no doubt in my mind that SIEMs are a key innovation for information technology and will be critical for what is shaping up to be a brutal future of security problems.