Friday, July 22, 2011

Defeating Attacks Easier Than Detecting Them


Actually, after reading the first few paragraphs I’m surprised that this flaw wasn’t exploited a lot sooner than it was. The ability to measure fairly accurately the components that make up web application performance is not something that’s unknown, after all. So the claim that an algorithm can correctly “weed out” network latency is not at all surprising.
But what if the performance were randomized by, say, an intermediary injecting additional delays into the response? You can’t accurately account for something that’s randomly added (or not added, as the case may be), and as long as you seeded the random generation with something that cannot be derived from the context of the session, there are few algorithms that could figure out what the seed might be. That’s important because random number generation often isn’t truly random, and it can often be predicted by knowing what was used to seed the generator. So we could defeat such an attack by simply injecting random amounts of delay into the response.
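As a rough illustration, here’s a minimal sketch in Python of what such an intermediary might do. The function names and the 50ms ceiling are assumptions for illustration, not any particular product’s implementation; the key point is that the randomness comes from OS entropy rather than a seed an attacker could reconstruct from the session:

```python
import time
import secrets

MAX_JITTER_MS = 50  # assumed ceiling; tune for acceptable latency cost

def send_with_jitter(send_response, *args, **kwargs):
    # secrets.randbelow() draws from OS entropy, so the delay is
    # cryptographically unpredictable, unlike a PRNG seeded with the
    # time or anything else derivable from the session context
    delay_ms = secrets.randbelow(MAX_JITTER_MS + 1)
    time.sleep(delay_ms / 1000.0)
    return send_response(*args, **kwargs)
```

Because each response carries noise an attacker cannot model or average away cheaply, the tiny timing signal the attack depends on is buried.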
Or, because the attack depends on an observable difference in timing, simply normalizing response times for the login process would also defeat this attack. This is the solution pointed out in another article on the discovery, “OAuth and OpenID Vulnerable to Timing Attack”, which reports that developers of the impacted libraries say just six lines of code will solve the problem by normalizing response times. This, of course, illustrates a separate problem: the reliance on external sources to address security risks to which millions may be vulnerable right now, because while the resolution is simple, it may take days, weeks, or more before it is available.
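The article doesn’t reproduce those six lines, but the idea is straightforward. Here is a hedged sketch of one common normalization approach (the names and the 250ms floor are assumptions, not the libraries’ actual fix): compare credentials in constant time, then pad every response out to the same minimum duration:

```python
import time
import hmac

MIN_RESPONSE_SECONDS = 0.25  # assumed floor; tune for your environment

def verify_login(supplied_token: bytes, expected_token: bytes) -> bool:
    start = time.monotonic()
    # hmac.compare_digest takes time independent of where the inputs differ
    ok = hmac.compare_digest(supplied_token, expected_token)
    # pad every response to the same minimum duration so timing reveals
    # nothing about how far the comparison got
    elapsed = time.monotonic() - start
    if elapsed < MIN_RESPONSE_SECONDS:
        time.sleep(MIN_RESPONSE_SECONDS - elapsed)
    return ok
```

Success and failure now take the same observable time, so there is no timing difference left to measure.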
This particular attack would indeed be disastrous were it to be exploited given the reliance on these libraries by so many popular web sites. And though the solutions are fairly easy to implement, that isn’t the real problem. The real problem is how difficult such attacks are becoming to detect, especially in the face of the risk incurred by remaining vulnerable while solutions are developed and distributed.

DEFEATING is EASY. DETECTING is HARD.

The trick here, and what makes many modern attacks so dangerous, is that it’s really, really hard to detect them in the first place.

Any attack that can be distributed across multiple clients – clients smart enough to synchronize and communicate with one another – becomes difficult to detect, especially in a load balanced (elastic) environment in which those requests are likely spread across multiple application instances. The variability in where attacks are coming from makes it very difficult to see an attack occurring in real time because no single stream exhibits the behavior most security-focused infrastructure watches for.
What’s needed to detect such an attack is to be more concerned with what is being targeted than with by whom or from where. While you want to keep track of that data, the trigger for such brute-force attacks is the target, not the client activity. Attackers are getting smart; they know that repeated attempts at X or Y will be detected and that, more than likely, their client will be blacklisted for a period of time (if not permanently), so they’ve come up with alternative methods that “hide” and try to appear like normal requests and responses instead. In fact, it could be postulated that it is not repeated attempts to log in from a single location that are the problem today, but rather repeated attempts to log in from multiple locations across time. So what you have to look for is not necessarily (or only) repeated requests, but also repeated attempts to access specific resources, like a login page.
But a login page is going to see a lot of use, so it’s not just the login page you need to be concerned with, but the credentials as well. In any brute-force, account-level attack there are going to be multiple requests attempting to access a resource using the same credentials. That’s the trigger. It requires more context than your traditional connection- or request-based security triggers because you’re not just looking at an IP address or a resource; you’re looking deeper and trying to infer, from a combination of data and behavior, what’s going on.
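A minimal sketch of that kind of trigger might look like the following. The thresholds, the in-memory store, and the alert callback are all assumptions for illustration; the point is that attempts are keyed on the credential being targeted rather than on any single client:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 3600   # assumed observation window
ATTEMPT_THRESHOLD = 10  # assumed trigger threshold

# credential -> list of (timestamp, source_ip) for recent attempts
attempts = defaultdict(list)

def record_login_attempt(username, source_ip, alert):
    """Track attempts per *credential*, not per client, and alert when
    the same account is hammered from multiple locations over time."""
    now = time.time()
    recent = [(t, ip) for t, ip in attempts[username]
              if now - t < WINDOW_SECONDS]
    recent.append((now, source_ip))
    attempts[username] = recent

    distinct_sources = {ip for _, ip in recent}
    if len(recent) >= ATTEMPT_THRESHOLD and len(distinct_sources) > 1:
        # no single stream crossed a rate limit, but the target did
        alert(username, distinct_sources)
```

Note that no individual source IP needs to cross a rate limit for the alert to fire; the aggregate pressure on one account is what trips it.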

THIS SITUATION will BECOME the STATUS QUO

As we move forward into a new networking frontier, as applications and users are decoupled from IP addresses and distribution of both clients and applications across cloud computing environments becomes more common, the detection of attacks is going to get even more difficult without collaboration.

The new network, the integrated and collaborative network, the dynamic network, is going to become a requirement. Network, application networking, and security infrastructure need a new way of combating modern threats and their attack patterns. That new way is going to require context, and the sharing of that context across all strategic points of control at which such attacks might be mitigated.
This becomes important because, as pointed out earlier, many web applications rely upon third-party solutions for functionality. Open source or not, it still takes time for developers to implement a solution and subsequently for organizations to incorporate and redeploy the “patched” application. That leaves users of such applications vulnerable to exploitation or identity theft in the interim. Security infrastructure must be able to detect such attacks in order to protect users and corporate resources (infrastructure, applications, and data) from exploitation while they wait for those solutions. When is more important than where when it comes to addressing newly discovered vulnerabilities, but when is highly dependent upon the ability of infrastructure to detect an attack in the first place.
We’re going to need infrastructure that is context-aware.