
Repeat the Executive Summary

You know the drill. Tell them what you are going to tell them, tell it to them, and then tell them what you told them. This is an excellent time to repeat your Executive Summary points.

Summary

I hope this chapter and this book have been helpful to you. This chapter was tailored for security professionals who don't have an intrusion-detection capability, want to upgrade their capability, or have these job positions under scrutiny. In much of the book, we try to give you a bit of insight into the enemy. In this chapter, we have tried to give you insight into management and business processes.

The most important thing to keep in mind, both for yourself and when you brief management, is that intrusion detection should be an integral part of your organization's information-assurance strategy. In fact, intrusion detection should be a part of every nation's information-assurance strategy. The events of this coming year, with massive IRC-bot-driven distributed denial-of-service attacks, SNMP/ASN.1 exploits, and polymorphic attacks, will prove this to be true. You don't need an IDS to detect a DDoS attack, but it will help you find the compromised hosts before they can be used to hurt someone. Now, let us take some time to discuss the future of intrusion detection in our final chapter in this book.

Chapter 20. Future Directions

Prognostication is dangerous. Have you seen the studies on the accuracy of newspaper and tabloid predictions? How will we do better? It is time to discuss the leading edge, the emerging tools and trends in intrusion detection. I am asked to speak on the future of various information-assurance topics a couple of times a year and try to stay abreast of trends, hold focus groups, and so forth. None of that ensures that I will be right about anything, so consider what you read in this chapter with care. With that, here is my read on the future of intrusion detection.

In terms of broad trends, we will discuss the emerging threat, cyber-terrorism, the ease by which attackers are able to install and run malicious code on our systems, the improvements in reconnaissance and targeting, skills versus tools, defense in depth, and large-scale intrusion detection. Finally, we'll close with some short takes on emerging trends.

Increasing Threat

One of the drivers that fuels the continued interest in intrusion detection is the increasing threat. The progress in attacker tools over the past year has been incredible. I am not talking about Code Red so much as Leaves and the IRC bot nets (networks of robot programs) that reached a significant level of sophistication in mid-2001. Attackers have the firepower to knock almost any site off the Internet. They can coordinate a fast scan, blowing through half of a class B in about five minutes from 2,500 or so discrete source hosts. They can also scan very slowly, modulating the technique to be almost undetectable. Many of these attackers are also security practitioners by day, a disturbing fact, and they are not planning to stop writing attack code.
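To put that fast-scan figure in perspective, here is a back-of-envelope calculation (my own illustration, not from the book): half of a class B is 32,768 addresses, so spread across roughly 2,500 source hosts over five minutes, each source only has to probe about 13 addresses, roughly one every 23 seconds. The aggregate scan is fast, but each individual source stays quiet.

    # Back-of-envelope arithmetic for the distributed scan described above.
    # The figures are illustrative, taken from the text, not measurements.
    addresses = (2 ** 16) // 2     # half of a class B: 32,768 addresses
    sources = 2500                 # discrete scanning source hosts
    duration_s = 5 * 60            # about five minutes

    per_source = addresses / sources        # ~13 targets per source host
    interval_s = duration_s / per_source    # ~23 seconds between probes per source

    print(f"each source probes ~{per_source:.0f} addresses, one every ~{interval_s:.0f} s")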

Cyber-Terrorism

"Have you seen any evidence of increasing attacks, anything significant?" No less than five of my friends that work for the government had asked me that question by noon on 9/11/2001. Suddenly, we started hearing about cyber-terrorism and, with Executive Order 13231 filed after the attack, we see the US Government preparing defensive mechanisms against cyberterrorism. Although we have tried to detail the increasing threat, and to be sure there is a lot of firepower out there, I do not see any evidence that cyber-based terrorism is a near-term threat. There are hints and glimmerings of it, but the emphasis of terrorism seems to remain fixed on bombs and guns. Is cyber-terrorism a credible threat? In some sense, it has to be. Much of the infrastructure is computer controlled, and the computers are certainly vulnerable. The main thing that seems to be holding cyber-based terrorism back seems to be the attacker's apparent lack of skill and motivation. In other words, the committed terrorists still seem to prefer bombs and guns to laptops for now.

With that said, we do need to consider the implications of the large attack networks that have been formed in the past year. One reason we have not seen more damage is that many of the people involved in creating these attack networks are not really malevolent.

An interesting trend that is as true today as it was when I first learned about it in 1997 is that a main theme of all this advanced denial of service is Internet Relay Chat (IRC). Groups of hackers fighting for control of IRC chatrooms developed the denial-of-service tools. As long as people were content to clobber IRC servers, who cared? Now the genie is out of the bottle and it cannot be put back. It is interesting that the latest attack networks are IRC bots, but they are certainly not constrained to IRC targets. If a group bent on terrorism were to gain control of one of these networks, it could certainly do significant damage, especially financially. If you could keep the top ten Internet businesses offline for a week, what would the potential financial damage be? It is more than just the lost revenue; it would include the weakened state of the companies and potentially a serious effect on the stock market, especially the technology-rich NASDAQ exchange.

The bottom line on terrorism, cyber or not, is that your organization should have a contingency plan. Right after 9/11, there was a bit of concern about creating and updating business continuity plans, but it seemed to pass quickly, even while the site of the World Trade Center was still smoking. The main thing is to make sure you have an alternative way of doing business in case the net infrastructure gets severely perturbed at some point.

Large-Scale Compromise

Trojan horses, logic bombs, and software vulnerabilities are incredibly rampant. The bad news is that it is essentially impossible to secure modern operating systems. One of the reasons for this is their complexity. Take a look at your active processes, ps -ax or ps -ef on UNIX and Ctrl+Alt+Del on Windows. Ask yourself whether you would notice if anything in that listing changed. These are high-level listings, not the function calls and .dlls themselves. If someone were able to plant a malicious routine on one of your systems, you would probably not be able to find it except with a tool like anti-virus software. So how do these backdoors and such get planted on your systems?
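One way to make that check concrete is to keep a baseline of the process listing and diff against it, rather than trusting your memory. The following is a minimal sketch (my own illustration, not from the book), assuming a UNIX-like system where ps -eo user,comm is available; the baseline file name is arbitrary, and a rootkit that hides from ps would of course defeat it.

    # process_baseline.py: snapshot `ps -eo user,comm` and diff it against a
    # saved baseline, to show how easily a new process can slip in unnoticed.
    import subprocess
    from pathlib import Path

    BASELINE = Path("ps_baseline.txt")  # arbitrary file name for the saved baseline

    def current_processes():
        """Return the current process listing as a set of 'user command' lines."""
        out = subprocess.run(["ps", "-eo", "user,comm"],
                             capture_output=True, text=True, check=True)
        return set(out.stdout.strip().splitlines()[1:])  # drop the header row

    if not BASELINE.exists():
        BASELINE.write_text("\n".join(sorted(current_processes())))
        print("Baseline saved; run again later to compare.")
    else:
        baseline = set(BASELINE.read_text().splitlines())
        for entry in sorted(current_processes() - baseline):
            print("not in baseline:", entry)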

A huge vector for Windows systems in the past two years has been browser-related problems. A number of vulnerabilities in Microsoft's Internet Explorer have been reported that allow attackers to run arbitrary programs on systems when the browser downloads web pages with specially formatted strings. This is on top of the previous trend of creating attacks based on vulnerabilities in the Outlook mail program. Granted, these attacks are in some sense at the bottom of the food chain: PCs, many of which are on dial-up connections and cannot do that much damage; but just as many are inside government facilities, corporations, educational institutions, and homes with broadband connectivity. On UNIX systems, a variety of buffer overflows have been found and exploited that allow attackers to accomplish the same thing. In addition to the techniques that attackers use to break into systems, they are also becoming more adept at finding systems to break into.

Improved Targeting

In this book, you have learned a lot about the various reconnaissance techniques attackers use. Multiple organizations are involved in Internet mapping efforts. Some of the aspects of advanced targeting include the following:

Techniques to maximize results using broadcast packets when possible. If a site allows broadcast packets to enter its network from the Internet, attackers can get significant results with a fairly low number of stimulus packets. Scanning is actually fairly slow going; this is the reason nmap and other tools default to an echo request first. If they get a reply, they invest in scanning for open ports and protocols. (A sketch of this echo-request-first approach follows this list.)

Avoidance of dangerous IP address ranges, based on lists of honeypots and sites that are known to be alert and active in reporting to CIRTs and law enforcement.

Sharing reconnaissance data between scanning organizations minimizes the footprint. If two groups have different techniques and they share the results, it is harder to detect them in action, especially if they both use distributed scanning.
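The echo-request-first behavior mentioned in the first item above is easy to illustrate. The following is a minimal sketch (my own illustration, not from the book): it sends a single ping and only invests in TCP connect probes if the host answers. It assumes a Linux-style ping command (-c count, -W timeout in seconds), the port list is an arbitrary sample, and it should only be pointed at hosts you are authorized to scan.

    # ping_then_scan.py: ping a host first and, only if it replies, check a
    # handful of TCP ports, mirroring the default host-discovery behavior
    # described above.
    import socket
    import subprocess

    PORTS = [21, 22, 23, 25, 80, 443]  # arbitrary sample of common ports

    def responds_to_ping(host, timeout_s=1):
        """Send one ICMP echo request via the system ping command (Linux-style flags)."""
        result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def open_ports(host, ports, timeout_s=0.5):
        """Attempt a TCP connect to each port; return the ports that accept."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout_s)
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
        return found

    if __name__ == "__main__":
        target = "192.0.2.10"  # placeholder (TEST-NET) address; substitute a host you own
        if responds_to_ping(target):
            print(target, "answered the echo request; open ports:", open_ports(target, PORTS))
        else:
            print(target, "did not answer; skipping the port scan")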

Because the reconnaissance has been going on for a long time, we are now seeing the results of long-term mapping efforts. When you see a few probes, they might be validating that the site map the attackers hold is still fairly up-to-date. As new vulnerabilities are found, the attackers will have the capability to launch precision attacks.

How the Threat Will Be Manifested

The fact that systems are vulnerable and attackers are perfecting their techniques for finding vulnerable systems is not news. What changed in late 2001 and early 2002 was the scale. Large-scale, successful attacks such as the Leaves SubSeven scans, Code Red and Nimda against IIS, and the SNMP/ASN.1 and Apache PHP attacks in early 2002 left attackers with networks of thousands and thousands of compromised zombie systems, and they had primitive but workable command-and-control systems to manage these networks.
