Chapter 8

Security Assessment and Testing

IN THIS CHAPTER

· Designing and validating assessment, test, and audit strategies

· Conducting security control testing

· Collecting security process data

· Analyzing test output and generating reports

· Conducting or facilitating security audits

In this chapter, you learn about the various tools and techniques that security professionals use to continually assess and validate an organization’s IT environment. This domain represents 12 percent of the CISSP certification exam.

Design and Validate Assessment, Test, and Audit Strategies

Modern security threats are rapidly and constantly evolving. Likewise, an organization’s systems, applications, networks, services, and users are frequently changing. Thus, it is critical that organizations develop an effective strategy to regularly test, evaluate, and adapt their business and technology environment to reduce the probability and impact of successful attacks, as well as to achieve compliance with applicable laws, regulations, and contractual obligations.

Crossreference This section covers Objective 6.1 of the Security Assessment and Testing domain in the CISSP Exam Outline (May 1, 2021).

Organizations need to implement a proactive assessment, test, and audit strategy for both existing and new information systems and assets. The strategy should be an integral part of the risk management process to help the organization identify new and changing risks that are important enough to warrant analysis, decisions, and action.

Security personnel must identify all applicable laws, regulations, and other legal obligations (such as contracts) to understand what assessments, testing, and auditing are required. Further, security personnel should examine their organization’s risk management framework and control framework to see what assessments, control testing, and audits are suggested or required. Then the combination of these solutions would become part of the organization’s overall strategy for ensuring that all its security-related tools, systems, and processes are operating properly.

Three main perspectives come into play in planning for an organization’s assessments, testing, and auditing:

· Internal: Assessments, testing, and auditing performed by personnel who are part of the organization. The advantages of using internal resources for assessments, tests, and audits include lower costs and greater familiarity with the organization’s practices and systems. Internal personnel may not be as objective as external parties, however.

· External: Assessments, testing, and audits performed by people from an external organization or agency. Some laws and regulations, as well as contractual obligations, may require external assessments, tests, and audits of certain systems and processes. The greatest advantage of using external personnel is that they’re objective. They’re often more expensive, however, particularly for activities that require higher skill levels or specialized tools.

· Third-party: Audits of critical business activities that have been outsourced to external service providers (third parties). The systems and personnel being examined belong to an external service provider. Depending on the requirements in applicable laws, regulations, and contracts, these assessments of third parties may be performed by internal personnel; in some cases, external personnel may be required.

Tip To avoid multiple audits, many third-party service providers commission external audits whose audit reports can be distributed to their customers. Examples of such audits include SSAE 18, SOC 1, SOC 2, and PCI DSS. Service providers also commission security consulting firms to conduct penetration tests on systems and applications, which helps reduce the number of customers wanting to perform such testing themselves.

Conduct Security Control Testing

Security control testing employs various tools and techniques, including vulnerability assessments, penetration (or pen) testing, synthetic transactions, interface testing, and more. You learn about these and other tools and techniques in the following sections.

Crossreference This section covers Objective 6.2 of the Security Assessment and Testing domain in the CISSP Exam Outline (May 1, 2021).

Vulnerability assessment

A vulnerability assessment is performed to identify, evaluate, quantify, and prioritize security weaknesses in an application or system. Additionally, a vulnerability assessment provides remediation steps to mitigate specific vulnerabilities that are identified in the environment.

There are three general types of vulnerability assessments:

· Port scan (not intensive)

· Vulnerability scan (more intensive)

· Penetration test (most intensive)

Generally, automated network-based scanning tools are used to identify vulnerabilities in applications, systems, and network devices in a network. Sometimes, system-based scanning tools are used to examine configuration settings to identify exploitable vulnerabilities. Often, network- and system-based tools are used together to build a complete picture of vulnerabilities in an environment.

The various types of vulnerability assessments fit into a fundamental activity in information security known as vulnerability management, which is a formal process of assessment, vulnerability identification, and remediation within specific timeframes. The purpose of vulnerability management often includes attack surface reduction, the quest to reduce the number of systems, devices, and components that are potentially exploitable.

Port scanning

A port scan uses a tool that communicates over the network with one or more target systems on various Transmission Control Protocol/Internet Protocol (TCP/IP) ports. A port scan can discover ports that probably should be disabled (because they serve no useful or necessary purpose on a particular system).
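To make this concrete, here’s a minimal Python sketch of a TCP connect scan, the simplest form of port scan. The target host and port range are placeholders; scan only systems you’re explicitly authorized to test.

```python
# Minimal TCP connect scan: reports ports that accept a connection.
# Only scan hosts you are explicitly authorized to test.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # 127.0.0.1 is a safe placeholder; substitute an authorized target.
    for port in scan_ports("127.0.0.1", range(1, 1025)):
        print(f"Port {port}/tcp is open")
```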

Vulnerability scans

Network-based vulnerability scanning tools send network messages to systems in a network to identify any utilities, programs, or tools that may be configured to communicate over the network. These tools attempt to identify the version of any utilities, programs, and tools; often, it is enough to know the versions of the programs that are running, because scanning tools typically contain a database of known vulnerabilities associated with program versions. Scanning tools may also send specially crafted messages to running programs to see whether those programs contain any exploitable vulnerabilities.

Tools are also used to identify vulnerabilities in software applications. Generally, these tools are divided into two types: dynamic application security testing (DAST) and static application security testing (SAST). DAST executes an application and then uses techniques such as fuzzing in an attempt to identify exploitable vulnerabilities that could permit an attacker to compromise a software application and then alter or steal data or take control of the system. SAST examines an application’s source code for exploitable vulnerabilities. Neither DAST nor SAST can find all vulnerabilities, but when these tools are used together by skilled personnel, many exploitable vulnerabilities can be found.

Examples of network-based vulnerability scanning tools include Nessus, Rapid7, and Qualys. Flexera (formerly Secunia) PSI is an example of a system-based vulnerability scanning tool. Example application scanning tools include HCL AppScan, Fortify WebInspect, Acunetix, and Burp Suite.

Unauthenticated and authenticated scans

Vulnerability scanning tools (those used to examine systems and network devices, as well as those that examine applications) generally perform two types of scans: unauthenticated and authenticated. In an authenticated scan, the scanning tool is configured with login credentials and attempts to log in to the device, system, or application to identify vulnerabilities that are not discoverable otherwise. In an unauthenticated scan, the scanning tool does not attempt to log in; hence, it can discover only vulnerabilities that would be exploitable by someone who does not possess valid login credentials.

Vulnerability scan reports

Generally, all the types of scanning tools discussed in this section create a report that contains summary and detailed information about the scan that was performed and vulnerabilities that were identified. Many of these tools produce a good amount of detail, including steps used to identify each vulnerability, the severity of each vulnerability, and steps that can be taken to remediate each vulnerability.

Some vulnerability scanning tools employ a proprietary methodology for vulnerability identification, but most scanning tools include a common vulnerability scoring system score for each identified vulnerability. Application security is discussed in more detail in Chapter 10.

Vulnerability assessments are a key part of risk management (discussed in Chapter 3).

Penetration testing

Penetration testing (pen testing for short) is the most rigorous form of vulnerability assessment. The level of effort required to perform a penetration test is far higher than that required for a port scan or vulnerability scan. Typically, an organization will employ a penetration test on a target system or environment when it wants to simulate an attack by an adversary.

Network penetration testing

A network penetration test of systems and network devices generally begins with a port scan and/or a vulnerability scan. This scan gives the pen tester an inventory of the attack surface of the network and the systems and devices connected to the network. The pen test continues with extensive use of manual techniques to identify and/or exploit vulnerabilities. In other words, the pen tester uses both automated and manual techniques to identify and confirm vulnerabilities.

Occasionally, a pen tester exploits vulnerabilities during a penetration test. Pen testers generally tread carefully because they must be acutely aware of the target environment. If a pen tester is testing a live production environment, for example, exploiting vulnerabilities could result in malfunctions or outages in the target environment. In some cases, data corruption or data loss could also result.

When performing a penetration test, the pen tester may take screen shots showing the exploited system or device because some system/device owners don’t believe that their environments contain exploitable vulnerabilities. By including screen shots in the final report, the pen tester is “proving” that vulnerabilities exist and are exploitable.

THE COMMON VULNERABILITY SCORING SYSTEM

The common vulnerability scoring system (CVSS) is an industry-standard method used to numerically score vulnerabilities according to severity. Numeric scores for vulnerabilities help security personnel prioritize remediation, generally by fixing vulnerabilities with higher scores before tackling those with lower scores.

The formula for arriving at a CVSS score for a given vulnerability is fairly complicated, but all you need to understand is its basic structure. A CVSS score examines several aspects of a vulnerability, including the following:

· Access vector: How a vulnerability is exploited

· Access complexity: How easy or difficult it is to exploit a vulnerability

· Authentication: Whether an attacker must authenticate to a target system to exploit it

· Confidentiality: The potential impact on the confidentiality of data on the target system if it is exploited

· Integrity: The potential impact on the integrity of data on the target system if it is exploited

· Availability: The potential impact on the availability of data or applications on the target system if it is exploited

Generally speaking, the higher a CVSS score, the easier it is to exploit a given vulnerability and the greater the impact on the target systems.
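To see how scores drive prioritization, here’s a small Python sketch that maps scores to the qualitative severity bands published for CVSS v3.x (Low, Medium, High, Critical) and sorts findings for remediation. The findings themselves are made up for illustration.

```python
# Map CVSS scores to the CVSS v3.x qualitative severity bands and
# sort findings so the highest-severity items are remediated first.

def severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical scan findings: (vulnerability name, CVSS base score)
findings = [("Outdated TLS configuration", 5.3),
            ("Remote code execution in web app", 9.8),
            ("Verbose error messages", 2.7)]

for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{severity(score):8} {score:4.1f}  {name}")
```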

Pen testers often include details for reproducing exploits in their reports. These details are helpful to system or network engineers who want to reproduce the exploit so that they can see for themselves that the vulnerability does in fact exist. They’re also helpful when engineers or developers make changes to mitigate the vulnerabilities; they can use the same techniques to see whether their fixes closed the vulnerabilities.

In addition to scanning networks, techniques that are generally included in the topic of network penetration testing include the following:

· Wardialing: Hackers use wardialing to sequentially dial all phone numbers in a range to discover any active modems; then they attempt to compromise any connected systems or networks via the modem connection. This attack is old-school, but it’s still used occasionally.

· Wardriving: Wardriving is the 21st-century version of wardialing. Someone with a laptop computer literally drives around a densely populated area, looking for unprotected (or poorly protected) wireless access points.

· Radiation monitoring: Radio frequency (RF) emanations are the electromagnetic radiation emitted by computers and network devices. Radiation monitoring is similar to packet sniffing and wardriving, in that someone uses sophisticated equipment to try to determine what data is being displayed on monitors, transmitted on local area networks (LANs), or processed in computers.

· Eavesdropping: Eavesdropping is as low-tech as dumpster diving but a little less (physically) dirty. Basically, an eavesdropper takes advantage of one or more people who are talking or using a computer and paying little attention to whether someone else is listening to their conversations or watching them work with discreet over-the-shoulder glances. (The technical term for the latter approach is shoulder surfing.)

· Packet sniffing: A packet sniffer is a tool that captures all TCP/IP packets on a network, not just those being sent to the system or device doing the sniffing. An Ethernet network is a shared-media network (see Chapter 6), which means that any or all devices on the LAN can (theoretically) view all packets. Switched-media LANs are more prevalent today, however, and sniffers on switched-media LANs generally pick up only packets intended for the device running the sniffer.

Tip A network adapter that operates in promiscuous mode accepts all packets, not just the packets destined for the system, and sends them to the operating system.
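For the curious, here’s a hedged sketch of a simple sniffer built on the Scapy library, one of several packet-capture options. It assumes Scapy is installed (pip install scapy), that you have administrator privileges, and that you’re authorized to capture on the network.

```python
# A small sniffer sketch using Scapy. Capturing packets typically
# requires root/administrator privileges, and the network adapter must
# operate in promiscuous mode to see traffic not addressed to this
# host. Capture only on networks you are authorized to monitor.
from scapy.all import sniff

def show(pkt):
    # summary() prints a one-line description of each captured packet
    print(pkt.summary())

# Capture 10 TCP packets on the default interface, then stop.
sniff(filter="tcp", prn=show, count=10)
```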

PACKET SNIFFING ISN’T ALL BAD

Packet sniffing isn’t just a tool that hackers use to pick up user IDs and passwords from a LAN; it has legitimate uses as well. Primarily, you can use it as a diagnostic tool to troubleshoot network devices, such as a firewall (to see whether the desired packets get through), routers, switches, and virtual LANs (VLANs).

The obvious danger of the packet sniffer’s falling into the wrong hands is that it provides the capability to capture sensitive data, including user IDs and passwords. Equally perilous is the fact that packet sniffers can be difficult to detect on a network.

Application penetration testing

An application penetration test is used to identify vulnerabilities in a software application. Although the principles of an application penetration test are the same as those of a network penetration test, the tools and skills are somewhat different. Someone performing an application penetration test generally has an extensive background in software development. Indeed, the best application pen testers are often former software developers or software engineers.

Physical penetration testing

Penetration tests are also performed on the controls protecting physical premises to see whether it is possible for an intruder to bypass security controls such as locked doors and keycard-controlled entrances. Sometimes, pen testers employ various social engineering techniques to gain unauthorized access to work centers and sensitive areas within work centers, such as computer rooms and file storage rooms. Often, they plant evidence, such as a business card or other object, to prove that they were successful.

Tip Hacking For Dummies, 6th Edition (John Wiley & Sons, Inc.), by Kevin Beaver, explores penetration testing and other techniques in more detail.

In addition to breaking into facilities, physical pen testers practice dumpster diving. Dumpster diving is low-tech penetration testing at its best (or worst) and is exactly what it sounds like. This test can be an extraordinarily fruitful way to obtain information about an organization. Organizations in highly competitive environments also need to be concerned about where their trash and recycled paper go.

GET OUT OF JAIL FREE

Penetration testers who are hired to target physical premises often ask for a signed letter printed on company letterhead that authorizes them to use various techniques to break into physical premises. Pen testers carry these letters (often called “get out of jail free” letters) in case on-site personnel call security or law enforcement.

This safeguard usually helps keep a pen tester out of trouble, but not always. Knowing this technique, cybercriminals may use these letters to try to fool personnel into leaving them alone. A key feature of these letters is contact information for one or more senior officials in the organization whom security or law enforcement can call to verify that the pen tester is legitimate. But this feature is not foolproof either; cybercriminals can cite a real name but provide the phone number of one of their accomplices.

Social engineering

Social engineering is any testing technique that tricks people into performing an action or providing information that enables the pen tester to break into an application, system, or network. Social engineering involves such low-tech tactics as pretending to be a support technician, calling an employee, and asking for their password. You’d think that most people would be smart enough not to fall for this trick, but people are people (and Soylent Green is people)! Some of the ruses used in social engineering tests include the following:

· Phishing messages: Email messages purporting to be something they’re not, sent in an attempt to lure someone into opening a file or clicking a link. Test phishing messages are harmless, of course, but they’re used to see how many personnel fall for the ruse.

· Telephone calls: Calls made to various workers inside an organization to trick them into performing tasks. A call to the service desk, for example, might attempt to reset a user’s account (possibly enabling the pen tester to log in to that user’s account).

· Tailgating: Attempts to enter a restricted work area by following legitimate personnel as they pass through a controlled doorway. Sometimes, the tester carries boxes in the hopes that an employee will hold the door open for them or poses as a delivery or equipment repair person.

PHISHING AND ITS VARIANTS

Phishing messages pretend to be something they’re not. There are several specific forms of phishing, including

· Pharming: This attack results in users visiting an imposter website instead of the site they intend to visit. Pharming can be accomplished through an attack on a system’s hosts file, an organization’s Domain Name System (DNS), or a domain homograph attack.

· Spearphishing: These phishing messages target a single organization (or part of an organization) with highly customized messaging.

· Whaling: These phishing messages are sent to executives in a target organization.

· Smishing: These phishing messages are delivered through Short Message Service (SMS), also known as texting.

Log reviews

Reviewing your various security logs on a regular basis (ideally, daily) is a critical step in security control testing. Unfortunately, this important task often ranks only slightly higher than updating documentation on many administrators’ to-do lists. Log reviews often happen only after an incident has occurred, but that’s not the time to discover that your logging is incomplete or insufficient.

Logging requirements (including any regulatory or legal mandates) need to be clearly defined in an organization’s security policy, including the following:

· What gets logged, such as

· Events in network devices, such as firewalls, intrusion prevention systems (IPSes), web filters, and data loss prevention (DLP) systems

· Events in server and workstation operating systems

· Events in subsystems, such as web servers, database management systems, and application gateways

· Events in applications

· What’s in the logs, such as

· Date/time of the event

· Source (and destination, if applicable), protocol, and IP addresses

· Device, system, and/or user ID

· Event ID and category

· Event details

· When and how often the logs are reviewed

· The level of logging (how verbose the logs are)

· How and where the logs are transmitted, stored, and protected:

· Are the logs stored on a centralized log server or on the local system hard drives?

· Which secure transmission protocol is used to ensure the integrity of the logging data in transit?

· How are date and timestamps synchronized (for example, by using a Network Time Protocol (NTP) server)?

· Is encryption of the logs required?

· Who is authorized to access the logs?

· Which safeguards are in place to protect the integrity of the logs?

· How is access to the logs logged?

· How long the logs are retained

· Which events in logs are triggered to generate alerts and to whom alerts are sent
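One way to satisfy several of the “what’s in the logs” items above is to emit structured (JSON) log records that a SIEM can parse. The following Python sketch is illustrative only; the field names and the sample event are assumptions, not a standard schema.

```python
# Emit log records carrying the fields listed above as structured JSON
# so that a log management tool or SIEM can parse them.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(event_id: str, category: str, user_id: str,
              src_ip: str, dst_ip: str, protocol: str, details: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # NTP-synchronized clock assumed
        "event_id": event_id,
        "category": category,
        "user_id": user_id,
        "src_ip": src_ip,
        "dst_ip": dst_ip,
        "protocol": protocol,
        "details": details,
    }
    logging.info(json.dumps(record))

# Hypothetical failed-login event
log_event("4625", "authentication", "jsmith",
          "10.0.0.15", "10.0.0.2", "tcp",
          "Failed login: unknown user name or bad password")
```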

Tip Various log management tools, such as security information and event management (SIEM) systems (discussed in Chapter 9), may be used to help with real-time monitoring, parsing, anomaly detection, and generation of alerts to key personnel.

Synthetic transactions

Synthetic transactions are real-time actions or events that execute on monitored objects automatically. A tool might be used to regularly perform a series of scripted steps on an e-commerce website, for example, to measure performance, identify impending performance issues, simulate the user experience, and confirm calculations. Thus, synthetic transactions can help an organization proactively test, monitor, and ensure integrity and availability (refer to the C-I-A triad in Chapter 3) for critical systems and monitor service-level agreement (SLA) guarantees.
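As a minimal illustration, the following Python sketch performs one scripted synthetic transaction with the requests library, times it, and compares the latency against an SLA threshold. The URL and threshold are hypothetical; a real monitor would run many scripted steps on a schedule.

```python
# A minimal synthetic-transaction probe: perform a scripted request
# against a monitored endpoint, time it, and compare the latency
# against an SLA threshold.
import time
import requests  # pip install requests

URL = "https://shop.example.com/healthcheck"   # hypothetical endpoint
SLA_SECONDS = 2.0                              # hypothetical SLA target

start = time.monotonic()
try:
    response = requests.get(URL, timeout=10)
    elapsed = time.monotonic() - start
    ok = response.status_code == 200 and elapsed <= SLA_SECONDS
    print(f"status={response.status_code} latency={elapsed:.3f}s "
          f"{'OK' if ok else 'SLA VIOLATION'}")
except requests.RequestException as err:
    print(f"Transaction failed: {err}")
```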

NOBODY REVIEWS LOGS ANYMORE

Systems create event logs that are sometimes the only indicator that something is amiss. Originally, logs were designed for either of two purposes: for periodic reviews, as a way of looking for unwanted events, or for forensic purposes in case of an incident or breach, so that investigators can piece together the clues.

Back in the day, sysadmins would check logs first thing in the morning to see what was amiss. But as sysadmins got busier, guess what was the first daily task to fall by the wayside? You got it: reviewing logs. Soon after, the mere existence of logs was practically forgotten. Logs had become only forensic resources. But for logs to be useful, you must know that an unwanted event has occurred.

Enter the security information and event management (SIEM) system. A SIEM system does what no sysadmin could ever do: monitors log entries from all systems and network devices in real time, correlates events from various systems and devices, and automatically creates actionable alerts on the spot when unwanted events occur.

Not every organization has a SIEM system, and many organizations that don’t have one don’t review logs either. We strongly discourage this form of negligence. It’s essential for an organization to be aware of what’s happening in its environment.

Application performance monitoring tools traditionally produce such metrics as system uptime, correct processing, and transaction latency. Although uptime certainly is an important aspect of availability, it is only one component. Increasingly, reachability (which is a more user- or application-centric metric) is becoming the preferred metric for organizations that focus on customer experience. After all, it doesn’t do your customers much good if your web servers are up 99.999 percent of the time but Internet connections in their region of the world are slow, DNS doesn’t resolve quickly, or web pages take 5 or 6 seconds to load in an online world that measures responsiveness in milliseconds. Hence, other key metrics for applications are correct processing (perhaps expressed as a percentage, which should be close to 100 percent) and transaction latency (the length of time it takes for specific types of transactions to complete). These metrics help operations personnel spot application problems.

Code review and testing

Code review and testing (sometimes known as peer review) involves systematically examining application source code to identify bugs, mistakes, inefficiencies, and security vulnerabilities in software programs. Version control systems, such as Git and Mercurial, enable software developers to manage source code in a collaborative development environment. A code review can be accomplished manually, by carefully examining code changes visually, or by using automated code reviewing software (such as HCL AppScan Source, HP Fortify, and CA Veracode). Different types of code review and testing techniques include

· Pair programming: Pair (or peer) programming is a technique commonly used in agile software development and extreme programming (both discussed in Chapter 10), in which two developers work together and alternate between writing and reviewing code line by line.

· Lightweight code review: Often performed as part of the development process, this technique consists of conducting informal walk-throughs and email pass-around, tool-assisted, and/or over-the-shoulder (not recommended for the rare introverted or paranoid developer) reviews.

· Formal inspections: Structured processes such as the Fagan inspection are used to identify defects in design documents, requirements specifications, test plans, and source code throughout the development process.

Tip Code review and testing can be invaluable for identifying software vulnerabilities such as buffer overflows, script injection vulnerabilities, memory leaks, and race conditions (see Chapter 10).

Misuse case testing

The opposite of use case testing (in which normal or expected behavior in a system or application is defined and tested), abuse/misuse case testing is the process of performing unintended and malicious actions in a system or application to produce abnormal or unexpected behavior and thereby identify potential vulnerabilities.

After misuse case testing identifies a potential vulnerability, a use case can be developed to define new requirements for eliminating or mitigating similar vulnerabilities in other programs and applications.

A common technique used in misuse case testing is fuzzing, which involves the use of automated tools that can produce dozens (or hundreds, or even more) of combinations of input strings to be fed to a program’s data input fields to elicit unexpected behavior. Fuzzing is used, for example, in an attempt to attack a program by using script injection, a technique that tricks a program into executing commands in various languages, mainly JavaScript and SQL. Tools such as HP WebInspect, IBM AppScan, Acunetix, and Burp Suite have built-in fuzzing and script injection tools that are pretty good at identifying script injection vulnerabilities in software applications.
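Here’s a toy fuzzing harness in Python to illustrate the idea. The parse_amount function is a hypothetical stand-in for whatever input handler is under test; real fuzzers, including the commercial tools named above, generate far more sophisticated cases and also watch for hangs and memory errors.

```python
# A toy fuzzing harness: feed attack-pattern and randomized inputs to a
# function under test and report any unexpected behavior.
import random
import string

def parse_amount(text: str) -> float:
    # Hypothetical function under test: a naive currency parser.
    return float(text.replace("$", "").replace(",", ""))

attack_patterns = ["' OR '1'='1", "<script>alert(1)</script>", "A" * 10000, ""]
random_cases = ["".join(random.choices(string.printable, k=random.randint(1, 40)))
                for _ in range(100)]

for case in attack_patterns + random_cases:
    try:
        parse_amount(case)
    except ValueError:
        pass  # expected rejection of malformed input
    except Exception as err:
        # Anything else is unexpected behavior worth investigating.
        print(f"Unexpected {type(err).__name__} on input {case[:30]!r}")
```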

WHY WOULD SOMEONE TYPE THAT?

Since time immemorial, while writing programs that interfaced with people, programmers thought of all the valid use cases for input fields. An input field that asked for an amount of currency, for example, would be programmed to accept proper numeric input. The program would expect and accept a numeric value in that field, and process it accordingly. Programmers focused on valid input, and that was that.

Fast-forward to the web with HTML, in which programs with their input fields were exposed to the world. Hackers soon found that interesting things could be typed in input fields to provide interesting results. We know these results today as SQL injection, JavaScript injection, cross-site scripting, cross-site request forgery, and buffer overflow.

These attacks caught nearly the entire programming community off guard. Simply put, a programmer’s perspective was to ask “Why would someone type binary code in a numeric field?” Abuse simply had not occurred to many programmers. Fortunately, today a multitude of code libraries sanitize input to make sure that only proper characters are input, thereby blunting the effects of SQL injection and other attacks. And organizations like the Open Web Application Security Project (OWASP) produce learning content so that programmers can more easily write better, more secure programs that are less susceptible to input field attacks.

Test coverage analysis

Test coverage analysis (also called code coverage analysis) measures the percentage of source code that is tested by a given test or validation suite. Basic coverage criteria typically include

· Branch coverage (every branch at a decision point is executed as TRUE or FALSE, for example)

· Condition (predicate) coverage (each Boolean expression is evaluated as both TRUE and FALSE, for example)

· Function coverage (every function or subroutine is called, for example)

· Statement coverage (every statement is executed at least once, for example)
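The following Python sketch illustrates the difference between statement and branch coverage with a deliberately tiny function. With the coverage.py package, for example, you could run "coverage run -m pytest" followed by "coverage report" to see the gap; the function and tests here are made up for illustration.

```python
# test_overdraw() alone executes every statement (100% statement
# coverage) but never evaluates the condition as FALSE, so branch
# coverage is incomplete; test_normal() adds the FALSE branch.

def withdraw(balance: int, amount: int) -> int:
    if amount > balance:   # branch coverage needs TRUE and FALSE here
        amount = balance   # cap the withdrawal at the balance
    return balance - amount

def test_overdraw():
    assert withdraw(100, 150) == 0    # condition TRUE

def test_normal():
    assert withdraw(100, 60) == 40    # condition FALSE
```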

A security engineer might use a dynamic application security testing (DAST) tool such as AppScan or WebInspect to test a travel booking program to determine whether the program has any exploitable security defects. Tools such as these are powerful, using a variety of methods to “fuzz” input fields in attempts to discover flaws. But these tools must also fill out forms in every conceivable combination so that all the program’s code will be executed. In this example of a travel booking tool, these combinations would involve every way in which flights, hotels, or cars could be searched, queried, examined, and finally booked. In a complex program, this test can be daunting. Highly systematic analysis is needed to make sure that every possible combination of conditions is tested so that all of a program’s code is exercised.

Interface testing

Interface testing focuses on the interface between different systems and components. It ensures that functions (such as data transfer and control between systems or components) perform correctly and as expected. Interface testing also verifies that any execution errors are handled properly and do not expose any potential security vulnerabilities. Examples of interfaces tested include

· Application programming interfaces (APIs)

· Web services

· Transaction processing gateways

· Physical interfaces, such as keypads, keyboard/mouse/display, and device switches and indicators

Tip APIs, web services, and transaction gateways can often be tested with automated tools such as HP WebInspect, IBM AppScan, and Acunetix, which are also used to test the human-input portion of web applications.
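As an illustration, here’s a hedged sketch of automated interface tests for a hypothetical REST API, using Python’s requests library: one test verifies that a well-formed request succeeds, and another verifies that malformed input fails cleanly rather than leaking internal details. The endpoint, payloads, and expected status codes are assumptions.

```python
# Interface tests for a hypothetical REST API: verify correct handling
# of valid input and graceful rejection of malformed input (no 5xx
# error or stack trace that might expose internals).
import requests

BASE = "https://api.example.com/v1"   # hypothetical endpoint

def test_valid_transfer():
    r = requests.post(f"{BASE}/transfers",
                      json={"amount": 100.00, "currency": "USD"}, timeout=10)
    assert r.status_code == 201

def test_malformed_transfer_is_handled():
    r = requests.post(f"{BASE}/transfers",
                      json={"amount": "not-a-number"}, timeout=10)
    # The interface should fail with a client error, not a 500.
    assert r.status_code == 400
    assert "traceback" not in r.text.lower()
```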

Breach attack simulations

Organizations that regularly employ penetration testing in their environments can do still more to understand the effectiveness of their protective safeguards. A breach attack simulation is an attack on an organization that includes

· Penetration testing

· An intrusion objective such as the theft of specific data

· A test of security event monitoring to recognize the attack

· Security incident response

The value of a breach attack simulation comes from exercising not only defensive safeguards, but also detective safeguards and the steps personnel take after recognizing that an attack has occurred.

Compliance checks

In many industries, it’s not enough to be secure; it’s also necessary to be compliant with various laws, standards, and other types of obligations. For IT, security, and privacy-related matters, information security personnel often perform various types of compliance checks to ensure that organizations are doing what is specifically required of them. The Payment Card Industry Data Security Standard (PCI DSS), for example, requires specific safeguards to be implemented regardless of whether they are justified through risk management. HIPAA, NYDFS, and others are similar in that they prescribe specific safeguards that must be tested periodically to ensure that organizations are not only secure, but also compliant with applicable laws and regulations.

Collect Security Process Data

Assessments of security management processes and systems help organizations determine the efficacy of their key processes and controls. Periodic testing of key activities is an important part of management and regulatory oversight, confirming the proper functioning of key processes and identifying improvement areas.

Crossreference This section covers Objective 6.3 of the Security Assessment and Testing domain in the CISSP Exam Outline (May 1, 2021).

Several factors must be considered in determining who will perform this testing, including

· Regulations: Various regulations specify which parties must perform testing, such as qualified internal staff or outside consultants.

· Staff resources and qualifications: Regulations and other conditions permitting, an organization may have adequately skilled and qualified staff members who can perform some or all of its testing.

· Independence: Although an organization may have the resources and expertise to test its management processes, organizations often elect to have a qualified outside organization perform testing. Independent outside testing helps prevent bias.

These factors also determine required testing methods, including the tools used, testing criteria, sampling, and reporting. A U.S. public company, for example, is required to self-evaluate its information security controls in specific ways and against specific auditing standards under the Sarbanes-Oxley (SOX) Act of 2002, also known as the Public Company Accounting Reform and Investor Protection Act.

The types of testing that can be performed include

· Document review: An auditor will examine process or control procedure documentation to get an understanding of the activity and how it is performed.

· Walk-through: An auditor will interview a process or control owner to hear in their own words how a process or control is performed and the nature of any business records that are created. The auditor will also note any variations between what they are told and what is written in process or control procedures.

· Records review: An auditor will examine the business records that are created by the process or control to see whether they are consistent with process or control documentation and what they heard in a walk-through.

· Corroboration: After a walk-through with a process or control owner, an auditor will ask others about the same process or control to see whether there are variations or inconsistencies in the descriptions of the process or control.

· Reperformance: Here, an auditor will follow the steps in process or procedure documentation to see whether they obtain the same results as workers.

Account management

Management must regularly review user and system accounts, as well as related business processes and records, to ensure that user privileges are provisioned and deprovisioned appropriately and with proper approvals. The types of reviews include the following:

· All user account provisioning was properly requested, reviewed, approved, and executed.

· All internal personnel transfers resulted in timely termination of access that was no longer needed.

· All personnel terminations resulted in timely termination of all access.

· All users who hold privileged account access still require it, and their administrative actions are logged.

· All user accounts can be traced back to a proper request, review, and approval.

· All unused user accounts are evaluated to see whether they can be deactivated.

· All users’ access privileges are certified regularly as necessary.
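As a small illustration of one review in the list above, the following Python sketch flags active accounts that have no matching active employee, which are candidates for deprovisioning. The data sets are stand-ins for exports from an HR system and a directory service.

```python
# Orphaned-account check: find active accounts with no corresponding
# active employee (accounts that should have been deprovisioned).
active_employees = {"asmith", "bjones", "cchen"}          # from HR system export
active_accounts = {"asmith", "bjones", "cchen", "dgray"}  # from directory export

orphaned = active_accounts - active_employees
for account in sorted(orphaned):
    print(f"Review account '{account}': no matching active employee")
```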

Account management processes are discussed in more detail in Chapter 9.

Management review and approval

Management provides resources and strategic direction for all aspects of an organization, including its information security program. As a part of its overall governance, management needs to review key aspects of the security program. There is no single way to conduct this review; rather, management reviews the security program in the same style and with the same rigor that it applies to other key activities in the organization. In larger organizations, this review will likely be quite formal, with executive-level reports created periodically for senior management, including key activities, events, and metrics. (Think eye candy here.) In smaller organizations, this review will probably be a lot less formal. In the smallest organizations, as well as organizations with lower security maturity levels, there may be no management review at all.

Management review often includes these activities:

· Review of recent security incidents

· Review of recent and anticipated security-related spending

· Review (and ratification) of recent policy changes

· Review (and ratification) of risk treatment decisions

· Review (and ratification) of major changes to security-related processes and the security-related components of other business processes

· Review of operational- and management-level metrics and risk indicators

The internationally recognized standard ISO/IEC 27001, “Information technology — Security techniques — Information security management systems — Requirements,” requires an organization’s management to determine what activities and elements in the information security program need to be monitored, the methods to be used, and the people or teams that will review them. ISO/IEC 27001 formally defines the structure and activities of an information security management system (ISMS), the set of all high-level activities that constitute a complete information security program.

Key performance and risk indicators

The leaders of an organization’s information security program are generally required to report to upper management some key indicators that depict the health and effectiveness of the security program. These indicators include

· Key performance indicator (KPI): A measurable value that depicts the level of effectiveness or success of a process or procedure

· Key risk indicator (KRI): A measurement of a process or procedure that depicts the level of risk

KPIs and KRIs are meaningful measurements of key activities in an information security program that can be used to help management at every level better understand how well the security program and its components are performing. This process is easier said than done, however, and here are a few reasons why:

· No single set of universal metrics applies to every organization.

· There are different ways to measure performance and risk.

· Executives will want key activities to be measured in specific ways.

· Maturity levels vary from organization to organization.

Organizations typically develop metrics and KRIs for their key security-related activities to ensure that security processes are operating as expected. Metrics help identify improvement areas by alerting management to unexpected trends.

Focus areas for security metrics include the following:

· Vulnerability management: Operational metrics include the numbers of scans performed, numbers of vulnerabilities identified (ranked by severity), and numbers of patches applied. KRIs focus on the coverage of scans and elapsed time between the public release of a vulnerability and the completion of patching.

· Incident response: Operational metrics focus on the numbers and categories of incidents and on whether trends suggest new weaknesses in defenses. KRIs focus on the time required to realize that an incident is in progress (known as dwell time) and the time required to contain and resolve the incident.

· Security awareness training: Operational metrics and KRIs generally focus on the completion rate over time.

· Logging and monitoring: Operational metrics generally focus on the numbers and types of events that occur. KRIs focus on the proportion of assets whose logs are being monitored and the elapsed time between the start of an incident and the time when personnel begin to take action.

KRIs are so called because they are harbingers of information risk in an organization. Although the development of operational metrics is not all that difficult, security managers often struggle with the problem of developing KRIs that make sense to executive management. The vulnerability management process, for example, involves using one or more vulnerability scanning tools and subsequent remediation efforts. In this example, some good operational metrics include the numbers of scans performed, the numbers of vulnerabilities identified, and the time required to remediate identified vulnerabilities. These metrics mean little to executive management because they lack business context, but at least one good KRI can be derived from the same vulnerability management data. “Percentage of servers supporting manufacturing whose critical security defects are not remediated within 10 days,” for example, is a great KRI. This metric directly helps management understand how well the vulnerability management process is performing in a specific business context. It is also a good leading indicator of the risk of a breach that exploits an unpatched, vulnerable server that could affect business operations (manufacturing, in this case).
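Here’s a minimal Python sketch that computes exactly that example KRI from hypothetical scanner output; the server records and dates are made up for illustration.

```python
# Compute the example KRI: the percentage of servers supporting
# manufacturing whose critical security defects are not remediated
# within 10 days.
from datetime import date

TODAY = date(2024, 6, 1)  # stand-in for date.today()
servers = [
    {"name": "mfg-app01", "critical_open_since": date(2024, 5, 1)},
    {"name": "mfg-app02", "critical_open_since": None},  # nothing open
    {"name": "mfg-db01",  "critical_open_since": date(2024, 5, 28)},
]

overdue = [s for s in servers
           if s["critical_open_since"]
           and (TODAY - s["critical_open_since"]).days > 10]

kri = 100 * len(overdue) / len(servers)
print(f"KRI: {kri:.1f}% of manufacturing servers have critical defects "
      f"unremediated for more than 10 days")
```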

Backup verification data

Organizations need to routinely review and test system and data backups, as well as recovery procedures, to ensure that they are accurate, complete, and readable. They also need to regularly test the ability to recover data from backup media to ensure that they can do so in the event of a ransomware attack, hardware malfunction, or disaster event that damages information systems or facilities.
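As one illustration of backup verification, the following Python sketch recomputes SHA-256 checksums of backup files and compares them with a manifest recorded at backup time. The paths and manifest format are assumptions, and checksum verification complements, rather than replaces, periodic test restores.

```python
# Verify that backup files are readable and match the SHA-256
# checksums recorded when the backup was written.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical layout: manifest.txt lines look like "<hex digest>  <filename>"
backup_dir = Path("/backups/2024-06-01")
for line in (backup_dir / "manifest.txt").read_text().splitlines():
    expected, name = line.split(maxsplit=1)
    actual = sha256(backup_dir / name)
    status = "OK" if actual == expected else "CORRUPT"
    print(f"{status}: {name}")
```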

On the surface, this process seems easy enough. But as they say, the devil’s in the details. Several gotchas and considerations exist, including the following:

· Data recovery versus disaster recovery: There are two main reasons for backing up data:

· Data recovery: When various circumstances require the recovery of data from a past state

· Disaster recovery: When an event has resulted in damage to primary processing systems, necessitating recovery of data to alternative processing systems

For data recovery, you want your backup media (in whatever form) to be logically and physically near your production systems so that the logistics of data recovery are simple. Disaster recovery, however, requires backup media to be far from the primary processing site so that it is not involved in the same natural disaster. These two processes are at odds. Organizations sometimes solve this dilemma by creating two sets of backup media, one that stays in the primary processing center, and one that is stored at a secure offsite facility.

· Data integrity: To respond to requests to roll back data to an earlier date and time, it is vital to know exactly what data needs to be recovered. Database management systems enforce a rule known as referential integrity, which means that a database cannot be recovered to a state in which relationships between indexes, tables, and foreign keys would be broken. This issue often comes into play in large distributed systems with multiple databases on different servers, sometimes owned by different organizations.

· Version control: To respond to requests to recover data to an earlier state, personnel also need to be mindful of all changes to programs and database design that are dependent on one another. Rolling data back to a point in time last week, for example, may also require rolling back the associated computer programs if last week’s changes involved both code and data. Further, rolling back to an earlier point in time could involve other components, such as run-time libraries, subsystems such as Java, and even operating system versions and patches.

· Staging environments: Depending on the reason for recovering data from a point in time in the past, it may be appropriate to recover data in a separate environment. If certain transactions in an e-commerce environment were lost, it may make sense to recover data, including the lost transactions, to a test server so that those transactions can be found. If older data was recovered to the primary production environment, transactions from that time up to the present would effectively be wiped out.

Organizations have several choices for backup media, including

· Magnetic tape: For decades, organizations have used various forms of magnetic tape (magtape), which is reliable and can last for years when properly stored.

· Optical disc: Media such as CD-ROM and DVD-ROM held promise as the heirs apparent to magtape, as they are impervious to magnetic fields and are thought to last longer.

· Virtual tape library (VTL): A VTL is a disk-based storage library that simulates a tape library. The advantage of VTL is its write and read speed, which is far higher than that of magtape or optical disc.

· Redundant storage system: This second storage system is usually located tens or hundreds of miles from the primary storage system. Data is copied to a redundant storage system during backups or during real-time replication.

· Electronic vaulting: Electronic vaulting (e-vaulting) consists of using a cloud-based data storage repository that functions as a data recovery or disaster recovery archive.

Training and awareness

Organizations need to measure participation in and effectiveness of security training and awareness programs to ensure that people at all levels of the organization understand how to respond to new and evolving threats and vulnerabilities.

Key characteristics for examining training and awareness programs include

· Relevance: Is the training content relevant to the workforce and the organization?

· Methods of delivery: In what ways is security awareness knowledge imparted to the workforce? Using a variety of methods is more effective than using one method.

· Specialized content: Do technical workers such as system administrators and DBAs receive additional training that is relevant to their responsibilities?

· Competency testing: Does security awareness training include any competency testing to see whether workers are learning from the lessons?

Security awareness training is discussed in more detail in Chapter 3.

Disaster recovery and business continuity

Disaster recovery (DR) and business continuity (BC) planning enable an organization to be more resilient, even when disrupting events occur. Business continuity planning ensures that critical business processes will continue operating, whereas disaster recovery planning ensures the restoration of critical assets. Organizations need to periodically review and test their disaster recovery and business continuity plans to determine whether recovery plans are up to date and will result in the successful continuation of critical business processes in the event of a disaster.

Techniques used for testing DR and BC include

· Document review: DR and BC documents can be examined for completeness, relevance, and date of last review and update.

· Test-report review: DR and BC test reports can be examined to see how well recent tests went, whether improvement opportunities were identified, and whether those improvements have been made.

· Plan testing: Various types of BC and DR tests can be performed, including tabletop testing, parallel testing, and cutover testing. Tabletop testing is a group walk-through of recovery procedures; parallel and cutover testing involve the use of primary and recovery systems and procedures.

DR and BC plan development and testing are discussed in detail in Chapters 3 and 9.

Tip Information security continuous monitoring (ISCM) is defined in NIST SP 800-137 as “maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” An ISCM strategy helps the organization systematically maintain an effective security management program in a dynamic environment.

Analyze Test Output and Generate Reports

Various systems and tools are capable of producing volumes of log and testing data. Without proper analysis and interpretation, these reports are useless or may be used out of context. Security professionals must be able to analyze log and test data, and to report this information in meaningful ways, so that senior management can understand organizational risks and make informed security decisions.

Crossreference This section covers Objective 6.4 of the Security Assessment and Testing domain in the CISSP Exam Outline (May 1, 2021).

Often, this process requires developing test output and reports for different audiences with information in a form that is useful to them. The output of a vulnerability scan report, with its lists of IP addresses, DNS names, and vulnerabilities with their respective Common Vulnerabilities and Exposures (CVE) identifiers, would be useful to system engineers and network engineers, who would use such reports as lists of individual defects to be fixed. But give that report to a senior executive, and they’ll have little idea what it’s about or what it means in business terms. For senior executives, vulnerability scan data would be rolled up into meaningful business metrics and KRIs to inform senior management of any appreciable changes in risk levels.

The key for information security professionals is knowing the meaning of data and transforming it for various purposes and different audiences. Security professionals who perform this task well are better able to obtain funding for additional tools and staff because they’re able to state the need for resources in business terms.

Remediation

It is said in our industry that the real work begins when the assessment is completed. In other words, assessments of systems and processes are likely to find problems and improvement areas, and we’re duty-bound to fix those problems. Management knows the game: They’re going to ask about remediation, and we’d better have an answer. Take the following steps upon receipt of a security assessment:

1. Validate.

The important first step to take when you receive a final assessment report is to validate the findings. This step is especially important when an outside firm performed the assessment; that firm doesn’t know the organization as you do, and its understanding of the matter is likely to be incomplete. Validation is a sanity check to confirm the validity of the findings.

2. Prioritize.

Initial prioritization is needed to give the organization an idea of which assessment findings will receive early attention and which ones can wait. This initial prioritization is done at face value before the details are known, and sometimes, priorities will change. Priorities may be based on risk level, visibility, the perception of quick wins, or a combination of all three.

3. Identify a remediation owner.

When the context of the issue is understood, management assigns the task of remediation to a remediation owner. In some cases, the remediation owner performs the remediation; in other cases, the owner manages or supervises those who will do the work. Either way, one person is responsible for seeing remediation through.

4. Develop a work plan.

Subject-matter experts get down to business, developing a detailed work plan to change the process or system so as to resolve the issue found in the assessment. The work plan tells management how much effort (and what kinds of effort), cost, and time are required to remediate the finding. Depending on the nature of the finding, the work plan could be one line in a project plan or a thousand lines.

5. Reprioritize.

When the work plan has been created, you’ll know whether the cost, effort, and time are within initial estimates. Sometimes, remediation is easier than you initially thought, and sometimes, it’s harder. Reprioritizing the effort can be the right thing to do.

6. Remediate.

This step is the work itself that resolves the issue. Whether remediation means making changes in a business process, an information system, or both, the changes are intended to resolve the issue that was identified in the assessment. In some cases, an assessment highlights the absence of something; remediation then consists of building the process or the system (or both) and putting it in place.

7. Close the issue.

When the remediation owner has completed the remediation effort, the work is confirmed, and the issue is marked as closed and completed.

Management is likely to request a periodic report on remediation progress. Whoever produces this report will need to stay in contact with remediation owners to maintain an up-to-date status on all issues, providing management an accurate depiction of progress.
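A simple remediation register can support that periodic report. The following Python sketch is illustrative only; the fields, statuses, and findings are made up.

```python
# A minimal remediation register and the status summary that feeds a
# periodic progress report to management.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    priority: str   # e.g., "high", "medium", "low"
    owner: str      # the remediation owner
    status: str     # "validated", "planned", "in_progress", "closed"

register = [
    Finding("Unpatched VPN appliance", "high", "netops", "in_progress"),
    Finding("No log review procedure", "medium", "secops", "planned"),
    Finding("Weak password policy", "high", "it", "closed"),
]

counts = Counter(f.status for f in register)
print("Remediation progress:", dict(counts))
for f in register:
    if f.status != "closed":
        print(f"- [{f.priority}] {f.title} (owner: {f.owner}, {f.status})")
```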

Exception handling

Virtually all organizations have cases in which it’s infeasible (or impossible) to comply with every policy, control, and standard in every business process and information system. An account reconciliation procedure may require a manager to sign off on the process, for example, but the small size of the department may mean that there is no one to do the sign-off. Or an information system may be unable to enforce password complexity standards.

When these situations arise, the methodical approach is as follows:

1. Analyze the situation.

Study the situation to validate the assertion and explore reasonable options.

2. Analyze risk.

Study the risk levels of various options.

3. Get approval.

The exception can be approved or denied. In case of a denial, a different approach is generally prescribed.

4. Record the exception.

Enter the request, its analysis, and final disposition into an exception register.

Exception approvals should be time-bound, not perpetual. We suggest that exceptions be granted for no more than one year, after which time the matter will be reopened and reconsidered. Much may have changed in that year to put the risks in a new light.
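The following Python sketch illustrates a time-bound exception register that flags entries approaching expiry for re-review. The entries and the 30-day review window are assumptions.

```python
# A time-bound exception register: each approved exception carries an
# expiry date (no more than one year out, per the guidance above), and
# this check flags entries due for re-review.
from datetime import date

exceptions = [
    {"id": "EX-101", "summary": "Legacy app cannot enforce password complexity",
     "approved": date(2023, 7, 1), "expires": date(2024, 7, 1)},
    {"id": "EX-102", "summary": "No manager sign-off in two-person department",
     "approved": date(2024, 1, 15), "expires": date(2025, 1, 15)},
]

today = date(2024, 6, 1)  # stand-in for date.today()
for ex in exceptions:
    days_left = (ex["expires"] - today).days
    if days_left <= 30:
        print(f"{ex['id']} expires in {days_left} days; reopen and reconsider")
```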

Ethical disclosure

Now and again, personnel who conduct security assessments encounter wrongdoing. The nature of the wrongdoing may range from incompetence to intentional malice, and it may be difficult for the assessor to know the difference. Discerning intent can be especially difficult when the assessor is an employee of the firm that was hired to perform the assessment. Further, it can be difficult to know who else may be involved in the wrongdoing — who ordered it, who knows about it, and who is covering it up.

In situations like these, assessors have three options:

· Include the finding in the report. Although including the finding is ethically sound, management (if involved) may deliberately bury the finding, resulting in no change in the situation. The assessor may not be invited back to perform subsequent assessments now that the misbehavior has been found.

· Notify law enforcement. If an actual crime is being committed, auditors may be compelled to notify law enforcement. This approach may backfire, however, as the law enforcement organization’s caseload may result in unwillingness to pursue the case.

· Notify the board of directors. Bypassing management (who may be complicit in the misbehavior) and going directly to the board may be a viable option. Board members have a fiduciary responsibility and are legally bound to act on a notification of wrongdoing.

Assessors in these situations need to document their findings carefully, as those findings are likely to be challenged, considered, or even submitted as evidence in legal proceedings.

Tip Wrongdoing discovered within an information system, particularly when a crime has been committed, may require forensic analysis and a chain of custody.

Conduct or Facilitate Security Audits

Auditing is the process of examining systems and/or business processes to ensure that they’ve been designed properly, are being used properly, and are considered to be effective. Audits are frequently performed by an independent third party or an independent group within an organization, which helps ensure that the audit results are accurate and not biased due to organizational politics or other circumstances.

Crossreference This section covers Objective 6.5 of the Security Assessment and Testing domain in the CISSP Exam Outline (May 1, 2021).

Audits are frequently performed to ensure that an organization is in compliance with business or security policies and with other requirements to which the business may be subject. These policies and requirements can include laws and regulations, legal contracts, industry or trade group standards, and best practices.

The major factors in play for internal and external audits include

· Purpose and scope: The reason for an internal or external audit, and the scope of the audit, need to be fully understood by both management in the audited organization and those who will be performing the audit. Scope may include one or more of the following factors:

· Organization business units and departments

· Geographic locations

· Business processes, systems, and networks

· Time periods

· Applicable standards or regulations: Often, an audit is performed under the auspices of a law, regulation, or standard, which determines such matters as who may perform the audit, auditor qualifications, the type and scope of the audit, and the obligations of the audited organization at the conclusion.

· Qualifications of auditors: The personnel who perform audits may be required to have specific work experience, possess specific training and/or certifications, or work in certain types of firms.

· Types of auditing: An audit comprises several activities, including

· Observation: Auditors passively observe activities performed by personnel and/or information systems.

· Inquiry: Auditors ask questions of control or process owners to understand how key activities are performed.

· Inspection: Auditors inspect documents, records, and systems to verify that key controls or processes are operating properly.

· Reperformance: Auditors perform tasks or transactions on their own to see whether the results are correct.

· Sampling: The process of selecting items from a large population is known as sampling. Regulations and standards often specify the types and rates of sampling that are required for an audit.

· Management response: In some types of audits, management in the auditee organization is permitted to write a statement in response to an auditor’s findings. Management response may range from “We already fixed it” to “We will fix it” to “We don’t think this is an issue.”

There are three main contexts for audits of information systems and related processes:

· Internal audit: Personnel in the organization conduct an audit on selected information systems and/or business processes.

· External audit: Auditors from an outside firm conduct an audit on one or more information systems and/or business processes.

· Third-party audit: Auditors, internal or external, perform an audit of a third-party service provider that is performing services on behalf of the organization. An organization may outsource a part of its software development to another company, for example. From time to time, the organization audits the software development company to ensure that its business processes and information systems are in compliance with applicable regulations and business requirements. Alternatively, the third party may hire its own audit firm and make audit reports available to its customers. This approach is common among service providers. SOC 1, SOC 2, and SOC 3 audits are often used for this purpose.

Business-critical systems need to be subject to regular audits as dictated by regulatory, contractual, or trade group requirements.

Warning For organizations that are subject to regulatory requirements, such as Sarbanes-Oxley (discussed in Chapter 3), it’s all too easy and far too common to make the mistake of focusing on audits and compliance rather than on implementing a truly effective and comprehensive security strategy. Compliance does not equal security. Compliance isn’t optional, but neither is security. Don’t assume that achieving compliance will automatically achieve effective security (or vice versa). Fortunately, security and compliance aren’t mutually exclusive, but you need to ensure that your efforts truly achieve both objectives.
