Chapter 10

Software Development Security

IN THIS CHAPTER

· Understanding and integrating security into the software development life cycle

· Identifying and applying security controls in software development ecosystems

· Assessing the effectiveness of software security

· Assessing the security impact of acquired software

· Defining and applying secure coding guidelines and standards

You must understand the principles of software security controls, software development, and software vulnerabilities. Software and data are the foundation of information processing; software can’t exist apart from software development. Understanding the software development process is essential for creating and maintaining appropriate, reliable, and secure software. This domain represents 10 percent of the CISSP certification exam.

Understand and Integrate Security in the Software Development Life Cycle

The software development life cycle (SDLC, also known as the systems development life cycle and the software development methodology) refers to all the steps required to develop software and systems from conception through implementation, support, and (ultimately) retirement. In other words, the entire life of software and systems, from birth to death, and everything in between (like adolescence, going off to college, getting married, and retirement)!

Crossreference This section covers Objective 8.1 of the Software Development Security domain in the CISSP Exam Outline (May 1, 2021).

The life cycle is a development process designed to achieve two objectives: software and systems that perform their intended functions correctly and securely, and a development or integration project that’s completed on time and on budget.

Tip As we point out numerous times in this chapter, the term software development life cycle is giving way to systems development life cycle because the process applies to more than just software development; it also broadly applies to systems development, which can include networks, servers, database management systems, and more.

Development methodologies

Popular development methodologies include waterfall, Agile, DevOps, and DevSecOps, as discussed in the following sections.

Agile

Agile development involves a more iterative, less formal approach to software and systems development than more traditional methodologies, such as waterfall (discussed in the next section). As its name implies, Agile development focuses on speed in support of rapidly, and often constantly, evolving business requirements.

The Manifesto for Agile Software Development (https://agilemanifesto.org/) describes the underlying philosophy of Agile development as follows:

· Individuals and interactions over processes and tools

· Working software over comprehensive documentation

· Customer collaboration over contract negotiation

· Responding to change over following a plan

The manifesto doesn’t disregard the importance of the items on the right (such as processes and tools), but it values the items on the left more.

Specific implementations of Agile development take many forms. One common approach is the Scrum methodology. Typical activities and artifacts in this methodology include

· Product backlog: A prioritized list of customer requirements, commonly known as user stories, maintained by the product owner, a business or customer representative who communicates with the Scrum team on behalf of the project stakeholders.

· User stories: Formal requirements written as brief, customer-centric descriptions of the desired feature or function. User stories usually take the form “As a [role], I want to [feature/function] so that I can [purpose].” An example would be “As a customer service representative, I want to be able to view full credit card information so that I can process customer refunds.”

Warning The user story in the preceding example should be raising all sorts of red flags and sounding alarms in your head! It illustrates why security professionals need to be involved in the development process, particularly when Agile development methods are used; requirements are developed on the fly and may not be well thought out or part of a well-documented, comprehensive security strategy. The user in this example may simply be trying to perform a legitimate job function and may have limited understanding of the potential security risks that this request introduces. If the developer is not security-focused and doesn’t challenge the requirement, the feature may be delivered as requested. In the developer’s mind, a feature was rapidly developed as requested and delivered to the customer error-free. Still, major security risks may have been unintentionally and unwittingly made an inherent part of the software! Someone in security (maybe you!) needs to attend development meetings to make sure that risky features aren’t being developed.

· Sprint planning: During sprint planning, the entire team meets for two hours to select the product backlog items that team members believe they can deliver during the upcoming sprint (also known as an iteration) — typically, a two-week time-boxed cycle. During the next two hours of the sprint planning meeting, the development team breaks the product backlog items selected during the first two hours into discrete tasks and plans the work that will be required during the sprint (including who will do what).

· Daily standup: Team members hold a daily 15-minute standup meeting (called a scrum) throughout the two-week sprint, and each team member answers the following three questions:

· What did I accomplish yesterday?

· What will I accomplish today?

· What obstacles or issues exist that may prevent me from meeting the sprint goal?

The daily standup is run by the scrum master, who is responsible for tracking and reporting the sprint’s progress and resolving any obstacles or issues identified during the daily standup.

· Sprint review and retrospective: At the end of each two-week sprint, the team holds a sprint review meeting (typically, for two hours) with the product owner and stakeholders to present (or demonstrate) the work that was completed during the sprint and review any work that was planned but not completed during the sprint.

The sprint retrospective typically is a 90-minute meeting. The team identifies what went well during the sprint and what can be improved in the next sprint.

Warning The preceding process is a very high-level overview of one possible Scrum methodology. There are as many iterations of Agile software development methods as there are iterations of software development. For a more complete discussion of the Agile and Scrum methodologies, we recommend Agile Project Management For Dummies, by Mark Layton, and Scrum For Dummies, 2nd edition, by Mark Layton and David Morrow (both John Wiley & Sons, Inc.). Another thing you can do is perform an Internet search for “pigs and chickens” to learn about the folklore behind the Scrum methodology. You’ll probably find it interesting. Make sure that you find the accompanying joke about the pig and the chicken who discussed opening a restaurant together.

Security concerns to be addressed within any Agile development process can include a lack of formal documentation or comprehensive planning. In more traditional development approaches, such as waterfall, extensive up-front planning is done before any actual development work begins. This planning can include creating formal test acceptance criteria, security standards, design and interface specifications, detailed frameworks and modeling, and certification and accreditation requirements. The general lack of such formal documentation and planning in the Agile methodology isn’t a security issue in itself, but it means that security needs to be front of mind for everyone involved in the Agile development process throughout the project’s life cycle.

Waterfall

In the waterfall model of software (or system) development, the stages of the life cycle progress one into the next, like a series of waterfalls.

The stages are performed sequentially, one at a time. Typically, these stages consist of the following:

· Conceptual definition: A high-level description of the software or system deliverable. This description generally contains no details; it’s the sort of description you want to give the business and finance people (the folks who fund your projects and keep you employed). You don’t want to scare them with details, and they probably wouldn’t understand the details anyway.

· Functional requirements: The required characteristics of the software or system deliverable (basically, a list). Rather than being a design, the functional requirements are a collection of things that the software or system must do. Although functional requirements don’t give you design-level material, this description contains more details than the conceptual definition. Functional requirements usually include a test plan, a detailed list of software or system functions and features that must be tested. The test plan describes how each test should be performed and the expected results. Generally, you have at least one test in the test plan for each requirement in the functional requirements. Functional requirements also must contain expected security requirements for the software or system.

· Nonfunctional requirements: The required characteristics of the software or system, thought of as “under the covers” properties. Examples include encryption algorithms, architecture, capacity, throughput, resilience, and portability.

· Functional specifications: The software development department’s version of functional requirements. Rather than being a list of have-to-have and nice-to-have items, the functional specification is more a statement of what the developers believe they can actually build. (To this point, the MoSCoW prioritization method can be used to rank requirements: M for Must have, S for Should have, C for Could have, and W for Won’t have.) Functional specifications aren’t quite a design, but a list of characteristics that the developers and engineers think they can create in the real world. From a security perspective, the functional specifications for an operating system or application should contain all the details about authentication, authorization, access control, confidentiality, transaction auditing, integrity, and availability.

· Design: The process of developing the highest-detail designs. In the application software world, design includes entity-relationship diagrams, data-flow diagrams, database schemas, and over-the-wire protocols. For networks, this stage includes the design of local area networks (LANs), wide area networks (WANs), subnets, and the devices that tie them together and provide needed security.

· Design review: The last step in the design process, in which a group of experts (some on the design team and some not) examines the detailed designs. Members who are not on the design team give the design a set of fresh eyes and a chance to catch a design flaw or two.

· Coding: The phase that software developers and engineers yearn for. Most software developers would prefer to skip all the preceding steps and start coding right away, even before the formal requirements are known! It’s scary to think about how much of the world’s software was created with coding as the first activity. (Would you fly in an airplane that the machinists built before the designers produced engineering drawings? Didn’t think so.) Coding and systems development usually include unit testing, which is the process of verifying all the modules and other pieces built in this phase.

· Code review: The phase in which developers examine one another’s program code and get into philosophical arguments about levels of indenting and the correct use of curly braces. Seriously, though, engineers can discover mistakes during code review that would cost you a lot of money if you had to fix them later in the implementation process or in maintenance mode. Several good static and dynamic code analysis tools can automatically identify security vulnerabilities and other errors in software code. Many organizations use these tools to ferret out programming errors that would otherwise result in vulnerabilities that attackers might exploit. Code review is covered in more detail in Chapter 8.

· Configuration review: A phase in systems development, such as operating systems and networks, that involves performing system or device configuration checks and similar activities. This important step helps verify that individual components were built properly and saves time in the long run: errors caught at this stage mean that subsequent steps will go more smoothly, and errors found in subsequent steps will be easier to troubleshoot because the configuration of individual components has already been verified.

· Unit test: When portions of an application or other system have been developed, testing the pieces separately is often possible. This is called unit testing. Unit testing allows a developer, engineer, or tester to verify the correct functioning of individual modules in an application or system. Unit testing is usually done during coding and other component development. It doesn’t always show up as a separate step in process diagrams.

· Integration test: As software modules and components are developed and unit tested, they can next be tested as a group in an integration test, which ensures that various modules and components communicate with each other correctly.

· System test: A system test occurs when all the components of the entire system have been assembled, and the entire system is tested from end to end. The test plan that was developed in the functional requirements step is carried out here. The system test includes testing all the system’s security functions, of course, because the program’s designers included those functions in the test plan. (Right?) You can find some great tools to rigorously test for vulnerabilities in software applications, as well as operating systems, database management systems, network devices, and other things. Many organizations consider it necessary to use such tools in system tests to ensure that the system has no exploitable vulnerabilities.

· Certification and accreditation: Certification is the formal evaluation of the application or system: Every intended feature performs as planned, and the system is declared fully functional. Accreditation means that the powers that be have said it’s okay to put the system into production by issuing an Authority to Operate (ATO). An ATO could mean offer it for sale, build it and ship it, or whatever “put into production” means in your organization.

· Implementation: The phase when all testing and required certifications and accreditations are completed and the software can be released to production. This phase usually involves a formal mechanism whereby the software developers create a release package for operations. The release package contains the new software and any instructions for operations personnel so that they know how to implement it and verify that it was implemented correctly. An implementation plan usually includes backout instructions to revert the software (and any other changes) to its pre-change state.

· Maintenance: The maintenance phase is a system’s “golden years,” at least until customers start putting in change requests because … well, because that’s what people do! Change management and configuration management are the processes used to control (and document) all changes to the software or system over its lifetime. Change and configuration management are discussed later in this chapter.

Warning You need good documentation, in the form of those original specification and design documents, because the developers who wrote this software or built the system have probably moved on to some other cool project or even another organization, and new people are left to maintain it.
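The unit-test step described in the list above can be sketched in code. The following is a minimal, hypothetical Python example: a small business function (invented for illustration) and a unit test for it, written with the standard unittest module. Note that the tests verify not only correct results but also that invalid input is rejected, which is where many security defects hide.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount.

    Invalid input raises ValueError, so bad data is caught at the
    unit level rather than discovered in production.
    """
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_rejects_invalid_percent(self):
        # A security-minded unit test verifies that bad input is refused.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each module in an application would carry its own suite of such tests, run automatically during the coding phase and again during integration and system testing.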

DevOps

DevOps is a life cycle software development methodology that can be thought of as a merger of software development and operations. DevOps was inspired by the Agile methodology and the “Plan – Do – Check – Act” Deming Cycle.

CIS SYSTEM AND DEVICE HARDENING STANDARDS

The systems development life cycle generally has to do with the design and development of information systems, which may include many components, including server operating systems, network devices, database management systems, embedded systems, and other components. Similar to OWASP (described later in this chapter for software developers), many good system and device hardening standards are available. Of particular note is the vast collection of hardening standards from the Center for Internet Security. No, we’re not talking about criminal investigators, but an organization dedicated to developing high-quality documents that provide detailed, step-by-step instructions on how to build and configure a system that will be highly resistant to attack. Best of all, these standards are free. You can find out more at https://www.cisecurity.org.

DevOps is a popular trend that represents the fusion of development and operations. It extends Agile development practices to the entire IT organization. Perhaps it’s not as exciting as an Asian–Italian fusion restaurant that serves a gourmet sushi calzone, but hey, this is software and systems development, not fine dining! (Sorry.)

DevOps aims to improve communication and collaboration between software/systems developers and IT operations teams to facilitate the rapid, efficient deployment of software and infrastructure.

As with Agile development methodologies, however, inherent security challenges must be addressed in a DevOps environment. Traditional IT organizations and security best practices have maintained strict separation between development and production environments. Although these distinctions remain in a DevOps environment, they are a little less absolute, which can introduce additional risks to the production environment. These risks must be adequately addressed with proper controls and accountability throughout the IT organization.

To learn more about DevOps, pick up a copy of either The Phoenix Project (IT Revolution Press) or The Visible Ops Handbook (Information Technology Process Institute), written by Kevin Behr, Gene Kim, and George Spafford. These books are considered to be must-reads in many IT organizations.

The steps in the DevOps life cycle are

· Dev: The steps in the development portion of DevOps are

1. Plan: This step potentially contains several substeps, including the development of functional requirements, nonfunctional requirements, design, and design review.

2. Code: In this step, developers write or update application code. Potentially, this step also includes developing or updating infrastructure configuration. The concept of infrastructure as code embodies the idea that application developers create software application source code and the configuration of the underlying database management systems, operating systems, and even network infrastructure.

3. Build: This step relies on automation in the form of a build system that compiles and integrates code and infrastructure developed in the code step.

4. Test: Various types of testing are performed, including security testing, regression testing, performance testing, and user acceptance testing.

5. Release: In this step, the software release package is built so that the software (and potentially infrastructure) can be deployed in the next step.

· Ops: The steps in the operations portion of DevOps are

1. Deploy: The updated software and infrastructure are moved into production. Sometimes, this step includes running utility programs that make one-time changes to database management systems when the underlying data model is being changed (such as adding new tables or new fields to a table).

2. Operate: The system is in production operation, including users who use the system and all manual and automated tasks performed by users and IT operations personnel.

3. Monitor: Various monitoring tools are used to observe the running application to ensure that the underlying hardware is functioning correctly, sufficient resources are available, and the application is running correctly.

As with the Agile methodology, an organization can have several concurrent sprints in play and more in the pipeline.
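The infrastructure-as-code idea mentioned in the Code step can be illustrated with a toy sketch: the desired configuration of a server is declared as data in version-controlled source, and a function reports any drift in the (simulated) live configuration. All settings and names below are invented for illustration; real infrastructure-as-code tools such as Terraform, Ansible, and Pulumi apply the same principle at much greater scale.

```python
# Toy illustration of infrastructure as code: the desired state of a
# server lives in version-controlled source, and tooling compares the
# live configuration against it. (All names here are invented.)

DESIRED_STATE = {
    "os_hardening": {"ssh_root_login": False, "password_auth": False},
    "open_ports": [22, 443],
    "tls_min_version": "1.2",
}

def diff_config(live: dict, desired: dict) -> list[str]:
    """Return a list of drift findings: settings where the live
    configuration does not match the declared desired state."""
    findings = []
    for key, want in desired.items():
        have = live.get(key)
        if have != want:
            findings.append(f"{key}: expected {want!r}, found {have!r}")
    return findings

# Simulated live configuration with one insecure deviation.
live_config = {
    "os_hardening": {"ssh_root_login": True, "password_auth": False},
    "open_ports": [22, 443],
    "tls_min_version": "1.2",
}

drift = diff_config(live_config, DESIRED_STATE)
```

Because the desired state lives in source control, changes to infrastructure go through the same code review, build, and test steps as application code.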

DevOps is often depicted in a “figure 8” life cycle, as shown in Figure 10-1.

Schematic illustration of the DevOps life cycle process.

© John Wiley & Sons, Inc.

FIGURE 10-1: The DevOps life cycle process.

DevSecOps

DevSecOps is a life cycle development process much like DevOps, with the intentional inclusion of security in several steps of the DevOps cycle. The security methods used in the DevOps cycle include

· Planning: Requirements include applicable security requirements dictated by policies and controls, which themselves are aligned with applicable laws, regulations, and standards.

· Coding: Developers are trained in safe coding, and their integrated development environments (IDEs) include tools used to detect and alert developers to security defects in the source code they are building or updating.

· Testing: Testing the application and underlying infrastructure includes static application security testing (SAST) and dynamic application security testing (DAST), which are integrated into the build environment and operate automatically each time a new change is introduced into the environment.

· Operation: Operations includes security-related activities such as data replication and backup.

· Monitoring: Monitoring the environment includes performance monitoring and security monitoring performed by a security operations center (SOC).

DevSecOps embodies the concept of Shift Security Left, which means including security earlier in the development cycle, to the left on the arrow of time that represents the linear steps of the development cycle. Shift security left is depicted in Figure 10-2.
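The kind of source-code defect that SAST tools and security-minded code review are meant to catch early can be shown with a classic example: SQL built by string concatenation versus a parameterized query. This sketch uses Python’s standard sqlite3 module; the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # VULNERABLE: user input is concatenated into the SQL statement.
    # A SAST tool would flag this pattern as potential SQL injection.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # SAFE: a parameterized query keeps data separate from code.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injected input returns every row from the unsafe version ...
leaked = find_user_unsafe("x' OR '1'='1")
# ... but matches nothing when passed as a proper parameter.
safe = find_user_safe("x' OR '1'='1")
```

A SAST tool scans the source for patterns like the concatenated query; a DAST tool would find the same flaw from the outside by submitting the injection string to the running application.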

Schematic illustration of the concept of Shift Security Left.

© John Wiley & Sons, Inc.

FIGURE 10-2: The concept of Shift Security Left.

Maturity models

Organizations that need to understand and improve the quality of their software and systems development processes and practices can benchmark their SDLC by measuring its maturity. Models are available for measuring software and systems development maturity, including the following:

· Capability Maturity Model Integration (CMMI): By far the most popular model for measuring software development maturity, the CMMI is required by many U.S. government agencies and contractors. The model defines five levels of maturity:

· Initial: Processes are chaotic and unpredictable, poorly controlled, and reactive.

· Managed: Processes are characterized for projects but are still reactive.

· Defined: Processes are defined (written down) and more proactive.

· Quantitatively managed: Processes are defined and measured.

· Optimized: Processes are measured and improved.

Information about the CMMI is available at https://www.isaca.org.

· Software Assurance Maturity Model (SAMM): This model is an open framework geared to organizations that want to ensure that development projects include security features. More information about SAMM is available at https://owaspsamm.org.

· Building Security in Maturity Model (BSIMM): This model is used to measure the extent to which security is included in software development processes. This model has four domains:

· Governance

· Intelligence

· Secure software development life cycle touchpoints

· Deployment

Information is available at https://www.bsimm.com.

· Agile Maturity Model (AMM): This software process improvement framework is for organizations that use Agile software development processes. More information about AMM is available at www.researchgate.net/publication/45227382_Agile_Maturity_Model_(AMM)_A_Software_Process_Improvement_framework_for_Agile_Software_Development_Practices.

Organizations can perform self-assessments or employ outside experts to measure their development maturity. Some organizations opt to use outside experts as a way to instill confidence in customers.

Operation and maintenance

Software and systems that have been released to operations become part of IT operations and its processes. Several operational aspects come into play, including the following:

· Access management: If the application or system uses its own user access management, the person or team that fulfills access requests will do so for the application.

· Event management: The application or system will be writing entries to one or more audit logs or audit logging systems. Personnel will review these logs, or (better) these logs will be tied to a security information and event management (SIEM) system to notify personnel of actionable events.

· Vulnerability management: Periodically, personnel test the application or system to see whether it contains security defects that could lead to a security breach. The types of tests that may be employed include security scans, vulnerability assessments, and penetration tests. For software applications, tests could also include static and dynamic code reviews.

· Performance and capacity management: The application or system may be writing performance-related entries in a logging system, or external tools may measure the response time of key system functions. This phase helps ensure that the system is healthy, usable, and not consuming excessive or inappropriate resources.

· Audits: To the extent that an application or system is in scope for security or privacy audits, operational aspects of an application or system are examined by internal or external auditors to ensure that the application or system is managed properly and operating correctly. This topic is expanded later in this chapter.

From the time a software application or system is placed into production, development continues, but typically at a slower pace. During this phase, additional development tasks may be needed, such as

· Minor feature updates

· Bug fixes

· Security patching and updating

· Custom modifications

Finally, at the end of a system’s service life, the system is decommissioned, which typically involves one of three outcomes:

· Migration to a replacement system: Data in the old system may be migrated to a replacement system to preserve business records so that transaction history during the era of the old system may be viewed in its replacement.

· Coexistence with a replacement system: The old system may be modified to operate in read-only mode, permitting users to view data and records in the old system. Organizations that take this path keep an old system for a few months to a year or longer. This option usually is chosen when the cost of migrating data to the new system exceeds the cost of keeping the old system running.

· Shutdown: In some instances, an organization discontinues use of the system. The business records may be archived for long-term storage if requirements or regulations dictate doing so.

Tip The operations and maintenance activities described in this section may be part of an organization’s DevOps processes. We discuss this topic later in this chapter.

Change management

Change management is the formal business process that ensures all changes made to a system receive formal review and approval from all stakeholders before implementation. Change management gives everyone a chance to voice their opinions and concerns about any proposed change so that the change goes as smoothly as possible, with no surprises or interruptions in service.

Change management is discussed in greater detail in Chapter 9.

Remember The process of approving modifications to a production environment is called change management.

Warning Don’t confuse the concept of change management with configuration management (discussed later in this chapter). The two concepts are distinctly different.

CSSLP CERTIFICATION

The (ISC)2 certification Certified Secure Software Lifecycle Professional (CSSLP) recognizes the competency of software development professionals in incorporating security into every phase of the software development life cycle (SDLC) — not as an add-on, as it has been for many years. You can find out more about the CSSLP certification at https://www.isc2.org/Certifications/CSSLP.

Integrated product team

In the context of software development, an integrated product team (IPT) is a multidisciplinary group of people whose mission is to develop and operate an information system. An IPT attempts to remove barriers between developers, operations, and users, who are often isolated in organizations.

A development organization that implements IPT will have not just developers, but also operations staff and end users on a system development team. This alignment is intended to develop and reinforce synergies among these (and other) groups, ensuring better outcomes in the form of systems that better meet users’ needs and that can be effectively operated and monitored.

Identify and Apply Security Controls in Software Development Ecosystems

Development environments are the collection of systems and tools used to develop and test software and systems before their release to production. Particular care is required in securing development environments to ensure that security vulnerabilities and back doors are not introduced into the software created there. These safeguards also protect source code from theft by adversaries.

Crossreference This section covers Objective 8.2 of the Software Development Security domain in the CISSP Exam Outline (May 1, 2021).

KEEP DEVELOPERS OUT OF PRODUCTION ENVIRONMENTS

Software developers should not have access to production environments in an organization. This practice is required by regulations and standards, including PCI DSS, NIST SP 800-53, and ISO/IEC 27002.

Separate personnel should install updated software in production environments. Developers can place installable software on a staging system for trained operations personnel to install and verify proper operation.

Developers may on occasion require read-only access to production environments so that they can troubleshoot problems. Even this read-only access should be disabled, however, except during actual support cases.

Programming languages

A riddle in the programming and software engineering profession goes like this:

· Question: What’s the best programming language to use for a specific project?

· Answer: The language known to the programmer.

· Alternative answer: The language that is presently available for use.

The meaning of the riddle is this: Programs are written by developers using familiar and available languages, with two limitations:

· The chosen language is not always the best fit for the chosen purpose.

· The developer’s expertise in the chosen language will vary.

The result in these situations: the programs may have defects that could make them vulnerable to attack. And depending on the programming language chosen for a project, tools to identify these defects may be widely available, available only in limited form, or not available at all.

You may have started this section thinking that the selection of a programming language has little consequence for the project’s outcome, but we hope that by now you realize that the opposite is true. The selection of a programming language puts the project on a long-term trajectory that helps determine the long-term success of the project, based on the following factors:

· Expertise of the developer in writing secure code

· Availability of tools to identify defects in source code

Libraries

In the context of software development, libraries are collections of source code or object code that are used in software development projects. Libraries may be purchased from commercial organizations, obtained as open-source, or developed in-house.

Increasingly, organizations employ libraries for new development projects, which results in most of an organization’s software having been developed by other parties. Software in general is undergoing continuous improvement in terms of functionality, leading to enterprise programs containing a much larger source code base, most of which was developed elsewhere.

Organizations’ use of source code libraries can act as a force multiplier, resulting in a smaller team of developers creating far more powerful applications. This situation is not without costs and risks, however. Organizations that use external code libraries must develop processes to continually ensure that these libraries are free of exploitable defects.

The use of libraries brings a compliance-related concern: licensing and attribution. The terms and conditions of many libraries require developers to include attribution for the use of a software library. Figure 10-3 shows a portion of the component attributions for Microsoft Word for Mac.

Tool sets

Many tools are involved in the software development process, including IDEs, repositories, compilers, code scanners, build systems, testing tools, and release systems. From a security perspective, there are two principal objectives:

· Protection of the tool and its environment: Tools and the environments containing them must be protected from unauthorized access and unauthorized changes.

· Security of the software being developed: Organizations must take every required measure to ensure that the confidentiality and integrity of software being developed are never compromised.

Source: Microsoft

FIGURE 10-3: An example of software library attributions for a software application.

These two objectives can be accomplished through

· Policies: Security and operational policies should define the security of tooling environments, who has access to these tools, what controls should be in place, and what audits and reviews should occur.

· Standards: Details on the configuration of tools and supporting environments will help ensure that these environments are secure and operated properly.

· Monitoring: As in any critical business environment, security events, configuration changes, access changes, successful and unsuccessful logins, and other events should be logged on the organization’s SIEM system so that security operations personnel can be notified of situations that could be signs of misbehavior or intrusion.

· Audits and reviews: Periodic examinations of tools, their settings, access controls, and the business processes they support will help identify gaps requiring attention and remediation.

Warning Software vendors are popular targets for cybercriminal organizations that want to infiltrate large numbers of organizations. Attacks on companies such as RSA, SolarWinds, Accellion, and Atlassian are familiar examples. Rigorous and continuous diligence is called for.

Integrated development environment

An integrated development environment (IDE) is a software tool that developers use to write, test, debug, and run software. Example IDEs include Eclipse, Microsoft Visual Studio, and GNU Emacs. Many IDEs also perform local version control, allowing a developer to revert to an earlier version of source code and to check code in and out from a central code repository.

Many IDEs can be integrated with security tools, such as Veracode, that can scan source code to alert developers to security and other defects. Catching errors while a developer is coding helps reinforce safe coding practices.

To prevent supply chain attacks, organizations need to make sure that developers’ IDEs are protected from unauthorized tampering, including introducing malicious code into a software program that could be later used to compromise organizations that use the program.

Runtime

As critical as it is to protect software development and testing environments, one cannot overlook a program’s runtime environment. Several factors make security an ongoing issue requiring continuous diligence in a runtime environment. We’ll discuss a few of those concerns here:

· Input vulnerabilities: Let’s start with the obvious. Throughout this chapter, we’ve been discussing the measures needed to make sure that programs do not have exploitable vulnerabilities, which could permit an attacker to cause a program to malfunction or take control of a program or the system on which the program is running. All those measures taken during design, development, and testing are critical. Beyond that, using a web application firewall (WAF) to protect web-based applications is an effective first line of defense. Penetration testing of new or changed applications can be effective as well, but a pen test is a “point in time” assessment instead of the continuous protection provided by the WAF.

· Mobile code: Some applications bring in code from libraries in your environment or other environments in real time. At times, it can be difficult to know the contents of the code and whether it can be trusted. Using only digitally signed code ensures its authenticity but not its security. (In a different context, even hackers use HTTPS on their phishing sites.)

· Code obfuscation and symbol table: No matter where your program is running (whether on a so-called protected server or an end-user system), there’s the risk that an attacker will grab your program’s binaries and attempt to reverse-engineer the program to learn its secrets, such as how it obtains encryption keys. Various code obfuscation techniques can make it more difficult for an attacker to succeed. You might also consider compiling the program so that it does not contain a symbol table. This measure makes it more difficult for an attacker to reverse-engineer the program and possibly learn its secrets.

· Untrusted endpoints: The majority of today's applications are mobile apps, applications running on laptops and desktop computers, and web-based applications. In all cases, untrusted endpoints are involved. Developers need to keep this fact in mind and consider performing threat modeling to better understand how their applications could be attacked and misused. A common approach is to imagine that an end user’s machine has been compromised by an attacker who is attempting to observe or even alter the user’s running of the application in an effort to steal secrets.

Continuous integration/continuous delivery

Many organizations have put continuous integration/continuous delivery (CI/CD) environments in place as a part of their DevOps environment. As opposed to producing software changes in large, waterfall-type projects or sprints, CI/CD environments are purpose-built to enable a larger number of smaller changes to be made in an environment. CI/CD is essential for organizations that require agility and responsiveness in their software environment.

CI/CD relies heavily on automation, particularly for code review, code inspection, security testing, and functional testing. Fewer human eyes are looking at changes in the CI/CD pipeline.

Organizations that implement CI/CD for its speed may overlook the need for careful, continuous scrutiny of the CI/CD pipeline. Otherwise, defects can easily be introduced (deliberately or not) without being noticed, and intruders may have an easier time compromising systems as well. Often, security is sacrificed for speed in a move to CI/CD, but things don’t have to be this way.
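One way to keep security from being sacrificed for speed is a release gate in the pipeline: the build fails automatically when security findings exceed a severity threshold. The sketch below is illustrative — the severity names and gate policy are hypothetical, not taken from any specific CI/CD product.

```python
# A minimal sketch of a CI/CD release gate: the pipeline fails the
# build when any security finding meets or exceeds a severity
# threshold. Severity names and policy are illustrative.

SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

def gate_release(findings, block_at="high"):
    """Return (approved, blocking) given scanner findings.

    findings: list of dicts like {"id": "SAST-101", "severity": "high"}
    block_at: minimum severity that blocks the release
    """
    threshold = SEVERITY_ORDER.index(block_at)
    blocking = [f for f in findings
                if SEVERITY_ORDER.index(f["severity"]) >= threshold]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "SAST-101", "severity": "low"},
    {"id": "SAST-102", "severity": "critical"},
]
approved, blocking = gate_release(findings)
print(approved)                          # False: the critical finding blocks release
print([f["id"] for f in blocking])       # ['SAST-102']
```

Each organization tunes the `block_at` threshold to its own risk appetite; the point is that the decision is automated and consistently applied.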

Security orchestration, automation, and response

As the number of actionable alerts and the velocity of attacks increase, security operations teams are straining to keep up and protect their organizations from compromise. This situation has led to the introduction of security orchestration, automation, and response (SOAR) platforms, most often integrated with an organization’s SIEM system, where security alerts are collected and analyzed.

Here is an example of a SOAR tool taking action:

A SIEM receives an alert from an IPS that suggests that a bot or a hacker is trying to brute-force login to a server in the DMZ. The SOAR platform receives this alert and looks up the location of the IP address. When it determines that the attacking system is in a foreign country, it directs the firewall to block all traffic from that IP address for 48 hours.

Without a SOAR platform, a security analyst would have to use manual tools to determine the location of the IP address and then ask a firewall engineer to block it. These manual steps might take anywhere from 10 minutes to more than an hour; with a SOAR platform, the attack is blocked automatically in seconds.
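The playbook just described can be sketched in a few lines. Everything here is a stand-in: the alert format, the geolocation lookup, and the firewall rule list are hypothetical placeholders for whatever your SIEM, threat-intelligence feed, and firewall APIs actually provide.

```python
# A simplified sketch of the SOAR playbook described above.
# The geolocation lookup and firewall call are stubs standing in
# for real SIEM, threat-intel, and firewall integrations.

HOME_COUNTRY = "US"

def geolocate(ip):
    # Stand-in for a real IP-geolocation service.
    return {"203.0.113.9": "RU"}.get(ip, "US")

def handle_brute_force_alert(alert, firewall_rules):
    """Block the source IP for 48 hours if it is foreign."""
    ip = alert["source_ip"]
    if geolocate(ip) != HOME_COUNTRY:
        firewall_rules.append({"action": "block", "ip": ip, "hours": 48})
        return "blocked"
    return "escalate-to-analyst"

rules = []
result = handle_brute_force_alert(
    {"type": "brute-force", "source_ip": "203.0.113.9"}, rules)
print(result)    # blocked
```

A real playbook would add enrichment steps, case creation, and human approval for higher-impact actions, but the orchestration pattern is the same.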

SOAR platforms can be set up to protect applications by responding more quickly than humans can. Given the increase in attack velocity, SOAR can make the difference between an attempted break-in and a successful break-in.

SOAR is discussed in more detail in Chapter 9.

Software configuration management

Software configuration management (SCM) is the practice of tracking and controlling changes to software programs, including source code. SCM embraces the principles of configuration management regarding the management and use of a repository for tracking and storing changes, and security controls that limit who can view or make software changes.

SCM is governed by change management — the process used to control changes made to an environment. In the context of software development, SCM is the recordkeeping system for the changes made to software. It should be driven by a defect management process, in which management decides which defects will be addressed at any given time.

A broader discussion of configuration management appears in Chapter 9.

Remember The process of managing the changes being made to systems is called change management. The process of recording modifications to a production environment is called configuration management. The process of recording modifications in a software development environment is called software configuration management.

Code repositories

During and after development, program source code resides in a central source code repository, sometimes known as a repo. Source code must be protected from both unauthorized access and unauthorized changes. Controls to enforce this protection include

· System hardening: Intruders must be kept out of the OS itself. This process includes all the usual system hardening techniques and principles for servers, as discussed in Chapter 5.

· System isolation: The system should be reachable by authorized personnel — no one else. It should not be reachable from the Internet or capable of accessing the Internet for any reason. The system should function only as a source code repository, not for other purposes.

· Restricted developer access: Only authorized developers and other personnel should have access to source code.

· Restricted administrator access: Only authorized personnel (ideally, not developers!) should have administrative access to the source code repository software, the underlying operating system, and components such as database management systems.

· No direct source code access: No one should be able to access source code directly. Instead, everyone should access it via the check-out process in its management software.

· Limited, controlled check-out: Developers should be able to check out modules only when they are specifically authorized to do so. This process can be automated through integration with a software defect tracking system.

· Restricted access to critical code: Few developers should have access to the most critical code, including code used for security functions such as authentication, session management, and encryption.

· No bulk access: Developers should not under any circumstances be able to check out all modules. (This restriction exists primarily to prevent theft of intellectual property.)

· Retention of all versions: The source code repository should maintain copies of all previous source code versions so that modules can be rolled back as needed.

· Check-in approval: All check-ins should require approval from another person to prevent a developer from unilaterally introducing defects or back doors into a program.

· Activity reviews: The activity logs for a source code repository should be reviewed periodically to ensure that there are no unauthorized check-outs or check-ins, and that all check-ins represent only authorized changes to source code.
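The check-in approval control above boils down to a simple rule: a change is accepted only when at least one reviewer other than the author has approved it. The sketch below is illustrative — the field names are hypothetical, and real repository platforms enforce this rule through their own protected-branch or mandatory-review settings.

```python
# A sketch of the check-in approval rule: self-approval never counts,
# and at least one independent approver is required. Field names are
# illustrative, not from any specific repository product.

def checkin_allowed(change):
    author = change["author"]
    approvers = set(change.get("approved_by", []))
    approvers.discard(author)        # self-approval doesn't count
    return len(approvers) >= 1

print(checkin_allowed({"author": "dana", "approved_by": ["dana"]}))   # False
print(checkin_allowed({"author": "dana", "approved_by": ["lee"]}))    # True
```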

Application security testing

The sheer size and complexity of software applications, together with the fact that some defects invariably slip through, make applications the target of relentless attacks. Manual and automatic testing are needed to root out all identifiable defects, with the goal of zero defects in software being released into production. Less rigor will practically guarantee that defects will be discovered and exploited.

Various types of software testing are discussed in this section.

Code reviews

A code review (also known as a peer review) is performed by a developer who examines the code changes made by another developer. The purposes of a code review include

· Identification of defects: A peer may recognize security defects and functional defects that the developer inadvertently placed in the program.

· Identification of improper logic: A peer can double-check program logic and flow to confirm that the developer coded the change correctly.

· Identification of violations of coding standards: A peer can check to see whether the new or changed source code complies with the organization’s coding standards.

· Transparency: When a developer knows that one or more peers will be examining their source code, they are less likely to sneak a back door or other malicious feature into a program.

Static application security testing

Static application security testing (SAST) represents a class of tools used to examine software source code. SAST tools identify various defects, including security defects that could enable intrusion and compromise of a running system. SAST tools are often built into IDEs as well as software build environments.

SAST tooling is useless if its output is ignored. Organizations that use SAST must develop policies and procedures to govern when and how builds and releases are deferred until defects can be fixed. Otherwise, SAST tools create noise and are disregarded.

Although SAST tooling can be run as a stand-alone function, SAST is frequently integrated into a DevOps or DevSecOps environment, where static code testing on new and changed code is performed automatically. Defects, then, are raised as software defects, the most serious of which may delay release until they are remediated.

Static application security testing is sometimes known as white-box testing, meaning that all available information, including source code, is available for testing.
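To illustrate what static analysis actually does, here is a toy checker that parses Python source without running it and flags calls to `eval()`, a function most coding standards prohibit on untrusted input. Real SAST tools apply thousands of such rules across many languages; this sketch implements just one.

```python
# A toy static check: walk the parsed syntax tree and report the line
# number of every call to eval(). The code under test is never executed.
import ast

def find_eval_calls(source):
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(find_eval_calls(sample))   # [2] — eval of raw user input, flagged
```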

Dynamic application security testing

Dynamic application security testing (DAST) represents a class of tools used to test running software programs. DAST tools execute a software program being tested and provide keyboard inputs as though the tool were a human user. The DAST tool repeatedly provides various types of inputs to discover exploitable weaknesses in the program being tested. DAST tools are used primarily to test web-based applications and mobile applications.

CODING STANDARDS

Better organizations develop and publish software coding standards, which specify various rules concerning the contents of source code. Topics of coding standards may include

· Open-source: Coding standards should address the organization’s stance on the use of open-source code.

· Safe and unsafe functions: The use of safe and unsafe functions should be addressed in coding standards so that developers know what functions to avoid. Unsafe functions often lack boundary and type checking; their use may make a program vulnerable to attack.

· Encryption: Coding standards should specify the encryption algorithms, implementation, and libraries to be used.

· Input validation: Coding standards should address the techniques or functions to sanitize inputs to prevent buffer overflow and other attacks.

· Session management: Coding standards should cite the techniques to be used for establishing and enforcing session management.

· Security: Coding standards may specify methods for referencing or storing login credentials needed for a running program to authenticate to another program.

· Indentation: No coding standard is complete without addressing the critical issue of indents. Should they be two characters? Four? Eight? How about an odd number or a prime number, just to be interesting?

Like SAST testing, DAST testing can be run on a stand-alone basis or as a part of software build automation. As with SAST, defects found in DAST testing can hold up release if they are severe enough. Each organization sets the standard for the severity level of defects that require remediation before release, as opposed to defects that can be fixed in a future release.

DAST is sometimes known as black-box testing, meaning that very little or no information is available to the tester other than the program (or URL) itself.
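A toy dynamic test captures the spirit of DAST: feed many generated inputs to the component under test and record any that crash it. Here, the “application” is a deliberately fragile parser function rather than a running web application, so the sketch is self-contained; real DAST tools drive a live web or mobile application the same way.

```python
# A toy fuzzer in the spirit of DAST: generate many inputs, feed them
# to the target, and collect any that raise an exception.
import random
import string

def fragile_parse(s):
    # Buggy on purpose: assumes the input always contains "=".
    key, value = s.split("=", 1)
    return {key: value}

def fuzz(target, runs=200):
    random.seed(1)                   # deterministic for repeatability
    crashes = []
    for _ in range(runs):
        s = "".join(random.choice(string.printable) for _ in range(8))
        try:
            target(s)
        except Exception as exc:
            crashes.append((s, type(exc).__name__))
    return crashes

crashes = fuzz(fragile_parse)
print(len(crashes) > 0)    # True — inputs without "=" raise ValueError
```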

Assess the Effectiveness of Software Security

U.S. President Ronald Reagan was well known for the phrase “Trust, but verify.” We take this saying a little further: “Don’t trust until verified.” This credo applies to many aspects of information security, including software.

Crossreference This section covers Objective 8.3 of the Software Development Security domain in the CISSP Exam Outline (May 1, 2021).

Initial and periodic security testing of software is an essential part of developing (or acquiring) and managing software throughout its life span. The reason for periodic testing is that researchers (both white-hat and black-hat) are always finding new ways of exploiting software programs that were once considered to be secure.

Other facets of security testing are explored in lurid detail in Chapter 8.

Auditing and logging of changes

Logging changes is an essential aspect of system and software behavior. The presence of logs facilitates troubleshooting, verification, and reconstruction of events.

Two types of changes are important here:

· Changes performed by the software: Mainly changes made to data, but also configuration changes that alter software behavior. A log entry should include “before” and “after” values, as well as other essentials, including user, date, time, and transaction ID.

· Changes made to the software: Generally, changes made to the actual software code. In most organizations, this process involves change management and configuration management processes. While investigating system problems, however, you shouldn’t discount the possibility of unauthorized changes. The absence of change management records is not evidence of the absence of changes.

Log data for these categories may be stored locally or in a central repository, such as a configuration management database (CMDB) or a security information and event management system (SIEM). Appropriate personnel should be notified promptly when actionable events take place, as discussed more fully in Chapter 9.
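A “changes performed by the software” log entry might be structured as follows. The field set shown is illustrative — match it to your own audit and regulatory requirements.

```python
# A sketch of an audit log entry recording a data change, with
# before/after values and the usual essentials. Field names are
# illustrative, not a standard.
import json
from datetime import datetime, timezone

def audit_entry(user, transaction_id, field, before, after):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "transaction_id": transaction_id,
        "field": field,
        "before": before,
        "after": after,
    }

entry = audit_entry("mgarcia", "TXN-20481", "credit_limit", 5000, 7500)
print(json.dumps(entry, indent=2))
```

Entries like this one can be written locally or shipped to the CMDB or SIEM, as the preceding paragraph describes.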

Risk analysis and mitigation

Risk analysis of software programs and systems is essential for identifying, analyzing, and treating risks. The types of risks that will likely be included are

· Known vulnerabilities: What vulnerabilities can be identified, how they can be exploited, and whether the software has any means of being aware of attempted exploitation and defending itself.

· Unknown vulnerabilities: Vulnerabilities that have yet to be discovered. If you’re unsure of what we mean, just imagine any of several widely available programs that seem to be plagued by new vulnerabilities month after month. Software with that kind of track record certainly has undisclosed vulnerabilities. We won’t shame those products by listing them here.

· Transaction integrity: Whether the software works properly and produces the correct results in all cases, including unintentional and deliberate misuse and abuse. Manual or automated auditing of software programs can identify transaction calculation and processing problems, though end users often spot them as well.

Tools that are used to assess the vulnerability of software include

· Security scanners: These tools, including DAST tools, scan an entire website or web application. They examine form variables, hidden variables, cookies, and other web-page features to identify vulnerabilities.

· Website security tools: These tools are used to examine web pages manually for vulnerabilities that scanners often can’t find.

· Source code scanning tools: These tools examine program source code and identify vulnerabilities that security scanners often cannot see.

Tip Information about software vulnerability testing tools can be found at https://owasp.org/www-community/Vulnerability_Scanning_Tools.

Another approach to discovering vulnerabilities and design defects uses a technique known as threat modeling. Threat modeling involves a systematic, detailed analysis of a program’s interfaces, including user interfaces, APIs, and interaction with the underlying database management and operating systems. The analysis involves studying these elements to understand how they could be used, misused, and abused by insiders and adversaries. Information about threat modeling tools can be found at https://owasp.org/www-project-threat-model.

The STRIDE threat classification model is also handy for threat modeling. STRIDE stands for the following:

· Spoofing of user identity

· Tampering

· Repudiation

· Information disclosure

· Denial of service

· Elevation of privilege

Mitigating software vulnerabilities generally means updating source code (if you have it!) or applying security patches. Patches can’t always be obtained and applied right away, however, which means implementing temporary work-arounds or relying on security in other layers, such as a web application firewall (WAF).

Mitigation of transaction integrity issues may require manual adjustments to affected data or work-arounds in associated programs.

Assess Security Impact of Acquired Software

Every organization acquires some (or all) of its software from other entities. Any acquired software related to the storage or processing of sensitive data needs to be understood from a security perspective so that an organization is aware of the risks associated with its use.

Crossreference This section covers Objective 8.4 of the Software Development Security domain in the CISSP Exam Outline (May 1, 2021).

Some use cases bear further discussion:

· Commercial off-the-shelf: Confirming the security of commercial tools is usually more difficult than confirming the security of open-source software because the source code usually is not available to examine. Depending on the type of software, automated scanning tools may help, but testing is often a manual effort. Some vendors voluntarily permit security consulting firms to examine their software for vulnerabilities and permit customers to view test results (sometimes in summary form). Responsible vendors voluntarily undergo audits such as SOC 1 and SOC 2 or pursue ISO 27001 certification to give customers more confidence in their security controls, including those related to software development.

· Open-source: Many security professionals fondly recall those blissful days when we all trusted open-source software, believing that the examination of source code by many caring, talented people would surely root out security defects. Security vulnerabilities in OpenSSL, jQuery, MongoDB, and other software, however, have burst that bubble. Now it is obvious that we need to examine open-source software with as much scrutiny as any other software. Organizations need to maintain an accurate, up-to-date inventory of all open-source code in use and develop a means of staying informed on security-related issues with open-source code. For each application, organizations develop a software bill of materials (SBOM), a detailed inventory of application source code, including the origins of each part. This practice aids in the identification of security defects in source code that, if unmitigated, could result in a security incident or breach.

· Third party: The term third party is most often associated with the practice known as third-party risk management (TPRM), a process used to assess vendors and service providers. TPRM is discussed in detail in Chapter 3.

· Managed services: Many organizations are migrating to cloud-based services that have several types of offerings:

· Software as a Service (SaaS): A software vendor hosts its software on its computers (or in an Infrastructure as a Service [IaaS] environment) and enables its customers to access and run the software over a network connection (usually, the Internet). Example SaaS vendors include SAP Concur and Webex. SaaS vendors generally undergo periodic penetration tests and external audits such as SOC 1 or SOC 2, and they make those test and audit reports available to corporate customers on request. Some SaaS vendors permit customers to perform their own penetration testing.

· Platform as a Service (PaaS): A software vendor hosts software platforms on its own computers (or in an IaaS environment), and permits customers and other vendors to host or integrate other programs and applications as part of the overall platform. Example PaaS offerings include Salesforce and Microsoft 365 (formerly Office 365). Like SaaS vendors, PaaS vendors often undergo penetration tests and external audits, and make the reports available to corporate customers.

· IaaS: A vendor hosts an environment where customers can set up and run virtual machines running operating systems, virtual network devices, and virtual storage systems. Customers develop their own designs for their environments and set them up in nearly the same way as though they were implementing these environments on their own computing, network, and storage hardware. In IaaS environments, customers are responsible for the security of all virtual systems, including operating systems and network devices; all patching, scanning, identity management, network architecture, and other activities are customers’ responsibility. IaaS vendors often provide SOC 1, SOC 2, or ISO 27001 certifications to customers who want to better understand the vendors’ security.
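The open-source inventory and SBOM mentioned above can start as simply as enumerating installed components and their versions. The sketch below lists the Python packages installed in the current environment using the standard library; a real SBOM (in SPDX or CycloneDX format, for example) adds origins, licenses, and hashes, but the starting point is the same.

```python
# A first step toward an open-source inventory: list installed Python
# packages with versions, using only the standard library.
from importlib import metadata

def installed_components():
    comps = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:                       # skip malformed metadata
            comps.append((name, dist.version))
    return sorted(comps)

for name, version in installed_components():
    print(f"{name}=={version}")
```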

Define and Apply Secure Coding Guidelines and Standards

Organizations that develop software, whether for their own use or as products for use by other organizations, need to develop policies and standards regarding the development of source code to reduce the number of vulnerabilities that could lead to errors, incidents, and security breaches. Even organizations that use tools to find vulnerabilities in source code (and at run time) would benefit from such practices for two reasons:

· The time to fix application vulnerabilities is reduced.

· Some application vulnerabilities may not be discovered by tools or code reviews but could still be exploited by an adversary, leading to an error, incident, or breach.

Crossreference This section covers Objective 8.5 of the Software Development Security domain in the CISSP Exam Outline (May 1, 2021).

Security weaknesses and vulnerabilities at the source-code level

Software development organizations must have standards, processes, and tools in place to ensure that all developed software is free of defects, including security vulnerabilities that could lead to system compromise and data tampering or theft. The types of defects that need to be identified include

· Buffer overflow: In this attack, a program’s input field is deliberately overflowed in an attempt to corrupt the running software program and permit the attacker to force the program to run arbitrary instructions. A buffer overflow attack gives an attacker partial or complete control of the target system, thereby enabling them to access, tamper with, destroy, or steal sensitive data.

· Injection attacks: An attacker may be able to manipulate the application through a SQL injection or script injection attack, with various results, including access to sensitive data.

· Escalation of privileges: An attacker may trick the target application or system into raising the attacker’s level of privilege, allowing them to access sensitive data or take control of the target system.

· Improper authentication (authentication bypass): Authentication that is not airtight may be compromised or bypassed by an attacker. Doing authentication correctly means writing resilient code and avoiding features that give an attacker an advantage (such as telling the user that the user ID is correct but the password is not).

· Improper session management: Session management that is not programmed and configured correctly may be exploited by intruders, which could lead to session hijacking through a session replay attack.

· Improper use of encryption: Strong encryption algorithms can be ineffective if they are not implemented properly, which would make it easy for an attacker to attack the cryptosystem and access sensitive data. Remedying includes not only proper use of encryption algorithms, but also proper encryption key management.

· Gaming: This general term refers to faulty application or system design that may permit a user or intruder to use the application or system in ways not intended by its owners or designers. Criminals might use an image-sharing service, for example, to pass messages via steganography.

· Memory leaks: This type of defect occurs when a program fails to release unneeded memory, resulting in a running program’s memory requirements growing until available resources are exhausted.

· Trap door: This is a feature in a program that performs an undocumented function, typically a security bypass.

· Race conditions: This type of defect involves two or more programs, processes, or threads, each of which accesses and manipulates a resource as though it had exclusive access to the resource. This defect can cause an unexpected result with one or more programs, processes, or threads.
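The injection defect in the list above can be shown concretely with SQLite. The unsafe version splices user input directly into the SQL string, so the attacker’s input rewrites the query; the safe version uses a parameterized query, so the input is treated as data, never as SQL.

```python
# SQL injection, demonstrated and fixed with Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

evil = "x' OR '1'='1"

# Unsafe: string concatenation — the attacker's input rewrites the query.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + evil + "'").fetchall()
print(unsafe)    # every row comes back: [('alice',), ('bob',)]

# Safe: parameterized query — no row matches the literal string.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(safe)      # []
```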

The Open Web Application Security Project (OWASP) addresses these and other weaknesses in detail at https://owasp.org. OWASP periodically publishes a top-ten list of web application vulnerabilities, updated to reflect changing trends.

Security of application programming interfaces

Application programming interfaces (APIs) are components of software programs used for data input and data output. An API has an accompanying specification (documented or not) that defines functionalities, input and/or output fields, data types, and other details. Typically, an API is used for nonhuman interaction between programs. Although you would consider a web interface to be a human-readable interface, an API is considered to be a machine-readable interface.

APIs exist in many places: operating systems, subsystems (such as web servers and database management systems), utilities, and application programs. APIs are also implemented in computer hardware for components such as memory, disk drives, network interfaces, keyboards, and display devices.

In software development, a developer can create an API from scratch (not recommended) or build one by using source code modules, libraries, or frameworks that implement a standard API style, such as REST. An API can be part of an application used to transfer data between applications, in bulk or transaction by transaction.

APIs need to be secure so that they do not become the means through which an intruder can covertly obtain sensitive data or cause the target system to malfunction or crash. Four primary means of ensuring API security include

· Secure design: Each API needs to be implemented so that it carefully examines and sanitizes all input data (to defeat any attempts at injection or buffer overflow attacks, as well as program malfunctions). In APIs that require authentication, the API should be implemented so that authentication bypass attacks cannot succeed. Output data must also be examined so that the API does not output any noncompliant or malicious data.

· Security testing: Each API needs to be thoroughly tested to ensure that it functions correctly and resists attacks. Automated tools such as SAST and DAST (discussed earlier in this chapter) are commonly used to identify defects in APIs.

· Monitoring: APIs should log common activity as well as errors. All such log entries should be sent to the organization’s SIEM, which correlates events and generates alarms when personnel action is required.

· External protection: In the case of a Web Services API, an API gateway and/or a WAF may protect an API from attack. Such an option may not be available, however, if the API uses other protocols. Packet filtering firewalls do not protect APIs from logical attacks because firewalls do not examine the contents of packets — only their source and destination IP addresses and ports.
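The secure-design point above can be sketched in a few lines of Python: validate an inbound request payload against an allow-list schema before acting on it. (The field names and patterns here are hypothetical, not from any real API.)

```python
import re

# Hypothetical allow-list schema for a "create user" API endpoint.
SCHEMA = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def validate_request(payload: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the input is clean)."""
    errors = []
    # Reject unexpected fields rather than silently passing them through.
    for field in payload:
        if field not in SCHEMA:
            errors.append(f"unexpected field: {field}")
    # Require every declared field and check it against its pattern.
    for field, pattern in SCHEMA.items():
        value = payload.get(field)
        if not isinstance(value, str) or not pattern.match(value):
            errors.append(f"invalid or missing field: {field}")
    return errors

print(validate_request({"username": "alice_1", "email": "a@example.com"}))  # []
print(validate_request({"username": "x; DROP TABLE users", "email": "bad"}))
```

Allow-listing (accept only what is known to be good) defeats injection and overflow attempts more reliably than trying to enumerate every bad input.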

Secure coding practices

The purpose of secure coding practices is the reduction of exploitable vulnerabilities in tools, utilities, and applications. The practice of secure coding isn't just about the code itself; it involves many other considerations and activities. Following are some of the factors related to secure coding:

· Tools: From the selection and configuration of IDEs to the use of SAST and DAST, tools can be used to detect the presence of source code defects, including security vulnerabilities. The earlier such defects are found, the less effort it takes to correct them and stay on schedule.

· Processes: As discussed earlier in this chapter, software development processes need to be designed and managed with security in mind. Processes define the sequence of events. In the context of software and systems development, security-related steps such as security requirements definition, peer reviews, and the use of vulnerability scanning tools ensure that all the right steps are taken to make sure that source code is reasonably free of defects.

· Training: Software developers and engineers are more likely to write good, safe code when they know how to, so training in secure development is essential. Because very few universities include secure development in their undergraduate computer science programs, organizations must fill the gap themselves.

· Incentives: Money talks. Providing incentives of some form will help software developers pay more attention to whether they're producing code with security vulnerabilities. People like the carrot more than the stick, so rewards for the fewest defects per calendar quarter or year may be a good start.

· Selection of source code languages: The selection of source code languages and policies on the use of open-source code come into play. By design, some coding languages are more secure (we might say safe) than others. The C language, for example, as powerful as it is, has few protective features, so software developers must be more skilled in and knowledgeable about writing safe, secure code.
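As one concrete secure-coding practice, the following Python sketch contrasts string-built SQL with a parameterized query, using a throwaway in-memory SQLite database (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name: str):
    # Unsafe: f"SELECT ... WHERE name = '{name}'" lets an input such as
    # "' OR '1'='1" rewrite the query itself (SQL injection).
    # Safe: the driver binds the value as data, never as SQL text.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))         # [('alice', 'admin')]
print(find_user("' OR '1'='1"))   # [] -- the injection attempt matches nothing
```

The same habit applies in any language: keep code and data separate, and let the library do the escaping.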

Developed in the 1970s, the C language was created during a more trusting era. Dennis Ritchie, who created C, or Brian Kernighan, who co-wrote the definitive book on the language, allegedly said, “We [Unix and C] won’t prevent you from doing something stupid, as that restriction might also prevent you from doing something good.” We have been unable to confirm whether either of them actually said this. The quote may have come from a book, a lecture, or a pub after a few pints of ale. The point is, some languages are safer than others. We’re sorry for the rabbit hole. Well, mostly sorry.

OPEN WEB APPLICATION SECURITY PROJECT

OWASP publishes guidance that has been adopted by a short list of security standards, most notably the Payment Card Industry Data Security Standard (PCI DSS). The top ten software risks cited by OWASP are broken access control, cryptographic failures, injection, insecure design, security misconfiguration, vulnerable and outdated components, identification and authentication failures, software and data integrity failures, security logging and monitoring failures, and server-side request forgery (SSRF).

Earlier versions of the OWASP top ten software vulnerabilities included missing function level access control, cross-site request forgery, malicious file execution, information leakage, improper error handling, and insecure communications, which are also important security considerations for any software development project.

Removal of these risks makes a software application more robust, reliable, and secure. You can find out more about OWASP — and even join or form a local chapter — by visiting https://owasp.org.

Software-defined security

Software-defined security is a security model in which security mechanisms are defined and controlled by software. Put another way, in the software-defined security model, security hardware devices such as firewalls, spam filters, IPSes, and web content filters are implemented as software-based virtual machines. Software-defined security is closely related to network function virtualization (NFV), in which network devices such as routers, firewalls, and IPSes are implemented as virtual network devices instead of physical appliances.

Organizations can implement software-defined security or network function virtualization in both the private cloud and the public cloud. There is no reason to assume that software-defined security can be implemented only in IaaS environments such as Amazon Web Services or Microsoft Azure.

Software-defined security is considered to be a force multiplier that enables organizations to adapt quickly to evolving business architecture and threats by changing security architecture as quickly as engineers can click, drag and drop, and type. But software-defined security as a force multiplier can just as easily result in catastrophic errors that needlessly expose an organization to active threats when an engineer makes an error in the software-defined security UI.
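As an illustrative sketch only (the rule format is invented, not taken from any real product), the following Python fragment shows the core idea of software-defined security: policy expressed as data and enforced by software, changeable as fast as an engineer can edit it, for better or worse:

```python
import ipaddress

# Hypothetical firewall policy as data; first matching rule wins.
RULES = [
    {"action": "allow", "port": 443,   "src": "any"},
    {"action": "allow", "port": 22,    "src": "10.0.0.0/8"},
    {"action": "deny",  "port": "any", "src": "any"},   # default deny
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching this connection attempt."""
    for rule in RULES:
        port_ok = rule["port"] in ("any", dst_port)
        src_ok = (rule["src"] == "any"
                  or ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"]))
        if port_ok and src_ok:
            return rule["action"]
    return "deny"

print(evaluate("203.0.113.9", 443))  # allow (HTTPS open to all)
print(evaluate("203.0.113.9", 22))   # deny (SSH only from 10.0.0.0/8)
```

Note how a one-character mistake in RULES would change the organization's exposure instantly, which is exactly the double-edged "force multiplier" effect described above.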
