Software Security Testing

The Security Testing practice is concerned with prerelease testing, including integrating security into standard quality assurance processes. The practice includes use of opaque-box security tools (including fuzz testing) as a smoke test in QA, risk-driven white-box testing, application of the attack model, and code coverage analysis. Security testing focuses on vulnerabilities in construction.

Security Testing Level 1

[ST1.1: 100] Ensure QA performs edge/boundary value condition testing.

QA efforts go beyond functional testing to perform basic adversarial tests and probe simple edge cases and boundary conditions, with no particular attacker skills required. When QA understands the value of pushing past standard functional testing that uses expected input, it begins to move slowly toward thinking like an adversary. A discussion of boundary value testing can lead naturally to the notion of an attacker probing the edges on purpose (for example, determining what happens when someone enters the wrong password over and over).
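As a minimal illustration, boundary-value tests like the following probe exactly the edges an attacker would; the validate_username() rule (1 to 32 printable characters) is a hypothetical stand-in for the real code under test:

```python
# A minimal sketch of edge/boundary-value tests, assuming a hypothetical
# validate_username() rule (1 to 32 printable characters) as the code under test.
import pytest


def validate_username(name: str) -> bool:
    """Stand-in for the real validation routine: 1 to 32 printable characters."""
    return 1 <= len(name) <= 32 and name.isprintable()


@pytest.mark.parametrize(
    "candidate,expected",
    [
        ("", False),             # empty: below the lower boundary
        ("a", True),             # exactly the minimum
        ("a" * 32, True),        # exactly the maximum
        ("a" * 33, False),       # one past the maximum
        ("a" * 100_000, False),  # far past the maximum: an attacker-sized input
        ("bob\x00", False),      # non-printable control byte smuggled in
    ],
)
def test_username_boundaries(candidate, expected):
    assert validate_username(candidate) is expected
```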

[ST1.3: 87] Drive tests with security requirements and security features.

QA targets declarative security mechanisms with tests derived from requirements and security features. A test could try to access administrative functionality as an unprivileged user, for example, or verify that a user account becomes locked after some number of failed authentication attempts. For the most part, security features can be tested in a fashion similar to other software features—security mechanisms such as account lockout, transaction limitations, entitlements, and so on are tested with both expected and unexpected input as derived from requirements. Software security isn’t security software, but testing security features is an easy way to get started. New software architectures and deployment automation, such as with container and cloud infrastructure orchestration, might require novel test approaches.
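A minimal sketch of such requirement-driven tests follows; AuthService and its lock-after-five-failures rule are hypothetical stand-ins for the organization’s real authentication and entitlement mechanisms and their stated requirements:

```python
# A minimal sketch of tests derived from security requirements: an unprivileged
# user must not reach admin functionality, and an account must lock after five
# failed logins. AuthService is a hypothetical in-memory stand-in.
import pytest


class AuthService:
    LOCK_THRESHOLD = 5

    def __init__(self):
        self._users = {"alice": {"password": "correct-horse", "role": "user", "failures": 0}}

    def login(self, user: str, password: str) -> bool:
        record = self._users[user]
        if record["failures"] >= self.LOCK_THRESHOLD:
            raise PermissionError("account locked")
        if password != record["password"]:
            record["failures"] += 1
            return False
        record["failures"] = 0
        return True

    def admin_report(self, user: str) -> str:
        if self._users[user]["role"] != "admin":
            raise PermissionError("admin role required")
        return "report"


def test_unprivileged_user_cannot_reach_admin_function():
    svc = AuthService()
    with pytest.raises(PermissionError):
        svc.admin_report("alice")


def test_account_locks_after_five_failed_logins():
    svc = AuthService()
    for _ in range(AuthService.LOCK_THRESHOLD):
        assert svc.login("alice", "wrong") is False
    with pytest.raises(PermissionError):
        svc.login("alice", "correct-horse")  # even the right password is rejected once locked
```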

[ST1.4: 50] Integrate opaque-box security tools into the QA process.

The organization uses one or more opaque-box security testing tools as part of the QA process. Such tools are valuable because they encapsulate an attacker’s perspective, albeit generically. Traditional dynamic analysis scanners are relevant for web applications, while similar tools exist for cloud environments, containers, mobile applications, embedded systems, and so on. In some situations, other groups might collaborate with the SSG to apply the tools. For example, a testing team could run the tool but come to the SSG for help with interpreting the results. When testing is integrated into agile development approaches, opaque-box tools might be hooked into internal toolchains, provided by cloud-based toolchains, or used directly by engineering. Regardless of who runs the opaque-box tool, the testing should be properly integrated into a QA cycle of the SSDL and will often include both authenticated and unauthenticated reviews.
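One possible shape for that integration is sketched below; the dast-scan command, its flags, and the JSON report fields are hypothetical placeholders for whatever opaque-box tool the organization actually uses:

```python
# A minimal sketch of wiring an opaque-box scanner into a QA stage. The
# "dast-scan" CLI, its options, and the report structure are hypothetical;
# substitute the organization's real tool and output format.
import json
import subprocess
import sys


def run_opaque_box_scan(target_url: str, report_path: str = "dast-report.json") -> int:
    # Hypothetical invocation of the scanner against the QA environment.
    subprocess.run(
        ["dast-scan", "--target", target_url, "--output", report_path],
        check=True,
    )
    with open(report_path, encoding="utf-8") as handle:
        findings = json.load(handle)
    # Fail the QA stage only on high-severity findings; lower severities are
    # reported but left for triage.
    high = [f for f in findings if f.get("severity") == "high"]
    for finding in high:
        print(f"HIGH: {finding.get('title')} at {finding.get('url')}")
    return 1 if high else 0


if __name__ == "__main__":
    sys.exit(run_opaque_box_scan("https://qa.example.internal"))
```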

Security Testing Level 2

[ST2.4: 19] Share security results with QA.

The SSG or others with security testing data routinely share results from security reviews with those responsible for QA. Using security testing results as the basis for a conversation about common attack patterns or the underlying causes of security defects allows QA to generalize that information into new test approaches. Organizations that have chosen to leverage software pipeline platforms such as GitHub, or CI/CD platforms such as the Atlassian stack, can benefit from QA receiving various testing results automatically, which should then facilitate timely stakeholder conversations. In some cases, these platforms can be used to integrate QA into automated remediation workflow and reporting by generating change request tickets for developers, lightening the QA workload. Over time, QA learns the security mindset, and the organization benefits from an improved ability to create security tests tailored to the organization’s code.
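A minimal sketch of that flow follows, with hypothetical finding fields and ticket format: shared findings are grouped by weakness category so QA can spot recurring defect patterns, and each group becomes a change-request payload for the ticketing system:

```python
# A minimal sketch of turning shared security results into QA-facing data and
# change-request tickets. The finding fields and ticket shape are hypothetical.
from collections import defaultdict


def summarize_findings(findings: list[dict]) -> dict[str, list[dict]]:
    """Group findings by weakness category so recurring patterns stand out."""
    by_category: dict[str, list[dict]] = defaultdict(list)
    for finding in findings:
        by_category[finding.get("cwe", "uncategorized")].append(finding)
    return by_category


def build_change_requests(by_category: dict[str, list[dict]]) -> list[dict]:
    """Produce one change-request payload per recurring weakness category."""
    tickets = []
    for category, items in by_category.items():
        tickets.append({
            "title": f"Security defects: {category} ({len(items)} findings)",
            "body": "\n".join(i.get("description", "") for i in items),
            "labels": ["security", "qa-followup"],
        })
    return tickets


if __name__ == "__main__":
    sample = [
        {"cwe": "CWE-79", "description": "Reflected XSS in search box"},
        {"cwe": "CWE-79", "description": "Stored XSS in profile bio"},
        {"cwe": "CWE-89", "description": "SQL injection in report filter"},
    ]
    for ticket in build_change_requests(summarize_findings(sample)):
        print(ticket["title"])
```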

[ST2.5: 21] Include security tests in QA automation.

Security tests are included in an automation framework and run alongside other QA tests. While many groups trigger these tests manually, in a modern toolchain, these tests are likely part of the pipeline and triggered through automation. Security tests might be derived from abuse cases identified earlier in the lifecycle (see [AM2.1 Build attack patterns and abuse cases tied to potential attackers]), from creative tweaks of functional tests, developer tests, and security feature tests, or even from guidance provided by penetration testers on how to reproduce an issue.
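For example, a functional test can be tweaked into an abuse-case test that lives in the same automation framework; in the sketch below, safe_resolve() is a hypothetical stand-in for the application’s file-serving logic, and the traversal payloads come from a typical abuse case:

```python
# A minimal sketch of a security test sitting alongside functional tests in QA
# automation: the functional download test is tweaked into a path-traversal
# abuse case. safe_resolve() is a hypothetical stand-in for the code under test.
from pathlib import Path

import pytest

PUBLIC_ROOT = Path("/srv/app/public")


def safe_resolve(requested: str) -> Path:
    """Stand-in for the code under test: resolve a request inside PUBLIC_ROOT."""
    candidate = (PUBLIC_ROOT / requested).resolve()
    if PUBLIC_ROOT not in candidate.parents and candidate != PUBLIC_ROOT:
        raise ValueError("path escapes public root")
    return candidate


def test_functional_download_path():
    assert safe_resolve("docs/manual.pdf") == PUBLIC_ROOT / "docs" / "manual.pdf"


@pytest.mark.parametrize("payload", ["../etc/passwd", "../../etc/shadow", "/etc/passwd"])
def test_abuse_case_path_traversal(payload):
    with pytest.raises(ValueError):
        safe_resolve(payload)
```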

[ST2.6: 15] Perform fuzz testing customized to application APIs.

QA efforts include running a customized fuzzing framework against APIs critical to the organization. They could begin from scratch or use an existing fuzzing toolkit, but the necessary customization often goes beyond creating custom protocol descriptions or file format templates to giving the fuzzing framework a built-in understanding of the application interfaces it calls into. Test harnesses developed explicitly for particular applications make good places to integrate fuzz testing.
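One way to build such a harness is sketched below using the Atheris coverage-guided fuzzer for Python; parse_transfer_request() is a hypothetical application API, and the customization consists of shaping raw fuzzer bytes into the structured arguments that API expects rather than feeding it unstructured noise:

```python
# A minimal sketch of an API-aware fuzz harness using Atheris (pip install
# atheris). parse_transfer_request() is a hypothetical application interface;
# the harness gives the fuzzer an understanding of its argument structure.
import sys

import atheris


def parse_transfer_request(account: str, amount: int, memo: str) -> dict:
    """Stand-in for the real application interface under test."""
    if amount < 0:
        raise ValueError("negative amount")
    return {"account": account.strip(), "amount": amount, "memo": memo[:140]}


def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    account = fdp.ConsumeUnicodeNoSurrogates(16)
    amount = fdp.ConsumeIntInRange(-1_000_000, 1_000_000)
    memo = fdp.ConsumeUnicodeNoSurrogates(200)
    try:
        parse_transfer_request(account, amount, memo)
    except ValueError:
        pass  # documented failure mode; any other exception is a finding


if __name__ == "__main__":
    atheris.instrument_all()
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```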

Security Testing Level 3

[ST3.3: 8] Drive tests with risk analysis results.

Testers use architecture analysis results (see [AA2.1 Define and use AA process]) to direct their work. If the AA determines that “the security of the system hinges on the transactions being atomic and not being interrupted partway through,” for example, then torn transactions will become a primary target in adversarial testing. Adversarial tests like these can be developed according to risk profile, with high-risk flaws at the top of the list. Security testing results shared with QA (see [ST2.4 Share security results with QA]) can help focus test creation on areas of potential vulnerability that can, in turn, help prove the existence of identified high-risk flaws.
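A minimal sketch of such a risk-driven adversarial test follows; Ledger is a hypothetical stand-in for the transactional component that the AA flagged:

```python
# A minimal sketch of an adversarial test driven by AA results: the analysis
# flagged torn (partially applied) transactions, so the test forces a failure
# midway through a transfer and asserts the atomicity invariant still holds.
import pytest


class Ledger:
    """Hypothetical stand-in for the transactional component under test."""

    def __init__(self, balances: dict[str, int]):
        self._balances = dict(balances)

    def balance(self, name: str) -> int:
        return self._balances[name]

    def total(self) -> int:
        return sum(self._balances.values())

    def transfer(self, src: str, dst: str, amount: int, fail_midway: bool = False) -> None:
        snapshot = dict(self._balances)
        try:
            self._balances[src] -= amount
            if fail_midway:
                raise RuntimeError("interrupted between debit and credit")
            self._balances[dst] += amount
        except Exception:
            self._balances = snapshot  # roll back so the transaction stays atomic
            raise


def test_interrupted_transfer_is_not_torn():
    ledger = Ledger({"alice": 100, "bob": 0})
    with pytest.raises(RuntimeError):
        ledger.transfer("alice", "bob", 40, fail_midway=True)
    # The high-risk flaw from the AA: money must never vanish or be duplicated.
    assert ledger.total() == 100
    assert ledger.balance("alice") == 100 and ledger.balance("bob") == 0
```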

[ST3.4: 2] Leverage coverage analysis.

Testers measure the code coverage of their security tests (see [ST2.5 Include security tests in QA automation]) to identify code that isn’t being exercised and then adjust automation (see [ST3.6 Implement event-driven security testing in automation]) to incrementally improve coverage. In turn, code coverage analysis drives increased security testing depth. Standard-issue opaque-box testing tools achieve exceptionally low coverage, leaving the majority of the software under test unexplored, so relying on them alone falls short of testing best practice. Coverage analysis is easier when using standard measurements such as function coverage, line coverage, or multiple condition coverage.
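As one possible approach, security tests can be run under coverage.py (coverage run -m pytest, then coverage json) and the resulting report mined for security-relevant modules that fall below a threshold; in the sketch below, the module list and threshold are hypothetical, and the report layout assumed is coverage.py’s JSON output:

```python
# A minimal sketch of using coverage data to steer security testing: flag
# security-relevant modules whose coverage by the security test suite falls
# below a threshold, so new tests can be aimed at the gaps. The module
# prefixes and threshold are hypothetical; the report layout assumed is the
# JSON report produced by coverage.py ("coverage json").
import json

SECURITY_CRITICAL = ("auth/", "crypto/", "session/", "payments/")
THRESHOLD = 80.0  # percent


def undercovered_modules(report_path: str = "coverage.json") -> list[tuple[str, float]]:
    with open(report_path, encoding="utf-8") as handle:
        report = json.load(handle)
    gaps = []
    for filename, data in report.get("files", {}).items():
        if not filename.startswith(SECURITY_CRITICAL):
            continue
        percent = data["summary"]["percent_covered"]
        if percent < THRESHOLD:
            gaps.append((filename, percent))
    return sorted(gaps, key=lambda item: item[1])


if __name__ == "__main__":
    for filename, percent in undercovered_modules():
        print(f"{filename}: {percent:.1f}% covered by security tests")
```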

[ST3.5: 2] Begin to build and apply adversarial security tests (abuse cases).

QA begins to incorporate test cases based on abuse cases (see [AM2.1 Build attack patterns and abuse cases tied to potential attackers]) as testers move beyond verifying functionality and take on the attacker’s perspective. One way to do this is to systematically attempt to replicate incidents from the organization’s history. Abuse and misuse cases based on the attacker’s perspective can also be derived from security policies, attack intelligence, standards, and the organization’s top N attacks list (see [AM2.5 Build and maintain a top N possible attacks list]). This effort turns the corner from testing features to attempting to break the software under test.
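A minimal sketch of an incident-replay test follows; the incident (customers fetching other customers’ orders by guessing sequential IDs) and the OrderStore component are hypothetical:

```python
# A minimal sketch of an abuse-case test that replays a past (hypothetical)
# incident: sequential order-ID guessing exposed other customers' orders.
# OrderStore is a stand-in for the component that was fixed.
import pytest


class OrderStore:
    def __init__(self):
        self._orders = {1001: {"owner": "alice", "item": "laptop"},
                        1002: {"owner": "bob", "item": "phone"}}

    def get_order(self, order_id: int, requesting_user: str) -> dict:
        order = self._orders[order_id]
        if order["owner"] != requesting_user:
            raise PermissionError("order belongs to another user")
        return order


def test_idor_incident_regression():
    """Replays the abuse case: one user probing another user's sequential IDs."""
    store = OrderStore()
    assert store.get_order(1001, "alice")["item"] == "laptop"
    with pytest.raises(PermissionError):
        store.get_order(1002, "alice")  # alice probing bob's order
```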

[ST3.6: 2] Implement event-driven security testing in automation.

The SSG guides implementation of automation for continuous, event-driven application security testing. Event-driven testing implemented in ALM automation typically moves the testing closer to the conditions driving the testing requirement (whether shift left toward design or shift right toward operations), repeats the testing as often as the event is triggered as software moves through ALM, and helps ensure the right testing is executed for a given set of conditions. This might be instead of or in addition to software arriving at a gate or checkpoint and triggering a point-in-time security test. Success with this approach depends on broad use of sensors (e.g., agents, bots) that monitor engineering processes, execute contextual rules, and provide telemetry to automation that initiates the specified testing whenever event conditions are met. More mature configurations also include risk-driven conditions.
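A minimal sketch of the sensor-and-rules idea follows; the event names, contextual rules, and test job names are all hypothetical:

```python
# A minimal sketch of event-driven test selection: a sensor receives
# engineering events and contextual rules decide which security testing to
# trigger, rather than waiting for a fixed gate. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Event:
    kind: str      # e.g., "auth_code_modified", "dependency_changed", "deploy_to_prod"
    details: dict


# Contextual rules: a predicate over the event mapped to the test job to run.
RULES: list[tuple[Callable[[Event], bool], str]] = [
    (lambda e: e.kind == "auth_code_modified", "run_authn_abuse_case_suite"),
    (lambda e: e.kind == "dependency_changed", "run_sca_and_fuzz_smoke"),
    (lambda e: e.kind == "deploy_to_prod" and e.details.get("risk") == "high",
     "run_full_dast_scan"),
]


def dispatch(event: Event) -> list[str]:
    """Return the security test jobs that this event should trigger."""
    return [job for predicate, job in RULES if predicate(event)]


if __name__ == "__main__":
    print(dispatch(Event("auth_code_modified", {"repo": "payments"})))
    print(dispatch(Event("deploy_to_prod", {"risk": "high"})))
```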