Software Security Testing

The Security Testing practice is concerned with prerelease testing, including integrating security into standard quality assurance processes. The practice includes use of black-box security tools (including fuzz testing) as a smoke test in QA, risk-driven white-box testing, application of the attack model, and code coverage analysis. Security testing focuses on vulnerabilities in construction.

Security Testing Level 1

[ST1.1: 87] Ensure QA supports edge/boundary value condition testing.

The QA team goes beyond functional testing to perform basic adversarial tests, probing simple edge cases and boundary conditions; no attacker skills are required. When QA understands the value of pushing past standard functional testing using acceptable input, they begin to move slowly toward thinking like a bad guy. A discussion of boundary value testing leads naturally to the notion of an attacker probing the edges on purpose. For example, what happens when you enter the wrong password over and over?
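
A minimal sketch of such a boundary probe, written here in Python with pytest; the set_quantity function is a hypothetical unit under test that accepts quantities from 1 to 99:

    import pytest

    # Hypothetical unit under test: accepts quantities from 1 to 99 inclusive.
    def set_quantity(qty: int) -> int:
        if not 1 <= qty <= 99:
            raise ValueError("quantity out of range")
        return qty

    # Probe both edges of the valid range...
    @pytest.mark.parametrize("qty", [1, 99])
    def test_boundaries_accepted(qty):
        assert set_quantity(qty) == qty

    # ...and just past them, the way an attacker would.
    @pytest.mark.parametrize("qty", [0, 100, -1, 2**31])
    def test_out_of_range_rejected(qty):
        with pytest.raises(ValueError):
            set_quantity(qty)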

[ST1.3: 79] Drive tests with security requirements and security features.

Testers target declarative security mechanisms with tests derived from requirements and security features. For example, a tester could try to access administrative functionality as an unprivileged user or verify that a user account becomes locked after some number of failed authentication attempts. For the most part, security features can be tested in a fashion similar to other software features. Security mechanisms based on requirements such as account lockout, transaction limitations, entitlements, and so on are also tested. Of course, software security is not security software, but getting started with features is easy. New deployment models, such as cloud, might require novel test approaches.
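
For illustration, a sketch of a lockout test in Python with pytest; the AccountService class, its five-attempt threshold, and its credential check are hypothetical stand-ins for the application's real authentication interface:

    import pytest

    # Hypothetical authentication service; five failures lock the account.
    class AccountService:
        LOCKOUT_THRESHOLD = 5

        def __init__(self):
            self.failures = {}
            self.locked = set()

        def login(self, user, password):
            if user in self.locked:
                raise PermissionError("account locked")
            if password != "correct-horse":  # stand-in credential check
                self.failures[user] = self.failures.get(user, 0) + 1
                if self.failures[user] >= self.LOCKOUT_THRESHOLD:
                    self.locked.add(user)
                return False
            return True

    def test_account_locks_after_repeated_failures():
        svc = AccountService()
        for _ in range(AccountService.LOCKOUT_THRESHOLD):
            assert svc.login("alice", "wrong") is False
        # Even the correct password must be rejected once the account locks.
        with pytest.raises(PermissionError):
            svc.login("alice", "correct-horse")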

Security Testing Level 2

[ST2.1: 25] Integrate black-box security tools into the QA process.

The organization uses one or more black-box security testing tools as part of the quality assurance process. The tools are valuable because they encapsulate an attacker’s perspective, albeit generically. Tools such as IBM Security AppScan or HPE Fortify WebInspect are relevant for web applications, and fuzzing frameworks such as Synopsys Codenomicon are applicable for most network protocols. In some situations, other groups might collaborate with the SSG to apply the tools. For example, a testing team could run the tool but come to the SSG for help interpreting the results. Because of the way testing is integrated into agile development approaches, black-box tools might be used directly by the agile team. Regardless of who runs the black-box tool, the testing should be properly integrated into the QA cycle of the SSDL.
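
One possible shape for that integration, sketched in Python: a QA gate that shells out to a scanner and fails the build on high-severity findings. The webscan command, its flags, and the findings.json format are hypothetical placeholders for whatever tool the organization has licensed:

    import json
    import subprocess
    import sys

    # Hypothetical QA gate around a black-box scanner CLI ("webscan" and its
    # flags are placeholders, as is the findings.json output format).
    def run_black_box_scan(target_url):
        subprocess.run(
            ["webscan", "--target", target_url, "--output", "findings.json"],
            check=True,  # a scanner crash should fail the build loudly
        )
        with open("findings.json") as fh:
            findings = json.load(fh)
        high = [f for f in findings if f.get("severity") == "high"]
        for finding in high:
            print("HIGH:", finding.get("title"), file=sys.stderr)
        return 1 if high else 0  # nonzero exit fails the QA pipeline stage

    if __name__ == "__main__":
        sys.exit(run_black_box_scan("https://staging.example.com"))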

[ST2.4: 11] Share security results with QA.

The SSG routinely shares results from security reviews with the QA department. CI/CD makes this easier because of the way testing is integrated into the work of a cross-functional team. Over time, QA engineers learn the security mindset. Using security results to inform and evolve particular testing patterns can be a powerful mechanism leading to better security testing. This activity benefits from an engineering-focused QA function that is highly technical.

[ST2.5: 9] Include security tests in QA automation.

Security tests run alongside functional tests as part of automated regression testing. The same automation framework houses both, making security testing part of the routine. Security tests can be driven from abuse cases identified earlier in the lifecycle or derived from creative tweaks of functional tests.
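
A brief sketch of what co-housing the two kinds of tests can look like in a pytest suite; search_products is a hypothetical function standing in for application code, and the injection payloads are creative tweaks of the functional case:

    import pytest

    # Hypothetical application code; functional and security cases share a suite.
    def search_products(term):
        if any(ch in term for ch in ";'\""):
            raise ValueError("rejected input")
        return [p for p in ("anvil", "rocket") if term in p]

    def test_search_returns_matches():  # ordinary functional regression test
        assert search_products("anvil") == ["anvil"]

    @pytest.mark.parametrize("payload", [
        "' OR '1'='1",                    # classic SQL injection probe
        "anvil; DROP TABLE products --",  # stacked-query probe
    ])
    def test_search_rejects_injection(payload):  # security tweak of the same test
        with pytest.raises(ValueError):
            search_products(payload)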

[ST2.6: 10] Perform fuzz testing customized to application APIs.

Test automation engineers or agile team members customize a fuzzing framework to the organization’s APIs. They could begin from scratch or use an existing fuzzing toolkit, but customization goes beyond creating custom protocol descriptions or file format templates. The fuzzing framework has a built-in understanding of the application interfaces it calls into. Test harnesses developed explicitly for particular applications can make good places to integrate fuzz testing.
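
The sketch below shows one way such customization might look in Python: a mutation fuzzer seeded with a valid payload and aware of the fields in a hypothetical create_order API. The field names, mutation set, and expected exception type are illustrative assumptions:

    import copy
    import json
    import random

    # Hypothetical application API under test; it should reject bad input
    # gracefully rather than crash.
    def create_order(payload):
        order = json.loads(json.dumps(payload))  # simulate a parse/validate step
        if not isinstance(order.get("qty"), int) or order["qty"] < 1:
            raise ValueError("bad qty")
        return f"order:{order['sku']}:{order['qty']}"

    VALID = {"sku": "ANVIL-9", "qty": 2, "notes": "leave at door"}

    MUTATIONS = [
        lambda v: None,            # drop the value to null
        lambda v: "A" * 10_000,    # oversized string
        lambda v: -(2 ** 31),      # extreme integer
        lambda v: {"nested": v},   # unexpected structure
    ]

    def fuzz(iterations=1000):
        rng = random.Random(0)  # deterministic runs aid triage
        for _ in range(iterations):
            case = copy.deepcopy(VALID)
            field = rng.choice(list(case))
            case[field] = rng.choice(MUTATIONS)(case[field])
            try:
                create_order(case)
            except ValueError:
                pass  # graceful rejection is the expected outcome
            # any other exception propagates out of fuzz() and is a finding

    if __name__ == "__main__":
        fuzz()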

Security Testing Level 3

[ST3.3: 4] Drive tests with risk analysis results.

Testers use architecture analysis results to direct their work. For example, if architecture analysis concludes, “the security of the system hinges on the transactions being atomic and not being interrupted partway through,” then torn transactions will become a primary target in adversarial testing. Adversarial tests like these can be developed according to risk profile, with high-risk flaws tested first.
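
As a sketch, an adversarial test for exactly that torn-transaction risk; the Ledger class, its fault-injection hook, and the conservation invariant are hypothetical stand-ins for the system under analysis:

    # Deliberately non-atomic transfer, modeling the flaw the risk analysis
    # flagged; the fault hook lets the test interrupt it partway through.
    class Ledger:
        def __init__(self):
            self.balances = {"a": 100, "b": 0}

        def transfer(self, src, dst, amount, fault=None):
            self.balances[src] -= amount
            if fault:
                fault()  # simulated crash between the debit and the credit
            self.balances[dst] += amount

    def test_torn_transfer_conserves_total_balance():
        ledger = Ledger()

        def crash():
            raise RuntimeError("interrupted mid-transaction")

        try:
            ledger.transfer("a", "b", 40, fault=crash)
        except RuntimeError:
            pass
        # Invariant from the risk analysis: money is neither created nor lost.
        # This assertion fails against the torn implementation above, which is
        # exactly the kind of finding the test is meant to surface.
        assert sum(ledger.balances.values()) == 100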

[ST3.4: 3] Leverage coverage analysis.

Testers measure the code coverage of their security tests (see [ST2.5 Include security tests in QA automation]) to identify code that isn’t being exercised. Code coverage analysis drives increased security testing depth. Standard-issue black-box testing tools achieve exceptionally low coverage, leaving a majority of the software under test unexplored. Don’t let this happen to your tests. Using standard measurements for coverage such as function coverage, line coverage, or multiple condition coverage is fine.
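
A minimal sketch using the coverage.py API to measure what the security suite exercises; the myapp package name and tests/security path are placeholders, and pytest is assumed as the runner for the tests from [ST2.5]:

    import coverage
    import pytest

    # "myapp" and "tests/security" stand in for the application package and
    # the security test suite.
    cov = coverage.Coverage(source=["myapp"])
    cov.start()
    pytest.main(["tests/security"])  # run the security tests under measurement
    cov.stop()
    cov.save()
    # Lines never executed by any security test mark unexplored attack surface.
    cov.report(show_missing=True)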

[ST3.5: 4] Begin to build and apply adversarial security tests (abuse cases).

Testing begins to incorporate test cases based on abuse cases (see [AM2.1 Build attack patterns and abuse cases tied to potential attackers]). Testers move beyond verifying functionality and take on the attacker’s perspective. For example, testers might systematically attempt to replicate incidents from the organization’s history. Abuse and misuse cases based on the attacker’s perspective can also be driven from security policies, attack intelligence, and standards. This turns the corner from testing features to attempting to break the software under test.
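
For example, a sketch of an abuse-case test modeled on a hypothetical past incident in which an attacker ordered a negative quantity to generate a refund; the checkout function is an illustrative stand-in for real application logic:

    import pytest

    # Hypothetical checkout logic; the abuse case replays an incident pattern
    # in which a negative quantity turned a purchase into a payout.
    def checkout(unit_price, qty):
        if qty < 1:
            raise ValueError("quantity must be positive")
        return unit_price * qty

    def test_negative_quantity_cannot_generate_refund():
        with pytest.raises(ValueError):
            checkout(unit_price=500, qty=-3)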