Conduct Security Control Testing- Part 2
This page continues the Conduct Security Control Testing tutorial. In this part, we'll look at several control testing techniques to help you understand the different aspects of control testing. You can see the previous section here.
Reviewing various security logs on a regular basis is a critical step in security control testing.
Unfortunately, in many organizations log reviews happen only after an incident has already occurred.
In log review, the activities of authorized users are monitored. Reviewing security audit logs within an IT system is one of the easiest ways to verify that access control mechanisms are performing effectively, because IT systems can log everything that occurs on the system (such as access attempts and authorizations).
As a result, the protection of log data itself is an important security control. If the integrity of the log data is lost, the log review mechanism will produce erroneous results.
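As a simple illustration of regular log review, the sketch below scans audit log lines for repeated failed logins. The log format, field names, and threshold are all assumptions for the example, not the output of any real system.

```python
import re
from collections import Counter

# Hypothetical audit log lines; the format is an assumption for this sketch.
LOG_LINES = [
    "2024-05-01T10:00:01 LOGIN_FAIL user=alice src=10.0.0.5",
    "2024-05-01T10:00:03 LOGIN_FAIL user=alice src=10.0.0.5",
    "2024-05-01T10:00:05 LOGIN_FAIL user=alice src=10.0.0.5",
    "2024-05-01T10:01:10 LOGIN_OK   user=bob   src=10.0.0.9",
]

def failed_login_counts(lines):
    """Count failed login attempts per user from audit log lines."""
    counts = Counter()
    for line in lines:
        m = re.search(r"LOGIN_FAIL\s+user=(\w+)", line)
        if m:
            counts[m.group(1)] += 1
    return counts

def flag_suspicious(counts, threshold=3):
    """Return users whose failure count meets the review threshold."""
    return sorted(u for u, n in counts.items() if n >= threshold)
```

Automating a check like this turns log review from an after-the-fact activity into a routine control, which is the point the text makes above.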
Synthetic Transactions
Synthetic transactions are built with scripts or tools that simulate activities routinely performed in an application.
These synthetic transactions can be automated to run on a periodic basis to ensure the application is still performing as expected. For example, a tool may be used to regularly perform a series of scripted steps on an e-commerce website to measure performance, identify impending performance issues, and simulate the user experience.
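The idea above can be sketched as a scripted transaction that times each step. The step names and the latency budget are illustrative; a real synthetic monitor would drive HTTP requests against the live e-commerce site rather than calling `time.sleep`.

```python
import time

def checkout_transaction():
    """Simulated scripted steps for a hypothetical e-commerce checkout."""
    steps = ["load_home", "add_to_cart", "checkout"]
    timings = {}
    for step in steps:
        start = time.perf_counter()
        time.sleep(0.001)  # stand-in for the real request to the application
        timings[step] = time.perf_counter() - start
    return timings

def within_sla(timings, max_seconds=1.0):
    """Flag the transaction if any step exceeds the latency budget."""
    return all(t <= max_seconds for t in timings.values())
```

Run on a schedule, a script like this measures performance, surfaces impending performance issues, and approximates the user experience, as described above.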
Today, reachability is the preferred metric for organizations that focus on customer experience; in the past, system uptime was the preferred metric.
Other key metrics for applications are correct processing and transaction latency (the length of time it takes for specific types of transactions to complete).
Code Review and Testing (peer review)
The application development lifecycle must include code review and testing for security controls. Code review and testing involve systematically examining application source code to identify bugs, mistakes, inefficiencies, and security vulnerabilities in software programs. In this sense, code review is the foundation of a software assessment program.
A code review can be accomplished either manually, by carefully examining code changes visually, or with automated code review software (such as IBM AppScan Source).
Different types of code review and testing techniques include:
- Pair programming: Pair (or peer) programming is a technique commonly used in agile software development and extreme programming, in which two developers work together and alternate between writing and reviewing code, line by line.
- Lightweight code review: Often performed as part of the development process, consisting of informal walkthroughs and/or over-the-shoulder reviews.
- Formal inspections: Structured processes used during software development to identify defects.
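To make the automated side of code review concrete, here is a minimal sketch of a pattern-based source check. The two rules are illustrative assumptions; commercial tools such as AppScan Source use far deeper analysis than regular expressions.

```python
import re

# Illustrative review rules; a real static analyzer does much more than this.
RULES = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"]"),
    "eval of input": re.compile(r"\beval\("),
}

def review_source(source):
    """Return (line_number, rule_name) findings for each matched rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

A check like this can run on every commit, complementing the manual techniques listed above rather than replacing them.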
Misuse Case Testing
Software and systems can both be tested for use other than their intended purpose; this is known as misuse case testing.
Misuse case testing is the opposite of use case testing (in which expected behavior of a system or application is defined and tested).
A common technique used in misuse case testing is fuzzing. Fuzzing involves the use of automated tools that produce hundreds of combinations of input strings, which are fed to a program's data input fields in order to elicit unexpected behavior.
Tools such as HP WebInspect and IBM AppScan have built-in fuzzing and script injection capabilities that are effective at identifying script injection vulnerabilities in software applications.
Test Coverage Analysis
Test coverage analysis attempts to identify the degree to which code testing applies to the entire application. Types of test coverage include:
- Manual Testing: Testing is performed by hand, without automation.
- Automated Testing: A script performs a set of actions.
- Black box testing: The tester has no prior knowledge of the environment being tested.
- White box testing: The tester has full knowledge before testing.
- Dynamic Testing: The system being tested is monitored during the test.
- Static Testing: The system being tested is not monitored during the test.
- Structural Testing: This can include statement, decision, condition, loop, and data flow coverage.
- Functional Testing: This includes normal and anti-normal tests of the reaction of a system or software.
- Negative Testing: This test uses the system or software with invalid or harmful data, and verifies that the system responds appropriately.
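Negative testing, the last item above, is easy to sketch: feed invalid data to a validation routine and verify it rejects the input cleanly instead of crashing. The function and its rules are illustrative assumptions.

```python
def register_age(value):
    """Accept an integer age in [0, 130]; reject everything else."""
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 0 <= value <= 130

# Invalid and harmful inputs a negative test would throw at the routine.
NEGATIVE_CASES = [-1, 200, "42", None, 3.5, True]

def run_negative_tests():
    """All invalid inputs must be rejected without raising an exception."""
    return all(register_age(case) is False for case in NEGATIVE_CASES)
```

The point is that the system "responds appropriately": every bad input is refused, and none of them causes an unhandled error.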
Interface Testing
Interface testing is primarily concerned with appropriate functionality being exposed across all the ways users can interact with the application. Its purpose is to ensure that security is uniformly applied across the various interfaces:
- Server interfaces: These include the hardware, software, and networking infrastructure that support the server.
- Application internal interfaces: These include plug-ins, error handling, and more.
- Application external interfaces: These can be a web browser or operating system.
Examples of interfaces tested include:
- Application programming interfaces (APIs)
- Web services
- Transaction processing gateways
- Physical interfaces, such as keypads, keyboard/mouse/display
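For an API of the kind listed above, an interface test checks that the same input validation applies no matter which interface delivers the request. The JSON handler below is a hypothetical example, not a real service's API.

```python
import json

def handle_transfer(raw_body):
    """Hypothetical JSON API handler that enforces basic input checks."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return {"status": 400, "error": "malformed JSON"}
    amount = body.get("amount")
    # The same rule must hold whether the request came from a web form,
    # a direct API call, or a transaction processing gateway.
    if not isinstance(amount, (int, float)) or isinstance(amount, bool) \
            or amount <= 0:
        return {"status": 400, "error": "invalid amount"}
    return {"status": 200}
```

An interface test suite would exercise this handler through each entry point and assert that malformed and malicious inputs are rejected uniformly.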