Sunday, June 1, 2008

Software Testing - Continued

1.3.2 Cyclomatic Complexity

As we have seen before, McCabe’s cyclomatic complexity is a software metric that offers an indication of the logical complexity of a program. When used in the context of the basis path testing approach, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound on the number of tests that must be conducted to ensure all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. A set of independent paths for the example flow graph is:

Path 1: 1-11

Path 2: 1-2-3-4-5-10-1-11

Path 3: 1-2-3-6-8-9-10-11
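As a small illustration of how the metric itself is computed, the sketch below applies the formula V(G) = E - N + 2 (edges minus nodes plus two) to an edge list. The particular flow graph is an assumption chosen to be consistent with the paths listed above; it is not a figure reproduced from this text.

```python
# Minimal sketch: computing cyclomatic complexity as V(G) = E - N + 2
# from an edge list. The flow graph below is assumed for illustration.

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2, where E is the edge count and N the node count."""
    nodes = {node for edge in edges for node in edge}
    return len(edges) - len(nodes) + 2

flow_graph_edges = [
    (1, 2), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
    (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1), (1, 11),
]

# 13 edges and 11 nodes give V(G) = 4, the upper bound on the number of
# basis-set tests for this assumed graph.
print(cyclomatic_complexity(flow_graph_edges))
```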

1.3.3 Deriving Test Cases

The basis path testing method can be applied to a detailed procedural design or to source code. Basis path testing can be seen as a set of steps.

  • Using the design or code as the basis, draw an appropriate flow graph.
  • Determine the cyclomatic complexity of the resultant flow graph.
  • Determine a basis set of linearly independent paths.
  • Prepare test cases that will force execution of each path in the basis set.

Data should be selected so that conditions at the predicate nodes are tested. Each test case is executed and compared with the expected result. Once all test cases have been completed, the tester can be sure that all statements in the program have been executed at least once.
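To make the steps concrete, here is a minimal sketch with a hypothetical routine (not taken from this text): the routine contains two predicate nodes, so V(G) = 3, the basis set holds three independent paths, and one test case forces execution of each path.

```python
# Hypothetical routine with two predicate nodes, so V(G) = 3.
def classify(a, b):
    if a > b:          # predicate node 1
        return "greater"
    elif a == b:       # predicate node 2
        return "equal"
    return "less"

# (inputs, expected result) pairs: each test case is executed and the
# actual output is compared with the expected result.
basis_path_tests = [
    ((5, 3), "greater"),  # path through the first branch
    ((4, 4), "equal"),    # path through the second branch
    ((1, 9), "less"),     # path through the final branch
]

for args, expected in basis_path_tests:
    assert classify(*args) == expected
```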

1.3.4 Graph Matrices

The procedure involved in producing the flow graph and establishing a set of basis paths can be mechanized. To produce a software tool that helps in basis path testing, a data structure called a graph matrix can be quite helpful. A graph matrix is a square matrix whose size equals the number of nodes in the flow graph, and whose entries correspond to the edges between nodes. A basic flow graph and its associated graph matrix are shown below.

Node |  1  |  2  |  3  |  4  |  5
-----+-----+-----+-----+-----+-----
  1  |     |  a  |     |     |
  2  |     |     |  b  |     |
  3  |     |     |     | d,c |  f
  4  |     |     |     |     |
  5  |     |  e  |     |  g  |

Graph Matrix

In the graph and matrix, each node is represented by a number and each edge by a letter. A letter is entered in the matrix cell corresponding to the connection between two nodes. By adding a link weight to each matrix entry, the graph matrix can be used to examine program control structure during testing. In its simplest form the link weight is 1 or 0. Link weights can be given more interesting properties:

  • The probability that a link will be executed.
  • The processing time expended during traversal of a link.
  • The memory required during traversal of a link.

When the link weight is simply 1 (a connection exists) or 0 (no connection), the graph matrix is called a connection matrix.

                   Connection to node
Node |  1  |  2  |  3  |  4  |  5  | Connections
-----+-----+-----+-----+-----+-----+------------
  1  |     |  1  |     |     |     | 1 - 1 = 0
  2  |     |     |  1  |     |     | 1 - 1 = 0
  3  |     |     |     | 1,1 |  1  | 3 - 1 = 2
  4  |     |     |     |     |     | 0
  5  |     |  1  |     |  1  |     | 2 - 1 = 1

Cyclomatic complexity is 2+1=3

Connection Matrix
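As a rough sketch (with the node and edge layout assumed from the tables above), the graph matrix can be held as a nested dictionary that maps each node to the labelled edges leaving it. Replacing each label with a link weight of 1 gives the connection matrix, and the connections column for a row is the number of links in it minus one.

```python
# Graph matrix as a nested dict: node -> {target node: edge labels}.
graph_matrix = {
    1: {2: ["a"]},
    2: {3: ["b"]},
    3: {4: ["d", "c"], 5: ["f"]},
    4: {},
    5: {2: ["e"], 4: ["g"]},
}

# Replacing each edge label with a link weight of 1 gives the connection
# matrix; the "connections" entry for a row is the number of links in it
# minus one (never below zero).
for node, row in graph_matrix.items():
    links = sum(len(labels) for labels in row.values())
    print(node, max(links - 1, 0))
```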

1.4 Control Structure Testing

Although basis path testing is simple and highly effective, it is not enough in itself. Next we consider variations on control structure testing that broaden testing coverage and improve the quality of white box testing.

1.4.1 Condition Testing

Condition testing is a test case design approach that exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression, possibly preceded by one NOT operator. A relational expression takes the form E1 <relational-operator> E2, where E1 and E2 are arithmetic expressions and the relational operator is one of <, <=, =, !=, > or >=. A compound condition is composed of two or more simple conditions, Boolean operators, and parentheses.

The condition testing method concentrates on testing each condition in a program. The purpose of condition testing is to detect not only errors in the conditions of a program but also other errors in the program. A number of condition testing approaches have been identified. Branch testing is the most basic: for a compound condition C, the true and false branches of C and each simple condition in C must be executed at least once.

Domain testing requires three or four tests to be produced for a relational expression. For a relational expression of the form E1 <relational-operator> E2, three tests are required, making the value of E1 greater than, equal to, and less than the value of E2, respectively.
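As a hedged illustration (the function and data values are invented for this example), the tests below make the left-hand side of the relational expression total > threshold greater than, equal to, and less than the right-hand side.

```python
# Domain-testing style test data for the condition: total > threshold.
def is_discount_eligible(total, threshold):
    return total > threshold

domain_tests = [
    ((101, 100), True),   # total greater than threshold
    ((100, 100), False),  # total equal to threshold
    ((99, 100), False),   # total less than threshold
]

for args, expected in domain_tests:
    assert is_discount_eligible(*args) == expected
```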

1.4.2 Data Flow Testing

The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined. For data flow testing it is assumed that each statement in a program is assigned a unique statement number and that each function does not alter its parameters or global variables. For a statement with S as its statement number,

DEF(S) = {X| statement S contains a definition of X}

USE(S) = {X| statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of statement S. The definition of a variable X at statement S is live at statement S’ if there exists a path from statement S to S’ that contains no other definition of X.

A definition-use chain (or DU chain) of variable X is of the form [X, S, S’], where S and S’ are statement numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S is live at statement S’.

One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program containing nested if and loop statements.
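For instance, the short hypothetical fragment below (statement numbers and variables invented, not taken from this text) has one DEF and USE set per numbered statement and gives rise to the DU chains listed at the end; covering each chain at least once forces execution of both branches of the condition.

```python
# S1: x = int(input())   DEF(S1) = {x}
# S2: y = 0              DEF(S2) = {y}
# S3: if x > 0:          USE(S3) = {x}   (DEF set empty for an if statement)
# S4:     y = x * 2      DEF(S4) = {y},  USE(S4) = {x}
# S5: print(y)           USE(S5) = {y}
#
# Covering every DU chain at least once forces both branches of S3.
du_chains = [
    ("x", "S1", "S3"),  # x defined at S1 is live at and used by S3
    ("x", "S1", "S4"),  # x defined at S1 is live at and used by S4
    ("y", "S2", "S5"),  # reaches S5 along the path where S4 is skipped
    ("y", "S4", "S5"),  # reaches S5 along the path where S4 executes
]
print(du_chains)
```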

1.4.3 Loop Testing

Loops are the basis of most algorithms implemented in software. However, we often give them little consideration when conducting testing. Loop testing is a white box testing approach that concentrates on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

Simple loops: The following set of tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop (a small sketch follows the list):

  • Skip the loop entirely.
  • Only one pass through the loop.
  • Two passes through the loop.
  • m passes through the loop, where m < n.
  • n-1, n, n+1 passes through the loop.
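A minimal sketch, assuming a hypothetical loop whose maximum number of allowable passes is n = 10: the chosen pass counts skip the loop, run it once, twice, a typical m times, and probe n - 1, n and an attempted n + 1 passes.

```python
def sum_first(values, limit):
    total = 0
    for value in values[:limit]:  # simple loop bounded by `limit` passes
        total += value
    return total

n = 10
pass_counts = [0, 1, 2, 5, n - 1, n, n + 1]

for passes in pass_counts:
    data = list(range(passes))          # supplies `passes` candidate items
    print(passes, sum_first(data, n))   # the loop never runs more than n times
```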

1.5 Black Box Testing

Black box testing approaches concentrate on the functional requirements of the software. Black box testing allows the software engineer to derive groups of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques; it is a complementary approach that is likely to uncover a different class of errors than the white box approaches.

Black box testing tries to find errors in the following categories:

(1) incorrect or missing functions, (2) interface errors, (3) errors in data structures or external database access, (4) performance errors, and (5) initialization and termination errors.

By applying black box approaches we produce a set of test cases that satisfy the following criteria: (1) test cases that reduce the number of additional test cases that must be designed to achieve reasonable testing, and (2) test cases that tell us something about the presence or absence of classes of errors.

1.5.1 Equivalence Partitioning

Equivalence partitioning is a black box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case single-handedly uncovers a class of errors that might otherwise require many test cases to be executed before the general error is observed. Equivalence partitioning tries to define a test case that uncovers classes of errors.

Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for the input condition. Equivalence classes can be defined according to the following guidelines:

If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

If an input condition needs a specific value, one valid and two invalid equivalence classes are defined.

If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.

If an input condition is Boolean, one valid and one invalid class are defined.
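For example (a hypothetical month field with a valid range of 1..12 is assumed, not taken from this text), the first guideline yields one valid and two invalid equivalence classes, each represented here by a single test value.

```python
def is_valid_month(month):
    return 1 <= month <= 12   # module under test

equivalence_classes = {
    "valid: within range 1..12": 6,
    "invalid: below the range": 0,
    "invalid: above the range": 13,
}

for description, value in equivalence_classes.items():
    print(description, "->", value, is_valid_month(value))
```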

1.5.2 Boundary Value Analysis

A great many errors occur at the boundaries of the input domain, and for this reason boundary value analysis (BVA) was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA selects test cases at the edges of the class, and it produces test cases from the output domain as well.
Guidelines for BVA are similar to those for equivalence partitioning (a short sketch follows the list):

  • If an input condition specifies a range bounded by values a and b, test cases should be produced with values a and b, just above and just below a and b, respectively.
  • If an input condition specifies a number of values, test cases should be produced to exercise the minimum and maximum values, as well as values just above and just below them.
  • Apply guidelines above to output conditions.
  • If internal program data structures have prescribed boundaries, produce test cases to exercise that data structure at its boundary.
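Continuing the assumed month example with bounds a = 1 and b = 12, BVA produces test values at each bound and just inside and just outside it.

```python
def is_valid_month(month):
    return 1 <= month <= 12

a, b = 1, 12
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]

for value in boundary_values:
    print(value, is_valid_month(value))
```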

1.5.3 Cause-Effect Graphing Techniques

In too many instances, an attempt to translate a policy or procedure stated in a natural language into software leads to frustration and problems. Cause-effect graphing is a test case design approach that offers a concise depiction of logical conditions and associated actions. The approach has four stages:

  • Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
  • A cause-effect graph is created.
  • The graph is converted into a decision table.
  • Decision table rules are converted to test cases.

A simplified version of cause-effect graph symbology is shown below. The left-hand column of the figure gives the various logical associations among causes and effects. The dashed notation in the right-hand column indicates potential constraining associations that might apply to either causes or effects.
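As a hedged sketch (the causes, effects and module are invented for illustration), a cause-effect graph for a simple login rule can be reduced to the decision table below, and each rule then becomes one test case.

```python
# Causes: C1 = account exists, C2 = password correct
# Effects: "grant access", "show error"
decision_table = [
    # (C1,   C2,    expected effect)
    (True,  True,  "grant access"),
    (True,  False, "show error"),
    (False, True,  "show error"),
    (False, False, "show error"),
]

def login(account_exists, password_correct):
    return "grant access" if account_exists and password_correct else "show error"

for c1, c2, expected in decision_table:
    assert login(c1, c2) == expected
```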

1.5.4 Comparison Testing

Under certain situations the reliability of the software is critical. In these situations redundant software and hardware are often used to ensure continuing functionality. When redundant software is produced, separate software engineering teams develop independent versions of an application using the same specification. In this context each version can be tested with the same test data to ensure they all produce the same output. These independent versions are the basis of a black box testing technique known as comparison testing. Other black box testing techniques are applied to the separate versions; if they produce the same output, the versions are assumed to be correct. If the outputs differ, the versions are examined further.
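A minimal sketch of the idea, with two trivially different (and entirely hypothetical) implementations of the same specification run against the same test data; any inputs for which the outputs differ are flagged for further examination.

```python
def version_a(x):
    return x * x

def version_b(x):
    return x ** 2

test_data = [0, 1, 2, 3, 100]
mismatches = [x for x in test_data if version_a(x) != version_b(x)]
print("outputs differ for:", mismatches)   # empty list means agreement
```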

1.6 Testing for Real-Time Systems

The specific characteristics of real-time systems make them a major challenge when testing. The time-dependent nature of real-time applications adds a new and difficult element to testing. Not only must the developer consider black box and white box testing, but also the timing of the data and the parallelism of the tasks. In many situations, test data for a real-time system may produce errors when the system is in one state but not in others. Comprehensive test case design methods for real-time systems have not yet evolved. However, a four-stage approach can be put forward:

Task testing: The first stage is to test each task of the real-time software independently.

Behavioural testing: Using system models produced with CASE tools, simulate the behaviour of the real-time system and examine its actions as a result of external events.

Intertask testing: Once errors in individual tasks and in system behaviour have been isolated, testing moves on to time-related errors between interacting tasks.

Systems testing: Software and hardware are integrated and a full set of system tests is conducted to uncover errors at the software and hardware interface.

1.7 Automated Testing Tools

As testing can account for 40% of all effort expended on the software development process, tools that reduce the time involved are very useful. In response, various researchers have produced sets of testing tools.
Miller described various categories for test tools:

Static analyzers: These program analysis tools support “proving” of static allegations - weak statements about program structure and format.

Code auditors: These special-purpose filters are used to examine the quality of software and ensure that it meets minimum coding standards.

Assertion processors: These systems tell whether programmer-supplied assertions about the program are actually met.

Test data generators: These processors assist the user with selecting the appropriate test data.

Output comparators: This tool allows us to compare one set of outputs from a program with another set to determine the differences between them.

Dunn also identified additional categories of automated tools including:

Symbolic execution systems: This tool performs program testing using algebraic input, instead of numeric data values.

Environmental simulators: This tool is a specialized computer-based system that allows the tester to model the external environment of real-time software and simulate operating conditions.

Data flow analyzers: This tool tracks the flow of data through the system and tries to identify data related errors.
