• To improve quality.
As computers and software are used in critical applications, the outcome of a bug can be
severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane
crashes, allowed space shuttle missions to go awry, halted trading on the stock market,
and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug
has given birth to a cottage industry of consultants and programming tools dedicated to
making sure the modern world doesn't come to a screeching halt on the first day of the
next century. In a computerized, embedded world, the quality and reliability of software
are a matter of life and death.
Quality means conformance to the specified design requirements. Being correct, the
minimum requirement of quality, means performing as required under specified
circumstances. Debugging, a narrow view of software testing, is performed heavily by
programmers to find design defects. The imperfection of human nature makes it
almost impossible to get a moderately complex program correct the first time. Finding
the problems and getting them fixed is the purpose of debugging in the programming phase.
Testing can serve as a metric. It is heavily used as a tool in the V&V process. Testers can
make claims based on interpretations of the testing results: either the product
works under certain situations, or it does not. We can also compare the quality
of different products built to the same specification, based on results from the same
test.
We cannot test quality directly, but we can test related factors to make quality visible.
Quality has three sets of factors -- functionality, engineering, and adaptability. These
three sets of factors can be thought of as dimensions in the software quality space. Each
dimension may be broken down into its component factors and considerations at
successively lower levels of detail. Table 1 illustrates some of the most frequently cited
quality considerations.
Good testing provides measures for all relevant factors. The importance of any particular
factor varies from application to application. Any system where human lives are at stake
must place extreme emphasis on reliability and integrity. In the typical business system
usability and maintainability are the key factors, while for a one-time scientific program
neither may be significant. Our testing, to be fully effective, must be geared to measuring
each relevant factor and thus forcing quality to become tangible and visible.
Tests whose purpose is to validate that the product works are called clean tests, or positive
tests. Their drawback is that they can only validate that the software works for the specified
test cases; a finite number of tests cannot validate that the software works for all
situations. On the contrary, a single failed test is sufficient to show that the
software does not work. Dirty tests, or negative tests, refer to tests that aim at
breaking the software, or showing that it does not work. A piece of software must have
sufficient exception-handling capability to survive a significant level of dirty tests.
A testable design is a design that can be easily validated, falsified and maintained.
Because testing is a rigorous effort and requires significant time and cost, design for
testability is also an important design rule for software development.
Software reliability has important relations with many aspects of software, including its
structure and the amount of testing it has been subjected to. Based on an operational
profile (an estimate of the relative frequency of use of various inputs to the program),
testing can serve as a statistical sampling method to gain failure data for reliability
estimation.
Software testing is not mature. It still remains an art, because we still cannot make it a
science. We are still using the same testing techniques invented 20-30 years ago, some of
which are crafted methods or heuristics rather than good engineering methods. Software
testing can be costly, but not testing software is even more expensive, especially where
human lives are at stake. Solving the software-testing problem is no easier than
solving the Turing halting problem. We can never be sure that a piece of software is
correct. We can never be sure that the specifications are correct. No verification system
can verify every correct program. We can never be certain that a verification system is
correct either.
This method of testing can be applied to all levels of software testing: unit, integration,
functional, system and acceptance. It typically comprises most if not all testing at the higher
levels, but can also dominate unit testing.
Structure
Example
A technical support company writes a decision table to diagnose printer problems based
upon symptoms described to them over the phone from their clients.
Printer troubleshooter

                                        Rules
                                        1  2  3  4  5  6  7  8
Conditions
  Printer does not print                Y  Y  Y  Y  N  N  N  N
  A red light is flashing               Y  Y  N  N  Y  Y  N  N
  Printer is unrecognized               Y  N  Y  N  Y  N  Y  N
Actions
  Check the power cable                       X
  Check the printer-computer cable      X     X
  Ensure printer software is installed  X     X     X     X
  Check/replace ink                     X  X        X  X
  Check for paper jam                      X     X
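The table above can be executed directly as a lookup from symptom values to recommended actions. Below is a minimal Python sketch; the rule encoding mirrors the table, and the function name diagnose is invented for illustration:

```python
# Decision-table lookup for the printer troubleshooter.
# Each rule maps a (does_not_print, red_light_flashing, unrecognized)
# triple of booleans to the set of actions marked 'X' in that column.

RULES = {
    (True,  True,  True):  {"Check the printer-computer cable",
                            "Ensure printer software is installed",
                            "Check/replace ink"},
    (True,  True,  False): {"Check/replace ink", "Check for paper jam"},
    (True,  False, True):  {"Check the power cable",
                            "Check the printer-computer cable",
                            "Ensure printer software is installed"},
    (True,  False, False): {"Check for paper jam"},
    (False, True,  True):  {"Ensure printer software is installed",
                            "Check/replace ink"},
    (False, True,  False): {"Check/replace ink"},
    (False, False, True):  {"Ensure printer software is installed"},
    (False, False, False): set(),   # no symptoms, nothing to do
}

def diagnose(does_not_print, red_light_flashing, unrecognized):
    """Return the set of troubleshooting actions for the given symptoms."""
    return RULES[(does_not_print, red_light_flashing, unrecognized)]
```

Each of the eight columns of the decision table becomes one entry in the dictionary, so the code and the table can be checked against each other column by column.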
EQUIVALENCE PARTITIONING
2. Generate the test cases using the equivalence classes identified in the previous step.
This is performed by writing test cases covering all the valid equivalence classes. Then a
test case is written for each invalid equivalence class so that no test contains more than
one invalid class. This is to ensure that no two invalid classes mask each other.
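As a small illustration of this rule, suppose a component accepts an integer age in the range 1 to 99 (the field and its range are invented for this example). Equivalence partitioning gives one valid class and two invalid classes, and each invalid class gets its own test case so that invalid classes never mask each other:

```python
# Equivalence partitioning for a hypothetical 'age' input, valid range 1..99.
# Classes: valid [1, 99], invalid (< 1), invalid (> 99).

def partition(age):
    """Classify an input value into its equivalence class."""
    if age < 1:
        return "invalid_low"
    if age > 99:
        return "invalid_high"
    return "valid"

# One test case per class; each invalid class gets its own test case.
test_cases = {
    "valid": 50,          # any representative of the valid class
    "invalid_low": 0,     # representative of the < 1 class
    "invalid_high": 150,  # representative of the > 99 class
}

for expected_class, value in test_cases.items():
    assert partition(value) == expected_class
```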
Boundary-value analysis
The expected input and output values should be extracted from the component
specification. The input and output values to the software component are then grouped
into sets with identifiable boundaries. Each set, or partition, contains values that are
expected to be processed by the component in the same way. Partitioning of test data
ranges is explained in the equivalence partitioning test case design technique. It is
important to consider both valid and invalid partitions when designing test cases.
Consider a program with two input variables x and y. These input variables have
specified boundaries as:
a <= x <= b
c <= y <= d
The boundary value analysis test cases for our program with two input variables (x and
y), each of which may take any value from 100 to 300, are: (200,100), (200,101), (200,200),
(200,299), (200,300), (100,200), (101,200), (299,200) and (300,200). This input domain is
shown in Fig. 8.5. Each dot represents a test case and the inner rectangle is the domain of
legitimate inputs. Thus, for a program of n variables, boundary value analysis yields 4n +
1 test cases.
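The 4n + 1 cases can be generated mechanically: hold every variable at its nominal value while one variable takes its min, min+1, max-1 and max values. A Python sketch (the function name bva_cases is invented):

```python
# Boundary value analysis: for each variable take min, min+1, max-1, max
# while holding every other variable at its nominal (mid-range) value.
# This yields 4n + 1 distinct cases (the all-nominal case is shared).

def bva_cases(bounds):
    """bounds: list of (min, max) per variable. Returns sorted test tuples."""
    nominal = tuple((lo + hi) // 2 for lo, hi in bounds)
    cases = {nominal}
    for i, (lo, hi) in enumerate(bounds):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

cases = bva_cases([(100, 300), (100, 300)])
assert len(cases) == 9            # 4*2 + 1, matching the list in the text
assert (200, 100) in cases and (300, 200) in cases
```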
Robustness testing
It is simply an extension of boundary value analysis. Here, we would like to see what
happens when the extreme values are exceeded: with a value slightly greater than the
maximum, and a value slightly less than the minimum. That is, we want to go outside the
legitimate boundary of the input domain.
There are four additional test cases which lie outside the legitimate input domain. Hence
the total number of test cases in robustness testing is 6n+1, where n is the number of input
variables. So, the 13 test cases are: (200,99), (200,100), (200,101), (200,200), (200,299),
(200,300), (200,301), (99,200), (100,200), (101,200), (299,200), (300,200), (301,200).
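The same generation idea extends to robustness testing by adding min-1 and max+1 for each variable. A sketch (the function name is invented):

```python
# Robustness testing: boundary value analysis plus one value just below
# the minimum and one just above the maximum for each variable,
# giving 6n + 1 cases.

def robustness_cases(bounds):
    """bounds: list of (min, max) per variable. Returns sorted test tuples."""
    nominal = tuple((lo + hi) // 2 for lo, hi in bounds)
    cases = {nominal}
    for i, (lo, hi) in enumerate(bounds):
        for v in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

cases = robustness_cases([(100, 300), (100, 300)])
assert len(cases) == 13           # 6*2 + 1, the 13 cases listed in the text
assert (200, 99) in cases and (301, 200) in cases
```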
Worst-case testing
Worst-case testing rejects the “single fault” assumption of reliability theory and asks
what happens when more than one variable has an extreme value. In electronic circuit
analysis, this is called “worst case analysis”. It is more thorough in the sense that
boundary value test cases are a proper subset of worst-case test cases, but it requires more
effort. Worst-case testing for a function of n variables generates 5^n test cases, as opposed
to 4n+1 test cases for boundary value analysis. Our two-variable example will have
5^2 = 25 test cases.
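The 5^n worst-case test cases are simply the Cartesian product of the five boundary values of each variable. A sketch (the function name is invented):

```python
from itertools import product

# Worst-case testing: take the five boundary values (min, min+1, nominal,
# max-1, max) for every variable and form their Cartesian product,
# giving 5**n test cases.

def worst_case_cases(bounds):
    """bounds: list of (min, max) per variable. Returns test tuples."""
    values = [
        (lo, lo + 1, (lo + hi) // 2, hi - 1, hi)
        for lo, hi in bounds
    ]
    return list(product(*values))

cases = worst_case_cases([(100, 300), (100, 300)])
assert len(cases) == 25           # 5**2 for the two-variable example
assert (100, 100) in cases and (300, 300) in cases
```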
Cause Effect Graphing Technique
Techniques such as boundary value analysis and equivalence partitioning consider single
input conditions only and do not explore combinations of input circumstances;
cause-effect graphing addresses this. The steps are:
1. Causes & effects in the specifications are identified. A cause is a distinct input
condition or an equivalence class of input conditions. An effect is an output condition or
a system transformation.
2. The semantic content of the specification is analysed and transformed into a Boolean
graph linking the causes and effects.
3. Constraints are imposed on the graph.
4. The graph is converted into a limited-entry decision table; each column in the table
represents a test case.
5. The columns in the decision table are converted into test cases.
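A tiny sketch of steps 2 and 4, assuming two invented causes and one effect:

```python
from itertools import product

# Hypothetical cause-effect graph: causes c1 ("valid user id") and
# c2 ("valid password") combine through AND into effect e1 ("login ok").

def effect_login_ok(c1, c2):
    """Boolean graph for this tiny example: e1 = c1 AND c2."""
    return c1 and c2

# Enumerating cause combinations yields the limited-entry decision table;
# each (c1, c2, e1) row corresponds to one column, i.e. one test case.
table = [(c1, c2, effect_login_ok(c1, c2))
         for c1, c2 in product([True, False], repeat=2)]
```

In a realistic specification, constraints (step 3) would prune impossible cause combinations before the table is enumerated.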
WHITE BOX TESTING
There are many techniques available in white-box testing, because the problem of
intractability is eased by specific knowledge and attention on the structure of the software
under test. The intention of exhausting some aspect of the software is still strong in
white-box testing, and some degree of exhaustion can be achieved, such as executing
each line of code at least once (statement coverage), traverse every branch statements
(branch coverage), or cover all the possible combinations of true and false condition
predicates (Multiple condition coverage).
Control-flow testing, loop testing, and data-flow testing all map the corresponding flow
structure of the software onto a directed graph. Test cases are carefully selected based on
the criterion that all the nodes or paths are covered or traversed at least once. By doing so
we may discover unnecessary "dead" code -- code that is of no use, or never gets executed
at all -- which cannot be discovered by functional testing.
In mutation testing, the original program code is perturbed and many mutated programs
are created, each containing one fault. Each faulty version of the program is called a
mutant. Test data are selected based on their effectiveness at failing the mutants: the more
mutants a test case can kill, the better the test case is considered. The problem with
mutation testing is that it is too computationally expensive to use.
The boundary between the black-box approach and the white-box approach is not
clear-cut. Many testing strategies mentioned above may not be safely classified as
black-box testing or white-box testing.
It is also true for transaction-flow testing, syntax testing, finite-state testing, and many
other testing strategies not discussed in this text. One reason is that all the above
techniques will need some knowledge of the specification of the software under test.
Another reason is that the idea of specification itself is broad -- it may contain any
requirement including the structure, programming language, and programming style as
part of the specification content.
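The mutation-testing idea described above can be sketched as follows; the function and both mutants are invented for illustration:

```python
# Mutation testing sketch: one original function and two hand-made
# mutants, each seeded with a single fault.

def original(a, b):
    return max(a, b)

def mutant_1(a, b):          # fault: comparison flipped
    return a if a < b else b

def mutant_2(a, b):          # fault: always returns the first argument
    return a

def kills(test_input, mutant):
    """A test input kills a mutant if its output differs from the original."""
    return original(*test_input) != mutant(*test_input)

# (1, 2) kills both mutants, so it is the stronger test case;
# (2, 1) kills only mutant_1, and (3, 3) kills neither.
assert kills((1, 2), mutant_1) and kills((1, 2), mutant_2)
assert kills((2, 1), mutant_1) and not kills((2, 1), mutant_2)
```

Real mutation tools generate mutants automatically from simple syntactic rules, which is exactly why the technique is so computationally expensive.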
We may be reluctant to consider random testing as a testing technique. The test case
selection is simple and straightforward: test cases are randomly chosen. Studies indicate
that random testing is more cost-effective for many programs. Some very subtle errors can
be discovered at low cost. It is also not inferior in coverage to other, carefully designed
testing techniques. One can also obtain a reliability estimate using random testing results
based on operational profiles. Effectively combining random testing with other testing
techniques may yield more powerful and cost-effective testing strategies.
Code coverage:
Segment coverage:
Ensure that each code statement is executed at least once.
Data flow testing is another form of structural testing. It has nothing to do with data flow
diagrams. It concentrates on:
i. Statements where variables receive values.
ii. Statements where these values are used or referenced.
As we know, variables are defined and referenced throughout the program. We may have
a few define/reference anomalies:
i. A variable is defined but never used/referenced.
ii. A variable is used but never defined.
iii. A variable is defined twice before it is used.
Definitions
The definitions below refer to a program P with program graph G(P) and a set of program
variables V. G(P) has a single entry node and a single exit node, and the set of all paths
in P is PATHS(P).
(i) Defining Node: A node n in G(P) is a defining node of the variable v, written DEF(v,
n), if the value of the variable v is defined at the statement fragment corresponding to
node n.
(ii) Usage Node: A node n in G(P) is a usage node of the variable v, written USE(v, n), if
the value of the variable v is used at the statement fragment corresponding to node n. A
usage node USE(v, n) is a predicate use (denoted p) if statement n is a predicate statement;
otherwise USE(v, n) is a computation use (denoted c).
(iii) Definition-use path: A definition-use path with respect to a variable v (denoted
du-path) is a path in PATHS(P) such that there are defining and usage nodes DEF(v, m)
and USE(v, n) where m and n are the initial and final nodes of the path.
(iv) Definition-clear path: A definition-clear path with respect to a variable v (denoted
dc-path) is a definition-use path in PATHS(P) with initial and final nodes DEF(v, m) and
USE(v, n), such that no other node in the path is a defining node of v.
The du-paths and dc-paths describe the flow of data across source statements from points
at which the values are defined to points at which the values are used. The du-paths that
are not definition clear are potential trouble spots.
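To make these definitions concrete, here is a sketch on a small invented program graph for a single variable v:

```python
# A tiny invented program graph for a single variable v:
#   node 1: v = input()     (DEF)
#   node 2: if v > 0:       (p-use)
#   node 3:     v = v * 2   (DEF and c-use)
#   node 4: print(v)        (c-use)
defs = {1, 3}
uses = {2, 3, 4}

def is_du_path(path, v_defs=defs, v_uses=uses):
    """du-path: starts at a defining node, ends at a usage node."""
    return path[0] in v_defs and path[-1] in v_uses

def is_dc_path(path, v_defs=defs):
    """dc-path: a du-path whose interior nodes never redefine v."""
    return is_du_path(path) and all(n not in v_defs for n in path[1:-1])

assert is_dc_path((1, 2, 4))        # v flows from node 1 to node 4 untouched
assert not is_dc_path((1, 2, 3, 4)) # node 3 redefines v on the way: a
                                    # du-path that is not definition-clear
```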
Path Testing:
Path testing is the name given to a group of test techniques based on judiciously selecting
a set of test paths through the program. If the set of paths is properly chosen, then it
means that we have achieved some measure of test thoroughness.
Loop Testing:
These strategies relate to testing single loops, concatenated loops, and nested loops.
Independent and dependent code loops and values are tested by this approach. White-box
testing ensures:
• All independent paths within a module have been exercised at least once.
• All logical decisions are verified on both their true and false sides.
• All loops are executed at their boundaries and within their operational bounds.
• Internal data structures are exercised to ensure their validity.
Need for White Box Testing
To discover the following types of bugs:
• Logical errors, which tend to creep in when we design and implement functions,
conditions or controls that are outside the mainstream of the program
• Design errors due to differences between the logical flow of the program and the
actual implementation
• Typographical errors and syntax errors
Skills Required:
We need to write test cases that ensure complete coverage of the program logic.
For this we need to know the program well, i.e. we should know the specification and the
code to be tested, and have knowledge of programming languages and logic.
Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means
exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective. Selecting important logical paths and
data structures for testing is practically possible and effective.
INTEGRATION TESTING
How does Integration Testing fit into the Software Development Life Cycle?
Once unit-tested components are delivered, we integrate them together.
These “integrated” components are tested to weed out errors and bugs caused by the
integration. This is a very important step in the Software Development Life Cycle.
Before we begin Integration Testing, it is important that all the components have been
successfully unit tested.
As you may have read in the other articles in the series, this document typically describes
one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if test fails
- Glossary
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from
one component to the other.
So the Integration Test cases should typically focus on scenarios where one component is
being called from another. Also the overall application functionality should be tested to
make sure the app works when the different components are brought together.
The various Integration Test Cases clubbed together form an Integration Test Suite.
Each suite may have a particular focus. In other words, different Test Suites may be
created to focus on different areas of the application.
As mentioned before a dedicated Testing Team may be created to execute the Integration
test cases. Therefore the Integration Test Cases should be as detailed as possible.
Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks
There are various factors that affect Software Integration and hence Integration Testing:
2) Automate Build Process where Necessary: A lot of errors occur because the wrong
versions of components were sent for the build, or because components are missing. If
possible, write a script to integrate and deploy the components; this helps reduce manual
errors.
3) Document: Document the Integration process/build process to help eliminate the
errors of omission or oversight. It is possible that the person responsible for integrating
the components forgets to run a required script and the Integration Testing will not yield
correct results.
4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked
correctly. Each defect should be documented and tracked. Information should be captured
as to how the defect was fixed. This is valuable information. It can help in future
integration and deployment processes.
Path testing is a structural testing method that involves using the source
code of a program to attempt to find every possible executable path. The idea is
that we are then able to test each individual path in as many ways as possible in
order to maximise the coverage of each test case. This gives the best possible
chance of discovering all faults within a piece of code.
The fact that path testing is based upon the source code of a program
means that it is a white box testing method. The ability to use the code for testing
means that there exists a basis on which test cases can be rigorously defined. This
allows for both the test cases and their results to be analysed mathematically,
resulting in more precise measurement.
Basis path testing is a hybrid between path testing and branch testing:
Path Testing: Testing designed to execute all or selected paths through a computer
program
Branch Testing: Testing designed to execute each outcome of each decision point in a
computer program
Basis Path Testing: Testing that fulfills the requirements of branch testing & also tests all
of the independent paths that could be used to construct any arbitrary path through the
computer program.
Example:
Objective of the Tutorial: To draw a Flow Graph, find its Cyclomatic Complexity,
V(G) and the independent paths for the following piece of code.
if (x > 0.01)
z = tan (x) ;
else
z = cos (x) ;
a[i] = a[i] * z ;
Step – 1: Let us number the various nodes as per the flow graph. [Flow graph figure not
reproduced.]
Step – 2: Identify the piece of code covered under each node, beginning with Node 1, the
if (x > 0.01) predicate.
V(G) = E – N + 2 = 8 – 7 + 2 = 3, where E = 8 is the number of edges and N = 7 is the
number of nodes in the flow graph.
Conclusion – 2: There are 3 independent paths in this program and they form a basis set.
These paths are described below:
Path 1: 1–2–4–5–7
Path 2: 1–3–4–5–7
Path 3: 1–3–4–5–6–7
Conclusion – 3: Another possible basis set of independent paths is:
Path 1: 1–2–4–5–7
Path 2: 1–3–4–5–7
Path 3: 1–2–4–5–6–7
Conclusion – 4: We must execute these paths at least once in order to test the program
thoroughly. Accordingly, we can design the test cases.
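The cyclomatic complexity computation can be checked mechanically. The edge list below is inferred from the basis paths above, so treat it as illustrative:

```python
# Cyclomatic complexity V(G) = E - N + 2 for the flow graph in the text.
# Edge list inferred from the basis paths (1-2, 1-3, 2-4, 3-4, 4-5,
# 5-6, 5-7, 6-7), giving 8 edges over 7 nodes.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6), (5, 7), (6, 7)]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2
assert v_g == 3   # matches V(G) = 8 - 7 + 2 = 3 and the 3 basis paths
```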
TOP-DOWN INTEGRATION TESTING
Top-down integration testing is an incremental integration testing technique which begins
by testing the top-level module and progressively adds in lower-level modules one by
one. Lower-level modules are normally simulated by stubs which mimic the functionality
of the lower-level modules. As you add lower-level code, you replace the stubs with the
actual components. Top-down integration can be performed and tested in a breadth-first
or depth-first manner.
A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a
system.
A stub is a computer program used as a substitute for the body of a software module that
is or will be defined elsewhere, or a dummy component or object used to simulate the
behavior of a real component until that component has been developed.
Ultimately, the dummy method would be completed with the proper program logic.
However, developing the stub allows the programmer to call a method in the code being
developed, even if the method does not yet have the desired behavior.
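For example, here is a sketch of a stub in a hypothetical top-down scenario (all names are invented):

```python
# Hypothetical top-down scenario: the high-level report module is ready,
# but the lower-level database module is not, so a stub stands in for it.

def fetch_total_sales_stub(region):
    """Stub for the unfinished database layer: returns canned data."""
    return 1000  # fixed value so the caller can be tested now

def sales_report(region, fetch_total_sales=fetch_total_sales_stub):
    # Module under test: it reaches the lower-level component through a
    # parameter, so the stub can later be swapped for the real function.
    total = fetch_total_sales(region)
    return f"Sales for {region}: {total}"

assert sales_report("north") == "Sales for north: 1000"
```

When the real database module is complete, it is passed in place of the stub and the same tests are re-run.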
Advantages
Disadvantages
• Stubs have to be written with utmost care, as they simulate the setting of output
parameters.
• It is difficult to have other people or third parties perform this testing, mostly
Example:
BOTTOM-UP INTEGRATION TESTING
In bottom-up integration testing, modules at the lowest level are developed first, and the
other modules, going up towards the 'main' program, are integrated and tested one at a
time. Bottom-up integration also uses test drivers to drive and pass appropriate data to the
lower-level modules. As and when code for another module gets ready, these drivers are
replaced with the actual module. In this approach, lower-level modules are tested
extensively, thus making sure that the most heavily used modules are tested properly.
A driver is a piece of code that passes test cases to another piece of code. A test harness
or test driver is supporting code and data used to provide an environment for testing part
of a system in isolation. It can be described as a software module used to invoke a module
under test and provide test inputs, control and monitor execution, and report test results;
most simply, it is a line of code that calls a method and passes that method a value.
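Here is a sketch of a driver in a hypothetical bottom-up scenario (all names are invented):

```python
# Hypothetical bottom-up scenario: the low-level discount module is
# finished; a driver feeds it test cases because its caller is not ready.

def apply_discount(price, percent):
    """Low-level module under test (invented for the example)."""
    return round(price * (1 - percent / 100), 2)

def driver():
    """Test driver: invokes the module, supplies inputs, checks results."""
    cases = [((100.0, 10), 90.0), ((80.0, 25), 60.0), ((50.0, 0), 50.0)]
    results = []
    for (price, percent), expected in cases:
        results.append(apply_discount(price, percent) == expected)
    return all(results)

assert driver()
```

Once the higher-level module that calls apply_discount is ready, it replaces the driver and integration proceeds upwards.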
Advantages
• Behavior of the interaction points is crystal clear, as components are added in
Disadvantages
• Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
• This approach is not suitable for software developed using a top-down
approach.
Example:
Using bottom-up testing with a driver as a component:
Call Graph Based
All the testing will be performed on actual units of fully functional code, i.e.
the program is constructed/produced before integration testing takes place.
Pair-Wise
Pair-wise integration testing restricts the testing session to a pair of units
in the call graph. This ensures that faults can be easily isolated while still working
without stubs or drivers. A disadvantage of this method is that there need to be as many
test sessions as there are edges.
Neighbourhood
Neighbourhood Integration Testing attempts to reduce the number of testing sessions
necessary in order to fully test the software system while still trying to keep the difficulty
of fault isolation low.