
SOFTWARE TESTING FILE TO BE SUBMITTED LATEST BY MONDAY AND MUST COMPRISE THE FOLLOWING:

> front page
> index
> testing and its objectives (print out)
> Black box testing and its types (print out, already done)
> solved questions (handwritten, done)
> introduction to white box testing theory (print out)
> coverage questions (du, dc etc.) for the code given (photo uploaded in next post)
> introduction to integration testing (print out)
> stub/driver, graph based, path based, MEP (theory in each case)


SOFTWARE TESTING

“Testing is the process of executing a program with the intent of finding errors.”

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test even a program of moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

Major Objectives of testing are:

• To improve quality.

As computers and software are used in critical applications, the outcome of a bug can be
severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane
crashes, allowed space shuttle missions to go awry, halted trading on the stock market,
and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug
has given birth to a cottage industry of consultants and programming tools dedicated to
making sure the modern world doesn't come to a screeching halt on the first day of the
next century. In a computerized, embedded world, the quality and reliability of software are a matter of life and death.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time. Finding problems and getting them fixed is the purpose of debugging in the programming phase.

• For Verification & Validation (V&V)

Testing can serve as a metric, and it is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality of different products built to the same specification, based on results from the same test.
We cannot test quality directly, but we can test related factors to make quality visible.
Quality has three sets of factors -- functionality, engineering, and adaptability. These
three sets of factors can be thought of as dimensions in the software quality space. Each
dimension may be broken down into its component factors and considerations at
successively lower levels of detail. Table 1 illustrates some of the most frequently cited
quality considerations.

Functionality          Engineering            Adaptability
(exterior quality)     (interior quality)     (future quality)
---------------------  ---------------------  ---------------------
Correctness            Efficiency             Flexibility
Reliability            Testability            Reusability
Usability              Documentation          Maintainability
Integrity              Structure

Table 1. Typical Software Quality Factors [Hetzel88]

Good testing provides measures for all relevant factors. The importance of any particular
factor varies from application to application. Any system where human lives are at stake
must place extreme emphasis on reliability and integrity. In the typical business system
usability and maintainability are the key factors, while for a one-time scientific program
neither may be significant. Our testing, to be fully effective, must be geared to measuring
each relevant factor and thus forcing quality to become tangible and visible.

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases: a finite number of tests cannot validate that the software works for all situations. On the contrary, a single failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aimed at breaking the software, or showing that it does not work. A piece of software must have sufficient exception-handling capability to survive a significant level of dirty tests.

A testable design is a design that can be easily validated, falsified and maintained.
Because testing is a rigorous effort and requires significant time and cost, design for
testability is also an important design rule for software development.

• For reliability estimation

Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program), testing can serve as a statistical sampling method to gain failure data for reliability estimation.

Software testing is not mature. It still remains an art, because we still cannot make it a
science. We are still using the same testing techniques invented 20-30 years ago, some of
which are crafted methods or heuristics rather than good engineering methods. Software
testing can be costly, but not testing software is even more expensive, especially where human lives are at stake. Solving the software-testing problem is no easier than
solving the Turing halting problem. We can never be sure that a piece of software is
correct. We can never be sure that the specifications are correct. No verification system
can verify every correct program. We can never be certain that a verification system is
correct either.

Other Objectives of software testing are:

• To discuss the distinctions between validation testing and defect testing


• To describe the principles of system and component testing
• To describe strategies for generating system test cases
• To understand the essential characteristics of tools used for test automation
BLACK BOX TESTING

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings. Specific knowledge of the application's code/internal structure, and programming knowledge in general, is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. The technique uses external descriptions of the software, including specifications, requirements, and design, to derive test cases. These tests can be functional or non-functional, though they are usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.

This method of testing can be applied to all levels of software testing: unit, integration, functional, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Types of black box testing:

• Decision table testing
• Equivalence partitioning
• Boundary value analysis

DECISION TABLE TESTING

Structure

The four quadrants:

  Conditions | Condition alternatives
  Actions    | Action entries

• Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the don't-care symbol, a hyphen. Using don't-cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant when none of their alternatives influence which actions are performed.
• Aside from the basic four quadrant structure, decision tables vary widely in the
way the condition alternatives and action entries are represented. Some decision
tables use simple true/false values to represent the alternatives to a condition (akin
to if-then-else), other tables may use numbered alternatives (akin to switch-case),
and some tables even use fuzzy logic or probabilistic representations for condition
alternatives. In a similar way, action entries can simply represent whether an
action is to be performed (check the actions to perform), or in more advanced
decision tables, the sequencing of actions to perform (number the actions to
perform).

Example

A technical support company writes a decision table to diagnose printer problems based
upon symptoms described to them over the phone from their clients.

Printer troubleshooter

  Rules:                                  1 2 3 4 5 6 7 8
  Conditions:
    Printer does not print                Y Y Y Y N N N N
    A red light is flashing               Y Y N N Y Y N N
    Printer is unrecognized               Y N Y N Y N Y N
  Actions:
    Check the power cable                 . . X . . . . .
    Check the printer-computer cable      X . X . . . . .
    Ensure printer software is installed  X . X . X . X .
    Check/replace ink                     X X . . X X . .
    Check for paper jam                   . X . X . . . .
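The table above can be encoded directly as data, which makes the rules executable: given the three condition values, the matching column yields the marked actions. A minimal C++ sketch (the `Rule` struct and `diagnose` helper are illustrative names, not part of the original material):

```cpp
#include <cassert>
#include <string>
#include <vector>

// One rule = one column of the decision table: three condition
// alternatives plus the action entries marked for that column.
struct Rule {
    bool notPrinting, redLight, unrecognised;
    std::vector<std::string> actions;
};

static const std::vector<Rule> kRules = {
    {true,  true,  true,  {"Check the printer-computer cable",
                           "Ensure printer software is installed",
                           "Check/replace ink"}},
    {true,  true,  false, {"Check/replace ink", "Check for paper jam"}},
    {true,  false, true,  {"Check the power cable",
                           "Check the printer-computer cable",
                           "Ensure printer software is installed"}},
    {true,  false, false, {"Check for paper jam"}},
    {false, true,  true,  {"Ensure printer software is installed",
                           "Check/replace ink"}},
    {false, true,  false, {"Check/replace ink"}},
    {false, false, true,  {"Ensure printer software is installed"}},
    {false, false, false, {}},
};

// Look up the actions for one combination of conditions (one column).
std::vector<std::string> diagnose(bool notPrinting, bool redLight,
                                  bool unrecognised) {
    for (const Rule &r : kRules)
        if (r.notPrinting == notPrinting && r.redLight == redLight &&
            r.unrecognised == unrecognised)
            return r.actions;
    return {};
}
```

Each test case for decision table testing then corresponds to exercising one column of `kRules`.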

EQUIVALENCE PARTITIONING

Equivalence partitioning (also called Equivalence Class Partitioning or ECP) is a software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.

Two steps are required to implement this method:

1. The equivalence classes are identified by taking each input condition and partitioning it into valid and invalid classes. For example, if an input condition specifies a range of values from 1 to 999, we identify one valid equivalence class [1 <= item <= 999] and two invalid equivalence classes [item < 1] and [item > 999].

2. The test cases are generated using the equivalence classes identified in the previous step. This is performed by writing test cases covering all the valid equivalence classes. Then a test case is written for each invalid equivalence class, so that no test contains more than one invalid class. This ensures that no two invalid classes mask each other.
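The 1-to-999 example above can be made concrete: one representative per class covers all three classes, and each invalid representative sits alone in its test case so the invalid classes cannot mask each other. A sketch (the `classify` function and class names are illustrative):

```cpp
#include <cassert>

// The three equivalence classes of the 1..999 range example.
enum EqClass { VALID, INVALID_LOW, INVALID_HIGH };

EqClass classify(int item) {
    if (item < 1)   return INVALID_LOW;   // invalid class: item < 1
    if (item > 999) return INVALID_HIGH;  // invalid class: item > 999
    return VALID;                         // valid class: 1 <= item <= 999
}

// Representative test cases: 500 (valid), 0 (below range),
// 1000 (above range) -- one per equivalence class.
```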

Boundary-value analysis
The expected input and output values should be extracted from the component
specification. The input and output values to the software component are then grouped
into sets with identifiable boundaries. Each set, or partition, contains values that are
expected to be processed by the component in the same way. Partitioning of test data
ranges is explained in the equivalence partitioning test case design technique. It is
important to consider both valid and invalid partitions when designing test cases.

Consider a program with two input variables x and y, with specified boundaries:

a <= x <= b
c <= y <= d

The boundary value analysis test cases for our program with two input variables (x and y) that may take any value from 100 to 300 are: (200,100), (200,101), (200,200), (200,299), (200,300), (100,200), (101,200), (299,200) and (300,200). This input domain is shown in Fig. 8.5. Each dot represents a test case and the inner rectangle is the domain of legitimate inputs. Thus, for a program of n variables, boundary value analysis yields 4n + 1 test cases.
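The 4n + 1 cases listed above can be generated mechanically: hold every variable at its nominal value and push one variable at a time to min, min+, max- and max. A sketch for the two-variable example (the generator itself is illustrative):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Generate the 4n+1 boundary value analysis cases for two variables
// (n = 2), each ranging over [lo, hi], nominal value at the midpoint.
std::vector<std::pair<int, int>> bvaCases(int lo, int hi) {
    int nom = (lo + hi) / 2;
    std::vector<std::pair<int, int>> cases;
    int edges[4] = {lo, lo + 1, hi - 1, hi};        // min, min+, max-, max
    for (int e : edges) cases.push_back({nom, e});  // vary y, x at nominal
    for (int e : edges) cases.push_back({e, nom});  // vary x, y at nominal
    cases.push_back({nom, nom});                    // all-nominal case
    return cases;                                   // 4*2 + 1 = 9 cases
}
```

For the range [100, 300] this reproduces the nine cases listed in the text.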

Robustness testing

Robustness testing is an extension of boundary value analysis. Here, we would like to see what happens when the extreme values are exceeded, with a value slightly greater than the maximum and a value slightly less than the minimum; that is, we go outside the legitimate boundary of the input domain. For two variables there are four additional test cases outside the legitimate input domain. Hence the total number of test cases in robustness testing is 6n + 1, where n is the number of input variables. So the 13 test cases are: (200,99), (200,100), (200,101), (200,200), (200,299), (200,300), (200,301), (99,200), (100,200), (101,200), (299,200), (300,200) and (301,200).

Worst-case testing
If we reject the "single fault" assumption of reliability theory, we may like to see what happens when more than one variable has an extreme value. In electronic circuit analysis, this is called "worst case analysis". It is more thorough in the sense that boundary value test cases are a proper subset of worst-case test cases, but it requires more effort. Worst-case testing for a function of n variables generates 5^n test cases, as opposed to 4n + 1 test cases for boundary value analysis. Our two-variable example will have 5^2 = 25 test cases.
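Worst-case testing replaces the one-variable-at-a-time rule with a full cross product of the five boundary values, giving 5^n cases. A sketch for n = 2 (the function name is illustrative):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Worst-case testing: every combination of the five boundary values
// {min, min+, nominal, max-, max} over both variables -> 5^2 = 25 cases.
std::vector<std::pair<int, int>> worstCases(int lo, int hi) {
    int nom = (lo + hi) / 2;
    int vals[5] = {lo, lo + 1, nom, hi - 1, hi};
    std::vector<std::pair<int, int>> cases;
    for (int x : vals)
        for (int y : vals)
            cases.push_back({x, y});    // full Cartesian product
    return cases;
}
```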
Cause-Effect Graphing Technique
The techniques above consider single input conditions and do not explore combinations of input circumstances; cause-effect graphing addresses this:

1. Causes and effects in the specification are identified. A cause is a distinct input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation.
2. The semantic content of the specification is analysed and transformed into a Boolean graph linking the causes and effects.
3. Constraints are imposed.
4. The graph is converted into a limited-entry decision table; each column in the table represents a test case.
5. The columns in the decision table are converted into test cases.
WHITE BOX TESTING

Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage).

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code: code that is of no use, or never gets executed at all, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness at failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is not clear-cut. Many testing strategies mentioned above may not be safely classified as black-box or white-box testing. The same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
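The kill criterion above can be illustrated with a hand-made mutant: the original unit's relational operator is perturbed, and a test input kills the mutant when the two versions disagree. A minimal sketch (both functions are illustrative, not from the original text):

```cpp
#include <cassert>

// Original unit under test.
int maxOf(int a, int b) { return (a > b) ? a : b; }

// A mutant of the unit: the operator > has been perturbed to <.
int maxOf_mutant(int a, int b) { return (a < b) ? a : b; }

// A test input kills the mutant when the outputs differ.
bool kills(int a, int b) { return maxOf(a, b) != maxOf_mutant(a, b); }
```

Note that the input (3, 3) does not kill this mutant, which is why test data are judged by how many mutants they kill.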

We may be reluctant to consider random testing a testing technique, since test case selection is simple and straightforward: the cases are randomly chosen. Yet studies indicate that random testing is more cost-effective for many programs; some very subtle errors can be discovered at low cost, and it is not inferior in coverage to other, carefully designed testing techniques. One can also obtain a reliability estimate from random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.

White Box Testing is coverage of the specification in the code.

Code coverage:

Segment coverage:
Ensure that each code statement is executed at least once.

Branch Coverage or Node Testing:
Cover each code branch in all possible ways.

Compound Condition Coverage:
For multiple conditions, test each condition with multiple paths, and combinations of different paths, to reach that condition.

Basis Path Testing:


Each independent path in the code is taken for testing.
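The coverage criteria above differ in how many test cases they demand. For the small unit below (an illustrative example, not from the original text), one input already executes every statement, but branch coverage additionally needs an input that makes the condition false:

```cpp
#include <cassert>

// A unit with one decision: absolute-value-like behavior.
int magnitude(int x) {
    int r = x;
    if (x < 0)   // branch coverage needs both outcomes: x < 0 and x >= 0
        r = -x;  // statement coverage is achieved by any x < 0 alone
    return r;
}

// Segment (statement) coverage: x = -5 alone executes every statement.
// Branch coverage: x = -5 and x = 5 together cover both outcomes
// of the decision.
```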

Data Flow Testing (DFT):

In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e., those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs such as variables used but not initialized, or declared but not used, and so on.

Data flow testing is another form of structural testing. It has nothing to do with data flow diagrams. It focuses on:
i. Statements where variables receive values.
ii. Statements where these values are used or referenced.

As we know, variables are defined and referenced throughout the program. We may have a few define/reference anomalies:
i. A variable is defined but not used/referenced.
ii. A variable is used but never defined.
iii. A variable is defined twice before it is used.

Definitions

The definitions below refer to a program P that has a program graph G(P) and a set of program variables V. G(P) has a single entry node and a single exit node, and the set of all paths in P is PATHS(P).

(i) Defining Node: Node n in G(P) is a defining node of the variable v, written DEF(v, n), if the value of the variable v is defined at the statement fragment corresponding to node n.

(ii) Usage Node: Node n in G(P) is a usage node of the variable v, written USE(v, n), if the value of the variable v is used at the statement fragment corresponding to node n. A usage node USE(v, n) is a predicate use (denoted p) if statement n is a predicate statement; otherwise USE(v, n) is a computation use (denoted c).

(iii) Definition-use path: A definition-use path with respect to a variable v (denoted du-path) is a path in PATHS(P) such that, for some v, there are defining and usage nodes DEF(v, m) and USE(v, n) such that m and n are the initial and final nodes of the path.

(iv) Definition-clear path: A definition-clear path with respect to a variable v (denoted dc-path) is a definition-use path in PATHS(P) with initial and final nodes DEF(v, m) and USE(v, n) such that no other node in the path is a defining node of v.

The du-paths and dc-paths describe the flow of data across source statements from points
at which the values are defined to points at which the values are used. The du-paths that
are not definition clear are potential trouble spots.
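As a worked sketch of these definitions, consider the small unit below (an illustrative example; the node numbering is given in comments). Variable z has two defining nodes and one usage node, and each du-path from a definition to the use is definition-clear, since no intermediate node redefines z:

```cpp
#include <cassert>

// Node numbering for the program graph is given in the comments.
int scale(int x) {
    int z;          // node 1 (predicate use of x follows)
    if (x > 0)      //         P-use: USE(x, 1)
        z = 2 * x;  // node 2: DEF(z, 2)
    else
        z = -x;     // node 3: DEF(z, 3)
    int y = z + 1;  // node 4: USE(z, 4), a computation (c) use
    return y;       // node 5
}

// du-path for z: 2-4, from DEF(z, 2) to USE(z, 4). It is also a
// dc-path, because no node between them redefines z. Likewise 3-4
// is a dc-path from DEF(z, 3).
```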

Path Testing:
Path testing is the name given to a group of test techniques based on judiciously selecting
a set of test paths through the program. If the set of paths is properly chosen, then it
means that we have achieved some measure of test thoroughness.

This type of testing involves:


1. Generating a set of paths that will cover every branch in the program.
2. Finding a set of test cases that will execute every path in the set of program paths.

Loop Testing:
These strategies relate to testing single loops, concatenated loops, and nested loops.
Independent and dependent code loops and values are tested by this approach.

Why do we do White Box Testing?

To ensure:

• That all independent paths within a module have been exercised at least once.
• That all logical decisions are verified on their true and false values.
• That all loops are executed at their boundaries and within their operational bounds.
• That internal data structures are valid.
Need for White Box Testing
To discover the following types of bugs:

• Logical errors, which tend to creep into our work when we design and implement functions, conditions or controls that are out of the mainstream of the program
• Design errors due to differences between the logical flow of the program and the actual implementation
• Typographical errors and syntax errors

Skills Required:

We need to write test cases that ensure the complete coverage of the program logic. For this we need to know the program well, i.e., we should know the specification and the code to be tested, and have knowledge of programming languages and logic.

Limitations of WBT:

It is not possible to test each and every path of the loops in a program; exhaustive testing is impossible for large systems. This does not mean that WBT is not effective: selecting important logical paths and data structures for testing is practically possible and effective.
INTEGRATION TESTING

Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. Integration testing is performed in two ways, called pre-test and pro-test.

• Pre-test: testing performed in the module development area is called pre-test. The pre-test is required only if development is done in a module development area.

• Pro-test: integration testing performed in the baseline is called pro-test. The development of a release will be scheduled such that it can be broken down into smaller internal releases.

How does Integration Testing fit into the Software Development Life Cycle?

Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.

Once unit-tested components are delivered, we then integrate them together. These "integrated" components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.

It is possible that different programmers developed different components, and a lot of bugs emerge during the integration step.

In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:

Before we begin Integration Testing it is important that all the components have been
successfully unit tested.

Integration Testing Steps:

Integration Testing typically involves the following Steps:


Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated execute the test cases
Step 5: Fix the bugs if any and re-test the code
Step 6: Repeat the test cycle until the components have been successfully integrated

What is an ‘Integration Test Plan’?

As you may have read in the other articles in the series, this document typically describes
one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary

How to write an Integration Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from
one component to the other.

So the Integration Test cases should typically focus on scenarios where one component is
being called from another. Also the overall application functionality should be tested to
make sure the app works when the different components are brought together.

The various Integration Test Cases clubbed together form an Integration Test Suite.
Each suite may have a particular focus. In other words different Test Suites may be
created to focus on different areas of the application.

As mentioned before a dedicated Testing Team may be created to execute the Integration
test cases. Therefore the Integration Test Cases should be as detailed as possible.
Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:


a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (One or more iterations of Integration testing may be performed)

Working towards Effective Integration Testing:

There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on


Integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of components are tested.
This may sound very basic, but the biggest problem faced in n-tier development is
integrating the right version of components. Integration testing may run through several
iterations and to fix bugs components may undergo changes. Hence it is important that a
good Software Configuration Management (SCM) policy is in place. We should be able
to track the components and their versions. So each time we integrate the application
components we know exactly what versions go into the build process.

2) Automate the Build Process where Necessary: A lot of errors occur because the wrong version of a component was sent for the build, or components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
3) Document: Document the Integration process/build process to help eliminate the
errors of omission or oversight. It is possible that the person responsible for integrating
the components forgets to run a required script and the Integration Testing will not yield
correct results.

4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked
correctly. Each defect should be documented and tracked. Information should be captured
as to how the defect was fixed. This is valuable information. It can help in future
integration and deployment processes.

Path Based Testing

Path testing is a structural testing method that involves using the source code of a program to attempt to find every possible executable path. The idea is that we are then able to test each individual path in as many ways as possible in order to maximise the coverage of each test case. This gives the best possible chance of discovering all faults within a piece of code.

The fact that path testing is based upon the source code of a program means that it is a white-box testing method. The ability to use the code for testing means that there exists a basis on which test cases can be rigorously defined. This allows both the test cases and their results to be analysed mathematically, resulting in more precise measurement.

Basis path testing is a hybrid between path testing and branch testing:

Path Testing: Testing designed to execute all or selected paths through a computer
program

Branch Testing: Testing designed to execute each outcome of each decision point in a
computer program

Basis Path Testing: Testing that fulfills the requirements of branch testing & also tests all
of the independent paths that could be used to construct any arbitrary path through the
computer program.

A path through the software is a sequence of instructions or statements that starts at an entry, junction or decision and ends at another (or possibly the same) junction, decision or exit. A path may go through several junctions, processes or decisions, one or more times.
Steps for Running a Program:

• Prepare the program according to the design document.
• Draw the flow graph for that program.
• Calculate the cyclomatic complexity (the number of independent paths in the program).
• Run the program more than once, to cover the execution of each independent path.

Example:

Step – 1: Construct the flow graph from the source code or flow charts.
Step – 2: Identify independent paths.
Step – 3: Calculate the Cyclomatic Complexity, V(G).
Step – 4: Design the test cases.

Objective of the Tutorial: To draw a Flow Graph, find its Cyclomatic Complexity V(G), and the independent paths for the following piece of code.

void foo (float y, float *a, int n)
{
    float x = sin (y) ;
    float z ;

    if (x > 0.01)
        z = tan (x) ;
    else
        z = cos (x) ;

    for (int i = 0 ; i < x ; ++i) {
        a[i] = a[i] * z ;
        cout << a[i] ;
    }
}

Step – 1: Let us try to provide numbering to the various nodes as per the following Flow
Graph
Step – 2: Piece of code being covered under Node –1

void foo (float y, float *a, int n)

float x = sin (y) ;

if (x > 0.01)

Step – 3: Piece of code being covered under Node –2

z = tan (x) ;

else

Step – 4: Piece of code being covered under Node –3

z = cos (x) ;

Step – 5: Piece of code being covered under Node –4

for (int i = 0 ; i < x ; ++i)


Step – 6: Piece of code being covered under Node – 5

Step – 7: Piece of code being covered under Node – 6

a[i] = a[i] * z ;

cout << a[i] ;

Step – 8: Piece of code being covered under Node – 7

cout << a[i] ;

Step – 9: Calculation of V(G) by three methods

Method – 1: V(G) = e – n + 2 (where "e" is the number of edges and "n" the number of nodes)

V(G) = 8 – 7 + 2 = 3

Method – 2: V(G) = P + 1 (where P is the number of predicate nodes, i.e., nodes with out-degree 2)

V(G) = 2 + 1 = 3 (nodes 1 and 5 are predicate nodes)

Method – 3: V(G) = number of enclosed regions + 1 = 2 + 1 = 3

V(G) = 3 by all three methods.

Conclusions from the above tutorial:

Conclusion – 1: By getting a value of V(G) = 3 we conclude that this is "well written" code, its "testability" is high, and the cost/effort to maintain it is low.

Conclusion – 2: There are 3 independent paths in this program and they form a basis set. These paths are described below:
Path 1: 1–2–4–5–7
Path 2: 1–3–4–5–7
Path 3: 1–3–4–5–6–7

Conclusion – 3: We can form another basis set, as described below:
Path 1: 1–2–4–5–7
Path 2: 1–3–4–5–7
Path 3: 1–2–4–5–6–7

Conclusion – 4: We must execute these paths at least once in order to test the program thoroughly. Accordingly we can design the test cases.
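The paths above are exercised by choosing inputs that steer both predicates. A sketch, using a version of foo refactored (hypothetically) to return z so each path's outcome can be checked; the node numbers in the comments follow the tutorial's flow graph:

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// foo, refactored for testability: it returns the z it applied.
float foo(float y, float *a, int n) {
    float x = std::sin(y);          // node 1
    float z;
    if (x > 0.01f)
        z = std::tan(x);            // node 2 (true branch)
    else
        z = std::cos(x);            // node 3 (false branch)
    for (int i = 0; i < x; ++i) {   // loop predicate
        a[i] = a[i] * z;            // loop body
        std::printf("%f\n", a[i]);
    }
    return z;
}

// y = 0:   x = 0, cos branch taken, loop skipped (path through node 3).
// y = 1.5: x ~ 0.997, tan branch taken, loop body runs once
//          (path through nodes 2 and 6).
```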
TOP-DOWN INTEGRATION TESTING

Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic the functionality of the lower-level modules. As you add lower-level code, you replace the stubs with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.

Stubs for Testing:

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a
system.

Four basic types of Stubs for Top-Down Testing are:

1. Display a trace message
2. Display parameter value(s)
3. Return a value from a table
4. Return a table value selected by parameter

A stub is a computer program used as a substitute for the body of a software module that is or will be defined elsewhere: a dummy component or object used to simulate the behavior of a real component until that component has been developed.

Ultimately, the dummy method would be completed with the proper program logic.
However, developing the stub allows the programmer to call a method in the code being
developed, even if the method does not yet have the desired behavior.
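The stub types above can be combined in one dummy routine. In this sketch, a hypothetical lower-level module getTaxRate() is replaced by a stub that prints a trace message (type 1), displays its parameter (type 2) and returns a table value selected by the parameter (type 4), so that the top-level module can be tested before the real routine exists (all names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// Table of canned return values for the stub.
static const double kRateTable[] = {0.00, 0.05, 0.18};

// Stub standing in for the unfinished lower-level module.
double getTaxRate(int category) {
    std::printf("stub: getTaxRate called\n");        // 1: trace message
    std::printf("stub: category = %d\n", category);  // 2: display parameter
    return kRateTable[category];                     // 4: table value by parameter
}

// Top-level module under test, calling the stubbed dependency.
double grossPrice(double net, int category) {
    return net * (1.0 + getTaxRate(category));
}
```

When the real getTaxRate() is delivered, the stub is simply replaced and the same tests are re-run.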

Advantages

• Drivers do not have to be written when top-down testing is used.
• It provides an early working version of the program, so design defects can be found and corrected early.

Disadvantages

• Stubs have to be written with utmost care, as they simulate the setting of output parameters.
• It is difficult to have other people or third parties perform this testing; mostly the developers will have to spend time on it.

Example:

Top-down integration testing using stubs:


BOTTOM-UP INTEGRATION TESTING

In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules, going towards the 'main' program, are integrated and tested one at a time. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules. As and when code for another module is ready, these drivers are replaced with the actual module. In this approach, the lower-level modules are tested extensively, making sure that the most heavily used modules are tested properly.

Driver for Testing:

A driver is a piece of code that passes test cases to another piece of code. A test harness, or test driver, is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module used to invoke a module under test, provide test inputs, control and monitor execution, and report test results, or, most simply, as a line of code that calls a method and passes that method a value.
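A driver can be as small as a function that feeds inputs to the unit and checks its outputs, standing in for the not-yet-written caller. A minimal sketch (the unit and driver names are illustrative):

```cpp
#include <cassert>

// Low-level module under test, already implemented and unit tested.
int add(int a, int b) { return a + b; }

// Test driver: invokes the module, supplies test inputs, and
// checks the outputs, in place of the unfinished higher-level caller.
bool driveAdd() {
    if (add(2, 3) != 5) return false;   // typical values
    if (add(-1, 1) != 0) return false;  // sign crossing
    if (add(0, 0) != 0) return false;   // identity
    return true;                        // all driver checks passed
}
```

When the real caller is ready, the driver is discarded and the actual module takes its place.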

Advantages

• The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repetitively.
• Appropriate for applications where a bottom-up design methodology is used.

Disadvantages

• Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
• This approach is not suitable for software development using a top-down approach.

Example:
Using Bottom UP testing with driver as component:
Call Graph Based Integration

All the testing is performed on actual units of fully functional code, i.e., the program is constructed before integration testing takes place.

Pair-Wise
Pair-wise integration testing restricts the testing session to a pair of units in the call graph. This ensures that faults can be easily isolated while still working without stubs or drivers. A disadvantage of this method is that there need to be as many test sessions as there are edges.

Neighbourhood
Neighbourhood integration testing attempts to reduce the number of testing sessions necessary to fully test the software system while still trying to keep the difficulty of fault isolation low.
