
XYZ Test Strategy

Document Version 1.0



© Copyright 2019 Sapient Corporation | Confidential


Table of Contents


Glossary and Abbreviations

1 Project Overview
2 Purpose
3 Scope
4 Test Strategy
4.1 Smoke Testing
4.2 Functional Testing
4.3 Cross Browser Testing
4.4 Usability Testing
4.5 Accessibility Testing
4.6 System Integration Testing (SIT)
4.7 Performance Testing
4.8 Database Testing
4.9 Security Testing
5 MDM testing approach
5.1 Important aspects of testing MDM application
5.2 Size and Complexity of the project
6 Defect Management Process
6.1 Defect Status Meetings
6.2 Defect Priority Guidelines
6.3 Defect Reporting Workflow
6.4 Different stages of Defect
7 Test Environment
8 Test Tools
9 Suspension/Resumption Criteria
10 Test Deliverables
11 Assumptions
12 Communication approach
12.1 Status Reporting
12.2 Defect Review Session
13 Roles and Responsibilities
14 Client Responsibilities
15 References
16 Approvals



Glossary and Abbreviations



No.  Term  Description

1.   MDM   Master Data Management



1 Project Overview
The project provides an internal website to access distributed data from various standalone applications and present consolidated information. This is achieved by creating and managing a central repository of data and accessing it through the UI.



2 Purpose
The purpose of this document is to provide a test strategy for XYZ Corporation based on different implementations. Specifically, this document will:
 Define a test strategy for MDM
 Describe the different testing types that are relevant based on different criteria
 Describe the testing approach
 Define and identify roles and responsibilities in the testing process



3 Scope
Sapient is responsible for testing the MDM application developed either by Sapient itself or by any other vendor.
The major testing scope items are:
 Test the customisations and configurations being done in the MDM
 Test the non-customisable, non-configurable requirements based on the project needs
 Test integration within the MDM and with third-party applications
 Test the non-functional requirements as agreed between Sapient and the client
In general, the scope of testing types includes:
 Smoke Testing
 Functional Testing
 Cross Browser Testing
 Usability Testing
 Accessibility Testing
 Regression Testing
 System Integration Testing
 Localization Testing
 Performance Testing
 Third party Integration testing
 Database Testing
 Security Testing



4 Test Strategy
As mentioned in the scope above, the following testing types should be considered while testing an MDM implementation:
 Smoke Testing
 Functional Testing
 Cross Browser Testing
 Usability Testing
 Accessibility Testing
 Regression Testing
 System Integration Testing
 Localization Testing
 Performance Testing
 Third party Integration testing
 Database Testing
 Security Testing

4.1 Smoke Testing

Smoke testing is designed to quickly test the application and surface any potential issues that could lead to failure of major functionality. The testing team will know which functionalities are being released in a particular iteration. Only when smoke testing is completed successfully does the testing team pick up the rest of the functionality.

Owner: Testing team
Activity: Stability of the application build
Environment: QA Environment
Entry Criteria: Build is ready
Exit Criteria: All smoke testing scenarios have passed

Smoke Testing approach and activities:


 The smoke testing suite will consist of all high-priority scenarios for the requirements delivered in a given iteration.
 Before the QA environment is formally accepted for testing by the testing team, the high-priority scenarios identified above are executed.
 Any build that does not meet the exit criteria for smoke testing will be rejected, and testing will continue on the previous base-lined build until a new successful build is accepted by the testing team. A minimal sketch of such a smoke suite is shown below.
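For illustration, a minimal smoke suite could be automated with the JUnit and Selenium tools listed in the Test Tools section. The sketch below assumes a hypothetical QA URL and element IDs (BASE_URL, loginForm, searchBox); the actual checks must come from the high-priority scenarios for the iteration.

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SmokeTest {

    // Base URL of the QA environment; placeholder value for illustration only.
    private static final String BASE_URL = "http://qa.example.com/mdm";

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void loginPageLoads() {
        driver.get(BASE_URL + "/login");
        // The build is rejected if the core login page does not even render.
        assertTrue("Login form not found",
                driver.findElements(By.id("loginForm")).size() > 0);
    }

    @Test
    public void searchPageLoads() {
        driver.get(BASE_URL + "/search");
        assertTrue("Search box not found",
                driver.findElements(By.id("searchBox")).size() > 0);
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```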

4.2 Functional Testing


The testing team will create functional test cases and ensure that these test cases are mapped to all the test scenarios, which in turn are mapped to the requirements document. (Recommendation: in order to save time, use the high-level scenarios created by the QA Capability team as your base and then create the residual test scenarios.) Functional testing focuses on verification of all the functionality described in the use cases.

Owner: Testing team
Activity: Execution of functional test cases
Environment: QA Environment
Entry Criteria: All smoke testing scripts are passed
Exit Criteria: No open P1 or P2 defects
Functional Testing approach and activities:



 Testing team will create functional test scripts, which will be mapped to test scenarios and subsequently to requirements
 Test scenarios should be validated with all the stakeholders
 Test scripts should be peer reviewed
 Use stubs for testing 3rd party integration points
 The functional test scripts will be executed and results shared with stakeholders (a sketch of a requirement-traceable test case follows this list)
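As an illustration of the traceability described above, the sketch below shows one functional test case annotated with placeholder requirement and scenario IDs (REQ-SRCH-01, TS-SRCH-01) and placeholder URLs, element IDs and data; the real mapping would follow the project's requirements document.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FunctionalSearchTest {

    /**
     * Traceability: Requirement REQ-SRCH-01 -> Test Scenario TS-SRCH-01 -> this test case.
     * (IDs, URL and locators are placeholders; use the project's own traceability identifiers.)
     */
    @Test
    public void searchByRecordIdReturnsMatchingRecord() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://qa.example.com/mdm/search"); // placeholder URL
            driver.findElement(By.id("searchBox")).sendKeys("CUST-1001");
            driver.findElement(By.id("searchButton")).click();

            // The first result cell should contain the record searched for.
            String firstResult = driver.findElement(By.cssSelector("#results tr td")).getText();
            assertEquals("CUST-1001", firstResult);
        } finally {
            driver.quit();
        }
    }
}
```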

4.3 Cross Browser Testing


Cross-browser testing should be done to confirm that the functionality and the look and feel of pages is consistent on different browsers. The different browsers that can be used for testing are:

 Internet Explorer 9.0
 Internet Explorer 8.0
 Internet Explorer 7.0
 Firefox 3.6
 Chrome 7.0
 Opera 10
 Safari 5.0
 Flash 10.0

Owner: Testing team
Activity: Execute test scripts based on the closed scope (Recommended: execute high- and medium-priority test scripts)
Environment: Testing Environment
Entry Criteria: There are no open P1s and P2s on the main browser
Exit Criteria: There are no open P1s and P2s on all the browsers
Scope:
 Page layout and GUI are consistent across all browsers, adhere to the wireframe documents, and have no high-priority defects from a business point of view
 There are no P1 defects from a functionality or business point of view
Cross-browser testing approach:
 Open the main browser and the browser under test side by side and execute the scripts in parallel. This helps execute the scripts faster; a parameterized sketch that repeats the same check on each browser is shown below.
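One possible way to repeat the same check across browsers is a parameterized JUnit test, sketched below under the assumption that the relevant WebDriver implementations are installed on the test machine; the URL and the expected page title are placeholders.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

@RunWith(Parameterized.class)
public class CrossBrowserTitleTest {

    @Parameters
    public static Collection<Object[]> browsers() {
        // One entry per browser in the agreed cross-browser scope.
        return Arrays.asList(new Object[][] { { "firefox" }, { "chrome" }, { "ie" } });
    }

    private final String browser;

    public CrossBrowserTitleTest(String browser) {
        this.browser = browser;
    }

    @Test
    public void homePageTitleIsConsistent() {
        WebDriver driver;
        if ("chrome".equals(browser)) {
            driver = new ChromeDriver();
        } else if ("ie".equals(browser)) {
            driver = new InternetExplorerDriver();
        } else {
            driver = new FirefoxDriver();
        }
        try {
            driver.get("http://qa.example.com/mdm"); // placeholder URL
            // The same functional check is repeated on every browser in scope.
            assertEquals("MDM Home", driver.getTitle()); // expected title is a placeholder
        } finally {
            driver.quit();
        }
    }
}
```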

4.4 Usability Testing


Usability testing is primarily done to ensure that the end user has a good experience while navigating the website. Important aspects that would be tested in an MDM application are:
 Whether the user can effectively accomplish the desired tasks
 The effort needed to efficiently accomplish the desired tasks
 Whether users return to the application
In this type of testing, the application is tested for user interface items such as colour, text, font, alignment of buttons, alignment of fields, status of buttons, size of fields or buttons, etc. This is a visual comparison of the application pages against the wireframes/graphic designs.
This testing will be done for all the requirements that involve a GUI component change or new development.



Owner: Testing Team
Activity: Testing of the user interface as per wireframes and designs
Environment: Testing Environment
Entry Criteria: Smoke testing scripts are 100% passed
Exit Criteria: Page layout and GUI are consistent across all browsers, adhere to the wireframe documents, and have no high-priority (P1) defects from a business point of view

4.5 Accessibility Testing


The testing team will carry out accessibility testing to ensure the application meets a minimum level of accessibility through markup, scripting or other technologies that interact with or enable access through user agents, including assistive technologies.

Accessibility testing will follow the guidelines specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A.

Owner: Testing team
Activity: Test the application for user accessibility
Environment: Testing Environment
Entry Criteria: The application is successfully tested for functional requirements
Exit Criteria: The application meets the minimum accessibility criteria, with no P1s, as specified in W3C Web Content Accessibility Guidelines 2.0, Conformance Level A

This will be performed by ensuring:

 Text alternatives are provided for any non-text content so that it can be changed into other forms people need, such as large print, Braille, speech, symbols or simpler language.
 Content can be presented in different ways (for example, a simpler layout) without losing information or structure.
 All functionality of the application is available from the keyboard.
 Users are given enough time to read and use content wherever time-based actions are involved.
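Most accessibility checks remain manual, but some Level A checkpoints can be spot-checked automatically. The sketch below, using a placeholder URL, verifies only one such checkpoint: that every image on a page has a non-empty alt attribute (a text alternative for non-text content).

```java
import static org.junit.Assert.assertTrue;

import java.util.List;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AccessibilityAltTextTest {

    @Test
    public void everyImageHasATextAlternative() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://qa.example.com/mdm"); // placeholder URL
            List<WebElement> images = driver.findElements(By.tagName("img"));
            for (WebElement image : images) {
                String alt = image.getAttribute("alt");
                // WCAG 2.0 Level A: non-text content needs a text alternative.
                assertTrue("Image without alt text: " + image.getAttribute("src"),
                        alt != null && alt.trim().length() > 0);
            }
        } finally {
            driver.quit();
        }
    }
}
```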

4.6 System Integration Testing (SIT)


The primary purpose of SIT is to execute end-to-end test scenarios and all the test cases that have been added over the course of the different iterations.
The following strategy will be followed:
 The testing team will use the test cases created for testing iterations and will also create end-to-end scenarios meant for testing the integrated code.
 Test data would be provided by the team as agreed upon.

Owner: Testing Team
Activity: Interface testing for all interfaces where there is an interaction with Sapient-responsible systems, verifying whether the response to a request is returned properly
Environment: SIT environment
Entry Criteria:
 Smoke testing scripts are 100% passed
 Modules have undergone unit testing and passed functional testing
 Interfaces and interactions between the various systems are operational
Exit Criteria:
 There are no P1 defects
 Defects identified during SIT have either been fixed or accepted as known issues by the client
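Where a third-party interface is not yet available, a lightweight stub can stand in for it during interface testing. The sketch below uses the JDK's built-in HttpServer with a placeholder port, path and canned XML payload; the real responses must follow the agreed interface contract.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

// Minimal HTTP stub standing in for a third-party interface during SIT.
public class ThirdPartyStub {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0); // placeholder port
        server.createContext("/thirdparty/customer", new HttpHandler() {
            @Override
            public void handle(HttpExchange exchange) throws IOException {
                // Canned response representing the agreed interface contract (placeholder payload).
                byte[] body = "<customer><id>CUST-1001</id><status>ACTIVE</status></customer>"
                        .getBytes("UTF-8");
                exchange.getResponseHeaders().add("Content-Type", "application/xml");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream out = exchange.getResponseBody();
                out.write(body);
                out.close();
            }
        });
        server.start();
        System.out.println("Stub listening on http://localhost:8089/thirdparty/customer");
    }
}
```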

4.7 Performance Testing


To be covered by the respective teams in consultation with the performance testing team.

4.8 Database Testing


The testing team will create database test cases and ensure that these test cases are mapped to all the test scenarios that pick values from the database. (Recommendation: in order to save time, use the high-level scenarios created by the QA Capability team as your base and then create the database test scenarios based on project need.) In database testing, the test engineer should verify data integrity, data access, query retrieval, modifications, updates and deletions.

Owner: Testing team
Activity: Execution of database test cases
Environment: QA Environment
Entry Criteria: All smoke testing scripts are passed
Exit Criteria: No open P1 or P2 defects

Database Testing approach and activities:


 Testing team will create database test scenarios and scripts
 Test Scenarios should be validated with all the stakeholders
 Test scripts should be peer reviewed
 The database test scripts will be executed and results shared with stakeholders
 Activities include:
a) Data validity testing
b) Data integrity testing
c) Testing of procedures, triggers and functions
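A minimal sketch of one automated database check is shown below, assuming JDBC connectivity and placeholder connection details and table names (STG_CUSTOMER, MDM_CUSTOMER); it illustrates a simple data integrity check comparing record counts between a staging table and the master table.

```java
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

public class CustomerCountIntegrityTest {

    // Connection details, schema and table names are placeholders for illustration;
    // the appropriate JDBC driver must be on the classpath.
    private static final String JDBC_URL = "jdbc:oracle:thin:@qa-db.example.com:1521:MDMQA";
    private static final String USER = "qa_user";
    private static final String PASSWORD = "change_me";

    @Test
    public void stagingAndMasterCustomerCountsMatch() throws Exception {
        Connection connection = DriverManager.getConnection(JDBC_URL, USER, PASSWORD);
        try {
            Statement statement = connection.createStatement();

            ResultSet staging = statement.executeQuery("SELECT COUNT(*) FROM STG_CUSTOMER");
            staging.next();
            long stagingCount = staging.getLong(1);

            ResultSet master = statement.executeQuery("SELECT COUNT(*) FROM MDM_CUSTOMER");
            master.next();
            long masterCount = master.getLong(1);

            // Data integrity check: every staged record should have been loaded into the master table.
            assertEquals(stagingCount, masterCount);
        } finally {
            connection.close();
        }
    }
}
```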

4.9 Security Testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The basic security concepts that need to be covered by security testing are confidentiality, integrity, authentication, availability, authorization and non-repudiation, using techniques such as parameter tampering, cookie poisoning, stealth commanding and forceful browsing.



 Authentication - Testing the authentication schema means understanding how the authentication process works and using that information to attempt to circumvent the authentication mechanism. Authentication allows a receiver to have confidence that the information it receives originated from a specific known source.
 Authorization - Determining that a requester is allowed to receive a service or perform an operation.
 Confidentiality - A security measure which protects against the disclosure of data or information to parties other than the intended recipient.
 Integrity - Ensuring that the intended receiver receives information or data that has not been altered in transmission.
 Non-repudiation - Interchange of authentication information with some form of provable time stamp, e.g. with a session ID.

Strategy should include the following:

 Testing team should validate the details/scope for security testing.
 Testing team should check for session maintenance.
 Testing team should validate security in the various modules using the testing strategies mentioned above (a minimal forceful-browsing check is sketched after this list).
 Testing team should validate entry and exit criteria.
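As one illustration, the sketch below checks for forceful browsing: it requests a placeholder protected admin URL without authenticating and asserts that the server responds with an authentication challenge or a redirect rather than serving the page.

```java
import static org.junit.Assert.assertTrue;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class ForcefulBrowsingTest {

    @Test
    public void adminPageIsNotServedWithoutAuthentication() throws Exception {
        // Placeholder URL of a page that should require an authenticated admin session.
        URL adminUrl = new URL("http://qa.example.com/mdm/admin/users");

        HttpURLConnection connection = (HttpURLConnection) adminUrl.openConnection();
        connection.setInstanceFollowRedirects(false); // a redirect to the login page is acceptable
        connection.setRequestMethod("GET");
        int status = connection.getResponseCode();

        // Requesting the protected URL directly (forceful browsing) must not return the page:
        // expect an authentication challenge (401/403) or a redirect to the login screen (302).
        assertTrue("Protected page served without authentication, status " + status,
                status == 401 || status == 403 || status == 302);
    }
}
```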



5 MDM testing approach
Consider MDM as made up of a front end (the human-computer interface), a back end (a centralised database) and some middleware (interfaces that integrate the database with the standalone applications). In order to fully test the system, it is important to test the different aspects both in isolation and together.

5.1 Important aspects of testing MDM application

Browser compatibility

 Though relatively simple to do, it pays to spend enough time testing in this area. Decide on the lowest level of compatibility and test that the system does indeed work without problems on the earliest supported as well as the latest browser versions.
 Even with the same release version, browsers behave differently on different platforms, and when used with
different language options. Testing should cover at least the main platforms (Unix, Windows, Mac, and Linux)
and the expected language options.

Session Management

 Most applications and Web servers configure sessions so that they expire after a set time. Attempting to access a session object that has expired causes an error, which must be handled within the code. Testing of session expiration is often overlooked, largely because under normal operational circumstances session expiration is unlikely to occur; a minimal expiry check is sketched below.
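A minimal sketch of a session-expiration check follows; it logs in, simulates expiry by deleting all cookies, and verifies that the application redirects to the login page instead of surfacing an error. URLs, element IDs and credentials are placeholders.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SessionExpiryTest {

    @Test
    public void expiredSessionRedirectsToLoginInsteadOfError() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Placeholder URLs, element IDs and credentials; log in first to establish a session.
            driver.get("http://qa.example.com/mdm/login");
            driver.findElement(By.id("username")).sendKeys("qa_user");
            driver.findElement(By.id("password")).sendKeys("change_me");
            driver.findElement(By.id("loginButton")).click();

            // Simulate session expiry by dropping the session cookie, then reuse the application.
            driver.manage().deleteAllCookies();
            driver.get("http://qa.example.com/mdm/customers");

            // The application should handle the expired session gracefully (login page),
            // not surface a stack trace or server error.
            String page = driver.getPageSource().toLowerCase();
            assertTrue("Expired session not handled gracefully",
                    driver.getCurrentUrl().contains("login") && !page.contains("exception"));
        } finally {
            driver.quit();
        }
    }
}
```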

Usability

 Site navigation is crucial for attracting customers and retaining them. Sophisticated Web sites, such as those for travel booking, need to pay particular attention to navigation issues.
 Large entity catalogs are central to many trading systems. Clients should be able to quickly browse and search through catalogs. Developers can define tests to measure the effectiveness of entity navigation mechanisms. For example, you could test that a search on particular keywords brings up the correct entities, as sketched below.
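A sketch of such a keyword-search check follows, using placeholder URLs, element locators and a placeholder keyword; it asserts that every entity returned relates to the keyword searched for.

```java
import static org.junit.Assert.assertTrue;

import java.util.List;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CatalogSearchTest {

    @Test
    public void keywordSearchReturnsOnlyMatchingEntities() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://qa.example.com/mdm/catalog"); // placeholder URL
            driver.findElement(By.id("searchBox")).sendKeys("copper wire"); // placeholder keyword
            driver.findElement(By.id("searchButton")).click();

            List<WebElement> results = driver.findElements(By.cssSelector("#results .entity-name"));
            assertTrue("Search returned no results", results.size() > 0);
            for (WebElement result : results) {
                // Every entity in the result list should relate to the keyword searched for.
                assertTrue("Unexpected entity in results: " + result.getText(),
                        result.getText().toLowerCase().contains("copper"));
            }
        } finally {
            driver.quit();
        }
    }
}
```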

Availability

 Before going live, predicted business usage patterns should indicate maximum stress levels. You should test
system availability against the maximum stress levels plus a safety margin for a defined period of time.

Internationalization

 Does the site offer the option to view non-English pages? If the choice of language is based on browser preferences, does it work on all the desired browsers? Many older browsers do not support language customization. All of these aspects need to be tested.
 Test that words are correctly displayed and that sentences are grammatically correct. Use a native speaker to verify that this is the case.

System Integration

 The data interface defines the format of data exchanged by front-end and back-end systems. Tools such as XML (Extensible Markup Language) alleviate data interface problems by providing document type definitions.
 The processing between front-end and back-end systems may be time dependent. For example, a back-end system may need to process a data transmission from the front-end system immediately or within a defined period. Tests should ascertain whether a system actually observes timeliness constraints and whether it activates data transmissions at the correct time.



 One system must often update information in another system. Verify that batch programs and remote
procedures perform the necessary update operations without side effects.

5.2 Size and Complexity of the project


The size and complexity of an MDM implementation would drive the testing strategy for the project.
Low complexity MDM implementation

A low complexity MDM implementation would contain only the basic features of MDM. Integration with 3rd party vendors would be very limited and there would not be integration with multiple vendors. Any changes to the application would be straightforward.

The testing of such systems would include:

 Testing of Interface
 Testing of integration with standalone applications

Test data management is simpler in these implementations. Since the teams are smaller and the content on the sites is lighter, the tester will have to manage smaller volumes of test data.

Medium complexity MDM implementation

A medium complexity MDM implementation would contain all the basic features as well as some advanced features of MDM. There would be multiple touch points with other vendors. The information in these systems is not hardcoded, and backend systems are used for publishing data on the website.

The testing of such systems would include all that is mentioned under Low complexity implementation and the items
mentioned below:

 Testing of data flow from backend


 Testing of information flowing between 3rd party vendors

Test data management would require pre-planning and coordination with other vendors. There should be a communication channel wherein the data coming into the system and going out of it is coordinated with 3rd party vendors.

Large complexity MDM implementation

A large complexity MDM implementation would contain all the advanced features and would involve multiple integrations with different 3rd party systems. The implementation would be complex and would require a large testing effort. Data from one system would flow to multiple systems.

The testing of such systems would include all that is mentioned in low and medium complexity implementations and
the items mentioned below:

 Security and integrity of data


 Security of personal information
 Internationalization
 Performance

Test data management would require proper planning and coordination with other vendors. The ownership of test data needs to be defined, and a central repository needs to be in place from which everyone can pick up and use data.

Approach for testing a large complexity MDM implementation:

Test Global and Test Distributed: The MDM system is global in spirit and structure. The different underlying systems may be on different continents, but they appear to integrate seamlessly over large, distributed and non-homogeneous networks and other communication channels. The testing team has to validate that changes in one system do not impact other systems.

Consider User Profile: The user profile varies in terms of role (Admin or Normal user). While testing the application, ensure that all the access rights of the user profile are taken care of.



6 Defect Management Process
A defect is a flaw in any aspect of a system including the requirements, the design or the code, that
contributes, or may potentially contribute, to the occurrence of one or more failures. This process is
essentially a workflow that defines how defects are captured, fixed, retested, and closed. Defects that are
discovered must be logged, tracked and managed to resolution in order to ensure they are not propagated
to the production environment.

6.1 Defect Status Meetings

In order to ensure defects are on a path to resolution, daily sessions (TAR sessions) will be conducted to review defects with the Client during the SIT and UAT phases. These sessions are critical to the success of testing, so it is important that key lead resources attend. During these sessions, we review the outstanding issues (New, Open and Reopen) by the priorities identified below. Some issues may require breakout sessions with other teams in order to resolve; such defects will be discussed briefly in the meeting, and follow-up meetings will be held immediately after the session.
The duration of the meeting will vary based upon the number of open defects and the speed at which defects are being worked.

6.2 Defect Priority Guidelines

Priority levels will be assigned for the defects as below:

Priority: P1 - Showstopper / Critical
Definition:
- Testing cannot continue until the issue is resolved
- Prevents execution of major (core) functionality and has no workaround
- Has major implications for the business
Examples:
 After clicking the Search button the application hangs
 Clicking a link leads to a system exception error
 Submission of data leads to a system exception or error page
 From a usability and accessibility perspective, flickering of the screen cannot be paused or stopped

Priority: P2 - High
Definition:
- Major functionality produces wrong results
- A workaround exists
- The resulting system has reduced usability for the end user
Examples:
 Search by specific text doesn't work
 Clicking on search from the Home page opens the wrong search page
 System responds incorrectly to invalid data; error handling not implemented
 Navigation within fields results in an error message pop-up saying the value entered is incorrect
 From a usability and accessibility perspective, moving content cannot be frozen

Priority: P3 - Medium / Significant
Definition:
- Minor functionality produces wrong results or is missed
Examples:
 Clicking on a link takes the user to the wrong location on the same page
 A comments field is not getting updated in the database
 Field validation missing
 Scroll bar not working; incorrect labels, instructional text, headings or sub-headings
 From a usability and accessibility perspective, tab order is not logical through links, forms, and objects

Priority: P4 - Low / Minor
Definition:
- A defect that does not affect the functionality of the system
- Only minor cosmetic issues
Examples:
 Spelling or grammatical mistakes on web pages
 Font type or colour used is not according to the specified format



6.3 Defect Reporting Workflow

[Defect reporting workflow diagram: the tester logs a defect with status "New" (with detailed steps to reproduce and a screenshot) and assigns it to the QA TL; the defect is discussed in the TAR session. If it is not a valid defect, it is rejected (Reject - Invalid Defect, Reject - Duplicate, or Reject - Can't be Reproduced) or routed to the appropriate issue lifecycle (for example an enhancement or training requirement), with the reason recorded in the Description field. If it is a valid defect, it is assigned to the Dev TL with a priority and then to a developer with status "Open"; the developer fixes it ("Fixed"), the fix is released to the testing environment ("Ready to Retest"), and the tester retests, either closing the defect ("Closed") or reopening it ("Reopen") and assigning it back to the developer.]



The diagram above captures the defect tracking process:

 The tester will enter all defects into the defect tracking system and assign them to the QA TL in New status. A defect will contain:
 Title/Summary
 Description with steps to reproduce and a screenshot or other relevant information
 Status
 Priority
 Module Name
 Discovered In release
 The defect is then discussed in TAR session.
 If the defect is acknowledged to be “Invalid”, i.e. the defect raised is invalid as per the current requirements, the defect is marked as Reject-Invalid Defect.
 If the defect is acknowledged to be “Valid”, then it is assigned to the Development TL who assigns it to the
concerned developer.
 When a developer acknowledges a defect, he will change the status to Open and start working on the defect
 If the developer finds a defect to be a duplicate of an existing open defect, they will assign it back to the testers mentioning the exact duplicate defect ID. Testers re-verify whether it is actually a duplicate; if so, the defect is marked as Reject – Duplicate, otherwise it is reassigned to the developers with an appropriate comment in Open status
 If the developer cannot reproduce the defect, he will change the status to Reject-Can’t be Reproduced and assign it back to the tester. The tester again tries to reproduce it; if it is reproducible, the tester assigns it back to the developer with an appropriate comment in Open state, otherwise it is Closed
 If the defect is reproducible, the developer fixes the defect, changes its status to Fixed, and keeps it until the next build is provided to QA
 Before every release, developers set the status to Ready to Retest, mention the build in which the defect has been fixed, and assign it back to the QA TL, who assigns it to the tester
 Once the defect fixes are released, the tester will retest to verify the defect. If the defect has been fixed, the status will be changed to Closed; if it has not been fixed, the tester will assign it back to the developer with status Reopen
 If the review team or developer requires further information about the issue, they will change the status to More Info Required and assign it back to the testers. Testers provide the required information and assign the defect back to the developer in Open status
 Before every release, if there are certain defects already known to developers, they log them with status Known Issue
 When a defect is raised and the review team acknowledges it as a known limitation of the system but decides against fixing it, releasing with the defect, the status will be set to ‘Known Issue’
 If the testing team has any suggestion about the functionality or UI which may result in a requirement change, a “Suggestion” will be logged for it. ‘Suggestion’ is the category of the defect and these artifacts will not be counted in the QA defect report.

The QA team will work only on those defects which are assigned to QA team members and are in Ready to Retest, Cannot Reproduce, More Info Required or Duplicate status.

6.4 Different stages of Defect

1. New – Indicates a new defect is logged in the defect tracking tool along with the severity of the defect. All “New” defects are assigned to the QA TL.
2. Open – A defect is assigned “Open” status when it is assigned to a developer for fixing, along with the priority assigned against it.
3. Fixed – The developer, after fixing the defect, changes its status to “Fixed” and does not assign it to anyone.
4. Ready to Retest – After a defect is fixed by the developer, it is released to the QA environment for retest once a new build containing the fix is deployed. The defect status is then changed to “Ready to Retest” and assigned to the QA TL, who in turn assigns it to testers for retest.



5. Reopen – On retesting the defect, if the tester finds that the defect is not fixed, its status is changed to “Reopen” and it is assigned back to the developer along with comments.
6. Reject-Duplicate – A defect found to be a duplicate of a defect already logged in the defect tracking tool will be rejected with status “Reject-Duplicate” by the developer, along with the duplicate defect ID, and will be assigned to the QA TL.
7. Reject-Can’t be Reproduced – If the defect assigned to the developer for fixing cannot be reproduced by the developer, it is rejected with status “Reject-Can’t be Reproduced” by the developer along with comments and will be assigned to the QA TL. The QA TL will discuss this defect with the tester and developer. If the defect can be reproduced, it will be assigned back to the developer with added comments and a screenshot with status Reopen; otherwise it will remain in “Reject-Can’t be Reproduced” status.
8. Reject-Invalid Defect – A defect can be labelled as an invalid defect when:
 The defect logged by the tester is not a requirement.
 The defect was logged by the tester due to misinterpretation of a requirement.
9. Closed – Defects that have been retested and are working fine are “Closed” by the tester.
10. More Info Required – If the review team or developer requires further information about the issue, the status is changed to “More Info Required”.
11. Known Issue – A known limitation of the system, or a defect already known to developers before a release.
12. Suggestion – If the testing team has a suggestion about the functionality or UI which may result in a requirement change, a “Suggestion” will be logged for it.



7 Test Environment
The following test environments would be made available for testing:
 Sapient Test QA Environment
 Client Test Environment
 Staging Environment
 Production Environment

In normal scenarios, testing will be done only in the QA environment. If the QA environment is inaccessible due to technical issues, then testing will be done on the Dev environment.

We would be using the following client environment setup for our testing:

Software/OS    Version
Windows XP     SP-4
Linux
MAC



8 Test Tools
The following table lists the test tools required for this project.

Requirement        Tool      Vendor       Version
Test Automation    Selenium  Open Source  2.19
Defect Tracking    Bugzilla  Open Source  4.2
Unit Testing       JUnit     Open Source  4.10



9 Suspension/Resumption Criteria
Testing will be suspended if:
 The test environment and backup environment are unavailable.
 Smoke testing for the build fails.
 An incorrect version of the code is deployed.
 Functionality is unstable, i.e. too many non-reproducible defects are encountered.
Testing will be resumed when:
 A stable test environment is available with a stable build and the correct version of the code.
 Smoke testing for the build has passed.
 Showstoppers are fixed.



10 Test Deliverables
Described in the table below are testing deliverables, responsibilities and details for the project.

S. No.: 1
Deliverable Name: Test Strategy Document
Details: A Test Strategy document will be created once and it will outline the following details:
 Scope and types of testing
 Testing approach and key activities
 Entry and exit criteria
 Defect reporting workflow

S. No.: 2
Deliverable Name: Test Plan
Details: This document contains the testing activities planned.



11 Assumptions

 MDM would be accessed through LAN or VPN.
 Proper permissions have been obtained to access real-time data from the standalone applications.
 All legal formalities have been carried out.



12 Communication approach
12.1 Status Reporting

The following status reports will be generated only during the UAT phase to track testing and provide visibility into the status of testing, outstanding issues and risks.

Test execution plan and status – module-wise / overall:
 Active, resolved and closed defects
 Defects by severity and priority
 % of test cases passed vs. failed vs. remaining to run
 After UAT starts, a defect triaging report for issues logged by the UAT team

12.2 Defect Review Session


Defect review sessions will be held during test script execution. Their task will be to review testing activities and to prioritize and assign any defects that have been raised during UAT. These TAR sessions will be held every day during the UAT phase, or as agreed upon with the Client.
Expected participants are Sapient and Client stakeholders.



13 Roles and Responsibilities

Role: Test Lead (Sapient)
Responsibilities:
 Responsible for drafting and executing the Testing Strategy as a whole
 Works with the development team to ensure bugs are fixed in a timely manner
 Reviews and ensures test scripts/cases are as per agreed standards
 Creates/updates test scripts/cases
 Assists the tester(s) in understanding the application and writing effective test cases
 Coordinates test activities
 Participates in TAR sessions/defect prioritization meetings during SIT/UAT
 Prepares and publishes test status reports at the end of the testing cycle/iteration
 Risk and issue escalation and tracking
 Will assist the CO test lead in SIT planning
 Provides test data created by the Hybris system

Role: Test Lead (Client)
Responsibilities:
 Plans and coordinates test activities for SIT/UAT testing in the Client test environment
 Ensures testers have access to the System Integration Testing environment
 Provides test data required from Client internal systems and external systems
 Participates in TAR sessions/defect prioritization meetings during SIT/UAT

Role: Tester(s) (Client)
Responsibilities:
 Responsible for security testing (Cybercom for external security testing), migration testing (migration imports performed by Sapient would be verified by Sapient, whereas overall migration testing would be done by the Client) and user acceptance testing
 Carries out System Integration Testing to verify that the Client’s internal and external systems work correctly after integration

Role: Tester(s) (Sapient)
Responsibilities:
 Understands the requirements/stories and the application
 Creates and updates test cases
 Executes test scripts/cases as per the plan
 Captures defects in the defect tracking tool
 Retests fixed defects and closes them



14 Client Responsibilities
The Client’s responsibilities are as follows:
 Provide the testing team access to the Client environments, i.e. Client Test Environment, Staging Environment and Production Environment.
 Provide test data required from standalone applications.
 Provide the UAT test plan for the testing.
 Provide performance test data.
 Carry out User Acceptance Testing (Client QA team).
 Provide resources to be used for UAT.
 Sign off on high-level scenarios.
 Sign off on test cases.
 Sign off on each iteration.



15 References
 Specifications of all standalone applications
 SRS



16 Approvals
Approval of Test Strategy document
By signing this, I confirm my approval of the Test Strategy.

Name Role Signature Date


Vikas Test manager

Shachi Test Lead

XYZ Client

