
Part 1 of 12

Q. 1: What is a Technical Review?

A Technical Review (also known as a peer review) is considered to be a formal review type, even though no managers are expected to attend. It involves a structured encounter in which one or more peers analyse the work with a view to improving the quality of the original work.
# Ideally led by the Moderator
# Attended by peers / technical experts
# Documentation is required
# No management presence
# Decision making
# Solving technical problems

Q. 2: What is a Walkthrough Review?

A walkthrough is a set of procedures and techniques designed for a peer group, led by the author, to review software code. It is considered to be a fairly informal type of review. The walkthrough takes the form of a meeting, normally between one and two hours in length.
# Led by the Author
# Attended by a peer group
# Varying level of formality
# Knowledge gathering
# Defect finding

Q. 3: What is an Inspection Review?

An inspection is a formal type of review. It requires preparation on the part of the review team members before the inspection meeting takes place. A follow-up stage is also a requirement of the inspection. This ensures that any re-working is carried out correctly.
# Led by a Moderator
# Attended by specified roles
# Metrics are included
# Formal process
# Entry and Exit Criteria
# Defect finding

Q. 4: What is an Informal Review?

An informal review is an extremely popular choice early on in the development lifecycle of both software and documentation. The review is commonly performed by a peer or someone with relevant experience, and should be informal and brief.
# No formal process
# Widely used review
# No documentation required
# Low cost

Q. 5: What is V & V?

Software Validation and Verification can involve analysis, reviewing, demonstrating or testing of all software developments. This includes both the development process and the development product itself. Verification and Validation are normally carried out at the end of the development lifecycle (after all software development is complete). But they can also be performed much earlier in the development lifecycle by simply using reviews.

Q. 6: What is Validation?

Validation involves the actual testing. This should take place after the verification phase has been completed. Validation: confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled. Validation: Are we building the right product?

Q. 7: What is Verification?

Verification would normally involve meetings and reviews to evaluate the documents, plans, requirements and specifications. Verification: confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. Verification: Are we building the product right?

Q. 8: What is a Waterfall Model?

The Waterfall model is also known as the Sequential model. Each stage follows on from the previous one. The testing is performed in a block as the last stage. Test planning and test creation are not considered until the actual software code has been written. This can result in problems being found much later in the project lifecycle than is desirable.

Q. 9: What is a V Model?

The V-Model is an industry standard framework that clearly shows the software development lifecycle in relation to testing. It also highlights the fact that testing is just as important as the software development itself. The relationships between development and testing are clearly defined. The V-Model gives the testing activities greater visibility, displaying a more balanced approach.

Q. 10: What is a Spiral Model?

The Spiral model is an incremental approach to both development and testing. It is used most effectively when the users do not know all of the requirements. From what is known, initial requirements can be defined. From these, the code and test cases are created. As time goes on, more details of the requirements become known and are implemented in further iterations of the design, coding and testing phases. The system is considered to be complete when enough of the iterations have taken place.

Q. 11: What is RAD?

RAD stands for Rapid Application Development, and is a software development process that was developed in the mid 1980s. It was developed to overcome the rigidity of processes such as the Waterfall Model. Elements:
# Prototyping
# Iterative Development
# Time-boxing
# Team Members
# Management Approach
# RAD Tools

Q. 12: What is DSDM?

DSDM (Dynamic Systems Development Method) is basically a high-level framework of already proven RAD techniques, together with management controls that are used to increase the chances of successful RAD projects. The high-level framework allows for a process that can be easily modified for an individual project's specific needs. However, due to this quite simple framework, the lack of detail can also result in poor implementation.

Q. 13: What are the Process Interfaces?

As a Tester, your focus will be fixed on the test process. But you must also consider the other processes that exist, and their interaction with the test process.
# Project Management
# Change Management
# Configuration Management
# Software Development
# Technical Writing
# Technical Support

Q. 14: What is Component Testing?

Component testing is also known as Unit, Module, or Program Testing. In simple terms, this type of testing focuses on testing the individual components themselves. It is common for component testing to be carried out by the developer of the software. This, however, gives a very low level of testing independence.
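As a small illustration, a component test is usually just an isolated check of a single unit of code. The sketch below is hypothetical: the calculate_discount() function is a stand-in for a real component, and the test functions follow the pytest convention of plain assert statements.

def calculate_discount(order_total, customer_type):
    # Hypothetical component under test, included so the example is self-contained.
    return order_total * 0.10 if customer_type == "gold" else 0.0

def test_no_discount_for_regular_customer():
    # Exercise the component in isolation and check one expected outcome.
    assert calculate_discount(order_total=100.0, customer_type="regular") == 0.0

def test_ten_percent_discount_for_gold_customer():
    assert calculate_discount(order_total=100.0, customer_type="gold") == 10.0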

Q. 15: What is Component Integration Testing?

This type of Integration testing is concerned with ensuring that the interactions between the software components at the module level behave as expected. Component Integration Testing is also often referred to as Integration Testing in the Small. It is commonly performed after any Component Testing has completed, and the behaviour tested may cover both functional and non-functional aspects of the integrated system.

Q. 16: What is Requirements Based Functional Testing?

Requirements-based Testing is simply testing the functionality of the software/system based on the requirements. The tests themselves should be derived from the documented requirements and not based on the software code itself. This method of functional testing ensures that the users will be getting what they want, as the requirements document basically specifies what the user has asked for.

Q. 17: What is Business Process Functional Testing?

Different types of users may use the developed software in different ways. These ways are analyzed and business scenarios are then created. User profiles are often used in Business Process Functional Testing. Remember that all of the functionality should be tested, not just the most commonly used areas.

Q. 18: What is Load Testing?

Testing the ability of the system to bear loads. An example would be testing that a system could process a specified number of transactions within a specified time period. You are effectively loading the system up to a high level, then ensuring it can still function correctly whilst under this heavy load.

Q. 19: What is Performance Testing?

A program/system may have requirements to meet certain levels of performance. For a program, this could be the speed with which it can process a given task. For a networking device, it could mean the throughput of network traffic. Often, Performance Testing is designed to be negative, i.e. to prove that the system does not meet its required level of performance.
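As a minimal sketch of a performance check, the example below times a task and fails if an assumed requirement is not met. Both process_batch() and the 2-second limit are hypothetical stand-ins, not taken from the text above.

import time

def process_batch(items):
    # Hypothetical task under test; stands in for the real component.
    return [item * 2 for item in items]

def test_batch_processed_within_two_seconds():
    start = time.perf_counter()
    process_batch(list(range(100_000)))
    elapsed = time.perf_counter() - start
    # Performance requirement assumed for illustration: 2 seconds.
    assert elapsed < 2.0, f"Processing took {elapsed:.2f}s, requirement is 2s"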

Q. 20: What is Stress Testing?

Stress Testing simply means putting the system under stress. The testing is not normally carried out over a long period, as this would effectively be a form of duration testing. Imagine a system was designed to process a maximum of 1000 transactions in an hour. A stress test would be seeing whether the system could actually cope with that many transactions in a given time period. A useful test in this case would be to see how the system copes when asked to process more than 1000.

Q. 21: What is Security Testing?

A major requirement in today's software/systems is security, particularly with the internet revolution. Security testing is focused on finding loopholes in the program's security checks. A common approach is to create test cases based on known problems from a similar program, and test these against the program under test.

Q. 22: What is Usability Testing?

This is where consideration is given to how the user will use the product. It is common for considerable resources to be spent on defining exactly what the customer requires and how simple it is to use the program to achieve their aims. For example, test cases could be created based on the Graphical User Interface, to see how easy it would be to use in relation to a typical customer scenario.

Q. 23: What is Storage Testing?

This type of testing may focus on the actual memory used by a program or system under certain conditions. The disk space used by the program/system could also be a factor. These factors may actually come from a requirement, and should be approached from a negative testing point of view.

Q. 24: What is Volume Testing?

Volume Testing is a form of Systems Testing. Its primary focus is to test the system while it is subjected to heavy volumes of data. Testing should be approached from a negative point of view, to show that the program/system cannot operate correctly when using the volume of data specified in the requirements.

Q. 25: What is Installability Testing?

A complicated program may also have a complicated installation process. Consideration should be given to whether the program will be installed by a customer or an installation engineer. Customer installations commonly use some kind of automated installation program. This would obviously have to undergo significant testing in itself, as an incorrect installation procedure could render the target machine/system useless.

Q. 26: What is Documentation Testing?

Documentation in today's environment can take several forms: it could be a printed document, an integral help file, or even a web page. Depending on the documentation media type, some example areas to focus on could be spelling, usability and technical accuracy.

Q. 27: What is Recovery Testing?

Recovery Testing is normally carried out by using test cases based on specific requirements. A system may be designed to fail under a given scenario, for example if attacked by a malicious user the program/system may have been designed to shut down. Recovery testing should focus on how the system handles the failure and how it handles the recovery process.

Q. 28: What is System Integration Testing?

This type of Integration Testing is concerned with ensuring that the interactions between systems behave as expected. It is commonly performed after any Systems Testing has completed. Typically, not all systems referenced in the testing are controlled by the developing organization. Some systems may be controlled by other organizations, but interface directly with the system under test.

Q. 29: What is UAT?

User Acceptance Testing, or UAT, is commonly the last testing performed on the software product before its actual release. It is common for the customer to perform this type of testing, or at least be partially involved. Often, the testing environment used to perform User Acceptance Testing is based on a model of the customer's environment. This is done to simulate as closely as possible the way in which the software product will actually be used by the customer.

Q. 30: What is Contract & Regulation Acceptance Testing?

This type of Acceptance Testing is aimed at ensuring that the acceptance criteria within the original contract have indeed been met by the developed software. Normally any acceptance criteria are defined when the contract is agreed. Regulation Acceptance Testing is performed when there are specific regulations that must be adhered to, for example safety regulations or legal regulations.

Q. 31: What is Operational Acceptance Testing?

This form of acceptance testing is commonly performed by a System Administrator and would normally be concerned with ensuring that functionality such as backup/restore, maintenance, and security is present and behaves as expected.

Q. 32: What is Alpha Testing?

Alpha Testing should be performed at the developer's site, and predominantly performed by internal testers only. Often, personnel from other company departments can act as testers. The marketing or sales departments are often chosen for this purpose.

Q. 33: What is Beta Testing?

Beta Testing is commonly performed at the customer's site, and normally carried out by the customers themselves. Potential customers are often eager to trial a new product or new software version. This allows the customer to see any improvements at first hand and ascertain whether or not it satisfies their requirements. On the flip side, it gives invaluable feedback to the developer, often at little or no cost.

Q. 34: What is a Re-Test?

It is imperative that when a fault is fixed it is re-tested to ensure the fault has indeed been correctly fixed. Re-test: "Whenever a fault is detected and fixed then the software should be re-tested to ensure that the original fault has been successfully removed."

Q. 35: What is Regression Testing?

When checking a fixed fault, you can also consider checking that other existing functionality has not been adversely affected by the fix. This is called Regression Testing. Regression Test: "Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression faults) and that the modified system still meets its requirements."
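A minimal sketch of the idea in an automated suite: after a fix, the test that exposed the original fault is re-run (the re-test), and surrounding tests are re-run as well to catch regression faults. The rounding helper and the "fix" it illustrates are hypothetical; the tests follow the pytest convention.

import decimal

def round_price(value):
    # Hypothetical fixed component: rounds half-up to two decimal places.
    return float(decimal.Decimal(str(value)).quantize(
        decimal.Decimal("0.01"), rounding=decimal.ROUND_HALF_UP))

def test_retest_original_fault():
    # Re-test: the case that originally failed before the fix.
    assert round_price(2.675) == 2.68

def test_regression_existing_behaviour():
    # Regression tests: existing behaviour must be unaffected by the fix.
    assert round_price(2.0) == 2.0
    assert round_price(2.674) == 2.67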

Q. 36: What is a Test Plan Document?

A Test Plan should be a single document that basically contains what is going to be tested, why it is going to be tested, and how it is going to be tested. It is also important to clarify what is not going to be tested in the software product. With regard to using a standard Test Plan layout, we can look to the advice given by the IEEE (Institute of Electrical and Electronics Engineers) in IEEE Std 829-1998.

Q. 37: What is a Generic Test Process?

A standard test process that is commonly used exists within the BS7925-2 Standard for Software Component Testing:
# Test Planning
# Test Specification
# Test Execution
# Test Checking & Recording
# Checking for Test Completion

Q. 38: What is a Test Policy?

This should apply to both new projects and maintenance work. Normally fairly short in length, the test policy should be a high-level document, and should contain the following items:
# Definition of testing
# The testing process
# Evaluation of testing
# Quality levels
# Approach to test process improvement

Q. 39: What is a Test Strategy?

Based on the test policy, the test strategy is designed to give an overview of the test requirements for a programme or even an organization. Information relating to risks should be documented here, specifically the risks that will be addressed by the testing, and the specific tests that will be used against each risk.

Q. 40: What is a Project Plan?

Exactly how the test strategy for a particular project will be implemented is set out in the project plan. The project test plan will normally be referenced from the overall project plan. In relation to the test strategy, the project plan should detail items from the test strategy that it is complying with, and also items it is not complying with.

Q. 41: What is a Phase Test Plan?

The specific details of the approach taken by the testing for a specific test phase are shown in this document. It can be thought of as being based on the project plan, but with a greater amount of detail included, such as testing activities based on a day-to-day plan, or expected numbers of man-hours to complete individual tasks.

Q. 42: What is Risk Management?

Risk management comprises the following three components:
# Risk Identification
# Risk Analysis
# Risk Mitigation
Risk management should be a consideration for everyone involved in the project.

Q. 43: What is Risk Identification?

The following techniques can all be used to identify risks associated with products and projects. The list is by no means rigid, as many organizations will have their own techniques.
# Expert Interviews
# Independent Assessment
# Risk Templates
# Lessons Learned
# Risk Workshops
# Brainstorming and Checklists

Q. 44: What is Risk Analysis?

Risk analysis is simply studying the identified risks. A simple formula can be used to calculate risk:

Exposure = Frequency (likelihood) x Severity (impact)

By using the above formula we can produce a figure, otherwise known as the exposure.
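A minimal sketch of the exposure calculation, using illustrative 1-to-5 scales for likelihood and impact; the risk names and scores are invented for the example.

def risk_exposure(frequency, severity):
    # Exposure = Frequency (likelihood) x Severity (impact)
    return frequency * severity

# Illustrative risks scored on a 1 (low) to 5 (high) scale.
risks = {
    "Payment interface fails under load": (4, 5),
    "Report layout slightly misaligned": (3, 1),
}

# Rank the risks by exposure, highest first.
for name, (frequency, severity) in sorted(
        risks.items(), key=lambda item: -risk_exposure(*item[1])):
    print(f"{name}: exposure = {risk_exposure(frequency, severity)}")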

Q. 45: What is Risk Mitigation?

Risk mitigation is simply the response to the analyzed risks. A choice must be made as to what action should be carried out once a risk has been identified. Some possible choices could be:
# Do nothing
# Take preventative action (test it)
# Contingency plan (what we should do if the predicted fault actually occurs)

Q. 46: What is Equivalence Partitioning?

This method allows you to effectively partition the possible program inputs. For a given input field, it should not matter which values are entered, as long as they are within the correct range and of the correct type. So the point of equivalence partitioning is to reduce the amount of testing by choosing a small selection of the possible values to be tested, as the program will handle them all in the same way.

Q. 47: What is Boundary Value Analysis?

By the use of equivalence partitioning, a tester can perform effective testing without testing every possible value. This method can be enhanced further by another method called Boundary Value Analysis. After time, an experienced Tester will often realize that problems can occur at the boundaries of the input and output spaces. When testing only a small number of possible values, the minimum and maximum possible values should be amongst the first items to be tested.
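A minimal sketch of both techniques for an assumed input field that accepts ages from 18 to 65: one representative value is chosen from each equivalence partition, and boundary value analysis then adds the values at and just beyond each boundary. The field, range and validator are assumptions for illustration only.

# Assumed valid range for an "age" input field: 18 to 65 inclusive.
LOWER, UPPER = 18, 65

# Equivalence partitioning: one representative per partition.
partitions = {
    "invalid below range": 10,
    "valid within range": 40,
    "invalid above range": 70,
}

# Boundary value analysis: values at and either side of each boundary.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

def is_valid_age(age):
    # Stand-in for the component under test.
    return LOWER <= age <= UPPER

for label, value in partitions.items():
    print(label, value, "->", is_valid_age(value))
print("boundary inputs:", boundary_values)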

Q. 48: What is Classification Tree?

The classification tree method is also known as the decision tree method, and the terms can be used interchangeably as they mean the same thing. A decision tree can be learned by splitting the source set into subsets based on an attribute value test. This process is repeated on each subset recursively. The recursion is complete when splitting is either not possible, or a single classification can be applied to each element of the subset.

Q. 49: What is State Transition Testing?

This type of Black-box testing is based on the concept of states and finite-states, and relies on the tester being able to view the software's states, the transitions between states, and what will trigger a state change. Test cases can then be designed to execute the state changes.

Q. 50: What is Statement Testing?

This testing method involves using a model of the source code which identifies statements. These statements are then categorized as being either executable or non-executable. In order to use this method, the input to each component must be identified. Also, each test case must be able to identify each individual statement. Lastly, the expected outcome of each test case must be clearly defined.

Q. 51: What is Branch Decision Testing?

This test method uses a model of the source code which identifies individual decisions and their outcomes. A decision is defined as being an executable statement containing its own logic. This logic may also have the capability to transfer control to another statement. Each test case is designed to exercise the decision outcomes. In order to use this method, the input to each component must be identified.

Q. 52: What is Branch Condition Testing?

Branch Condition Testing uses a model of the source code, and identifies decisions based on individual Boolean operands within each decision condition. A decision is defined as being an executable statement containing its own logic. An example of a decision would be a loop in a program.

Q. 53: What is Branch Condition Combination Testing?

Branch Condition Combination Testing uses a model of the source code, and identifies decisions based on combinations of Boolean operands within decision conditions. This logic may also have the capability to transfer control to another statement. The decision condition

is a Boolean expression which is evaluated to determine the outcome of the decision.

Q. 54: What is Requirements Based Functional Testing?

Requirements-based Testing is simply testing the functionality of the software/system based on the requirements. The tests themselves should be derived from the documented requirements and not based on the software code itself. This method of functional testing ensures that the users will be getting what they want, as the requirements document basically specifies what the user has asked for.

Q. 55: What is Usability Testing?

This is where consideration is given to how the user will use the product. It is common for considerable resources to be spent on defining exactly what the customer requires and how simple it is to use the program to achieve their aims. For example, test cases could be created based on the Graphical User Interface, to see how easy it would be to use in relation to a typical customer scenario.

Q. 56: What is Volume Testing?

Volume Testing is a form of Systems Testing. Its primary focus is to test the system while it is subjected to heavy volumes of data. Testing should be approached from a negative point of view, to show that the program/system cannot operate correctly when using the volume of data specified in the requirements.

Q. 57: What is Performance Testing?

A program/system may have requirements to meet certain levels of performance. For a program, this could be the speed with which it can process a given task. For a networking device, it could mean the throughput of network traffic. Often, Performance Testing is designed to be negative, i.e. to prove that the system does not meet its required level of performance.

Q. 58: What is Stress Testing?

Stress Testing simply means putting the system under stress. The testing is not normally carried out over a long period, as this would effectively be a form of duration testing. Imagine a system was designed to process a maximum of 1000 transactions in an hour. A stress test would be seeing whether the system could actually cope with that many transactions in a given time period. A useful test in this case would be to see how the system copes when asked to process more than 1000.

Q. 59: What is Dynamic Analysis?

Dynamic analysis is a testing method that can provide information on the state of software. It achieves this dynamically, i.e. it provides information while the software is actually running. It is commonly used to exercise parts of the program that use memory resources, e.g.:
# Memory allocation
# Memory usage
# Memory de-allocation
# Memory leaks
# Unassigned pointers

Q. 60: What is Static Analysis?

Static Analysis is a set of methods designed to analyze software code in an effort to establish that it is correct, prior to actually running the software. As we already know, the earlier we find a fault the cheaper it is to fix. So by using Static Analysis, we can effectively test the program even before it has been run. This will obviously only find a limited number of problems, but at least it is something that can be done very early on in the development lifecycle.

Q. 61: What is Control Flow Graphing?

Control flow graphs display the logic structure of software. The flow of logic through the program is charted. It is normally used only by Developers, as it is a very low-level form of testing, often used in Component Testing. It can be used to determine the number of test cases required to test the program's logic. It can also provide confidence that the detail of the logic in the code has been checked.

Q. 62: What is Cyclomatic Complexity?

Cyclomatic Complexity is a software metric that is used to measure the complexity of a software program. Once we know how complex the program is, we know how easy it will be to test.

C = E - N + 2P

where:
# C = Cyclomatic Complexity
# E = number of edges
# N = number of nodes
# P = number of connected components
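A small worked example of the formula: for a single function (P = 1) whose control flow graph has 7 edges and 6 nodes, C = 7 - 6 + 2(1) = 3, suggesting three independent paths, and therefore at least three test cases, are needed. The edge and node counts are illustrative.

def cyclomatic_complexity(edges, nodes, components=1):
    # C = E - N + 2P for a control flow graph.
    return edges - nodes + 2 * components

# Example: a function with an if/else and a loop might yield E=7, N=6.
print(cyclomatic_complexity(edges=7, nodes=6))  # -> 3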

Q. 63: What are Lines of Code?

The most basic form of complexity metric is the Lines of Code metric, or LOC metric. Its purpose, like other complexity metrics, is to estimate the amount of effort that will be required not only to develop such a program, but also to assist in estimating how much effort will be required to test it. In its simplest form, we could use the LOC metric by literally counting the number of lines of code in the program.

Q. 64: What is Data Flow Analysis?

The idea behind Data-flow Analysis is to work out the dependencies between items of data that are used by a program. When a program is run, it rarely runs in a sequential order, i.e. starting at line 1 and finishing at line 100. What usually happens is that the dependencies of the data within the program will determine the order. Data-flow Analysis can be used to find definitions that have no intervening use. It can also be used to detect variables that are used after they have effectively been killed.

Q. 65: What is Error Guessing?

Why can one Tester find more errors than another Tester in the same piece of software? More often than not this is down to a technique called Error Guessing. To be successful at

Error Guessing, a certain level of knowledge and experience is required. A Tester can then make an educated guess at where potential problems may arise. This could be based on the Tester's experience with a previous iteration of the software, or just a level of knowledge in that area of technology.

Q. 66: What is Exploratory Testing?

This type of testing is normally governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are few or no specifications available. It should only really be used to assist with, or complement, a more formal approach. It can basically ensure that major functionality is working as expected without fully testing it.

Q. 67: What is Ad-hoc Testing?

This type of testing is considered to be the most informal, and by many the least effective. Ad-hoc testing is simply making up the tests as you go along. Often, it is used when there is only a very small amount of time to test something. A common mistake with Ad-hoc testing is not documenting the tests performed and the test results. Even when this information is included, more often than not additional information is not logged, such as software versions, dates, test environment details etc.

Q. 68: What is Random Testing?

A Tester normally selects test input data from what is termed an input domain. Random Testing is simply when the Tester selects data from the input domain randomly. As you can tell, there is little structure involved in Random Testing. To avoid the uncertainty this introduces, a more structured Black-box test design could be implemented instead. However, using a random approach can save valuable time and resources if used in the right circumstances.
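A minimal sketch of random testing against an assumed input domain, here integer amounts between 1 and 10,000; the transfer() function is a hypothetical component under test and the fixed seed simply makes the run repeatable.

import random

def transfer(amount):
    # Hypothetical component under test: rejects non-positive amounts.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return amount

def test_random_inputs_from_domain():
    random.seed(1)  # fixed seed so the run is reproducible
    for _ in range(100):
        amount = random.randint(1, 10_000)  # assumed input domain: 1..10,000
        assert transfer(amount) == amount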

Q. 69: What is a Quality Assurance Standard?

A Quality Assurance (QA) standard simply specifies that testing should be performed. Example: ISO 9000.

Q. 70: What are Industry Specific Standards?

An industry specific standard will detail exactly what level of testing is to be performed. Examples:
# Railway signalling standard
# DO-178B
# Nuclear Industry standard
# MISRA guidelines for motor vehicle software
# Pharmaceutical standards

Q. 71: What are Testing Standards?

Testing standards will detail how to perform the testing. Ideally, a testing standard should be referenced from a QA or industry specific standard. Example: BS7925-1, BS7925-2.

Q. 72: What is the Definition of a Review?

Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE] A review should be performed when all of the supporting documentation is available. This can include design documents, requirements documents and standards documents: basically any documentation that has either been influential or is applicable to the document to be reviewed.

Q. 73: What are the Review Roles?

Organizations will commonly have different names for the roles listed below, but this will give you an idea of a commonly used set of roles used throughout the world.
# Manager
# Moderator
# Author

# Reviewer
# Scribe

Q. 74: What is the Review Process Structure?

An example of a typical review process is below. This is probably the most documented review process you will find in the software development world, and is open to interpretation:
# Planning
# Kick-off
# Preparation
# Meeting
# Rework
# Follow-up
# Exit Criteria

Q. 75: What is Incident Management?

We term an incident any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. An incident should be raised when the actual result differs from the expected result. After the inevitable investigation of the incident, there may be a reason other than a software fault, for example:
# Test environment incorrectly set up
# Incorrect Test Data used
# Incorrect Test Specification

Q. 76: What is IEEE Std. 1044-1993?

This standard aims to provide a standard approach to the classification of anomalies found in software. It includes descriptions of the processes involved in a software life cycle, including details on how anomalies should be recorded and subsequently processed. It consists of four sequential steps: Recognition, Investigation, Action and Disposition. Each of those steps has three administrative activities: Recording, Classifying and Identifying Impact.

Q. 77: What is the definition of a Maturity Model?

A maturity model is basically a collection of elements that are structured in such a way that they can describe the characteristics of processes and their effectiveness. A maturity model can provide:
# A structure for organising actions
# Use of previous experience
# Value of improvements
# A starting point
# A shared vision

Q. 78: What is the SEI Capability Maturity Model (CMMI)?

The Capability Maturity Model, simply put, is a baseline of practices that should be implemented in order to develop or maintain a product. The product can be completely software, or only partially software. The SW-CMM focuses on software practices, whereas with the CMMI you may find both software and systems practices.

Q. 79: What are the CMM Maturity Levels?

The CMM defines five maturity levels, which form the top-level structure of the CMM itself. Each level is basically a foundation that can be built upon to improve the process in sequence, starting with basic management practices and progressing through successive proven levels.
# Initial
# Managed
# Defined
# Quantitatively Managed
# Optimizing

Q. 80: What are the CMM Capability Levels?

The software process capability defines what can be achieved by undertaking a specific software process. It achieves this by describing the range of expected results.

There are six capability levels:
# Incomplete
# Performed
# Managed
# Defined
# Quantitatively Managed
# Optimizing

Q. 81: What is the definition of ISO/IEC 15504 (SPICE)?

ISO/IEC 15504 is also known as SPICE. SPICE stands for Software Process Improvement and Capability Determination. It is essentially a framework for assessing software processes. Rather than concerning itself with specific standards, ISO/IEC 15504 concerns itself with the capabilities provided by an organisation's structure. These structures include its management structure and its process definition structure.

Q. 82: What is the definition of TMM?

The Illinois Institute of Technology (IIT) developed the Testing Maturity Model (TMM) in 1996. The main reason for developing the TMM was that existing maturity models didn't properly address real testing issues. It was designed to complement the existing CMM. The main purpose of the TMM is to support assessment and improvement drives within an organization. The model comprises a Maturity Model and an Assessment Model.

Q. 83: What are the TMM Maturity Levels?

The maturity levels are basically defined levels of maturity that can be achieved by showing that specific practices have been carried out. The TMM has five achievable levels:
# Initial
# Phase Definition
# Integration
# Management and Measurement
# Optimization / Defect Prevention and Quality Control


Q. 84: What is the definition of TPI?

Developed by Koomen and Pol in 1997, the Test Process Improvement Model, or TPI, was created with the goal of simplifying the sometimes over-complicated testing process. The TPI model identifies the good and bad parts of a testing process. The maturity of the process can also be assessed by using the TPI. The TPI consists of the following four components:
# A Maturity Model
# A Test Maturity Matrix
# A Checklist
# Improvement Suggestions

Q. 85: What is a TPI Model?

The TPI model consists of three maturity levels and fourteen scales. The individual levels contain several different scales. The scales themselves provide an indication of which key areas require improvement.
# Scales 1 to 5 focus on bringing the test process under control
# Scales 6 to 10 focus on establishing test process efficiency
# Scales 11 to 14 focus on test process optimisation

Q. 86: What is a TPI Test Maturity Matrix?

The TPI takes into account the different aspects of a test process, including design techniques, test tool usage and reporting. Structured evaluation of the various key areas highlights the test process's strengths and weaknesses. The state of a key area is determined by assigning a level to it, commonly A, B, C and so on. The levels are increased based on time, cost and quality.

Q. 87: What are the Requirements Testing Tools?

This type of tool is designed to assist with verification and validation of requirements, for example consistency checking.

Q. 88: What are the Static Analysis Tools?

By examining the code instead of running test cases through it, this type of tool can provide information on the actual quality of the software. Cyclomatic complexity is one such characteristic that can be obtained by using this type of tool.

Q. 91: What are the Test Running Tools?

These are an extremely popular type of tool. They provide capture and replay facilities for WIMP interface based applications. The tools can simulate mouse movement, mouse clicks and keyboard inputs. The tools can even recognise windows and buttons, making them extremely versatile. The test procedures are normally written in a specific scripting language. This tool is another popular choice for regression testing.

Q. 92: What are the Test Harnesses?

If the software under test does not have a user interface, then test harnesses and drivers can be used to execute the software. These types of tools can be bought off the shelf, but more commonly they are built for a specific purpose.

Q. 93: What are the Test Script Generators?

A test script generator creates actual test scripts based on information held within a test specification. Simulators are commonly used where it is impracticable to use the real system, for example software to control a space probe's trajectory.

Q. 94: What are the Performance Test Tools?

This type of tool comprises two components: Load Generation and Test Transaction Measurement. Load Generation is commonly performed by running the application using its interface or by using drivers. The number of transactions performed this way is then logged. Performance test tools will commonly be able to display reports and graphs of load against response time.

Q. 95: What are the Dynamic Analysis Tools?

Run-time information on the state of the executing software is obtained by using Dynamic Analysis Tools. These tools are ideally suited to monitoring the use and allocation of memory. Faults such as memory leaks and unassigned pointers can be found, which would otherwise be difficult to find manually.

Q. 96: What are the Debugging Tools?

Debugging tools are often used by programmers to try and reproduce code-related errors in order to investigate a problem. The debugger allows the program to be run line by line. This enables the program to be halted on demand so that program variables can be examined and set.

Q. 97: What are the Comparison Tools?

This type of tool is used to highlight differences between actual results and expected results. Off-the-shelf Comparison Tools can normally deal with a range of file and database formats. This type of tool often has filter capabilities to allow rows or columns of data, or even areas on a screen, to be ignored.

Q. 98: What are the Test Management Tools?

Test Management Tools commonly have multiple features. Test Management is mainly concerned with the management, creation and control of test documentation. More advanced tools have additional capabilities, for example result logging and test scheduling.

Q. 99: What are the Coverage Measurement Tools?

This type of tool provides objective measures of structural test coverage when the actual tests are executed. Before the programs are compiled, they are first instrumented. Once this has been completed they can then be tested. The instrumentation process allows the coverage data to be logged whilst the program is running. Once testing is complete, the logs can provide statistics on the details of the tests covered.

Q. 100: What are the Hyperlink Testing Tools?

These tools are simply used to check that no broken hyperlinks exist on a web site.
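A minimal sketch of the idea behind such tools, using only the Python standard library to request each link from an assumed list and report those that do not respond successfully; the URLs are illustrative.

import urllib.request
import urllib.error

def check_links(urls):
    broken = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                if response.status >= 400:
                    broken.append((url, response.status))
        except (urllib.error.URLError, OSError) as exc:
            # Unreachable hosts and 4xx/5xx responses end up here.
            broken.append((url, str(exc)))
    return broken

# Illustrative list of links harvested from a page.
print(check_links(["https://example.com/", "https://example.com/missing-page"]))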

Q. 101: What are the Monitoring Tools?

These tools are typically used for testing e-commerce and e-business applications. Their main purpose is to check web sites to ensure that they are available to customers and also to produce warnings if problems are detected.

Q. 102: What are the Security Testing Tools?

These tools are commonly used for testing e-commerce and e-business applications, and sometimes web sites. A security testing tool will check for any parts of a web based system that could cause potential security risks if attacked.

Q. 103: What are the Test Oracles?

A Test Oracle is used to automatically generate expected results. Test Oracles are commonly used in situations where an old system is upgraded to a new system with the same functionality, so the old system can be used as an Oracle.
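A minimal sketch of the oracle idea: the legacy implementation generates the expected results against which the new implementation is compared. Both functions are hypothetical stand-ins, and the test uses the pytest convention.

def legacy_tax(amount):
    # Old system with trusted behaviour: acts as the test oracle.
    return round(amount * 0.2, 2)

def new_tax(amount):
    # New implementation under test.
    return round(amount * 20 / 100, 2)

def test_new_system_against_oracle():
    for amount in [0, 9.99, 100, 12345.67]:
        assert new_tax(amount) == legacy_tax(amount)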

Q. 104: What is the Tool Selection Process?

A suggested tool selection and evaluation process is:
# Identify the actual problem or requirement
# Ensure that there are no obvious alternative solutions
# Prepare a business case
# Identify any specific constraints
# Determine required tool features or characteristics
# Prepare a short-list of possible suitable tools
# Perform a detailed evaluation
# Perform a competitive trial, if needed

Q. 105: What are the Pilot Projects?

The last thing we want is to introduce a tool into the organization, only to find a few weeks down the line that it fails, resulting in potentially disastrous scenarios. In order to avoid this situation, we can implement a pilot project. The benefits of using a pilot project are:
# Gaining experience using the tool
# Identifying any test process changes
# Identifying the suitability of, and any shortcomings in, the tool

Q. 106: What are the constituents of a Test Tool Implementation Team?

An implementation team can be formed to evaluate a new tool, consisting of:
# A Champion
# A Change Agent
# A Tool Custodian

Q. 107: What are the User Skills?

A User is someone who has had experience of actually using the software under test, or similar types of software. This knowledge can be useful for determining the type of faults that a typical user may come across, which would typically also have the most impact on the development. The User would probably not have sufficient knowledge to test the software to extreme depths though.

Q. 108: What are the Developer Skills?

A developer's background may have provided them with experience of code, design or requirements analysis. This knowledge can be extremely useful, as the developer would probably have some idea, when looking at the software, of how it was developed, and so would probably know where to look for weaknesses.

Q. 109: What are the Tester Skills?

Previous knowledge of testing has its obvious advantages. Testers should be able to analyse a specification, design test cases, execute test cases, and produce results and reports. An individual with previous testing experience would also have the right mindset for testing, as

they would already know the reasoning behind why testing is performed.

Q. 110: What are Belbin's Action Oriented Roles?

Shaper: Challenging, dynamic, thrives on pressure. Has the drive and courage to overcome obstacles. Prone to provocation. Can offend people's feelings.
Implementer: Disciplined, reliable, conservative and efficient. Somewhat inflexible. Slow to respond to new possibilities.
Completer Finisher: Painstaking, conscientious, anxious. Searches out errors and omissions. Delivers on time. Inclined to worry unduly. Reluctant to delegate.

Q. 111: What are Belbin's People Oriented Roles?

Coordinator: Mature, confident, a good chairperson. Clarifies goals, promotes decision-making, delegates well. Can often be seen as manipulative.
Team Worker: Co-operative, mild, perceptive and diplomatic. Listens, builds, averts friction. Indecisive in crunch situations.
Resource Investigator: Extrovert, enthusiastic, communicative. Explores opportunities. Develops contacts. Over-optimistic. Loses interest after a short period.

Q. 112: What are Belbin's Thought Oriented Roles?

Plant: Creative, imaginative, unorthodox. Solves difficult problems. Ignores incidentals. Too pre-occupied to communicate effectively.
Monitor Evaluator: Sober, strategic and discerning. Sees all options. Judges accurately. Lacks drive and ability to inspire others.
Specialist: Single-minded, self-starting, dedicated. Provides knowledge and skills in rare supply. Contributes only on a narrow front. Dwells on technicalities.

Q. 113: What are the roles of the Test Leader?

The Test Leader will commonly come from a testing background and have a full understanding of how testing is performed. They will also possess good managerial expertise. They are also responsible for ensuring that test coverage is sufficient and will be required to produce reports.

Q. 114: What are the roles of the Tester?

The Tester obviously provides the skills necessary to perform the testing itself. This role can include test design and test execution. Automated testing skills are also a possible requirement of this role.
# Preparation of test data
# Execute tests
# Review other people's tests
# Review Test Plan
# Involvement in automation of tests
# Create Test Specifications

Q. 115: What are the roles of the Client?

The client is effectively the project sponsor, and will provide the budget for the project. The Client can also be the business owner.

Q. 116: What are the roles of the Project Manager?

Management skills are provided by the Project Manager. The Project Manager will be actively involved throughout the project and will provide feedback to the client.

Q. 117: What are the roles of the Developer?

A Developer will provide the skills to write the actual software code and perform Unit Testing. They may also be called upon at a later stage to provide bug fixes and technical advice.

Q. 118: What are the roles of the Business Analyst?

The Business Analyst will provide knowledge of the business and analysis skills. The Business Analyst will also be responsible for creating the User Requirements based on talks with the Users.

Q. 119: What are the roles of the Systems Analyst?

Systems design will be provided by the Systems Analyst. The Systems Analyst will also be responsible for developing the Functional Specification from the User Requirements.

Q. 120: What are the roles of the Technical Designer?

Technical detail and support for the system design is the responsibility of the Technical Designer. This role may include database administration.

Understanding the Utility of Keyword-Driven Test Automation Tools

Keyword-driven testing is a way to execute test scripts at a higher level of abstraction. The idea is similar to that of a service or subroutine in programming, where the same code may be executed with different values. Keywords are defined to represent a script, and a tool can then act as a link between the keywords and the tool executing the corresponding test script. Values may be assigned for parameters associated with the keywords. The tools make it possible to use parameter-driven test scripts without having to change the (often complicated) scripts in the execution tool. Keywords are usually related to higher-level functionality or business procedures. They may also reflect use cases.

What are the tools for keyword-driven testing?

The tools for keyword-driven testing are also known as script wrappers, because they wrap the technical part of the test (the actual test scripts and the test execution tool) so that the testers only need to know about the high-level keywords. Keywords may be held in spreadsheets or tables, and longer sequences of test script executions can be specified by sequences of keywords.

A test sequence defined by keywords in a table may appear like the following, where the parameter columns P1, P2 and P3 hold values such as titles and customer names (Mr., Ms., Raja, Gupta, Sunith, Amman, Shakun, Bhavna, Ritu and so on):

Keyword          P1    P2    P3
Create customer  ...   ...   ...
Create customer  ...   ...   ...
Find customer    ...   ...   ...
Edit customer    ...   ...   ...
Find customer    ...   ...   ...
Find customer    ...   ...   ...
Delete customer  ...   ...   ...
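A minimal sketch of how a script wrapper might map such table rows onto underlying test scripts; the keyword names, actions and parameter values are illustrative only.

# Illustrative actions standing in for the real, more complicated test scripts.
def create_customer(title, first_name, last_name):
    print(f"creating {title} {first_name} {last_name}")

def find_customer(last_name):
    print(f"finding customer {last_name}")

def delete_customer(last_name, confirm):
    print(f"deleting customer {last_name} (confirm={confirm})")

# The wrapper: keyword -> test script.
KEYWORDS = {
    "Create customer": create_customer,
    "Find customer": find_customer,
    "Delete customer": delete_customer,
}

def run(table):
    # Each row is a keyword followed by its parameter values (P1, P2, ...).
    for keyword, *params in table:
        KEYWORDS[keyword](*params)

run([
    ("Create customer", "Mr.", "Raja", "Gupta"),
    ("Find customer", "Gupta"),
    ("Delete customer", "Gupta", "Yes"),
])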

Each keyword has a number of parameters with specific meanings. Keyword-driven testing is getting more and more sophisticated, introducing several levels of abstraction between the tester and the technical test scripts. Test wrapping tools are available commercially and as open source, but they are also very often homemade and usually quite simple, yet very effective. Keyword-driven testing requires a good overview of the test assignment and a high level of abstraction, as all parameterization does. This is demanding, but can be rewarding for the test in the long run.

Dynamics of Reporting System of Software Testing Progress & its Completion

Test managers play an active role in managing the entire process of testing and plan concrete strategies for effective reporting on the completion of the testing effort. A discussion is presented here to explain the system of test progress management and subsequent reporting on completion. Test execution, recording, control, re-testing, and regression testing must be continued until we believe that the exit criteria have been achieved. The purpose of test progress and completion reporting is to stay in control of the testing and deliver the results of the testing activities in such a way that they are understandable and useful to the stakeholders.

The inputs on which the process of test progress and completion reporting is based are:
1) Test plan
2) Measurements from the test development and execution processes

The activities involved in test progress and completion reporting are:
1) Comparing actual measurements with estimates and planned values
2) Reporting test results

The output comprises the following:
1) Presentation of test progress
2) Test report

Input to Test Progress and Completion Reporting: The input from the level test plan that we need for this process is:
1) Scheduling and staffing for the activities
2) Exit criteria

Documentation of Test Progress and Completion Reporting: The documentation of the progress must be presented in various ways according to who is receiving it. The audience may be the customer, higher management, project management and participants, and testers. At the completion of each test level a test summary report should be produced. The ultimate documentation of completion is the final test summary report for the entire test assignment.

Activities for Test Progress and Completion Reporting:

Checking for Completion: A check against the test exit criteria is mandatory before we can say that the testing is completed at any level. To warrant a stop it is important to ensure that the product has the required quality. The exit criteria are tightly connected to the coverage items for the test, the test case design techniques used, and the risk of the product. The exit criteria therefore vary from test level to test level. Examples of exit criteria are:
1) Specified coverage has been achieved
2) Specified number of failures found per test effort has been achieved

3) No known serious faults
4) The benefits of the system as it is are bigger than the known problems

If the exit criteria are not met, the test cannot simply be stopped. Iteration in the test process must take place: we have to go back to where something can be repeated to ensure that the exit criteria are fulfilled. In most cases additional test procedures are required. This means that the test analysis and design process must be revisited and more test cases and procedures added to increase coverage. These test procedures must then be executed, and the results recorded and checked. Finally the checking of the exit criteria must be completed.

As an alternative, we can revise the test plan by relaxing the test exit criteria or making them stricter. All changes made to the test completion criteria need to be properly documented. It is better to first identify the associated risks and reach a consensus with the customer on the changes. Changing the test plan by adjusting the completion criteria should be regarded as an emergency measure and be very well accounted for.

When all test completion criteria are met and the report approved, the test object can be released. Release has different meanings at different points in the development life cycle:
1) When the test is a static test, the test object (usually a document) can be released for use as the basis for further work.
2) When the test is a test level for dynamic test, the test object is progressively released from one test level to the next.
3) Ultimately the product can be released to the customer.

Metrics for Progress and Completion Reporting: Metrics to be defined for the progress and control activities themselves can include:
1) Number of tasks commenced over time
2) Task completion percentage over time
3) Number of tasks completed over time
4) Time spent on each task over time

This will of course have to be compared to the estimates and schedule of the test progress and completion activities.

Why Software Testing Experts use Test Execution Tools for Automation

These software testing tools are known by different names, such as test execution tools, test running tools, or capture and replay tools, and this is probably the most commonly known category of software testing tools. These tools are primarily used for the automation of regression testing. They can execute test scripts much faster and more reliably than human beings, and they can therefore reduce test execution time when tests are repeated and/or allow more tests to be executed.

All test execution tools work according to the following basic principles:
1) Capture: a recording of all the tester's manual actions and the system's responses into a test script.
2) Control points: a number of checkpoints added to the script by the tester during the capture.
3) Playback: automatic re-execution of the test script.

Test execution tools exist for graphical user interfaces (GUIs) and for character-based interfaces. For GUI applications the tools are able to simulate mouse movement and clicks on buttons, and are able to recognize GUI objects such as windows, buttons, fields and various other controls. When a script has been captured once, it may be executed again at any given time. If the software under test reacts differently from what was expected at the inserted checkpoints, the execution will report a failure. It is usually also possible to log information during execution.

Test scripts are captured in a specific scripting language. In newer versions of these tools it has become possible to get access to the scripts that have been captured. These are often in C or Visual Basic like code, and this offers the possibility of editing the scripts so that, for example, forgotten operations, further control points, or changed values may be added. Experience shows that if the scripts are written from scratch, rather than captured, and good and systematic development principles are used, the scripts will be more maintainable. More and more of these tools are therefore used as test execution tools for coded scripts, rather than for capture/playback.

Advantages of Test Execution Tools: The advantages of this type of software testing tool are:
1) A lot of manual test execution can be done automatically. This is especially the case in iterative development and other development projects where a large amount of regression testing is needed.

Advantages of Test Execution Tools:
The advantages of this type of software testing tool are:
1) A lot of manual test execution can be done automatically. This is especially the case in iterative development and other development projects where a large amount of regression testing is needed.
2) These tools are indispensable in development where "frequent build and smoke test" principles are used. Builds can be made in the evening, and automated test suites can be set to run overnight. Testing results will then be ready in the morning.
3) The use of test scripts requires good configuration management to keep track of which versions of the test objects, test data, and test scripts belong together. This turns out to be a blessing in disguise.

Drawbacks of Test Execution Tools:
The pitfalls of this type of software testing tool are:
1) It tends to be expensive to establish and maintain the test scripts with these tools.
2) The requirements, specifications, and code undergo changes in the course of the development, especially in iterative development. This needs to be carefully considered when estimating the continuous maintenance of the test scripts.
3) The work with the test scripts requires programming skills. If the necessary skills are not available, the use of test execution tools may be very cumbersome and inefficient.

A word of caution: It must be remembered that test scripts written by a programmer or tester are just like any other (software) product: made by humans and therefore not perfect. Defects are also introduced in test scripts, and test scripts should therefore be tested and corrected when defects are identified. The earlier this is done the better, since fewer defects in the test scripts reduce the uncertainty as to whether a failure is caused by a defect in the test script or in the product under test.

Exploratory Testing - an Indispensable Nonsystematic Software Testing Technique

Going by the views of ISTQB certified experts such as Advanced Test Managers and Technical Test Analysts, we can say that systematic testing is not always enough. Since faults are sly, we have to attack them in unpredictable ways. This is where the nonsystematic testing techniques come in as a valuable supplement to systematic testing techniques.

The testing guru Cem Kaner coined the term "Exploratory Testing", a type of nonsystematic testing technique, more than two decades ago to describe a style of skilled testing that was quite common in Silicon Valley. The concept of exploratory testing has evolved quite a lot since then. In fact all testers perform exploratory testing in some form or the other. Some perform it more intentionally and skillfully, while some do it unintentionally.

Sometimes it is worthwhile to search in a structured way and use the results, as they come in, to decide on the future course. This is the philosophy when people are looking for oil or minerals, and it is the philosophy in exploratory testing.

What is Exploratory Testing?

Exploratory testing is testing where the tester actively controls the design of the tests as those tests are performed, and uses the information gained while testing to design new and better tests. In other words, exploratory testing is simultaneous:
1) Learning
2) Test design
3) Test execution

Exploratory testing is an important supplement to structured testing. As with all the nonsystematic techniques, it may be used before the structured test is completely designed or when the structured test has stopped. It is important that the actions of the exploratory testing activity are properly documented, so that we are able to know at some later date what has been done. The idea in exploratory testing is not that it should not be documented, but that it should be documented as we go along.

Exploratory testing is certainly not a task for novices or inexperienced testers. Extensive testing experience and knowledge of testing techniques and typical failures are indispensable for the performance of an effective exploratory test. It is also an advantage if the tester has some domain knowledge. The exploratory tester needs to be able to analyze, reason, and make decisions on the fly, and at the same time have a systematic approach and be creative. The tester also needs some degree of independence in relation to the manufacturing of the system - the programmer of a system cannot perform exploratory testing on his or her "own" system. Perhaps most importantly, the exploratory tester must have an inclination towards destruction. Exploratory testing will not work if the tester is "afraid" of getting the system to fail.

What are the different degrees of Exploratory Testing?

Another advocate of exploratory testing, the testing guru James Bach, has defined the following degrees of exploration:
1) Fully Specified
2) Loosely Specified
3) Somewhat Specified
4) Tasks
5) Roles
6) Free Exploration

At the free end of the hierarchy we have totally free exploration. Here the tester simply sits before the system and starts wherever he or she feels like.

A step up, we find exploratory testing guided by roles. Here the tester attacks the system under test assuming a specific user role. This could, for example, be the role of an accountant, a nurse, a secretary, an executive manager, or any other role defined for the system. This provides a starting point and a viewpoint for the testing, which is exploratory within the framework of the role.

Going further up, we find exploratory testing guided by a specific task. Here the tester narrows the testing framework down further by testing within the viewpoint of a specific task defined for a specific role in the system under test.

On the borderline between exploratory testing and structured testing we have the sporadically specified test. Here the tester has sketched the test beforehand and takes this as the starting point and guideline for the performance of the exploratory testing.

How do we perform Exploratory Testing?

No matter which degree of exploration we use, we have to follow the principles of the general test process. We must plan and monitor; we must specify, execute, and record; and we must check for completion. In the planning we consider what we are going to do and who is going to do it. We must choose the degree of exploration and describe the appropriate activities.

The testing activities should be divided into one-hour sessions. If the sessions are shorter, we risk not getting an effective flow in the exploration; if they are longer, we get tired and the effectiveness goes down. It is important to make sure that the tester or testers are protected during the sessions. There should be no phones or other interruptions to disturb the flow of the testing.

The test specification, execution, and recording are done simultaneously during the sessions of exploratory testing. Within the defined boundaries of the exploration the tester remains prepared to be sidetracked, because you never know what you may find. Stock must be taken from time to time to verify that we are on track. For each session we must do the following:
1) Take extensive notes and attach data files, screen dumps, and/or other documentation as appropriate
2) Produce an overview of the findings
3) Reprioritize the remaining activities

When to stop Exploratory Testing?

Exploratory testing can stop when we have fulfilled our purpose.

Importance of Maintainability to Good Software & the Role of Maintenance Testing

Before going into the details of high-sounding terms like maintainability, let us first understand the meaning of maintenance testing, a term extensively used by experienced test managers in software testing parlance.

What Is Maintenance Testing?

Maintenance testing refers to testing the changes to an operational system, or the impact of a changed environment on an operational system. In other words, it is about keeping (maintaining) the achieved level of quality across the entire life cycle of the software application. Quality here can mean one or more of the quality attributes described in the ISO 9126 standard.

According to many experts, as post-delivery changes are introduced to an existing application, each change could be considered to start a new Software Development Life Cycle (SDLC). More accurately, the project is now in a Software Maintenance Life Cycle (SMLC). Many projects spend most of their time in a post-delivery SMLC rather than in the pre-delivery SDLC. Maintenance software testing is an activity performed in the SMLC and can be described in terms of the following principal activities:
1) Dynamic maintenance
2) Corrective maintenance
3) Adaptive maintenance

In general, the task of performing maintenance software testing is made more efficient when the software has good maintainability.

What Is Maintainability?

Rather like reliability, maintainability is a word we all know in testing but sometimes have difficulty explaining. It is generally regarded as something we would like to have in our software, but it is hard to put a finger on what maintainability actually is. The ISO 9126 Quality Model defines maintainability as "the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment."

According to ISO 9126, maintainability can be described in terms of four sub-attributes:
1) Analyzability: relates to the effort required (usually by the developers) to diagnose defects or to identify the parts of the software system requiring change.
2) Changeability: relates to the effort required to actually fix defects or make improvements.
3) Stability: the likelihood that unexpected side effects occur as a result of making changes to the software. It is what we have in mind when we sometimes say that the software is brittle.
4) Testability: describes the effort required for testing changed software. This is one of the principal software quality attributes that directly affect our work.

What are the causes of Poor Maintainability?

We frequently experience the symptoms of poor software maintainability without appreciating the root causes. A few fundamental maintainability risks, and the repercussions of poor maintainability, are listed below.

Risk 1: Effort required for fixing defects and implementing changes might be more than planned.
Repercussion: If the size of the maintenance team performing these tasks is fixed (a common situation), this will also translate into longer time scales.

Risk 2: Time taken for maintenance tasks exceeds the fixed-length maintenance windows.
Repercussions:
1) Production could be negatively affected (e.g., staff arriving for work find that an application server is unavailable due to the scheduled nightly maintenance window slipping).
2) Maintainers may be forced to take shortcuts to keep within agreed-upon maintenance periods. They may need to make assumptions regarding the implementation of required changes (perhaps due to poor documentation).
3) Penalties may be imposed if Service Level Agreements apply.

Risk 3: Long-term buildup of poor maintainability resulting from the cumulative effects of bad software development practices.
Repercussions:
1) Reliability levels slowly reduce.
2) The number of functional defects introduced by changes (regressions) increases.
3) Defects take longer to fix.
4) Maintenance staff are put under steadily more pressure, which may even result in a further worsening of the situation.

Conclusion: If the maintenance costs of a project are substantially high (e.g., over 75%), we should certainly test it for maintainability.

Understanding the ABC of Software Product Quality Assurance

It is not possible to test quality into a product when the development is close to being finished. As asserted by many renowned QA experts and test managers, quality assurance activities must start early and become an integrated part of the entire development project and of the mindset of all stakeholders.

Quality assurance comprises the following four activities:
1) Definition of quality criteria
2) Validation
3) Verification
4) Quality reporting

It should be borne in mind that validation is not necessarily performed before verification; in many organizations it is the other way around, or the two are performed in parallel.

First of all, the quality criteria must be defined. These criteria are the expression of the quality level that must be reached, or an expression of "what is sufficiently good." These criteria can be very different from product to product. They depend on the business needs and the product type. Different quality criteria will be set for a product that will simply be thrown away when it stops working than for a product that is expected to work for many years with a great risk of serious consequences if it does not work.

The following are the two quality assurance activities for checking whether the quality criteria have been met by the object under test:
1) Validation
2) Verification
Both of them have different goals and different techniques. The object to test is delivered for validation and verification from the applicable development process.

Validation is the assessment of the correctness of the product (the object) in relation to the users' needs and requirements. We can also say that validation answers the question: "Are we building the correct product?" Validation must determine whether the customers' needs and requirements are correctly captured, correctly expressed, and understood. We must also determine whether what is delivered reflects these needs and requirements. When the requirements have been agreed upon and approved, we must ensure that during the entire development life cycle:
1) Nothing has been forgotten.
2) Nothing has been added.

It is obvious that if something is forgotten, the correct product has not been delivered. It does, however, happen all too often that requirements are overlooked somewhere in the development process. This costs money, time, and credibility.

On the surface it is perhaps not so bad if something has been added. But it does cost money and affect the project plan when a developer - probably in all goodwill - adds some functionality which he or she imagines would be a benefit for the end user. What is worse is that the extra functionality will probably never be tested in the system and acceptance tests, simply because the testers don't know anything about its existence. This means that the product is sent out to the customers with some untested functionality, and this will lie as a mine under the surface of the product. Maybe it will never be hit, or maybe it will be hit, and in that case the consequences are unforeseeable. The possibility that the extra functionality will never be hit is, however, rather high, since the end user will probably not know about it anyway.

Validation during the development process is performed by analysis of trace information. If requirements are traced to design and code, it is an easy task to find out whether some requirements are not fulfilled, or whether some design or code is not based on requirements; a small sketch of such a trace analysis is shown after this passage. The ultimate validation is the user acceptance test, where the users test that the original requirements are implemented and that the product fulfills its purpose.

Verification, the other quality assurance activity, is the assessment of whether the object fulfills the specified requirements. Verification answers the question: "Are we building the product correctly?"
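Returning briefly to the trace analysis mentioned above, here is a purely illustrative sketch (the requirement identifiers and the mapping structure are invented for the example): a simple set comparison reveals both unfulfilled requirements and code that is not based on any requirement.

```python
# Hypothetical trace data: agreed requirement IDs and the requirements each module claims to cover.
requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

module_traces = {
    "billing.py":  {"REQ-001", "REQ-002"},
    "discount.py": {"REQ-003"},
    "extras.py":   {"REQ-099"},   # traces to something outside the agreed requirements
}

implemented = set().union(*module_traces.values())

# Requirements with no design/code tracing to them ("nothing has been forgotten").
not_fulfilled = requirements - implemented
# Code tracing to unknown requirements ("nothing has been added").
not_based_on_requirements = implemented - requirements

print("Requirements without implementation:", sorted(not_fulfilled))
print("Implementation without requirements:", sorted(not_based_on_requirements))
```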

The difference between validation and verification can be illustrated as follows:
a) Validation confirms that a required calculation of discount has been designed and coded in the product.
b) Verification confirms that the implemented algorithm calculates the discount as it is supposed to, in all details.

We have several techniques for verification. The ones to choose depend on the test object. In the early phases the test object is usually a document, for example in the form of:
1) Plans
2) Requirements specification
3) Design
4) Test specifications
5) Code

The verification techniques for these are the static test techniques:
1) Inspection
2) Review (informal, peer, technical, and management)
3) Walkthrough

Once some code has been produced, we can use static analysis on the code as a verification technique. This does not execute the code, but verifies that it is written according to coding standards and that it does not have obvious data flow faults. Finally, dynamic testing, where the test object is executable software, can be used. We can also use dynamic analysis, especially during component testing. This technique reveals faults that are otherwise very difficult to identify.

Quality assurance reports on the findings and results should be produced. If the test object is not found to live up to the quality criteria, the object is returned to development for correction. At the same time incident reports should be filled in and given to the right authority. Once the test object has passed validation and verification, it should be placed under configuration management.

Different Software Quality Metrics used by Expert Test Managers

Article by: Kushal Kar & Swastika Nandi - Guest Publishers of the article.

Software Metric is a generic name for a measure of the quality of a software product. A software metric can reflect the status of the software development cycle, its results, and so on. A good project manager is one who applies the principles of metrics to plan, organize and control the project deliverables in quantifiable / measurable terms. Some of the software metrics extensively used by ISTQB certified expert testing managers are described below (metric, followed by how to measure it):

1) Test Coverage: Number of units (KLOC/FP) tested / total size of the system.
2) Quality of Testing: No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100.
3) Effort Variance: {(Actual Efforts - Estimated Efforts) / Estimated Efforts} * 100.
4) Schedule Variance: {(Actual Duration - Estimated Duration) / Estimated Duration} * 100.
5) Test Effectiveness: t / (t + UAT), where "t" is the total number of defects reported during testing and UAT is the total number of defects reported during user acceptance testing.
6) Defect Density: No. of defects / Size (FP or KLOC), where FP = Function Points.
7) Weighted Defect Density: (5 * count of fatal defects) + (3 * count of major defects) + (1 * count of minor defects), where the weights 5, 3 and 1 correspond to the severity of the defects.
8) Schedule Slippage: (Actual End Date - Estimated End Date) / (Planned End Date - Planned Start Date) * 100.
9) Rework Effort Ratio: (Actual review effort spent in that particular phase / Total actual effort spent in that phase) * 100.
10) Requirement Stability Index: {1 - (Total number of changes / Number of initial requirements)}.
11) Requirement Creep: (Total number of requirements added / Number of initial requirements) * 100.
12) Correctness: Defects / KLOC or Defects / Function Points.
13) Maintainability: MTTC (Mean Time To Change) - once an error is found, how much time it takes to fix it in production.
14) Integrity: Integrity = Summation [(1 - threat) x (1 - security)].
15) Usability: User questionnaire survey results give an indication of how easy it is for users to use the system and how quickly they are able to learn to operate it.
16) CSAT (Customer Satisfaction Index): Call volume to the customer service hotline.
17) Reliability: Measured through:
# Availability: the percentage of time a system is available versus the time the system is needed to be available.
# Mean Time Between Failures (MTBF): total operating time divided by the number of failures; MTBF is the inverse of the failure rate.
# Mean Time To Repair (MTTR): total elapsed time from initial failure to the reinstatement of system status (Mean Time To Restore includes Mean Time To Repair).
# Reliability ratio = MTBF / MTTR.
Reliability is the probability that an item will perform a required function under stated conditions for a stated period of time. The probability of survival, R(t), plus the probability of failure, F(t), is always unity: F(t) + R(t) = 1, or F(t) = 1 - R(t).
18) Defect Ratios: defects found after product delivery per function point; defects found after product delivery per LOC; pre-delivery defects versus annual post-delivery defects; defects per function point of the system modifications.
19) Number of Tests per Unit Size: Number of test cases per KLOC/FP.
20) Acceptance Criteria Tested: Acceptance criteria tested / total acceptance criteria.
21) Defects per Size: Defects detected / system size.
22) Testing Cost: Cost of testing / total cost * 100.
23) Cost to Locate Defect: Cost of testing / number of defects located.
24) Achieving Budget: Actual cost of testing / budgeted cost of testing.
25) Defects Detected in Testing: Defects detected in testing / total system defects.
26) Defects Detected in Production: Defects detected in production / system size.
27) Quality of Testing: No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100.
28) Effectiveness of Testing to the Business: Loss due to problems / total resources processed by the system.
29) System Complaints: Number of third-party complaints / number of transactions processed.
30) Scale of Ten: Assessment of testing by giving it a rating on a scale of 1 to 10.
31) Source Code Analysis: Number of source code statements changed / total number of tests.
32) Test Planning Productivity: No. of test cases designed / actual effort for design and documentation.
33) Test Execution Productivity: No. of test cycles executed / actual effort for testing.

A couple of these formulas are worked through in the short sketch below.
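As an illustration only (the figures are invented), the following sketch works through two of the formulas above - Defect Density (metric 6) and Quality of Testing (metrics 2 and 27):

```python
# Invented example figures for a small release.
defects_in_testing = 46        # defects found by the test team before delivery
defects_after_delivery = 4     # acceptance/production defects found after delivery
size_kloc = 23.0               # system size in thousands of lines of code

# Metric 6: Defect Density = defects / size (KLOC or FP).
defect_density = defects_in_testing / size_kloc

# Metrics 2/27: Quality of Testing = testing defects / (testing + post-delivery defects) * 100.
quality_of_testing = defects_in_testing / (defects_in_testing + defects_after_delivery) * 100

print(f"Defect density:     {defect_density:.1f} defects/KLOC")
print(f"Quality of testing: {quality_of_testing:.1f}%")
```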

About the Authors: Kushal Kar & Swastika Nandi, QA Analysts, are the guest publishers of this article and are solely responsible for the ownership of its contents.

How do Project Managers conduct Code Reviews as a Testing Activity?

Expert managers carry out several types of code reviews, including informal walkthroughs and formal inspections, with paired programming also being popular these days. Paired programming involves the participation of two developers in a combined development effort at one workstation. For example, one creates a unit test for a class while the other thinks about the design and implementation of the class that will pass the test. These partners switch their roles frequently, sometimes at least once a day.

Irrespective of the approach used, the objective of the code review activity is to identify potential coding and design defects manually by evaluating a segment of software. Whether an individual or a team does this activity doesn't make a difference. Let us first try to answer the Why, How and Who of the code review activities.

Q. 1: Why do we need to carry out Code reviews?

Code reviews facilitate the detection of defects early in development. Additionally, they motivate developers to write manageable code. When developers know that their peers will review the code, they tend to create clearer and more concise programs and to document their code properly. This role of reviews is especially important because writing documentation is often psychologically resented during code construction.

Code reviews also improve the technical knowledge of the participants, since developers are given an opportunity to learn from each other about the latest coding patterns and new programming tips. Another positive side effect of code reviews is that they facilitate the building of a common team culture through face-to-face interactions. During such meetings the developers are more likely to pressure the "odd man out" to follow the team's standards and practices.

It is an underlying fact that software quality and development productivity increase drastically with code reviews.

Recent studies have revealed that traditional code reviews are indeed value-added tasks: they increase the development productivity of the current project, and they can facilitate future productivity improvements of up to 20%.

Results of research on the productivity of paired programming are less conclusive. Quantitative evidence is available indicating that paired programming improves product quality and reduces time to market. However, more recent research indicates that productivity improvements depend on the experience and skills of the individuals forming the pairs. For example, novice-novice pairs compared with novice solos are much more productive than expert-expert pairs compared with expert solos.

Q. 2: Who should carry out Code reviews?

Developers and the architect are the best-suited personnel for conducting code reviews.

Q. 3: How should we do the Code reviews?

Code reviews allow developers to evaluate the implementation of their co-workers in the hope of turning up possible defects. The scope of the code review is usually determined by use cases: when a developer completes the code required to implement a use case, all the related code should be submitted for review. Alternatively, code can be reviewed incrementally, for example as it is added to the source control system. An extended infrastructure can facilitate this process by automatically alerting reviewers about newly modified lines of code, for instance once a week. In the latter case, the reviews should be performed after the code is checked in and demonstrated to satisfy coding standards as well as white-box and black-box test progression criteria.

Those parts of the code that show coding standards violations, as identified by the automated coding standards analysis, are a natural starting point for a code review. The violations should be assessed to determine whether they are warranted, and therefore whether the code should be exempt from the required standard, whether the standard should be modified to permit the reported type of violations, or whether the code should be modified to comply with the standard.

In addition, a code review should be conducted every time a regression test failure is reported. The goal of this inspection should be to determine whether the code behavior change was intentional. If the change was intentional, the test cases should be modified to allow for the new expected behavior. Otherwise, the code should be corrected.

Code reviews should be performed on new and modified code and on the most critical parts of the application. This type of approach reflects a sense of priority. While the focus should be on defect detection, code reviews should also help to identify bottlenecks and performance problems in program segments.

The seriousness of the defects identified during code reviews should be assessed, and severe defects should be reported in the problem tracking system and used in defect analysis and prevention. Approvals & measurement of code review:

The amount of code that has been reviewed, and that still needs to be reviewed, should be measured. This number should be used in assigning priorities to the developers' tasks as reported in the "Current Activity Log" report, and in estimating the time needed for testing activities. All code subject to review should be approved by the lead developer before integration testing starts. The new and modified code that needs to be reviewed should be tracked. Once reviewed and approved by a lead developer, the code should be marked as accepted.

Use of automation tools for code review:

Conformance to coding standards and project-specific programming rules can be verified automatically by the use of static analysis tools. Review of the algorithms and complex parts of the code should be done manually. While these types of code reviews cannot be automated, the code covered by reviews can be tracked automatically. Static analysis can be used to help identify parts of the code that need to be reviewed, such as segments of code with high cyclomatic complexity; a small sketch of such complexity-based selection is shown below.

Selection of a code review tool should be based upon its ability to identify new and modified code in the source control system and to inform a reviewer, or the lead developer of a particular module, about the required reviews. The developers should be provided with a list that tracks the review status (accepted, pending, or needs revisions) of all relevant pieces of code.
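Purely as an illustration (the example module and the threshold are invented, and real static analysis tools do this far more thoroughly), a rough cyclomatic-complexity estimate can flag functions that deserve a manual review:

```python
import ast

# Node types that add a decision point (a rough approximation of cyclomatic complexity).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def flag_complex_functions(source: str, threshold: int = 5):
    """Return (function name, approximate complexity) pairs above the threshold."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(node))
            if decisions + 1 > threshold:
                flagged.append((node.name, decisions + 1))
    return flagged

# Invented example module text standing in for newly modified code from source control.
example_module = """
def apply_discount(order, customer):
    if customer.is_vip and order.total > 100:
        rate = 0.2
    elif order.total > 500:
        rate = 0.1
    else:
        rate = 0.0
    for item in order.items:
        if item.on_sale:
            rate += 0.01
    while rate > 0.25:
        rate -= 0.01
    return order.total * (1 - rate)
"""

for name, complexity in flag_complex_functions(example_module):
    print(f"Review candidate: {name} (approx. complexity {complexity})")
```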


An Insight to Software Testing Policies & Strategies used by Expert Test Managers

There can be a very high level of management at an organizational or corporate level. This might be limited to a definition of how testing is expected to be conducted throughout the organization. Wherever organizations have ISTQB certified Test Managers, the management of software testing can be done either at the programme level or at the individual project level.

For every such level of test management there is an associated document. When structured in a top-down fashion, this is called the test management documentation hierarchy. An example is shown in the following figure. It provides a basic model for test management from which test managers can gain an understanding of the relationships between the documents used to manage software testing at different levels.

The figure outlined above is a simple and consistent model that is generally applicable to large organizations having a mature testing culture. The hierarchy identifies a single overall organizational software testing policy, a testing strategy for each individual project, a project test plan for each project and a set of level test plans that elaborate the detailed test planning for each test level of each project. Broadly what do we mean by a Test Policy?

A test policy represents an organization's overall attitude and approach to testing. Documented test policies are not all that common, but many organizations have a quality policy which covers a definition of what testing must achieve, while others may incorporate quality and testing into a corporate mission statement, such as "All systems available round the clock". A test policy is generic in nature.

What should be the contents of a Test Policy document?

Although there is no industry standard for the contents of a test policy, the following could possibly be included.

1) Definition of what software testing means in the organization (i.e. how much software testing the organization expects to be done on software products).
2) The test process to be followed (e.g. the Fundamental Test Process).
3) General standards and criteria for software testing in projects (which may depend on the classification of projects as critical, large, small, etc.), especially acceptance criteria.
4) Use of tools to support software testing, including the tool set to be used where appropriate.
5) Definition of software testing terms such as test level, test type and test condition, to clarify their use in other documents.
6) How the value of software testing will be measured (e.g. by assessing the cost of software testing to prevent defects versus the cost of repair when a defect is found).
7) Identification of the standard software development life cycle(s) and associated testing life cycle(s) used in the organization.
8) The approach to test process improvement (e.g. seeking customer feedback as part of an overall quality plan, staff training).

A well-documented test policy can provide the following benefits:
1) Visible commitment to the test effort at an organizational level;
2) Definition of key processes that must be followed;
3) Definition of quality levels that must be achieved throughout testing;
4) A mechanism for encouraging standardization across different projects.

What do we mean by a Test Strategy?

A test strategy generally refers to an overall approach to testing of a specific project or a product. It is based upon the requirements described in the test policy document, if it happens to be there in the organization. A test strategy can relate to an organization, to departments within an organization or to individual projects. There can be standard test strategy templates, used at a high level across all projects within an organization, or lower down, across projects of similar functionality (e.g. a bank may have a test strategy template for use by all projects involved in online banking). Documented test strategies are much more common compared to documented test policies.

What should be the contents of a Test Strategy document?

As with the test policy, there is no industry standard for the contents of a test strategy, but the following could possibly be included:
1) Standards to be followed (e.g. those required for the defence, transport, pharmaceutical and banking industries, to name a few, plus process standards such as the Capability Maturity Model Integration (CMMI) and the International Organization for Standardization quality set - the ISO 9000 set).
2) Test levels to be used (such as increased emphasis on user acceptance testing in an iterative model, or all levels in the V model).
3) Test types to be used (such as functional, structure-based, non-functional and experience-based testing).
4) Test design techniques to be used (such as equivalence partitioning, decision testing and error guessing).
5) Approach to retesting and regression testing (usually all changes will be retested, with regression cycles specified, e.g. weekly or by number of changes included).
6) The amount of testing to be carried out on reused components (depending on the extent of reuse).
7) Entry and exit criteria to be used for each level of testing (such as 100 per cent decision coverage at unit and integration test levels).
8) Incident management (reporting, follow-up and analysis).
9) Definition of the test environments to be used.
10) Use of tools to support testing.

The above list is not exhaustive and may be modified according to the needs of the organization.

What are the Advantages of having a documented Test Strategy?

1) The business will understand what is required from them in order to conduct software testing (such as their assessment of risks, and well-defined requirements);
2) The project management team will have early, detailed knowledge of what will be required for software testing and can plan budgets accordingly (such as test tools and the test environment);
3) The development team will understand their contribution to the test effort (such as frozen code, or 100 per cent decision coverage achieved at unit or integration testing);

4) The test team will understand what needs to be done, such as the usage of test design techniques and the criteria they must meet for each stage of testing (usually system and acceptance testing).

How do Software Testing Managers Tackle Incidents during the Development Life Cycle?

The activity of managing unexpected events that are encountered during testing is called incident management. It is formally an activity in the configuration management process, but it is emphasized here because it is strongly connected to software testing. We can say that an incident is a defect, or the outcome of an action during testing, wherein the actual result differs from the expected result; such an event requires further study and investigation.

What are the prime causes of Incidents?

When a product is being developed, tested, deployed, and maintained, incidents are bound to come up. It is human to make mistakes, so defects get introduced during development, requirements change over time, and the environment in which the product is deployed can evolve as well. Testing is the obvious source of incidents, since the idea of software testing is to expose things that compel us to say: "Oops - what was that?" Generally incidents are detected in the form of defects in static testing, and typically in the form of failures in dynamic testing. An incident could be that the actual result differs from the expected result when a test case is executed.

How do we recognize an incident?

Upon recognition of an incident, software testing engineers initiate an incident report. All incidents must be reported meticulously so that they can be investigated, recreated if needed, and monitored. When an incident is initially recognized, the set of supporting information to be included in the incident report can contain:
# Identification of the incident, including a unique number, heading, trigger event, proposed fix (if possible), and documentation (e.g., screen dumps)
# Identification of the environment, including hardware, software, vendor, the item in which the incident was seen, and a fix description, if any
# Identification of the people involved, including originator and investigator
# Related time information, for example system time, CPU time, and wall time as appropriate

A minimal sketch of such an incident report structure is given below.
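Illustratively (the field names and values are invented, not taken from any particular incident management tool), the supporting information above could be captured in a simple record like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentReport:
    """Minimal incident record mirroring the supporting information listed above."""
    incident_id: str                      # unique number
    heading: str
    trigger_event: str
    environment: str                      # hardware/software/item where it was seen
    originator: str
    investigator: str = "unassigned"
    recognized_at: datetime = field(default_factory=datetime.now)
    attachments: list = field(default_factory=list)   # e.g., screen dumps, log files

# Invented example report.
report = IncidentReport(
    incident_id="INC-0042",
    heading="Discount not applied for VIP customers",
    trigger_event="Executing test case TC-117 in system test",
    environment="Web client 2.3 / AppServer 5.1 / test database",
    originator="tester.a",
    attachments=["screenshot_checkout.png"],
)
print(report.incident_id, "-", report.heading)
```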

How do we classify an incident?

The information for classification could contain:
# Project activity: What were you doing when the incident was recognized?

# Project phase: What phase was the project in when the incident was recognized?
# Symptoms: What did you see when you recognized the incident?
It could also include information about suspected cause, repeatability, and product status.

What type of information related to the impact of an incident must be part of the incident report?

Impact-related information could contain:
# Its severity
# Impact on the project schedule
# Impact on the project cost

Who can raise an incident?

Incidents can be raised by all stakeholders within the organization or by customers. It is important that all incidents are handled via a defined path and are processed in a controlled way. How do we investigate Incidents?

The investigation is performed based on the information provided in the incident report. Formally this is not testing but configuration management. The investigation is about finding out what is wrong, if anything, and what should happen next. Many things could be wrong; in the context of testing, for example, these could be:
# A wrong wording, caught during a review of a document
# A coding defect found during a walkthrough of a piece of source code
# A failure found in the integration test
# A wish to expand or enhance the finished product, arising when the product is in acceptance testing
# A change required in the code because of an upgrade to a new version of the middleware supporting the system (e.g. a new version of MS Access, which in certain places is not backward compatible)

If something is indeed wrong, the investigation must try to find out what the impact is and what the cost of making the necessary corrections would be. It must also consider what the cost of not making the corrections would be. It is not always a simple matter to perform such an analysis, but it must be done before an informed decision about what to do can be made. Possible actions can be:
# Nothing - no failure after all, or the failure is too insignificant

# Nothing right now - changes are postponed
# Changes must be implemented immediately where necessary

The supporting data for the other life cycle phases primarily include:
# Identification of the people involved in the investigation, at least those responsible for any decisions made
# Related time information

The information for classification can contain:
# Actual cause - where, at a high level, have we pinpointed the incident to come from?
# Source - in which work products or product components must changes be made?
# Type - what type of incident are we dealing with?

What actions do we take when an incident takes place?

If any action is to be taken, it will be in the form of one or more changes, since one incident may trigger changes in several places. Specific change requests should be produced by the software testing engineers for all the objects to be changed. This makes it easier to follow up on the progress of a change through its life cycle: open, implemented, and approved.

When does testing come into the picture while handling incidents?

Software testing comes into incident handling at the time of approval. Re-testing must be done to ensure that corrections have been made correctly, and regression testing must be performed to ensure that the correction has had no adverse effects on areas that were working before the correction.

How do we dispose of the Incidents?

If an action has been initiated, we can dispose of the incident only when approval for all the change requests has been granted. The incident report can be closed only when details are furnished about how the corrective measures were implemented after due approval.

How do we use the information derived from the Incident reports?

Useful information that can be extracted from incident reports is essential for a number of people in the organization, including test management, project management, project participants, process improvement people, and organizational management.

The primary areas for which incident report information can be used are:
# Estimation and progress
# Incident distribution
# Effectiveness of quality assurance activities
# Ideas for process improvement

Some of the direct measurements we can extract from the incident reports at any given time are:
# Total number of incidents
# The number of open incidents
# The number of closed incidents
# The time it took to close each incident report
# Which changes have been made since the last release

What is the methodology of counting Incidents?

The incidents can be counted for specific classifications, and this is where life gets much easier if a defined classification scheme has been used. We can, for example, count the number of:
# Incidents found during review of the requirements
# Incidents found during component testing
# Incidents where the source was the specification
# Incidents where the type was a data problem

For estimation and progress purposes we can compare the actual time it took to close an incident to our estimate and get wiser and better at estimating next time. We can also look at the development in open and closed incidents over time and use that to estimate when the testing can be stopped. A small sketch of such incident counting is shown below.
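As a purely illustrative sketch (the incident records are invented), a classified incident log makes these counts trivial to extract:

```python
from collections import Counter

# Invented incident log entries: (found in activity, source, type, status).
incidents = [
    ("requirements review", "specification", "omission",      "closed"),
    ("component testing",   "code",          "data problem",  "closed"),
    ("component testing",   "specification", "data problem",  "open"),
    ("system testing",      "code",          "logic problem", "open"),
    ("requirements review", "specification", "ambiguity",     "closed"),
]

by_activity = Counter(activity for activity, _, _, _ in incidents)
by_source   = Counter(source for _, source, _, _ in incidents)
by_status   = Counter(status for _, _, _, status in incidents)

print("Found during component testing:", by_activity["component testing"])
print("Source was the specification:  ", by_source["specification"])
print("Open vs closed:                ", dict(by_status))
```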

How do we communicate Incidents?

Careful incident reporting in written incident reports, using an incident management system, facilitates objective communication about incidents and defects. However, it should not become a substitute for verbal communication, which must not be eliminated.

Communication about incidents is a somewhat difficult task. The first and most important thing for software testing engineers and developers to keep in mind is that developers and others do not make defects on purpose to annoy us, or to tease us, or even to keep us busy. The next and equally important thing is that software testing engineers and others do not report incidents to tease or punish, but as their contribution to a high-quality product. Another aspect of incident communication among software testing engineers and others is concerned with what should be corrected and when.

Quality Measures and Indicators Prescribed by Software Testing Experts

Software development and testing experts rightly declare: "We cannot control what we cannot measure". Thus, in order to successfully measure the quality of any software application, we first need to understand the key differences between three interconnected quality terms: measures, metrics and indicators.

Quality Measure: A measure is to ascertain or appraise by comparing to a standard. A standard or unit of measurement covers:
# The extent, dimensions, or capacity of anything, especially as determined by a standard.
# An act or process of measuring; a result of measurement.
A measure gives very little or no information in the absence of a trend to follow or an expected value to compare against; a measure alone does not provide enough information to make meaningful decisions.

Quality Metric: A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. It is a calculated or composite indicator based upon two or more measures - a comparison of two or more measures, such as defects per thousand source lines of code. Software quality metrics are used to assess, throughout the development cycle, whether the software quality requirements are being met.

Quality Indicator: An indicator is a device or variable which can be set to a prescribed state based on the results of a process or the occurrence of a specified condition. An indicator usually compares a metric with a baseline or expected result. Indicators help decision-makers to make a quick comparison that can provide a perspective on the "health" of a particular aspect of the project. Software quality indicators act as a set of tools to improve the management capabilities of personnel responsible for monitoring software development projects.

The software quality indicators address management concerns, take advantage of data that is already being collected, are independent of the software development methodology being used, are specific to phases in the development cycle, and provide information on the status of a project. Software testing experts like ISTQB advanced certified Test Managers prescribe the following quality indicators for use during the software testing and development life cycle.

1) Progress: Measures the amount of work accomplished by the developer in each phase. This measure flows through the development life cycle, with a number of requirements defined and baselined, then the amount of preliminary and detailed design completed, then the amount of code completed, and then the various levels of tests completed.
2) Stability: Assesses whether the products of each phase are sufficiently stable to allow the next phase to proceed. This measures the number of changes to requirements, design, and implementation.
3) Process compliance: Measures the developer's compliance with the development procedures approved at the beginning of the project. Captures the number of procedures identified for use on the project versus those complied with on the project.
4) Quality evaluation effort: Measures the percentage of the developer's effort that is being spent on internal quality evaluation activities, i.e. the percentage of time developers are required to deal with quality evaluations and related corrective actions.
5) Test coverage: Measures the amount of the software system covered by the developer's testing process. For module testing, this counts the number of basis paths executed/covered, and for system testing it measures the percentage of functions tested.
6) Defect detection efficiency: Measures how many of the defects detectable in a phase were actually discovered during that phase. It starts at 100% and is reduced as defects are uncovered in a later development phase (a short worked sketch follows this list).
7) Defect removal rate: Measures the number of defects detected and resolved over a period of time, e.g. the number of opened and closed system problem reports (SPRs) reported through the development phases.
8) Defect age profile: Measures the number of defects that have remained unresolved for a long period of time, e.g. monthly reporting of SPRs remaining open for more than a month.
9) Defect density: Detects defect-prone components of the system. Provides a measure of SPRs per Computer Software Component (CSC) to determine which is the most defect-prone CSC.
10) Complexity: Measures the complexity of the code. Collects basis path counts (cyclomatic complexity) of code modules to determine how complex each module is.
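To make indicator 6 concrete, here is a small worked sketch with invented figures: defects that are attributable to a phase but are only found later reduce that phase's detection efficiency.

```python
# Invented figures: for each phase, defects attributable to that phase and
# how many of them were actually caught in the same phase.
phases = {
    "requirements": {"attributable": 20, "caught_in_phase": 14},
    "design":       {"attributable": 35, "caught_in_phase": 21},
    "coding":       {"attributable": 60, "caught_in_phase": 48},
}

for phase, d in phases.items():
    efficiency = 100.0 * d["caught_in_phase"] / d["attributable"]
    print(f"{phase:<12} defect detection efficiency: {efficiency:.0f}%")
```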

Conclusion: The quality indicators discussed above have certain characteristics related to quality measures. Accordingly, software testing experts like Test Managers have drawn a few conclusions:
1) Quality measures must be oriented toward management goals. One need not have extensive familiarity with the technical details of the project.
2) Quality measures should reveal problems as they develop and suggest corrective actions that could be taken.
3) Quality measures must be easy to use. They must not be excessively time consuming, nor depend heavily on extensive software training or experience. Measures that are clearly specified, easy to calculate, and straightforward to interpret are needed.
4) Quality measures must be adequately flexible.
5) We must not use quality measurement just for the purpose of quality assurance; software testing managers and engineers must move beyond the narrow notions they may have about what constitutes quality.
