
http://www.ambysoft.com/essays/agileLifecycle.html
http://www.implementingscrum.com/2006/09/11/the-classic-story-of-the-pig-and-chicken/
http://www.agiledata.org/
http://www.agiledata.org/essays/agileDataModeling.html
http://mike2.openmethodology.org/w/index.php/Category:Software_Delivery_Lifecycle
Projectsmart: http://www.projectsmart.co.uk/

I am often asked by clients to facilitate workshops overviewing the ideas presented in the Agile Manifesto and agile techniques such as Test-Driven Design (TDD), database refactoring, and agile change management. One issue that many people seem to struggle with is how all of these ideas fit together, and invariably I found myself sketching one or more pictures which overview the life cycle for agile software development projects. I typically need one or more pictures because the scope of life cycles changes -- some life cycles address just the construction life cycle, some address the full development life cycle, and some even address the full IT life cycle. Depending on your scope, and on how disciplined your approach to agile software development is, you will get different life cycle diagrams. The goal of this article is to describe the agile system development life cycle (SDLC), putting it in the context of what you may have heard about within the agile community and, more importantly, within the context of your overall IT efforts.

This article covers:


1. The scope of life cycles
2. Iteration -1: Pre-project planning
3. Iteration 0: Project inception
4. Construction iterations
5. Release iterations
6. Production
7. Retirement

1. The Scope of Life Cycles

As we described in the book The Enterprise Unified Process (EUP), the scope of life cycles can vary dramatically. For example, Figure 1 depicts the Scrum construction life cycle, whereas Figure 2 depicts an extended version of that diagram which covers the full system development life cycle (SDLC), and Figure 3 extends that further by addressing enterprise-level disciplines via the EUP life cycle. The points that I'm trying to make are:

System development is complicated. Although it's comforting to think that development is as simple as Figure 1 makes it out to be, the fact is that we know it's not. If you adopt a development process that doesn't actually address the full development cycle, then you've adopted little more than consultantware in the end. My experience is that you need to go beyond the construction life cycle of Figure 1 to the full SDLC of Figure 2 (ok, Retirement may not be all that critical) if you're to be successful.

There's more to IT than development. To be successful at IT you must take a multi-system, multi-life-cycle-stage view as depicted in Figure 3. The reality is that organizations have many potential projects in the planning stage (which I'll call Iteration -1 in this article), many in development, and many in production.

Figure 1 uses the terminology of the Scrum methodology. The rest of this article uses the terminology popularized in the mid-1990s by the Unified Process (Sprint = Iteration, Backlog = Stack, Daily Scrum Meeting = Daily Meeting). Figure 1 shows how agilists treat requirements like a prioritized stack, pulling just enough work off the stack for the current iteration (in Scrum, iterations/sprints are often 30 days long, although this can vary). At the end of the iteration the system is demoed to the stakeholders to verify that the work that the team promised to do at the beginning of the iteration was in fact accomplished.

Figure 1. The Scrum construction life cycle.

The Scrum construction life cycle of Figure 1, although attractive, proves to be a bit naive in practice. Where does the product backlog come from? Does it get beamed down from the Starship Enterprise? Of course not: it's actually the result of initial requirements envisioning early in the project. You don't only implement requirements during an iteration; you also fix defects (disciplined agile teams have a parallel testing effort during construction iterations where these defects are found), go on holiday, support other teams (perhaps as reviewers of their work), and so on. So you really need to expand the product backlog into a full work items list. You also release your system into production, often a complex endeavor. A more realistic life cycle is captured in Figure 2, overviewing the full agile SDLC. This SDLC is comprised of six phases: Iteration -1, Iteration 0/Warm Up, Construction, Release/End Game, Production, and Retirement.
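As an aside, here is a minimal sketch of the "prioritized stack" idea described above (this is illustrative only and not from the original article; the item names and the numeric priority scheme are hypothetical). Work items sit in a priority queue, and the team pulls just enough of them off for the current iteration:

```python
# Illustrative sketch of a prioritized work item stack; names are hypothetical.
import heapq

work_items = []  # min-heap ordered by priority (1 = highest)

def add_item(priority, description):
    heapq.heappush(work_items, (priority, description))

def pull_for_iteration(capacity):
    """Pull just enough of the highest-priority items for this iteration."""
    return [heapq.heappop(work_items)[1]
            for _ in range(min(capacity, len(work_items)))]

add_item(1, "Implement login user story")
add_item(3, "Review partner team's schema change")
add_item(2, "Fix defect found by parallel testing")
print(pull_for_iteration(2))
# ['Implement login user story', 'Fix defect found by parallel testing']
```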
Although many agile developers may balk at the idea of phases (perhaps Gary Evans' analogy of development "seasons" is a bit more palatable), the fact is that it's been recognized that processes such as Extreme Programming (XP) and the Agile Unified Process (AUP) do in fact have phases (for diagrams, see the XP life cycle and the AUP life cycle respectively). The Disciplined Agile Delivery (DAD) life cycle also includes phases (granted, I lead the development of DAD). Furthermore, Agile MSF calls its phases/seasons "tracks".

Figure 2. A detailed agile SDLC.

Figure 3. The Enterprise Unified Process (EUP) life cycle.

Figure 4. The Agile SDLC (high-level).

On the surface, the agile SDLC of Figure 4 looks very much like a traditional SDLC, but when you dive deeper you quickly discover that this isn't the case. This is particularly true when you consider the detailed view of Figure 2. Because the agile SDLC is highly collaborative, iterative, and incremental, the roles which people take on are much more robust than on traditional projects. In the traditional world a business analyst creates a requirements model that is handed off to an architect, who creates design models that are handed off to a coder, who writes programs which are handed off to a tester, and so on. On an agile project, developers work closely with their stakeholders to understand their needs, they pair together to implement and test their solution, and the solution is shown to the stakeholders for quick feedback. Instead of specialists handing artifacts to one another, and thereby injecting defects at every step along the way, agile developers are generalizing specialists with full life cycle skills.

2. Iteration -1: Pre-Project Planning


Iteration -1, the pre-Inception phase in the Enterprise Unified Process (EUP), covers the pre-project aspects of portfolio management. During this phase you will:

1. Define the business opportunity. You must consider the bigger business picture and focus on market concerns. This includes exploring how the new functionality will improve your organization's presence in the market, how it will impact profitability, and how it will impact the people within your organization. This exploration effort should be brief; not all projects will make the initial cut, so you only want to invest enough effort at this point to get a good gut feel for the business potential. A good strategy is to follow Outside-In Development's focus on identifying the potential stakeholders and their goals, key information to help identify the scope of the effort.

2. Identify a viable strategy for the project. There are several issues to consider when identifying a potential strategy for the project. For example, do you build a new system or buy an existing package and modify it? If you decide to build, do you do so onshore or offshore? Will the work be done solely by your own development team, by a team from a system integrator (SI), or in partnership with the SI? What development paradigm will you follow: traditional/waterfall, iterative, or agile? Will the team be co-located, near-located within the same geographic region, or far-located around the world? As you can see there are many combinations of strategy available to you, and at this point in time you may only be able to narrow the range of the possibilities but be forced to leave the final decision to the project team in future iterations.

3. Assess the feasibility. During Iteration -1 you will want to do just enough feasibility analysis to determine if it makes sense to invest in the potential project. Depending on the situation you may choose to invest very little effort in considering feasibility; for many systems just considering these issues for a few minutes is sufficient for now, and for some systems you may choose to invest days if not weeks exploring feasibility. Many organizations choose to do just a little bit of feasibility analysis during Iteration -1, and then, if they decide to fund the project, they will invest more effort during Iteration 0. In my experience you need to consider four issues when exploring feasibility: economic feasibility, technical feasibility, operational feasibility, and political feasibility. Your feasibility analysis efforts should also produce a list of potential risks and criteria against which to make go/no-go decisions at key milestone points during your project. Remember that agile teams only have a success rate of 72%, compared to 63% for traditional projects, implying that almost 30% of agile projects are considered failures. Therefore you should question the feasibility of the project throughout the life cycle to reduce overall project risk.

Iteration -1 activities can and should be as agile as you can possibly make them: you should collaborate with stakeholders who are knowledgeable enough and motivated enough to consider this potential project, and invest just enough effort to decide whether to consider funding the effort further.

3. Iteration 0/Warm Up: Project Initiation


The first week or so of an agile project is often referred to as Iteration 0 (or "Cycle 0"), or in The Eclipse Way as the "Warm Up" iteration. Your goal during this period is to initiate the project by:

1. Garnering initial support and funding for the project. This may have already been achieved via your portfolio management efforts, but realistically at some point somebody is going to ask what we are going to get, how much it is going to cost, and how long it is going to take. You need to be able to provide reasonable, although potentially evolving, answers to these questions if you're going to get permission to work on the project. You may need to justify your project via a feasibility study.

2. Actively working with stakeholders to initially model the scope of the system. As you see in Figure 5, during Iteration 0 agilists will do some initial requirements modeling with their stakeholders to identify the initial, albeit high-level, requirements for the system. To promote active stakeholder participation you should use inclusive tools, such as index cards and whiteboards, to do this modeling; our goal is to understand the problem and solution domain, not to create mounds of documentation. The details of these requirements are modeled on a just-in-time (JIT) basis in model storming sessions during the development cycles.

3. Starting to build the team. Although your team will evolve over time, at the beginning of a development project you will need to start identifying key team members and start bringing them onto the team. At this point you will want to have at least one or two senior developers, the project coach/manager, and one or more stakeholder representatives.

4. Modeling an initial architecture for the system. Early in the project you need to have at least a general idea of how you're going to build the system. Is it a mainframe COBOL application? A .Net application? J2EE? Something else? As you see in Figure 5, the developers on the project will get together in a room, often around a whiteboard, and discuss and then sketch out a potential architecture for the system. This architecture will likely evolve over time, it will not be very detailed yet (it just needs to be good enough for now), and very little documentation (if any) needs to be written. The goal is to identify an architectural strategy, not to write mounds of documentation. You will work through the design details later during development cycles in model storming sessions and via TDD.

5. Setting up the environment. You need workstations, development tools, a work area, and so on for the team. You don't need access to all of these resources right away, although at the start of the project you will need most of them.

6. Estimating the project. You'll need to put together an initial estimate for your agile project based on the initial requirements, the initial architecture, and the skills of your team. This estimate will evolve throughout the project.

Figure 5: The Agile Model Driven Development (AMDD) life cycle.

4. Construction Iterations
During construction iterations agilists incrementally deliver high-quality working software which meets the changing needs of our stakeholders, as overviewed in Figure 6.

Figure 6. Agile software development process during a construction iteration.

We achieve this by:

1. Collaborating closely with both our stakeholders and with other developers. We do this to reduce risk through tightening the feedback cycle and by improving communication via closer collaboration.

2. Implementing functionality in priority order. We allow our stakeholders to change the requirements to meet their exact needs as they see fit. The stakeholders are given complete control over the scope, budget, and schedule; they get what they want and spend as much as they want for as long as they're willing to do so.

3. Analyzing and designing. We analyze individual requirements by model storming on a just-in-time (JIT) basis for a few minutes before spending several hours or days implementing the requirement. Guided by our architecture models, often hand-sketched diagrams, we take a highly collaborative, test-driven design (TDD) approach to development (see Figure 7) where we iteratively write a test and then write just enough production code to fulfill that test (a minimal sketch of this rhythm appears below). Sometimes, particularly for complex requirements or for design issues requiring significant forethought, we will model just a bit ahead to ensure that the developers don't need to wait for information.

4. Ensuring quality. Agilists are firm believers in following guidance such as coding conventions and modeling style guidelines. Furthermore, we refactor our application code and/or our database schema as required to ensure that we have the best design possible.

5. Regularly delivering working software. At the end of each development cycle/iteration you should have a partial, working system to show people. Better yet, you should be able to deploy this software into a pre-production testing/QA sandbox for system integration testing. The sooner, and more often, you can do such testing the better. See Agile Testing and Quality Strategies: Discipline Over Rhetoric for more thoughts.

6. Testing, testing, and yes, testing. As you can see in Figure 8, agilists do a significant amount of testing throughout construction. As part of construction we do confirmatory testing, a combination of developer testing at the design level and agile acceptance testing at the requirements level. In many ways confirmatory testing is the agile equivalent of "testing against the specification" because it confirms that the software which we've built to date works according to the intent of our stakeholders as we understand it today. This isn't the complete testing picture: because we are producing working software on a regular basis, at least at the end of each iteration although ideally more often, we're in a position to deliver that working software to an independent test team for investigative testing. Investigative testing is done by test professionals who are good at finding defects which the developers have missed. These defects might pertain to usability or integration problems, sometimes they pertain to requirements which we missed or simply haven't implemented yet, and sometimes they pertain to things we simply didn't think to test for.

Figure 7. Taking a "test first" approach to construction.
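To make the test-first rhythm of Figure 7 concrete, here is a minimal illustrative sketch (not from the original article; the function and test names are hypothetical): write a small failing test first, then just enough production code to make it pass.

```python
# Minimal test-first sketch using Python's built-in unittest module.
import unittest


def total_price(unit_price, quantity):
    """Just enough production code to satisfy the test below."""
    return unit_price * quantity


class TotalPriceTest(unittest.TestCase):
    # Written before total_price() existed; it fails until the code is added.
    def test_total_is_unit_price_times_quantity(self):
        self.assertEqual(total_price(10.0, 3), 30.0)


if __name__ == "__main__":
    unittest.main()
```

Once the test passes, the developer refactors as needed and repeats the cycle with the next test.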

Figure 8. Testing during construction iterations.

I would rather fail three months into a two-year project than three years into a two-year project.

5. Release Iteration(s): The "End Game"


During the release iteration(s), also known as the "end game", we transition the system into production. Note that for complex systems the end game may prove to be several iterations, although if you've done system and user testing during construction iterations (as indicated by Figure 6) this likely won't be the case. As you can see in Figure 9, there are several important aspects to this effort:

1. Final testing of the system. Final system and acceptance testing should be performed at this point, although as I pointed out earlier the majority of testing should be done during construction iterations. You may choose to pilot/beta test your system with a subset of the eventual end users. See the Full Life Cycle Object-Oriented Testing (FLOOT) method for more thoughts on testing.

2. Rework. There is no value in testing the system if you don't plan to act on the defects that you find. You may not address all defects, but you should expect to fix some of them.

3. Finalization of any system and user documentation. Some documentation may have been written during construction iterations, but it typically isn't finalized until the system release itself has been finalized, to avoid unnecessary rework. Note that documentation is treated like any other requirement: it should be costed, prioritized, and created only if stakeholders are willing to invest in it. Agilists believe that if stakeholders are smart enough to earn the money then they must also be smart enough to spend it appropriately.

4. Training. We train end users, operations staff, and support staff to work effectively with our system.

5. Deploy the system. See my article entitled System Deployment Tips and Techniques.

Figure 9. The AUP Deployment discipline workflow.

6. Production
The goal of the Production Phase is to keep systems useful and productive after they have been deployed to the user community. This process will differ from organization to organization and perhaps even from system to system, but the fundamental goal remains the same: keep the system running and help users to use it. Shrink-wrapped software, for example, will not require operational support but will typically require a help desk to assist users. Organizations that implement systems for internal use will usually require an operational staff to run and monitor systems. This phase ends when the release of a system has been slated for retirement or when support for that release has ended. The latter may occur immediately upon the release of a newer version, some time after the release of a newer version, or simply on a date that the business has decided to end support. This phase typically has one iteration because it applies to the operational lifetime of a single release of your software. There may be multiple iterations, however, if you defined multiple levels of support that your software will have over time.

7. Retirement
The goal of the Retirement Phase is the removal of a system release from production, and occasionally even the complete system itself, an activity also known as system decommissioning or system sunsetting. Retirement of systems is a serious issue faced by many organizations today as legacy systems are removed and replaced by new systems. You must strive to complete this effort with minimal impact to business operations. If you have tried this in the past, you know how complex it can be to execute successfully. System releases are removed from production for several reasons, including:
1. The system is being completely replaced. It is not uncommon to see homegrown systems for human resource functions being replaced by COTS systems such as SAP or Oracle Financials.

2. The release is no longer to be supported. Sometimes organizations will have several releases in production at the same time, and over time older releases are dropped.

3. The system is no longer needed to support the current business model. An organization may explore a new business area by developing new systems only to discover that it is not cost effective.

4. The system is redundant. Organizations that grow by mergers and/or acquisitions often end up with redundant systems as they consolidate their operations.

5. The system has become obsolete.

In most cases, the retirement of older releases is handled during the deployment of a newer version of the system and is a relatively simple exercise. Typically, the deployment of the new release includes steps to remove the previous release. There are times, however, when you do not retire a release simply because you deploy a newer version. This may happen if you cannot require users to migrate to the new release or if you must maintain an older system for backward compatibility.

8. Recommended Reading

Agile Estimating and Planning by Mike Cohn
Agile Estimating Tips
Agile Model Driven Development (AMDD)
Agile Scheduling Tips
Agile Testing and Quality Strategies: Discipline Over Rhetoric
Agile Testing Strategies
The Criteria for Determining Whether a Team is Agile
The Disciplined Agile Delivery (DAD) Lifecycle
Evidence that Agile Software Development Scales
Examining the Big Requirements Up Front (BRUF) Approach
Initial High-Level Architectural Envisioning
Initial High-Level Requirements Envisioning
Initiating an Agile Project
Is Agile Crossing the Chasm?
Justifying a Software Development Project
The Process of Database Refactoring
Translating Scrum Terminology
Why Agile Software Development Works: Improved Feedback

Project Management Life Cycle


The Project Management Life Cycle has four phases: Initiation, Planning, Execution and Closure. Each phase is described below, along with the tasks needed to complete it.

Initiation:
Develop a Business Case
Undertake a Feasibility Study
Establish the Project Charter
Appoint the Project Team
Set up the Project Office
Perform Phase Review

Planning:
Create a Project Plan
Create a Resource Plan
Create a Financial Plan
Create a Quality Plan
Create a Risk Plan
Create an Acceptance Plan
Create a Communications Plan
Create a Procurement Plan
Contract the Suppliers
Define the Tender Process
Issue a Statement of Work
Issue a Request for Information
Issue a Request for Proposal
Create Supplier Contract
Perform Phase Review

Execution:
Build Deliverables
Monitor and Control
Perform Time Management
Perform Cost Management
Perform Quality Management
Perform Change Management
Perform Risk Management
Perform Issue Management
Perform Procurement Management
Perform Acceptance Management
Perform Communications Management

Closure:
Perform Project Closure
Review Project Completion

The Technical Support Specialist (TSS) is the subject matter expert and escalation point within the Customer Support and Services group for all technical issues that cannot be resolved by Tier 2 support, and will provide issue resolution support with Skype's Service/Incident Management and operational teams. The technical support specialist provides technical support and implementation advice to both external and internal customers, including Customer Support, Sales and Service Management. The TSS also builds, maintains and imparts the highest level of technical knowledge with internal and external customers through supporting trials, proof of concept (POC) projects and replication of customer setups to validate interoperability and proposed technical solutions. The TSS also provides support to Product Management for the Skype Connect certification program by working with and testing Skype Connect with Skype's OEM partners' products (PBXs) prior to official certification testing (pre-certification), as well as providing support post official certification to ensure the continuation of interoperability with certified partners.

IB FINANCE

Investment Bank: A financial institution that provides a variety of services for clients. Among the services of an investment bank are underwriting, facilitating transactions, assisting in mergers and acquisitions, and brokering. In general, an investment bank's clients are institutional investors, but high net-worth individuals also use them. The name can be misleading, since investment banks rarely provide retail banking services.

GFT team: GLOBAL FOREX TRADING

What Does Financial Instrument Mean? A real or virtual document representing a legal agreement involving some sort of monetary value. In today's financial marketplace, financial instruments can be classified generally as equity based, representing ownership of the asset, or debt based, representing a loan made by an investor to the owner of the asset. Foreign exchange instruments comprise a third, unique type of instrument. Different subcategories of each instrument type exist, such as preferred share equity and common share equity, for example.

Investopedia explains Financial Instrument
Financial instruments can be thought of as easily tradeable packages of capital, each having their own unique characteristics and structure. The wide array of financial instruments in today's marketplace allows for the efficient flow of capital amongst the world's investors.

What Does Fixed Income Mean?
A type of investing or budgeting style for which real return rates or periodic income is received at regular intervals at reasonably predictable levels. Fixed-income budgeters and investors are often one and the same - typically retired individuals who rely on their investments to provide a regular, stable income stream. This demographic tends to invest heavily in fixed-income investments because of the reliable returns they offer.

Investopedia explains Fixed Income Individuals who live on set amounts of periodically paid income face the risk that inflation will erode their spending power. Fixed-income investors receive set, regular payments that face the same inflation risk. The most common type of fixed-income security is the bond; bonds are issued by federal governments, local municipalities or major corporations.
Credit default swaps (CDS) are the most widely used type of credit derivative and a powerful force in the world markets. The first CDS contract was introduced by JP Morgan in 1997 and by mid-2007, the value of the market had ballooned to an estimated $45 trillion, according to the International Swaps and Derivatives Association - over twice the size of the U.S. stock market. Read on to find out how credit default swaps work and the main uses for them.

How They Work
A CDS contract involves the transfer of the credit risk of municipal bonds, emerging market bonds, mortgage-backed securities, or corporate debt between two parties. It is similar to insurance because it provides the buyer of the contract, who often owns the underlying credit, with protection against default, a credit rating downgrade, or another negative "credit event." The seller of the contract assumes the credit risk that the buyer does not wish to shoulder in exchange for a periodic protection fee similar to an insurance premium, and is obligated to pay only if a negative credit event occurs. It is important to note that the CDS contract is not actually tied to a bond, but instead references it. For this reason, the bond involved in the transaction is called the "reference obligation." A contract can reference a single credit, or multiple credits. (To learn more about bonds, see our tutorial Advanced Bond Concepts.) As mentioned above, the buyer of a CDS will gain protection or earn a profit, depending on the purpose of the transaction, when the reference entity (the issuer) has a negative credit event. When such an event occurs, the party that sold the credit protection and who has assumed the credit risk may deliver either the current cash value of the referenced bonds or the actual bonds to the protection buyer, depending on the terms agreed upon at the onset of the contract. If there is no credit event, the seller of protection receives the periodic fee from the buyer, and profits if the reference entity's debt remains good through the life of the contract and no payoff takes place. However, the contract seller is taking

the risk of big losses if a credit event occurs. (For related reading, see Corporate Bonds: An Introduction To Credit Risk.)
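As a rough illustration of these mechanics (this sketch is not from the source text; the notional, spread, payment frequency, and recovery rate are hypothetical assumptions), the two sides of a single-name CDS can be summarized as a periodic premium leg and a contingent protection payment:

```python
# Illustrative sketch, not a pricing model: the fee a CDS protection buyer pays
# each quarter, and the payoff owed by the seller if a credit event occurs.
notional = 10_000_000          # face value of the reference obligation
annual_spread = 0.0100         # 100 basis points per year
payments_per_year = 4          # quarterly premium payments
recovery_rate = 0.40           # assumed recovery on the reference bonds

quarterly_fee = notional * annual_spread / payments_per_year
payoff_on_default = notional * (1 - recovery_rate)   # cash-settled protection payment

print(f"Quarterly fee paid by protection buyer: ${quarterly_fee:,.0f}")
print(f"Payment from seller if a credit event occurs: ${payoff_on_default:,.0f}")
```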

Hedging and Speculation
CDS have the following two uses.

A CDS contract can be used as a hedge or insurance policy against the default of a bond or loan. An individual or company that is exposed to a lot of credit risk can shift some of that risk by buying protection in a CDS contract. This may be preferable to selling the security outright if the investor wants to reduce exposure and not eliminate it, avoid taking a tax hit, or just eliminate exposure for a certain period of time. (For more on hedging your investments, read Practical And Affordable Hedging Strategies and Corporate Use Of Derivatives For Hedging.)

The second use is for speculators to "place their bets" about the credit quality of a particular reference entity. With the value of the CDS market larger than the bonds and loans that the contracts reference, it is obvious that speculation has grown to be the most common function for a CDS contract. CDS provide a very efficient way to take a view on the credit of a reference entity. An investor with a positive view on the credit quality of a company can sell protection and collect the payments that go along with it rather than spend a lot of money to load up on the company's bonds. An investor with a negative view of the company's credit can buy protection for a relatively small periodic fee and receive a big payoff if the company defaults on its bonds or has some other credit event. A CDS can also serve as a way to access maturity exposures that would otherwise be unavailable, access credit risk when the supply of bonds is limited, or invest in foreign credits without currency risk. (For more, see Risky Portfolio? Currencies Come To The Rescue.)

An investor can actually replicate the exposure of a bond or portfolio of bonds using CDS. This can be very helpful in a situation where one or several bonds are difficult to obtain in the open market. Using a portfolio of CDS contracts, an investor can create a synthetic portfolio of bonds that has the same credit exposure and payoffs.

Trading
While most of the discussion has been focused on holding a CDS contract to expiration, these contracts are regularly traded. The value of a contract fluctuates based on the increasing or decreasing probability that a reference entity will have a credit event. Increased probability of such an event would make the contract worth more for the buyer of protection, and worth less for the seller. The opposite occurs if the probability of a credit event decreases. A trader in the market might speculate that the credit quality of a reference entity will deteriorate some time in the future and will buy protection for the very short term in the hope of profiting from the transaction. An investor can exit a contract by selling his or her interest to another party, offsetting the contract by entering another contract on the other side with another party, or offsetting the terms with the original counterparty. Because CDSs are traded over the counter (OTC), involve intricate knowledge of the market and the underlying assets and are valued using industry computer programs, they are better suited for institutional rather than retail investors. (For more insight, read Are Derivatives Safe For Retail Investors?)

Market Risks
The market for CDSs is OTC and unregulated, and the contracts often get traded so much that it is hard to know who stands at each end of a transaction. There is the possibility that the risk buyer may not have the financial strength to abide by the contract's provisions, making it difficult to value the contracts. The leverage involved in many CDS transactions, and the possibility that a widespread downturn in the market could cause massive defaults and challenge the ability of risk buyers to pay their obligations, adds to the uncertainty.

Conclusion
Despite these concerns, credit default swaps have proved to be a useful portfolio management and speculation tool, and are likely to remain an important and critical part of the financial markets.

What Does North American Loan Credit Default Swap Index - LCDX Mean? A specialized index of loan-only credit default swaps (CDS) covering 100 individual companies that have unsecured debt trading in the broad secondary markets. The LCDX is traded over the counter and is managed by a consortium of large investment banks, which provide liquidity and assist in pricing the individual credit default swaps. The index begins with a fixed coupon rate (225 bps); trading moves the price and changes the yield, much like a standard bond. The index rolls every six months. Buyers of the index pay the coupon rate (and purchase the protection against credit events), while sellers receive the coupon and sell the protection. What is being protected is a "credit event" at the company, such as defaulting on a loan or declaring bankruptcy. If a credit event occurs in one of the underlying companies, the protection is paid out via physical delivery of the debt or through a cash settlement between the two parties. The underlying company is then removed, and a new one is placed in the index to return it to 100 members.

Investopedia explains North American Loan Credit Default Swap Index - LCDX Credit default swaps essentially put a price on the risk of a particular debt issuer's default. Companies with strong credit ratings have low risk premiums, so protection can be purchased for a minimal fee, assessed as a percentage of the notional (dollar) amount of the underlying debt. Companies with low credit ratings cost more to protect against, and the credit default swaps covering them may cost several percentage points of the notional amount. Minimum purchase amounts for the LCDX run in the millions, so most investors are large institutional firms who invest as either a hedge or a speculative play. The advantage is that one can gain access to a diversified group of companies for much less than purchasing the credit default swaps individually.

What Does Credit Risk Mean? The risk of loss of principal or loss of a financial reward stemming from a borrower's failure to repay a loan or otherwise meet a contractual obligation. Credit risk arises whenever a borrower is expecting to use future cash flows to pay a current debt. Investors are compensated for assuming credit risk by way of interest payments from the borrower or issuer of a debt obligation. Credit risk is closely tied to the potential return of an investment, the most notable being that the yields on bonds correlate strongly to their perceived credit risk.

Investopedia explains Credit Risk
The higher the perceived credit risk, the higher the rate of interest that investors will demand for lending their capital. Credit risks are calculated based on the borrowers' overall ability to repay. This calculation includes the borrowers' collateral assets, revenue-generating ability and taxing authority (such as for government and municipal bonds). Credit risks are a vital component of fixed-income investing, which is why ratings agencies such as S&P, Moody's and Fitch evaluate the credit risks of thousands of corporate issuers and municipalities on an ongoing basis.

What Does Mortgage Mean?
A debt instrument that is secured by the collateral of specified real estate property and that the borrower is obliged to pay back with a predetermined set of payments. Mortgages are used by individuals and businesses to make large purchases of real estate without paying the entire value of the purchase up front. Mortgages are also known as "liens against property" or "claims on property".

Investopedia explains Mortgage In a residential mortgage, a home buyer pledges his or her house to the bank. The bank has a claim on the house should the home buyer default on paying the mortgage. In the case of a foreclosure, the bank may evict the home's tenants and sell the house, using the income from the sale to clear the mortgage debt.
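As a hedged illustration of the "predetermined set of payments" mentioned above (the principal, rate, and term are hypothetical figures, not from the source text), the standard fixed-rate amortization formula gives the level monthly payment:

```python
# Illustrative sketch of the standard fixed-rate mortgage amortization formula.
principal = 300_000          # amount borrowed
annual_rate = 0.06           # 6% nominal annual interest
years = 30

r = annual_rate / 12                       # monthly interest rate
n = years * 12                             # number of monthly payments
monthly_payment = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
print(f"Monthly payment: ${monthly_payment:,.2f}")   # roughly $1,798.65
```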
What Does Mutual Fund Mean?
An investment vehicle that is made up of a pool of funds collected from many investors for the purpose of investing in securities such as stocks, bonds, money market instruments and similar assets. Mutual funds are operated by money managers, who invest the fund's capital and attempt to produce capital gains and income for the fund's investors. A mutual fund's portfolio is structured and maintained to match the investment objectives stated in its prospectus.


Investopedia explains Mutual Fund One of the main advantages of mutual funds is that they give small investors access to professionally managed, diversified portfolios of equities, bonds and other securities, which would be quite difficult (if not impossible) to create with a small amount of capital. Each shareholder participates proportionally in the gain or loss of the fund. Mutual fund units, or shares, are issued and can typically be purchased or redeemed as needed at the fund's current net asset value (NAV) per share, which is sometimes expressed as NAVPS.
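A minimal sketch of the NAV-per-share calculation referred to above (all figures are hypothetical, not from the source text):

```python
# Net asset value (NAV) per share = (fund assets - fund liabilities) / shares outstanding.
fund_assets = 52_000_000       # market value of the fund's holdings plus cash
fund_liabilities = 2_000_000   # accrued expenses and other liabilities
shares_outstanding = 4_000_000

nav_per_share = (fund_assets - fund_liabilities) / shares_outstanding
print(f"NAV per share: ${nav_per_share:.2f}")   # $12.50
```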

What Does Repurchase Agreement - Repo Mean? A form of short-term borrowing for dealers in government securities. The dealer sells the government securities to investors, usually on an overnight basis, and buys them back the following day. For the party selling the security (and agreeing to repurchase it in the future) it is a repo; for the party on the other end of the transaction, (buying the security and agreeing to sell in the future) it is a reverse repurchase agreement.

Investopedia explains Repurchase Agreement - Repo Repos are classified as a money-market instrument. They are usually used to raise short-term capital.
Repo is short for repurchase agreement. Those who deal in government securities use repos as a form of overnight borrowing. A dealer or other holder of government securities (usually T-bills) sells the securities to a lender and agrees to repurchase them at an agreed future date at an agreed price. They are usually very short-term, from overnight to 30 days or more. This short-term maturity and government backing means repos provide lenders with extremely low risk.
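As a hypothetical illustration of the repo economics just described (the prices and the 360-day money-market day-count convention below are assumptions, not figures from the source text), the lender's return can be worked out from the sale and repurchase prices:

```python
# Illustrative sketch of an overnight repo from the lender's point of view.
sale_price = 9_990_000          # dealer sells T-bills to the lender today
repurchase_price = 9_991_110    # dealer buys them back the next day
term_days = 1

interest = repurchase_price - sale_price
implied_repo_rate = (interest / sale_price) * (360 / term_days)   # money-market convention
print(f"Interest earned by the lender: ${interest:,}")
print(f"Implied annualized repo rate: {implied_repo_rate:.2%}")   # 4.00%
```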

Repos are popular because they can virtually eliminate credit problems. Unfortunately, a number of significant losses over the years from fraudulent dealers suggest that lenders in this market have not always checked their collateralization closely enough. There are also variations on standard repos:

Reverse Repo - The reverse repo is the complete opposite of a repo. In this case, a dealer buys government securities from an investor and then sells them back at a later date for a higher price.

Term Repo - Exactly the same as a repo except that the term of the loan is greater than 30 days.





Before we get started, here's an OLAP blog. This is an extract of a Microsoft paper on SQL Server 2000, called 'Cubes in the Real World'.

The Method: Aim for an iterative approach with substantial user involvement.

Review results after each major iteration. Choose between a star schema or a snowflake schema: star schemas give better performance, and are therefore recommended when volumes are high. Should dimensions be shared or private? Usually choose shared to ensure widest usage later; only choose private to protect restricted or secure data.

What is Dimensional Reporting?
Dimensional modeling utilizes "measures" and "dimensions". Measures include a variety of key performance indicators, and may include "simple" measures (amounts paid to providers, patient days, etc.) as well as computed measures or ratios, such as cost per member per month. Measures can be presented at various levels of summarization or drill-down, depending on how the dimensions of the analysis are displayed. For example, a calculation of a cost PMPM ratio for a specific primary care physician for a specific month would automatically limit the cost numerator and the member-months denominator to the specified PCP and time period.

Dimensions represent the variables by which measurement is performed, such as date, location, product code, etc. Dimensions can be arranged in hierarchies, allowing users to drill down through the data. For example, the service date dimension contains the hierarchy of YEARS drilling down to QUARTERS, and then to MONTHS. Careful design of the hierarchy in a dimension facilitates drill-down reporting by making the hierarchies intuitive and by following the thought process of the analyst.

A dimensional design model allows the user to view the measures in any reasonable combination of the dimensions. In Microsoft Excel pivot tables this is done by dragging a dimension name to a row or column heading in a table, which will automatically arrange the measures in the table according to the dimensions displayed. This provides a practically limitless set of "reports". Users are not limited by predetermined capabilities of the system, since each dimension can be used as a row or a column in a report, or as a "filter" to select certain data to appear in that report. Dimensional reporting is not only useful in designing interactive tools, but also in the design of static reports. By designing reports using common definitions for these headings, confusion over what each heading means can be avoided.

In addition, using dimensional design for static reports greatly facilitates the ability to drill down to those reports using the reporting tools described below.

What is an OLAP cube?
An OLAP cube is a specially designed database that is optimized for reporting. While most databases designed for online transaction processing, such as those used in claims processing, are designed for efficiency in data storage, OLAP cubes are designed for efficiency in data retrieval. This means that the data is stored in such a way as to make it easy and efficient for reporting. Regular relational databases treat all data in the database similarly; OLAP cubes, however, categorize data into "dimensions" and "measures". Measures represent items that are counted, summarized or aggregated, such as costs or units of service. Dimensions are variables by which measures are summarized, such as hospitals, physicians, or dates of service. This organization of data greatly facilitates the ability to formulate data requests based on real-life situations. In addition, many of the queries that could be posed to the data are "pre-aggregated" in the database, such that the answers have already been precalculated and can be reported without delay. The term "cube" comes from the geometric object that has three dimensions. OLAP cubes can have many more dimensions than three, but the term continues to apply.

What is a pivot table?
A pivot table is a Microsoft Excel feature that allows access to data that is organized into dimensions and measures. It acts as an "OLAP client" to allow users to interact with data stored in OLAP cubes using the familiar Excel spreadsheet format.

Can I use other programs to access OLAP cubes?
Many other programs can be used as OLAP clients. Microsoft Data Analyzer allows arranging dimensions and measures in a highly graphical format, while Crystal Analysis capitalizes on its ability to create Web-based templates. Any of these clients can be used with any OLAP cube, including those designed by DGA Partners. We've found that Excel generally provides the best blend of free-form analysis with exceptional functionality.
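To illustrate the dimensions-and-measures idea in code (an illustrative sketch only, not DGA Partners' or Microsoft's implementation; it uses the pandas library in place of an OLAP server, and all column names and figures are hypothetical):

```python
# Pivot-table-style summary of a tiny claims data set: dimensions on the
# rows/columns, measures aggregated in the cells.
import pandas as pd

claims = pd.DataFrame({
    "year":          [2004, 2004, 2004, 2005, 2005, 2005],   # dimension
    "quarter":       ["Q1", "Q1", "Q2", "Q1", "Q2", "Q2"],   # dimension
    "provider":      ["PCP-A", "PCP-B", "PCP-A", "PCP-A", "PCP-B", "PCP-B"],  # dimension
    "paid":          [1200.0, 800.0, 950.0, 1100.0, 700.0, 900.0],            # measure
    "member_months": [40, 30, 35, 42, 28, 33],                                # measure
})

# Much like dragging fields in an Excel pivot table.
cube = claims.pivot_table(index=["year", "quarter"], columns="provider",
                          values=["paid", "member_months"], aggfunc="sum")
print(cube)

# Drilled-up ratio: cost per member per month (PMPM) by year.
by_year = claims.groupby("year").sum(numeric_only=True)
print(by_year["paid"] / by_year["member_months"])
```

Rearranging the report corresponds to changing which dimensions appear in the index and columns arguments, which is the code analogue of dragging dimension names around a pivot table.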

Why are OLAP cubes important?
Before OLAP technology was well developed, data had to be extracted from databases using "queries". This meant that the analyst had to structure a request to the database for the information desired, and then submit this query to the database server. That server would process the query and return the results. Depending on the size of the database and the data requested, this query could take minutes or hours to complete. In this sense, the "online" aspect of this type of reporting is questionable. OLAP cubes are fundamentally different in that they "pre-aggregate" the data used to answer many of the queries that are anticipated. This pre-aggregation occurs when the cube is built, which means that this process is already completed when the user queries the data. In addition, the size of an OLAP cube depends on the number of measures and dimensions it contains - it may have no relationship to the size of the initial data set. Therefore, a claims data set having millions of members can be consolidated into a relatively small OLAP cube that can return data almost instantaneously.

OLAP technology has been around since the mid-1990s, but has developed significantly over the last two to three years. Prior to that, the need for efficient storage of data took priority over the need for efficient reporting, simply because disk space was so expensive. In addition, the availability of faster CPUs has made it possible to compute aggregations on huge data sets within a reasonable period of time. Finally, the tools necessary to design OLAP cubes have become much more powerful over the last few years. Analytical products designed more than two years ago are likely to be based on the older query-based reporting technology. They can never be as fast or as user-friendly as those based on OLAP data sources.

Does an OLAP cube allow me to "drill down" into my data?
Yes, this is a unique feature that is part of the OLAP database structure. When designing an OLAP database, dimensions are structured into "hierarchies". For example, service dates can be arranged in a hierarchy of days, months, quarters, and years. Similarly, diagnoses can be arranged by major and minor categories, drilling down to the individual diagnosis code.

The OLAP cube "knows" the hierarchy, so if the analyst issues the command to "drill down", the cube knows the next level of data to be presented.

Our claims data is already contained in a data warehouse. Why do you have to create another one?
The term "data warehouse" has been used to describe many different types of databases. Frequently, it is used to describe a collection of tables containing claims data in a highly normalized form, meaning that the data is organized for maximum storage efficiency. While reports can be generated from such a database, it is often cumbersome to do so because of the organization of the data. In addition, to meet reporting needs it is often necessary to include information that is not contained in the claims data warehouse, such as IBNR factors, administrative adjustments, or other similar data that is found in spreadsheets or desktop databases. Before efficient reporting can be done, this data must all be brought together, relationships built, data integrity verified, dimensions and measures identified and made consistent, and the whole structure optimized for use in developing OLAP cubes.

How can we access OLAP cubes without servers on our site?
Several alternatives are available. First, we can make the data available to you over the Internet using a standard Internet Explorer interface. In this system, you connect to one of our servers and open a "terminal server session". From that point on, you'll be working on our server, even though the desktop appears on your computer. You can use Excel, or any other OLAP client, and print reports and analyses to your local printers. You can also copy cells from the server's desktop into a spreadsheet located on your own desktop; however, the local spreadsheet will not have OLAP functionality. This approach is especially useful for clients whose computers may be several years old and might be too slow for effective OLAP analysis. It allows you to utilize our state-of-the-art servers without upgrading your own hardware. It also avoids the need for you to install or set up any software -- we can create a desktop for you that contains everything you need.

This alternative offers the advantage of "instant updating", in that data that we have processed is immediately available for your use, without having to transfer it from our site to yours. It also requires a full-time connection from your computers to the Internet. For more information about this service, see Using the DGA Partners Secure Data Server below.

Is there a way that we can bring the OLAP cubes "in-house", so that we don't need constant Internet access?
Yes, we can install the OLAP cubes on servers located at your site. Actually, a "server" is unnecessary - most computers built within the last year, and populated with sufficient memory, are sufficiently powerful to run an OLAP server and client. In cases where portability is necessary, we have utilized laptop computers to provide both the OLAP server and Excel client functions. This allows the user to take the databases "on the road" to meet with physicians, payors, or others. This solution requires you to obtain and install Microsoft SQL Server 2000 on each computer to be used as a server.

Where can I find more information about OLAP and pivot tables?
See the DGA Partners OLAP and Data Analytics Resource page here.

How does DGA Partners approach a data reporting project?
DGA Partners structures data reporting engagements into two major phases. In the first phase, we transfer, edit, and load data from its original source into an "analytical data warehouse" located on our data servers. The second phase is tailored to the needs of the client, and can involve (a) the preparation of tabular reports, (b) development of OLAP cubes that the client can use either on their own computers or through the Internet to DGA Partners' servers, (c) the transfer of the analytical data warehouse to the client's servers, or (d) additional consulting services, including financial and clinical analysis of client data.

What types of standard reports do you produce?
DGA Partners has a library of standard report formats that other clients have found useful. However, it is important to realize that the ad hoc reporting capability of an OLAP-based reporting system allows users to access any information that is available in the database.

By carefully structuring the dimensions and measures in an OLAP cube, we can allow users to access any possible data. Thus, the concept of "standard reports" becomes less meaningful.

ACL, formerly known as Audit Command Language, is a data extraction and analysis software used for fraud detection and prevention. By sampling large data sets, ACL is used to find irregularities or patterns in transactions that could indicate control weaknesses or fraud.

CCH (Commerce Clearing House), a Wolters Kluwer business, is a provider of software and information services for tax, accounting and audit workers. http://www.cchgroup.com/webapp/wcs/stores/servlet/topCategories_10151_-1_10053

RIA Checkpoint
RIA Checkpoint is a web-based tax research service that includes a wide range of materials on federal and state taxes. Access to RIA Checkpoint is limited to University of Washington Law School faculty and students and to other researchers visiting the Gallagher Law Library.

What does RIA Checkpoint offer? A partial list includes:

U.S. (federal)
o Internal Revenue Code and regulations
o Internal Revenue Bulletin
o IRS rulings and releases
o Tax Court and other federal tax decisions
o pending tax legislation
o IRS forms and publications
o Federal Tax Coordinator 2d (FTC)
o United States Tax Reporter (USTR)
o RIA Federal Tax Handbook
o Journal of Corporate Taxation
o Taxation for Accountants
o Journal of Taxation
o daily tax news

States
o laws and regulations
o forms and instructions
o tax rates
o court decisions
o multistate law comparisons
o Journal of Multistate Taxation and Incentives

Calculators
o auto
o business
o credit card
o investment
o loans
o mortgages
o personal
o retirement
o savings
o tax

RACI CHART
http://en.wikipedia.org/wiki/Responsibility_assignment_matrix

GAP ANALYSIS

In business and economics, gap analysis is a tool that helps a company to compare its actual performance with its potential performance. At its core are two questions: "Where are we?" and "Where do we want to be?" If a company or organization is not making the best use of its current resources or is forgoing investment in capital or technology, then it may be producing or performing at a level below its potential. This concept is similar to the base case of being below one's production possibilities frontier. The goal of gap analysis is to identify the gap between the optimized allocation and integration of the inputs (resources) and the current level of allocation. This helps provide the company with insight into areas which could be improved.

The gap analysis process involves determining, documenting and approving the variance between business requirements and current capabilities. Gap analysis naturally flows from benchmarking and other assessments. Once the general expectation of performance in the industry is understood, it is possible to compare that expectation with the company's current level of performance. This comparison becomes the gap analysis. Such analysis can be performed at the strategic or operational level of an organization.

Gap analysis is a formal study of what a business is doing currently and where it wants to go in the future. It can be conducted from different perspectives, as follows:
1. Organization (e.g., human resources)
2. Business direction
3. Business processes
4. Information technology

Gap analysis provides a foundation for measuring investment of time, money and human resources required to achieve a particular outcome (e.g. to turn the salary payment process from paper-based to paperless with the use of a system). Note that 'GAP analysis' has also been used as a means for classification of how well a product or solution meets a targeted need or set of requirements. In this case, 'GAP' can be used as a ranking of 'Good', 'Average' or 'Poor'. This terminology does appear in the PRINCE2 project management publication from the OGC (Office of Government Commerce).

The need for new products or additions to existing lines may have emerged from portfolio analyses, in particular from the use of the Boston Consulting Group Growth-share matrix, or the need will have emerged from the regular process of following trends in the requirements of consumers. At some point a gap will have emerged between what the existing products offer the consumer and what the consumer demands. That gap has to be filled if the organization is to survive and grow. To identify a gap in the market, the technique of gap analysis can be used. Thus an examination of what profits are forecasted for the organization as a whole compared with where the organization (in particular its shareholders) 'wants' those profits to be represents what is called the 'planning gap': this shows what is needed of new activities in general and of new products in particular. The planning gap may be divided into three main elements:


Usage gap


This is the gap between the total potential for the market and the actual current usage by all the consumers in the market. Clearly two figures are needed for this calculation:

market potential
existing usage


Market potential


The maximum number of consumers available will usually be determined by market research, but it may sometimes be calculated from demographic data or government statistics. Ultimately there will, of course, be limitations on the number of consumers. For guidance one can look to the numbers using similar products. Alternatively, one can look to what has happened in other countries; the increased affluence of all the major Western economies means that any lag behind such countries can now be much shorter. The maximum attainable average usage (there will always be a spread of usage across a range of customers) will usually be determined from market research figures. It is important, however, to consider what lies behind such usage.

Existing usage


The existing usage by consumers makes up the total current market, from which market shares, for example, are calculated. It is usually derived from marketing research, most accurately from panel research such as that undertaken by the Nielsen Company but also from ad hoc work. Sometimes it may be available from figures collected by government departments or industry bodies; however, these are often based on categories which may make sense in bureaucratic terms but are less helpful in marketing terms. The 'usage gap' is thus:
usage gap = market potential - existing usage

This is an important calculation to make. Many, if not most, marketers accept the existing market size, suitably projected over the timescales of their forecasts, as the boundary for their expansion plans. Although this is often the most realistic assumption, it may sometimes impose an unnecessary limitation on their horizons. The original market for video recorders, for example, was limited to the professional users who could afford the high prices involved; it was only after some time that the technology was extended to the mass market.

In the public sector, where the service providers usually enjoy a monopoly, the usage gap will probably be the most important factor in the development of the activities. Persuading more consumers to take up family benefits, for example, will probably be more important to the relevant government department than opening more local offices. The usage gap is most important for the brand leaders. If any of these has a significant share of the whole market, say in excess of 30 per cent, it may become worthwhile for the firm to invest in expanding the total market. The same option is not generally open to the minor players, although they may still be able to profitably target specific offerings as market extensions.
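As a minimal illustration of the arithmetic, the short Python sketch below computes the usage gap from an assumed market potential and existing usage; the figures are hypothetical and chosen only for the example.

# Hypothetical figures (units per year), for illustration only.
market_potential = 10_000_000
existing_usage = 7_500_000

usage_gap = market_potential - existing_usage  # 2,500,000 units of unrealised demand
print(usage_gap)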

All other gaps relate to the difference between the organization's existing sales (its market share) and the total sales of the market as a whole. This difference is the share held by competitors. These gaps will, therefore, relate to competitive activity.

Product gap


The product gap, which could also be described as the segment or positioning gap, represents that part of the market from which the individual organization is excluded because of product or service characteristics. This may have come about because the market has been segmented and the organization does not have offerings in some segments, or it may be because the positioning of its offering effectively excludes it from certain groups of potential consumers, because there are competitive offerings much better placed in relation to these groups. This segmentation may well be the result of deliberate policy. Segmentation and positioning are very powerful marketing techniques; but the trade-off, to be set against the improved focus, is that some parts of the market may effectively be put beyond reach. On the other hand, it may frequently be by default; the organization has not thought about its positioning, and has simply let its offerings drift to where they now are. The product gap is probably the main element of the planning gap in which the organization can have a productive input; hence the emphasis on the importance of correct positioning.

Competitive gap


What is left represents the gap resulting from the competitive performance. This competitive gap is the share of business achieved among similar products, sold in the same market segment, and with similar distribution patterns - or at least, in any comparison, after such effects have been discounted. Needless to say, it is not a factor in the case of the monopoly provision of services by the public sector. The competitive gap represents the effects of factors such as price and promotion, both the absolute level and the effectiveness of its messages. It is what marketing is popularly supposed to be about.

See also

Capability (systems engineering)
Gap analysis (conservation)

Market gap analysis

In the type of analysis described above, gaps in the product range are looked for. Another perspective (essentially taking the "product gap" to its logical conclusion) is to look for gaps in the market itself (in a variation on "product positioning", using multidimensional "mapping") which the company could profitably address, regardless of where its current products stand.

Many marketers would question the worth of the theoretical gap analysis described earlier. Instead, they would immediately begin a proactive search for a competitive advantage.

Rational RequisitePro
Features and benefits
Software development is a team endeavor, so it is critical that team members possess a shared understanding of their project's vision, goals, specifications and requirements. But how can this be achieved when teams are geographically distributed and functionally isolated, failing to communicate with each other in a timely, clear, consistent manner? The IBM Rational RequisitePro solution addresses this need. Rational RequisitePro is an easy-to-use requirements management tool that lets teams author and share their requirements using familiar document-based methods while leveraging database-enabled capabilities such as requirements traceability and impact analysis. The result is better communication and management of requirements, with an increased likelihood of completing projects on time, within budget and above expectations. Successful projects start with requirements management: the more effective the execution, the greater the resulting quality and customer satisfaction.

IBM Rational RequisitePro


Feature: Word-Database integration
Description: Supports Microsoft Word for requirement authoring and communication; complements document-based entry with a commercial database to add organization, tracking and management capabilities. For teams that prefer a database-centric approach, the Word integration feature is optional.
Benefit: This unique architecture marries an effective and intuitive method for creating and sharing requirement information with advanced customization and analysis techniques.

Feature: Powerful requirements engine
Description: Easily set up requirement, attribute and document types. Define queries and filters to quickly find information of interest. Adaptable to your custom process.
Benefit: Team members can easily create views to display the information they need.

Feature: Traceability
Description: Easily set up and track relationships between requirements to verify that high-level requirements are represented in the detailed software requirement specs.
Benefit: Querying these relationships provides coverage analysis to ensure completeness and to make sure that time is not wasted building some part of the system that is not fulfilling any need.

Feature: Change analysis and notification
Description: Graphical display of suspect requirements as a result of requirement modification elsewhere in the project. Includes automatic email notification of stakeholders when requirements change.
Benefit: Ensure early notification when requirements are changed by any member of the project team. Stay on top of the latest modifications to keep the project on track.

Feature: Baseline creation and comparison
Description: Create an XML-based baseline of project requirements. Use it to jumpstart new projects or to compare it with other project baselines, exposing requirement changes and omissions at multiple levels of detail.
Benefit: Support parallel development in which more than one set of requirements must coexist. Determine when and where changes occur to reduce team confusion and to clarify the effect of project decisions over time.

Feature: Web-based access
Description: Anyone with Web access, independent of his or her platform, can view, author and manage requirements quickly and efficiently without having Rational RequisitePro loaded on their machine. Project administration can be done via the web too.
Benefit: Ensure that distributed teams have read and write access to requirements from wherever they are located.

Feature: Detailed reporting for standards compliance
Description: Create, view and export filterable traceability matrices and attribute reports to support the documentation requirements of internal and external auditing needs.
Benefit: Create documentation required for compliance with organizational, institutional and governmental standards.

Feature: Commercially supported databases
Description: Supports IBM UDB DB2, Microsoft SQL Server, Oracle and Microsoft Access databases.
Benefit: Leverages existing database investments and provides support for very large numbers of requirements with enterprise databases.

Feature: Integrated requirements management
Description: Integrated with Rational Software Architect, Rational Software Modeler, Rational ClearQuest, Rational Application Developer, Rational Data Architect, Rational Systems Developer, WebSphere Business Modeler, WebSphere Integration Developer, Rational SoDA, Rational Unified Process and Rational TestManager.
Benefit: Requirements information is uniformly accessible and modifiable through products in the IBM Software Delivery Platform, ensuring the entire team remains in sync. Provides for full traceability through the entire lifecycle.

Feature: Well supported user community
Description: Getting started guide, experienced local support, extensibility interface for customization, Web-based training, and IBM developerWorks for customers.
Benefit: A wealth of resources and a community of practice help teams get started fast and manage requirements with confidence.

IBM Rational Software Architect, (RSA) made by IBM's Rational Software division, is a comprehensive modeling and development environment that uses the Unified Modeling Language (UML) for designing architecture for C++ and Java 2 Enterprise Edition (J2EE) applications and web services. Rational Software Architect is built on the Eclipse open-source software framework and includes capabilities focused on architectural code analysis, C++, and model-driven development (MDD) with the UML for creating resilient applications and web services.


Overview

Rational Software Architect version 7.5 includes the following capabilities:

1. Built on Eclipse version 3.4
2. Supports UML version 2.1
3. Supports model-to-code and code-to-model transformations.
   Forward transformations go from:
   - UML to Java
   - UML to C#
   - UML to C++
   - UML to EJB
   - UML to WSDL
   - UML to XSD
   - UML to CORBA Interface Description Language (IDL)
   - UML to structured query language (SQL)-based logical data models as supported by IBM Rational Data Architect software
   Reverse transformations go from:
   - Java to UML
   - C++ to UML
4. Includes all of the capabilities of IBM Rational Application Developer
5. Enables model management for parallel development and architectural refactoring, e.g., split, combine, compare and merge models and model fragments
6. Provides visual construction tools to expedite software design and development

It also has integrations with other Rational tools, such as ClearCase version control and ClearQuest configuration management. All Rational software products, including Rational Software Architect (RSA), are engineered as plugins that sit on top of the open-source Eclipse development platform. Installing Rational Software Architect gives the developer the option to either install RSA on top of an existing Eclipse v3.2 installation, or to install a new Eclipse v3.2 instance. Because RSA is Eclipse-based, it can take advantage of a vibrant market of third-party plugins for Eclipse, as well as plugins specifically for Rational tools.

History

IBM Rational software has a long history in application modeling, beginning in the early 1990s with the work of Grady Booch, James Rumbaugh and Ivar Jacobson. They combined competing modeling approaches to form what eventually became the Unified Modeling Language. Rational Software's first visual modeling and development tool was Rational Rose, a stand-alone modeling tool that integrates at the application programming interface (API) level with third-party Integrated Development Environments (IDEs) to support a variety of programming languages and other implementation technologies.

While Rational Rose was an important step in bringing Model-driven development (MDD) closer to practicing software developers, it was found that only a small fraction of developers used modeling on a routine basis. A key problem was identified: developers didn't like to leave their IDE. They wanted visual modeling to be integrated not merely with their IDE, but inside their IDE. Rational responded to this need in 2002 with IBM Rational XDE software, providing an extended development environment for the next generation of programming technologies emerging at the time: Java and Microsoft .NET. IBM Rational XDE was characterized as the next generation of IBM Rational Rose, not a new version of it (hence the name change), and not necessarily a replacement for Rose (since IBM Rational XDE was purposefully restricted to support only a select number of IDEs and implementation technologies). However, with each addition of a tool or capability came another point-to-point integration requirement. As more and

more capabilities were added, Rational began to reach the practical limits of this style of tool integration. For the next-generation MDD products, it was only natural to build additional model-driven development functions on top of Eclipse to form a more complete MDD tool. IBM Rational Software Architect, IBM Rational Software Modeler and IBM Rational Systems Developer are the result of these changes, transforming the silos that previously defined modeling, development and code analysis into an integrated and unified design and development experience. RSA now co-exists with IBM Rational Rhapsody (acquired from Telelogic in 2008) and several other products, with the two mentioned being the major MDD solutions within Rational.

Benefits

The benefits of using Rational Software Architect include:

- Build software architecture that supports change, with a common platform that facilitates easy roundtrip engineering and synchronization of models and code.
- Accelerate implementation and facilitate maintenance of a successful service-oriented architecture (SOA) solution, such as a web service, with powerful tools and process guidance.
- Use UML to ensure the numerous stakeholders within your software development projects are continuously communicating, and use defined specifications to jumpstart development.
- Gain insight into distributed projects and tighter control of shared information.

Six Sigma is a business management strategy originally developed by Motorola, USA in 1986.[1][2] As of 2010, it is widely used in many sectors of industry, although its use is not without controversy. Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes.[3] It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.) who are experts in these methods.[3] Each Six Sigma project carried out within an organization follows a defined sequence of steps and has quantified financial targets (cost reduction or profit increase).[3]

The term Six Sigma originated from terminology associated with manufacturing, specifically terms associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, or the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of the products manufactured are statistically expected to be free of defects (3.4 defects per million). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this goal became a byword for the management and engineering practices used to achieve it.


Historical overview

Six Sigma originated as a set of practices designed to improve manufacturing processes and eliminate defects, but its application was subsequently extended to other types of business processes as well.[4] In Six Sigma, a defect is defined as any process output that does not meet customer specifications, or that could lead to creating an output that does not meet customer specifications.[3]

The idea of Six Sigma was actually born at Motorola in the 1970s, when senior executive Art Sundry was criticizing Motorola's bad quality.[5] Through this criticism, the company discovered the connection between increasing quality and decreasing costs in the production process. Previously, the prevailing view was that quality costs extra money; in fact, improving quality reduced total costs, because the costs of repair and inspection fell.[6] Bill Smith then first formulated the particulars of the methodology at Motorola in 1986.[1] Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM and Zero Defects,[7][8] based on the work of pioneers such as Shewhart, Deming, Juran, Ishikawa, Taguchi and others. Like its predecessors, Six Sigma doctrine asserts that:

- Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.
- Manufacturing and business processes have characteristics that can be measured, analyzed, improved and controlled.
- Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

Features that set Six Sigma apart from previous quality improvement initiatives include:

- A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.[3]
- An increased emphasis on strong and passionate management leadership and support.[3]
- A special infrastructure of "Champions", "Master Black Belts", "Black Belts", "Green Belts", etc. to lead and implement the Six Sigma approach.[3]
- A clear commitment to making decisions on the basis of verifiable data, rather than assumptions and guesswork.[3]

The term "Six Sigma" comes from a field of statistics known as process capability studies. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO).[9][10] Six Sigma's implicit goal is to improve all processes to that level of quality or better. Six Sigma is a registered service mark and trademark of Motorola Inc.[11] As of 2006 Motorola reported over US$17 billion in savings[12] from Six Sigma. Other early adopters of Six Sigma who achieved well-publicized success include Honeywell (previously known as AlliedSignal) and General Electric, where Jack Welch introduced the method.[13] By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.[14] In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to yield a methodology named Lean Six Sigma.

Methods

Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.[14]

- DMAIC is used for projects aimed at improving an existing business process.[14] DMAIC is pronounced as "duh-may-ick".
- DMADV is used for projects aimed at creating new product or process designs.[14] DMADV is pronounced as "duh-mad-vee".

DMAIC

The DMAIC project methodology has five phases:

- Define the problem, the voice of the customer, and the project goals, specifically.
- Measure key aspects of the current process and collect relevant data.
- Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out the root cause of the defect under investigation.
- Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future-state process. Set up pilot runs to establish process capability.
- Control the future-state process to ensure that any deviations from target are corrected before they result in defects. Implement control systems such as statistical process control, production boards and visual workplaces, and continuously monitor the process (see the sketch following this list).
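To make the statistical process control idea in the Control phase concrete, here is a minimal Python sketch that computes individuals-chart control limits at plus or minus three standard deviations and flags points outside them. The function names and the use of the overall standard deviation (rather than a moving-range estimate) are simplifications chosen for illustration, not a prescribed Six Sigma procedure.

import statistics

def control_limits(samples):
    # Centre line and +/- 3 sigma limits for an individuals chart (simplified:
    # real SPC charts usually estimate sigma from moving ranges or subgroups).
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(samples):
    # Points beyond the control limits signal possible special-cause variation.
    lcl, _, ucl = control_limits(samples)
    return [x for x in samples if x < lcl or x > ucl]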

DMADV or DFSS

The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[14] features five phases:

- Define design goals that are consistent with customer demands and the enterprise strategy.
- Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
- Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
- Design details, optimize the design, and plan for design verification. This phase may require simulations.
- Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

Quality management tools and methods used in Six Sigma

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside of Six Sigma. The main methods used include:

- 5 Whys
- Analysis of variance
- ANOVA Gauge R&R
- Axiomatic design
- Business Process Mapping
- Cause & effects diagram (also known as fishbone or Ishikawa diagram)
- Chi-square test of independence and fits
- Control chart
- Correlation
- Cost-benefit analysis
- CTQ tree
- Design of experiments
- Failure mode and effects analysis (FMEA)
- General linear model
- Histograms
- Quality Function Deployment (QFD)
- Pareto chart
- Pick chart
- Process capability
- Quantitative marketing research through use of Enterprise Feedback Management (EFM) systems
- Regression analysis
- Root cause analysis
- Run charts
- SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers)
- Taguchi methods
- Taguchi Loss Function
- TRIZ

Implementation roles

One key innovation of Six Sigma involves the "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a ranking terminology (similar to some martial arts systems) to define a hierarchy (and career path) that cuts across all business functions. Six Sigma identifies several key roles for its successful implementation.[15]

- Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.
- Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.
- Master Black Belts, identified by Champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.
- Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects and functions for Six Sigma.
- Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.

Some organizations use additional belt colours, such as Yellow Belts, for employees that have basic training in Six Sigma tools.

Certification

In the United States, Six Sigma certification for both Green and Black Belts is offered by the Institute of Industrial Engineers[16] and by the American Society for Quality.[17] Many organizations also offer certification programs to their employees. Many corporations, including early Six Sigma pioneers General Electric and Motorola, developed certification programs as part of their Six Sigma implementation. All branches of the US Military also train and certify their own Black and Green Belts.

Origin and meaning of the term "six sigma process"

Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, µ, and the curve's inflection point. The greater this distance, the greater is the spread of values encountered. For the curve shown, µ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the future (the 1.5 sigma shift), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications.[10] This is based on the calculation method employed in process capability studies. Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units. As process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification.[10]

Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term.[10] As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study.[10] To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation.[10][18] According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigma, either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.[10]

Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided capability study).[10] So the 3.4 DPMO of a six sigma process in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift introduced to account for long-term variation.[10] This allows for the fact that special causes may result in a deterioration in process performance over time, and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.[10]

Sigma levels

A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation. See also: Three sigma rule

The table[19][20] below gives long-term DPMO values corresponding to various short-term sigma levels. It must be understood that these figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = -0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages.
Sigma level   DPMO      Percent defective   Percentage yield   Short-term Cpk   Long-term Cpk
1             691,462   69%                 31%                0.33             -0.17
2             308,538   31%                 69%                0.67             0.17
3             66,807    6.7%                93.3%              1.00             0.5
4             6,210     0.62%               99.38%             1.33             0.83
5             233       0.023%              99.977%            1.67             1.17
6             3.4       0.00034%            99.99966%          2.00             1.5
7             0.019     0.0000019%          99.9999981%        2.33             1.83
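The DPMO column can be reproduced from the normal distribution. The short Python sketch below applies the conventional 1.5 sigma shift and the one-sided tail probability described above; the helper name is made up for the example.

import math

def dpmo(short_term_sigma, shift=1.5):
    # Long-term defects per million opportunities: one-sided normal tail beyond
    # the nearest specification limit after the mean drifts by `shift` sigma.
    z = short_term_sigma - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

# dpmo(6) ~= 3.4, dpmo(3) ~= 66,807, dpmo(1) ~= 691,462, matching the table above.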

Software used for Six Sigma


Main article: List of Six Sigma software packages

Application
Main article: List of Six Sigma companies

Six Sigma mostly finds application in large organizations.[21] An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a figure that later grew to more than $1 billion.[21] According to industry consultants like Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them.[21] This is due both to the infrastructure of Black Belts that Six Sigma requires, and to the fact that large organizations present more opportunities for the kinds of improvements Six Sigma is suited to bringing about.[21]

Criticism

Lack of originality

Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality improvement", stating that "there is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."[22]

Role of consultants


The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry of training and certification. Critics argue there is overselling of Six Sigma by too great a number of consulting firms, many of which claim expertise in Six Sigma when they only have a rudimentary understanding of the tools and techniques involved.[3]

Potential negative effects


A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)."[23] The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies." Advocates of Six Sigma have argued that many of these claims are in error or ill-informed.[24][25] A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the effect of stifling creativity and reports its removal from the research function. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue skies research.[26] This phenomenon is further explored in the book, Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.[27]

Based on arbitrary standards


While 3.4 defects per million opportunities might work well for certain products/processes, it might not operate optimally or cost effectively for others. A pacemaker process might need higher standards, for example, whereas a direct mail advertising campaign might need lower standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as the number of standard deviations is not clearly explained. In addition, the Six Sigma model assumes that the process data always conform to the normal distribution. The calculation of defect rates for situations where the normal distribution model does not apply is not properly addressed in the current Six Sigma literature.[3]

Criticism of the 1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature.[28] Its universal applicability is seen as doubtful.[3]

The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process."[10][29] The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention about how Six Sigma measures are defined.[29] The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.[10]

Capability Maturity Model Integration



Characteristics of the Maturity levels.[1]

Capability Maturity Model Integration (CMMI) is a process improvement approach that helps organizations improve their performance by providing the essential elements of effective processes. CMMI can be used to guide process improvement across a project, a division, or an entire organization. CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University. According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."[2]


Overview

CMMI currently addresses three areas of interest:

1. Product and service development: CMMI for Development (CMMI-DEV)
2. Service establishment, management, and delivery: CMMI for Services (CMMI-SVC)
3. Product and service acquisition: CMMI for Acquisition (CMMI-ACQ)

CMMI was developed by a group of experts from industry, government, and the Software Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization.[1] CMMI originated in software engineering but has been highly generalised over the years to embrace other areas of interest, such as the development of hardware products, the delivery of all kinds of services, and the acquisition of products and services. The word "software" does not appear in definitions of CMMI. This generalization of improvement concepts makes CMMI extremely abstract. It is not as specific to software engineering as its predecessor, the Software CMM (CMM, see below).

History
CMMI was developed by the CMMI project, which aimed to improve the usability of maturity models by integrating many different models into one framework. The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association. CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM was developed from 1987 until 1997. In 2002, CMMI Version 1.1 was released, Version 1.2 followed in August 2006, and Version 1.3 in November 2010.

CMMI topics

CMMI representation


CMMI exists in two representations: continuous and staged.[1] The continuous representation is designed to allow the user to focus on the specific processes that are considered important for the organization's immediate business objectives, or those to which the organization assigns a high degree of risks. The staged representation is designed to provide a standard sequence of improvements, and can serve as a basis for comparing the maturity of different projects and organizations. The staged representation also provides for an easy migration from the SW-CMM to CMMI.[1]

CMMI model framework


For more details on this topic, see Process area (CMMI).

Depending on the CMMI constellation (acquisition, services, development) used, the process areas it contains will vary. Process areas are the areas that will be covered by the organization's processes. The table below lists the process areas that are present in all CMMI constellations. This collection of sixteen process areas is called the CMMI core process areas.
Capability Maturity Model Integration (CMMI) Core Process Areas

Abbreviation   Name                                      Area                  Maturity Level
CAR            Causal Analysis and Resolution            Support               5
CM             Configuration Management                  Support               2
DAR            Decision Analysis and Resolution          Support               3
IPM            Integrated Project Management             Project Management    3
MA             Measurement and Analysis                  Support               2
OPD            Organizational Process Definition         Process Management    3
OPF            Organizational Process Focus              Process Management    3
OPM            Organizational Performance Management     Process Management    5
OPP            Organizational Process Performance        Process Management    4
OT             Organizational Training                   Process Management    3
PMC            Project Monitoring and Control            Project Management    2
PP             Project Planning                          Project Management    2
PPQA           Process and Product Quality Assurance     Support               2
QPM            Quantitative Project Management           Project Management    4
REQM           Requirements Management                   Project Management    2
RSKM           Risk Management                           Project Management    3

Maturity levels in CMMI for development

There are five maturity levels. However, maturity level ratings are awarded for levels 2 through 5. The process areas below and their maturity levels are listed for the CMMI for Development model:

Maturity Level 2 - Managed

- CM - Configuration Management
- MA - Measurement and Analysis
- PMC - Project Monitoring and Control
- PP - Project Planning
- PPQA - Process and Product Quality Assurance
- REQM - Requirements Management
- SAM - Supplier Agreement Management

Maturity Level 3 - Defined

- DAR - Decision Analysis and Resolution
- IPM - Integrated Project Management
- OPD - Organizational Process Definition
- OPF - Organizational Process Focus
- OT - Organizational Training
- PI - Product Integration
- RD - Requirements Development
- RSKM - Risk Management
- TS - Technical Solution
- VAL - Validation
- VER - Verification

Maturity Level 4 - Quantitatively Managed

- OPP - Organizational Process Performance
- QPM - Quantitative Project Management

Maturity Level 5 - Optimizing

- CAR - Causal Analysis and Resolution
- OPM - Organizational Performance Management

Maturity levels in CMMI for services

The process areas below and their maturity levels are listed for the CMMI for Services model:

Maturity Level 2 - Managed

- CM - Configuration Management
- MA - Measurement and Analysis
- PPQA - Process and Product Quality Assurance
- REQM - Requirements Management
- SAM - Supplier Agreement Management
- SD - Service Delivery
- WMC - Work Monitoring and Control
- WP - Work Planning

Maturity Level 3 - Defined

- CAM - Capacity and Availability Management
- DAR - Decision Analysis and Resolution
- IRP - Incident Resolution and Prevention
- IWM - Integrated Work Management
- OPD - Organizational Process Definition
- OPF - Organizational Process Focus
- OT - Organizational Training
- RSKM - Risk Management
- SCON - Service Continuity
- SSD - Service System Development
- SST - Service System Transition
- STSM - Strategic Service Management

Maturity Level 4 - Quantitatively Managed

- OPP - Organizational Process Performance
- QWM - Quantitative Work Management

Maturity Level 5 - Optimizing

- CAR - Causal Analysis and Resolution
- OPM - Organizational Performance Management

Maturity levels in CMMI for acquisition

The process areas below and their maturity levels are listed for the CMMI for Acquisition model:

Maturity Level 2 - Managed

- AM - Agreement Management
- ARD - Acquisition Requirements Development
- CM - Configuration Management
- MA - Measurement and Analysis
- PMC - Project Monitoring and Control
- PP - Project Planning
- PPQA - Process and Product Quality Assurance
- REQM - Requirements Management
- SSAD - Solicitation and Supplier Agreement Development

Maturity Level 3 - Defined

- ATM - Acquisition Technical Management
- AVAL - Acquisition Validation
- AVER - Acquisition Verification
- DAR - Decision Analysis and Resolution
- IPM - Integrated Project Management
- OPD - Organizational Process Definition
- OPF - Organizational Process Focus
- OT - Organizational Training
- RSKM - Risk Management

Maturity Level 4 - Quantitatively Managed

- OPP - Organizational Process Performance
- QPM - Quantitative Project Management

Maturity Level 5 - Optimizing

- CAR - Causal Analysis and Resolution
- OPM - Organizational Performance Management

CMMI models


CMMI best practices are published in documents called models, each of which addresses a different area of interest. The current release of CMMI, version 1.3, provides models for three areas of interest: development, acquisition, and services.

- CMMI for Development (CMMI-DEV), v1.3, was released in November 2010. It addresses product and service development processes.
- CMMI for Acquisition (CMMI-ACQ), v1.3, was released in November 2010. It addresses supply chain management, acquisition, and outsourcing processes in government and industry.
- CMMI for Services (CMMI-SVC), v1.3, was released in November 2010. It addresses guidance for delivering services within an organization and to external customers.

Regardless of which model an organization chooses, CMMI best practices should be adapted by an organization according to its business objectives.

Appraisal

An organization cannot be certified in CMMI; instead, an organization is appraised. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1-5) or a capability level achievement profile. Many organizations find value in measuring their progress by conducting an appraisal. Appraisals are typically conducted for one or more of the following reasons:

1. To determine how well the organization's processes compare to CMMI best practices, and to identify areas where improvement can be made
2. To inform external customers and suppliers of how well the organization's processes compare to CMMI best practices
3. To meet the contractual requirements of one or more customers

Appraisals of organizations using a CMMI model[3] must conform to the requirements defined in the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals, A, B and C, which focus on identifying improvement opportunities and comparing the organization's processes to CMMI best practices. Of these, a class A appraisal is the most formal and is the only one that can result in a level rating. Appraisal teams use a CMMI model and an ARC-conformant appraisal method to guide their evaluation of the organization and their reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan improvements for the organization.

The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal method that meets all of the ARC requirements.[4] Results of a SCAMPI appraisal may be published (if the appraised organization approves) on the SEI's CMMI Web site: Published SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504 (also known as SPICE, Software Process Improvement and Capability Determination) assessments.

Achieving CMMI compliance

The traditional approach that organizations often adopt to achieve compliance with CMMI models involves the establishment of an Engineering Process Group (EPG) and Process Action Teams (PATs).[5] This approach requires that members of the EPG and PATs be trained in the CMMI, that an informal (SCAMPI C) appraisal be performed, and that process areas be prioritized for improvement. More modern approaches, involving the deployment of commercially available, CMMI-compliant processes, can significantly reduce the time to achieve compliance. The SEI has maintained statistics on the "time to move up" for organizations adopting the earlier Software CMM and primarily using the traditional approach.[6] These statistics indicate that, since 1987, the median time to move from Level 1 to Level 2 has been 23 months, and from Level 2 to Level 3 an additional 20 months. These statistics have not been updated for the CMMI. The Software Engineering Institute's (SEI) Team Software Process methodology and the use of CMMI models can be used together to raise the maturity level.

Applications
The SEI has published data showing that 60 organizations measured increases of performance in the categories of cost, schedule, productivity, quality and customer satisfaction.[7] The median increase in performance varied between 14% (customer satisfaction) and 62% (productivity). However, the CMMI model mostly deals with what processes should be implemented, and not so much with how they can be implemented. These results do not guarantee that applying CMMI will increase performance in every organization. A small company with few resources may be less likely to benefit from CMMI; this view is supported by the process maturity profile (page 10). Of the small organizations (fewer than 25 employees), 70.5% are assessed at level 2 (Managed), while 52.8% of the organizations with 1,001-2,000 employees are rated at the highest level (5: Optimizing).

Turner & Jain (2002) argue that although there are obviously large differences between CMMI and agile methods, both approaches have much in common. They believe neither way is the 'right' way to develop software, but that there are phases in a project where one of the two is better suited. They suggest combining the different fragments of the methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum and CMMI brings more adaptability and predictability than either one alone. David J. Anderson (2005) gives hints on how to interpret CMMI in an agile manner. Other viewpoints about using CMMI and Agile development are available on the SEI website.

The combination of the project management technique earned value management (EVM) with CMMI has been described (Solomon, 2002). In a similar use of CMMI, Extreme Programming (XP), a software engineering method, has been evaluated with CMM/CMMI

(Nawrocki et al., 2002). For example, the XP requirements management approach, which relies on oral communication, was evaluated as not compliant with CMMI.

CMMI can be appraised using two different approaches: staged and continuous. The staged approach yields appraisal results as one of five maturity levels. The continuous approach yields one of six capability levels. The differences in these approaches are felt only in the appraisal; the best practices are equivalent and result in equivalent process improvement results.

Rational ClearQuest

Rational ClearQuest is an enterprise-level workflow automation tool from the Rational Software division of IBM. Commonly, ClearQuest is configured as a bug tracking system, but it can be configured to act as a CRM tool or to track a complex manufacturing process; it can also implement these functions together. IBM provides a number of predefined "schemas" for common tasks, such as software defect tracking, which can themselves be further customized if required.


Common ClearQuest Terminology



Fields

A field is the elementary receptacle for a single datum and is associated with a single Record Type. At a minimum, a field comprises a data type and a name, and is analogous to a database column. Supported data types are:
ClearQuest field data types

Data Type          Description
ATTACHMENT_LIST    Special field type that allows one or more arbitrary files to be stored.
DATE_TIME          Date and time field.
INT                Integer.
MULTILINE_STRING   Multi-line string stored as a database blob.
REFERENCE          Special field type that refers to a single instance of a Record Type (an Entity).
REFERENCE_LIST     Special field type that refers to multiple instances of a Record Type (an Entity).
SHORT_STRING       Single-line string, equivalent to a database varchar.

Hook

A hook is an arbitrary piece of code that can be configured to be executed when a particular event occurs. There are two main types of hooks, Field Hooks and Action Hooks, and ClearQuest breaks these down further to allow very fine-grained control over when hook code is run. Hooks can be used for a variety of tasks such as pre-populating fields, validating Field and Record data, and even creating new Record Types. On Unix and Linux, hook code must be written in Perl, whereas on Microsoft Windows it can be written in either Perl or VBScript.
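To illustrate the general idea of event-driven hook code, a minimal sketch might register a validation function against an event name and run it when that event fires. This is illustration only, in Python rather than the Perl or VBScript that ClearQuest requires, and the event names and registry are hypothetical, not ClearQuest's actual API.

# Illustration only: hypothetical event names and registry, not ClearQuest's API.
hooks = {}

def register_hook(event, func):
    hooks.setdefault(event, []).append(func)

def fire(event, record):
    for func in hooks.get(event, []):
        func(record)

def validate_headline(record):
    # A field-validation hook: reject a Defect record with an empty headline.
    if not record.get("headline"):
        raise ValueError("Headline must not be empty")

register_hook("Defect.validate", validate_headline)
fire("Defect.validate", {"headline": "Crash on startup"})  # passes validation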

Record Type

A Record Type is the definition of an individual entity. It encapsulates one or more Fields and is analogous to a database table. Record Types can be either "stateless" or "state-based", with the latter able to transition between a defined set of states. For example, a "Defect" Record Type used to record software bugs may move between states such as "open", "working", "review", "testing" and "closed". Note that states do not have to follow a linear progression, and very complex transitions are possible (see the sketch below).
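A state-based Record Type can be pictured as a small state machine. The Python sketch below models the example "Defect" workflow with a hypothetical transition table; the state names come from the paragraph above, but the allowed transitions are invented for illustration and are not ClearQuest's internal representation.

# Hypothetical transition table for a "Defect" record type (illustrative only).
ALLOWED_TRANSITIONS = {
    "open":    {"working"},
    "working": {"review", "open"},
    "review":  {"testing", "working"},   # review can send the defect back
    "testing": {"closed", "working"},
    "closed":  set(),
}

def transition(current_state, new_state):
    # Move a state-based record to a new state only if the workflow allows it.
    if new_state not in ALLOWED_TRANSITIONS.get(current_state, set()):
        raise ValueError(f"Illegal transition: {current_state} -> {new_state}")
    return new_state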

Schema

A Schema in ClearQuest terminology encapsulates a single workflow, including information on states, actions, data structures and hooks. Schemas are versioned in ClearQuest, and all versions are stored in a Schema Repository.

Schema Repository

A database containing one or more Schemas. The schemas are actually stored as SQL within the tables of the Schema Repository database itself.

Test Database


A database used by administrators to test changes to a production system without impacting the live system. Each Test Database is associated with a version of a Schema.

User Database


A database containing the data entered and queried by users. Each User Database is associated with a version of a Schema.

Architecture

Overview

ClearQuest is a client-server application, although technically there is no dedicated ClearQuest "back end"; rather, the ClearQuest clients utilize an existing database server.

Client

Traditionally, ClearQuest has comprised a fat client and a number of databases existing on a backend RDBMS server to store all captured data. ClearQuest supports various backend databases including Oracle, Sybase, SQL Server and IBM DB2.

More recently, IBM has introduced a web interface to ClearQuest called "CQWeb", which utilizes an application server to generate the web-based interface. The client then becomes simply a supported web browser, which alleviates the heavy requirements of the full client version. The web version offers all the functionality of the full version except for the ability to create reports. As of version 7, ClearQuest ships with an Eclipse-based client which provides almost the same features as the original Win32 Windows ClearQuest client. Charts and complex queries must still be constructed on the Windows client, even if they can be used on the other clients.

Apart from the standard ClearQuest user client, a design tool called ClearQuest Designer is also provided. This allows administrators to configure every aspect of not only the client interface but also the way in which data from specific widgets in the client are captured in the backend database. The ClearQuest Designer is currently only available for Microsoft Windows.

All clients talk to the backend databases via ODBC. Clients are available for Microsoft Windows, Linux, AIX, HP-UX and Solaris operating systems.

Databases

A standard ClearQuest installation contains a minimum of three databases:

Schema Repository

Contains a complete description of the process(es) being modelled.

User Database

The "main" database containing the data entered and queried by users. This must be tagged as a "production database" to ClearQuest.

Test Database

A database tagged as a "test database", which can be thought of as being logically paired with a production User Database. Commonly, the User and Test databases will be associated with the same Schema, but with different versions of that schema. If the production User Database is associated with version "x" of a particular Schema and the administrators are working on new functionality which they intend to roll out to the production User Database, the Test Database may be associated with Schema version "x+1" (or higher).

Customization

Full customization of the client interface, and even of the manner in which actual data is stored, is possible using the ClearQuest Designer, which provides a drag-and-drop interface for laying out the client data input Forms. The Designer allows administrators to create and modify Record Types, Fields, Lists, hook code and scripts in a simple fashion.

Making changes in the ClearQuest Designer modifies a particular version of a Schema (or creates a new version). Once administrators are satisfied with their changes, they can test them without impacting the live production system by deploying the new Schema version to a Test Database associated with the Schema in question. Once testing is complete, the new Schema version can be applied to the production User Database so that all users have access to the new schema. Note that the ClearQuest Designer performs a number of checks on the changes made by the administrator to ensure that constraints have not been violated; this minimizes the chance that erroneous changes will be rolled out to general ClearQuest users. ClearQuest 7.1 introduces the ability within the Designer to "diff" two arbitrary versions of a particular schema, allowing the administrator to easily see what changes have been made.

Reporting
ClearQuest places heavy emphasis on reports, allowing users and administrators to define their own reports and charts (called Personal Queries) or shared reports and charts (called Public Queries). The ClearQuest client provides a simple interface to allow even new users to create useful reports with the minimum of effort. Additionally, since ClearQuest stores all its data in an RDBMS, the client can be used to create arbitrarily complex SQL queries which can again be made available to other users.

Administration
Aside from the ClearQuest Designer, ClearQuest also provides a full API to allow programmatic access to ClearQuest. Like the hook code, this API is accessed via Perl on Unix and Linux, and via either Perl or VBScript on Windows. Rather than providing a Perl module for an existing Perl installation, IBM includes its own version of Perl with ClearQuest (called "cqperl", now deprecated in favour of "ratlperl") to allow administrative access to the "CQPerlExt" module. Additionally, a number of command-line tools are provided to aid administrators, such as "pdsql", an ODBC SQL client that allows various actions to be taken on the backend database, including the execution of arbitrary SQL code. Another important example is "cqload", which provides a number of facilities including the ability to import and export schema versions.

Integration
ClearQuest is designed to be used with other Rational Software testing tools, such as IBM Rational Performance Tester, IBM Rational Functional Tester and IBM Rational Manual Tester. ClearQuest also provides a robust environment for software quality assurance. Using the Rational ClearQuest Bridge in IBM Rational Team Concert, you can work with your ClearQuest records in an agile development environment. The tool can also be linked to Microsoft Project to create ordered steps for resources assigned to specific goals. ClearQuest also integrates with the IBM Rational ClearCase/UCM revision control system, which allows source code change sets to be correlated with ClearQuest records (i.e. bugs), simplifying some aspects of bug tracking and source code management. IBM sells a supplementary product called ClearQuest MultiSite that allows ClearQuest databases to be synchronized, for example between two or more company offices potentially located in different countries. ClearQuest can be configured to perform user authentication via LDAP.

HP Quality Center (QC) is a set of web-based test management software offerings from the HP Software Division of Hewlett-Packard, many of which were acquired from Mercury Interactive Corporation.[1] HP Quality Center offers software quality assurance, including requirements management, test management and business process testing for IT and application environments.[2] HP Quality Center and HP Performance Center are components of the HP Application Lifecycle Management software solution set.[3]


Products

HP Change Impact Testing module for SAP Applications


HP Change Impact Testing module for SAP Applications runs with HP Quality Management software to provide recommendations on SAP testing priorities. The software offers an automated process for identifying changes, analyzing the technical impact, evaluating the business risk, fine-tuning the test strategy and creating a test execution plan for SAP applications.[4]

HP Functional Testing software

HP Functional Testing software combines HP QuickTest Professional software and all HP QuickTest Professional add-ins into a single solution for functional testing. The software supports both manual and automated testing for graphical user interface (GUI)-based applications and non-GUI-based services.[5]

HP Quality Center software

HP Quality Center software is also part of the HP Quality Center product center for software quality assurance. It is web-based software used for global application testing.[6] HP Quality Center is available as HP Quality Center Starter Edition software and HP Quality Center Enterprise software. The Starter Edition is for entry-level software quality assurance organizations. The Enterprise version, originally called Mercury TestDirector for Quality Center, is for quality assurance organizations managing medium to large releases. In addition, for large and global organizations that manage enterprise-wide releases with hundreds of applications, HP Application Lifecycle Management 11 incorporates the capabilities of Quality Center Enterprise along with Project Planning and Tracking, Enterprise Release Management and Asset Sharing to manage functional quality, performance and security from requirements to application delivery.[7]

HP Quality Center software has several modules: management (including a release sub-module and a libraries sub-module), requirements, test resources, test lab, dashboard, defects management and business components.[8]

HP QuickTest Professional software


HP QuickTest Professional provides functional testing and regression testing automation for major software application environments. It uses the concept of keyword-driven testing to simplify test creation and maintenance. It enables testers to build functional and regression test cases by capturing flows directly from the application screens using specialized capturing technology. Test experts also have full access to the underlying test and object properties via an integrated scripting and debugging environment.[9]
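The keyword-driven approach itself is easy to illustrate outside of any particular tool: test steps are expressed as data (a keyword plus arguments), and a small driver maps each keyword to an action. The Python sketch below shows the concept only; it is not QuickTest Professional's own scripting interface (which is VBScript-based), and all names in it are made up.

# Illustrative sketch of keyword-driven testing: test steps are data
# (keyword + arguments), and a small driver maps keywords to actions.
# This shows the concept only; it is not HP QuickTest Professional's API.

def open_url(url):
    print(f"opening {url}")

def type_text(field, text):
    print(f"typing '{text}' into {field}")

def click(button):
    print(f"clicking {button}")

def verify_title(expected):
    actual = "Welcome"          # in a real test this would come from the app
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

# Keyword table: non-programmers can author tests as rows of data.
KEYWORDS = {
    "open": open_url,
    "type": type_text,
    "click": click,
    "verify_title": verify_title,
}

test_case = [
    ("open", ["https://example.com/login"]),
    ("type", ["username", "alice"]),
    ("type", ["password", "secret"]),
    ("click", ["login_button"]),
    ("verify_title", ["Welcome"]),
]

for keyword, args in test_case:
    KEYWORDS[keyword](*args)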

HP Service Test software


HP Service Test software provides automated functional testing of Service oriented architecture (SOA) services.[10][11]
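As a rough illustration of what an automated functional test of a service looks like, the sketch below calls a hypothetical HTTP endpoint and asserts on the response. It is written with Python's standard library and is not HP Service Test's own test format; the URL and payload fields are assumptions.

# Generic sketch of an automated functional test against a service endpoint.
# The URL and expected payload are hypothetical; HP Service Test itself uses
# its own test definitions, not code like this.
import json
import urllib.request

def test_get_account():
    url = "https://api.example.com/accounts/42"   # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=10) as response:
        assert response.status == 200
        body = json.load(response)
    # Functional assertions on the service contract.
    assert body["id"] == 42
    assert "balance" in body

if __name__ == "__main__":
    test_get_account()
    print("service test passed")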

HP Software as a Service for HP Quality Center


HP Software as a Service for HP Quality Center delivers hosted test management for organizations. As part of this service offering, HP installs HP Quality Center software in the HP data center, deploys the software in the customer environment, assists the customer with workflows and customizations, provides ongoing mentoring and offers 24x7 support for test management requests. All services are delivered remotely.[12]

HP Software Professional Services for HP Application Lifecycle Management


HP offers various professional services for application quality management, including HP Quality Management Center of Excellence Implementation Services. The HP Professional Services organization can provide assistance in the architecture, planning and implementation of a Quality Management Center of Excellence (CoE). It can also assist in the operation of the organizational, process and technology aspects of the CoE.[13]

HP Test Data Management software


HP Test Data Management software automates the process of collecting test data from live applications and combines masking and automated data extraction.[14]
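The core idea, extracting production-like data and masking sensitive fields before handing it to testers, can be sketched in a few lines. The example below is illustrative Python only, not HP Test Data Management itself; the field names and masking rules are assumptions.

# Minimal sketch of test-data masking: copy production-like rows while
# replacing personally identifiable fields with deterministic fake values.
# Field names are hypothetical; real tools work directly against databases.
import hashlib

def mask_email(email: str) -> str:
    # Deterministic masking keeps referential consistency across tables
    # while removing the real address.
    digest = hashlib.sha256(email.encode()).hexdigest()[:8]
    return f"user_{digest}@test.example"

def mask_row(row: dict) -> dict:
    masked = dict(row)
    masked["email"] = mask_email(row["email"])
    masked["name"] = "Test User"
    return masked

production_rows = [
    {"id": 1, "name": "Alice Smith", "email": "alice@corp.com", "balance": 120.5},
    {"id": 2, "name": "Bob Jones", "email": "bob@corp.com", "balance": 75.0},
]

test_rows = [mask_row(r) for r in production_rows]
for r in test_rows:
    print(r)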

Reports
A third-party add-on package, QCReporting, produces reports in a variety of output formats for HP Quality Center. The reporting tool provides project and cross-project reports. Reports can be produced in the following formats:

PDF
Microsoft Excel
Screen table
Adobe Flash graph
Microsoft Word
HTML

Benefits of Software Quality Assurance and Testing

Software quality assurance solutions let software developers and software quality assurance teams load-test applications under real-world peak loads. Before applications and web sites are deployed, they can be tested under a production-level load in an authentic environment to emulate real performance and response times. Testers can also record business transactions and play them back, manually or on a schedule, against a desired combination of browsers and operating systems to see how end users would experience the application.[15]

In addition to HP, other vendors offer software quality assurance solutions, including Agitar Software, Aldon, AtTask, CollabNet, Coverity, Global Knowledge, IBM, IBM Rational, Jama Software, Novalys, SERENA Software, Inc., SmartBear Software, ThoughtWorks Studios and VeraCode, Inc.

HP WinRunner software was an automated functional GUI testing tool that allowed a user to record and play back UI interactions as test scripts. As a functional test suite, it worked with HP QuickTest Professional and supported enterprise quality assurance. It captured, verified and replayed user interactions automatically, in order to identify defects and determine whether business processes worked as designed. The software implemented a proprietary Test Script Language (TSL) that allowed customization and parameterization of user input.

HP WinRunner was originally written by Mercury Interactive.[1] Mercury Interactive was subsequently acquired by Hewlett-Packard (HP) in 2006.[2] On February 15, 2008, HP Software Division announced the end of support for HP WinRunner versions 7.5, 7.6, 8.0, 8.2 and 9.2, suggesting migration to HP Functional Testing software.

HP LoadRunner software is an automated performance and load testing product from Hewlett-Packard for examining system behavior and performance while generating actual load.[1] HP acquired LoadRunner as part of its acquisition of Mercury Interactive in November 2006.[2] HP LoadRunner can emulate hundreds or thousands of concurrent users to put an application through the rigors of real-life user loads, while collecting information from key infrastructure components (web servers, database servers, etc.).[3] The results can then be analyzed in detail to explore the reasons for particular behavior. HP LoadRunner is sold as part of the HP IT Management Software category by the HP Software Division.

Consider the client-side application for an automated teller machine (ATM). Although each client is connected to a server, hundreds of ATMs may be open to the public. During peak times, such as 10 a.m. on a Monday (the start of the work week), the load may be much higher than normal. To test such situations it is not practical to have a testbed of hundreds of ATMs, so one can use an ATM simulator and a computer system with HP LoadRunner to simulate a large number of users accessing the server simultaneously. Once activities are defined, they are repeatable: after debugging a problem in the application, managers can check whether the problem persists by reproducing the same situation with the same type of user interaction. HP LoadRunner consists of several different tools: Virtual User Generator (VuGen), Controller, load generators, Analysis and Launcher.
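The underlying idea of load generation is straightforward to sketch: spawn many concurrent "virtual users", have each perform the same transaction, and collect response times. The Python sketch below illustrates the concept only; the target URL is hypothetical, and LoadRunner itself records and replays transactions through VuGen rather than through code like this.

# Tiny sketch of the load-generation idea behind tools like LoadRunner:
# spawn many concurrent "virtual users", each performing a transaction,
# and collect response times. The endpoint below is hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://example.com/atm/balance"   # hypothetical endpoint
VIRTUAL_USERS = 50

def one_transaction(user_id: int) -> float:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET, timeout=10).read()
    except Exception as exc:
        print(f"user {user_id}: error {exc}")
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    timings = list(pool.map(one_transaction, range(VIRTUAL_USERS)))

print(f"avg response time: {sum(timings) / len(timings):.3f}s")
print(f"slowest response:  {max(timings):.3f}s")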

Quality function deployment (QFD) is a method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process,[1] as described by Dr. Yoji Akao, who originally developed QFD in Japan in 1966 by combining his work in quality assurance and quality control points with the function deployment used in value engineering. QFD is designed to help planners focus on characteristics of a new or existing product or service from the viewpoints of market segments, the company, or technology-development needs. The technique yields graphs and matrices. QFD helps transform customer needs (the voice of the customer, VOC) into engineering characteristics (and appropriate test methods) for a product or service, prioritizing each product or service characteristic while simultaneously setting development targets for the product or service.

Techniques and tools based on QFD

House of Quality

The House of Quality first appeared in 1972 in the design of an oil tanker by Mitsubishi Heavy Industries.[5] Akao has reiterated numerous times that a House of Quality is not QFD; it is just an example of one tool.[6] A Flash tutorial exists showing the build process of the traditional QFD "House of Quality" (HOQ).[7] (Although this example may violate QFD principles, the basic sequence of HOQ building is illustrative.) There are also free QFD templates available that walk users through the process of creating a House of Quality.[8]

Other tools extend the analysis beyond quality to cost, technology, reliability, function, parts, manufacturing, and service deployments. In addition, the same technique can extend the method into the constituent product subsystems, configuration items, assemblies, and parts. From these detail-level components, fabrication and assembly process QFD charts can be developed to support statistical process control techniques.
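The arithmetic at the centre of a House of Quality can be sketched briefly: each engineering characteristic is scored by summing, over all customer needs, the need's importance multiplied by the strength of its relationship to that characteristic. The Python sketch below uses invented needs, characteristics and ratings purely for illustration.

# Sketch of the core House of Quality arithmetic: weight each engineering
# characteristic by how strongly it relates to each customer need, scaled
# by the need's importance. All values below are made up for illustration.
customer_needs = {          # need -> importance (1-5)
    "easy to open": 5,
    "keeps contents fresh": 4,
    "low cost": 3,
}

characteristics = ["seal strength", "lid torque", "material thickness"]

# Relationship ratings (common QFD convention: 9 = strong, 3 = moderate, 1 = weak, 0 = none)
relationships = {
    "easy to open":         {"seal strength": 3, "lid torque": 9, "material thickness": 1},
    "keeps contents fresh": {"seal strength": 9, "lid torque": 3, "material thickness": 3},
    "low cost":             {"seal strength": 1, "lid torque": 0, "material thickness": 9},
}

# Priority of each characteristic = sum over needs of importance * relationship.
priorities = {
    c: sum(importance * relationships[need][c]
           for need, importance in customer_needs.items())
    for c in characteristics
}

for c, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{c}: {score}")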

Pugh concept selection


Pugh Concept Selection can be used in coordination with QFD to select a promising product or service configuration from among listed alternatives.

Modular Function Deployment

Modular Function Deployment uses QFD to establish customer requirements and to identify important design requirements with a special emphasis on modularity.

The affinity diagram is a business tool used to organize ideas and data. It is one of the Seven Management and Planning Tools. The tool is commonly used within project management and allows large numbers of ideas stemming from brainstorming[1] to be sorted into groups for review and analysis.[2] The affinity diagram was devised by Jiro Kawakita in the 1960s[3] and is sometimes referred to as the KJ Method.

Process
1. Record each idea on cards or notes.
2. Look for ideas that seem to be related.
3. Sort cards into groups until all cards have been used.

Once the cards have been sorted into groups, the team may sort large clusters into subgroups for easier management and analysis.[4] Once completed, the affinity diagram may be used to create a cause-and-effect diagram.[5]

Ishikawa diagrams (also called fishbone diagrams, cause-and-effect diagrams or Fishikawa) are causal diagrams, created by Kaoru Ishikawa (1990), that show the causes of a specific event.[1] Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories typically include:

People: anyone involved with the process
Methods: how the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
Machines: any equipment, computers, tools, etc. required to accomplish the job
Materials: raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: data generated from the process that are used to evaluate its quality
Environment: the conditions, such as location, time, temperature, and culture, in which the process operates

Overview

Figure: Ishikawa diagram, in fishbone shape, showing factors of Equipment, Process, People, Materials, Environment and Management, all affecting the overall problem. Smaller arrows connect the sub-causes to major causes.

Ishikawa diagrams were proposed by Kaoru Ishikawa[2] in the 1960s; he pioneered quality management processes in the Kawasaki shipyards and, in the process, became one of the founding fathers of modern management.

It was first used in the 1940s, and is considered one of the seven basic tools of quality control.[3] It is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton. Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai" or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking" with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.

Causes
Causes in the diagram are often categorized, such as into the 8 Ms described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior. Causes can be derived from brainstorming sessions. These groups can then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above, but may be something unique to the application in a specific case. Causes can be traced back to root causes with the 5 Whys technique (a small illustrative sketch follows the category lists below). Typical categories are:

The 8 Ms (used in manufacturing)

Machine (technology)
Method (process)
Material (includes raw material, consumables and information)
Man power (physical work) / Mind power (brain work): Kaizens, suggestions
Measurement (inspection)
Milieu / Mother Nature (environment)
Management / Money power
Maintenance

The 8 Ps (used in the service industry)

Product = Service
Price
Place
Promotion / Entertainment
People (key person)
Process
Physical evidence
Productivity & quality

The 4 Ss (used in the service industry)

Surroundings
Suppliers
Systems
Skills
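As a small illustration of the technique, a fishbone can be captured as a mapping from category to contributing causes, and a single branch can then be traced toward a root cause with 5 Whys. The Python sketch below uses a problem, causes and answers invented purely for illustration.

# Illustrative sketch: an Ishikawa (fishbone) diagram represented as a
# mapping from category to contributing causes, plus a simple "5 Whys"
# chain tracing one cause back toward a root cause. All content is invented.
problem = "Late nightly build"

fishbone = {
    "People":      ["new team members unfamiliar with build scripts"],
    "Methods":     ["manual pre-build checklist", "no build-time budget"],
    "Machines":    ["overloaded build server"],
    "Materials":   ["large unoptimized test data sets"],
    "Measurement": ["build duration not tracked over time"],
    "Environment": ["shared server also hosts CI for other teams"],
}

print(f"Effect: {problem}")
for category, causes in fishbone.items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")

# 5 Whys applied to one branch of the diagram.
five_whys = [
    "Why is the build late?        Tests take too long.",
    "Why do tests take too long?   They load full production-sized data.",
    "Why full-sized data?          No trimmed test data set exists.",
    "Why no trimmed data set?      Nobody owns test data management.",
    "Why no owner?                 Role was never assigned (root cause).",
]
for line in five_whys:
    print(line)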

Questions to ask while building a Fishbone Diagram

People

Was the document properly interpreted?
Was the information properly disseminated?
Did the recipient understand the information?
Was the proper training to perform the task administered to the person?
Was too much judgment required to perform the task?
Were guidelines for judgment available?
Did the environment influence the actions of the individual?
Are there distractions in the workplace?
Is fatigue a mitigating factor?
How much experience does the individual have in performing this task?

Machines

Was the correct tool used?
Are files saved with the correct extension to the correct location?
Is the equipment affected by the environment?
Is the equipment being properly maintained (i.e., on a daily/weekly/monthly preventive maintenance schedule)?
Does the software or hardware need to be updated?
Does the equipment or software have the features to support our needs/usage?
Was the machine properly programmed?
Is the tooling/fixturing adequate for the job?
Does the machine have an adequate guard?
Was the equipment used within its capabilities and limitations?
Are all controls, including the emergency stop button, clearly labeled and/or color-coded or size-differentiated?
Is the equipment the right application for the given job?

Measurement

Does the gauge have a valid calibration date?
Was the proper gauge used to measure the part, process, chemical, compound, etc.?
Was a gauge capability study ever performed?
- Do measurements vary significantly from operator to operator?
- Do operators have a tough time using the prescribed gauge?
- Is the gauge fixturing adequate?
Does the gauge have proper measurement resolution?
Did the environment influence the measurements taken?

Material (includes raw material, consumables and information)

Is all needed information available and accurate?
Can information be verified or cross-checked?
Has any information changed recently / do we have a way of keeping the information up to date?
What happens if we don't have all of the information we need?
Is a Material Safety Data Sheet (MSDS) readily available?
Was the material properly tested?
Was the material substituted?
Is the supplier's process defined and controlled?
Were quality requirements adequate for part function?
Was the material contaminated?
Was the material handled properly (stored, dispensed, used and disposed of)?

Environment

Is the process affected by temperature changes over the course of a day?
Is the process affected by humidity, vibration, noise, lighting, etc.?
Does the process run in a controlled environment?
Are associates distracted by noise, uncomfortable temperatures, fluorescent lighting, etc.?

Method

Was the canister, barrel, etc. labeled properly?
Were the workers trained properly in the procedure?
Was the testing that was performed statistically significant?
Was the data tested for the true root cause?
How many "if necessary" and "approximately" phrases are found in this process?
Was this a process generated by an Integrated Product Development (IPD) team?
Did the IPD team employ Design for Environment (DFE) principles?
Has a capability study ever been performed for this process?
Is the process under Statistical Process Control (SPC)?
Are the work instructions clearly written?
Are mistake-proofing devices/techniques employed?
Are the work instructions complete?
Is the tooling adequately designed and controlled?
Is handling/packaging adequately specified?
Was the process changed?
Was the design changed?
Was a process Failure Modes and Effects Analysis (FMEA) ever performed?
Was adequate sampling done?
Are features of the process that are critical to safety clearly spelled out to the operator?

Criticism of Ishikawa Diagrams

In a discussion of the nature of a cause, it is customary to distinguish between necessary and sufficient conditions for the occurrence of an event. A necessary condition for the occurrence of a specified event is a circumstance in whose absence the event cannot occur. A sufficient condition for the occurrence of an event is a circumstance in whose presence the event must occur.[4] Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction.[5]

A focus group is a form of qualitative research in which a group of people are asked about their perceptions, opinions, beliefs and attitudes towards a product, service, concept, advertisement, idea, or packaging.[1] Questions are asked in an interactive group setting where participants are free to talk with other group members. The first focus groups were created at the Bureau of Applied Social Research in the USA by its associate director, sociologist Robert K. Merton.[2] The term itself was coined by psychologist and marketing expert Ernest Dichter.[3]


In marketing
In the world of marketing, focus groups are seen as an important tool for acquiring feedback regarding new products, as well as other topics. In particular, focus groups allow companies wishing to develop, package, name, or test-market a new product to discuss, view, and/or test the new product before it is made available to the public. This can provide invaluable information about the potential market acceptance of the product.

A focus group is an interview conducted by a trained moderator among a small group of respondents. The interview is conducted in an unstructured and natural way, where respondents are free to give views from any aspect.
