CHAPTER 1

SECTIONS
Introduction
1.1 Successful Applications of Six Sigma Management
1.2 Key Ingredients for Success with Six Sigma Management
1.3 Benefits of Six Sigma Management
1.4 Fundamentals of Improving a Product, Service, or Process
1.5 Fundamentals of Inventing-Innovating a Product, Service, or Process
1.6 What Is New about Six Sigma Management?
1.7 Six Sigma in Non-Manufacturing Industries
Summary
References
LEARNING OBJECTIVES
After reading this chapter, you will be able to:
• Present strong evidence of the value of the Six Sigma style of management.
• Understand the key ingredient for success with Six Sigma management.
• Appreciate the benefits of Six Sigma management.
• Review the fundamentals of improving a product, service, or process.
• Appreciate the DMAIC model for improvement.
• Understand the fundamentals of inventing and innovating a product, service, or process.
• Appreciate the DMADV model for invention and innovation.
• Know what is new about Six Sigma management.
• Appreciate the significance of Six Sigma management in non-manufacturing industries.
INTRODUCTION
This chapter is about getting you comfortable with Six Sigma management. We accomplish this objective by providing you with strong anecdotal evidence that Six Sigma is a very successful style of management, explaining how it must be emphatically led from the top of the organization, and, finally, introducing you to the Six Sigma models for improving and inventing/innovating products, services, or processes. This chapter could serve as a brief introduction to Six Sigma management for any stakeholder of your organization.
Organizations that have successfully applied Six Sigma management include:

Allstate Insurance
Amazon.com
American Express
Bank of America
Bankers Life and Insurance
Capital One Services
Intuit
J. P. Morgan Chase
Merrill Lynch
Microsoft
United Health Group
University of Miami

Jack Welch, Chairman emeritus and CEO of General Electric, was so committed to and impressed with Six Sigma that he stated:

"Six Sigma (GE Quality 2000) will be the biggest, the most personally rewarding, and, in the end, the most profitable undertaking in our history. ... We plunged into Six Sigma with a company-consuming vengeance just over three years ago. We have invested more than a billion dollars in the effort and the financial returns have now entered the exponential phase." (GE's letter to shareowners, February 12, 1999)
The perception and performance of an enterprise determine its value. Six Sigma management focuses on driving effective and efficient performance across the total enterprise so that the marketplace perceives it as able to deliver value-added processes, products, and services. The marketplace's perception of the value of an enterprise is indirectly measured by market share, shareholder value, and the willingness of customers to recommend its processes, products, and services to other potential customers. Benefits to stakeholders are a by-product of the organizational benefits. The benefits to stakeholders include the following:

• Shareholders receive more profit due to decreased costs and increased revenues.
• Customers are delighted with products and services.
• Employees experience higher morale and more satisfaction from joy in work.
• Suppliers enjoy a secure source of business.
[Figure 1.1: A basic process. Inputs (manpower/services, equipment, materials/goods, methods, environment) are transformed by the process; value (time, place, form) is added or created.]
Processes exist in all facets of organizations, and our understanding of them is crucial. Many people mistakenly think only of production processes. However, administration, sales, service, human resources, training, maintenance, paper flows, interdepartmental communication, and vendor relations are all processes. Importantly, relationships between people are processes. Most processes can be studied, documented, defined, improved, and innovated. An example of a generic assembly process is shown in Figure 1.2. The inputs (component parts, machines, and operators) are transformed in the process to make the outputs (assembled product).
[Figure 1.2: A generic assembly process. Inputs (component parts, machines, operators) enter the process (assembly of components into the final product, shipment to the customer), producing the output (customer receives product).]
An organization is a multiplicity of micro sub-processes, all synergistically building to the macro process of that organization. All processes have customers and suppliers; these customers and suppliers can be internal or external to the organization. A customer can be an end user or the next operation downstream. The customer does not even have to be a human; it could be a machine. A supplier could be another organization supplying sub-assemblies or services, or the prior operation upstream.

Variation in a Process

The outputs from all processes and their component parts may be measured; the measurements invariably fluctuate over time, creating a distribution of measurements. The distribution of measurements of the outputs from a process over time is called the Voice of the Process (VoP). Consider a process such as getting ready for work or for class in the morning. Some days you are busier than usual, while on other days you have less to do than usual. Your process varies from day to day to some degree. This is common variation. However, if a construction project begins on the highway you take to work or school, you might drastically alter your morning routine. This would be special variation because it would have been caused by a change external to your driving to work or school process. If the traffic patterns had remained as they were, your process would have continued on its former path of common variation.
The design and execution of a process create common causes of variation. In other words, common variation is due to the process itself. Process capability is determined by inherent common causes of variation, such as hiring, training, or supervisory practices; inadequate lighting; stress; management style; policies and procedures; or design of products or services. Employees working within the process cannot control a common cause of variation and should not be held accountable for, or penalized for, its outcomes. Process owners (management) must realize that unless a change is made in the process (which only they can make), the capability of the process will remain the same. Special causes of variation are due to events external to the usual functioning of the process. New raw materials, a drunken employee, or a new operator are examples of special causes of variation. Identifying the occurrence of special and common causes of variation is discussed extensively in References 2 and 3.

Because unit-to-unit variation decreases the customer's ability to rely on the dependability and uniformity of the outputs of a process, managers must understand how to reduce and control variation. Employees use statistical methods so that common and special causes of variation can be differentiated; special variation can be resolved and common variation can be reduced by management action, resulting in improvement and innovation of the outputs of a process.

The following fictionalized case history demonstrates the need for management to understand the difference between common and special causes of variation to take appropriate action. In this case history, an employee comes to work intoxicated. His behavior causes productivity, safety, and morale problems. You, as the supervisor, speak to the employee privately, try to resolve the situation, and send the employee home with pay. After a second instance of intoxication, you speak to the employee privately, try to resolve the problem again, and send the employee home without pay. A third instance causes you to refer the employee to an Employee Assistance Program. A fourth offense results in your terminating the employee. As a good manager, you document the employee's history to create a paper trail in case of legal action. All of the above is necessary and is considered to be good management practice.

The thought process behind the preceding managerial actions assumes that the employee is the problem. In other words, you view the employee's behavior as the special cause of variation from the desired sober state. However, this is true only if there is a statistically significant difference between the employee in question and all other employees. If the employee's behavior is part of a process that allows such behavior to exist, then the problem is not a special cause, but rather a common cause; it requires a different solution. In the latter case, the employee must be dealt with as before; but, additionally, organizational policies and procedures (processes) must be changed to prevent future incidents of intoxication. This new view requires a shift in thought. With the new thought process, if existing organizational policies and procedures allow employees with drinking problems to be present in the workplace, an intoxicated employee must be dealt with according to the original solution, and policies and procedures must be improved to prevent future incidents of such behavior on the job.

Feedback Loops

An important aspect of any process is a feedback loop.
A feedback loop relates information about outputs from any stage(s) back to other stage(s) to make an analysis of the process. Figure 1.3 depicts the feedback loop in relation to a basic process.
[Figure 1.3: A feedback loop relates information about the outputs back to the inputs and the process.]
The tools and methods discussed in this book provide vehicles for relating information about outputs to other stage(s) in the process. Decision making about processes is aided by the transmission of this information. A major purpose of quality management is to provide the information (flowing through a feedback loop) needed to take action with respect to a process.

There are three feedback loop situations: no feedback loop, special cause only feedback loop, and special and common cause feedback loop. A process that does not have a feedback loop is probably doomed to deterioration and decay due to the inability of its stakeholders to rejuvenate and improve it based on data from its outputs. An example of a process without a feedback loop is a relationship between two people (manager and subordinate, husband and wife, or buyer and seller) that contains no vehicle (feedback loop) to discuss issues and problems with the intention of establishing a better relationship in the future.

A process in which all feedback information is treated as a special cause will exhibit enormous variation in its output. An example of a process with a special cause only feedback loop could also be a relationship between two people; but in this case, the relationship deteriorates through a cycle of successive overreactions to problems that are perceived as special by both members of the relationship. In fact, the problems are probably repetitive in nature due to the structure of the relationship itself and to common causes of variation.

Finally, in a process in which feedback information is separated into common and special causes (special causes are resolved and common causes are reduced), products, services, or processes will exhibit continuous improvement of their output. For example, the relationship problems between a superior and a subordinate can be classified as due to special and/or common causes; statistical methods are used to resolve special causes and to remove common causes, thereby improving the relationship in the future.

Consider the following example. Paul is a 40-year-old, mid-level manager who is unhappy because he wants his boss to give him a promotion. He thinks about his relationship with his boss and wonders what went wrong. He determines that over a period of 10 years, he has had about 40 disagreements with his boss (one per quarter). Paul thinks about what caused each disagreement. Initially, he thought each disagreement had its own special cause. After studying the pattern of the number of disagreements per year, Paul discovered that it was a stable and predictable process of common causes of variation. Subsequently, he wrote down the reason for as many of the disagreements as he could remember (about 30). After thinking about his relationship with his boss from the perspective of common causes, he realized his disagreements with his boss were not unique events (special causes); rather, they were a repetitive process, and the reasons for the disagreements could be classified into common cause categories. He was surprised to see that the 30 reasons collapsed down to four basic reasons: poor communication of a work issue, a process failure causing work not to be completed on schedule, unexcused absence, and pay-related issues. One reason, poor communication of a work issue, accounted for 75% of all disagreements. Armed with this insight, he scheduled a discussion with his boss to find a solution to their communication problems. His boss explained that he hates the e-mails that Paul is always sending him and wished Paul would just talk to him and say what is on his mind. They resolved their problem; their relationship was greatly improved, and, eventually, Paul received his promotion.

Definition of Quality (Voice of the Customer [VoC])

Goal Post View of Quality

Quality is a concept whose definition has changed over time. In the past, quality meant conformance to valid customer requirements. That is, as long as an output fell within acceptable limits (called specification limits) around a desired value or target value (also called the nominal value, denoted by m), it was deemed conforming, good, or acceptable. We refer to this as the goal post definition of quality. The nominal value and specification limits are set based on the perceived needs and wants of customers. Specification limits are called the Voice of the Customer. Figure 1.4 shows the goal post view of losses arising from deviations from the nominal value. That is, losses are minimal until the lower specification limit (LSL) or upper specification limit (USL) is reached. Then, suddenly, losses become positive and constant, regardless of the magnitude of the deviation from the nominal value.
[Figure: loss is zero ("good, no loss") for values of the quality characteristic between the specification limits and constant ("no good, loss") outside them.]
FIGURE 1.4 Goal Post View of Losses Arising from Deviations from Nominal
An individual unit of product or service is considered to conform to a specification if it is at or inside the boundary (USL or LSL) or boundaries (USL and LSL). Individual unit specifications are made up of a nominal value and an acceptable tolerance from the nominal. The nominal value is the desired value for process performance mandated by the customer's needs and/or wants. The tolerance is an allowable departure from the nominal value, established by designers, that is deemed non-harmful to the desired functioning of the product or service. Specification limits are the boundaries created by adding and/or subtracting tolerances from the nominal value:

USL = upper specification limit = nominal + tolerance
LSL = lower specification limit = nominal − tolerance

A service example of the goal post view of quality and specification limits can be seen in a monthly accounting report that must be completed in 7 days (nominal), no earlier than 4 days (lower specification limit: not all the necessary data will be available), and no later than 10 days (upper specification limit: the due date for the report at the board meeting). Therefore, the Voice of the Customer is that the report must be completed ideally in 7 days, but no sooner than 4 days and no later than 10 days.

Another example of the goal post view of quality and specification limits is a medical device to be inserted into the chest of a patient that is 25 mm in diameter (the nominal value). A tolerance of 5 mm above or below the nominal value (25 mm) is acceptable to the surgeon performing the operation. Thus, if a medical device's diameter measures between 20 mm and 30 mm (inclusive), it is deemed conforming to specifications. It does not matter if the medical device is 21 mm or 29 mm; they are both conforming units. If a medical device's diameter measures less than 20 mm or more than 30 mm, it is deemed not conforming to specifications and is scrapped at a cost of $1,000.00 per device. Therefore, the Voice of the Customer states that the diameters of the medical devices must be between 20 mm and 30 mm, inclusive, with an ideal diameter of 25 mm.

In this section, you assumed that there is a reasonable target from which deviations on either side are possible. For situations in which there is only one specification limit (such as time to deliver mail, with a target of 0 hours and an upper specification limit of 5 days), the objective is not to exceed the upper specification and to deliver the mail on a very consistent basis (little variation) to create a highly predictable mail delivery process. In other words, whether there are two-sided specifications or a one-sided specification, the goal is to have increased consistency, implying minimal variation in performance and, thus, increased predictability and reliability of outcomes.

Continuous Improvement View of Quality

A more modern definition of quality states that: Quality is a predictable degree of uniformity and dependability, at low cost and suited to the market [see Reference 1, p. 229]. Figure 1.5 shows a more realistic loss curve in which losses begin to accumulate as soon as a quality characteristic of a product or service deviates from the nominal value. As with the goal post view of quality, once the specification limits are reached, the loss suddenly becomes positive and constant, regardless of the deviation from the nominal value beyond the specification limits. The continuous improvement view of quality was developed by Genichi Taguchi [see Reference 10, pp. 7-11]. The Taguchi Loss Function, called the loss curve in Figure 1.5, expresses the loss of deviating from the nominal within specifications: the left-hand vertical axis is loss and the horizontal axis is the measure, y, of a quality characteristic. The loss associated with deviating (y − m) units from the nominal value, m, is:
L(y) = k(y − m)²     (1.1)
where
y = the value of the quality characteristic for a particular item of product or service
m = the nominal value for the quality characteristic
k = a constant, A/d²
A = the loss (cost) of exceeding the specification limits (e.g., the cost to scrap a unit of output)
d = the allowable tolerance from the nominal value that is used to determine the specification limits
[Figure 1.5: Continuous improvement (Taguchi) view of loss. The loss curve rises as the quality characteristic y deviates from the nominal value toward the LSL and USL; the loss incurred from unit-to-unit variation before improvement (Distribution A) is greater than the loss after improvement (Distribution B).]
Under this Taguchi Loss Function, the continuous reduction of unit-to-unit variation around the nominal value is the most economical course of action, absent capital investment (more on this later). In Figure 1.5, the right-hand vertical axis is probability and the horizontal axis is the measure, y, of a quality characteristic. The distribution of output from a process before improvement is shown in Curve A, while the distribution of output after improvement is shown in Curve B. The loss incurred from unit-to-unit variation before process improvement (the lined area under the loss curve for Distribution A) is greater than the loss incurred from unit-to-unit variation after process improvement (the hatched area under the loss curve for Distribution B).

This definition of quality promotes continual reduction of unit-to-unit variation (uniformity) of output around the nominal value, absent capital investment. If capital investment is required, then an analysis must be conducted to determine whether the benefit of the reduction in variation justifies the cost. The capital investment for a process improvement should not exceed the lined area under the Taguchi Loss Function for Curve A that is not also under Curve B in Figure 1.5. This modern definition of quality implies that the Voice of the Process should take up a smaller and smaller portion of the Voice of the Customer (specifications) over time, rather than just being inside the specification limits. The logic here is that there is a loss associated with products or services that deviate from the nominal value, even when they conform to specifications.

To illustrate the continuous improvement definition of quality, return to the example of the medical device that is to be inserted into a patient's chest. Every millimeter above or below 25 mm causes a loss that can be expressed by the following Taguchi Loss Function:

L(y) = k(y − m)² = (A/d²)(y − m)² = ($1,000/5²)(y − 25 mm)² = 40(y − 25 mm)²   if 20 ≤ y ≤ 30
L(y) = $1,000   if y < 20 or y > 30
Table 1.1 shows the values of L(y) for values of the quality characteristic (diameter of the medical device).

TABLE 1.1 Loss Arising from Deviations in Diameters of the Medical Device

Diameter of the Medical Device, y (mm)    Value of Taguchi Loss Function, L(y) ($)
18                                         1,000
19                                         1,000
20                                         1,000
21                                           640
22                                           360
23                                           160
24                                            40
25                                             0
26                                            40
27                                           160
28                                           360
29                                           640
30                                         1,000
31                                         1,000
32                                         1,000
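As an aside, the entries in Table 1.1 follow directly from Equation (1.1). The short Python sketch below is our illustration (the function name taguchi_loss is ours, not the authors'); it reproduces the table using A = $1,000, d = 5 mm, and m = 25 mm.

```python
# A minimal sketch: reproduce Table 1.1 from the Taguchi Loss Function
# L(y) = k(y - m)^2 with k = A/d^2, capped at A outside the specification limits.

A = 1000.0    # loss (scrap cost) when a device exceeds a specification limit, $
d = 5.0       # allowable tolerance from the nominal value, mm
m = 25.0      # nominal diameter, mm
k = A / d**2  # = 40 $ per mm^2

def taguchi_loss(y):
    """Loss in dollars for a medical device of diameter y (mm)."""
    if m - d <= y <= m + d:          # within specifications: quadratic loss
        return k * (y - m) ** 2
    return A                         # outside specifications: device is scrapped

for y in range(18, 33):              # diameters 18 mm through 32 mm, as in Table 1.1
    print(f"{y} mm -> ${taguchi_loss(y):,.0f}")
```

Running the sketch prints $0 at the nominal 25 mm, $40 at 24 or 26 mm, and $1,000 at or beyond the specification limits, matching Table 1.1.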
Under the loss function shown in Table 1.1, it is always economical to continuously reduce the unit-to-unit variation in the diameter of the medical devices, absent capital investment. This will minimize the loss from surgically inserting the medical devices.

If a Taguchi Loss Function has only one specification limit, such as an upper specification limit, the preceding discussion applies without loss of generality. For example, if, in the opinion of customers, 30 seconds is the maximum acceptable time to answer phone calls at a customer call center and the desired time is 0 seconds, any positive deviation will result in loss to the customer. Moreover, the greater the process variation (above the nominal time of 0), the greater the loss to the customer. In the case where there is no natural nominal value (e.g., 0 seconds), the deviation between the process average and the desired time results in a process bias. The loss function can be used to show that, in these cases, the expected loss is a function of the bias squared plus the process variance (a derivation is sketched after this paragraph). This implies that the goal is to eliminate the bias (i.e., move the process average toward the desired time) and to reduce process variation. For example, customer call centers not only wish to reduce their time to answer phone calls from their customers, but they also want uniformly short answer times. Why? When management determines staffing requirements for the customer call center, it needs to have enough staff to meet its specification for time-to-answer. The more variation in the time-to-answer per call, the more unpredictable the process, and the less confidence management will have in its staffing model. Management may actually overstaff to ensure it meets its specifications. This introduces more cost to the customer call center, which is indirectly passed on to the customer.
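For readers who want the missing algebraic step, the decomposition of expected loss into process variance plus squared bias takes only a few lines. This is our sketch, using the notation of Equation (1.1) with process mean μ and standard deviation σ.

```latex
% A sketch of the bias-squared-plus-variance decomposition of expected Taguchi loss
E[L(Y)] = k\,E[(Y-m)^2]
        = k\,E\!\left[\big((Y-\mu) + (\mu-m)\big)^2\right]
        = k\left(E[(Y-\mu)^2] + 2(\mu-m)\,E[Y-\mu] + (\mu-m)^2\right)
        = k\left(\sigma^2 + \mathrm{bias}^2\right), \qquad \mathrm{bias} = \mu - m ,
```

since E[Y − μ] = 0. Reducing either the bias or the process variance therefore reduces the expected loss.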
[Figure 1.6: The normal distribution, with the horizontal axis marked in standard deviations from the mean (nominal) out to +6 sigma.]
When measuring any process, its outputs (services or products) vary in size, shape, look, feel, or any other measurable characteristic. The typical value of the output of a process is measured by a statistic called the mean, or average. The variability of the output of a process is measured by a statistic called the standard deviation. In a normal distribution, the interval created by the mean plus or minus 2 standard deviations contains 95.44% of the data values; 45,600 data values per million are outside of this interval (45,600 = 1,000,000 × 4.56%, where 4.56% = 100% − 95.44%). The interval created by the mean plus or minus 3 standard deviations contains 99.73% of the data; 2,700 data values per million are outside of this interval (2,700 = 1,000,000 × 0.27%, where 0.27% = 100% − 99.73%). The interval created by the mean plus or minus 6 standard deviations contains 99.9999998% of the data; 2 data values per billion are outside of this interval (2 = 1,000,000,000 × 0.0000002%, where 0.0000002% = 100% − 99.9999998%).

Relationship Between the VoP and the VoC

Six Sigma promotes the idea that the distribution of output for a stable, normally distributed process (the Voice of the Process) should be designed to take up no more than half of the tolerance allowed by the specification limits (the Voice of the Customer). Although processes may be designed to be at their best, you assume that processes may increase in variation over time. This increase in variation may be due to small variations in process inputs, the way the process is monitored, changing conditions, and so on. The increase in process variation is often modeled as a temporary shift in the underlying process mean. In practice, the increase in process variation has been shown to be equivalent to an average shift of 1.5 standard deviations in the originally designed and monitored process. If a process is originally designed to be twice as good as the customer demands (i.e., the specifications representing the customer requirements are 6 standard deviations from the process target), then even with such a shift in the Voice of the Process, the customer demands are likely to be met. In fact, even if the process mean shifted off target by 1.5 standard deviations, there would still be 4.5 standard deviations between the process mean and the closest specification limit, resulting in no more than 3.4 defects per million opportunities (DPMO). In the 1980s, Motorola demonstrated that, in practice, a 1.5 standard deviation shift was the equivalent increase in process variation observed for many of the processes that were benchmarked.

Figure 1.7 shows the Voice of the Process for an accounting function with an average of 7 days, a standard deviation of 1 day, and a stable normal distribution. It also shows a nominal value of 7 days, a lower specification limit of 4 days, and an upper specification limit of 10 days. The accounting function is referred to as a 3-sigma process because the process mean plus or minus 3 standard deviations equals the specification limits; in other terms, USL = μ + 3σ and LSL = μ − 3σ. This scenario will yield 2,700 defects per million opportunities, or one early or late monthly report every 30.86 years [(1/0.0027)/12].
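The per-million and per-billion figures quoted above, and the 2,700 DPMO of the 3-sigma accounting process, can be checked against the standard normal distribution. The following Python sketch is our illustration (it assumes the scipy library is available); the small difference from the text's 45,600 figure comes only from rounding the coverage to 95.44%.

```python
# A sketch: fraction of a normal distribution falling outside the mean +/- z standard deviations.
from scipy.stats import norm

for z in (2, 3, 6):
    outside = 2 * norm.sf(z)                      # two-sided tail probability
    print(f"+/-{z} sigma: {outside * 1e6:,.4f} per million outside")
# +/-2 sigma: ~45,500 per million  (the text's 45,600 uses the rounded 95.44% coverage)
# +/-3 sigma: ~2,700 per million
# +/-6 sigma: ~0.002 per million, i.e., about 2 per billion

# For the 3-sigma accounting report (2,700 DPMO), the expected years between
# early or late monthly reports is (1/0.0027)/12:
print((1 / 0.0027) / 12)                          # ~30.86 years
```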
[Normal curve centered at the nominal value of 7 days with a standard deviation of 1 day; LSL = 4 days and USL = 10 days, 3 standard deviations from the mean.]
FIGURE 1.7 Three Sigma Process with 0.0 Shift in the Mean
Figure 1.8 shows the same scenario as in Figure 1.7, but the process average shifts by 1.5 standard deviations (the process average is shifted down or up by 1.5 standard deviations [or 1.5 days] from 7.0 days to 5.5 days or 8.5 days) over time. This is not an uncommon phenomenon. The 1.5 standard deviation shift in the mean results in 66,807 defects per million opportunities at the nearest specification limit, or one early or late monthly report in 1.25 years [(1/.066807)/12], if the process average moves from 7.0 days to 5.5 days or from 7.0 days to 8.5 days. In this discussion, only the observations outside the specification nearest the average are considered.
[Normal curve shifted 1.5 standard deviations from the nominal value of 7 days (to 5.5 or 8.5 days), standard deviation 1 day; LSL = 4 days, USL = 10 days.]
FIGURE 1.8 Three Sigma Process with a 1.5-Sigma Shift in the Mean
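How the DPMO values in Figures 1.7 through 1.10 can be reproduced is sketched below. This Python fragment is our illustration (function and variable names are ours); it counts only the tail beyond the specification limit nearest to the shifted mean, the same convention the text uses.

```python
# A sketch: DPMO for a normally distributed process whose specification limits lie
# `sigma_level` standard deviations from the target, after the mean shifts toward
# one limit by `shift` standard deviations.
from scipy.stats import norm

def dpmo(sigma_level, shift=0.0):
    return norm.sf(sigma_level - shift) * 1_000_000   # tail beyond the nearest limit

print(f"{dpmo(3, 0.0):,.0f}")   # ~1,350 (x2 for both limits: the text's 2,700), Figure 1.7
print(f"{dpmo(3, 1.5):,.0f}")   # ~66,807 -> 3-sigma process with a 1.5-sigma shift, Figure 1.8
print(f"{dpmo(6, 0.0):.4f}")    # ~0.001 (about 2 defects per billion counting both limits), Figure 1.9
print(f"{dpmo(6, 1.5):.1f}")    # ~3.4   -> the Six Sigma benchmark, Figure 1.10
```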
Figure 1.9 shows the same scenario as Figure 1.7, but the Voice of the Process takes up only half the distance between the specification limits. The process mean remains the same as in Figure 1.7, but the process standard deviation has been reduced to one half-day through application of process improvement. In this case, the resulting output will exhibit two defects per billion opportunities, or one early or late monthly report every 41,666,667 years [(1/0.000000002)/12]. Figure 1.10 shows the same scenario as Figure 1.9, but the process average shifts by 1.5 standard deviations (the process average is shifted down or up by 1.5 standard deviations [or 0.75 days = 1.5 × 0.5 days] from 7.0 days to 6.25 days or 7.75 days) over time. The 1.5 standard deviation shift in the mean results in 3.4 defects per million opportunities at the nearest specification limit, or one early or late monthly report every 24,510 years [(1/0.0000034)/12]. This is the definition of the 6-sigma level of quality.

Another Look at the 1.5-Sigma Shift in the Mean

The engineer responsible for creating the concept of Six Sigma at Motorola was Bill Smith. Bill Smith indicated that product failures in the field were statistically related to the number of product reworks and the defect rates observed in production. Therefore, the more defect- and rework-free a product was during production, the fewer field failures and customer complaints there were likely to be. Additionally, Motorola had a very strong emphasis on total cycle time reduction. A business process that takes more steps to complete its cycle increases the chance for changes and unforeseen events, and hence the opportunity for defects. Therefore, reducing cycle time is best accomplished by streamlining the process, removing non-value-added effort, and, as a result, reducing the opportunities for making mistakes (defects). What a concept! Reducing cycle time by simplifying a process will result in fewer defects, lower remediation/warranty/service costs, and, ultimately, increased customer satisfaction with the results.

This last concept is not new to those who are familiar with the Toyota production system, the Just-In-Time philosophy, or what many call Lean Thinking. Six Sigma practitioners concern themselves with reducing the defect or failure rate, while Lean practitioners concern themselves with streamlining processes and reducing cycle time. Defect reduction and lean thinking are flip sides of the same coin. The integrated strategy of considering both sides at the same time was the basis of the original work in Six Sigma.
[Normal curve centered at the nominal value of 7 days with a standard deviation of 0.5 day; LSL = 4 days and USL = 10 days, 6 standard deviations from the mean.]
FIGURE 1.9 Six Sigma Process with a 0.0 Shift in the Mean
[Normal curve shifted 1.5 standard deviations (0.75 day) from the nominal value of 7 days, standard deviation 0.5 day; LSL = 4 days, USL = 10 days.]
FIGURE 1.10 Six Sigma Process with 1.5-Sigma Shift in the Mean
Some proof of this was gained in the period from 1981 to 1986, when Bob Galvin (CEO of Motorola) set a goal of a tenfold improvement in defect rates over those five years. During those five years, positive results were demonstrated in field failures and warranty costs. However, some of Motorola's key competitors improved at a faster rate. In 1987, Motorola indicated it would increase the rate of improvement to tenfold every two years rather than every five years. What was the target? The target was called Six Sigma quality (which was defined to be 3.4 defects per million opportunities) by 1992.

Of course, the key question was whether there was a tradeoff between reducing defect rates and implementation cost. Bill Smith and others were not advocating increasing costs by increasing inspection, but rather that engineers design products and production processes so that there would be little chance for mistakes/defects during production and customer usage. The focus was on the upstream X variables that would be indicators of future performance and of the process problems that were observed. The Y variables were the downstream defect rates, rework rates, and field failures that were observed and measured. Motorola's strict focus on the rate of improvement challenged engineering, supply management, and production to develop materials, production equipment, and products that were more robust to variation and, as a result, less sensitive to processing variation. Hence, the focus was on understanding the X variables.

What is interesting about the preceding two paragraphs is that the initial focus of Statistical Process Control (SPC) was often limited to monitoring Y variables or average/target values of process variables. Six Sigma did not really change the tools; instead, it focused the tools on their usage upstream on X variables, in particular on understanding the relationship of the variation in the X variables to the variation of the Y variables, and, finally, on using the tools in such a sequence as to uncover those relationships and be able to improve and control the results of the Y variables. Studies did show that Bill Smith's insights were valid: defects per million opportunities (DPMO) and defects per unit (DPU) measures calculated in production facilities did predict field performance, customer complaints, and warranty costs. Therefore, DPMO and DPU became metrics of emphasis at Motorola.

Around the same time that these studies were done, employees at Motorola gathered empirical evidence that even when the Y variables were in statistical control, the X variables might not be in statistical control. Additionally, SPC as practiced in many operations was more of a monitoring method on the Y variables, with the only out-of-control indicator being a point beyond a control limit. Consequently, run tests1 were not used as indicators of out-of-control behavior. Empirical evidence indicated that a process could shift within the 3-sigma control limits by as much as 2 standard deviations and stay there for some run of points before a point outside the 3 standard deviation control limits was observed. In fact, if a process with stable variation shifts 1.5 standard deviations, there is an average run of about 16 points before one point falls beyond the 3 standard deviation control limits.
1 A run is a consecutive series of similar items that are preceded or followed by items of a different type. For example, in tossing a coin eight times, the sequence Head Head Head Head Tail Tail Tail Tail contains two runsa run of four heads followed by a run of four tails. A runs test is a statistical test that determines whether a particular sequence has a random pattern.
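Where does an average run of roughly 16 points come from? The average run length (ARL) before a single point exceeds a 3-sigma control limit is the reciprocal of the per-point probability of falling outside a limit. The sketch below is our illustration (it assumes scipy); for a 1.5-standard-deviation shift it gives about 15, in the same range as the figure quoted above.

```python
# A sketch: average run length (ARL) until one point falls outside the 3-sigma
# control limits, after the process mean shifts by `shift` standard deviations.
from scipy.stats import norm

def average_run_length(shift, limit=3.0):
    p_outside = norm.sf(limit - shift) + norm.cdf(-limit - shift)  # beyond either limit
    return 1.0 / p_outside

print(round(average_run_length(1.5)))   # ~15 points on average for a 1.5-sigma shift
print(round(average_run_length(2.0)))   # ~6 points for a 2-sigma shift
```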
In addition to DPMO and DPU measures, Motorola was also concerned about upstream X variables that could be measured (rather than attribute variables). To control measurement data, a focus on means (i.e., targets) and spreads (i.e., standard deviations) was needed. If the Voice of the Process (VoP) is equal to the Voice of the Customer (VoC), the process's mean output plus or minus 3 standard deviations equals the specification limits; about 0.27% of the process output is defective, given a normal distribution. If SPC were utilized to track that variable and the mean shifted halfway to the control limits (this assumes an individual and moving range type control chart, which is discussed in References 2 and 3), then there could be an average run of 16 observations before a point beyond a control limit would be noted. Another way of saying this is that there could be an increase in DPMO from 2,700 to 66,807 with no points beyond a control limit. If various run tests were conducted, the shift in the mean would be detected; but in practice, production personnel rarely shut down a process for failure of a run test if no points are outside the control limits.

So, why does Six Sigma often reference a 1.5 standard deviation shift in the mean of a process? Studies of various production lines at Motorola Corporation showed that even in a state of control, where the only out-of-control condition checked was observations outside the 3 standard deviation control limits, there often would be uncontrolled shifts of between 1 and 2 standard deviations. For example, for some manual assembly processes, the shift averaged between 1.2 and 1.8 standard deviations at the time an out-of-control observation was recorded. Of course, for automated processes, this degree of shift is frequently not allowed.

A statistical purist would argue that the genesis of the sigma metric is flawed because it is based on a shift factor. The engineers viewed the metric as a worst-case DPMO for a process because they assumed that any shift factor significantly larger than 1.5 would be caught by the common usage of statistical process control (a point beyond a 3-sigma control limit). If there is a shift of less than 1.5 sigma, that is all to the good, since the DPMO is lower. From a practical standpoint, Six Sigma seems to be an effective form of management. Moreover, the argument against the 1.5-sigma shift in the mean seems similar to the claim that a yard is not really three feet. Some say a yard was based on the distance from the tip of the nose to the tip of the middle finger on the outstretched arm of an average male. What is an average male? Is that similar to knowing an average shift? It turns out that eventually everyone accepted the definition that a yard is equal to three feet, and few remember the original definition. At Motorola, the story is similar in that only a few people remember the original reason for the definition of the sigma levels, and it is accepted that the DPMO levels can be equated with sigma levels. Interestingly, many of those who continue to argue about the derivation of sigma levels are those who have learned about Six Sigma in the last seven years. It seems that they are trying to understand the legend of Six Sigma rather than seeing the upside and benefit. We can continue to argue about this, but practitioners are continuing to improve their organizations regardless of any technical flaws in the derivations of the methods.

Does Six Sigma Matter?
The difference between a 3-sigma process (66,807 defects per million opportunities at the nearest specification limit) and a 6-sigma process (3.4 defects per million opportunities at the nearest specification limit) can be seen in a service with 20 component steps. If each of the 20 component steps has a quality level of 66,807 defects per million opportunities, and assuming no step allows rework, then the likelihood of a defect at each step is 0.066807 (66,807/1,000,000), or 6.68 percent. By subtraction, the likelihood of a defect-free step is 0.933193 (1.0 − 0.066807), or 93.3 percent. Consequently, the likelihood of delivering a defect-free final service is 25.08 percent. This is computed by multiplying 0.933193 by itself 20 times ([1.0 − 0.066807]^20 = 0.933193^20 = 0.2508 = 25.08%). However, if each of the 20 component steps has a quality level of 3.4 defects per million opportunities (0.0000034), then the likelihood of delivering a defect-free final service is 99.9932% ([1.0 − 0.0000034]^20 = 0.9999966^20 = 0.999932 = 99.9932%). A 3-sigma process generates 25.08% defect-free services, while a 6-sigma process generates 99.9932% defect-free services. The difference between the 3-sigma process and the 6-sigma process is dramatic enough to conclude that a 6-sigma level of performance matters, especially for more complex processes with a greater number of steps or components.
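The 25.08% versus 99.9932% comparison is a rolled throughput yield calculation: the per-step yields are simply multiplied together. The Python sketch below is our illustration (names are ours).

```python
# A sketch: probability of a defect-free service when each of n sequential steps
# has a given DPMO and no rework is allowed (rolled throughput yield).
def defect_free_probability(dpmo, n_steps):
    per_step_yield = 1.0 - dpmo / 1_000_000
    return per_step_yield ** n_steps

print(f"{defect_free_probability(66_807, 20):.2%}")   # ~25.08%  (twenty 3-sigma steps)
print(f"{defect_free_probability(3.4, 20):.4%}")      # ~99.9932% (twenty 6-sigma steps)
```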
[Figure 1.11: A flowchart (Start, decision points, Stop) showing how the DMAIC model transforms the old process, whose Voice of the Process fills the specification limits (LSL and USL at plus or minus 3 sigma), into the new process, whose Voice of the Process takes up only half of the specifications (plus or minus 6 sigma).]
FIGURE 1.11 Relationship Between the VoC, the VoP, and the DMAIC Model
Invention

Definition

Invention is the process of creating new products, services, or processes that are usable in accomplishing human objectives that were formerly difficult or impossible. The first club used by a caveman to kill animals to feed his family is an example of an invention.

Inventions and History

Some of the most significant inventions were created before recorded history; for example, crude tools, weapons, speech, cultivation of plants, domestication of animals, building techniques, production and control of fire, pottery, political systems, and the wheel. The period of recorded history began with the invention of cuneiform script by the Sumerians of Mesopotamia in about 3000 B.C.
Innovation
Definition

Innovation is the process by which an invention is put to use for the betterment of humanity. Thomas Edison was both an inventor (of the electric light bulb) and an innovator, because he was critical to the electrification of New York City and the establishment of the General Electric Company.

Reasons for Innovation

Innovations are created for five reasons, discussed next.

Reason 1: Build competitive advantage in target markets, and increase market share by offering products, services, or processes that are preferred above those of competitors. Innovation offers an organization the opportunity to take a step ahead of its competitors for the customer's positive attention and resources.

Reason 2: Increase profitability by introducing more profitable lines. Innovation aids organizations in creating products, services, and processes that yield higher profit margins than those of their competitors.

Reason 3: Build a reputation for technological excellence by introducing state-of-the-art products. Innovation provides an opportunity for organizations to enhance their image as a provider on the frontier of their core discipline.

Reason 4: Counteract the effects of downsizing on stakeholders by creating new employment opportunities through upsizing [see Reference 7]. Downsizing is a term used to describe an organizational layoff policy whose purpose is to reduce costs. Upsizing is a term used to describe an organizational product creation focus whose purpose is to create new employment opportunities to mitigate the uncertainties caused by downsizing. Innovation provides an organization with the opportunity to create products, services, and processes, and hence, to upsize.
Reason 5: Create exportable products for developing countries. Frequently, developing countries experience great economic difficulties due to unstable governments, an uneducated work force, variable and poor-quality raw materials, and rampant inflation. In the face of such unfavorable conditions for economic health, developing countries need to export products that enjoy a non-competitive marketplace. They need protection from industrialized countries that do not suffer from the same economic woes. This can be accomplished by creating, producing, and selling innovative products, services, and processes that enjoy the legal protection of patents, copyrights, and trademarks.
Eight Methods for Invention and Innovation

There are eight methods available for creating inventions and innovations.

Method 1: Exploit core technologies; for example, using excellence in electrical engineering and physics to create new electronic products.

Method 2: Capitalize on particularly excellent common operating elements; for example, using unusually excellent repair service or unusually short delivery times.

Method 3: Pray for an inspiration; for example, luckily thinking to put an eraser on the end of a pencil.

Method 4: Conduct scientific research; for example, studying computer science to develop a laptop computer.

Method 5: Use expertise in specialized functional areas; for example, using excellence in quality control to create dependability and reliability (e.g., McDonald's or Marriott).

Method 6: Identify the unmet needs of known customers; for example, conducting market research to provide faster delivery of office supplies or longer battery life for laptop computers.

Method 7: Study lead users to identify the unknown needs of customers. Lead users are consumers of a product, service, or process who are months or years ahead of regular users in their use of the item and who would benefit greatly from the innovation. For example, a lead user of a hair dryer may attach a portable battery pack and use it as a body warmer at football games played in cold weather. In this example, studying lead users resulted in the invention of a personal body warmer.

Method 8: Study the unstated (or unknown) problems of existing users through observational studies. Ideas to surpass customers' unknown needs and wants do not come from direct queries to customers, but rather from the manufacturer's observations of the problems customers encounter while using products and services. An example of a product created using this method is a camera with automatic film loading [see Reference 8].
In 1974, the camera market was saturated with cameras that satisfied customers' current needs; cameras were reliable, relatively inexpensive to use, and produced good pictures. This created a nightmare for the camera industry. Consequently, Konica decided to ask consumers: "What else would you like in a camera?" Consumers replied that they were satisfied with their cameras. Unfortunately, asking consumers what else they would like in a camera did not yield the information Konica needed to create a breakthrough. In response to this situation, Konica studied negatives at film-processing laboratories and discovered that the first few pictures on many rolls were overexposed, indicating that users had difficulty loading film into their cameras. This presented an opportunity to innovate camera technology. The customer could not have been expected to think of this innovation. In response to this analysis, Konica developed the automatic-loading camera. This is an excellent example of the eighth method for innovating current products, services, or processes.

Simple Examples of the Invention and Innovation Process

All inventions and innovations do not have to be generated from complex, theoretical, and radical ideas. Sometimes, they come from the simplest of ideas. For example, who ever thought it was possible to create an improved corkscrew? Yet, in the last decade or two, a new corkscrew was invented. This corkscrew has wings, or handles, that allow it to pull the cork out of the bottle. Another example of a product that you might not think could be improved is the teabag. Surprise! Lipton invented a teabag with two strings that allow the user to squeeze the last drops of tea out of the bag without burning his or her fingers. Ingenious!
minimize variability in CTQs and maximize customer satisfaction. The third principle is to design a process capable of delivering the quantity and quality of products or services desired by customers in a timely fashion. The fourth principle is to include suppliers early in the design process. These four principles are the bedrock of the DFSS method.
• reviewing designs with respect to all potential users and possible uses and, if necessary, improving designs;
• establishing appropriate control and monitoring systems to ensure designs meet and maintain goals throughout production or ongoing service life; and
• transferring the design to the process owner with a functioning control plan.
General Electric, for example, has been very successful utilizing Six Sigma theory and methods in its non-manufacturing functions, such as GE Capital. Additionally, service organizations such as American Express, HSBC, and the University of Miami have successfully used Six Sigma management. Granted, service transactions are frequently one-of-a-kind transactions that take place on demand (zero inventories), in the presence of the customer (zero time between production and use of the service), and with subjective service quality characteristics. Still, Six Sigma is appropriate in this type of environment. For example, a subjective quality characteristic in a restaurant is how patrons feel about the taste of cheesecake. One way to measure this is to ask patrons how they feel about the taste of the cheesecake on a 1 to 5 scale, where 1 = very dissatisfied, 3 = neutral, and 5 = very satisfied. This type of measurement is subject to inaccuracies caused by factors such as embarrassment at telling the truth. Another way to determine how patrons feel about the taste of the cheesecake is to instruct a busboy to collect the first cheesecake dessert eaten by a patron during each of the six evening hours each day and to weigh the cheesecake left on the plate. All cheesecake slices are 4 ounces, so 4 ounces minus the weight of cheesecake returned is the weight of cheesecake eaten by the patron. With the preceding information, the chef can estimate the average ounces and the range (maximum − minimum) of cheesecake eaten by patrons each day. Consequently, the chef can modify the recipe for preparing the cheesecake and determine from the statistics whether the patrons eat more cheesecake (higher average) with less variation (smaller range) per day. If they do, the chef assumes that the patrons like the taste of the cheesecake better with the new recipe than with the old recipe.
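As a minimal sketch of the chef's daily calculation (our illustration, with made-up plate weights), the average and range of cheesecake eaten could be computed as follows.

```python
# A sketch: daily average and range (maximum - minimum) of ounces of cheesecake eaten,
# from the weight of cheesecake left on the six sampled plates (hypothetical data).
SLICE_OUNCES = 4.0
left_on_plate = [0.5, 1.2, 0.0, 2.1, 0.8, 0.3]   # ounces returned, one plate per evening hour

eaten = [SLICE_OUNCES - left for left in left_on_plate]
average_eaten = sum(eaten) / len(eaten)
range_eaten = max(eaten) - min(eaten)

print(f"average eaten: {average_eaten:.2f} oz, range: {range_eaten:.2f} oz")
```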
SUMMARY
A process is a collection of interacting components that transform inputs into outputs toward a common aim, called a mission statement. Two kinds of variation exist in a process: special variation and common variation. Common causes of variation are due to the process itself. Special causes of variation are due to events external to the usual functioning of the process. A feedback loop relates information about outputs from any stage(s) back to other stage(s) so that an analysis of the process can be made. The goal post view of quality states that as long as a unit of output falls within acceptable limits (called specification limits) around a desired value (called the nominal or target value), the process is deemed conforming, and there is minimum cost. The continuous improvement view of quality states that quality is a predictable degree of uniformity and dependability, at low cost and suited to the market. This is a more realistic view in that losses begin to occur as soon as a quality characteristic of a product or service deviates from the nominal value, even within specification limits. The pursuit of quality requires that organizations globally optimize their process of interdependent stakeholders. Six Sigma management is the relentless and rigorous pursuit of the reduction of variation in all critical processes to achieve continuous and breakthrough improvements that impact the bottom line and top line of the organization, and to increase customer satisfaction. Another common definition of Six Sigma management is that it is an organizational initiative designed to reduce defects tenfold while simultaneously reducing processing time by 50% every two years.
Finally, the objective of Six Sigma management is to create processes that are twice as good as the customer demands, so that even if the process mean shifts by as much as 1.5 standard deviations, the process will not generate more than 3.4 defects per million opportunities. The methodology utilized in Six Sigma management to create breakthrough improvement in existing processes is the DMAIC model. DMAIC is an acronym for Define, Measure, Analyze, Improve, and Control. The methodology used to create new products, services, or processes, or to substantially innovate broken processes, is Design for Six Sigma (DFSS). DFSS uses the DMADV model. DMADV is an acronym for Define, Measure, Analyze, Design, and Verify/Validate. The key ingredient for a successful Six Sigma management process is the commitment of top management. Six Sigma management is equally applicable in manufacturing and service industries, education, and government.
REFERENCES
1. W. E. Deming, Quality, Productivity, and Competitive Position (Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1982).
2. H. Gitlow, A. Oppenheim, R. Oppenheim, and D. Levine, Quality Management, 3rd ed. (New York: McGraw-Hill/Irwin, 2005).
3. H. Gitlow and D. Levine, Six Sigma for Green Belts and Champions (Upper Saddle River, NJ: Financial Times-Prentice Hall, 2005).
4. G. Hahn, N. Dogannaksoy, and R. Hoerl, "The Evolution of Six Sigma," Quality Engineering, 12, 2000, pp. 317-326.
5. M. Harry and R. Schroeder, Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations (New York: Currency/Doubleday, 2000), p. 153.
6. K. Ishikawa and D. Lu, What Is Total Quality Control? The Japanese Way (Englewood Cliffs, NJ: Prentice-Hall, 1985).
7. N. Kano, "Quality in the Year 2000: Downsizing Through Reengineering and Upsizing Through Attractive Quality Creation," ASQC Annual Quality Conference (Las Vegas, NV: May 24, 1994).
8. N. Kano and H. Gitlow, The Management of Innovation: Predicting the Products and Services of Tomorrow, The Kano Program (Miami, FL: 1995).
9. R. D. Snee, "The Impact of Six Sigma on Quality," Quality Engineering, 12, 2000, pp. ix-xiv.
10. G. Taguchi and Y. Wu, Introduction to Off-Line Quality Control (Nagoya, Japan: Central Japan Quality Control Association, 1980).