Fig 17 Second revision control charts for Table 2 data with two more sample deletions (samples 1 and 11, both of which exceed UCL in Fig 16) resulting from workpieces produced prior to the machine being properly warmed up. (a) x̄-chart. (b) R-chart. Data now have k = 16, n = 5
Importance of Using Both x̄ and R Control Charts. This example points to the importance of maintaining both x̄ and R control charts and the significance of first focusing attention on the R-chart and establishing its stability. Initially, no points fell outside the x̄-chart control limits, and one could be led to believe that this indicates that the process mean exhibits good statistical control. However, the fact that the R-chart was initially not in control caused the limits on the x̄-chart to be somewhat wider because of two inordinately large R values. Once these special causes of variability were removed, the limits on the x̄-chart became narrower, and two x̄ values now fall outside these new limits. Special causes were present in the data but initially were not recognizable because of the excess variability seen in the R-chart.
This example also points strongly to the need to have 25 or more samples before initiating control charts. In this case, once special causes were removed, only 16 subgroups remained to construct the charts. This is simply not enough data.
Importance of Rational Sampling
Perhaps the most crucial issue to the successful use of the Shewhart control chart concept is the definition and collection of the samples or subgroups. This section discusses the concept of rational sampling, sample size, sampling frequency, and sample collection methods and reviews some classic misapplications of rational sampling. A number of practical examples of subgroup definition and selection are also presented to aid the reader in understanding and implementing this central aspect of the control chart concept.
Concept of Rational Sampling. Rational subgroups or samples are collections of individual measurements whose variation is attributable only to one unique, constant system of common causes. In the development and continuing use of control charts, subgroups or samples should be chosen in a way that provides the maximum opportunity for the measurements within each subgroup to be alike and the maximum chance for the subgroups to differ from one another if special causes arise between subgroups. Figure 18 illustrates the notion of a rational sample. Within the sample or subgroup, only common cause variation should be present; special causes/sporadic problems should arise between the selection of one rational sample and another.
Fig 18 Graphical depiction of a rational subgroup illustrating effect of special causes on mean. (a) Unshifted
• Subgroups should ensure the presence of a normal distribution for the sample means. In general, the larger the sample size, the better the distribution is represented by the normal curve. In practice, sample sizes of four or more ensure a good approximation to normality
• Subgroups should ensure good sensitivity to the detection of assignable causes. The larger the sample size, the more likely it is that a shift of a given magnitude will be detected
When the above factors are taken into consideration, a sample/subgroup size of four to six is likely to emerge. Five is the most commonly used number because of the relative ease of further computation.
Sampling Frequency. The question of how frequently samples should be collected is one that requires careful thought. In many applications of x̄ and R control charts, samples are selected too infrequently to be of much use in identifying and solving problems. Some considerations in sample frequency determination are the following:
• If the process under study has not been charted before and appears to exhibit somewhat erratic behavior, samples should be taken quite frequently to increase the opportunity to quickly identify improvement opportunities. As the process exhibits less and less erratic behavior, the sample interval can be lengthened
• It is important to identify and consider the frequency with which occurrences are taking place in the process. These might include, for example, ambient condition fluctuations, raw material changes, and process adjustments such as tool changes or wheel dressings. If the opportunity for special causes to occur over a 15-min period is good, sampling twice a shift is likely to be of little value
• Although it is dangerous to overemphasize the cost of sampling in the short term, clearly it cannot be neglected
Common Pitfalls in Subgroup Selection. In many situations, it is inviting to combine the output of several parallel and assumed-to-be-identical machines into a single sample to be used in maintaining a single control chart for the process. Two variations of this approach can be particularly troublesome: stratification and mixing.
Stratification of the Sample. Here each machine contributes equally to the composition of the sample. For example, one measurement each from four parallel machines yields a sample/subgroup of n = 4, as seen in Fig 19. In this case, there will be a tremendous opportunity for special causes (true differences among the machines) to occur within subgroups.
Fig 19 Block diagram depicting a stratified sample selection
When serious problems do arise, for example, for one or more of the machines, they will be very difficult to detect because of the use of stratified samples. This problem can be detected, however, because of the unusual nature of the x̄-chart pattern (recall the previous pattern analysis) and can be rectified provided the concepts of rational sampling are understood.
The R-charts developed from such data will usually show very good control. The corresponding x̄ control chart will show very wide limits relative to the plotted x̄ values, and their control will therefore appear almost too good. The wide limits result from the fact that the variability within subgroups is likely to be subject to more than merely common causes (Fig 20).
Fig 20 Typical control charts obtained for a stratified sample selection. (a) x̄-chart. (b) R-chart
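To see the stratification symptom numerically, the following simulation (an illustrative sketch, not from the original example; the machine offsets, spread, and subgroup count are hypothetical) draws one measurement per machine into each subgroup and computes the resulting x̄-chart limits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: four parallel machines with different true means but a
# common within-machine (common cause) spread; each subgroup takes exactly
# one measurement from each machine, as in Fig 19.
machine_means = [10.0, 10.6, 9.4, 10.2]
sigma = 0.1
k = 50

subgroups = np.array([[rng.normal(m, sigma) for m in machine_means]
                      for _ in range(k)])
xbar = subgroups.mean(axis=1)
R = subgroups.max(axis=1) - subgroups.min(axis=1)   # within-subgroup ranges

A2 = 0.729                                          # Shewhart factor for n = 4
Rbar = R.mean()
print("x-bar limits:", xbar.mean() - A2 * Rbar, xbar.mean() + A2 * Rbar)
print("actual spread of plotted x-bar values:", xbar.std(ddof=1))
```

Because R̄ absorbs the fixed machine-to-machine differences, the computed limits come out far wider than the actual scatter of the plotted x̄ values, which is exactly the "almost too good" control described above.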
Mixing Production From Several Machines. Often it is inviting to combine the output of several parallel machines/lines into a single stream of well-mixed product that is then sampled for the purposes of maintaining control charts. This is illustrated in Fig 21.
Fig 21 Block diagram of sampling from a mixture
If every sample has exactly one data point from each machine, the result would be the same as that of stratified sampling. If the sample size is smaller than the number of machines with different means, or if most samples do not include data from all machines, the within-sample variability will be too low, and the between-sample differences in the means will tend to be large. Thus, the x̄-chart would give the appearance that the x̄ values are too far away from the centerline.
Zone Rules for Control Chart Analysis
Special causes often produce unnatural patterns that are not as clear cut as points beyond the control limits or obvious regular patterns. Therefore, a more rigorous pattern analysis should be conducted. Several useful tests for the presence of unnatural patterns (special causes) can be performed by dividing the distance between the upper and lower control limits into zones defined by 1σ, 2σ, and 3σ boundaries, as shown in Fig 22. Such zones are useful because the statistical distribution of x̄ follows a very predictable pattern, the normal distribution; therefore, certain proportions of the points are expected to fall within the ±1σ boundary, between 1σ and 2σ, and so on.
The following sections discuss eight tests that can be applied to the interpretation of x̄ and R control charts. Not all of these tests use the zones just described, but it is useful to discuss all of these rules/tests together. These tests provide the basis for the statistical signals that indicate that the process has undergone a change in its mean level, variability level, or both. Some of the tests are based specifically on the zones defined in Fig 22 and apply only to the interpretation of x̄-chart patterns; some of the tests apply to both charts. Unless specifically identified to the contrary, the tests/rules apply to the consideration of data to one side of the centerline only.
When a sequence of points on the chart violates one of the rules, the last point in the sequence is circled. This signifies that the evidence is now sufficient to suggest that a special cause has occurred. The issue of when that special cause actually occurred is another matter; a logical estimate of the time of occurrence is the beginning of the sequence in question. This is the interpretation that will be used here, although some judgment and latitude should be given. Figure 23 illustrates the following patterns:
• Test 1 (extreme points): The existence of a single point beyond zone A signals the presence of an out-of-control condition (Fig 23a)
• Test 2 (2 out of 3 points in zone A or beyond): The existence of 2 out of any 3 successive points in zone A or beyond signals the presence of an out-of-control condition (Fig 23b)
• Test 3 (4 out of 5 points in zone B or beyond): A situation in which there are 4 out of 5 successive points in zone B or beyond signals the presence of an out-of-control condition (Fig 23c)
• Test 4 (runs above or below the centerline): Long runs (7 or more successive points) either strictly above or strictly below the centerline signal an out-of-control condition; this rule applies to both the x̄ and R control charts (Fig 23d)
• Test 5 (trend identification): When 6 successive points on either the x̄ or the R control chart show a continuing increase or decrease, a systematic trend in the process is signaled (Fig 23e)
• Test 6 (oscillation): When 14 successive points oscillate up and down on either the x̄ or R control chart, a systematic pattern in the process is signaled (Fig 23f)
• Test 7 (avoidance of zone C test): When 8 successive points, occurring on either side of the centerline, avoid zone C, an out-of-control condition is signaled. This could also be the pattern due to mixed sampling (discussed earlier), or it could be signaling the presence of an over-control situation at the process (Fig 23g)
• Test 8 (run in zone C test): When 15 successive points on the x̄-chart fall in zone C only, to either side of the centerline, an out-of-control condition is signaled; such a condition can arise from stratified sampling or from a change (decrease) in process variability (Fig 23h)
Fig 22 Control chart zones to aid chart interpretation
The above tests are to be applied jointly in interpreting the charts. Several rules may be simultaneously broken for a given data point, and that point may therefore be circled more than once, as shown in Fig 24.
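These zone tests lend themselves to direct implementation. The sketch below (illustrative code, not from the original article) encodes tests 1, 2, 4, and 5; the remaining tests follow the same sliding-window pattern. It assumes the plotted x̄ values have first been expressed in units of an estimate of the standard deviation of x̄, such as R̄/(d2√n):

```python
import numpy as np

def zone_flags(xbar, center, sigma_xbar):
    # Express each point in sigma units from the centerline.
    z = (np.asarray(xbar, dtype=float) - center) / sigma_xbar
    flags = {1: [], 2: [], 4: [], 5: []}
    for i in range(len(z)):
        # Test 1: a single point beyond zone A (outside 3 sigma).
        if abs(z[i]) > 3:
            flags[1].append(i)
        # Test 2: 2 of 3 successive points in zone A or beyond, same side.
        w = z[max(0, i - 2):i + 1]
        if len(w) == 3 and ((w > 2).sum() >= 2 or (w < -2).sum() >= 2):
            flags[2].append(i)
        # Test 4: run of 7 successive points strictly on one side.
        w = z[max(0, i - 6):i + 1]
        if len(w) == 7 and (all(w > 0) or all(w < 0)):
            flags[4].append(i)
        # Test 5: 6 successive points continuing up or continuing down.
        w = z[max(0, i - 5):i + 1]
        if len(w) == 6:
            d = np.diff(w)
            if all(d > 0) or all(d < 0):
                flags[5].append(i)
    return flags  # indices of points to circle, keyed by test number
```

Each flagged index is the last point of an offending sequence, matching the circling convention described above.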
Fig 23 Pattern analysis of x̄-charts. Circled points indicate the last point in a sequence of points on a chart that violates a specific rule
Fig 24 Example of simultaneous application of more than one test for out-of-control conditions. Point A is a violation of tests 3 and 4; point B is a violation of tests 2, 3, and 4; and point C is a violation of tests 1 and 3. See text for discussion
In Fig 24, point A is circled twice because it is the end point of a run of 7 successive points above the centerline and the end point of 4 of 5 successive points in zone B or beyond. In the second grouping in Fig 24, point B is circled three times because it is the end point of:
• A run of 7 successive points below the centerline
• 2 of 3 successive points in zone A or beyond
• 4 of 5 successive points in zone B or beyond
Point C in Fig 24 is circled twice because it is an extreme point and the end point of a group of five successive points, four of which are in zone B or beyond. Two other points (D, E) in these groupings are circled only once because they violate only one rule.
Control Charts for Individual Measurements
In certain situations, the notion of taking several measurements to form a rational sample of size greater than one simply does not make sense, because only a single measurement is available or meaningful at each sampling. For example, process characteristics such as oven temperature, suspended air particulates, and machine downtime may vary during a short period at sampling. Even for those processes in which multiple measurements could be taken, they would not provide valid within-sample variation for control chart construction, because the variation among several such measurements would be attributed primarily to variability in the measurement system. In such cases, special control charts can be used. Commonly used control charts for individual measurements include:
• x, Rm control charts
• Exponentially weighted moving average (EWMA) charts
• Cumulative sum charts (CuSum charts)
Both the EWMA (Ref 13, 14, 15, 16) and the CuSum (Ref 17, 18, 19, 20, 21) control charts can be used for charting sample means and other statistics in addition to their use for charting individual measurements
x and Rm (Moving-Range) Control Charts. This is perhaps the simplest type of control chart that can be used for the study of individual measurements. The construction of x and Rm control charts is similar to that of x̄ and R control charts, except that x stands for the value of the individual measurements and Rm for the moving range, which is the range of a group of n consecutive individual measurements artificially combined to form a subgroup of size n (Fig 25). The moving range is usually computed as the largest difference among two or three successive individual measurements. The moving ranges are calculated as shown in Fig 25 for the case of three consecutive measurements used to form artificial samples of size n = 3.
Fig 25 Examples of three successive measurements used to determine the moving range
Because the moving range, Rm, is calculated primarily for the purpose of estimating the common cause variability of the process, the artificial samples that are formed from successive measurements must be of very small size to minimize the chance of mixing data from out-of-control conditions. It is noted that x and Rm are not independent of each other and that successive sample Rm values overlap.
The following example illustrates the construction of x and Rm control charts, assuming that x follows at least approximately a normal distribution. Here, Rm is based on two consecutive measurements; that is, the artificial sample size is n = 2.
Table 3 x and Rm control chart data for the batch processing of white millbase topcoat component of Example 2
In the calculation of averages in Table 3, x̄ is an average of all 27 individual measurements, while R̄m is an average of 27 − 1 = 26 Rm values, because there are only 26 moving ranges for n = 2. If the artificial samples were of size n = 3, there would be only 27 − 2 = 25 moving ranges.
Once the x̄ and R̄m values are calculated, they are used as the centerline values of the x and Rm control charts, respectively. The calculation of upper and lower control limits for the Rm control chart is the same as in x̄, R control charting, using the artificial sample size n to determine the D3 and D4 values. However, the upper and lower control limits for the x-chart should always be based on a sample size of one, using R̄m the same way that R̄ is used for x̄ control charts. These calculations are shown below for the example data. For the Rm-chart, with n = 2, D3 = 0, and D4 = 3.27:

CL = R̄m = 4.99/26 = 0.192
UCL = D4R̄m = (3.27)(0.19) = 0.62
LCL = D3R̄m = 0

For the x-chart, an estimate of the standard deviation of x is R̄m/d2, where d2 = 1.128 from Table 1 for n = 2. Thus, 3σx = (3/d2)R̄m = (3/1.128)R̄m = 2.66R̄m:
UCLx = x̄ + 2.66R̄m
LCLx = x̄ − 2.66R̄m

As with the x̄-chart, a point on the x-chart may be circled if it is the end point of a sequence that violates any of the zone rules, or if it simply indicates a nonrandom sequence. However, tests for unnatural patterns should be used with more caution on an x-chart than on an x̄-chart, because the individuals chart is sensitive to the actual shape of the distribution of the individuals, which may depart considerably from a true normal distribution. Figures 26 and 27 show the x and Rm control charts for the example data.
Fig 26 Rm control chart obtained for white millbase data in Table 3. Data are for k = 27, n = 2
Fig 27 x control chart obtained for white millbase data in Table 3. Data are for k = 27, n = 2
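A minimal computational sketch of the construction just described, assuming artificial samples of size n = 2 (the input measurements are hypothetical stand-ins for the Table 3 data):

```python
import numpy as np

def x_mr_limits(x):
    """Limits for x and Rm charts with artificial samples of size n = 2."""
    x = np.asarray(x, dtype=float)
    rm = np.abs(np.diff(x))              # the k - 1 overlapping moving ranges
    rm_bar = rm.mean()
    d4, d2 = 3.27, 1.128                 # constants for n = 2, as in the text
    rm_limits = (0.0, rm_bar, d4 * rm_bar)       # LCL, CL, UCL for Rm-chart
    half = (3.0 / d2) * rm_bar                   # 2.66 * Rm-bar
    x_limits = (x.mean() - half, x.mean(), x.mean() + half)
    return x_limits, rm_limits

# Hypothetical measurements standing in for the Table 3 millbase data:
print(x_mr_limits([0.45, 0.62, 0.55, 0.50, 0.71, 0.48, 0.60]))
```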
References cited in this section
13 S.W. Roberts, Control Charts Based on Geometric Moving Averages, Technometrics, Vol 1, 1959, p 239-250
17 A.F. Bissell, An Introduction to CuSum Charts, The Institute of Statisticians, 1984
18 "Guide to Data Analysis and Quality Control Using CuSum Techniques," BS 5703 (4 parts), British Standards Institution, 1980-1982
19 J.M. Lucas, The Design and Use of V-Mask Control Schemes, J. Qual. Technol., Vol 8 (No. 1), 1976, p 1-12
20 J. Murdoch, Control Charts, Macmillan, 1979
21 J.S. Oakland, Statistical Process Control, William Heinemann, 1986
Shewhart Control Charts for Attribute Data
Many quality assessment criteria for manufactured goods are not of the variable measurement type. Rather, some quality characteristics are more logically defined in a presence-or-absence sense. Such situations might include surface flaws on a sheet metal panel; cracks in drawn wire; color inconsistencies on a painted surface; voids, flash, or spray on an injection-molded part; or wrinkles on a sheet of vinyl.
Such nonconformities or defects are often observed visually or according to some sensory criteria and cause a part to be defined simply as a defective part. In these cases, quality assessment is referred to as being made by attributes.
Many quality characteristics that could be assessed by measurements (variables) are often not handled as such in the interest of economy. A go/no-go gage can be used to determine whether or not a variable characteristic falls within the part specification. Parts that fail such a test are simply labeled defective. Attribute measurements can be used to identify the presence of problems, which can then be attacked by the use of x̄ and R control charts. The following definitions are required in working with attribute data:
• Defect: A fault that causes an article or an item to fail to meet specification requirements. Each instance of the lack of conformity of an article to specification is a defect or nonconformity
• Defective: An item or article with one or more defects is a defective item
• Number of defects: In a sample of n items, c is the number of defects in the sample. An item may be subject to many different types of defects, each of which may occur several times
• Number of defectives: In a sample of n items, d is the number of defective items in the sample
• Fraction defective: The fraction defective, p, of a sample is the ratio of the number of defectives in a sample to the total number of items in the sample. Therefore, p = d/n
Operational Definitions
The most difficult aspect of quality characterization by attributes is the precise determination of what constitutes the presence of a particular defect. This is so because many attribute defects are visual in nature and therefore require a certain degree of judgment, and because of the failure to discard the product control mentality. For example, a scratch that is barely observable by the naked eye may not be considered a defect, but one that is readily seen is. Furthermore, human variation is generally considerably larger in attribute characterization (for example, three inspectors may obtain nearly identical caliper readings of a workpiece dimension, while visual inspection of the same part by these individuals may yield anywhere from zero to ten defects). It is therefore important that precise and quantitative operational definitions be laid down for all to observe uniformly when attribute quality characterization is being used. The length or depth of a scratch, the diameter of a surface blemish, or the length of a flow line on a molded part can be specified.
The issue of the product control versus process control way of thinking about defects is a crucial one. From a product control point of view, scratches on an automobile grille should be counted as defects only if they appear on visual surfaces, which would directly influence part function. From a process control point of view, however, scratches on an automobile grille should be counted as defects regardless of where they appear, because the mechanism creating these scratches does not differentiate between visual and concealed surfaces. By counting all scratches, the sensitivity of the statistical charting instrument used to identify the presence of defects and to lead to their diagnosis will be considerably increased.
A major problem with the product control way of thinking about part inspection is that when attribute quality characterization is being used, not all defects are observed and noted; the first defect that is detected immediately causes the part to be scrapped. Often, such data are recorded in scrap logs, which then present a biased view of what the problem may really be. One inspector may concentrate on scratch defects on a molded part and will therefore tend to see these first. Another may think splay is more critical, so his data tend to reflect this type of defect more frequently. The net result is that such data may often mislead those who use it for process control purposes.
Figure 28 shows an example of the occurrence of multiple defects on a part. It is essential from a process control standpoint to carefully observe and note each occurrence of each type of defect. Figure 29 shows a typical sample result and the careful observation of each occurrence of each type of defect. In Fig 29, the four basic measures used in attribute quality characterization are defined for the sample in question.
Fig 28 Typical multiple defects present on an engine valve seat blank to illustrate defect identification in an
attribute quality characterization situation
Fig 29 Analysis of four basic measures of attribute quality characterization used to illustrate the typical defects present in the engine valve seat blank shown in Fig 28. Out of ten samples tested, four had no defects, three had single defects, and three had multiple defects
p-Chart: A Control Chart for Fraction Defective
Consider an injection-molding machine producing a molded part at a steady pace. Suppose the measure of quality conformance of interest is the occurrence of flash and splay on the molded part. If a part has so much as one occurrence of either flash or splay, it is considered to be nonconforming, that is, a defective part.
To establish the control chart, rational samples of size n = 50 parts are drawn from production periodically (perhaps each shift), and the sampled parts are inspected and classified as either defective (from either or both possible defects) or nondefective. The number of defectives, d, is recorded for each sample. The process characteristic of interest is the true process fraction defective, p'. Each sample result is converted to a fraction defective:
p = d/n (Eq 1)
The data (fraction defective p) are plotted for at least 25 successive samples of size n = 50. The individual values for the sample fraction defective, p, vary considerably, and it is difficult to determine from the plot at this point whether the variation about the average fraction defective, p̄, is solely due to the forces of common causes or to special causes.
Control Limits for the p-Chart. It can be shown that for random sampling, under certain assumptions, the occurrence of the number of defectives, d, in a sample of size n is explained probabilistically by the binomial distribution. Because the sample fraction defective, p, is simply the number of defectives, d, divided by the sample size, n, the occurrence of values for p also follows the binomial distribution. Given k rational samples of size n, the true fraction defective, p', can be estimated by:

p̄ = (d1 + d2 + ... + dk)/(kn) (Eq 2)

or

p̄ = (d1 + d2 + ... + dk)/(n1 + n2 + ... + nk) (Eq 3)

Equation 3 is more general because it is valid whether or not the sample size is the same for all samples. Equation 2 should be used only if the sample size, n, is the same for all k samples.
Therefore, given p̄, the control limits for the p-chart are then given by:

UCLp = p̄ + 3√[p̄(1 − p̄)/n]
LCLp = p̄ − 3√[p̄(1 − p̄)/n]

Thus, only p̄ has to be calculated, from at least 25 samples of size n, to set up a p-chart. The binomial distribution is generally not symmetric in quality control applications and has a lower bound of p = 0. Sometimes the calculation for the lower control limit may yield a value less than 0; in this case, a lower control limit of 0 is used.
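These calculations translate directly into code. The following sketch assumes a constant sample size n, per Eq 2; the defective counts shown are hypothetical:

```python
import numpy as np

def p_chart_limits(d, n):
    """Trial p-chart limits for k samples of constant size n (Eq 2)."""
    d = np.asarray(d, dtype=float)
    p_bar = d.sum() / (d.size * n)                       # Eq 2
    half = 3.0 * np.sqrt(p_bar * (1.0 - p_bar) / n)      # 3-sigma half-width
    return max(0.0, p_bar - half), p_bar, p_bar + half   # LCL floored at 0

# Hypothetical defective counts for 25 samples of n = 50:
counts = [1, 0, 2, 1, 3, 0, 1, 2, 0, 1, 4, 1, 0,
          2, 1, 0, 3, 1, 2, 0, 1, 1, 2, 0, 1]
print(p_chart_limits(counts, 50))   # (LCL, centerline, UCL)
```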
Example 3: A p-Chart Applied to Evaluation of a Carburetor Assembly (Ref 22)
This example illustrates the construction of a p-chart. The data in Table 4 are inspection results on a type of carburetor at the end of assembly; all types of defects except leaks were noted, and n = 100 for all samples. Samples taken numbered k = 35.
UCLp = 0.02086 + 0.04287 = 0.06373
LCLp = 0.02086 − 0.04287 = −0.02201

That is:

LCLp = 0
The plot of the data on the corresponding p-chart is shown in Fig 30. The process appears to be in statistical control, although eight points lie on the lower control limit. In this case, results of p = 0 that fall on the lower control limit should not be interpreted as signaling the presence of a special cause. For a sample size of n = 100 and a fraction defective p' = 0.02, the binomial distribution gives the probability of d = 0 defectives in a sample of 100 to be 0.133. Therefore, a sample with zero defectives would be expected about one time in seven.
Fig 30 p control chart obtained for the evaluation of the carburetor assembly data in Table 4. Data are for k = 35, n = 100
In summary, the p-chart in this example seems to indicate good statistical control, having no extreme points (outside the control limits), no significant trends or cycles, and no runs of sizable length above or below the centerline. At least over this period of data collection, the process appears to be operating only under a common cause system of variation. However, Fig 30 shows that the process is consistently operating at a 2% defective rate.
Variable Sample Size Considerations for the p-Chart. It is often the case that the sample size may vary from one time to another as data for the construction of a p-chart are obtained. This may be the case if the data have been collected for other reasons (for example, acceptance sampling) or if a sample constitutes a day's production (essentially 100% inspection) and production rates vary from day to day. Because the limits on a p-chart depend on the sample size n, some adjustments must be made to ensure that the chart is properly interpreted. There are several ways in which the variable sample size problem can be handled. Some of the more common approaches are the following:
• Compute separate limits for each individual subgroup. This approach certainly leads to a correct set of limits for each sample but requires continual calculation of the control limits and yields a somewhat messy-looking control chart
• Determine an average sample size, n̄, and a single set of control limits based on n̄. This method may be appropriate if the sample sizes do not vary greatly, perhaps no more than about 20%. However, if the actual n is less than n̄, a point above a control limit based on n̄ may not be above its own true upper control limit. Conversely, if the actual n is greater than n̄, a point may not show out of control when in reality it is
• A third procedure for varying sample size is to express the fraction defective in standard deviation units, that is, to plot (p − p̄)/σp on a control chart where the centerline is zero and the control limits are set at ±3.0. This stabilizes the plotted value even though n may be varying. Note that (p − p̄)/σp is a familiar form; recall the standard normal (Z) distribution. For this method, the continued calculation of the stabilized variable is somewhat tedious, but the chart has a clean appearance, with constant limits of ±3.0 and a constant centerline at 0.0
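The third (stabilized) approach is straightforward to compute. This sketch uses Eq 3 for p̄ and gives each sample its own σp from its individual ni; the resulting Z values are plotted against a centerline of 0 with limits at ±3 (the counts and sample sizes below are hypothetical):

```python
import numpy as np

def standardized_p(d, n):
    """Z values for a stabilized p-chart with varying sample sizes."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    p = d / n                                     # per-sample fraction defective
    p_bar = d.sum() / n.sum()                     # Eq 3, valid for unequal n
    sigma_p = np.sqrt(p_bar * (1.0 - p_bar) / n)  # each sample's own sigma_p
    return (p - p_bar) / sigma_p                  # plot vs centerline 0, +/-3

# Hypothetical counts with sample sizes varying from day to day:
z = standardized_p([3, 1, 4, 0, 2], [120, 80, 150, 90, 110])
print(z)
```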
c-Chart: A Control Chart for Number of Defects
The p-chart deals with the notion of a defective part or item, where defective means that the part has at least one nonconformity or disqualifying defect. It must be recognized, however, that the incidence of any one of several possible nonconformities would qualify a part for defective status. A part with ten defects, any one of which makes it a defective, is on equal footing with a part with only one defect in terms of being a defective.
Often it is of interest to note every occurrence of every type of defect on a part and to chart the number of defects per sample. A sample may be only one part, particularly if interest focuses on final inspection of an assembled product, such as an automobile, a lift truck, or perhaps a washing machine. Inspection may focus on one type of defect (such as nonconforming rivets on an aircraft wing) or multiple defects (such as flash, splay, voids, and knit lines on an injection-molded truck grille).
Considering an assembled product such as a lift truck, the opportunity for the occurrence of a defect is quite large, perhaps to be considered infinite. However, the probability of occurrence of a defect in any one arbitrarily chosen spot is very, very small. In this case, the probability law that governs the incidence of defects is known as the Poisson law or Poisson probability distribution, where c is the number of defects per sample. It is important that the opportunity space for defects to occur be constant from sample to sample. The Poisson distribution defines the probability of observing c defects in a sample, where c' is the average rate of occurrence of defects per sample:

P(c) = (c')^c e^(−c')/c!
Construction of c-Charts From Sample Data. The number of defects, c, arises probabilistically according to the Poisson distribution. One important property of the Poisson distribution is that its mean and variance are the same value. Then, given c', the true average number of defects per sample, the 3σ limits for the c-chart are given by:

c' ± 3√c'

Therefore, trial control limits for the c-chart can be established, with possible truncation of the lower control limit at zero, from:

UCLc = c̄ + 3√c̄
LCLc = c̄ − 3√c̄
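A brief sketch of these trial limits, together with the analogous limits for the u-chart discussed later (here for a constant number of units per subgroup); the Poisson variance-equals-mean property supplies the √c̄ term:

```python
import numpy as np

def c_chart_limits(c):
    """Trial c-chart limits from k defect counts (Poisson: variance = mean)."""
    c_bar = float(np.mean(c))
    half = 3.0 * np.sqrt(c_bar)
    return max(0.0, c_bar - half), c_bar, c_bar + half

def u_chart_limits(c, n):
    """u-chart analog: defects per unit, with n units in each subgroup."""
    u = np.asarray(c, dtype=float) / n
    u_bar = u.mean()
    half = 3.0 * np.sqrt(u_bar / n)
    return max(0.0, u_bar - half), u_bar, u_bar + half
```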
Example 4: c-Chart Construction for Continuous Testing of Plastic-Insulated Wire at a Specified Test Voltage
Table 5 lists the results of continuous testing of a certain type of plastic-covered wire at a specified test voltage. This test causes breakdowns at weak spots in the insulation, which are cut out before shipment.
Table 5 c-chart data for plastic-insulated wire of Example 4
Sample number, k Number of breakdowns
The number of breakdowns per sample is plotted on the corresponding c-chart. In general, it is desirable to select the sample size for the c-chart application such that, on average (c̄), at least one or two defects occur per sample.
In most applications, the centerline of the c-chart is based on the estimate of the average number of defects per sample. This estimate can be calculated by:

c̄ = (c1 + c2 + ... + ck)/k

The resulting c-chart in Fig 31 shows the presence of special causes of variation.
Fig 31 c control chart obtained for the evaluation of the plastic-insulated wire data (k = 30) in Table 5
u-Chart: A Control Chart for the Number of Defects per Unit
Although in c-chart applications it is common for a sample to consist of only a single unit or item, the sample or subgroup can be comprised of several units. Further, from subgroup to subgroup, the number of units per subgroup may vary, particularly if a subgroup is an amount of production for the shift or day, for example.
In such cases, the opportunity space for the occurrence of defects per subgroup changes from subgroup to subgroup, violating the equal opportunity space assumption on which the standard c-chart is based. Therefore, it is necessary to create a standardized statistic, such as the average number of defects per unit, where n is the number of items per subgroup. The symbol u is often used to denote the average number of defects per unit, that is:

u = c/n (Eq 7)
where c is the total number of defects in a subgroup of n units. For k such subgroups gathered, the centerline on the u-chart is:

ū = (c1 + c2 + ... + ck)/(n1 + n2 + ... + nk) (Eq 8)

The trial control limits for the u-chart are then given by:

UCLu = ū + 3√(ū/n)
LCLu = ū − 3√(ū/n) (Eq 9)
Example 5: Use of the u-Chart to Evaluate Leather Handbag Lots
Table 6 lists inspection results in terms of defects observed in the inspection of 25 consecutive lots of leather handbags. Because the number of handbags in each lot was different, a constant sample size of n = 10 was used. All defects were counted even though two or more defects of the same or different type occurred on the same bag. The u-chart data are as follows (Fig 32):
Table 6 u-chart data for leather handbag lot production of Example 5
Sample number, k(a) Total number
Fig 32 u control chart obtained for the evaluation of leather handbag lot data in Table 6. Data are for k = 25, n = 10. Datum for sample 9 is an extreme point because it exceeds the value of the UCL
Reference cited in this section
22 I. Burr, Statistical Quality Control Methods, Marcel Dekker, 1976
Process Capability Assessment
This section presents both traditional and more modern views of process capability and how it is assessed. The presentation stresses the relationship between the control/stability of a process and its capability. The clear distinction between the engineering specification and the statistical control limits, in terms of their use and interpretation, is emphasized. Using the traditional conformance-to-specifications view of process capability, this section illustrates the consequences of a lack of statistical control in terms of the manner and extent to which a process produces parts that meet design intent. This section also presents the loss function approach to the articulation of process capability.
Process Capability Versus Process Control
There are two separate but vitally important issues that must be addressed when considering the statistical representation of process data. These are:
• The ability of the process to produce parts that conform to specifications
• The ability of the process to maintain a state of good statistical control
These two process characteristics are linked in the sense that it will be difficult to assess process capability with respect to conformance to specifications without being reasonably assured of having good statistical control. Although control certainly does not imply conformance, it is a necessary prerequisite to the proper assessment of conformance.
In a statistical sense, conformance to specifications involves the process as a whole; therefore, attention will be focused on the distribution of individual measurements. In dealing with statistical control, summary statistics from each sample, mainly x̄ and R, are used; as a result, this involves the distribution of these statistics, not individual measurements.
Because of the above distinction between populations and samples, it is crucial not to compare or confuse part specifications and control limits. In fact, tolerance/specification limits should never be placed on a control chart. This is so because the control chart is based on the variation in the sample means x̄, while it is the individual measurements in the sample that should be compared to specifications. Placing specifications on the control chart may lead to the mistaken impression that good conformance exists when, in fact, it does not. This is illustrated in Example 6.
A process may produce a large number of pieces that do not meet the specified production standards, even though the process itself is in a state of good statistical control (that is, all the points on the x̄ and R control charts are within the 3σ limits and vary in a random manner). This may be because the process is not centered properly; in other words, the actual mean value of the parts being produced may be significantly different from the specified nominal value of the part. If this is the case, an adjustment of the machine to move the mean closer to the nominal value may solve the problem. Another possible reason for lack of conformance to specifications is that a statistically stable process may be producing parts with an unacceptably high level of common cause variation. In summary, if a process is in statistical control but not capable of meeting the tolerances, the problem may be one of the following:
• The process is off-center from the nominal
• The process variability is too large relative to the tolerances
• The process is both off-center and has large variation
Example 6: Statistical Assessment of Process Capability for a Workpiece
For a certain part, a dimension of interest was specified by the engineering department as 0.140 ± 0.003 in. Many parts were being rejected on 100% inspection using a go/no-go gage because they failed to meet these tolerances.
It was decided to study the capability of the process using x̄ and R control charts. These data were taken from the same machine and operator and at the rate of about one sample per hour.
Both the x̄ and R control charts seem to indicate good statistical control, with no points exceeding the 3σ limits, a reasonably normal distribution of points between the limits, and no discernible trends, cycles, and so on. Therefore, the calculated sample mean x̄ = 0.1406 in. and the sample standard deviation σx = 0.0037 in. are good estimates. The process can then be evaluated with respect to its conformance to specifications.
To obtain a clear picture of the statistical nature of the data, a frequency histogram was plotted; it resembled a normal distribution, but the mean appears to be slightly higher than the nominal value of 0.140 set by the engineering department. Figure 33 shows this population distribution curve centered at x̄ = 0.1406 in. with a spread of σx = 0.0037 in. The specifications are also shown on this plot.
Fig 33 Normal distribution model for process capability for the data of Example 6. LSLx = 0.137, USLx = 0.143, x̄ = 0.1406, and σx = 0.0037
The shaded areas in Fig 33 represent the probability of obtaining a part that does not meet specifications. To compute the probability of a part falling below the lower specification, the standard normal deviate, Z, is calculated, and a normal curve table is used:

Z = (x − x̄)/σx

where x is the value of either the lower or upper specification, x̄ is the estimate of the population mean, and σx is the estimate of the process standard deviation.
To find the probability of a point below the lower specification limit, LSLx = 0.137, with x̄ = 0.1406 and σx = 0.0037:

Z = (0.137 − 0.1406)/0.0037 = −0.97

Looking this value up in the normal table produces Prob (0.137 or less) = 0.1660. This means that there is a 16.6% chance of an individual part falling below the specified tolerance.
To find the probability of an individual part falling above the upper specification limit, USLx = 0.143, with x̄ = 0.1406 and σx = 0.0037:

Z = (0.143 − 0.1406)/0.0037 = 0.65

The probability of a part being above the upper specification limit, based on Z = 0.65, is 0.2578. The fraction of the process that does not meet the specifications is therefore 16.60% + 25.78% = 42.38%.
It might be asked whether centering the process at the nominal value of 0.140 would help. To check, a normal curve is constructed, centered at 0.140. The probability of a point below the lower specification is found by computing Z using the nominal value as the population mean:

Z = (0.137 − 0.140)/0.0037 = −0.81

Looking up the area for this Z value in the normal curve table produces a value of 0.209, which is the probability of getting a value below the lower specification limit. The probability of getting a value above the upper specification is found by:

Z = (0.143 − 0.140)/0.0037 = 0.81

which from the normal curve table gives an area of 0.209. The total probability of a part not meeting specification is the sum of these, or 0.209 + 0.209 = 0.418, or about 42%.
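The normal-table lookups in this example can be reproduced with the standard normal cumulative distribution function. The short sketch below uses only the standard math library and reproduces the roughly 42% nonconforming fraction computed above:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def frac_out(mean, sigma, lsl, usl):
    """Fraction outside a two-sided specification for a normal process."""
    return phi((lsl - mean) / sigma) + (1.0 - phi((usl - mean) / sigma))

print(frac_out(0.1406, 0.0037, 0.137, 0.143))  # about 0.42, as computed above
print(frac_out(0.1400, 0.0037, 0.137, 0.143))  # still about 0.42 when recentered
```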
Therefore, recentering the process will not be of much help. The process is in control; no special causes of variability were indicated. However, about 42% of the parts were outside the tolerances. Possible remedies include the following:
• Continue to sort by 100% inspection
• Widen the tolerances, for example, 0.140 ± 0.006
• Use a more precise process; reduce process variation
• Use statistical methods to identify variation reduction opportunities for the existing process
Too often, the strategy used (or at least urged) is the second remedy listed above. Clearly, stronger consideration should be given to the final remedy listed above.
Comparison of Tolerances and Control Limits
It is important to clearly differentiate between specification limits and control limits. The specification limits or tolerances of a part are:
• Characteristic of the part/item in question
• Based on functional considerations
• Related to/compared with an individual part measurement
• Used to establish the conformability of a part
The control limits on a control chart are:
• Characteristic of the process in question
• Based on process variability
• Dependent on sampling parameters, namely, sample size
• Used to identify presence/absence of special cause variation in the process
Control limits and tolerances must never be compared numerically and should not appear together on the same graph. Tolerances are limits on individual measurements and as such can be compared against the process as a whole, as represented by many individual measurements collected in the form of a statistical distribution, as was done in Example 6 to assess overall process capability.
Process Capability Indices
It is common to measure process capability in units of process standard deviations. In particular, it is common to look at the relationship between the process standard deviation and the distance between the upper and lower specification limits:

Cp = (USLx − LSLx)/σx (Eq 10)

The minimum acceptable value for Cp is considered to be 6.
Recently, many companies have begun to use a capability index referred to as Cpk. For bilateral specifications, Cpk is defined in the following manner. First, the relationship between the process mean and the specification limits in units of standard deviations is determined:

ZUSL = (USLx − x̄)/σx (Eq 11a)
ZLSL = (x̄ − LSLx)/σx (Eq 11b)

Then the minimum of these two values is selected:

Zmin = min(ZUSL, ZLSL) (Eq 12)

The Cpk index is then defined by dividing this minimum value by 3:

Cpk = Zmin/3 (Eq 13)

Commonly, Cpk must be at least 1.00.
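These definitions translate directly into code. The sketch below follows Eq 10 to 13 as stated here, where Cp is the specification width expressed in units of σx (minimum acceptable value 6) rather than the also-common (USL − LSL)/6σ form:

```python
def capability(mean, sigma, lsl, usl):
    """Capability indices per Eq 10-13 above."""
    cp = (usl - lsl) / sigma            # Eq 10: spec width in sigma units
    z_usl = (usl - mean) / sigma        # Eq 11a
    z_lsl = (mean - lsl) / sigma        # Eq 11b
    cpk = min(z_usl, z_lsl) / 3.0       # Eq 12 and Eq 13
    return cp, cpk

# Example 6 data: both indices fall far short of their minimum values.
print(capability(0.1406, 0.0037, 0.137, 0.143))
```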
Statistical Process Control and the Statistical Tolerance Model
The issue of part tolerancing and, in particular, the statistical assignment and assessment of tolerances are excellent examples of the need for design and manufacturing to understand what each other is doing and why. The best intentions of the design process can go unmet if the manufacturing process is not operated in a manner totally consistent with design intent. To more clearly appreciate the marriage of thinking that must exist between the design and manufacturing worlds, some of the basic assumptions of the tolerancing activity and their relationship to the manufacturing process will be examined. The following sections clearly point to the importance of statistical process control relative to the issue of process capability.
The key concepts in statistical tolerancing are:
• The use of a statistical distribution to represent the design characteristic and therefore the process output for the product/part in question relative to the design specifications
• The notion of random assembly, that is, random part selection from these part process distributions when more than one part is being considered in an assembly
• The additive law of variances as a means to determine the relationship between the variability in individual parts and that for the assembly
For the assumption that the parts can be represented by a statistical distribution of measurements to hold in reality, the part processes must be in a state of statistical control. The following example illustrates the importance of statistical process control in achieving design intent in a tolerancing problem.
Example 7: Statistical Tolerance Model for Optimum Fit of a Pin Assembly in a Hole Machined in a Plate
Figure 34 shows two simple parts: a plate with a hole and a pin that will ultimately be assembled to a third part but must pass through the hole in the plate. For the assembly, it is desired for function that the clearance between the plate hole and the pin be at least 0.015 in. but no more than 0.055 in.
Fig 34 Machined components statistically analyzed in Example 7 (a) Plate with hole (b) Pin assembly
Dimensions given in inches
To achieve the design requirement stated above, the nominal values and tolerances for the plate hole and pin were statistically derived and are shown in Fig 34. To arrive at these tolerances, it was assumed that:
• The parts would be manufactured by processes that behave according to the normal distribution
• The process capabilities would be at least 6σ, the processes would be centered at the nominal values given in Fig 34, and the processes would be maintained in a state of statistical control
• Random assembly would prevail
If these assumptions are met, the processes for the two parts, and therefore the clearance associated with assembled parts, would be as shown in Fig 35, and the design intent would be met
Fig 35 Statistical basis for satisfying design intent for the hole/pin assembly clearance in Fig 34. (a) Distribution of hole. (b) Distribution of pin. (c) Distribution of clearance
Suppose that, despite the assumptions made and the tolerances derived, the processes manufacturing the pin and plate hole were not maintained in good statistical control, and as a result the parts actually more nearly follow a uniform/rectangular distribution within the specifications, as shown in Fig 36. Such a situation could have arisen as a result of sorting or rework of a more variable process(es), in which case the results are doubly distressing: poorly fitting assemblies and increased cost to the system.
Fig 36 Clearance implications of poor process control of plate hole and pin dimensions for components of Fig 34. (a) Distribution of hole. (b) Distribution of pin. (c) Distribution of clearance
Figure 36 shows the distribution of the clearance if the hole and pin dimensions follow the uniform distribution within the specifications. The additive law of variances has again been used to derive the variation in the clearance distribution, but assuming the uniform distribution for the individual part processes. Some assemblies may not go together at all, some will fit quite tightly and may later bind if foreign matter gets into the gap, and others will fit together with a much larger clearance than desired.
The problem here is not a design problem. The plate hole and pin tolerances have been derived using sound statistical methods. However, if the processes are not in good statistical control, and therefore not capable of meeting the assumptions made during design, poor-quality assemblies will follow. It should be noted that the altogether too common process appearance of a uniform distribution of measurements within the specifications can arise in several different ways:
• From processes that have good potential with regard to variation, but are not kept in good statistical control
• From unstable and/or large variation processes that require sorting/rework to meet the specifications
• From processes that are intentionally allowed to vary over the full range of the specifications to take advantage locally of wide specifications relative to the process variation
In all three of the cases mentioned above, additional costs will be incurred and product quality will be eroded. Clearly, statistical process control is crucial to the tolerancing issue in engineering design. Taguchi's loss function model, which is an essential element in tolerance design, similarly assumes that quality characteristics can be represented by a statistical distribution of measurements. Again, for this assumption to be met at the process, and therefore in the ultimate product in the field, the manufacturing processes must be maintained in a state of statistical control.
Design of Experiments: Factorial Designs
The process of product design, and the associated design of manufacturing processes and tolerances, often involves many experiments to better understand the various cause-effect relationships governing the quality performance of the product and the ease of process control. The sections that follow present some of the basic concepts and methods for the planning, design, and analysis of experiments.
The purpose of most experimental work is to discover the direction(s) of change that may lead to improvements in both the quality and the productiveness of a product or process. Such endeavors can be referred to as process improvement, because product improvement can only be meaningfully measured through product use, and that is of course a process. Historically, design of experiments methods have tended to focus more attention on process improvement as contrasted with product design. In this regard, the different view that might be taken toward design of experiments in product design versus processing is probably overstated. In this section, the role of design of experiments in product design is emphasized, as is its use in the simultaneous engineering of products and processes.
In investigating the variation in performance of a given product or process, attention focuses on the identification of those factors that, when allowed to vary, cause performance to vary in some way. Some such factors are qualitative in nature (categorical variables), while others are quantitative (possessing an inherent continuity of change). The situations examined below may consider both qualitative and quantitative variables simultaneously. In fact, an important advantage of the two-level factorial designs that will be discussed is their ability to consider both types of variables within the same test plan.
Mathematical Model
A fundamental problem of design of experiments is that of selecting the appropriate arrangement of test points within the space defined by the design/control and noise variables. Although many different considerations must come into play in selecting a test plan, none can be more fundamental than the notion of the mathematical model. Whether or not explicitly recognized as such, most experimental studies are aimed either directly or indirectly at discovering the relationship between some performance response and a set of candidate variables influencing that response. In general, this relationship can be written as:

Y = f(X1, X2, ..., Xk) + e (Eq 14)

where Y is the response of interest, f is some unknown functional relationship, X1, ..., Xk are the independent variables, and e is a random error. The functional form f can be thought of as a transfer function. In Taguchi's framework, the variables X1, X2, ..., Xk are generally partitioned into signal, control, and noise variables.
Sometimes enough is known about the phenomenon under study to use theoretical considerations to identify the form of f. For example, a chemical reaction can be described by a differential equation, which when solved produces a theoretical relationship between the dependent and independent variables. More often than not, however, the knowledge is more sparse, and empirical models must be relied upon that act as mathematical french curves describing relationships through the data; for example:

Y = b0 + b1X1 + b2X2 + e (Eq 15a)
Y = b0 + b1X1 + b11X1² + e (Eq 15b)
Model Building. In most studies, the experimenter begins with a tentative hypothesis concerning the plausible model forms that are to be initially entertained. He must then select an experimental design having the ability to produce data that will:
• Be capable of fitting the proposed model(s)
• Be capable of placing the model in jeopardy, in the sense that inadequacies in the model can be detected through analysis
The second consideration above is of particular importance to ensure that, through a series of iterations, the most appropriate model can be determined, while others may be proved less plausible through the data.
If, for example, a curvilinear relationship between temperature and reaction time in a chemical process is suspected, an experiment that studies the process at only two temperatures will be inadequate to reveal this possibility. An experiment with three levels of temperature would, however, allow this possibility to be considered. Figure 37 illustrates several scenarios that emphasize the importance of the relationship between the mathematical model and the associated design of experiment. In Fig 37, the following points should be noted:
• The relationship is actually curvilinear, but such will never be detected by the data
• A poor model (straight line) has been hypothesized, but model checking can reveal this and help propose a better model
• If the relationship is known to be a straight line, many levels of temperature in the experiment are unnecessary
• If it is known a priori that the relationship is a straight line, the best test plan would be to study only two
relatively extreme levels of temperature and to use additional tests for replication to observe the amount
of experimental error
Fig 37 Comparison of typical time-temperature relationships: the true relationship (a) compared to experimental and fitted models (a through d; see text for more details)
Sequential and Iterative Experimentation. There is always the temptation to carefully design one large experiment that will consider all aspects of the problem at hand. Such a step is dangerous for the following reasons:
• If erroneous decisions and hypotheses about the state of affairs are made, considerable time and experimental resources may be wasted, and the end product may provide little useful information or direction in terms of what to do next
• If knowledge of the underlying situation is limited a priori, many factors may be suspected as being important, requiring a very large experiment in terms of number of tests. Ultimately, only a small subset of variables will be found to be of major significance
• In the early stages of experimentation, knowledge of the specific ranges of variables that ought to be studied is not always available. Furthermore, the metrics to be employed to define the variables, the responses, or even what responses to observe may not always be clear in the early stages of experimental work
• One large experiment will necessarily cause the testing period to be protracted in time, making it more difficult to control the forces of nuisance variation
For these reasons, it is much more desirable to conduct an experimental program through a series of smaller, often interconnected experiments. This provides the opportunity to modify hypotheses about the state of affairs concerning the situation at hand, to discard variables that are shown to be unimportant, to change the region of study of some or all of the variables, and/or to define and include other measures of process performance. Experimental designs that can be combined sequentially are very useful in this regard. This is often referred to as the sequential assembly of experimental designs.
Revelation of Variable Effects. Often, the variables of importance are not clearly known a priori. It is desirable to be able to study several variables together yet independently observe the effect of a change in each one of the variables. Furthermore, it may be deemed important to know whether such a variable effect varies with the conditions of the process, that is, when other variables take on varying levels. Such an arrangement of the tests is called a design; it provides the opportunity to learn much about the relationships between the variables and the response. In particular:
• The effect of changing any of the variables alone can be observed
• The possibility that the effects measured above can vary as conditions of other variables vary can be observed, that is, the existence of variable interactions
System Noise/Variation
The experimental study of any phenomenon is made difficult by the presence of noise or experimental error. Many factors not directly under study vary over the course of the experiment; these are often referred to as the forces of common cause system variation. Such variation may cloud or mask the effect of changing the factors under study in an experiment. The forces of noise or variation can be better understood or mitigated by several approaches, some of which are strictly experimental design issues.
Statistical Control/Stability Analysis. If the phenomenon under study is already a viable and ongoing process, the pursuit of improvement opportunities through experimentation can be considerably enhanced by employing the techniques of statistical process control. In this way, spurious or sporadic sources of variation can be identified and, through remedial action, removed. Achievement of a stable process will greatly contribute to the ability to more readily observe the effects of purposeful process change, and continued study will further enhance the ability to observe the persistence of changes that might be introduced. Once a process is stabilized, continued attack on the common cause system will lead to a progressively quieter process, further heightening the ability to observe the forces of purposeful process change through experimentation.
Experimental Design Strategies. In many situations, the notion of a stable, ongoing process has little meaning. In the early stages of product or process design or of prototype or pilot-plant testing, a stable, consistent process is not present. It is perhaps for this reason (among others) that the body of knowledge known as experimental design was cultivated. Under such situations, the following strategies are significant:
• Attempt to identify major sources of variation and take action to ensure that their presence is blocked out from the comparisons made within an experiment. The technique of blocking is useful for this purpose
• Counteract the forces of unknown systematic variation over the period of the experiment by randomization of the tests so that such variation is uniformly and randomly distributed across the trials conducted
• Include replication in the experimental test plan. Multiple tests at the same conditions will provide comparisons that directly measure the amount of variation/experimental error
• Include confirmatory testing as part of the experimental strategy. It will be important that additional trials be run under specific conditions determined from the analysis to verify the improvement opportunities revealed by the experiment
The parameter design method is specifically directed at mitigating the forces of noise variation as they may be transmitted through the product/process design to the output performance.
Nature of Variable Interactions
For many products and/or processes, the effects that the important design/control factors have on the system performance responses of interest do not act independently of each other. That is, the effect a certain factor has on the response may be different for different levels of a second factor. When this occurs, the two factors are said to interact or to have an interdependency relationship; that is, a two-factor interaction effect is present. Figure 38 summarizes the nature of the two-factor interaction effect. Figure 38(a) shows that the effect of pressure on time (the slope of the line) is the same regardless of the level of temperature; therefore, no interaction is present. However, in Fig 38(b), the effect of pressure on time is clearly seen to vary with temperature; therefore, a two-factor interaction is present.
Fig 38 Graphical depiction of the absence (a) and presence (b) of a two-factor interaction effect
Simple Yet Powerful Experimental Design. Many of the problems created by ad hoc testing methods and/or methods such as the one-variable-at-a-time approach can be overcome by using an experimental design structure referred to as the two-level factorial design. For such test plans, each factor/variable is studied over only two levels or settings, and all possible combinations are examined. Therefore, the total number of unique tests required for such a test plan is 2^k, where k is the number of variables; for example, for two variables, 2² = 4 test conditions define the test matrix.
Figure 39 shows a graphical representation of the two-level factorial design when two and three variables are under study. The geometric representation is useful from the standpoints of interpreting the variable effects and communicating the purpose and results of the test plan to others. The corners of the square and the cube represent geometrically the conditions of each unique combination of the variable settings.
Fig 39 Two-level factorial design for two (four tests required) (a) and three (eight tests required) (b) variables
Tables 7(a) and 7(b) provide a more algebraic way to represent the test conditions for a two-level factorial design. The two levels for each factor are often simply referred to as the high and low levels and are represented in coded form as + or +1 and − or −1. This facilitates the determination of the variable effects, given the data. Each row in Tables 7(a) and 7(b) represents the recipe for a particular test. For example, in the 2³ factorial in Table 7(b), test 3 is run with variable 1 (X1) at its low level, variable 2 (X2) at its high level, and variable 3 (X3) at its low level.
Table 7(a) Test matrix for simple two-variable, two-level factorial design
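The coded test matrices of Tables 7(a) and 7(b), and the effect calculations they support, can be generated programmatically. A minimal sketch (the four response values shown are hypothetical):

```python
from itertools import product

def two_level_factorial(k):
    """All 2**k coded test conditions (-1 = low, +1 = high), one row per run."""
    return [list(levels) for levels in product([-1, 1], repeat=k)]

def main_effect(design, y, j):
    """Average response at the high level of factor j minus that at the low level."""
    hi = [yi for row, yi in zip(design, y) if row[j] == 1]
    lo = [yi for row, yi in zip(design, y) if row[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

design = two_level_factorial(2)     # the 2**2 = 4 runs of Table 7(a)
y = [60, 72, 54, 68]                # hypothetical responses for the 4 runs
print(design)
print(main_effect(design, y, 0))    # effect of variable 1 (X1)
```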