Introduction to Statistical Process Control (SPC) and this website

Statistical Process Control is a combination of techniques aimed at continually improving production processes so that the customer may depend on the uniformity of a product and may purchase it at minimum cost. On this website we aim to provide the information you need to understand SPC and to give you guidelines for implementing it in a company. The website will also contain a full training course on the specific techniques used in SPC. We are still developing the content of this website, so please come back; we hope you find what you are looking for. For any comments, questions or suggestions please send a mail to
Return to index

Where did Statistical Process Control (SPC) come from?

Statistical Process Control was developed in the 1920s by Walter Shewhart at Bell Telephone Laboratories. It started as an investigation to develop a scientific basis for attaining economic control of quality of manufactured product through the establishment of control limits to indicate, at every stage in the production process from raw materials to finished product, when the quality of product is varying more than is economically desirable, as Shewhart states in the preface of the book resulting from this investigation. That book, "Economic Control of Quality of Manufactured Product", was published in 1931, and all the concepts described in it are still valid more than 80 years later.

The methods described by Shewhart were incorporated into a management philosophy by Dr. W.E. Deming (a younger colleague of Shewhart). Just prior to World War II, American industrial management did not pay much attention to Deming and his views on statistical techniques and an open management style. However, Japan's post-war efforts to increase production and to compete with Western industries found Deming's philosophy attractive. Top Japanese management concluded that they had to improve quality, and invited Deming to lecture in Japan in the early 1950s. The successful tour led to a few companies implementing the Deming methodologies, and within a few months their quality and productivity increased. This in turn led to a greater proliferation of these techniques in Japan. It was the commitment of top Japanese management and the realization of the rewards of SPC implementation, together with the philosophies of Deming, that form the basis of the Japanese competitive advantage as we know it today.

Deming stated that a quality product can only be made if all the processes in a company are under control; therefore everybody in a company is responsible for quality. The knowledge on the shop floor has to be used and the walls between departments have to be torn down. It is the responsibility of management to allow operators to work with the best methods, the best machines, and so on. In 1980 Deming appeared in an American television documentary named "If Japan can, why can't we?" There was a considerable reaction, and for the first time managers in America listened to his philosophy. It was quickly proven that SPC could also give beneficial results in Western industries. However, despite increased attention on this side of the globe, SPC is still in a preliminary implementation stage. Deming summarized his philosophy in 14 rules of management, which are given below.

  1. Create constancy of purpose toward improvement of product and service with the aim to become competitive, stay in business and provide jobs.
  2. Adopt the new philosophy. We are in a new economic age, created by Japan. A transformation of Western style of management is necessary to halt the continued decline of industry.
  3. Cease dependence on inspection to achieve quality. Eliminate the need for mass inspection by building quality into the product in the first place.
  4. End the practice of awarding business on the basis of price. Purchasing, design, manufacturing and sales departments must work with the chosen suppliers so as to minimize total cost, not initial cost.
  5. Improve constantly, and forever, every activity in the company so as to improve quality and productivity and thus constantly decrease costs.
  6. Institute education and training on the job, including management.
  7. Institute improved supervision. The aim of supervision should be to help people and machines do a better job.
  8. Drive out fear so that everyone may work effectively for the company.
  9. Break down the barriers between departments. People in research, design, sales and production must work as a team to tackle production and usage problems that may be encountered with the product or service.
  10. Eliminate slogans, exhortations and targets for the workforce asking for new levels of productivity and zero defects. The bulk of the causes of low quality and low productivity belong to the system and will not be in the direct control of the workforce.
  11. Eliminate work standards that prescribe numerical quotas. Instead, use resources and supportive supervision, using the methods to be described for the job.
  12. Remove the barriers that rob the hourly worker of the right to pride of workmanship. The responsibility of supervision must be changed from sheer numbers to quality. Equally, remove barriers that rob people in management and engineering of their right to pride of workmanship.
  13. Institute a vigorous program of education and retraining. New skills are required for changes in techniques, materials and services.
  14. Put everybody in the organization to work in teams to accomplish the transformation.

Rules of Deming
Return to index

Definition and Organization of Statistical Process Control (SPC)

There are a lot of different ideas about what SPC is. Some see SPC as the use of control charts to analyze data. Others see it as a complete management system used to continuously improve quality and productivity. On this website we will explain how SPC can be used as a complete management system to continuously improve quality and productivity, because that is the ultimate goal of SPC. One of the first requirements for getting maximum results is that SPC should be implemented at every level in the organization.

To control a process we need to apply SPC in real time. When we measure, we need to register the data immediately and plot the results on a control chart. When the process is out of control, we need to find the root cause and take corrective action. It makes no sense to only analyze the data at the end of the day, because then we can only report on the quality produced and the fact that the process was not in control; it is usually too late to get proper information about the root cause and take corrective action. We call this the control level.
Operators should be empowered to find the root cause and take corrective actions. If they are not capable of finding the root cause, or are not allowed to make the required corrective actions, they should inform the production support level (engineers, production management or maintenance), who have the responsibility to improve the process inputs and provide feedback to the operators on how the process has been improved. We call this the improvement level.
Production management needs to set targets for continual improvement so that priorities can be set. They should also provide the means to improve the production factors. We call this the planning level.
The method described is shown in the figure to the left
If SPC is implemented as described, you can expect maximum results from your SPC implementation.

Return to index

Lesson 1 – Variation and control charts

One of the most important names in the history of statistical process control is Deming, as explained above. Deming used a famous simulation, called the red bead experiment, to explain the principles of variation. While explaining the principles of variation, we would like to honor Deming by using his red bead box. In this lesson we are going to explain the principles of variation and how to analyze variation using control charts.

Imagine that you are interested in a process and you want to get better results from that process. It may be a manufacturing process or a service process, it may be in the public sector or you may work for a private company. We are going to count a particular type of thing which you would prefer did not happen. We will scoop beads from a box to simulate the process and any red beads scooped are going to represent the thing you would prefer to avoid.


We can take a number of scoops from the box and we can count the number of red beads drawn with every scoop.

The point to understand here is that it is random variation which is producing the different number of red beads in each scoop. Every process contains some random variation and the people who operate the process have no control over it.

Let’s now look at a table of the results.


Now, suppose that we did not know that these  results were due to random variation. Say, for example, that the figures come from a school and represent the number of pupils expelled for bad behavior each year.

Translate the numbers you find into headlines in your local newspaper:

“Big jump in school expulsions – officials want to know why”

“The number of expelled pupils has fallen since the new headmaster took over.”

“Number of expelled pupils has increased for 3 years in a row. Experts blame video games.”

It is very easy to fall into the trap of assuming that there is always a reason for figures going up or down. There is no reason – other than random variation – to explain why the number of red beads drawn by the paddle changes with every scoop. Every year, random variation alone will cause a different number of pupils to enter the school system with behavior problems.

Let’s look at how this information would look in a chart.


This is what random variation looks like. Putting past results in time sequence on a chart like this should make us less likely to jump to conclusions about an individual result. We might be less likely to assume that an upward or downward trend of just 3 or 4 results means that a long-term change is taking place.

However, we do need to know if a new policy, or a change in procedures, or a change to a process really affects the results. So we need something which points out the significant changes and encourages us to ignore the random variation. This is where control limits come in.


If we use the simulation to add new points on the chart all points should be between the control limit lines (although you might be unlucky and get a false signal).

Although there are lots of ups and downs on the chart, you should be able to get an overall impression that this process is stable. There is no obvious change over the long term.

In this process, whatever causes the low counts also causes the high counts. The things which cause the variation are common to all the results. That is why this type of variation is called common cause variation.

We also say that the results are “in statistical control”. We say this because as long as nothing changes, we can predict that this process will continue with approximately the same average, and the control limits will continue to show the maximum and minimum results that we can expect.

Now we will look at a set of results from a very different type of process.

You will notice that this process has about the same average number of defects as the previous chart but the chart looks very different.
This chart is showing that the process is unstable. There are two things on the chart which indicate instability:

  • some points are outside the control limit lines
  • some points have an “R” above them which indicates a “run”. The previous 6 points are on the same side of the average – it is unusual for random variation to produce 7 consecutive points above average or 7 consecutive points below average.

With this process, there is something else affecting it alongside the common causes of variation. Look at the results between 25 and 37 on the horizontal scale and compare them with the results between 37 and 50. Something must be causing them to be different. We call this a “special” cause of variation.

When special causes of variation are present in a process we say that the process is “unstable”. With an unstable process you cannot predict future results because we do not know when the special causes of variation will occur. If we have an unstable process, we should, whenever possible:

  • investigate what is causing the special variation,
  • learn whatever we can from the investigation
  • improve the process by making the best conditions permanent
  • put controls in place to prevent the special variation from returning

In the simulation we have arranged that no more special variation will occur, so we can scoop more beads. But first we will write a note on the control chart.

We scoop 30 more subgroups and there seem to be consistently fewer red beads now; since the red beads represent something we do not want, there is reason to believe that we have improved this process. We should now recalculate the control limits using results which come after the improvement.

We now have new control limits which indicate the limits of common cause variation for the new improved process.
These new limits will show if any new special causes of variation appear. If one does appear, we should investigate it and remove it.

If we want to improve a process which contains only common cause variation, we will need to investigate the factors which are constantly affecting the process and influence every result.

Lesson 1 Summary Variation and Control charts:

  1. All processes contain variation.
  2. We must distinguish between special cause variation and common cause variation. We need to know this difference because the things we will have to do to remove or reduce the two types of variation are very different. We need to reduce variation to improve processes.
  3. The way to distinguish between common cause variation and special cause variation is to use a control chart.
  4. Before we can consider a process to be “under control”, efforts must be made to remove special causes of variation. We must also learn from each incident of special variation and take action to make sure that these types of changes do not happen again.
  5. If we want to improve a process which contains only common cause variation, we need to investigate the factors which affect every result.

End of Lesson 1 Statistical Process Control

Return to the index

Lesson 2 – Xbar & Range control chart

In lesson 1 we discovered why we need control charts. Now we are going to learn how to draw the control charts.

We use different types of control charts for different types of data. Data can be divided into two major categories, variables and attributes

Variable data is any measurement which has a continuous scale. For example:

  • a length in millimetres
  • a weight in grams
  • a temperature in degrees

Attribute data is based on discrete counts. For example:

  • the number of blemishes on a surface
  • the number of faulty products
  • the number of unpaid invoices

With variable data we can measure to any accuracy that we want, for example 12.5, 3.075 etc. Attribute data, on the other hand, can only have whole number values like 1, 3, 12 etc.

In this lesson we are going to look at a process where we have been given a preferred target value for a variable measurement. Our job is to try to get results as close as possible to that target.

The process is a tennis ball launcher and we are trying to shoot balls at a target distance. The target is 500 and the specifications are 300 to 700. The locations of the previous shots fired are shown on the screen.

Imagine that you are the operator of a machine – the launcher. Your job is to fire balls at the target and get them to land as close as possible to the ideal value of 500. First we need to ‘centre’ the process.
When we fire one ball from the launcher we find the value is 415.
The customer of this process wants the landing position to be 500, so what do we do now? Should we compensate for the error by moving the launcher?

The answer is NO, because we don’t know the process yet. There is always a certain amount of variation in every process, and if we only have common cause variation and we try to adjust for it, we will actually cause more variation in the output.

Look at the display of Landing positions at the right of the launcher. Each landing position is different and the variation may be due to common causes which are always present in this process.

We now have more information to get an idea of how the process is performing. We can calculate the average of the shots, which is 413 (rounded).

If we fire 45 more shots we will get a better estimate of the real average of the process. After 50 shots the process average is 419 (Rounded)

We can now move the launcher so future shots will center around 500. Assuming that nothing changes in this process, future output should now be centered on 500

“Assuming that nothing changes” is a big assumption. At this stage we do not know very much about our process and we do not know if things are likely to change over time. So, we need to find out if the process is “statistically stable”. We fire off another 100 shots then we will create a control chart.

The landing position is a continuous scale. We are actually using whole numbers for the results (423, 657 etc.) but there is nothing to stop us using more accurate measurements if we wanted to (423.45, 657.09 etc.).

This type of data is called variable data. We will use an Xbar and Range chart as the control chart for this process. In an Xbar and Range chart, the data is arranged into subgroups.

Let’s take a quick look at the data table.
Notice that there are 5 columns with “Landing position ( )” at the top. The number in brackets means that the column is part of a subgroup. For each row of the table, the data in these five columns represents one subgroup. Look at subgroup 25.

Look at the row with the number 25 in the grey column at the left. This subgroup starts at shot number 121. The subgroup columns contain the results of shots 121, 122, 123, 124 and 125.

Now let’s look at the control chart of this data:

An Xbar and range chart contains two graphs. For each row in the data table, the subgroup average is plotted (Xbar) along with the largest value in the subgroup minus the smallest value (Range). Let’s confirm this:

The last subgroup is highlighted and on the right side we see that Xbar = 533.2 and the Range =69.

The 5 measurements of subgroup 30 are 499, 567, 550, 498 and 552.

We can confirm that the value on the Range chart (69) is the highest value (567) minus the smallest value (498), and that the Xbar value (533.2) is the sum of the values divided by 5.

The point at which the launcher was moved is shown on the chart. This is the equivalent of a written note on a paper chart and this is the sort of information an operator should record on a control chart.

Next we will calculate the control limits and draw control lines on the chart. The purpose of these lines is to show when we should suspect that something has changed which affects the process (in other words, a special cause of variation has occurred).

We calculate the control limits from a section of the data. Of course we know that the launcher has been moved and moving the launcher is a special cause of variation, so we should calculate the limits with results which come after the launcher move. For now, we will not worry about how the calculations are made.

All the results after moving the launcher are within the control limits, so the process is probably stable or “in statistical control”. This means that all the variation comes from common causes. Common cause variation is just the normal random variation which is inherent in the process.

If we want to be in full control of a process we must use the charts to identify when special cause variation occurs, determine if things were better before or after the change, then make one of these situations permanent.

Let’s carry on producing. We fire off another 100 shots
Look at the control chart. All we need to know now is whether there has been any change in the process since the lines were calculated. Is the output stable?

Look to see if any of the points are outside the control limits. It looks as if something unusual happened around subgroup 40 (launcher shot 200). Subgroup average drops below the control line so a special cause of variation has occurred. As an operator, your job is to produce results as close as possible to 500 but the average landing position has suddenly changed.

You could, of course re-centre the process (move the launcher). This might help in the short term, but you have no idea whether things might suddenly change back to normal. The only really satisfactory solution is to carry out an investigation, find the source of the special cause of variation, learn from what happened, then make sure that this kind of change does not occur again.

A word now about specification or tolerance limits. In most industrial processes, the operator is given specification or tolerance limits as well as the target value. However, these specification limits should always be looked on as representing the MINIMUM acceptable quality from the process. World class quality does not come from treating everything within the specification limits as equally acceptable. We must try to produce as close as we can to the target value. This is what the customer really wants.

In our bouncing ball process, an unknown special cause of variation made the subgroup average fall at around subgroup 40 (shot 200). It might be that the individual results are still within the specified tolerance limits, but our customer would prefer the results to be 500. So we must make efforts to produce with the average output at 500 and the minimum variation that our process is capable of. So we must investigate and remove special causes of variation even if we are still producing within the specification or tolerance limits.

When we did the investigation we found that one batch of balls has slightly less bounce than normal. We discard this batch and demand that our supplier provides us with statistically stable product (they can only be sure of doing this by using control charts).

We have removed this special cause of variation so things should return to normal from shot 256. We fire off another 50 shots.

The control chart should show clearly that a change occurred around subgroup 40 and things returned to normal around subgroup 53.

Now we will look at how the control limits were calculated.
The mathematics are not difficult. You need to find the average Xbar (the average of the subgroup averages) and the average Range for the section of the data that you use to calculate the limits. You also need to look up in a table the constants called “A2”, “D3” and “D4” for the subgroup size that you are using.
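For those who want to see the arithmetic, here is a rough sketch in Python. The constants A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard values for a subgroup size of 5, and the second subgroup in the example call is made up purely for illustration.

```python
# Sketch of the Xbar & Range control limit calculation for subgroups of 5.
def xbar_r_limits(subgroups):
    """subgroups: a list of subgroups, each a list of 5 measurements."""
    A2, D3, D4 = 0.577, 0.0, 2.114                    # constants for subgroup size 5

    xbars = [sum(s) / len(s) for s in subgroups]      # subgroup averages
    ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges (max - min)

    xbarbar = sum(xbars) / len(xbars)                 # average of the subgroup averages
    rbar = sum(ranges) / len(ranges)                  # average range

    return {
        "Xbar chart (LCL, CL, UCL)": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "Range chart (LCL, CL, UCL)": (D3 * rbar, rbar, D4 * rbar),
    }

# The first subgroup is the one quoted in the text (499, 567, 550, 498, 552);
# the second is invented purely to have more than one subgroup.
print(xbar_r_limits([[499, 567, 550, 498, 552], [423, 457, 512, 488, 501]]))
```

In practice you would use 20 or more subgroups from a stable period of the process to calculate the limits.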

By distinguishing between special cause variation and common cause variation, control charts can help operators and managers to run processes which produce on-target with minimum variation.

If special cause variation is present, we must find the root cause and stop this from occurring again in the future. We ask the questions:

“What happened at approximately that point to change the results?”. and

“How can we prevent this from happening again?”

If no special causes are present and we want to get better results, we ask different questions:

“Looking at all the results, is the average off-target?” and

“Looking at all the results, why is there so much variation?”

To reduce common cause variation we might need better machinery, more frequent maintenance or less common cause variation within raw materials.

Lesson 2 summary:

  1. When we are given a target or ideal value for something, we should always try to get results with the Average on target and with the minimum amount of variation. We should not just try to get results within the specification limits.
  2. Control charts can help to distinguish between common cause variation and special cause variation.
  3. A good type of control chart for variables is the Xbar and Range chart. Xbar and Range charts use data arranged into small subgroups.
  4. It is not enough just to react to special causes of variation by adjusting the process to compensate. World class quality comes from removing the special causes of variation and preventing them from returning. Because this often requires management action, SPC will only work properly when managers understand the role they have to play in creating stability and reducing variation.

End of Lesson 2 Statistical Process Control

Return to the index

Lesson 3 Histograms and distributions

A histogram is a way of showing a set of measurements as a picture.

Let’s fire some balls from the tennis ball simulation and then look at a histogram of the landing positions.

In this histogram, the measurements are the landing positions of the balls from the launcher simulation. The possible landing positions are set out on the horizontal scale, and this is divided into a number of sections. For each section, a column is drawn and the height of the column represents the number of balls which have fallen within that section of landing positions. For example, the column between “300” and “350” on the horizontal scale is 2 units high; this means that 2 balls have landed between 300 and 350.
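As a small illustration of how the columns are counted, here is a sketch in Python that sorts some made-up landing positions into 50-unit sections and prints one text column per section:

```python
# Counting landing positions into 50-unit histogram sections.
from collections import Counter

landing_positions = [415, 433, 487, 512, 508, 345, 561, 498, 470, 526, 301, 644]  # made-up values

bin_width = 50
counts = Counter((pos // bin_width) * bin_width for pos in landing_positions)

for start in sorted(counts):
    print(f"{start}-{start + bin_width}: {'#' * counts[start]}")   # column height = number of balls
```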

Let’s check that the histogram has been drawn correctly for our data.


Now we will fire more balls and look at the histogram containing more results.
This is how the histogram looks after 60 shots.

Let’s put still more results into the histogram.
Here you see the histogram after 510 shots.
It should become clear after we have fired this many shots that the highest columns are near the middle of the histogram. This means that most of the balls land near the middle of the range of possible results.

The shape of the histogram is called the “distribution”. It shows how the measurements are “distributed” across the range of possible measurements.

A particular bell-shaped histogram pattern is known as the ‘normal’ distribution. It occurs frequently in nature and is common in industrial processes. We know a great deal about normal distributions and this helps us to make some general statements about the outcome of processes.

We will now produce a normal distribution starting with a new set of shots
Notice that the histogram shape looks like a bell with a high middle and tails at each end.

This box shows some figures for the data which is used to make the histogram. We are going to describe the first two figures under the “Statistics” section.


The Average is calculated in the normal way from the individual results:
The individual results are added up then divided by the number of results.

Standard Deviation:

‘St dev’ means Standard Deviation.
Standard Deviation gives us a figure for how much the individual values in a set of measurements are spread around the Average. A set of measurements where most of the values are near the Average has a low Standard Deviation; a set of measurements where most of the values are far away from the Average has a high Standard Deviation.
You can create and use control charts without knowing how to calculate Standard Deviation. For those who want to know, here is how to calculate a Standard Deviation:

First, find the Average of all the values
For every individual value, find the distance from the Average.
Square this number (multiply it by itself).
Add all the squares together
Divide by the number of measurements minus one.
Find the square root of this number.
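
Written out as code, the six steps look like this (this is the sample Standard Deviation, dividing by the number of measurements minus one; Python’s built-in statistics.stdev gives the same answer):

```python
import math

def standard_deviation(values):
    average = sum(values) / len(values)                        # step 1: the Average
    squared_distances = [(v - average) ** 2 for v in values]   # steps 2-3: distance from Average, squared
    total = sum(squared_distances)                             # step 4: add the squares together
    variance = total / (len(values) - 1)                       # step 5: divide by n - 1
    return math.sqrt(variance)                                 # step 6: square root

print(standard_deviation([499, 567, 550, 498, 552]))           # the subgroup values quoted in Lesson 2
```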

I repeat that you do not have to remember how to calculate Standard Deviation to draw control charts or to use the charts to improve processes. All you need to know is that Standard Deviation is a measurement of spread.
For Normal distributions, we can use Standard Deviation to make some useful statements about a set of measurements. We can also make some predictions about future measurements from the same process if it is reasonable to assume that the process will not change (it will only be reasonable to make this assumption if the process is stable).
If past measurements show a normal distribution, and the process is stable, then we can say :

As long as the process stays stable:
about 68% of results will lie between one Standard Deviation below Average and one Standard Deviation above Average,
about 27% will lie between one Standard Deviation and two Standard Deviations from the Average,
about 4.3% will lie between two Standard Deviations and three Standard Deviations from the Average,
only a very small proportion (about 0.3%) will be more than three Standard Deviations from the Average.
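
These percentages can be checked against the normal cumulative distribution. A quick sketch using only Python’s math module:

```python
import math

def fraction_within(k):
    """Fraction of a normal distribution within k Standard Deviations of the Average."""
    return math.erf(k / math.sqrt(2))

print(f"within 1 st dev:        {fraction_within(1):.1%}")                       # about 68.3%
print(f"between 1 and 2 st dev: {fraction_within(2) - fraction_within(1):.1%}")  # about 27.2%
print(f"between 2 and 3 st dev: {fraction_within(3) - fraction_within(2):.1%}")  # about 4.3%
print(f"beyond 3 st dev:        {1 - fraction_within(3):.1%}")                   # about 0.3%
```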

Let’s see if this is true by checking one of these statements.
The green zone is up to 1 Standard Deviation either side of the Average.
The yellow zone is more than 1 but less than 2 Standard Deviations from Average
The purple zone is more than 2 but less than 3 Standard Deviations from Average
The red zone is more than 3 Standard Deviations from Average

The blue lines show the specification limits for the launcher simulation. In the “Conformity” section of the information box you will see figures for the number of results which are out of specification (OOS) and the percentage of the results which are within specification. If we drag the specifications to put them at the border of yellow and purple the percentage OK is recalculated.
The percentage OK figure now shows how many of the results are within 2 Standard Deviations from Average. You see it is 95.1 % which is close to what we were expecting from the information above.

So, we now know that for a normal distribution, the majority of results will be less than one Standard Deviation from Average. However, we also know that there will be a small number of results more than 3 times Standard Deviation from Average. We cannot tell WHEN these extreme results will happen, but we know that they will happen sometime.

Using statistics to predict the future:

These percentages tell us approximately what has happened in the past, but often we are asked what sort of results we will get in the future from a process.

We can predict that the process will continue to produce roughly the same proportion of results in the 1, 2 and 3 standard deviation zones IF THE PROCESS DOES NOT CHANGE IN ANY WAY.

Also, the percentages given above for the normal distribution are only true over the very long term. It would be wrong to suggest that we can tell with confidence what the measurements will be in any one particular batch of goods.

Lesson 3 summary:

  1. A histogram is a way of showing a set of measurements as a picture.
  2. The shape of the histogram is known as the distribution.
  3. Standard Deviation gives a figure for how much spread there is in a set of measurements.
  4. If past measurements show a “normal” distribution, and the process is stable, then as long as the process remains stable, we can predict the approximate number of results which will be at different distances from the Average.

End of Lesson 3

Return to the index
Lesson 4 – Power of control charts to detect instability

In lesson 2, we used a section of data to calculate control limits for a process. In that example, the process was stable in the early stages and we used data from that period to calculate the control limits.

In this lesson we are going to investigate what happens if the process is unstable while producing the data which is used to calculate control limits. We ran a simulation of a process which is unstable in the early stages.

You now see that the chart indicates that the process is unstable, even though the data used to calculate the control limits contains instability.

This ability of Shewhart control charts to detect special causes of variation, even when these special causes are present in the data used to calculate the control limits, is very important. Most industrial processes are not naturally in a state of statistical control.

The control limits are set at 3 times sigma from the average. Sigma here is an estimate of the standard deviation, calculated from the average range of the subgroups (the range being the maximum value minus the minimum value) – in other words, from the within-subgroup variation.

The reason that the Xbar chart detects special variation is because the control limits are calculated using an estimate of standard deviation based on the average subgroup range. Since the subgroups are taken from consecutive products, this means that all the variation between subgroups is filtered out.

When using control charts it is important to ensure that subgroups contain mostly common cause variation. Normally this can be done by measuring a small number of consecutive products for each subgroup, and having a time gap between the subgroups.

X or Individual Value chart:

Sometimes it is not possible to take consecutive measurements from a process which can be grouped into a subgroup. For example, there may be virtually no variation between consecutive measurements, e.g. the temperature or the pH value of a bath. In this case, we will have to use an X & mr (individual value and moving range) chart.

In this type of chart we plot the individual measurements on one graph and the differences between the consecutive measurements on the other graph, this is called the Moving Range (sometimes only the individual points are shown, the moving range chart is omitted).

Here is an example to show how moving ranges are calculated.

Measurement:    2.5   3.1   3.3   2.4   2.9   2.3   2.4
Moving range:         0.6   0.2   0.9   0.5   0.6   0.1

If we run a process which is unstable in the early stages and we chart the individual values, we see the control chart below. We see that the chart is able to detect disturbances in both the average and the range.

The chart shows some instability, both by having some points outside the control limits and because there are long runs in the data. A run is where a number of consecutive results are all above average or all below average.

Let’s look at how the control limits for an individual value chart are calculated:
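As a sketch of the usual calculation (the constants 2.66 and 3.267 are the standard factors for a moving range of two consecutive values):

```python
def x_mr_limits(values):
    """Control limits for an X & mr chart from a list of individual measurements."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]  # difference between consecutive points
    xbar = sum(values) / len(values)                                  # average of the individual values
    mrbar = sum(moving_ranges) / len(moving_ranges)                   # average moving range

    return {
        "X chart (LCL, CL, UCL)":  (xbar - 2.66 * mrbar, xbar, xbar + 2.66 * mrbar),
        "mr chart (LCL, CL, UCL)": (0.0, mrbar, 3.267 * mrbar),
    }

# Using the seven measurements from the moving range example above:
print(x_mr_limits([2.5, 3.1, 3.3, 2.4, 2.9, 2.3, 2.4]))
```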


During an implementation we will also implement control charts where removing instability is not the highest priority because it is not the most critical characteristic. In that case we may use different ways to calculate limits. This advanced subject is outside the scope of this training.

Lesson 4 summary:

  1. Shewhart control charts will indicate instability even if instability is present in the data used to calculate the control limits.
  2. We must use our knowledge of the process when deciding how to sample results and arrange them into subgroups. We should do this in a way which we know will reduce the chances of special cause variation occurring within subgroups.
  3. If we do not know much about the process, or we cannot be confident that little special cause variation will be present within subgroups, then we should use an X (individual values) chart or an X & mr (individual value and moving range) chart. With these charts, the control limits are based on the average difference between consecutive individual results.
  4. If we want to detect if the process is stable it is a mistake to calculate the control limits from the deviation of the individual results from the Average. The distance of the control limits from Average is calculated from a short-term dispersion statistic (subgroup range or moving range).

End of Lesson 4
Return to the index

Lesson 5 – Binomial control charts

In lesson 2 we looked at Xbar and Range control charts. In lesson 4 the X (individual value) chart was introduced. In both these cases, we used variable or measurement data. This is data which comes from a continuous scale.

There is a different type of data called “attribute” data. Attribute data comes from discrete counts. For example:

  • the number of blemishes on a surface,
  • the number of faulty products
  • the number of unpaid invoices

With attribute type data, in order to choose the correct type of control chart, we have to look at the way the data was generated. If we know in advance that the set of data will exhibit the characteristics of Binomial data or Poisson data then these types of charts should be used.

Binomial data:

Binomial data is where individual items are inspected and each item either possesses the attribute in question or it does not. Binomial means “two names” so if each item can be put down as either a pass or a fail then we can consider the data gathered to be Binomial data.

For example, consider the attribute ‘blue’ in samples of beads scooped from a box which contains beads of many colours. Each bead scooped is either blue or it is not blue – so if we create a stream of samples taken from the box and we count the number of blue beads in the samples, then we can assume that the resulting data will be Binomial type data.

Other examples of counts which would generate binomial data are:

  • Late deliveries
  • Non-conforming goods
  • Out of specification components.

The random variation of Binomial data acts in a particular way, because of this we can calculate where to put the control limits. All we need to know is the average of the data set and the sample size.

We can generate Binomial data using the simulation of scooping beads from the box – so let’s do this now. The bead box has 20% red products (bad) and 80% white products (good):

This type of chart is called an “np” chart. It is used when we know we have Binomial data and the sample size does not change. The points on the np chart are simply the number of items in the sample which have the attribute being counted (in this case we are counting beads with the attribute “red”)

Let’s look briefly at how the control limits were calculated:

In these formulae “n” is the sample size (in this case 50, the size of the paddle), and “p bar” is the average proportion of the samples which have the attribute being counted.
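
As a sketch, with n = 50 and p-bar taken as 0.2 (the box contains 20% red beads):

```python
import math

def np_chart_limits(n, p_bar):
    """np chart: centre line and 3-sigma control limits for a constant sample size n."""
    centre = n * p_bar
    three_sigma = 3 * math.sqrt(n * p_bar * (1 - p_bar))
    lcl = max(0.0, centre - three_sigma)      # a count cannot go below zero
    return lcl, centre, centre + three_sigma

print(np_chart_limits(50, 0.2))               # roughly (1.5, 10, 18.5)
```

In practice p-bar is estimated from the data as the average count per sample divided by the sample size.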


Binomial data with different sample sizes:

If we have binomial data but the sample size is not constant, then we cannot use an np chart. We will now use the simulation to add new samples to the data we have already started, but we will change the sample size:

When the sample size is not constant for every scoop we have to convert counts to a rate or proportion. The resulting chart is called a “p” chart. We convert to a rate by dividing the attribute count by the sample size.

You will notice that there is a step in the control limit lines at the point where the sample size changed. Before we look at the mathematics of the control limits, let’s try to understand why there is a step.

The purpose of the control limits is to show the maximum and minimum values that we can put down to random common cause variation. Any points outside the limits indicate that something else has probably occurred to cause the result to be further from the average.

As we have said before, the random common cause variation of Binomial data acts in a particular way. The variation with large sample sizes is smaller than the variation with small sample sizes. We can use the simulation to demonstrate this.

We will change the subgroup size to 5 and take 30 more subgroups.

Look at the results in the Data Table and keep in mind that the proportion of red beads in the box has not changed. In this exercise there are always 20% red beads in a box.

When the sample size is 5, the number of red beads scooped is often 1 (20% of the sample size), but it is not unusual to get 0, or occasionally 3 (60% of the sample size). In rare cases, as in this simulation, we can even get 4, and then we have a false alarm.

Now let’s use a very big sample size:

Look again at the results in the Data Table. Remember that 20% of the beads in the box are red and that 20% of the average sample size is now 30.

As you would expect most of the results are near 30, but even the most extreme results are nowhere near 0% or 60% of the sample size (60% of the sample size would be 90).

Now let’s see how the control chart handles these extreme sample sizes.

Look at the way the points which correspond to the small sample size (samples 60 – 90) vary up and down, then compare this with the variation with the large sample size (after 90). Keep in mind that we are not looking at absolute numbers here; we are looking at the proportion of the sample which is red.

Look at the position of the control limits for the small subgroup size and the large subgroup size.

This illustrates one of the basic points about using control charts for attributes. Small subgroup sizes produce control charts which are not sensitive, because there is so much random common cause variation in small samples. Large sample sizes produce more sensitive control charts.

What this means is that if a process has a special cause of variation acting on it from time to time, it may not produce any points outside the control limits if the sample size is small. The same special cause of variation is more likely to produce points outside the control limits if we use a large sample size.

Let’s have a quick look at the mathematics for the control limits:

Notice that the limits have to be calculated separately for each subgroup size. The example given is for sample number 1 (subgroups 1 to 30).
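
A sketch of that calculation, showing how the limits tighten as the sample size grows (p-bar is again taken as 0.2):

```python
import math

def p_chart_limits(p_bar, sample_size):
    """p chart: centre line and 3-sigma limits for one subgroup of the given size."""
    three_sigma = 3 * math.sqrt(p_bar * (1 - p_bar) / sample_size)
    return max(0.0, p_bar - three_sigma), p_bar, min(1.0, p_bar + three_sigma)

print(p_chart_limits(0.2, 5))      # small scoop: very wide limits
print(p_chart_limits(0.2, 150))    # large scoop: much tighter limits
```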

Criteria for binomial data:

We can only use an np chart or a p chart if we know in advance that the data produced will be binomial data. The full conditions which have to be satisfied before we can consider a set of data to be Binomial are:

  1. The count must arise from a known number of discrete products (goods or services).
  2. Each product inspected must either have, or not have, the attribute which we are counting.
  3. The products inspected must not influence one another. If one item has the attribute, this fact must not change the likelihood of its neighbours having the attribute.

Lesson 5 summary:

  1. Data from a process can be divided into two major categories, variables and attributes.
  2. Binomial data is attribute data where individual items are inspected and each item either possesses the attribute in question or it does not.
  3. An “np” chart is used for Binomial data if the sample size is constant.
  4. A “p” chart is used for Binomial data if the sample size is not constant.
  5. Before using an “np” chart or a “p” chart we have to make sure that all the conditions for Binomial data are met.
  6. When applying SPC to attribute counts, small sample sizes make it difficult to distinguish between common cause variation and special cause variation.

End of Lesson 5

Return to the index

Lesson 6 – Poisson and “X” control charts using attribute data

In lesson 5 we created control charts for Binomial data.

Binomial data is where we look at products or services and, for each, decide whether it is a “pass” or a “fail”. It is not always best to classify a whole product in this “all or nothing” way. For example, we might want to count the number of blemishes on a surface. We will never know the number of ‘non-blemishes’ on the surface, so the data gathered is not Binomial data.

Criteria for Poisson data:

We can consider data to be Poisson type data if:

  1. Discrete counts of an attribute can be made. e.g. tears in material, cracks on surfaces etc.
  2. The counts arise from a known area of opportunity.
  3. As with Binomial data, the attributes must arise independently of one another. In other words, there must be no mechanism which makes the attribute normally occur in clusters.
  4. There are relatively few incidents of the attribute appearing compared with what might happen in the worst possible circumstances.

Another way of looking at this is that Binomial counts represent DEFECTIVES whereas Poisson counts represent DEFECTS.

With Poisson data, we use a “c” chart if the sample size is constant, and a “u” chart if it is not. With Poisson type data, the sample size is sometimes called the “area of opportunity”.

Let’s look at a file with Poisson type data:

A “c” chart is very similar to an “np” chart: the points plotted are simply the numbers in the data column. The only difference is the way the control limits are calculated:

Look at how the limits are calculated

The control limits for a “c” chart are calculated from the average attribute count for all the samples. Notice that the sample size is not used anywhere in these calculations.
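
A sketch of the calculation, using made-up defect counts:

```python
import math

def c_chart_limits(counts):
    """c chart: centre line and 3-sigma limits from the average count per sample."""
    c_bar = sum(counts) / len(counts)
    three_sigma = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - three_sigma), c_bar, c_bar + three_sigma

print(c_chart_limits([4, 7, 3, 6, 5, 9, 2, 4]))   # made-up counts, e.g. blemishes per panel
```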

Poisson data with different “areas of opportunity”:

Now let’s look at a chart for Poisson type data where the sample size or area of opportunity is not constant.

Because the “area of opportunity” is not the same for all samples, we need to convert each attribute count into a rate before plotting the points on the chart. The resulting chart is called a “u” chart. The rate is simply the attribute count divided by the sample size or area of opportunity for the sample.

Look at how the limits are calculated

Notice that the control limits are tighter for larger areas of opportunity. This is for the same reasons that the control limits vary for different sample sizes in “p” charts.
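
A sketch of the “u” chart calculation with made-up counts and areas of opportunity, showing the tighter limits for larger areas:

```python
import math

def u_chart_limits(counts, areas):
    """u chart: one (LCL, CL, UCL) per sample, each based on that sample's area of opportunity."""
    u_bar = sum(counts) / sum(areas)                   # overall rate of defects per unit of opportunity
    limits = []
    for area in areas:
        three_sigma = 3 * math.sqrt(u_bar / area)      # tighter for a larger area of opportunity
        limits.append((max(0.0, u_bar - three_sigma), u_bar, u_bar + three_sigma))
    return limits

# Made-up example: tears counted on pieces of material of different sizes
for limit in u_chart_limits([4, 9, 3, 12], [2.0, 5.0, 1.5, 6.0]):
    print(limit)
```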

The X (individual value) control chart with attribute data:

In a lot of cases the Binomial or Poisson charts are not appropriate because one of the conditions is not met. In that case we can use an X or ‘individual’ chart. Control limits for X charts are empirical limits based on the variation in the data, and these are almost always valid.

Let’s compare a binomial chart with an X chart using the same data. First we will generate some data:

Now we will create a Binomial chart and an X (individual values) chart from the same data.

Look at the upper control limit (UCL) and lower control limits (LCL) for each chart.

If we cannot be confident that the data we have fulfills the conditions to be binomial or Poisson data, then we can usually rely on an X chart to do a pretty good job. However there are limitations:

Let’s take more subgroups, but now with a subgroup size of 20. We now have a non-constant sample size.

Sometimes X charts should be rate charts when the sample size is not constant, and sometimes they should not – it depends on what the measurement represents. In our case the number of red beads scooped definitely depends on the sample size, so we should look at an X chart based on rates.

The p chart and the X rate chart are both showing proportions and the control limits have been calculated using scoops 1 – 30. Compare the two charts. Look at the data and the control limits before and after the change of sample size (the change was at subgroup number 30).

Because we have not changed the number of beads in the box, we are looking at the results of a stable process so in theory control charts should not show any points outside the control limits.

There is always more random common cause variation with small sample sizes and you can see that the points on both charts jump up and down more after we change to a smaller sample size.

Because the control limits on a binomial chart are based on a theoretical knowledge of the way binomial data behave, the control limits change to accommodate the different sample sizes.

On X charts, the control limits are based on the variation between successive points in the data stream. When this variation changes due to altering the sample size, this can be misinterpreted as a process change.

X charts with low average:

When the average count is very small, another problem prevents us from using X charts. With attribute counts, the data can only take integer values such as 6, 12, 8 etc. Values such as 1.45 cannot occur. The discreteness of the values is not a problem when the average is large, but when the average is small (less than 1) then the only values which are likely to appear are 0, 1, 2 and occasionally 3.

The whole idea of control charts is that we want to gain insight into the physical variations which are happening in a process by looking at the variation of some measurement at the output of the process. When the measurements are constrained to a few discrete values then the results are not likely to reflect subtle physical changes within the process. For this reason X charts should not be used for attribute counts when the average count is low.

Lesson 9 gives more information about using attribute control charts when the average count is low.

Lesson 6 summary:

  1. Poisson data is where we are counting defects (whereas binomial data is where we count “defectives”)
  2. With Poisson data, we use a “c” chart if the sample size is constant.
  3. With Poisson data, we use a “u” chart if the sample size is not constant.
  4. With Poisson type data, the sample size is sometimes called the “area of opportunity”.
  5. Before using an “c” chart or a “u” chart we have to make sure that all the conditions for Poisson data are met.
  6. If we cannot be sure that the data will meet all the conditions to be Binomial or Poisson data, then we may be able to use an X chart, but the average count must be greater than 1.

End of Lesson 6
Return to the index

Lesson 7 – Pareto chart

A Pareto chart helps us to identify priorities for tackling problems. The Pareto principle (named after a 19th century Italian economist) states that 80% of defects or problems usually arise from about 20% of the causes.

Let’s look at data from an imaginary process.

The columns in the table represent 10 types of non-conformity or imperfection which can occur in Assembly M412. Each of the 25 rows contains the results of one inspection.

Let’s look at this data using a Pareto chart:

In a Pareto chart, the categories of data are shown as columns and the height of each column represents the total from all the samples.
The order of the columns is arranged so that the largest is shown on the left, the second largest next and so on. Since these counts usually represent defects or non-conformities, the biggest problems are therefore the categories on the left of the chart.

Our Pareto chart makes it immediately obvious that the most frequent problem is Smidgers appearing on the assembly.

It is common practice on Pareto charts to superimpose a cumulative percentage curve.
At each point on this curve you can see the percentage of the overall number of non-conformities or imperfections which is accounted for by the categories to the left of that point. This is best illustrated by an example. We have added a red line for the second defect, Scrim pitted. By eye, follow the red lines from the column to the % scale on the right.

The red line points to just under 80%. This means the first two columns account for nearly 80% of all the non-conformities or imperfections that occur in Assembly M412. If we concentrate our efforts on reducing the number of smidgers and pitted scrims, then even if we are only partially successful, we are likely to make a substantial difference to the number of assemblies which we have to send for rework.
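
The mechanics of the cumulative curve are simple: sort the category totals from largest to smallest and keep a running percentage. A sketch in Python (the counts are made up for illustration; three of the category names come from the example above, plus an “Other” bucket):

```python
# Build a Pareto table: sort categories by count and add a cumulative percentage.
defect_totals = {"Smidgers": 112, "Scrim pitted": 74, "Gear leak": 9, "Other": 25}  # made-up counts

total = sum(defect_totals.values())
cumulative = 0
for name, count in sorted(defect_totals.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{name:<14}{count:>5}{cumulative / total:>9.1%}")
```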

Of course, all types of problems do not have an equal impact in terms of cost or importance. So if we know the cost of putting right each type of problem, then it is better to draw the Pareto chart with the column heights representing the total cost.

A “unit cost” value has been entered for each column. We are now looking at the total cost associated with each column (total number multiplied by unit cost). You will see that the left vertical scale is now labeled “$ US”.

When the chart is showing costs, we get a different picture from the one we get when it is showing numbers. Although smidgers are the most common imperfection in assembly M412, they are easy to remove – a quick wipe with a cloth is all that is needed. A pitted scrim, on the other hand, needs the assembly to be dismantled. A leaking gear housing is the big nightmare – but fortunately they are not very common.

We see that pitted scrims account for about 60% of all rework costs, so this is the problem which is creating the highest cost to the company.

Although we do not get many gear leaks, they are actually the second biggest problem in terms of costs. Smidgers do not cost the company a lot of money despite the fact that they occur in large numbers.

So looking at Pareto chart for M412 costs, we should concentrate our efforts in eliminating pitted scrims and gear leaks.

Although a Pareto chart clearly identifies the major causes of problems, you also have to consider the amount of effort required to solve a problem. It might be that an issue is very easy to fix, so make sure you always briefly review all issues before you start to solve the most important problems.

Pareto charts are used in a lot of different situations and can be adapted to give the right information. Let’s look at a Pareto chart of downtimes.

This Pareto chart is shown with the bars horizontal. With downtime analysis the total downtime is important, but more information is required. In this Pareto chart we see that a label is added with the total downtime in minutes, and the number of downtimes is also given. One long downtime might require a different approach than a large number of short downtimes.

In this Pareto the color of the bar indicates a downtime category.

Lesson 7 summary:

  1. A Pareto chart helps us to determine priorities.
  2. If the cost associated with one unit of each column is known then we can choose between displaying total numbers or total costs for each category.
  3. The columns on the left or rows on top have the highest totals.

End of Lesson 7
Return to the index

Lesson 8 – Scatter chart

When we want to reduce or eliminate a problem, we will need to come up with ideas or theories about what is causing the problem. One way to check whether a theory should be taken seriously is to use a scatter chart, often combined with regression analysis.

To use a scatter chart, we first have to take a series of measurements of two things over a period of time. The two things that we would measure are the problem itself, and the thing that we think may be causing the problem. We then plot the measurements on a scatter chart. The scatter chart will help us to see whether there is a mathematical relationship between two sets of measurements.

We will look at how to use a scatter chart using an example:

Flaking plugs:

A company makes large cylindrical casings known as “plugs” for a chemical process.

Analysis using a Pareto chart showed that the problem of surface flaking of the plugs was costing the company a lot of money.

A process improvement team was set up to try to reduce the number of “flakers”. The team quickly found that everyone had a different opinion of what was OK and what was a flaker. The first job, therefore, was to come up with a good definition of a flaker which everyone could use.

The process operators were shown how to use control charts and they started keeping a chart of the number of flakers produced in each batch. This chart showed that the process was unstable, so they knew that they had to look for special causes of variation. Mary, one of the process operators on the team, said she always felt cold on days when they had a lot of flakers.

The process operators started keeping records of the air temperature at the time the plugs were made. At one of the team meetings Jack pointed out that on at least two occasions when the number of flakers was outside the control limits, it was raining.

The team asked the lab for help to test the theory that rain was a factor. One of the engineers pointed out that it was actually raining that very day but there were very few flakers. Nevertheless he still suggested that it might be a good idea to measure the moisture content of the main ingredient.

First, let’s have a look at the control chart. Because each plug is either a flaker or it is not a flaker, the chart we should use is a binomial chart.

The data is out of control because some points are outside the control limits. There are also runs of 10 consecutive points above and below the average line – these also indicate instability.

Now let’s look at a scatter chart.

On this chart, the number of flakers is on the vertical axis and the air temperature is on the horizontal axis. For each row in the data table, a dot is put where the two values meet.

In a scatter chart, if the measurements on the horizontal axis are not related in any way to the measurements on the vertical axis, then the dots will appear at random, with no pattern visible. If there is a mathematical relationship between them then the dots will tend to group into a fuzzy line or curve.

In this case there does not seem to be any pattern to the points on the scatter chart. We can conclude, therefore, that there is no correlation between air temperature and the number of flakers produced. This means that we can say that the air temperature is not a factor in producing flakers.

Now let’s look at whether the moisture content has an influence.

On this scatter chart we see the number of flakers on the vertical axis and the moisture content on the horizontal axis. There appears to be a correlation between the two sets of numbers because the dots have formed into a fuzzy line. The chart shows that the number of flakers increases when the moisture content increases. This still does not prove that one causes the other: there could be a third factor which causes BOTH to change at the same time. Still, we seem to have a clue here.

We have added a “best fit” line through the points. The equation for this line is shown at the top right of the chart. The R-squared figure is a measure of how well the data fits the line. If R-squared = 1 then all the points lie on the line. If R-squared is 0 or near 0 then there is no correlation between the data on the two axes, so the line and the equation have no relevance.
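As an illustration, the following Python sketch shows how a best-fit line and the R-squared value can be calculated; the moisture and flaker numbers are invented and only meant to mimic the kind of data in this example.

import numpy as np

# Invented measurements: moisture content (%) of the main ingredient
# and the number of flakers in the corresponding batch.
moisture = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6, 2.9, 3.1, 3.4, 3.8])
flakers  = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 4.0, 6.0, 6.0, 8.0, 9.0])

# Least-squares best-fit line: flakers = slope * moisture + intercept
slope, intercept = np.polyfit(moisture, flakers, 1)

# R-squared: 1 - (residual sum of squares / total sum of squares)
predicted = slope * moisture + intercept
ss_res = np.sum((flakers - predicted) ** 2)
ss_tot = np.sum((flakers - flakers.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"flakers = {slope:.2f} * moisture + {intercept:.2f},  R-squared = {r_squared:.2f}")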

Now look again at the scatter chart for temperature. You can now see the best fit line through these points. The R-squared value is low, showing that there is no correlation between the two sets of data.

A few remarks are important when using scatter charts.

When looking at scatter charts it is important to include all other relevant information. It can help to look simultaneously at the control charts, the scatter charts and the data table to get a better understanding of what exactly is going on. This analysis is beyond the scope of this training.

Another important aspect of a scatter analysis is that the results are strongly influenced by outliers. If we take the scatter chart for temperature and add one outlier (18 flakers at 35 degrees) we get the following result:

You can see that one outlier drastically changes the R-squared value. So always look at the chart and ask yourself what exactly is happening.
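The following small Python sketch, again with invented numbers, shows how adding one extreme point can change the R-squared value from close to 0 to a much higher value.

import numpy as np

def r_squared(x, y):
    """R-squared of the least-squares line of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

# Invented, essentially uncorrelated data: air temperature (degrees C) vs flakers.
temp    = np.array([14, 16, 17, 18, 19, 20, 21, 22, 23, 25], dtype=float)
flakers = np.array([ 4,  2,  6,  3,  5,  2,  6,  4,  3,  5], dtype=float)
print(f"R-squared without the outlier: {r_squared(temp, flakers):.2f}")

# Add one outlier (18 flakers at 35 degrees) and recalculate.
temp2    = np.append(temp, 35.0)
flakers2 = np.append(flakers, 18.0)
print(f"R-squared with the outlier:    {r_squared(temp2, flakers2):.2f}")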

Lesson 8 summary:

  1. A scatter chart helps us to see whether there is a mathematical relationship (correlation) between two things which we have measured. This may help us to find the causes of problems.
  2. Even if we find a mathematical relationship this does not necessarily mean that one of them causes the other.

End of Lesson 8
Return to the index
Lesson 9 – Attribute control charts with low average

We are now going to look at a particular problem you can encounter with attribute control charts.

We will generate four streams of data from a process and create a special cause of variation in each data stream.

Now we will look at control charts for all four colours of beads.

For all four colours, the number of beads in the box doubles after scoop 20, so we would expect to see a clear signal that special cause variation has occurred.

The control limits are calculated using all the data, so scoops 1-20 should give results below average and scoops 21-40 should give results above average.

Look at the chart for red beads.
The chart shows points outside the control limits and there are long runs below and above the average, so the special cause variation is clearly visible on this chart.

Now look at the charts for Green, Yellow and Blue beads.
The special cause of variation is not so obvious on these charts, especially on the chart for blue beads.

Now we will look at the position of the upper control limit for each chart:
Look at the average (Avg.) figure and the upper control limit (UCL) figure for the red beads chart.
The upper control limit is roughly 1.4 times the average.

Look at the Average and Upper control limit values for the other three charts. Work out approximately how many times greater the UCL is than the Average.

As the average figure gets lower, the upper control limit gets further from the average. In the blue bead chart, the upper control limit is many times greater than the average.
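The effect can be shown with a small calculation. Assuming, for illustration, a c chart where the upper control limit is the average plus three times the square root of the average (the exact formula depends on the chart type, but the effect is similar), the ratio between the UCL and the average grows quickly as the average count gets smaller:

import math

# Illustration only: assume a c chart, where UCL = average + 3 * sqrt(average).
for c_bar in [50, 20, 5, 1]:
    ucl = c_bar + 3 * math.sqrt(c_bar)
    print(f"average = {c_bar:2d}:  UCL = {ucl:5.1f}  ->  UCL / average = {ucl / c_bar:.1f}")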

The only thing that is different between the four charts is the average number of beads scooped. This demonstration shows one of the inherent problems with control charts for attributes: if the average of the samples is low, then attribute control charts are not sensitive at detecting special cause variation.

Because we usually count problems or failures, this means that as we get more successful at removing problems, the charts become less effective at separating special cause variation from common cause variation.

The best way to overcome this problem is, whenever possible, to use a measurement from a continuous scale and plot this on a chart for variables (Xbar & range or X) rather than use count data with an attribute chart.

For example, if you are trying to produce a product or service within a given specification of time, weight or length, then make control charts from the time, weight or length measurements. These will indicate special variation much better than an attribute chart showing the number of out-of-specification products.

If it is not possible to use a variable, then there are other possible solutions.

  1. Use large sample sizes to make the average count as high as possible.
  2. If you have already collected the data, you could combine a number of the original samples into a smaller number of large samples. This would increase the average count per sample, but there is a danger here. If many of the new large samples contained products from before and after process changes, then the special variation could be hidden. It would only be sensible to combine samples if the new larger samples still contain products made at approximately the same time.
  3. We could measure the interval between occurrences of the attribute and plot this on an X chart. The interval could be the number of products which do not have the attribute, or the total volume of good product between each occurrence of bad product. We could also measure time intervals (months, days or minutes) between occurrences of the thing we are interested in.
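As an illustration of option 3, the following Python sketch converts an invented sequence of pass/fail inspection results into the number of good products between defectives; these interval values could then be plotted on an X (individuals) chart.

# Invented inspection results in production order: 0 = good product, 1 = defective.
results = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# Count how many good products were made between each pair of defectives.
intervals, count = [], 0
for r in results:
    if r == 1:
        intervals.append(count)
        count = 0
    else:
        count += 1

print("good products between defectives:", intervals)
# These interval values would be plotted on an X chart.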

With extremely low defect percentages you can also use the Cumulative Count Control (CCC) chart, which uses an exponential scale, but this goes beyond the scope of this training.

Lesson 9 summary:

  1. Beware of any attribute control chart which has a very small average. Special causes of variation may be present in the process but not showing on the chart.
  2. Always try to use control charts with variable measurements rather than simple pass / fail counts.
  3. If it is not possible to use variable measurements, then, if you have enough data, you may try combining samples or calculating intervals to see if there is any indication of special variation.

End of Lesson 9
Return to the index
Lesson 10 – Process capability indices

An important part of any SPC implementation is the use of process capability indices. There are several capability indices: Cp, Cpk, Ppk, Cpm and NCpk. In this lesson we explain the most commonly used indices: Cp, Cpk, Pp and Ppk. There is some confusion about the use of these indices. In this lesson we will try to remove some of that confusion, explain the differences between the indices and show how they can be used in a practical way.

This lesson does not follow the tutorial format of the other lessons. The tutorial for this lesson is summarized in a video at the end of the lesson.

First we will provide the definitions of the indices and give some historical insight into the development of these indices, which will explain some of the confusion.

Then we will explain how the indices can be used in a practical way.


What is important to know, before we explain the definitions of the indices, is that these definitions have changed over time.

Under Ford's Q101 system, Ppk was defined as the preliminary capability index and Cpk as the long-term capability index.

In some cases the Cpk value on the histogram was calculated differently from the Cpk calculation on the control chart. When the big three (Ford, GM and Chrysler) merged their quality manuals into the QS9000 system, the definitions were changed, and these definitions are still the standard today in the TS16949 manual; they will be used and explained in this lesson.



Cp (sometimes also named Cpi) stands for the capability index of the process. The formula for the calculation is:

Cp = (USL - LSL) / (6 × σ̂)
The σ̂ (sigma hat) refers to the estimated standard deviation. The estimated standard deviation is calculated using the following formula:

σ̂ = R̄ / d2
where R bar is the average range of the subgroups and d2 is a constant taken from a statistical table (it depends on the subgroup size).

In plain words, the Cp index is calculated based on the within-subgroup variation. So if the variation within the subgroups is very small you will have a good Cp index, no matter how much the process average is drifting or where the process is located. The Cp index therefore shows you how capable your machine is of producing consecutive products within the required variation (tolerance).


Because the Cp index alone doesn’t indicate whether you are producing within specifications, we need an indication of whether the process is centered between the specification limits.

Therefore the Cpk index is used. The formula is:

Cpk = min( (USL - X̄) / (3 × σ̂), (X̄ - LSL) / (3 × σ̂) )

where X̄ is the overall process average.
So if the process is exactly in the middle of LSL and USL, the Cp and Cpk indices are the same.

If we now report both the Cp and the Cpk index, we know how capable the process is of producing within the required variation (tolerance) and whether the process is producing in the middle of the tolerance.


Is the information from Cp and Cpk enough to indicate whether the process is running within specifications?

The answer is no, because these two indices are calculated based on the within-subgroup variation, and it is still possible that there is a large amount of between-subgroup variation which is not taken into account. Let us try to explain this with an example.

The chart shows that we had a lot of variation between subgroups (Xbar chart) but that the variation within the subgroups was much better in control (Range chart).

The Cp index for this process is 1.66 and the Cpk index is 1.65, which would indicate that the process is capable of producing within the required variation and that, over the reported time period, the process is in the middle of the tolerance.

We see that these two indices are not enough and that we need more information to know whether the process is producing within the specification limits. If we only use Cp and Cpk, we need to add the requirement that the process must be in control. If the average chart is in control, it indicates that the process is stable and that the process average is not fluctuating.

However, we don’t always have the chart available when analyzing process data, for example when we report a large number of characteristics. In that case we could indicate the percentage of subgroups out of control, but there is also another possibility.

We can also get an indication of whether the process is stable by calculating the Process Performance index Pp and comparing it with Cp.

The Pp index is calculated in the same way as the Cp index, but now using the real standard deviation instead of the estimated standard deviation. So the formula is:

Pp = (USL - LSL) / (6 × s)

where s is the standard deviation calculated over all individual measurements.
So the Pp index uses both the within-subgroup variation and the between-subgroup variation in the calculation and indicates how well the process was able to produce within the specification limits over the reported time period.

The Ppk index is calculated in a similar way to the Cpk index, but using the real standard deviation, and needs no further explanation.
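To make the definitions concrete, here is a minimal Python sketch that calculates Cp, Cpk, Pp and Ppk from a small set of invented subgroup data, following the formulas above; the specification limits, the measurements and the subgroup size of 5 (with d2 = 2.326) are only examples.

import statistics

# Invented specification limits and subgrouped measurements (subgroup size 5).
LSL, USL = 9.0, 11.0
subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.4, 10.5, 10.3, 10.6, 10.4],
    [9.7, 9.8, 9.6, 9.9, 9.7],
    [10.2, 10.0, 9.9, 10.1, 10.0],
]
d2 = 2.326  # constant for subgroup size 5, from standard SPC tables

# Estimated (within-subgroup) standard deviation: sigma_hat = R_bar / d2
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_hat = r_bar / d2

# Real (overall) standard deviation of all individual measurements
all_values = [x for sg in subgroups for x in sg]
x_bar = statistics.mean(all_values)
s = statistics.stdev(all_values)

cp  = (USL - LSL) / (6 * sigma_hat)
cpk = min((USL - x_bar) / (3 * sigma_hat), (x_bar - LSL) / (3 * sigma_hat))
pp  = (USL - LSL) / (6 * s)
ppk = min((USL - x_bar) / (3 * s), (x_bar - LSL) / (3 * s))

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, Pp = {pp:.2f}, Ppk = {ppk:.2f}")

In this invented data set the subgroup averages drift, so Cpk comes out clearly higher than Ppk, which is exactly the kind of gap discussed below.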

Practical use of Capability indices

If we now report three indices, e.g. Cp, Cpk and Ppk, we know what is happening in the process.

Cp indicates how well a process is capable of producing consecutive products within the required variation. The difference between Cp and Cpk indicates whether the process is producing in the middle of the tolerance.

The difference between Cpk and Ppk indicates whether the process is stable, or in other words whether there are special causes of variation influencing the average of the process, even if the control limits are not properly set.

A common requirement in industry is that the Ppk value should exceed 1.67.

If the Ppk value is below 1.67, the combination of Cp, Cpk and Ppk will give you an indication of who is responsible for improving the capability. Let us explain with an example of three processes:

Process 1: Ppk = 0.8, Cpk = 1.67, Cp = 1.67

Process 2: Ppk = 0.8, Cpk = 0.8, Cp = 1.67

Process 3: Ppk = 0.8, Cpk = 0.8, Cp = 0.8

All three processes have the same Ppk index of 0.8, but they require completely different approaches to improve capability, and a different department will likely be responsible for the improvement.

Process 1: Unstable, Long term not capable, Short term capable, On target
This process is out of control and has assignable causes. There is more between-subgroup variation than within-subgroup variation.

Process 2: Stable, Short term capable, Long term capable, Not on target
This process has a wrong process setting; if the process is brought on target, the Ppk will be acceptable.

Process 3: Stable, Short term not capable, Long term not capable, On target
This process is not capable of producing consecutive products within the allowed tolerance, so the process itself needs to be changed.

There is also a tutorial for this lesson, but because interactivity is required we have made a recording of the session.

Please view process capability video training
End of Lesson 10
Return to the index

About us

The information on this website and the free training is offered by DataLyzer International.
DataLyzer International (formerly Stephen Computer Service) is a supplier of SPC, FMEA, Gage Management and OEE software with offices in the USA, the Netherlands, the UK and India. DataLyzer International was the first supplier in the world to offer commercial SPC software, in 1980, and a few years later was the first to offer commercial Gage R&R software.
When implementing SPC it is important that people are properly trained. To improve the quality of training and to reduce the cost of training, DataLyzer International has developed a training module using process simulations. The training presented on this site is based on the tutorial in the SPC training software SPC Wizard.

For more information about DataLyzer or SPC Wizard go to

Feel free to use and copy all information on this website under the condition that you refer to this website.
If you like what you see please link to this site and refer the site to others.
If you don’t like what you see, please send us your feedback and we will try to continuously improve the content, in the spirit of SPC and for the benefit of everyone applying it.

Please send your feedback to mschaeffers @
Return to the index