Part F of the house of quality is the foundations. Without stretching the analogy, it is also the point where the exercise is most likely to be undermined.
The technical benchmark is a comparison between your products or services and those of your competitors. However, unlike the assessment made by your customers, this one is based on the technical responses. In our experience, organizations vary tremendously in the extent to which they gather this information. Competitive benchmarking has nothing to do with industrial espionage!
A lot of information can be collected about the performance levels of your real competitors. This can come from market research surveys, debriefing new employees, the trade press, and so on. But organizations pursuing six sigma will not spend much time engaged in comparisons of this kind.
Today’s approach to competitive benchmarking is to compare your performance not with that of your competitors in the marketplace, but with organizations using the same, or similar, processes. These process competitors work in a different market and have inherited a different set of constraints. This means that dramatically different levels of performance can become acceptable.
For example, consider distribution. If you are to achieve six sigma levels of performance in your distribution activities, then it will only happen by learning from people who do it better. One petrochemical company was satisfied with its performance. It outperformed other petrochemical companies in most respects: for example, 85 per cent of its deliveries were made on the day that was agreed with the customer. This was not necessarily the day the customer originally wanted, nor necessarily a convenient time, and it meant that 15 per cent of deliveries were not to the customer’s satisfaction. Nevertheless, it was significantly better than the competitors, so normal competitor benchmarking would have been reassuring but unlikely to lead to improvement.
One of the problems confronting the organization was that no one had ever expected them to do better. Newspaper distribution is a different matter. Here the product has almost no value if it arrives on a different day, and most consumers expect it to arrive before 8 a.m., otherwise they cancel their order and buy one casually instead. Yet the newspaper production process cannot really be completed until after the last television news broadcast, otherwise the newspaper would no longer be competitive.
An international express delivery service, such as DHL or Federal Express, potentially has more freedom, but their direct competitors are constantly putting pressure on them to cut delivery times while sustaining high levels of accuracy. Thus, DHL delivers over 98 per cent of its customers’ packages to their satisfaction.
Similarly, the petrochemical companies deliver on order, but expect their customers to prebook, in other words to anticipate their short-term sales volumes. Other industries do not have this luxury. The ambulance service in central London, for instance, responds to nearly 1000 calls per day. It is required to do so within 14 minutes of receiving the call in 95 per cent of all cases. More recently, the specification for their service has changed and they have to achieve an eight-minute target.
The 95 per cent goal does not seem too impressive when we are discussing six sigma levels of 99.9997 per cent, but it is still much better than the petrochemical business.
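To put percentages like these on a common footing, it can help to convert them into defects per million opportunities (DPMO), the unit in which six sigma targets are usually quoted. A minimal sketch in Python; the function name is illustrative and the figures are those quoted in the text:

```python
def dpmo(on_target_pct: float) -> float:
    """Convert an on-target percentage into defects per million opportunities."""
    return round((100.0 - on_target_pct) * 10_000, 1)

# The petrochemical company's 85 per cent on-day delivery:
print(dpmo(85))        # 150000.0 misses per million deliveries
# The ambulance service's 95 per cent target:
print(dpmo(95))        # 50000.0 misses per million calls
# Six sigma performance (99.99966 per cent, i.e. 3.4 defects per million):
print(dpmo(99.99966))  # 3.4
```

Seen this way, the gap between a "good" 95 per cent and six sigma performance is four orders of magnitude, which is why benchmarking only against direct competitors can be so misleading.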
Another example comes from the accountancy profession. One firm, frequently called upon to make presentations to customers, acknowledged that it was staid, lacked polish and did not totally engage its audience. They did not want to introduce a comedy act, but felt that they were not selling their message properly. Using a human resources consultant as facilitator, they benchmarked sales presentations with an advertising agency and a television production company. The results were acknowledged by all three to have improved their individual performance.
Most processes can be benchmarked in this manner, and the specifications for technical responses should be too. This calls for a great deal of creativity, but the temptation to contract it out is best resisted. As with all the quality tools, once the skills have been developed in-house, the real benefits come when the people who are involved in the job carry out their own assessment and implement improvements.
Benchmarks should be technically sound, even when you are looking outside your industry, and even seemingly very specific parameters can usefully be benchmarked in this way. For instance, Boeing compares in-cabin aircraft noise levels with those in other forms of transport. When you compare the sound levels inside an aircraft with those in an express train or a luxury car, you begin to see things from the customer’s perspective.
The technical benchmark is shown on part F of the house of quality.
The scale on the form is marked −2, −1, +1 and +2 for relative performance. If appropriate, this can be changed to a more specific scale, but avoid overspecification. The key is relative performance. If one of your comparison organizations is better than you, then show yourself as a (−1); if they are very much better, give yourself a (−2).
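The relative scale lends itself to a very simple data structure. The sketch below uses invented technical responses and scores purely for illustration: each technical response carries a score from −2 (a comparator is very much better) to +2 (you are very much better), and anything negative is a candidate priority for improvement, worst first.

```python
# Hypothetical technical benchmark scores on the -2..+2 relative scale;
# a negative score means a comparison organization outperforms you.
benchmark = {
    "on-day delivery":   -2,   # e.g. the newspaper distributor is very much better
    "order lead time":   +1,
    "delivery accuracy": -1,
}

def improvement_priorities(scores: dict) -> list:
    """Return the technical responses where a comparator is better, worst first."""
    return sorted((r for r, s in scores.items() if s < 0), key=lambda r: scores[r])

print(improvement_priorities(benchmark))  # ['on-day delivery', 'delivery accuracy']
```

The point of keeping the scale this coarse is exactly as the text suggests: the output is an ordering of priorities, not a precise measurement.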
If the technical benchmark does not give you a clear picture of the priorities for improvement, then something has gone seriously wrong.
By comparing your technical benchmarks with the customers’ assessments of your performance (part B), you can quickly identify those features of your product that you think perform well against your competitors but that your customers think perform badly.
To do this, move up the column for each individual technical response from the benchmark to the correspondence matrix (part C). When you reach a relationship between the technical response and the customer expectation, move right to the area of customer assessment (part B).
There are four possible significant outcomes (see Figure 5.2).
Most of these outcomes should be self-explanatory. One useful application of this cross-checking is to validate your assumptions about the relationship between technical responses and customer expectations.
For example, as brewers, you may believe that head retention (a measure of the length of time the froth remains on the top of the beer) has a relationship to taste. You would then expect that if you score highly for head retention in your benchmarks, you should also score highly for taste.
If you do not score highly for taste, then either your benchmark is inaccurate, the relationship between head and taste is less marked than you believed, or the way in which taste was assessed is questionable.
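The column-to-row cross-check described above can be expressed as a small routine. This is a sketch over invented data: the correspondence matrix (part C) records which technical responses relate to which customer expectations, and the routine flags any pair where you benchmark well on the technical response but customers rate the related expectation poorly; exactly the situation in the brewing example, where the benchmark, the assumed relationship, or the assessment method deserves questioning.

```python
# Hypothetical correspondence matrix (part C):
# technical response -> related customer expectations.
relationships = {
    "head retention":      ["taste", "appearance"],
    "serving temperature": ["taste"],
}

# Hypothetical scores: technical benchmark (part F) and customer
# assessment (part B), both on the -2..+2 relative scale.
technical_benchmark = {"head retention": +2, "serving temperature": 0}
customer_assessment = {"taste": -1, "appearance": +1}

def questionable_relationships(rels, tech, cust):
    """Flag (response, expectation) pairs where the technical score is
    good but the related customer assessment is poor."""
    flags = []
    for response, expectations in rels.items():
        for expectation in expectations:
            if tech.get(response, 0) > 0 and cust.get(expectation, 0) < 0:
                flags.append((response, expectation))
    return flags

print(questionable_relationships(relationships, technical_benchmark, customer_assessment))
# [('head retention', 'taste')]
```

A flagged pair does not say which of the three explanations applies; it simply tells you where to look.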