The decision to set out to achieve six sigma levels of performance is based on the benefits not only to the customer, but also to the company, minimizing the cost to society in Taguchi’s terms. In the short term, practical decisions about priorities for improvement need to be taken.
We have already established one of the criteria for determining these: namely, the rank importance that customers attach to the individual factors. However, just as important in setting these priorities are the customers' perceptions of where we stand in relation to our competitors.
It seems logical that improvement activities should focus on the factors that are of importance to the customers, and for which we do not perform well against our competitors. Yet, if you look at most quality improvement processes within organizations, the majority of efforts are relatively unfocused or, worse, they focus on small-scale internal issues that are within the control of a particular manager. We often hear senior people justify this failure to do anything to help the customer by talking in terms of ‘starting small’, learning the ropes before going ‘companywide’, not rocking the boat, and so on. Then, a year or so later, they put their improvement process onto the back-burner because they need to press on with more ‘customer-focused’ activities.
There is no room for this attitude when pursuing six sigma. Everything has to be driven towards improving performance in the eyes of the customer.
How we achieve this depends on the service or product that we offer, and especially on the customer's relationship with it. Whichever route we use, the relative position of our offering is rated from the customer's standpoint, using part B of the house of quality. This assessment takes two forms:
unsolicited customer comments (in practice usually complaints) and solicited feedback. The nature of the solicited feedback depends on the type of product or service, as explained below.
Customer complaints
First, statistics are collected from the customer complaints procedures.
These are recorded in the first column of form B.
Activity
Spend a few minutes with one of the longer-serving (and possibly more cynical) employees. Draw up a brief history of the company's change initiatives. What were their main characteristics? Who was involved?
What worked well? What did not? Why did they end? Where are they now? How customer-led were they? Did they take into account competitor knowledge? To what extent did they cross departments and functions?
Think carefully before embarking on a quick call to the customer complaints department or the quality manager. A 'complaint' may not appear as such. For example, the accounts department should have a better idea of those sales accounts with a reducing turnover, or even lost ones. They will also be able to identify late and argumentative payers. While they may be belligerent, these people may also have their reasons for delaying payment.
It is useful to review the business as a whole at this stage, looking at the various ways in which complaints are received. In organizations with over 250 employees, there are usually far more routes by which complaints arrive than first appear. This is particularly so if you start to include, as you should, the customers of your processes as well as the recipients of your product or service. For example, if you produce universal widgets, the contract drivers who transport them are customers of your processes, and just as crucial to the end-user as the product itself.
How do you hear complaints from the drivers and their company? And what about other service providers? The days have passed when a company could contract out problematic services and then nobble the subcontractor, although there are still quite a few exceptions.
One cleaning company was strongly criticized by its customer for poor attention to detail, which was causing contamination of the customer's products in certain areas. When looked at more closely, the problem was due to a combination of several factors, including a decision to reduce the amount of overhead heating. This not only made conditions difficult for the cleaners, but also reduced the flow of air through the plant that would otherwise have removed contaminants. The cleaning company's management heard the grumbling of their cleaners, but did not connect the complaints with the cause; in any case, they had no channel of communication with the customer except through the shift supervisor, whom the client did not regard as senior enough.
In another cleaning scenario, the cleaner in a small remote office was the wife of one of the junior managers who worked there. She pointed out to him that she could do a better job if she was allowed to come in at a different time. He asked his superior (it was a fairly disciplined environment) whether she could change her hours. When this information reached her managers, they told her off for 'admitting' that she was not doing a proper job.
In your review of the complaints procedures, also look out for examples of conflicting objectives. These can drastically reduce the level of overt
complaints. For example, customer help-desks often record the time taken to resolve a customer's query. Two common problems arise with the statistics from these desks. First, queries are usually tallied into bands: those resolved within thirty minutes, within half a day, by the end of the day, and those carried over to the next day (or similar periods). This means that once a query has rolled over to the next day there is no further incentive to resolve it. So the priority for the team working on the second day is to tackle new queries and leave the rolled-over ones until their own workload is cleared. As they are always short of staff, this almost inevitably means that these queries remain unresolved for very long periods.
The second problem arises with calls that involve a callback. The initial query is made, a tentative solution is offered and the customer goes off to try it. The assumption with most help-desks is that the customer will call again if the solution does not work. This means that the call can be removed from the monitoring system as resolved, thereby removing any incentive for the staff to pursue the customer to check that it did work.
When the customer does call again, the system creates a new job and the severity of the problem itself is lost.
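To make the first distortion concrete, here is a minimal sketch of how such a monitoring system might classify queries. The bucket names and thresholds are assumptions for illustration, not a description of any real help-desk product:

```python
from datetime import datetime, timedelta

# Illustrative resolution-time bands mirroring the periods described above;
# the labels and thresholds are invented for this sketch.
BUCKETS = [
    ("resolved within thirty minutes", timedelta(minutes=30)),
    ("resolved within half a day", timedelta(hours=4)),
    ("resolved by end of day", timedelta(hours=8)),
]

def bucket(opened: datetime, closed: datetime) -> str:
    """Classify a query by its time to resolution, as the monitoring would."""
    elapsed = closed - opened
    for label, limit in BUCKETS:
        if elapsed <= limit:
            return label
    # Everything beyond the end of the day collapses into a single band,
    # so a two-day and a two-week resolution are indistinguishable.
    return "carried over to the next day"

opened = datetime(2024, 3, 4, 9, 0)
print(bucket(opened, opened + timedelta(minutes=20)))  # resolved within thirty minutes
print(bucket(opened, opened + timedelta(days=2)))      # carried over to the next day
print(bucket(opened, opened + timedelta(days=14)))     # carried over to the next day
```

The callback problem compounds this: if a repeat call opens a new job, the `opened` timestamp is reset and the true age of the problem disappears from the statistics altogether.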
Broad experience products and services
When customers have experience not only of your own products or services but also of those of your competitors, then the assessment of performance can be based on direct comparison. There are, however, many variables to consider.
For example, car manufacturers often carry out competitive comparisons involving their customers. However, as most car drivers retain the same vehicle for three years, their comparisons are likely to be well out of date and certainly not based on contemporary models. By contrast, hotel guests, especially business people, will probably have relevant, recent experiences of competitors’ establishments, even if the hotels belong to the same chain.
In this case, the customers are presented with a list of customer expectations. They are asked to rate the performance of a given product against that of the others. The assessment consists of a simple four-part scale. This deliberately eliminates 'don't know' types of response, but allows customers to give 'same as' responses. There is a comprehensive range of statistical techniques for the description and comparison of these scales (Kendall, 1970).
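As one illustration of the kind of technique Kendall describes, the sketch below uses Kendall's tau (via scipy, whose tau-b variant corrects for the heavy ties that a four-point scale produces) to measure agreement between two sets of ratings. The data are invented purely to show the mechanics:

```python
from scipy.stats import kendalltau

# Hypothetical four-point ratings (1 = much worse ... 4 = much better) given
# by ten customers to our product and to a competitor's product.
ours       = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
competitor = [3, 3, 2, 2, 4, 2, 3, 1, 4, 2]

# Kendall's tau is designed for ordinal data, so it suits four-part scales
# better than methods that assume equal intervals between scale points.
tau, p_value = kendalltau(ours, competitor)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")
```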
Limited experience products and services
When the customer is very unlikely to have sufficient experience of competitive products to make realistic comparisons, the question posed needs to be changed. We now ask them to rate our product for each of the expectations in part A, using a similar four-part scale, against their own perception of what should be delivered.
This can be problematic. As customers have little experience of alternatives, they may be prepared to award high marks when technical performance is actually much lower. Some researchers attempt to address this by exposing customers to some alternatives, often in the form of written descriptions or video material, before asking them to carry out the assessment. Unfortunately, this process can desensitize the customer and produce very confusing results.
One approach that can produce usable results is to present the customer with four simple descriptions, without explaining which one matches your own product.
For example, 'What proportion of first-class letters should arrive the next day?'
■ all letters posted first class should be delivered the next day
■ nine out of ten
■ eight out of ten
■ seven out of ten.
There are still problems with this approach, but the results are useful.
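One simple way to use the responses, sketched below, is to compare the expectation band each customer selects with the rate our product actually achieves. The delivery rate and responses here are invented for illustration:

```python
# Delivery rates implied by the four descriptions above, indexed by the
# option a customer selects (0 = 'all' ... 3 = 'seven out of ten').
EXPECTATION_BANDS = [1.0, 0.9, 0.8, 0.7]

def expectation_gap(chosen_option: int, actual_rate: float) -> float:
    """Positive gap: the customer expects more than we actually deliver."""
    return EXPECTATION_BANDS[chosen_option] - actual_rate

# Suppose, purely for illustration, that 93 per cent of our first-class
# letters arrive the next day, and six customers answer as follows.
responses = [0, 0, 1, 1, 2, 3]
gaps = [expectation_gap(r, 0.93) for r in responses]
print(f"mean expectation gap: {sum(gaps) / len(gaps):+.3f}")
```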