Quality Assurance: An Approach
Quality Assurance (QA) is a method for assuring management, owners, customers, or any other stakeholder that the organization is producing products or services at a predetermined level of quality.
Typically, QA is the gathering, analysis, and reporting function of an overall approach to quality in an organization. Quality assurance is not primarily a performance management program, but rather a benchmarking and assessment process. Quality improvement efforts can be part of a QA program, but they are not a necessary component.
Quality Assurance in many call centers today often misses this essential definition.
It is important for any organization to understand that there are many views of quality: the organization's, its customers', staff's, management's, regulators', and so on. Each of these constituencies often has its own view of quality. In the absence of clear and defined standards of quality, each constituency develops its own and presumes that everyone else understands and agrees with its view of quality.
Therefore the establishment, publication, and promotion of what constitutes quality in an organization is critically important. What is quality to one must be the same quality for all. In other words, quality requires a consistency of vision.
One approach is to adopt standards set by outside agencies using one of the many canned frameworks, Six Sigma and ISO 9000 and their subsets being the most common. While very easy to find, they carry huge overhead to develop, deploy, implement, and maintain. They are complex and, for most organizations, cumbersome. There is now a growing body of evidence that these programs have become their own bureaucracy and add little value to many organizations, except possibly as gimmicks for advertising and sales. Even there, their value is suspect.
Another approach is to return to the definition of Quality Assurance given at the opening. To achieve this end, the program should be simple, effective, and efficient, and should provide whoever is reviewing it with a clear understanding of what the organization is doing and how well it is being achieved.
A Call Center QA program should include the following components:
• Monitoring
• Customer Listening & Satisfaction
• Customer Relations
• Failure Analysis and Process Improvement
• Mystery Calling
When taken together the above components form a complete picture of the key elements for the reviewer. To be effective these components need to be reported on in a frank and open manner without the assignment of blame.
Of the five components, the one most often installed in call centers today is monitoring, followed by some form of customer satisfaction survey or scoring.
A recent ICMI and AC Nielsen study of 735 call centers showed that:
• 93 percent reported monitoring agent calls.
• There is a wide variance in the number of calls monitored per month per agent. The most popular frequencies are 4 to 5, and 10 or more.
• Apart from agent calls, other types of contacts are also monitored. Four out of 10 call centers monitor email responses, one in six monitors fax correspondence, and one in 14 monitors Web text-chat sessions.
• More than one-third of call centers devote one to five hours per week to monitoring, and a quarter devote six to 10 hours weekly. Not surprisingly, the larger call centers (200 or more agents) devote significantly more time per week to monitoring and coaching than the smallest call centers (fewer than 50 agents).
These are good places to start building a Quality Assurance program. What is it that management needs to be assured about? The work, and the systems that produce the work result. Therefore, start QA design or review by understanding what the work result is, who judges it, and how it is produced. What information do reviewers need in order to judge the quality of the operation?
Some key information is needed to do this: the transactions handled by the centre, their volume, which ones start and end in the centre, which ones need other areas of the organization involved in order to complete, and the number of failures now produced in the centre or elsewhere in the course of the total transaction.
Let’s examine a simple order call. The customer calls in; the call is taken by the switch or ACD and routed to a CSR using some form of menu system, IVR, or voice prompts. The CSR answers the call, asks some key questions, and enters the responses into another computer system. If payment is required, that system interacts with still other systems to process credit card information and ensure payment. A system then generates a pick/pack order for someone to fulfill. The order is packed and shipped using a vehicle that may belong to yet another party, a courier or shipping firm. Finally, the product is delivered to the customer.
From a transaction, company, or systems point of view it is complicated. There are over ten points at which the described order process can fail.
Viewed only from the call center's perspective, this is straightforward: call received, call answered, information gathered, call closed. It has a perceived low likelihood of failure.
From the customer's point of view the process is also simple: place a call, give information, wait for and accept delivery. Not much could go wrong. For the customer, if anything fails, it is a failure. They don't care what went wrong; they just want it fixed.
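The gap between these perspectives can be made concrete with a little arithmetic: small per-step failure rates compound across the whole transaction. A minimal sketch follows; the step names and success rates are assumptions chosen for illustration, not data from any real center.

```python
# Hypothetical illustration: compound reliability of a multi-step transaction.
# Every rate below is an assumed figure, not a measured one.

steps = {
    "ACD routing": 0.999,
    "IVR menu navigation": 0.99,
    "CSR data entry": 0.99,
    "order system": 0.995,
    "credit card authorization": 0.98,
    "pick/pack generation": 0.995,
    "packing": 0.99,
    "carrier pickup": 0.99,
    "shipping": 0.98,
    "delivery": 0.98,
}

end_to_end = 1.0
for rate in steps.values():
    end_to_end *= rate  # independent step failures compound multiplicatively

print(f"End-to-end success rate: {end_to_end:.1%}")
# Even with every step at least 98% reliable, roughly 1 order in 10
# fails somewhere along the chain.
```

This is why the process looks safe from inside the call center yet produces a steady stream of "failure" calls from customers.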
Your next step in developing a QA program is to develop a list of the transactions the centre handles. These fall into only two general types: value-adding or failure-fix. Value-adding transactions are, as they sound, those that add or could potentially add value. These include orders, subscriptions, pricing requests, etc. Failure-fix transactions involve correcting a problem that has occurred in one or more of the organization's processes. These include out of stock, order not received, not as ordered, service not delivered, service/product not satisfactory, billing errors, etc.
Collect whatever information is available about the number of failures or complaints by type. This forms the 'failure' baseline.
With a complete transaction list, identify the volume of transactions by type and period. Rank the transactions by volume, value, and the number of failures per 1000. These are the baseline numbers from which overall reporting starts.
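As a hedged sketch of what that baseline might look like in practice, the snippet below computes failures per 1000 and ranks transaction types. The transaction names, volumes, and failure counts are invented for illustration.

```python
# Invented sample data: (transaction type, category, monthly volume, failures).
transactions = [
    ("order",              "value-adding", 12000,  180),
    ("pricing request",    "value-adding",  4000,   20),
    ("order not received", "failure-fix",   1500, 1500),
    ("billing error",      "failure-fix",    900,  900),
]

def failures_per_1000(volume, failures):
    """Normalize failure counts so transaction types can be compared."""
    return 1000 * failures / volume

# Rank from worst to best failure rate to form the reporting baseline.
baseline = sorted(
    ((name, cat, vol, failures_per_1000(vol, fails))
     for name, cat, vol, fails in transactions),
    key=lambda row: row[3], reverse=True)

for name, cat, vol, rate in baseline:
    print(f"{name:18} {cat:13} volume={vol:6} failures/1000={rate:7.1f}")
```

Note that failure-fix transactions are themselves failures of some upstream process, which is why their rate is 1000 per 1000 by definition.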
The focus of QA programs is on the system, not primarily on the individual. People are the 'face' of the organization and a conduit for assessing quality; there is an opportunity to assess individuals, though the information has less validity on an individual basis.
With monitoring, start assessing each of the transactions being done. Is the transaction done according to the specifications, process flow, and standards defined by the organization? For each observed transaction, track the result to success or failure. Monitoring with this approach focuses first on what the system is producing, not on whether an agent is performing.
Assessing an individual's performance in following the defined process, procedures, and flow of a particular transaction shows how well the agent performed or adhered to the expected processes. Deficiencies identified in the course of this monitoring can then be shared with the training and coaching staff for individual coaching and training.
It must be kept in mind that any assessment of individual performance almost certainly lacks statistical validity. The majority of centers monitor only 10 to 20 calls per agent per month. If an average agent handles 50 calls a day, this represents only 1 to 2% of the agent's calls. That is sufficient to identify significant issues, but it is unlikely to uncover minor issues or misinterpretations by the agent.
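The coverage arithmetic above can be checked directly. The number of workdays per month is an assumption (about 21); the other figures come from the text.

```python
# Worked version of the monitoring-coverage arithmetic.
calls_per_day = 50
workdays_per_month = 21          # assumption for the sketch
monitored_low, monitored_high = 10, 20

monthly_calls = calls_per_day * workdays_per_month   # ~1050 calls per agent
coverage_low = monitored_low / monthly_calls
coverage_high = monitored_high / monthly_calls

print(f"Coverage: {coverage_low:.1%} to {coverage_high:.1%}")
# Roughly 1% to 2% of an average agent's calls are ever monitored.
```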
The same approach of keeping records about customer complaints (which by definition are failures) and their resolution, if any, will produce a data set for tracking and analysis. Again, the primary issue for management is what produces the errors, not who. This can only be determined with enough data and incidents. What that number is lies outside the scope of this document, since it depends on the circumstances of each situation.
Customer Listening and Satisfaction (CLS)
CLS consists of two parts. Management at all levels benefits from listening to actual, unedited calls. Ideally this means one to two hours of conversations with customers and prospects every month or quarter, depending on the manager's level of involvement. It can be facilitated either in person or, if the technology is available, via CD. This gives management a direct, unfiltered perspective on customer issues and behaviour.
At the same time, a quarterly survey of callers, customers, and prospects should be completed to provide a statistical view of the center, its trends, and the organization's performance and progress. This must include all the critical segments for ongoing tracking, but should also cover, on a rotational basis, key issues or concerns as they are identified and targeted for understanding or resolution. The survey is usually best done by a third party, or at least by a group different from the people already handling calls in the centre, to avoid any slippage, bias, or perception thereof.
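For the survey to give a genuinely statistical view, the sample must be large enough. Below is a minimal sketch of the standard sample-size formula for estimating a proportion, n = z²·p·(1−p)/e²; the 95% confidence level and the ±5-point margin of error are assumptions chosen for illustration, not requirements from the text.

```python
import math

z = 1.96        # z-score for 95% confidence (assumed level)
p = 0.5         # most conservative assumption about the true proportion
margin = 0.05   # +/- 5 percentage points (assumed margin of error)

# Standard sample-size formula for estimating a proportion.
n = math.ceil(z**2 * p * (1 - p) / margin**2)

print(f"Completed surveys needed per quarter: {n}")
```

Tightening the margin of error or tracking sub-segments separately pushes the required sample up quickly, which is one reason a third party is often used to field the survey.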
‘Customer relations’ refers to any call or incident that requires escalation above the level of senior agents. These events, combined with customer complaints, form the first line of warning to an organization. Usually agents and senior agents have been trained to handle all the calls into a centre; any call that requires escalation indicates either that something in the system or process is not working and needs attention, or an unusual ‘one-off’ outside the normal and ordinary.
Dispositioning and tracking these quickly produces patterns that can be reported on, and processes or policies can be developed to add to the agents' arsenal of tools and techniques for handling similar calls when they arise again. Accurate tracking and summarizing is essential for this to succeed. The frequency and format of reporting need to evolve depending on the organization and its ability to react. A caution is needed here: reacting quickly to fix an individual issue is great, but without good records and reporting, single or low-frequency incidents that require a systemic or organizational response can and do get lost in the noise of day-to-day operations.
Management needs to periodically review all errors from the summaries, customer relations, satisfaction surveys, and listening sessions to identify commonalities or areas for correction and change. Front-line staff, CSRs, and supervisors often have little or no authority to change policies or procedures in most operations. This is the role of management, and this is the tool that starts the improvement of quality as defined.
All quality costs. High quality costs more than most organizations consider appropriate. The issue is that low quality often costs even more than high quality. This is the key to understanding that higher quality, which we all inherently see as more expensive, may actually cost less.
As an example, look at the following chart. A call center manager, in an effort to reduce costs, choked calls (restricted the number of calls allowed through). The result was that the true call volume was hidden, abandoned calls increased, pent-up demand generated more calls, and callers took longer to vent their frustration, increasing AHT and causing agent fatigue and turnover. Costs therefore rose. Once choking stopped, pent-up demand was addressed, and all calls were handled in reasonable time and with reasonable quality, volume stabilized below pre-change levels, costs were reduced, and the service provided improved dramatically.
From time to time an issue or concern arises which may or may not be significant enough to require attention. These issues must be investigated, though their frequency may be low and therefore not well suited to research through monitoring. Mystery calling can be employed to more completely understand the depth and frequency of particular behaviours or process incidents. Again, the use of a third party, or voices of people not from the centre, is advised to avoid bias and skewing of results. Mystery calling is tactical and usually revolves around a particular transaction step or group of similar transactions, to see what is causing unexpected outcomes.
Customer satisfaction studies reveal how customers feel about their customer service experience. They do not reveal why. Customer service measurement reveals the 'why' that stimulates continuous improvement. Essentially, satisfaction studies report perceptions and service studies report performance. If a satisfaction study revealed that customers thought food service was slow in a chain of restaurants, valuable information would be gleaned. Acting on this information alone, however, would be impractical. Would the chain simply ask employees to work faster? Would it risk serving undercooked food for the sake of quick service? Would it redesign its units to receive food orders more quickly? No, of course not; to do so would be to try to fix an unknown.
The chain would drill down deeper into the data to determine the root cause, the “why”. The chain would measure the speed of customer service it provides, likely using mystery shoppers to take those measurements. If a subsequent mystery shopper study revealed that table-service customers were waiting an average of 10 minutes to receive their checks, a specific reason for customers to perceive slow service has been isolated. Causes for the delay can now be investigated.
Causes might include slow credit card authorizations, understaffing, a backlog waiting for manager approval, or a lack of equipment or staff training to use computers. That one statistic, the 10-minute average wait, gives managers a specific issue to work toward correcting. It gives the customer-driven company a way to serve customers better in the short term.
Think of customer satisfaction as the end product of a production line. In a retail environment, one stage of the process might involve approaching customers as they enter the store. Another stage might involve having advertised merchandise readily available, supported by prominent displays. All along the production line, customers decide how well the business meets their expectations. Customer satisfaction surveys address the end product of the production line, revealing expectations and perceptions in total. By contrast, customer service measurements from mystery shopping allow an organization to target specific points in the production process, gauge their impact on the end result, and reveal performance at each identified stage.
Taking measurements along a production line and comparing them to established benchmarks should sound familiar. It is a basic principle of Total Quality Management (TQM) called Statistical Process Control (SPC). SPC requires that quality be inspected at every stage of the process, not just at the end. TQM proponents equate statistical process control of a production line to mystery shopping of service businesses.
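As a hedged illustration of how SPC might be applied to a service metric, the sketch below builds a simple p-chart over the weekly proportion of failed transactions. The weekly volume and failure counts are invented; the 3-sigma limits are the conventional choice for control charts.

```python
import math

# Invented sample data: failures observed per week among a fixed sample.
weekly_failures = [18, 22, 19, 25, 21, 40, 20]
weekly_volume = 1000   # transactions checked each week (assumed)

# Centre line: overall failure proportion across all weeks.
p_bar = sum(weekly_failures) / (len(weekly_failures) * weekly_volume)

# Standard deviation of a proportion, and 3-sigma control limits.
sigma = math.sqrt(p_bar * (1 - p_bar) / weekly_volume)
ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)

for week, failures in enumerate(weekly_failures, start=1):
    p = failures / weekly_volume
    flag = "OUT OF CONTROL" if not (lcl <= p <= ucl) else "ok"
    print(f"week {week}: p={p:.3f}  {flag}")
```

A point outside the limits (week 6 in this fabricated data) signals a special cause worth investigating, which is exactly the trigger for the failure-analysis steps described below.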
Quality Assurance is the assessment portion of an overall approach to quality in a company.
If the QA program identifies areas of weakness or failure that management thinks need attention, then the steps on the chart above are followed.
1. QA identifies areas of failure or process results that are uncommon, persistent, or have too large a range of variation.
2. Management investigates, using root cause analysis, mystery shopping, or monitoring results, to identify what is causing the problem or variation in results. The cause usually lies in one of four key areas: Process, Technology (Equipment), People, or Methodology (Policies, Practices, and Measures). On determining the cause, management decides whether to change or maintain the existing process.
3. Process improvement, if determined to be required, designs and tests a new approach to handling the transaction or process.
4. The new process is measured and tracked to determine whether it improves results and whether those results stabilize into a normal range different from the previous process. If they do, this is accepted as the new standard.
5. The new standard is accepted as part of the QA process for reporting and quality purposes.
Let us know what you think of this article or any suggestions you have for future issues by email at