Why the Quality Listening Program Should Not Be a Performance Review
By: Colin Taylor
Let’s look at the numbers. Consider a customer service call center where the quality assurance program requires the evaluation of 4 calls per agent per month. The average agent will handle approximately 1,600 calls in that month, so the 4 calls evaluated represent only a quarter of a single percentage point. Put another way, we are evaluating and assessing only one out of every 400 calls! How representative was the second Tuesday of August? Treating that one Tuesday of last year as representative of the past fifteen months likely doesn’t make sense, and neither does basing an opinion of an agent’s performance on every 400th call. No matter how we try to examine these individual call assessments, the sample size is simply too small to have meaning. This is the fundamental problem with attempting to employ quality assurance scores as mini performance reviews.
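To make the arithmetic concrete, here is a minimal sketch in Python; the call volumes are the illustrative figures above, not data from any particular center:

```python
# Illustrative figures from the example above; assumptions, not measured data.
calls_handled_per_month = 1600   # approximate calls handled by one agent
calls_evaluated_per_month = 4    # calls scored under the QA program

sampled_fraction = calls_evaluated_per_month / calls_handled_per_month
print(f"Sampled fraction: {sampled_fraction:.2%}")                        # 0.25%
print(f"One call evaluated out of every {round(1 / sampled_fraction)}")  # 400
```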
Attempting to use your quality reviews as a performance assessment tool misses the primary objective of quality management. Quality assurance is about assuring the quality of the service being delivered. To whom is this assurance being made? The answer is senior management. The practice of assessing quality allows center management to gauge the performance of the center and of the individual agents within it. The value to center and senior managers of knowing the relative performance of the center, and of comparing and contrasting that performance with previous months, is significant. But perhaps the ability to identify how individual agents are performing is even more valuable. Knowing where agents stand helps us direct our efforts to improve the overall performance and quality of the center.
The objective isn’t just to identify problems and what agents are doing wrong, but also to identify what they are doing well. Both matter: addressing areas for improvement through coaching raises individual performance, and sharing best practices improves the overall performance of the center.
Performance reviews have a time and place, and that is your regularly scheduled performance review. The agent’s individual quality reviews may play a small role there, specifically as they relate to improvement over time as skills develop. Remember that an agent’s failure to improve, or to overcome performance deficiencies, is as much a censure of the coaching and skills-development staff and of the recruiting and staff-selection processes as it is of the agent in question.
The correct positioning of the quality program, its strengths and weaknesses, function and goals, is key to a well-functioning center. This positioning needs to be understood by both senior management and the agents, so that each can recognize their contribution and how all can help with the center’s success.
Participate in the #fiveideas and vote on the call center topic you would like addressed in the next post. You can vote here
Karyn,
Thank you so much for your comment; we appreciate your input. You cite the Hawthorne Effect as giving validity to an otherwise statistically insignificant sampling process. While I respect your point of view, I would challenge you on the validity of that conclusion.
The Hawthorne effect does suggest that observed individuals behave or perform better than unobserved individuals, for a limited time, as long as they suspect or know they are being observed. The effect, however, diminishes over time. More importantly, stating that performance improves while being observed does not in and of itself tell us whether the standard deviation narrowed. If the standard deviation, i.e. the range of quality from best call to worst call, remains unchanged, then even if the Hawthorne effect stayed in force over time, the variance is unchanged. Any improvement seen is simply understood in the context of where performance sits at that time, and the Hawthorne effect is thereby negated. This would be a case of ‘a high tide raising all boats’ rather than evidence that statistical validity has been rendered irrelevant.
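To illustrate this point, here is a minimal sketch using made-up per-call quality scores and an assumed uniform five-point ‘Hawthorne lift’; the lift moves the average, but the spread from best call to worst call is untouched:

```python
import statistics

# Hypothetical per-call quality scores for one agent (illustrative only).
baseline_scores = [62, 70, 75, 81, 88, 93]

# Model the Hawthorne effect as a uniform lift applied while being observed
# (the five-point figure is an assumption, not a measured effect).
hawthorne_lift = 5
observed_scores = [score + hawthorne_lift for score in baseline_scores]

print(statistics.mean(baseline_scores), statistics.stdev(baseline_scores))
print(statistics.mean(observed_scores), statistics.stdev(observed_scores))
# The mean rises by exactly the lift, but the standard deviation (the
# best-call-to-worst-call spread) is identical, so a small sample is
# no more representative than it was before.
```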
Other research even questions the utility and applicability of the Hawthorne Effect. Roethlisberger found that greater productivity resulted when management made workers feel valued and aware that their concerns were taken seriously. On the surface this is well aligned with the original findings of the Hawthorne (experimental) effect. The new aspect Roethlisberger introduced is that the effect occurs as a result of ‘vested self-interest’: the effect depends on the participants’ interpretation of the situation. It is not awareness per se, nor special attention per se, but the participants’ interpretation that must be investigated (Adair).
So even if the Hawthorne Effect is real, and even if the effect did not erode back to zero over time, if the participants, our call center agents, do not perceive and interpret the additional attention of monitoring and coaching to be in their best interests, then no effect occurs. This is another reason to discard the ‘catch them doing something bad’ model of quality assurance.
But that is not the only concern I have with this logic. Individual agents may make positive adjustments in response to monitoring and coaching because of the attention, or so we hope. Those adjustments may be long-lived or short-lived depending on the agent and the coaching involved. What is important, and the original point of the article, is that Quality Assurance (QA) programs are designed to identify the quality of the service provided. QA programs are not meant to serve as a weekly performance review. In many cases where a QA program is put in place, performance does rise, and this rise may be due to the Hawthorne effect, as you stated, to a novelty effect, or to the agents’ perception of benefit in doing so. But that does not account for the entire rise. With staff aware that supervisors and monitors are paying attention, there is likely greater concentration on doing what has been asked, especially among those to whom positive coaching and encouragement are provided.
However, the purpose of the QA program is to report what the quality actually is; hence the ‘Assurance’ part of the title. Centers often use the title Quality Assurance program when they mean Quality Monitoring (QM) or a quality listening program. A QM program is usually part of an overall QA program. A well-designed QM program can, and usually does, provide fast and effective (we hope) feedback to agents in order to improve how they perform the functions they’ve been asked to do. This feedback loop, and as stated above the positivity of this loop, is critical, especially in the early stages of an agent’s career in a call center. (For more on this subject see Talent is Overrated by Geoff Colvin.) Weekly monitoring and coaching can quickly become a form of performance review.
A full Quality Assurance program should, and usually does, take a broader view of the environment. A single agent and a small sample of calls are fine for QM, performance improvement, and coaching exercises for that agent. From a senior manager’s point of view, it is more important to be able to view the entire system: all agents or groups of agents, all calls and call types or large sub-categories, and the overall trend and control levels, in order to better manage the center or its programs.
So, to reiterate our original premise: quality assurance should not be a performance review. We would be happy to carry on this dialogue, or to speak directly to discuss this matter further. Thank you once again for your comments.
Hi Colin,
I found your article quite interesting; however, I disagree with the notion that monitoring only a small percentage of calls is not significant in the big picture. The agents’ knowledge that they are being randomly recorded and listened to is the big differentiator. This is called the Hawthorne Effect.
The Hawthorne Effect – According to wikipedia.org, the Hawthorne Effect is defined as: “An experimental effect in the direction expected but not for the reason expected; i.e., a significant positive effect that turns out to have no causal basis in the theoretical motivation for the intervention, but is apparently due to the effect on the participants of knowing themselves to be studied in connection with the outcomes measured.”
This is why it is important to listen to and monitor calls. Unfortunately, you cannot listen to all “1,600” calls, but a percentage can statistically represent the overall behavior of the agent. As experts in quality monitoring, we actually recommend no fewer than 2 audits per week per agent. This really paints a good picture of the overall behavior, be it good or unacceptable. It also provides opportunities for coaching and for praising certain behaviors.
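To give a rough sense of why more audits help, here is a minimal sketch; the 10-point spread in per-call scores is an assumption for illustration, not data from any center:

```python
import math

# Assumed spread of per-call quality scores on a 100-point scale.
score_stdev = 10.0

for audits_per_month in (4, 9):  # ~1 audit per week vs. ~2 audits per week
    # The standard error of the mean score shrinks with the square
    # root of the number of audited calls.
    std_error = score_stdev / math.sqrt(audits_per_month)
    print(f"{audits_per_month} audits/month: mean score pinned down to "
          f"about +/- {1.96 * std_error:.1f} points (95% confidence)")
```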
Thank you for all your great blogs and helping us all to achieve great Customer Satisfaction and Loyalty!
Colin:
I applaud you for this concise and clear article. The Elkind Group specializes in helping call centers transform their organizations from service to sales AND service. We have found that Quality Listening programs are in general very misunderstood, subsequently misused, and often abused. These programs represent a wonderful opportunity for managers to get and maintain a good pulse on what their front-line people are actually doing with their customers. When there is a clearly laid-out and communicated call flow to assess against, the Quality Listening program becomes a wonderful coaching tool and recognition opportunity. But for this to really work, the organization has to have a well-defined call flow that includes observable behaviors and can be used as a training and coaching tool. I invite you and your readers to The Elkind Group web site to learn more. Thanks again for such a good blog.
Colin,
Great article, and I agree. In my experience, Quality Monitoring programs are most effective as a means of reinforcing new-employee and ongoing training with your contact center staff.
Think about it: what better way does an organization have to follow up on a new procedure the staff were recently trained on? By making simple adjustments to the monitoring program, the QA staff can provide feedback to the training organization on its effectiveness, plus provide the contact center managers with coaching opportunities where the monitored skills were shown to be less than desired. It can be an effective tool in the performance evaluation, but only as it relates to the learning capacity of the employee and their ability to translate that into action.
But in an organization focused on delivering a consistent, awe-inspiring customer experience, it’s the feedback from customers that should play a significant role in the evaluation of the employee. This feedback, in the form of transaction surveys, should drive the employee coaching sessions. Reading the thoughts from the customer as to how well they were served while listening to the recorded call is a very powerful way to help the employees understand what you’re looking for in their performance.
By coupling the “internal” data from a QA program with the “external” voice-of-the-customer sentiment, you’ll have a complete 360-degree view of your rep’s performance!
Larry,
You make some really good points, thank you for sharing them with us.
Being able to connect customer feedback, the external satisfaction measure, to quality listening, our internal stand-in for customer satisfaction, is certainly powerful indeed. In my experience the results often come as a surprise to the center. Many times we discover that what we thought was important to the customer really isn’t. This level set is critical if we are going to make the required ‘course adjustments’ on our customer experience journey to customer satisfaction.
When a baseball pitcher hits the batter (HB) with the ball, it’s just one pitch, right? After all, a starting pitcher throws about 90 pitches a game over 30 games a year, for about 2,700 pitches a season. If only “a quarter of a single percentage” of those pitches end up hitting the batter, that’s fewer than 7 batters a year hit by that pitcher’s bad pitches. The career record for most hit batsmen is 205, held by Hall-of-Famer Walter Johnson. The season record is 54, by Phil Knell in 1891, and the game record is six, held by Ed Knouff and John Grimes.
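A quick back-of-the-envelope check of those figures, using the rough estimates above rather than actual league data:

```python
pitches_per_game = 90        # rough estimate for a starting pitcher
games_per_season = 30
season_pitches = pitches_per_game * games_per_season   # 2,700 pitches

hit_rate = 0.0025            # "a quarter of a single percentage"
print(season_pitches * hit_rate)   # 6.75, i.e. fewer than 7 batters hit
```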
While it’s unreasonable to judge a career solely on bad pitches, especially if I’m only seeing “a quarter of a single percentage” of that pitcher’s throws, and only some of them might be considered bad pitches, what am I missing in the other 99% I’m not seeing?
I agree quality listening should not be a performance review as you’ve described it; however, it should be a gauge to identify potential problem areas. In baseball, the batter dusts himself off and takes first base; in the business world, when you throw a bad pitch that hits the customer, they usually hang up and never call back.
Sean,
I like the analogy of ‘bad pitches’. I am not suggesting that quality listening should not be done; in fact, I am a strong believer that a robust quality program incorporating quality listening, coaching, training, and customer feedback is a must for any center. I agree that using it as a gauge, or even taking an ‘if there’s smoke, there’s fire’ approach, makes a lot of sense. Viewing quality listening as an early-warning system can be very helpful. What often isn’t helpful, and what can set an agent, a team, or even a center back, is trying to leverage this very slight insight into an agent’s calls and judge them solely on the 3 or 4 calls in question. You still coach and train; you just don’t turn the quality review into a performance review.
Other valuable approaches that can improve our insight into agents, within the parameters of a quality listening program, include:
– getting the agent to record their own calls
– asking the agent to record only their best or worst calls
– having peers assess the agent’s calls
All of the above approaches can broaden our perspective and add value to our quality listening program.
Thank you for joining the discussion.
Colin
I do disagree, Colin. Despite only looking at a minuscule percentage of calls, I have found that call scores are consistent and representative of the service offered.
This should be used as a tool, along with coaching and training, to ensure that maximum performance is achieved.
Kate,
Thank you for your comments. In my post I am not suggesting that there isn’t value in quality listening, simply that you cannot put too much stock in the results. I have seen centers where the results were in fact representative, and also centers where they were not. Often, if the coaching and training are effective, then even the quarter-of-one-percent scores will improve. I agree that quality listening is a tool and needs to be supported by effective coaching and training to gain performance improvements.
Thank you for sharing your point of view.
Colin