Calibration: Ensuring consistency in your Quality Assurance program
In previous articles we have examined what best practice organizations do to implement successful and effective Quality Assurance operations. We have reviewed the practices of assessing the quality of the agent voice stream, transactional quality, value versus fix transactions, monitoring forms, call recording and logging.
In this article we examine the often overlooked, though essential, activity of calibrating the quality assurance monitoring process. While the Quality Assurance program structure of best practice organizations, as espoused by TRG, endeavors to eliminate subjectivity as much as possible, it cannot be totally eliminated from the call monitoring process. Each individual who monitors a call will form their own opinion of the call based upon their own experience, knowledge and training.
The implementation of consistent training, coaching and management will, over time, reduce the variances between individual opinions as their experiences become more consistent and better aligned. This benefit exists within mature centers with consistent and stable training, management and methodologies in place. In many centers, however, this level of consistency does not exist at the point when quality assurance is introduced.
‘Calibration’ is the process of assessing the gap between individuals and/or organizations and, through active and structured review and comparison, reducing that gap.
The process by which we assess the gap, or the subjective differences in opinion, is an independent review of monitored calls. Each call is reviewed by every member of the Quality Assurance team who has responsibility for monitoring and coaching. Each member of the group scores the call using the approved monitoring form. The scores for each call from each team member are plotted on a spreadsheet.
During the calibration meeting each call is played and reviewed by the group. In sequence, each team member walks through their scoring for the call and the thinking and rationale behind their scores. The group reviews and discusses the various scores until they agree on a final score for the call. Each of the monitored calls is then reviewed in sequence until all calls have been reviewed. The final scores for each call are then plotted on the spreadsheet and a gap report is produced. The gap report illustrates the variance of the scores: for example, if the team members' scores for a specific call ranged from 7.2 to 9.0, the gap for that call would be 1.8 (the difference between the lowest and highest scores). The average of these per-call gaps is the overall gap result.
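The gap-report arithmetic described above can be sketched in a few lines of code. This is a minimal illustration, not a tool the article prescribes: the call names, reviewer scores and 0–10 scale are hypothetical, chosen only to mirror the 7.2-to-9.0 example.

```python
# Hypothetical calibration data: call ID -> one score per reviewer,
# on the same 0-10 scale as the example in the text.
scores = {
    "call_1": [7.2, 8.1, 9.0],
    "call_2": [8.5, 8.0, 8.8],
    "call_3": [6.9, 7.4, 7.1],
}

def call_gap(reviewer_scores):
    """Gap for one call: highest score minus lowest score."""
    return max(reviewer_scores) - min(reviewer_scores)

# Per-call gaps, e.g. call_1 -> 9.0 - 7.2 = 1.8
gaps = {call: call_gap(s) for call, s in scores.items()}

# Overall gap result: the average of the per-call gaps.
overall_gap = sum(gaps.values()) / len(gaps)

for call, gap in gaps.items():
    print(f"{call}: gap {gap:.1f}")
print(f"Overall gap: {overall_gap:.2f}")
```

A real spreadsheet-based gap report would also track these figures meeting by meeting, so the trend over time is visible.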
By meeting regularly and tracking gap performance and shifts over a three- to six-month period, it is often possible to virtually eliminate any gap between the individuals assessing the calls. Of course, this process does need to be repeated whenever new staff with monitoring responsibility join the department.
The result of the calibration process outlined above is a fair, reasonable and consistent Quality Assurance process that is virtually free of subjectivity.
Let us know what you think of this article or any suggestions you have for future issues by email at