The Art of Calibration
By John Cockerill
Calibration is the art of standardizing the measurement of call or transaction quality across and among those doing the work and those who review it. Without calibration, any quality program is open to charges of bias and unfair treatment, and its results can be inconsistent or ineffective. Knowing this is one thing; how and why calibration should be done, what it consists of, and when to complete it are questions regularly raised by people in call and contact centres.
Whether there is one assessor or many, a few agents or a lot, ensuring that the expectations of stakeholders are clear and well understood is always a challenge. A good practice is to develop a checklist with some form of scoring and a way to record the call or transaction scores. For many operations this is as far as their quality monitoring program goes. This structure leaves assessors doing their best, usually based on their own training and experience.
A superior approach is to ensure that everyone involved understands, reviews, scores and coaches using the same standard, and does so consistently regardless of the passage of time or changes in the people involved. This is the purpose and role of calibration.
While “call” is the term most often used for what is being monitored, the same approach and method can be applied to any transaction type with a little modification. If the centre handles text, chat, email, TTY or other media types, the approach to calibration is the same, and the same rationale of ensuring accurate reporting of quality applies.
What to Calibrate?
This depends upon what you want to measure. At most, all call types are assessed and calibrated. Alternatively, only the high-volume and most complex call types may be selected, on the belief that the low-volume and simple call types will be judged similarly. At minimum, the more important calls need calibration to establish a baseline for scoring. Other call and transaction types can be added later.
Criteria and scoring models need to be developed before calibrating. There are many approaches to establishing the criteria, from the simple to the complex. One consideration in creating the criteria is that the simpler they are, the easier they will be to complete, explain, control and apply. Identify what the natural sections of your call types are. There may be more or fewer sections. Here is a sample of sections:
• Opening or greeting
• Understanding Call
Each of these sections then has points or attributes that are expected to occur to one degree or another. Whether it is the simple use of the caller’s name, proper identification, or the correct diagnosis and solution, each is scored on some scale. Here are samples of attributes:
• Correct information provided
• Agent took correct steps in account
• Ticket created correctly and acted upon
• Applicable templates used
• Achieved FCR (first call resolution)
• Clarified customer questions and or paraphrased to ensure understanding
• Agent picked up nuances of complaint
• Repeated data gathered back to customer
Scoring models also vary, from a simple 0/1 for done or not done, with each attribute weighted the same, to very complex models: for example, scoring calls out of 100 with different weightings for each attribute or section. The proper diagnostic process could be scored on how close to the optimum it was. A rule of practice is that the simpler the scoring for any point, the better. A scale of 0 to 3 or 0 to 5 for any observation point works well for many. Zero means the action or behaviour is absent, 1 or 2 means OK, and 3 means stellar. Zero to five allows a bit more nuance and judgement. Be careful not to create too broad a scale, where you could lose specific granularity and the ability to effectively identify the variance between, say, a 7 and an 8.
Weighting of the individual attributes or sections is also a consideration. Do all sections get valued the same? Should they? Is calling the customer by name really all that important? Is using the customer’s name 15 times better than once? Or is solving the caller’s problem quickly and effectively of greater value to the organization and the customer?
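To make the section-weighting idea concrete, here is a minimal sketch of a weighted scoring model. The section names, the weights, and the 0-to-3 attribute scale are illustrative assumptions, not a standard; any real model would use the sections and weights your own calibration group agrees on.

```python
# Illustrative section weights (assumed values, not prescribed by the article).
SECTION_WEIGHTS = {"opening": 0.2, "understanding": 0.3, "resolution": 0.5}

def score_call(section_scores):
    """section_scores maps section name -> list of 0-3 attribute scores.
    Returns a weighted percentage score out of 100."""
    total = 0.0
    for section, scores in section_scores.items():
        max_points = 3 * len(scores)           # 3 is the top of the scale
        section_pct = sum(scores) / max_points # fraction earned in this section
        total += SECTION_WEIGHTS[section] * section_pct
    return round(total * 100, 1)

# One hypothetical call, scored attribute by attribute:
example = {
    "opening": [3, 2],            # greeting, used caller's name
    "understanding": [2, 3, 2],   # clarified, paraphrased, picked up nuances
    "resolution": [3, 1, 2],      # correct info, ticket handling, FCR
}
print(score_call(example))  # -> 73.3
```

With this structure, shifting weight from "opening" to "resolution" directly answers the question the article poses: whether solving the problem matters more than the pleasantries.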
With these weightings decided, a few more items are needed before starting a calibration session. The most important is a range of sample calls from different agents and times. The next step is to set the stage and expectations with whoever is to participate.
Who is involved?
Agents, supervisors or team leads, assessors or evaluators, managers and especially other key stakeholders. Key stakeholders can include program managers, clients, and people from marketing and sales. Their involvement is especially important in the initial sessions or the roll-out of new programs, where their voice at the table can ensure that what was designed is practicable and is put into practice.
Who participates in any calibration session depends upon the purpose of the session. If the purpose is to calibrate the assessors, especially if they are new to the work, then key stakeholders, marketing, sales, or more experienced managers should likely attend. In general, keep each session to fewer than 10 people so that all can participate, but more than 4 in order to surface differences of opinion, perception and point of view. If calibrating new or existing agents, be sure to include a manager or supervisor and at least one assessor. Again, keep the number of participants to a size where everyone gets an opportunity to discuss the calls.
How to Calibrate?
Calibration sessions by their nature need to be both open and participatory. At the same time, the sessions must be disciplined in order to get the work done frankly and without rancor or blame. Start each session with a brief review of the purpose, criteria, scoring, and expected behaviour during the session. The meeting chair should refrain from offering opinions about the calls reviewed; this preserves an impression of objectivity and a degree of separation. The chair should step in as arbiter and deciding authority in the case of disagreements that the participants themselves cannot resolve.
Provide each participant with enough scoring sheets or forms to evaluate all the calls involved. If your organization uses an application or system, be sure everyone brings a laptop or tablet that can access the required documents. Ensure that there is a list of the calls with some form of unique identification for each. Ask that there be no table discussion of the calls until opinions are asked for. This silence is important if bias and fixed frames of reference are to be minimized.
Play each call twice, or more if necessary. The first time through is to get a rough idea of the call: its purpose, flow and general impressions. Participants should quickly score their first impressions; they likely will not catch the whole call or have time to listen for and score every individual attribute. The second time through, the call can be stopped at any point to allow scoring or repeat listening. Gather the scores for each call, then ask each participant to express their opinion and explain why they scored the call the way they did.
When asking for opinions, ensure that the moderator asks the participants in reverse order of seniority. In other words, hear from the person with the least experience before the most experienced or authoritative. This is very important: it ensures that each person expresses their opinion with less reservation and without deferring to what they believe the group or the most senior person thinks.
Then look at the scores and the points of similarity and difference. Ask those with outlying scores to explain, and those who differ to provide their views. Where possible, encourage a consensus decision on how to score particular points and sections. Keep the discussion on point and focused. Replay and review calls as required. Focus on the work and the calls, not on the people in the samples chosen.
Keep track of the scores for each call, section and attribute. Expect that during the session, scores and outliers will narrow to a tighter range. Some practitioners like to set a target of only 5 to 10% variation in scores. This is doable but depends upon the people participating and the organization’s overall experience.
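One simple way to track that variation target is to check the spread of the participants’ scores for each call against the band. The 5 to 10% target comes from the text above; the sample scores and the choice of max-minus-min as the spread measure are illustrative assumptions.

```python
def within_target(scores, max_spread_pct=10.0, scale_max=100):
    """Return True if the spread of participants' scores for one call,
    expressed as a percentage of the scoring scale, is within the target."""
    spread = (max(scores) - min(scores)) / scale_max * 100
    return spread <= max_spread_pct

# Six participants score the same call out of 100:
call_scores = [78, 82, 75, 80, 79, 84]
print(within_target(call_scores))   # spread is 9 points -> True
```

Running this per call across a session makes the expected narrowing visible: early calls will often fail the check, later ones should pass.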
The first few calls will go slowly. They always do. As the participants learn to trust each other and gain an understanding of what is being looked for, the speed of each call’s review will increase. Expect a typical five-minute call to take 15 to 20 minutes to review, dropping to about 10 to 15 minutes by the end of the session. Therefore plan for up to 12 calls per session, and expect to cover about 6 to 10 of them in depth.
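The timing estimates above can be turned into rough session-length arithmetic. All the numbers here are the article’s rules of thumb; the split between early and later calls is an assumption for illustration.

```python
# Rough session-length estimate: early calls reviewed at the slow rate,
# later calls at the faster rate (split is an assumed example).
early_calls, later_calls = 4, 6
minutes = early_calls * 20 + later_calls * 15
print(minutes)  # -> 170, i.e. close to three hours before breaks
```

This is why a session list of 12 calls typically yields only 6 to 10 reviewed in depth: the clock runs out first.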
When to Calibrate?
Calibration must be done regularly to prevent variance from creeping in and invalidating the process. A few of the considerations for timing:
• Frequency of change of call types or requirement
• New campaigns or events
• New agents or assessors
• General or specific scores, by agent, agent cohort, section or attribute that cause concern
• Seasoned agents on rotation to ensure attention
Calibration is an important tool in any center. Doing it well takes practice, and this short guide will help. Repeated use will enable any center to improve the quality and reliability of call and transaction assessments. This improved reliability of quality monitoring equips everyone in the center to understand how well the operation is performing in its efforts to meet its goals and objectives.