Want to Make an Impact With Your Training Impact Study? Avoid These 4 Pitfalls
Training impact studies are common within the L&D community. Typically reserved for the most strategic or visible programs, these studies aim to show that a training program has produced new behaviors on the job and that those behaviors led to a business impact. Business impacts are measures critical to business leaders, such as increased sales, improved productivity or greater employee retention.
A well-executed training impact study can be revealing: it not only provides evidence of training impact, but also shows business leaders where improvements are warranted. This is where the value lies: specific recommendations on how to improve the training. Done right, the study can also identify how to get even more benefit from employees who have already been trained.
Unfortunately, too many impact studies are, well, not very impactful. They fail to produce meaningful action, process improvements or even acknowledgement that the findings have merit. This is a shame, not to mention a waste of time, money and energy. After all, if your impact study doesn’t effect change, of what use is it? And after all that investment, who wants their work to sit on a virtual shelf?
Why might your impact study fall victim to this fate? Here are four pitfalls you should avoid to ensure you get maximum impact from your impact study.
- No ownership for action
- Your stakeholders didn’t like or anticipate the results
- The users of the study question your evaluation methods or the credibility of your data
- The final report wasn’t action oriented
Let’s explore each of these pitfalls and discuss what you can do proactively to avoid them in the future.
1. No Ownership for Action
If your impact study is going to make an impact, then somebody (or several somebodies) needs to be accountable for acting on the recommendations. Unfortunately, ownership for action is often not defined when the study begins, and no one knows who’s on first. Is the training program manager accountable? Is there an identified business leader who can improve how managers support their trainees? When ownership responsibilities are unclear, no one feels on the hook to follow through.
What to do proactively: When you launch the study, identify who cares about the results and decide how and when to involve them. Have them sign off on your approach, assumptions and hypotheses. Finally, get them to agree on the role they will play at the study’s conclusion and the actions they are prepared to take.
2. Your Stakeholders Didn’t Like or Anticipate the Results
Stakeholders don’t like surprises. An unanticipated or negative result will rarely be well received. When this happens, the tendency is to question everything about the study, which in turn creates a reluctance to act (see Pitfall #3).
What to do proactively: Michael Quinn Patton wrote an invaluable book called “The Essentials of Utilization-Focused Evaluation.” In it, he suggests simulating the use of the findings. The simulation engages stakeholders to explore the range of possible results and their underlying root causes. What should you investigate if you find that the program is highly successful, but only with a subset of the population? What further data should you consider if you find that the program was well received, but fizzled when employees tried to apply the concepts in their work? The simulation process not only prepares stakeholders for possible negative results, but also helps identify, in advance, how to uncover root causes.
3. The Users of the Study Question Your Evaluation Methods or the Credibility of Your Data
Related to Pitfall #2, there is nothing quite like presenting the results of a months-long impact study only to have someone question your measurement approach or take pot shots at your data. A skeptic of your methods can undercut the findings and leave you exposed and vulnerable.
What to do proactively: When you engage your stakeholders, get their input on the project as well as on how you will assess impact and what data you will collect. Do they trust self-report data or consider it useless? Do they question the quality of the business data because no one has updated the demographics to reflect organizational changes? If you have skeptics in your midst (and who doesn’t?), identify them early. Talk to them earnestly about data integrity issues and seek their ideas on how to mitigate the risks. Most often, these same skeptics can suggest complementary methods that will put them at ease and improve the quality of your study.
4. The Final Report Wasn’t Action Oriented
In telecommunications, there is a phrase, “the last mile problem,” which refers to the challenge of connecting the final stretch of telecommunications infrastructure to the end consumer. In evaluation, we have a serious last-mile problem of our own. How many reports have you read that are filled with statistical jargon, detailed tables or poor visualizations that provide no insight and don’t suggest what should be done differently? Audiences sit through these presentations but often have no clue what they should do differently or what action they should take.
What to do proactively: In his book, Patton cites a 1:3:25 rule of thumb from the Canadian Health Services Research Foundation: one page of main messages and relevant conclusions, a three-page executive summary of the main findings, and a twenty-five-page comprehensive, plain-language report. Keep your findings and recommendations succinct. Eliminate jargon. Be explicit about what should happen next and who owns each action.
In summary, begin with the end in mind before you launch your study. Think about who cares about the training program, how to get them on board and how to involve them. Identify the accountable parties before you start. Set stakeholder expectations for the possible findings of your study. Gain their support for your evaluation approach and the type of data you will collect. Finally, make your recommendations action focused. By engaging the right people throughout the process, your training impact study has a good chance of making a meaningful impact.