A way to measure the impact of a learning experience

May 12, 2022

Understanding the impact of a medical simulation can be tricky. Let’s say you’ve had a group of first-year residents go through hands-on, mastery-level training where they practice inserting a central line into a simulated neck vein to deliver high-risk medications.

Following the simulation, a majority say they found the content relevant and plan to use what they’ve learned in practice. They can even correctly demonstrate the procedure under the watchful eye of a facilitator. However, when it comes to performing the technique in a high-pressure setting, they can’t do it.

In this hypothetical situation, what might have happened?

“Competency is not the same as self-confidence,” said Dr. Lisa Barker, medical director of simulation for Jump Simulation. “As interns, they may know how to insert the central line but may not believe they can do it on their own. That’s why we believe there is also value in asking learners to rate their ability to accomplish a task after a learning experience.”

Relevancy of the self-confidence measurement

Over the last few years, the Jump Simulation Education and Research teams have been studying whether self-efficacy is a good indicator of an impactful simulation event.

“However, instead of asking learners to rate their abilities before they’ve taken part in a simulation and then after they’ve completed it, they are asked retrospectively. This means trainees have to think back to how they would’ve evaluated themselves prior to the simulation as well as after the training,” Dr. Barker said. “This allows them to reflect on how much they've grown after they've experienced the content.”

Retrospectively evaluating self-efficacy following a simulation is also easier operationally. There is no need to use extra resources to ensure pre- and post-surveys are completed by the same people. The information provided after an event may also be more accurate because learners can reflect on their true level of confidence before going into the simulation and immediately after.

The main benefit though is that this question can help Jump determine the impact of a learning experience.

“When we see a narrow change in self-efficacy ratings, that leads us to ask a couple of questions,” said Dr. Barker. “Do we need to adjust our content or delivery? Did we understand the learner mix? This essentially prompts us to do a deeper exploration of the content.”

The teams’ initial research confirmed an inverse correlation between change in self-efficacy and training level. This means that the gap between learners’ self-confidence before and after a simulation narrowed as their experience increased.
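To make the analysis concrete, here is a minimal sketch of how a retrospective pre/post self-efficacy comparison might look. All data, rating scales, and variable names below are invented for illustration; they are not Jump Simulation's actual instrument or results.

```python
# Hypothetical illustration of a retrospective pre/post self-efficacy analysis.
# Ratings, scale (1-5), and sample data are assumptions, not real study data.
from collections import defaultdict

# After the session, each learner retrospectively rates their confidence
# before the simulation ("pre") and after it ("post").
learners = [
    {"training_year": 1, "pre": 2, "post": 5},
    {"training_year": 1, "pre": 1, "post": 4},
    {"training_year": 2, "pre": 3, "post": 5},
    {"training_year": 3, "pre": 4, "post": 5},
    {"training_year": 3, "pre": 5, "post": 5},
]

# Change in self-efficacy for each learner.
for person in learners:
    person["change"] = person["post"] - person["pre"]

# Average change grouped by training year: an inverse relationship
# shows up as the gap narrowing with experience.
by_year = defaultdict(list)
for person in learners:
    by_year[person["training_year"]].append(person["change"])

avg_change = {year: sum(vals) / len(vals) for year, vals in sorted(by_year.items())}
print(avg_change)  # {1: 3.0, 2: 2.0, 3: 0.5}
```

In this toy sample, first-year learners show the largest confidence gain and third-year learners the smallest, which is the narrowing pattern the article describes.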

Take a deeper dive

Jump Education and Research completed a manuscript on this topic and presented a poster at this year’s International Meeting on Simulation in Healthcare.


FEATURED AUTHOR

Denise Molina-Weiger is a Creative and Digital Writer for OSF HealthCare, where she has worked since March 2015. She initially came to OSF to write about the work taking place at the Jump Trading Simulation & Education Center, one of the largest simulation and innovation centers in the world, and went on to become the Media Relations Coordinator for OSF Innovation, which was developed to help the hospital system lead the way in transforming care. Before joining the OSF HealthCare team, Denise was a reporter for Peoria Public Radio for ten years, writing on everything from politics, housing and transportation issues to hospital care in the region.


