A META-ANALYSIS, SYSTEMATIC REVIEW, AND EMPIRICAL EVALUATION OF SIMULATION-BASED TRAINING IN VETERINARY EDUCATION
One of the greatest challenges in veterinary education is adequately preparing students with the clinical skills they need to be successful healthcare providers. Integrating simulation-based medical education (SBME) approaches into the veterinary curriculum can help address challenges to clinical instruction. The SBME model of learning aims to enhance clinical skills and increase patient safety by providing standardized learning opportunities. Despite the evidence for the effectiveness of simulation-based training in human healthcare education, the effectiveness of this instructional modality in veterinary education remains unclear. The purpose of the first phase of this research project was to address this gap in the literature by comprehensively searching, critically appraising, and carefully synthesizing the evidence to inform policy and guide future development of simulation-based resources in veterinary education. A systematic search identified 416 potential manuscripts, from which 60 articles were retained after application of the inclusion criteria; information was extracted from 71 independent experiments. The overall weighted mean effect size was small under the fixed-effect model (g = 0.33) and stronger under the mixed-effects model (g = 0.49). All outcome measures of knowledge and clinical skills produced statistically significant (p < .001) mean effect sizes in favor of simulation, ranging from d = 0.35 to 0.79. A moderator analysis was conducted for study characteristics and quality, as well as for best-practice features of instruction and research design. Moderators that positively influenced the effect size of simulation training were incorporated into the design of the second phase of the research project.
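The fixed-effect and mixed/random-effects pooling described above can be sketched in a few lines. This is a minimal illustration of inverse-variance weighting and the DerSimonian-Laird between-study variance estimate, not the analysis actually run in the study; the per-study effect sizes and variances below are hypothetical.

```python
def pool_fixed(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

def pool_random(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird tau-squared)."""
    k = len(effects)
    weights = [1.0 / v for v in variances]
    pooled_fe = pool_fixed(effects, variances)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(w * (g - pooled_fe) ** 2 for w, g in zip(weights, effects))
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    re_weights = [1.0 / (v + tau2) for v in variances]
    return sum(w * g for w, g in zip(re_weights, effects)) / sum(re_weights)

# Hypothetical per-study Hedges' g values and sampling variances.
# With equal variances, both models return the simple mean (0.5);
# the models diverge when study precisions differ.
g_values = [0.2, 0.5, 0.8]
v_values = [0.04, 0.04, 0.04]
print(round(pool_fixed(g_values, v_values), 3))   # 0.5
print(round(pool_random(g_values, v_values), 3))  # 0.5
```

The random-effects estimate down-weights precise studies relative to the fixed-effect model when between-study heterogeneity (tau-squared) is large, which is one reason the two pooled values reported above (g = 0.33 vs. g = 0.49) can differ.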
The second phase was an empirical study demonstrating that when an anesthesia simulation-based course was designed according to best-practice recommendations, pretest-posttest scores for knowledge and self-efficacy increased significantly. Course participants also received higher ratings than a matched control group on clinical task performance and professional skills (i.e., communication and collaboration) when evaluated by blinded, external raters using a standardized rubric.