Simulation and Quality in Clinical Education
Abstract
Background:
Simulation-based education (SBE) has become commonplace in healthcare education within hospitals, higher education institutions, the private healthcare sector, and private education providers. The standards and quality of delivery vary across the UK [1], leading to differing degrees of learning for healthcare professionals. This variance in standards makes the impact of SBE on the end user (the patient) difficult to measure.
Review:
The delivery of SBE needs to be of a high standard if learning via this pedagogy is to be maximised and benefits to patients are to be accurately assessed. This article aims to summarise the importance of quality within clinical SBE and how it can be achieved and maintained to produce a measurable impact on patient care. Current progress on the implementation of UK national standards for SBE is included to highlight the need for standardisation and for guidance that supports simulation centres and individuals in benchmarking practice and working towards accreditation through quality measurement and monitoring processes. Suggestions are made on how such standards will affect the future of SBE and all those involved.
Conclusion:
There is a clear need for the development of national standards for SBE delivery, with a stepped approach (i.e. minimum, intermediate, and advanced standards) depending on the size, capacity, and frequency of SBE delivery. Considerable financial outlay will be required to monitor standards effectively. The enhanced use of current and future technologies should be considered, both for monitoring standards and for data collection to support future research opportunities.
1. BACKGROUND
Simulation can be defined simply as, “A tool, device and/or environment that mimics an aspect of clinical care” [2]. The concept is not new and, while its roots are firmly planted in the aviation industry, it has become an embedded pedagogy within healthcare education over the last few decades, primarily due to the published evidence supporting its effectiveness in a learning environment [3, 4]. There is no doubt that, if delivered effectively, it is of clear benefit to clinicians as far as performance is concerned [5]. Appealing to a number of learning styles, simulation-based education (SBE) offers targeted learning experiences where knowledge, skills, and attitudes can be learned and refined within a safe and supportive environment [6]. The ability to replicate specific clinical scenarios with immersive and interactive participation from learners (both individuals and teams) is a powerful tool with which technical and non-technical skills can be enhanced, clinical performance assessed, and care pathways and clinical processes tested and refined [7, 8]. The literature views SBE positively [9-18], citing benefits of this pedagogy that include:
- Increasing patient safety
- Developing critical thinking, diagnostic reasoning, and decision making
- Enhancing teaching of non-technical skills
- Increasing participants’ satisfaction with the learning experience
- Potentially reducing demands on clinical placement providers for undergraduate students
While one would assume that the success of SBE in clinical education would flow downstream to benefit the quality of patient care, there is limited published evidence to support this. McGaghie et al. [19] suggest that research to date has focused on measuring learner feedback on the SBE activity itself and on measuring the impact of SBE on learners’ knowledge and skills. Research should now focus on determining the impact on patient outcomes and the wider public health agenda, as well as on skill and knowledge retention over time. This pattern of research is likely due to SBE being a relatively new concept compared with other pedagogies, and it follows translational learning and research models [20]. A review of the literature supports this claim: in contrast to the volume of evidence supporting the impact of simulated practice on healthcare professionals’ education, there is relatively little research demonstrating that this learning translates into improvements in patient outcomes. The few studies that have been published focus on secondary care, with an emphasis on the medical workforce [21-25]. Findings range from unequivocal improvements to small, statistically non-significant positive changes in patient outcomes, and focus on detecting latent error as well as driving forward quality improvement processes. The positive correlation with improved patient outcomes appears to strengthen when team training is used. Riley et al. [26] implemented team simulation training with the intention of reducing birth trauma within a community hospital; their results showed a 37% drop in trauma following the training. Smith, Siassakos, Crofts and Draycott [27] report that team training using simulation has improved perinatal care and outcomes, decreased litigation claims, and reduced midwifery sick leave. Statistically significant improvements in the quality of clinical care delivered were also demonstrated following advanced cardiac life support training for medical residents [28].
Published systematic reviews [29-31] support the above findings. Zendejas, Brydges, Wang and Cook [32] examined 50 studies comparing the outcomes of simulated practice with either no intervention or non-simulated instruction; patient outcomes were enhanced, but the improvement did not reach statistical significance. Surprisingly, studies demonstrated that using SBE for assessments related to patient outcomes works better in the early career years or for experienced clinicians, but appears less effective for those in mid-training [33]. One potential explanation is the pressure to perform well: early-career clinicians would not yet be expected to have the required knowledge, while experienced clinicians will have gained the necessary skills and knowledge over time and feel more comfortable in their role. This would fit with Benner’s [34] concept of moving from novice to expert, where the competent practitioner in mid-career becomes more aware of their long-term goals and gaps in knowledge, thereby intensifying the pressure to achieve. Burrell and Bienstock [35] make the valid point that competence is an individual characteristic. As such, learners should be treated as individuals, with recognition that the acquisition of skills will take differing lengths of time.
Braga et al. [31] focused on just-in-time simulation (i.e. simulated training that takes place shortly before the procedure is performed in the clinical setting). While learner performance was enhanced, there was no published evidence of reduced patient complication rates. While anecdotal evidence suggests that the pedagogy of SBE does have a wide-reaching impact on patient outcomes, to prove and measure this, quality needs to be achieved and maintained in two key areas: the SBE activity itself and the simulation-based research (SBR) processes utilised.
While the research mentioned above focuses on actual patient outcomes, these are often difficult to measure for healthcare educational establishments that are not associated with teaching hospitals. Interestingly, Brydges, Hatala, Zendejas, Erwin and Cook [33] undertook a systematic review and meta-analysis focusing on simulation-based assessments as surrogates for patient-related outcomes. They suggest that, provided valid and reliable tools are used to measure these outcomes, this format of measuring the impact of SBE may become common practice in the future. This approach would certainly remove some barriers within this realm of SBE research.
This article aims to summarise the importance of quality within clinical SBE and how it can be achieved and maintained to produce a measurable impact on patient care; but to achieve “quality”, it must first be defined. Definitions abound, but all affirm that to measure quality, a benchmark or standard must be set against which to measure activity. The Oxford English Dictionary [36] defines quality as, “the standard of something as measured against other things of a similar kind; the degree of excellence of something.” The GMC definition includes, “all the policies, standards, systems and processes that are in place to maintain and improve the quality of medical education” [37]. Quality frameworks for SBE developed by regional networks and groups refer to a narrative of what good quality looks like, recognition of best practice, and a level of excellence that acts as guidance for simulation and clinical skills providers and drives quality improvement [38, 39]. Health Education England, in its latest Quality Framework document, refers to “a national and local ambition for quality in education and training” [40]. In the words of Lord Kelvin, “If you cannot measure it, you cannot improve it” [41].
2. QUALITY STANDARDS FOR SIMULATION-BASED EDUCATION
In 2012-13, the Association for Simulated Practice in Healthcare (ASPiH) [42] conducted a National Simulation Development Project [1], supported by Health Education England [43] and the Higher Education Academy [44]. The aim was to map the resources available and the application of SBE and technology-enhanced learning (TEL) across the United Kingdom. A key concern identified in this report was the need for national guidance related to quality indicators and SBE standards of practice. Such guidance would need to be of relevance and value, and easily accessible, to the increasing number and breadth of organisations, departments, and individuals designing and delivering SBE.
As a direct result of the National Project, ASPiH established a standards committee which, consulting with educationalists, professionals, and experts in the field, developed draft standards for SBE [45]. Both the first and second consultations, supported by Health Education England, confirmed that for SBE to achieve its full potential, an agreed quality standards framework is required. The majority of UK simulation centres, educational institutions, and practitioners support this requirement for national standards [46]. In our opinion, there is no doubt that their adoption and application would support and enhance the delivery of SBE, allowing for a more rigorous, consistent standard of practice and providing a benchmark to strive towards in order to achieve and maintain quality, parity, and inclusiveness. It is noteworthy that their development has triggered lengthy discussion around the use of the word ‘standard’ and the mandatory connotations it may carry when compared with the standards professional bodies set in defining their requirements for education, training, and patient safety, such as the General Medical Council’s Promoting excellence: standards for medical education and training [37] and the Nursing and Midwifery Council’s Quality Assurance Framework Part Three: Assuring the Safety and Effectiveness of Practice Learning [47]. By comparison, the Resuscitation Council is very clear about terminology within its standards and compliance, using the terms must, should, and recommends to make clear which elements are mandatory [48]. ASPiH needs to be mindful of this in the context of its framework. If the mandatory implications are removed, the standards may take on a much greater aspirational and best-practice significance. A number of organisations and individuals have already expressed concern about the levels of attainment and about the challenges and impact that working towards certain elements of the standards may have on their staffing, resources, and finances [49]. Interestingly, others counter this argument, feeling that introduction of the standards may in fact provide leverage for funding and for more adequate and appropriate resourcing. Hopefully, the latter will prevail.
The latest version of the ASPiH Standards Framework includes four themes: faculty, activity, resources, and technical personnel, with the overall aim of providing the “opportunity to associate high quality SBE with improvement in care quality outcomes and system improvement” [50]. ASPiH is very cognisant of the development and use of regional frameworks [51, 52] and of the availability of standards and processes for accreditation. In the United States, two organisations have developed standards: the International Nursing Association for Clinical Simulation and Learning (INACSL), with its Standards of Best Practice [53], and the Society for Simulation in Healthcare (SSH), with its Accreditation Standards [54]. Although some UK organisations use these standards for reference, guidance, and partial adoption, the INACSL Standards of Best Practice do not include the environmental aspects of creating a simulated scenario or relevant quality assurance frameworks [55]. There is currently no simulation accreditation process widely used in the UK; the SSH accreditation standards have substantial cost implications, and no UK organisation has yet gone through the process [56].
There is no doubt that such ambition or aspiration for quality necessitates standards for SBE, but evidencing achievement, and progressing to recognition through accreditation, requires additional commitment and is regarded as the final step in most quality assurance processes [1]. For most simulation centres and individuals, such an accolade will be preceded by a period of working towards the standards: improving, putting systems in place, and providing the evidence (i.e. measurement against the standards of SBE). In the long term, measurement and monitoring activities will aim to drive quality improvement of SBE; however, compliance with and delivery of such activities could be time-consuming and arduous. The second consultation exercise identified a variety of voluntary accreditation processes for the UK national standards in SBE: benchmarking practices, online reporting, self and peer review, and periodic face-to-face audit [46]. Raising the standards of SBE delivery would, however, allow a more robust research strategy to be implemented, enabling definitive outcome measures to be addressed.
3. QUALITY STANDARDS FOR SIMULATION-BASED RESEARCH
Despite the plethora of evidence supporting the use of SBE, Cheng et al. [2] claim that educational research in the health professions is often poorly designed and its findings inconsistently or poorly documented. They argue that many researchers utilise methodologies that reflect traditional educational research, whereas simulation-based research (SBR) has unique features that are often not considered in the designs or methodologies described. Research specific to SBE and healthcare has found that studies have omitted aspects such as instructional design, setting the context, and outcomes [57]. A further review identified that only 3% of studies utilising a debriefing following SBE (a key element) documented the essential elements required [58]. For appraisers of the research, parts of the process are therefore often missing, which can be frustrating. It could be argued that the inconsistent approach to the many elements of SBR reflects the inconsistent way that SBE is carried out in both NHS trusts and higher education institutions (HEIs). The introduction of the standards may encourage academics and researchers alike to consider the unique methodological challenges faced when carrying out SBR. Cheng et al. [2] have contributed to solving this issue by suggesting additions to existing reporting guidelines that reflect the unique qualities of SBR.
It has been commented that SBE in the health setting has sprung up out of necessity rather than from a robust evidence base, with factors such as the changing face of the NHS and increased demand for placements cited as potential drivers [59]. Others argue that the main driving factor for SBE is patient safety [12, 60-67]. The drivers may differ depending on the clinical speciality. Whatever the drivers or motivations, the general consensus within the field appears to be that a consistent approach to this pedagogy needs to be adopted to ultimately ensure its quality. The standards are an attempt to develop this pedagogy and provide the consistent approach (along with an evidence base) that is currently missing. Historically, SBE has been developed by pockets of simulation enthusiasts, sometimes with very basic equipment and training. Despite continued investment in SBE, equity of access to specialised centres is still recognised as a potential barrier to the development of this technique [68]. Perhaps there is a real risk that the standards could heighten this problem in the short term. They are a benchmark for what quality SBE should look like, and achieving some of them will inevitably require investment, not only in buildings and equipment but in ensuring that the facilitators (clinicians, educationalists, and learning technologists) delivering SBE are appropriately trained and supervised. Even centres with the infrastructure to cope with the new demands will find adopting the standards a challenge. Careful consideration needs to be given to those centres that lack the resilience to achieve the benchmark in the short term. The risk is that they carry on delivering SBE but do so (through no fault of their own) while compromising some of the standards. A further risk is that these centres might not engage in future developments, potentially leading to independent SBE providers whose quality (in terms of the ASPiH standards) could not be assured.
In support of the patient safety agenda, Deutsch et al. [69] argue that SBE offers a unique opportunity to carry out research into human factors (HF) within healthcare. Human factors have been defined by Catchpole [70] as “Enhancing clinical performance through an understanding of the effects of teamwork, tasks, equipment, workspace, culture, and organisation on human behaviour and abilities and application of that knowledge in clinical settings.”
One of the most complex hurdles academics and researchers must overcome when undertaking research is gaining ethical approval, especially when dealing with patients. SBE provides the opportunity to undertake research in simulated clinical environments with members of the interdisciplinary team, but without exposing real patients to any direct risks [71, 72]. Deutsch et al. [69] argue that SBE offers the HF researcher several unique opportunities. At an organisational level, simulations can be used to observe how leaders at different levels respond to patient safety issues, how they apply policy and procedure, and how they assess risk [69]. Simulation also allows potential risks (often referred to as latent risks) to be identified and acted upon before they cause harm [73]. SBE further provides the opportunity to develop innovative ways of working and problem solving, especially within complex teams [69].
4. DISCUSSION
SBE is here to stay, with professional organisations encouraging its use in curricula and in clinical and educational practice [74-77]. In some areas, the investment in equipment, dedicated facilities, and personnel who support SBE has been significant. However, the drivers and standards guiding SBE have been focused on those who have the infrastructure to support it rather than on robust methodologies and evidence. Delivering SBE in a quality-assured way (whatever the definition of quality) requires a benchmark standard against which a baseline can be achieved. Without such standards, a baseline will never be achieved, and SBE will continue to be delivered by enthusiasts who, whatever their motivations or resources, could miss the bigger picture: providing high-quality education in an effective manner so as to maximise the benefits of SBE for the learners, their current and future employers, and the simulation centre or programme. One approach that could be adopted in the short term is a stepped one, whereby individuals and centres delivering SBE must meet a minimum set of criteria documented by the standards, with progression to higher levels of approval achievable as individuals and centres develop and investment increases. The difficulty lies in deciding what the minimum standards are; set too high, they risk setting providers up to fail before they begin.
Curran, cited in Riley [78], writes in reference to SBE that “the capability of the trainer as an educator limits or expands the effectiveness of the teaching; the more versatile and competent the trainer, the more likely they are to be effective”. This supports the notion that developing the faculty may be a sensible starting point. Ensuring that all faculty (educators, researchers, clinicians, and learning technologists) are aware of the learning theories underpinning not just traditional education but also the elements unique to simulation will help to provide a better learning experience, and may lead to new theories that have not yet been explored within education. A greater understanding of the pedagogy will give researchers the insight to develop new or adapted methodologies to capture the unique data that SBE may provide. With continued advances in technology, including system integration such as electronic medical records and programmes enabling the measurement of simulated patient and manikin parameters (proxy patient outcomes), education, training, and research within SBE are ideally placed to address the areas of practice where clinical errors are most prevalent (e.g. prescribing, patient monitoring) [79].
CONCLUSION
As in the aviation industry, simulation is rapidly becoming the industry standard for education and training. The key catalyst for its adoption in aviation was the clear link to enhanced pilot and passenger safety [80]. In healthcare, if a similar link between SBE and improved patient outcomes can be established through robust research, incorporating both the standards for SBE [45] and an enhanced research framework [2], its development is likely to be supported continually in the years to come.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
Not applicable.
HUMAN AND ANIMAL RIGHTS
No animals or humans were used in studies that form the basis of this research.
CONSENT FOR PUBLICATION
Not applicable.
CONFLICT OF INTEREST
The authors declare no conflict of interest, financial or otherwise.
ACKNOWLEDGEMENTS
Declared none.