BACKGROUND
E-learning in medical education can help alleviate the severe shortages of health workers in many low- and middle-income countries. Over the past few decades, rapid technological development has produced an abundance of new resources, including personal computers, smartphones, handheld devices, software and the Internet – at constantly decreasing costs. Consequently, educational interventions increasingly integrate e-learning to tackle the challenges of health workforce development and training. However, evaluations of e-learning interventions still lack a clear methodology for assessing the effectiveness and success of e-learning in medical education, especially in the countries where it is most needed.
OBJECTIVE
Our specific research aim was to systematically describe the evaluation methods and definitions of success currently used in e-learning interventions for medical doctors and medical students in low- and middle-income countries. Our long-term objective is to contribute to generating effective and robust e-learning interventions that address critical health worker shortages in low- and middle-income countries.
METHODS
Seven databases were searched for e-learning interventions in medical education in low- and middle-income countries, covering publications from January 2007 to June 2017. We derived search terms following a preliminary review of relevant literature and included studies published in English that implemented asynchronous e-learning for medical doctors and/or medical students in a low- or middle-income country. Three reviewers screened the references, assessed study quality, and synthesized the information extracted from the literature.
RESULTS
We included 52 studies representing a total of 12,294 participants. Most e-learning interventions were evaluated summatively (83%) and within pilot studies (73%), relying mainly on quantitative methods such as questionnaires (45%) and/or knowledge testing (36%). We identified a lack of evaluation standards for medical e-learning interventions: the included studies varied considerably in study quality (generally low, as assessed with the MERSQI, NOS, and NOS-E scales), study period (ranging from 5 days to 6 years), assessment methods (6 different main methods) and outcome measures (52 different outcomes in total), as well as in their interpretation of intervention success. The majority of studies relied on subjective measures and self-made evaluation frameworks, resulting in low comparability and validity of evidence. Most of the included studies reported that their e-learning intervention was successful.
CONCLUSIONS
The evaluation of e-learning interventions needs to produce meaningful and comparable results. Currently, most evaluations of e-learning approaches to educating medical doctors and medical students are based on self-reported measures that do not adhere to a standard evaluation framework. While the majority of studies report success of their e-learning interventions – suggesting the potential benefits of e-learning – the overall low quality of the evidence makes it difficult to draw firm conclusions. Methods development, study design guidance, and standardization of evaluation outcomes and approaches for e-learning interventions will be important for this field of education research to prosper. Methodological strength and standardization are particularly important because the majority of existing studies evaluate pilot interventions. Rigorous evidence of pilot success can improve the chances of scaling and sustaining e-learning approaches for health workers.