Preliminary results from a survey on the use of metrics and evaluation strategies among mHealth projects
Patricia Mechael, Nadi Kaonga
Center for Global Health and Economic Development at the Earth Institute, Columbia University
CORE Group Spring Meeting, April 30, 2010
1. Preliminary results from a survey on the use of metrics and evaluation strategies among mHealth projects. Garrett Mehl, Franz Allmayer, Heli Bathija (Department of Reproductive Health and Research, WHO, Switzerland); Patricia Mechael, Nadi Kaonga (Center for Global Health and Economic Development at the Earth Institute, Columbia University)
4. Survey Reach. All countries highlighted on the map contain an mHealth project covered by the survey; countries with multiple projects: USA: 5, Peru: 7, Haiti: 2, Mexico: 3, India: 6, Pakistan: 4, Kenya: 11, Uganda: 10, South Africa: 4, Tanzania: 5, Ghana: 3, Nigeria: 5, Malawi: 2, Philippines: 4. Jordan is also represented.
7. Current Phase of Projects: needs assessment 8.8%; usability testing 8.8%; pilot not for scaling 7.4%; pilot for scaling 57.4%; large-scale implementation 17.6%
10. Clustering of mHealth objectives: stock, crisis response, time savings, reduced referrals, communication, provider skills, lower costs, quality/safety, compliance, service demand, service access, client information, provider information
15. Focus of Evaluation Assessment Did the intervention result in improvements in:
16. Number of evaluation questions tracked: costs, sustainability, behavior changes, health outcomes, knowledge and attitudes, performance, quality of care, service utilization
17. Type of evaluation approach: descriptive 29.8%; cross-sectional 44.7%; longitudinal 40%; case-control 12.8%; stepped wedge 8.5%
26. Assistance Requested: “We need a systematic approach to analyzing the data we have collected over the past 3 years.”
27. “We also need to learn what is the norm for "success" in this field and how we stack up to normal interventions vs. other mHealth projects working on [similar] technology.”
28. “We need guidance on evaluation methods for mHealth.”
29. “We are interested in collaborative approaches and standard indicators that will be measured across the different mHealth programs.”
30. “How to assess the impact of [our] mHealth tool.”
31. Thank you. For more information, or to submit your mHealth project to the survey, please send an email to: Dr. Garrett Mehl [email_address]
Editors' Notes
A 2006 report by the Center for Global Development entitled "When Will We Ever Learn?" concluded that too many missed opportunities for collection and analysis of programme impacts lead to continued funding for ineffective and inefficient programmes. The report highlighted the need to invest in rigorous impact evaluation from the outset of programme implementation. While mHealth demonstrates potential for improving the coverage, quality, efficiency and responsiveness of health systems, with likely benefits for health service utilisation, behaviour and outcomes, robust evidence to support this assumption is lacking. With mHealth now poised to move beyond the proof-of-concept stage, the time is ripe to consider how best to conduct rigorous impact evaluation and monitor implementation. To begin to address this recognized gap in evidence, a partnership between the Center for Global Health and Economic Development at Columbia University and the Department of Reproductive Health and Research at WHO is implementing a survey to identify the nature and extent of monitoring and evaluation in the design and delivery of ongoing mHealth projects. This presentation reports on preliminary findings from data collected thus far: the levels and rigour of reported monitoring and evaluation efforts among current projects, the domains of health and uses of mHealth that are a focus of evaluation efforts, the general strength of evaluation design, the quality of metrics being used at different levels (input, output, outcome, impact), and the drivers behind monitoring and evaluation.
This work is complementary to the mapping of published studies conducted by the Earth Institute and supported by the mHealth Alliance, which looks at the strength of evidence in support of mHealth interventions. We recognized that the field of mHealth is moving quickly and that a considerable number of projects, representing huge investments, are currently being conducted. Some of these projects may not report results for some time, if ever. We wanted to ensure that the survey results represented a wide range of mHealth project types, from early-stage projects to those that have been implemented for some time, and from small projects to large-scale projects. The aim of the survey was to understand the current standards and specific needs that mHealth projects have related to project monitoring and evaluation, funding, and capacity building, in relation to the types and approaches of mHealth intervention.
This presentation represents preliminary findings. The survey was distributed through HIFA, MobileActive, the Communication Initiative, Global-link, and GlobalHealthDelivery, and publicized via Twitter. Individuals were also drawn from databases of known implementers of mHealth projects, and emails were sent to them.
A total of 70 projects were included in the analysis, representing the contributions of 50 individuals from 29 countries. We tried to reach all mHealth projects, but we feel that the sample is at this stage biased toward individuals implementing projects they are proud of, or projects of a level of sophistication that implementers would want to share. Small-scale mHealth projects and failures are underrepresented. The average number of users among projects focused on health workers was 24 (maximum 162); among projects focused on clients/patients it was 13,500 (most projects had 1,000 users or fewer).
The majority of the projects, 76%, are less than two years old.
A considerable proportion of projects in our sample focused on child or sexual and reproductive health issues.
While the sample included projects at a range of project phases, the majority were at the self-reported “pilot” stage.
This is a list of project objectives in descending frequency. Projects might have more than one objective. We have tried to group the objectives according to client or provider focus, and those aimed at improving system efficiencies. It is worth noting that projects with a client focus were the most frequent, followed by those with a provider focus.
Among those projects where there is a difference in strategic focus, we see that child health projects are more health-system focused (data collection, surveillance, and provider point-of-care support), while SRH projects are more client focused (reminders, prevention and health promotion, treatment compliance).
This is a hierarchical cluster analysis of project objectives. Items that are closer together on the dendrogram are more likely to appear in the same project, which is useful for understanding which project objectives cluster with others. For example, projects with an information focus, seen at the bottom of the dendrogram, are also likely to focus on improving access or demand; mHealth projects focused on communication are likely to focus on improving efficiencies and reducing unneeded referrals.
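A minimal sketch of the kind of agglomerative (hierarchical) clustering behind such a dendrogram, assuming each objective is represented as a binary presence/absence vector across projects and compared with Jaccard distance under single linkage. The objective names and data below are hypothetical illustrations, not the survey data.

```python
# Single-linkage hierarchical clustering of objectives by project co-occurrence.
# Objectives appearing in the same projects merge early (small distance).

def jaccard_distance(a, b):
    """1 - |intersection|/|union| over the projects using each objective."""
    union = sum(1 for x, y in zip(a, b) if x or y)
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 1.0 if union == 0 else 1.0 - inter / union

def single_linkage(profiles):
    """Repeatedly merge the two closest clusters; return the merge order."""
    clusters = {name: [name] for name in profiles}
    merges = []
    while len(clusters) > 1:
        pairs = [(min(jaccard_distance(profiles[i], profiles[j])
                      for i in m1 for j in m2), k1, k2)
                 for k1, m1 in clusters.items()
                 for k2, m2 in clusters.items() if k1 < k2]
        d, k1, k2 = min(pairs)
        merges.append((k1, k2, d))
        clusters[k1 + "+" + k2] = clusters.pop(k1) + clusters.pop(k2)
    return merges

# Hypothetical presence/absence of each objective across six projects
profiles = {
    "client_info":    (1, 1, 0, 0, 1, 0),
    "service_access": (1, 1, 0, 0, 1, 0),
    "communication":  (0, 0, 1, 1, 0, 1),
    "reduce_referral":(0, 0, 1, 1, 0, 0),
}
for left, right, dist in single_linkage(profiles):
    print(left, right, round(dist, 2))
```

Here client_info and service_access merge first (they occur in exactly the same projects), mirroring the slide's point that information-focused objectives cluster with access and demand.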
This Health Metrics Network diagram represents the common domains of measurement for health information systems. It is high-level and not entirely relevant to mHealth, but its measurement domains are useful to review, as they are indicative of what an ideal mHealth project might be measuring for impact assessment. Determinants of health: indicators include socioeconomic, environmental, behavioural, demographic and genetic determinants or risk factors. Such indicators characterize the contextual environments in which the health system operates. Health system: indicators include inputs to the health system and related processes such as policy, organization, human and financial resources, health infrastructure, equipment and supplies. There are also output indicators such as health service availability and quality, as well as information availability and quality. Finally, there are immediate health system outcome indicators such as service coverage and utilization. Health status: indicators include levels of mortality, morbidity, disability and well-being.
Each of these measurement domains is critical to understanding how an intervention improves health in a setting: inputs into the project (e.g., tracking staff time, trainings, raw materials that have gone into the project); provision of mHealth services (e.g., tracking how many messages have been sent); utilization of mHealth services (e.g., tracking how many messages have been received and responded to); coverage of mHealth services (e.g., tracking what proportion of the target population received the message); outcomes of mHealth services (e.g., tracking whether members of the target population have changed their behavior as a result of the message); and costs of mHealth project implementation.
This pie chart shows how many monitoring domains were a focus of individual mHealth projects. Over 50% included 5-6 monitoring domains; 35% included three or fewer.
As we reported earlier, the greatest proportion of projects included evaluation and monitoring.
We asked projects that were focused on evaluation which questions were a focus of their evaluation. While the numbers were higher than we would have expected, it is worth noting that the most difficult questions to address (health outcomes, quality of care) are the least well represented among the evaluation assessment questions.
We assessed how many evaluation questions were being tackled by individual projects. Notably, well over half of the projects included attempts to evaluate more than 5 question domains.
The evaluation approaches varied; note that projects could report more than one approach.
We developed an index to assess the rigor of the evaluation approach. It included items such as self-reported data collection at baseline and post-intervention, whether there was a control group, whether it was a matched control group, whether sample size calculations were performed, randomization, etc.
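A minimal sketch of the kind of additive rigor index described above, assuming one point per self-reported design feature. The item names and equal weighting are assumptions for illustration; the actual index used in the survey analysis is not reproduced here.

```python
# Additive rigor index: one point per self-reported design feature.
# Item names are hypothetical, based on the features listed in the notes.
RIGOR_ITEMS = [
    "baseline_data_collected",
    "post_intervention_data_collected",
    "control_group",
    "matched_control_group",
    "sample_size_calculated",
    "randomized",
]

def rigor_score(project):
    """Count how many rigor items a project reports satisfying."""
    return sum(1 for item in RIGOR_ITEMS if project.get(item, False))

# Hypothetical project: baseline + endline data and an unmatched control group
project = {
    "baseline_data_collected": True,
    "post_intervention_data_collected": True,
    "control_group": True,
    "randomized": False,
}
print(rigor_score(project))  # 3
```

An equally weighted sum keeps the index transparent; a weighted variant (e.g., extra points for randomization) would be a natural refinement.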
We developed two indexes: one representing the level of reporting required by a project's supporters, and a second representing the diversity of funding sources. Each was a significant predictor of the level of monitoring: projects whose supporters required reporting showed higher levels of monitoring, and projects with a greater diversity of supporters showed higher levels of monitoring.
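A minimal sketch of testing an index as a predictor of monitoring level via simple linear regression (ordinary least squares). The data below are hypothetical illustrations; the survey's actual regression analysis is not reproduced here.

```python
# OLS fit of monitoring level on a reporting-requirement index (y = a + b*x).
from statistics import mean

def ols_fit(x, y):
    """Return intercept a and slope b of the least-squares line y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical: reporting-requirement index vs. number of monitoring domains
reporting_index = [0, 1, 1, 2, 2, 3]
monitoring_level = [1, 2, 3, 4, 5, 6]
a, b = ols_fit(reporting_index, monitoring_level)
print(round(a, 3), round(b, 3))  # positive slope: more required reporting,
                                 # more monitoring domains
```

A real analysis would also report a standard error or p-value for the slope to back the "significant predictor" claim; the sketch shows only the direction of association.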
Phases: needs assessment, scaling up, large-scale implementation. Projects at an earlier phase of implementation incorporate higher levels of monitoring. The difference between phases in level of monitoring is significant at p < .05 (ANOVA, difference of means).
Projects at an earlier time of implementation incorporate higher levels of evaluation.
The difference between phases in rigor of evaluation is significant at p < .005 (ANOVA, difference of means).
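A minimal sketch of the one-way ANOVA (difference of means across phases) referenced above, assuming a rigor score per project grouped by phase. The scores below are hypothetical; the sketch computes only the F statistic, which would then be compared against an F distribution to obtain the p-value.

```python
# One-way ANOVA: does mean evaluation rigor differ across project phases?
from statistics import mean

def one_way_anova_f(groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    k = len(groups)          # number of groups (phases)
    n = len(all_scores)      # total observations
    # Between-group sum of squares: group means' spread around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: scores' spread around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical rigor scores for three phases (pilot, scaling, large-scale)
phases = [[2, 3, 2, 3], [4, 5, 4, 5], [6, 7, 6, 7]]
f, dfb, dfw = one_way_anova_f(phases)
print(f, dfb, dfw)  # F is compared against an F(dfb, dfw) distribution for p
```

A large F (between-group variance dominating within-group variance) is what yields the small p-values reported in the notes.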
Each index was also a significant predictor of the rigor of evaluation: projects whose supporters required reporting showed higher rigor of evaluation, as did projects with a greater diversity of supporters.
We asked projects to specify what types of assistance they would need.
Projects at an earlier phase request higher levels of assistance (significant at p < .05).