Are Health Apps Effective? The State of the Science

May 16, 2018

By: John Sharp, Senior Manager, Personal Connected Health Alliance

Two recent review articles examine whether the evidence is sufficient for providers to recommend or prescribe mobile health apps. The first is an AHRQ-commissioned report on apps for diabetes. The second is an article from Nature Digital Medicine reviewing randomized clinical trials of health apps. Both address some of the limitations of studies on health apps.

The AHRQ technical brief, Mobile Health Applications for Self-Management of Diabetes, reviewed seven publications (six of them randomized clinical trials, or RCTs) evaluating five commercially available apps. In addition to the literature review, the report drew on interviews with key informants who are experts in diabetes. The findings include an analysis of risk of bias across the studies, as well as limitations in statistical and clinical efficacy, generalizability, usability, and study duration, and note that the evidence was too limited to detect patterns among cost, features and efficacy.

The conclusions included recommendations for future research (for example, how to evaluate apps that may be constantly changing) and implications for clinicians and patients (how strong evidence can help inform choices, and the need for evidence to be available in app stores). Probably the most important recommendation is that diabetes decision tools need to be patient-centered, so that patients choose a tool based not just on personal preferences but also on evidence.

The Nature article, Prescribable mHealth apps identified from an overview of systematic reviews, is a review of reviews, examining “six systematic reviews including 23 RCTs evaluating 22 available apps.” This article also assessed the risk of bias in these reviews. Judged against standard evidence guidelines, most of the evidence was of low quality: “Most of the app trials were pilot studies, which tested the feasibility of the interventions on small populations for short durations.” High attrition rates also weakened the quality of the evidence. Given the difficulty of finding apps with solid evidence, the authors recommend initiatives like the NHS Apps Library to help providers identify evidence-based apps. Finally, they recommend “encouraging app effectiveness testing prior to release, designing less biased trials, and conducting better reviews with robust risk of bias assessments.”

My take: Four issues in these reviews need to be addressed.

  1. The field of digital health is in its early stages, so pilot studies with short-term follow-up are typical. This also means that study protocols are not well established or standardized.
  2. The RCT approach is not necessarily the right way to study the effectiveness of apps and personal health devices. RCTs are the standard method for comparing drugs, not devices or Software as a Medical Device (SaMD) – the U.S. Food and Drug Administration (FDA) has acknowledged that this category should be viewed differently, as features and programming may change during a clinical trial. Over the long-term follow-up typical of an RCT, the technology may even go out of production. Creating a placebo study arm is also challenging.
  3. Prescribing an app that lacks a health behavior change strategy is likely to fail: such apps often rest on faulty assumptions about incentives or other approaches to behavior change that have no theory base or evidence behind them. Apps and devices that include a coaching element appear to be more effective.
  4. Apps and devices appear to be more effective when personalized to an individual’s lifestyle and approach to wellness or condition management. RCTs are typically not personalized; they use large samples to demonstrate an effect size across large populations.