Keller et al. (2023) Receptivity to Mobile Health Interventions Study Guide
Roman Keller, Florian v. Wangenheim, Jacqueline Mair & Tobias Kowatsch · Chapter 6 in Digital Therapeutics for Mental Health and Addiction (Elsevier) · 2023
The effectiveness of mobile health interventions depends not only on what support is delivered but on when it reaches the person. Delivering support into a receptive state — one in which the individual can receive, process, and act on the message — is a critical and underexplored design variable in JITAIs.
Research Question
Under what conditions are individuals able to receive, process, and use support from mobile health interventions? What contextual and personal factors predict receptive states, and how can MHI designers leverage these signals to deliver interventions at opportune moments?
The Receptivity Problem
Most MHIs deliver interventions without considering whether the recipient is ready to engage. Poorly timed support is not merely wasted — it can actively cause harm through unwanted interruption, cognitive disruption, increased stress, and notification fatigue. In severe vulnerability scenarios, a missed intervention window can have life-threatening consequences.
The Anatomy of an Ideal MHI
Keller et al. propose that an ideal JITAI-based MHI continuously senses both vulnerable states (transient tendencies toward adverse health behaviors) and receptive states (readiness to engage with support). Given both conditions, it delivers the optimal evidence-based dose, assesses impact, and iterates — forming a closed-loop learning system. Critically, this requires unobtrusive, ideally contactless, passive sensing to avoid adding burden on the user.
From Interruptibility to Receptivity
The concept of receptivity evolved from the ubiquitous computing tradition of interruptibility research, which sought to reduce notification burden by identifying natural breakpoints. The chapter extends this frame: receptivity goes beyond “can this person be interrupted?” to ask whether they can fully receive, cognitively process, and behaviorally implement the support — a three-layer gate that any effective JITAI must pass through.
The Evidence Landscape
A 2017 systematic review found that context-aware notification timing improves response rates, though its findings were not generalizable due to small, homogeneous samples. Machine learning studies report receptivity improvements of up to 40% over random delivery (Mishra et al., 2021) and over 66% better engagement prediction than baseline (Pielot et al., 2017). Adaptive models that learn from individual data show improving performance over time, pointing toward personalized, dynamic delivery architectures.
The 16 Receptivity Signal Factors (Table 6.1)
| # | Factor | Type | Key Evidence Summary |
|---|---|---|---|
| 1 | Activity | Contextual | Less complex tasks, idle/relaxed states linked to higher receptivity; mixed evidence for physical activity type |
| 2 | Breakpoints | Contextual | Activity transitions and natural interaction breakpoints (post-call, post-SMS) indicate opportune moments |
| 3 | Location | Contextual | Mixed evidence; home/work linked to higher receptivity; shopping malls, social settings linked to lower |
| 4 | Time of Day | Contextual | Lower receptivity in morning; increases through the day; mixed weekend vs. weekday results |
| 5 | Personality | Intrinsic | Conscientious and neurotic individuals tend to be more receptive; extroverts more reactive to notifications |
| 6 | Sender | Content | Notifications from significant others (partner, family) trigger higher receptivity than system or service senders |
| 7 | Device Battery | Contextual | Higher battery level linked to higher receptivity; fully charged battery linked to lower (proxy for inactivity) |
| 8 | Device Interaction | Contextual | Recent phone use (e.g., recent unlock event) predicts higher receptivity |
| 9 | Age | Intrinsic | Older individuals tend to be more receptive |
| 10 | Mood | Intrinsic | Happy/energetic states linked to higher receptivity; stressed states to lower receptivity |
| 11 | Bluetooth Signal | Contextual | Changes in nearby Bluetooth device count may signal context shifts linked to receptivity |
| 12 | Communication Patterns | Contextual | Less time since last call or SMS linked to greater receptivity |
| 13 | Alert Modality | Intrinsic/Device | Vibration mode (vs. silent or sound) linked to higher receptivity |
| 14 | Device Type | Intrinsic/Device | Android users tend to be more receptive than Apple smartphone users in one study |
| 15 | Social Setting | Contextual | Being alone or without active social interaction positively influences receptivity |
| 16 | Notification Content | Content | Entertainment value, relevance, actionability, and personal interest all positively influence receptivity |
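The chapter presents these factors qualitatively; as a minimal illustration of how several of the passively collectable signals from the table might be combined into a single delivery decision, the sketch below uses a hand-tuned logistic score. The feature names, weights, and threshold are illustrative assumptions, not values from the chapter or from any cited model.

```python
import math

# Hypothetical weights, signed to match the evidence directions in Table 6.1
# (e.g., recent device interaction and higher battery -> higher receptivity;
# mornings and active social settings -> lower). Magnitudes are invented.
WEIGHTS = {
    "recent_unlock": 1.2,       # factor 8: recent device interaction
    "battery_level": 0.8,       # factor 7: battery level (0.0-1.0)
    "at_home_or_work": 0.5,     # factor 3: location (mixed evidence)
    "is_morning": -0.6,         # factor 4: time of day
    "in_social_setting": -0.7,  # factor 15: social setting
}
BIAS = -0.5

def receptivity_score(features: dict) -> float:
    """Map context features to a receptivity score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * float(v)
                   for k, v in features.items() if k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def should_deliver(features: dict, threshold: float = 0.5) -> bool:
    """Deliver only when the estimated receptivity clears a threshold."""
    return receptivity_score(features) >= threshold
```

In practice a learned model (as in Künzler et al. 2019 or Mishra et al. 2021) would replace the hand-set weights, but the decision structure, score the current context and gate delivery on a threshold, is the same.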
Chapter Type at a Glance
This is a book chapter combining a conceptual framework with a narrative literature review. It introduces the “ideal MHI anatomy” and a three-process model of receptivity (receiving, processing, using), then reviews 16 empirically identified signal factors drawn from the interruptibility and mHealth literatures. The chapter concludes with a structured discussion of implementation challenges and future directions. No new primary data are collected; the contribution is theoretical synthesis and evidence mapping.
Literature Scope
- Interruptibility tradition from ubiquitous computing (Ho & Intille 2005; Okoshi et al. 2015a, 2015b, 2017; Fischer et al. 2010, 2011; Mehrotra et al. 2015, 2016)
- mHealth receptivity studies: Künzler et al. (2019), n=189, 6-week mHealth study; Mishra et al. (2021), n=83, 3-week physical activity intervention; Pielot et al. (2017), n=337, 4-week content engagement study; Morrison et al. (2017), n=77, stress management intervention
- JITAI theoretical foundation: Nahum-Shani et al. (2015, 2018)
- One systematic review with meta-analysis: Künzler, Kramer & Kowatsch (2017)
The 16-Factor Synthesis
- Factors ranked by number of independent investigations across reviewed studies
- Each factor accompanied by a summary of evidence direction and quality
- Factors classified as contextual (passively collectable) vs. intrinsic (requiring self-report or inference) — relevant for practical MHI implementation
- Most investigated factors: Activity (#1), Breakpoints (#2), Location (#3), Time (#4), Personality (#5)
Machine Learning Evidence
- Künzler et al. (2019): combined intrinsic + contextual features in ML models; significant improvements over baseline interruptibility detection
- Pielot et al. (2017): passive sensing model achieves over 66% better prediction of content engagement vs. baseline
- Mishra et al. (2021): static pretrained model and adaptive model both improve receptivity by up to 40% vs. random delivery; adaptive model shows continuous improvement over 3 weeks
- Morrison et al. (2017): prediction-model-timed notifications did not outperform daily delivery in stress management context — a null result highlighting the nascent state of the field
Evidence Quality Assessment
- Most primary studies: small samples (typically 20–50 participants), short duration (days to weeks), healthy young adults or university students
- Only one meta-analysis available (Künzler et al. 2017) — findings not generalizable due to high homogeneity and small primary study samples
- No studies in clinical or at-risk populations representative of actual JITAI targets (e.g., substance use disorders, depression, severe obesity)
- Causal influence of most factors is inconclusive — most evidence is associative/observational
- Microrandomized trials recommended as future gold standard for causal estimation
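The microrandomized design recommended above randomizes treatment at every decision point with a known probability, which is what licenses causal estimates of timing effects. The sketch below shows that assignment logic; the availability rule and treatment probability are illustrative assumptions, not a specification from the chapter.

```python
import random

def mrt_decision(available: bool, p_treat: float = 0.5, rng=None) -> dict:
    """One decision point in a microrandomized trial (MRT).

    Whenever the participant is 'available' (e.g., not driving, sensors
    active), treatment is assigned with a known probability p_treat.
    Logging availability, assignment, and the probability used is what
    later permits unbiased causal estimation of proximal timing effects.
    """
    rng = rng or random.Random()
    treated = available and rng.random() < p_treat
    return {"available": available, "treated": treated, "p_treat": p_treat}
```

Over many decision points the treated fraction among available moments converges to `p_treat`, giving the randomization balance that observational receptivity studies lack.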
Nahum-Shani et al. (2015) — JITAI Framework
Why it matters
The theoretical bedrock for the entire chapter. Nahum-Shani et al. define vulnerable states and establish the JITAI framework — including the definition of receptivity as the condition in which a person can receive, process, and use support. Keller et al. build their anatomy of an “ideal MHI” directly on this foundation.
Key contribution
Operationalizes the construct of receptivity within a JITAI design framework, distinguishing it from vulnerability and providing design guidance for intervention authors. Establishes the logic of closed-loop adaptive health systems.
Künzler et al. (2019) — State-of-Receptivity for mHealth
Study design
6-week mHealth study with 189 participants, examining associations between receptivity and intrinsic factors (device type, age, gender, personality) and contextual factors (time of delivery, battery level, device interaction, physical activity, location). ML models trained on combined factors.
Key findings
Higher response rates associated with older age, neuroticism, mid-day timing (10am–6pm), home or workplace location, higher battery level, active device interaction, and walking activity. ML models combining intrinsic and contextual features significantly outperformed a baseline model in receptivity detection.
Mishra et al. (2021) — Detecting Receptivity in Natural Environment
Study design
3-week RCT with 83 participants in a smartphone-based physical activity chatbot intervention. Three conditions: (1) static pretrained ML model (using Künzler 2019 data), (2) adaptive ML model updated daily with new individual data, (3) random delivery control.
Key findings
Both ML models improved receptivity by up to 40% over random delivery. The adaptive model showed continuous improvement over the 3 weeks as it accumulated more individual data — suggesting that personalization gains grow with deployment time. This is a key argument for individualized, learning MHI architectures.
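To make the adaptive-versus-static distinction concrete, here is a toy per-individual estimator in the spirit of Mishra et al.'s daily-updated model: a smoothed response rate per hour of day that starts from a population prior and drifts toward the individual's own data as observations accumulate. The estimator, prior values, and hour-of-day feature are simplifying assumptions, not the study's actual ML pipeline.

```python
from collections import defaultdict

class AdaptiveReceptivityModel:
    """Toy adaptive model: smoothed per-hour response rate.

    The prior dominates predictions early in deployment; with more
    individual data, estimates shift toward the observed rates --
    mirroring why adaptive models keep improving over time.
    """

    def __init__(self, prior_rate: float = 0.3, prior_weight: float = 5.0):
        self.prior_rate = prior_rate      # assumed population response rate
        self.prior_weight = prior_weight  # pseudo-observations backing the prior
        self.sent = defaultdict(float)
        self.responded = defaultdict(float)

    def update(self, hour: int, responded: bool) -> None:
        """Record the outcome of one delivered notification."""
        self.sent[hour] += 1.0
        self.responded[hour] += 1.0 if responded else 0.0

    def predict(self, hour: int) -> float:
        """Smoothed estimate of response probability at this hour."""
        num = self.responded[hour] + self.prior_rate * self.prior_weight
        den = self.sent[hour] + self.prior_weight
        return num / den

    def best_hour(self, hours=range(24)) -> int:
        """Hour with the highest estimated receptivity."""
        return max(hours, key=self.predict)
```

A static pretrained model corresponds to freezing the prior and never calling `update`; the gap between the two is exactly the "continuous improvement" Mishra et al. observed.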
Pielot et al. (2017) — Beyond Interruptibility: Predicting Opportune Moments
Study design
4-week study with 337 participants. Primary task: report mood via notification-triggered questionnaire. Secondary goal: analyze voluntary engagement with 8 diverse content types (games, news, videos, etc.) delivered at the end of each questionnaire.
Key findings
Higher battery level, recent phone interaction, higher ambient noise, less light variance, older age, and later time of day all predicted engagement. Passive sensing model achieved over 66% better prediction vs. baseline. Extends receptivity research beyond simple notification receipt toward deeper content engagement — addressing the “processing” layer.
Okoshi et al. (2015a, 2015b, 2017) — Breakpoint Notification Systems
Three linked studies
- 2015a: a controlled study (n=37) plus a 16-day in-the-wild study (n=27) used device interaction events (screen viewing, window transitions) to detect breakpoints, reducing perceived cognitive load by 46% and 33% respectively vs. random timing.
- 2015b: added physical activity recognition; a 1-month study (n=41) achieved roughly 72% greater workload reduction than the previous system.
- 2017: a large-scale deployment with 680,000 users over 3 weeks showed that deferring notifications to opportune moments reduced response time by roughly 50%.
Significance
Provides the empirical foundation for breakpoint-based receptivity detection at scale. The 2017 result is particularly notable — 680,000 users is orders of magnitude larger than any mHealth receptivity study, lending ecological validity to the breakpoint approach.
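The core breakpoint idea can be sketched as a simple filter over a device event stream: flag moments where an engagement-ending event is followed by a pause. The event names and the gap rule below are assumptions for illustration, not Okoshi et al.'s actual detector.

```python
def detect_breakpoints(events, gap_seconds=2.0):
    """Flag candidate breakpoints in a (timestamp, event_name) stream.

    A breakpoint is flagged when an engagement-ending event (screen off,
    app closed, call or SMS finished) is followed by at least
    gap_seconds of inactivity, or by nothing at all -- a simplified
    stand-in for screen-viewing / window-transition based detection.
    """
    ENDING = {"screen_off", "app_closed", "call_ended", "sms_sent"}
    breakpoints = []
    for i, (t, name) in enumerate(events):
        if name not in ENDING:
            continue
        next_t = events[i + 1][0] if i + 1 < len(events) else None
        if next_t is None or next_t - t >= gap_seconds:
            breakpoints.append(t)
    return breakpoints
```

A deployed system would run this incrementally on live events and defer pending notifications until the next flagged moment, which is the mechanism behind the response-time reductions reported above.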
Künzler, Kramer & Kowatsch (2017) — Meta-Analysis of Context-Aware Notifications
Scope and finding
The only systematic review with meta-analysis to date covering context-aware notification management for mobile applications. Finding: systems designed to intervene at more opportune moments yield greater response rates; some evidence for reduced response times.
Limitation
Authors note findings are not generalizable due to small primary study sample sizes and high participant homogeneity. This limitation motivates the chapter’s call for larger, more diverse trials using microrandomized designs in clinical and at-risk populations.
This chapter sits at the intersection of two literature streams: ubiquitous computing (interruptibility management, breakpoint detection) and mHealth/JITAI research (vulnerable state detection, precision support delivery). The JITAI framework from Nahum-Shani et al. (2015, 2018) provides the unifying theoretical structure, while empirical contributions from Künzler, Mishra, Pielot, and Okoshi populate the 16-factor receptivity model. The chapter explicitly calls for future convergence between behavioral medicine, clinical psychology, information systems, software engineering, and computer science.
For mobile health interventions to reach their potential, the when of delivery must be treated as a therapeutic variable — as important as what is delivered. A receptivity-capable MHI that detects and predicts opportune moments will outperform one that ignores the recipient’s state, and the evidence base to build such systems is now within reach.
Timing is a therapeutic variable
When support is delivered matters as much as what is delivered. Poorly timed notifications are not merely ineffective — they can cause harm through cognitive disruption, stress, and notification fatigue. In severe vulnerability scenarios, missing an opportune window can have life-threatening consequences.
Receptivity is a three-layer gate
Before support can work, three conditions must hold simultaneously: the person must be able to receive (perceive the notification), process (evaluate sender, content, and implications), and use (act on) the support. Failing any one layer renders the intervention ineffective, regardless of its clinical quality.
Sixteen signal factors span person and context
Activity, location, time of day, personality, mood, device state, social context, and communication patterns all carry predictive signal for receptivity. No single factor dominates; machine learning models that combine multiple signals consistently outperform single-factor heuristics or random delivery.
Adaptive models outperform static ones
Systems that continuously update with individual behavioral data improve receptivity prediction over time (Mishra et al., 2021). This supports the case for personalized, learning MHI architectures. The longer the deployment and the more individual data accumulated, the better the prediction — a fundamental argument for individualized over population-level delivery.
Evidence quality remains immature
Most prior studies used small samples (typically 20–50 participants), short durations, and healthy young adults — populations not representative of clinical or at-risk JITAI targets. Only one meta-analysis exists, and its findings are not generalizable. Microrandomized trials with larger, more diverse populations are urgently needed to establish causal evidence.
Technical and ethical friction threatens real-world deployment
Battery drain from passive sensing, OS-level restrictions, frequent software updates breaking sensor streams, data privacy requirements, and stacked notification behavior all create implementation barriers that current research has not fully resolved. Paradoxically, the very sensing infrastructure needed for receptivity detection can prompt users to uninstall the app — requiring careful design tradeoffs and transparent user communication.
