Receptivity to Mobile Health Interventions: Study Guide
GIVEMEA Study Guide · Digital Health / mHealth

Roman Keller, Florian von Wangenheim, Jacqueline Mair & Tobias Kowatsch · Chapter 6 in Digital Therapeutics for Mental Health and Addiction (Elsevier) · 2023

Book Chapter · Narrative Review · Conceptual Framework · mHealth / JITAIs · DOI: 10.1016/B978-0-323-90045-4.00006-X

16 · Signal Factors
3 · Key Processes
~40% · Receptivity Gain (ML vs. random)
~50% · Response Time Reduction (Okoshi 2017)
Central Argument
The effectiveness of mobile health interventions depends not only on what support is delivered but on when it reaches the person. Delivering support into a receptive state — one in which the individual can receive, process, and act on the message — is a critical and underexplored design variable in JITAIs.

Research Question

Under what conditions are individuals able to receive, process, and use support from mobile health interventions? What contextual and personal factors predict receptive states, and how can MHI designers leverage these signals to deliver interventions at opportune moments?

The Receptivity Problem

Most MHIs deliver interventions without considering whether the recipient is ready to engage. Poorly timed support is not merely wasted — it can actively cause harm through unwanted interruption, cognitive disruption, increased stress, and notification fatigue. In severe vulnerability scenarios, a missed intervention window can have life-threatening consequences.

The Anatomy of an Ideal MHI

Keller et al. propose that an ideal JITAI-based MHI continuously senses both vulnerable states (transient tendencies toward adverse health behaviors) and receptive states (readiness to engage with support). Given both conditions, it delivers the optimal evidence-based dose, assesses impact, and iterates — forming a closed-loop learning system. Critically, this requires unobtrusive, ideally contactless, passive sensing to avoid adding burden on the user.
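A minimal sketch of this sense–decide–deliver–learn loop, assuming hypothetical detectors and a stubbed delivery step; all names, features, and thresholds here are illustrative, not taken from the chapter:

```python
from dataclasses import dataclass, field

@dataclass
class ClosedLoopMHI:
    """Illustrative closed-loop JITAI controller: sense -> decide -> deliver -> learn."""
    outcomes: list = field(default_factory=list)

    def is_vulnerable(self, sensed: dict) -> bool:
        # Hypothetical vulnerability detector (e.g., a stress proxy from passive sensing).
        return sensed.get("stress_score", 0.0) > 0.7

    def is_receptive(self, sensed: dict) -> bool:
        # Hypothetical receptivity detector (e.g., a recent unlock while idle).
        return sensed.get("recent_unlock", False) and sensed.get("activity") == "idle"

    def deliver(self, sensed: dict) -> bool:
        # Stub: push the intervention and observe whether the user engaged.
        return True

    def step(self, sensed: dict) -> bool:
        """One decision point: deliver only at an opportune moment, then record feedback."""
        if self.is_vulnerable(sensed) and self.is_receptive(sensed):
            self.outcomes.append(self.deliver(sensed))  # feedback for later refinement
            return True
        return False  # withhold: person is not vulnerable, not receptive, or neither

mhi = ClosedLoopMHI()
mhi.step({"stress_score": 0.9, "recent_unlock": True, "activity": "idle"})      # delivers
mhi.step({"stress_score": 0.9, "recent_unlock": False, "activity": "walking"})  # withholds
```

The key design point is the conjunction: delivery requires the vulnerable and receptive states to hold at the same decision point, and every delivery feeds an outcome back into the loop.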

From Interruptibility to Receptivity

The concept of receptivity evolved from the ubiquitous computing tradition of interruptibility research, which sought to reduce notification burden by identifying natural breakpoints. The chapter extends this frame: receptivity goes beyond “can this person be interrupted?” to ask whether they can fully receive, cognitively process, and behaviorally implement the support — a three-layer gate that any effective JITAI must pass through.

The Evidence Landscape

A 2017 systematic review with meta-analysis (Künzler, Kramer & Kowatsch, 2017) confirmed that context-aware notification timing improves response rates, but its findings were not generalizable due to small, homogeneous samples. More recent ML studies report receptivity improvements of up to 40% over random delivery (Mishra et al., 2021) and over 66% better prediction of content engagement than baseline (Pielot et al., 2017). Adaptive models that learn from individual data show improving performance over time — pointing toward personalized, dynamic delivery architectures.

The 16 Receptivity Signal Factors (Table 6.1)

1. Activity (Contextual): less complex tasks and idle/relaxed states linked to higher receptivity; mixed evidence for physical activity type
2. Breakpoints (Contextual): activity transitions and natural interaction breakpoints (post-call, post-SMS) indicate opportune moments
3. Location (Contextual): mixed evidence; home and work linked to higher receptivity; shopping malls and social settings linked to lower
4. Time of Day (Contextual): lower receptivity in the morning, increasing through the day; mixed weekend vs. weekday results
5. Personality (Intrinsic): conscientious and neurotic individuals tend to be more receptive; extroverts more reactive to notifications
6. Sender (Content): notifications from significant others (partner, family) trigger higher receptivity than system or service senders
7. Device Battery (Contextual): higher battery level linked to higher receptivity; a fully charged battery linked to lower (proxy for inactivity)
8. Device Interaction (Contextual): recent phone use (e.g., a recent unlock event) predicts higher receptivity
9. Age (Intrinsic): older individuals tend to be more receptive
10. Mood (Intrinsic): happy/energetic states linked to higher receptivity; stressed states to lower
11. Bluetooth Signal (Contextual): changes in nearby Bluetooth device count may signal context shifts linked to receptivity
12. Communication Patterns (Contextual): less time since the last call or SMS linked to greater receptivity
13. Alert Modality (Intrinsic/Device): vibration mode (vs. silent or sound) linked to higher receptivity
14. Device Type (Intrinsic/Device): Android users more receptive than Apple smartphone users in one study
15. Social Setting (Contextual): being alone or without active social interaction positively influences receptivity
16. Notification Content (Content): entertainment value, relevance, actionability, and personal interest all positively influence receptivity


Core Concepts

Receptivity
The condition in which a person is able to receive, process, and use support provided by a mobile health intervention — all three layers must be present simultaneously for full receptivity.
Vulnerable State
A person’s transient tendency to experience adverse health outcomes or to engage in maladaptive behaviors (e.g., relapse, binge eating, excessive substance use). A necessary but not sufficient condition for JITAI delivery.
Opportune Moment
A point in time when a person is simultaneously in a vulnerable state and a receptive state — the ideal window for JITAI delivery. Detecting and predicting these moments is the central challenge of receptivity-capable MHI design.
Notification Burden
The negative cognitive and emotional impact of poorly timed or excessive push notifications. Delivering too many or mistimed notifications can increase stress and decrease wellbeing — making the MHI more harmful than helpful.
JITAI
Just-in-Time Adaptive Intervention. A mobile health approach that tailors the type, timing, and dose of support in real time based on a person’s changing state. Receptivity-capable MHIs are the practical implementation of JITAI logic.

Processes & Mechanisms

Receiving
The first process of receptivity: the individual physically perceives the intervention notification. Key question: can this person be interrupted right now? Influencing factors include current activity, focus mode, channel type, and device state.
Processing
The second process of receptivity: sufficient cognitive capacity is available to evaluate sender identity and trust, content relevance, and what action is implied. This may be measured psychophysiologically (skin conductance, cerebral blood flow, blood pressure).
Using Support
The third process: the individual has both the time and opportunity to implement the suggested behavioral action. Not required for all notifications (e.g., reaching a step goal requires no action), but critical when the intervention calls for behavior change.
Working Alliance
The quality of the therapeutic relationship between health care provider (or digital agent) and patient, characterized by trust, shared goals, and collaboration. Research shows humans can form working alliances with conversational AI agents — enhancing the processing layer of receptivity.
Breakpoint
A natural pause or transition in activity (e.g., finishing a phone call, transitioning from sitting to walking, closing an app). Breakpoints indicate the end of one focused task and create a brief window of elevated receptivity for incoming notifications.
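One way such breakpoints might be flagged from a time-ordered event stream. The event types and the heuristic itself are illustrative assumptions for this guide, not the chapter's detection method:

```python
def detect_breakpoints(events):
    """Flag natural breakpoints in a time-ordered event stream.

    A breakpoint is marked when the recognized activity changes
    (e.g., sitting -> walking) or a focused interaction ends
    (call, SMS, app session). Illustrative heuristic only.
    """
    terminal = {"call_ended", "sms_sent", "app_closed"}
    breakpoints = []
    prev_activity = None
    for t, kind, value in events:  # (timestamp, event type, payload)
        if kind == "activity":
            if prev_activity is not None and value != prev_activity:
                breakpoints.append(t)  # activity transition
            prev_activity = value
        elif kind == "interaction" and value in terminal:
            breakpoints.append(t)      # end of a focused task
    return breakpoints

events = [
    (0, "activity", "sitting"),
    (60, "interaction", "call_ended"),
    (120, "activity", "walking"),
]
detect_breakpoints(events)  # -> [60, 120]
```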

Research Methods

Ecological Momentary Assessment (EMA)
Real-time self-report of states, experiences, or behaviors captured in the natural environment via smartphone prompts. Used to measure subjective receptivity signals (mood, cognitive load) that cannot yet be reliably detected passively. Creates respondent burden — a core limitation of current receptivity research.
Microrandomized Trial
An experimental design in which individual decision points within a longitudinal study are randomly assigned to intervention vs. control conditions — allowing causal estimation of just-in-time effects. Recommended by the authors as the gold standard for future receptivity research.
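The defining feature of a microrandomized design, independent randomization at each decision point, can be sketched as follows (the function name and treatment probability are illustrative):

```python
import random

def microrandomize(decision_points, p_treat=0.5, seed=42):
    """Randomize each decision point independently to treatment vs. control.

    Independent per-point randomization is what allows an MRT to estimate
    causal, moment-level (just-in-time) intervention effects, rather than
    a single whole-study effect. Illustrative sketch only.
    """
    rng = random.Random(seed)  # seeded, instance-local RNG for reproducibility
    return [(dp, "intervene" if rng.random() < p_treat else "withhold")
            for dp in decision_points]

# Each of a participant's decision points gets its own coin flip.
assignments = microrandomize(range(10))
```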
Interruptibility Research
A tradition in ubiquitous computing (pre-2015) studying the burden of mobile notifications and how to reduce it by timing delivery to natural breakpoints. The direct precursor to mHealth receptivity research — focused primarily on the “receiving” layer.
Microincentive
Small reward (e.g., entering a prize draw) used in mHealth studies to incentivize EMA completion. A common workaround for EMA burden — but may bias engagement levels and reduce ecological validity by distorting natural motivation to interact.

Technology & Design

Digital Biomarker
An objective physiological or behavioral signal measured continuously by a digital device (e.g., smartphone accelerometer, wearable heart rate sensor) that can inform detection of health states. Validated digital biomarkers are needed for reliable passive detection of both vulnerable and receptive states.
Passive Sensing
Continuous, unobtrusive data collection from built-in smartphone and wearable sensors (GPS, accelerometer, microphone, Bluetooth, screen state) without requiring active user input. The preferred approach for receptivity detection — but creates battery drain and data privacy challenges.
Adaptive Model
A machine learning model for receptivity prediction that is continuously updated with new individual-level behavioral data as the study or deployment progresses. In contrast to a static pretrained model, an adaptive model improves personalization over time — as shown by Mishra et al. (2021).
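A toy version of such an adaptive model: a hand-rolled online logistic regression updated one observation at a time. This illustrates the idea of per-individual incremental learning, not the actual models used by Mishra et al.; the feature names are hypothetical:

```python
import math

class AdaptiveReceptivityModel:
    """Online logistic model updated with each new (features, engaged) observation."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Predicted probability that the user is receptive given feature vector x."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, engaged):
        """Single SGD step on the log loss for one newly observed decision point."""
        err = self.predict_proba(x) - (1.0 if engaged else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Hypothetical features: [recent_unlock, is_idle]; this user engages when both hold.
model = AdaptiveReceptivityModel(n_features=2)
for _ in range(200):
    model.update([1.0, 1.0], engaged=True)
    model.update([0.0, 0.0], engaged=False)
model.predict_proba([1.0, 1.0])  # probability rises as individual data accumulate
```

The point of the sketch is the contrast with a static pretrained model: here, every observed decision point shifts the weights, so personalization keeps improving over the deployment.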
Stacked Notifications
Multiple notifications from the same app grouped together in the notification tray. Users often focus only on the most recent or dismiss the entire stack as unimportant — a design challenge that can significantly degrade intervention outcomes when delivery frequency is too high.
Closed-Loop MHI
A mobile health system architecture in which the MHI continuously senses state, delivers intervention, assesses impact, and uses that feedback to refine subsequent delivery decisions. The “ideal MHI” described by Keller et al. operates as a closed-loop system that learns over repeated cycles.

Chapter Type at a Glance

This is a book chapter combining a conceptual framework with a narrative literature review. It introduces the “ideal MHI anatomy” and a three-process model of receptivity (receiving, processing, using), then reviews 16 empirically identified signal factors drawn from the interruptibility and mHealth literatures. The chapter concludes with a structured discussion of implementation challenges and future directions. No new primary data are collected; the contribution is theoretical synthesis and evidence mapping.

Literature Scope

  • Interruptibility tradition from ubiquitous computing (Ho & Intille 2005; Okoshi et al. 2015a, 2015b, 2017; Fischer et al. 2010, 2011; Mehrotra et al. 2015, 2016)
  • mHealth receptivity studies: Künzler et al. (2019), n=189, 6-week mHealth study; Mishra et al. (2021), n=83, 3-week physical activity intervention; Pielot et al. (2017), n=337, 4-week content engagement study; Morrison et al. (2017), n=77, stress management intervention
  • JITAI theoretical foundation: Nahum-Shani et al. (2015, 2018)
  • One systematic review with meta-analysis: Künzler, Kramer & Kowatsch (2017)

The 16-Factor Synthesis

  • Factors ranked by number of independent investigations across reviewed studies
  • Each factor accompanied by a summary of evidence direction and quality
  • Factors classified as contextual (passively collectable) vs. intrinsic (requiring self-report or inference) — relevant for practical MHI implementation
  • Most investigated factors: Activity (#1), Breakpoints (#2), Location (#3), Time (#4), Personality (#5)

Machine Learning Evidence

  • Künzler et al. (2019): combined intrinsic + contextual features in ML models; significant improvements over baseline interruptibility detection
  • Pielot et al. (2017): passive sensing model achieves over 66% better prediction of content engagement vs. baseline
  • Mishra et al. (2021): static pretrained model and adaptive model both improve receptivity by up to 40% vs. random delivery; adaptive model shows continuous improvement over 3 weeks
  • Morrison et al. (2017): prediction-model-timed notifications did not outperform daily delivery in stress management context — a null result highlighting the nascent state of the field

Evidence Quality Assessment

  • Most primary studies: small samples (typically 20–50 participants), short duration (days to weeks), healthy young adults or university students
  • Only one meta-analysis available (Künzler et al. 2017) — findings not generalizable due to high homogeneity and small primary study samples
  • No studies in clinical or at-risk populations representative of actual JITAI targets (e.g., substance use disorders, depression, severe obesity)
  • Causal influence of most factors is inconclusive — most evidence is associative/observational
  • Microrandomized trials recommended as future gold standard for causal estimation
R1

Nahum-Shani et al. (2015) — JITAI Framework

Health Psychology, 34(Suppl), 1209–1219 · DOI: 10.1037/hea0000306
★ Foundational · Health Psychology · Theory
Why it matters

The theoretical bedrock for the entire chapter. Nahum-Shani et al. define vulnerable states and establish the JITAI framework — including the definition of receptivity as the condition in which a person can receive, process, and use support. Keller et al. build their anatomy of an “ideal MHI” directly on this foundation.

Key contribution

Operationalizes the construct of receptivity within a JITAI design framework, distinguishing it from vulnerability and providing design guidance for intervention authors. Establishes the logic of closed-loop adaptive health systems.

R2

Künzler et al. (2019) — State-of-Receptivity for mHealth

Proceedings of the ACM on IMWUT, 3(4), 140 · DOI: 10.1145/3369805
★ Core Empirical · ACM IMWUT · ML + mHealth Study
Study design

6-week mHealth study with 189 participants, examining associations between receptivity and intrinsic factors (device type, age, gender, personality) and contextual factors (time of delivery, battery level, device interaction, physical activity, location). ML models trained on combined factors.

Key findings

Higher response rates associated with older age, neuroticism, mid-day timing (10am–6pm), home or workplace location, higher battery level, active device interaction, and walking activity. ML models combining intrinsic and contextual features significantly outperformed a baseline model in receptivity detection.

R3

Mishra et al. (2021) — Detecting Receptivity in Natural Environment

Proceedings of the ACM on IMWUT, 5(2), 74 · DOI: 10.1145/3463492
★ Core Empirical · ACM IMWUT · Adaptive ML Trial
Study design

3-week RCT with 83 participants in a smartphone-based physical activity chatbot intervention. Three conditions: (1) static pretrained ML model (using Künzler 2019 data), (2) adaptive ML model updated daily with new individual data, (3) random delivery control.

Key findings

Both ML models improved receptivity by up to 40% over random delivery. The adaptive model showed continuous improvement over the 3 weeks as it accumulated more individual data — suggesting that personalization gains grow with deployment time. This is a key argument for individualized, learning MHI architectures.

R4

Pielot et al. (2017) — Beyond Interruptibility: Predicting Opportune Moments

Proceedings of the ACM on IMWUT, 1(3), 91 · DOI: 10.1145/3130956
★ Core Empirical · ACM IMWUT · Content Engagement Study
Study design

4-week study with 337 participants. Primary task: report mood via notification-triggered questionnaire. Secondary goal: analyze voluntary engagement with 8 diverse content types (games, news, videos, etc.) delivered at the end of each questionnaire.

Key findings

Higher battery level, recent phone interaction, higher ambient noise, less light variance, older age, and later time of day all predicted engagement. Passive sensing model achieved over 66% better prediction vs. baseline. Extends receptivity research beyond simple notification receipt toward deeper content engagement — addressing the “processing” layer.

R5

Okoshi et al. (2015a, 2015b, 2017) — Breakpoint Notification Systems

IEEE PerCom 2015; UbiComp 2015; IEEE PerCom 2017
★ Formative · IEEE PerCom / UbiComp · System Design & Validation
Three linked studies

  • 2015a: controlled study (n=37) plus a 16-day in-the-wild study (n=27) using device interaction events (screen viewing, window transitions) to detect breakpoints; reduced perceived cognitive load by 46% and 33% respectively vs. random timing.
  • 2015b: added physical activity recognition; a 1-month study (n=41) achieved ~72% greater workload reduction than the previous system.
  • 2017: large-scale deployment with 680,000 users for 3 weeks; deferring notifications to opportune moments reduced response time by ~50%.

Significance

Provides the empirical foundation for breakpoint-based receptivity detection at scale. The 2017 result is particularly notable — 680,000 users is orders of magnitude larger than any mHealth receptivity study, lending ecological validity to the breakpoint approach.
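The breakpoint-deferral policy validated at scale above can be sketched as a simple scheduler. The timeout fallback and all parameters are illustrative assumptions, not Okoshi et al.'s implementation:

```python
def schedule_with_deferral(notifications, breakpoints, max_defer=600):
    """Defer each notification to the next breakpoint, capped at max_defer seconds.

    Holds a pending notification until an opportune moment arrives, but
    never past a timeout, so urgent content is not delayed indefinitely.
    Times are in seconds; returns (requested_time, delivery_time) pairs.
    """
    bps = sorted(breakpoints)
    delivered = []
    for t in sorted(notifications):
        # First breakpoint at or after the notification's requested time.
        candidate = next((bp for bp in bps if bp >= t), None)
        if candidate is not None and candidate - t <= max_defer:
            delivered.append((t, candidate))       # deliver at the breakpoint
        else:
            delivered.append((t, t + max_defer))   # timeout fallback
    return delivered

schedule_with_deferral([10, 1000], [120, 2000])
# -> [(10, 120), (1000, 1600)]: the second breakpoint is too far away, so the timeout applies
```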

R6

Künzler, Kramer & Kowatsch (2017) — Meta-Analysis of Context-Aware Notifications

IEEE WiMob 2017 · DOI: 10.1109/WiMOB.2017.8115839
★ Only Meta-Analysis · IEEE WiMob · Systematic Review
Scope and finding

The only systematic review with meta-analysis to date covering context-aware notification management for mobile applications. Finding: systems designed to intervene at more opportune moments yield greater response rates; some evidence for reduced response times.

Limitation

Authors note findings are not generalizable due to small primary study sample sizes and high participant homogeneity. This limitation motivates the chapter’s call for larger, more diverse trials using microrandomized designs in clinical and at-risk populations.

Reference Network Note

This chapter sits at the intersection of two literature streams: ubiquitous computing (interruptibility management, breakpoint detection) and mHealth/JITAI research (vulnerable state detection, precision support delivery). The JITAI framework from Nahum-Shani et al. (2015, 2018) provides the unifying theoretical structure, while empirical contributions from Künzler, Mishra, Pielot, and Okoshi populate the 16-factor receptivity model. The chapter explicitly calls for future convergence between behavioral medicine, clinical psychology, information systems, software engineering, and computer science.

Question 1 of 5
According to Keller et al., what are the three key processes that must all be present for a person to be in a receptive state?
Answer: Receiving, processing, and using — mirroring the definition of receptivity as the ability to receive, process, and use support provided by an MHI (from Nahum-Shani et al. 2015). The person must receive (perceive the notification), process (cognitively evaluate sender, content, and implications), and use (implement the suggested action) the support; all three layers must be present simultaneously, and failing any one means the intervention is effectively lost.
Question 2 of 5
Mishra et al. (2021) compared a static pretrained model, an adaptive personalized model, and random delivery. What was the key finding?
Answer: Both ML models improved receptivity by up to 40% over random delivery in the 3-week trial (n=83). Crucially, the adaptive model showed continuous improvement as it accumulated individual data — arguing for personalized, learning MHI architectures over static population-level models.
Question 3 of 5
Which of the following is NOT listed in Table 6.1 as a signal factor for detecting or predicting receptivity?
Answer: Body temperature does not appear in Table 6.1. The 16 factors are: activity, breakpoints, location, time of day, personality, sender, device battery, device interaction, age, mood, Bluetooth signal, communication patterns, alert modality, device type, social setting, and notification content — making body temperature the odd one out.
Question 4 of 5
Why do Keller et al. argue that passive sensing for receptivity detection can paradoxically undermine JITAI goals in the real world?
Answer: Battery drain from continuous background sensing. Frequent data sampling accelerates battery consumption, and mobile operating systems flag apps with high consumption relative to active use — potentially prompting users to delete the app, which directly prevents timely intervention delivery and undermines the JITAI's core function.
Question 5 of 5
In Okoshi et al.'s large-scale 3-week study with 680,000 users, what was the measured benefit of deferring notifications to more interruptible breakpoint moments?
Answer: The 2017 large-scale study found that deferring notifications to opportune breakpoints reduced user response time by ~50% vs. immediate delivery. The 46% cognitive load reduction is from Okoshi's earlier, smaller controlled study (2015a), and the 66%+ prediction improvement is from Pielot et al. (2017) — each figure comes from a different study.
Core Thesis
For mobile health interventions to reach their potential, the when of delivery must be treated as a therapeutic variable — as important as what is delivered. A receptivity-capable MHI that detects and predicts opportune moments will outperform one that ignores the recipient’s state, and the evidence base to build such systems is now within reach.
  • 📱

    Timing is a therapeutic variable

    When support is delivered matters as much as what is delivered. Poorly timed notifications are not merely ineffective — they can cause harm through cognitive disruption, stress, and notification fatigue. In severe vulnerability scenarios, missing an opportune window can have life-threatening consequences.

  • 🧠

    Receptivity is a three-layer gate

    Before support can work, three conditions must hold simultaneously: the person must be able to receive (perceive the notification), process (evaluate sender, content, and implications), and use (act on) the support. Failing any one layer renders the intervention ineffective, regardless of its clinical quality.

  • 🗺️

    Sixteen signal factors span person and context

    Activity, location, time of day, personality, mood, device state, social context, and communication patterns all carry predictive signal for receptivity. No single factor dominates; machine learning models that combine multiple signals consistently outperform single-factor heuristics or random delivery.

  • ⚙️

    Adaptive models outperform static ones

    Systems that continuously update with individual behavioral data improve receptivity prediction over time (Mishra et al., 2021). This supports the case for personalized, learning MHI architectures. The longer the deployment and the more individual data accumulated, the better the prediction — a fundamental argument for individualized over population-level delivery.

  • ⚠️

    Evidence quality remains immature

    Most prior studies used small samples (typically 20–50 participants), short durations, and healthy young adults — populations not representative of clinical or at-risk JITAI targets. Only one meta-analysis exists, and its findings are not generalizable. Microrandomized trials with larger, more diverse populations are urgently needed to establish causal evidence.

  • 🔧

    Technical and ethical friction threatens real-world deployment

    Battery drain from passive sensing, OS-level restrictions, frequent software updates breaking sensor streams, data privacy requirements, and stacked notification behavior all create implementation barriers that current research has not fully resolved. Paradoxically, the very sensing infrastructure needed for receptivity detection can prompt users to uninstall the app — requiring careful design tradeoffs and transparent user communication.
