ePRO, EMA, RWD: Three Terms People Keep Using Interchangeably
ePRO, EMA, and RWD get conflated constantly in grant applications and protocols. They're related but distinct. Here's what each one actually means and how to write about them precisely.
I've read a lot of protocols and grant drafts over the last few years, and there's a pattern worth pointing out: "ePRO," "EMA," and "real-world data" get used as if they were synonyms. They're not. They describe different things — one is a delivery channel, one is a measurement design, one is a source category — and they compose rather than substitute for each other.
This isn't pedantry. When reviewers read a Significance section that says "we will use RWD to measure symptoms," they notice, because RWD isn't a measurement instrument. That kind of imprecision shows up as weakness in Approach.
Here's the cleanest way I've found to think about these three terms.
ePRO is a delivery channel
Electronic patient-reported outcomes (ePRO) means a patient-reported outcome (PRO) measure administered electronically instead of on paper. A PRO — any PRO, on any medium — is a report of health status that comes directly from the patient without clinician interpretation or translation. The "e" just specifies how it's delivered.
The regulatory anchor here is the FDA's 2009 PRO Guidance for Industry, which formally accepts patient-reported data as support for labeling claims when the instrument has adequate measurement properties. Subsequent guidance has been explicit that electronic administration is acceptable when it's shown to be equivalent to the validated paper form. Equivalence is the word that matters — if you're moving a validated paper instrument to an app, plan for either an equivalence substudy or a citation-based rationale.
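To make "equivalence substudy" concrete: one common analysis is a two one-sided tests (TOST) style check on paired paper-versus-app scores, declaring equivalence when the 90% confidence interval for the mean difference sits inside a prespecified margin. The sketch below is illustrative only, with made-up scores, an arbitrary 5-point margin, and a normal approximation in place of a t critical value; a real substudy would prespecify the design, margin, and analysis.

```python
from math import sqrt
from statistics import mean, stdev

def tost_equivalent(paper, app, margin, z=1.645):
    """TOST via the 90% CI shortcut: equivalence is declared when the
    90% CI for the mean paired difference lies entirely within +/- margin.
    (z=1.645 is a normal approximation; use a t critical value for small n.)"""
    diffs = [p - a for p, a in zip(paper, app)]
    d_bar = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))
    lo, hi = d_bar - z * se, d_bar + z * se
    return (-margin < lo) and (hi < margin), (lo, hi)

# Illustrative paired scores (0-100 scale), 5-point equivalence margin
paper = [62, 70, 55, 48, 66, 59, 73, 61, 50, 68]
app   = [61, 72, 54, 49, 65, 60, 71, 62, 52, 67]
ok, ci = tost_equivalent(paper, app, margin=5.0)
```

The same logic extends to whatever margin and statistic your field's measurement-equivalence guidance recommends; the point is that "equivalence" is a testable claim, not a hand-wave.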
What "ePRO" does not tell you is when, how often, or in what state the patient completed the assessment. That's EMA's job.
EMA is a measurement design
Ecological momentary assessment (EMA) — sometimes called experience sampling — is a data collection strategy, not a technology. The defining features, laid out in Shiffman, Stone & Hufford (2008) in Annual Review of Clinical Psychology, are:
- Momentary — capturing current or very recent state to minimize reliance on memory
- Ecological — in the patient's natural environment, not a clinic
- Repeated — multiple assessments over time, often multiple times per day
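The "momentary" and "repeated" features are usually implemented as signal-contingent sampling: random prompt times within waking-hour windows, with a minimum gap so prompts don't cluster. A minimal sketch of that scheduling logic follows; the waking window, prompt count, and gap are illustrative parameters, not values from any guidance.

```python
import random

def schedule_prompts(wake=8.0, sleep=22.0, n_prompts=5, min_gap=1.5, rng=None):
    """Signal-contingent EMA sampling: split the waking day into
    n_prompts equal windows, draw one random prompt time per window,
    and enforce a minimum gap between consecutive prompts.
    Times are hours on a 24h clock."""
    rng = rng or random.Random()
    window = (sleep - wake) / n_prompts
    times = []
    for i in range(n_prompts):
        lo = wake + i * window
        hi = lo + window
        if times:  # keep at least min_gap after the previous prompt
            lo = max(lo, times[-1] + min_gap)
        times.append(rng.uniform(lo, min(hi, sleep)))
    return times

times = schedule_prompts(rng=random.Random(7))  # seeded for reproducibility
```

Randomizing within windows is what keeps the design "momentary" in a meaningful sense: participants can't anticipate prompts and pre-compose answers, and the samples cover the whole waking day rather than habitual check-in times.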
EMA is typically delivered electronically these days, which makes most modern EMA a subset of ePRO. But the two words emphasize different things. "ePRO" foregrounds that the instrument is validated and the resulting data are acceptable to regulators. "EMA" foregrounds that you've chosen a sampling strategy specifically to avoid recall bias and capture within-person variability.
Use "EMA" when the sampling design is doing real scientific work — when part of what you're claiming in Innovation is that prior studies missed something because they asked patients to remember a week at a time, and you won't. Use "ePRO" when you're describing the regulatory pathway or the fact that your instruments are delivered through validated electronic forms.
Both words can appear in the same protocol without contradiction: "Participants will complete validated ePRO instruments on a weekly schedule, alongside event-triggered EMA prompts capturing symptom intensity in the moment."
RWD is a source classification
Real-world data (RWD) is a category of data source, not a measurement method. The FDA's RWE framework defines RWD as data relating to patient health status or the delivery of healthcare, routinely collected from sources outside traditional randomized controlled trials — EHRs, claims, registries, PROs, wearables, mobile health data.
The important distinction: ePRO and EMA describe how you collect data. RWD describes what kind of study the data comes from. Data from a prospectively planned RCT — collected via an ePRO app, using EMA sampling — is generally not RWD. It's interventional trial data. The same app, deployed in a registry or natural history study, produces RWD. The app didn't change; the study design did.
Real-world evidence (RWE) is the clinical evidence you derive by analyzing RWD. RWD is the ingredient; RWE is the dish.
Use "RWD" when you're describing non-interventional data collection, when you're arguing that your source provides external validity an RCT can't, or when you're referencing the FDA RWE pathway as a regulatory route.
Mapping it out
It helps me to work through a concrete example:
- A validated questionnaire (e.g., PROMIS-29) is a PRO instrument
- Delivered on a smartphone, it's ePRO
- Delivered multiple times per day in the moment, it's EMA (and still ePRO)
- In a natural history study, the resulting data is RWD
- A regulatory filing that uses that dataset to support a label claim is built on RWE
One app, one study, five different concepts — all accurate, none interchangeable.
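The mapping above is mechanical enough to write down as a tiny classifier. The field names and flags here are my own shorthand for exposition, not standard vocabulary; the point is that the terms compose from independent facts about the data stream and the study design.

```python
from dataclasses import dataclass

@dataclass
class DataStream:
    patient_reported: bool      # reported directly by the patient?
    electronic: bool            # delivered on an app/device vs. paper?
    momentary_repeated: bool    # in-the-moment, repeated sampling design?
    interventional_trial: bool  # collected inside a traditional RCT?

def labels(s: DataStream) -> list[str]:
    """Derive which terms apply. Note how they compose: ePRO adds a
    delivery channel to PRO, EMA adds a sampling design, and RWD
    depends only on study design — the app never changes it."""
    out = []
    if s.patient_reported:
        out.append("PRO")
        if s.electronic:
            out.append("ePRO")
        if s.momentary_repeated:
            out.append("EMA")
    if not s.interventional_trial:
        out.append("RWD")
    return out

# PROMIS-29 on a smartphone, momentary prompts, natural history study
natural_history = DataStream(True, True, True, interventional_trial=False)
# Identical app and sampling, but deployed inside an RCT
rct = DataStream(True, True, True, interventional_trial=True)
```

Running `labels` on both streams shows the article's point: the natural history deployment earns all four labels, while the RCT deployment keeps PRO/ePRO/EMA but loses RWD, even though the software is identical.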
Errors to avoid in grant language
A few patterns I see often, and what to do instead:
"We will use RWD to measure symptoms." RWD isn't a measurement method, so this sentence has a category error. You collect ePRO or EMA data that, depending on your study design, may qualify as RWD.
"EMA will provide validated outcomes." EMA is a sampling strategy. It doesn't validate anything on its own — your instruments still need their own psychometric support. EMA can improve measurement fidelity by reducing recall error, but that's different from instrument validation.
"We will collect real-world ePRO data." This smashes a source category (RWD) and a delivery method (ePRO) into one phrase. Be explicit about study design (interventional vs. observational) and delivery (ePRO) separately.
"Daily EMA surveys." EMA prompts are typically brief — a handful of items, under two minutes. Calling them "surveys" suggests a heavier instrument and tends to inflate burden estimates in reviewer minds. "Prompts" or "assessments" read more accurately.
Language that works in a protocol
If you're writing a research strategy and want a clean paragraph to adapt:
Symptom data will be collected via an ePRO application (Forma Health) that delivers both scheduled validated instruments and event-triggered ecological momentary assessments. Validated instruments — [list] — will be administered on a weekly schedule. Between scheduled assessments, participants will complete brief momentary prompts (≤5 items, approximately 60–90 seconds) capturing [constructs] at [event triggers / signal-contingent intervals]. This design minimizes retrospective recall bias and generates within-week variability data unavailable through scheduled assessments alone.
That paragraph makes the regulatory pathway explicit (validated instruments via ePRO), the measurement contribution explicit (EMA for within-day variability), and the burden bounded (≤5 items, under 90 seconds). A reviewer reading it can tell you've thought about the measurement design, not just the instrument list.
Study sections reward precision because the vocabulary is technical enough that imprecision reads as unfamiliarity. And familiarity with your methods is a prerequisite for a competitive Approach score.
See Forma Health in action
Walk through a custom configuration for your condition, endpoints, and data needs — set up in under an hour.