User journey method: a case study for improving digital intervention use measurement
BMC Health Services Research volume 25, Article number: 479 (2025)
Abstract
Background
Many digital mental health interventions see only low levels of use. However, current use measurement methods do not necessarily help identify which intervention elements are associated with dropout, despite this information potentially facilitating iterative intervention development. Here, we suggest improving the comprehensiveness of intervention use measurement with the user journey method, which evaluates every intervention element to identify intervention-specific use barriers.
Methods
We applied the user journey method in a clinical trial that investigated the efficacy of a novel game-based intervention, Meliora, for adult Major Depressive Disorder. We modelled the intervention for its four technological (Recruitment, Website, Questionnaires, Intervention Software) and two interpersonal elements (Assessment, Support). We then applied the user journey method to measure how many users proceeded from one element to the next by combining social media analytics, website use data, sign-up data, Clinical Subject Coordinator interview data, symptom questionnaire data, and behavioral intervention use data. These measurements were complemented with a qualitative analysis of the study discovery sources and email support contacts.
Results
Recruitment: The intervention recruitment reached at least 145,000 Finns, with social media, word-of-mouth, and news and web sources being the most effective recruitment channels. Website: The study website received 16,243 visitors, which led to 1,007 sign-ups. Assessment: 895 participants were assessed and 735 were accepted. Intervention Software: 498 participants were assigned to the active intervention or comparator, of whom 457 used them at least once: on average, for 17.3 h (SD = 20.4 h) on 19.7 days (SD = 20.7 d) over a period of 38.9 days (SD = 31.2 d). The 28 intervention levels were associated with an average dropout rate of 2.6%, with two sections exhibiting an increase against this baseline. 150 participants met the minimum adherence goal of 24 h use. Questionnaires: 116 participants completed the post-intervention questionnaire. Support: 313 signed-up participants contacted the researchers via email.
Conclusion
The user journey method allowed for the comprehensive evaluation of the six intervention elements and enabled the identification of use barriers, expediting iterative intervention development and implementation.
Trial registration
ClinicalTrials.gov, NCT05426265. Registered 28 June 2022, https://clinicaltrials.gov/ct2/show/NCT05426265.
Contributions to literature
- Evaluation of digital intervention use has been summative and often focused solely on intervention software.
- The user journey method could facilitate measuring how users interact with every intervention element.
- We illustrate this method with a case study of a novel game-based digital intervention for adult depression.
- The user journey method enabled detailed measurement of intervention use and the identification of use barriers.
Introduction
Underused intervention use data
Digital mental health interventions are expected to improve the availability of and access to health care. However, digital interventions often face challenges in reaching their audience and attracting users [1,2,3,4]. Even when users are found, their indicated interest does not necessarily lead to sufficient use [5], and dropping out is common [6,7,8], which diminishes intervention effectiveness [9, 10]. Furthermore, a recent review of digital interventions for depression found that lab-based efficacy trials achieve significantly better results than effectiveness trials [11]. Resolving these challenges is essential for digital interventions to achieve impact in real-world environments [12,13,14,15].
Use measurement is a vital part of digital intervention evaluation. Also known as analytics [16] and log data [17], intervention use measurement evaluates how and how much the patient interacts with the intervention. High levels of use may indicate acceptability and feasibility [18], which facilitate intervention implementation [19, 20]. However, intervention use data is infrequently and inconsistently measured and reported [5, 21,22,23], with only 6% of 160 eHealth studies examining log data [17]. Another review of 100 clinical trials found that while 90% of trials gathered some use data, only 39% investigated the level of intervention use [24]. Moreover, studies commonly use heterogeneous metrics: one review of 25 studies found 71 different metrics in use [22]. This variance reflects the lack of standardization, the challenges in gathering and analyzing use data, and the need to adapt metrics to intervention-specific features and the technologies employed in their implementation [21, 24, 25]. There is a clear need to improve intervention use measurement.
User journey method
Digital intervention evaluation often concentrates on the product, the intervention software [2], even though digital interventions typically also include other elements [26, 27]. These elements can be interpersonal, such as assessment and therapeutic interactions, and technological, such as websites and measuring devices. While the intervention software typically contains the mechanisms of action that are proposed to have a causal effect on behavioral change [28, 29], the other intervention elements are required to enable and support these beneficial interactions [9, 11, 30], acting as necessary but not sufficient factors for behavioral change. Indeed, Mohr et al. [2] and Shaw et al. [31] have suggested turning attention from the product to the service: a superordinate concept encompassing all intervention elements required to create value.
The user journey method facilitates the comprehensive measurement of all intervention elements: it examines the whole pathway the user needs to take to gain benefits [32]. The user journey method belongs to a school of person- or user-centered approaches [17, 33,34,35] and associated methods have earlier been described as a critical pathway [36], critical path [37], care pathway [38], customer journey [39], patient journey [4, 40,41,42], process map [41], and service blueprint [43]. Here, we adopt the concept of user journey, which is commonly used in the context of digital services. To date, the user journey method has rarely been applied in digital intervention development [15, 44], presenting an opportunity to improve intervention evaluation.
The user journey method evaluates intervention use temporally. Users follow a process of discovering new services, trying them, engaging in continued use, discontinuing use, and potentially returning later [32, 45, 46]. This temporal approach helps identify patterns in the rapidly evolving field of digital health [47]. For instance, installed software is launched, whereas browser-based software may require logging in; yet both user actions mark the start of intervention use. At each step, users may decide to discontinue using the intervention [45, 48], and the user journey seeks to determine when this dropout occurs. Moments of increased dropout can indicate a use barrier regarding intervention content, associated processes, or their implementation [49, 50], and identifying these barriers helps developers iteratively refine intervention elements. Thus, the user journey facilitates continuous intervention evaluation [51,52,53] encompassing design, evaluation, redesign, and further data gathering [13] to gradually enhance intervention quality.
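To make this temporal funnel logic concrete, the following sketch (ours, not part of any published implementation) represents a journey as ordered phases with user counts and computes the share of users continuing from each phase to the next; the phase names follow the process described above, and the counts are hypothetical.

```python
# Illustrative sketch: a user journey as an ordered funnel.
# Phase names follow the temporal process described in the text;
# the counts are hypothetical placeholders.

journey = [
    ("discovered", 100_000),
    ("tried", 10_000),
    ("continued use", 2_000),
    ("completed", 600),
]

def continuation_rates(phases):
    """Share of users who proceed from each phase to the next.
    A rate well below its neighbors flags a potential use barrier."""
    return {
        f"{a} -> {b}": n_b / n_a
        for (a, n_a), (b, n_b) in zip(phases, phases[1:])
    }

for step, rate in continuation_rates(journey).items():
    print(f"{step}: {rate:.1%}")
```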
The study aim
This study introduces the user journey method to advance digital intervention use measurement. We model our case study intervention, Meliora, for its elements, and conduct a literature overview to identify common intervention use metrics. Then, we apply the user journey method to measure how many participants progress from one intervention element to the next, and complement this quantitative evaluation with two qualitative datasets. This study is intended for digital intervention researchers, designers, and developers who engage in iterative user-centered design and seek to translate their research from clinical studies to real-life environments.
Methods
Study overview
The present study was conducted as a sub-study in a clinical trial that investigated the efficacy (reported elsewhere) of a novel game-based digital mental health intervention, Meliora, in alleviating adult Major Depressive Disorder (MDD). The study participants were 18–65-year-old Finnish adults with interview-confirmed MDD. The comparator-controlled Randomized Controlled Trial (RCT) received positive appraisals from the Helsinki University Hospital research ethics committee (HUS/3042/2021) and the Finnish Medicines Agency Fimea (FIMEA/2022/002976) and was preregistered on ClinicalTrials.gov [54]. The participants signed up for the study between 1 August 2022 and 30 April 2023, were accepted to the study before 21 June 2023, and completed the 12-week intervention period before 21 September 2023. These participants (enrolled N = 1,007, included N = 735) constitute a representative sample of the trial's total cohort (enrolled N = 1,384, included N = 1,001), representing 72.7% of enrolled and 73.4% of included participants. Three participants requested their data to be excluded. The study participant characteristics were described in detail in a previous mixed methods study [46], and the intervention user experience in a previous qualitative study [55].
Modelling case study user journey
We modelled the case study intervention for its elements. It consisted of four technological (Recruitment, Website, Questionnaires, Intervention Software) and two interpersonal elements (Assessment, Support), and the processes that integrated the elements with each other (Fig. 1). The modelled service closely resembled how consumers might discover, adopt, and use digital interventions in real-world settings, thereby supporting the study’s ecological validity [53, 56]. As such, the user journey method can facilitate effectiveness/implementation hybrid trials that study intervention effectiveness alongside its implementability [13, 57, 58].
We conducted a brief overview [59] of digital intervention use metrics. This literature search focused on recent English-language literature and identified five reporting frameworks [5, 23, 60,61,62] and three reviews [21, 24, 63] (Additional file 1). The reviews described how digital intervention use has been measured, while the frameworks made suggestions on how intervention use could or should be measured, often drawing on prior practices. From this literature, we identified eight metrics, summarized in Table 1. The overview highlighted consistent agreement on measuring intervention launches and total use time, but substantial variation in other use metrics. This variance likely reflects the diversity of digital interventions, their target audiences, and intended uses [61]. We based the user journey intra-intervention use metrics on this overview, with two exceptions: we distinguished between service provider and peer interactions, as they require different resources and serve distinct aims [64,65,66], and we categorized downloads within launches, as not all interventions are downloaded.
While many studies have focused on intra-intervention use metrics, the pre- and post-intervention stages have received less attention [45]. To evaluate the whole modelled service, we examined other studies and identified pre-intervention use metrics including reach [67, 68], interest [69], assessment [70], and purchase [71], as well as post-intervention metrics such as follow-up measurements [72] and re-engagement [45]. We organized these metrics temporally along the user journey (Table 2). We consider these intervention use metrics an extension of what Pham et al. [62] describe as a “library”: they provide structure for use data measurement, but researchers and developers must consider them alongside intervention-specific elements (see Fig. 1), as not all phases are relevant for every intervention.
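As a rough illustration of how such a metric library can be organized temporally (cf. Table 2), the sketch below groups metrics named in this section by journey phase. The data structure and grouping are our illustrative assumptions, not code or an exhaustive taxonomy from the study.

```python
# Hedged sketch: a temporally organized use-metric "library" (cf. Table 2).
# Metric names are drawn from the text; the grouping is illustrative.

USER_JOURNEY_METRICS = {
    "pre-intervention": [
        "reach", "interest", "assessment", "purchase",
    ],
    "intra-intervention": [
        "launches",                  # downloads categorized within launches
        "total use time",
        "software interactions",
        "therapeutic interactions",  # service provider contacts
        "peer interactions",         # kept separate from provider contacts
    ],
    "post-intervention": [
        "follow-up measurements", "re-engagement",
    ],
}

# Not every phase or metric is relevant for every intervention;
# the library provides structure, not a mandatory checklist.
for phase, metrics in USER_JOURNEY_METRICS.items():
    print(phase, "->", ", ".join(metrics))
```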
We applied the user journey method to measure the use of Meliora. Our use data sources were social media analytics, website use data, sign-up data, Clinical Subject Coordinator (CSC) interview data, symptom questionnaire data, and behavioral intervention use data. These data were complemented by two sets of qualitative data: CSC interview data detailing where participants had discovered the study, and participant email support contact data. This approach converges with the recommendation to use mixed methods in intervention evaluation [17, 22, 25, 26], and the mixed methods research design was reported in alignment with the checklist by Lee et al. [73] (Additional file 2).
Pre-intervention use measurements
The study recruitment combined mass media and targeted channels. We recruited participants through healthcare partner referrals, social media (such as Facebook, Instagram, and Reddit), email campaigns, and posters placed mainly on university campuses. Online news features by the Finnish National Broadcasting Company (Yle), the Aalto University website, and the digital gaming community Assembly supported the effort to reach potential participants. To evaluate intervention reach, we only had access to Meta Business Suite (MBS) data, which described the number of people reached on Facebook and Instagram. We considered this a lower bound for the number of people reached.
The various recruitment efforts converged on the study website, where use data was gathered using Piwik PRO Analytics Suite (PPAS). PPAS and MBS comply with the European Union General Data Protection Regulation (GDPR). The website landing page included information about the study and its inclusion and exclusion criteria. The second page included a detailed informed consent form (Additional file 3), which was signed digitally, and a background questionnaire designed for this study (see Appendix 3 in [46]). The sign-up data was gathered and stored securely on Helsinki University servers with access control and a tailored monitoring user interface developed by the third author.
The digital sign-up process added the participants to the CSC assessment queue. To avoid delays in assessment, we controlled the influx of participants by regulating the intensity of the recruitment activities. The CSC evaluated participant suitability for the study against the inclusion and exclusion criteria in a remote telephone interview assessment (Additional file 4), gave the participant an overview of the study process, and answered any participant questions. The CSCs included the second author, a licensed psychologist; the third author, who has experience in mental health work and information technology development; and the fourth author, who has a candidate's degree in psychology. The CSCs were supported by a licensed psychologist (the first author), who is experienced in clinical psychology and service design, and the coordinating researcher (the seventh author). Unclear cases were frequently discussed and negotiated to ensure compliance with the clinical investigation plan. Notably, the assessment was the only phase in which the researchers could exclude a participant; in all other phases, the participants themselves decided whether to continue or discontinue the intervention use.
We analyzed how the users discovered the intervention using PPAS and CSC interview data. PPAS data included the digital referral source, but this data could not identify direct referral sources such as email campaigns, private digital channels, and typing the landing page address into the browser from handouts, posters, news, or word-of-mouth (WOM) recommendations [74]. To study the recruitment performance of direct sources, the CSC asked the participants where they had heard about the intervention. The first author coded and categorized [75] the CSC entries in Excel (Microsoft). The approach to the coding was descriptive, which was aligned with the brevity of the data [76]. The first and fourth authors reviewed the coding in a sense-making session. If a participant mentioned two channels, the more recent or more prominent one was coded as the discovery source. For example, “My friends saw the news about the study in Yle [national news], and conveyed the information to me” was coded with word of mouth as the discovery source.
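A minimal sketch of the kind of referral-source bucketing described above, assuming PPAS-style referrer strings. The domain lists, category names, and fallback logic are hypothetical and would need to match the actual analytics configuration.

```python
# Hedged sketch: bucketing digital referral sources into discovery
# categories. Domain lists are hypothetical examples.

CATEGORIES = {
    "social media": {"facebook.com", "instagram.com", "reddit.com"},
    "news and web": {"yle.fi", "aalto.fi", "assembly.fi", "mtv.fi"},
    "messaging": {"whatsapp.com", "t.me"},
}

def categorize(referrer: str | None) -> str:
    """Map a referrer URL to a discovery category."""
    if not referrer:
        # No referrer header: a direct source (typed address, email
        # campaign, poster, or word of mouth) that web analytics
        # alone cannot distinguish.
        return "direct"
    domain = referrer.split("/")[2] if "://" in referrer else referrer
    domain = domain.removeprefix("www.")
    for category, domains in CATEGORIES.items():
        if domain in domains:
            return category
    return "organic and miscellaneous"

print(categorize("https://www.reddit.com/r/depression"))  # -> social media
print(categorize(None))                                   # -> direct
```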
Intra-intervention measurements
The study included three arms, into which the subjects were automatically randomized in equal proportions. The participants in the active intervention (MEL-T01) and active comparator (MEL-S01) arms began the study with a 12-week intervention period while engaging in treatment-as-usual (TAU) and then continued to a 12-week follow-up period with TAU only. The participants in the TAU arm began the study with a 12-week waiting period with TAU only and were then randomly assigned to use one of the intervention versions while engaging in TAU.
The active intervention (MEL-T01) was a game-based digital mental health intervention [77]. It implemented cognitive training resembling a fast-paced, single-player action and strategy game (Fig. 2). The intervention mechanisms of action were based on previous research that has found that depression is associated with cognitive impairment, and that computerized cognitive training could alleviate this impairment and the depressive symptoms [78,79,80]. To facilitate the training impact, the intervention level of challenge was adapted to the participant’s performance between rounds. The active comparator (MEL-S01) was highly similar to the active intervention, apart from specific cognition-training features. Regardless of which intervention software version the participant was assigned to, the five other intervention elements were identical (see Fig. 1). Due to these considerable similarities, we examined the user journey of both intervention versions together.
The primary and secondary intra-intervention use metrics were use time and level progression. The former was adopted because we assumed a dose–response relationship between cognitive training and symptom reduction based on previous research [80]. The participants were instructed to interact with the intervention several times per week, for at least a total of 24 h (2 h weekly), or preferably 48 h (4 h weekly), during the intervention period. The main screen visualized the intervention use hours and levels in total and per week. The intervention could be used for a maximum of 90 min per day and only during the 12-week intervention period, which was digitally and automatically enforced. Level progression acted as the secondary use metric. The intervention was divided into 28 levels, through which the user progressed following a story inspired by Cognitive Behavioral Therapy (CBT). After completing the 28th level, the participant could interact with the fully unlocked intervention. The participant often needed to play one level several times (“rounds”) to progress through the levels.
Intervention adherence was defined based on use time and completion of symptom questionnaires. A total use time of 24 h was the minimum threshold for being included in the efficacy evaluation. The participants were requested to complete the baseline symptom questionnaires (T0) before the CSC admitted them to the study. Thereafter, the participants received a link to the questionnaires via email 4, 8, and 12 weeks after the study had commenced (T1–T3), and a follow-up questionnaire was sent 12 weeks after the intervention period had ended (T4). The T0 questionnaire battery included 11 individual questionnaires, and the T1–T4 batteries included 10 questionnaires. The questionnaire batteries consisted of, for instance, Finnish translations of standardized questionnaires measuring depression severity [81], anxiety severity [82], and well-being [83]. Here, we do not analyze questionnaire responses, but measure whether the participants completed the questionnaires as an indication of adherence. The participants had 2 weeks to complete the questionnaires and could not interact with the software before completing them or until the 2 weeks had passed. At the end of the study period, the participants were compensated according to the Finnish Ministry of Social Affairs decree [84]: 50€ for meeting the 24 h and 120€ for meeting the 48 h adherence goal, provided they had also responded to the symptom questionnaires.
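The adherence and compensation rules above reduce to a small decision function; the sketch below encodes them under the stated thresholds, with illustrative type and field names of our own.

```python
# Minimal sketch of the adherence and compensation rules described above.
# Thresholds follow the text; the data model is illustrative.

from dataclasses import dataclass

MIN_ADHERENCE_H = 24    # minimum goal: 2 h weekly over 12 weeks
FULL_ADHERENCE_H = 48   # preferred goal: 4 h weekly

@dataclass
class Participant:
    total_use_hours: float
    questionnaires_completed: bool  # responded to symptom questionnaires

def reimbursement_eur(p: Participant) -> int:
    """Compensation tier, conditional on questionnaire completion."""
    if not p.questionnaires_completed:
        return 0
    if p.total_use_hours >= FULL_ADHERENCE_H:
        return 120
    if p.total_use_hours >= MIN_ADHERENCE_H:
        return 50
    return 0

print(reimbursement_eur(Participant(26.5, True)))   # -> 50
print(reimbursement_eur(Participant(26.5, False)))  # -> 0
```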
The intervention software was developed in Unity (Unity Technologies), distributed to accepted participants via the Steam (Valve Corporation) platform, and installed and used on the participant's personal Windows (Microsoft) computer at the participant's premises. Intervention access was controlled with a username and password (see Fig. 2 for the login screen). Intervention use was measured with data sent from the client software to Aalto University servers after each round. The data included a timestamp, level, duration of play, and intervention optimization parameters. Intervention level data from two participants was missing. Some participants also took part in a brain imaging sub-study, conducted at Aalto University or Helsinki University, which was not examined here because of its optionality.
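Given round-level telemetry of the kind described above (timestamp, level, play duration per round), per-participant use metrics such as total hours, use days, and use period can be aggregated as in the following sketch. The schema, column names, and sample rows are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: aggregating round-level telemetry into per-participant
# use metrics. Column names and sample rows are illustrative.

import pandas as pd

rounds = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2"],
    "timestamp": pd.to_datetime([
        "2023-01-02 18:00", "2023-01-02 18:12",
        "2023-01-15 20:00", "2023-01-03 19:30",
    ]),
    "level": [1, 2, 2, 1],
    "duration_min": [11.5, 12.0, 9.0, 10.0],
})

per_user = rounds.groupby("participant").agg(
    total_hours=("duration_min", lambda m: m.sum() / 60),
    use_days=("timestamp", lambda t: t.dt.date.nunique()),
    use_period_days=("timestamp", lambda t: (t.max() - t.min()).days + 1),
    max_level=("level", "max"),
)
print(per_user)

# Correlating total_hours with use_days versus use_period_days across
# participants mirrors the comparison reported in the Results.
```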
Post-intervention measurements
We measured how many participants had used the intervention for at least 24 h and had completed the follow-up questionnaire battery that was sent 12 weeks after the intervention period had ended (T4).
Support contact analysis
Participant-initiated email contacts were analyzed with two aims: to explore use barriers across intervention elements, and to measure the quantity and frequency of the contact requests to evaluate the resources needed to manage the intervention [4]. Emails were the primary mode of communication between the participants and the researchers. The participant could contact the CSC directly via email or report an issue within the intervention software, whereafter the contact continued over email. The first and second authors designed a process for analyzing the frequency and content of the email contacts (Additional file 5).
The approach to coding the email content was primarily descriptive, as opposed to interpretative, aiming to characterize the topic of the contact [76]. This approach was chosen because the participants often used succinct language in the emails with the intention to resolve a particular practical issue. For instance, one email stated: “When and to whom should I give my bank account info in order to receive my gaming reimbursement”, which was categorized as “Reimbursement”. When an email included more than one distinct topic, all were coded. The email topics were further categorized based on whether they concerned a particular intervention element or an integration of elements. For instance, the contact “I am unable to login to Meliora anymore, even though I think the password is correct. How can I reset my password?” concerned the intervention software, whereas “Hi, I would like to verify on which day the study starts for me, or has it already begun? I have sent the application, filled in the questionnaires and been through the coordinator interview” concerned how the distinct intervention elements were integrated. Informed by the user journey model, the coding combined inductive and deductive approaches [85]. During this coding, the second author marked 35 (7.0%) of the 499 email contacts as uncertain. In four sense-making sessions, the first and second authors reviewed these cases, which led to changing 14 (40%) of the unclear coding instances and refining the categorization. A complementary codebook was created to facilitate the coding process and improve its reliability (Additional file 6).
Results
Results overview
We measured intervention use through the user journey (Fig. 3). We found that 17% of the participants entering the second webpage were later accepted to the study, and 30% of the accepted participants assigned to intervention arms later met the minimum adherence goal. Every service element and phase was associated with dropout or exclusion. On average, 56% of users (SD = 32%) continued from each phase to the next, though there was significant variance between phases. The proportion of participants progressing from one phase to the next increased after sign-up, reflecting some participants' growing commitment. Overall, we needed to reach at least 9 people for 1 person to arrive at the study landing page, 16 people arriving on the landing page for 1 sign-up, and 3 users assigned to the intervention or comparator group for 1 participant to use the intervention enough to meet the minimum adherence criterion.
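For transparency, the headline ratios in this overview can be recomputed directly from the reported counts, as in this arithmetic sketch.

```python
# Recomputing the reported funnel ratios from the counts in the Results.

reached, landing, signups = 145_000, 16_243, 1_007
assigned, adherent = 498, 150

print(round(reached / landing))    # ~9 people reached per landing-page visitor
print(round(landing / signups))    # ~16 landing-page visitors per sign-up
print(round(assigned / adherent))  # ~3 assigned users per adherent participant
```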
Pre-intervention use
Pre-intervention use is reported in Table 3. The recruitment reached 148,719 people on Facebook and Instagram, and 97.5% of the website visitors came from Finland. Thus, our study reached at least 145,000 Finns. According to the website use data, the most frequent digital referral categories were direct sources, social media, and news and web. Organic and miscellaneous sources and messaging were less typical discovery channels. The CSC interview data showed that the most influential recruitment channels were social media, news and web, and word of mouth. Mental health professional recommendations and other recruitment efforts (emails and posters) were the least effective.
The study landing page received 16,243 visitors, the second page 4,420 visitors, and 1,007 visitors signed up for the study. There was a very strong correlation between the daily number of website visitors and daily sign-ups (r(271) = 0.93, p < 0.01). The effect of each recruitment activity was brief and identifiable as a spike in the number of website visitors and sign-ups (Fig. 4). Of the 1,007 people who digitally signed up for the study, 895 were assessed via interview, and of them 735 were accepted. The delay between sign-up and interview was on average 17.3 days (SD = 14.3), with both CSC availability and participant preferences influencing the delay. Both participant- and study-related factors hindered starting the intervention use: 112 signed-up participants could not be assessed, and 160 signed-up participants were rejected by the CSC.
Fig. 4 The study recruitment activities led to variable and short-lived increases in website visitors and sign-ups. Mielenterveystalo.fi is a Finnish digital mental well-being hub, Reddit is a social media platform, Assembly.fi is a Finnish digital gaming community, Aalto.fi is the website of the university conducting the study, Yle is the National Broadcasting Company of Finland, and MTV is a commercial Finnish news channel
Intra-intervention use
Intra-intervention use is reported in Table 4. Of the 498 accepted participants assigned to the intervention or comparator arms, 457 (91.7%) used the assigned software at least once. On an average intervention use day, they played 4.8 rounds, interacting with the software for an average of 52.9 min (SD = 24.5 min; median = 51 min). Over the intervention period, the participants used the intervention, on average, for 17.3 h during 22.1 sessions. This use occurred on 19.7 different days over a period of 38.9 days. The total use hours were influenced by the adherence goals, which were identifiable as plateaus in the cumulative hourly use data.
There was considerable heterogeneity in the use patterns (Fig. 5). The use hours increased with the number of use days and length of the use period. The correlation between total hours used and use days (r(455) = 0.95, p < 0.01) was higher than the correlation between total hours used and use period (r(455) = 0.80, p < 0.01), because of the variance in non-use days.
Fig. 5 The frequency and quantity of intervention use exhibit significant variance. Ten representative participants (#) were selected by their total intervention use hours at decile increments. The number of days (d) during which the participant interacted with the intervention is described per intervention-period week. Darker colors indicate more frequent interactions (white = 0 d/wk; light red = 1–2 d/wk; medium red = 3–4 d/wk; dark red = 5–7 d/wk)
The average dropout per level (levels 1–28) was 2.6% (SD = 1.7%; median = 2.1%). Two sections of the intervention content were associated with an increased dropout likelihood: levels 8–10 and 13–16, with dropout ranging from 3.7% to 6.8%. These levels introduced new game features and increased the intervention's complexity and level of challenge. 118 participants (25.8%) completed all 28 intervention levels.
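A hedged sketch of one way to compute such per-level dropout rates: among the participants who reached a given level, the share who never progressed past it. The input (each participant's highest level) is hypothetical, and participants reaching the final level are treated as completers rather than dropouts.

```python
# Hedged sketch: per-level dropout from each participant's highest level.
# The sample data is hypothetical.

from collections import Counter

N_LEVELS = 28
max_levels = [3, 8, 8, 9, 14, 28, 28, 28]  # highest level per participant

stopped_at = Counter(max_levels)
at_risk = len(max_levels)  # everyone starts at level 1
for level in range(1, N_LEVELS + 1):
    stopped = stopped_at.get(level, 0)
    if level < N_LEVELS and stopped:
        # Dropout rate among those who reached this level.
        print(f"level {level:2d}: {stopped / at_risk:.1%} of {at_risk} dropped")
    at_risk -= stopped
```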
Post-intervention use
Post-intervention use is reported in Table 5. Of the 150 participants who used the intervention for at least 24 h, 116 (77%) filled in the follow-up questionnaire.
Support contacts
Of the 1,007 signed-up participants, 313 (31.1%) contacted the researchers via email. These participants made on average 1.6 (SD = 1.0) contacts, for a total of 499 email contacts. Of these contacts, 108 (21.6%) were made pre-intervention, 337 (67.5%) intra-intervention, and 54 (10.8%) post-intervention. Qualitative contact analysis revealed that the contacts concerned 9 topics. The most common reasons for contacting the CSC concerned the intervention software, symptom questionnaires, quitting, and reimbursement (Table 6).
Discussion
Overview
Here we proposed that the user journey method can facilitate the comprehensive measurement of digital intervention use by evaluating every intervention element. We investigated the feasibility of the method by using it to measure the use of a novel game-based digital intervention for depression, Meliora. We modelled the intervention for its elements (see Fig. 1) and conducted a literature overview for their measurement through the user journey (see Table 2). We found that the user journey method structured the measurement of intervention use and allowed for identifying use barriers (see Fig. 3, and Tables 3, 4, and 5).
Case study discussion
We found that modelling the user journey helped us see the proverbial forest for the trees, examining the whole intervention as a service (see Fig. 1). This provided us with several insights. (1) The nature of the intervention influences its modelling and measurement. Non-digital healthcare user journeys often examine waiting times and delays to improve service throughput [86] and aim to improve care coordination [41]. In contrast, digital interventions may be approached through use and dropout, as much of the service may be automated. (2) The user journey method facilitated the evaluation of the service from the user's perspective. However, it implicitly prioritized the elements visible to the users, and further modelling is required to capture the back-end systems and processes that enable and support the user-facing elements [43] and, for instance, personalize the intervention content based on the user's needs, symptoms, and behavior [25, 87]. (3) We found that the case study process resembled the direct-to-consumer intervention delivery model. However, digital intervention delivery and business models can also be more complex, as they are intertwined with legal and regulatory frameworks, reimbursement models, and existing healthcare systems, and this landscape continues to evolve [88, 89]. We look forward to further studies that use the user journey method to evaluate a range of service delivery and business models. (4) We found that measuring all the service elements required integrating several data sources and manual data processing. A data and resource management platform could facilitate their measurement but would increase the service setup time and cost.
We found that the user journey method allowed identifying several use barriers. We used many recruitment channels and found digital channels the most effective. Previous studies have also found digital channels common in discovering health apps [90] and less expensive than healthcare referrals [91], though they can reach only a partial population [69]. We found that only 23% of the visitors to the second webpage signed up for the study, which calls for user interface optimization [92]. It is also possible that as the participants became more informed about the intervention, some found that it was not relevant to them. Further attention is also needed to understand why 8.6% of the signed-up, assessed, and accepted participants never began to use the intervention. Intra-intervention, we found that two intervention sections exhibited pronounced dropout rates, which suggests that improvements in intervention instructions, pacing, and design are needed.
We found that qualitative methods complemented the quantitative use data, which is aligned with previous studies [22, 25, 28, 93]. Regarding study discovery, the CSC interview data allowed us to quantify the significance of WOM and the challenges with healthcare professional referrals. The differences between the PPAS and CSC discovery data may be partly explained by their measuring different points in time: the former measured those arriving at the website, the latter those assessed by the CSC. We also found, again aligned with previous studies [4], that email was an important channel for prospective and existing participants to resolve uncertainties about the intervention elements and the overall service process. These email contacts also shed light on use barriers, such as website usability problems. Use data can identify moments that should be examined in detail, and qualitative investigation can characterize the problems in depth.
Having identified several use barriers, the next step would be to prioritize and address them, and the data gathered here could be used to evaluate the impact of the iterations against future user cohorts. However, this is outside the scope of the present study, and further studies are needed to examine the degree to which per-phase dropout can be influenced by incremental improvements based on user-centered design [92, 94].
Overall, we found that all intervention elements and phases [32] were associated with dropout, not merely the intervention software. Only 17% of people entering the second webpage were eventually accepted, and 30% of accepted participants assigned to intervention arms met the minimum adherence goal. The researchers excluded 160 potential participants, while in the other phases the participants themselves decided to cease intervention use and drop out. Thus, the design of pre-intervention elements, including recruitment, sign-up, assessment, and onboarding, can have a substantial influence on the overall user flow. We propose that the user journey method has three benefits for intervention researchers, designers, and developers:
1. Modelling the whole intervention. User journey examines all the intervention elements from the first contact to follow-up.
2. Measuring the user flow. User journey facilitates measuring user interactions with all the intervention elements quantitatively and qualitatively.
3. Managing the intervention. User journey allows for identifying and addressing use barriers, thus facilitating iterative intervention development and implementation.
The characteristics and the context of the case study service should be considered when interpreting the results. The relatively common nature of MDD allowed us to use mass media in recruitment and could have facilitated WOM discovery; these approaches may not provide similar outcomes for rarer disorders. Finland is highly developed digitally [95], and digital social and healthcare service use is common [96,97,98], which may positively influence the rate of service adoption.
Theoretical contributions and implications
Digital intervention effectiveness is associated with users interacting with mechanisms of action, which, based on the theoretical foundation [99, 100], are proposed to have a causal influence on behavior change [28, 29]. In practice, however, users must pass several gateways before they can interact with these mechanisms [32, 46]. They must perceive a health need, be aware of a service that could address it, consider it sufficiently meaningful to merit signing up, possibly participate in an assessment, and find the intervention easy to onboard and use over time. We showed how many of these phases were associated with dropout (see Fig. 3, and Tables 3, 4, and 5). The user journey method helps identify and address use barriers, thereby facilitating the evaluation of intervention acceptability and feasibility in real-life contexts.
However, the user journey method should not be used merely to flatten the retention curve: the quantity of use alone is not enough for beneficial effects. A review of 25 studies found that the correlation between intervention use and mental health improvements was modest (r = .24) [10]. The quality of use, that is, effective engagement [25, 53, 62], matters and requires careful operationalization. Considered through the user journey (see Table 2), effective engagement may be most directly captured by “software interactions”, as well as “therapeutic interactions” and “peer interactions” if such elements are present. Measuring these interactions comprehensively allows for evaluating which kinds of interactions are associated with intervention efficacy, and for whom. Furthermore, the temporality-emphasizing user journey method can complement existing user engagement frameworks. For instance, Pham et al. differentiate between the amount, duration, breadth, and depth of engagement [62], and Short et al. focus on the frequency, intensity, time, and type of engagement [60].
Digital interventions commonly face challenges in their implementation [11, 15], which encourages evaluating them beyond their effectiveness [26]. We reconceptualized digital intervention as a service [2, 31], a superordinate concept to the intervention software. We modelled all the intervention elements of Meliora and measured how users interacted with them over time. This method could facilitate examining how the intervention elements are integrated into a coherent service [40, 42, 101], and how they can be incorporated into healthcare service systems [14, 31, 102, 103]. Thus, user journey can complement existing implementation frameworks [104,105,106] and outcome indicators [107], and facilitate digital interventions reaching real-world impact.
Conclusions
We applied the user journey method to measure user interactions with a complex digital intervention comprising six elements, both technological and interpersonal. This evaluation identified temporally positioned and intervention-specific use barriers that could be addressed to improve the intervention. We consider this holistic, temporality-focused method to have significant potential and encourage its application in the development, evaluation, and implementation of digital interventions.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
CSC: Clinical Subject Coordinator
MDD: Major Depressive Disorder
MBS: Meta Business Suite
PPAS: Piwik PRO Analytics Suite
References
Folker AP, Mathiasen K, Lauridsen SM, Stenderup E, Dozeman E, Folker MP. Implementing internet-delivered cognitive behavior therapy for common mental health disorders: A comparative case study of implementation challenges perceived by therapists and managers in five European internet services. Internet Interv. 2018;11:60–70.
Mohr DC, Weingardt KR, Reddy M, Schueller SM. Three problems with current digital mental health research and three things we can do about them. Psychiatr Serv. 2017;68(5):427–9.
Frampton GK, Shepherd J, Pickett K, Griffiths G, Wyatt JC. Digital tools for the recruitment and retention of participants in randomised controlled trials: a systematic map. Trials. 2020;21(478):1–23.
Waller R, Gilbody S. Barriers to the uptake of computerized cognitive behavioural therapy: a systematic review of the quantitative and qualitative evidence. Psychol Med. 2009;39(5):705–12.
Lipschitz JM, Van Boxtel R, Torous J, Firth J, Lebovitz JG, Burdick KE, et al. Digital mental health interventions for depression: scoping review of user engagement. J Med Internet Res. 2022;24(10):e39204. Available from: https://www.jmir.org/2022/10/e39204.
Torous J, Lipschitz J, Ng M, Firth J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord. 2020;263:413–9. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0165032719326060.
Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: Systematic search and panel-based usage analysis. J Med Internet Res. 2019;21(9):1–15.
Nwosu A, Boardman S, Husain MM, Doraiswamy PM. Digital therapeutics for mental health: is attrition the Achilles heel? Front Psychiatry. 2022;13:900615.
Wright JH, Owen JJ, Richards D, Eells TD, Richardson T, Brown GK, et al. Computer-assisted cognitive-behavior therapy for depression: a systematic review and meta-analysis. J Clin Psychiatr. 2019;80(2):18r12188.
Gan DZQ, McGillivray L, Han J, Christensen H, Torok M. Effect of engagement with digital interventions on mental health outcomes: a systematic review and meta-analysis. Front Digit Health. 2021;3:764079. Available from: https://www.frontiersin.org/articles/10.3389/fdgth.2021.764079/full.
Moshe I, Terhorst Y, Philippi P, Domhardt M, Cuijpers P, Cristea I, et al. Digital interventions for the treatment of depression: a meta-analytic review. Psychol Bull. 2021;147(8):749–86.
Graham AK, Lattie EG, Powell BJ, Lyon AR, Smith JD, Schueller SM, et al. Implementation strategies for digital mental health interventions in health care settings. Am Psychologist. 2020;75(8):1080–92.
Mohr DC, Lyon AR, Lattie EG, Reddy M, Schueller SM. Accelerating digital mental health research from early design and creation to successful implementation and sustainment. J Med Internet Res. 2017;19(5):e153.
Bauer MS, Kirchner JA. Implementation science: What is it and why should I care? Psychiatry Res. 2020;283:112376.
Lipschitz J, Hogan TP, Bauer MS, Mohr DC. Closing the research-to-practice gap in digital psychiatry: the need to integrate implementation science. J Clin Psychiatr. 2019;80(3):18com12659.
Lee U, Jung G, Ma EY, Kim JS, Kim H, Alikhanov J, et al. Toward data-driven digital therapeutics analytics: literature review and research directions. IEEE CAA J Autom Sin. 2023;10(1):42–66. Available from: https://ieeexplore.ieee.org/document/10007899/.
Kip H, Keizer J, da Silva MC, Beerlage-de Jong N, Köhle N, Kelders SM. Methods for human-centered eHealth development: narrative scoping review. J Med Internet Res. 2022;24(1):e31858.
Orsmond GI, Cohn ES. The distinctive features of a feasibility study: Objectives and guiding questions. OTJR (Thorofare N J). 2015;35(3):169–77.
Klaic M, Kapp S, Hudson P, Chapman W, Denehy L, Story D, et al. Implementability of healthcare interventions: an overview of reviews and development of a conceptual framework. Implement Sci. 2022;17(10).
Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res. 2017;17(1):1–13.
Lo B, Shi J, Hollenberg E, Abi-Jaoudé A, Johnson A, Wiljer D. Surveying the role of analytics in evaluating digital mental health interventions for transition-aged youth: scoping review. JMIR Ment Health. 2020;7(6):e15942.
Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv. 2019;70(7):538–44.
Fleming T, Bavin L, Lucassen M, Stasiak K, Hopkins S, Merry S. Beyond the trial: Systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res. 2018;20(6):1–11.
Koneska E, Appelbe D, Williamson PR, Dodd S. Usage metrics of web-based interventions evaluated in randomized controlled trials: systematic review. J Med Internet Res. 2020;22(4):e15474.
Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, et al. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med. 2016;51(5):833–42.
Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.
Luke DA, Stamatakis KA. Systems science methods in public health: Dynamics, networks, and agents. Annu Rev Public Health. 2012;33:357–76.
Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. 2017;7(2):254–67.
Ritterband LM, Thorndike FP, Cox DJ, Kovatchev BP, Gonder-Frederick LA. A behavior change model for internet interventions. Ann Behav Med. 2009;38(1):18–27.
Buelens F, Luyten P, Claeys H, Van Assche E, Van Daele T. Usage of unguided, guided, and blended care for depression offered in routine clinical care: Lessons learned. Internet Interv. 2023;34:100670.
Shaw J, Agarwal P, Desveaux L, Palma DC, Stamenova V, Jamieson T, et al. Beyond “implementation”: digital health innovation and service design. NPJ Digit Med. 2018;1(1):48.
Levesque JF, Harris MF, Russell G. Patient-centred access to health care: conceptualising access at the interface of health systems and populations. Int J Equity Health. 2013;12(1):1–9.
Fleming TM, de Beurs D, Khazaal Y, Gaggioli A, Riva G, Botella C, et al. Maximizing the impact of E-Therapy and Serious Gaming: Time for a paradigm shift. Front Psychiatry. 2016;7(65):1–7.
Lyon AR, Koerner K. User-Centered Design for Psychosocial Intervention Development and Implementation. Clin Psychol Sci Pract. 2016;23(2):180–200.
Enam A, Torres-Bonilla J, Eriksson H. Evidence-based evaluation of ehealth interventions: systematic literature review. J Med Internet Res. 2018;20(11):e10971.
Every NR, Hochman J, Becker R, Kopecky S, Cannon CP. Critical Pathways. A Review. Circulation. 2000;101:461–5.
Hofmann PA. Critical Path Method: An Important Tool for Coordinating Clinical Care. Jt Comm J Qual Improv. 1993;19(7):235–46.
Schrijvers G, van Hoorn A, Huiskes N. The care pathway: concepts and theories: an introduction. Int J Integr Care. 2012;12:e192.
Lemon KN, Verhoef PC. Understanding customer experience throughout the customer journey. J Mark. 2016;80(6):69–96.
Borycki EM, Kushniruk AW, Wagner E, Kletke R. Patient journey mapping: Integrating digital technologies into the journey. Knowl Manag E-Learn. 2020;12:521–35.
Trebble TM, Hansi N, Hydes T, Smith MA, Baker M. Process mapping the patient journey: an introduction. BMJ. 2010;341:c4078. Available from: https://www.bmj.com/lookup/doi/10.1136/bmj.c4078.
Joseph AL, Kushniruk AW, Borycki EM. Patient journey mapping: current practices, challenges and future opportunities in healthcare. Knowl Manag E-Learn: Int J. 2020;12(4):387–404.
Bitner MJ, Ostrom AL, Morgan FN. Service Blueprinting: a practical technique for service innovation. Calif Manage Rev. 2008;50(3):66–94.
Inal Y, Wake JD, Guribye F, Nordgreen T. Usability Evaluations of Mobile Mental Health Technologies: Systematic Review. J Med Internet Res. 2020;22(1):e15337.
O’Brien HL, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inform Sci Technol. 2008;59(6):938–55.
Lukka L, Salonen A, Vesterinen M, Karhulahti VM, Palva S, Palva JM. The qualities of patients interested in using a game-based digital mental health intervention for depression: a sequential mixed methods study. BMC Digital Health. 2023;1(1):37.
Torous J, Bucci S, Bell IH, Kessing LV, Faurholt-Jepsen M, Whelan P, et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry. 2021;20(3):318–35.
Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11.
Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. 2021;23(3):e24387.
O’Connor S, Hanlon P, O’Donnell CA, Garcia S, Glanville J, Mair FS. Understanding factors affecting patient and public engagement and recruitment to digital health interventions: a systematic review of qualitative studies. BMC Med Inform Decis Mak. 2016;16(1):1–15.
Van Gemert-Pijnen JEWC, Nijland N, Van Limburg M, Ossebaard HC, Kelders SM, Eysenbach G, et al. A holistic framework to improve the uptake and impact of ehealth technologies. J Med Internet Res. 2011;13(4):e111.
Yardley L, Morrison L, Bradbury K, Muller I. The person-based approach to intervention development: Application to digital health-related behavior change interventions. J Med Internet Res. 2015;17(1):e30.
Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res. 2017;19(6):e232.
ClinicalTrials.gov. The Effects of Videogames on Depression Symptoms and Brain Dynamics. 2022. Available from: https://clinicaltrials.gov/ct2/show/NCT05426265. Cited 2023 Sep 21.
Lukka L, Karhulahti VM, Bergman VR, Palva JM. Measuring digital intervention user experience with a novel ecological momentary assessment (EMA) method, CORTO. Internet Interv. 2024;35:100706.
van Berkel N, Clarkson MJ, Xiao G, Dursun E, Allam M, Davidson BR, et al. Dimensions of ecological validity for usability evaluations in clinical settings. J Biomed Inform. 2020;110:103553. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1532046420301817.
Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.
Landes SJ, McBain SA, Curran GM. Reprint of: an introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2020;283:112630.
Grant MJ, Booth A. A typology of reviews: An analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108.
Short CE, DeSmet A, Woods C, Williams SL, Maher C, Middelweerd A, et al. Measuring engagement in eHealth and mHealth behavior change interventions: Viewpoint of methodologies. J Med Internet Res. 2018;20(11):1–18.
Bijkerk LE, Oenema A, Geschwind N, Spigt M. Measuring Engagement with Mental Health and Behavior Change Interventions: an Integrative Review of Methods and Instruments. Int J Behav Med. 2023;30(2):155–66.
Pham Q, Graham G, Carrion C, Morita PP, Seto E, Stinson JN, et al. A library of analytic indicators to evaluate effective engagement with consumer mHealth apps for chronic conditions: scoping review. JMIR Mhealth Uhealth. 2019;7(1):e11941.
Sieverink F, Kelders SM, Gemert-Pijnen V. Clarifying the concept of adherence to ehealth technology: systematic review on when usage becomes adherence. J Med Internet Res. 2017;19(12):e402.
Smit D, Vrijsen JN, Groeneweg B, Vellinga-Dings A, Peelen J, Spijker J. A newly developed online peer support community for depression (Depression Connect): qualitative study. J Med Internet Res. 2021;23(7):e25917.
Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: Evidence, theory and improvements. Evid Based Ment Health. 2018;21(3):116–9.
Wentzel J, Van der Vaart R, Bohlmeijer ET, Van Gemert-Pijnen JEWC. Mixing online and face-to-face therapy: How to benefit from blended care in mental health care. JMIR Ment Health. 2016;3(1):1–7.
Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7(MAR):1–9.
Smith JD, Hasan M. Quantitative approaches for the evaluation of implementation research studies. Psychiatry Res. 2020;283:112521.
Lindner P, Nyström MBT, Hassmén P, Andersson G, Carlbring P. Who seeks ICBT for depression and how do they get there? Effects of recruitment source on patient demographics and clinical characteristics. Internet Interv. 2015;2(2):221–5.
Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8(1):18.
Aitken M, Nass D. Digital health trends. Innovation, evidence, regulation, and adoption. IQVIA Institute for Human Data Science. 2020.
Kannisto KA, Korhonen J, Adams CE, Koivunen MH, Vahlberg T, Välimäki MA. Factors associated with dropout during recruitment and follow-up periods of a mHealth-based randomized controlled trial for mobile.net to encourage treatment adherence for people with serious mental health problems. J Med Internet Res. 2017;19(2):e46.
Lee SYD, Iott B, Banaszak-Holl J, Shih SF, Raj M, Johnson KE, et al. Application of Mixed Methods in Health Services Management Research: A Systematic Review. Med Care Res Rev. 2022;79(3):331–44.
Piwik PRO. Direct traffic. 2023. Available from: https://piwik.pro/glossary/direct-traffic/. Cited 2023 Aug 2.
Grodal S, Anteby M, Holm AL. Achieving rigor in qualitative analysis: the role of active categorization in theory building. Acad Manag Rev. 2021;46(3):591–612.
Saldaña J. The Coding Manual for Qualitative Researchers. 3rd edition. SAGE; 2016.
Lukka L, Palva JM. The Development of Game-Based Digital Mental Health Interventions: Bridging the Paradigms of Health Care and Entertainment. JMIR Serious Games. 2023;11:e42173. Available from: https://games.jmir.org/2023/1/e42173. Cited 2023 Aug 2.
Rock PL, Roiser JP, Riedel WJ, Blackwell AD. Cognitive impairment in depression: A systematic review and meta-analysis. Psychol Med. 2014;44(10):2029–40.
Motter JN, Pimontel MA, Rindskopf D, Devanand DP, Doraiswamy PM, Sneed JR. Computerized cognitive training and functional recovery in major depressive disorder: A meta-analysis. J Affect Disord. 2016;189:184–91.
Bediou B, Adams DM, Mayer RE, Tipton E, Green CS, Bavelier D. Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychol Bull. 2018;144(1):77–110.
Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.
Spitzer RL, Kroenke K, Williams JBW, Löwe B. A Brief Measure for Assessing Generalized Anxiety Disorder. Arch Intern Med. 2006;166(10):1092.
Topp CW, Østergaard SD, Søndergaard S, Bech P. The WHO-5 Well-Being Index: A Systematic Review of the Literature. Psychother Psychosom. 2015;84(3):167–76.
Finlex. Decree of the ministry of social affairs and health on remuneration payable to research subjects. 2011. Available from: https://www.finlex.fi/en/laki/kaannokset/2011/en20110082. Cited 2023 May 29.
Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.
Tlapa D, Zepeda-Lugo CA, Tortorella GL, Baez-Lopez YA, Limon-Romero J, Alvarado-Iniesta A, et al. Effects of Lean Healthcare on Patient Flow: A Systematic Review. Value Health. 2020;23(2):260–73.
Roefs A, Fried EI, Kindt M, Martijn C, Elzinga B, Evers AWM, et al. A new science of mental disorders: Using personalised, transdiagnostic, dynamical systems to understand, model, diagnose and treat psychopathology. Behav Res Ther. 2022;153(April):104096.
Gaebel W, Lukies R, Kerst A, Stricker J, Zielasek J, Diekmann S, et al. Upscaling e-mental health in Europe: a six-country qualitative analysis and policy recommendations from the eMEN project. Eur Arch Psychiatry Clin Neurosci. 2021;271(6):1005–16.
van Kessel R, Srivastava D, Kyriopoulos I, Monti G, Novillo-Ortiz D, Milman R, et al. Digital health reimbursement strategies of 8 European countries and Israel: scoping Review and policy mapping. JMIR Mhealth Uhealth. 2023;11:e49003.
Schueller SM, Neary M, O’Loughlin K, Adkins EC. Discovery of and interest in health apps among those with mental health needs: survey and focus group study. J Med Internet Res. 2018;20(6):e10141.
Liu Y, Pencheon E, Hunter RM, Moncrieff J, Freemantle N. Recruitment and retention strategies in mental health trials – A systematic review. PLoS One. 2018;13(8):e0203127.
Hentati A, Forsell E, Ljótsson B, Kaldo V, Lindefors N, Kraepelien M. The effect of user interface on treatment engagement in a self-guided digital problem-solving intervention: a randomized controlled trial. Internet Interv. 2021;26:100448.
Kelders SM, Van Zyl LE, Ludden GDS. The concept and components of engagement in different domains applied to eHealth: A systematic scoping review. Front Psychol. 2020;11:1–14.
Norman DA, Verganti R. Incremental and Radical Innovation: Design Research vs. Technology and Meaning Change. Design Issues. 2014;30(1):78–96.
European Commission. The Digital Economy and Society Index (DESI) 2022 Finland. 2022;1–15. Available from: https://digital-strategy.ec.europa.eu/en/policies/countries-digitisation-performance.
Kyytsönen M, Aalto AM, Vehko T. Social and health care online service use in 2020–2021: Experiences of the population. Finnish Institute for Health and Welfare (THL). Helsinki; 2021.
Pennanen P, Jansson M, Torkki P, Harjumaa M, Pajari I, Laukka E, et al. Digitaalisten palvelujen vaikutukset sosiaali- ja terveydenhuollossa [The impacts of digital services in social and health care]. Helsinki; 2023. Available from: https://urn.fi/URN:ISBN:978-952-383-059-2. Cited 2023 Oct 6.
Lukka L, Karhulahti VM, Palva JM. Factors Affecting Digital Tool Use in Client Interaction According to Mental Health Professionals: Interview Study. JMIR Hum Factors. 2023;10:e44681.
Verschueren S, Buffel C, Vander Stichele G. Developing theory-driven, evidence-based serious games for health: framework based on research community insights. JMIR Serious Games. 2019;7(2):e11565.
Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(42):1–11.
Chorpita BF, Daleiden EL, Weisz JR. Modularity in the design and application of therapeutic interventions. Appl Prev Psychol. 2005;11(3):141–56.
Braithwaite J, Churruca K, Long JC, Ellis LA, Herkes J. When complexity science meets implementation science: a theoretical and empirical analysis of systems change. BMC Med. 2018;16(1):1–14.
Titov N, Hadjistavropoulos HD, Nielssen O, Mohr DC, Andersson G, Dear BF. From research to practice: ten lessons in delivering digital mental health services. J Clin Med. 2019;8(8):1239.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15.
Wang Y, Wong ELY, Nilsen P, Chung VCH, Tian Y, Yeoh EK. A scoping review of implementation science theories, models, and frameworks — an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci. 2023;18(1):43.
Heinsch M, Wyllie J, Carlson J, Wells H, Tickner C, Kay-Lambkin F. Theories Informing eHealth Implementation: Systematic Review and Typology Classification. J Med Internet Res. 2021;23(5):e18500. Available from: https://www.jmir.org/2021/5/e18500.
Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18(1):31.
Allen L, O’Connell A, Kiermer V. How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learned Publishing. 2019;32(1):71–4.
Acknowledgements
The authors would like to thank Jukka Laakso and Sami Laakso for their work with the intervention and website analytics, Paula Partanen for acting as the fourth CSC, Juhani Kolehmainen and Lauri Pohjola for their work with in-game analytics, Alpo Oksaharju for the visual design of the intervention main screen, Ronja Palva for their input on data visualization, and Veli-Matti Karhulahti, Märt Vesinurm, and Olga Perski for their valuable comments on the article draft.
Funding
LL and AS are funded by the Jane and Aatos Erkko Foundation and the Technology Industries of Finland Centennial funding for “The Future Makers Program” awarded to JMP.
MV, AS, and VRB are funded by Business Finland Research2Business funding (42173/31/2020) awarded to JMP.
LL is funded by a Sigrid Juselius Foundation grant awarded to SP and JMP.
PT, SP, and JMP report no funding.
Author information
Contributions
The authors’ contributions are listed using the CRediT statement [108]. LL: Conceptualization (the research approach and aims), methodology (the user journey method), investigation (literature overview, service element modelling, user journey data analysis), writing – original draft, writing – review and editing, visualization, project administration. MV: Investigation (acting as CSC, support contact data analysis), data curation (sign-up data), writing – review and editing. AS: Software (sign-up data management), investigation (acting as CSC, use data management), data curation (use data). VRB: Investigation (acting as CSC, discovery source data analysis), data curation (discovery data). PT: Writing – review and editing. SP: Funding acquisition, supervision, writing – review and editing. JMP: Funding acquisition, supervision, writing – review and editing. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
The randomized controlled trial (RCT) received positive appraisals from the Helsinki University Hospital research ethics committee (HUS/3042/2021) and the Finnish Medicines Agency Fimea (FIMEA/2022/002976), and was preregistered on ClinicalTrials.gov [54]. Written informed consent was obtained from all participants at sign-up.
Consent for publication
Not applicable.
Competing interests
LL and JMP are co-founders of Soihtu DTx Ltd., which develops game-based digital mental health interventions.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lukka, L., Vesterinen, M., Salonen, A. et al. User journey method: a case study for improving digital intervention use measurement. BMC Health Serv Res 25, 479 (2025). https://doi.org/10.1186/s12913-025-12641-9
DOI: https://doi.org/10.1186/s12913-025-12641-9