Evidence-Based Complementary and Alternative Medicine

Special Issue: Research Methodology: Choices, Logistics, and Challenges

Review Article | Open Access

Volume 2014 | Article ID 561320 | https://doi.org/10.1155/2014/561320

Katherine Bradbury, Sam Watts, Emily Arden-Close, Lucy Yardley, George Lewith, "Developing Digital Interventions: A Methodological Guide", Evidence-Based Complementary and Alternative Medicine, vol. 2014, Article ID 561320, 7 pages, 2014. https://doi.org/10.1155/2014/561320

Developing Digital Interventions: A Methodological Guide

Academic Editor: Brian Mittman
Received: 15 Aug 2013
Accepted: 19 Dec 2013
Published: 04 Feb 2014


Digital interventions are becoming an increasingly popular method of delivering healthcare as they enable and promote patient self-management. This paper provides a methodological guide to the processes involved in developing effective digital interventions, detailing how to plan and develop such interventions to avoid common pitfalls. It demonstrates the need for mixed qualitative and quantitative methods in order to develop digital interventions which are effective, feasible, and acceptable to users and stakeholders.

1. Introduction

Healthcare resources are currently under pressure from ageing populations and increases in long-term health conditions [1]. There are growing calls for patients to self-manage their health, and self-care is set to play a leading role in future healthcare [1, 2]. One potentially fruitful and increasingly popular way of promoting self-care and self-management is through digital interventions (DIs). DIs can offer self-care information, education, and behavioural support to patients remotely, using technologies such as the Internet, mobile applications, and text messaging services. They can also allow healthcare practitioners to check a patient’s progress. Evidence from a meta-analysis of 85 studies indicates that DIs can be effective for health management purposes, producing small but significant effects [3].

DIs can be extremely cost-effective [4]: once created, they can be used an unlimited number of times by new users, unlike practitioner-delivered interventions, which require staffing and room hire for each treated patient. DIs can complement practitioner-delivered interventions by delivering routine aspects of healthcare or providing self-care education and support for changing health behaviours, allowing practitioners to focus on more complex tasks. DIs also offer benefits over paper-based interventions, which require the production, printing, and distribution of costly hard-copy material. In addition to lowering costs, DIs can increase access for users, both by providing 24-hour access to healthcare interventions and by increasing access to healthcare for those who might find it harder to attend appointments (e.g., because of their location or health problem) [4].

DIs have the potential to reach huge numbers of people. Recent research suggests that most households in the Western world now have access to the Internet [5]: over 71% of households have accessed the Internet in the United States [6] and 86% in the United Kingdom [7]. Likewise, over 90% of the population in these countries currently have access to and use mobile phones [8, 9]. Internet and mobile phone access in the developing world is also growing rapidly; for instance, 15.9% of African households now have access to the Internet [10], and by 2012, 54% owned mobile phones (although since many people share mobiles, access to one is likely to be higher) [11].

In the past DIs have been costly and difficult to create, requiring expertise in programming. This changed with the introduction of LifeGuide (https://www.lifeguideonline.org/), a University of Southampton initiative, which allows people with no programming knowledge to easily create DIs using free software [12]. The pages of a DI can include text, pictures, or videos and a series of pages can be shown one after another to produce a session. For example, a user might complete a session on healthy eating which includes several pages of information about particular food groups and then a guide to setting goals to change eating habits. Questionnaires can also be included, which can help collect data for research and also tailor intervention content to users (for instance by gender, user preferences, or health risks), thereby showing users information which is most relevant to them. LifeGuide is also able to send remote prompts to users in the form of personalised emails and text messages and can provide a facility for healthcare professionals to monitor patient progress. Furthermore, this software collects both research data (from questionnaire responses) and usage data, enabling practitioners and researchers to keep track of a patient’s progress through a DI.
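LifeGuide interventions are built through its own graphical authoring tools rather than written in code, but the tailoring logic described above can be sketched in a general-purpose language. In the sketch below, the page names and tailoring rules are hypothetical, chosen only to illustrate how questionnaire answers might determine which pages of a session a user sees.

```python
# Hypothetical sketch of questionnaire-based tailoring (illustrative only;
# LifeGuide interventions are configured graphically, not written in Python).

def tailor_session(answers):
    """Choose which pages of a healthy-eating session to show,
    based on a user's questionnaire answers."""
    pages = ["welcome"]
    if answers.get("gender") == "female":
        pages.append("nutrition_advice_women")   # gender-tailored content
    else:
        pages.append("nutrition_advice_general")
    if answers.get("high_blood_pressure"):
        pages.append("salt_reduction_tips")      # shown only to at-risk users
    pages.append("goal_setting")                 # everyone sets a weekly goal
    return pages

print(tailor_session({"gender": "female", "high_blood_pressure": True}))
```

The same branching principle underlies tailoring by user preferences or health risks: each questionnaire response simply selects or suppresses intervention pages.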

Many DIs are developed to support mainstream healthcare practitioners, such as GPs or nurses (e.g., [13–15]). However, there are many instances where a DI might be developed to support or enhance face to face care in CAM therapies. For instance, a DI could enhance osteopathy for back pain by teaching patients exercises to strengthen their backs, freeing practitioners to focus on providing hands-on techniques during treatment sessions. Nutritional advice and support (for instance, with weekly goal setting and reviewing) could support a number of CAM therapies such as acupuncture or Ayurveda, which sometimes offer nutritional advice alongside other techniques [16, 17]. Such DIs could help improve patients’ adherence to lifestyle advice by teaching them skills to make behaviour changes habitual. In other instances it may be possible for some CAM modalities to be delivered primarily through a DI, eliminating the costs of face to face care. Modalities such as autogenic training, meditation, progressive muscle relaxation, guided imagery, and mindfulness-based stress reduction are known to be effective in reducing stress, depression, anxiety, fatigue, and pain [18–21] and might be usefully incorporated into DIs. Indeed, some already show real success; for instance, face to face and online mindfulness interventions appear to be equally effective for reducing stress [22]. It is important to note that there may be particular CAM therapies which cannot be delivered partially or fully through a DI, such as the Alexander Technique, which requires the touch of a practitioner to guide patients through improvements in their posture.

Much of the literature concerning developing DIs is complex, technical, and not aimed at beginners, who require an overview of the methods involved. The methodological processes involved in creating mainstream and CAM DIs are the same; however, we are not aware of any literature aimed at a CAM audience. Our aim is to equip a CAM audience with the knowledge needed to embark on creating DIs. The current article will provide a guide to the methodological processes involved, outlining best practice and how to avoid potential pitfalls; it will cover both initial intervention planning and further intervention development with user testing. Table 1 shows an overview of this process, including critical issues to consider at each stage.

Table 1: Overview of the development process, showing key steps and common critical issues at each stage.

Intervention planning

Key steps: Identify the key behaviours to be targeted and the most appropriate modality for delivery (e.g., computers, smartphones, or text messages).

Key step: Determine likely influences on key behaviours using (i) deductive reviews of the literature and implementation of theory and (ii) inductive research with users and stakeholders.
Critical issue: Ensure both quantitative and qualitative literatures are reviewed to fully understand the likely effectiveness and acceptability of DI components.

Key step: Create an intervention plan, deciding (i) what the intervention should contain to maximise effectiveness, acceptability, and feasibility, (ii) when each intervention component should be introduced to the user, and (iii) how practical aspects of the DI (such as security and logging-on procedures) will work.
Critical issues: Include a security page to allay user concerns. Make the log-in button larger than the register button to avoid problems with users registering multiple times.

Key step: Prioritise intervention components to ensure feasibility for the development team.

Intervention development and usability testing

Key step: Conduct think-aloud interviews to assess user perceptions of, and interactions with, the intervention; modify the DI as needed.
Critical issue: Observe how users navigate the intervention during think-aloud interviews, as this provides additional information about potential problems.

Key step: Conduct retrospective interviews with users who have tried the intervention alone.
Critical issues: Try to conduct interviews within a week of the participant using the DI to maximise recall. Users can keep a diary of their experiences of using the DI to improve recall during interviews.

Key step: Triangulate usage data (e.g., aspects of the DI viewed) with retrospective interview data to gain a fuller picture of how the DI is used when participants are alone; modify the DI as needed.

Intervention testing

Key step: Run a feasibility RCT to test the study processes for a full trial.
Critical issue: Retrospective interviews can highlight ways to improve study processes or the DI for the main trial.

Key step: Run a fully powered RCT to test the effectiveness and cost-effectiveness of the DI.
Critical issue: Retrospective interviews can enhance interpretation of quantitative results.

2. Intervention Planning

There is widespread consensus that systematic intervention planning incorporating existing evidence, theory, and the views of potential users (and those involved with their care) is key to creating interventions which will be successful and widely adopted into practice [17].

Several decisions must be made prior to starting work on a DI design. Firstly, the key behaviours that users need to perform in order to improve their health problem must be identified [13, 23–28]. For instance, users may need to make lifestyle changes, such as increasing their physical activity, or perhaps self-monitor their behaviour or a particular aspect of their health such as their weight or mood.

Secondly, it is important to consider the modality through which a DI will be delivered; common modalities include computers, smartphones, and text messages. The nature of the intervention that can be produced using each of these modalities will be constrained by the capabilities of each technology. For instance, computer-based interventions can include multiple sessions that users complete over time, incorporating multiple and complex behaviour change techniques. Larger screens also mean that more detailed information can be included. In contrast, smartphones provide a smaller screen, so less information can be displayed. However, smartphones could provide more accessible interventions than computers, since people tend to carry them most of the time and look at them frequently [29–32]. Smartphone technology also includes sensors, for instance, GPS to recognise location and voice sensors which might be used to identify mood changes [29, 33–35]; research into how best to harness these sensors for intervention design is ongoing [29, 35]. It is possible that in the near future such sensors could enable intervention at critical time points, such as when users are lower in mood and might lack motivation to carry out positive health behaviours. Text messages are limited to providing the briefest information but are likely to be accessible to the widest range of individuals, including those in lower SES groups [36]. Text message interventions can include brief advice, support, or reminders to perform health behaviours. One study showed that smokers receiving smoking cessation advice, support, and distraction via text were twice as likely to quit as those in a control group [37].
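As a rough illustration of intervention at critical time points, the sketch below triggers a brief supportive text when a mood score drops. The threshold, the 1-5 mood scale, the message wording, and the `send` callback are all hypothetical; a real system would draw the score from sensors or self-report and send via an SMS gateway.

```python
# Hypothetical sketch: time a brief supportive text message using a
# (sensed or self-reported) mood score on an assumed 1-5 scale.

def maybe_send_prompt(mood_score, send, threshold=3):
    """Send a short supportive message when mood drops below threshold;
    return True if a message was sent."""
    if mood_score < threshold:
        send("Feeling low is common; a 10-minute walk can lift mood. "
             "Why not try your activity goal now?")
        return True
    return False

sent = []
maybe_send_prompt(2, sent.append)   # low mood: message queued
maybe_send_prompt(4, sent.append)   # mood fine: nothing sent
print(len(sent))
```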

Once key user behaviours and modality of delivery have been determined, likely influences on key behaviours can be identified [23, 28]. A combination of deductive and inductive approaches is useful at this point.

2.1. Deductive Approaches

Deductive approaches are useful to ascertain what is already known about changing a behaviour; these approaches include reviews of the existing literature and theory that might usefully inform intervention design.

Reviews of the existing quantitative literature (in particular RCTs) are useful to elicit what behaviour change techniques are likely to be most effective and cost effective. Such studies have high internal validity and are therefore capable of isolating cause and effect [38]. However, such studies cannot answer questions about how acceptable an intervention is to a user or why users fail to adhere to an intervention. Inductive qualitative research is instead able to answer such questions as it enables in depth exploration of users’ viewpoints. Reviews of the qualitative literature focussing on participants’ experiences of using particular behaviour change techniques or entire interventions are therefore equally valuable, providing insights into what intervention components are likely to be acceptable to users and feasible in practice. The downside of qualitative studies is that they lack the precision and control needed for high internal validity, meaning that they cannot draw conclusions about cause and effect or statistical relationships. The combination of both quantitative and qualitative evidence therefore means that the strengths of both methods can be utilised and the limitations of each are to some extent overcome by the inclusion of the other.

A number of intervention development guidelines recommend that theory should also inform intervention design, as it can improve specification of potentially active intervention components [23, 25]. Indeed, systematic review evidence shows that more extensive use of theory in DIs is associated with larger effect sizes [3]. Theory can inform DI developers of the likely influences on key behaviours. For example, the Behaviour Change Wheel provides a system for identifying what individual level or environmental factors need to change in order for a user to perform a given behaviour [26]. The model links these key influences on behaviour to intervention functions (e.g., education, providing incentives, or modelling), meaning that once a lacking key influence is highlighted, the intervention functions needed to strengthen it can be readily identified and implemented. Taxonomies of behaviour change techniques are also useful at this point, as they identify a wide range of techniques which can be usefully employed to enhance or minimise key influences on behaviour [39, 40].
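The lookup step this describes, from a lacking influence to candidate intervention functions, can be sketched as follows. The mapping here is a simplified, hypothetical illustration, not a reproduction of the published Behaviour Change Wheel matrix.

```python
# Hypothetical, simplified lookup (NOT the published Behaviour Change Wheel
# matrix): influences on behaviour -> candidate intervention functions.
CANDIDATE_FUNCTIONS = {
    "knowledge":   ["education"],
    "skills":      ["training", "modelling"],
    "motivation":  ["incentivisation", "persuasion"],
    "opportunity": ["environmental restructuring", "enablement"],
}

def functions_for(missing_influences):
    """Collect candidate intervention functions for the influences an
    assessment has flagged as lacking, preserving order, no duplicates."""
    out = []
    for influence in missing_influences:
        for fn in CANDIDATE_FUNCTIONS.get(influence, []):
            if fn not in out:
                out.append(fn)
    return out

print(functions_for(["knowledge", "motivation"]))
```

In practice this step is a judgement made by the development team with the published model in hand; the sketch simply shows why an explicit influence-to-function mapping makes that judgement systematic.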

2.2. Inductive Approaches

Equally valuable to DI planning is inductive qualitative work, which can provide important information about which intervention features might be acceptable (or unacceptable) to users and therefore which of those highlighted by deductive approaches might be most usefully integrated [13]. Inductive research is particularly valuable at the intervention planning stage if only limited qualitative literature assessing the acceptability of intervention components within the target population is available. Sam Watts and George Lewith have been developing a DI for men with prostate cancer assigned to active surveillance, a process which is known to be stressful [41]. At first, deductive research suggested that a mindfulness meditation intervention would be an effective way of reducing stress. However, early qualitative interviews identified that these patients held negative perceptions of mindfulness meditation and were not prepared to engage in such an intervention. The idea of a “stress reduction” intervention which incorporated the breathing exercises and relaxation techniques from mindfulness programmes was, however, viewed as acceptable. Qualitative work therefore highlighted how the intervention could be repackaged to appeal to men with prostate cancer.

The views of stakeholders, such as healthcare staff who might be supporting a DI or affected by its implementation, should also be considered to ensure that a DI is developed which is acceptable and feasible to adopt in practice [19, 27]. For example, focus groups with healthcare practitioners revealed that they have little time or training to deliver weight management services but that they would welcome a DI which supported patients in losing weight [42]. As staff had little training in weight management techniques, a DI was designed to provide the weight loss advice while practitioners were involved to support and encourage patients in order to help maintain patient motivation. DIs developed without consideration of the opinions of stakeholders and users are less likely to be successfully implemented [27].

A limitation of using an inductive approach in intervention development is that the small samples employed might not be representative of the entire target population, and some viewpoints may therefore be missed. This problem can be reduced by seeking a maximum variation sample, which includes a range of people from the target population (e.g., of different backgrounds, genders, races, employment statuses, or stages of their particular health problem) so that a wider variety of viewpoints is likely to be captured [43]. Importantly, inductive qualitative research is able to capture novel responses which have not been anticipated by the researcher. In contrast, deductive research (e.g., questionnaire designs) requires the researcher to choose the possible responses that users will give in advance, meaning that vital information might not be captured [44], which could lead to problems with the acceptability of a DI being overlooked.
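One simple way to operationalise maximum variation sampling is a greedy selection that repeatedly picks the candidate who adds the most attribute values not yet represented in the sample. The attributes and candidate pool below are hypothetical, purely to illustrate the principle.

```python
# Hypothetical sketch: greedily assemble a maximum variation sample by
# picking, at each step, the candidate covering the most unseen attribute values.

def max_variation_sample(candidates, k):
    chosen, seen, pool = [], set(), list(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda c: len(set(c.items()) - seen))
        chosen.append(best)
        seen |= set(best.items())
        pool.remove(best)
    return chosen

pool = [
    {"gender": "F", "age": "60+", "stage": "early"},
    {"gender": "F", "age": "60+", "stage": "early"},   # adds nothing new
    {"gender": "M", "age": "40-59", "stage": "late"},
    {"gender": "F", "age": "40-59", "stage": "late"},
]
print(max_variation_sample(pool, 2))
```

In practice sampling decisions are made by researcher judgement; the sketch simply shows why deliberately spreading recruitment across attributes captures more distinct viewpoints than recruiting the first volunteers.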

2.3. Integrating Deductive and Inductive Approaches: Creating an Intervention Plan

Once completed, the findings of the deductive and inductive research need to be collated to create a plan of what the intervention should contain in order to maximise effectiveness, acceptability, and feasibility. Decisions can then be made about when each intervention component should be introduced to the user; for instance it might be important to introduce goal setting early on, whereas environmental restructuring might need to be addressed later.

It is also useful at this stage to plan the practical elements of the DI, which can influence uptake. For example, potential users may fail to register because of concerns about the privacy or security of their data. A security page at the beginning of an intervention can allay concerns and avoid problems with uptake. Such a page should state who has access to the data (e.g., the research team) and explain that data are password protected, anonymised, and protected by firewalls [20]. The process of initially signing up for a DI and later logging back on can also confuse users, leading them to register multiple times and therefore start the intervention again rather than resume where they left off. This can be resolved by making log-in buttons larger than sign-up buttons, so that users will be more likely to log in rather than sign up multiple times. The LifeGuide beginner’s guide [20] is a useful resource for planning practical aspects of a DI.

2.4. Prioritisation: Deciding What Makes the Final Cut

Whilst researching and planning the aspects of an intervention likely to be the most feasible, acceptable, and effective for users, it is also vital to consider what will be feasible for the development team. Time and staffing resources are likely to be limited for most DI developers; prioritisation of the components which are most essential to a particular intervention is therefore necessary [45, 46]. Prioritisation should be adjusted throughout the intervention planning and development phases as unexpected issues arise; it is therefore best considered an iterative process. MoSCoW, a prioritisation model used by digital service developers [46, 47], is a simple tool that can be usefully employed to prioritise the content of DIs. Each of the capital letters represents a priority:
(i) M: Must have this for the intervention to be effective, acceptable, and feasible.
(ii) S: Should have this (if possible) for the intervention to be a success, but it may be deliverable in a different way or is in some way less critical than a Must have.
(iii) C: Could have this, as it would be useful, but only if time and resources are available.
(iv) W: Would like this; classifies a feature which would be nice but is not essential now and can be put on hold.
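A MoSCoW classification can be kept as a simple, explicit artefact that the whole team can query when deciding what to build first. The feature list below is hypothetical, for illustration only.

```python
# Hypothetical sketch: record MoSCoW priorities for DI features and
# filter down to what the team commits to for the current iteration.
FEATURES = {
    "goal setting with preset goals": "M",   # Must have
    "weekly progress emails":         "S",   # Should have
    "printable summary sheets":       "C",   # Could have
    "practitioner dashboard":         "W",   # Would like (on hold for now)
}

def committed(features, levels=("M", "S")):
    """Return the features to build now (Must and Should, by default)."""
    return [name for name, priority in features.items() if priority in levels]

print(committed(FEATURES))
```

Because priorities are revisited iteratively, keeping them in one shared structure (or simply one shared table) makes re-classification cheap when unexpected issues arise.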

In our experience this tool ensures efficient development of successful interventions and aids effective communication between team members, who might otherwise perceive different priorities.

3. Intervention Development

Once intervention plans have been put into place and a DI created, usability testing can be employed to further develop and improve the DI. Usability testing originated in the field of human-computer interaction but is now considered essential in the development of e-health interventions [48]. It is used to assess the target group’s views regarding the content and format of the intervention during development and to evaluate user experiences following completion of the intervention [49]. It aims to identify and eliminate barriers to easy and efficient use by members of the target population and to establish user acceptability and satisfaction with the intervention [49]. Usability testing is an iterative process: it leads to changes being made to the DI, which are then followed by further user testing. This process continues until no further suggestions for changes are being made (e.g., [50]). There are two main qualitative methods for carrying it out: think-aloud interviews, which involve asking users to vocalise their reactions and thinking processes while they use the DI, and retrospective semistructured interviews [51]. Each method is discussed in more detail below.

3.1. Think-Aloud Interviews

During think-aloud interviews, participants, with the researcher present, are observed using the DI and asked to comment on their reactions to every aspect of the intervention. In the case of computer-based interventions with several sessions, a participant might look at one or two sessions during each think-aloud interview, meaning multiple interviews will be required to test the whole intervention. Think-aloud interviews enable researchers to identify problems people might experience when carrying out the intervention and to modify the DI accordingly. They are extremely effective in highlighting navigational difficulties [52] and can reveal useful information about DI content which needs to be modified [50]. For example, think-aloud studies investigating a web-based weight loss intervention revealed that participants did not always implement goal setting and planning as intended. They often restated goals or made imprecise or irrelevant plans which would not achieve weight loss [13]. Following these interviews, the researchers therefore modified the website by supplying concrete examples of suitable goals, with preset goal choices [13]. These were sufficiently specific and challenging to achieve weight loss if followed and gave more detailed illustrations of how to make plans.

3.2. Retrospective Interviews

Retrospective semistructured interviews are valuable to evaluate user experiences after completion of part of, or the entire, DI. Here, participants try out the intervention alone as an end user and are interviewed about their experiences after completing some or all of the intervention. Retrospective interviews complement think-aloud methods by providing information about how people use the DI in the absence of a researcher, who might otherwise influence participants’ experiences of carrying out the intervention [53]. For instance, participants might look at different information or use the DI in a different way when not being observed. Furthermore, using the intervention alone allows participants to try out new behaviours recommended by the DI, such as making dietary changes, which can then provide useful information about how well the DI supports users in making such changes. Drawbacks of this method include attrition over the course of the intervention and the logistics of carrying out interviews before too much time has elapsed following completion of the intervention [54]. This is particularly important with regard to health conditions where the condition or treatment might impair long-term memory. Asking participants to keep a diary of their experiences of using the DI could help to overcome problems of forgetting important information. We have found it useful to ask participants to note any aspects that they found particularly useful or not useful, easy to use or problematic, and aspects which they particularly liked or disliked. In addition, retrospective data can be triangulated with data collected by the DI itself about how the participant has used the intervention (for instance, what pages they have viewed).
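The triangulation step can be sketched as a simple comparison of logged page views against the pages a participant spontaneously mentioned, flagging viewed-but-unmentioned pages as topics to probe at interview. The participant IDs and page names below are hypothetical.

```python
# Hypothetical sketch: compare DI usage logs with pages mentioned at
# interview, flagging viewed-but-unmentioned pages worth probing.

def pages_to_probe(usage_log, mentioned):
    """For each participant, list viewed pages not raised at interview."""
    return {uid: sorted(set(viewed) - set(mentioned.get(uid, [])))
            for uid, viewed in usage_log.items()}

usage_log = {"p01": ["welcome", "goal_setting", "salt_tips"],
             "p02": ["welcome"]}
mentioned = {"p01": ["goal_setting"]}
print(pages_to_probe(usage_log, mentioned))
```

Even this minimal comparison shows how usage data collected by the DI itself can steer the interview towards parts of the intervention the participant used but did not comment on.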

Think-aloud and retrospective approaches should be seen as complementary, not competing. They offer additional data which can be usefully triangulated for a fuller picture of how people use a DI and of any improvements which might usefully be made [55]. These two approaches might also be best employed at different time points in the development phase: whilst think-aloud approaches are useful early on to iron out major content or navigational issues, retrospective studies can be helpful later to identify any further problems occurring when people use the intervention alone.

Once an initial DI appears acceptable to users, it can enter a feasibility study: a small-scale randomised controlled trial (RCT) designed to test the processes of carrying out a full trial in a smaller (and therefore less costly) group of patients. Such designs are extremely useful for highlighting any problems with the feasibility of using a DI in the context in which it will eventually be implemented [56]. They can also estimate the sample sizes needed to test an intervention, based on observed effect sizes [56], although they cannot test whether an intervention is effective, as they are inevitably underpowered to do so. Data from feasibility studies can then be used to make final refinements to the DI before its effectiveness is tested in a fully powered RCT. The Medical Research Council (MRC) provides extremely useful guidance on testing complex interventions (such as DIs) in RCTs [23, 57]. Retrospective interviews carried out at the end of both feasibility and fully powered RCTs can further enhance interpretation, highlighting explanations for quantitative results or further ways in which a DI might be improved.
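For orientation, the relationship between effect size and required sample size can be sketched with the standard normal-approximation formula for a two-arm trial. The z values below are hard-coded for the conventional choices of two-sided alpha = 0.05 and 80% power; this is a back-of-envelope sketch, not what any particular study used.

```python
import math

def n_per_arm(d, z_alpha=1.96, z_beta=0.84):
    """Approximate participants needed per arm to detect a standardised
    effect size d (two-sided alpha = 0.05, power = 0.80), using the
    normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2."""
    n = 2 * ((z_alpha + z_beta) / d) ** 2
    return math.ceil(round(n, 6))   # round first to absorb float noise

print(n_per_arm(0.2))  # small effect
print(n_per_arm(0.5))  # medium effect
```

The small effects typical of DIs [3] therefore imply trials of several hundred participants per arm, which is one reason inexpensive feasibility work precedes a fully powered RCT.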

Once a DI has been shown to be effective, feasible, and acceptable to users, implementation studies can be useful to determine how best to implement the DI into routine practice, to identify any unintended adverse effects, and to test whether the DI is equally as effective in this environment [23, 57]. Carrying out qualitative research throughout the planning and development stages, considering and responding to the views of both end users and stakeholders (such as healthcare professionals), as outlined in this article, should mean that problems with implementation can be largely avoided [27].

4. Conclusion

The methodological processes involved in developing DIs have been outlined; we have demonstrated how a mixture of qualitative and quantitative designs can be used to ensure the effectiveness, feasibility, and the acceptability of DIs to both end users and stakeholders. The technological capabilities of DIs are increasing at a rapid pace and it is likely that they will become an increasingly important component of healthcare. It is hoped that this guide will help to promote the rigorous development of DIs and encourage the CAM community to take advantage of the technology available, in order to enhance patients’ empowerment and effective self-management of their problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


  1. World Health Organization, The World Health Report: Making a Difference, World Health Organization, Geneva, Switzerland, 1999.
  2. D. Wanless, “Securing our future health: taking a long-term view,” Final Report, HM Treasury, London, UK, 2002. View at: Google Scholar
  3. T. L. Webb, J. Joseph, L. Yardley, and S. Michie, “Using the Internet to promote health behavior change: a systematic review and meta-analysis of the impact of theoretical basis, use of behavior change techniques, and mode of delivery on efficacy,” Journal of Medical Internet Research, vol. 12, no. 1, 2010. View at: Publisher Site | Google Scholar
  4. F. Griffiths, A. Lindenmeyer, J. Powell, P. Lowe, and M. Thorogood, “Why are health care interventions delivered over the internet? A systematic review of the published literature,” Journal of Medical Internet Research, vol. 8, no. 2, article e10, 2006. View at: Publisher Site | Google Scholar
  5. International Telecommunications Union, The World in 2013: Fact and Figures, ITU, Geneva, Switzerland, 2013.
  6. US Department of Commerce, Computer and Internet Usage in the United States, United States Census Bureau, Washington, DC, USA, 2013.
  7. Office for National Statistics, Statistical Bulletin: Internet Access for Households and Individuals, HM Government, London, UK, 2013.
  8. Ofcom, UK Communications Market Report, UK Mobile Phone Usage Statistics, London, UK, 2013.
  9. Pew Internet Research Centre, Mobile Statistics in the US, Washington, DC, USA, 2013, http://www.pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx.
  10. Internet Society, “An open Internet in Africa: challenges shifting beyond access,” Geneva, Switzerland, 2013, http://internet-africa.projects.visual.ly/#.
  11. GSMA, Telecommunications in Africa: Current Usage and Access, GSMA, Nairobi, Kenya, 2012.
  12. S. Williams, L. Yardley, M. Weal, and G. Willis, “Introduction to LifeGuide: open-source software for creating online interventions for health care, health promotion and training,” in Proceedings of the Global Telemedicines and eHealth Updates: Knowledge Resources Conference (Med-e-tel '13), M. Jordanova and F. Lievens, Eds., vol. 3, pp. 187–190, International Society for Telemedicine & eHealth (ISfTeH), Luxembourg, 2013.
  13. L. Yardley, S. Williams, K. Bradbury et al., “Integrating user perspectives into the development of a web-based weight management intervention,” Clinical Obesity, vol. 2, no. 5-6, pp. 132–141, 2013.
  14. P. Little, B. Stuart, N. Francis et al., “Effects of internet-based training on antibiotic prescribing rates for acute respiratory-tract infections: a multinational, cluster, randomised, factorial, controlled trial,” The Lancet, vol. 382, no. 9899, pp. 1175–1182, 2013.
  15. H. A. Everitt, R. E. Moss-Morris, A. Sibelli et al., “Management of irritable bowel syndrome in primary care: feasibility randomised controlled trial of mebeverine, methylcellulose, placebo and a patient self-management cognitive behavioural therapy website (MIBS trial),” BMC Gastroenterology, vol. 10, article 136, 2010.
  16. M. Evans, C. Paterson, L. Wye et al., “Lifestyle and self-care advice within traditional acupuncture consultations: a qualitative observational study nested in a co-operative inquiry,” Journal of Alternative and Complementary Medicine, vol. 17, no. 6, pp. 519–529, 2011.
  17. P. Sebastian, O. H. M. Lic, and H. C. Ayur, “The Ayurvedic approach to digestive health and nutrition,” Nutrition, vol. 2, 2008.
  18. N. Kanji, “Management of pain through autogenic training,” Complementary Therapies in Nursing and Midwifery, vol. 6, no. 3, pp. 143–148, 2000.
  19. S. Lolak, G. L. Connors, M. J. Sheridan, and T. N. Wise, “Effects of progressive muscle relaxation training on anxiety and depression in patients enrolled in an outpatient pulmonary rehabilitation program,” Psychotherapy and Psychosomatics, vol. 77, no. 2, pp. 119–125, 2008.
  20. J. L. A. Apóstolo and K. Kolcaba, “The effects of guided imagery on comfort, depression, anxiety, and stress of psychiatric inpatients with depressive disorders,” Archives of Psychiatric Nursing, vol. 23, no. 6, pp. 403–411, 2009.
  21. D. K. Reibel, J. M. Greeson, G. C. Brainard, and S. Rosenzweig, “Mindfulness-based stress reduction and health-related quality of life in a heterogeneous patient population,” General Hospital Psychiatry, vol. 23, no. 4, pp. 183–192, 2001.
  22. R. Q. Wolever, K. J. Bobinet, K. McCabe et al., “Effective and viable mind-body stress reduction in the workplace: a randomized controlled trial,” Journal of Occupational Health Psychology, vol. 17, no. 2, article 246, 2012.
  23. Medical Research Council, A Framework for Development and Evaluation of RCTs for Complex Interventions to Improve Health, Medical Research Council, London, UK, 2000.
  24. W. Hardeman, S. Sutton, S. Griffin et al., “A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation,” Health Education Research, vol. 20, no. 6, pp. 676–687, 2005.
  25. National Institute of Health and Clinical Excellence, Behaviour Change (Public Health Guidance 6), NICE, London, UK, 2007.
  26. S. Michie, M. M. van Stralen, and R. West, “The behaviour change wheel: a new method for characterising and designing behaviour change interventions,” Implementation Science, vol. 6, no. 1, article 42, 2011.
  27. J. E. W. C. van Gemert-Pijnen, N. Nijland, M. van Limburg et al., “A holistic framework to improve the uptake and impact of eHealth technologies,” Journal of Medical Internet Research, vol. 13, no. 4, article e111, 2011.
  28. E. Murray, “Web-based interventions for behaviour change and self-management: potential, pitfalls, and progress,” Medicine, vol. 1, no. 2, article e3, 2012.
  29. M. J. Weal, C. Hargood, D. T. Michaelides, L. Morrison, and L. Yardley, “Making online behavioural interventions mobile,” 2012, http://eprints.soton.ac.uk/343040/1/Digital_research_2012.pdf.
  30. G. Miller, “The smartphone psychology manifesto,” Perspectives on Psychological Science, vol. 7, no. 3, pp. 221–237, 2012.
  31. M. E. Morris and A. Aguilera, “Mobile, social, and wearable computing and the evolution of psychological practice,” Professional Psychology: Research and Practice, vol. 43, no. 6, pp. 622–626, 2012.
  32. M. J. Boschen and L. M. Casey, “The use of mobile telephones as adjuncts to cognitive behavioral psychotherapy,” Professional Psychology: Research and Practice, vol. 39, no. 5, pp. 546–552, 2008.
  33. K. K. Rachuri, M. Musolesi, C. Mascolo, P. J. Rentfrow, C. Longworth, and A. Aucinas, “EmotionSense: a mobile phones based adaptive platform for experimental social psychology research,” in Proceedings of the 12th International Conference on Ubiquitous Computing (UbiComp '10), pp. 281–290, September 2010.
  34. H. Lu, D. Frauendorfer, M. Rabbi et al., “StressSense: detecting stress in unconstrained acoustic environments using smartphones,” in Proceedings of the ACM Conference on Ubiquitous Computing, pp. 351–360, ACM, 2012.
  35. N. Lathia, V. Pejovic, K. K. Rachuri, C. Mascolo, M. Musolesi, and P. J. Rentfrow, “Smartphones for large-scale behaviour change interventions,” to appear in IEEE Pervasive Computing.
  36. B. S. Fjeldsoe, A. L. Marshall, and Y. D. Miller, “Behavior change interventions delivered by mobile telephone short-message service,” American Journal of Preventive Medicine, vol. 36, no. 2, pp. 165–173, 2009.
  37. A. Rodgers, T. Corbett, D. Bramley et al., “Do u smoke after txt? Results of a randomised trial of smoking cessation using mobile phone text messaging,” Tobacco Control, vol. 14, no. 4, pp. 255–261, 2005.
  38. J. E. McGrath and B. A. Johnson, “Methodology makes meaning: how both qualitative and quantitative paradigms shape evidence and its interpretation,” in Qualitative Research in Psychology: Expanding Perspectives in Methodology and Design, P. Camic, J. Rhodes, and L. Yardley, Eds., pp. 31–48, APA Books, Washington, DC, USA, 2003.
  39. C. Abraham and S. Michie, “A taxonomy of behavior change techniques used in interventions,” Health Psychology, vol. 27, no. 3, pp. 379–387, 2008.
  40. S. Michie, M. Johnston, C. Abraham, J. Francis, and M. P. Eccles, “The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions,” Annals of Behavioral Medicine, vol. 46, no. 1, pp. 81–95, 2013.
  41. K. L. Burnet, C. Parker, D. Dearnaley, C. R. Brewin, and M. Watson, “Does active surveillance for men with localized prostate cancer carry psychological morbidity?” British Journal of Urology International, vol. 100, no. 3, pp. 540–543, 2007.
  42. L. J. Ware, S. Williams, K. Bradbury et al., “Exploring weight loss services in primary care and staff views on using a web-based programme,” Informatics in Primary Care, vol. 20, no. 4, pp. 283–288, 2012.
  43. D. E. Polkinghorne, “Language and meaning: data collection in qualitative research,” Journal of Counseling Psychology, vol. 52, no. 2, pp. 137–145, 2005.
  44. L. Yardley and F. Bishop, “Mixing qualitative and quantitative methods: a pragmatic approach,” in Qualitative Research in Psychology, C. Willig and W. Stainton-Rogers, Eds., pp. 313–326, Sage, Los Angeles, Calif, USA, 2007.
  45. L. van Velsen, J. Wentzel, and J. E. van Gemert-Pijnen, “Designing eHealth that matters via a multidisciplinary requirements development approach,” JMIR Research Protocols, vol. 2, no. 1, article e21, 2013.
  46. “A beginner's guide to creating online interventions using LifeGuide,” 2013, https://www.lifeguideonline.org/file/download/BeginnersGuide.
  47. J. Kuhn, “Decrypting the MoSCoW analysis (DITY weekly newsletter),” itSM Solutions, vol. 5, no. 44, 2009, http://www.itsmsolutions.com/newsletters/DITYvol5iss44.htm.
  48. C. Pagliari, “Design and evaluation in ehealth: challenges and implications for an interdisciplinary field,” Journal of Medical Internet Research, vol. 9, no. 2, article e15, 2007.
  49. L. Yardley, L. G. Morrison, P. Andreou, J. Joseph, and P. Little, “Understanding reactions to an internet-delivered health-care intervention: accommodating user preferences for information provision,” BMC Medical Informatics and Decision Making, vol. 10, no. 1, article 52, 2010.
  50. L. Yardley, S. Miller, E. Teasdale, and P. Little, “Using mixed methods to design a web-based behavioural intervention to reduce transmission of colds and flu,” Journal of Health Psychology, vol. 16, no. 2, pp. 353–364, 2011.
  51. M. J. Van Den Haak, M. D. T. De Jong, and P. J. Schellens, “Evaluation of an informational web site: three variants of the think-aloud method compared,” Technical Communication, vol. 54, no. 1, pp. 58–71, 2007.
  52. L. Morrison, J. Joseph, P. Andreou, and L. Yardley, “Application of the LifeGuide: a think-aloud study of users' experiences of the ‘Internet Doctor’,” 2010.
  53. J. L. Branch, “Investigating the information-seeking processes of adolescents: the value of using think alouds and think afters,” Library and Information Science Research, vol. 22, no. 4, pp. 371–392, 2000.
  54. C. Leonidou, A qualitative study on cancer survivors' views about the social support components of an internet-based intervention directed to cancer-related fatigue [MSc dissertation], University of Southampton, 2012.
  55. A. Hinchliffe and W. K. Mummery, “Applying usability testing techniques to improve a health promotion website,” Health Promotion Journal of Australia, vol. 19, no. 1, pp. 29–35, 2008.
  56. M. Arain, M. J. Campbell, C. L. Cooper, and G. A. Lancaster, “What is a pilot or feasibility study? A review of current practice and editorial policy,” BMC Medical Research Methodology, vol. 10, article 67, 2010.
  57. R. Anderson, “New MRC guidance on evaluating complex interventions,” British Medical Journal, vol. 337, no. 7676, pp. 944–945, 2008.

Copyright © 2014 Katherine Bradbury et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
