
Best practice for integrating digital interventions into clinical care for young people at risk of suicide: a Delphi study

Abstract

Background

Digital tools have the capacity to complement and enhance clinical care for young people at risk of suicide. Despite the rapid rise of digital tools, their rate of integration into clinical practice remains low. The poor uptake of digital tools may be in part due to the lack of best-practice guidelines for clinicians and services to safely apply them with this population.

Methods
A Delphi study was conducted to produce a set of best-practice guidelines for clinicians and services on integrating digital tools into clinical care for young people at risk of suicide. First, a questionnaire was developed incorporating action items derived from peer-reviewed and grey literature, and stakeholder interviews with 17 participants. Next, two independent expert panels comprising professionals (academics and clinical staff; n = 20) and young people with lived experience of using digital technology for support with suicidal thoughts and behaviours (n = 29) rated items across two consensus rounds. Items reaching consensus (rated as “essential” or “important” by at least 80% of panel members) at the end of round two were collated into a set of guidelines.

Results
Out of 326 individual items rated by the panels, 188 (57.7%) reached consensus for inclusion in the guidelines. The endorsed items provide guidance on important topics when working with young people, including when and for whom digital tools should be used, how to select a digital tool and identify potentially harmful content, and identifying and managing suicide risk conveyed via digital tools. Several items directed at services (rather than individual clinicians) were also endorsed.

Conclusions
This study offers world-first evidence-informed guidelines for clinicians and services to integrate digital tools into clinical care for young people at risk of suicide. Implementation of the guidelines is an important next step and will hopefully lead to improved uptake of potentially helpful digital tools in clinical practice.


Background
Suicide is a leading cause of death in young people globally [1], and is the number one cause of death in young people in Australia [2]. Suicidal ideation and behaviour (including self-harm and suicide attempt) are more common than suicide death and are associated with risk of future fatal and non-fatal suicide attempts, with international prevalence estimates of suicidal ideation and suicide attempt ranging from 14.3 to 22.6% and 4.6 to 15.8% respectively [3]. Whilst many young people who experience suicidal thoughts and behaviours do not seek help, many do; indeed, data indicate that approximately one-third of young people who died by suicide in Australia between 2006 and 2015 were receiving mental health treatment at the time of their death [4]. This indicates a need for suicide prevention efforts to focus both on improving access to services and on enhancing the quality of care that services provide.

Young people are increasingly reliant on digital technologies to seek help and information regarding their mental health, with the COVID-19 pandemic particularly highlighting the potential for telehealth and digital tools to enhance, supplement or extend clinical care [5, 6]. Moreover, an emerging evidence base has demonstrated that digital interventions show promise for the prevention and treatment of suicidal ideation and behaviours in youth [7,8,9], including when delivered alongside standard clinical care [10]. Despite the emerging evidence base, implementation remains a challenge and the rate of integration of digital interventions into clinical services is low [11, 12]. Several factors are likely at play, including unfamiliarity with available digital interventions and/or how to use them, concerns about the quality or potential utility of available interventions, concerns about how risk of suicide or suicidal behaviour will be assessed, monitored and managed in the digital environment, and time and resource constraints [13,14,15]. Moreover, no evidence-informed guidance exists for clinicians or services regarding how digital tools can be integrated safely and effectively into clinical care for young people at risk of suicide.

The current study therefore aimed to engage a network of academic experts, mental health professionals, and young people with lived experience to develop a set of best-practice guidelines for integrating digital interventions into clinical care for young people who experience suicidal ideation and/or engage in suicide-related behaviour (including self-harm).

Methods
Study design

This study used the Delphi methodology, a method of achieving expert consensus on a particular topic where other study designs are either infeasible or inappropriate, and which offers an opportunity to incorporate practice-based evidence through feedback from expert panellists [16]. The study was conducted over two phases. In the first phase, a questionnaire was developed comprising all possible statements that could go into a set of guidelines. In the second phase, the questionnaire was distributed to two expert panels who rated each of the items according to importance for inclusion in the guidelines. Following two rounds of questionnaires, all items that achieved consensus were compiled into the final guidelines.

Phase 1: Questionnaire development

Peer-reviewed and grey literature were systematically searched to identify action items. Action items were defined as statements that described what clinicians, service providers, or services (i.e., organisations) should do or had done when using digital tools with young people who experience suicidal thoughts or behaviour.

To identify peer-reviewed articles, Medline, PsycInfo and Embase were searched in July 2020 using the following search string: (suicid* OR self harm OR self-harm) AND (digital OR online OR internet OR technolog* OR ehealth OR e-health OR mhealth OR m-health OR web OR mobile device* OR mobile phone OR cell* phone OR smartphone) AND (young OR youth OR adolescen* OR teen* OR child*). This search strategy identified 1790 records after removing duplicates. Based on title and abstract screening, 165 articles were reviewed in full for eligibility. A total of 52 articles contained relevant action items to extract from and were considered “included”.

To identify grey literature, Google search engines from Australia, New Zealand, Canada, the USA and the UK were searched using four different combinations of search terms related to youth, digital technology, and suicide/self-harm. The first two pages of search results were reviewed. A total of 128 unique grey literature sources were identified via this search strategy, of which 30 contained relevant action items and were considered “included”.

Additionally, due to the paucity of grey or peer-reviewed literature specifically regarding the integration of digital tools into clinical care for this population, qualitative interviews were conducted with key stakeholders with the goal of eliciting additional action items. Two groups of stakeholders were recruited: professionals (n = 9) and consumers (n = 8). Professionals were individuals with clinical or research expertise in digital interventions and youth suicide risk, identified via the research team and contacted via email with an invitation to participate. Inclusion criteria for professional stakeholders were: (1) currently employed in a clinical setting and have experience working with young people with suicidal ideation and/or behaviour (in a client-facing or managerial capacity), (2) have published (or are currently conducting) research evaluating digital interventions for young people at risk of suicide, and (3) living in a predominantly English-speaking country. Consumer stakeholders were recruited via advertisements posted on social media. Inclusion criteria for consumer stakeholders were: (1) aged 15 to 25 inclusive; (2) have used technology for support with suicidal thoughts or behaviour; (3) report experiencing suicidal thoughts “only once or twice” or less in the two weeks prior to consenting as assessed using an adapted version of item 9 of the Patient Health Questionnaire-9 (PHQ-9) [17]. Item 9 of the PHQ-9 was adapted to include the “only once or twice” response (in addition to “not at all”, “several days”, “more than half the days”, and “nearly every day”) to allow participants with fleeting suicidal thoughts to take part. Consumer stakeholders were required to reside in Australia, to enable adequate follow-up in the event of any disclosure of suicide risk during interviews. All interviews were conducted over Zoom, and audio recorded using Zoom’s record function then transcribed. 
The mean interview length was 34.4 min for professionals and 48.5 min for consumers. Consumer stakeholders were paid $30 per interview (professionals were not paid).

The peer-reviewed and grey literature sources, and the qualitative interview transcripts, were then hand-searched for action items. All items meeting criteria were extracted and compiled into an Excel spreadsheet. A working group of project team members then reviewed these items in regular meetings. The purpose of the meetings was to review and refine each action item, ensuring its clarity and succinctness and that it conveyed only one idea. Following this process, the action items were compiled into a survey hosted on Qualtrics – the “Round 1” questionnaire.

Phase 2: Consensus rounds

Selection of expert panels

Two expert panels, one of professionals and one of consumers, were recruited. In line with previous studies in this area, the target sample size was 20–30 participants per panel [18, 19]. The inclusion criteria, recruitment processes, and demographic characteristics of the participating panels are described below.

Professional panel

Professional panel members were academic experts, clinicians, and leadership/management staff in mental health services.

Academic experts were defined as those who had published (as lead or senior author) at least one peer-reviewed article on the topic of digital interventions for young people who experience suicidal ideation or behaviour. Academic experts were eligible if they were based in a predominantly English-speaking country (e.g., Australia, New Zealand, the UK, the USA, or Canada). Academic experts were recruited by contacting the first and last authors of all eligible peer-reviewed literature identified via the systematic literature search.

Clinicians were defined as individuals with at least one year of experience providing mental health care to young people who experience suicidal ideation and/or behaviour. Leadership/management staff were defined as individuals with at least one year’s experience overseeing mental healthcare providers who work with young people who experience suicidal thoughts or behaviour. Clinicians and leadership/management staff were only eligible if they were based in Australia or New Zealand and were recruited via advertisements posted to relevant social media groups and on the Australian Psychological Society’s website.

Twenty professional panel members were recruited. Twelve panel members (60.0%) lived in Australia, four (20.0%) in the USA, three (15.0%) in the UK and one (5.0%) in New Zealand. Thirteen identified as female (65.0%) and seven as male (35.0%). The professional panel included three members who also participated in the qualitative interview (Phase 1).

Most professional panel members met eligibility criteria for multiple positions. Eleven (55.0%) met criteria for “academics”, with years of experience ranging from 1.5 to 31 years (M = 11.1, SD = 8.6). Fifteen (75.0%) met criteria for “clinicians”, with between one and 40 years’ experience (M = 13.6, SD = 11.5), and eight (40.0%) met criteria for leadership or management staff, with 5–40 years’ experience (M = 15.0, SD = 10.7). Participants worked across a range of settings including universities, non-government and government organisations (including not-for-profits), medical research institutes, and public and private mental health services.

All twenty professional panel members completed the first questionnaire (Round 1), and nineteen (95.0%) completed the second questionnaire (Round 2). The remaining panel member did not respond to any emails, so the reason they did not complete Round 2 is unknown.

Consumer panel

Inclusion criteria for consumer panel members were the same as the inclusion criteria for the consumer stakeholder interviewees, described above. Consumer panel members were recruited via social media (with consumer stakeholders from Phase 1 also directly invited to participate). Twenty-nine consumer participants were recruited, including four panel members who also participated in the qualitative interview component (Phase 1). Panel members were asked basic demographic questions, including some questions designed to capture whether the panel was representative of the population of young people known to be at higher risk of suicide. The mean age of panel members was 20.6 years (SD = 2.9). Seventeen panel members (58.6%) identified as female, four (13.8%) as male, and eight (27.6%) identified as a gender other than “male” or “female”. Ten panel members (34.5%) described their sexuality as heterosexual; the remaining 19 (65.5%) did not identify this way. Most panel members (n = 22, 75.9%) were born in Australia and most (n = 23, 79.3%) reported English was the main language spoken at home. No panel members identified as Aboriginal or Torres Strait Islander. The majority of consumer panel members lived in Victoria (n = 19, 65.5%), with the remainder living across New South Wales (n = 4, 13.8%), Queensland (n = 4, 13.8%), South Australia (n = 1, 3.4%), and Western Australia (n = 1, 3.4%). Most reported they lived in a metropolitan area (n = 26, 89.7%).

All twenty-nine consumer panel members completed the first questionnaire (Round 1), and twenty-six (89.7%) completed the second questionnaire (Round 2). Of the three who did not complete Round 2, one did not respond to any emails and reminders (i.e., it is unclear why they did not complete Round 2), one completed a portion (13.4%) but did not respond to further reminders and was therefore removed from the Round 2 analysis, and one had turned 26 years old between the Round 1 and Round 2 questionnaires and advised the research team they did not feel eligible to participate in Round 2.

Delphi consensus process

Expert panel members were asked to complete two rounds of questionnaires. In each round, panel members were instructed to rate each item according to its importance for inclusion in the guidelines, using a five-point Likert scale with the following response options: essential, important, unsure/depends, unimportant, should not be included. The items were largely the same for both panels; however, a small number of items (n = 49; 19.1%) were removed from the consumer panel survey as they required specific professional expertise (e.g., understanding of service systems). This decision was made in consultation with a youth advisor. At the beginning of both questionnaires, panel members were provided with a list of terminology and definitions which could be downloaded and referenced during questionnaire completion; this included a definition of suicide-related behaviour as including suicide attempt and self-harm regardless of intent. In the first questionnaire (Round 1), all panel members were given the opportunity to suggest additional action items at the end of each section. The second questionnaire (Round 2) included all items that did not reach consensus for either inclusion or exclusion in Round 1, as well as the additional items suggested by panel members.

Consistent with previous Delphi studies developing guidelines [18,19,20], items were included in the guidelines if they were rated as “essential” or “important” by at least 80% of participants in both panels in Round 1, and were excluded if they were rated as “essential” or “important” by less than 70% of participants in both panels in Round 1. Items rated as “essential” or “important” by 70–79% of participants in both panels, or items for which the overall ratings of the two panels were discrepant (e.g., included by the consumer panel but excluded by the professional panel) in Round 1, were re-rated in Round 2. Due to time and resource limitations only two questionnaire rounds were completed, and items not classified as “included” after Round 2 were excluded. In Round 2, panel members were provided with a document containing the Round 1 consensus ratings from both groups for the items to be re-rated.
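The Round 1 decision rule above can be sketched as a small classification function. This is a hypothetical illustration only: the function and variable names are the author's invention, not the study's actual analysis code.

```python
# Illustrative sketch of the Round 1 consensus rule (assumed names; the
# study's real analysis code is not published in the article).

def classify_item(professional_pct: float, consumer_pct: float) -> str:
    """Classify a Round 1 item given the percentage of each panel that
    rated it "essential" or "important"."""

    def panel_verdict(pct: float) -> str:
        if pct >= 80:
            return "include"
        if pct < 70:
            return "exclude"
        return "re-rate"  # 70-79%: no clear verdict from this panel

    verdicts = {panel_verdict(professional_pct), panel_verdict(consumer_pct)}
    if verdicts == {"include"}:
        return "included"         # >= 80% in both panels
    if verdicts == {"exclude"}:
        return "excluded"         # < 70% in both panels
    return "re-rated in Round 2"  # borderline, or discrepant between panels
```

Note that a discrepant item (e.g., 85% in one panel, 65% in the other) falls through to re-rating, matching the rule that inclusion and exclusion both require agreement from both panels.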

The Round 1 and Round 2 questionnaires were estimated to take 60 and 30 minutes to complete, respectively. Panel members did not have to complete a questionnaire in one sitting and could save their progress and return later, provided they accessed it using the same link and device each time. All item responses were forced for the professional panel members, but not for the consumer panel members; this was to allow consumers to skip items that made them feel upset or uncomfortable.

Consumer panel members were paid $30 per survey completed. Professional panel members were provided with a $50 AUD-equivalent gift voucher upon completion of the second survey.

Results
Consensus ratings

Figure 1 displays the flow of action items through the Delphi consensus process. Ultimately, of 326 individual items that were rated (including 70 new items suggested by panel members included in Round 2), 188 (57.7%) were included in the final guidelines [21]. A complete list of every item rated by panel members across both rounds, including their consensus ratings, is contained in Supplementary File 1.

Fig. 1

Flow of action items through the Delphi consensus process. *Items not included in the consumer surveys due to requiring professional expertise

There was strong agreement between the two panels based on combined “essential” and “important” ratings (r = 0.84, p < 0.001). The means and standard deviations of the between-panel differences on combined “essential” and “important” scores were calculated to examine items on which panels disagreed; items with differences of more than two standard deviations above or below the mean are displayed in Table 1. All 14 of these items were rated as “essential” or “important” by a higher proportion of the consumer panel than the professional panel, and two of these had large discrepancies in both questionnaire rounds. Most of the items related to interactive digital tools (e.g., social media, online forums, and chat bots).
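The discrepancy screen described above can be sketched as follows. This is an illustrative reconstruction with assumed variable names, not the study's actual analysis code: items are flagged when the between-panel difference in combined "essential"/"important" percentages falls more than two standard deviations from the mean difference.

```python
# Illustrative sketch (assumed names) of the between-panel discrepancy
# screen: flag items whose difference in combined "essential"/"important"
# ratings is more than two SDs above or below the mean difference.
from statistics import mean, stdev

def flag_discrepant_items(consumer_pct, professional_pct):
    """Return indices of items whose (consumer - professional) difference
    lies more than 2 SDs from the mean difference across all items."""
    diffs = [c - p for c, p in zip(consumer_pct, professional_pct)]
    m, sd = mean(diffs), stdev(diffs)
    return [i for i, d in enumerate(diffs) if abs(d - m) > 2 * sd]
```

Because the threshold is relative to the distribution of differences, an item is flagged only when the two panels diverge far more on it than they do on typical items, which is how the 14 items in Table 1 were identified.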

Table 1 Items where consensus ratings differed significantly between the two panels

Guideline development

At the end of the Round 2 questionnaire, all included items (n = 188) were collated into a set of guidelines [21]. Whilst the wording of the items remained the same, many items were collapsed into single sentences to improve coherence. The structure of the guidelines is shown in Table 2.

Table 2 Structure of the guidelines

Discussion
This study aimed to develop a set of best-practice guidelines for integrating digital interventions into clinical care for young people who experience suicidal ideation and/or engage in suicide-related behaviour. To our knowledge, these represent the first evidence-informed guidelines on the topic. Forty-nine panel members were recruited, who agreed on the inclusion of 188 items (out of 326; 57.7%). The items provide guidance across three broad areas (divided into three parts in the guidelines): incorporating digital tools into clinical care; identifying and managing risk of suicide; and actions for services. Part One provides guidance on how to choose a new digital tool to introduce to a young person (including the minimum ideal features of digital tools for young people at different levels of suicide risk). This section also advises clinicians to engage young people in ongoing conversations about their use of digital tools (including social media use) and empower young people to recognise the effects of a digital tool on their suicidal thoughts or behaviour and choose digital tools that promote their safety and recovery. In Part Two, clinicians are advised to establish general processes to be followed in the case that a young person indicates via digital means that they may be at risk of suicide or suicidal behaviour, and to develop individualised, specific processes for each young person. For the latter, clinicians are encouraged to include specific indicators of escalating suicidal ideation or behaviour that can be drawn on to assess risk of harm remotely, as well as clear responses to be followed if risk is perceived to be escalating (including who will be contacted in what circumstances). The resulting guidelines do not provide specific advice about exactly what processes should be undertaken based on level of suicide risk, indicating this is a matter of clinical judgment and should be determined on a case-by-case basis. 
Part Three, “Actions for Services”, includes specific guidance for leadership and/or management in mental health services across three key areas. The first relates to establishing policies and procedures (a “digital strategy”) for the integration of digital tools into the service that stipulate clear governance and risk escalation processes, and the second concerns ensuring equity of access and transparency of processes to young people. The third area provides guidance for promoting the uptake and implementation of digital tools in the service setting, and includes items related to providing training and resources to staff. Taken together, the resulting guidelines are likely to go some way to address several key barriers to uptake of digital tools in clinical practice, including concerns about how to select a digital tool (including how to assess quality or safety), concerns about how to monitor and manage suicide risk in a digital environment, and barriers related to limited knowledge, training, and resourcing [13,14,15].

A number of items that did not reach consensus for inclusion in the guidelines are worthy of further discussion. For instance, several items related to clinicians viewing a young person’s social media pages (including items specifying this be done only with consent) were included in the survey but did not reach consensus for guideline inclusion. While this does not necessarily indicate that the expert panels oppose clinicians viewing young people’s social media pages, it suggests that they do not actively recommend doing so. Surprisingly, the panels did not agree to include items stipulating that information about means and methods of suicide, images of suicide/self-harm, and content that normalises suicide should be considered “potentially harmful”. This is contrary to the recommendations of existing guidelines for reporting on or discussing suicide and suicidal behaviour in traditional media [22] and on social media [23], and suggests contextual factors are important when assessing what content is harmful, and for whom. Indeed, there are likely individual differences in how young people react and respond to such content; moreover, risks may be mitigated if potentially helpful messaging (e.g., messaging that conveys hope or encourages help-seeking) is also present [24, 25]. There was also limited consensus on items related to the use of digital tools by young people who were not in regular contact with a clinician (e.g., post-discharge or on a wait list), suggesting this is a nuanced issue which again depends on context. Given that digital tools may have great potential to support people as they transition into and out of clinical care [12, 26], and that risk of suicide may be amplified during these periods [27, 28], there is a need for the development of specific guidance in this area.

Items with significant differences in consensus ratings between the two panels were also examined. Interestingly, items stating that clinicians should only recommend online communities that display the details of moderation, are co-designed by young people with lived experience, and allow young people to report posts were all endorsed for inclusion by the consumer panel but were excluded by the professional panel in the Round 1 questionnaire (with all but the first item excluded from the final guidelines). Although the data do not allow examination of the reasons for this, it may be that the professional panel did not support clinicians recommending online communities at all or had feasibility concerns, rather than because they disagreed with the various conditions. A much higher proportion of young people than professionals agreed that young people should be able to participate in online communities using a pseudonym, suggesting that while young people favour the ability to contribute anonymously this is not endorsed by professionals (presumably due to safety concerns, although this could not be confirmed in the current study). Additionally, an item specifying that digital tools should enable direct contact between the young person and their treating clinician was included by the youth panel but excluded by the professional panel, suggesting that while young people would like around-the-clock access to their clinician, this is unlikely to be feasible from the perspective of clinicians. Overall, differences between the ratings of the two panels reflect the differing perspectives of clinicians and consumers: consumers respond based on their lived and living experience, whereas professionals take a more cautious and dispassionate approach. This highlights both the importance of, and challenges associated with, integrating both perspectives in the design and delivery of mental health care.

Strengths and limitations

This study employed a rigorous Delphi methodology that included a systematic search of peer-reviewed and grey literature, stakeholder interviews with young people with lived experience and relevant professionals to supplement gaps in the available literature, and two sufficiently sized panels of topic experts. The use of a consumer panel in addition to a professional panel is a clear strength and aligns with the increasing emphasis on involving people with lived experience in service design [29]. There was generally strong agreement between the panels and a high completion rate, attesting to the reliability of the resulting guidelines. However, the study is not without limitations. Due to time constraints, only two consensus rounds were completed; whilst there is methodological precedent for stopping after two rounds [30], further rounds may have led to the inclusion of more items. We excluded participants who reported frequent suicidal ideation within the two weeks prior to consenting; whilst this was a deliberate safety measure, as a consequence the views of this group of young people were not captured. Only clinicians and service providers from Australia and New Zealand participated as panel members; as a result, the guidelines specifically target this audience and may have less relevance internationally (particularly in non-English-speaking countries). Despite this, the guidelines provide a foundation for further international studies in this area. Finally, it is widely recognised that implementing evidence-based guidelines in healthcare settings is challenging [31], and this study does not address the many barriers to implementing guidelines in practice.
To address this, future work by our group will focus on implementing the guidelines through developing accessible learning tools (e.g., webinars, handouts, and templates), developing a strategy to roll out the guidelines and associated tools to the target audience of youth mental health services and clinicians, and evaluating the guideline implementation.

Conclusions
This study has led to a set of world-first evidence-informed guidelines on integrating digital tools into clinical care for young people who experience suicidal ideation and behaviour. The content of the guidelines has been endorsed by expert professionals and consumers with lived experience, who largely agreed regarding the inclusion of items. It is hoped that the guidelines will address several major barriers to the uptake of digital tools in clinical care, including concerns about the quality of tools and how to assess and manage suicide risk. Whilst these guidelines are an important first step in improving the uptake of potentially efficacious digital tools in clinical care, their existence alone is insufficient; thus, a body of work to facilitate the implementation of the guidelines will be an important next step.

Data availability

The datasets used during the current study are available from the corresponding author upon reasonable request.

References
  1. World Health Organization. Suicide. Geneva: WHO; 2021.

  2. Australian Bureau of Statistics. Causes of death, Australia, 2021. Canberra: Australian Bureau of Statistics; 2022.

  3. Van Meter AR, Knowles EA, Mintz EH. Systematic review and meta-analysis: international prevalence of suicidal ideation and attempt in youth. J Am Acad Child Adolesc Psychiatry. 2022.

  4. Hill NT, Witt K, Rajaram G, McGorry PD, Robinson J. Suicide by young Australians, 2006–2015: a cross-sectional analysis of national coronial data. Med J Aust. 2021;214(3):133–9.


  5. Torous J, Jän Myrick K, Rauseo-Ricupero N, Firth J. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health. 2020;7(3):e18848.


  6. Holmes EA, O’Connor RC, Perry VH, Tracey I, Wessely S, Arseneault L, et al. Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. The Lancet Psychiatry. 2020;7(6):547–60.


  7. Torok M, Han J, Baker S, Werner-Seidler A, Wong I, Larsen ME, et al. Suicide prevention using self-guided digital interventions: a systematic review and meta-analysis of randomised controlled trials. Lancet Digit Health. 2020;2(1):e25–e36.


  8. Melia R, Francis K, Hickey E, Bogue J, Duggan J, O’Sullivan M, et al. Mobile health technology interventions for suicide prevention: systematic review. JMIR mHealth and uHealth. 2020;8(1):e12516.


  9. Witt K, Spittal MJ, Carter G, Pirkis J, Hetrick S, Currier D, et al. Effectiveness of online and mobile telephone applications (‘apps’) for the self-management of suicidal ideation and self-harm: a systematic review and meta-analysis. BMC Psychiatry. 2017;17(1):297.


  10. Bailey E, Alvarez-Jimenez M, Robinson J, D’Alfonso S, Nedeljkovic M, Davey CG, et al. An enhanced social networking intervention for young people with active suicidal ideation: safety, feasibility and acceptability outcomes. Int J Environ Res Public Health. 2020;17(7).

  11. Mohr DC, Weingardt KR, Reddy M, Schueller SM. Three problems with current digital mental health research and three things we can do about them. Psychiatr Serv. 2017;68(5):427–9.

  12. Cross SP, Nicholas J, Bell IH, Mangelsdorf S, Valentine L, Thompson A et al. Integrating digital interventions with clinical practice in youth mental health services. Australasian Psychiatry. 2023:10398562231169365.

  13. Mendes-Santos C, Nunes F, Weiderpass E, Santana R, Andersson G. Understanding mental health professionals’ perspectives and practices regarding the implementation of digital mental health: qualitative study. JMIR Form Res. 2022;6(4):e32558.

  14. Bucci S, Schwannauer M, Berry N. The digital revolution and its impact on mental health care. Psychol Psychother. 2019;92(2):277–97.

  15. Bucci S, Berry N, Morris R, Berry K, Haddock G, Lewis S, et al. “They are not hard-to-reach clients. We have just got hard-to-reach services.” Staff views of digital health tools in specialist mental health services. Front Psychiatry. 2019;10:344.

  16. Jorm AF. Using the Delphi expert consensus method in mental health research. Aust N Z J Psychiatry. 2015;49(10):887–97.

  17. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.

  18. Cox GR, Bailey E, Jorm AF, Reavley NJ, Templer K, Parker A, et al. Development of suicide postvention guidelines for secondary schools: a Delphi study. BMC Public Health. 2016;16(1):180.

  19. Robinson J, Hill NTM, Thorn P, Battersby R, Teh Z, Reavley NJ, et al. The #chatsafe project. Developing guidelines to help young people communicate safely about suicide on social media: a Delphi study. PLoS ONE. 2018;13(11):e0206584.

  20. Reavley NJ, Ross A, Killackey EJ, Jorm AF. Development of guidelines to assist organisations to support employees returning to work after an episode of anxiety, depression or a related disorder: a Delphi consensus study with Australian professionals and consumers. BMC Psychiatry. 2012;12(1):135.

  21. Bailey E, Bellairs-Walsh I, Reavley N, Gooding P, Hetrick S, Rice S, et al. Guidelines for integrating digital tools into clinical care for young people at risk of suicide. Melbourne, Australia: Orygen; 2023.

  22. Everymind. Reporting suicide and mental ill-health: a Mindframe resource for media professionals. Newcastle, Australia; 2020.

  23. Robinson J, Hill N, Thorn P, Teh Z, Battersby R, Reavley N. #chatsafe: a young person’s guide for communicating safely online about suicide. Melbourne, Australia: Orygen; 2018.

  24. Sinyor M, Schaffer A, Nishikawa Y, Redelmeier DA, Niederkrotenthaler T, Sareen J, et al. The association between suicide deaths and putatively harmful and protective factors in media reports. CMAJ. 2018;190(30):E900–7.

  25. Hawley LL, Niederkrotenthaler T, Zaheer R, Schaffer A, Redelmeier DA, Levitt AJ, et al. Is the narrative the message? The relationship between suicide-related narratives in media reports and subsequent suicides. Aust N Z J Psychiatry. 2022;57(5):758–66.

  26. Hirschtritt ME, Insel TR. Digital technologies in psychiatry: present and future. Focus (Am Psychiatr Publ). 2018;16(3):251–8.

  27. Williams ME, Latta J, Conversano P. Eliminating the wait for mental health services. J Behav Health Serv Res. 2008;35(1):107–14.

  28. Forte A, Buscajoni A, Fiorillo A, Pompili M, Baldessarini RJ. Suicidal risk following hospital discharge: a review. Harv Rev Psychiatry. 2019;27(4):209–16.

  29. Tindall RM, Ferris M, Townsend M, Boschert G, Moylan S. A first-hand experience of co-design in mental health service design: opportunities, challenges, and lessons. Int J Ment Health Nurs. 2021;30(6):1693–702.

  30. Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: how to decide its appropriateness. World J Methodol. 2021;11(4):116–29.

  31. Fischer F, Lange K, Klose K, Greiner W, Kraemer A. Barriers and strategies in guideline implementation: a scoping review. Healthcare (Basel). 2016;4(3).


Acknowledgements

We would like to thank Alex Dalton (youth advisor) for his input regarding whether items were appropriate to be rated by the youth expert panel.


Funding

This study received funding from Suicide Prevention Australia. Additional support for this study came from Future Generation Global and an NHMRC Centre of Research Excellence scheme (ID1171910). PG received funding from the Australian Research Council to enable his contribution (ARC No. DE200100483). SH is supported by an Auckland Medical Research Foundation Douglas Goodfellow Repatriation Fellowship. SR is supported by a Dame Kate Campbell Fellowship from the University of Melbourne. JR is funded by an NHMRC Investigator Grant (ID2008460) and a Dame Kate Campbell Fellowship from the University of Melbourne. The funders had no role in study design, collection, analysis, or interpretation of data, or in writing the manuscript.

Author information

Authors and Affiliations



Contributions

Conceptualisation: EB & JR; Design: EB, JR & NR; Literature search: EB & AB; Questionnaire development: EB, AB & IBW; Recruitment: EB & IBW; Data collection: EB & IBW; Data analysis: EB; Participation in WIP meetings: EB, IBW, JR, NR, PG, SH & SR; Writing – original draft: EB; Writing – review & editing: IBW, JR, AB, NR, PG, SH & SR.

Corresponding author

Correspondence to Eleanor Bailey.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for this study was obtained from the University of Melbourne Human Research Ethics Committee (ID 2057168). All participants provided written informed consent to participate. All study procedures were performed in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1:

All items rated by panel members, consensus ratings, and outcome

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Bailey, E., Bellairs-Walsh, I., Reavley, N. et al. Best practice for integrating digital interventions into clinical care for young people at risk of suicide: a Delphi study. BMC Psychiatry 24, 71 (2024).
