
Variability in clinician intentions to implement specific cognitive-behavioral therapy components

Abstract

Background

CBT comprises many discrete components that vary in complexity, but implementation and training efforts often approach CBT as a single entity. We examined variability in clinician intentions to use different structural and interventional components of CBT for three different clinical groups: clients receiving CBT, clients with depression, and clients with anxiety.

Methods

Clinicians (n = 107) trained in CBT completed a one-time electronic survey. Clinicians’ intentions were measured using established item stems from social psychology adapted to examine intentions to use six specific CBT components: exposure therapy, cognitive restructuring, behavioral activation, planning homework, reviewing homework, and agenda-setting.

Results

Intentions were weakest, on average, for exposure. They were strongest, on average, for reviewing homework. A series of ANOVAs with Tukey’s post-hoc tests revealed that participants intended to use exposure with clients receiving CBT (p = .015) and clients with anxiety (p < .001) significantly more than for clients with depression. Participants intended to use behavioral activation with clients with depression (p = .01) significantly more than for clients with anxiety. No other intentions to use CBT components differed among these three clinical populations.

Conclusions

When studying determinants of CBT use and designing interventions to increase use, implementers should consider that different CBT components may require different implementation strategies.

Trial registration

Not applicable.


Background

Strong evidence supports using cognitive behavioral therapy (CBT) for a range of mental health problems in children and adults [1, 2]. However, in the U.S. it is rarely used in routine clinical practice in the community [3, 4]. Implementation strategies, the “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice” [5, 6], aimed at increasing clinicians’ use of evidence-based mental health practices have had limited success to date in increasing the use of CBT [7,8,9,10].

CBT’s complexity may contribute to its poor and infrequent implementation [11]. CBT is an overarching term encompassing a set of intervention components guided by cognitive-behavioral theory. To date, most CBT dissemination and implementation efforts have trained clinicians to deliver comprehensive CBT protocols. CBT comprises many discrete components that vary in what they require clinicians to do [12, 13]. CBT involves both structural elements (e.g., agenda-setting, homework assignment, Socratic questioning) and discrete intervention components (e.g., cognitive restructuring, relaxation, exposure). The set of components may vary based on the disorder the clinician is treating. Thus, implementing a “single CBT protocol” requires clinicians to learn multiple components concurrently, which they may use with varying fidelity. A small body of research suggests that clinicians vary in whether and how well they use these components [14] and in how much they value particular components [15]. For example, while exposure is considered a key component in CBT for anxiety [16], community clinicians rarely use it, relying instead on other less effective CBT strategies, such as relaxation [17, 18]. Clinicians may find certain intervention components to be easier to implement, more intuitive, or less aversive (e.g., in the case of exposure) than others, contributing to this variability.

Most approaches to evaluating CBT implementation, as well as studies predicting clinicians’ use of CBT, do not distinguish among CBT’s many components [17]. Recent examination of other psychosocial evidence-based practices (EBPs) suggests that intentions to use specific intervention components, as well as actual use, may vary within and across practitioners, and may call for different implementation strategies for different components [19].

Our research and that of others suggests that intentions are an important, proximal determinant of implementation of evidence-based practices [20, 21]. Intentions are defined as a person’s motivation to perform a behavior, or the effort an individual plans to exert to perform the behavior [22,23,24]. Clinicians’ use of EBPs is the outcome of interest in most implementation studies. In many models of clinician behavior, strong intention is a necessary precursor for behavior change to occur [25]. If the clinician has the skills and resources needed to perform the given behavior, it is highly likely that he or she will act on those intentions [22,23,24].

We examined variability in the strength of intentions to use different CBT components, which we think has two important implications. First, if there is variability, it suggests that other measures of clinicians’ thoughts (such as their attitudes or self-efficacy) regarding use of CBT should take this variability into account. Many implementation measures ask clinicians to report their views and use of EBPs broadly instead of their views and use of specific EBP components [26]. Second, variability would suggest that implementation strategies may need to target use of specific components, rather than CBT as a whole. Since intentions may be weaker for certain CBT components, it could be more cost-effective and efficacious to design implementation strategies that target those specific components.

Intentions may be influenced by attitudes (i.e., the perceived advantages and disadvantages of implementing a particular CBT component), perceived norms (i.e., the belief that important others think one should or should not implement the CBT component), and self-efficacy (i.e., confidence in one’s ability to implement it) [23, 27]. Each of these determinants of intention represents a potentially malleable mechanism [19]. For example, training and consultation strategies may be sufficient for increasing clinician self-efficacy and fidelity to a component they already strongly intend to use. When intentions to use an intervention component are weak, additional strategies such as policy mandates (to strengthen perceived norms) or financial incentives (to improve attitudes) may be needed to strengthen intentions.

To conduct this study, we surveyed community mental health clinicians who were trained in CBT through the University of Pennsylvania’s Beck Community Initiative (Penn BCI), a large-scale CBT implementation effort conducted in partnership with the Philadelphia Department of Behavioral Health and Intellectual disAbility Services (DBHIDS) [28]. We gathered data on the intentions of community clinicians to use each of six key CBT intervention components (exposure therapy, cognitive restructuring, behavioral activation, planning homework, reviewing homework, and agenda setting). For each CBT component, we gathered data about clinicians’ intentions to use them for three different clinical groups: 1) all clients receiving CBT, 2) clients with depression, and 3) clients with anxiety, because the appropriateness of these components may vary by the presenting problem. We hypothesized that clinicians would have the strongest intentions to use exposure for clients with anxiety, and to use behavioral activation for clients with depression. We also hypothesized that the strength of intentions would not differ across clinical groups for cognitive restructuring, planning homework, reviewing homework, and agenda setting as these CBT strategies are recommended across groups. We hypothesized that intentions to use structural components of CBT (planning homework, reviewing homework, and agenda setting) would be stronger than intentions to use intervention strategies (exposure therapy, cognitive restructuring, behavioral activation) because of their perceived complexity [29].

Method

Participants

Our sample comprised 107 clinicians trained in CBT through the Penn BCI [28]. Training consisted of 22 h of content about CBT from foundational through more complex skills, including case conceptualization, intervention components, and relapse prevention, followed by 6 months of weekly group consultation with tape review. Training was conducted either in person (n = 37) or by web (n = 70). Clinicians were primarily master’s level (n = 88, 82.2%). Eight (7.5%) were doctoral level (i.e., MD or PhD), and four (3.7%), who provided substance use services, had a bachelor’s or associate’s degree. Criteria for inclusion in the present study were minimal: participants had to be English speaking and have participated in training or consultation through the Penn BCI. See Table 1.

Table 1 Demographic characteristics (N = 107)

We recruited clinicians in two ways, depending on whether they were currently receiving training or consultation through the Penn BCI or had previously received training or consultation through the Penn BCI. We presented clinicians actively receiving training or consultation with a description of the study while administering standard program evaluation measures. We recruited clinicians who previously received training or consultation through the Penn BCI via email.

Procedure

The IRB reviewed and approved this project. Between 12/11/2018 and 2/20/2019, participants completed a one-time electronic survey that took approximately 5–10 min. The required elements of informed consent were described on the first page of the survey. Individuals agreed to participate by proceeding to complete the questionnaire, which was administered via Research Electronic Data Capture (REDCap), a HIPAA-compliant web-based survey platform. Those who completed the questionnaire were entered in a lottery to win one of five $100 gift cards. The survey data were linked with background forms that clinicians completed during their baseline program evaluation through the Penn BCI.

Measures

Intentions

We measured the strength of intentions using validated and widely-used item stems from social psychology that were designed to be adapted to any behavior of interest [23]. We adapted the item stem to measure clinician intentions towards using each of six specific CBT intervention components: “I intend to [perform the specified CBT intervention component for a particular group of clients] over the next 2 or 3 months.” Clinicians responded to each intention statement using a 7-point scale (where 1 = strongly disagree and 7 = strongly agree), with higher numbers representing stronger intentions. Clinicians reported the strength of their intentions to use each of the six specific CBT intervention components for three different clinical groups: 1) all of their clients receiving CBT, 2) clients with depression, and 3) clients with anxiety. We selected these clinical populations because the appropriateness of certain components may vary by presenting problem (anxiety or depression). The six intervention components were selected to capture structural components of CBT that would apply to a wide client population (agenda setting, planning and reviewing homework), discrete intervention components that would apply to a wide client population (cognitive restructuring), and intervention components that are evidence-based for some populations but not others (exposure therapy, behavioral activation). For example, exposure therapy is evidence-based for anxiety but not depression, so we would expect stronger intentions to use exposure therapy for anxiety than to use it for depression. We included a general “clients receiving CBT” group for comparison.

Clinician background information

Clinicians completed a 22-item “Personal Information” form upon enrolling in the Penn BCI. This form includes questions about the clinician’s age, gender, race, ethnicity, educational background, years of experience, licensure status, primary clinical responsibilities, theoretical orientation, and CBT experience.

Data analyses

We cleaned the data by matching background forms with survey responses and screened for outliers by examining histograms and scatterplots of relevant variables. No cases were removed. We used descriptive statistics to describe the sample and variability in intention strength across survey items. We calculated correlations among the intention responses and intraclass correlations (ICC) to estimate how the strength of intention to use each CBT component varied within each clinician across clinical populations. The ICC is a measure of the proportion of variance in intention to use CBT components explained by the individual. We tested whether the strength of intentions to use each CBT component differed across the three clinical groups using one-way analysis of variance (ANOVA).
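The ICC described here can be sketched in a few lines of numpy. This is a minimal illustration, not the authors’ analysis code: it assumes a clinicians × components matrix of 1–7 intention ratings and a one-way random-effects ICC (the paper does not state which ICC form was used), with hypothetical simulated data.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC: the proportion of variance in ratings
    attributable to the clinician (row) rather than the CBT component
    (column). `ratings` is an (n_clinicians, k_components) array."""
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # Mean squares between clinicians and within clinicians
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 1-7 Likert intention ratings: 5 clinicians x 6 components,
# each clinician rating all components near a personal baseline
rng = np.random.default_rng(0)
baseline = rng.integers(2, 7, size=(5, 1)).astype(float)
ratings = np.clip(baseline + rng.integers(-1, 2, size=(5, 6)), 1, 7)
print(f"ICC = {icc_oneway(ratings):.2f}")
```

A high ICC from a matrix like this would mirror the study’s finding: most of the variance in intention strength lies between clinicians rather than between components within a clinician.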

Results

Descriptive statistics and correlations

Fig. 1 displays the average strength of intentions to use the CBT components for the three clinical populations. Intentions tended to be negatively skewed. Intentions were weakest, on average, for exposure (M = 3.9) and strongest, on average, for reviewing homework (M = 5.8). Across clinical populations, more participants “strongly agreed” that they intended to use the structural components of CBT (i.e., agenda setting, reviewing homework, and planning homework) than cognitive restructuring; few participants “strongly agreed” that they intended to use behavioral activation and exposure.

Fig. 1 Distribution of CBT component intentions by clinical group

Table 2 shows correlations between intentions to use CBT components across the three clinical populations. As expected, there were significant correlations between many pairs of intentions to use CBT components. Intentions towards using structural components of CBT were highly correlated with each other across groups. Intentions towards using CBT interventions (i.e., exposure, cognitive restructuring, and behavioral activation) were moderately correlated with each other and showed mixed associations across and within groups. For example, intentions towards using exposure, on average, showed the lowest correlations with other CBT components, even for clients with anxiety.

Table 2 Correlations among intentions to use different CBT components across three clinical groups

ICCs were high. For intentions to use CBT components with all clients receiving CBT, the ICC = .78. For intentions to use CBT components with clients with depression, the ICC = .83. For intentions to use CBT components with clients with anxiety, the ICC = .83.

Differences in strength of intentions between groups

Results of the one-way ANOVAs showed statistically significant differences across the three clinical groups in the strength of intentions to use two of the CBT components: exposure, F(2, 318) = 8.71, p < .001, and behavioral activation, F(2, 318) = 3.06, p = .048. Tukey’s post-hoc test revealed that participants had stronger intentions to use exposure with clients receiving CBT (M = 4.01, SD = 1.72, p = .015) and clients with anxiety (M = 4.29, SD = 1.52, p < .001) than with clients with depression. No difference was observed between the strength of intentions to use exposure with clients receiving CBT and clients with anxiety (p = .41). Tukey’s post-hoc test also revealed that participants had significantly stronger intentions to use behavioral activation for clients with depression (M = 4.97, SD = 1.48, p = .01) than for clients with anxiety. No differences were observed between clients with depression and clients receiving CBT (p = .22) or between clients with anxiety and clients receiving CBT (p = .73). Strength of intentions to use the other CBT components did not differ among the clinical populations.
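An analysis of this shape can be reproduced in outline with scipy. The data below are simulated purely for illustration (the group means and SDs loosely echo the exposure figures reported above; none of it is the study data), and Tukey’s HSD p-values are computed by hand from the studentized range distribution for the equal-n case.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 1-7 intention ratings for exposure, 107 clinicians per group
groups = {
    "CBT":        np.clip(np.round(rng.normal(4.0, 1.7, 107)), 1, 7),
    "anxiety":    np.clip(np.round(rng.normal(4.3, 1.5, 107)), 1, 7),
    "depression": np.clip(np.round(rng.normal(3.4, 1.6, 107)), 1, 7),
}

# Omnibus one-way ANOVA across the three clinical groups
f_stat, p_val = stats.f_oneway(*groups.values())
k, n = len(groups), 107
df_within = k * n - k  # 3 * 107 - 3 = 318, matching the reported df
print(f"F(2, {df_within}) = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD: studentized range statistic for each pair (equal n)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_within
names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(groups[names[i]].mean() - groups[names[j]].mean())
        q = diff / np.sqrt(ms_within / n)
        p_adj = stats.studentized_range.sf(q, k, df_within)
        print(f"{names[i]} vs {names[j]}: adjusted p = {p_adj:.3f}")
```

With three groups of 107, the within-groups degrees of freedom come out to 318, consistent with the F(2, 318) statistics reported in this section.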

Discussion

In this sample of community clinicians trained in CBT, we found that the strength of clinicians’ intentions to use different CBT components differed. This finding has immediate clinical implications. Given that intentions toward using some components were weak, while others were relatively strong, implementation strategies that target increasing high-fidelity use of CBT broadly may not be sufficient to increase clinician use of all CBT components. Consistent with hypotheses, compared with the strength of intentions to use structural components of CBT, intentions were relatively weak towards using intervention components, which may require more tailored implementation strategies. A related, methodological implication is that researchers querying clinicians about their thoughts regarding use of CBT should ask questions separately about different CBT components; traditionally, clinicians have been asked to report thoughts about CBT as a whole.

Intentions to use the structural components of CBT (agenda setting, planning homework, and reviewing homework) were strongest and highly correlated with each other. Clinicians may view these as a suite of activities that they either intend or do not intend to use. This was not the case for the CBT intervention components. Intentions to use cognitive restructuring and behavioral activation were highly correlated, but intentions to use exposure were not correlated with either. Among the six CBT components we asked clinicians about, intentions to use exposure were weakest, and few clinicians endorsed “strongly agree” when asked whether they intended to use exposure. This is consistent with prior findings about clinicians’ attitudes toward and use of exposure, which are negative even when treating individuals with anxiety, for whom exposure is most warranted [17].

These findings underscore variability in clinician CBT use in “real world” contexts and have important implications for implementation strategy selection [6]. Different implementation strategies may be needed when intentions are weak versus when they are strong. For example, implementation strategies should be developed to strengthen intentions to use exposure with clients experiencing anxiety. If clinicians are unlikely to use the CBT components for which intentions are strong, such as reviewing homework, implementation strategies should be designed to help clinicians act on their intentions. For example, if intentions are strong but clinicians forget to review homework, strategies to help clinicians remember may be most needed.

One possible explanation for these findings is that clinicians trained in CBT strongly intend to structure their sessions according to key CBT principles (e.g., setting agendas, assigning homework) because they judge these strategies as relatively easy to implement. In addition, or alternatively, they may feel they are expected to use these strategies, and/or that other clinicians frequently use these strategies; these perceived norms could strengthen their intentions. For intervention strategies that are more complex, or where perceived norms and attitudes suggest that strategies such as exposure are less acceptable or beneficial, clinicians may instead select strategies that best fit their practice style or that they feel most comfortable using, regardless of diagnosis or the evidence base. Further studies that elucidate the extent to which intentions are influenced by perceived norms, attitudes, and self-efficacy are needed to inform selection of implementation strategies. For example, in some instances implementers may need to select strategies, such as training and consultation, to increase self-efficacy. In cases where weak intentions are driven by perceived norms, establishing clinical champions may be helpful.

There were few differences in the strength of intentions across the CBT, anxiety, and depression clinical groups. Consistent with hypotheses, no differences were found in the strength of intentions to use the structural components of CBT across clinical groups. The differences observed in intentions to use specific intervention components, such as exposure and behavioral activation, matched the populations for which those components are indicated, also as hypothesized. This suggests that most clinicians differentiate exposure as being appropriate for anxiety and behavioral activation as being appropriate for depression. Taken together, these findings indicate that researchers may not need to query separately about treatment groups when assessing clinicians’ thoughts regarding the use of CBT components.

Conclusions

When studying determinants of CBT use and designing interventions to increase use, variability in intentions should be taken into account. Given the malleability of intentions, they should be targeted when developing implementation strategies to increase clinician EBP use. Additionally, researchers querying clinicians about their thoughts regarding use of CBT should ask questions separately about different CBT components rather than asking about CBT as a whole.

Availability of data and materials

The dataset supporting the conclusions of this article is available from the authors on reasonable request.

Abbreviations

ANOVA:

Analysis of variance

CBT:

Cognitive-behavioral therapy

DBHIDS:

Philadelphia Department of Behavioral Health and Intellectual disAbility Services

EBP:

Evidence-based practice

ICC:

Intraclass correlation

Penn BCI:

University of Pennsylvania’s Beck Community Initiative

REDCap:

Research Electronic Data Capture

References

  1. Hofmann SG, Asnaani A, Vonk IJJ, Sawyer AT, Fang A. The efficacy of cognitive behavioral therapy: a review of meta-analyses. Cogn Ther Res. 2012;36(5):427–40.
  2. Weisz JR, Kuppens S, Eckshtain D, Ugueto AM, Hawley KM, Jensen-Doss A. Performance of evidence-based youth psychotherapies compared with usual clinical care: a multilevel meta-analysis. JAMA Psychiatry. 2013;70(7):750–61.
  3. Creed TA, Wolk CB, Feinberg B, Evans AC, Beck AT. Beyond the label: relationship between community therapists’ self-report of a cognitive behavioral therapy orientation and observed skills. Admin Pol Ment Health. 2016;43(1):36–43.
  4. Kazdin AE. Addressing the treatment gap: a key challenge for extending evidence-based psychosocial interventions. Behav Res Ther. 2017;88:7–18.
  5. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10(1):21.
  6. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(139):1–11.
  7. Beidas RS, Williams NJ, Becker-Haimes EM, Aarons GA, Barg FK, Evans AC, et al. A repeated cross-sectional study of clinicians’ use of psychotherapy techniques during 5 years of a system-wide effort to implement evidence-based practices in Philadelphia. Implement Sci. 2019;14(1):67.
  8. Forsetlund L, Bjorndal A, Rashidian A, Jamtvedt G, O’Brien MA, Wolf F, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;(2):CD003030.
  9. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: a review of current efforts. Am Psychol. 2010;65:73–84.
  10. Stirman SW, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7(17):1–19.
  11. González-Valderrama A, Mena C, Undurraga J, Gallardo C, Mondaca P. Implementing psychosocial evidence-based practices in mental health: are we moving in the right direction? Front Psychiatry. 2015;6:51.
  12. Chorpita BF, Becker KD, Daleiden EL. Understanding the common elements of evidence-based practice: misconceptions and clinical examples. J Am Acad Child Adolesc Psychiatry. 2007;46(5):647–52.
  13. Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: application of the distillation and matching model to 615 treatments from 322 randomized trials. J Consult Clin Psychol. 2009;77(3):566–79.
  14. Garland AF, Bickman L, Chorpita BF. Change what? Identifying quality improvement targets by investigating usual mental health care. Adm Policy Ment Health. 2010;37:15–26.
  15. Higa-McMillan CK, Ebesutani C, Stanick CF. What therapy practices do providers value in youth behavioral health? A measure development study. J Behav Health Serv Res. 2019.
  16. Kaczkurkin AN, Foa EB. Cognitive-behavioral therapy for anxiety disorders: an update on the empirical evidence. Dialogues Clin Neurosci. 2015;17(3):337–46.
  17. Becker-Haimes EM, Okamura KH, Wolk CB, Rubin R, Evans AC, Beidas RS. Predictors of clinician use of exposure therapy in community mental health settings. J Anxiety Disord. 2017;49:88–94.
  18. Whiteside SPH, Sim LA, Morrow AS, Farah WH, Hilliker DR, Murad MH, et al. A meta-analysis to guide the enhancement of CBT for childhood anxiety: exposure over anxiety management. Clin Child Fam Psychol Rev. 2019.
  19. Fishman J, Beidas R, Reisinger E, Mandell DS. The utility of measuring intentions to use best practices: a longitudinal study among teachers supporting students with autism. J Sch Health. 2018;88(5):388–95.
  20. Godin G, Belanger-Gravel A, Eccles M, Grimshaw J. Healthcare professionals’ intentions and behaviours: a systematic review of studies based on social cognitive theories. Implement Sci. 2008;3:36.
  21. Presseau J, Johnston M, Francis JJ, Hrisos S, Stamp E, Steen N, et al. Theory-based predictors of multiple clinician behaviors in the management of diabetes. J Behav Med. 2014;37(4):607–20.
  22. Armitage CJ, Conner M. Efficacy of the theory of planned behaviour: a meta-analytic review. Br J Soc Psychol. 2001;40(Pt 4):471–99.
  23. Fishbein M, Ajzen I. Predicting and changing behavior: the reasoned action approach. New York: Psychology Press; 2010.
  24. Sheeran P. Intention—behavior relations: a conceptual and empirical review. Eur Rev Soc Psychol. 2002;12(1):1–36.
  25. Williams NJ, Glisson C. The role of organizational culture and climate in the dissemination and implementation of empirically supported treatments for youth. In: Beidas RS, Kendall PC, editors. Dissemination and implementation of evidence-based practices in child and adolescent mental health. New York: Oxford University Press; 2014. p. 61–81.
  26. Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, et al. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10:2.
  27. Stuebe AM, Bonuck K. What predicts intent to breastfeed exclusively? Breastfeeding knowledge, attitudes, and beliefs in a diverse urban population. Breastfeed Med. 2011;6(6):413–20.
  28. Creed TA, Frankel SA, German RE, Green KL, Jager-Hyman S, Taylor KP, et al. Implementation of transdiagnostic cognitive therapy in community behavioral health: the Beck Community Initiative. J Consult Clin Psychol. 2016;84(12):1116–26.
  29. Kraft P, Rise J, Sutton S, Roysamb E. Perceived difficulty in the theory of planned behaviour: perceived behavioural control or affective attitude? Br J Soc Psychol. 2005;44(Pt 3):479–96.


Acknowledgements

Not applicable.

Funding

Support for this research was provided by the Philadelphia Department of Behavioral Health and Intellectual disAbility Services (Creed, PI) and National Institute of Mental Health (P50 MH113840; Beidas, Mandell, and Volpp, PIs). The funding bodies were not directly involved in the study design, collection, analysis, and interpretation of data, or in writing the manuscript.

Author information

Affiliations

Authors

Contributions

TAC is the principal investigator for the project from which this data were collected. DSM generated the idea and designed the study with input from CBW, EMBH, JF, and TAC. CBW was the primary writer of the manuscript. NAF led data analyses. CBW, EMBH, JF, NAF, DSM, and TAC all made substantial contributions to study conception and design. All authors reviewed and approved this manuscript.

Corresponding author

Correspondence to Courtney Benjamin Wolk.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the University of Pennsylvania Institutional Review Board (Federalwide Assurance FWA00004028). The required elements of informed consent were described on the first page of the survey. Individuals agreed to participate by proceeding to complete the questionnaire, which was administered via Research Electronic Data Capture (REDCap), a HIPAA-compliant web-based survey platform.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wolk, C.B., Becker-Haimes, E.M., Fishman, J. et al. Variability in clinician intentions to implement specific cognitive-behavioral therapy components. BMC Psychiatry 19, 406 (2019). https://doi.org/10.1186/s12888-019-2394-y


Keywords

  • Cognitive-behavioral therapy
  • Intentions
  • Motivation
  • Implementation
  • Dissemination