We compared the use of ROM in CAMHS in an audit conducted in 2011 [13] with a re-audit conducted in 2012/2013 to assess any changes in the use of outcome measures resulting from recent research, government and commissioning strategies. Our findings reveal a significantly greater uptake of HoNOSCA, SDQ and C-GAS, measures advocated by the national Child and Adolescent Mental Health Services Outcome Research Consortium (CORC) and crucial to fulfilling targets outlined in local commissioning policy (CQUIN). We also noted an increase in the repeated use of outcome measures.
In line with the findings from the original audit [13], HoNOSCA was the most frequently used measure in the re-audit, followed by C-GAS, and both the single and repeated use of these measures had increased significantly since the original audit. As reported by Batty et al. [13], HoNOSCA has a longstanding history and expectation of use within these Trusts, which is likely to explain its high completion rate. As both HoNOSCA and C-GAS are clinician-completed measures, it is easier for clinicians to ensure that they are completed than measures completed by the service user or caregiver (e.g. SDQ, GBO, CHI-ESQ), which often require more administrative support and the co-operation of the service user. The improved completion rates for clinician-rated outcome measures provide the service user, clinician and managers with quantifiable evidence of any change that may have resulted from the intervention; all stakeholders can thereby assess the effectiveness of the service, and the individual can assess the benefit of the treatment received. However, the comparatively lower completion of service-user measures demonstrates that the service user's perspective on change is under-reported. It is important that change is recorded from the perspectives of both the clinician and the service user to fully understand the effectiveness of any intervention, particularly as clinician-completed measures can be susceptible to reporting bias, such as over-reporting the extent of improvement [22].
The re-audit revealed a greater use of combined measures than the original audit. As shown in Figure 1, HoNOSCA and C-GAS were the most common combination of measures in the original audit, whereas in the re-audit HoNOSCA, SDQ and C-GAS were the most common grouping, appearing in over half of all case notes. This suggests that clinicians are in greater agreement regarding which combination of outcome measures to use. The increased uptake of the SDQ may also reflect clinicians’ positive attitudes towards this measure, with previous research showing that they value it as a measure of service users’ opinion [13]. Additionally, the increased provision of administration time for outcome measures may have facilitated the completion of these service-user-completed measures, which carry the extra burden of posting out and collecting questionnaires. In contrast, other CORC-advocated measures such as the GBO and CHI-ESQ were rarely found. Again, this may reflect difficulties in getting measures completed by the service user; alternatively, given that these measures are relatively new, with less established psychometric properties and a shorter history of use within CAMHS, clinicians may be unwilling to engage with them. Previous research has noted that concerns about the scientific quality of measures, and a lack of knowledge about how to use and interpret a measure, reduce clinicians’ likelihood of using it [11, 14, 23]. It is also possible that clinicians feel that a combination of the clinician-completed measures alongside one service-user measure, such as the SDQ, which has established validity and reliability [24–26], is sufficient to gain an impression of current functioning.
The significant increase in both the single and repeated use of different outcome measures since the original audit may have resulted from factors other than increased administration time. Initiatives such as CORC have promoted greater awareness of the use and type of outcome measures, specifically the use of generic rather than condition-specific measures, which allow comparisons against benchmarks and should lead to improvements in practice [27]. The influence of the recent IAPT initiative is also already evident in DHCFT, where the IAPT-advocated RCADS was found in some case notes. Although routine use of all CYP-IAPT scales was only introduced after the re-audit, the initial year of the scheme involved training clinicians on the clinical usefulness and importance of completing ROMs, thus establishing a managerial expectation that they would be completed. However, it is interesting to note that NHCT withdrew its CORC membership after the original audit yet increased its use of outcome measures, suggesting that any practical support provided by CORC was less influential.
We also speculated that the collection of outcome measures may be driven by CQUIN targets, but DHCFT had no commissioning strategy associated with the completion of outcome measures (i.e. a CAMHS outcomes CQUIN), yet comparable results were found across the sites with regard to CORC measures. Thus, clinicians’ willingness to use a measure may be driven as much by knowledge and awareness of measures as by commissioning policies. Since the original audit, the research organisation Collaborations for Leadership in Applied Health Research and Care – Nottinghamshire, Derbyshire, Lincolnshire (CLAHRC-NDL) has conducted significant work to promote the use of outcome measures across the East Midlands. This work includes seconding 'Diffusion Fellows’ and other local champions from NHS partners to translate and disseminate knowledge from research studies into practice, holding conferences and seminars for local clinicians, and publishing findings of the original audit in simple summary 'bites’ for clinicians and managers. Given that previous research has highlighted local champions as playing a key role in promoting outcome measures [28], it is likely that CLAHRC activities have also partially driven this change across the two NHS Trusts.
Whereas the original audit [13] and previous research [11] noted very little repeated use of the same measure, we found that approximately 60% of case notes contained repeated use of the same measure, compared with 30% in the original audit. Although this represents an improvement in ROM, almost half of the case notes still contained only a single use of a given measure, so further work is needed to improve the rate of outcome measurement in CAMHS.
As the option of entering CORC measures into the electronic records system had not been developed at the time of the first audit, we cannot make comparisons regarding the number of measures being recorded electronically. However, findings from the re-audit demonstrate that the majority of measures are now recorded electronically. This is important for a care system in which multiple professionals with specialised knowledge may be involved in care delivery in different geographical areas [29]. The option of electronic data entry may itself have contributed to the increased use of outcome measures, allowing clinicians to quickly enter clinician-completed outcome measures (HoNOSCA and C-GAS) without having to find paper versions. Electronic records offer the opportunity of better access to patient information, on the premise that the greater the availability of high-quality information, the better able the clinician is to care for the patient [30]. Furthermore, an electronic system creates the opportunity to develop a reporting system that graphically represents changes in outcome scores over time. Such a system would provide clinicians with real-time feedback on their clients’ progress, thus increasing the clinical utility of outcome measures [13]. Additionally, this type of report could allow data to be aggregated at team or service level, informing managers and commissioners and enabling benchmarking against comparable services [31].
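To illustrate the kind of reporting system described above, the minimal sketch below charts one service user's repeated HoNOSCA scores over time (clinician-level feedback) and aggregates mean scores by team (manager-level benchmarking). The data, field names and layout are illustrative assumptions for exposition only, not features of either Trust's records system.

```python
# Hypothetical sketch of the reporting system discussed above.
# All records, field names and chart choices are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative repeated clinician-rated scores per case (assumed schema).
scores = pd.DataFrame({
    "case_id": [101, 101, 101, 102, 102, 103],
    "team":    ["A", "A", "A", "A", "A", "B"],
    "date":    pd.to_datetime(["2012-01-10", "2012-04-12", "2012-07-15",
                               "2012-02-01", "2012-08-03", "2012-03-20"]),
    "honosca": [18, 13, 9, 22, 16, 15],
})

# Real-time feedback for the clinician: one service user's trajectory.
case = scores[scores["case_id"] == 101].sort_values("date")
plt.plot(case["date"], case["honosca"], marker="o", label="Case 101")
plt.ylabel("HoNOSCA total score")
plt.title("Outcome trajectory (illustrative data)")
plt.legend()

# Aggregation for managers and commissioners: mean score per team,
# the kind of summary that could support benchmarking across services.
print(scores.groupby("team")["honosca"].mean())

plt.show()
```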
Although the re-audit has shown a significant increase in the uptake of ROM in child and adolescent psychiatry, it has also highlighted the need for further improvement, particularly with regard to repeated use of the same measure and measures completed by the service user (e.g. SDQ, GBO, CHI-ESQ). For an outcome measure to assess any changes that may have resulted from an intervention, it is imperative that the same measure is used at baseline and at least once thereafter. To reduce the burden on clinicians, future outcome measure initiatives may wish to consider reducing the number of clinician-completed measures required. Given that HoNOSCA has been shown to have better reliability and to be more informative than C-GAS [32], it may be prudent to complete only HoNOSCA. However, C-GAS provides information about the service user's level of functioning in the previous month [4] across all conditions, incorporating elements of a multi-axial assessment [33]; it is therefore considered a valuable complement to HoNOSCA in research [34] and clinical practice. To improve the completion of service-user-completed outcome measures such as the SDQ, new technologies could be implemented to facilitate their use. For example, the measures could be completed on a tablet PC in the waiting room prior to each clinic session (a system currently being rolled out by the CYP-IAPT initiative; separately, a trial of electronically completed measures is being evaluated by the CLAHRC-NDL). The reports from these measures could be fed straight back to the clinician online, producing real-time feedback that does not rely on service users remembering to complete and post questionnaires before their clinic appointment. Truman et al. [35] reported on a computer-based SDQ and found significantly greater user satisfaction with the computer version than with the paper-based version. This kind of system requires significant investment in technical adaptation to ensure integration with electronic patient records, and would require managerial understanding and commitment to proceed. Such 'session-by-session’ monitoring would overcome the difficulty of obtaining follow-up measures when service users drop out of treatment or when clinic appointments are not scheduled around the 6-month follow-up. Furthermore, regular monitoring may be more sensitive to change and may allow clinicians to modify their intervention strategy earlier if they feel sufficient progress is not being made [36].
The comparison of two audits has offered valuable insight into the improvement of ROM within child and adolescent psychiatry, which may have resulted from greater Trust support and from initiatives such as CLAHRC-NDL research, CORC and CYP-IAPT that actively promote the use of outcome measures. However, our findings are limited to two NHS Trusts; as such, caution should be taken when generalising them to Trusts in other geographical regions. Nevertheless, the comparison of two different Trusts allowed for the assessment of local service drivers and priorities and their impact on outcome measure completion. Given that the aim of this research was to document the evidence base for ROM in CAMHS, we did not assess clinicians’ opinions on which factors influenced their use of specific outcome measures in the re-audit. However, we have inferred possible barriers based on well-documented findings from previous research [13, 14, 16, 17, 20].