Patient experiences questionnaire for interdisciplinary treatment for substance dependence (PEQ-ITSD): reliability and validity following a national survey in Norway
BMC Psychiatry volume 17, Article number: 73 (2017)
Patient experiences are an important aspect of health care quality, but there is a lack of validated instruments for their measurement in the substance dependence literature. A new questionnaire to measure inpatients’ experiences of interdisciplinary treatment for substance dependence has been developed in Norway. The aim of this study was to psychometrically test the new questionnaire, using data from a national survey in 2013.
The questionnaire was developed based on a literature review, qualitative interviews with patients, expert group discussions and pretesting. Data were collected in a national survey covering all residential facilities with inpatients in treatment for substance dependence in 2013. Data quality and psychometric properties were assessed, including ceiling effects, item-level missing data, exploratory factor analysis, and tests of internal consistency reliability, test-retest reliability and construct validity.
The sample included 978 inpatients present at 98 residential institutions. After correcting for excluded patients (n = 175), the response rate was 91.4%. Twenty-eight of the 33 items had less than 20.5% missing data or responses in the “not applicable” category. All but one item met the ceiling-effect criterion of less than 50.0% of responses in the most favorable category. Exploratory factor analysis resulted in three scales: “treatment and personnel”, “milieu” and “outcome”. All scales showed satisfactory internal consistency reliability (Cronbach’s alpha ranged from 0.75 to 0.91) and test-retest reliability (ICC ranged from 0.82 to 0.85). Seventeen of 18 significant associations between single variables and the scales supported the construct validity of the PEQ-ITSD.
The content validity of the PEQ-ITSD was secured by a literature review, consultations with an expert group and qualitative interviews with patients. The PEQ-ITSD was used in a national survey in Norway in 2013 and psychometric testing showed that the instrument had satisfactory internal consistency reliability and construct validity.
Patient-reported quality is an important component of health care quality, and the routine collection of patients’ experiences as part of quality measurements in health care has become widespread. Various populations are asked to give their feedback about health care services, providing patient-based information about the functioning of specific health care services and the health care system. Patient experiences have been linked to patient safety and clinical effectiveness, giving a clear clinical rationale for focus on such experiences.
Studies have shown that patient experiences are related to patient satisfaction. One issue with satisfaction surveys is that they often report high satisfaction [3–6], challenging the usefulness of satisfaction surveys in quality improvement work and calling for a more nuanced and multi-faceted approach. Asking patients about their experiences of the health care delivery system has been identified as a useful method for establishing trends over time and comparisons among providers.
Several countries have national programs for monitoring and reporting on health care quality using patient experience surveys. These national efforts create a need for standardized instruments of high quality, specialized for use in different settings. Reliable and valid data about users’ or patients’ experiences require a measurement tool developed and tested according to rigorous and comprehensive methods. Such development and testing of survey tools is a challenging task that requires the consideration of many psychometric questions, such as which questionnaire development steps are needed, and which criteria and cut-off values should be set for the relevant statistical tests. The results from the development and psychometric testing of a measurement tool should be documented and appraised to ensure the tool’s properties. Within the patient satisfaction field, a systematic review revealed that such documentation and objective appraisal are not always carried out, with less than half of the included studies reporting any validity or reliability data. Such lack of evidence casts doubt on the credibility of results derived from the use of these instruments.
The Norwegian Institute of Public Health (NIPH) has the responsibility for carrying out national patient experience surveys in Norway. Usually, the population of interest is drawn at random from each service provider and potential participants are invited by means of a mailed questionnaire and invitation letter. The purpose of the program is to systematically measure user experiences of health care, as a basis for quality improvement, health care management, patient choice and public accountability. To serve this purpose, survey tools for different populations in health care have already been developed and tested in Norway [12–22]. In 2013, the Ministry of Health decided that a national patient experience survey of interdisciplinary treatment for substance dependence should be conducted. The instrument bank in Norway lacked a validated questionnaire for this patient group, but a development and validation project was already in progress and was connected to the national survey that the Ministry had decided on.
In the field of interdisciplinary treatment for substance dependence, some validated questionnaires have been identified in the international literature, one of which is a quality-of-life instrument [23–28]. However, these are used within specific treatments and among people who use specific substances. Furthermore, several of them are satisfaction measurements, not targeted at gathering information about patient experiences. Hence, there is a paucity of instruments in substance dependence treatment that can reliably and validly measure inpatients’ experiences across treatments and types of substance use.
Within this field, research has shown that enhancing patient satisfaction may improve treatment outcomes [29–31]. A critical review within the field of addiction treatment by Trujols et al. (2014) summarizes important aspects of the evaluation of treatment. These aspects include patients’ views on treatment, patients’ opinions about medication, relations with therapists and influence on treatment, perception of needs and satisfaction with treatment, as well as indicators of user-perceived quality. However, these perspectives are not always in focus when the services are evaluated.
The lack of a validated questionnaire for the measurement of patient experiences with interdisciplinary treatment for substance dependence led the Norwegian Knowledge Centre for the Health Services (now NIPH) to develop a new questionnaire for this patient group. The development of the questionnaire followed the standard methodology of our national program [12–22], including a literature review, cognitive interviews with patients and expert consultations. The questionnaire was included in the national survey in 2013 that the Ministry of Health decided on. The aim of this study was to test the construct validity and internal consistency reliability of a new questionnaire following the national survey in Norway in 2013. The survey included all 98 residential treatment institutions for substance dependence in Norway.
The Patient Experiences Questionnaire for Interdisciplinary Treatment for Substance Dependence (PEQ-ITSD) was developed through a thorough process that included several recognized steps [12–22]. Firstly, a comprehensive literature review was conducted to search for valid and reliable questionnaires that could be used in the Norwegian context. The review concluded that there were no existing questionnaires ready and relevant for large-scale use in a Norwegian setting. Questionnaires from the review, as well as Norwegian questionnaires that had been used locally, were considered in order to identify important and relevant topics for the new questionnaire. Secondly, an expert group was consulted several times to discuss the content of the new questionnaire, as well as procedures for data collection. The expert group consisted of seven persons, including clinicians/therapists, researchers associated with treatment institutions and representatives from interest groups. Thirdly, qualitative interviews were conducted with 13 patients with various types of substance dependencies, focusing on what they found to be important while in treatment. Fourthly, the resulting questionnaire was cognitively tested with patients (n = 15), and lastly, a pilot survey was conducted at 14 institutions (n = 329). The first version of the questionnaire included 45 questions.
Before the national survey, the questionnaire was expanded with three modified items from the Patient Enablement Instrument, and three questions about help from the municipality. The former were included to obtain feedback from patients regarding outcomes of treatment, using the same approach as a newly published patient experience questionnaire for psychiatric inpatients. The latter were included because of the importance of continuity of care and primary health care services in Norway for this patient group as well. The questionnaire included in the national survey consisted of 51 closed-ended questions, most scored on a scale from 1 “not at all” to 5 “to a very large extent”. The topics covered in the questionnaire included “reception and waiting time”, “the therapists/the personnel”, “the treatment”, “the milieu and activity provision”, “preparations for the time after discharge”, “other assessments” and “previous admissions in substance dependence institutions”. The questionnaire also included questions about the respondents’ background. In addition to the closed-ended questions, there were two open-ended questions. One asked the respondents to write more about their experiences at the institution, and the other asked the respondents to write about their experiences of the help and care they had received from their municipality.
Data were collected through a national survey in 2013. The survey was commissioned by the Norwegian Directorate of Health and was mandatory for all relevant institutions. The included institutions were all public residential institutions and private residential institutions with a contract with the regional health authorities. Detoxification institutions were excluded. All patients aged 16 years and older were invited to fill out the questionnaire.
The survey was developed as part of the national program, but the very low response rates to mailed post-discharge surveys of psychiatric inpatients, and of sub-groups of patients with substance dependence in these surveys, restrict their validity and usefulness. This prompted a change in data collection, from post-discharge to on-site. In contrast to the NIPH’s standard data collection method, which is to send a postal questionnaire to the patient’s home a few weeks after discharge, all institutions carried out the survey on-site by distributing questionnaires to patients while in treatment. This data collection approach is also used for psychiatric inpatients.
Questionnaires were sent to participating institutions, where the institutions’ personnel were responsible for distributing and collecting the questionnaires. Each patient received an envelope containing an information sheet, the questionnaire and a reply envelope. Every fourth envelope also contained a retest-questionnaire and an additional reply envelope. The retest was to be carried out approximately two days after the original survey. The institutions were to ensure that the patients completed the questionnaire by themselves, without discussing the questions or their answers with other patients, health personnel or staff. If needed, the patients could receive help in reading and/or understanding the questions, without being influenced on how to respond.
After the survey, the institutions reported to the NIPH on the number of eligible patients, number of patients who participated, number of patients who declined participation and number of excluded patients. Based on this information, the NIPH calculated adjusted gross sample and response rates. No information about the patients was gathered other than background questions in the questionnaires, and hence the NIPH was able to create an anonymous dataset based on the information in the completed questionnaires.
Ceiling effects and item-level missing data were assessed. The ceiling effect is commonly understood as the percentage of respondents answering in the most positive response category. A large ceiling effect can indicate measurement problems with respect to differentiating between care providers or time points. The cut-off for the ceiling effect was set to 50%, i.e., an item was judged to be of adequate quality if the ceiling effect was smaller than 50% [38, 39].
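As an illustration, the ceiling-effect check can be sketched in a few lines of Python (the item responses below are hypothetical, and the authors’ analyses were run in SPSS; the 50% cut-off is the criterion stated above):

```python
import numpy as np

def ceiling_effect(responses, top_category=5):
    """Percentage of valid responses falling in the most favorable category."""
    r = np.asarray(responses, dtype=float)
    valid = r[~np.isnan(r)]
    return 100.0 * np.mean(valid == top_category)

# Hypothetical item scored 1 ("not at all") to 5 ("to a very large extent");
# NaN marks a missing response.
item = [5, 5, 4, 3, 5, 2, 5, np.nan, 4, 5]
pct_top = ceiling_effect(item)  # 5 of 9 valid responses, about 55.6%
adequate = pct_top < 50.0       # this item would fail the 50% criterion
```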
Exploratory factor analyses were conducted to assess the underlying dimensions of the questionnaire. Items with more than 20% missing responses were excluded. All other questions, except questions about background information and items about experiences with services other than residential institutions, were entered into the exploratory factor analyses. As some correlation between the factors was to be expected, principal axis factoring with oblique (Promax) rotation was applied. Two separate factor analyses were conducted: the first with items concerning structure and process, and the second with all items related to outcome (as reported at the time of measurement). Items with factor loadings smaller than 0.4 were excluded, and the criterion for factor extraction was set to eigenvalues greater than 1.
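The eigenvalues-greater-than-1 extraction criterion can be illustrated on a small synthetic correlation matrix. This is a generic sketch of the extraction criterion only, with an invented matrix; the principal axis factoring and Promax rotation described above require a full factor-analysis routine.

```python
import numpy as np

# Synthetic correlation matrix: six items in two blocks of three, correlated
# 0.7 within each block and uncorrelated across blocks (invented for illustration).
R = np.zeros((6, 6))
R[0:3, 0:3] = 0.7
R[3:6, 3:6] = 0.7
np.fill_diagonal(R, 1.0)

# Extraction criterion: retain factors whose eigenvalues exceed 1
eigenvalues = np.linalg.eigvalsh(R)
n_factors = int(np.sum(eigenvalues > 1.0))  # the two blocks yield two factors
```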
The internal consistency of the resulting scales was assessed with the calculation of Cronbach’s α and item-total correlation. Item-total correlation measures the correlation of each item with the total score of the remaining items of the scale. Cronbach’s α is an assessment of the correlation between all items in the given scale. The cut-off for the α was set to the commonly used criterion of 0.7 or higher. The criterion for item-total correlation is less established, and 0.2, 0.4 [15, 41–43] and 0.5 have all been used.
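A minimal sketch of these two reliability statistics, with invented ratings; both formulas are the standard textbook ones, not the authors’ code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    return (k / (k - 1.0)) * (1.0 - X.var(axis=0, ddof=1).sum()
                              / X.sum(axis=1).var(ddof=1))

def item_total_correlations(items):
    """Corrected item-total correlation: each item vs. the sum of the others."""
    X = np.asarray(items, dtype=float)
    return np.array([
        np.corrcoef(X[:, j], np.delete(X, j, axis=1).sum(axis=1))[0, 1]
        for j in range(X.shape[1])
    ])

# Hypothetical 1-5 ratings from six respondents on a three-item scale
scores = [[5, 4, 5], [4, 4, 4], [3, 2, 3], [5, 5, 4], [2, 2, 2], [4, 3, 4]]
alpha = cronbach_alpha(scores)         # judged satisfactory if >= 0.7
itc = item_total_correlations(scores)  # items below the chosen cut-off would be flagged
```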
Test-retest reliability was assessed through calculation of the intraclass correlation coefficient (ICC). The ICC was used to test the reliability of the scores by correlating test and retest scores for each scale. A correlation of 0.7 or greater was considered satisfactory.
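The paper does not state which ICC variant was used; a one-way random-effects ICC is one common choice and can be sketched as follows, with hypothetical test-retest scale scores:

```python
import numpy as np

def icc_oneway(X):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_ratings) matrix.

    A generic formulation; the specific ICC variant used in the paper
    is not stated.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    row_means = X.mean(axis=1)
    msb = k * np.sum((row_means - X.mean()) ** 2) / (n - 1)      # between subjects
    msw = np.sum((X - row_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scale scores; columns are test and retest
stable = [[2.0, 2.0], [3.5, 3.5], [4.0, 4.0], [5.0, 5.0]]
noisy = [[2.0, 2.5], [3.5, 3.0], [4.0, 4.5], [5.0, 4.5]]
icc_stable = icc_oneway(stable)  # identical retest scores give an ICC of 1.0
icc_noisy = icc_oneway(noisy)    # above the 0.7 criterion despite some drift
```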
Construct validity relates to the degree to which the measurement actually measures a specific underlying construct. This can be tested by assessing the association of the measurement’s scales with other variables known to influence the construct of interest. A systematic review found that some variables were relevant across populations: age and health status. Based on a literature search, previous work and experts’ advice, it was hypothesized that the scale scores would correlate with type of misuse [26, 45], more specifically that patients with alcohol dependence would report better experiences. Shorter waiting time before treatment [24, 46] and a lesser extent of forced treatment were also hypothesized to influence the scale scores positively. Age [2, 26, 45, 47–49] was expected to correlate positively with scale scores. Furthermore, it was hypothesized that patients reporting better self-perceived physical and psychological health would report better experiences [2, 45]. An independent samples t-test was conducted for type of misuse, while Pearson’s r was used to assess correlations for all other variables.
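The two statistical tests named above can be sketched with invented data: a pooled-variance t-statistic computed from scratch and Pearson’s r via numpy. In practice a statistics package would also supply degrees of freedom and p-values.

```python
import numpy as np

def independent_t(a, b):
    """Pooled-variance independent samples t-statistic (equal variances assumed)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Hypothetical scale scores: patients with alcohol dependence vs. other substances
alcohol = [4.2, 4.5, 3.9, 4.8, 4.1]
other = [3.1, 3.6, 2.9, 3.8, 3.3]
t_stat = independent_t(alcohol, other)  # positive: alcohol group scores higher

# Hypothetical age vs. scale score, tested with Pearson's r
age = [22, 31, 45, 52, 60]
score = [3.0, 3.4, 3.9, 4.1, 4.6]
r = np.corrcoef(age, score)[0, 1]       # positive correlation, as hypothesized
```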
All analyses were conducted using SPSS version 23.0.
On the day of the survey, the 98 participating institutions had a total of 1245 admitted patients. Twelve patients were excluded due to ethical considerations and 163 were not present at the institution when the survey was conducted. Hence, the corrected sample was 1070 eligible patients. In total, 978 patients completed and returned the questionnaire, giving a response rate of 91.4%.
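The sample arithmetic above can be reproduced directly:

```python
# Figures from the 2013 national survey (as reported in the text)
admitted = 1245   # patients admitted at the 98 institutions on the survey day
excluded = 12     # excluded for ethical considerations
absent = 163      # not present when the survey was conducted
responded = 978   # completed and returned questionnaires

eligible = admitted - excluded - absent        # corrected sample: 1070
response_rate = 100.0 * responded / eligible   # approximately 91.4%
```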
Two thirds of the sample were male, with a mean age of 36.5 years (Table 1). 80.3% were single, and 11.9% had university or college education. The respondents’ mean age when they developed a substance dependence was 20.3 years. 62.4% and 54.6% reported their physical and mental health, respectively, as excellent, very good or good. 32.5% had no previous admissions to residential treatment, and 53.7% had been at the institution for less than 3 months. The most frequently used substances prior to admission were cocaine/amphetamine (47.1%) and alcohol (46.4%); 58.9% reported two or more substances among those most frequently used.
The levels of missing data ranged from 1.9% to 4.9% (Table 2), while the responses in the “not applicable” category ranged from 0.3% to 29.6%. Five out of the 33 items had more than 20.4% item-missing (missing data + not applicable). The five items were #12c: benefit of treatment with medication; #18: help for psychological distress; #27 and #28: help with practical issues and further treatment after discharge; and #34: the personnel’s cooperation with patients’ next of kin.
All items, with one exception, met the criterion of less than 50.0% responses in the most favorable category. The exception was item #36 regarding malpractice, where 51.4% of the respondents answered “not at all”.
A total of 27 items were included in the two factor analyses. Twenty items addressing structure and process were entered in the first factor analysis. Three items were excluded from the analysis, one at a time, due to low factor loadings. Hence, 17 items were entered in the final analysis, resulting in two factors that explained 51.8% of the variance (Table 3). Initially, seven items concerning outcomes were entered in the second factor analysis. Two items were removed due to the wording of the questions, asking for assessments of specific treatment initiatives. Hence, five general outcome items were entered in the second factor analysis, resulting in one factor which explained 73.4% of the variance. Cronbach’s α for the three scales ranged from 0.75 (factor 2 – “milieu”) to 0.91 (factor 1 – “treatment and personnel” and factor 3 – “outcome”), all of which were above the 0.7 criterion. The scales showed good test-retest reliability; all factors had a reliability greater than 0.8.
The associations between the scale scores and the tested variables were statistically significant in 17 out of 18 tests (Table 4). Independent samples t-tests showed that patients reporting alcohol as their single substance used before treatment entry scored significantly higher on all three scales than patients who reported other types of substance dependencies. When comparing age and reported type of substance dependence, we found that those reporting only alcohol as their type of misuse were generally older (mean age 50 for alcohol, 33 for other). Further testing showed that, for “treatment and personnel” and “outcome”, the effect of age disappeared when controlling for alcohol use. However, since the effect of age was statistically significant for “milieu” when controlling for alcohol, both variables were kept in the model for construct validity testing.
The data for this study was collected as part of the national patient experience program in Norway. It was the first national survey of patient experiences of interdisciplinary treatment for substance dependence. The PEQ-ITSD was designed for use among inpatients, and focuses on topics patients have reported to be important. The questionnaire was developed after a thorough review of the literature, meetings in an expert group, interviews with patients and results from a pilot survey. The testing and evaluation of the PEQ-ITSD showed that the questionnaire comprised three scales with excellent internal consistency reliability, test-retest reliability and construct validity. Furthermore, the questionnaire showed good acceptability given the high response rate and low proportion of item missing.
The questionnaire comprises three scales, resulting from two factor analyses. These three scales correspond to the scales found in the on-site survey of psychiatric inpatients in Norway, a survey conducted with the same methods as the current study. It is somewhat difficult to compare the PEQ-ITSD with other instruments of interest, given the variation in the populations surveyed and in the aims of the instruments. However, some parallels can be drawn between the PEQ-ITSD’s three scales and other instruments used in similar populations. The scales resemble to some extent both the Treatment Outcome Profile (TOP) and the Treatment Perceptions Questionnaire (TPQ), underlining the importance to patients of the areas and topics constituting the PEQ-ITSD. The user satisfaction scale of the TOP consists of three subscales, each comprising three items: satisfaction with treatment, satisfaction with staff and satisfaction with environment. The two scales constituting the TPQ focus on perceptions of staff and of the treatment program. However, the TOP was primarily developed for use among patients in psychiatric care and only secondarily tested for use among patients in treatment for substance dependence, and the reported validity and reliability testing was insufficient for both instruments. Accordingly, and given that the population in question consisted of all patients undergoing treatment for different types of misuse, it was necessary to develop a new questionnaire for use with a heterogeneous population in residential treatment for substance dependence.
The rationale for conducting two analyses was to avoid contamination between the outcome items and those concerning structure and process. The three scales may enable the institutions to identify areas where the quality, as seen by the patients, should be improved. The scales, along with feasible case mix adjustments, contribute to more valid comparisons across both institutions and time.
The search for relevant literature revealed a general lack of studies addressing the psychometric properties of questionnaires used in surveys of patients in substance dependence treatment. This is also supported by an overview of user satisfaction surveys in addiction services. Furthermore, there is a general lack of validated patient experience instruments within this field [7, 32]. Given the insufficient literature, the hypotheses for the construct validity testing were based on what was identified through the literature review of patient experiences of treatment for substance dependence, on the general literature on patient experiences, and on advice from the experts consulted. Six independent variables were suggested. Since little is known about which variables are most important in the given population, all six variables were entered into the validity testing for a more exploratory approach.
Most hypothesized associations were statistically significant. Several studies have found that age is associated with satisfaction or experiences [2, 26, 45, 47–49]. The patients’ age was associated with the “treatment and personnel” and “milieu” scales. However, it was not significantly associated with “outcome”. The age effect is mostly evident through older patients being less critical than younger patients. In the current data, the patients are, on average, younger than other populations, e.g. somatic inpatients. The mean age in the population replying to the PEQ-ITSD was 36.5 years. As previously described, both alcohol use and age were associated with the scale scores. However, testing showed that patients reporting only alcohol as their dependence are older than patients reporting other types of dependencies, and that the effect of age disappears when controlling for alcohol use for two of the three scales. All significant correlations showed associations according to the hypotheses.
The patient experience surveys conducted by the NIPH are usually carried out as postal surveys. Patients are sent a postal invitation to answer a questionnaire after their hospital visit or doctor’s appointment. However, due to expert advice and previous experience with low response rates among patients within psychiatric care, an on-site data collection method was chosen for the population at hand. In addition, previous research has concluded that personal contact in recruitment and data collection may increase the response rate [50, 51]. There are some concerns regarding the possible differences in responses that are elicited from postal surveys versus on-site data collection. Even though on-site data collection might increase the response rate and therefore increase the representativeness of the data, on-site data collection often results in more favourable responses compared to mailed surveys [52–54].
When deciding to collect the data on site, there are at least two possibilities: at discharge or as a cross-sectional study. One strength of the design that asks for participation at discharge is that the patients have been through their entire treatment, and therefore may be better able to answer all questions. In addition, the patients who have completed their treatment may have other experiences than those who have been in treatment for a shorter amount of time. A limitation of the same design is that the patients who drop out of treatment will not be reached. Furthermore, for institutions where patients are supposed to stay for a longer period of time, the inclusion period for obtaining a large enough sample can be very long, adding to the challenges of anonymity and outdated data. In the work on developing the questionnaire, both approaches were tested. It was found that the two approaches elicited somewhat different evaluations of the treatment and the institutions, but that a cross-sectional study was well suited to including all patients, and minimizing the work load on the employees, while tolerating the somewhat worse evaluations.
The PEQ-ITSD’s three scales will be further tested for feasibility for use as external quality indicators. However, even though the scales have good psychometric properties and present a more robust result than single items, some important items were excluded after the psychometric testing. The items in the questionnaire have all been reported as important to the patients, and the questionnaire should therefore not be reduced to merely the items comprising the three scales.
The psychometric testing of the PEQ-ITSD has shown that the data collected are of satisfactory quality, and that the questionnaire shows excellent psychometric properties. The instrument has been developed and tested for a population seldom previously invited to participate in similar surveys.
While the PEQ-ITSD has been developed and tested through rigorous methods as part of the national program in Norway, there are some limitations to both the questionnaire and this study. Every residential treatment facility, both public and private, was included. This means that the included institutions vary considerably with regard to, for example, size of the patient population, type of substance dependence and method of treatment. Many of the participating institutions are quite small and thus have few responders.
Another limitation of the design is that the data are collected anonymously. That is, no information about the respondents is gathered, other than what the respondents themselves report in the questionnaires. This design means that there is no available information about those who chose not to participate in the survey, and hence no knowledge of whether the respondents differ from the non-respondents in any systematic way. In other words, it is unknown whether the data are influenced by non-response bias, which may pose a threat to the generalizability of the results. However, the national survey of 2013 had a response rate of 91.4%, leading to the conclusion that non-response bias constitutes a minor issue in this population.
The described questionnaire has been developed and tested for use with inpatients on-site, and the generalizability to other populations, such as detoxification patients, out-patient clinics or discharged patients, is unknown.
The PEQ-ITSD has shown excellent measurement properties, such as internal consistency reliability, test-retest reliability and construct validity. The questionnaire comprises important themes elicited from patients and experts. The PEQ-ITSD can be used to measure inpatients’ experiences of interdisciplinary treatment for substance dependence; however, more research and testing are needed to assess its feasibility for use in producing quality indicators.
Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3:e001570.
Crow R, Gage H, Hampson S, Hart J, Kimber A, Storey L, et al. The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1–244.
Carlson MJ, Gabriel RM. Patient satisfaction, use of services, and one-year outcomes in publicly funded substance abuse treatment. Psychiatr Serv. 2001;52(9):1230–6.
Hser YI, Evans E, Huang D, Anglin DM. Relationship between drug treatment services, retention, and outcomes. Psychiatr Serv. 2004;55(7):767–74.
Marchand KI, Oviedo-Joekes E, Guh D, Brissette S, Marsh DC, Schechter MT. Client satisfaction among participants in a randomized trial comparing oral methadone and injectable diacetylmorphine for long-term opioid-dependency. BMC Health Serv Res. 2011;11:174.
Ries RK, Jaffe C, Comtois KA, Kitchell M. Treatment satisfaction compared with outcome in severe dual disorders. Community Ment Health J. 1999;35(3):213–21.
Trujols J, Iraurgi I, Oviedo-Joekes E, Guardia-Olmos J. A critical analysis of user satisfaction surveys in addiction services: opioid maintenance treatment as a representative case study. Patient Prefer Adherence. 2014;8:107–17.
Garratt A, Solheim E, Danielsen K. National and cross-national surveys of patient experiences: a structured review. Oslo: Nasjonalt kunnskapssenter for helsetjenesten; 2008.
Eisen SV, Shaul JA, Leff HS, Stringfellow V, Clarridge BR, Cleary PD. Toward a national consumer survey: evaluation of the CABHS and MHSIP instruments. J Behav Health Serv Res. 2001;28(3):347–69.
Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. 4th ed. New York, NY: Oxford University Press; 2008.
Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care. 1999;11(4):319–28.
Pettersen KI, Veenstra M, Guldvog B, Kolstad A. The Patient Experiences Questionnaire: development, validity and reliability. Int J Qual Health Care. 2004;16(6):453–63.
Oltedal S, Garratt A, Bjertnaes O, Bjornsdottir M, Freil M, Sachs M. The NORPEQ patient experiences questionnaire: data quality, internal consistency and validity following a Norwegian inpatient survey. Scand J Public Health. 2007;35(5):540–7.
Iversen HH, Holmboe O, Bjertnaes OA. The Cancer Patient Experiences Questionnaire (CPEQ): reliability and construct validity following a national survey to assess hospital cancer care from the patient perspective. BMJ Open. 2012;2:e001437.
Garratt AM, Bjertnaes OA, Barlinn J. Parent experiences of paediatric care (PEPC) questionnaire: Reliability and validity following a national survey. Acta Paediatr. 2007;96(2):246–52.
Bjertnaes O, Iversen HH, Kjollesdal J. PIPEQ-OS – an instrument for on-site measurements of the experiences of inpatients at psychiatric institutions. BMC Psychiatry. 2015;15:234.
Garratt A, Bjorngaard JH, Dahle KA, Bjertnaes OA, Saunes IS, Ruud T. The Psychiatric Out-Patient Experiences Questionnaire (POPEQ): data quality, reliability and validity in patients attending 90 Norwegian clinics. Nord J Psychiatry. 2006;60(2):89–96.
Garratt AM, Bjertnaes OA, Holmboe O, Hanssen-Bauer K. Parent experiences questionnaire for outpatient child and adolescent mental health services (PEQ-CAMHS Outpatients): reliability and validity following a national survey. Child Adolesc Psychiatry Ment Health. 2011;5:18.
Garratt AM, Danielsen K, Forland O, Hunskaar S. The Patient Experiences Questionnaire for Out-of-Hours Care (PEQ-OHC): data quality, reliability, and validity. Scand J Prim Health Care. 2010;28(2):95–101.
Sjetne IS, Iversen HH, Kjollesdal JG. A questionnaire to measure women's experiences with pregnancy, birth and postnatal care: instrument development and assessment following a national survey in Norway. BMC Pregnancy Childbirth. 2015;15:182.
Bjertnaes OA, Garratt A, Nessa J. The GPs' Experiences Questionnaire (GPEQ): reliability and validity following a national survey to assess GPs' views of district psychiatric services. Fam Pract. 2007;24(4):336–42.
Sjetne IS, Bjertnaes OA, Olsen RV, Iversen HH, Bukholm G. The Generic Short Patient Experiences Questionnaire (GS-PEQ): identification of core items from a survey in Norway. BMC Health Serv Res. 2011;11:88.
Holcomb WR, Parker JC, Leong GB. Outcomes of inpatients treated on a VA psychiatric unit and a substance abuse treatment unit. Psychiatr Serv. 1997;48(5):699–704.
Marsden J, Stewart D, Gossop M, Rolfe A, Bacchus L, Griffiths P, et al. Assessing client satisfaction with treatment for substance use problems and the development of the Treatment Perceptions Questionnaire (TPQ). Addict Res. 2000;8(5):455–70.
De Los Cobos JP, Valero S, Haro G, Fidel G, Escuder G, Trujols J, et al. Development and psychometric properties of the Verona Service Satisfaction Scale for methadone-treated opioid-dependent patients (VSSS-MT). Drug Alcohol Depend. 2002;68(2):209–14.
De Wilde EF, Hendriks VM. The Client Satisfaction Questionnaire: psychometric properties in a Dutch addict population. Eur Addict Res. 2005;11(4):157–62.
Hubley AM, Palepu A. Injection Drug User Quality of Life Scale (IDUQOL): findings from a content validation study. Health Qual Life Outcomes. 2007;5:46.
De Los Cobos JP, Trujols J, Sinol N, Batlle F. Development and validation of the scale to assess satisfaction with medications for addiction treatment-methadone for heroin addiction (SASMAT-METHER). Drug Alcohol Depend. 2014;142:79–85.
Kasprow WJ, Frisman L, Rosenheck RA. Homeless veterans' satisfaction with residential treatment. Psychiatr Serv. 1999;50(4):540–5.
Donovan DM, Kadden RM, DiClemente CC, Carroll KM. Client satisfaction with three therapies in the treatment of alcohol dependence: Results from Project MATCH. Am J Addict. 2002;11(4):291–307.
Long CG, Williams M, Midgley M, Hollin CR. Within-program factors as predictors of drinking outcome following cognitive-behavioral treatment. Addict Behav. 2000;25(4):573–8.
Danielsen K, Garratt A, Kornør H. Måling av brukererfaringer med avhengighetsbehandling: en litteraturgjennomgang av validerte måleinstrumenter [Measurement of user experiences with addiction treatment: a literature review of validated measurement instruments]. Oslo: Nasjonalt kunnskapssenter for helsetjenesten; 2007.
Dahle KA, Iversen HH. Utvikling av metode for å måle pasienterfaringer med døgnopphold innen tverrfaglig spesialisert rusbehandling [Development of a method for measuring patient experiences with inpatient interdisciplinary specialized treatment for substance dependence]. Oslo: Nasjonalt kunnskapssenter for helsetjenesten; 2011.
Howie JG, Heaney DJ, Maxwell M, Walker JJ. A comparison of a Patient Enablement Instrument (PEI) against two established satisfaction scales as an outcome measure of primary care consultations. Fam Pract. 1998;15(2):165–71.
Haugum M, Iversen HH, Bjertnæs ØA. Pasienterfaringer med døgnopphold innen tverrfaglig spesialisert rusbehandling – resultater etter en nasjonal undersøkelse i 2013 [Patient experiences with inpatient interdisciplinary specialized treatment for substance dependence: results from a national survey in 2013]. PasOpp-rapport nr. 7 − 2013. Oslo: Nasjonalt kunnskapssenter for helsetjenesten; 2013.
Helse- og omsorgsdepartementet. Samhandlingsreformen - Rett behandling – på rett sted – til rett tid [The Coordination Reform: the right treatment at the right place at the right time]. St.meld. nr. 47 (2008-2009).
Dahle KA, Holmboe O, Helgeland J. Brukererfaringer med døgnenheter i psykisk helsevern. Resultater og vurderinger etter en nasjonal undersøkelse i 2005 [User experiences with inpatient units in mental health care: results and assessments following a national survey in 2005]. Oslo: Nasjonalt kunnskapssenter for helsetjenesten; 2006.
Bjertnaes OA, Lyngstad I, Malterud K, Garratt A. The Norwegian EUROPEP questionnaire for patient evaluation of general practice: data quality, reliability and construct validity. Fam Pract. 2011;28(3):342–9.
Ruiz MA, Pardo A, Rejas J, Soto J, Villasante F, Aranguren JL. Development and validation of the "Treatment Satisfaction with Medicines Questionnaire" (SATMED-Q). Value Health. 2008;11(5):913–26.
Bland JM, Altman DG. Statistics notes: Cronbach's alpha. BMJ. 1997;314(7080):572.
Fayers PM, Machin D. Quality of Life: Assessment, Analysis, and Interpretation. Chichester: Wiley; 2000.
Landgraf JM, Abetz L, Ware Jr JE. Child Health Questionnaire (CHQ): A User's Manual. 1st ed. Boston, MA: The Health Institute, New England Medical Center; 1996.
Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.
Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 7th ed. Essex: Pearson Education Limited; 2014.
Eselius LL, Cleary PD, Zaslavsky AM, Huskamp HA, Busch SH. Case-mix adjustment of consumer reports about managed behavioral health care and health plans. Health Serv Res. 2008;43(6):2014–32.
Morris ZS, McKeganey N. Client perceptions of drug treatment services in Scotland. Drugs. 2007;14(1):49–60.
Cernovsky ZZ, O'Reilly RL, Pennington M. Sensation seeking scales and consumer satisfaction with a substance abuse treatment program. J Clin Psychol. 1997;53(8):779–84.
Chan M, Sorensen JL, Guydish J, Tajima B, Acampora A. Client satisfaction with drug abuse day treatment versus residential care. J Drug Issues. 1997;27(2):367–77.
Sanders L, Trinh C, Sherman B, Banks S. Assessment of client satisfaction in a peer counseling substance abuse treatment program for pregnant and postpartum women. Eval Program Plann. 1998;21(3):287–96.
Groves RM, Fowler Jr FJ, Couper MP, Lepkowski JM, Singer E, Tourangeau R. Survey methodology. 2nd ed. Hoboken, N.J.: Wiley; 2009.
Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998;10(4):311–7.
Bjertnæs ØA, Garratt A, Johannessen JO. Innsamlingsmåte og resultater i brukerundersøkelser i psykisk helsevern [Data collection method and results in user surveys in mental health care]. Tidsskr Nor Laegeforen. 2006;126(11):1481–3.
Gribble RK, Haupt C. Quantitative and qualitative differences between handout and mailed patient satisfaction surveys. Med Care. 2005;43(3):276–81.
Anastario MP, Rodriguez HP, Gallagher PM, Cleary PD, Shaller D, Rogers WH, et al. A randomized trial comparing mail versus in-office distribution of the CAHPS Clinician and Group Survey. Health Serv Res. 2010;45(5 Pt 1):1345–59.
Acknowledgements
We thank Marit Seljevik Skarpaas for data collection and management, and Inger Opedal Paulsrud for administrative help with the data collection. We are also grateful to the contact persons and project-management professionals in the departments, institutions and health regions concerned. We also thank the patients for participating in the survey.
Funding
The Norwegian Knowledge Centre for the Health Services (now part of the National Institute of Public Health) was responsible for the development project and the current manuscript. The Norwegian Directorate of Health funded the national survey, but had no role in the development project or in the design of the current manuscript.
Availability of data and material
The datasets generated and analysed during the current study are not publicly available because they form part of an ongoing PhD project at the National Institute of Public Health and the University of Oslo. The datasets will be used in further analyses and publications, but are available from the corresponding author on reasonable request.
Authors' contributions
MH planned the paper together with HHI, OB and AKL. MH performed the statistical analyses with HHI, and drafted the manuscript. HHI participated in the planning process and the statistical analyses, critically revised the manuscript draft and approved the final version of the manuscript. OB and AKL participated in the planning process, critically revised the manuscript draft and approved the final version of the manuscript. MH was the project manager for the national survey. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
Data were collected anonymously, with no registration of the patients being surveyed. The project was run as part of the national program and was an anonymous quality assurance project. According to the Norwegian Regional Committees for Medical and Health Research Ethics, research approval is not required for quality assurance projects. The Norwegian Social Science Data Services states that if the information used is anonymous, the project is not subject to notification (http://www.nsd.uib.no/personvern/en/notification_duty/meldeskjema?eng). Hence, no ethics approval was needed for this project. Patients were informed that participation was voluntary and that they would remain anonymous. In accordance with all the patient surveys in the national program, health professionals at the institutions could exclude individual patients for special ethical considerations. Since no notification or ethics approval was needed, the NIPH instead obtained signed agreements with all the participating institutions, describing the project and the responsibilities of both the institutions and the NIPH in data collection, handling, analysis and reporting. Previously established guidelines concerning consent through a returned questionnaire were applied.
About this article
Cite this article
Haugum, M., Iversen, H.H., Bjertnaes, O. et al. Patient experiences questionnaire for interdisciplinary treatment for substance dependence (PEQ-ITSD): reliability and validity following a national survey in Norway. BMC Psychiatry 17, 73 (2017). https://doi.org/10.1186/s12888-017-1242-1