Participants
Eligible participants (approximately 4800) were all Canberra-based employees of two Australian government departments: Health and Ageing, and Family and Community Services. The trial was advertised to staff by email. Participants had to agree to be randomly assigned to receive the training in either Month 1 or Month 6. Training was delivered and data collected at the worksite during office hours.
Interventions
The course content has been described in the Background and in a previous report [3], and further details can be found at the Mental Health First Aid website [4]. The training followed set lesson plans and all participants were given a Mental Health First Aid Manual to keep [5]. Training was administered at the worksite in classes of 6–18 participants. Participants did not necessarily stay in the same class, but could move between classes to complete the course as their work schedules required. All training was carried out by one instructor, the developer of the Mental Health First Aid course, who had trained over 1000 people before the start of the trial. Participants received training either immediately (June) or after a five-month delay (November). Those trained immediately constituted the intervention group; the wait-listed group served as the control. To monitor whether the intervention was actually received, an attendance roll was kept for each class.
Objectives
The main objective was to assess whether Mental Health First Aid training improved mental health literacy and helping skills relative to a wait-list control. A secondary objective was to assess any benefits to the participants' own mental health.
Outcomes
Outcomes were measured in the month before intervention (the pre-test assessment) and in the fifth month after intervention (the follow-up assessment). The intervention group received training in Month 1 (immediately after pre-test) and the wait-list control group received training in Month 6 (immediately after the follow-up).
All outcomes were measured by self-completed questionnaires based on those used in the uncontrolled trial of Mental Health First Aid [3]. The pre-test questionnaire (see Additional File 1) covered the following: socio-demographic characteristics of the participant; why they were interested in doing the course; history of mental health problems in the participant or their family; confidence in providing help; contact with people with mental health problems in the previous 6 months and help offered; recognition of the disorders in two vignettes, one describing a person with depression and one a person with schizophrenia; beliefs about the helpfulness of various interventions for the persons described; a social distance scale to assess stigmatizing attitudes [7]; and whether the participant or a family member or friend had ever had a problem like the one in the vignette.
To score the items on beliefs about treatment, a scale was created showing the extent to which participants agreed with health professionals about which interventions would be helpful. For depression, there is a professional consensus that GPs, psychiatrists, clinical psychologists, antidepressants, counseling and cognitive-behavior therapy are helpful [6]. Participants therefore received a score from 0 to 6 according to the number of these interventions they endorsed as helpful, and this was converted to a percentage. For schizophrenia, there is a professional consensus that GPs, psychiatrists, clinical psychologists, antipsychotics and admission to a ward are helpful [6]. "Helpful" ratings were summed to give a score from 0 to 5, which was likewise converted to a percentage.
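As a minimal illustration of this scoring (the data structures and item labels below are ours, not the study's code), the agreement score is simply the percentage of the consensus-endorsed interventions that a participant rated as helpful:

```python
# Illustrative sketch only; item labels are simplified stand-ins for the
# questionnaire's response options.
CONSENSUS = {
    "depression": ["GP", "psychiatrist", "clinical psychologist",
                   "antidepressants", "counseling", "CBT"],        # 6 items
    "schizophrenia": ["GP", "psychiatrist", "clinical psychologist",
                      "antipsychotics", "admission to a ward"],    # 5 items
}

def agreement_percentage(vignette, rated_helpful):
    """Percentage of consensus-endorsed interventions rated 'helpful'."""
    consensus = CONSENSUS[vignette]
    endorsed = sum(item in rated_helpful for item in consensus)
    return 100.0 * endorsed / len(consensus)

# e.g. endorsing GPs, antidepressants and counseling for the depression
# vignette gives 3/6 = 50%.
print(agreement_percentage("depression", {"GP", "antidepressants", "counseling"}))
```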
The questionnaire ended with the SF-12, which provided scales assessing the participant's mental and physical health [8]. These scales were scored using Andrews' [9] integer scorer.
The follow-up questionnaire was the same as the pre-test questionnaire except that it omitted the sociodemographic questions and asked about contact with anyone with a mental health problem over the 5 months since the last questionnaire (rather than 6 months).
The questionnaires were sent out via internal departmental mail by a human resources staff member in each place of employment. The questionnaires were completed anonymously, identified only by an ID number, and posted back to the researchers at the Centre for Mental Health Research. The IDs of any non-responders were sent back to the human resources staff member, who sent out a reminder. The researchers were never told the names of individual respondents, and the human resources staff member in each place of employment never saw any completed questionnaires or individually identifiable data.
Sample size
The study was planned to have a sample of 300. The sample size was determined by practical constraints: classes had to be run at times that fitted the employees' work schedules, and the number of classes was limited by the workload on the instructor. This sample size was determined to have excellent power to detect medium effect sizes for both continuous and dichotomous outcomes [10]. The trial was originally planned to involve only one workplace, but was extended to a second because the number of participants recruited was smaller than expected. The lower recruitment appeared to be due to the requirement that participants agree to random assignment to training at either of two time periods.
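For illustration only, a standard two-sample power calculation under assumptions consistent with the text (a medium standardized effect of d = 0.5, roughly 150 participants per group, two-sided alpha = 0.05; these specific figures are ours and are not taken from [10]) gives power above 0.99:

```python
# Hedged illustration: a generic two-sample t-test power calculation with
# assumed inputs (d = 0.5, n = 150 per group, alpha = 0.05).
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(effect_size=0.5, nobs1=150,
                                    alpha=0.05, ratio=1.0)
print(round(power, 3))  # ~0.99 for a medium effect at this sample size
```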
Randomization and blinding
A staff member in the human resources section of each place of employment kept a list of participants' names and ID numbers; the researchers had access only to the IDs. After recruitment, participants were assigned an ID by this staff member. Once all participants within a place of employment had been recruited and assigned ID numbers, one of the researchers (Jorm) randomly allocated the IDs to the training or control group using the Random Integers option at the http://random.org website [11]. The human resources staff member then assigned participants to groups according to the randomized IDs provided to them. The instructor (Kitchener) provided the human resources staff member with the names of attendees to check that participation was as allocated. Blinding was not possible with the Mental Health First Aid intervention.
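A sketch of the allocation step is shown below. The trial drew its random numbers from the Random Integers service at random.org; here Python's random module stands in, and the even split of IDs into two groups is an assumption on our part, not a detail reported in the text.

```python
# Illustrative only: shuffle de-identified IDs and split them into an
# immediate-training (intervention) group and a wait-list (control) group.
import random

def allocate(ids, seed=None):
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": sorted(ids[:half]),
            "control": sorted(ids[half:])}

groups = allocate(range(1, 301))  # hypothetical IDs 1-300
```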
Ethics
Ethical approval for the study was given by the Australian National University Human Research Ethics Committee.
Statistical methods
Repeated measures analysis of variance was used to analyze continuous measures, with two groups (intervention and control) and two time points (pre-test and follow-up). The principal interest was in the group × time interaction effect. Logistic regression was used to analyze change in dichotomous measures, with group and pre-test score as the predictors and follow-up score as the outcome. Place of employment was also examined as a factor to see whether the effects of training differed between the two workplaces. However, no interaction effects involving place of employment were found, so this variable was dropped from all analyses reported below.
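As a rough sketch of equivalent models in Python (this is not the software or code used in the study, and the simulated data and column names are purely illustrative), the group × time interaction for a continuous outcome can be tested with a random-intercept mixed model, and each dichotomous outcome with a logistic regression on group and pre-test score:

```python
# Sketch under assumed data layouts, not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # hypothetical participants per group

# Long format for a continuous outcome: one row per participant per occasion.
ids = np.arange(2 * n)
group = np.repeat(["control", "intervention"], n)
pre = rng.normal(50, 10, 2 * n)
follow = pre + rng.normal(0, 5, 2 * n) + np.where(group == "intervention", 3, 0)
df_long = pd.DataFrame({
    "id": np.tile(ids, 2),
    "group": np.tile(group, 2),
    "time": np.repeat(["pre", "follow"], 2 * n),
    "score": np.concatenate([pre, follow]),
})

# Random-intercept model; the group:time coefficient corresponds to the
# group x time interaction of interest.
mixed = smf.mixedlm("score ~ group * time", data=df_long,
                    groups=df_long["id"]).fit()
print(mixed.summary())

# Dichotomous outcome: follow-up status predicted from group and pre-test status.
df_wide = pd.DataFrame({
    "group": group,
    "pre": rng.integers(0, 2, 2 * n),
    "follow": rng.integers(0, 2, 2 * n),
})
logit = smf.logit("follow ~ group + pre", data=df_wide).fit()
print(logit.summary())
```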
The analysis was carried out according to intention-to-treat principles, so that all persons who completed a pre-test questionnaire were included, even if they subsequently dropped out. In such cases, the pre-test score was substituted for the missing value, so that no improvement was assumed.
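A minimal sketch of this missing-value handling (column names are illustrative, not the study's):

```python
import pandas as pd

def carry_pretest_forward(df, pre_col="pre", follow_col="follow"):
    """Replace missing follow-up values with the pre-test value, so that
    dropouts are assumed to show no improvement (last observation carried
    forward for a two-occasion design)."""
    out = df.copy()
    out[follow_col] = out[follow_col].fillna(out[pre_col])
    return out

# Example: the second participant dropped out after pre-test.
df = pd.DataFrame({"pre": [40.0, 55.0], "follow": [48.0, None]})
print(carry_pretest_forward(df))
```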