
Evaluation, Quality Management & Improvement Division

DMHAS Consumer Survey FY 03
Executive Summary


  • The FY03 DMHAS Consumer Survey was implemented at the program level. The survey instrument was a 21-question variation of the widely used national MHSIP survey. Every program was asked to use the same instrument but could attach additional questions if it wished. In addition to the standard instrument, DMHAS/QMI provided each agency with an Excel-based application that allowed agencies to enter data and obtain concise survey reports for each of their programs. Although DMHAS/QMI provided recommendations and training about the survey process, the recruitment of respondents, the mode of survey administration, and the manner of providing help to consumers were at the provider's discretion. The training was completed in the early fall of 2002. The sample size for each program was based on the unduplicated count of clients active at any time between September 1 and November 30, 2002, and was designed to provide a 95% confidence interval of +/- 10% around the estimates of clients' satisfaction scores (see the sample-size sketch below). Mental health (MH) and substance abuse (SA) programs used the same survey instrument.

The providers were asked to send the survey results to DMHAS by February 15, 2003.  At that time, all programs were to submit printouts of their Excel-generated reports and provide DMHAS with the raw data sets for further aggregation and analysis.  In addition, each program was asked to fill out a Supplemental Report describing how they conducted the survey (FY03 – Feedback from Supplemental Report - Summary).   
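The sampling formula itself is documented in "FY03 - Explanations about sampling formula and definitions of domains." As a rough illustration only, the following sketch shows how a standard textbook formula for estimating a proportion, with a finite population correction, turns a 95% confidence interval of +/- 10% into a per-program sample size. The function name and the conservative assumption p = 0.5 are hypothetical; this is not necessarily the exact formula DMHAS used.

```python
import math

def required_sample_size(population: int,
                         z: float = 1.96,       # 95% confidence
                         margin: float = 0.10,  # +/- 10%
                         p: float = 0.5) -> int:
    """Illustrative sample size for estimating a proportion:
    the usual z^2 * p * (1 - p) / margin^2 formula, adjusted
    with a finite population correction for small programs."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # ~96 for an infinite population
    n = n0 / (1 + (n0 - 1) / population)       # finite population correction
    return math.ceil(n)

print(required_sample_size(25))   # -> 21: small programs must survey nearly everyone
print(required_sample_size(200))  # -> 66: larger programs need about a third of clients
```

Under these assumptions, a program with only 25 quarterly clients would need responses from over 80% of them, which foreshadows the compliance difficulties that smaller programs reported (see below).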

  • Overall, 414 MH and SA programs were required to complete the consumer survey in FY03. The majority of programs (378, 91%) completed the survey as required. Twenty-three programs did not submit surveys, and an additional 11 programs either used an incorrect survey instrument or reported that they had misunderstood the requirement. Percent compliance by region is shown in Table 1 below:

Table 1.  DMHAS Consumer Survey FY03: Compliance with survey requirement 

| Region | Submitted data | Did not submit data | Used a different survey / misunderstood requirements | Total programs with the FY03 survey requirement | % in compliance |
| ------ | -------------- | ------------------- | ----------------------------------------------------- | ----------------------------------------------- | --------------- |
| 1      | 77             | 13                  | 1                                                     | 91                                              | 84.6%           |
| 2      | 111            | 5                   | 4                                                     | 119                                             | 92.4%           |
| 3      | 53             | 1                   | 0                                                     | 54                                              | 98.2%           |
| 4      | 72             | 4                   | 6                                                     | 82                                              | 87.8%           |
| 5      | 66             | 2                   | 0                                                     | 68                                              | 97.1%           |
| TOTAL  | 378            | 23                  | 11                                                    | 414                                             | 91.3%           |

All counts refer to MH and SA programs.

Compliance with the sample-size expectation (FY03 - Explanations about sampling formula and definitions of domains) was analyzed based on the subset of programs that submitted the Supplemental Report. Overall, about 30% of the programs (42 out of 132) failed to collect at least 75% of the expected number of surveys. The smaller programs (those with an unduplicated quarterly client count of 25 people or fewer) had the hardest time complying: fewer than 65% of the 36 programs of that size collected the expected number of surveys. This is an important finding, because a smaller-than-expected sample means one can be less confident that the scores truly represent the opinions of all clients in the program, which in turn may seriously affect the comparability of individual programs with one another (the sketch below illustrates how a shortfall widens the confidence interval).
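To make the consequence of a shortfall concrete, one can invert the same illustrative formula and ask what margin of error is actually achieved when a program collects fewer surveys than expected. The values below are hypothetical, continuing the sketch above; this is not an official DMHAS analysis.

```python
import math

def achieved_margin(population: int, n: int,
                    z: float = 1.96, p: float = 0.5) -> float:
    """Half-width of the 95% confidence interval actually achieved
    with n completed surveys, using the finite-population standard
    error for a proportion (same illustrative assumptions as above)."""
    se = math.sqrt(p * (1 - p) / n * (population - n) / (population - 1))
    return z * se

# A 200-client program needed ~66 surveys for a +/- 10% margin;
# collecting only 75% of that (~50 surveys) widens the margin.
print(round(achieved_margin(200, 66) * 100, 1))  # -> 9.9 (about +/- 10%)
print(round(achieved_margin(200, 50) * 100, 1))  # -> 12.0 (about +/- 12%)
```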

To view the survey results see:

  • FY03 – Statewide Results by Level of Care   

  • FY03 – Statewide Results – Graph  

  • This illustration is a line graph showing the percent of clients who agreed (i.e., were satisfied) within each domain, defined as those whose mean score on the domain was less than 2.5 (a scoring sketch follows the next paragraph). Program types, also known as levels of care (LOCs), are placed in order of increasing overall satisfaction; that is, the LOC whose respondents expressed the lowest overall satisfaction is at the left end of the axis and the LOC with the highest overall satisfaction is at the right end. The figure for each LOC is an aggregate representing all programs in the state providing that type of service.

The satisfaction levels within each individual domain vary widely among LOCs. The observed variations, however, may not reflect true differences in client satisfaction, because in a number of cases the total client count per LOC was very small, and the estimates of clients' opinions for those LOCs are therefore not very reliable. Note that on the horizontal axis the label for each LOC includes the total number of respondents and the number of programs within that LOC; this information illustrates the difficulties with sampling and interpretation that small program sizes create.
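The scoring rule described above (a respondent counts as satisfied within a domain when the mean of their item scores is below 2.5) can be sketched as follows. The item-to-domain mapping shown here is hypothetical, and the coding of 1 = strongly agree through 5 = strongly disagree is an assumption based on the usual MHSIP convention; the actual item groupings are defined in the FY03 survey documentation.

```python
from statistics import fmean

# Hypothetical item-to-domain mapping; the real MHSIP groupings
# are defined in the FY03 survey documentation.
DOMAINS = {
    "Access": ["q4", "q5", "q6"],
    "Outcomes": ["q14", "q15", "q16"],
}

def percent_agree(responses: list[dict], items: list[str]) -> float:
    """Percent of respondents whose mean score across a domain's
    items is below 2.5 (assuming 1 = strongly agree ... 5 = strongly
    disagree).  Respondents who answered none of the domain's items
    are excluded from the denominator."""
    means = []
    for r in responses:
        scores = [r[q] for q in items if r.get(q) is not None]
        if scores:
            means.append(fmean(scores))
    if not means:
        return float("nan")
    return 100.0 * sum(m < 2.5 for m in means) / len(means)

# Two satisfied respondents and one dissatisfied one -> 66.7% agree.
sample = [{"q4": 1, "q5": 2, "q6": 2},
          {"q4": 2, "q5": 2, "q6": 3},
          {"q4": 4, "q5": 5, "q6": 4}]
print(round(percent_agree(sample, DOMAINS["Access"]), 1))  # -> 66.7
```

An exclusion rule of this kind is one plausible reason the "N Used" counts in Table 2 differ from domain to domain.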

  • Statewide FY03 totals for the domains of access, participation, quality, outcomes, and general satisfaction are summarized in Table 2 below. The table also contains data from the previous survey of consumer opinions ("Voice Your Opinion 2000-01"). It should be noted that the two surveys were implemented very differently: distribution was organized by providers in FY03, whereas data were collected by consumer advocates in 2000/01.

Table 2: Statewide results and comparison of FY03 to the consumer survey conducted in 2000/01 (each cell shows % Agree, with the N used in parentheses)

| Survey / Population            | Access        | Participation in Treatment Planning | Quality/Appropriateness | Outcomes      | General Satisfaction |
| ------------------------------ | ------------- | ----------------------------------- | ----------------------- | ------------- | -------------------- |
| FY03¹ MH                       | 84.7% (8468)  | 87.6% (8026)                        | 85.7% (8316)            | 78.8% (8269)  | 90.2% (8615)         |
| FY03¹ SA                       | 81.3% (4045)  | 89.9% (4050)                        | 88.9% (4078)            | 85.0% (4066)  | 88.7% (4171)         |
| FY03¹ Total                    | 84.5% (12513) | 88.4% (12076)                       | 86.8% (12394)           | 80.9% (12335) | 89.7% (12786)        |
| Voice Your Opinion 2000/01² MH | 74.4% (1058)  | 65.2% (973)                         | 74.1% (1037)            | 72.9% (994)   | 82.8% (1096)         |

¹ This survey was a program-level survey organized by providers. Direct treatment staff were involved in helping consumers in about 50% of the cases where clients needed help. The total count is likely a duplicated count, as individual clients may have participated in the survey process at more than one program site.

² The survey data were collected by peer surveyors who were recruited, trained, and supervised by a consumer advocacy agency. The surveyors visited 42 different agencies (i.e., 38% of all mental health agencies funded or operated by DMHAS) across the state.

  • Providers reported that about 22% of respondents needed assistance (based on information from the 138 programs that reported this in the Supplemental Report; see FY03 Feedback from Supplemental Program Report - Summary). Of all the people who received assistance, 49% were assisted by direct treatment staff and 20% were helped by a consumer representative or volunteer.

  • The FY04 survey will be implemented at the agency level rather than at the program level. There are multiple reasons for this change. One is the difficulty of obtaining, at the program level, a sample size adequate to support interpreting differences between programs. Another relates to the survey instrument itself: the questions are global (by design) rather than program specific. Moreover, implementing the survey at the program level resulted in duplication; as amply noted in the comments provided in the Supplemental Report, clients objected to being asked to complete the same survey more than once.

  • The sampling requirements per agency for FY04 will be adjusted to yield a more precise estimate of clients' satisfaction than was afforded in FY03. Nevertheless, the overall number of surveys required will in most cases be smaller than the combined total that many agencies were expected to collect across all of their programs in FY03.






