Tuesday, June 18, 2013

Does Project Coach Really Promote Youth Development? The Measurement Challenge

One of the greatest challenges facing youth development programs is demonstrating that the activities they offer have a meaningful impact on the kids they serve. From a strict experimental design perspective, an assessment strategy for determining program impact would seem fairly straightforward. An evaluator would randomly assign youth to a participant group and a control group, engage the first group in an array of program activities while the second carried on with its "normal" activities, and, after a fixed time period, compare the groups on selected youth development measures such as emotional regulation, attentional control, perseverance, empathy, intrinsic motivation, and various character traits. If the data showed the randomly assigned participants to have an advantage over their randomly assigned non-participating peers, controlling as much as possible for other factors, then there would be evidence that the program and its activities are having their intended effect.
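
To make the logic of this ideal design concrete, here is a minimal sketch in Python, with entirely made-up numbers and an assumed effect size, of how randomly assigned participant and control groups might be compared on pre-post change scores. It illustrates the reasoning above; it is not an analysis we have run.

```python
# Illustrative only: hypothetical data for a randomized pre-post comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post scores on a 0-100 developmental measure (invented values)
participants_pre = rng.normal(60, 10, 40)
participants_post = participants_pre + rng.normal(5, 8, 40)   # assumed program effect
controls_pre = rng.normal(60, 10, 40)
controls_post = controls_pre + rng.normal(1, 8, 40)           # assumed maturation only

# Compare pre-to-post change scores between the two randomly assigned groups
change_participants = participants_post - participants_pre
change_controls = controls_post - controls_pre
t, p = stats.ttest_ind(change_participants, change_controls)
print(f"Mean change (program) = {change_participants.mean():.1f}, "
      f"(control) = {change_controls.mean():.1f}, t = {t:.2f}, p = {p:.3f}")
```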

Unfortunately, the "roll up your sleeves" reality of youth development programs rarely affords the opportunity to randomly assign kids to participation and non-participation groups, and, even if it did, it is nearly impossible to know whether pre-post changes in developmental measures are solely a function of the activities that kids in the program engaged in and the controls did not. The question of whether some other factor affected the two groups differently can always be raised. Programs typically run over the course of several months, and the assumption that comparison groups had identical histories, except for exposure or non-exposure to program activities, is difficult to defend, yet it is one that a randomized control group design requires. While randomization and control over group histories are two requirements for high-level experimental research, we think that most programs wishing to do meaningful applied research need to find alternative strategies for assessing their effectiveness.

During the past decade, Project Coach, like many other youth development programs, has been doing pre-post assessments of our participants. One of the assessment tools that we use is the Developmental Asset Profile (DAP), developed by the Search Institute. It is designed to measure 20 internal assets (e.g., responsibility, caring, integrity, interpersonal competence, self-esteem) and 20 external assets (e.g., adult relationships/role models, community values, service to others, engagement in creative activities). These assets can then be combined in various ways to produce what the Search Institute labels five context views (Community, School, Family, Social, and Personal) or eight asset views, four internal (Positive Identity, Social Competency, Positive Values, Commitment to Learning) and four external (Constructive Use of Time, Boundaries/Expectations, Empowerment, Support). Without getting bogged down in the specific meaning of these measures and how they can be aggregated, the data show that the more assets kids possess, the better they do in school and the less likely they are to engage in health-compromising behaviors such as alcohol and drug use. Clearly, an objective of programs should be to help youth develop assets that build capabilities that make them strong, successful, and resilient people.
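
As a rough illustration of the general roll-up idea, the sketch below averages a few hypothetical survey items into view-level scores. The item wording, the item-to-asset mapping, and the asset-to-view mapping are all invented for illustration; they are not the Search Institute's actual DAP items or scoring rules.

```python
# Illustrative roll-up of invented items into asset views (not the real DAP scoring).
from statistics import mean

# Hypothetical 1-4 item responses
responses = {
    "takes responsibility": 3,
    "tells the truth": 4,
    "feels valued by community": 2,
    "has adult role models": 3,
}

# Invented mappings: item -> asset, asset -> asset view
asset_of = {
    "takes responsibility": "Responsibility",
    "tells the truth": "Integrity",
    "feels valued by community": "Community Values Youth",
    "has adult role models": "Adult Role Models",
}
view_of = {
    "Responsibility": "Positive Values",             # internal view
    "Integrity": "Positive Values",                  # internal view
    "Community Values Youth": "Empowerment",         # external view
    "Adult Role Models": "Boundaries/Expectations",  # external view
}

# Average item responses within each asset view
view_scores = {}
for item, score in responses.items():
    view_scores.setdefault(view_of[asset_of[item]], []).append(score)
print({view: round(mean(scores), 2) for view, scores in view_scores.items()})
```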

The figure below shows pre-post data for one of our adolescent coaches. Just looking at the context bars (i.e., the top 5), one can see that the youth scored higher on 4 of the 5 contexts in June than she did in September. As well, from an asset perspective, the adolescent was higher on 6 of the 8 factors (lower 8 bars) in June. From the casual observer's perspective, this might seem like fairly positive support for Project Coach, in that our program hypothesizes that participants acquire many critical assets as they develop leadership skills as coaches of young children.


However, the experimentalist could easily shoot holes in such an interpretation, as alternative explanations for the pre-post differences could be proposed. Perhaps all youth, whether or not they participated in Project Coach, would have shown such positive change as a result of simple maturation. Or perhaps youth became sensitized to taking the DAP and were better at responding to its questions in June than in September, having become more savvy about presenting themselves favorably. Many other explanations for the observed differences could also be proposed. Without randomization and strict control over the lives of youth between September and June, any and all factors, including Project Coach activities, could be responsible for the observed changes.

Given that such data can be misleading and that strict experimental design and control is difficult to operationalize in real-world settings, what are programs to do to show the value of their activities? While no perfect solution exists, one strategy is to collect data showing what kids are doing in their day-to-day lives between September and June. If it can be shown that when they are involved with program activities they engage in more of the kinds of activities that promote the outcomes measured by the DAP than when they are not, then one could start to make the case that pre-post changes are real and follow naturally from program experiences. Another twist would be to contrast kids in a program with acquaintances who are not involved in the program and determine whether their day-to-day experiences are qualitatively different with regard to building developmental assets.


Project Coach has been experimenting with such a technique, known as the Experience Sampling Method (ESM). The idea is to probe youth at random times and ask them a variety of questions about what they are doing, who they are with, and how they are feeling. Using the ESM, one of our staff, Katlin Okamoto, sent text messages to Project Coach youth between 3 and 6 p.m., the after-school hours. These texts were sent on the three days a week kids were working as coaches and on the other two days when they were "off" and engaged in non-structured, non-Project Coach activities. When they received a text from Katlin, the youth filled out a brief questionnaire based on Lerner's model of the 5 Cs (Character, Competence, Confidence, Connection, and Caring), which overlaps with many of the assets found on the DAP. Her results are shown in the following figure. As one can see, the findings are fairly dramatic, with 4 of the 5 categories (* p < .05) showing kids more deeply engaged in developmental types of activities when they are working in Project Coach than when they are on their own.
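
For readers curious about the mechanics of random probing, here is a minimal sketch, under assumptions of our own choosing (one probe per afternoon and a Monday/Wednesday/Friday coaching schedule), of how random text-probe times within the 3-6 p.m. window might be generated. It illustrates the sampling idea rather than the actual system Katlin used.

```python
# Illustrative ESM probe scheduling: one random probe time per afternoon.
import random
from datetime import datetime, timedelta

def random_probe_time(day: datetime, start_hour: int = 15, end_hour: int = 18) -> datetime:
    """Pick a uniformly random minute between 3:00 and 6:00 p.m. on the given day."""
    window_start = day.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    window_minutes = (end_hour - start_hour) * 60
    return window_start + timedelta(minutes=random.randrange(window_minutes))

# Example week (Monday-Friday) with a hypothetical Mon/Wed/Fri coaching schedule
week = [datetime(2013, 4, 1) + timedelta(days=d) for d in range(5)]
coaching_days = {0, 2, 4}
for i, day in enumerate(week):
    label = "coaching day" if i in coaching_days else "off day"
    print(day.strftime("%a"), label, random_probe_time(day).strftime("%I:%M %p"))
```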


Katlin also recruited friends of Project Coach adolescents who were not in the program and probed them in the same way to see how they compared with the Project Coach kids. The results of this analysis are shown in the following graph. Again, we can see the impact of being engaged in program activities, which are represented by the blue bars. One can even observe differences between Project Coach kids when they are not involved with Project Coach and their friends. While only tentative, these differences are consistent with a hypothesis that Project Coach kids are involved in more developmental activities even in their own free time when contrasted with non-Project Coach peers. This might be a transfer effect that, although weak, shows adolescents doing a better job of structuring their free time than peers who are not exposed to a program such as Project Coach.
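
A hedged sketch of the kind of three-way contrast behind this graph, using invented ratings: average 5 Cs ratings for Project Coach youth on program days, the same youth on off days, and non-participating friends, plus a simple test of the tentative free-time difference. The numbers and group sizes are assumptions for illustration only, not Katlin's data.

```python
# Illustrative three-group contrast with made-up 1-5 ratings for a single C.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = {
    "PC youth, program days": rng.normal(4.0, 0.6, 60),
    "PC youth, off days":     rng.normal(3.4, 0.7, 40),
    "Non-PC friends":         rng.normal(3.1, 0.7, 40),
}
for name, ratings in groups.items():
    print(f"{name}: M = {ratings.mean():.2f}")

# Tentative free-time contrast: PC youth on off days vs. non-participating friends
t, p = stats.ttest_ind(groups["PC youth, off days"], groups["Non-PC friends"])
print(f"Free-time contrast: t = {t:.2f}, p = {p:.3f}")
```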




While we do not claim that such data are as powerful as those that might be acquired through a randomized control group design, they do help build a case that the pre-post DAP chart is not just an artifact attributable to factors unrelated to Project Coach activities. With ESM data, the case can be made that if kids are differentially engaged in developmental activities on a day-to-day basis in program versus non-program settings, then over the course of ten months there should be changes on scales, such as the DAP, that probe the development of such assets. The in-Project Coach versus not-in-Project Coach contrasts also point in this direction.

While perfect empirical support for youth development programs is a worthy and highly sought-after goal, we believe that it is unattainable. On the other hand, "less perfect" data, as portrayed in this blog entry, is critical to acquire, as it can help "connect the dots" in non-experimental designs that rely on pre-post evaluations. As we argue, if kids are engaged in developmental activities on a regular basis, it is logical that they will acquire the sorts of developmental assets contained in the DAP and the 5 Cs.





