- Open Access
Using process evaluation for program improvement in dose, fidelity and reach: the ACT trial experience
© Wilson et al; licensee BioMed Central Ltd. 2009
- Received: 10 July 2009
- Accepted: 30 November 2009
- Published: 30 November 2009
The purpose of this study was to demonstrate how formative program process evaluation was used to improve dose and fidelity of implementation, as well as reach of the intervention into the target population, in the "Active by Choice Today" (ACT) randomized school-based trial from years 1 to 3 of implementation.
The intervention integrated constructs from Self-Determination Theory and Social Cognitive Theory to enhance intrinsic motivation and behavioral skills for increasing long-term physical activity (PA) behavior in underserved adolescents (low income, minorities). ACT formative process data were examined at the end of each year to provide timely, corrective feedback to keep the intervention "on track".
Between years 1 and 2 and years 2 and 3, three significant changes were made to attempt to increase dose and fidelity rates in the program delivery and participant attendance (reach). These changes included expanding the staff training, reformatting the intervention manual, and developing a tracking system for contacting parents of students who were not attending the after-school programs regularly. Process outcomes suggest that these efforts resulted in notable improvements in attendance, dose, and fidelity of intervention implementation from years 1 to 2 and 2 to 3 of the ACT trial.
Process evaluation methods, particularly implementation monitoring, are useful tools to ensure fidelity in intervention trials and for identifying key best practices for intervention delivery.
- Physical Activity
- Team Leader
- Social Cognitive Theory
- Physical Activity Intervention
- Behavioral Skill
Process evaluation can be used to explain why interventions succeed or fail, and whether there are characteristics or mechanisms involved in the program's implementation that potentially mediate or moderate outcomes. In large-scale trials the importance of monitoring program implementation has been highlighted [1–10], and there is strong evidence that level of implementation impacts study outcomes. Implementation monitoring can be done in both a formative and a summative manner. Formative evaluations can be defined as utilizing data to provide on-going monitoring and quality assessment to maximize the performance of a program [11–14]. Summative evaluations analyze data at the conclusion of an initiative to provide a conclusive rating of the extent to which intended outcomes were achieved and the program was implemented as intended [11, 13, 14]. Another summative purpose of process evaluation is to include level-of-implementation data in the outcome analysis [15, 16].
Evaluations of implementation are especially important given that few studies have achieved full implementation in real-world settings. This is also true of health promotion efforts, as researchers have noted the great variability in program implementation and policy adoption in community and school settings [1, 17]. Thus, one purpose of implementation monitoring is to ensure that the originally designed intervention is, in fact, being implemented, and implemented in a manner consistent with the program theory and plan. In effect, if a complex intervention carried out in a field setting is not carefully monitored and adjusted to stay "on track" with the original plan, many different interventions may end up being implemented. Accordingly, midcourse changes are designed to increase fidelity, dose, and reach so that researchers can evaluate the intervention as originally planned. Despite the importance of such evaluations, outcome analyses are frequently conducted without an assessment of program implementation. This is often referred to as the "black box" approach to evaluation: examining the outcomes of a program without examining its internal operation. Lack of this knowledge can lead to a "Type III error," in which a program is judged ineffective when, in actuality, it was not implemented as intended [19, 20].
Process evaluation data used for formative purposes during a developed intervention, as described in this study, should be distinguished from process evaluation used for formative evaluation during the developmental phases of an intervention [21–23]. In an example of the latter, Wilson and colleagues conducted a formative evaluation of a motivational PA intervention (Active by Choice Today; ACT). The conceptual framework for the ACT intervention targeted the social environment, cognitive mediators, and motivational orientation related to PA in underserved adolescents. The 8-week program sought to increase moderate-to-vigorous PA (MVPA) among participating youth, and formative evaluation data were collected through daily forms and observations completed by an independent, objective observer. ACT process evaluation focused on identifying factors in the social environment and curriculum that worked well and/or were in need of improvement. Most effort was spent ensuring that the theoretical underpinnings of the program were maximized and promoting efficiency by correcting logistical flaws. The process evaluation was used to inform necessary changes to the staff training. Specifically, process data indicated that it would be more beneficial to encourage staff to praise students in subtle ways or in settings where other students would not be aware of it (because student-to-student reactions became less positive when students were publicly praised by staff). The investigators also learned that training should focus on instructional methods that foster a balance between discipline and nurturing, as well as on ways to subtly dismantle cliques.
A growing literature has included process evaluation as a key element in evaluating success of implementation in large-scale PA trials. The Pathways initiative, a large-scale, multi-site, 3-year study testing a school-based intervention, used process evaluation methods in evaluating implementation of an intervention to lower percent body fat in American Indian children. Pathways applied a multilevel strategy involving individual behavior change and environmental modifications to support changes in individual behavior. The environmental component included a food service intervention to enhance food staff skills in preparing and serving lower-fat meals. For this component, implementation was measured against various behavioral guidelines (e.g., use of low-fat vendor entrees, offering a choice of fruits and vegetables). In the first year, none of the 12 goals were achieved; in the second year, 6 of the 13 goals were met (a new goal had been added); in the third year, 9 of the 13 goals were met. These improvements were attributed to performance feedback provided by the evaluators at the end of each semester, an example of effective use of formative process data.
Other large trials have reported summative process evaluations that have implications for using process evaluation data for formative purposes. For example, in one investigation of the SPARK program (Sport, Play, and Active Recreation for Kids), a multi-component elementary school program designed to promote PA in children, process evaluation data were obtained to determine success of implementation. The SPARK curriculum focused on physical education (PE) and self-management (SM), and children participated in either an intervention implemented by PE specialists, an intervention implemented by classroom teachers, or a control (usual PE classes). Through direct observation of weekly classroom lessons it was determined that teachers and PE specialists conducted 63% and 67% of the components of the SM curriculum, respectively. The small variance in intervention delivery, coupled with the relatively low implementation percentages, suggests the possibility of consistent contextual implementation barriers that perhaps could have been addressed with timely, formative process evaluation data.
In "Switch-Play," Salmon and colleagues sought to reduce the time spent by primary school children in sedentary behaviors and to increase their skills in, enjoyment of, and participation in PA outside of school. The process evaluation indicated an average attendance of 88% among children in the intervention conditions. Classroom activities were completed 92% of the time; however, outside-of-class PA activities and self-monitoring sheets were completed only 57% and 62% of the time, respectively. These data indicate opportunities for improving fidelity to essential program elements, especially for outside-of-class PA.
The purpose of the present study was to demonstrate how program process evaluation was used in a formative manner to improve fidelity and dose (completeness) of implementation, as well as reach into the target population, in the ACT randomized school-based trial from year to year of implementation. The ACT trial is a group-randomized cohort design with three intervention and three comparison schools per year over the course of four years (N = 24 schools, n = 60 6th graders per school). The formative data from each year were used to provide corrective feedback to keep the intervention "on track", and were part of a comprehensive approach to process evaluation for monitoring and assessing program implementation in ACT.
A total of 24 middle schools (range of 41-71 students per school; N = 1,422 total students) in South Carolina were recruited to participate in one of the two after-school programs (ACT intervention or a general health program that served as a comparison program) over the 4 years (6 schools per year) of the trial implementation. To be eligible, adolescents were required to 1) currently be enrolled in the 6th grade, 2) have parental consent to participate, 3) agree to study participation and random assignment, and 4) be available for a 6-month follow-up. Adolescents were excluded from participation if they 1) had a medical condition that would interfere with the prescribed PA intervention plan, 2) were developmentally delayed such that the intervention materials would not be cognitively appropriate, or 3) were currently in treatment for a psychiatric disorder.
The ACT trial is a group-randomized cohort design with three intervention and three comparison schools per cohort (year). The schools were paired prior to recruitment and randomization to condition to avoid possible bias or confounding by socio-demographic differences. The criteria on which the schools were paired included: 1) school size, 2) proportion of minority versus non-minority ethnicity, 3) proportion of students enrolled in the free and reduced lunch program, and 4) urban or rural community setting. Baseline psychosocial, PA, and anthropometric measures were obtained prior to randomizing schools in each pair. The measurement and intervention teams were kept separate to blind the measurement staff to group condition. Data were collected by trained measurement staff for each pair of schools on the same days over a period of two weeks in a lagged timeline (pair 1, pair 2, pair 3, respectively). This paper reports on years 1, 2, and 3 of the trial.
Two phases of recruitment were implemented yearly during the ACT trial. The first phase involved attending parent orientations at school events to provide program information and obtain informed consent. Following the orientation a second phase of recruitment took place during the school day. Pep rallies and homeroom visits were two methods of recruitment implemented during the second phase to increase enrollment and excitement about the programs (PA and general health education). Randomization of schools to programs (PA intervention vs. general health education) occurred after recruitment and baseline assessments were completed. The recruitment target was 60 students from each school.
The intervention integrated constructs from Self-Determination Theory (SDT) [29, 30] and Social Cognitive Theory (SCT) to enhance intrinsic motivation and behavioral skills for increasing long-term PA behavior specifically in underserved adolescents. A formative evaluation of the theoretical elements was developed during year 1 of the ACT trial [24, 28]. In the present study elements from SCT and SDT were combined to develop an intervention that promoted behavioral skills for PA outside the program and a social environmental approach during the after-school program for enhancing autonomy (choice), fun, belongingness (engagement), and competence (challenges emphasizing non-competitive play) for PA. An interview methodology known as strategic self-presentation was used to integrate SDT and SCT by linking motivational elements from the program to applying behavioral skills for being physically active outside of program time.
Table 1: ACT Theories, Theoretical Constructs, and Essential Elements

Self-Determination Theory (SDT) constructs:
- Autonomy supportive environment: contributes to feelings of agency or being "in control" ("internal locus of causation"). Indicators: participants have choices; participants provide meaningful input and have influence on what happens in the program. Essential element: Input and Choice.
- Competence supportive environment: contributes to being able to effectively interact with one's environment and get wanted effects and outcomes. Indicators: participants know what is expected of them; participants feel capable and able to participate successfully. Essential element: Successful and Confident (in program).
- Relatedness supportive environment: contributes to feelings of connectedness and being accepted by significant others. Indicators: participants are engaged and involved in the program; feel that they belong and are part of the group; feel like valued members of the group; get along with and show respect for one another (positive interactions). Essential element: Engaged and Interact.
- Intrinsic motivation for physical activity. Indicators: participants enjoy being in the program and being physically active (essential element: Fun & Enjoyment); participants are physically active during the PA component of the program (essential element: Being Physically Active).

Social Cognitive Theory (SCT) constructs:
- Self-efficacy (person factor): confidence in one's ability to successfully engage in a behavior. Indicator: participants feel confident that they can be physically active in the program and at home. Essential element: Successful and Confident (at home).
- Behavioral skills (behavioral factor): skills or capability to self-regulate behavior (self-monitoring, group goal setting, support seeking). Indicator: participants have specific behavioral skills that enable them to be physically active at home.
- Social support (environmental factor): instrumental and/or emotional support from peers and/or family members to engage in a specific activity. Indicator: participants have the social support needed to be physically active at home.

SDT & SCT:
- Self-concept/motivation: participants have a self-concept that includes being physically active. Essential element: students participate in strategic self-presentation.
The ACT intervention was implemented on Mondays, Tuesdays, and Thursdays for two hours after school. It was supervised by a team leader with expertise in implementing physical activities with youth. The team leaders provided the structure for the ACT intervention components, including the PA component. Four additional trained staff provided oversight and assisted with facilitating the program components. The program had three main components: snack/homework (30 minutes); a PA component of MVPA (60 minutes), with activities the students selected each week; and an SCT and motivational component (group time/behavioral skills; 30 minutes), during which intervention staff taught participants behavioral skills and motivational strategies to increase their PA at home and with friends.
The General Health Education Program (comparison program) focused on nutrition, stress management, drug prevention, and school drop-out prevention. The program was held on the same days and times as the ACT intervention program. The health education modules were taught in an interactive format, and students typically rotated from one station to the next every twenty minutes.
ACT Intervention Training
ACT intervention staff and volunteers were trained prior to the beginning of intervention each school year and received one booster training session midway through the intervention period. Training content included: an overview of the ACT trial purpose, an introduction of the behavioral theories and models guiding the ACT intervention, a detailed review of the ACT intervention manual, staff expectations regarding implementing the intervention and record keeping, team building, interacting with students, first aid, and administrative responsibilities and procedures. Training sessions were didactic and interactive. The interactive components provided opportunities for the staff to practice intervention strategies and for training leaders to identify and correct any problem areas for the staff during the training.
ACT Process Evaluation Methods
ACT process evaluation methods were guided by the essential elements framework that defined dose and fidelity, or "complete and acceptable delivery," of the ACT intervention [24, 33, 34]. The essential elements described in Table 1 guided the development of items for the process evaluation observation form; that is, the key concepts in Table 1 were reflected throughout the components of the ACT intervention as implemented during the after-school program. The following evaluation questions guided the selection of methods and tools: 1) Fidelity (for the PA and behavioral skills components): To what extent was the social environment autonomy supportive? 2) Dose delivered (completeness, for all components): To what extent were all planned components of the program provided to program participants? 3) Reach: What percentage of the possible target group attends each week of the program?
Process evaluation data were collected by a trained, independent process evaluator using systematic observation of after-school program activities. Through observation and use of a quantitative checklist and rating scales, the process evaluator assessed the extent to which the ACT after-school social environment achieved the essential elements upon which the program was designed. To assess dose and fidelity, the process evaluator observed the full two-hour program on each of the three weekly program days for two weeks at three points in time: early (weeks 1 and 2), midpoint (weeks 8 and 9), and near the end (weeks 15 and 16) of the 17-week program. It was possible to observe each program in the same phase of implementation because program implementation was staggered by two weeks across the three intervention sites.
Intervention Process Evaluation Form for Assessing Fidelity for the PA and Behavioral Skills/Group Time Components (sample items by essential element category; items rated on a 1-4 Likert scale: 1 = none, 2 = some, 3 = most, 4 = all)

Physical Activity:
1) Clarity of Rules/Expectations: Explain rules and daily activities to students
2) Students get to vote on physical activity games
3) Optimal Challenges: Leaders encourage participation and fairness and de-emphasize competition
4) Leaders create a positive, interactive environment
5) Physical Activity: Students are participating in moderate-to-vigorous cooperative PA activities

Behavioral Skills/SSP/Group Time:
1) Clarity of Rules/Expectations: Explains rules to students
2) Optimal Challenges: Leaders encourage participation and individual progress (based on goals)
3) Leaders create a positive, interactive environment
Many implementation fidelity ratings reported in the literature pertain to implementation of a curriculum or set of program activities, and a rating scale ranging from "poor" to "excellent" has typically been used [11, 13]. In the ACT trial, however, a different approach to conceptualizing and measuring fidelity was taken, given that the goal of the intervention was to create a positive social environment in the program that was characterized by adult staff behavior. Because this approach was based on SDT [29, 30], and because adult behaviors shape the program environment for the child, we selected a rating scale ranging from "none" to "all" to assess appropriate staff behavior.
Description of Intervention Process Evaluation Form for Assessing Dose (or completeness of delivery) for the Snack/Welcome, PA, and Behavioral Skills/Group Time Components

Dose for Snack/Welcome:
- Greeted arriving students
- Ground rules displayed
- Staff arrive on time
- Staff perform assigned duties
- Adult leader gave overview of week (Monday only)
- Adult leader gave daily overview

Dose for Physical Activity:
- Overview and introduction to entire activity session
- PA choices listed every day
- Warm-up at beginning of PA session

Dose for Behavioral Skills/SSP/Group Time:
- Overview of session or activity
- Demonstration of skill
- Student involvement (brainstorming, role play, etc.)
After the intervention was completed each year, the process evaluation data were examined to determine areas of strengths and weaknesses and to make adjustments to keep the program "on track" for the next year (cohort). Based on process evaluation data in year 1, changes were made in the subsequent program years to ensure complete and acceptable program delivery and to maximize reach into the target population.
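The year-end review logic described above can be sketched as a simple scoring routine. This is an illustrative reconstruction, not the trial's actual software: the component names, checklist values, and ratings below are hypothetical, and only the stated goals (at least 75% of dose elements delivered; mean fidelity of 3 or higher on the 1-4 scale) come from this paper.

```python
# Hypothetical sketch of a year-end dose/fidelity review; the ACT trial's
# actual data handling is not described at this level of detail.

DOSE_GOAL = 0.75      # proportion of checklist elements delivered (goal: 75%+)
FIDELITY_GOAL = 3.0   # mean rating on the 1-4 Likert scale (1=none ... 4=all)

def dose_delivered(checklist):
    """Proportion of planned elements observed as delivered (True/False)."""
    return sum(checklist) / len(checklist)

def fidelity_score(ratings):
    """Mean of the observer's 1-4 ratings for one component."""
    return sum(ratings) / len(ratings)

def flag_components(observations):
    """Return components falling short of either goal, for corrective feedback."""
    flags = []
    for component, obs in observations.items():
        dose = dose_delivered(obs["checklist"])
        fidelity = fidelity_score(obs["ratings"])
        if dose < DOSE_GOAL or fidelity < FIDELITY_GOAL:
            flags.append((component, round(dose, 2), round(fidelity, 2)))
    return flags

# Hypothetical observations from one site visit
observations = {
    "snack":  {"checklist": [True] * 6 + [False], "ratings": [4, 3, 3]},
    "pa":     {"checklist": [True, True, False, False], "ratings": [3, 3, 4]},
    "skills": {"checklist": [True] * 4, "ratings": [2, 3, 2, 3, 2, 3]},
}
```

Running `flag_components(observations)` on a year's worth of such records would surface, for example, a PA component whose planned elements were under-delivered or a skills component with low fidelity ratings, which is the kind of finding that would prompt corrective feedback for the next cohort.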
Student Demographics by Year and Intervention vs. Control Schools (including % African American and % receiving free or reduced lunch, with ranges across schools)
Recruitment and Attendance
Average Attendance Summary by Year for Intervention and Control Schools
Tracking System Changes
In response to attendance challenges in year 1, a tracking system was developed to more easily contact parents whose children had poor attendance at ACT. Detailed protocols were developed for ACT and general health intervention participants. The protocols included detailed phone scripts and follow-up actions for various scenarios (e.g. wrong phone number, no answering machine, leaving a phone message). The information was then included in a tracking database that included codes for the various scenarios. Staff attempted to collect updated contact information if it was not readily available from the school or provided by the participant.
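A minimal sketch of such a tracking database follows, under stated assumptions: the scenario codes (WN, NA, LM, SP) and the 60% attendance threshold are illustrative inventions, since the paper does not list the actual codes or specify a threshold.

```python
# Hypothetical attendance-tracking sketch; the scenario codes and the 60%
# threshold are illustrative, not the ACT trial's actual values.

SCENARIO_CODES = {
    "WN": "wrong phone number",
    "NA": "no answer / no answering machine",
    "LM": "left a phone message",
    "SP": "spoke with parent",
}

class AttendanceTracker:
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.attendance = {}   # student -> list of True/False per session
        self.contact_log = {}  # student -> list of (scenario code, note)

    def record_session(self, student, present):
        self.attendance.setdefault(student, []).append(present)

    def needs_follow_up(self):
        """Students whose attendance rate has fallen below the threshold."""
        return [s for s, log in self.attendance.items()
                if sum(log) / len(log) < self.threshold]

    def log_contact(self, student, code, note=""):
        """Record a parent-contact attempt under one of the scenario codes."""
        assert code in SCENARIO_CODES, f"unknown scenario code: {code}"
        self.contact_log.setdefault(student, []).append((code, note))
```

Staff would record each session's attendance, pull the `needs_follow_up` list to decide whom to call, and log each attempt with its scenario code so follow-up actions could be matched to the detailed phone scripts.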
Intervention Dose and Fidelity
Percentage of Dose Delivered for ACT Intervention Components for Cohort 1, 2, and 3 Schools (goal: 75% or higher), covering Snack (7 elements), Physical Activity (4 elements), and Behavioral Skills (4 elements)
Staff Manual Changes
Both curricular and visual/organizational changes were made to the manuals. The curricular changes included not repeating any weeks during the program. In addition, some activities that were not feasible were removed. For example, a camera activity was dropped because it was not feasible to give each child in each school a camera to complete the activity. Visual and organizational changes were also made. Each daily sheet was changed to include a "to do" list. A "what's the point?" box was added near the top to reinforce the top priorities for each daily activity and to indicate which ACT essential element was being covered that day. Fun and interesting visuals were added to make the daily sheets more appealing to ACT staff, who were primarily school teachers and staff. Finally, important points conveying the main emphases of ACT (i.e., fun, belongingness) were bolded, and functional definitions were added where appropriate.
Summary of Fidelity Scores for ACT Intervention Components for Cohort 1, 2, and 3 Schools (goal: 3 or higher on the 1-4 scale), covering Physical Activity (10 items) and Skills (6 items)
Staff Training Changes
Significant changes were made in staff training to attempt to improve program dose and fidelity. A core training with all of the schools' team leaders was developed and implemented prior to any of the programs' start dates. In this training, team leaders spent 20 hours being exposed to all the essential elements of ACT. They participated in hands-on activities that helped them become more familiar with the basic elements of the program. After the core training, team leaders then helped facilitate their school's staff training, taking on a more active leadership role in these 12-hour school trainings. Mid-year, a booster training session was held, and feedback was given to each staff member by the ACT project director. This constructive feedback was based on internal evaluations that the project director had conducted. Finally, the external evaluator's criteria sheet was shared with staff members so that they would become familiar with exactly how the essential elements of the program were translated into specific staff tasks and responsibilities.
Overall, this study suggests that the formative evaluation contributed to improving the intervention dose, fidelity, and program attendance. The intervention itself was not changed; rather, the changes made enabled ACT staff to do a better job of delivering the planned intervention. Many of the changes were related to staff training and monitoring methodology. Specifically, changes in the staff training, the intervention manual, and tracking of students' participation were associated with reaching the goals for dose, fidelity, and reach when comparing years 1 through 3 of implementation. These findings have important implications for future research and suggest that formative process evaluation procedures can inform and enhance program implementation in on-going trials.
Using process evaluation data in a formative manner is frequently recommended; however, there are relatively few reports describing formative compared to summative uses of process evaluation. A commonly cited challenge, particularly in large trials, is the time frame required for data collection, management, synthesis, and reporting. This includes the need to develop project infrastructure and procedures that enable project staff to obtain and use the information in a timely manner. Pre-implementation development of project "essential elements" that define dose and fidelity, together with a comprehensive process evaluation plan, sets the stage and expectations for developing project infrastructure and process evaluation procedures to ensure program implementation and quality.
In a review conducted by Durlak and DuPre, it was demonstrated that inadequate implementation of a program can adversely affect program outcomes. This is a particular concern for multi-component programs, given that an improperly implemented component will likely influence the implementation of the others. Process data can help ensure that a program stays true to its underlying theory and plan. Theory not only informs proper and desired implementation; tying implementation back to theory also maximizes the possibility of detecting desired outcomes. There is now evidence linking better PA outcomes to fidelity, and the methods suggested in this paper may serve as "best process practices" that help practitioners identify aspects of PA interventions [5, 24, 28] that may mediate or moderate positive outcomes.
This article was supported by a grant (R01 HD 045693) funded by the National Institute of Child Health and Human Development to Dawn K. Wilson, Ph.D.
- Dusenbury L, Brannigan R, Falco M, Hansen WB: A review of research of fidelity of implementation: Implications for drug abuse prevention in school settings. Health Educ Res. 2003, 18: 237-256. 10.1093/her/18.2.237.View ArticleGoogle Scholar
- McGraw SA, Sellers DE, Stone EJ, Bebchuk J, Edmundson E, Johnson C, Buchman K, Luepker R: Using process data to explain outcomes: An illustration from the Child and Adolescent Trial for Cardiovascular Health (CATCH). Eval Rev. 1996, 20: 291-312. 10.1177/0193841X9602000304.View ArticleGoogle Scholar
- McGraw SA, Sellers DE, Stone EJ, Resnicow K, Kuester S, Fridinger F, Wechsler H: Monitoring implementation of school programs and policies to promote healthy eating and physical activity among youth. Prev Med. 2000, 31: S86-S97. 10.1006/pmed.2000.0648.View ArticleGoogle Scholar
- Durlak J, DuPre E: Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Commun Psychol. 2008, 41: 327-350. 10.1007/s10464-008-9165-0.View ArticleGoogle Scholar
- Griffin S, Wilcox S, Ory M, Lattimore D, Leviton L, Castro C, Carpenter R, Rheaume C: Results from Active for Life process evaluation: Program delivery and fidelity. Health Educ Res. 2009.Google Scholar
- Holiday J, Audrey S, Moore L, Parry-Langdon N, Campbell R: High fidelity? How should we consider variations in the delivery of school-based health promotion interventions?. Health Educ J. 2009, 68: 44-62. 10.1177/0017896908100448.View ArticleGoogle Scholar
- Schneider M, Hall W, Hernandez A, Hindes K, Montez G, Pham T, Rosen L, Thompson D, Volpe S, Zeveloff A, Steckler A: Rationale, design and methods for process evaluation in the HEALTH study. Int J Ob. 2009, 33: S60-S67. 10.1038/ijo.2009.118.View ArticleGoogle Scholar
- Karwalajtys T, McDonough B, Hall H, Guirguis-Younger M, Chambers L, Kaczorowski J, Lohfeld L, Hutchison B: Development of the volunteer peer educator role in a community cardiovascular health awareness program (CHAP): A process evaluation in two communities. J Commun Health. 2009, 34: 336-345. 10.1007/s10900-009-9149-5.View ArticleGoogle Scholar
- Audrey S, Holliday J, Parry-Langdon N, Campbell DT: Meeting the challenges of implementing process evaluation within randomized controlled trials: the example of ASSIST (A stop smoking in schools trial). Health Educ Res. 2006, 21: 366-377. 10.1093/her/cyl029.View ArticleGoogle Scholar
- Young D, Steckler A, Cohen S, Pratt C, Felton G, Moe S, Pickrel J, Johnson C, Grieser M, Lytle LA, Lee JS, Raburn B: Process evaluation results from a school-and community-linked intervention: the Trial of Activity for Adolescent Girls (TAAG). Health Educ Res. 2008, 23: 97-111.Google Scholar
- Devaney B, Rossi P: Thinking through evaluation design options. Children Youth Services Review. 1997, 19: 587-606. 10.1016/S0190-7409(97)00047-9.View ArticleGoogle Scholar
- Helitzer D, Yoon S: Process evaluation of the adolescent social action program in New Mexico. Process evaluation for public health interventions and research. Edited by: Steckler A, Linnan L. 2002, San Francisco: Jossey-Bass, 83-109.Google Scholar
- Helitzer D, Yoon S, Wallerstein N, Garcia-Velarde L: The role of process evaluation in the training of facilitators for an adolescent health education program. J School Health. 2000, 70: 141-148. 10.1111/j.1746-1561.2000.tb06460.x.View ArticleGoogle Scholar
- Viadro C, Earp J, Altpeter M: Designing a process evaluation for a comprehensive breast cancer screening intervention: Challenges and opportunities. Eval Program Plann. 1997, 20: 237-249. 10.1016/S0149-7189(97)00001-3.View ArticleGoogle Scholar
- Baranowski T, Stables G: Process evaluation of the 5-a-day projects. Health Educ Behav. 2000, 27: 157-166. 10.1177/109019810002700202.
- Shadish WR, Cook TD, Campbell DT: Experimental and generalized causal inference. Experimental and quasi-experimental designs for generalized causal inference. 2002, Boston, MA: Houghton Mifflin Company, 1-32.
- Lillehoj C, Griffin K, Spoth R: Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Educ Behav. 2004, 31: 242-257. 10.1177/1090198103260514.
- Saunders R, Ward D, Felton G, Dowda M, Pate R: Examining the link between program implementation and behavior outcomes in the Lifestyle Education for Activity Program (LEAP). Eval Program Plann. 2006, 29: 352-364. 10.1016/j.evalprogplan.2006.08.006.
- Steckler A, Linnan L: Process evaluation for public health interventions and research: An overview. Process evaluation for public health interventions and research. Edited by: Steckler A, Linnan L. 2002, San Francisco, CA: Jossey-Bass, 1-21.
- Brownson R, Fielding J, Maylahn C: Evidence-based public health: A fundamental concept for public health practice. Annu Rev Public Health. 2009, 30: 175-186. 10.1146/annurev.publhealth.031308.100134.
- Cunningham L, Michielutte R, Dignan M, Sharpe P, Boxley J: The value of process evaluation in a community-based cancer control program. Eval Program Plann. 2000, 23: 13-25. 10.1016/S0149-7189(99)00033-6.
- Gittelsohn J, Steckler A, Johnson C, Pratt C, Grieser M, Pickrel J, Stone E, Conway T, Coombs D, Staten L: Formative research in school and community-based health programs and studies: "State of the art" and the TAAG approach. Health Educ Behav. 2006, 33: 25-39. 10.1177/1090198105282412.
- Young D, Saunders R, Johnson C, Steckler A, Gittelsohn J, Saksvig R, Lytle L, McKenzie T: Data to action: Using formative research to develop intervention programs to increase physical activity in adolescent girls. Health Educ Behav. 2006, 33: 97-111. 10.1177/1090198105282444.
- Wilson DK, Griffin S, Saunders RP, Evans A, Mixon G, Wright M, Beasley A, Umstattd MR, Lattimore D, Watts A, Freelove J: Formative evaluation of a motivational intervention for increasing physical activity in underserved youth. Eval Program Plann. 2006, 29: 260-268. 10.1016/j.evalprogplan.2005.12.008.
- Steckler A, Ethelbah B, Martin C, Stewart D, Pardilla M, Gittelsohn J, Stone E, Fenn D, Smyth M, Vu M: Pathways process evaluation results: A school-based prevention trial to promote healthful diet and physical activity in American Indian third, fourth, and fifth grade students. Prev Med. 2003, 37: S80-S90. 10.1016/j.ypmed.2003.08.002.
- Marcoux M, Sallis JF, McKenzie T, Marshall S, Armstrong C, Goggin K: Process evaluation of a physical activity self-management program for children: SPARK. Psychol Health. 1999, 14: 659-677. 10.1080/08870449908410756.
- Salmon J, Ball K, Crawford D, Booth M, Telford A, Hume C, Jolley D, Worsley A: Reducing sedentary behaviors and increasing physical activity among 10-year-old children: An overview and process evaluation of the "Switch-Play" intervention. Health Promot Int. 2005, 20: 7-17. 10.1093/heapro/dah502.
- Wilson DK, Kitzman-Ulrich H, Williams JE, Saunders R, Griffin S, Pate R, Van Horn ML, Evans A, Hutto B, Addy CL, Mixon G, Sisson S: An overview of the "Active by Choice Today" (ACT) trial for increasing physical activity. Contemp Clin Trials. 2008, 29: 21-31.
- Ryan R, Deci E: Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. 2000, 55: 68-78. 10.1037/0003-066X.55.1.68.
- Deci E, Koestner R, Ryan R: A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychol Bull. 1999, 125: 627-668.
- Bandura A: Social Foundations of Thought and Action: A Social Cognitive Theory. 1986, Englewood Cliffs, NJ: Prentice-Hall
- Wilson DK, Evans AE, Williams JE, Mixon G, Sirard J, Pate R: A preliminary test of a student-centered intervention on increasing physical activity in underserved adolescents. Ann Behav Med. 2005, 30: 119-124. 10.1207/s15324796abm3002_4.
- Bartholomew LK, Parcel GS, Kok G, Gottlieb NH: Planning Health Promotion Programs: An Intervention Mapping Approach. 2006, San Francisco, CA: Jossey-Bass
- Saunders RP, Evans MH, Joshi P: Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract. 2005, 6: 134-147. 10.1177/1524839904273387.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.