Process evaluation can be used to explain why interventions succeed or fail and whether characteristics or mechanisms involved in a program's implementation potentially mediate or moderate outcomes. In large-scale trials, the importance of monitoring program implementation has been highlighted [1–10], and there is strong evidence that level of implementation affects study outcomes. Implementation monitoring can be done in both a formative and a summative manner. Formative evaluations can be defined as using data to provide ongoing monitoring and quality assessment to maximize the performance of a program [11–14]. Summative evaluations analyze data at the conclusion of an initiative to provide a conclusive rating of the extent to which intended outcomes were achieved and the program was implemented as intended [11, 13, 14]. Another summative purpose of process evaluation is to include level-of-implementation data in the outcome analysis [15, 16].
Evaluations of implementation are especially important given that few studies have achieved full implementation in real-world settings. This is also true of health promotion efforts, as researchers have noted great variability in program implementation and policy adoption in community and school settings [1, 17]. Thus, one purpose of implementation monitoring is to ensure that the originally designed intervention is in fact being implemented, and implemented in a manner consistent with the program theory and plan. In effect, if a complex intervention carried out in a field setting is not carefully monitored and adjusted to stay "on track" with the original plan, many different interventions may end up being implemented. Midcourse changes are therefore designed to increase fidelity, dose, and reach so that researchers can evaluate the intervention as originally planned. Despite the importance of such evaluations, outcome analyses are frequently conducted without an assessment of program implementation. This is often referred to as the "black box" approach to evaluation: examining the outcomes of a program without examining its internal operation. Lack of this knowledge can lead to a "Type III error," the conclusion that a seemingly ineffective program was, in actuality, not implemented as intended [19, 20].
Process evaluation data used for formative purposes during implementation of a fully developed intervention, as described in this study, should be distinguished from process evaluation used for formative evaluation during the developmental phases of an intervention [21–23]. In an example of the latter, Wilson and colleagues conducted a formative evaluation of a motivational PA intervention (Active by Choice Today; ACT). The conceptual framework for the ACT intervention targeted the social environment, cognitive mediators, and motivational orientation related to PA in underserved adolescents. The 8-week program sought to increase moderate-to-vigorous PA (MVPA) among participating youth, and formative evaluation data were collected through daily forms and observational data completed by an independent, objective observer. ACT process evaluation focused on identifying factors in the social environment and curriculum that worked well and/or needed improvement. Most effort went to ensuring that the theoretical underpinnings of the program were maximized and to promoting efficiency by correcting logistical flaws. The process evaluation was used to inform necessary changes to staff training. Specifically, process data indicated that it would be more beneficial for staff to praise students in subtle ways or in settings where other students would not be aware of it, because students reacted less positively to one another when publicly praised by staff. The investigators also learned that training should focus on instructional methods that foster a balance between discipline and nurturing, as well as ways to subtly dismantle cliques.
A growing literature has included process evaluation as a key element in evaluating implementation success in large-scale PA trials. The Pathways initiative, a large-scale, multi-site, 3-year study of a school-based intervention, used process evaluation methods to evaluate implementation of an intervention to lower percent body fat in American Indian children. Pathways applied a multilevel strategy involving individual behavior change and environmental modifications to support those changes. The environmental component included a food service intervention to enhance food staff skills in preparing and serving lower-fat meals. For this component, implementation was measured against various behavioral guidelines (e.g., use of low-fat vendor entrees, offering a choice of fruits and vegetables). In the first year, none of the 12 goals was achieved; in the second year, 6 of 13 goals were met (a new goal had been added); in the third year, 9 of 13 goals were met. These improvements were attributed to performance feedback provided by the evaluators at the end of each semester, an example of effective use of formative process data.
Other large trials have reported summative process evaluations with implications for using process evaluation data formatively. For example, in one investigation of the SPARK program (Sport, Play, and Active Recreation for Kids), a multi-component program that sought to promote PA in elementary school children, process evaluation data were collected to determine the success of implementation. The SPARK curriculum focused on physical education (PE) and self-management (SM), and children participated in either an intervention implemented by PE specialists, an intervention implemented by classroom teachers, or a control condition (usual PE classes). Through direct observation of weekly classroom lessons, it was determined that classroom teachers and PE specialists conducted 63% and 67% of the SM curriculum components, respectively. The small difference in intervention delivery, coupled with the relatively low implementation percentages, suggests consistent contextual implementation barriers that perhaps could have been addressed with timely, formative process evaluation data.
In "Switch-Play," Salmon and colleagues sought to reduce the time primary school children spent in sedentary behaviors and to increase their skills in, enjoyment of, and participation in PA outside of school. The process evaluation indicated an average attendance of 88% among children in the intervention conditions. Classroom activities were completed 92% of the time; however, outside-of-class PA activities and self-monitoring sheets were completed only 57% and 62% of the time, respectively. These data indicate opportunities for improving fidelity to essential program elements, especially for outside-of-class PA.
The purpose of the present study was to demonstrate how program process evaluation was used in a formative manner to improve fidelity and dose (completeness) of implementation, as well as reach into the target population, in the ACT randomized school-based trial from year to year of implementation. The ACT trial is a group-randomized cohort design with three intervention and three comparison schools per year over the course of four years (N = 24 schools, n = 60 6th graders per school). The formative data from each year were used to provide corrective feedback to keep the intervention "on track" and were part of a comprehensive approach to process evaluation for monitoring and assessing program implementation in ACT.