Standards 1 through 3 address the preparation experiences of candidates, their developing knowledge and skills, and their abilities at the point of program completion. Candidate progress and provider conclusions about the readiness of completers at exit are direct outcomes of the provider's efforts. By contrast, Standard 4 addresses the results of preparation at the point where they matter most: in classrooms and schools. Educator preparation providers must attend to candidate mastery of the knowledge and skills necessary for effective teaching, but that judgment ultimately depends on the impact completers have, on the job, on P-12 student learning and development.

The paramount goal of providers is to prepare candidates who will have a positive impact on P-12 students. Impact can be measured in many ways, and Component 4.1 enumerates some of these approaches. The Commission underscores here what is also said in the Recommendations on Evidence section, below: that multiple measures are needed for this and all other accreditation evidence. One approach being adopted by several states and districts is known as "value-added modeling" (VAM). A large research effort supported by the Bill & Melinda Gates Foundation, the Measures of Effective Teaching (MET) project, provides useful guidance about the circumstances under which this model can most validly be used. These findings are consistent with those noted in Preparing Teachers: Building Evidence for Sound Policy (NRC, 2010): "Value-added models may provide valuable information about effective teacher preparation, but not definitive conclusions and are best considered together with other evidence from a variety of perspectives."1
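For context, and as an illustrative sketch rather than a specification endorsed by the Commission, value-added models typically estimate a teacher or program effect on student achievement after conditioning on students' prior achievement and other characteristics. A minimal form, with notation chosen here for exposition only, is:

    A_{it} = \lambda A_{i,t-1} + X_{it}\beta + \theta_{j(i,t)} + \varepsilon_{it}

where A_{it} is the achievement of student i in year t, A_{i,t-1} is prior-year achievement, X_{it} is a vector of student characteristics, \theta_{j(i,t)} is the estimated effect of the teacher (or, by extension, the preparation program) j serving student i in year t, and \varepsilon_{it} is an error term. Estimated effects are then aggregated by preparation program to gauge completer impact; the MET and NRC cautions cited above concern how validly such aggregates can be interpreted.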

The Commission recommends that CAEP encourage research on the validity and reliability of VAM for program evaluation purposes.2 Because members expect that methodologies for measuring teacher impact on P-12 student learning and development will continue to evolve and, it is hoped, improve, the Commission recommends that CAEP also ensure that its standards and processes reflect the profession's best current thinking on the appropriate use of evidence for program improvement and accreditation decisions. In this regard, providers should consult the Data Task Force report, the American Psychological Association's guidance on preparation measures, and the University of Wisconsin-Madison Value-Added Research Center's reports on the use of multiple sources of data, including value-added data, for program evaluation.3

Multiple types of surveys can serve as indicators of teaching effectiveness (Component 4.2), employer satisfaction (Component 4.3), and completer satisfaction (Component 4.4). Research by Ferguson, for example, shows that K-12 student surveys are a valid means of understanding aspects of teaching effectiveness.4 The Commission recommends that CAEP consider developing common survey items and instruments for employers and completers. CAEP should also participate in the validation of student survey instruments for use in pre-service teacher preparation programs.


1 National Research Council (NRC) (2010). Preparing Teachers: Building Evidence for Sound Policy. Washington, DC: The National Academies Press.

2 University of Wisconsin, Value-Added Research Center (2013). Student Growth and Value-Added Information as Evidence of Educator Preparation Program Effectiveness: A Review. Draft prepared for CAEP.

3 Ewell, P. (2013). Report of the Data Task Force to the CAEP Commission on Standards and Performance Reporting. CAEP. American Psychological Association (2013). Applying Psychological Science to Using Data for Continuous Teacher Preparation Program Improvement. Draft report of a Board of Educational Affairs Task Force. University of Wisconsin, Value-Added Research Center (2013), cited in note 2.

4 Ferguson, R. F. (2012). Can student surveys measure teaching quality? Phi Delta Kappan, 94(3), 24-28.