|The provider demonstrates the impact of its completers on P-12 student learning, classroom instruction and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation.
Impact on P-12 student learning
4.1 The provider documents, using value-added measures where available, other state-supported P-12 impact measures, and any other measures constructed by the provider, that program completers contribute to an expected level of P-12 student growth.
Indicators of teaching effectiveness
4.2 The provider demonstrates, through structured and validated observation instruments and student surveys, that completers effectively apply the professional knowledge, skills and dispositions that the preparation experiences were designed to achieve.
Satisfaction of employers
4.3 The provider demonstrates, using measures that result in valid and reliable data, and including employment milestones such as promotion and retention, that employers are satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students.
Satisfaction of completers
4.4 The provider demonstrates, using measures that result in valid and reliable data, that program completers perceive their preparation was relevant to the responsibilities they confront on the job and that the preparation was effective.
CAEP Commission standards 1 through 3 address the preparation experiences of candidates, their developing knowledge and skills, and their abilities at the point of program completion. Candidate progress and faculty conclusions about the readiness of completers at exit are direct outcomes of the provider’s efforts.
By contrast, Standard 4 addresses the results of preparation programs at the point where they matter: classroom teaching and other educator responsibilities in schools. Knowing results, learning from that knowledge, and feeding the information back into assessment of the preparation experiences are the expected responsibilities of every provider. The Baldrige education award criteria place 45% of their rating points (450 of 1,000) on results, with student results and operational effectiveness accounting for a significant share of those points. For a preparation provider, student results have a dual meaning: first, candidate mastery of the knowledge and skills necessary for effective teaching, and second, teaching that has positive effects on P-12 student learning.
The paramount goal of providers is to prepare candidates who will have a positive impact on P-12 students. Impact can be measured in many ways; one measure being adopted by several states and districts is known as “value-added modeling.” A large Gates-supported research effort, the Measures of Effective Teaching (MET) project, provides useful guidance about the circumstances under which this model can most validly be used. These new findings are consistent with those noted in Preparing Teachers: Building Evidence for Sound Policy (NRC, 2010):[i]
Value-added models may provide valuable information about effective teacher preparation, but not definitive conclusions, and are best considered together with other evidence from a variety of perspectives.
The MET study also provides empirical evidence not previously available about structured teacher observations that employ videotapes and specific evaluation protocols, and it found that “student perception surveys provide a reliable indicator of the learning environment and give voice to the intended beneficiaries of instruction.”[ii] Beyond these sources of evidence, some providers will develop close collaborative relationships with districts in which their completers are employed and construct case studies that examine completers’ impacts on student learning. (NOTE: In addition, the Commission is still considering advice about appropriate conditions for use of evidence, as explained in the penultimate paragraph before Standard 1 on p. 13 of this report.)
Satisfaction measures such as employer surveys can provide useful feedback about completer performance. The Commission recommends that CAEP encourage more consistent use of employer surveys and collaborate with states and other stakeholders to create more descriptive and more reliable instruments. In addition, the actual employment trajectories of completers (their retention, their promotion, their changing responsibilities) are useful indicators of employer satisfaction. Completer surveys are another source of program impact information; they can capture completers’ perceptions of the relevance and utility of their preparation in light of their day-to-day responsibilities.
An exemplary provider will be able to demonstrate superior impact on P-12 students and also the links between program characteristics and P-12 impact. The rationale for this exemplary distinction is that exemplary providers contribute to current P-12 achievement through the work of their own completers and to future P-12 achievement by serving as a model for other providers. (See CAEP Levels of Accreditation in the recommendations, below.)
Examples of Evidence
P-12 student learning
a. Value-added measures of P-12 student learning that can be linked with teacher data
b. State supported measures that address P-12 student learning that can be linked with teacher data
c. Case studies of completers that demonstrate the impacts of preparation on P-12 student learning and can be linked with teacher data
Satisfaction of employers
d. Employer surveys and/or focus groups
e. Completer retention
f. Completer promotion and employment trajectory
Observations and surveys
g. edTPA for in-service teachers (when an in-service version becomes available, or if/when other assessments that provide valid and reliable information about in-service teaching are available)
h. Observations by credentialed evaluators of in-service teachers (e.g., the Classroom Assessment Scoring System (CLASS), developed by Robert Pianta and Bridget Hamre; the Framework for Teaching, developed by Charlotte Danielson)
i. P-12 student surveys
j. Completer surveys and/or focus groups
[i] NRC. (2010). Preparing Teachers: Building Evidence for Sound Policy. Washington, DC: The National Academies Press.