Components

Impact on P-12 Student Learning and Development

4.1 REQUIRED COMPONENT The provider documents, using multiple measures, that program completers contribute to an expected level of student-learning growth. Multiple measures shall include all available growth measures (including value-added measures, student-growth percentiles, and student learning and development objectives) required by the state for its teachers and available to educator preparation providers, other state-supported P-12 impact measures, and any other measures employed by the provider.

Indicators of Teaching Effectiveness

4.2 REQUIRED COMPONENT The provider demonstrates, through structured and validated observation instruments and/or student surveys, that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve. 

Satisfaction of Employers

4.3 REQUIRED COMPONENT The provider demonstrates, using measures that result in valid and reliable data and including employment milestones such as promotion and retention, that employers are satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students.

Satisfaction of Completers

4.4 REQUIRED COMPONENT The provider demonstrates, using measures that result in valid and reliable data, that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective.

Rationale

Standards 1 through 3 address the preparation experiences of candidates, their developing knowledge and skills, and their abilities at the point of program completion. Candidate progress and provider conclusions about the readiness of completers at exit are direct outcomes of the provider’s efforts. By contrast, Standard 4 addresses the results of preparation at the point where they most matter—in classrooms and schools. Educator preparation providers must attend to candidate mastery of the knowledge and skills necessary for effective teaching, but that judgment ultimately depends on the impact completers have on the job with P-12 student learning and development.

The paramount goal of providers is to prepare candidates who will have a positive impact on P-12 students. Impact can be measured in many ways, and Component 4.1 enumerates some of these approaches. The Commission underscores here what is also said in the Recommendations on Evidence section below: multiple measures are needed for this and other forms of accreditation evidence. One approach being adopted by several states and districts is known as “value-added modeling” (VAM). A large research effort supported by the Bill & Melinda Gates Foundation, the Measures of Effective Teaching (MET) project, provides useful guidance about the circumstances under which this model can most validly be used. These findings are consistent with those noted in Preparing Teachers: Building Evidence for Sound Policy (NRC, 2010): “Value-added models may provide valuable information about effective teacher preparation, but not definitive conclusions and are best considered together with other evidence from a variety of perspectives.”1

The Commission recommends that CAEP encourage research on the validity and reliability of VAM for program evaluation purposes.2 Because members expect that methodologies for measuring teacher impact on P-12 student learning and development will continue to evolve and, ideally, improve, the Commission recommends that CAEP also ensure that its standards and processes reflect the profession’s best current thinking on appropriate use of evidence for program improvement and accreditation decisions. In this regard, providers should refer to the Data Task Force report, the American Psychological Association guidance on preparation measures, and the University of Wisconsin–Madison Value-Added Research Center reports regarding use of multiple sources of data, including value-added data, for program evaluation.3

Multiple types of surveys can serve as indicators of teaching effectiveness (Component 4.2), satisfaction of employers (Component 4.3), and satisfaction of completers (Component 4.4). Research by Ferguson, for example, shows that K-12 student surveys are a valid means for understanding aspects of teaching effectiveness.4 The Commission recommends that CAEP consider the development of common survey items and instruments for employers and completers. CAEP also should participate in the validation of student survey instruments for use in teacher pre-service programs.


1NRC (2010).

2University of Wisconsin, Value Added Research Center (2013), Student Growth and Value-Added Information as Evidence of Educator Preparation Program Effectiveness: A Review, Draft prepared for CAEP.

3Ewell, P. (2013). Report of the Data Task Force to the CAEP Commission on Standards and Performance Reporting, CAEP. American Psychological Association (2013). Applying Psychological Science to Using Data for Continuous Teacher Preparation Program Improvement, Draft, Report of a Board of Educational Affairs Task Force. University of Wisconsin, Value Added Research Center (2013).

4Ferguson, Ronald F. (2012). Can student surveys measure teaching quality? Phi Delta Kappan, 94:3, 24-28.

Resources

Standard 4 FAQs

CAEP regularly receives questions from the field on CAEP Standard 4. We have compiled a list of actual frequently asked questions and provided answers for each one. The answers have been shaped by input from stakeholders and education professionals across the field.

Standard 4 Frequently Asked Questions (FAQs)

This resource is intended to provide clarity. Should you have any further questions about Standard 4, please send them to Emerson Elliott, Director of Special Projects at CAEP.

When States Provide Limited Data: Using Standard 4 to Drive Program Improvement

CAEP is aware that Standard 4 represents a challenge for both states and Educator Preparation Providers (EPPs). CAEP is committed to providing guidance to EPPs and states on approaches that can be taken to provide evidence for meeting all four components of Standard 4.

This guidance outlines some options, as well as design concepts, for EPPs that have limited or no access to state data.

Updates on Phasing in Evidence Gathering


At its meeting on October 27, 2015, the Accreditation Council voted in favor of three procedure changes and reaffirmed support for the existing phase-in plan. The Council supported these changes in the interest of easing challenges in gathering the evidence required to meet CAEP Standard 4:

  1. Build on the CAEP “phase-in plan,” already in place, for data that were not typically part of accreditation evidence prior to the 2013 CAEP Standards, 
  2. Provide operational guidance for the requirement that EPPs “meet all components” of Standard 4 for full accreditation by extending the timeline for quality evidence through 2018, 
  3. Systematically classify states by the Standard 4 component information they gather and share with EPPs so that all EPPs within the state will be reviewed by CAEP consistently, and
  4. Permit data that states share with EPPs to be deemed sufficient to meet a standard during the phase-in period.

For more details on the procedure changes, you can view the original proposals in the memorandum the Accreditation Council received.