The Commission’s draft includes five standards and two additional recommendations that address CAEP Board responsibilities for accreditation and accountability. Each of the five standards is followed by a rationale, and then by examples of evidence. Public comments are solicited on the standards, the examples of evidence, and the additional recommendations. The public comment website, http://standards.caepnet.org (available February 22), is arranged to guide reviewers through the recommendations serially.
Structure of the Standards
The Commission has adopted a structure for the standards that was proposed by President Cibulka during its first meeting. The first part of that structure is organized around the three areas of teacher preparation identified by the National Academy of Sciences 2010 report, Preparing Teachers: Building Evidence for Sound Policy. The Academy panel sifted through hundreds of research studies from recent decades and, not surprisingly, concluded that more research is needed to provide sound evidence about the impact of particular aspects of preparation. But it found that existing research provides some guidance: content knowledge, field experience, and the quality of teacher candidates “are likely to have the strongest effects” on outcomes for students (p. 180).
Adapting that guidance to its task, the Commission’s first three recommended standards are:
Content and Pedagogical Knowledge
Clinical Partnerships and Practice
Candidate Quality, Recruitment, and Selectivity
The Commission also explored important functions of an accrediting body that are fashioned around attributes of high-performing education organizations. These are supported by research on effective management, and, especially, the Baldrige education award criteria, and also by recent trends and new approaches among accreditors. The fourth and fifth standards and additional recommendations for the CAEP Board are built on these sources:
Standard 4: Program Impact
Standard 5: Provider Quality, Continuous Improvement, and Capacity
Recommendation on Annual Reporting and CAEP Monitoring
Recommendation on Levels of Accreditation
These groupings serve to structure the draft recommendations that immediately follow the comments on evidence, below.
Evidence That Standards Are Met
President Cibulka’s charge to the Commission gave equal weight to “essential standards” and to “accompanying evidence” indicating that standards are met. The additional rigor that CAEP has committed itself to apply is often found in the evidence rather than in the language of standards. In each of the Commission’s draft standards there is a concluding section providing “examples of evidence.” The Commissioners have identified these examples during their work over the past eight months and seek public comments on them as the next step toward final recommendations later this year.
In an ideal world, educator preparation accreditation would draw its evidentiary data from a wide array of sources that have different qualitative characteristics from many of those currently available. There would be elements of preparation that are quantified with common definitions or characteristics (e.g., different forms or patterns of clinical experiences) that everyone would understand and that providers would use in their own data systems. There would be comparable experiences in preparation that providers as well as employers, state agencies, and policymakers agree are essential. There would be similar requirements across states for courses, experiences, and licensure. There would be a few universally administered examinations that serve as strong anchors for judgments about effective preparation and that are accepted as gateways to preparation programs, or employment, or promotion.
Educator preparation has few close approximations of such an ideal system. However, Commission members are optimistic that advances in the quality of evidence are at hand. Of the many arguments that might be made in defense of that optimism, three stand out. The current policy interest in well-prepared teachers and leaders is probably higher than it has ever been, especially in states. In addition, the U.S. Department of Education’s Institute of Education Sciences is supporting randomized controlled trials that are examining elements of preparation, including selection and clinical experiences. And the Gates Foundation’s “Measures of Effective Teaching” project has recently concluded a large research study of instruments used to evaluate teacher performance, some or all of which might be adapted to serve as preservice measures.
As the Commission’s recommendations are put into place by CAEP, the years immediately ahead should be ones of substantial, even order-of-magnitude, advances in access to sound evidence. Indeed, the examples that the Commission has selected for this report on its draft recommendations amply illustrate this position.
Among the examples are ones that would seem familiar to any accredited provider.
See Standard 1, example a (noted as 1.a), state licensure exams; 1.b, grade point average (GPA) in coursework related to the area of teaching; 2.h, video analysis of a candidate’s teaching; 3.e, teacher work samples and Renaissance project portfolios; 4.d, employer surveys; 5.a, a quality assurance system with broad capacity to compile, store, access, manage, and analyze data, and also 5.a, feedback from completers.
There are examples of familiar forms of evidence applied more rigorously.
Illustrations found among the examples include 1.a, a licensure pass rate of 80 percent on a “common cut-score across states,” within two administrations; and 3.i, general education and content course grades with at least a 3.0 average and 3.5 in practica courses. For admissions, minimum criteria are built into component 4 of standard 3: a GPA minimum of 3.0 and average cohort performance on standardized admissions tests in the top third of national test pools.
Some examples explicitly anticipate the emergence of additional measures or new assessments.
1.a provides a note that CAEP should work with states to develop and employ new or revised licensure tests; 1.e lists P-12 student surveys of preservice candidates, and 1.f and 3.e list the Stanford/AACTE “edTPA” assessment, now being piloted; and 4.g includes edTPA “for in-service teachers (when an in-service version becomes available).” Also, component 3.4 contains, as an option for provider-established admissions criteria, “a model that predicts effective teaching” and measures the results in reliable and valid ways; and, similarly, an illustration of evidence for P-12 student learning in 4.c is “case studies of completers that demonstrate the impacts of preparation on P-12 student learning.”
And the Commission recommends some evaluation data strategies that would be new to accreditation.
2.a, 2.b, and 2.c on clinical partnerships call for evidence of understanding, data sharing, tracking of hiring patterns, and action indicating combined resource allocation and joint decision-making. Standard 3 on candidate quality includes a strategic recruitment plan (3.a) with goals, evidence that progress is monitored, and use of the results for action. Standard 5 requires program outcome measures of graduation rates, candidate ability to meet licensing requirements, candidate hiring in the positions for which they prepared, and student loan default rates.
Another characteristic of the evidence examples is that they differ in level of specificity. Some are explicit performance measures (e.g., a state licensure test, a particular cut score on a test), while others describe inputs (e.g., coursework on assessment, embedding assessment topics in content and methods courses). Some recommendations are outlined in conceptual terms (e.g., evidence of tracking and sharing data with school district partners). Some measures give the appearance of precision (e.g., completion rates, placement rates), but anyone familiar with longstanding debates over the “Title II” preparation data reporting to the U.S. Department of Education is aware that every term must be defined and respondents trained if the results are to be consistent.
As new and better evidence becomes available, CAEP must be committed to using that evidence appropriately in making accreditation decisions. In addition, it should expect providers to take responsibility for examining the quality of evidence on which they rely—in part to make their case for accreditation but, routinely, for continuous improvement of their own programs. As the Commission moves into the final stages of its work, public comments on the examples of evidence contained in this report will be a critical source of counsel. Also, President Cibulka has made arrangements for additional technical advice to the Commission on appropriate conditions for the use of various kinds of evidence, on accreditation decision rules, and on threshold requirements that are developed for each standard and its components. The decision rules may require adaptation for providers operating in different states with differing approaches to constructing important performance indicators. The rules will need to be developmental and flexible enough to accommodate changes as the evidence measures change.
Providers, the public, and policymakers all need to perceive CAEP decisions as credible. The evidentiary base available to CAEP must improve, and it will. Stronger evidence, which CAEP will help generate, will provide a more solid foundation for the professional judgments reached in CAEP’s accreditation decisions.