The provider maintains a quality assurance system composed of valid data from multiple measures, including evidence of candidates’ and completers’ positive impact on P-12 student learning and development. The provider supports continuous improvement that is sustained and evidence-based and that evaluates the effectiveness of its completers. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers’ impact on P-12 student learning.

Quality and Strategic Evaluation
5.1 The provider’s quality assurance system demonstrates capacity to address all CAEP standards and investigates the relationship between program elements and candidate outcomes to improve graduates’ impact on P-12 student learning.
5.2 The provider’s quality assurance system relies on relevant, verifiable, representative, cumulative, and actionable measures, and produces empirical evidence that interpretations of data are valid and consistent. The system generates outcomes data that are summarized, externally benchmarked, analyzed, shared widely, and acted upon in decision-making related to programs, resource allocation, and future direction.
5.3 The provider’s quality assurance system is composed of multiple measures that can monitor candidate progress, completer achievements, and the provider’s operational effectiveness. These include measures of program outcomes.
5.4 The provider regularly and systematically assesses performance against its goals and relevant standards, tracks results over time, tests innovations and the effects of selection criteria on subsequent progress and completion, and uses results to improve program elements and processes. Available evidence on academic achievement of completers’ P-12 students is reported, analyzed, and used to improve programs and candidate performance. Leadership at all levels is committed to evidence-based continuous improvement.
5.5 The provider assures that appropriate stakeholders, including alumni, employers, practitioners, school and community partners, and others defined by the provider, are involved in program evaluation, improvement, and identification of models of excellence.
5.6 The provider assures continuing quality of curricula; educators (faculty); facilities, equipment, and supplies; fiscal and administrative capacity; student support services; recruiting and admissions practices; academic calendars, catalogs, publications, grading policies, and advertising; measures of program length and objectives; and student complaints.[i]
Effective organizations rely on evidence-based quality assurance systems characterized by clearly articulated and effective processes for defining and assuring quality outcomes and for using data in a process of continuous improvement. A robust quality assurance system ensures continuous improvement by relying on a variety of measures, establishing performance benchmarks for its measures (with reference to external standards where possible), seeking the views of all relevant stakeholders, sharing evidence widely with both internal and external audiences, and using results to improve policies and practices in consultation with partners and stakeholders.[ii]
Ultimately, the quality of an educator preparation program is measured by the ability of its completers to have a positive impact on P-12 student learning and development.[iii] Program quality and improvement are determined, in part, by the characteristics of candidates that the provider recruits to the field; the knowledge, skills, and professional dispositions that candidates bring to the program and acquire during the program; the relationships between the provider and the schools where its candidates receive clinical training; and subsequent evidence of completers’ impact on P-12 student learning[iv] in the schools where they ultimately teach. To be accredited, a preparation program must meet standards on each of these dimensions and demonstrate success in its own continuous improvement efforts.
Effective quality assurance systems rely on multiple measures and include a clearly articulated and effective process for defining and assuring quality outcomes. Reasons for the selection of each measure and the establishment of performance benchmarks for individual and program performance, including external points of comparison, are made clear. Providers show evidence of the credibility and dependability of the data that inform their quality control systems, as well as evidence of ongoing investigation into the quality of evidence and the validity of their interpretations of that evidence. Providers must present empirical evidence of each measure’s psychometric and statistical soundness (reliability and validity).[v]
Continuous improvement systems enable programs to quickly develop and test prospective improvements, deploy what is learned throughout the organization, and add to the profession’s knowledge base and repertoire of practice.[vi] CAEP should encourage providers to develop new models for evaluating and scaling up effective solutions to problems in educator preparation. Research and development in the accreditation framework can deepen the knowledge of existing best practices and provide models of emerging innovations to transform educator preparation.[vii]
A provider must have the capacity to support the desired program and candidate outcomes.[viii] Core program elements include curriculum, faculty/educators, administrative and financial support, and candidate services that support candidates’ ability to positively impact P-12 student learning. The adequacy and effectiveness of these elements in relation to candidate outcomes must be investigated as part of the quality assurance system.
Examples of Evidence
Quality assurance system
a. The quality assurance system demonstrates capabilities to compile, store, access, manage, and analyze data from diverse sources (a schematic sketch of such a system follows this list), including:
- multiple indicators from Standards 1, 2, and 3 of candidates’ developing knowledge and skills, from recruitment and admissions, through the preparation experience, to measures that inform provider decisions at candidate completion, including assessments of candidate performance such as licensure tests and evaluations of student teaching/internships;
- feedback from Standard 4 on completers: employer satisfaction surveys, completer retention and employment milestones, state data on the academic achievement of completers’ P-12 students, completers’ own evaluations of their level of preparedness, and other sources that provide useful information on professional performance; and
- documentation of program outcomes from Standard 5, such as the proportions of a candidate cohort who complete the program, who are licensed or certified, and who are placed in education positions for which they prepared, as well as the student loan default rate.
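The capabilities described in (a) imply records that link evidence across Standards 1 through 5 for each candidate. A minimal, purely illustrative Python sketch follows; every field name here is hypothetical, and a real system would populate such records from institutional, survey, and state data sources:

    # Illustrative sketch only; all field names are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CandidateRecord:
        candidate_id: str
        # Standards 1-3: indicators from admission through completion
        admission_gpa: float
        licensure_test_score: Optional[int] = None
        student_teaching_rating: Optional[float] = None
        # Standard 4: post-completion feedback
        employer_satisfaction: Optional[int] = None  # e.g., a 1-5 survey scale
        employed_in_field: Optional[bool] = None
        # Standard 5: program outcome flags
        completed: bool = False
        licensed: bool = False

    def completion_rate(cohort: list[CandidateRecord]) -> float:
        """Proportion of a cohort completing the program (a 5.3 outcome measure)."""
        return sum(r.completed for r in cohort) / len(cohort)

Whatever the storage technology, the point is the same: the system must connect admission, preparation, completion, and post-completion evidence at the candidate level so that cohort-level outcomes can be computed and tracked over time.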
Use of quality assessment and descriptive measures
b. Practices for investigating the quality of data sources and efforts to strengthen and improve the overall quality assurance system
c. Processes for testing the reliability and validity of measures and instruments used to determine candidates’ progress through the preparation program, at completion of the program, and during the first years of practice. The evidence should meet accepted research standards for the validity and reliability of comparable measures and should, among other things, rule out alternative explanations or rival interpretations of reported results. (A computational sketch of several of the statistics named below follows the two lists.)
- Validity can be supported through evidence of:
  - Expert validation of the items in an assessment or rating form (content validation)
  - Agreement among findings of logically related measures (convergent validity)
  - A measure’s ability to predict performance on another measure (predictive validity)
  - Expert validation of performance or of artifacts (expert judgment)
  - Agreement among coders or reviewers of narrative evidence
- Reliability in its various forms can be supported through evidence of:
  - Agreement among multiple raters of the same event or artifact (or of the same candidate at different points in time)
  - Stability or consistency of ratings over time
  - Internal consistency of measures
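Where providers quantify these properties, standard statistics correspond to several of the items above: Pearson correlation for convergent or predictive validity, Cohen’s kappa for agreement among raters, and Cronbach’s alpha for internal consistency. A minimal Python sketch follows; all scores in it are invented for illustration:

    # Illustrative only: every score below is invented.
    from collections import Counter
    from statistics import correlation, variance  # correlation() requires Python 3.10+

    def cohens_kappa(a, b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        freq_a, freq_b = Counter(a), Counter(b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
        return (observed - expected) / (1 - expected)

    def cronbach_alpha(items):
        """Internal consistency; `items` holds one list of scores per rubric item."""
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]
        return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

    # Convergent/predictive validity: correlate two logically related measures,
    # e.g., a clinical rating and a later licensure-test score.
    clinical = [3.2, 3.8, 2.9, 3.5, 4.0, 3.1, 3.7, 2.8]
    licensure = [168, 181, 160, 175, 188, 165, 179, 158]
    print(f"Pearson r = {correlation(clinical, licensure):.2f}")

    # Inter-rater reliability: two raters scoring the same ten artifacts (1-3 rubric).
    print(f"kappa = {cohens_kappa([1,2,2,3,3,1,2,3,2,1], [1,2,3,3,3,1,2,2,2,1]):.2f}")  # ~0.70

    # Internal consistency: three rubric items scored for five candidates.
    print(f"alpha = {cronbach_alpha([[3,4,2,5,4], [2,4,3,5,3], [3,5,2,4,4]]):.2f}")  # ~0.87

Such statistics are starting points, not substitutes for the expert-judgment and content-validation evidence listed above; content validity in particular rests on documented review by qualified experts rather than on a computed index.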
d. Documentation that data are shared with both internal and external audiences and used for program improvement.
Continuous Improvement Process
e. Documentation of innovations that have been tested and improvements that have been made
f. Examples of leadership commitment to continuous improvement such as planning and implementing change
g. Documentation of stakeholder involvement in the provider’s assessment of the effectiveness of programs and completers
h. Curriculum that reflects current needs in P-12 schools as well as national and state P-12 standards and/or college- and career-ready standards
i. Quality of faculty members and/or other staff, including relevant qualifications and experience such as academic credentials; P-12 teaching experience and involvement in P-12 schools and districts; and course evaluations by candidates, teaching awards, or P-12 educator feedback that indicate their effectiveness as teachers
j. Facilities that support teaching and learning
k. Fiscal and administrative resources that support programs and P-12 school partnerships; that develop expertise in new assessments (e.g., edTPA, teacher work samples); that support professional development for content-area scholarship and expertise in new technologies, pedagogies, and curriculum (e.g., Common Core State Standards); and that support collaborative inquiry to make decisions regarding priorities and their implementation
l. Candidate support services such as academic advising and counseling center services
m. Provider’s recruiting and admissions policies and practices, academic calendars, catalogs, publications, grading, and advertising
n. Information that describes the length and objectives of programs
o. Policies for handling candidate complaints, and examples of complaints and their disposition
p. Review of any state actions on the institution or program, or any concerns that have come to the state’s attention
[i] U.S. Department of Education regulations (34 CFR 602) require accreditors to include these resources, services, practices, and communications in their standards.
[ii] Ruben, B. R. (2010). Excellence in higher education guide: An integrated approach to assessment, planning, and improvement in colleges and universities. Washington, DC: National Association of College and University Business Officers.
Baldrige Performance Excellence Program. (2011). 2011-2012 Education criteria for performance excellence. Gaithersburg, MD: Author.
[iii] The use of “development” is based on InTASC’s Standard #1: Learner Development. The teacher understands how learners grow and develop, recognizing that patterns of learning and development vary individually within and across the cognitive, linguistic, social, emotional, and physical areas, and designs and implements developmentally appropriate and challenging learning experiences.
[iv] National Research Council. (2010). Preparing teachers: Building evidence for sound policy. Washington, DC: The National Academies Press.
Bransford, J., Darling-Hammond, L., & LePage, P. (2005). Introduction. In L. Darling-Hammond & J. Bransford (Eds.), Preparing teachers for a changing world: What teachers should learn and be able to do (pp. 1-39). San Francisco, CA: Jossey-Bass.
Zeichner, K. M., & Conklin, H. G. (2005). Teacher education programs. In M. Cochran-Smith, & K. M. Zeichner (Eds.), Studying teacher education (pp. 645-735). Mahwah, NJ: Lawrence Erlbaum Associates.
[v] Ewell, P. (2012). Recent trends and practices in accreditation: Implications for the development of standards for CAEP. Washington, DC: CAEP.
[vi] Langley, G. L., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). San Francisco, CA: Jossey-Bass.
Bryk, A. S., Gomez, L. M., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching. Essay retrieved from http://www.carnegiefoundation.org/spotlight/webinar-bryk-gomez-building-networked-improvement-communities-in-education
[viii] Baldrige Performance Excellence Program. (2011).
Middle States Commission on Higher Education. (2010). Handbook for periodic review reports. Philadelphia, PA: Author.
Western Association of Schools and Colleges. (2002). Evidence guide: A guide to using evidence in the accreditation process: A resource to support institutions and evaluation teams. Alameda, CA: Author.