Looking for an acronym? Please see the acronyms listing.
A statement or argument that provides a justification for a selection, decision, or recommendation.
An optional written response to the site review report, submitted by the EPP after the site review. This response allows an EPP to state whether it agrees with the evaluation team’s findings; however, an EPP cannot include new or additional evidence with the rejoinder.
A principle of evidence quality that implies validity and provides a clear explanation of how any information put forward aligns with standard components. This principle also implies a clear and explicable link between what a particular measure is designed to gauge and the substantive content of the Standard under which it is listed.
The degree to which test scores for a group of test takers are consistent over repeated applications of a measurement procedure. A measure is said to have high reliability if it produces consistent results under consistent conditions and across multiple evaluators.
Reliable and Valid Evidence.
The credibility of the results from assessment and evaluation measures.
Reliable and Valid Model.
For CAEP purposes (p. 17 of the Commission report), a case study that is presented to meet one or more of CAEP’s standards in which key outcomes and processes are gauged, changes and supporting judgments are tracked, and the changes lead to improvements. To be reliable and valid as a model, the case study should have followed CAEP’s guidelines in identifying a worthwhile topic to study, generated ideas for change, defined the measurements, tested solutions, transformed promising ideas into sustainable solutions that achieve effectiveness reliably at scale, and shared knowledge.
The extent to which a measure or result is typical of an underlying situation or condition, not an isolated case. If statistics are presented based on a sample, evidence of the extent to which the sample is representative of the overall population ought to be provided, such as the relative characteristics of the sample and the parent population. If the evidence presented is qualitative (for example, case studies or narratives), multiple instances should be given or additional data shown to indicate the typicality of the chosen examples. CAEP holds that sampling can sometimes be useful and desirable in generating measures efficiently. But in both sampling and reporting, care must be taken to ensure that what is claimed is typical, and the evidence of representativeness must be subject to audit by a third party. In all cases, EPPs should describe in what ways the actual data represent the full population under investigation and in what ways they do not; that is, how respondents supplying the data correspond with the total population.
The extent to which a measure or result is typical of an underlying situation or condition, rather than an isolated case. For example, when statistics are presented based on a sample, evidence of the extent to which the sample is representative of the overall population ought to be provided. If the evidence presented is qualitative, selection of a sample population should demonstrate the typicality of the chosen examples. In both sampling and reporting, care must be taken to ensure that what is claimed is typical. In all cases, EPPs should describe in what ways the actual data represent the full population under investigation and in what ways they do not; that is, how respondents supplying the data correspond with the total population.
Comparison of the number of candidates who entered a program against the number who completed the program and were recommended for certification or licensure. Retention rates may also be collected for the number of new teachers who begin work in schools and who are still working in specified subsequent years.
The continuing accreditation decision made by the Accreditation Council to revoke an accredited status when the Accreditation Council has determined that the educator preparation provider (EPP) no longer meets two or more CAEP standards.
In education, refers both to a challenging curriculum and to the consistency or stringency with which high standards for learning and performance are upheld (adapted from the Western Association of Schools and Colleges glossary).
A tool for scoring candidate work or performances, typically in the form of a table or matrix, with criteria that describe the dimensions of the outcomes down the left-hand vertical axis and levels of performance across the horizontal axis. The work or performance may be given an overall score (holistic scoring), or criteria may be scored individually (analytic scoring). Rubrics are also used for communicating expectations (adapted from the Western Association of Schools and Colleges glossary).