Looking for an acronym? Please see the acronyms listing.
A numerical relationship constructed to show trends in characteristics or conditions or to monitor progress of a phenomenon. For example, the proportion of candidates in a cohort who complete preparation within a specified time, shown year by year, might be an indicator of EPP outcomes.
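For instance, a year-by-year completion-rate indicator can be computed directly from cohort counts. A minimal sketch in Python; the cohort figures below are invented for illustration:

```python
# Hypothetical cohorts: (year, candidates entering, candidates completing on time)
cohorts = [(2021, 120, 96), (2022, 130, 110), (2023, 125, 100)]

# The indicator is the on-time completion proportion, shown year by year.
for year, entered, completed in cohorts:
    rate = completed / entered
    print(f"{year}: {rate:.0%} completed within the specified time")
```

Tracking the same proportion across successive cohorts is what turns a single statistic into an indicator of trends in EPP outcomes.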
Licensure, certification, or endorsement that signifies successful completion of preparation for P-12 teachers through programs at the baccalaureate or post-baccalaureate levels. Initial licensure programs are designed to prepare candidates who have not yet earned a license to become P-12 teachers.
Initial Review Panel.
A three- to four-person group selected from the Accreditation Council that examines the self-study, site review report, and other accreditation documents related to an educator preparation provider’s (EPP) case for accreditation. These documents include recommendations from the evaluation team about the sufficiency of evidence for each standard, including its recommendations on areas for improvement (AFIs) or stipulations, if any. The Initial Review Panel determines the need for AFIs or stipulations, as well as whether standards are met, and forwards its conclusions to the Joint Review Panel and then to the Accreditation Council. The Joint Review Panel and the Council confirm or amend areas for improvement, stipulations, and standards met or not met. The Accreditation Council makes all final accreditation decisions.
The summative evaluation of a college or university against the standards of an institutional or regional accreditor, such as the Higher Learning Commission.
Standards set by an educator preparation provider (EPP) that reflect its mission and identify important expectations for educator candidate learning that may be unique to the EPP.
Educator preparation providers (EPPs) incorporated in, or primarily operating in, countries outside the United States may seek CAEP accreditation. International institutions must meet all of CAEP’s standards and policies; however, in some cases adaptations may be made to accommodate national or cultural differences while preserving the integrity of the CAEP process (adapted from the Western Association of Schools and Colleges glossary).
Inter-rater reliability is a measure of consistency used to assess the degree to which different judges (or raters) agree in their evaluations (or scoring decisions) of the same phenomenon. Inter-rater reliability is useful because human observers will not necessarily interpret concepts, performances, or scoring categories the same way. If raters do not agree, the effects can be detrimental, suggesting either that the scale is defective or that the raters need to be retrained. Inter-rater reliability is high when reviewers demonstrate that they consistently reach the same or very similar decisions. A formal training and calibration procedure is usually needed to achieve this result, and calibration involves calculating reliability coefficients.
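One widely used reliability coefficient is Cohen’s kappa, which corrects the observed agreement between two raters for the agreement expected by chance alone. A minimal sketch in Python; the rubric levels and ratings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items on a nominal scale."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring ten candidate portfolios on a 3-level rubric.
a = ["met", "met", "unmet", "met", "exceeds", "met", "unmet", "met", "met", "exceeds"]
b = ["met", "met", "unmet", "met", "met",     "met", "unmet", "met", "met", "exceeds"]
print(round(cohens_kappa(a, b), 2))  # → 0.81
```

A kappa near 1.0 indicates strong agreement beyond chance; values near 0 indicate the raters agree no more often than random scoring would, signaling a need for retraining or rubric revision.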