Where possible, nationally standardized examinations from publicly recognized third-party vendors for the EMS professions will be used, either alone or in conjunction with locally developed examinations, as summative final examinations. These exams are internally referred to as Readiness Exams. They provide a point-biserial correlation coefficient, a Difficulty Index, or both for each question used. These data are specific to the institution’s students and program cohorts, but are also provided for the entire pool of test-takers for comparative purposes.
The high reliability and predictive power of nationally standardized examinations provide value to both the institution and individual students. This value is judged to outweigh the drawbacks that accompany the use of standardized exams, including the inability to control or restrict the use of any individual question on the exam. However, faculty are encouraged to run any available item-analysis reports and, on behalf of the institution, flag items that appear to fall outside the following Item Analysis Thresholds:
1) Items on the Readiness Exams are flagged for review if the difficulty index for an item is greater than .93 or less than .40.
2) Items on the Readiness Exams are flagged for review if the point-biserial correlation coefficient for any of an item’s incorrect responses is equal to or greater than that of its correct response, or, when only a single score is calculated, if the point-biserial correlation falls below .33.
3) Items on the Readiness Exams are flagged for review if the Discrimination Index for the item falls below .33.
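The three thresholds above can be sketched as a simple screening routine. This is an illustrative example only: the function and argument names (`flag_item`, `difficulty`, `correct_pbis`, `distractor_pbis`, `discrimination`) are hypothetical and do not reflect any vendor's actual report schema.

```python
# Illustrative sketch: flag a Readiness Exam item against the thresholds above.
# All names here are hypothetical; real vendor reports will differ.

def flag_item(difficulty, correct_pbis, distractor_pbis, discrimination=None):
    """Return the list of reasons (possibly empty) an item is flagged for review."""
    reasons = []
    # 1) Difficulty index outside the .40-.93 range
    if difficulty > 0.93 or difficulty < 0.40:
        reasons.append("difficulty outside .40-.93")
    # 2) Any distractor's point-biserial equal to or greater than the keyed
    #    response's, or (single-score reports) point-biserial below .33
    if distractor_pbis and max(distractor_pbis) >= correct_pbis:
        reasons.append("distractor point-biserial >= keyed response")
    if not distractor_pbis and correct_pbis < 0.33:
        reasons.append("point-biserial below .33")
    # 3) Discrimination Index below .33, when one is reported
    if discrimination is not None and discrimination < 0.33:
        reasons.append("discrimination index below .33")
    return reasons

# An item that is both too easy and has a strong distractor trips two flags:
print(flag_item(0.95, 0.20, [0.25, 0.10, 0.05]))
```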
Where standardized exams are not used, item analysis will be conducted on final examinations for each course no less than annually, and thresholds applied to flag questions for removal. When examinations are under full institutional control, statistics provided by the LMS may be used to apply these thresholds. In some cases, when there is no automated calculation of a Discrimination Index or comparable statistic, it may be necessary to analyze raw data using statistical analysis software such as SPSS, NCSS, or Real Statistics. In these cases, the default discrimination cutoff of the package employed will be used; this default is commonly set at .27. It is recognized that the statistical approaches used to calculate a Discrimination Index or comparable statistic may differ slightly among software providers, but it is assumed the results will be sufficiently similar to allow these thresholds to be used interchangeably. In these cases, questions falling beyond the thresholds will be flagged for review by the Teaching Team, who may take action at their discretion.
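When raw data must be analyzed by hand, one common approach (and a plausible basis for the .27 default mentioned above) is the classical upper-lower extreme-groups Discrimination Index: compare the proportion of correct responses in the top and bottom 27% of examinees ranked by total score. The sketch below assumes simple 1/0 item scoring and is not tied to any particular package's implementation.

```python
# Illustrative sketch: a classical upper-lower (27%) Discrimination Index
# computed from raw responses when the LMS provides no statistic.

def discrimination_index(item_correct, total_scores, fraction=0.27):
    """item_correct: 1/0 per student for this item; total_scores: exam totals.

    Returns p(high group correct) - p(low group correct), ranging -1 to 1.
    """
    n = len(total_scores)
    k = max(1, round(n * fraction))          # size of each extreme group
    order = sorted(range(n), key=lambda i: total_scores[i])
    low, high = order[:k], order[-k:]        # bottom and top scorers
    p_high = sum(item_correct[i] for i in high) / k
    p_low = sum(item_correct[i] for i in low) / k
    return p_high - p_low
```

For example, an item answered correctly only by the top half of a ten-student cohort yields an index of 1.0, well above any cutoff; an item with an index below the package default (commonly .27) would be flagged for Teaching Team review.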
The primary LMS for advanced EMS programs is navigate2.jblearning.com. Jones & Bartlett provides the following definitions for the statistics to which the previously mentioned thresholds may be applied:
Discrimination index – this is the correlation between the score for this question and the score for the whole quiz. That is, for a good question, you hope that the students who score highly on this question are the same students who score highly on the whole quiz. Higher numbers are better.
Discriminative efficiency – another measure that is similar to Discrimination index.
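The item-total correlation described in the definition above can be computed directly when only raw scores are available. This is a generic sketch of a Pearson correlation between item score and whole-quiz score, not Jones & Bartlett's internal calculation; the function name is hypothetical.

```python
# Illustrative sketch: the correlation between the score for one question
# and the score for the whole quiz (higher is better, per the definition above).
from statistics import mean, pstdev

def item_total_correlation(item_scores, quiz_scores):
    """Pearson correlation between per-student item scores and quiz totals."""
    mi, mq = mean(item_scores), mean(quiz_scores)
    cov = mean((i - mi) * (q - mq) for i, q in zip(item_scores, quiz_scores))
    return cov / (pstdev(item_scores) * pstdev(quiz_scores))
```

A strongly positive value means the students who got this question right are largely the same students who scored well overall, which is exactly what a good question should show.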
It is recognized that standardized examinations intended to predict a student’s likelihood of success on professional licensing examinations will necessarily include trial questions that do not affect a student’s immediate score, but are instead being studied for possible future use within the exam. Instructors may, at their discretion, include trial questions in institutionally produced exams that have no impact on a student’s grade.