
AUTOMATED GRADING MISFIRE
-
Summary: Lack of Human Intervention – The AI grading tool automatically failed a student (an English language learner) after labeling their work as AI-generated, and the teacher/school initially upheld that result without any manual review. No human checked an AI-made decision about a student's grade, which raises serious fairness concerns.
-
Human in the Loop – The grading process should require human oversight, especially before any negative outcome. Teachers or administrators must review every case where the AI flags a submission as non-original before a failing grade is assigned. Reviewers should also account for known bias: AI-text detectors are known to misclassify writing by English language learners at disproportionately high rates. In practice, the school might update its grading policy so that an AI-generated flag prompts further investigation, never an automatic fail; a minimal routing sketch follows.
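
To make that policy concrete, here is a minimal sketch in Python of how flag routing could work. The Submission class, the detector object, and its flags_as_ai_generated method are hypothetical placeholders for whatever tool the school actually uses; the point is only that a flag enqueues human review instead of returning a fail.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Outcome(Enum):
        ACCEPT = auto()
        HUMAN_REVIEW = auto()  # flagged work goes to a person, never to an auto-fail

    @dataclass
    class Submission:
        student_id: str
        text: str

    def route_submission(submission: Submission, detector) -> Outcome:
        # Policy: a detector flag is evidence to investigate, not a verdict.
        if detector.flags_as_ai_generated(submission.text):
            return Outcome.HUMAN_REVIEW
        return Outcome.ACCEPT

The design choice is that no code path maps a detector flag directly to a failing grade; the flag only changes who looks at the work next.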
-
High Risk – Wrongly failing a student can have severe academic and emotional consequences, and if such auto-grading continues unchecked, it could unjustly affect many more students. This is a high risk to student trust, equity, and the school's integrity, and it could expose the school to complaints or legal challenges if students are wrongly penalized.
-
Decision: This situation demands action to ensure fairness in grading. The council would implement safeguards such as mandatory human review and bias checks for the AI tool. Mitigation might include retraining or recalibrating the detector (if possible) to reduce bias against ELL writing, but the essential rule is that an AI's judgment is never accepted without human confirmation; a simple bias check is sketched below. By mitigating the risk, the school protects students from unjust outcomes and maintains confidence in the grading system.
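
As one way to run the bias check named above, the sketch below computes the detector's false-positive rate per student group on a validated sample (submissions whose true origin is known). The record format and group labels are assumptions for illustration; a markedly higher rate for ELL students would confirm the disparate impact described in the summary.

    def false_positive_rate_by_group(records):
        # records: iterable of (group, flagged, actually_ai) tuples for
        # submissions whose true origin is known. A flag on human-written
        # work counts as a false positive.
        counts = {}  # group -> (false_positives, human_written_total)
        for group, flagged, actually_ai in records:
            if actually_ai:
                continue  # only human-written work can yield a false positive
            fp, total = counts.get(group, (0, 0))
            counts[group] = (fp + int(flagged), total + 1)
        return {g: fp / total for g, (fp, total) in counts.items()}

    # Illustrative (fabricated) validation sample:
    sample = [
        ("ELL", True, False), ("ELL", False, False),
        ("non-ELL", False, False), ("non-ELL", False, False),
    ]
    print(false_positive_rate_by_group(sample))
    # {'ELL': 0.5, 'non-ELL': 0.0}

A persistent gap between groups would mean the tool cannot be trusted without the human review step, regardless of any recalibration.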