
ALWAYS-WATCHING
- Issue: Lack of Human Intervention. The system automatically flagged private student data without human context, leading to an unnecessary alarm. This fits the “Lack of Human Intervention” issue, where “there has been a lack of human intervention in the decisions made by AI”.
- Human in the Loop: Require that flagged cases (especially sensitive ones, such as potential self-harm) be reviewed immediately by a counselor or staff member before any action is taken. This adds human judgment to interpret ambiguous AI alerts.
- Risk Level: High. Misinterpreting personal content as a self-harm risk can cause serious harm (psychological distress, loss of trust) and disrupt a student’s well-being.
- Decision: Mitigate. Because the AI’s false positive caused needless alarm, we would not accept this level of error. We would add mandatory human review as oversight and possibly reduce the AI’s sensitivity to avoid unwarranted interventions.