
AI LABELS STUDENT “AT-RISK”
- Issue: Bias and Discrimination. The AI’s risk label may reflect “algorithmic bias or outdated data”. This aligns with the “Bias and Discrimination” issue, where an AI “may have led to bias or discrimination in relation to an individual or group”.
- Bias/Discrimination Countering: Actively audit and adjust the early-warning model to reduce bias, for example by diversifying or normalizing the data used. This includes checking whether demographic factors unduly influence the “at-risk” label (see the sketch after this list).
- Risk Level: Medium. The incorrect label risks unfair treatment or stigma for the student, but it can be caught and corrected before it causes major harm.
- Decision: Mitigate. We would not simply accept the biased label. Instead, we would review the AI’s criteria and implement the bias-mitigation safeguard above, reducing the chance of the bias recurring.
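
To make the audit step concrete, here is a minimal sketch of one way to check whether the “at-risk” flag rate differs across demographic groups. The column names, sample data, and the four-fifths threshold are assumptions for illustration only, not details of the actual early-warning model.

```python
# Minimal bias-audit sketch. Assumed column names ("group", "at_risk_flag"),
# sample data, and the 0.8 "four-fifths" threshold are illustrative only.
import pandas as pd

def audit_flag_rates(df: pd.DataFrame,
                     group_col: str = "group",
                     flag_col: str = "at_risk_flag") -> pd.Series:
    """Return the share of students flagged 'at-risk' within each group."""
    return df.groupby(group_col)[flag_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group flag rate (1.0 = parity)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical data purely for illustration.
    df = pd.DataFrame({
        "group":        ["A", "A", "A", "B", "B", "B", "B"],
        "at_risk_flag": [1,   0,   0,   1,   1,   1,   0],
    })
    rates = audit_flag_rates(df)
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # four-fifths rule of thumb
        print("Warning: flag rates differ substantially across groups; review the model.")
```

A check like this would run as part of the regular audit; if the ratio falls below the chosen threshold, the model’s criteria and training data would be reviewed before any “at-risk” labels are acted on.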