GROUP PROJECT DIVIDE

  • Issue: Inappropriate Interactions. The AI tool exaggerates student behavior in official discipline reports, producing incorrect content about students (e.g., overstating defiance). That makes its output an inappropriate AI output.

  • Human in the Loop: Require that all AI-generated discipline reports be reviewed by a human (an administrator or counselor) before being finalized. This gives staff a chance to catch exaggerations or misinterpretations.

  • Risk Level: Medium. Exaggerated reports could unfairly label students and damage their records, but human oversight can catch and correct errors before they are finalized.

  • Decision: Mitigate. Rather than accept the reports as-is, we would add an oversight step: a staff member reviews every AI-written discipline summary, so false accusations never become official.
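The mitigation above can be sketched as a simple review gate. This is a hypothetical illustration, not part of the original project: the `DisciplineReport` class, `Status` states, and `finalize` function are all assumed names. The key property is that every AI-generated report starts as pending and can only become official through an explicit human action, which may replace the AI text with a corrected summary.

```python
# Hypothetical sketch of a human-in-the-loop review gate for AI-written
# discipline reports. All names here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Status(Enum):
    PENDING_REVIEW = "pending_review"  # AI draft; not yet official
    APPROVED = "approved"              # human accepted the AI text as-is
    REVISED = "revised"                # human replaced the AI text


@dataclass
class DisciplineReport:
    student_id: str
    ai_summary: str
    status: Status = Status.PENDING_REVIEW
    final_summary: Optional[str] = None  # only set once a human acts


def finalize(report: DisciplineReport,
             reviewer_summary: Optional[str] = None) -> DisciplineReport:
    """A staff member must call this before the report becomes official.

    If the reviewer supplies a corrected summary, it replaces the AI text;
    otherwise the AI text is approved as written.
    """
    if reviewer_summary is not None:
        report.final_summary = reviewer_summary
        report.status = Status.REVISED
    else:
        report.final_summary = report.ai_summary
        report.status = Status.APPROVED
    return report
```

In this sketch, any code that publishes a report would first check that its status is no longer `PENDING_REVIEW`, which enforces the "review before finalizing" rule described above.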

©2024 by Digital Brush Studios L.L.C.
