
AI KITCHEN ASSISTANT ERROR
-
Summary: Inappropriate Interactions – The AI cooking assistant gave incorrect and unsafe advice: a wrong poultry cooking temperature. Had the teacher not caught it, this misinformation could have led to undercooked food and a health hazard. It is an AI error that crosses into safety-risk territory, making it an inappropriate output for a classroom tool.
-
Training/Awareness Activities – The school should train students and staff to treat AI-provided instructions with caution, especially on safety-critical matters. Culinary students, for example, should be taught to verify cooking-safety information against authoritative sources (such as the food-safety guidelines posted in class) rather than trusting the AI blindly. Teachers, likewise, must understand the AI's limitations and actively supervise its use, keeping a human in the loop for safety decisions. As a further safeguard, the school could vet such an assistant before deployment through rigorous testing or a vendor accuracy check to confirm that its advice is reliable.
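A human-in-the-loop policy like the one above could be enforced mechanically. The sketch below is a minimal illustration, not a real product feature: the keyword list and the `needs_teacher_review` function are assumptions chosen for this example, and a deployed filter would need a far more robust topic classifier.

```python
# Minimal sketch of a human-in-the-loop filter for safety-critical topics.
# The keyword list is an assumption for illustration, not an exhaustive set.
SAFETY_CRITICAL_KEYWORDS = {
    "temperature", "raw", "undercooked", "doneness",
    "bacteria", "spoiled", "allergy",
}

def needs_teacher_review(question: str) -> bool:
    """Flag questions that touch food-safety topics for human verification."""
    words = set(question.lower().split())
    return bool(words & SAFETY_CRITICAL_KEYWORDS)
```

With such a flag, the assistant could route answers on flagged topics to the teacher for sign-off instead of presenting them directly to students.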
-
High Risk – Giving wrong food-safety information is dangerous: if the error goes uncaught, students or consumers could get food poisoning. The risk is high because it directly threatens health and safety, and the near miss in this scenario underscores how severe the outcome could have been without teacher oversight.
-
Decision: The school would take action to prevent any chance of harmful misinformation; accepting this risk is not viable when student health is at stake. Mitigation steps might include disabling or restricting the AI's advice on critical safety topics, improving its knowledge base, and retraining everyone to verify critical information. By mitigating – for instance, by always cross-checking the AI's cooking instructions against standard safe temperatures – the school ensures that classroom technology enriches learning without compromising safety.
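The cross-check described above could itself be automated as a guardrail. This is a sketch only: the function name and category keys are assumptions, while the temperature floors follow published USDA minimum internal temperature guidelines (165°F for poultry, 160°F for ground meats, 145°F for whole cuts with rest time).

```python
# Sketch of a guardrail that cross-checks AI-suggested cooking temperatures
# against standard minimum internal temperatures (values per USDA guidance;
# the dictionary keys and function name are assumptions for this example).
USDA_MIN_TEMP_F = {
    "poultry": 165,      # chicken, turkey, duck
    "ground_meat": 160,  # ground beef, pork, lamb
    "whole_cuts": 145,   # steaks, chops, roasts (plus rest time)
}

def check_ai_advice(food_category: str, suggested_temp_f: float) -> bool:
    """Return True only if the AI's suggested temperature meets the minimum."""
    minimum = USDA_MIN_TEMP_F.get(food_category)
    if minimum is None:
        # Unknown category: fail closed and require human verification.
        return False
    return suggested_temp_f >= minimum
```

Under this check, an unsafe suggestion such as 140°F for poultry would be rejected before it ever reaches a student, and anything outside the known categories would default to teacher review.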