When designing AI solutions, it’s essential to be aware of common pitfalls that can negatively impact user experience, security, and trust.
Recognising and proactively addressing these pitfalls helps maintain high-quality AI solutions that are accurate, transparent, secure, and user-friendly.
| Pitfall | Example | Prevention |
|---|---|---|
| Hallucinated information | AI shows incorrect vessel arrival times without actual port data. | Always validate AI outputs against authoritative backend data sources, clearly marking or excluding uncertain data. |
| Hidden automation | AI automatically approves shipment rerouting without clearly informing the logistics coordinator. | Require explicit user confirmation for critical automated actions and follow clear authorisation policies. |
| Unexplained AI decisions | AI rejects a preferred carrier for a shipment without giving the coordinator any reasoning for the choice. | Include clear and accessible explanations for AI-driven decisions, linking to detailed reasoning and underlying data. |
| Persistent biases | AI repeatedly recommends certain suppliers due to biased historical purchasing data. | Regularly audit training data for fairness, diversify inputs, and transparently disclose recommendation criteria. |
| Overgeneralisation | AI suggests identical inventory replenishment levels regardless of seasonal demand fluctuations. | Contextualise AI recommendations based on roles, logistics tasks, real-time data, and specific contexts. |
| Data privacy ambiguity | AI uses customer data for personalized tracking without clearly communicating privacy implications. | Explicitly request user consent, transparently communicate data usage, and provide clear privacy preference settings. |
| Unrecoverable errors | AI misinterprets a user’s voice request without clearly guiding the user on how to correct it. | Design clear interactions with explicit prompts and easy correction or error recovery paths. |
| Friction in escalation paths | Users struggle to smoothly transition from AI chatbot interactions to human support. | Clearly and proactively present easy escalation options and ensure smooth hand-off experiences. |
| Security shortcuts | AI chatbot inadvertently reveals sensitive shipment information without proper authentication. | Enforce secure, role-based authentication and clearly indicate when sensitive actions require additional security steps. |
| Inconsistent AI personality and tone | AI conversations abruptly shift from formal to overly casual, confusing users and reducing trust. | Define and consistently apply a conversational style guide aligned with user expectations and company tone. |
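The first pitfall, hallucinated information, can be guarded against in code. Below is a minimal sketch of validating an AI-suggested vessel ETA against an authoritative backend value before showing it to the user; the function name, tolerance, and data shapes are illustrative assumptions, not an existing API.

```python
from datetime import datetime, timedelta
from typing import Optional


def validate_arrival_time(
    ai_eta: datetime,
    backend_eta: Optional[datetime],
    tolerance: timedelta = timedelta(hours=2),
) -> Optional[datetime]:
    """Return an ETA only when authoritative port data supports it.

    If no backend record exists, the uncertain AI value is excluded
    rather than displayed; if the AI value drifts beyond the tolerance,
    the authoritative value is preferred.
    """
    if backend_eta is None:
        return None  # no authoritative data: exclude, don't guess
    if abs(ai_eta - backend_eta) > tolerance:
        return backend_eta  # large drift: trust the backend source
    return ai_eta  # AI value is consistent with port data
```

The key design choice is that the function can return `None`: the UI layer then marks the field as unavailable instead of presenting an unverified number.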
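Preventing hidden automation means a critical action never executes without an explicit human approval. A minimal sketch of such a confirmation gate, using a hypothetical reroute proposal (the types and messages are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RerouteProposal:
    shipment_id: str
    new_route: str
    reason: str


def apply_reroute(proposal: RerouteProposal, confirmed_by: Optional[str]) -> str:
    """Apply an AI-proposed reroute only after explicit confirmation.

    `confirmed_by` identifies the logistics coordinator who approved the
    action; without it the proposal is queued for review, never
    auto-applied silently.
    """
    if confirmed_by is None:
        return f"PENDING: reroute of {proposal.shipment_id} awaits approval"
    return (
        f"APPLIED: {proposal.shipment_id} -> {proposal.new_route} "
        f"(approved by {confirmed_by})"
    )
```

Making the approver a required part of the call signature keeps the authorisation policy visible at every call site.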
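The security-shortcuts row implies a role check before a chatbot reveals shipment details. A sketch of a simple role-based gate, assuming an illustrative permission map and field names:

```python
# Hypothetical mapping of roles to the shipment fields they may see.
ROLE_PERMISSIONS = {
    "logistics_coordinator": {"shipment_status", "shipment_contents"},
    "customer": {"shipment_status"},
}


def chatbot_answer(role: str, field: str, shipment: dict) -> str:
    """Reveal a shipment field only if the caller's role permits it.

    Unauthorised requests get an explicit message so the user knows an
    additional security step is required, rather than a silent failure.
    """
    allowed = ROLE_PERMISSIONS.get(role, set())
    if field not in allowed:
        return "This information requires additional authentication."
    return str(shipment.get(field, "unknown"))
```

Unknown roles fall through to an empty permission set, so the default is always to withhold rather than disclose.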
Please write to us on our Teams Channel. We encourage and welcome any type of contribution and feedback.
With contributions from:
Mia Stigsnaes-Hansen
Martin Oliver Christensen
Fangyu Zhou