Designing Understandable AI: The Power of the HAX Design Library

Introduction

The rise of user-facing Artificial Intelligence (AI) systems presents both incredible opportunities and significant design challenges. How do we create AI that is not only intelligent but also understandable, trustworthy, and ultimately helpful for users? The HAX Design Library, an interactive collection of the 18 Guidelines for Human-AI Interaction, offers a powerful framework for addressing these questions. This resource provides invaluable guidance, complete with design patterns and practical examples, for building human-centered AI experiences.

One of the most critical aspects highlighted by the HAX Design Library is the importance of managing user expectations. In a world where AI capabilities are rapidly evolving, it's easy for users to have unclear or even unrealistic ideas about what an AI system can do. This can lead to disappointment, product abandonment, and in some cases, even harm. Let's delve into how the HAX Guidelines address this crucial area.

Setting the Stage: Clearly Defining AI Capabilities (Guideline 1)

The first guideline, "Make clear what the system can do," underscores the fundamental need for transparency. Many AI systems are designed to support multiple tasks across various domains. However, if these boundaries aren't clearly communicated, users might expect the system to perform tasks it wasn't designed for.

 

Consider the example of a fitness tracker. A user might reasonably assume that a device that tracks steps for walking and running would also track cycling or sleep quality. When it doesn't (a task and a domain mismatch, respectively), the user is likely to be disappointed. Clearly articulating the supported tasks and domains upfront is crucial for setting accurate expectations.
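
Guideline 1 can be made concrete in code. The sketch below is a minimal, hypothetical Python example (the names and messages are illustrative assumptions, not an API from the HAX library): the tracker declares its supported activities up front, and an unsupported request gets an explicit, expectation-setting reply instead of a silent failure.

```python
# Guideline 1 sketch: state the system's scope up front and repeat it
# whenever a request falls outside that scope. All names are hypothetical.
SUPPORTED_ACTIVITIES = {"walking", "running"}

def start_tracking(activity: str) -> str:
    if activity in SUPPORTED_ACTIVITIES:
        return f"Tracking {activity}."
    supported = ", ".join(sorted(SUPPORTED_ACTIVITIES))
    return (f"Sorry, {activity} isn't tracked yet. This device currently "
            f"supports: {supported}.")
```

The unsupported-activity path does double duty: it declines the request and restates the system's actual capabilities, keeping the user's mental model accurate.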

 

Painting a Realistic Picture: Communicating AI Performance (Guideline 2)

Building upon the "what," the second guideline, "Make clear how well the system can do what it can do," focuses on the "how well." People often have skewed perceptions of AI accuracy. They might either overestimate its reliability (leading to over-trust) or underestimate it (leading to algorithm aversion).

Think again about our fitness tracker. Even for its intended tasks (walking and running), it might not be perfect. It could miss steps on inclines or incorrectly register movements as steps when someone is sitting on a swing. Unrealistic expectations about its accuracy can lead to frustration and abandonment.

Furthermore, the guideline touches on the dangers of automation bias (over-trusting AI even when it's wrong) and algorithm aversion (under-trusting AI even when it's right). The example of AI in judicial sentencing highlights the severe consequences of blindly following AI recommendations without understanding their potential for error or bias.
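One way to apply Guideline 2 at the UI layer is to pair every estimate with an honest qualifier rather than presenting it as exact. The Python sketch below is purely illustrative; the thresholds and wording are assumptions, not values from the HAX library.

```python
def describe_accuracy(steps: int, confidence: float) -> str:
    # Guideline 2 sketch: hedge the displayed number when the system
    # itself is unsure. Thresholds are illustrative assumptions.
    if confidence >= 0.9:
        return f"{steps} steps today."
    if confidence >= 0.6:
        return f"About {steps} steps today."
    return (f"Roughly {steps} steps today; step counts may be less "
            "accurate on inclines or during irregular movement.")
```

Surfacing uncertainty this way nudges users away from both over-trust and algorithm aversion: the system claims precision only when it has earned it.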

Beyond Capabilities: Context and Social Considerations (Guidelines 3-6)

The HAX Design Library goes beyond just defining what an AI can and cannot do. It also emphasizes the importance of context and social norms:

 
  • Guideline 3: Time services based on context. AI that proactively interacts with users needs to be mindful of their current task and environment. A push notification from an AI while someone is driving could be dangerous.

  • Guideline 4: Show contextually relevant information. The information displayed by the AI should be relevant to the user's current situation. A restaurant recommendation app should consider the user's location.

  • Guideline 5: Match relevant social norms. The AI's behavior and presentation should align with user expectations based on their social and cultural background. Tone and formality can vary significantly across cultures.

  • Guideline 6: Mitigate social biases. AI systems can inadvertently perpetuate harmful stereotypes present in the data they are trained on. Designers must actively work to identify and mitigate these biases.

These guidelines highlight that designing effective AI involves more than just technical prowess; it requires a deep understanding of human behavior and social dynamics.
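
As a rough illustration of Guideline 3, a proactive assistant might gate its notifications on the user's current context. The context labels below are hypothetical; real systems would infer context from sensors or calendar data.

```python
def should_notify(user_context: str) -> bool:
    # Guideline 3 sketch: defer proactive interruptions in unsafe or
    # high-focus contexts rather than pushing immediately.
    deferred = {"driving", "in_meeting", "sleeping"}
    return user_context not in deferred
```

A fuller implementation would queue deferred notifications and deliver them once the context clears, rather than dropping them.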

 

Empowering Users: Control and Feedback (Guidelines 7-10)

Recognizing that AI systems will inevitably make mistakes, the HAX Design Library emphasizes the importance of user control and recovery:

 
  • Guideline 7: Support efficient invocation. Users should be able to easily trigger the AI's services when needed.

  • Guideline 8: Support efficient dismissal. Similarly, it should be easy to dismiss unwanted AI actions.

  • Guideline 9: Support efficient correction. Users should be able to easily edit or refine the AI's outputs when they are incorrect or partially correct.

  • Guideline 10: Scope services when in doubt. In ambiguous situations, the AI should either seek clarification or gracefully degrade its services rather than making potentially incorrect assumptions.

These guidelines empower users to maintain control and recover from AI errors, fostering a more positive and trustworthy interaction.
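
Guideline 10 in particular maps naturally to code: act when confident, ask a clarifying question when uncertain, and degrade gracefully otherwise. The sketch below is a hypothetical illustration; the thresholds are assumptions, not prescribed values.

```python
def respond(intent: str, confidence: float) -> str:
    # Guideline 10 sketch: scope services to the system's confidence.
    # Thresholds are illustrative assumptions.
    if confidence >= 0.8:
        return f"Doing: {intent}"
    if confidence >= 0.4:
        return f"Did you mean '{intent}'? (yes / no)"
    return "I'm not sure what you need yet. Here are a few things I can help with."
```

The middle band is the interesting design choice: a clarifying question costs the user one tap, whereas a wrong guess costs trust.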

 
 

Understanding the "Why": Explainability and Memory (Guidelines 11-12)

Transparency extends to understanding the AI's reasoning:

  • Guideline 11: Make clear why the system did what it did. Providing explanations for AI actions can increase user trust, but it must be done thoughtfully to avoid over-reliance. Tools like InterpretML can aid in improving model explainability.

  • Guideline 12: Remember recent interactions. Maintaining short-term memory allows for more natural and efficient interactions, as users can refer to previous turns in a conversation.
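
Guideline 12 is often realized as a small rolling buffer of recent turns, so that a follow-up like "book the second one" can be resolved against the last set of results. The class below is a hypothetical sketch, not part of the HAX toolkit.

```python
from collections import deque

class ShortTermMemory:
    """Guideline 12 sketch: keep the last few conversational turns so
    ordinal references ('the second one') can be resolved."""

    def __init__(self, max_turns: int = 5):
        self.turns = deque(maxlen=max_turns)  # old turns age out

    def remember(self, user_text: str, results: list):
        self.turns.append({"user": user_text, "results": results})

    def resolve_ordinal(self, index: int):
        # Look up an item from the most recent turn's results, if any.
        if self.turns and 0 <= index < len(self.turns[-1]["results"]):
            return self.turns[-1]["results"][index]
        return None
```

Bounding the buffer keeps the memory "short-term" by construction, which also limits how much stale context can mislead the system.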

 

Continuous Improvement: Learning and Adapting (Guidelines 13-18)

The final set of guidelines focuses on how AI systems evolve and interact with users over time:

 
  • Guideline 13: Learn from user behavior. Personalizing the experience based on user actions can lead to more relevant and helpful AI.

  • Guideline 14: Update and adapt cautiously. Changes to the AI's behavior should be gradual and well-researched to avoid disrupting the user experience.

  • Guideline 15: Encourage granular feedback. Enabling users to provide specific feedback helps the AI learn and improve in ways that align with their preferences.

  • Guideline 16: Convey the consequences of user actions. Showing users how their input will influence the AI's future behavior helps them interact more effectively.

  • Guideline 17: Provide global controls. Allowing users to customize the AI's behavior and data monitoring at a system-wide level empowers them with greater control.

  • Guideline 18: Notify users about changes. When significant updates occur, users should be informed so they can recalibrate their expectations about the AI's capabilities and performance.
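
Guideline 15, for instance, implies capturing feedback at the item level with room for a reason, rather than a single global rating. The sketch below is a hypothetical illustration of what such a feedback record might look like.

```python
from typing import Optional

def record_feedback(item_id: str, rating: str,
                    reason: Optional[str] = None) -> dict:
    # Guideline 15 sketch: capture item-level feedback plus an optional
    # free-text reason, so learning can target *why*, not just *what*.
    allowed = {"more_like_this", "less_like_this"}
    if rating not in allowed:
        raise ValueError(f"rating must be one of {sorted(allowed)}")
    return {"item": item_id, "rating": rating, "reason": reason}
```

Tying the reason to a specific item also supports Guideline 16: the system can later show the user exactly which piece of feedback shaped a recommendation.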

 

Conclusion

The HAX Design Library offers a comprehensive and practical guide for building human-centered AI. By emphasizing clear expectations, contextually relevant experiences, respect for social norms, user control, and transparency, these 18 guidelines pave the way for AI systems that are not only intelligent but also intuitive, trustworthy, and ultimately more valuable to the people who use them. For anyone involved in designing or developing user-facing AI, the HAX Design Library is an indispensable resource.

 

Ready to build AI that users love?

Our UX for AI services ensure your intelligent systems are intuitive and trustworthy, and that they drive real results. Let's discuss your project today!

 