Artificial Intelligence is everywhere – helping us shop smarter, create art, and even write essays. But as cool as AI is, it’s only truly useful when it works for us, the humans! That’s where human-centred design (HCD) comes in – a design approach that keeps people at the heart of technology.
In this blog, I’ll dive into how HCD fits into the AI era, explore the differences between Traditional AI and Generative AI, tackle challenges like AI hallucinations, and share tips for improving products with AI – all while keeping things fun and relatable!
Traditional AI vs generative AI: The role of human-centred design in shaping user experiences
The evolution of AI has brought us two distinct paradigms – traditional AI and generative AI – each with its own capabilities and implications for user experience.
Traditional AI:
- Focuses on structured tasks like data analysis, predictions, and automation.
- Operates within predefined rules and datasets.
- Examples include recommendation engines (e.g., Netflix suggestions) or fraud detection systems.
Generative AI:
- Goes beyond analysis to create new content, such as text, images, music, or even code.
- Learns from vast unstructured datasets using deep learning models.
- Examples: ChatGPT for conversations or DALL-E for creating stunning visuals.
How can HCD enhance AI?
Empathy in design:
Traditional AI can feel robotic, but HCD keeps real user needs in view, so interactions feel helpful rather than mechanical.
Creative collaboration:
Generative AI is super creative, but with HCD, it becomes a partner – not just a tool – empowering users to co-create amazing things.
Transparency and trust:
Whether it’s traditional AI or generative AI, HCD ensures users understand how these systems work (no spooky black boxes here!).
By applying HCD principles to both types of AI, we ensure that these technologies don’t just perform tasks but also enrich human experiences.
AI hallucination and HCD: Tackling the challenge
One of the most pressing challenges in generative AI is hallucination: the system produces plausible but incorrect or nonsensical outputs, such as false statements or distorted images. Because these outputs are presented confidently, they are difficult for humans to detect.
For example, a study by Columbia Journalism Review found that ChatGPT falsely attributed 76% of 200 tested quotes and rarely signalled uncertainty. Even specialised legal AI tools from LexisNexis and Thomson Reuters produced errors in 1 out of 6 benchmarking queries, highlighting how widespread the challenge is.
Why does this happen?
AI models are trained on vast datasets but lack true understanding. They predict patterns rather than comprehend context, leading to errors when faced with ambiguous or incomplete inputs.
How can HCD address hallucinations?
User education:
- Design systems that clearly communicate their limitations.
- Use disclaimers or visual cues to indicate when content might be unreliable (see the sketch after this list).
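One lightweight way to put that second point into practice is to attach a plain-language disclaimer whenever the model’s own confidence drops below some floor. Here’s a minimal Python sketch; the confidence score and the 0.7 threshold are illustrative assumptions standing in for whatever uncertainty signal your model actually exposes.

```python
# Minimal sketch: prepend a visible disclaimer to low-confidence AI output.
# The confidence score and the 0.7 threshold are illustrative assumptions;
# substitute whatever uncertainty signal your model actually provides.

def present_output(text: str, confidence: float, threshold: float = 0.7) -> str:
    """Return the AI output, prefixed with a caution banner when unsure."""
    if confidence < threshold:
        banner = ("Heads-up: this answer may be unreliable. "
                  "Please double-check it before relying on it.\n\n")
        return banner + text
    return text

print(present_output("Paris is the capital of France.", confidence=0.95))
print(present_output("The treaty was signed in 1753.", confidence=0.40))
```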
Feedback loops:
- Incorporate user feedback mechanisms to flag inaccuracies.
- Continuously refine models based on real-world usage data.
Explainability in design:
- Provide users with explanations for how outputs were generated.
- Allow users to trace the reasoning behind an answer or suggestion (sketched after this list).
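To make “trace the reasoning” concrete, one common pattern is to return sources alongside every answer so users can check where it came from. The sketch below assumes a retrieval-style setup; the data model and the example source are hypothetical, not any particular library’s API.

```python
# Sketch: pair every AI answer with the sources it drew on, so users can
# trace the reasoning. The data model here is illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str      # hypothetical reference for illustration
    snippet: str  # the passage the answer was grounded in

@dataclass
class ExplainableAnswer:
    text: str
    sources: list[Source] = field(default_factory=list)

    def render(self) -> str:
        """Show the answer followed by an inspectable 'Why this answer?' trail."""
        lines = [self.text, "", "Why this answer?"]
        for s in self.sources:
            lines.append(f'- {s.title} ({s.url}): "{s.snippet}"')
        return "\n".join(lines)

answer = ExplainableAnswer(
    text="HCD puts user needs at the centre of AI design.",
    sources=[Source("Intro to HCD", "https://example.com/hcd",
                    "Human-centred design keeps people at the heart of technology.")],
)
print(answer.render())
```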
Fail-safe mechanisms:
- Implement safeguards that detect when outputs deviate significantly from expected norms. For example, a medical generative AI tool could flag uncertain diagnoses for human review rather than presenting them as facts, as sketched below.
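Here’s a minimal sketch of that routing idea: outputs that fall below a confidence floor, or that deviate sharply from historical norms, get queued for human review instead of being shown as fact. The function names and thresholds are assumptions for illustration.

```python
# Sketch: route low-confidence or out-of-norm outputs to human review
# rather than presenting them as fact. Thresholds are illustrative.
import statistics

def needs_human_review(confidence: float,
                       score: float,
                       historical_scores: list[float],
                       min_confidence: float = 0.8,
                       max_z: float = 3.0) -> bool:
    """Flag an output if the model is unsure or the result is an outlier."""
    if confidence < min_confidence:
        return True
    mean = statistics.mean(historical_scores)
    stdev = statistics.stdev(historical_scores)
    if stdev > 0 and abs(score - mean) / stdev > max_z:
        return True  # deviates significantly from expected norms
    return False

history = [0.42, 0.45, 0.40, 0.44, 0.43]
if needs_human_review(confidence=0.65, score=0.41, historical_scores=history):
    print("Queued for human review - not shown to the user as fact.")
```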
By embedding these principles into design processes, we can mitigate the risks of hallucinations while building trust between users and AI systems.
Improving products with AI: The HCD perspective
AI has become a powerful tool for enhancing products across industries – from healthcare and education to entertainment and e-commerce. However, integrating AI into products isn’t just about adding advanced features; it’s about ensuring those features genuinely improve user experiences.
HCD strategies for product improvement with AI:
Start with user needs:
The foundation of any successful AI-powered product lies in understanding the needs, pain points, and goals of its users. Before integrating AI, it’s crucial to identify specific areas where the technology can add meaningful value. This ensures the product is solving real problems rather than introducing unnecessary complexity.
How to apply this principle:
- Conduct user research: Use surveys, interviews, and focus groups to gather insights into user challenges and expectations.
- Map pain points: Create a journey map to pinpoint moments where users experience frustration or inefficiency.
- Focus on value creation: Align AI capabilities with opportunities to enhance productivity, convenience, or personalisation.
Make it intuitive:
AI should simplify tasks, not complicate them. To ensure usability, interactions with AI systems must be intuitive and accessible to users of all skill levels. The goal is to make advanced capabilities feel effortless.
How to apply this principle:
- Design simple interfaces: Avoid cluttered layouts and technical jargon; focus on clean, user-friendly designs.
- Offer guided experiences: Use prompts, tutorials, or walkthroughs to help users navigate new features.
- Minimise cognitive load: Reduce the number of steps required for users to achieve their goals.
Enable co-creation:
AI doesn’t have to replace human creativity – it can amplify it. By positioning AI as a collaborator rather than a replacement, users can guide the system’s outputs while leveraging its creative potential.
How to apply this principle:
- Provide customisation options: Allow users to modify or refine AI-generated outputs based on their preferences.
- Encourage iterative collaboration: Build tools that let users interact with the system in real time to shape results (see the sketch after this list).
- Balance automation with control: Ensure users feel empowered rather than sidelined by automation.
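As a minimal sketch of that iterative loop, assume a hypothetical generate() function standing in for whatever model you actually call. The point is that the user, not the system, decides when the result is done.

```python
# Sketch of a human-in-the-loop co-creation cycle. `generate` is a stand-in
# for a real model call; the loop keeps the user in control of the outcome.

def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an LLM API)."""
    return f"[draft based on: {prompt}]"

def co_create(initial_prompt: str) -> str:
    """Interactively refine a draft until the user accepts it."""
    prompt = initial_prompt
    while True:
        draft = generate(prompt)
        print(draft)
        feedback = input("Accept (a), or describe a refinement: ")
        if feedback.strip().lower() == "a":
            return draft  # the user, not the system, decides when it's done
        prompt = f"{prompt}\nRefinement: {feedback}"
```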
Iterate through feedback:
AI systems are not static – they improve over time through user interactions and feedback loops. Continuously gathering feedback ensures the system evolves to meet user needs more effectively.
How to apply this principle:
- Build feedback mechanisms: Include features like thumbs-up/down ratings or comment sections for users to share their thoughts (see the sketch after this list).
- Analyse usage data: Monitor how users interact with the system to identify areas for improvement.
- Adapt based on insights: Regularly update algorithms and interfaces based on user feedback.
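Here’s a tiny sketch of the first two bullets: a thumbs-up/down collector plus an aggregation step that surfaces which features users are unhappy with. The feature names and in-memory storage are illustrative assumptions.

```python
# Sketch: capture thumbs-up/down feedback and surface weak spots.
# Feature names and in-memory storage are illustrative assumptions.
from collections import defaultdict

ratings: dict[str, list[int]] = defaultdict(list)  # feature -> [1 or 0]

def record_feedback(feature: str, thumbs_up: bool) -> None:
    ratings[feature].append(1 if thumbs_up else 0)

def weakest_features(min_votes: int = 5) -> list[tuple[str, float]]:
    """Features with enough votes, sorted by approval rate (lowest first)."""
    scored = [(f, sum(v) / len(v)) for f, v in ratings.items() if len(v) >= min_votes]
    return sorted(scored, key=lambda pair: pair[1])

for _ in range(5):
    record_feedback("summarise", thumbs_up=False)
    record_feedback("translate", thumbs_up=True)
print(weakest_features())  # -> [('summarise', 0.0), ('translate', 1.0)]
```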
Build ethical guardrails (think responsible AI):
As AI systems grow more powerful, ethical principles need to be built in from the start. AI-powered products must respect privacy, avoid bias, and operate transparently to foster trust among users.
How to apply this principle:
Ensure data privacy:
- Privacy by design: Embed privacy protections like anonymisation and data minimisation throughout development (see the sketch after this list).
- Robust security: Use encryption and access controls to safeguard user data.
- User consent: Provide clear opt-in/opt-out mechanisms and disclose how data is used.
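A minimal sketch of privacy by design in code: keep only the fields a feature actually needs, and pseudonymise the user identifier before anything reaches your logs or analytics. The allowed fields and salting scheme are illustrative; a real system would need proper key management.

```python
# Sketch: data minimisation + pseudonymisation before logging/analytics.
# The allowed fields and salting scheme are illustrative assumptions.
import hashlib

ALLOWED_FIELDS = {"query", "timestamp"}  # only what the feature needs
SALT = b"rotate-me-and-store-securely"   # placeholder; use real key management

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a salted, truncated hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def minimise(event: dict) -> dict:
    """Drop everything the feature doesn't need; pseudonymise the rest."""
    kept = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    kept["user"] = pseudonymise(event["user_id"])
    return kept

raw = {"user_id": "alice@example.com", "query": "best running shoes",
       "timestamp": "2025-01-01T10:00:00Z", "location": "exact GPS"}
print(minimise(raw))  # location is dropped; user_id is pseudonymised
```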
Mitigate bias in algorithms:
- Diverse data: Audit datasets for demographic imbalances and address gaps.
- Bias detection: Use fairness-aware algorithms to identify and correct biases (see the sketch after this list).
- Continuous audits: Regularly evaluate model outputs for fairness, especially in critical domains like hiring or lending.
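As a first-pass bias audit, you can compare approval rates across demographic groups, a simple demographic-parity check. The sketch below uses made-up data; real audits need larger samples, more fairness metrics, and domain expertise.

```python
# Sketch: a demographic-parity check on model decisions.
# The data and the 0.1 gap threshold are illustrative assumptions.

def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

def parity_gap(by_group: dict[str, list[int]]) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in by_group.values()]
    return max(rates) - min(rates)

outcomes = {  # 1 = approved, 0 = rejected (made-up audit data)
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
gap = parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:
    print("Gap exceeds threshold - investigate training data and features.")
```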
Operate transparently:
- Clear communication: Disclose system limitations (e.g., error rates) to users upfront.
- Governance frameworks: Establish ethics boards to oversee AI use and ensure compliance with regulations like GDPR.
With these tips, you can create products that don’t just use AI – they wow users while staying ethical and human-centred!
Conclusion
The AI era is exciting – but it’s also a reminder that technology should serve humans, not replace them. Whether we’re talking about traditional AI or generative AI, tackling hallucinations, or improving products, human-centred design is the secret sauce that makes everything work better for us.
At its core, HCD ensures that innovation remains grounded in empathy – delivering tools that are transparent, trustworthy, and intuitive. So as we continue pushing boundaries in the world of AI, let’s keep humanity front and centre!
Want to integrate HCD into your next AI project?
Start by asking yourself one simple question: How will this technology improve someone’s life? Let empathy guide your innovation journey!
Responsible AI guardrails sources:
EU: https://artificialintelligenceact.eu/
Australia: https://www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails
Other references:
https://www.nngroup.com/articles/ai-hallucinations/