TradeSense

Designing an Explainable AI Trading Journal

A speculative HCI exploration of human–AI interaction and reflective decision-making

This project is a speculative, research-driven exploration. The visuals shown are conceptual artifacts used to reason about interaction, explainability, and behavior—not finalized product designs.

ROLE

Systems Designer

DURATION

6–8 weeks

TOOLS

Figma

— Motivation & Context

In high-stakes decision-making environments like trading, users often rely on opaque AI systems without fully understanding how or why recommendations are generated.

While these systems optimize for outcomes, they rarely support reflection, learning, or long-term behavior change.

This project began as an exploration of how AI tools might better support human judgment rather than replace it.

— Core Question

How might an AI-assisted system support reflection and learning, rather than simply optimizing for short-term performance?

How might we help traders understand why they act the way they do, and how those behaviors affect performance, while keeping the AI transparent, supportive, and non-authoritative?

— Concept Overview

The concept is an AI-powered trading journal that surfaces patterns in user behavior, highlights reasoning behind AI suggestions, and encourages users to reflect on decisions over time. Rather than predicting trades, the system focuses on helping users understand why certain outcomes occurred and how their behavior evolved.
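To make the concept concrete, here is a minimal data-model sketch of how one journal entry might pair a logged decision with a narrative insight and a reflection prompt. All type and field names are hypothetical, invented for this exploration rather than drawn from a real system.

```typescript
// Hypothetical data model for a single journal entry.
// All names are illustrative, not a finalized schema.

interface TradeDecision {
  instrument: string;            // e.g. "AAPL"
  direction: "buy" | "sell";
  timestamp: Date;
  rationaleNote?: string;        // the trader's own stated reasoning
}

interface AIInsight {
  narrative: string;             // plain-language explanation, not a score
  observedPattern: string;       // e.g. "position size grows after losses"
  confidence: "low" | "medium" | "high"; // hedged, never authoritative
}

interface JournalEntry {
  decision: TradeDecision;
  insight?: AIInsight;           // optional: surfaced selectively, not always
  reflectionPrompt?: string;     // shown after the action, never during it
  userReflection?: string;       // written by the trader over time
}
```

Keeping the insight and prompt optional mirrors the design intent: the system journals every decision, but explains and prompts only when doing so supports reflection.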

— Design Principles

- Explainability over accuracy alone

- Reflection over automation

- User agency over optimization

- Learning over performance metrics

Image: User agency over optimization

— Key Interaction Ideas

- AI explanations shown as narratives, not scores (sketched in code below)

- Visualizations that emphasize patterns over single outcomes

- Prompts that encourage post-decision reflection

Image: Prompts that encourage post-decision reflection
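The first idea above can be sketched in code: instead of returning a numeric risk score, the system phrases the same signal as an observation the trader can question. The function and field names below are assumptions for illustration.

```typescript
// Sketch: rendering a detected behavioral pattern as a narrative
// explanation rather than a numeric score. Names are hypothetical.

interface PatternObservation {
  behavior: string;     // e.g. "increased position size"
  context: string;      // e.g. "after two consecutive losses"
  occurrences: number;  // how often the pattern appeared
}

// A score like "risk: 0.82" invites deference; a narrative invites
// reflection on the pattern behind it.
function toNarrative(obs: PatternObservation): string {
  return (
    `In your recent entries, you ${obs.behavior} ` +
    `${obs.context} ${obs.occurrences} times. ` +
    `What was driving those decisions?`
  );
}

console.log(
  toNarrative({
    behavior: "increased position size",
    context: "after two consecutive losses",
    occurrences: 3,
  })
);
```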

— Human–AI Tradeoffs

A key tension in this concept is balancing transparency with cognitive load. Making AI reasoning fully visible risks overwhelming users, while hiding it undermines trust.

This exploration prioritizes selective explainability—surfacing insight when it supports reflection, not at every interaction.

Image: Exploration prioritizes selective explainability
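One way to reason about selective explainability is a small gating sketch: surface an insight only when the evidence is strong, the moment is right, and the user is not mid-decision. The conditions and threshold below are assumptions for this exploration, not validated heuristics.

```typescript
// Sketch of a selective-explainability gate. All fields and the
// occurrence threshold are illustrative assumptions.

interface InsightCandidate {
  narrative: string;
  occurrences: number;        // how often the underlying pattern recurred
  shownRecently: boolean;     // was a similar insight surfaced lately?
  userIsMidDecision: boolean; // is the trader currently acting?
}

function shouldSurface(insight: InsightCandidate): boolean {
  // Never interrupt an in-progress decision; as the ethical
  // considerations below note, reflection happens after actions.
  if (insight.userIsMidDecision) return false;

  // Repeating a recent insight adds cognitive load without new value.
  if (insight.shownRecently) return false;

  // Only surface patterns with enough evidence to be worth reflecting on.
  return insight.occurrences >= 3;
}
```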

— Ethical & Behavioral Considerations

The concept intentionally avoids real-time nudging or persuasive feedback during decision-making, to reduce over-reliance on the system.

Reflection is designed to happen after actions, preserving user agency and accountability.

— Reflection & Next Steps

This project remains a speculative exploration.

If developed further, I would validate its assumptions through qualitative user interviews and longitudinal studies to understand how reflection-driven tools affect trust, learning, and decision confidence over time.