Background
Amazon SageMaker Canvas allows business users to build machine learning models through a no-code interface—without requiring ML expertise. However, users often struggle to translate real-world business problems into appropriate ML workflows.
In 2024, the SageMaker science team introduced a breakthrough: Amazon Q Developer, a chat-based generative AI agent designed to guide users through the machine learning lifecycle using natural language. The goal was to help users build accurate, production-ready models by simply describing what they wanted to accomplish—without writing a single line of code.
My challenge was to lead the UX strategy for integrating this AI assistant into Canvas in a way that felt trustworthy, intuitive, and aligned with the expectations of non-technical users.
The new Amazon Q generative AI agent integrated into SageMaker Canvas, guiding business users through the end-to-end machine learning process with natural language.



The Problem
Although the Q Developer agent had strong technical potential, several challenges stood in the way of successful integration: 
•Earlier “north star” integration concepts had been paused, and reactivation came with tight timelines for re:Invent 2024. 
•The vision required rethinking how users would engage with a conversational agent inside a visual ML platform. 
•There were open questions around how to build trust, support transparency, and avoid cognitive overload—especially when surfacing complex ML decisions and recommendations.


My Contribution
I led design strategy and UX research to integrate the Q Developer agent into Amazon SageMaker Canvas. I shaped the overall approach for both the MVP and long-term vision, balancing user needs with technical feasibility on an accelerated timeline. 
•Conducted foundational and evaluative research to define product direction 
•Created and tested MVP interaction models through prototypes and user interviews 
•Collaborated closely with science, product, and engineering to align chat behavior and UI flows
•Delivered actionable design recommendations that improved usability, transparency, and trust in AI-generated results

Team: 2 Senior UX Designers (including myself), 4 Product Managers, 8 Data Scientists, 16 Engineers and 1 Technical Writer

Duration: 4 months

Research & Design Process
To integrate a chat-based AI assistant into Amazon SageMaker Canvas, we followed a structured two-phase UX strategy that balanced ambitious design goals with technical feasibility and tight deadlines. Our goal was to deliver a usable, trustworthy experience for business users building machine learning models without writing code.

Phase 1: MVP Testing & Feasibility Alignment
When the project resumed after a pause, we quickly realized the original vision—an embedded, conversational UI for managing the entire ML workflow—wouldn’t be feasible in the short term. I worked closely with the science and engineering teams to define a Minimum Viable Product (MVP) we could test with users while preserving our long-term vision.
Interface Direction Testing
To explore integration options for the chat-based assistant, we considered two interface models, each with distinct implications for usability and feasibility.
North Star Concept: A seamless experience where task-specific UI components appear directly within the chat flow—enabling users to browse data, clean it, and review results without leaving the conversation.
MVP Concept: A streamlined approach using modal windows that open outside the chat interface for specific tasks. These dedicated UIs guide users through actions like data prep and model selection, with smooth return to the conversation flow.

This case study presents both concepts to illustrate the design evolution and highlight key tradeoffs between vision and near-term feasibility.

North Star Concept – Embedded chat experience
Task-specific UI components are dynamically generated within the chat interface in real time, enabling users to complete actions like file browsing, data cleaning, and reviewing results—all without leaving the conversation flow.
MVP Concept – Modal-based interactions
Users complete tasks through modal windows that open outside the chat interface. These modals provide the full UI needed for actions like data import or cleaning, with built-in navigation to return smoothly to the conversation.

Conversation Management
We explored two approaches to managing chat sessions, each reflecting a different level of system complexity and user expectation.
North Star Concept: Persistent chat history maintained across sessions, allowing users to return at any time and pick up where they left off—with full context and memory retained.
MVP Concept: A lightweight solution where a new conversation version is created at the end of each session. Users can revisit and reference previous chats, but context doesn’t persist across sessions.

To highlight the contrast in user experience and technical complexity, both concepts are included here—showcasing how vision and feasibility shaped the design direction.
North star concept: Persistent Chat History
•Single, continuous conversation maintained across sessions
•Chat history and context preserved when users close or restart
•Not technically feasible for initial release

MVP concept: Versioned Conversations
•New version created when session ends or chat closes
•Maintains conversation integrity and lets users reference and return to previous conversations


Phase 2: Private Beta Evaluation & Design Refinement
We launched a Private Beta of the MVP and conducted in-depth research to validate user experience and identify areas for improvement. This phase included:
•10 customer interviews
•Feedback surveys
•Follow-up 1:1s to explore friction points in more detail


Key Research Insights
The research revealed that users wanted more transparency, flexibility, and control across all stages of the ML workflow. These insights directly informed product decisions for the November 2024 release.

1. User Control over Data Fixes
Users strongly preferred reviewing and confirming changes before the system applied them.
“I think it’s great it can fix the data, but at the same time, it feels a bit invasive for trying to impose the way on how to fix the issues.”
Data suggestions now include previews and manual controls before changes are applied.
2. Data Control & Preferences
Users highlighted that not all “errors” should be auto-fixed—context matters.
“Sometimes I don't want to replace the missing value. Or maybe I need those duplicated rows so I may not want to remove them.”

“Rather than ‘I did it,’ it'd be better if the system asks... ‘Would you want it to be done?’”
3. Transparency & Technical Clarity
Participants requested contextual explanations for ML concepts and more insight into how models were being generated.
“What does this accuracy score mean for me?” 
“I want to know how the model made its predictions.”

How Insights Shaped the Design 
Based on these findings, we:
•Introduced manual controls for data prep and cleanup
•Enhanced transparency through contextual tips and detailed explanations 
•Updated the flow to support versioned conversations 
•Prioritized a modular MVP design that set the foundation for future embedded experiences


Impact & What's Next
The Q Developer agent is now being integrated into Amazon Bedrock within the SageMaker Unified Studio, enabling support for advanced ML workflows across tools and personas. This represents a key milestone in expanding generative AI capabilities across the AWS ecosystem.

Amazon Bedrock IDE — the future home of the Q Developer agent, enabling seamless GenAI integration for ML workflows.


Ongoing Research: Human-Agent Collaboration
To support future iterations, I’m leading a multi-phase ethnographic research study on how teams collaborate with AI agents in real-world workflows.
This work will inform next-generation experiences by exploring:
•How teams currently collaborate and where agents can support them
•How users assess agent output and build trust over time
•What roles and skills are needed to effectively integrate AI into collaborative team settings
•How agent collaboration impacts team productivity and decision-making

Dr. Swami Sivasubramanian, VP of AWS Agentic AI, introducing the Q Developer agent during re:Invent 2024.

Closing Thoughts
This project marked a pivotal moment in democratizing machine learning by integrating a generative AI assistant into a no-code platform. By leading the UX strategy and research, I helped shape a conversational experience that makes ML model creation more approachable, transparent, and intuitive for non-technical users. The learnings continue to inform our future direction as we expand into broader human–agent collaboration within Amazon’s ecosystem.