
AI-Assisted Usability Pre-Screening for Engineering Teams
Designing a system to help engineers identify UX issues earlier using simulated users
Role: Lead UX Researcher
Conducted literature review on AI in UX and usability evaluation
Co-designed research plan and study methodology
Developed interview and usability testing protocols
Collaborated with cross-functional team (designers + PM + sponsor)
Research & Design Skills
UX Research (Qualitative + Mixed Methods), Study Design & Research Planning, Semi-Structured Interviews, Usability Testing, Thematic Analysis & Affinity Mapping, AI + UX (Human-in-the-loop systems, trust design), Stakeholder Collaboration
Type
Capstone Project — In collaboration with NVIDIA
Team: 3 Designers, 1 Product Manager, 1 UX Researcher
Timeline: 10 weeks
Context: Internal tools for engineering teams
Overview
Exploring how simulated users can accelerate UX pre-screening before deeper human validation.
This capstone project, a collaboration between NVIDIA and UCSC, prototypes a lightweight approach to usability testing with simulated users. The concept combines LLM personas, task prompts, and headless browser automation to run quick UX checks, capture traces like clicks and dead ends, and surface early friction before a team invests in full-scale user research.
The work is framed as fast UX pre-screening rather than a replacement for human evaluation. The goal is to help product teams explore more scenarios, devices, and task variants early, then use those signals to decide what needs real-user validation next.
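The pipeline described above can be sketched in code. Everything here is illustrative: the persona fields, event types, and trace structure are assumptions for the sake of the sketch, not the project's actual implementation. A real version would drive a headless browser and an LLM; this pure-Python fragment only models the trace that one simulated-user run would capture.

```python
# Illustrative sketch of a simulated-user run trace.
# Field names and structure are assumptions, not the project's real schema.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Persona:
    """An LLM-driven simulated user (illustrative fields)."""
    name: str
    goal: str
    device: str  # e.g. "desktop" or "mobile"

@dataclass
class TraceEvent:
    """One step of a simulated session: a click, a text entry, or a dead end."""
    action: str       # e.g. "click", "type", "dead_end"
    target: str       # selector or screen description
    timestamp_s: float

@dataclass
class RunTrace:
    """Everything captured from one simulated-user run."""
    persona: Persona
    task: str
    events: list[TraceEvent] = field(default_factory=list)
    screenshots: list[str] = field(default_factory=list)  # saved file paths
    succeeded: bool = False
    time_to_success_s: float | None = None

    @property
    def dead_ends(self) -> int:
        # Count of moments where the simulated user got stuck
        return sum(1 for e in self.events if e.action == "dead_end")
```

A run would append a `TraceEvent` per interaction and set `succeeded` / `time_to_success_s` at the end, giving the team a comparable record across personas, devices, and task variants.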
Problem
Traditional UX validation is slow to spin up and difficult to scale across the many paths a product team wants to test.
Recruiting participants, preparing studies, and manually working through edge cases all take time, which means many product decisions get made before teams have enough signal. That gap widens further when designers want to compare multiple personas, devices, and task variants, or capture detailed traces like time to success, failures, screenshots, and dead ends.
This project asks whether simulated users can help teams find friction quickly enough to focus their attention, not whether AI can replace human research. The opportunity is in faster pre-screening and clearer prioritization for what should be validated with real people.
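That prioritization step could be as simple as ranking simulated runs by observed friction and surfacing the worst ones for real-user validation. The scoring weights below are invented for illustration, not validated thresholds from the project.

```python
# Illustrative sketch: rank task variants by simulated friction to decide
# what gets escalated to real-user validation. Weights are assumptions.
def prioritize_for_human_validation(runs: list[dict], max_items: int = 3) -> list[dict]:
    """Return the highest-friction runs, most severe first.

    Each run dict is expected to carry:
      "task"      - task variant name
      "succeeded" - whether the simulated user finished the task
      "dead_ends" - count of stuck moments in the trace
      "time_s"    - total time spent on the task
    """
    def friction(run: dict) -> float:
        failure_penalty = 0 if run["succeeded"] else 5  # failures dominate
        return failure_penalty + 2 * run["dead_ends"] + run["time_s"] / 30

    return sorted(runs, key=friction, reverse=True)[:max_items]
```

The point is not the specific weights but the shape of the decision: simulated signals narrow many candidate flows down to a short list worth a real study.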
This project is still in progress. Feel free to reach out and connect to talk more about it!
