Designing an AI-powered tool to improve a software developer’s existing workflow.

Red Hat is a software company that makes it easy for businesses to collaborate across platforms, helping them build flexible and powerful IT solutions.
FOR
Red Hat
ROLE
UX Designer & Researcher
DURATION
3 months
WITH
Linda Borghesani — Project Stakeholder, Tufts
D. Beau Morley — Principal UX Designer, Red Hat
Emma Spero — Research Operations, Red Hat
Tufts Human Factors Capstone Team
CONTEXT
Red Hat frequently hears from software development teams that their work goes well beyond writing code: developers are expected to master a wide range of tools and technologies. As a result, Red Hat has been curious about how Artificial Intelligence (AI) can help developers navigate this complex landscape.
As a UX Designer & Researcher, I led discovery research to identify AI integration opportunities in software development, conducted user interviews, guided prototyping through design sprints, and iterated on our product through user testing.
PROBLEM STATEMENT
How might we imagine an AI-powered tool that improves a software developer’s existing workflow, enabling faster onboarding onto project teams, more efficient comprehension of code, and expedited code documentation?
SOLUTION
Discovery research and user interviews revealed that developers struggle with understanding code when onboarding and writing documentation. For our solution, we designed an AI-powered tool to address these specific pain points:
Summarization AI Tool: AI tool designed to summarize code and provide refactoring recommendations for outdated codebases.
Documentation AI Tool: AI tool designed to expedite the documentation process for software developers.



DISCOVERY
With our limited coding experience, we started by learning about developers' workflows in the Red Hat ecosystem.

1. Initial Interviews
Discovery interviews gave us insight into developers' pain points and workflows, which were largely captured by two key concepts: the inner loop and the outer loop.
Inner loop: individual process of coding, building, debugging, and pushing code.
Outer loop: collaborative process involving code review, compliance, security checks, testing, and deployment.


2. Competitive Review
We explored AI's use in the industry and analyzed competitors to identify opportunities for improving developer workflows. Our analysis confirmed that AI works best as a tool to enhance human workflows.


3. Personas
Two personas were created to reflect different experience levels of our users. Our focus was to design a tool that would improve Dave and Sally's workflows.


4. Journey & Empathy Mapping
We synthesized our discovery interviews into Journey and Empathy Maps using a six-category structure: Says, Thinks, Does, Feels, Pains, and Gains. This framework helped us further develop personas by integrating key quotes and insights from the interviews.
IDEATION
We used a Crazy Eights design sprint to quickly generate and refine ideas, with each team member sketching eight design solutions in one-minute intervals. This process helped us consolidate ideas and turn our discovery research into a tangible concept, setting the foundation for our prototype.



PROTOTYPING
From our design sprint, we narrowed our solution to two concepts and began designing prototypes to better understand how users would interact with these tools:
Summarization AI Tool: AI tool designed to summarize code and provide refactoring recommendations for outdated codebases.
Documentation AI Tool: AI tool designed to expedite the documentation process for software developers.
Early Summarization Tool Concepts



Early Documentation Tool Concepts



TESTING
After generating prototypes, we conducted two rounds of user testing to refine our product design. The first round focused on validating our concepts with users, while the second aimed to test usability.
User Testing Round 1: Concept Testing
We conducted six 45-minute interviews using a "show and tell" approach with wireframes. These sessions revealed that the documentation tool's terminology was confusing and needed rebranding, and that the scope of the summarization tool was unclear.
User Testing Round 2: Usability Testing
We refined our prototypes and tested them with six experienced developers, using the System Usability Scale (SUS) to gauge usability.
A SUS score above 68 is considered above average. DocumentationAI scored 76.16, and SummarizerAI slightly higher at 77.5, indicating both tools were effective in meeting user needs.
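For reference, a SUS score is derived from ten 1-to-5 Likert responses: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that standard calculation (the example responses here are hypothetical, not from our study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses. Odd-numbered items are positively worded,
    even-numbered items negatively worded."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: response - 1; even items: 5 - response
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical participant responses
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```

A tool's final SUS score is the mean of these per-participant scores across all testers.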

