Timeline
As a team of 20+ members, we communicate needs, design decisions, and technical implementation through meetings. On average, we hold internal meetings three times a week and present to our client every Wednesday. We also maintain a shared sheet where all teams update their progress in real time.
Solution Overview
This product for Microsoft streamlines the annotation of image, text, and search data for machine learning.
By automating tasks like tagging images, categorizing text, and refining search results, it reduces manual effort, speeds up AI model training, and lowers operational costs.
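For context, a labeling platform like this ultimately produces structured annotation records for each data type. Below is a minimal sketch of what such records might look like; the field names and values are illustrative assumptions, not the product's actual schema.

```python
# Illustrative annotation records for the three data types the product supports.
# All field names and values here are hypothetical, not the product's actual schema.
image_label = {
    "task_type": "image",
    "asset_id": "img_0042",
    "annotations": [{"label": "cat", "bbox": [34, 50, 120, 160]}],  # x, y, w, h
}

text_label = {
    "task_type": "text",
    "asset_id": "doc_0007",
    "annotations": [{"label": "billing_issue", "span": [12, 45]}],  # character offsets
}

search_label = {
    "task_type": "search",
    "query": "wireless headphones",
    "result_id": "prod_981",
    "annotations": [{"label": "relevant", "grade": 3}],  # e.g., 0-4 relevance scale
}
```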
Understanding AI Labeling and Deep Learning
To gather insights, we conducted in-depth interviews with doctors trained to label scan results, and we screened existing AI products to explore how data labeling is used in the medical field, one of the key end-user domains for our product.
We visited Dr. Pei and Dr. Huang of the Thoracic Surgery Department at Beijing Haidian Hospital to interview them about their experience. Through this process, we gained a clearer understanding of why such products matter, and we were able to bring valuable insights to the table in later meetings.
The objective of the interview with Dr. Huang was to explore the role of AI-assisted diagnosis in thoracic surgery, particularly in addressing the challenges of lung cancer diagnosis, as well as its impact on society and the future healthcare system.
Dr. Pei has roughly 10,000 minutes of experience manually labeling CT scans of lung cancer. Given that our AI labeling software is designed to automate and assist data labeling, Dr. Pei walked us through the pain points, challenges, and precision required in medical labeling.
(Interview: click here to read more)
Research: Competitive Analysis
In parallel, our team kicked off the project by studying key players in the data labeling space to help shape our product. By comparing features, design, and user experience, we focused on areas where we could stand out and improve on what's already out there.
Through this research, we found that Labelbox has a feature set closely aligned with what we're aiming for, and conversations with our client and stakeholders also highlighted them as a major competitor.
Based on their product, we identified ways to simplify the user experience for newcomers while keeping the advanced functionality we need, such as real-time label previews and support for multiple formats: images, text, and search labels.
Ideation: Collecting Ideas
During team meetings with our product managers, affinity mapping proved incredibly useful for organizing ideas gathered from our research as well as client feedback. We captured insights from different sources on sticky notes, then rearranged them into thematic groups. This method helped us identify key features and ultimately develop a feature list.
Feature Summary List for MVP
Through detailed feature mapping, Daisy (our Product Manager), Ivy, and I collaborated on developing user flows and wireframes. For each feature, we carefully considered the user journey, streamlining the process to include only the essential steps. Below are examples of the user flows and wireframes we created.
UI Design Helper: User Persona
Once we moved on to interface and styling, we developed a user persona, derived from our affinity maps, to better understand key usage scenarios and the core needs of our users. This persona helped ensure that the UI team (Yixing, Ivy, and Zhe Li) aligned on a shared understanding of our target user.
UI Kit
We provided Sam (our client) with four design options, each including a moodboard and mockups, and Sam picked the monochrome green/teal color scheme. Since we haven't established the product's branding yet and the client might select a different primary color in the future, opting for a single color is a more cautious and flexible approach.
Feature Breakdown
Design Validation: UI Walkthrough
Given the tight development timeline, the front-end engineers began development while we were still working on the high-fidelity prototype. This allowed us, as designers, to conduct timely walkthroughs and identify areas for improvement. To streamline the review process and ensure clear communication, we set up a shared Excel sheet between the design and front-end teams to track page modifications, which let us provide timely feedback and stay in sync after each revision.
Usability Testing
While we didn’t have the opportunity to conduct testing for our MVP, we’re prioritizing user and usability testing for the Beta version. We believe this feedback is essential for refining the product, addressing pain points, and enhancing the overall user experience. We’re also integrating LLMs to improve data labeling in the Beta (see the sketch below), along with additional labeling features such as video labeling.
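As a rough illustration of the LLM-assisted labeling we have in mind for the Beta, here is a minimal sketch of an LLM proposing a text label that a human annotator would then confirm or correct. The model name, prompt, and label set are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch of LLM-assisted text pre-labeling. Assumes OpenAI's Python SDK
# and an OPENAI_API_KEY in the environment; the label set and model name are
# hypothetical. A human annotator reviews every suggestion before it is saved.
from openai import OpenAI

client = OpenAI()

LABELS = ["bug report", "feature request", "question", "other"]  # hypothetical categories

def suggest_label(text: str) -> str:
    """Ask the LLM to propose exactly one label from LABELS for the given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Classify the user's text with exactly one of these labels: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
    )
    suggestion = response.choices[0].message.content.strip().lower()
    # Fall back to "other" if the model replies with something outside the set.
    return suggestion if suggestion in LABELS else "other"

print(suggest_label("The export button crashes the app on large datasets."))
```

In a real pipeline, the suggestion would pre-fill the annotator's label field rather than being saved directly, keeping a human in the loop.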
We've drafted a set of usability testing questions that reflect both user needs and business objectives for our Beta version. Below are sample questions designed to gather insights into the platform’s technical functionality and its effectiveness in supporting labeling tasks.
Task Efficiency:
1. How efficient is the process for applying labels to datasets (e.g., speed of task completion)?
2. Can you quantify the time taken for specific labeling tasks?
System Performance:
3. Did you experience any latency or performance issues when handling large datasets? If so, at what point did this occur?
Error Handling:
4. How well does the platform support undo or correction actions when a labeling mistake occurs?
5. Is the process straightforward and efficient?
Integration and Customization:
6. How easy is it to customize labeling categories and options?
7. Did you find the customization process user-friendly and flexible?