AI-driven smart glasses that recognize objects and text and relay the detected information to the user through audio interaction.
DURATION

6 months

MY ROLE

Recruitment

Research Planning

Secondary Research

Expert Interviews

User Interviews

Data Analysis and Coding

Prototype Testing

Design Evaluation

Client

Context

According to the CDC, around 4.3 million people in the US live with visual impairment. They face daily obstacles in perceiving their environment in a world designed mostly by and for sighted people, such as reading printed instructions, sorting mail, and differentiating medicine bottles.

What is Iris?

Iris is a pair of AI-driven smart glasses that recognizes objects and text and relays the detected information to the user through audio interaction.
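The case study does not detail the underlying stack, but the core loop Iris implies (capture a frame, recognize what is in view, speak the result) can be sketched with off-the-shelf libraries. Below is a minimal illustration, assuming OpenCV for capture, Tesseract via pytesseract for text recognition, and pyttsx3 for speech; none of these are confirmed as what Iris actually uses.

```python
# A minimal sketch of a recognize-then-speak loop, using OpenCV,
# pytesseract, and pyttsx3 as stand-ins; the actual Iris stack is
# not described in this case study.
import cv2            # camera capture
import pytesseract    # OCR wrapper (requires Tesseract installed)
import pyttsx3        # offline text-to-speech

def recognize_and_speak() -> None:
    engine = pyttsx3.init()        # set up the speech engine
    cap = cv2.VideoCapture(0)      # open the default camera
    ok, frame = cap.read()         # grab a single frame
    cap.release()

    if not ok:
        engine.say("I could not access the camera.")  # error case
    else:
        # Grayscale tends to OCR more reliably than raw BGR frames.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()
        engine.say(text if text else "I could not find any text in view.")

    engine.runAndWait()            # block until speech finishes

if __name__ == "__main__":
    recognize_and_speak()
```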

Demo Video

Objective

How might we provide tractable, effortless, and independent object and text recognition experiences to people who are blind? 

Research Methods

I started our research process with a literature review to learn what has been done in this space, and followed it with expert interviews with people who have worked in accessibility technology design. From the expert interviews and secondary research, we learned that:

  • Blindness is a spectrum, narrow down your target user.

  • Do not assume how blind people use technology.

  • Do not exclude those who have low technology proficiency.

  • Examine current accessibility guidelines.

  • Start talking with blind users as soon as possible.

User Interviews

Participants & Recruitment

For the primary research, I recruited our participants by reaching out to associates and faculty, as well as to nonprofit organizations that work with and support blind communities, such as the UW Disability Center, the Seattle Deafblind Service Center, and the American Council of the Blind.

I recruited a total of 10 participants, ages 19 to 60, who identify as "completely blind with no vision" and had shopped online in the past 30 days. I did not set any technical knowledge requirements beyond being able to use the Internet, as I did not want to exclude participants based on their level of technical proficiency.


User Interviews Findings

The user interviews and the remote contextual inquiry showed that our users prefer familiar products and platforms while shopping online, and that they value conversational assistance as support even though they think the technology isn't “there” yet. Most importantly, we found that our participants deeply value efficiency, relevance, and independence.

 

In other words, they prefer to receive relevant information more efficiently and independently, both while shopping online and in other daily activities.


Sense-Making & Data Analysis

I gathered all of our data in database tools such as Airtable and Excel so we could easily access all the information we received. I then organized the collected data using affinity diagramming on Miro, an online whiteboard platform, color-coding the information as I went. I compiled our observations and insights into relevant categories and consequential relationships based on shared intent, purpose, or problems.

Miro board screenshot
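For readers who think in data terms, the affinity step boils down to tagging each observation with a code and then clustering by code. Here is a minimal sketch of that grouping, with entirely hypothetical codes and quotes rather than the study's actual data.

```python
from collections import defaultdict

# Hypothetical coded observations; the codes and quotes below are
# illustrative stand-ins, not the study's actual data.
observations = [
    ("efficiency",   "Just tell me the label, not a full description."),
    ("independence", "I'd rather not ask my roommate to sort my mail."),
    ("familiarity",  "I stick to the apps I already know."),
    ("efficiency",   "Skip the small talk, tell me what I'm holding."),
]

# Group observations by code, mirroring the affinity clusters on the board.
themes = defaultdict(list)
for code, quote in observations:
    themes[code].append(quote)

for code, quotes in sorted(themes.items()):
    print(f"{code}: {len(quotes)} note(s)")
```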

Competitive Analysis

From the competitive analysis, I found that existing computer vision technologies such as Seeing AI are genuinely helpful to blind users for recognizing objects and text. However, there are still usability issues, such as requiring users to aim the camera directly at an object and take a clear picture for successful recognition.

Moreover, current technologies can require users to use both hands to complete a task and do not provide any haptic feedback in error cases. That's why we validated our idea by focusing on an experience that is real-time and hands-free, with built-in troubleshooting.

Usability Testing

As we built our product's initial prototypes, informed by the research I had conducted, we wanted to test their usability early and iteratively. During COVID-19 we could not conduct usability testing in person, so I had to get creative with remote solutions. Even though I see the potential of using Iris in other scenarios in the future, I could only test remotely while participants were at home, so I focused the interaction design on at-home experiences.

After collaborating with product designers and project managers, we decided that the product would be a wearable device with an AI camera and a voice assistant, addressing the real-time, hands-free experience that none of the current competitors offered. Therefore, we set our prototype testing objective as understanding the conversational and interactive experience rather than the ergonomic design of the wearable just yet. This is why I chose the following three elements as our prototype tools.

I designed the Voice User Interface's dialogue flow, and in testing I had my teammate act as the voice assistant while following that flow.

I shipped the headset shown below to our participants. I video-called them, and once they wore the headset with a phone in it, we could see the items they were holding.
I used the Wizard of Oz methodology and had one of my teammates act as “Iris”, following the dialogue flows I had prepared and speaking the information in the user's camera frame. With that, I was able to test both success and error cases with the participants.
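The dialogue flow itself is not reproduced here, but the wizard's script can be modeled as a small state machine with a success and an error branch at each step. Below is a minimal sketch, with hypothetical states and prompts rather than the actual Iris flow.

```python
# A hypothetical Wizard-of-Oz dialogue flow; states, prompts, and
# transitions are illustrative assumptions, not the real Iris script.
DIALOGUE_FLOW = {
    "idle":        {"prompt": "Say 'Iris, what is this?' to begin.",
                    "ok": "recognizing", "error": "idle"},
    "recognizing": {"prompt": "Hold the item steady in front of you.",
                    "ok": "announce", "error": "reframe"},
    "reframe":     {"prompt": "I can't see the item. Please move it into view.",
                    "ok": "announce", "error": "reframe"},
    "announce":    {"prompt": "This looks like {label}. Anything else?",
                    "ok": "idle", "error": "recognizing"},
}

def next_state(state: str, recognized: bool) -> str:
    """Advance the flow: 'ok' on successful recognition, 'error' otherwise."""
    return DIALOGUE_FLOW[state]["ok" if recognized else "error"]

# Example: one failed frame (error case), then a success.
state = "recognizing"
state = next_state(state, recognized=False)   # -> "reframe"
print(DIALOGUE_FLOW[state]["prompt"])
state = next_state(state, recognized=True)    # -> "announce"
print(DIALOGUE_FLOW[state]["prompt"].format(label="a pill bottle"))
```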

Use Cases and Scenario Exploration

Moreover, to validate the scenarios in which our target users would like to use our product, I also conducted an online survey with 35 participants to gauge the importance of each use case and to surface scenarios we might have missed. We used the feedback as a reference while refining our product's future vision, functions, and design.

Percentage of users who would welcome Iris in each use case

Other possible use cases
“I usually need help with correctly identifying multiple very similar items.” - Participant 2

“I would need it (a technology) for reading user manuals, currency, medicine instructions, bottle tags, and oven knobs.” - Participant 17

“It would be crucial to have object recognition in cooking instructions and operating appliances, such as the oven.” - Participant 33

 

Final Prototype


Product's Impact

“I can envision myself wearing (Iris). As much as possible. Probably every day!”
- Participant 1
"I really like that it warned me when I did not hold the item in the frame. The more feedback, the better!"
- Participant 2
“I can see this product being very helpful especially when I need to use both of my hands.”
- Participant 3
Our participants validated the final prototype, confirming that our design concept can provide great value for fast object recognition needs. Also, through our testing and iterations, we believe the final form will allow users to feel more comfortable holding or pointing at items during the recognition process.

Future Vision and Learnings

Our final design can start helping people who are blind get relevant information about objects efficiently. We learned from our research that the product can be applied to more scenarios and use cases, helping users live more independently.

In the future, we are interested in expanding this product's reach across the visual impairment spectrum, including seniors with sight issues. We are also looking into ways to make our product available to those who are not experts in technology, as our participants happened to be on the high end of tech proficiency.