According to the CDC, around 4.3 million people in the US live with a visual impairment. In a world largely designed by and for sighted people, they face daily obstacles in perceiving their environment, such as reading printed instructions, sorting mail, and differentiating medicine bottles.
What is Iris?
Iris is a pair of AI-driven smart glasses that recognizes objects and text, and relays the detected information to the user through audio interaction.
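To make the concept concrete, below is a minimal sketch of what such a recognize-and-speak loop could look like. The specific libraries (OpenCV for capture, pytesseract for text recognition, pyttsx3 for speech output) are illustrative assumptions on my part, not the actual Iris stack.

```python
# A minimal sketch of a recognize-and-speak loop, assuming a glasses-mounted
# camera exposed as a normal webcam. OpenCV, pytesseract, and pyttsx3 are
# illustrative library choices, not the actual Iris stack.
import cv2          # camera capture
import pytesseract  # OCR for printed text
import pyttsx3      # offline text-to-speech

def speak(engine, text: str) -> None:
    """Read a message aloud to the user."""
    engine.say(text)
    engine.runAndWait()

def main() -> None:
    tts = pyttsx3.init()
    camera = cv2.VideoCapture(0)   # device 0: the glasses camera
    ok, frame = camera.read()      # grab a single frame
    camera.release()
    if not ok:
        speak(tts, "I could not access the camera.")
        return
    # Grayscale generally improves OCR on printed labels.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()
    if text:
        speak(tts, f"I can see text that reads: {text}")
    else:
        speak(tts, "I could not find any text. Try holding the item closer.")

if __name__ == "__main__":
    main()
```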
How might we provide tractable, effortless, and independent object and text recognition experiences to people who are blind?
I started our research process with a literature review to learn what has been done in this space, and followed it with interviews with experts who have worked in accessibility technology design. From the expert interviews and secondary research, we learned that:
Blindness is a spectrum; narrow down your target user.
Do not assume how blind people use technology.
Do not exclude those who have low technology proficiency.
Examine current accessibility guidelines.
Start talking with blind users as soon as possible.
Participants & Recruitment
For the primary research, I recruited our participants by reaching out to associates and faculty, as well as to nonprofit organizations that work with and support blind communities, such as the UW Disability Center, the Seattle Deafblind Service Center, and the American Council of the Blind.
I recruited a total of 10 participants, ages 19 to 60, who identify as "completely blind with no vision" and had shopped online in the past 30 days. I set no technical requirements beyond being able to use the Internet, as I did not want to exclude participants based on their level of technical proficiency.
User Interviews Findings
The user interviews and the remote contextual inquiry showed that our users prefer familiar products and platforms when shopping online, and that they value conversational assistance as support even though they feel the technology isn't "there" yet. Most importantly, we found that our participants deeply value efficiency, relevance, and independence.
In other words, they want to receive relevant information more efficiently and independently, both while shopping online and in other daily activities.
Data Analysis and Coding
I gathered all of our data in database tools such as Airtable and Excel so that we could easily access every piece of information we received. I then organized the collected data using the affinity diagramming method on Miro, an online whiteboard platform, color-coding the information as I went. I compiled our observations and insights into relevant categories and consequential relationships based on shared intent, purpose, or problems.
Miro board screenshot
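As an illustration of how coded notes turn into affinity clusters, here is a toy sketch; the codes and notes below are hypothetical stand-ins, not our actual interview data.

```python
# A toy sketch of grouping color-coded interview notes into affinity
# clusters. The codes and notes are hypothetical examples, not real data.
from collections import defaultdict

# Each note was tagged with a code during analysis (the "color" on the board).
notes = [
    ("efficiency",   "Skips long product descriptions read by the screen reader"),
    ("independence", "Prefers not to ask family members to check labels"),
    ("efficiency",   "Abandons carts when checkout takes too many steps"),
    ("familiarity",  "Shops on the same two websites she already knows"),
]

# Group notes by shared code to form affinity clusters.
clusters = defaultdict(list)
for code, note in notes:
    clusters[code].append(note)

for code, grouped in clusters.items():
    print(f"{code} ({len(grouped)} notes)")
    for note in grouped:
        print(f"  - {note}")
```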
From the competitive analysis, I found that existing computer vision technologies such as Seeing AI are already quite helpful to blind users for recognizing objects and text. However, usability issues remain, such as requiring users to aim the camera directly at an object and capture a clear picture for successful recognition.
Moreover, current technologies can require users to use both hands to complete a task, and they provide no haptic feedback in error cases. That is why we validated our idea by focusing on an experience that is real-time and hands-free, with built-in troubleshooting.
As we built our product's initial prototypes, informed by the research I had conducted, we wanted to test their usability early and iteratively. During Covid-19 we could not conduct usability testing in person, so I had to push our boundaries and get creative with remote solutions. Even though I see the potential of using Iris in other scenarios in the future, since I could only test remotely while participants were at home, I focused the interaction design on at-home experiences only.
After collaborating with product designers and project managers, we decided that the product would be a wearable device with an AI camera and a voice assistant, addressing the real-time, hands-free experience that none of the current competitors offered. We therefore set our objective for prototype testing as understanding the conversational and interactive experience, rather than the ergonomic design of the wearable, which would come later. This is why I chose the following three elements as our prototype tools.
My teammate acting as the Voice Assistant
I designed the Voice User Interface's dialogue flow, and during testing my teammate acted as the Voice Assistant, following that flow.
I shipped the headset shown below to our participants. I video-called them, and once they wore the headset with a phone mounted in it, we could see the items they were holding.
I used the Wizard of Oz methodology and had one of my teammates act as "Iris", following the dialogue flows I had prepared and speaking aloud the information that appeared in the user's camera frame. With that, I was able to test both success and error cases with the participants (a simplified sketch of such a flow follows below).
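Here is a simplified sketch of what such a scripted flow can look like; the states, prompts, and trigger phrases below are illustrative, not our full script.

```python
# A simplified sketch of a Wizard of Oz dialogue flow. The wizard reads the
# prompt for the current state, then follows the matching transition.
# States, prompts, and trigger phrases are illustrative, not the full script.
FLOW = {
    "idle": {
        "prompt": "Say 'Iris, what am I holding?' to begin.",
        "recognized": "describe",   # wizard could identify the item
        "unclear":    "error",      # item out of frame or blurry
    },
    "describe": {
        "prompt": "You are holding {item}. Say 'read it' to hear the label.",
        "read it": "read_text",
        "done":    "idle",
    },
    "error": {
        "prompt": "I can't see the item clearly. Try holding it closer.",
        "recognized": "describe",
        "unclear":    "error",      # stay here; lets us test troubleshooting
    },
    "read_text": {
        "prompt": "The label says: {text}.",
        "done": "idle",
    },
}

def next_state(state: str, event: str) -> str:
    """Advance the flow; unknown events fall back to the error state."""
    return FLOW[state].get(event, "error")

# Example session: a success path followed by a deliberate error case.
state = "idle"
for event in ["recognized", "read it", "done", "unclear"]:
    state = next_state(state, event)
    print(event, "->", state, "|", FLOW[state]["prompt"])
```

Scripting the flow as explicit states made it easy for the teammate playing "Iris" to stay consistent across sessions, including the error paths we wanted to observe.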
Use Cases and Scenario Exploration
Moreover, in order to validate the scenarios in which our target users would like to use our product, I also conducted an online survey with 35 participants to understand the importance of each use case and to explore scenarios we might have missed. We used the feedback as a reference while improving our product's future vision, functions, and design.
Percentage of users who would welcome Iris in each use case
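For transparency, this is roughly how the percentages behind a chart like the one above are computed; the use cases and counts below are hypothetical placeholders, not our actual survey results.

```python
# A small sketch of computing per-use-case percentages from survey responses.
# The use cases and counts are hypothetical placeholders, not real results.
N_PARTICIPANTS = 35

# Number of the 35 respondents who welcomed Iris in each use case.
welcomes = {
    "Reading printed instructions": 33,
    "Sorting mail": 30,
    "Differentiating medicine bottles": 34,
}

for use_case, count in welcomes.items():
    pct = 100 * count / N_PARTICIPANTS
    print(f"{use_case}: {pct:.0f}% of users welcome Iris")
```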