Making Augmented Reality Accessible: A Case Study of Lens in Maps

Augmented reality (AR) has the potential to revolutionize how we interact with the world, but its visual-centric nature often excludes users with visual impairments. This presentation will explore how Google Maps' Lens in Maps feature was adapted to provide a meaningful AR experience for visually impaired users. By leveraging audio cues, haptic feedback, and intuitive screen reader interactions, we demonstrate how AR can be made accessible without compromising its core functionality. Join us as we discuss the design decisions, challenges, and successes of this project, and explore the potential for broader applications of accessible AR in various domains.
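
To make the approach concrete, here is a minimal sketch (in Kotlin, using standard Android accessibility APIs) of how an AR surface might pair a haptic pulse with a screen reader announcement when a place is recognized. The class, callback, and announcement format are illustrative assumptions, not Lens in Maps' actual implementation.

    import android.content.Context
    import android.view.HapticFeedbackConstants
    import android.view.View
    import android.view.accessibility.AccessibilityManager

    // Hypothetical helper, not Google's code: surfaces an AR place detection
    // through non-visual channels when an accessibility service is running.
    class PlaceAnnouncer(context: Context, private val arView: View) {

        private val accessibilityManager =
            context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager

        // Assumed to be called by the AR pipeline when a place enters the camera view.
        fun onPlaceRecognized(name: String, distanceMeters: Int, clockDirection: Int) {
            // Sighted users keep the unchanged visual overlay; the extra channels
            // are added only when a screen reader such as TalkBack is active.
            if (!accessibilityManager.isEnabled) return

            // A haptic pulse marks the moment of detection without requiring
            // the user to look at the screen.
            arView.performHapticFeedback(HapticFeedbackConstants.VIRTUAL_KEY)

            // The spoken announcement uses an egocentric clock-face direction,
            // giving a non-visual anchor in place of the on-screen AR label.
            arView.announceForAccessibility(
                "$name, $distanceMeters meters, at $clockDirection o'clock"
            )
        }
    }

The point of the sketch is the pattern: detection events are translated into audio and haptic channels only when assistive technology is enabled, so the core AR functionality is unchanged for sighted users.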

Interview:

What is the focus of your work?

I am focused on pioneering next-generation interactions for the Lens in Maps feature, aiming to revolutionize the Google Maps experience.

What’s the motivation for your talk?

I hope to advocate for the inclusion of low-vision accessibility in augmented reality applications, where applicable and feasible. 

Who is your talk for?

Our ideal audience includes individuals with a foundational understanding of augmented reality principles and a keen interest in exploring its applications, regardless of their current role or experience level.

What do you want someone to walk away with from your presentation?

I hope to inspire attendees to envision a future where augmented reality technology serves as a powerful tool for empowerment, enabling individuals with low vision to fully participate in and benefit from the digital world.

What do you think is the next big disruption in software?

Neural-interface technology, enabling direct brain-to-text conversion for head-worn devices, will become increasingly indispensable as natural language interfaces proliferate. This will alleviate the social and practical constraints of traditional input methods in public settings.


Speaker

Ohan Oda

Senior Software Engineer @Google, Expert in AR with Maps, starting from MARS @ColumbiaUniversity (2005), then CityLens @Nokia (2012), and currently Live View @Google

I was born in China and grew up in Japan. I came to the US after high school and graduated from the University of Wisconsin–Madison with a double major in Computer Engineering and Computer Science. I earned my Ph.D. in Computer Science, with a focus on Augmented Reality, from Columbia University under Prof. Steven Feiner's supervision. I currently work at Google as a Software Engineer on the Live View features in Google Maps.

From the same track

Session XR

Accessible Innovation in XR: Maximizing the Curb Cut Effect

Wednesday Nov 20 / 11:45AM PST

Accessibility is often seen as the last step in many software projects: a checklist to be crossed off to satisfy regulations. But in reality, accessible design thinking can lead to a fountain of features that benefit disabled and non-disabled users alike.

Dylan Fox

Director of Operations @XR Access, Previously UC Berkeley Researcher & UX Designer, Expert on Accessibility for Emerging Technologies

Session

Multidimensionality: Using Spatial Intelligence x Spatial Computing to Create New Worlds

Wednesday Nov 20 / 02:45PM PST

Spatial intelligence and multimodal AI models are transforming the way we design user experiences in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and Extended Reality (XR) / spatial computing environments.

Erin Pañgilinan

Spatial Computing x AI Leader, Author of "Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing", fast.ai Deep Learning Program Diversity Fellow

Session

Building Inclusive Mini Golf: A Practical Guide to Accessible XR Development

Wednesday Nov 20 / 01:35PM PST

Creating accessible tools and experiences in VR is an ongoing challenge, especially in visually intensive environments like gaming.

Colby Morgan

Technical Director @Mighty Coconut, Walkabout Mini Golf, XR Accessibility Advocate

Session

Panel: Next Generation Inclusive UIs

Wednesday Nov 20 / 03:55PM PST

Augmented, Virtual, Extended, and Mixed Reality unlock the ability to integrate the power of computers more seamlessly into our physical three-dimensional world. However, designing the user experience of these next-generation UIs to be as inclusive as possible comes with a lot of challenges.

Erin Pañgilinan

Spatial Computing x AI Leader, Author of "Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing", fast.ai Deep Learning Program Diversity Fellow

Colby Morgan

Technical Director @Mighty Coconut, Walkabout Mini Golf, XR Accessibility Advocate

Dylan Fox

Director of Operations @XR Access, Previously UC Berkeley Researcher & UX Designer, Expert on Accessibility for Emerging Technologies