How University of Washington researchers are using augmented reality to improve accessibility

By Gemma DiCarlo (OPB)
Nov. 17, 2023 12:47 p.m.

Broadcast: Monday, Nov. 20

Augmented reality technology allows the user to see the real world while overlaying virtual elements. It’s often used to enhance gaming experiences (think Pokémon GO), but researchers in the University of Washington’s Makeability Lab are using AR to improve accessibility for users who are blind, have low vision or other disabilities. For example, the RASSAR app uses a smartphone to scan indoor spaces for safety and accessibility issues, while ARTennis uses an AR headset to help users with low vision track fast-moving tennis balls.

Xia Su and Jae Lee are both Ph.D. students studying computer science and engineering at UW. They join us to talk about their work and what future AR technology might look like.

Note: The following transcript was created by a computer and edited by a volunteer.

Dave Miller: From the Gert Boyle Studio at OPB, this is Think Out Loud. I’m Dave Miller. Augmented reality allows someone to see the real world with virtual elements overlaid on top. It’s often used to enhance gaming experiences. Think Pokémon GO. But researchers at the University of Washington’s Makeability Lab are using AR to improve accessibility for users who are visually impaired or have other disabilities. One project could make it easier for people with vision issues to play tennis. Another could alert them about potential dangers in indoor spaces.

Xia Su and Jae Lee are both PhD students studying computer science and engineering at the University of Washington. They join us now to talk about their work. Welcome to the show.

Jae Lee: Hey, how you doing? Thanks for inviting me.

Miller: Doing great. Xia Su, first – can you just tell us how you describe augmented reality for people who’ve never experienced it before?

Xia Su: Sure. Augmented reality is a technology that overlays information on top of our natural vision, so that you can receive additional information about the surrounding space. It’s usually used for amusement-related purposes.

Miller: Jae Lee, where did the idea come from – to use this technology not for gaming, but to improve our lives?

Lee: Augmented reality is a technology that has been used for entertainment for quite some time, and I don’t doubt that it’s going to continue to make an impact in the entertainment area. But we wanted to look at AR technology not just as an entertainment machine, but rather ask, can it assist with our day-to-day tasks? And when it comes to AR glasses and other wearable augmented reality technologies, they usually have cameras on them. And we thought, “Well, who can benefit from being able to wear cameras?” We thought that would be blind or low-vision people, and hence we went in this direction of applying AR technologies for those with accessibility needs.

Miller: Xia Su, can you describe the RASSAR app that you’ve helped create?

Su: Sure. The RASSAR app we created scans indoor spaces and shows potential accessibility and safety issues in augmented reality. Basically, we have the app installed on iPhones, and when the user scans their home with the iPhone, the phone detects the surrounding environment in real time, identifies what is present in it, and runs through a detailed rubric about accessibility and safety. All the possible hazards are then shown in the 3D environment in real time, so the information can be conveyed to the user, along with some possible improvement suggestions.
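To make that “detect objects, run a rubric, flag hazards” loop concrete, here is a minimal, self-contained Python sketch. RASSAR itself is an iPhone/ARKit app; every name, object label, and threshold below is an illustrative placeholder rather than the app’s actual code.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # what the detector thinks the object is
    height_m: float     # measured height above the floor, in meters
    position: tuple     # 3D point where an AR marker would be anchored

# Rubric: label -> (violation check, suggestion). Thresholds are illustrative.
RUBRIC = {
    "counter":   (lambda o: not 0.71 <= o.height_m <= 0.91,
                  "Counter height may not be accessible"),
    "socket":    (lambda o: o.height_m < 0.4,
                  "Low socket: a child could reach it"),
    "throw_rug": (lambda o: True,
                  "Throw rug: possible tripping hazard"),
}

def scan(objects):
    """Return (object, suggestion) pairs for every rubric violation."""
    hazards = []
    for obj in objects:
        rule = RUBRIC.get(obj.label)
        if rule and rule[0](obj):
            hazards.append((obj, rule[1]))
    return hazards

# Example: a stubbed "scan" of a room with two violations.
room = [DetectedObject("counter", 1.05, (0.2, 0.0, 1.3)),
        DetectedObject("socket", 0.15, (1.1, 0.0, 0.4))]
for obj, tip in scan(room):
    print(f"{obj.label} at {obj.position}: {tip}")  # in AR, a marker goes here
```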

Miller: What are examples of what the app is looking for?

Su: Sure. For example, one very common thing we found in our user study is that people may have hazardous items in their homes. They may have throw rugs in their bedrooms or living rooms, which can be tripping hazards for our target communities, like wheelchair users or blind people. There can also be furniture dimensions that don’t fit the user’s needs. For example, some kitchen counters and coffee tables might be a little too high or too low for the user, which can sometimes cause inconvenience or, in some cases, danger.

Miller: What’s the use case for this? I mean, a real world application, because I imagine in people’s existing indoor spaces, that they know about the dangers, that hopefully they’ve tried to mitigate those already. So when might somebody use this?

Su: There are a lot of cases. For example, even though people may be very used to their home spaces, their life can go through changes. They can, say, get injured and suddenly be in a wheelchair. In some cases they might have guests coming to their place – for example, a wheelchair user coming to visit – or they may have a child and need to do some child-proofing of the home. In all these cases, the app can greatly help, since all the information is already incorporated into it. The user doesn’t need to go through a very complex process of searching for information online, measuring everything and trying to improve everything. They can just use this app, and all these solutions will be provided in a few minutes.

Miller: So for example, expectant parents could put the app on their phone and then have the camera take a visual pass of their living room. Would the app say, “There are plugs here that a kid could stick their finger in; there’s a bookshelf that’s not bolted to the wall?” I mean, how granular would the detail be?

Su: Currently, we detect a lot of height issues and also dangerous-item issues. For example, if an electric socket is a little too low, a child might be able to reach it, and we flag it as a possible danger. We currently have a list of 20 different issues that could pose a danger to different communities.

These 20 issues are classified into four categories: object dimension, like a table being too high or too low, which we can capture very well; object position, for example an electric socket being low enough to be dangerous to kids; dangerous items, like tripping hazards and sharp objects; and a lack of assistive items. I think for all these accessibility issues, we can detect them very precisely, and thus at high granularity.
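Organized as data, that taxonomy might look like the snippet below. The category names come straight from the interview; the example under the assistive-items category is an assumption, since no specific item was named on air.

```python
# The four rubric categories Su describes. Example items are illustrative.
RUBRIC_CATEGORIES = {
    "object_dimension":       ["table too high or too low", "counter height"],
    "object_position":        ["electric socket low enough for a child to reach"],
    "dangerous_item":         ["throw rug (tripping hazard)", "sharp object"],
    "missing_assistive_item": ["no grab bar near the toilet"],  # assumption
}

for category, examples in RUBRIC_CATEGORIES.items():
    print(category, "->", ", ".join(examples))
```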

Miller: Jae Lee, one of the projects you’ve worked on is called ARTennis. Can you describe it?

Lee: Yeah. That project is about how we can help low-vision people specifically – how can we assist them with playing tennis?

And then, looking at it in a broader sense, how do we get them to play sports? There are, of course, sports that have been adapted so that blind or low-vision people can play them – similar, but definitely with some differences.

We wanted to explore how we can get them to play the sports that sighted people play as well. What the project aims to do – we realized that a lot of low-vision sports players have trouble sensing depth. For example, if you think about basketball: when a friend passes us the ball, we can understand how far away it is and catch it, whereas low-vision people might have trouble understanding how far away the ball is from them. So we wanted to place visualizations on top of moving sports objects to give people a better sense of how far away an object is, how fast it’s moving, and so on.

Miller: Can you describe what is actually going to be superimposed on the user’s headset or glasses when they play ARTennis?

Lee: Right. With augmented reality technology, they can still see the world – if you’re playing tennis, you can see the ball, you can see the net. What is superimposed onto the real world is a red dot that represents where the ball is; it’s slightly larger than the ball, just so that it’s easier to hit. There are also four green arrows forming almost a crosshair, and that visualization increases the surface area of the overall ball. The reason this is important is that a lot of low-vision people have vision in only part of their field of view. If it’s just a small red dot, the ball may move into parts of their field of vision where they don’t have any vision. So the crosshairs help them locate the ball better, especially since the ball is small in tennis. That’s what’s being overlaid on top of the ball.
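As a rough sketch of that overlay geometry – not ARTennis’s actual code, and with scale factors invented for illustration – the placement of the red dot and the four arrows could be computed like this:

```python
def tennis_overlay(cx, cy, ball_r, dot_scale=1.5, arrow_gap=3.0):
    """Given the ball's screen position and radius, return the red dot
    and the four green arrow anchors that form the crosshair."""
    dot = {"center": (cx, cy), "radius": ball_r * dot_scale, "color": "red"}
    d = ball_r * arrow_gap  # arrows sit outside the dot, enlarging the target
    arrows = [
        {"tip": (cx, cy - d), "points": "down"},   # above the ball
        {"tip": (cx, cy + d), "points": "up"},     # below the ball
        {"tip": (cx - d, cy), "points": "right"},  # left of the ball
        {"tip": (cx + d, cy), "points": "left"},   # right of the ball
    ]
    return dot, arrows

# Example: a ball tracked at the center of a 1280x720 view, ~12 px radius.
dot, arrows = tennis_overlay(cx=640, cy=360, ball_r=12)
print(dot, [a["tip"] for a in arrows])
```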

Miller: Right. I mean, the ball is small and it moves really fast. It seems like you started with a challenging sport to make more visually readable. How quickly can the visual information – meaning a ball speeding through space – be processed and then superimposed back onto what the user is seeing?

Lee: First of all, we chose tennis as a sort of initial challenge because tennis has a very small, fast-moving object in the tennis ball. And so we thought, “Hey, if it works for tennis, it might work for other sports as well, such as basketball.”

And so how quickly can it do it? Well, it’s running at about 30 frames per second, which is considered real time. But the problem is, of course, that a tennis ball is very fast, as you said, so there is a bit of a lag behind it as of now. That has to do with the limitations of current technology: some of the AI – artificial intelligence – algorithms we have can’t actually be run on the glasses themselves. So what we had to do is stream video, almost like YouTube would, onto an external device and run the AI algorithm there. And of course there’s latency and lag that comes from sending video over to an external device. But once technology catches up and some of these problems are gone, we’re confident it will run fast enough to track even a fast-moving tennis ball in space.
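The trade-off Lee describes can be sketched in a few lines. This simulation stands in for the real system (the headset streams frames to an external machine running the detector); the 40 ms delay and the detector output are invented stand-ins for the network round trip and the AI model.

```python
import time

FRAME_BUDGET = 1 / 30  # ~33 ms per frame at 30 fps

def remote_detect(frame, network_delay=0.040):
    """Stand-in for streaming a frame to an external machine and getting
    the ball's position back; time.sleep simulates the network round trip."""
    time.sleep(network_delay)
    return (640, 360)  # pretend detector output: ball center (x, y)

t0 = time.perf_counter()
ball_xy = remote_detect(frame=None)
lag = time.perf_counter() - t0
print(f"round trip: {lag * 1000:.0f} ms (budget: {FRAME_BUDGET * 1000:.0f} ms)")
if lag > FRAME_BUDGET:
    print("overlay trails the ball -- the lag Lee mentions")
```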

Miller: Xia Su, that gets to a bigger question about just accessibility, financially. Where are we right now in terms of the quality of the tools that we’re talking about, and how much it costs for people to actually be able to have them in their hands or on their faces?

Su: Great question. I think it depends on what kind of device you’re using for augmented reality. Although we two researchers are both using augmented reality for accessibility purposes, we are using different devices. The RASSAR app I’m developing uses phone-based augmented reality, and in that case, as long as you have a compatible iPhone – an iPhone Pro – it will support the whole experience we want to provide. So for a lot of people, the extra cost of using this device is $0.

But in Jae Lee’s case, since he’s using the HoloLens for the AR implementation, there will potentially be a real cost for a lot of people, considering that the HoloLens is just not widespread in people’s hands.

And Jae, do you want to answer this question?

Miller: Yeah, Jae - how do you think about the cost, which is its own version of accessibility?

Lee: I absolutely agree that cost is certainly going to be a big problem, especially with wearable AR glasses. As Xia said, when it comes to phone-based AR, people who already have phones may be able to utilize the technology right away, whereas I haven’t seen anyone walking around wearing AR glasses as of now. The headset we used is $3,500, and part of the reason it’s so expensive is that it’s really designed for research purposes. It’s developed by Microsoft and called the HoloLens. But there are certainly cheaper alternatives that the public can get their hands on right now, going as low as $200 to $300. And I think with time the technology will certainly get better, and it will get cheaper as well – the trend as companies build these products has been for costs to decline.

And of course, I’m not saying that it won’t still be an accessibility hurdle, because again, people would have to purchase these technologies. But hopefully at some point it’s being used for so many different use cases that it becomes worthwhile to invest in, and then accessibility can be a part of that headset as well.

Miller: Jae, in the minute we have left, can you give us a sense for the privacy questions that you’re also looking into these days?

Lee: Privacy is certainly a big problem. As I said, we look at AR glasses as cameras that are constantly on top of your head. And if they’re constantly looking out into the environment, then you run into a lot of privacy concerns, especially regarding the people around you – how would their faces, for example, be included as part of the data that’s being sent over? I think these are big problems that still need to be resolved.

We’re looking at this from the perspective of, for example: can we blur people’s faces, and can we somehow get consent from the people who are around? So, “Hey, yes, I am recording the space, but I need it for a specific reason. Here’s the reason why – do you give consent?” We’re looking at how we can get consent from people and make sure their privacy is preserved, while also trying to help people who have these accessibility needs – and to let the general public get access to some of these more cutting-edge technologies as well.
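The face-blurring idea is straightforward to prototype. Here is a minimal sketch using OpenCV’s stock Haar-cascade face detector; this illustrates the concept only – it is not the lab’s pipeline, and a real system would layer the consent flow Lee describes on top.

```python
import cv2  # pip install opencv-python

# OpenCV ships a pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Blur every detected face before the frame leaves the device."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# e.g. safe = blur_faces(cv2.imread("room.jpg")) before streaming the frame out
```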

Miller: Jae Lee and Xia Su, thanks very much.

Lee: Thank you so much.

Su: Thanks for having us.

Miller: Jae Lee and Xia Su are PhD students in computer science and engineering at the University of Washington.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.
