Think Out Loud

University of Washington computer scientist wins a MacArthur ‘genius’ award

By Sheraz Sadiq (OPB)
Oct. 17, 2022 6:38 p.m. Updated: Oct. 27, 2022 3:45 p.m.

Broadcast: Wednesday, Oct. 19

University of Washington computer science professor Yejin Choi was named a 2022 MacArthur Fellow for her work to make artificial intelligence capable of understanding common sense reasoning and intent within human language.

John D. and Catherine T. MacArthur Foundation


The John D. and Catherine T. MacArthur Foundation recently announced the names of 25 people who won an annual fellowship. The highly selective and prestigious award, which has also been described as a “genius grant,” comes with a no-strings-attached stipend of $800,000, which the foundation describes as “an investment in their potential.” The fellowships are given each year to a diverse array of artists, academics, activists and other pioneers in their fields. Yejin Choi is a 2022 MacArthur Fellow and a computer science professor at the University of Washington. She joins us to talk about her efforts to make artificial intelligence capable of common sense reasoning, ethical decision-making and recognizing intent behind human language.

The following transcript was created by a computer and edited by a volunteer:

Dave Miller: This is Think Out Loud on OPB, I’m Dave Miller. We end today with a newly minted MacArthur Award winner from the University of Washington. Yejin Choi is a Professor in the Paul G. Allen School of Computer Science and Engineering at UW. She received the so-called ‘Genius Grant’ for her work that pushes the boundaries of Artificial Intelligence. She’s trying to teach AIs to be able to use common sense reasoning, identify information that’s not trustworthy and even make ethical judgments. Yejin Choi joins us to talk about her work. Congratulations and welcome.

Yejin Choi: Thank you.

Miller: What does ‘common sense reasoning’ mean in the context of Artificial Intelligence?

Choi: It broadly refers to what’s easy for humans, everyday knowledge about how the world works, that is strikingly hard for AI.

Miller: What’s an example of something that I can do without, quote, ‘thinking about it,’ but that would be hard to get an AI to do?

Choi: In general it’s about obvious knowledge that you and I don’t talk about. For example, a horse has two eyes, not three or four, but we don’t talk about it. So then, if you ask that kind of question, AI can struggle.

Miller: Is this a question of just not having powerful enough computers, even given how amazingly powerful computers are compared to what they were, say, 20 years ago, or is the problem deeper than that?

Choi: Deeper than that – because the computers are more powerful, we do now have some hope that we might be able to investigate this problem. But the actual challenge is very, very deep.

Miller: What makes it challenging?

Choi: So there’s a fundamental difference between how humans learn about the world, by learning the actual concepts, and how machines learn. Let’s take an example: an apple, the fruit. We know we can eat it, it’s crunchy if you bite into it, and it’s somewhat hard, but not hard enough to sustain a lot of weight. We know all of these things, so, say, you’re driving a car and you see an apple on the ground, and you know that it’s okay to drive over it. But it’s not as okay if you see a sharp metallic object on the ground. These things may not be written down or spoken out loud by humans, and machines tend to learn by just reading lots of internet data, so if people have not talked about something, then AI doesn’t know what to do about it.

Miller: So what is the challenge in front of you? How do you teach an AI, which is so literal in so many ways and which, in the past, had to be explicitly taught everything it learned, to soak up knowledge, knit different pieces of knowledge together, make inferences and do the stuff that humans do without effort?

Choi: If we think about how humans learn, we learn from diverse modes of learning. We don’t learn only from internet data. In fact, if you imagine having a child learn only from internet data, I don’t think it would actually be successful in many ways. We do learn by reading books, reading textbooks in particular, and we do learn by asking questions; especially when a child is learning about new objects, they ask a lot: what is this, why is this? And so we’re mimicking that sort of exchange of knowledge in declarative form and seeing how far we can reach.

Miller: How far have you gotten in terms of creating an AI that’s more capable of common sense reasoning?

Choi: We’ve made a lot of progress, but it’s still far from how robust humans are when it comes to common sense reasoning. But now, for example, if we teach machines about X repelling Y’s attack, we tell the machine that maybe X is the kind of person who’s brave and strong, in order to be able to repel Y’s attack as opposed to running away. So we teach that as an example and then ask the machine, how about repelling someone’s attack in a chess game? Now it’s a very different situation; it’s intellectual combat. So then the machine extrapolates out of the original fistfight context to an intellectual fight and says, ‘Oh, X might be seen as someone competitive and smart.’ We’ve made that much progress.

Miller: So the computer could recognize the different contexts, and repelling an attack from somebody who’s physically fighting you, it recognizes, is different from winning at a chess game.

Choi: Yeah. So it does recognize that there’s some analogical similarity, but I wouldn’t say that it’s super reliable for all the corner cases. And the thing about life is that it’s full of corner cases that humans are very good at just dealing with.

Miller:  ‘Corner cases,’ meaning ‘complicated situations?’

Choi: It could be complicated, it could be just obvious corner cases, trivial corner cases. What I mean is, for example, why would there be an apple on the road when you drive? How often do you actually encounter an apple when you drive? It might have been zero.

[Dave Miller, laughing]


Miller: Sure. But it’s not unusual for there to be unusual objects, right? I mean, we regularly see stuff that shouldn’t be there. But you’re saying we’re just used to it as humans?

Choi: Yeah. And we are ready for a lot of them. We almost always know what to do with them. Even if you don’t know what it is, you still know what the common sense reaction should be: maybe avoid it if possible.

Miller: Right. And I guess it’s the kind of binary question, ‘Do we avoid it or not?’ Can we squash the banana or the apple, or do we avoid the rock that will break our oil pan?

Choi: Yeah. Yeah, exactly.

Miller: So what are the potential applications for an AI that’s more capable of common sense reasoning, setting aside questions of, say, an autonomous vehicle that literally could have to make the decision of whether or not to swerve away from an apple? What other real-life examples could be really important?

Choi: For example, we now have home devices that can speak in natural language, interact in natural language, and ask and answer questions. And there was one incident not so long ago of a home device suggesting that a child touch an electrical socket with a metal coin. That’s just such a bad idea, common sense-wise. Fortunately, the child did have the common sense to know that it’s a stupid thing to do. But even conversational AI systems need common sense in order to avoid harmful conversations.

Miller: How far along are AIs right now in being able to understand sarcasm? Because, to me, that’s kind of a language version of the apple in the road. It’s not exactly what it would seem to be.

Choi: Yeah. So with figurative language, if it’s the kind of figurative language that a lot of people use in a very predictable context, then AI will have a chance to understand it. If it’s a different kind, then it might have a harder time.

Miller: As I noted, you’re also working to make Artificial Intelligence capable of certain kinds of ethical decision making. What’s an example of what you’ve been working on?

Choi: So, surprisingly, a lot of everyday decision making has moral implications, even including, you know, not suggesting that a child touch the electrical socket with a metal coin, because if you do, there are moral consequences.

Miller: There are moral consequences for swerving out of the way of an apple and driving into oncoming traffic as well.

Choi: Yeah, that too. So a lot of the seemingly trivial decisions that we make about everyday actions actually do have moral implications. And the more closely AI interacts with humans, it’s unavoidable that it’s going to encounter situations in which its decisions will have some kind of ethical or moral implications.

Miller: What are you interested in working on yourself, in terms of ethical or moral questions?

Choi: First of all, there are the obvious cases: don’t suggest that a child do dangerous things. I think a lot of people would just agree on that. Also, AI should be aware of the questions where people tend to disagree. AI shouldn’t side with just one opinion in those cases, but rather be aware that it’s something that humans are having disagreements about. This includes even cultural norms – different cultures have different standards, and that’s okay.

Miller: When you started your work, especially in trying to push forward with ‘common sense’ reasoning in artificial intelligence, what were the kinds of responses you got from fellow computer scientists?

Choi: So, ‘common sense’ was a very hot topic at the beginning of the AI field, in the ’70s and ’80s, and there was a major, I guess, failure – after a while, nothing really worked well. So the field then decided, okay, this is too hard and we will never be able to do this, so don’t work on it, period. And then I thought about it for a couple of years, and I realized their conclusion is inconclusive, because all that research was based on less compute, less data, no crowdsourcing, and not the same kind of computational models that we have now.

Miller: This MacArthur Grant is a real, explicit message from a panel of smart people that they’re both impressed with the work you’ve already done and excited to see what you are going to do next. What does this grant mean to you?

Choi: This was the last thing I expected in my life, and I was genuinely at a loss for words; I wasn’t sure what to think or what to say for some time, and I still cannot believe it. It means that it’s okay to be ‘not perfect,’ it’s okay to still have a lot to learn and it’s okay to make mistakes along the way. But I shouldn’t give up.

Miller: Yejin Choi, thanks very much and congratulations again.

Choi: Thank you.

Miller: Yejin Choi is a Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington and a 2022 MacArthur Foundation Fellow. Tomorrow on the show, Portland voters will soon get a chance to completely overhaul the way city government works and elected officials are chosen. The mayor would become more powerful.

The City Council would expand from five members to 12, and elections would be conducted using ranked choice voting. We’ll explore how this would work and hear arguments for and against the changes on the next Think Out Loud. There are a lot of ways you can get in touch with us if you have comments about what we’ve done, or questions or suggestions. On Facebook and Twitter, we are at OPBTOL. You can also email us at thinkoutloud@opb.org. You can also leave us a voicemail; the number is 503-293-1983. Thanks very much for tuning in to Think Out Loud on OPB and KLCC. I’m Dave Miller, we’ll be back tomorrow.

Announcer: Think Out Loud is supported by Steve and Jan Oliver, the Rose E. Tucker Charitable Trust and Michael, Kristen, Andrew and Anna Kern.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook or Twitter, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.
