Two years ago, the Portland City Council adopted one of the nation’s strictest bans on facial recognition technology. The council passed a policy earlier this month outlining how city bureaus should approach other surveillance technologies, such as license plate readers and traffic cameras. Meanwhile, the Portland Police Bureau is mulling a program to deploy drones at car crash sites. Kate Kaye is a Portland-based journalist covering technology and artificial intelligence. She’s been reporting on these issues and joins us to tell us more.
This transcript was created by a computer and edited by a volunteer.
Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. We start today with Portland’s evolving approach to surveillance technology. In 2020 the Portland City Council adopted one of the nation’s strictest bans on facial recognition technology. But earlier this month, the Council took a softer approach. Instead of an outright ban, the unanimous resolution outlined how city bureaus should approach other surveillance technologies like license plate readers or traffic cameras. It came at a time when the Portland Police Bureau is considering a program to deploy drones at car crash sites. Kate Kaye has been writing about all of this. She is a Portland-based freelance journalist who focuses on technology, surveillance and artificial intelligence. Welcome back to Think Out Loud.
Kate Kaye: Thank you.
Miller: Can you give us a sense for what this new policy requires?
Kaye: On February 1, the City Council passed the policy resolution on surveillance technology, and they passed it unanimously. It calls for a couple of things. Like you said, it doesn’t ban anything, doesn’t stop the use of any kind of technology. What it does is require every city bureau to do an inventory of the technology they use. So they’re going to have to work with the Planning and Sustainability Bureau, and they’re going to have to keep a registry, a list of the technologies they use that would be considered surveillance technologies.
So the Transportation Bureau would possibly be listing something like software they use to understand how people move around the city using location data and other types of transportation data. The Police Bureau absolutely would have to list any number of technologies. I think the City Auditor had a report that they did with the Police Bureau last year that listed something like 37 surveillance technologies the police already use. So that’s one of the things the resolution requires, and it’s similar to what Seattle does.
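A hypothetical sketch of what one entry in such a registry might look like as a data structure; the fields and example values here are invented for illustration, not drawn from the actual resolution:

```python
# A hypothetical per-bureau surveillance technology registry entry.
# Field names and example values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    bureau: str
    technology: str
    purpose: str
    data_collected: list[str] = field(default_factory=list)

registry = [
    RegistryEntry(
        bureau="Portland Bureau of Transportation",
        technology="Trip-analysis software",
        purpose="Understand how people move around the city",
        data_collected=["location data", "trip counts"],
    ),
]

# A public inventory would let residents scan what each bureau uses.
for entry in registry:
    print(f"{entry.bureau}: {entry.technology} ({entry.purpose})")
```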
And then the other thing that’s going to be required is what they call Privacy Impact Assessments. Those assessments would have to happen any time the city, any city bureau, wants to acquire new forms of surveillance technology.
Miller: Now, would those privacy impact assessments come after a bureau purchases surveillance technology?
Kaye: My understanding is before. I actually spoke with Hector Dominguez at Smart City PDX, the part of the Planning and Sustainability Bureau that’s really overseeing a lot of this stuff. And he said ‘no, this is for before.’ And these Privacy Impact Assessments are going to have to be public too. And I think that’s really important, because it could lead people who are paying attention to look at one and say ‘I don’t want that bureau to buy that technology, I’m going to call my city councilperson’ or ‘let’s try to stop it.’ So I think it could have an impact that way.
Miller: The resolution is a reminder that tech privacy concerns don’t just arise from the actions of the Portland Police Bureau. Can you give us a sense for the variety of surveillance technologies that Portland or other cities around the country are currently using, or that are on the near-term horizon?
Kaye: Okay. [Laughter]
Miller: Are you laughing because there’s so much?
Kaye: Yeah. There’s all sorts, because surveillance technology is usually dual purpose. It’s almost always sold as a way to create safety, for, let’s say, the Fire Department to be able to assess certain areas of the city that might be more prone to some problem. Or it’s sold as making the city more efficient and more streamlined, right? It’s all about collecting data, learning from it and making decisions based on accurate information about what actually happens in the city. So a lot of times, again, these technologies are dual purpose.
Miller: So Smart City?
Kaye: Yeah, Smart City. The stuff that’s pulled into the category of quote unquote “smart city” technology is almost always also surveillance technology. I did this really interesting three-part podcast a couple of years ago called ‘City Surveillance Watch,’ and I really tried to dig into the dichotomy of these technologies.
One example I’ll give you that I think really illustrates it well: the city of Eugene has license plate readers, which we have here in Portland too, where the police use them. In Eugene, the Traffic Enforcement Division uses license plate readers, driving around the areas where they would be giving out parking tickets. It automates the process of knowing this vehicle has been parked here past whatever they paid for, the parking has lapsed, so we have to give them a ticket. In the past, they would have literally chalked tires to do that. That’s the old-school process that’s used all over the place. So they considered this a really cool, newfangled thing.
Meanwhile, it’s also surveillance technology. So they really had to think about who can access this data, and what happens if law enforcement comes to them and says ‘hey, we’d like to know when this particular vehicle was parked in this spot.’ They had to develop a policy for that. What Portland’s new surveillance policy does is start to create guidelines for how city bureaus figure out those kinds of questions before they even buy the stuff.
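For illustration, a minimal sketch of the overstay logic Kaye describes, assuming the simplest possible setup: the reader records when a plate is first seen in a zone, then flags it on a later pass once the paid time has lapsed. The plate, dates and two-hour limit are all hypothetical:

```python
# A minimal, hypothetical sketch of automating the old chalk-the-tires
# check with a license plate reader: record when a plate is first seen
# in a zone, then flag it on a later pass once the paid time lapses.
from datetime import datetime, timedelta

PAID_LIMIT = timedelta(hours=2)  # assumed 2-hour parking zone
first_seen: dict[str, datetime] = {}  # plate -> time of first read

def process_read(plate: str, seen_at: datetime) -> bool:
    """Return True if this vehicle has overstayed and should be ticketed."""
    if plate not in first_seen:
        first_seen[plate] = seen_at  # first pass: just record it
        return False
    return seen_at - first_seen[plate] > PAID_LIMIT

# First patrol pass at 9:00, second at 11:30 the same day.
process_read("ABC123", datetime(2023, 2, 14, 9, 0))
print(process_read("ABC123", datetime(2023, 2, 14, 11, 30)))  # True
```

The privacy question Kaye raises lives in that `first_seen` dictionary: it is a log of where vehicles were and when, which is exactly the data a policy has to govern.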
Miller: So going back to what seems like a subtle change in the way city leaders are approaching surveillance technology, from a specific ban on facial recognition software to an approach that’s either more lenient or more case-by-case: how do you account for that political change?
Kaye: I think there’s a lot of reasons. One reason [is that] facial recognition is something that’s been proven to have a lot of flaws, still, even when it’s deployed today . . .
Miller: Including racial disparities?
Kaye: Absolutely. There’s a lot of discriminatory impact it can have, because the systems, even though they’re getting more accurate, [are not] accurate enough when it comes to dark-skinned faces. They do sometimes lead to people being wrongfully arrested and imprisoned. So that’s something that is just so blatantly problematic. But all sorts of other kinds of technologies are going to fall into the category of surveillance tech. The resolution even talks about data use: not just a piece of software or a piece of hardware, like a camera, but also just data that the city might acquire could be considered surveillance tech. So there’s all these other things.
So to just say we’re going to ban this or we’re going to ban that, there are so many things they could put in that category, how would they even decide? But politically speaking, I do think there are other reasons here in Portland, because we have to remember that when the facial recognition ban passed in 2020, the City Council was different. We had Jo Ann Hardesty and Chloe Eudaly on the Council. They were both big critics of the police, and they were staunch advocates for getting rid of facial recognition, or making sure it doesn’t happen here in the city. Things have changed in the Council, and I think attitudes toward police here in the city have changed too throughout the pandemic. People’s concerns regarding crime, that’s another reason.
Miller: How would the new policy we’re talking about affect two things the police are either in the pilot stages or the planning stages of? I’m thinking about the ShotSpotter technology, a controversial microphone system that purports to tell police where a gun was fired. And also the pilot project for using drones at car crash sites?
Kaye: It would require a Privacy Impact Assessment for both of those programs. In the case of ShotSpotter, it hasn’t been purchased yet. The city actually changed its tune and said ‘we’re going to open up the bid process and let other companies bid on this thing.’ So when they do, assuming they still want a technology like that and decide what it will be, this Privacy Impact Assessment process will happen. And that will hopefully illuminate different ways that people might be affected by that technology in terms of their civil liberties and their privacy. There’s already a lot of pushback about that one.
So I think probably a lot of people would be really eager to see what the Privacy Impact Assessment says. In the case of the police drone program, that’s something that has not happened yet. The Police Bureau told me they do not have a launch date for it. But there has actually been a Privacy Impact Assessment already done for it, and that’s how I even caught wind of it. The idea is to have drones over major crime and major crash sites. So I have more questions about that one, because we have this Privacy Impact Assessment done for something, the drones, that I don’t think the city has even purchased yet.
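As background on the microphone system mentioned above: acoustic gunshot locators are generally understood to compare a sound’s arrival times across several spread-out sensors (time-difference-of-arrival multilateration). ShotSpotter’s actual algorithm is proprietary; the sketch below only illustrates that generic principle, with invented sensor positions and a synthesized shot:

```python
# A minimal sketch of time-difference-of-arrival (TDOA) multilateration,
# the generic technique behind acoustic gunshot locators. ShotSpotter's
# actual method is proprietary; this only illustrates the principle.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at 20 C

def locate(sensor_positions, arrival_times):
    """Estimate the source (x, y) and emission time t0 from arrival times."""
    sensors = np.asarray(sensor_positions, dtype=float)
    times = np.asarray(arrival_times, dtype=float)

    def residuals(params):
        x, y, t0 = params
        dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        # Each sensor should hear the shot dist / c seconds after emission.
        return dist - SPEED_OF_SOUND * (times - t0)

    guess = [*sensors.mean(axis=0), times.min()]
    return least_squares(residuals, guess).x

# Four hypothetical rooftop sensors at the corners of a 400 m square,
# with arrival times synthesized from a known source for the demo.
mics = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0), (400.0, 400.0)]
true_source = np.array([120.0, 80.0])
t = [np.linalg.norm(true_source - np.array(m)) / SPEED_OF_SOUND for m in mics]
x, y, t0 = locate(mics, t)
print(f"estimated source: ({x:.1f}, {y:.1f})")  # ~ (120.0, 80.0)
```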
Miller: So the impact study was of the technology broadly, as opposed to a particular tool?
Kaye: Yeah, I think it was much more broad, like ‘hey, you’re going to use drones here. Here are the things you’re going to run up against and here are the recommendations we have for how you implement, how you deploy this stuff.’
Miller: You also noted that one of the next steps could be assessments of what are known as Automated Decision Systems. What are those and how might a city use them?
Kaye: It’s kind of a broader way to think about algorithmic systems and things that are often considered AI (artificial intelligence). So [for example] an automated decision system might be some sort of software used by a court to determine whether someone who is up for trial is imprisoned before their trial or gets out on bail. That’s a typical use for that sort of system. Or it might be used to determine whether or not somebody gets access to SNAP (Supplemental Nutrition Assistance Program) or some other sort of food assistance. Those are examples.
Miller: But in some ways it’s outsourcing that kind of decision to an algorithm, to a hopefully wise artificial intelligence, as opposed to purely human discretion?
Kaye: And again, the reason they use the term Automated Decision System and not AI is because a lot of times those things are not necessarily using machine learning processes. They’re using processes that are often based on rules: ‘here’s the logic of how this system is going to make a decision.’
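A toy example of the kind of rule-based, non-machine-learning logic Kaye is describing; the eligibility rule and thresholds are invented for illustration and do not reflect any real benefits program:

```python
# A toy, rule-based eligibility check: no machine learning, just
# explicit logic. The thresholds are invented for illustration and
# do not reflect any real benefits program.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    household_size: int

def eligible_for_assistance(a: Applicant) -> bool:
    # Hypothetical rule: the income limit scales with household size.
    income_limit = 1500 + 550 * (a.household_size - 1)
    return a.monthly_income <= income_limit

print(eligible_for_assistance(Applicant(monthly_income=2100, household_size=3)))  # True
```

The point of the terminology is that every decision such a system makes can be traced to a rule someone wrote down, which is a different accountability question than a learned model.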
Miller: Kate Kaye. Thanks very much for joining us.
Kaye: Thank you.
Miller: Kate Kaye is an independent journalist who covers technology and artificial intelligence with a focus on the way governments are using these technologies.