A Google research scientist explains why she thinks the police shouldn't use facial recognition.
A case for banning facial recognition
Facial recognition software might be the world’s most divisive technology.
Law enforcement agencies and some companies use it to identify suspects and victims by matching photos and video with databases like driver’s license records. But civil liberties groups say facial recognition contributes to privacy erosion, reinforces bias against black people and is prone to misuse.
Timnit Gebru, a leader of Google’s ethical artificial intelligence team, explained why she believes that facial recognition is too dangerous to be used right now for law enforcement purposes. These are edited excerpts from our virtual discussion at the Women’s Forum for the Economy & Society on Monday.
Ovide: What are your concerns about facial recognition?
Gebru: I collaborated with Joy Buolamwini at the M.I.T. Media Lab on an analysis that found very high disparities in error rates [in facial identification systems], especially between lighter-skinned men and darker-skinned women. Imagine a melanoma-screening technology, for example, that doesn’t work for people with darker skin.
I also realized that even perfect facial recognition can be misused. I’m a black woman living in the U.S. who has dealt with serious consequences of racism. Facial recognition is being used against the black community. Baltimore police during the Freddie Gray protests used facial recognition to identify protesters by linking images to social media profiles.
But a police officer or eyewitness could also look at surveillance footage and mug shots and misidentify someone as Jim Smith. Is software more accurate or less biased than humans?
That depends. Our analysis showed that for many people, facial recognition was way less accurate than humans.
The other problem is something called automation bias. If your intuition tells you that an image doesn’t look like Smith, but the computer model tells you that it is him with 99 percent confidence, you’re more likely to believe that model.
There’s also an imbalance of power. Facial recognition can be completely accurate, but it can still be used in a way that is detrimental to certain groups of people.
The combination of overreliance on technology, misuse and lack of transparency — we don’t know how widespread the use of this software is — is dangerous.
A maker of police body cameras recently discussed using artificial intelligence to analyze video footage and possibly flag law-enforcement incidents for review. What’s your take on using technology in that way?
My gut reaction is that a lot of people in technology have the urge to jump on a tech solution without listening to people who have been working with community leaders, the police and others proposing solutions to reform the police.
Do you see a way to use facial recognition for law enforcement and security responsibly?
It should be banned at the moment. I don’t know about the future.
You can watch our entire conversation about helpful uses of A.I. and its downsides here.
Stopping trackers in their tracks |
Brian X. Chen, a consumer technology writer at The New York Times, writes in to explain how emails can identify when and where you open or click them, and how to dial back the tracking.
Google’s Gmail is so popular in large part because its artificial intelligence is effective at filtering out spam. But it does little to combat another nuisance: email tracking.
The trackers come in many forms, like an invisible pixel-size image inserted into an email or a hyperlink embedded inside text. They are frequently used to detect when someone opens an email and even a person’s location when the message is opened.
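To make that concrete, here is a minimal sketch of the most common tracker, a one-pixel image. The domain and recipient ID below are made up for illustration:

```python
# A hypothetical tracking pixel: an invisible 1x1 image whose URL carries
# an ID unique to one recipient. When the mail client fetches the image,
# the sender's server logs the request, revealing that the message was
# opened, roughly when, and from what IP address (hence rough location).
recipient_id = "reader-8675309"  # made-up ID tied to one email address

pixel_html = (
    f'<img src="https://track.example.com/open.gif?rid={recipient_id}" '
    'width="1" height="1" alt="" style="display:none">'
)
print(pixel_html)
```

Because each copy of the email carries its own ID, a single request for that image is enough to tie an open to one specific inbox.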
When used legitimately, email trackers help businesses determine what types of marketing messages to send to you, and how frequently to communicate with you. This emailed newsletter has some trackers as well to help us gain insight into the topics you like to read about, among other metrics.
But from a privacy perspective, email tracking may feel unfair. You didn’t opt in to being tracked, and there’s no simple way to opt out.
Fortunately, many email trackers can be thwarted by disabling images from automatically loading in Gmail messages. Here’s how to do that:
- Inside Gmail.com, look in the upper right corner for the icon of a gear, click on it, and choose the “Settings” option.
- In the settings window, scroll down to “Images.” Select “Ask before displaying external images.”
With this setting enabled, you can prevent tracking software from loading automatically. If you choose, you can agree to load the images. This won’t stop all email tracking, but it’s better than nothing.
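For the curious, here is a rough sketch of what that setting does conceptually: before rendering a message, hold back any image hosted on a remote server until the reader opts in. The sample message below is invented for illustration:

```python
from html.parser import HTMLParser

class ExternalImageFinder(HTMLParser):
    """Collects remote image URLs that a cautious mail client would block."""

    def __init__(self):
        super().__init__()
        self.blocked = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src") or ""
            # Only images fetched from a remote server can phone home.
            if src.startswith(("http://", "https://")):
                self.blocked.append(src)

message = (
    '<p>Big sale this week!</p>'
    '<img src="https://track.example.com/open.gif?rid=reader-8675309" '
    'width="1" height="1">'
)

finder = ExternalImageFinder()
finder.feed(message)
print("Held back until you approve:", finder.blocked)
```

Until those URLs are fetched, the tracking server never hears from you, which is why blocking automatic image loading defeats pixel-based trackers.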
Bonus tech tip! Some readers asked for more help setting up notifications that can alert you to fraudulent credit card charges. Signing up for these is not easy because, let’s face it, financial websites are not the simplest to use.
On the apps and websites for the credit cards I have, I found these alerts in menus labeled “Profile and Settings” or “Help & Support.” Look for “Alerts” or dig into the privacy and security options. Sign up for an email or app notification each time your card is used to make a purchase online or over the phone.
Most of the time, those purchases are from you. But you want to know right away in the (hopefully) rare times when they’re not.
- Behind the pro-China Twitter campaign: An analysis by my New York Times colleagues found a new and decidedly pro-China presence on Twitter, made up of a network of accounts exhibiting seemingly coordinated behavior. The findings add to evidence suggesting that Twitter is being manipulated to amplify the Chinese government’s messaging about the coronavirus and other topics.
- Restaurants really aren’t fans of those apps: Nathaniel Popper, a Times tech reporter, explains why restaurants are increasingly unhappy about the high fees and other aspects of food delivery services like Grubhub and Postmates. (I’ll have a conversation with Nathaniel about this in Wednesday’s newsletter.)
- The downsides of every gathering of humans: The neighborhood social network Nextdoor has been both a place for people to help one another during the pandemic, and a way for neighbors to lash out at one another over perceived slights or fan fears about crime. The Verge writes about the challenges faced by the volunteers on Nextdoor who are moderating discussions about race and the recent protests.
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.