Google issued a rare public apology last month after a new version of its Gemini A.I. chatbot demonstrated a strong aversion to rendering images of white people and men, even in contexts where seeing white men would be appropriate, like requests for historical images of Nazi soldiers or kings of England. Instead, the chatbot created images of people of color and women in Nazi uniforms and kingly regalia. "We're working to improve these kinds of depictions immediately," Google wrote in a statement. "Gemini's A.I. image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

At that moment, a long-simmering conversation about A.I. systems becoming political began to boil over. Examples like Google's Gemini are starting to dissolve the decades-old perception — justified or not — that computers are inherently neutral or objective. A chatbot is a conversation partner. Even when it's not acting in an overtly partisan manner, it occupies a point of view — by choosing to share certain facts while omitting others and by framing topics in specific ways — that can be mapped on the political spectrum. As A.I. chatbots and virtual assistants become more prevalent, the potential for their politics to begin influencing ours will intensify.

To explore this unfamiliar landscape of political machines, we asked Zvi Mowshowitz, who writes extensively about A.I., to reflect on a recent paper by a machine-learning researcher, David Rozado, that assesses the political leanings of commercially available chatbots by using a simple but brilliant technique: having them, as chatbots, take versions of standard political quizzes. In a guest essay for Times Opinion, Mowshowitz untangles the mechanisms by which computer programs can start to lean left or right, or develop authoritarian or libertarian tendencies, and walks us through the implications.
You can also take an interactive quiz to see how your own political preferences stack up against the politics of chatbots like ChatGPT, Claude and Grok.
Thursday, March 28, 2024
Opinion Today: When your chatbot starts getting too political