Opinion Today: Will the case against TikTok finally make big tech accountable?
Wednesday, October 23, 2024

Anyone who has logged into Facebook or Instagram recently has likely seen the deluge of weird look-at-me material filling their feeds, like pie-eating cats and memes of Jesus-as-a-shrimp. But maybe we should be thankful for the artificial-intelligence-generated slop: It may be one of the triggers that finally forces the social media giants to be held responsible for the societal consequences of their behavior.

For decades, the tech giants have run rampant thanks to a snippet of law from the dial-up era, Section 230 of the Communications Decency Act, that was intended to protect companies from defamation claims over posts made by users. That may have made sense back when we got to choose whose material we saw. But, as Julia Angwin explains in her latest essay for Times Opinion, courts are starting to question whether it still applies in the TikTok era, when platforms increasingly show us whatever content their algorithms have chosen for us.

"If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy," she writes.

Our legal system may be slow to catch up with the times, but let's take a moment to celebrate the fact that it still seems to be functioning.

Read the guest essay: