What is language? What is thinking? How are the two connected?
Countless parents have contemplated versions of those questions as they watch their babies learn to speak. Scientists, too, have tried to understand how it all works, and the ensuing debates have animated the field of linguistics for decades.
In the early 1950s, the leading theory was essentially that young children learn to speak by trial and error. Exposed to language — sentences flying this way and that — they eventually start experimenting with a few of their own. With enough feedback from parents or others, the kids figure out how to use their words, as the current parenting cliché goes, to express their thoughts.
Noam Chomsky argued that this couldn't be right. He observed that young children master language, in all its limitless complexity, far faster than their relatively thin exposure to it could explain. Often described as the father of modern linguistics, Chomsky said we humans don't just absorb language; we are hard-wired for it. It's an innate ability that didn't develop just to help us express our thoughts. In a certain sense, language is thought. You can't separate the two.
It might sound academic, but those big questions have come roaring back to life with the advent of ChatGPT and other machine learning programs. This new generation of artificial intelligence can absorb huge amounts of data and then, based on the patterns it infers, produce something that sure does seem like language. So is it talking? And more mind-boggling still, is it thinking?
In a provocative essay for Times Opinion, none other than Chomsky himself — along with Ian Roberts, a professor of linguistics at the University of Cambridge, and Jeffrey Watumull, a philosopher and the director of artificial intelligence at Oceanit, a science and technology company — says that ChatGPT isn't thinking for the same reason that young children aren't simply absorbing language. "The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question," they write. "On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."
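For readers who want a concrete sense of what "a statistical engine for pattern matching" means, here is a deliberately tiny sketch in Python: a bigram model that learns only which word tends to follow which in its training text, then strings likely successors together. The corpus and variable names are invented for illustration, and real systems like ChatGPT use neural networks conditioned on far longer contexts, but the idea of predicting a probable next word is similar in spirit.

```python
import random
from collections import defaultdict

# A toy "training corpus"; any text would do (invented for illustration).
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words were observed to follow each word.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a word and repeatedly sample a likely successor.
word = "the"
output = [word]
for _ in range(6):
    successors = follows.get(word)
    if not successors:
        break  # dead end: this word never had a successor in the corpus
    word = random.choice(successors)  # duplicates make frequent successors more likely
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug"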
What Our Readers Are Saying |
I've experimented with both ChatGPT and Bing Chat over the last several weeks. They are useful tools and can be fun. (I asked ChatGPT to write a haiku about a Big Mac, and it complied admirably.) I do not understand all the deprecation and doomsaying. Just add them to your toolbox and use them for what they're worth. — Clare Mulligan, Palm Harbor, Fla.

At the moment, it seems to me that the comments disagreeing with the authors of this piece are missing this very important point: In spite of the fact that we can recognize the limitations of this facet of A.I., most people don't have the resources to stop and think it through. The technology will be let loose — just like social media — for profit. The public will willingly give it their trust and will be exploited. Like most, or all, technological advances, it's the humans who put it to use that are the danger. — Raleehan, Dallas

The problem is not that A.I. is frequently inaccurate. The problem is that people will believe those inaccuracies. — JCA, N.J.
|