From WIRED | Science, Artificial Intelligence
The original version of this story appeared in Quanta Magazine.
Of the countless abilities that humans possess, which ones are unique to us? Language has been a prime candidate at least since Aristotle, who wrote that humanity is "the animal that has language." Although large language models like ChatGPT superficially mimic fluent speech, researchers want to know whether there are specific aspects of human language that remain unparalleled in the communication systems of other animals or in artificial intelligence systems.
In particular, researchers have been exploring the extent to which language models can reason about language itself. For some in the linguistics community, language models not only lack such reasoning abilities, they are incapable of acquiring them. This view was summarized by Noam Chomsky, the prominent linguist, and two co-authors in 2023, when they wrote in The New York Times that "correct explanations of language are complicated and cannot be learned just by marinating in big data." These researchers argue that AI models may be adept at using language, but they are unable to analyze language in a sophisticated way.
This view is challenged in a recent paper by Gašper Beguš, a linguist at the University of California, Berkeley; Maksymilian Dąbkowski, who recently received his PhD in linguistics from Berkeley; and Ryan Rhodes of Rutgers University. The researchers put a number of large language models (LLMs) through a series of linguistic tests, including, in one case, having an LLM generalize to the grammar of a different language. While most LLMs failed to analyze grammar the way humans can, one displayed impressive abilities that far exceeded expectations. It was able to analyze language much as a linguistics graduate student would: diagramming sentences, resolving the multiple meanings of ambiguous sentences, and exploiting complex linguistic features such as recursion. This finding "challenges our understanding of what artificial intelligence can do," Beguš said.
The new work is timely and “extremely important,” said Tom McCoy, a computational linguist at Yale University who was not involved in the research. "As society becomes more dependent on this technology, it is increasingly important to understand where it can succeed and where it can fail." He added that linguistic analysis is an ideal test for assessing the degree to which these language models can think like humans.
Infinite complexity
One of the challenges of conducting rigorous linguistic testing of language models is ensuring that they do not already know the answers. These systems are typically trained on vast amounts of text: not just much of the internet, in dozens if not hundreds of languages, but also things like linguistics textbooks. A model can, in theory, memorize and regurgitate information it was fed during training.
To avoid this, Beguš and his colleagues created a four-part language test. Three of the four parts involved asking a model to parse specially designed sentences using tree diagrams, which were first introduced in Chomsky's landmark 1957 book, Syntactic Structures. These diagrams break sentences into noun phrases and verb phrases, and then further into nouns, verbs, adjectives, adverbs, prepositions, conjunctions, and so on.
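The tree diagrams described above can be sketched as a small recursive data structure. The snippet below is a toy illustration, not the paper's actual test format: it builds a constituency tree for "The sky is blue" (sentence → noun phrase + verb phrase) and prints it in the labeled-bracket notation linguists commonly use for such trees.

```python
# Toy constituency tree for "The sky is blue" (illustrative only).
class Node:
    def __init__(self, label, children=()):
        self.label = label            # e.g. "S", "NP", "VP", or a word
        self.children = list(children)

    def bracketed(self):
        """Render the tree in labeled-bracket notation, e.g. (S (NP ...) (VP ...))."""
        if not self.children:
            return self.label
        inner = " ".join(child.bracketed() for child in self.children)
        return f"({self.label} {inner})"

# S -> NP VP;  NP -> Det N;  VP -> V Adj
tree = Node("S", [
    Node("NP", [Node("Det", [Node("The")]), Node("N", [Node("sky")])]),
    Node("VP", [Node("V", [Node("is")]), Node("Adj", [Node("blue")])]),
])

print(tree.bracketed())
# (S (NP (Det The) (N sky)) (VP (V is) (Adj blue)))
```

Because each node can itself contain whole phrases, the same structure scales to arbitrarily deep sentences, which is exactly what makes such diagrams a useful probe of grammatical analysis.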
One part of the test focused on recursion: the ability to embed phrases within phrases. "The sky is blue" is a simple English sentence. "Jane said the sky was blue" embeds that original sentence in a slightly more complex one. Crucially, this process of embedding can go on forever: "Maria wondered whether Sam knew that Omar heard that Jane said the sky was blue" is also a grammatically correct, if awkward, recursive sentence.
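The unbounded embedding described above can be demonstrated with a few lines of code. This is a minimal sketch, not anything from the paper; the `embed` helper and its "X said that ..." frame are invented for illustration, but they show how one grammatical rule, applied repeatedly, yields sentences of arbitrary depth.

```python
# Recursion in grammar: one embedding rule, applied as many times as you like.
def embed(sentence, speakers):
    """Wrap `sentence` in one reporting clause per speaker, innermost first."""
    for speaker in speakers:
        sentence = f"{speaker} said that {sentence}"
    return sentence

base = "the sky is blue"
print(embed(base, ["Jane"]))
# Jane said that the sky is blue
print(embed(base, ["Jane", "Omar", "Sam", "Maria"]))
# Maria said that Sam said that Omar said that Jane said that the sky is blue
```

Each pass produces a new grammatical sentence that contains the previous one whole, which is why recursion makes the set of possible sentences infinite.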

