Last month, Stanford researchers announced that a new era of artificial intelligence had arrived, one built atop enormous neural networks and oceans of data. They said a new Stanford research center would build and study these "foundation models" of AI.
Critics of the idea spoke up quickly at a workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes bizarre behavior of these models; others warn against focusing too heavily on one particular way of making machines smarter.
Malik acknowledges that one type of model identified by the Stanford researchers, large language models that can answer questions or generate text from a prompt, has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence, such as interaction with the physical world.
"These models are really castles in the air; they have no foundation whatsoever," Malik said. "The language we have in these models is not grounded; there is a fakeness to it, there is no real understanding." He declined an interview request.
A paper written by dozens of Stanford researchers describes "an emerging paradigm for building artificial intelligence systems" that it labels "foundation models." Ever-larger AI models have produced some impressive advances in recent years, in areas such as perception and robotics as well as language.
Large language models are also foundational to big technology companies such as Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars' worth of cloud computing power; so far, that has limited their development and use to a handful of well-heeled tech companies.
But larger models are also problematic. Language models inherit bias and offensive text from the data they are trained on, and they have no grasp of the truth or falsehood of what they generate. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that ever-larger models will keep producing advances in machine intelligence.
The Stanford proposal has divided the research community. "Calling them 'foundation models' completely messes up the discourse," says Subbarao Kambhampati, a professor at Arizona State University. There is no clear path from these models to more general forms of AI, Kambhampati says.
Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, says he has "huge respect" for the researchers behind the new Stanford center and believes they are genuinely concerned about the problems these models raise.
But Dietterich wonders whether the idea of foundation models isn't partly about raising money for the resources needed to build and operate them. "I was surprised that they gave these models a fancy name and created a center," he says. "That does smack of flag planting, which can have several benefits on the fundraising side."
Stanford has also proposed the creation of a National AI Cloud to provide industry-scale computing resources to academics working on AI research projects.
Emily M. Bender, a professor of linguistics at the University of Washington, says she is concerned that the concept of foundation models reflects the industry's bias toward investing in data-centric approaches to AI.
Bender says it is especially important to study the risks posed by large AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But, she says, scrutiny should come from multiple disciplines.
"There are all these adjacent, really important fields that are just starved for funding," she says. "Before we throw money at the cloud, I would like to see money going into other disciplines."