Glean provides search tools that work across applications such as Gmail, Slack, and Salesforce. Q said new AI techniques for parsing language would help Glean's clients find the right files or conversations much faster.
But training such a sophisticated AI algorithm costs millions of dollars, so Glean makes do with smaller, less capable AI models that can't extract as much meaning from text.
AI has produced exciting successes over the past decade – programs that can beat people at complex games, drive on city streets under certain conditions, respond to spoken commands, and write coherent text from short prompts. Writing in particular relies on recent advances in computers' ability to parse and manipulate language.
These advances are largely the result of feeding the algorithms more text as examples to learn from, and giving them more chips with which to digest it. And that costs money.
Consider OpenAI's language model GPT-3, a large, mathematically simulated neural network that was fed reams of text scraped from the web. GPT-3 can find statistical patterns that predict, with striking coherence, which words should follow others. Out of the box, GPT-3 is significantly better than previous AI models at answering questions, summarizing text, and correcting grammatical errors. By one measure, it is 1,000 times more capable than its predecessor, GPT-2. But training GPT-3 cost, by some estimates, about $5 million.
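The statistical idea behind such models – predicting which word should follow others – can be illustrated with a toy sketch: count which word follows which in a tiny corpus, then predict the most common successor. This bigram table is only a stand-in for what GPT-3 does with a vast neural network; the corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3 was trained on reams of text scraped from the web.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A real language model replaces these raw counts with billions of learned parameters and conditions on far more than the single previous word, which is precisely what makes it so expensive to train.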
“If GPT-3 were accessible and cheap, it would totally supercharge our search engine,” Q said. “That would be really, really powerful.”
The spiraling cost of training advanced AI is also a problem for established companies.
Dan McCreary leads a team within a division of Optum, a health IT company, that uses language models to identify high-risk patients or recommend referrals by analyzing transcripts of calls. He said even training a language model one-thousandth the size of GPT-3 can quickly eat up his team's budget. Models need to be trained for specific tasks, and doing so can cost more than $50,000, paid to cloud computing companies to rent their computers and programs.
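The kind of cloud bill described above is simple arithmetic: accelerators rented, times hours, times an hourly rate. The specific numbers below are assumptions chosen only to land near the $50,000 figure cited in the article, not figures from Optum or any cloud provider.

```python
# Back-of-envelope cloud training cost with illustrative, assumed numbers.
gpus = 64                  # rented accelerators (assumption)
hours = 240                # ten days of training (assumption)
rate_per_gpu_hour = 3.25   # assumed cloud price per GPU-hour, in dollars

cost = gpus * hours * rate_per_gpu_hour
print(f"${cost:,.0f}")  # $49,920 -- roughly the scale described above
```

The point of the sketch is that the cost scales linearly with each factor, so training a model for longer, or on more chips, inflates the bill multiplicatively.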
Cloud computing providers have little reason to lower those costs, McCreary said. “We can't trust that cloud providers are working to reduce the cost of building our AI models,” he said. He is looking into buying specialized chips designed to speed up AI training.
Part of why AI has improved so quickly in recent years is that so many academic labs and startups could download and use the newest ideas and techniques. The algorithmic breakthrough in image processing, for example, emerged from academic labs and was developed using off-the-shelf hardware and openly shared data sets.
Over time, though, it has gradually become clear that progress in AI is tied to an exponential increase in the underlying computing power.
Big companies, of course, have always had advantages in terms of budget, scale, and reach. And large amounts of computing power are table stakes in industries such as drug discovery.
Now, some are pushing to scale things up even further. Microsoft said this week that, working with Nvidia, it had built a language model more than twice as large as GPT-3. Researchers in China say they have built a language model four times larger than that.
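For a sense of what those multipliers mean in hardware terms, here is a rough calculation assuming 16-bit (2-byte) weights. GPT-3's 175 billion parameters are publicly documented; the other counts are not official figures but simply apply the "twice" and "four times" multipliers from the reports above.

```python
GPT3_PARAMS = 175e9  # GPT-3's published parameter count

models = [
    ("GPT-3", GPT3_PARAMS),
    ("~2x GPT-3 (Microsoft/Nvidia)", 2 * GPT3_PARAMS),
    ("~4x that (reported in China)", 8 * GPT3_PARAMS),
]

for name, params in models:
    # Assuming 2 bytes per weight (16-bit precision).
    gigabytes = params * 2 / 1e9
    print(f"{name}: {params / 1e9:.0f}B parameters, ~{gigabytes:.0f} GB of weights")
```

Even just storing the weights of the largest of these models would fill thousands of gigabytes, far beyond a single accelerator's memory, which is part of why training them demands ever-larger and costlier clusters.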
“The cost of training AI is absolutely going up,” said David Kanter, executive director of MLCommons, an organization that tracks the performance of chips designed for AI. He said the idea that bigger models can unlock valuable new capabilities is taking hold in many areas of the tech industry. It may explain why Tesla is designing its own chips just to train AI models for autonomous driving.
Some worry that the rising cost of tapping the latest and greatest technology could slow innovation by reserving it for the biggest companies and those that lease their tools.
“I think it does slow down innovation,” says Chris Manning, a Stanford professor who specializes in AI and language. “When only a handful of places can play with the innards of models at this scale, that has to massively reduce the amount of creative exploration that happens.”