Power struggle
When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what many of us did: he began playing with them to see how they could help his work. He carefully documented their performance in an article in February, noting how well they handled 25 “use cases,” from brainstorming and editing text (very useful) to coding (very good with a little help) to doing math (not great).
ChatGPT incorrectly explained one of the most fundamental principles of economics, says Korinek: “It went very wrong.” But the mistake was easy to spot and quickly forgiven in light of the benefits. “I can tell you it makes me, as a cognitive worker, more productive,” he says. “For me, I’m definitely more productive when I’m using a language model.”
When GPT-4 came out, he tested its performance on the same 25 use cases he had documented in February, and it did much better: there were fewer instances of making things up, and it was also much better at the math problems, says Korinek.
Because ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen much faster than in past technological revolutions, Korinek says. “I think we’ll see a bigger boost to productivity later this year, certainly in 2024,” he says.
What’s more, he says, in the long run, the ability of AI models to make researchers like him more productive has the potential to drive technological progress.
That potential of large language models is already showing up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the sophisticated machine-learning studies his group does to predict the properties of compounds.
“He failed completely,” jokes Smit.
It turns out that after being fine-tuned for just a few minutes with a handful of relevant examples, the model can perform as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on its structure.
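In broad strokes, that kind of setup amounts to recasting a chemistry prediction problem as plain text: each labeled compound becomes a short question-and-answer pair, and the model is fine-tuned on a small file of such pairs. The sketch below illustrates the idea only; it assumes a hypothetical toy dataset and the generic prompt-completion JSONL format used for fine-tuning GPT-3-style models, and the compound names, labels, and prompt wording are illustrative rather than taken from Smit’s group.

```python
import json

# Hypothetical toy dataset: compound names paired with a coarse
# solubility label. Real studies would use a curated chemistry dataset.
training_examples = [
    {"compound": "sodium chloride", "solubility": "high"},
    {"compound": "naphthalene", "solubility": "low"},
    {"compound": "ethanol", "solubility": "high"},
    {"compound": "anthracene", "solubility": "low"},
]

def to_prompt_completion(example):
    """Recast a labeled compound as a prompt-completion pair, the
    text format expected by GPT-3-style fine-tuning endpoints."""
    prompt = f"What is the aqueous solubility of {example['compound']}?###"
    completion = f" {example['solubility']}@@@"  # leading space plus a stop marker
    return {"prompt": prompt, "completion": completion}

# Write the examples as JSONL; this file would then be uploaded to a
# fine-tuning endpoint (for instance via a provider's CLI or API).
with open("solubility_finetune.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(to_prompt_completion(example)) + "\n")

# At inference time, the fine-tuned model is queried with the same
# prompt template for an unseen compound, e.g.
# "What is the aqueous solubility of benzoic acid?###"
```

The point of the exercise is that no chemistry-specific model architecture is involved: the “feature engineering” is just phrasing the question consistently, which is why a handful of examples and a few minutes of fine-tuning can go surprisingly far.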