Google’s top AI scientist says ‘learning how to learn’ will be next generation’s most needed skill
ATHENS, Greece (AP) — A top Google scientist and 2024 Nobel laureate said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace. Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google’s DeepMind, said rapid technological change demands a new approach to learning and skill development. “It’s very hard to predict the future, like 10 years from now, in normal cases. It’s even harder today, given how fast AI is changing, even week by week,” Hassabis told the audience. “The only thing you can say for certain is that huge change is coming.”
Tech companies want to build artificial general intelligence. But who decides when AGI is attained?
There’s a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can. Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft. It’s also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with “long-term planning” skills could pose an existential risk to humanity. But what exactly is AGI and how will we know when it’s been attained? Once on the fringe of computer science, it’s now a buzzword that’s being constantly redefined by those trying to make it happen.

What is AGI?
Not to be confused with the similar-sounding generative AI — which describes the AI systems behind the crop of tools that “generate” new documents, images and sounds — artificial general intelligence is a more nebulous idea. It’s not a technical term but “a serious, though ill-defined, concept,” said Geoffrey Hinton, a pioneering AI scientist who’s been dubbed a “Godfather of AI.” “I don’t think there is agreement on what the term means,” Hinton said by email this week. “I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do.” Hinton prefers a different term — superintelligence — “for AGIs that are better than humans.”
Are we at AGI yet?
Without a clear definition, it’s hard to know when a company or group of researchers will have achieved artificial general intelligence — or if they already have. “Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google’s) Gemini had achieved general intelligence comparable to that of humans,” Hinton said. “Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test.” Improvements in “autoregressive” AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they’re still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.
