‘Baby AGI’ could be a reality in early 2025

An early prototype framework for Artificial General Intelligence could be operating as early as next year, SingularityNET founder Ben Goertzel believes.

Speaking at the opening of the Beneficial AGI Summit in Panama on Feb. 27, Goertzel — who popularized the term AGI — laid out a blueprint for its development to ensure AGI isn’t controlled by corporations or governments and will help humanity rather than harm it.

AGI is a theoretical concept that, if realized, would mean an AI system able to perform any intellectual task a human being can.

Goertzel’s plan calls for open-source code, decentralized infrastructure and governance, an open-ended cognitive architecture, diverse AGI algorithms, ethically sourced and managed data, and the inclusion of people from all around the world.

Goertzel told Cointelegraph in an interview the blueprint underpins everything “we’re doing in the whole SingularityNET ecosystem.”

“We’re building decentralized AI infrastructures that are agnostic with respect to what AI approach you may want to take,” he said, noting the aim was that a “12-year-old genius from Tajikistan” could contribute to a breakthrough.

But while the network is designed to foster collaboration and different contributions to help achieve AGI, Goertzel said that his “best guess” is his own OpenCog Hyperon project “may be the system to make the breakthrough.”

Scheduled for an alpha release in April, Hyperon is described in a research paper coauthored by Goertzel and numerous others as “a framework for AGI at the human level and beyond” that incorporates the latest ideas, software and techniques. OpenCog is an open-source AI project founded by Goertzel in 2008 and is affiliated with SingularityNET.

As he describes it, the Hyperon alpha will be a sort of proto-AGI built around a bespoke programming language called MeTTa, which will open up in April so that open-source developers can use it to write better code for different AI applications.
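For readers curious what that looks like in practice, the snippet below is a minimal sketch of MeTTa’s rewrite-rule style, modeled on examples published in the OpenCog Hyperon repository; the exact syntax and the built-in atoms shown (=, if, ==, *, -) are assumptions based on the project’s experimental interpreter and could change before the April release.

    ; A recursive factorial written as a MeTTa rewrite rule.
    ; (= pattern result) registers an equality the interpreter
    ; uses to reduce matching expressions; $n is a pattern variable.
    (= (fact $n)
       (if (== $n 0)
           1
           (* $n (fact (- $n 1)))))

    ; The ! prefix asks the interpreter to evaluate immediately.
    !(fact 5) ; prints [120]

Definitions like these live alongside data in Hyperon’s Atomspace knowledge store, which, per the project’s design documents, lets the same pattern-matching machinery query both programs and knowledge.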

Pressing Goertzel for more details, Cointelegraph asked if the alpha release would be something like a baby AGI that could be developed into a full AGI.

“We will have a complete toolset for building the baby AGI,” he clarified, adding that the system needed to be, and would be, scaled up massively between now and the end of the year.

“I think by early 2025 we might have a baby AGI,” he said. “I think we can call it a fetal AGI if you want to pursue that metaphor.”

Goertzel also threw his support behind Vitalik Buterin’s defensive acceleration (d/acc) approach to developing superintelligent AI.

Opinion on AGI development is currently split between accelerationists (e/acc), who want to rush toward the technology because of its benefits, and decelerationists (decel), who want to slow development for fear of existential risks.

Goertzel said the former had a touch of “Silicon Valley Über Alles” about it, while the latter had no realistic chance of happening even if it were the best approach.

Instead, he endorsed the “decentralized accelerationism” or “defensive accelerationism” approach Buterin proposed in November.

Accelerating progress to AGI is “probably the best thing we can do,” he said, but “we don’t want power over AGI to be concentrated in any one party […] And we want to pay attention to various bad things that could happen.”


Goertzel has just written a new book about AGI called The Consciousness Explosion that argues AGI will have enormous benefits and will liberate humans from repetitive labor, end all physical and mental diseases, cure aging and potentially prevent involuntary death.

While he says these benefits outweigh the risks, he acknowledges AGI could still go wrong in a number of ways.

He outlined some of those risks in his address, including China and the United States developing “super AGI whose goal is to clobber the other guy” or an unethical rollout of AGI that only benefits the global elite and makes the poor even poorer.

Regulatory capture, where big companies lobby for regulations that benefit them more than the public, was a definite possibility, he said.

While he considers it unlikely, “the risk Hollywood likes to talk about,” an AI that goes rogue, is also within the bounds of possibility.

“I also don’t think we can totally confidently rule out anything because we’re going into fundamentally unknown territory,” he said.
