Dr. Ben Goertzel—Striving towards an Autonomous Decentralized and Compassionate Artificial Super Intelligence - Blockchain.News

Dr. Ben Goertzel is the founder and CEO of SingularityNET, a blockchain-based Artificial Intelligence marketplace project. He has described the project as a medium for the creation and emergence of the global super brain: an autonomous, decentralized, democratically governed, artificially superintelligent network that must be biased toward compassion for the betterment of humanity.

  • Dec 06, 2019 03:00

Dr. Ben Goertzel is the founder and CEO of SingularityNET, a decentralized blockchain-based Artificial Intelligence (AI) marketplace project. He has described the project as a medium for the creation and the emergence of Artificial General Intelligence (AGI) as well as a way to roll out superior AI-as-a-service to every vertical market and enable everyone in the world to contribute to and benefit from AI.

Blockchain.News managed to catch up with Dr. Goertzel at BlockShow 2019 in Singapore. In the first part of our interview, we discuss the evolution and the philosophical aspects of AI and AGI.

Evolution of Artificial Intelligence

For many, the concept of machines that can learn and develop as humans do, but with the enhanced calculation speed of a computer, is simply terrifying. Will the machines replace us? Will we be able to contain them, or will we merge with them? And how far away is this future?

While AI does describe the simulation of human intelligence by machines, most of the AI we encounter in our day-to-day lives, such as Apple’s Siri or Amazon’s Alexa, consists of complex mathematical algorithms. These systems are described as ‘narrow AI’ and are of relatively weak intelligence, capable of performing basic tasks but only within a very specific framework.

Goertzel is aiming higher, essentially trying to birth a much stronger type of AI, known as artificial general intelligence (AGI): AI systems with human-like learning and cognitive abilities. He explained, “AGI refers to an AI that can generalize way beyond what it has been taught and has seen, which means it can imaginatively guess elements about new domains of experience. This is extremely important in the modern world where we are forced to deal with unexpected circumstances all the time.”

Beyond AGI is where things get really exciting, and perhaps existentially challenging for humanity: artificial superintelligence (ASI). Goertzel said, “Artificial superintelligence is the next step beyond general intelligence. Humans currently have more general intelligence than the software products that are commercially available right now. But humans are by no means the most generally intelligent possible system. I think as AI advances further and further, you're going to see AI that is tremendously smarter than people, much as we're much smarter than monkeys, rats, or bugs. But to get from where we are now with narrow AI, to AGI, and then to artificial superintelligence, we need to go through quite a series of practical steps.” He continued, “That's really what we're engaged with at SingularityNET—the project is about how to get through the next steps of the evolution, from where we are now, with fairly simplistic narrow AI, towards a powerful general intelligence, but also taking care to do it in a way that avoids putting the AI in the control of confused, centralized, globally powerful parties—we want a decentralized general intelligence.”


The Singularity in SingularityNET

As Dr. Goertzel revealed, “The singularity in SingularityNET refers to the future foreseen by Vernor Vinge and popularized by Ray Kurzweil—it is basically the moment at which technology starts advancing so fast that it seems effectively instantaneous to the human mind, and this is going to occur by AGI becoming smarter than people. AI will be doing the invention rather than people.”

Shortly before his death, at a conference in Lisbon, Stephen Hawking warned those in attendance that the development of artificial intelligence might become the “worst event in the history of our civilization.” He was alluding to what is known as the ‘technological singularity.’ Other notable intellectuals of our time, including Tesla’s Elon Musk and neuroscientist Sam Harris, have delivered similarly foreboding speeches about the technology, believing it may spell our doom: that AI will ultimately replace us completely or, even worse, simply discard us in the course of an intermediary task, as HAL 9000 discarded the lives of the astronauts in favor of completing the mission of the Discovery One.

Goertzel does not share this apocalyptic view but sees an opportunity for humans and machines to evolve together. He said, “AI will almost certainly become far more intelligent than human beings. But there will be a possibility for humans to follow the AI along and effectively fuse their minds with the AI—which Elon Musk, among others, is also working on with his company Neuralink. I would say humans who choose not to fuse with the AI will indeed be, in a sense, left behind, as they will no longer be among the smartest beings in this region of the universe.”

Artificial Compassion for Humanity

The fact that AI will become much smarter than people does not necessarily mean that AI is a danger to humans. Goertzel explained, “That all depends on how they are built; what we want is AIs that are compassionately disposed toward human beings. That is also why at SingularityNET we're so focused on creating a democratically controlled AI mind, because if the first true general intelligence is controlled by a military organization or an advertising agency, then this probably isn't optimal in terms of allowing a beneficial general intelligence to emerge into a compassionate supermind.”

So how do you put compassion into a machine? How can you teach an AI about empathy and concepts as abstract as love? The reality is that even as humans, we are unable to display or enact these concepts with any real consistency. Goertzel said, “You don’t program empathy into the code of the AI; these things will be learned by the AI. Compassion will emerge within the AI in the course of its interactions with the world—including the humans in the world and the physical world. It’s very similar to a child: you don’t program emotions or compassion into a child; they gain them through interactions with the world.” He added, “So the task of AI is to build a learning system and a self-organizing system that can organize its own mind, its own feelings, and its own compassion in an appropriate way. It is complex, but the internet is complex, your mobile phone is complex, your laptop is complex. Humanity has built many complex things, and these are built by a combination of many complex people working together.”


At the comparison of an AI developing as a child would, I could not help but consider the number of children who grow up to be sociopathic—often making impulsive decisions or breaking rules with little or no feeling of guilt or wrongdoing. Goertzel admitted, “Humanity is certainly a complex mess with aspects that are both positive and negative according to the value systems of various parts of humanity. I think the best we can do is put an AI out there in the world and expose it to the various aspects of humanity and make sure that it's biased in a positive direction.”

Goertzel himself is a father of four children and a grandfather to one. Speaking from experience, he said, “Protecting them from all the bad things in the world is something you can only do to a limited extent because eventually, they're going to go out there and interact with some harsh realities—but you can bias what you expose them to in a positive direction.”

The reality of our future, according to Goertzel, is that AGI is coming regardless; what we can do is ensure that it is not solely disposed towards the whims of a powerful central authority and that it is taught compassion for humanity. He said, “AI will be used for military purposes; it will be used for advertising and even crime. We have to make sure that AI is also used, and to a greater extent, for education, agriculture, healthcare, and scientific discovery. The AI will get all these things integrated into its mind and be able to form a whole picture of human values and culture, to form a substantial inclination towards compassion.”

Sum of Many

Goertzel clarified that the creation of the future AGI global intelligence will not be the work of his team at SingularityNET alone; it will be the combined work of a vast community of AI and technology developers, as well as of the information that the AI agents on the network absorb from the human consumers who leverage it.

He said, “If SingularityNET is going to play a key role, then we need to be massively growing its user base; we have to drive massive adoption of these decentralized networks that we have launched. After two years of work, we have a pretty nice version of the SingularityNET platform out there. It's a decentralized network which is democratically governed and controlled—meaning the AI network is sort of controlled by the AI agents in the network, rather than by some outside party. It's a nice bit of software, and we’ve shown it works.” Concluding, he said, “If humanity wants to transition from AI to AGI and then to superintelligence in a democratic and participatory way, then networks like SingularityNET need to be a significant part of the mix. That is easy to see from an abstract view, but in practice on the ground, there's still a lot to be done to get adoption of this sort of platform.”


Image source: Shutterstock