Human-Level AI Is Probably A Lot Closer Than You Think

What Is “The” Singularity?

Although some thinkers use the term “singularity” to refer to any dramatic paradigm shift in the way we think and perceive our reality, in most conversations The Singularity refers to the point at which AI surpasses human intelligence. What that point looks like, though, is subject to debate, as is the date when it will happen.

In a recent interview with Inverse, Damien Scott, a Stanford University graduate student in business and in energy and earth sciences, offered his definition of the singularity: the moment when humans can no longer predict the motives of AI. Many people envision the singularity as an apocalyptic moment of truth with a clear point of epiphany. Scott doesn’t see it that way.

“We’ll start to see narrow artificial intelligence domains that keep getting better than the best human,” Scott told Inverse. Calculators already outperform us, and there’s evidence that within two to three years, AI will outperform the best radiologists in the world. In other words, the singularity is already happening across every specialty and industry touched by AI — which, soon enough, will be all of them. If you believe the singularity means catastrophe for humans, the process resembles the proverbial frog in a pot of water slowly brought to a boil: the change comes so gradually that we don’t notice it has already begun.

“Will it be self-aware or self-improving? Not necessarily,” Scott says. “But that might be the kind of creep of the singularity across a whole bunch of different domains: All these things getting better and better, as an overall set of services that collectively surpass our collective human capabilities.”

[Infographic: 2015 in Review: The Year Artificial Intelligence Went Mainstream]

Not If, But When

Ray Kurzweil, a computer scientist and Google’s director of engineering, takes the opposite view: a “hard singularity” will occur at a particular point in time. In fact, he has made 147 predictions since the 1990s, most recently settling on 2045 as the year “[w]e’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music.”

Masayoshi Son, CEO of SoftBank, and Kurzweil are splitting hairs: Son puts the singularity at 2047. Despite the two-year difference in their predictions, both are basically optimistic. As Son puts it: “I think this super intelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits it will be our partner for a better life.”

Not everyone takes such a positive view of the singularity. Elon Musk sees it as an inevitability, but one that demands we prepare properly. In that vein, he is working on Neuralink, a technology and process for merging human intelligence with AI.

Meanwhile, physicist Edward Witten has said that we will never be able to unravel all of the mysteries of consciousness, which could be a stumbling block to the singularity: if reaching it requires computers that mimic the human brain, what happens if they can’t mimic consciousness because we can’t explain it ourselves? For his part, economist William Nordhaus has studied the economic implications of the singularity, only to conclude that while it may be coming, it isn’t happening anytime soon.

So, is the singularity on the horizon? Will it be a single, watershed moment in human history? Or will we simply look back someday with wonder at how far we’ve come, and how much has changed since we blended our intellects with AI? Just as you might have hazy memories of life before cell phones and the Internet (if you’re old enough to remember those times), you might one day look back dimly on the days when the human mind thought on its own, without the benefit of AI.

