Artificial Intelligence Is Only Dangerous If Humans Use It Foolishly

It is the nature of technology to improve over time, and as it progresses, technology carries humanity forward with it. Yet a certain fear surrounds technologies like artificial intelligence (AI) and robotics, due in part to how they have been portrayed in science fiction. This fear, however, is mostly a fear of the unknown: for the most part, humankind doesn’t know what will come of the continued improvement of AI systems.

The coming of the technological singularity is one such outcome, an idea greatly influenced by science fiction: supposedly, AI and intelligent machines will become so smart that they overtake their human overlords, ending the world as we know it. We don’t know whether that will actually happen, of course, although some institutions are actively working to make the singularity a reality.

But perhaps the most immediate concern people have with AI and automated systems is the job displacement expected to come with them. A number of studies agree that increased automation will disrupt employment over the next 10 to 20 years.

One study predicts that machines could take over 47 percent of jobs in the United States. Another expects 40 percent of jobs in Canada to be displaced, while British agencies predict that some 850,000 jobs in the UK will be replaced by automated systems. Meanwhile, 137 million workers in Southeast Asia are in danger of losing their jobs to machines within the next 20 years. The trend is expected to span a whole range of industries, not just blue-collar jobs.

What to Fear, Really

Given all of this, are we right to fear AI?

At the risk of sounding alarmist: yes, there are things to be worried about. But a great deal of this has to do with how we use AI, according to a piece by ZDNet and TechRepublic UK editor-in-chief Steve Ranger. “AI is a fast-growing and intriguing niche,” Ranger wrote, “but it’s not the answer to every problem.”

Ranger warns that industries may be unable to cope with AI, which could trigger another “AI winter.” He writes: “[A] lack of skilled staff to make the most of the technologies, along with massively inflated expectations, could create a loss of confidence.” There is also the danger of treating AI as a magical solution to every problem, which neglects the fact that AI and machine learning algorithms are only as good as the data fed into them. The threat Ranger sees as the most serious is opaque decision making: “ways must be found to make sure that AI-led decision making becomes as easy to understand — and to challenge — as any other type,” he says.

He points out that research is already underway on understanding how AI systems reach their conclusions. The five basic principles laid out are responsibility (a person must be available to deal with the effects of the AI), explainability (the decisions an AI makes must be explainable in simple terms to the people affected by them), accuracy (sources of error must be identified and tracked), auditability (third parties should be able to easily review the behavior of the AI), and fairness (AI should not be skewed by human bias or discrimination).
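To make those principles concrete, here is a minimal, hypothetical Python sketch of a per-decision “audit record” that an AI-driven system could emit alongside every automated decision. The structure and field names are our own illustration, not drawn from Ranger’s piece or any published framework; fairness, which is a property of the model as a whole rather than of any single decision, would be checked separately during model evaluation.

```python
# Hypothetical sketch: a record an AI-driven system could emit with every
# automated decision so the principles above become enforceable in practice.
# All names and fields here are illustrative assumptions, not a real standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str                   # what the system decided
    explanation: str                # plain-language reason (explainability)
    responsible_party: str          # who handles appeals (responsibility)
    known_error_sources: list[str]  # tracked sources of error (accuracy)
    inputs: dict                    # data the decision drew on (auditability)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_trail(self) -> dict:
        """Expose the full record so a third party can review the decision."""
        return dict(self.__dict__)

# Example: a (hypothetical) loan-screening model flags an application.
record = DecisionRecord(
    decision="flag_for_human_review",
    explanation="Reported income is inconsistent with employment history.",
    responsible_party="loan-appeals@example.com",
    known_error_sources=["self-reported income", "stale employment data"],
    inputs={"applicant_id": "A-1023", "model_version": "v2.1"},
)
print(record.audit_trail())
```

The point of a record like this is that a challenge to an AI-led decision has somewhere to land: a named responsible party, a reviewable explanation, and the inputs that produced the outcome.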

Ultimately, the greatest threat to humanity isn’t AI. It’s how we handle AI. “Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn,” Ranger concludes.

Measured and Monitored

Thankfully, some institutions have already come up with guidelines for pursuing AI research and development. There’s the Partnership on AI, which includes tech heavyweights like Amazon, Google, IBM, Facebook, Microsoft, and Apple. Another is the Ethics and Governance of Artificial Intelligence Fund (AI Fund), led by the Knight Foundation. There’s also the IEEE’s framework document on designing ethically aligned AI.

The benefits of AI are undeniable, and we don’t need to wait for 2047 and the singularity to figure out just how much it affects people’s lives. Today’s AI systems shouldn’t be confused with sci-fi’s Skynet and HAL 9000. Much of what we call AI right now consists of neural networks and machine learning algorithms that work in the background of our most common devices. AI is also found in systems that support trend-based decision making in companies and improve customer service.

If used properly, AI can help humanity by keeping people away from hazardous jobs, reducing the number of car accidents, and improving medical treatments. We shouldn’t let our fears outweigh these benefits.

Eminent Astrophysicist Issues a Dire Warning on AI and Alien Life

“Fixed to This World”

Lord Martin Rees, Astronomer Royal and University of Cambridge Emeritus Professor of Cosmology and Astrophysics, believes that machines could surpass humans within a few hundred years, ushering in eons of machine domination. He also cautions that while we will certainly discover more about the origins of biological life in the coming decades, we should recognize that any alien intelligence we find may be electronic.

“Just because there’s life elsewhere doesn’t mean that there is intelligent life,” Lord Rees told The Conversation. “My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity.”

Rees thinks there is a serious risk of a major global setback during this century, citing misuse of technology, bioterrorism, population growth, and increasing connectivity as factors that make humans more vulnerable now than ever before. While we may be most at risk because of our own activities, the ability of machines to outlast us may be a decisive factor in how life in the universe unfolds.

“If we look into the future, then it’s quite likely that within a few centuries, machines will have taken over—and they will then have billions of years ahead of them,” he explains. “In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines.”

In contrast to the delicate, specific needs of human life, electronic intelligent life is well-suited to space travel and equipped to outlast many global threats that could exterminate humans.

“[We] are likely to be fixed to this world. We will be able to look deeper and deeper into space, but traveling to worlds beyond our solar system will be a post-human enterprise,” predicts Rees. “The journey times are just too great for mortal minds and bodies. If you’re immortal, however, these distances become far less daunting. That journey will be made by robots, not us.”

Surviving Our Progress

Rees isn’t alone in his ideas. Several notable thinkers, such as Stephen Hawking, agree that artificial intelligence (AI) has the potential to wipe out human civilization. Others, such as Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, see malicious hacking of AI as the greatest threat we face. However, at least as many disagree with these ideas, with even Hawking noting the potential benefits of AI.

As we train and educate AIs, shaping them in our own image, we may imbue them with the ability to form emotional attachments that could deter them from wanting to hurt us. There is also evidence that the singularity might not be a single moment in time but a gradual process that is already happening, meaning that we are already adapting alongside AI.

But what if Rees is correct and humans are on track to self-annihilate? If we wipe ourselves out and AI is advanced enough to survive without us, then his predictions about biological life being a relative blip on the historical landscape and electronic intelligent life going on to master the universe will have been correct, but not because AI turned on humans.

Ultimately, the idea of electronic life being uniquely well-suited to survive and thrive throughout the universe isn’t that far-fetched. The question is, will we survive alongside it?

Human-Level AI Is Probably A Lot Closer Than You Think

What Is “The” Singularity?

Although some thinkers use the term “singularity” to refer to any dramatic paradigm shift in the way we think and perceive our reality, in most conversations The Singularity refers to the point at which AI surpasses human intelligence. What that point looks like, though, is subject to debate, as is the date when it will happen.

In a recent interview with Inverse, Damien Scott, a Stanford University graduate student in business and in energy and earth sciences, offered his definition of the singularity: the moment when humans can no longer predict the motives of AI. Many people envision the singularity as some apocalyptic moment of truth with a clear point of epiphany. Scott doesn’t see it that way.

“We’ll start to see narrow artificial intelligence domains that keep getting better than the best human,” Scott told Inverse. Calculators already outperform us, and there’s evidence that within two to three years, AI will outperform the best radiologists in the world. In other words, the singularity is already happening in each specialty and industry touched by AI, which, soon enough, will be all of them. If you believe the singularity means catastrophe for humans, this makes us the proverbial frog in a pot of water slowly coming to a boil: the process is killing us so gradually that we don’t notice it has already begun.

“Will it be self-aware or self-improving? Not necessarily,” Scott says. “But that might be the kind of creep of the singularity across a whole bunch of different domains: All these things getting better and better, as an overall set of services that collectively surpass our collective human capabilities.”

Not If, But When

Ray Kurzweil, a computer scientist and director of engineering at Google, takes the opposite view: that a “hard singularity” will occur at a particular point in time. Kurzweil has made 147 predictions since the 1990s, most recently settling on 2045 as the year “[w]e’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music.”

Masayoshi Son, CEO of SoftBank, and Kurzweil are splitting hairs: Son argues that the singularity will arrive in 2047. Despite the two-year difference in their predictions, both are basically optimistic. As Son put it: “I think this super intelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits it will be our partner for a better life.”

Not everyone takes such a positive view of the singularity: Elon Musk sees it as an inevitability, but one that demands we prepare properly. In that vein, he is working on Neuralink, a technology and process for merging human intelligence with AI.

Meanwhile, physicist Edward Witten has said that we will never be able to unravel all of the mysteries of consciousness, which would be a stumbling block to the singularity: if reaching it requires computers that mimic the human brain, what happens if they can’t mimic consciousness because we can’t explain it ourselves? On the other hand, economist William Nordhaus has studied the economic implications of the singularity, only to conclude that while it may be coming, it isn’t happening anytime soon.

So, is the singularity on the horizon? Will it be a single, watershed moment in human history? Or will we simply look back someday with wonder at how far we’ve come and how much has changed since we blended our intellects with AI? The same hazy memories you might have of life before cell phones and the internet (if you’re old enough to remember those times) might one day surface as you recall the days when the human mind thought on its own, without the benefit of AI.
