Artificial Intelligence Is Only Dangerous If Humans Use It Foolishly

It is the nature of technology to improve over time. As it progresses, technology brings humanity forward with it. Yet, there is a certain fear that surrounds technologies like artificial intelligence (AI) and robotics, in part due to how these have been portrayed in science fiction. This fear, however, is mostly a fear of the unknown. For the most part, humankind doesn’t know what will come of the continued improvement of AI systems.


The coming of the technological singularity is one such outcome that’s greatly influenced by science fiction. Supposedly, AI and intelligent machines will become so smart that they will overtake their human overlords, ending the world as we know it. We don’t know if that would indeed happen, of course — although there are some institutions that are actively working towards making the singularity happen.

But perhaps the most immediate concern people have with AI and automated systems is the expected job displacement that goes along with them. A number of studies seem to agree that increased automation will cause an employment disruption in the next 10 to 20 years.

One study predicts machines will replace 47 percent of jobs in the United States. Another study expects 40 percent of jobs will be displaced in Canada, while British agencies predict some 850,000 jobs in the UK will be replaced by automated systems. Meanwhile, 137 million workers in Southeast Asia are in danger of losing their jobs to machines in the next 20 years. The trend is expected to cover a whole range of industries, not just blue-collar jobs.

What to Fear, Really

Given all of this, are we right to fear AI?

At the risk of sounding alarmist: yes, there are things to be worried about. But a great deal of this has to do with how we use AI, according to a piece written by ZDNet and TechRepublic UK editor-in-chief Steve Ranger. “AI is a fast-growing and intriguing niche,” Ranger wrote, “but it’s not the answer to every problem.”

Ranger warns of the inability of industries to cope with AI, which could potentially cause another “AI winter.” He writes: “[A] lack of skilled staff to make the most of the technologies, along with massively inflated expectations, could create a loss of confidence.” Moreover, there’s the danger of treating AI as the magical solution to everything, neglecting the fact that AI and machine learning algorithms are only as good as the data put into them. Ranger says, “ways must be found to make sure that AI-led decision making becomes as easy to understand — and to challenge — as any other type.” He sees this as the ultimate threat related to AI, and he points out that research is already being done into understanding how AI reaches its conclusions. The five basic principles laid out are responsibility (a person must be available to deal with the effects of the AI), explainability (the AI’s decisions must be explainable in simple terms to the people they affect), accuracy (sources of error must be tracked), auditability (third parties should be able to review the AI’s behavior easily), and fairness (the AI should not be affected by human bias or discrimination).

Ultimately, the greatest threat to humanity isn’t AI. It’s how we handle AI. “Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn,” Ranger concludes.

Measured and Monitored

Thankfully, there are institutions that have already come up with guidelines in pursuing AI research and development. There’s the Partnership on AI, which includes tech heavyweights like Amazon, Google, IBM, Facebook, Microsoft, and Apple. Another one is the Ethics and Governance of Artificial Intelligence Fund (AI Fund) that’s led by the Knight Foundation. There’s also the IEEE’s framework document on designing ethically aligned AI.

The benefits of AI are undeniable, and we don’t need to wait for 2047 and the singularity to figure out just how much it affects people’s lives. Today’s AI systems shouldn’t be confused with sci-fi’s Skynet and HAL 9000. Much of what we call AI right now consists of neural networks and machine learning algorithms that work in the background of our most common devices. AI is also found in systems that facilitate trend-based decision making in companies and improve customer service.

If used properly, AI can help humanity by keeping people away from hazardous jobs, reducing the number of car accidents, and improving medical treatments. We shouldn’t let our fears outweigh these benefits.

The post Artificial Intelligence Is Only Dangerous If Humans Use It Foolishly appeared first on Futurism.

If We Don’t Regulate Automation, It Could Decimate the U.S. Economy

Our Current State

Several politicians and leaders in technology law are calling for the United States to create a department that concentrates on robotics and artificial intelligence (AI). AI is becoming ubiquitous, and is present in everything from your cell phone to self-driving cars.

The future of the workforce is in automation, and a plan needs to be in place for workers who are affected. In his farewell address, former president Barack Obama expressed his concerns about the impact of future tech. “The next wave of economic dislocation won’t come from overseas,” Obama said. “It will come from the relentless pace of automation that makes many good, middle-class jobs obsolete.”

The U.S. should start taking action to address Obama’s concerns, argues John Frank Weaver, a lawyer who specializes in AI law. In an interview with Inverse, he advocated the formation of a federal commission or similar government entity to establish overarching regulations for AI and autonomous technology.

“The idea that there’s one body where Congress and the executive branch are able to pool their resources and come up with a coherent federal policy for the country, both in terms of domestic policy and how we approach international treaties, I think is important, because of the potential dangers in a lot of areas,” Weaver said.

Some of these potential dangers might be privacy concerns from drones or smart TVs, or safety issues stemming from cars driven by AI. There are also economic implications to these technological advances: what happens to taxis, Uber, Lyft, long-haul trucking, and other industries when AI takes over driving? Who is responsible for accidents caused by self-driving vehicles? A centralized federal agency could tackle these problems and others.

The idea of a federal agency to regulate robotics isn’t new. Ryan Calo, professor at the University of Washington School of Law and adviser to the Obama administration, wrote a proposal for one in 2014. The proposal points out that private tech companies are already looking to government agencies for guidance in these uncharted technological territories. For example, Toyota approached NASA for help when its cars were accelerating unexpectedly. But NASA cannot take on all the problems that will come with a growing robotics industry — its members have too many other things to focus on.

Legislating Modalities

Currently, any regulations of robotics and AI are spread out across many organizations. The Federal Aviation Administration, Securities and Exchange Commission, and the National Highway Traffic Safety Administration have some of the responsibility when it comes to robotics regulations. However, this arrangement doesn’t allow for full coverage or expertise in this highly technical and rapidly changing field.


While the U.S. federal government is lagging behind technological advances, many states are struggling to come up with their own solutions. Since 2012, legislation on autonomous vehicles has been passed in Alabama, California, Florida, Louisiana, Michigan, Nevada, North Dakota, Pennsylvania, Tennessee, Utah, and Virginia, as well as in Washington, D.C. However, when you compare that body of legislation to the airline industry’s, it doesn’t even come close. If every department takes on only the robotics issues that affect it directly, there’s no across-the-board policy, which can lead to confusion.

It’s not like such policies are impossible to put in place. Japan and the European Union have both created robotics commissions along the lines of what Calo and Weaver have proposed. In Japan in particular, robotics is an enormous industry. In 2009, the nation employed over 25,000 robot workers, more than any other country. Robotics could be a solution for the country’s declining birthrate and diminishing workforce. The European Union’s proposal covers rules and ethics governing robots in addition to tackling the societal issues that will arise.

Allowing the robotics industry to run amok without oversight could have far-reaching consequences. For a similar example, remember the banking industry collapse of 2008, which occurred because of a lack of federal oversight when it came to banking regulations. Nine years later, the industry is still suffering, according to author Anat Admati.

She says that it’s necessary to look to experts first to put guidelines in place — politicians and regulators probably don’t have the specific knowledge necessary to create rules about driverless cars, for example. In an interview with Inverse, Admati said, “It is important that policymakers rely on sufficient, un-conflicted expertise and make sure to set rules in a timely manner. Otherwise, we may discover that risks have been ignored when it is too late and harm has occurred.”

With so much of the economy at stake, it is vital that we have regulations in place to prevent another collapse like that of 2008. A federal robotics agency is necessary to nurture this growing industry and to protect the nation from its side effects.


The Next Step in AI Is Training Machines to Think Like We Do

Deep Learning Right Now

When you think of “amazing” tasks a computer can manage, you probably think of impossibly complex calculations in rapid time, or parsing huge amounts of data—things your own mind could never manage on its own. Or maybe you think of the recent defeat of Lee Sedol at Go, a classic game of strategy, or IBM’s Watson taking on Jeopardy and winning. These more recent wins for AI were made possible in large part by deep learning, which is now opening up all kinds of possibilities for AI and the people who use it.

The simple, day-to-day tasks of common sense that even a toddler’s mind manages seem to be what most easily stumps AI systems: things like recognizing what kind of food is on your plate, or identifying which emotions are clouding the face of someone looking at you. These effortless tasks for the human brain were impossible challenges for machines — until now.

Deep learning techniques are gifting machines with what feels like common sense to human users. In the past, programmers would write complex algorithms that detailed everything down to the most minute possibility. This kind of explicit, deterministic algorithm is achievable when large, unwieldy calculations are the task at hand. Deep learning frees AI from these kinds of constraints, allowing the system to learn from its mistakes, remember what it has learned, and interact with users for more information.
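The contrast above can be sketched in a few lines (a minimal toy illustration, far simpler than a real deep network; all names and data here are invented for the example): a hand-coded rule must enumerate its cases explicitly, while even a single learned neuron arrives at the same behavior by correcting its own mistakes.

```python
import numpy as np

# Hand-coded rule: the programmer enumerates every case explicitly.
def or_rule(a, b):
    return 1 if (a == 1 or b == 1) else 0

# Learned rule: a single neuron infers the same behavior from examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])  # the OR truth table

w, bias = np.zeros(2), 0.0
for _ in range(10):                          # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + bias > 0 else 0
        w = w + 0.1 * (target - pred) * xi   # adjust only after a mistake
        bias += 0.1 * (target - pred)

learned = [1 if xi @ w + bias > 0 else 0 for xi in X]
print(learned)  # [0, 1, 1, 1], matching the hand-coded rule
```

Deep learning stacks many such units and learns far richer patterns, but the shift in mindset is the same: the programmer supplies examples and a correction rule rather than the answer itself.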

This deep learning revolution is happening in large part because now, there is so much big data available for teaching. The human toddler can typically figure out what it needs to know after a few tries, but it takes AI many, many trials to learn the same lessons. Deep learning hinges upon access to huge amounts of data, because machines powered by AI need to base their choices on probabilities and statistical significance. As yet, there is no mechanical substitute for intuition.


Deep Possibilities

Advances in deep learning have already improved voice search capabilities significantly: Google replaced its Android speech system with a new deep-learning-based system, and errors dropped by up to 25 percent overnight. Cameras using deep neural networks can now read aloud to people and understand sign language. Facebook now boasts that its deep-learning-based capabilities make the platform more accessible for blind users by describing photographs.

In the coming years, both big tech and a slew of startups will be using deep learning to create new products and services, as well as upgrade their existing applications. New markets and businesses will germinate and grow, fostering more innovation, services, and products. Deep learning systems will improve and become more widely available and simpler to use. The easier it is to use, the more our interactions with the technology will change.

Aditya Singh, a partner at Foundation Capital, believes that the development of a deep learning operating system will democratize deep learning and prompt the widespread adoption of practical AI. The result will be that everyday people will be able to solve real-life problems of significant magnitude using deep learning. In this sense, AI has the very real potential to be an equalizing tool, allowing people from all walks of life to engage in innovative work that can change the world.


We Created AI, and Now They Are Teaching Us

Catastrophic Forgetting — Forgotten

The latest research from DeepMind is proving how inspired the idea to model neural networks on the human mind truly was. The strength of the association between human brains and their computational models is revealing weaknesses in our own minds and teaching us how to overcome them.

Google’s engineers, inspired by neuroscience, have created an artificial intelligence (AI) using an artificial neural network that can hang onto knowledge as it moves from task to task, spinning the straw of raw memory into gold that stays with the program, forming long-term experiences. And while human minds do this after a fashion, we are not as adept at discerning what is important: you may recall the song lyrics you heard when you first rode a bicycle as vividly as you recall important information from your career successes.


Neuroscience has gradually pulled back the curtain, making the long-term pruning mechanisms of the human mind more plain to us, revealing how our minds protect the most important information from being overwritten by less useful data. This is what DeepMind is now reproducing in its AI, as evidenced by something relatively prosaic: progress in the Atari gaming universe.

Here, the AI learns what isn’t salient through sheer numbers and brute force. If you had to process your way through billions of experiences, you too might have a more discerning sense of what mattered, and you might be able to shape your memory to match that awareness. The AI can do this now only because it also learns what it must hang onto.

In other words, DeepMind is getting past catastrophic forgetting, the tendency to replace crucial older information with newer memories. When it comes to the Atari challenge, this means that there’s no longer any need for the AI to reinvent the wheel with every game — it can remember the wheel it invented three games ago.

Learning Elastic Weight Consolidation

The new DeepMind approach, elastic weight consolidation, is a marked improvement over the machine learning processes that came before it. Elastic weight consolidation simply means the ability to weight the protection assigned to synapses, ranking them as more or less likely to be overwritten in the future. This way, often-used synapses associated with important skills are preserved, the least important synapses are lost first, and the neural networks don’t expand to unwieldy sizes.
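The weighting idea described above can be sketched as a quadratic penalty on how far each weight drifts from its old-task value (a toy illustration of the principle, not DeepMind’s actual implementation; the function, variable names, and numbers here are invented for the example):

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Penalty that anchors important weights near their old-task values.
    `fisher` estimates each weight's importance; a large value means the
    weight was crucial for the previous task and is costly to move."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Weights learned on task A, and their estimated importance.
theta_a = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 0.0])  # the first weight matters most

# Two candidate updates while training on task B:
move_important = ewc_penalty(np.array([2.0, -2.0, 0.5]), theta_a, fisher)
move_unimportant = ewc_penalty(np.array([1.0, -2.0, 5.5]), theta_a, fisher)

# Moving the important weight is heavily penalized, while the
# unimportant one can be overwritten almost for free.
print(move_important, move_unimportant)
```

Adding this penalty to the loss for the new task is what lets often-used “synapses” survive while unimportant ones are recycled first.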


These AI neural network advances are holding a mirror up for us, allowing us to see the foibles of the human mind and memory in more useful ways. As we see Deep Learning AI networks start to protect synapses systematically, we can see a blueprint for studies of specific human brain regions. Evolution in its natural state is too slow for us to watch and learn from; the evolution of an AI system, on the other hand, can offer us a model and organizational strategy for memory preservation.

Elastic weight consolidation is at the heart of both AI and human intelligence because it enables learning task after task without forgetting. While the new DeepMind algorithm models the synaptic consolidation of the human brain, it may also improve on the process, finding new ways to efficiently maintain existing knowledge while absorbing new data. As AI systems take on creative challenges with greater success (areas we believed to be the domain of humankind), we can learn from the ways they solve problems of lost knowledge, faulty overwriting, and other miscalculations, or — as such problems are known to humans — forgetting.

Because neural networks are modeled after our own brains, they are engaging in a kind of “psychiatric plagiarism” to bring about their own evolution. The recreation of natural evolution in code provides us with a window into our own neurological development. This is the beauty and possibility that AI is presenting to us: the ability to design new studies and experiments for our own brains.


Eminent Astrophysicist Issues a Dire Warning on AI and Alien Life

“Fixed to This World”

Lord Martin Rees, Astronomer Royal and University of Cambridge Emeritus Professor of Cosmology and Astrophysics, believes that machines could surpass humans within a few hundred years, ushering in eons of domination. He also cautions that while we will certainly discover more about the origins of biological life in the coming decades, we should recognize that alien intelligence may be electronic.

“Just because there’s life elsewhere doesn’t mean that there is intelligent life,” Lord Rees told The Conversation. “My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity.”


Rees thinks that there is a serious risk of a major setback of global proportions happening during this century, citing misuse of technology, bioterrorism, population growth, and increasing connectivity as problems that render humans more vulnerable now than we have ever been before. While we may be most at risk because of human activities, the ability of machines to outlast us may be a decisive factor in how life in the universe unfolds.

“If we look into the future, then it’s quite likely that within a few centuries, machines will have taken over—and they will then have billions of years ahead of them,” he explains. “In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines.”

In contrast to the delicate, specific needs of human life, electronic intelligent life is well-suited to space travel and equipped to outlast many global threats that could exterminate humans.

“[We] are likely to be fixed to this world. We will be able to look deeper and deeper into space, but traveling to worlds beyond our solar system will be a post-human enterprise,” predicts Rees. “The journey times are just too great for mortal minds and bodies. If you’re immortal, however, these distances become far less daunting. That journey will be made by robots, not us.”

Surviving Our Progress

Rees isn’t alone in his ideas. Several notable thinkers, such as Stephen Hawking, agree that artificial intelligence (AI) has the potential to wipe out human civilization. Others, such as Subbarao Kambhampati, the president of the Association for the Advancement of Artificial Intelligence, see malicious hacking of AI as the greatest threat we face. However, there are at least as many who disagree with these ideas, with even Hawking noting the potential benefits of AI.

As we train and educate AIs, shaping them in our own image, we imbue them with the ability to form emotional attachments that could deter them from wanting to hurt us. There is evidence that the Singularity might not be a single moment in time, but is instead a gradual process that is already happening—meaning that we are already adapting alongside AI.

But what if Rees is correct and humans are on track to self-annihilate? If we wipe ourselves out and AI is advanced enough to survive without us, then his predictions about biological life being a relative blip on the historical landscape and electronic intelligent life going on to master the universe will have been correct—but not because AI has turned on humans.

Ultimately, the idea of electronic life being uniquely well-suited to survive and thrive throughout the universe isn’t that far-fetched. The question is, will we survive alongside it?


This New AI Is Like Having Iron Man’s Jarvis Living on Your Wall

Meet Duo, an AI device that’s “part mirror — all computer.”

Unlike standalone devices and assistants such as the Amazon Echo, Alexa, or Google Assistant, Duo operates beyond its 27-inch reflective display. It’s a powerful smart computer that connects all your home devices and serves as a sleek, discreet entertainment hub.


Think of it as something like Iron Man’s Jarvis, but instead of being built into your entire home, you interact with it via a touch-sensitive mirror just 1.9 mm thick. Because of its design, Duo can easily be mounted on any wall to blend with your interior. You can control Duo’s screen via touch or communicate with its built-in artificial intelligence (AI) companion, Albert, who can help you control any app within the system using your voice.

Duo’s on-board processor allows users to play music, check the news and weather, stream videos, control lights, play games, or even use the device as a virtual gallery to display artwork. Duo runs on its own operating system, HomeOS, and not only does it come equipped with native apps, but its team has also developed a web-based HomeOS SDK for developers who want to create their own apps for use with the device.

According to the Duo website, the device will see a limited release of only 1,000 units in October of this year, with each selling for $399.


Canada is Investing Over $100 Million to Bring AI to Life

Support for AI

You can’t deny that machines are getting smarter. And as they do, it’s inevitable that humans will turn to AI and robots to get jobs done faster and more efficiently.

To that end, governments have a responsibility to make sure that they are able to manage these changes in a way that creates more opportunity and maintains job security for human employees. The challenge for governments, then, is to establish public support for programs that will harness the benefits of artificial intelligence.


When asked on Quora, “What is your stance on AI research given Canada’s privileged position in the field?” Canadian Prime Minister Justin Trudeau explained how the government’s budget proposal invests $125 million to launch a Pan-Canadian Artificial Intelligence Strategy for research and talent.

“The Strategy will promote collaboration between Canada’s main centres of expertise in Montréal, Toronto-Waterloo and Edmonton and position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation. A leader in the area of artificial intelligence, the Canadian Institute for Advanced Research will be responsible for administering the funding for the new Strategy,” the Prime Minister said.

Rise of Automation

Trudeau believes that AI has the capacity to revolutionize every facet of the modern world. He notes:

In the same way that electricity revolutionized manufacturing and the microprocessor reinvented how we gather, analyze and communicate information, artificial intelligence will cut across nearly every industry […] It will shape the world that our kids and our grandkids grow up in.

With this budget in place, the Canadian government hopes to foster more insightful collaborations and significant breakthroughs, specifically in artificial neural networks and other algorithms designed to mimic human brain function.

The dedicated funds also aim to pave the way for broader educational opportunities in AI, allow for the creation of 25 university research chairs, and promote investment in other areas of scientific research like stem cells, space exploration, and quantum computing.


Human-Level AI Are Probably A Lot Closer Than You Think

What Is “The” Singularity?

Although some thinkers use the term “singularity” to refer to any dramatic paradigm shift in the way we think and perceive our reality, in most conversations The Singularity refers to the point at which AI surpasses human intelligence. What that point looks like, though, is subject to debate, as is the date when it will happen.

In a recent interview with Inverse, Stanford University business and energy and earth sciences graduate student Damien Scott provided his definition of the singularity: the moment when humans can no longer predict the motives of AI. Many people envision the singularity as some apocalyptic moment of truth with a clear point of epiphany. Scott doesn’t see it that way.

“We’ll start to see narrow artificial intelligence domains that keep getting better than the best human,” Scott told Inverse. Calculators already outperform us, and there’s evidence that within two to three years, AI will outperform the best radiologists in the world. In other words, the singularity is already happening across each specialty and industry touched by AI — which, soon enough, will be all of them. If you believe the singularity means catastrophe for humans, the process is like the proverbial frog in a pot of water slowly brought to a boil: the change happens so gradually that we don’t notice it has already begun.

“Will it be self-aware or self-improving? Not necessarily,” Scott says. “But that might be the kind of creep of the singularity across a whole bunch of different domains: all these things getting better and better, as an overall set of services that collectively surpass our collective human capabilities.”


Not If, But When

Ray Kurzweil, Google’s director of engineering and a computer scientist, takes the opposite view: that a “hard singularity” will occur at a particular point in time. In fact, he has predicted the singularity 147 times since the 1990s, most recently going with 2045 as the year “[w]e’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music.”

Masayoshi Son, CEO of SoftBank Robotics, and Kurzweil are splitting hairs, as Son argues that the singularity will arrive in 2047. Despite the two-year difference in their predictions, Son and Kurzweil are basically optimistic: “I think this super intelligence is going to be our partner. If we misuse it, it’s a risk. If we use it in good spirits it will be our partner for a better life.”

Not everyone takes such a positive view of the singularity: Elon Musk sees it as an inevitability, but one that demands we prepare properly. In that vein, he is working on Neuralink, a technology and process for merging human intelligence with AI.

Meanwhile, physicist Edward Witten has said that we will never be able to unravel all of the mysteries of consciousness, which would be a stumbling block to the singularity. Computers that mimic the human brain are expected to achieve the singularity, but what if they can’t mimic consciousness because we can’t explain it ourselves? On the other hand, economist William Nordhaus has studied the economic implications of the singularity, only to conclude that while it may be coming, it isn’t happening anytime soon.

So, is the singularity on the horizon? Will it be a single, watershed moment in human history? Or will we simply look back someday with wonder at how far we’ve come, and how much has changed since we blended our intellects with AI? The same hazy kinds of memories you might have of your life before cell phones and the internet (if you’re old enough to remember those times) might one day pop into your mind, recalling the days when the human mind thought on its own, without the benefit of AI.
