Ever since artificial intelligence first came into play, talk of machines reaching superintelligence and robots taking over has followed close behind. Mankind's fear of the unknown, especially of man-made sentient beings that are smarter, faster, and more efficient than we are, runs deep in our society. Because AI technologies are constantly opening new opportunities for advancement in many areas, some treat them as an existential threat. Already, we have machines topping human performance in some specific tasks, and they will only get better with time.
However one might perceive it, the truth is that artificial intelligence plays a major role in today's world. The technology is used in almost every industry, to an extent that concerns many. Even the White House is keeping a close eye on artificial intelligence. Last October, the Obama administration released a report on the growing impact of AI, including its possible consequences. Naturally, the report differs from cataclysmic predictions of impending AI doom by focusing on the economy and the greater good. Still, it offers some insightful considerations regarding security. For instance, a great deal of autonomy is already built into certain weapon systems, and further AI implementation for better precision and quicker reaction is slowly eroding human control.
Artificial Intelligence Uncertainty
It doesn't take much to turn a reasonable security concern into an apocalyptic scenario. That's exactly what some people do, to a certain degree, usually acting on misleading information or a lack of knowledge. That's not to say there aren't valid concerns about our relationship with AI. Indeed, there are real issues around safety and privacy in various AI-powered products and services. But those are a far cry from machines turning into murderous robots.
If there is one word to describe the future of artificial intelligence, it's uncertainty. Numerous technical experts, ethicists, and thinkers disagree on what AI will look like in the future and, more importantly, how it will act. There are usually two camps of thought: either AI will eventually gain the ability to wipe out mankind, or it will remain subservient and beneficial to us. If it does set its sights on killing us, the timeline given is the standard "some time in the near future." More cautious thinkers believe we can't know for certain, and that it could take much longer, if it happens at all.
The myth of AI
The myth of AI causes a lot of anxiety and needlessly harms its vast productive potential. Unwarranted negative connotations attached to artificial intelligence distort the overall image of it. Our society has much to gain from it, probably far more than we realize. Yet we often have the wrong perception because of the myth we created. That is the core problem, not the technology itself.
Stephen Hawking, one of the greatest scientists of our time, once said that AI could be "either the best or worst thing ever to happen to humanity". It's up to us to make sure artificial intelligence doesn't go the wrong way. The notion that somewhere, someone is creating an AI algorithm that might eventually take over advanced weapon systems speaks to two truths. The first is that we are probably the only species on the planet that pushes beyond its limitations; mankind's desire to achieve greatness and go beyond our dreams is what brought us to where we are now. But it also reveals the other, devastating part of the equation: that we as a society don't do nearly enough to keep killing machines out of our own hands.
The dark side of AI
Undoubtedly, there is a dark side to AI's evolution. While techniques like machine learning and pattern recognition will improve certain aspects of life, the downside will be an increase in the malicious use of advanced AI technologies. The sad truth is that it's not about what the computers will do to us; it's about what we will do with them.
Let's return to those advanced weapon systems. Drones have been all the rage ever since they boomed on the market a couple of years ago. We all know about military efforts to create the perfect unmanned combat aerial vehicle, aka the combat drone. These drones normally operate under real-time human guidance, although some levels of autonomy are already present. Considering what we have now, is it really that far-fetched to envision drones without any human control? A Skynet future, where Hunter-Killers roam the skies looking for the remaining John Connors of the world? Not really, given everything history has taught us. But, unlike anything before, the stakes are now different and much higher.
The existential threat that artificial intelligence poses to humankind is really a question of responsibility. It is as simple as that. Constant advances in deep machine learning allow us to teach our machines to differentiate between various kinds of data. Take the Roomba as a simple example. The small, cute robot functions as an almost fully autonomous device (it still needs someone to empty its bin). It keeps its own schedule and finds its way around even if you rearrange the furniture, distinguishing between miscellaneous obstacles. But it has no greater understanding beyond its narrow purpose. It cannot connect the dots to understand its environment: to differentiate humans from couches, to grasp why the dirt appears in the first place, and so on.
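The narrowness described above is easy to see in code. The following toy sketch (hypothetical, and in no way the Roomba's actual software) simulates a cleaning robot on a grid of rooms: it senses which cells are blocked and skips them, but every obstacle is just an opaque "blocked" flag. Whether the obstruction is a couch or a human never enters the picture.

```python
# A toy sketch of narrow autonomy (hypothetical; not real Roomba firmware).
# The robot perceives the world through one binary sensor question:
# "is this cell blocked?" It has no concept of WHAT blocks the cell.

GRID = [
    "....#....",
    "..#...#..",
    ".........",
    "....#....",
]

def blocked(grid, x, y):
    """Sensor model: a cell is an obstacle if it's off-grid or marked '#'."""
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return True
    return grid[y][x] == "#"

def sweep(grid):
    """Sweep every row left to right, skipping obstacles.

    Returns the list of cells actually cleaned. Note the robot never
    asks what an obstacle is -- only whether one is there.
    """
    cleaned = []
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            if not blocked(grid, x, y):
                cleaned.append((x, y))  # "clean" the passable cell
    return cleaned

cells = sweep(GRID)
```

For this agent the world holds exactly two categories, passable and blocked; that is the whole extent of its "understanding", which is why no amount of cleverer routing turns it into something dangerous.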
Accountability is of the utmost importance. We humans must not get swept up in these rapidly advancing times. The Roomba and similar devices need no further intelligence; they are fine as they are. The systems that do are the ones we must treat with extreme caution and utter pragmatism. Once a technology advances beyond our ability to restrain it, things go south, and from that point there is no coming back.
Facebook AI invents its own language
Recently, an artificial intelligence program developed by Facebook was accused of creating its own language that humans could not understand. The idea was that the recursive, self-improving aspects of the AI had optimized its intra-computer communications to the point that they were no longer recognizable to humans. While this is an intriguing idea – one that is conceptually possible, if not likely – in this case the claim proved false.
Closing words
When all is said and done, there are still plenty of questions that need answers. Will AI reach superintelligence? When? If it does, will artificial intelligence kill humans? Maybe. We are far from a Terminator future where sentient machines try to exterminate us. Still, it's not completely inconceivable; who knows what the future will bring. The reason AI intimidates us so much is that we have infused it with too much of ourselves, of our own abilities. Naturally occurring and especially human-made threats are far more dangerous, and deserve far more of our attention and time, than those coming from artificial intelligence.
Superintelligent systems need to be built on a friendly footing towards humans. We need to use AI as a tool, not face it as a threat. There is an argument to be made that we could, in the end, face a nightmare AI scenario, even if it is far from imminent. As it stands, the relationship between us and AI is a one-way street, with humans in the driving seat using AI as a means of getting around. Let us hope it stays that way.