IS ARTIFICIAL INTELLIGENCE A DOUBLE-EDGED SWORD?
Technology has come a long way in a very short time. Voice-activated navigation, home lighting, door locks and cameras controlled by computers, and motor vehicles driving themselves down busy streets: just a decade ago, all of this would have seemed impossible. Now, it’s a reality.
But if these amazing changes in technology happened in such a short period of time, what are we, as humans on this planet, going to encounter in the very near future?
The answer is both miraculous and extremely scary. Advances in the computer and robotic sciences have, indeed, put us in a position to obtain information and enjoy physical conveniences around the clock. Technology has leapt so far that some of our basic needs are no longer consciously controlled by us but are handled via artificial intelligence ... and that’s how our ultimate fantasy world could turn into our apocalyptic nightmare.
Artificial intelligence, or “AI,” as it’s commonly known, continues to advance at a dizzying pace. Computers that can seemingly think on their own—or, more frightening, think for us—might be a near-future, worldwide problem. At present, “it” could be just waiting in the shadows, ready to strike.
But is this a genuine concern, or is it just a conspiracy theory that is more fitting for a science fiction film?
IS THE SEED ALREADY PLANTED?
One popular idea is that the beginnings of an AI takeover are already underway in today’s developed societies, although most people don’t even realize it. That is because convenience and the overwhelming desire for new and improved technology trump a person’s logical assessment of the long-term repercussions of adopting it. Simply put: If something makes a person’s life easier, it is welcomed into that life, nearly always without question.
This, in itself, is quite dangerous. To blindly incorporate cell phones, automated systems throughout the household and other computer-driven systems designed to make life easier guarantees that the tidal wave of increasingly capable advances won’t slow anytime soon.
This isn’t to say that your automated garage door opener will trap you within your home or that the GPS system in your car will drive you off a cliff. No; it’s not that dramatic, nor is it intended as such. But with the general public allowing, and more specifically encouraging, ever-more-advanced AI, computerized systems will become more capable of basic “thinking,” and that capability will increase exponentially. That’s when the first indications of machines “thinking” for themselves could occur ... and, after that, the first physical conflicts with humans.
SOCIETY CAN’T HAVE IT BOTH WAYS
A measure that can be taken to limit an AI is to keep it contained within certain parameters. These parameters would only allow it to progress to a certain point and no further. Nevertheless, this idea contradicts the entire reason AI was pursued in the first place: to aid in the performance of a human’s everyday tasks and evolve proportionately.
Limiting a supercomputer’s abilities might prevent a future takeover, but it would halt the progress of AI technology. Also, keeping it contained, in reality, won’t work because of man’s constant “I can do better” or “we can go further” attitude. This is what drives the human race
to explore, improve technology and strive to do things never before achieved. Arrogance or confidence? A little of both, but only because we are intent on moving forward with the human race’s continued progress. Simply stated, computers allow humans to do more than we can do on our own.
Another reason it could be very difficult to contain an AI system is that the AI itself might “find” a way around the constraints we try to impose on it. It could figure out how to manipulate the system or its human users, or even discover an electronic path out of its “containment field.” Humanity might not be aware of just how far the supercomputer has progressed, and we might essentially lower our collective guard ... and that could prove disastrous.
PRECAUTIONS TO PROTECT
The very idea that computer technology could evolve and ultimately control humans, similar to what has been depicted in science fiction books and movies, is disputed by some scientists within the tech field.
Their argument is that safeguards within the programming would be put into place to avoid such scenarios. This, on the surface, might seem to be a simple and effective preventative measure, but “wild cards” need to be taken into consideration.
Terrorist intervention is one. If a hostile organization either hacks the programmed preventative measures or creates its own AI without such restraints, perhaps even unwittingly, a takeover could occur. Another possibility is that an accident, either man-made or buried deep within the programming, could trigger a snowball effect; the outcome could be a computerized, automated system with self-preservation as its main objective.
Surely, the sharp minds of the most intelligent people on Earth could find a solution? One problem is that a computer’s “brain” outclasses human thought in speed by an inconceivable margin. For example, human axons carry signals in the brain at about 120 meters per second, while a computer moves information through its system at nearly the speed of light (approximately 300 million meters per second). That’s quite a difference! The AI would be millions of times ahead of a human’s ability to process data.
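As a quick sanity check of the numbers above, the speed gap can be worked out directly. This is a minimal back-of-the-envelope sketch using the article’s approximate figures, not precise physiological or engineering measurements:

```python
# Rough comparison of biological vs. electronic signal speeds.
# Both figures are the approximations cited in the text.
axon_speed_mps = 120            # fast myelinated axon, meters per second
electronic_speed_mps = 3.0e8    # near the speed of light, meters per second

ratio = electronic_speed_mps / axon_speed_mps
print(f"Electronic signals are roughly {ratio:,.0f} times faster")
```

The ratio comes out to about 2.5 million, which is where the claim that an AI would be “millions of times ahead” of human processing comes from.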
Exactly when artificial intelligence could, or would, take over the world is another subject under much debate. The general consensus is that it won’t happen overnight, next year or within the next 10 years, but it could happen within several decades. The late physicist Stephen Hawking believed it could happen within 100 years. That’s a very large ballpark figure. But, just as technology has progressed within the past 100 years, the idea of robots continually upgrading their own hardware and software is not so far-fetched.
If you lived about 100 years ago and told someone that future versions of those first rudimentary automobiles would be able to drive themselves, your sanity would have been called into question. Meanwhile, this
technology is becoming an increasingly common occurrence on the streets today, and competition among manufacturers to create a variety of viable autonomous vehicles is fueling their progress and adoption. It’s the steady process of AI evolution that could be man’s undoing—not just one big, sudden event that occurs to break the normal cycle of continual progress.
FOLLOWING LOGIC, NOT SUPREMACY
Contrary to what has been shown in science fiction movies and books during the past century, the reason AI would take over isn’t because it wants to dominate humans or to reign supreme over the Earth; rather, it would happen because the AI would have a goal that needs to be fulfilled. That goal could be a very simple one: something as insignificant as collecting a certain item or completing a pre-programmed task.
However, because the computer can’t identify its task as inconsequential (as a human could), it would resort to any means within its ability to accomplish its goal, including eliminating anything or anyone that got in its way. This idea negates the common misconception that an AI can be friendly or evil. It makes no such distinctions; it just does what is needed to reach its objective. In doing so, it lands, when viewed by a human, in a category of either “good” or “bad.”
WHAT CAN BE DONE?
Is there really anything a single individual can do to prevent the worst-case AI scenario from occurring? Unfortunately, the answer is no. Only through joint discussions among the world’s top computer designers and artificial intelligence pioneers can universal constraints and precautions be put into place.
However, acceptance of, and compliance with, these limitations is a very tall order. As corporations and countries compete against one another to advance computer technology, who would abandon the opportunities to create AI applications that can be used to create profits or political advantages?
With so many loose ends across the globe, the likelihood is very real that something will slip through international agreements and blossom into a life-threatening problem for humans.
Not unlike the development and spread of nuclear weapons, technological progress can’t be stopped. When that progress threatens the safety and, in the extreme, the very existence of the human race, the only option could be to fight back against the robotic uprising!
Robots similar to those that are common on today's factory floors might become our adversaries in the future.
Above: The “brain” of modern-day computers operates at near light-speed, significantly faster than a human’s ability to think.
Right: Small agricultural robots could be the precursors of more-advanced roaming robots. Some say the “seeds” are already planted today for a robotic rebellion in the future.
Far right: Our devices can already be linked and synced to each other and the cloud. Will there come a time when our AI-enabled conveniences direct us through our lives instead of assisting us?
Near right: If the pursuit of cures for human diseases were taken over by AI-enabled entities, would the cures be found sooner ... or never?
If robots have access to parts that can be used to build other robots, an army of mechanical soldiers is not beyond the realm of possibility.
For now, at least, humans are still required to perform maintenance on robots. That requirement is a safeguard we might eventually lose.
Robots have been working in U.S. auto factories since the 1970s and in other types of manufacturing for even longer.
Terrorist hackers might “open the box” and undo restrictive parameters that keep supercomputers in check.
Right: Is there a better way to indoctrinate humans into accepting AI devices than to incorporate them into the child-rearing process? Every day, millions of children are occupied by electronic devices—in many cases, to give their parents extra free time.
Above: Originally conceived as workers that would handle all our mundane and dangerous chores, intelligent machines could become the greatest threat to our existence.
Above: Humans have an inherent inability to remain peaceful for long periods of time. Would there be an advantage in having an AI overseer so that human conflict would not be allowed?
With combat and other types of drones already heavily deployed around the world, is the fear of uncontrollable robotic armies truly that unrealistic?
In a world dominated by AI, would humans ever have privacy and anonymity, or would we have to adapt to a life throughout which we are always being monitored?