From Siri and self-driving cars to robot vacuums and virtual personal assistants, artificial intelligence is transitioning from the next big technological advance into a popular consumer technology. For decades, scientists have explored ways for technology to meet the demands of a fast-paced, advanced society. Now the question is less about racing to catch up and more about whether we are evolving too fast for the safety of humanity.
Artificial intelligence, or A.I., is essentially the capacity of a computer to learn from data and imitate human intelligence. As a field, it involves research into programming methods designed to supplement human intellectual abilities. For example, Apple’s Siri and Google Now are smartphone assistants that use natural language to answer questions, make recommendations, and perform actions. Google’s self-driving cars use sensors and software to detect and predict the behavior of pedestrians, cyclists, vehicles, and road work in order to navigate safely.
However, this technological development has met controversy among some renowned physicists and innovators. Stephen Hawking, a distinguished theoretical physicist, warned about the threat of A.I., claiming that “The development of full artificial intelligence could spell the end of the human race.” Elon Musk, the CEO of SpaceX and Tesla Motors, likewise called A.I. humanity’s “biggest existential threat,” one that could be “potentially more dangerous than nukes.”
Although most A.I. researchers are striving to develop intelligent systems that make people’s lives easier, it is possible that A.I. will one day supersede human intelligence. If that happens, scientists will have no surefire way of predicting how it will behave. Because the field is largely unprecedented, past technological developments can’t serve as a guide: we as a species have never created anything with the ability to outsmart us.
A.I. could also have other adverse effects on society. New military technology includes autonomous weapons with artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. And if a super-intelligent system were tasked with an ambitious geo-engineering project, it might wreak havoc on our ecosystem as a side effect. There is also the emotional aspect. We can develop genius robots and computer programs, but will they ever learn emotional values and ethics? Can we trust robots and drones with guns? Can we trust that the A.I. programming of our computers and machines won’t be hacked and turned into malware? As in so many sci-fi movies, should we be worried that A.I. will turn against us once it outsmarts us? These questions are being asked not only by prominent scientists but also by everyday consumers.
Nevertheless, the development of A.I. is the future of technology, and it should not be suppressed because of current fears and hesitation. It is inevitable that a new field raises concerns; everything new and unpredictable in our past was met with similar reluctance or aversion. For every innovation put forth, there will always be sizable opposition. Although it’s important to maintain our traditional values, it is becoming increasingly pressing to keep evolving our technology in a changing world. Limiting technological development has never been a winning strategy, and once we start restricting progress, we run the risk of preventing beneficial innovation, stifling creativity, and suppressing human ingenuity.
There will always be two sides to the argument over technological development, but the key to A.I. will be how judiciously we apply it. Some argue that A.I. robots and machines will depress the economy by taking jobs away from real humans. Yet right now A.I. is creating new jobs, having produced an entirely new field of scientific research. Others believe that A.I. will make work obsolete, since it will be able to complete everything from menial tasks to PowerPoint presentations. But this might create opportunities for far more substantial innovation, since we won’t be wasting our time on unnecessary tasks. It is not entirely correct to say that A.I. will promote human lethargy or that the value of human life will decline without work. On the contrary, life might be drastically improved, as people can focus on what truly interests or inspires them rather than on what simply needs to be done in order to function.
As for the technical concerns, it should be noted that no A.I. yet exists that could take control of our species. A.I. programs aren’t lone entities created and then left to their own devices. They start out like any other computer program, with a set of instructions. From there, they learn from the data they encounter or are provided, guided by researchers. It is unlikely that, at least in this era, A.I. will become so dangerously powerful that it poses a threat. Moreover, if scientists or government officials were to pursue A.I. for national purposes in the future, the programming would certainly be heavily protected and encrypted. It is very plausible that if we were to entrust our safety to robots, computers, and machines, they would not be so easily corrupted or hacked (we’re smarter than that).
In the end, A.I. is something big and powerful that can help us. We shouldn’t fear power just because it might overtake us, and we shouldn’t suppress innovation just because it brings on the unknown. A.I. may have faults and weaknesses, but since it is clearly the future of technological development, it is our duty to pursue it. Yes, it is risky to depend on or trust a computer, but this is a calculated risk that we need to take. As long as the development of A.I. proceeds in good measure, humanity will be in good (perhaps robotic) hands.