One of the brightest minds of this century, Elon Musk, is known for his wariness of artificial intelligence (AI). He has compared AI to the dangers posed by dictators, and insists there is a risk that “something seriously dangerous [could happen in] the five-year time frame. Ten years at most.”
It’s an alarming statement, but not too surprising given that Musk has also expressed support for the simulation hypothesis, the idea that we live in a simulation (yes, like the Matrix). When we consider that the likes of Stephen Hawking and Bill Gates have voiced the very same concerns, it raises eyebrows.
The argument isn’t about all types of AI. Applied AI performs narrow tasks that make our lives moderately easier—such as refining internet searches or unlocking our cellphones with facial recognition. It is artificial general intelligence that Musk, Hawking, and Gates have a problem with. This form of AI, if developed to its fullest potential, would not only have the ability to handle any task we can perform, but would beat us at it.
“It’s amazing to see how far technology has come over the years,” William Marcel Tremble of Charlotte, North Carolina says, “and when I look at the development of AI, I am completely floored. When I think about what is to come, I am gobsmacked.”
In most instances, this still doesn’t seem like a problem. So what if AI could do your taxes more quickly and efficiently? So what if it could handle secretarial work? In theory, this would save us all time and hassle. It isn’t the mundane that these brilliant minds are worried about, however.
It’s the potential for “going rogue” or, worse, being hacked that could ultimately cause massive harm. In a perfect world (or in this case, an imperfect world), AI would take care of all the things we don’t want to. But if AI is in control of a power grid or of autonomous weapons, what devastation could follow if that system were hacked? This scenario is far-fetched, certainly, but it has implications in more everyday situations that could just as easily cause a slew of problems.
Because AI would have an unprecedented command of the algorithms that drive marketing across social media (we all know that Facebook knows what we think), it could spread frighteningly believable misinformation that taps into our preconceived notions. In the wrong hands, AI could easily be used to find out everything about us. We’ve seen evidence of this possibility in the massive hacks and data leaks at Facebook and other social media platforms.
At its core, AI is not a bad thing. It is an incredibly helpful tool that gives us access to vast knowledge and can perform the little tasks we’d rather not do. But just because it is helpful does not mean we should rely on it entirely. A cancer center within the University of Texas purchased AI technology that was supposed, in theory, to help doctors provide better care to cancer patients. Had doctors followed the recommendations the technology produced, however, it would have caused more harm than good. AI cannot distinguish between correlation and causation, and in science, that is a massive problem.
“We need to be responsible as we continue developing exciting new technology,” NGOTechnologies CEO Jasel Patel says. “We need to carefully consider the good and the bad, and which outweighs the other.”
As we move forward with technological progress, we need to be cautious about the problems it could cause. AI can order our groceries or check the weather, but expecting it to diagnose patients and understand its own limits is something else entirely. At this point, we cannot expect AI to make moral decisions the way a human would, and we probably shouldn’t.