Should We Be Worried About AI?

Big names in the tech world have been vocal about why the potential applications of artificial intelligence could be cause for worry. Elon Musk said, “AI is far more dangerous than nukes,” and even the late physicist Stephen Hawking raised concerns that the rise of powerful AI systems could end humanity.

On the one hand, experts see AI as one of the most transformative technologies in history. On the other hand, that same transformative power is exactly what we should be wary of, because it can change things for better or for worse.

Sci-fi movies like The Terminator and The Matrix don’t help either, portraying the development of superintelligent machines as a frightening prospect for humankind.

However, other experts say we don’t need to be scared of artificial intelligence. Humans, they argue, tend to fear what they don’t understand, and human greed and natural stupidity are far scarier than artificial intelligence.

So, should we be afraid of artificial intelligence? Let’s look at the risks AI poses and how society can mitigate them.

Why Is There a General Fear About AI?

One of the most widespread fears of AI is general anxiety about what it could be capable of. A recurring theme in science fiction is AI systems going rogue: if machines get too smart, people fear we won’t be able to control them. These depictions of AI gone bad feed public unease about the development of intelligent systems. That unease deepens because AI systems are becoming more intelligent faster than public understanding of them can keep up, and the unknowns leave humanity with no clear sense of where things could go.

A great antidote to that fear is the realization that whenever society has faced a significant shift driven by technological advances, humans have adapted right along with it.

Risks and Threats AI Can Pose

AI technology is advancing quickly, and that advance carries risks, such as:

Job automation

The most immediate concern about AI is mass unemployment as human workers are replaced by automated systems. In the previous wave of automation, it was mostly blue-collar jobs like manufacturing work that were automated; the fear is that in the new wave, white-collar, knowledge-based service jobs will be automated too.

As AI grows, the need for trained human workers in many sectors could shrink. Jobs with heavy exposure to automation and repetition, from retail sales and warehouse labor to manufacturing, market analysis, and hospitality, can increasingly be done by AI.

Even professions that require post-college training and graduate degrees aren’t immune to displacement. AI is already having a significant impact on medicine, and law and accounting may be next. Much of an attorney’s work involves reading through hundreds or thousands of pages of documents and data; an AI that can comb through that material and deliver the contract best suited to the outcome a client wants could replace many corporate attorneys. In accounting, once AI can quickly work through reams of data and make decisions based on computational interpretations, human auditors may be displaced as well.

Possible danger in the hands of bad people

Another common yet legitimate concern is that AI could do terrible things in the hands of bad people. Russian President Vladimir Putin once said that whoever leads the advancement of AI will rule the world. That is why countries are pouring significant research and investment into developing AI systems of every kind. In the future, we can expect governments to apply AI to surveillance, law enforcement, warfare, and other purposes in ways that may make us uncomfortable.

While we can expect countries and governments to compete for AI dominance, the greater fear is criminals and mischief-makers taking AI technologies and bending them to their own ill-conceived plans. Because AI systems learn from their creators, the creators’ intentions, and what they want to accomplish, matter enormously.

Privacy and security risks

While job loss and mass unemployment are currently the most pressing issues posed by AI development, they are only one of many potential risks. Malicious use of AI could threaten digital security, with systems hacking or socially engineering victims at human levels of performance. It could also threaten physical security as non-state actors weaponize consumer drones.

Just as we tend to sacrifice our digital data for convenience on the Internet, will AI-driven monitoring seem like a fair trade-off for increased safety and security, despite its potential for exploitation by bad actors?

Superintelligence

Probably the biggest fear about AI is superintelligence: the point at which AI can teach itself, improve, and invent on its own, and humans, instead of being helped by the technology, become its servants. The fear is that beyond a certain point, human brains will not be able to keep up with the pace of advancement and invention because things will be moving far too fast.

If computing systems reach a point where they outstrip their human creators, what will it mean for humanity? It forces us to ask what intelligence is and how we define and measure it, for humans and computers alike.

But the big counterargument is that we are still far from achieving artificial general intelligence. While much of the technology is developing quickly, parts of it still don’t work particularly well. Data is the cornerstone of AI, and a lot of it is still messy.

The anxiety about superintelligence, then, is that we don’t know where AI is headed or how long it will take humanity to get there.

Autonomous weapons

Not all experts agree with Musk that AI is more dangerous than nukes. But what if an AI decides to launch nuclear or even biological weapons without human intervention? What if an enemy manipulates data to redirect AI-guided missiles back to where they came from? Both are possibilities, and both could spell major disaster.

If any major military power prioritizes AI weapon development, a global arms race is inevitable, and autonomous weapons are something to be scared of. Unlike nukes, AI weapons require no hard-to-obtain raw materials, so they could become cheap and ubiquitous enough for military powers to mass-produce. It would then be only a matter of time before they appear on the black market and in the hands of dictators, terrorists, and warlords.

Mitigating the Risks of AI

Many believe that the only way to prevent, or at least temper, the havoc AI could wreak is regulation at the international level. Arguably, the greatest worry about AI is that machines may become better at making decisions than humans, enslaving humanity to automated decision-makers and whoever controls them.

A public regulatory body with the insight and oversight to confirm that everyone is developing AI safely is crucial. At the same time, AI research must not be stifled: any country that lags behind in AI development is at an economic, social, and military disadvantage. The solution is selective regulation; experts believe there must be a treaty that bans AI weapons outright or permits only certain applications of the technology.

In the end, AI is not all doom and gloom. If humans can balance the growth of AI with the wisdom to manage it, we can have an inspiring future with AI. That will take a lot of planning and work, but it’s possible.