Okay, you keep hearing about artificial intelligence (AI), and it really is quite the buzzword these days. So, is AI dangerous? Well, that depends. Before we can say whether AI is dangerous, there are a few things we need to explore first.
- What is AI?
- How is AI used?
- How can AI be abused?
Once we know those answers, we can decide whether artificial intelligence is truly dangerous.
What is AI?
Well, Britannica says that artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Essentially, AI is a computer program with the ability to ingest data, learn from it, and adapt. Traditionally, computer programs could only do what they had been programmed to do. Over the years, these pre-programmed reactions became quite complex, and even seemingly random at times, but that was all still part of the program. Today, AI is capable of learning, much as we do, from past experience and changing its future actions to reflect what it has learned.
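To make that contrast concrete, here is a minimal toy sketch (in Python) of a program that learns a rule from examples rather than having the rule hard-coded. The function name and the toy data are purely illustrative, and real AI systems are vastly more complex, but the principle is the same: the behavior comes from the data, not from the programmer spelling it out.

```python
# A toy illustration of "learning from data": the program is never told
# the rule y = 2x; it infers that rule from example pairs alone.

def learn_slope(examples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs using simple gradient descent."""
    w = 0.0  # start with no knowledge of the rule
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y   # how wrong the current guess is
            w -= lr * error * x  # nudge w to shrink the error
    return w

data = [(1, 2), (2, 4), (3, 6)]  # examples of the hidden rule y = 2x
w = learn_slope(data)
print(round(w, 2))  # the learned weight ends up very close to 2.0
```

Feed it different examples, say pairs following y = 3x, and the very same program learns a different rule. A traditionally programmed function would have to be rewritten instead.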
How is AI used?
Already, we see artificial intelligence being leveraged in so many ways: healthcare, the gaming industry, marketing and design, finance, and beyond. Smart self-driving cars, anyone? Investopedia reviews other uses, such as detecting fraudulent banking activity or guiding dosages for medical patients. The world around us is leveraging this new technology in many positive and helpful ways. Some of these uses reduce mundane tasks for humans, and others actually replace human tasks altogether.
How can AI be abused?
Unfortunately, artificial intelligence can also be put to malicious use. Bad actors like to use AI for phishing campaigns, fraudulent website generation, malicious computer code, and digital impersonation. Additionally, AI is capable of producing complex code and plans, plans that could be used to pick optimal times and locations for theft, or for something more sinister.
So, is AI dangerous?
Well, it certainly can be. Without the proper controls in place, artificial intelligence can fall victim to abuse. That abuse may take the form of the outright malicious uses mentioned above, or of content manipulation: if an AI is given bad data to learn from, or lacks proper safeguards, it can make harmful mistakes, whether induced deliberately or arising by accident. When AI is used in banking, self-driving cars, or healthcare, it's easy to see how any slip-up could be devastating to life and property.