Artificial intelligence is transforming business as we know it. What started as a highly technical niche touching only a handful of specialized business systems is projected to reach $190 billion in market value by 2025, with annual growth through 2027 expected to eclipse 33%. Even more striking is the scale of the impact AI is expected to have: by 2030, AI is projected to drive a 26% increase in global GDP, amounting to an additional $15.7 trillion in production.
That kind of productivity comes from a wealth of new technologies that automate highly manual processes, streamline otherwise complex logistical problems, and provide the data insights and actionable analysis leaders need to guide their businesses into the future. AI offers both stability and the knowledge needed to be bolder. But it also presents a wealth of security vulnerabilities that many companies risk ignoring as they rush to complete their digital transformations as quickly as possible. Let's take a closer look at some of the vulnerabilities and risks already confronting businesses across a range of industries, and at what those businesses are doing to prevent major incidents.
Defining the Security Vulnerabilities in AI and Machine Learning
While AI systems generally remove a number of human touchpoints, they are still designed by humans and offer the same entry points and potential security vulnerabilities as a traditional system. The biggest issue many organizations face is the speed with which they are adopting the technology. The potential benefits are well documented, so leaders are eager to leverage the suite of new technologies AI represents to increase revenue. But good security can't be bolted onto a new system overnight; it requires a clear understanding of how that system will be used over the long term.
Because AI and ML thrive on large volumes of data, their workloads most often run on cloud platforms that can scale to match. This alone adds a new layer of vulnerability. A recent Deloitte survey reported that 62% of AI adopters were aware of and concerned about these risks, but only 39% said they were actually doing what was needed to address them.
For AI to work effectively, it needs three sets of data: training data to build a model, testing data to evaluate how well that model works, and operational data to feed into the model once it is built. The last of these tends to be carefully monitored and protected, but what about the data used to actually build and test the model? It is frequently overlooked as a vector of attack. Things to keep in mind with an AI project include:
- Data Necessity – It’s tempting to pour as much data into an AI system as possible, but do you actually need all of it? Every piece of data in that system needs to be protected just as carefully as the operational data you manage every day. The more sensitive the data, the harder you should look at whether it’s truly necessary; personally identifiable information (PII), for example, should be kept out of a model whenever possible.
- Identifying and Removing Unnecessary Data – PII may enter your data sets without you realizing it, so it’s important to have tools in place that identify it, purge it, and let customers know it was collected and removed (a minimal sketch of this kind of check follows this list).
- Context-Related Data – Data may be purchased or collected to add context to existing data sets, such as spending-pattern data matched to existing financial records. The result is a richer data set that may yield better insights, but it is also a more attractive target for attackers.
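As a minimal sketch of the kind of PII tooling mentioned above (not any particular vendor's product), the example below scans free-text records for two common patterns, email addresses and US-style Social Security numbers, and redacts them before the data reaches a training set. The patterns, sample text, and placeholder format are illustrative assumptions; production pipelines typically lean on dedicated data-loss-prevention or PII-scanning services that cover far more cases.

```python
import re

# Illustrative patterns only; real PII scanners cover many more categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace recognized PII with a placeholder and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, found

# Example: scrub a free-text field before it enters a training set.
clean, hits = redact_pii("Contact jane.doe@example.com, SSN 123-45-6789")
print(clean)  # Contact [REDACTED-EMAIL], SSN [REDACTED-SSN]
print(hits)   # ['email', 'ssn']
```

A report of what was removed, like the `hits` list here, is also what lets you notify customers that their information was collected and purged.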
These issues explain why the data you collect makes your databases an increasingly attractive target for bad actors, but what about attacks on the AI systems themselves?
Avenues for Attack in AI Systems
In a recent paper produced by the National Academies Press, Google Brain research scientist Nicolas Papernot discussed some of the ways adversaries can exploit AI systems, and how companies should design and implement systems that detect and respond to these attacks before they can affect the organization.
The accuracy and effectiveness of a machine learning algorithm are determined by the data it consumes to build its model. If that data is corrupted or manipulated (an attack commonly called data poisoning), the errors in the eventual output can range from financially disruptive to physically dangerous, depending on the nature of the AI system.
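A basic, if partial, line of defense is sanity-checking training data before it ever reaches the model. The sketch below drops numeric records that violate known business bounds or sit far outside the rest of the distribution; the bounds, cutoff, and example values are illustrative assumptions and would need tuning against real data.

```python
import numpy as np

def filter_training_rows(values, lower, upper, z_cutoff=4.0):
    """Drop rows that violate known business bounds or are extreme outliers.

    This will not stop a careful adversary, but it catches crude
    poisoning attempts and ordinary data-quality errors.
    """
    values = np.asarray(values, dtype=float)
    in_bounds = (values >= lower) & (values <= upper)
    mu, sigma = values[in_bounds].mean(), values[in_bounds].std()
    not_outlier = np.abs(values - mu) <= z_cutoff * max(sigma, 1e-9)
    return values[in_bounds & not_outlier]

# Example: transaction amounts where anything outside $0-$50,000 is suspect.
raw = [25.0, 99.0, 1_200.0, -5.0, 9_999_999.0, 43.0]
print(filter_training_rows(raw, lower=0, upper=50_000))
```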
Systems also need to be designed so that humans can easily understand and use them, and one of the harder concepts to pin down is “privacy,” because it is inherently subjective. This is where differential privacy comes in: by adding carefully calibrated statistical noise, it makes it effectively impossible to tell whether any individual record was included in a training set. ML training data and pipelines can be made differentially private to blunt attacks of this kind, such as membership inference.
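As a rough illustration of the core idea (not the specific tooling Papernot describes), the sketch below applies the classic Laplace mechanism to a simple counting query; the epsilon value, records, and predicate are illustrative assumptions.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative use: count training records flagged as high-risk without
# revealing whether any single individual is in the data set.
records = [{"risk": "high"}, {"risk": "low"}, {"risk": "high"}]
print(dp_count(records, lambda r: r["risk"] == "high", epsilon=0.5))
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of less accurate results.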
How Companies Can Better Protect Against These AI Threats
AI and ML algorithms are inherently complex, but the controls that protect them are largely the same ones you already use. The key is to think carefully about how data is used, evaluated, and ultimately protected in these systems. Do you have a way to identify PII, reduce the scope of the testing data fed into an ML model, and monitor for potential attacks in a highly automated system? Building intentionally, rather than rushing to complete a full digital transformation, is an important step in this process.
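As one hedged example of what “monitoring for potential attacks” might look like in practice, the sketch below flags batches of incoming model inputs whose average drifts sharply from the training baseline; the feature values, batch sizes, and threshold are illustrative assumptions, not a prescribed standard.

```python
import numpy as np

def drift_alert(train_feature, live_batch, z_threshold=3.0):
    """Flag a live batch whose mean shifts far from the training baseline.

    A crude guardrail: large, sudden shifts in input statistics can signal
    data-quality problems or attempts to manipulate the model.
    """
    mu, sigma = np.mean(train_feature), np.std(train_feature)
    if sigma == 0:
        return False
    z = abs(np.mean(live_batch) - mu) / (sigma / np.sqrt(len(live_batch)))
    return z > z_threshold

# Illustrative use with synthetic values for a single numeric feature.
baseline = np.random.normal(100, 10, size=10_000)  # training distribution
suspicious = np.random.normal(140, 10, size=200)   # shifted live traffic
print(drift_alert(baseline, suspicious))           # True -> investigate
```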
Learn how Bedroc can help with an in-depth cybersecurity assessment of your existing AI systems, a cloud security audit of where those systems are hosted, and a strategic evaluation of both the business impact of your digital transformation efforts and the risk introduced by the new systems being implemented. Contact us today to speak with a member of our team.