Insider Threats in the Age of AI
Employers deal with the dual challenge of leveraging AI for operations and defending against AI-powered internal threats
By Christina Catenacci, human writer
Mar 28, 2025

Key Points
Insiders are trusted individuals who have been given access to, or have knowledge of, any company resources, data, or systems that are not generally available to the public
Insider risks are the potential for a person to use authorized access to the organization’s assets—either maliciously or unintentionally—in a way that negatively affects an organization
As AI becomes more common in the workplace and takes on tasks that would otherwise be performed by humans, organizations face a growing security risk from artificial insiders as well as human ones
AI is making waves in the workplace. Employers have been looking for novel ways to implement AI and improve their operations while simultaneously defending against cyberattacks, including new forms of AI-powered attacks launched from inside their organizations. This article explores the nature of internal threats in modern workplaces.
What are insider risks?
According to Microsoft, insider risks (before they become actual threats or attacks) are the potential for a person to use authorized access to the organization’s assets—either maliciously or unintentionally—in a way that negatively affects an organization. “Assets” mean information, processes, systems, and facilities.
In this context, an “insider” is a trusted individual who has been given access to, or has knowledge of, any company resources, data, or system that is not generally available to the public. For example, an insider could be someone with a company computer with network access. It could be a person with a badge or device that allows them to continuously access the company’s physical property. Or it could be someone who has access to the corporate network, cloud storage, or data. It could even be a person who knows the company strategy and financial information.
Some risk indicators include:
Changes in user activity, such as a person behaving in a way that is out of character
Anomalous data exfiltration, such as sharing or downloading unusual amounts of sensitive data (a simple detection sketch follows this list)
A sequence of related risky activities, which could involve renaming confidential files to look less sensitive, downloading the files, saving them to a portable device, and deleting the files from cloud storage
Data exfiltration by a departing employee, such as a resigning employee downloading a copy of a previous project file to keep a record of accomplishments (unintentional) or knowingly downloading sensitive data for personal gain or to assist them in their next position at a new company (intentional)
Abnormal system access, where employees download files that they do not need for their jobs
Intimidation or harassment, which could involve an employee making a threatening, harassing, or discriminatory statement
Privilege escalation, such as employees trying to escalate their privileges without a clear business justification
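Several of these indicators, particularly anomalous data exfiltration, lend themselves to simple automated checks. The Python sketch below flags users whose daily download volume far exceeds their own historical baseline; the log format, field names, and the three-standard-deviation threshold are illustrative assumptions, not a reference to any specific monitoring product.

    from collections import defaultdict
    from statistics import mean, stdev

    # Illustrative daily download log: (user, bytes downloaded that day).
    # Field names and the 3-sigma threshold are assumptions for this sketch.
    history = [
        ("alice", 120_000_000), ("alice", 95_000_000), ("alice", 110_000_000),
        ("bob", 20_000_000), ("bob", 25_000_000), ("bob", 18_000_000),
    ]
    today = {"alice": 105_000_000, "bob": 900_000_000}  # bob spikes far above baseline

    def flag_anomalous_exfiltration(history, today, sigmas=3.0):
        """Flag users whose download volume today far exceeds their own baseline."""
        per_user = defaultdict(list)
        for user, volume in history:
            per_user[user].append(volume)

        flagged = []
        for user, volume in today.items():
            baseline = per_user.get(user, [])
            if len(baseline) < 2:
                continue  # not enough history to establish a baseline
            mu, sd = mean(baseline), stdev(baseline)
            if volume > mu + sigmas * max(sd, 1):
                flagged.append((user, volume, mu))
        return flagged

    for user, volume, baseline in flag_anomalous_exfiltration(history, today):
        print(f"review: {user} downloaded {volume:,} bytes vs. baseline of about {baseline:,.0f}")

In practice, a rule like this would feed a review queue rather than block activity outright, since legitimate spikes (a large project handover, for example) are common.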
What are insider threats and attacks?
Further down the continuum, an insider threat is the potential, whether intentional or unintentional, to damage a system or asset. Even further down, an insider attack is an intentional malicious act that causes damage to a system or asset. Unlike threats, attacks are relatively easy to detect. Not all cyberattacks are data breaches, however: a data breach is any security incident where unauthorized parties access sensitive or confidential information, including personal data like health information and corporate data like customer records, intellectual property, or financial information.
The ultimate goal of these insiders could be to steal sensitive data or intellectual property, to sabotage data or systems, to conduct espionage, or even to intimidate co-workers.
What is the cost of an insider attack?
Data breaches are serious: according to the IBM Cost of a Data Breach Report 2024, the global average cost of a data breach increased by 10 percent from 2023, reaching USD 4.88 million.
But what is striking is that malicious insider attacks averaged about USD 4.99 million in 2024. In this regard, expensive attack vectors included business email compromise, phishing, social engineering, and stolen or compromised credentials. The most common were phishing and stolen or compromised credentials.
What happens when you add AI?
According to IBM, AI and automation are transforming the world of cybersecurity, making it easier for bad actors to create and launch attacks at scale. When crafting phishing attacks, for example, attackers can use AI to produce grammatically correct and plausible phishing messages.
In fact, the ThreatLabz 2025 AI Security Report revealed that threat actors are currently leveraging AI to enhance phishing campaigns, automate attacks, and create realistic deepfake content. ThreatLabz researchers discovered how DeepSeek can be manipulated to quickly generate phishing pages that mimic trusted brands, and how attackers can create a fake AI platform to exploit interest in AI and trick victims into downloading malicious software.
In addition, ThreatLabz suggests that organizations face a number of AI risks:
Shadow AI and data leakage (using AI tools without formal approval or oversight from the IT department, which can cause data leaks)
AI-generated phishing campaigns (a phishing page can be created in about five prompts)
AI-driven social engineering, from deepfake videos to voice impersonation used to defraud businesses
Malware campaigns exploiting interest in AI, where attackers lure victims with a fake AI platform to deliver the Rhadamanthys infostealer
The dangers of open-source AI, which enables accidental data exposure and more serious incidents such as data exfiltration
The rise of agentic AI: autonomous AI systems capable of executing tasks with minimal human oversight
Indeed, Security Intelligence claims that Gen AI is expanding the insider threat surface. We're talking about chatbots, image synthesizers, voice cloning software, and deepfake video technology for creating virtual avatars. Employees are misusing AI at work to the point that some companies are starting to ban the use of Gen AI tools in the workplace. For instance, Samsung apparently imposed such a ban following an incident where employees were suspected of sharing sensitive data in conversations with OpenAI's ChatGPT. This is concerning, especially since OpenAI records and archives all conversations, potentially for use in training future generations of the large language model.
A combination of human and AI internal security threats
Organizations face many internal security threats, both traditional and AI-related. The discussion above shows that as AI becomes more common in the workplace and takes on tasks that would otherwise be performed by humans, the security risk grows to include artificial insiders as well as human ones.
These AI insiders would be even better at avoiding detection, because they can ingest more information and become more adept at spotting patterns within it. In fact, threat actors use AI-generated malware, exploit network traffic analysis to find weak points, manipulate AI models by injecting false data, and craft advanced phishing messages that evade detection. At the same time, AI systems can be used to detect those risks: AI and machine learning can enhance the security of systems and data by analyzing vast amounts of data, recognizing patterns, and adapting to new threats, as the sketch below illustrates.
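As a rough illustration of the defensive side, the following sketch applies an off-the-shelf unsupervised anomaly detector (scikit-learn's IsolationForest) to simple per-user activity features. The feature set, synthetic data, and contamination rate are assumptions made for the example, not a description of any vendor's detection pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Illustrative per-user daily features (assumed for this sketch):
    # [files downloaded, off-hours logins, privilege changes requested]
    rng = np.random.default_rng(42)
    normal_activity = rng.normal(loc=[30, 1, 0], scale=[10, 1, 0.2], size=(500, 3))
    suspicious = np.array([[400.0, 12.0, 3.0], [250.0, 9.0, 2.0]])  # exfiltration-like behavior
    activity = np.vstack([normal_activity, suspicious])

    # Fit an unsupervised anomaly detector on the observed activity.
    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(activity)        # -1 = anomaly, 1 = normal
    scores = model.decision_function(activity)  # lower = more anomalous

    for idx in np.where(labels == -1)[0]:
        print(f"row {idx}: anomaly score {scores[idx]:.3f}, features {activity[idx].round(1)}")

The same pattern generalizes: richer features (file sensitivity labels, badge access, email metadata) and periodic retraining are what allow such systems to adapt to new threats rather than relying on fixed rules.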
Insider risk reframed
The above discussion touched on several risks that insiders, whether human or AI, could create. These risks can be boiled down and examined by looking at people, processes, and technology.
The following could be another way of thinking about internal risk:
People: human insiders make errors, lie about what they are doing, behave in unusual or suspicious ways, steal confidential information or intellectual property, can be manipulated, fail to comply with the company's policies and procedures, have low levels of AI literacy, fall for phishing or give up credentials, operate AI without a human in the loop, or lack AI governance
Processes: the company may not have workplace AI policies and procedures in place, or, if it does, they may not be regularly updated, monitored, or enforced
Technology: there may be biased data, poor data hygiene leading to low-quality data, no change management to support people and systems during the transition, AI-generated malware, AI models manipulated through injected false data, advanced phishing messages that evade detection, agentic AI that goes rogue, or model drift and the resulting inaccurate predictions