During the pandemic, the game “Among Us” offered people an opportunity to connect virtually. The premise of the game is simple: a group of people meet in a virtual “lobby,” then run around an environment in outer space. One person within the group is the “Impostor,” and everyone else has to guess who it is, voting for whoever they believe is “sus.”
In essence, “Among Us” was a microcosm of insider threat. The Impostor was trying to complete harmful objectives, while everyone else tried to determine who was out to hurt the group. Finding the Impostor is always difficult because all the characters look basically the same and are running around the environment completing different tasks. In a corporate IT infrastructure, finding the insider threat is similar. Insiders all have authenticated and authorized access to the environment, giving them access to varying levels of confidential data based on their job function.
While corporate insider threat “Impostors” are difficult to identify, organizations need to know how to detect and mitigate insider risk to improve security and compliance.
Insider threats are security violations or risks that come from within the organization, encompassing anyone who has or has had authorized access to systems, data, and facilities. While malicious insider threats exist, most arise from user negligence or human error. The definition of an insider incorporates anyone with knowledge about internal systems and security practices, including:
People typically categorize insider threats based on a person’s intent and the sensitive data they can access.
A malicious insider intentionally acts to harm the organization, stealing sensitive data like trade secrets, intellectual property, or customer information. Many malicious insiders are motivated by money or revenge against the employer; others act on ideological beliefs that may lead to espionage.
In some cases, these insider attackers want more than data theft: they want to sabotage critical systems, disrupt operations, or damage the organization’s reputation.
Negligent insiders have no harmful intentions even if their actions or inactions create security risks. Generally, these insider risks arise from people familiar with security policies who choose to ignore them, knowingly engaging in risky behavior for speed or convenience. Some examples of negligent behavior that can lead to security incidents include:
While often used interchangeably with negligent insiders, unintentional insiders lack malicious intent and may not even realize that they acted outside a security policy. Some examples of unintentional threats that are not negligence include:
This category includes third-party vendors, contractors, and other people from external organizations with authorized or privileged access to systems or data. While these people are not direct employees, they can be malicious or accidental insider threats if an unauthorized disclosure originates from them.
Collusive threats involve two people who plan and work together to complete a malicious act. With at least one compromised insider, this collaboration can increase the data breach’s potential impact and the attack’s sophistication. Often, cybercriminals or nation-state actors offer an insider money for providing access credentials or sensitive information. In some cases, insiders at different levels of an organization might conspire to exploit vulnerabilities for personal gain or commit fraud.
Insider threats are dangerous to organizations for several reasons:
Insider threats often arise through a combination of human factors and environmental conditions.
Organizations provide cybersecurity awareness training that seeks to teach people about security policies, best practices, and common threats, like social engineering attacks. However, many training modules are compliance activities where people answer generic multiple-choice questions rather than learn how to build security hygiene.
Limiting user access according to the principle of least privilege means that users should only have the access they need to complete their job functions. Complex, interconnected IT ecosystems often create challenges as applications define permissions differently, which can lead to overly broad access. Additionally, IAM only helps control who can access which resources and why; it fails to provide insight into user intent.
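The least-privilege idea can be sketched in a few lines. The role map, permission strings, and `is_authorized` helper below are hypothetical, illustrating only the core rule that access is denied unless explicitly granted:

```python
# Illustrative least-privilege check. The roles and permission strings are
# made up for the sketch; they do not reflect any specific IAM product.
ROLE_PERMISSIONS = {
    "engineer": {"source_code:read", "source_code:write"},
    "analyst": {"reports:read", "customer_data:read"},
    "contractor": {"reports:read"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only if the permission is explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A contractor reading reports is fine; writing source code is not.
print(is_authorized("contractor", "reports:read"))       # True
print(is_authorized("contractor", "source_code:write"))  # False
```

Note what the sketch cannot answer: it says whether an access is allowed, but nothing about why the user wanted it, which is exactly the gap described above.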
Sometimes called user behavior analytics (UBA), these tools establish baselines around how users typically interact with resources to identify abnormal activity. UEBA tools may flag activities like:
While these tools can be a powerful technology layer for insider threat detection, they often generate too many false positives and negatives, which increases security analysts’ alert fatigue. Additionally, these tools fail to provide insight into user intent, meaning that they aggregate malicious, negligent, and unintentional insider threats, which only adds to the noise they generate.
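A minimal sketch of the baselining idea behind UEBA, using a simple z-score over hypothetical daily download counts (real products use far richer behavioral models, and the threshold here is an arbitrary assumption):

```python
# Toy UEBA-style baseline: flag days whose download count deviates sharply
# from the user's own history. Data and threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(daily_downloads: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose count exceeds mean + threshold * stdev."""
    mu, sigma = mean(daily_downloads), stdev(daily_downloads)
    return [i for i, count in enumerate(daily_downloads)
            if sigma > 0 and (count - mu) / sigma > threshold]

history = [12, 9, 14, 11, 10, 13, 250]  # one day with a bulk download
print(flag_anomalies(history))  # [6]
```

The sketch also shows the weakness called out above: the flagged day could be exfiltration, a sanctioned migration, or a backup script, and the statistics alone cannot tell you which.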
Penetration tests and red teaming simulate cyberattacks to identify vulnerabilities. By focusing primarily on external threats, offensive security only identifies some places where insider threats can compromise sensitive information, like weak access controls or unpatched systems. These activities focus on system weaknesses, not human intent, so they may not capture insider threat risks. Insider threats exploit the inherent trust that a system places in authenticated and authorized users, risks typically outside offensive security testing.
Trying to find your corporate “Impostor” is extremely difficult when you have no way to identify “sus” intent. While you can track what people do and how they interact with sensitive information, you may not always have enough data points to identify why people take actions and how that can predict potential unauthorized disclosures. However, if you want to improve your insider threat detection capabilities, you can implement some procedures and solutions to mature your risk management.
Most insider threat risk management tools look at what people do, giving you a sense of:
When you understand these baselines, you can start to look for abnormal activity that can identify unsanctioned data use.
As you build out your insider risk management processes, UEBA offers the ability to:
Security teams create alerts based on logs and activity that identify what happened but give little insight into why an event occurred. To derive intent, you need to analyze real-time data interactions and understand activity timelines.
For a proactive insider risk management program, you need to understand the story underneath people’s actions, which means identifying behavioral patterns tied to data movement and usage like:
For many people, data protection lives in the same brain space as those lyrics from that 1980s one-hit wonder. They typically focus on ease and productivity before considering unauthorized disclosure. Individual data interactions may appear innocent or accidental unless you make connections across systems and applications.
To take a proactive approach, you want to track every user’s data access and use by:
People interact with data all day long. Real insider risk comes from people moving data. When you start focusing on transfers and downloads, you get information about intent as they:
As you mature your insider threat monitoring, tracking how data flows helps distinguish routine work from potentially risky behaviors. To help mitigate risk, you want to:
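The focus on data movement can be illustrated with a toy scoring function. The event fields, destination list, and weights below are assumptions made for the sketch, not a real product’s schema:

```python
# Hedged sketch: score data-movement events by destination and volume, so
# routine transfers to corporate systems score low while bulk moves to
# risky destinations stand out. All names and weights are illustrative.
RISKY_DESTINATIONS = {"personal_email", "usb_drive", "unsanctioned_cloud"}

def movement_risk(events: list[dict]) -> int:
    """Sum a simple score: 5 points per risky destination, 1 per 100 MB moved."""
    score = 0
    for event in events:
        if event["destination"] in RISKY_DESTINATIONS:
            score += 5
        score += event["megabytes"] // 100
    return score

day = [
    {"destination": "corporate_share", "megabytes": 40},
    {"destination": "usb_drive", "megabytes": 300},
]
print(movement_risk(day))  # 8
```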
No two fingerprints or people are the same. Similarly, no two people interact with data the same way. Adding context to a data interaction can give you insight into the user’s intent. At a high level, UEBA can signal the difference between:
However, to understand whether someone is malicious or just working late on a project after going to the gym, you need more context.
To understand user intent, you need to consider:
A single data download tells you very little. Patterns help you identify intent. Consider the following:
By analyzing how people interact with data over time, you can look for patterns that help classify intent. To gain this insight, you may want to use:
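The pattern idea can be sketched as an ordered-subsequence check over hypothetical user actions: a lone download is ambiguous, but download, then compress, then external upload in sequence looks exfiltration-shaped. The action names and sequence are illustrative assumptions:

```python
# Toy pattern matcher: does a risky sequence of actions occur in order
# (not necessarily adjacent) within a user's activity timeline?
# Action labels are made up for the sketch.
EXFIL_SEQUENCE = ["download", "compress", "upload_external"]

def matches_pattern(actions: list[str], pattern: list[str] = EXFIL_SEQUENCE) -> bool:
    """True if the pattern's steps appear in order within the action timeline."""
    remaining = iter(actions)
    # 'step in remaining' consumes the iterator up to the match, so the next
    # step must be found *after* the previous one: an ordered-subsequence test.
    return all(step in remaining for step in pattern)

print(matches_pattern(["login", "download", "edit", "compress", "upload_external"]))  # True
print(matches_pattern(["download", "edit", "save"]))  # False
```

Order matters here by design: the same three actions reversed (an external download, then local work) would not match, which is one small way a sequence carries more intent signal than a count.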
Just like people and their motivations are different, your responses should relate to the risk that the data interaction poses. When you understand user intent, you can more effectively mitigate risk by:
To improve your investigation and response capabilities, you want to incorporate:
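Risk-proportional response can be sketched as a simple mapping from a composite risk score to an action. The tiers, cutoffs, and action descriptions are illustrative assumptions, not a product feature:

```python
# Illustrative tiered response: low-risk behavior gets education or
# monitoring, high-risk behavior gets enforcement or escalation.
# Score cutoffs are arbitrary values chosen for the sketch.
def response_for(score: int) -> str:
    if score >= 20:
        return "escalate: preserve forensic data and notify legal"
    if score >= 10:
        return "enforce: block transfer and require manager approval"
    if score >= 5:
        return "educate: send a just-in-time policy reminder"
    return "monitor: log only"

print(response_for(3))   # monitor: log only
print(response_for(12))  # enforce: block transfer and require manager approval
```

The point of the tiers is the one made above: a negligent insider who needs a reminder and a malicious insider who needs a legal hold should not trigger the same response.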
Most insider threat tools look at what people are doing. They are rarely able to discern why people interact with data. Above Security connects the dots between behavior, identity, and data movement so that security teams get high-fidelity detections instead of more noisy alerts.
When you know how people interact with data over time and across systems, you can correlate this insight with information about people’s roles and behavior patterns. By combining and analyzing these data points, you can start to understand the story about why people interact with sensitive information the way they do. Now, your security team can take the appropriate response action, like educating people about security, enforcing controls, or escalating forensic data to your legal team.
Once you understand intent, you do more than react to threats. You manage risk with context.