
Insider Threat 101: Do You Know Who's Sus?

Above Security

During the pandemic, the game “Among Us” offered people an opportunity to connect virtually. The premise of the game is simple: a group of people meet in a virtual “lobby,” then run around an environment in outer space completing tasks. One person in the group is the “Imposter,” and everyone else has to guess who it is, voting for whoever they believe is “sus.”

In essence, “Among Us” is a microcosm of insider threat. The Imposter tries to complete harmful objectives while everyone else tries to determine who is out to hurt the group. Finding the Imposter is always difficult because all the characters look basically the same and are running around the environment completing different tasks. In a corporate IT infrastructure, finding an insider threat is similar. Insiders all have authenticated, authorized access to the environment, with varying levels of access to confidential data based on their job functions.

While corporate insider threat “imposters” are difficult to identify, organizations need to know how to detect and mitigate insider risk to improve security and compliance.

What is an insider threat?

Insider threats are security violations or risks that come from within the organization, encompassing anyone who has or has had authorized access to systems, data, and facilities. While malicious insider threats exist, most arise from user negligence or human error. The definition of insider incorporates anyone with knowledge about internal systems and security practices, including:

  • Current employees
  • Former employees
  • Contractors
  • Third-party vendors
  • Business partners

What are the types of insider threats?

People typically categorize insider threats based on a person’s intent and the sensitive data they can access. 

Malicious insiders

A malicious insider intentionally acts to harm the organization, stealing sensitive data like trade secrets, intellectual property, or customer information. Many malicious insiders are motivated by money, revenge against the employer, or ideological beliefs that may lead to espionage.

In some cases, these insiders want more than data theft; they want to sabotage critical systems, disrupt operations, or damage the organization’s reputation.

Negligent insiders

Negligent insiders have no harmful intentions, even though their actions or inactions create security risks. Generally, these insider risks arise from people who are familiar with security policies but choose to ignore them, knowingly engaging in risky behavior for speed or convenience. Some examples of negligent behavior that can lead to security incidents include:

  • Sharing passwords for departmental resources that may contain customer personally identifiable information (PII)
  • Failing to follow security protocols by not installing security patches or software updates in a timely manner
  • Inputting confidential data into a generative AI prompt

Unintentional insider threats

While often used interchangeably with negligent insiders, unintentional insiders lack malicious intent and may not even realize that they acted outside a security policy. Some examples of unintentional threats that are not negligence include:

  • Falling victim to a phishing attack that cybercriminals use for credential theft
  • Unauthorized disclosure by accidentally emailing information to the wrong recipient
  • Misconfiguring an app without realizing the risk

Third-party threats

This category includes third-party vendors, contractors, and other people from external organizations with authorized or privileged access to systems or data. While these people are not direct employees, they can be malicious or accidental insider threats if an unauthorized disclosure originates from them. 

Collusive threats

Collusive threats involve two or more people who plan and work together to complete a malicious act. Because at least one insider is actively compromised, this collaboration can increase both the data breach’s potential impact and the attack’s sophistication. Often, cybercriminals or nation-state actors offer an insider money in exchange for access credentials or sensitive information. In some cases, insiders at different levels of an organization might conspire to exploit vulnerabilities for personal gain or commit fraud.

Why are insider threats dangerous?

Insider threats are dangerous to organizations for several reasons:

  • Authorization and authentication: Organizations authenticate and authorize these users to systems while external threat actors must gain unauthorized access without triggering security alerts. 
  • Legitimate access: Insider threats have legitimate access to sensitive information so they can complete job functions while external threat actors must escalate privileges after gaining unauthorized access. 
  • System and resource knowledge: Insider threats know where confidential information resides while external threat actors must investigate systems to identify these locations while evading security controls. 

How does an insider threat occur?

Insider threats often arise through a combination of human factors and environmental conditions. 

Ineffective Cyberawareness Training

Organizations provide cybersecurity awareness training that seeks to teach people about security policies, best practices, and common threats, like social engineering attacks. However, many training modules are compliance activities where people answer generic multiple-choice questions rather than learning how to build security hygiene.

Complex identity and access management (IAM) permissions

Limiting user access according to the principle of least privilege means that users should only have the access they need to complete their job functions. Complex, interconnected IT ecosystems often create challenges as applications define permissions differently, which can lead to overly broad access. Additionally, IAM only controls who can access which resources and why; it provides no insight into user intent.

Inadequate user and entity behavior analytics (UEBA)

Sometimes called user behavior analytics (UBA), these tools establish baselines around how users typically interact with resources to identify abnormal activity. UEBA tools may flag activities like:

  • Unusual login times and locations
  • Accessing sensitive data outside normal job functions
  • Excessive data downloads

While these tools can be a powerful technology layer for insider threat detection, they often generate too many false positives and negatives, which increases security analysts’ alert fatigue. Additionally, these tools fail to provide insight into user intent, meaning they lump malicious, negligent, and unintentional insider threats together, which only adds to the noise they generate.

External-threat focused offensive security

Penetration tests and red teaming simulate cyberattacks to identify vulnerabilities. By focusing primarily on external threats, offensive security only identifies some places where insider threats can compromise sensitive information, like weak access controls or unpatched systems. These activities focus on system weaknesses, not human intent, so they may not capture insider threat risks. Insider threats exploit the inherent trust that a system places in authenticated and authorized users, risks typically outside offensive security testing.

Best Practices for Understanding User Intent to Mitigate Insider Threat Risk

Trying to find your corporate “Imposter” is extremely difficult when you have no way to identify “sus” intent. While you can track what people do and how they interact with sensitive information, you may not always have enough data points to identify why people take actions and how that can predict potential unauthorized disclosures. However, if you want to improve your insider threat detection capabilities, you can implement procedures and solutions to mature your risk management.

Establish behavioral baselines

Most insider threat risk management tools look at what people do, giving you a sense of:

  • Data they interact with
  • Hours of the day they typically work
  • Devices they use
  • Daily workflows that show how data flows and is used

When you understand these baselines, you can start to look for abnormal activity that can identify unsanctioned data use. 

As you build out your insider risk management processes, UEBA offers the ability to:

  • Establish baseline patterns: Typical user and role data access, volume, and movement
  • Ongoing analysis: Identification of changes and trends over time
  • Detection: Deviations from normal data interaction patterns
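As a rough illustration of how this kind of baselining works, the sketch below flags daily download volumes that deviate sharply from a user’s history. The data, thresholds, and function names are hypothetical; production UEBA tools use far richer behavioral models.

```python
from statistics import mean, stdev

def build_baseline(daily_volumes):
    """Baseline a user's typical daily download volume (in MB)."""
    return {"mean": mean(daily_volumes), "stdev": stdev(daily_volumes)}

def is_anomalous(volume_mb, baseline, threshold=3.0):
    """Flag a day whose volume deviates more than `threshold` standard
    deviations from the user's historical mean."""
    if baseline["stdev"] == 0:
        return volume_mb != baseline["mean"]
    z = abs(volume_mb - baseline["mean"]) / baseline["stdev"]
    return z > threshold

# Hypothetical 30-day history for one user
history = [120, 110, 130, 125, 115, 118, 122] * 4 + [119, 121]
baseline = build_baseline(history)
print(is_anomalous(124, baseline))   # ordinary day -> False
print(is_anomalous(5000, baseline))  # bulk download spike -> True
```

The same idea extends to login times, device fingerprints, and access locations: establish a distribution, then alert on statistically unusual deviations.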

Implement intent-focused analysis

Security teams create alerts based on logs and activity that identify what happened but give little insight into why an event occurred. To derive intent, you need to analyze real time data interactions and understand activity timelines. 

For a proactive insider risk management program, you need to understand the story underneath people’s actions, which means identifying behavioral patterns tied to data movement and usage like:

  • Cross-environment access: Endpoints, cloud storage, and Software-as-a-Service (SaaS) applications
  • User behavior: How people access, modify, and interact with data
  • Event timelines: Sequence of actions like access, downloads, transfers

Correlate user actions across systems

For many people, data protection lives in the same brain space as the half-remembered lyrics of a 1980s one-hit wonder. They typically focus on ease and productivity before considering unauthorized disclosure. Individual data interactions may appear innocent or accidental unless you make connections across systems and applications.

To take a proactive approach, you want to track every user’s data access and use by:

  • Aggregating telemetry: Data from endpoints, identity systems, and cloud applications 
  • Correlating data: Patterns related to data access, modification, and transfer
  • Building workflows: End-to-end data movement path reconstruction
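A minimal sketch of this correlation step, assuming hypothetical telemetry records (the field names and sources are illustrative, not from any specific product): merge per-user events from several sources and sort them chronologically to reconstruct the data-movement path.

```python
from datetime import datetime

# Hypothetical events from three telemetry sources
endpoint_events = [{"user": "jdoe", "ts": "2024-05-01T09:02:00", "action": "usb_copy", "file": "roadmap.xlsx"}]
identity_events = [{"user": "jdoe", "ts": "2024-05-01T08:55:00", "action": "login", "location": "office"}]
cloud_events    = [{"user": "jdoe", "ts": "2024-05-01T09:00:00", "action": "download", "file": "roadmap.xlsx"}]

def correlate(user, *sources):
    """Merge events for one user from all telemetry sources into a single
    chronological timeline, reconstructing the end-to-end movement path."""
    merged = [e for src in sources for e in src if e["user"] == user]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["ts"]))

timeline = correlate("jdoe", endpoint_events, identity_events, cloud_events)
for event in timeline:
    print(event["ts"], event["action"])
# login -> cloud download -> USB copy: a complete data-movement path
```

Each event alone looks routine; only the assembled timeline shows sensitive data moving from a cloud application to removable media.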

Focus on data movement

People interact with data all day long, but real insider risk comes from people moving data. When you focus on transfers and downloads, you get information about intent as people:

  • Aggregate data without having a business purpose for it
  • Move data across boundaries 
  • Transfer data outside approved channels

As you mature your insider threat monitoring, tracking how data flows helps distinguish routine work from potentially risky behaviors. To help mitigate risk, you want to:

  • Track data flows: Identify unauthorized data exfiltration paths
  • Implement monitoring: Track data transfers between systems, devices, and external destinations
  • Improve detections: Trigger alerts for unusual aggregation or bulk access patterns
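One way to trigger alerts on unusual aggregation, sketched under assumed inputs (the window size, file threshold, and event shape are illustrative and would be tuned per environment): count the distinct files a user touches inside a sliding time window.

```python
from collections import deque

def bulk_access_monitor(events, window_minutes=10, max_files=20):
    """Flag users who access more than `max_files` distinct files within a
    sliding time window, a common bulk-aggregation signal.
    Events are (user, minute_offset, filename) tuples."""
    alerts = []
    windows = {}  # user -> deque of (minute, filename)
    for user, minute, filename in sorted(events, key=lambda e: e[1]):
        win = windows.setdefault(user, deque())
        win.append((minute, filename))
        # Drop events that have aged out of the window
        while win and minute - win[0][0] > window_minutes:
            win.popleft()
        if len({f for _, f in win}) > max_files:
            alerts.append((user, minute))
    return alerts

# 25 distinct files pulled in five minutes trips the alert
events = [("jdoe", m // 5, f"doc_{m}.pdf") for m in range(25)]
print(bulk_access_monitor(events))
```

The same pattern applies to transfer volume or destination counts: a threshold over a moving window separates routine access from aggregation.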

Add context to data interactions

No two fingerprints or people are the same. Similarly, no two people interact with data the same way. Adding context to a data interaction can give you insight into a user interaction. At a high level, UEBA can signal the difference between:

  • Someone downloading sensitive information during expected work hours
  • Someone downloading sensitive information at 11 p.m.

However, to understand whether someone is malicious or just working late on a project after going to the gym, you need more context. 

To understand user intent, you need to consider:

  • User information: Identity, role, and life cycle
  • Behavior information: Time, device, and location 
  • Contextual risk: Activities across different resources and applications

Look for patterns

A single data download tells you very little information. Patterns help you identify intent. Consider the following:

  • Single abnormal download might just be unintentional 
  • Multiple events where someone mishandles data might indicate negligence 
  • High volumes of data downloaded multiple times may signal malicious intent
  • Employees traveling for work who might be using shadow IT, like an unsanctioned application, to complete a job function

By analyzing how people interact with data over time, you can look for patterns that help classify intent. To gain this insight, you may want to use:

  • Historical data: Data interactions over time for identifying patterns of data misuse or potential malicious intent
  • Analytics: Correlating telemetry from various identity, application, and data monitoring tools to identify repeated or escalating behaviors
  • Contextual signals: Enriching patterns with user context like role, access level, and recent behavioral changes to filter negligent or malicious behavior from expected activity
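The pattern rules above can be sketched as a simple heuristic classifier. This is a toy illustration with made-up thresholds, not a real detection model: a single anomaly reads as unintentional, repeated mishandling as negligence, and repeated high-volume downloads as possibly malicious.

```python
def classify_intent(events):
    """Heuristic intent classification over a user's recent flagged events.
    Each event is a dict with `anomalous` and `volume_mb`; thresholds
    are illustrative only."""
    anomalies = [e for e in events if e["anomalous"]]
    bulk = [e for e in anomalies if e["volume_mb"] > 500]
    if len(bulk) >= 3:
        return "possible malicious intent"
    if len(anomalies) >= 3:
        return "likely negligence"
    if anomalies:
        return "likely unintentional"
    return "expected activity"

print(classify_intent([{"anomalous": True, "volume_mb": 10}]))
# -> likely unintentional
```

In practice the inputs would come from correlated telemetry and be enriched with role and life-cycle context, but the logic mirrors the escalation ladder described above.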

Tailor response to risk

Just like people and their motivations are different, your responses should relate to the risk that the data interaction poses. When you understand user intent, you can more effectively mitigate risk by:

  • Educating people when they accidentally misuse data 
  • Enforcing controls for people who negligently handle data 
  • Collecting forensic information during the investigation to document potential malicious behavior

To improve your investigation and response capabilities, you want to incorporate:

  • Risk-based scoring models: Connect user behavior and derived intent to data interactions
  • Adaptive responses: Trigger investigations and activities by using severity and context
  • Activity timelines: Support investigations with timelines that correlate various data interactions and user activities
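A risk-based scoring model with adaptive responses might look like the following sketch. The signals, weights, and response tiers are assumptions chosen to illustrate the idea of matching response severity to derived intent.

```python
def risk_score(event):
    """Combine behavioral signals into a coarse 0-100 score.
    Signals and weights are illustrative assumptions."""
    score = 0
    score += 40 if event.get("off_hours") else 0
    score += 30 if event.get("outside_role") else 0
    score += 30 if event.get("bulk_transfer") else 0
    return score

def respond(score):
    """Map score severity to the tiered responses described above."""
    if score >= 70:
        return "open investigation and preserve forensic timeline"
    if score >= 40:
        return "enforce control (block transfer, require approval)"
    if score > 0:
        return "send security-awareness nudge"
    return "no action"

event = {"off_hours": True, "bulk_transfer": True}
print(respond(risk_score(event)))
```

Tying the response to the score keeps education for accidents, controls for negligence, and forensics for likely malice, rather than treating every alert the same way.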

Above Security: Understand the user intent below the actions

Most insider threat tools look at what people are doing; they rarely discern why people interact with data. Above Security connects the dots between behavior, identity, and data movement so that security teams get high-fidelity detections instead of noisy alerts.

When you know how people interact with data over time and across systems, you can correlate this insight with information about people’s roles and behavior patterns. By combining and analyzing these data points, you can start to understand the story about why people interact with sensitive information the way they do. Now, your security team can take the appropriate response action, like educating people about security, enforcing controls, or escalating forensic data to your legal team. 

Once you understand intent, you do more than react to threats. You manage risk with context. 
