However, in a small number of cases, the employee could be attempting to take confidential product designs, sensitive legal information, private employee data or trade secrets with them to a rival company.
It can be difficult for a company to even spot these “insider risks,” much less distinguish between routine behavior and the outlier that could destroy a company’s competitive advantage or reputation.
That’s why Microsoft is offering a new Insider Risk Management solution within Microsoft 365 that uses machine learning to intelligently detect potentially risky behavior within a company. It also quickly identifies which activities are most likely to pose real security threats, even unintentional ones.
Because mistakes are a larger source of actual risk than insider attacks, the solution was designed to help employees make the right choices and avoid common security lapses. To be effective, engineers knew, the solution also had to help people do their jobs rather than slow them down.
“Fundamentally, a company’s employees are usually trying to do the right thing,” said Bret Arsenault, Microsoft’s chief information security officer and corporate vice president. “But sometimes intention is different than outcome.”
A couple of years ago, the security threats keeping Arsenault awake at night weren’t limited to hackers, cybercriminals or nation-state attacks that Microsoft employs a small army of experts and leading-edge technologies to thwart. He increasingly worried about the potential risks, largely unintentional but occasionally malicious, from employees who already have easy access to a company’s most sensitive information.
For instance, that could include someone who inadvertently keeps sensitive information in a folder that’s searchable to anyone in the company, making it vulnerable to theft. Or the person who just hits the wrong button and mistakenly emails a highly confidential document outside the company.
In a recent survey of cybersecurity professionals, 90 percent of organizations indicated that they felt vulnerable to insider risk, and two-thirds considered malicious insider attacks or accidental breaches more likely than external attacks. More than half of organizations reported that they had experienced an insider attack in the past year, according to an insider threat report from Crowd Research Partners.
“In the security industry there has been a disproportionate amount of focus on external adversaries,” Arsenault said. “But with thousands of employees logging into a company’s systems every day, the threat of users — whether with inadvertent or malicious intent — may be a higher risk scenario. And that’s when we realized we needed to expand our focus.”
Arsenault tasked engineers from his security team and Microsoft 365 with creating a solution that leverages machine learning to intelligently detect and prevent internal security breaches, and to eventually turn that into a solution for customers. But it had to be designed with Microsoft’s core principles in mind: respecting employee privacy, assuming positive intent at the outset and encouraging the free flow of information and collaboration within a company.
The Insider Risk Management solution combines the massive array of signals from Microsoft 365 productivity tools, Windows operating systems and Azure cloud services with machine learning algorithms that can identify anomalous and potentially risky behavior from people using those products.
Product engineers worked closely with internal security analysts, human resources and other experts within Microsoft — and consulted with workers’ advocates in countries that share Microsoft’s strong commitment to privacy — to ensure the solution struck the right balance in respecting employees’ privacy and workflows.
“We knew that insider risk was becoming a more pervasive and expensive challenge, but also that we had to have an entirely different lens for addressing it,” said Erin Miyake, Microsoft’s senior program manager for insider threats, who worked with human resources, compliance and product experts to develop the new solution.
To start, you’re looking at people who already have access to company assets as part of their jobs, so risky activity is much harder to spot than an outside intrusion, she said.
Then, because you’re analyzing activity from people who are already in your workforce, it’s essential to balance risk management with company culture, privacy, fairness and compliance needs. Those considerations simply don’t come up when you’re protecting a company from faceless cybercriminals in distant countries, said Talhah Mir, principal program manager in the Microsoft 365 security and compliance team.
“Employees absolutely should have access to the things they need for their jobs and shouldn’t feel unnecessary friction,” Mir said. “This is really about taking all these signals that already exist in the background and reasoning over it at scale with machine learning to find that thread in that sea of information that identifies possibly suspicious activities.”
All initial reports of unusual behavior in the Insider Risk Management system can be anonymized at the outset — to protect reputations and prevent any bias from creeping into the process. But because data signals only get you so far, the tool also offers a collaboration platform for investigators, human resource experts or business managers to determine whether the unusual behavior might be malicious or just something outside a person’s normal workflow.
Microsoft engineers working on the Insider Risk Management solution consulted with internal legal and human resources departments to delineate what thresholds would need to be met within Microsoft for anyone involved in an investigation to take necessary next steps.
“The system doesn’t pass any judgment or assume ill intent,” Mir said. “If there is an anomaly, you start from the place that the end user is probably just trying to get their job done, but we’re still going to trust and verify.”
The new solution uses machine learning algorithms to look for patterns of unusual and potentially risky behavior, which might be downloading hundreds of sensitive files from a SharePoint site, copying files to a USB device, disabling security software or emailing sensitive files outside of the company. It leverages Microsoft Graph and other services to look for anomalous signals across Windows, Azure and Office products such as SharePoint, OneDrive, Teams and Outlook.
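The core idea of flagging unusual patterns can be illustrated with a toy baseline comparison. This sketch assumes a simple per-user z-score check; the event fields and baseline table are hypothetical stand-ins for the real signals and models described above:

```python
from dataclasses import dataclass

# Hypothetical event record; field names are illustrative, not the
# actual Insider Risk Management schema.
@dataclass
class ActivityEvent:
    user: str
    action: str   # e.g. "file_download", "usb_copy", "external_email"
    count: int    # events observed in the current window

# Illustrative per-user daily baselines (mean, standard deviation)
# learned from historical activity.
BASELINES = {
    ("alice", "file_download"): (20.0, 5.0),
}

def is_anomalous(event: ActivityEvent, z_threshold: float = 3.0) -> bool:
    """Flag activity that deviates strongly from the user's own baseline."""
    mean, std = BASELINES.get((event.user, event.action), (0.0, 1.0))
    z = (event.count - mean) / std
    return z > z_threshold

# 300 downloads in a day against a baseline of ~20 stands out; 22 does not.
print(is_anomalous(ActivityEvent("alice", "file_download", 300)))  # True
print(is_anomalous(ActivityEvent("alice", "file_download", 22)))   # False
```

Measuring each user against their own history, rather than a single company-wide threshold, is what lets routine bulk downloads by one role pass quietly while the same volume from another user raises a flag.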
None of those activities are inherently threatening, as employees do these things each day as part of their jobs. But the patterns become more meaningful as the system draws information from other sources, such as classification and labeling tools offered in Office 365 that can be used to flag sensitive documents and datasets.
That allows the algorithms to begin to distinguish between the risks posed by the employee who might be downloading uncontroversial presentations or documents — perhaps because they’re about to embark on a sales trip — and the employee who’s downloading highly confidential designs for a product under development.
The system can also indicate if downloaded files contain customer banking or credit card information, which would be a red flag for potential identity theft. And, with the proper permissions, an analyst can see the content of downloaded files to further assess how harmful an outside leak of that information might be.
The Insider Risk Management solution can also plug into third-party human resources software, for instance, to bring in other pertinent data, such as whether an employee has recently resigned.
The algorithms factor in all of that information and assign each unusual activity a numerical “risk score,” which helps people tasked with managing insider risk to easily see where they need to focus additional attention.
That mirrors solutions such as the Azure Secure Score and Azure Security Center, which help Microsoft customers protect their data stored in the cloud by monitoring for, identifying and prioritizing the most serious security vulnerabilities, such as a misconfigured firewall that could allow a hacker to gain access. It also reflects the shared responsibility that both enterprises and cloud providers have to protect data in the cloud from all threats.
Microsoft’s own digital risk security team initially developed the insider risk machine learning algorithms as an in-house solution to better detect potential insider risks from the data already generated by its 150,000 employees around the world. The anomaly detection — which uses audit logs from existing tools — is part of a long line of technologies that have enabled the company to provide better security in ways that are relatively frictionless for employees, Arsenault said.