Behavioral analytics is one of the best authentication methods around — especially when it’s part of continuous authentication. Authentication as a “one-and-done” is something that simply shouldn’t happen anymore. Then again, I’ve argued the same thing about using unencrypted SMS as a form of multi-factor authentication and I sadly still see that being used by lots of Fortune 1000 firms.
Although most enterprise CISOs are fine with behavioral analytics on paper (on a whiteboard? As a message within Microsoft Teams/Google Meet/Zoom?), they’re resistant to rapid widespread deployment because it requires creating a profile for every user — including partners, distributors, suppliers, large customers and anyone else who needs system access. Each profile can take more than a month to build before it yields an accurate, consistent picture of the person.
I hate to make this even worse, but there are now arguments that security admins need not one profile for every user, but possibly dozens or more.
Why? Let’s say you run a user (transparently to the user, of course) through a variety of tracking sessions and measure everything you can: typing speed, the angle at which the user holds a mobile device, the pressure used to strike keys, typos per 100 words, the number of words typed per minute, etc.
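To make the idea concrete, here is a minimal sketch of what such a profile might look like in code. The feature names, numbers and the simple z-score comparison are all hypothetical illustrations, not a real product's method: a baseline is built from enrollment sessions, and a new session is scored by how far it drifts from that baseline.

```python
from statistics import mean, stdev

# Hypothetical feature names, echoing the article's examples.
FEATURES = ["wpm", "typos_per_100_words", "key_pressure", "device_angle_deg"]

def build_profile(sessions):
    """Baseline per feature: mean and standard deviation across enrollment sessions."""
    return {
        f: (mean(s[f] for s in sessions), stdev(s[f] for s in sessions))
        for f in FEATURES
    }

def anomaly_score(profile, session):
    """Average absolute z-score of a new session against the baseline."""
    total = 0.0
    for f, (mu, sigma) in profile.items():
        total += abs(session[f] - mu) / sigma if sigma else 0.0
    return total / len(profile)

# Enrollment: a few "normal workday" sessions for one user (made-up numbers).
enrollment = [
    {"wpm": 62, "typos_per_100_words": 2.1, "key_pressure": 0.48, "device_angle_deg": 34},
    {"wpm": 58, "typos_per_100_words": 2.4, "key_pressure": 0.51, "device_angle_deg": 36},
    {"wpm": 60, "typos_per_100_words": 1.9, "key_pressure": 0.50, "device_angle_deg": 33},
]
profile = build_profile(enrollment)

typical = {"wpm": 61, "typos_per_100_words": 2.0, "key_pressure": 0.49, "device_angle_deg": 35}
jetlagged = {"wpm": 41, "typos_per_100_words": 6.5, "key_pressure": 0.62, "device_angle_deg": 22}

# The exhausted, off-baseline session scores as far more anomalous.
print(anomaly_score(profile, typical) < anomaly_score(profile, jetlagged))
```

The trouble the article describes falls straight out of this sketch: a baseline built only from normal workdays will flag the legitimate-but-jetlagged user just as loudly as an impostor.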
You now have a behavioral profile of that user. That profile, however, is likely based on the user’s regular behavior during normal workdays. What about when that user is exhausted, say after arriving at the office from a red-eye flight? Or ecstatically happy or horribly depressed? Do they behave differently in an unfamiliar hotel room compared to the comfort of their home office? Do they act differently after their boss has screamed at them for 10 minutes?
For any machine-learning system to truly recognize the user and deliver few false negatives, it needs to accurately recognize the user in a wide range of different circumstances. That means studying the user longer and in as many different environments/situations as practical. For an enterprise with a vast six-figure workforce, that is a daunting task indeed.
Scott Edington, the CEO of Deep Labs (a firm that deals with behavioral analytics), offered an interesting example: “A person visiting NYC from Southern California steps out of a restaurant in the middle of the winter to call a car. She is impacted by the cold weather and suddenly starts typing on her phone in an accelerated and more deliberate manner, because she is cold and her fingers numb. This type of persona being identified may differ from the ‘warm’ version of this same individual. Having personas understood in this manner provides context. It’s not a bad actor or hacker, even though their behavior is different. It’s the same person, but only acting in a different – and reasonable – way.”
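One way to picture the multiple-persona idea is as several baselines per user, with a session accepted if it lands close enough to any of them. The persona names, numbers and threshold below are invented for illustration; Deep Labs has not published this as its actual approach.

```python
# Sketch: one user, two hypothetical context personas, each a baseline
# of (mean, standard deviation) per behavioral feature. Numbers are made up.
personas = {
    "home_office": {"wpm": (60, 3.0), "typos_per_100_words": (2.0, 0.4)},
    "cold_street": {"wpm": (38, 5.0), "typos_per_100_words": (7.0, 1.5)},
}

def z(session, baseline):
    """Average absolute z-score of a session against one persona's baseline."""
    return sum(
        abs(session[f] - mu) / sigma for f, (mu, sigma) in baseline.items()
    ) / len(baseline)

def best_persona(session, personas, threshold=3.0):
    """Score against every persona; accept if the closest one is within threshold."""
    name, score = min(
        ((n, z(session, b)) for n, b in personas.items()), key=lambda t: t[1]
    )
    return (name, score) if score <= threshold else (None, score)

# Numb-fingered typing matches the "cold_street" persona, so no alert fires.
cold_typing = {"wpm": 40, "typos_per_100_words": 6.5}
print(best_persona(cold_typing, personas)[0])
```

With only a single “home office” baseline, the same session would look like an intrusion; with a “cold street” persona on file, it is recognized as the same person behaving reasonably in a different context.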
Edington’s example is interesting, but it’s difficult to see a practical way of replicating that during a normal period of analysis. This testing needs to be done with minimal to no interference — or even interaction — with users to keep the process frictionless. (Of course, it’s unlikely you’d see a user do this kind of cold-weather-outside activity without being prompted — at least not during a routine testing period.)
It’s an interesting conundrum for companies that rely on behavioral analytics to stay secure. It may simply be that CISOs are going to have to accept a higher-than-ideal number of false alerts during an initial testing period. It might mean that profiles seamlessly get more accurate over an extended period (say, a year or two) as the system observes these atypical behaviors.
This gets us into the typical chicken-and-egg problem. The earliest days and weeks of a behavioral analytics rollout will be (a) when the system is at its least accurate, firing off many false alerts, and (b) when users and line-of-business (LOB) chiefs will decide whether to accept this authentication approach or resist it.
No one ever said cybersecurity would be easy.