
Why “Personhood” is the AI Cybersecurity Issue Businesses Need to Address Now

Across the many artificial intelligence (AI) developments confronting businesses today, one of the most pressing issues borrows its vocabulary from a philosophical debate: establishing personhood. In this cybersecurity-specific context, “establishing personhood” means being able to accurately differentiate between legitimate, human users and AI-generated bots – and the urgency around getting it right cannot be overstated. Bad actors, armed with AI tools that create hyper-realistic fake identities, are already exploiting companies’ weakness in this area and turning authentication into the ultimate high-stakes battleground. 

For C-suite leaders, understanding and addressing the personhood challenge is key to protecting not just systems and data, but also brand reputation and operational integrity.

AI-Powered Impersonation Attacks – They’re Everywhere

Current estimates suggest that 30% of all internet traffic comes from bots. This is a sobering figure, given how quickly bots are developing the ability to imitate human behavior with uncanny accuracy. The emergence of AI-driven deepfakes and chatbots that can hold realistic conversations, bypass CAPTCHA tests – and even carry out phishing and other social engineering attacks – is only compounding the problem. A recent Coretelligent guide to AI-driven cyber threats explains how new threats like these are outpacing traditional defenses, leaving organizations scrambling to shore up their cybersecurity protections.

Speaking with CSO Online, Steve Grobman, Chief Technology Officer at McAfee, shares: “As AI capabilities advance, we’re seeing a shift from automated cyberattacks to AI-augmented threats that are more adaptable, resilient, and difficult to detect. This is the new frontier in cybersecurity.” 

By defeating traditionally effective methods of identity verification – such as multi-factor authentication (MFA) or CAPTCHAs – AI-generated bots have tossed the “prove your humanity” ball back into our court.

Personhood Murkiness Makes It Easier to Threaten and Disrupt  

For CISOs and other executives, the challenge now is to determine who is on the other side of every interaction. Traditional systems rely on “what you know” (passwords) and “what you have” (devices). However, AI-powered bots can mimic both, making these methods unreliable. 

One critical area where this challenge is evident is in Distributed Denial-of-Service (DDoS) attacks on call centers. Imagine a customer support line flooded with thousands of AI-generated voices, each capable of holding a convincing conversation. The goal isn’t just to overwhelm but to confuse and disrupt, potentially damaging brand reputation. Jay Meier, SVP of North American Operations at FaceTec, tells CSO Online: “This is the new DDoS attack, and it will be able to easily shut down a call center.”

Promising New Authentication Solutions Are Starting to Emerge

Standard security measures like CAPTCHAs, once the go-to for separating bots from humans, have become almost obsolete. CAPTCHAs were designed to be solved by humans and not machines, but today’s AI can outperform humans at these tests. Similarly, MFA, while adding an extra layer of security, cannot stop a determined AI bot equipped with stolen credentials.

A more promising solution is behavioral biometrics. Unlike traditional biometrics that focus on static features (like fingerprints), behavioral biometrics analyze how a user interacts with a system—such as typing speed, mouse movements, and even scrolling behavior. This data is much harder for AI to replicate, making it a stronger form of verification. For organizations looking to stay ahead of AI-driven threats, this shift is crucial.
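
To make that concrete, here is a minimal sketch of how a behavioral check might score a live session against a user’s historical baseline. The feature names, baseline values, and threshold below are illustrative assumptions rather than a production model; real behavioral-biometrics products use far richer signals and trained models, but the principle is the same.

```python
# Illustrative sketch: compare a session's behavioral features to a user's
# baseline profile. All feature names, values, and thresholds are hypothetical.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class BehaviorSample:
    keystroke_interval_ms: float   # average time between keystrokes
    mouse_speed_px_s: float        # average pointer speed
    scroll_pause_ms: float         # average pause between scroll events

FEATURES = ("keystroke_interval_ms", "mouse_speed_px_s", "scroll_pause_ms")

def anomaly_score(sample: BehaviorSample, baseline: list) -> float:
    """Average z-score distance from the user's baseline; higher means the
    session behaves less like this user's recorded history."""
    total = 0.0
    for field in FEATURES:
        history = [getattr(b, field) for b in baseline]
        mu, sigma = mean(history), stdev(history) or 1.0
        total += abs(getattr(sample, field) - mu) / sigma
    return total / len(FEATURES)

baseline = [BehaviorSample(180, 420, 900),
            BehaviorSample(200, 450, 850),
            BehaviorSample(190, 430, 880)]
current = BehaviorSample(35, 2400, 5)       # unnaturally fast and uniform
if anomaly_score(current, baseline) > 3.0:  # threshold chosen for illustration
    print("Behavior does not match this user - step up verification")
```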

Continuous authentication is another emerging strategy. Instead of verifying identity only at the beginning of a session, continuous authentication monitors user behavior in real time. By constantly analyzing activity patterns, this method ensures that any unusual behavior triggers a security response before damage can be done.
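
The sketch below illustrates that pattern under the same caveat: the event names, risk scores, and thresholds are assumptions, and a real deployment would source the risk signal from a trained behavioral model rather than hard-coded values.

```python
# Illustrative sketch of continuous authentication: the session holds a trust
# score that decays as a behavioral model flags unusual actions, prompting
# re-verification or termination instead of trusting the login forever.
class ContinuousAuthSession:
    def __init__(self, user_id: str, trust: float = 1.0):
        self.user_id = user_id
        self.trust = trust              # 1.0 = fully trusted, 0.0 = blocked

    def observe(self, event: str, risk: float) -> None:
        """Update trust after each action; 'risk' (0..1) would come from a
        behavioral scoring model in a real system."""
        self.trust = max(0.0, self.trust - risk)
        if self.trust == 0.0:
            print(f"[{self.user_id}] {event}: session terminated")
        elif self.trust < 0.3:
            print(f"[{self.user_id}] {event}: requesting re-verification")

session = ContinuousAuthSession("user-42")
session.observe("routine_file_access", risk=0.2)  # mildly unusual, no action
session.observe("bulk_data_export", risk=0.6)     # trust drops below 0.3 -> step up
```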

The Human Element Still Matters – a Lot

While technology is a critical component of any defense strategy, it’s not enough on its own. Businesses also need to focus on building a culture of security awareness to counteract these threats. A Coretelligent article on the human element in cybersecurity emphasizes that humans are often the weakest link in any security system. Cybercriminals frequently bypass technical defenses by exploiting human vulnerabilities like poor password practices and susceptibility to phishing attacks. To reduce this risk and align security policies with everyday employee behavior, companies need to invest in regular training and foster a culture of vigilance and awareness that encompasses all users – including their customers.

As Sandy Carielli, a principal analyst at Forrester, points out in CSO Online: “The crux of any bot management system has to be that it never introduces friction for good bots and certainly not for legitimate customers. You need to pay very close attention to customer friction. If you alienate your human customers, you will not last.”

Preparing for the Future of AI-Driven Cyber Threats

In the race against malicious AI-driven bots, businesses can’t afford to fall behind. To stay ahead, companies need to employ a multi-layered approach that combines new technologies like personhood credentials with existing measures. The NIST Cybersecurity Framework (CSF), recently updated to include more comprehensive guidelines, offers a flexible structure that aligns various standards and best practices to address the emerging threat landscape (Coretelligent’s NIST guide provides an overview).

Dan Boneh, Professor of Computer Science at Stanford University, emphasizes the role of emerging technologies like zero-knowledge proofs, stating: “Zero-knowledge proofs offer a way to verify identity without compromising user privacy. In the context of AI-driven threats, these solutions are critical to establishing trust in digital interactions, where proving ‘who you are’ is as important as ‘what you know.’” (MIT Technology Review)
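
As a rough illustration of the kind of primitive Boneh describes, the toy Schnorr-style exchange below lets a prover demonstrate knowledge of a secret key without ever transmitting it. The tiny group parameters are for illustration only; real systems use standardized parameters and vetted cryptographic libraries.

```python
# Toy Schnorr-style zero-knowledge identification. Parameters are deliberately
# tiny for readability; do not use anything like this in production.
import secrets

p, q, g = 23, 11, 2                 # g has prime order q in the group mod p

secret_x = 7                        # prover's long-term secret
public_y = pow(g, secret_x, p)      # published identity key

# Prover commits to a random nonce
r = secrets.randbelow(q)
t = pow(g, r, p)

# Verifier issues a random challenge
c = secrets.randbelow(q)

# Prover responds using the secret, without ever revealing it
s = (r + c * secret_x) % q

# Verifier accepts iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(public_y, c, p)) % p
print("Identity proven; the secret never left the prover")
```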

Why Personhood Matters for C-Suite Leaders

As AI becomes a cornerstone of both business innovation and cybercrime, leaders must view personhood as a critical part of their cybersecurity strategy. Implementing advanced solutions like behavioral biometrics and continuous authentication, while building a culture of cybersecurity awareness, is the best defense against this new wave of AI-driven threats.

For more insights on staying ahead of emerging cyber threats, read Coretelligent’s Top 10 Cybersecurity Recommendations and explore how to develop a holistic security strategy that aligns with your business goals.

This multi-layered strategy will help ensure that in the battle between humans and AI, your organization can not only detect threats but also respond to them swiftly and effectively.

Want to learn more? Join us on Thursday, October 31, 2024, at 1:00 pm ET for “Building Cyber Resilience in the Age of AI-Driven Threats.” This webinar will feature a fireside chat between Michael Messinger, Shermco CIO; Alex Rose, Secureworks Director of Government Partnerships & CTU Threat Research; and Jason Baron, Coretelligent CIO. Reserve your seat today! 
