
Identity Scams & Impersonation Fraud: Safeguarding Trust in an Era of AI

Leaders at the CES Event Call on the Urgent Need for Trustworthy Communications in all Channels
Written by Mary González, Brand & Content Director
Published on February 6, 2024

Robocalls, robotexts, and spam calls are a universal frustration that has eroded trust in communications: we assume these messages are harmful, and they often are. Identity-based scams and impersonations of both consumers and businesses run rampant on the voice channel, with projections estimating close to $83 billion lost to identity fraud by 2028.

According to the Federal Trade Commission (FTC) and Javelin Strategy & Research, fraud committed through identity impersonation cost Americans $8.8 billion in 2022, with upwards of $20 billion more lost to traditional identity fraud, to say nothing of the daily disruptions to consumers. Clearly, it's lucrative to be a bad actor.

Numeracle's Entity Identity Management solution is built to verify and protect an entity's calling identity, and by extension its digital and mobile identity, closing the gaps in trust created by illegal robocalling and the identity impersonation schemes that the anonymity of the voice channel makes possible. Yet even as a leading company in identity management, we too must stay vigilant against identity scams targeting our own company.

The Impersonation of a CEO

Rebekah Johnson, Numeracle's Founder and CEO, recently spoke at the Consumer Technology Association's CES event about an identity impersonation scam she is personally experiencing. She and her fellow panelists discussed the current state of communications scams and the dangers of failing to collaborate as an industry to put a stop to identity fraud.

"I am the leading subject matter expert on this, and I can't make it stop. Someone keeps pretending to be me, the CEO, and is texting or emailing my employees to buy gift cards. What's sad is that these scams keep getting more sophisticated. They gather more information and make it seem like it's really me." — Rebekah Johnson, Founder & CEO of Numeracle.

As consumers get savvier about the tactics leveraged by fraudulent callers, scammers get savvier with the technologies they use. While the industry works to keep up with ongoing consumer complaints about fraudulent and illegal robocalls and robotexts, bad actors are moving on to the next big scamming tool: artificial intelligence.

AI & Identity: Sophisticated Scamming

The emergence and rapid adoption of AI has driven an evolution of identity-based scams, but it also highlights the need to refocus on trust in identity technologies at large. Scam calls may feel quaint when stacked against the novelty and sophistication of AI impersonations, but they signal how quickly scammers evolve alongside the technology.

Because AI is the latest buzzword, many are still focused on aspects like the functionality and quality of generative AI, or the speed and creativity enabled by visual AI. More focus is needed on the trustworthiness and truthfulness, or lack thereof, of these intelligent technologies. In an ideal communications ecosystem, consumers would have the tools to distinguish what is trustworthy from what is a scam; now it's getting harder to discern what's even real, let alone trustworthy.

Our regulatory bodies are attempting to keep pace with developing technologies: the Federal Communications Commission (FCC) has published a Notice of Inquiry (NOI) soliciting feedback on how AI is affecting robocalls and robotexts. The NOI explores how AI can be leveraged to protect our communications networks and how to inform consumers of its dangers and uses. Many predict that AI will improve the consumer experience as it gets better at filtering out potentially harmful or fraudulent messages.

“We hope that in this specific kind of arms race, we can come up with an effective safety measure to protect consumers.” — Alejandro Roark, Chief of the Consumer and Governmental Affairs Bureau at the Federal Communications Commission.

How we interact with, regulate, and eventually learn to trust technologies such as AI must be rooted from the start in systems of identity and trust. As altered realities become commonplace, it is already harder to trust your own senses to distinguish the real from the altered, and that is a challenge we must confront.

"How do we get ahead of this and establish a trusted identity? We already have artificial intelligence, but the one thing I'm terrified of is verified ignorance. Things will worsen until truthfulness becomes the number one priority." — Rebekah Johnson.

Consumers need tools to know that a call is coming from a real person or entity who authorized the use of their name and image and, in the world of AI robocalls, consented to the use of an auto-generated voice, along with an indicator that the call is AI-generated rather than made by a real person. While anonymity has its place in certain communications and channels, AI communications demand more transparency. Consumers must have a way to identify the technologies being used to contact them and be granted the agency to decide how they want to respond to the information being presented.

The Regulatory Game of Whack-A-Mole

In December 2019, Congress passed the TRACED Act, directing the FCC to evaluate and determine how to achieve trusted caller authentication for the voice and texting channels. Over the past few years, this work has centered on the same need: where to inject identity information. Where does that happen in the standards? In the technologies? How will this information be formatted and transmitted? And how will consumers know to trust the data when we're surrounded by all kinds of identity scams and impersonations?

"If we're going to transmit identity in any communication channel, everyone who participates from the very beginning, like an enterprise that wants to deliver a communication to the consumer and the device through which that communication is delivered, must take action. Let's not wait for the FCC to say we should all be doing this. This is responsibility and accountability in technology." — Rebekah Johnson.

Preventing this type of fraud becomes a game of whack-a-mole when we choose to run before we can walk: new ideas for mitigating consumer harm, and new proprietary technologies for doing so, get launched before the problem is widely understood or any collective consensus is reached on how to tackle it. Regulatory bodies are doing what they can, alongside industry partners and feedback, to develop safeguards and consumer protections for a healthier ecosystem where consumers have some sense of resolution.

“The FTC reported that in 2022, Americans lost $798,000,000 to fraud via phone calls and another $396,000,000 to fraud by text. The commission has been working relentlessly to protect consumers from all types of messages. We're proud of the progress that we've made, but we also know that because of the rapid rate of innovation, there's always more that we can all do together.” — Alejandro Roark.

On Monday, January 29th, Congressman Frank Pallone, Jr. (NJ-06) introduced a new piece of legislation known as the Do Not Disturb Act to protect consumers from illegal robocallers misusing AI, updating and strengthening the Telephone Consumer Protection Act (TCPA). The new bill gives federal agencies more tools to go after companies that misuse artificial intelligence and robocalls to defraud consumers, who are now more vulnerable to increasingly sophisticated AI impersonation scams.

“Today I’m introducing legislation that brings anti-robocall protections into the 21st century and ensures illegal robocallers and scam artists can’t exploit new loopholes even as technology continues to evolve.” — Congressman Frank Pallone.

The bill contains two noteworthy provisions:

  • “This bill would require disclosure of the use of AI to emulate human interaction over text or phone and would double penalties for any robocall violations of the TCPA or Telemarketing Sales Rule (TSR) when using AI to impersonate someone.”  
  • "The bill would require network service providers to offer robocall detection and blocking services at no additional cost to customers instead of charging a premium for the service. Providers would be required to give customers the ability to block robocalls that are highly likely to be illegal.”

While these requirements can help detect and curtail illegal robocallers, the bill once again misses what lies at the crux of the problem: prioritizing identity.

When it comes to generative AI, regulatory and enforcement agencies still need to define the limits of their authority and the extent to which it applies to this new technology. We predict that by the end of 2024, these bodies will have a clearer understanding of what regulations are needed and where their responsibilities lie, both in the US and internationally.

Fighting Fraud

What's still needed is an emphasis on the responsibility of businesses leveraging any of these technologies to gather and transmit verified identities, ensuring higher levels of truthfulness and trust. Many entities have yet to realize how their business will be impacted: this is a tidal wave of a threat, not only to their revenue but through the direct vulnerabilities of leaving their identity unprotected.

"If you're a part of the communications ecosystem, whether in voice, messaging, email, or social, you have an obligation to ensure that fraud doesn't occur. We all have a responsibility of ensuring, at the front end, that we address how we're combatting fraud and not solely leave it up to the FCC or the terminating carriers to deal with it" — Rebekah Johnson.

Generative AI may be the new frontier for fraud, and bad actors will continue to use every opportunity to slip through our defenses to target unassuming consumers, which means we, as industry businesses working alongside the FCC, the FTC, and State Attorneys General, must all be willing, flexible, and determined to work collaboratively to put the scammers out of business.  

"We need to tackle this on all fronts; collaboration is key, and it's collaboration across the broader ecosystem. We've got to stay ahead of the bad actors, see the patterns, work together to find them and root them out, and stay flexible in how we respond." — Amanda Potter, Assistant VP & Senior Legal Counsel for AT&T.

The next frontier may be about developing AI tools to help us better engage with other AI tools, but right now, the conversation is about trust. It's about our ability to trust what we see and hear and our ability to trust our devices, which we carry around with us wherever we go.

Looking to protect the identity of your business, improve communications, and take control of your brand reputation? Get in touch with one of our identity management experts today.

About the CES Panel Discussion

Watch the Playback

Panel Speakers:

Steven Overly (Host); POLITICO Tech, Politico
Rebekah Johnson; Founder & CEO of Numeracle
Alejandro Roark; Chief of the Consumer and Governmental Affairs Bureau at the Federal Communications Commission
Amanda Potter; Assistant VP & Senior Legal Counsel for AT&T  
