Newsletter 03/23/2025

If you find this article of value, please help keep the blog going by making a contribution at GoFundMe or PayPal.
More Old Wine In New Bottles:
Not that long ago, maybe 15 years or so, the most common things Internet users had to worry about were emails written by people who knew nothing about The Chicago Manual of Style, or websites that appealed to a user's more prurient interests. In those days, a common email-based scam began when a user received an email, purportedly from a younger relative who had found himself or herself in trouble in a foreign country. Here is how the Copilot AI described the way such scams worked.
This type of scam is often referred to as the "Grandparent Scam" or "Relative-in-Distress Scam." It typically starts with the victim receiving an email (or sometimes a phone call) from someone claiming to be a younger relative, such as a grandchild, niece, or nephew. The message often states that the relative is in urgent legal trouble while traveling abroad—perhaps they've been arrested, detained, or involved in an accident—and is unable to contact anyone else.

This scam was indeed quite prevalent over the previous two decades, and in my experience often quite successful. I remember several desperate calls from clients begging me to come over immediately to evaluate the legitimacy of such an email. A small amount of cyber sleuthing would reveal the inconsistencies in the email, and thus I helped the people who called me realize it was a scam before they sent off the money. Many people, however, fell for the scam and sent their money off to some unknown crooks of the netherworld. And then they called me.

The advent of AI hasn't really changed the modus operandi of many of today's cyber crooks. Instead, Generative AI, a subset of the broader term Artificial Intelligence, has simply increased the effectiveness of these miscreants. The term Generative AI commonly describes tools and techniques that create — or generate — images, videos, and sounds. Today's Gen AI can create quite convincing images, videos, and sounds that, at first, may seem quite authentic. The term given to the output of Gen AI is "deepfakes." Although many people might think of fake explicit images of celebrities when imagining the notion of deepfakes, and certainly that is how the term entered the common lexicon of our times, this isn't Grandpa's Gen AI anymore.

One of the techniques involved in creating audio deepfakes is "voice cloning." With just a single sample of a person's voice, skilled users of Gen AI can generate speech completely different from the original, but in the same-sounding voice. And this is where today's scammers are placing their bets. Twenty years ago, as noted above, scammers would send an email pleading with Grandpa to send money to bail a grandson out of jail in Stockholm. Now the scammers call Grandpa using the grandson's own voice to steal money from Grandpa, while still following the same modus operandi. The voice, however, will be a Gen AI deepfake. A sketch of just how little effort the cloning step now takes appears below.

As the graph below from Truecaller Insights shows, while spam email is still the favorite tool for spammers to use to fleece their victims, consumers are receiving an increasing number of spam calls, too.

To make the spam call sound authentic, the crooks need to do some homework about their intended victims. First, they need to obtain a recorded sample of the voice of the person they intend to impersonate. This is simple to accomplish by any number of means. Today, people often express themselves on social media with videos of themselves talking about this or that. Simply calling the intended target and having a random conversation with that person would also allow a voice sample to be recorded. Even just listening to and recording the outgoing voice mail greeting will yield the needed sample of the voice the crooks intend to impersonate.

As PCWorld pointed out in "The rise of AI voice cloning: A new era of phone scams begins," the information needed to begin a deepfake scam is not that difficult to obtain these days. The types of information the scammers might want to collect are: full name; contact details, like phone number, email, and address; employment history; educational background; financial situation; criminal history; relatives; and known associates. Besides buying stolen data on the dark web, a more effective means for these crooks to do their reconnaissance is the use of data brokers.
Type any name you want into Google, along with, say, a city. You might be surprised at the results. Websites like BeenVerified and Whitepages will display a considerable amount of information about that person for free. Moreover, most of these sites offer a trial of their paid service for $1.00 or so. The paid premium service will provide an astonishing amount of data about a person; more than enough information about an intended victim to make the scam calls even more convincing. The Federal Trade Commission has a webpage that explains in depth how these services work.

Of course, in our age where Caller ID is now universal, the crooks need to display on the call recipient's phone the real telephone number of the person they claim to be. This is done through a now common practice called "Caller ID Spoofing." Again, as with the people search websites, otherwise legitimate web-based services provide the crooks with the means to accomplish their exploit. Voice over IP services, which allow a user to place telephone calls over the Internet instead of traditional telephone lines, often allow their customers to configure any name, telephone number, or both as what is displayed on the call recipient's phone. The Wikipedia page, Caller ID spoofing, explains all this very well; a sketch of what this looks like in code appears below.

So, by abusing what is now everyday off-the-shelf technology, crooks can generate a very convincing, but otherwise fraudulent, telephone call that might well dupe the unsuspecting recipient into believing that Grandson Jimmy really is in some kind of duress. This fraudulent practice has become so virulent that on December 3, 2024, the F.B.I. released a public service announcement that offers a good summary of how Gen AI is being exploited to commit frauds.

The PSA from the F.B.I. offers some good suggestions about how to recognize AI voice frauds. One good piece of advice the Feds offer is to: "Listen closely to the tone and word choice to distinguish between a legitimate phone call from a loved one and an AI-generated vocal cloning." But to take the time to do that, you must first suspect that the call is a fake to begin with.

The other piece of advice the Feds give is already way too common in our imperfect digital world. The F.B.I. suggests that you now use a form of Two Factor Authentication (2FA) when a relative calls. "Create a secret word or phrase with your family to verify their identity," is what the Feds now advise us citizens to do. I might suggest a challenge question, not unlike when you set up a new online account, such as "What is the name of the city where you were born?" Between relatives, the challenge question can be quite idiosyncratic, and thus difficult to guess. When the call from the relative comes in, hang up the phone; call that person right back, and then use your personal code to verify it's who it should be on the other end of the call.
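And as for that Caller ID spoofing sketch promised above: here is a minimal example using Twilio's Python library, purely as a stand-in for VoIP calling APIs in general. The credentials and phone numbers are placeholders. Note that a reputable provider like Twilio only accepts a from_ number you own or have verified, which is exactly the safeguard the shadier services omit.

```python
# A minimal sketch of placing a VoIP call with a chosen Caller ID,
# using Twilio's Python library (pip install twilio) as an example.
# Credentials and phone numbers below are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

call = client.calls.create(
    to="+15550001111",     # the call recipient (Grandpa)
    from_="+15552223333",  # the number shown on the recipient's Caller ID;
                           # Twilio requires it to be owned or verified,
                           # but looser VoIP services accept any number
    url="https://example.com/voice-instructions.xml",  # instructions for
                           # what audio the service plays on the call
)
print(call.sid)
```

Put that displayed number together with the cloned voice sketched earlier, and the fraudulent call is complete.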
Ah Jeez... Isn't it amazing how all this technology has made our lives easier and way more productive, not to mention stress-free?
Hello Muddah, hello Faddah
¯\_(ツ)_/¯ Gerald Reiff