I like to think I’m somewhat savvy when it comes to technology. Yes, I was slow to embrace the Internet, smartphones, and even answering machines. But I’ve been writing about tech for more than four decades, often covering topics months or years before they hit the mainstream.
One of the areas I’ve focused on in my writing is cybersecurity, so naturally I feel like I’m less likely to be victimized by cybercriminals and fraudsters. Wrong, wrong, wrong. Since launching my book, Life Lessons, last September, I’ve twice fallen for scams that I should have seen coming.
In each case, artificial intelligence played a role in my being duped. AI, even in its still relatively early stages, is powerful enough to easily fool us into falling for scams or other malicious activities.
While it’s easy to blame AI for many of the woes we face today, it’s important to remember that humans are still behind the treachery. People create the programs and seek to profit from how they’re used. The technology itself, at least at this point, is neither good nor bad.
One of the biggest fears about AI, in fact, is that fellow humans will use it to exploit others. And they are doing just that.
The scams that got me should have been easy to catch. In my haste to market my book, however, I looked past the warning signs. Each of them was tied to, or impersonated, a legitimate online platform.
In the first one, the scammer was disguised as a popular Instagram book reader account, so I assumed it was legit. After receiving an offer from this account to use its book marketing program, I signed up even though I had some suspicions. Sure enough, it never delivered a single service but was happy to accept my PayPal payment.
The lesson I learned here is never trust unsolicited (cold) emails or social media messages offering book marketing, promotion, or publishing services. They are overwhelmingly scams designed to take your money.
The second scam was far worse. In this case, I signed up to use ACX, Amazon’s audiobook creation marketplace, which connects authors with narrators. I entered my book on the platform to solicit auditions, and after receiving about 20, I selected one of the audiobook producers because I liked the sound of his voice.
My first clue should have been the fact that this person had no website, and barely a web presence at all. But I ignored this because I figured that surely anyone in the ACX community had to be legitimate.
Well, it turns out this “narrator” used AI to create the audio files, which is strictly against ACX policy. The narration sounded fine to me, and I didn’t find out about the violation until after I had paid the producer, who no doubt has already run off to scam some other unsuspecting soul.
The world is full of good and bad people. I like to think the good outnumber the bad. But that thought is of little consolation when I’ve just been ripped off.
My goal with this post is not to whine about my own brushes with the scamosphere (a word I apparently just invented, based on a quick Google search showing that it doesn’t exist). No, my aim is to educate or remind readers about how scammers can get them in this fast-changing AI-powered world. Here are a few of their most common techniques:
- Voice cloning. Scammers use AI to clone the voices of people you know, such as family members, friends, or employers. Then they call to ask for emergency financial help. All the scammers need is a short audio clip from social media or a voicemail to perpetrate this crime. It’s best to hang up and call the person in question directly to ask whether he or she really needs help.
- Fake emails and texts. Scammers use AI tools such as ChatGPT to write emails and text messages. Unlike earlier generations of “phishing” attacks with spelling or grammatical errors, these actually look real and can include personalized information. I’ve gotten a few purporting to be from Social Security. Be sure to never click any links or respond to messages asking for personal or financial information.
- Fake photos and videos. Using a technique known as deepfakes, scammers can create fake images and videos of actual people. Some use these to create fake charity appeals, news stories, or testimonials aimed at soliciting donations or personal information from potential victims. To spot deepfakes, look for unnatural, AI-generated glitches, especially around faces, and inconsistencies in lighting and audio.
As a side note, I would add that the audiobook scam occurred as I am navigating another serious health issue. There’s a real temptation to rant about why these things have to happen, and the timing of such events. But it’s a reminder that many of the things we experience in life are random and to a large degree out of our control.
Fairness is a meaningless concept when it comes to what happens to us—good or bad. Sometimes we have to just draw deeply from the well of patience and resilience to deal with life’s challenges. Ultimately, the best we can do is carry on and try to counter the bad with good.
“Perhaps one day we’ll be able to identify and block not just scams but the scammers themselves—before they even target their first victim.”—Maria Konnikova
If you haven’t subscribed, you’re missing out on post updates! To sign up, type your email in the box below then click on the green subscribe button. When you receive a confirmation message from WordPress, click on the “confirm now” button to complete the process.