HackTalk is a long-running monthly podcast with Sean Bailey and Devin Kropp, co-authors of Hack-Proof Your Life Now!, which covers the latest cybersecurity threats and issues advisors need to know to protect themselves and their clients. You can listen to the full broadcast in the video below, or you can get just the highlights by reading the article, where the transcript has been edited for length and clarity.
Sean: Hi everyone. Welcome back to HackTalk. My name is Sean Bailey, Editor in Chief at Horsesmouth. I’m here with Devin Kropp, co-creator of our Savvy Cybersecurity program and co-author of the award-winning Hack-Proof Your Life Now! In today’s episode, we’re going to be talking about the new dangers AI poses for cybersecurity. Devin, last month we talked about some positive AI developments. Do you want to recap before you go into the dark corners?
Devin: Sure. Of course, as with any technology, there are pros and cons. Last month we talked about some of the really cool, positive things coming with this widespread adoption of AI software by everyone, everywhere. One of them was a new AI-powered website where individuals can upload suspected phishing messages they receive, and the software will analyze a message and tell you whether it’s legitimate. If it’s not, you can report it to the site, which helps build a database for verifying messages in the future.
“Software now can take a three-second recording and create a voice double.”
And then we also talked about how AI can help cybersecurity professionals do their jobs. These new AI programs, including one from Microsoft that we’ll talk about later, have been designed to help cybersecurity professionals not only identify malware but also quickly write code to block it. As attacks become more sophisticated, this software can identify malware strains faster than an individual could on their own.
Voice cloning
Sean: Right. Good. So now for this month we said we were going to come back and look at the dark side, meaning cybersecurity and AI. How are hackers using this technology? What’s going on?
Devin: So this is honestly nothing new. I feel like AI is really big in the news right now for individuals with things like ChatGPT and all the other programs coming out. But hackers have actually been using AI for a really, really long time. If you think about things like the software that they use to brute force through passwords, that’s basically AI-powered. So this is nothing new.
But with the widespread adoption of AI, we’re seeing some new trends come to light. One story in the news this month caught my eye: hackers are using AI software to create very realistic voice clones of people. A mother in Arizona received a phone call from an unknown number. When she answered, it sounded exactly like her daughter, saying she was in trouble and needed help. Then a man’s voice came on the phone demanding a million dollars in exchange for her daughter’s return.
The mom was at one of her other kids’ events freaking out. One of her friends was able to call her husband and verify that her daughter was in fact home.
But the voice was actually created from her daughter’s voice. It was an exact replica because there is this software out there now that can basically take a three-second recording and create a voice double. So think about things like TikTok and Instagram, where kids are recording their voices. They can take any recording and create an entire voice clone out of it and then program it to say, “Help I’m in trouble,” or things like that.
So we’ve already seen the grandparents scam, where someone calls and says, “Oh, your grandchild’s been in an accident and you need to send this money to help them.” But in that scam, it isn’t the grandchild’s actual voice.
Now we’re seeing this new level where voice clones exist. It used to be like you needed 30 seconds of someone’s voice to do this. Now it is as little as three seconds. And think about it: anyone’s voice is probably on the Internet for three seconds.
So that is one of the scarier things we’re seeing now. Hackers can take scams that already exist and bring them to the next level with voice clones. In the future, we might even see video cloning, too, with deepfake videos, so things like FaceTime scams could emerge as well. That’s one of the dangers of this new technology.
Sean: And this relates directly to financial advisors and how they run their businesses, because many have put in place a security policy: “We will not make a wire transfer or move money in any client’s account until we’ve had a conversation with them on the phone.” That was intended to prevent fraud perpetrated through email.
“What if AI is able to clone a client’s voice and conduct a conversation with a financial advisor, and then a trade or movement of money is executed from the fraudulent instructions?”
Now I was talking with Debbie Taylor about this last week. She’s obviously very concerned about what other safeguards they need to put in place. The danger is that, in fact, AI is able to clone people’s voices and in some way conduct a conversation with a financial advisor where the voice sounds like the client’s voice, and then a trade or movement of money is executed from the fraudulent instructions of the scammer. So there’s definitely these huge concerns. We’re definitely going to need to come back to all this in the future.
In many ways, Hack-Proof Your Life Now!, which has been updated over the years, was a snapshot in time. I think our whole system is still good; it still works. The key elements we recommend advisors use with their clients, or teach to their clients or anybody, still stand. But with this AI development, we may have to revise some things. That’s for another day, though.
Devin, let’s move on. What is going on? There’s more scam talk relating to Apple. What are they warning their customers about?
Apple warns of uptick in scams
Devin: I’m sure many of you listening to or reading this have noticed an uptick in the phishing text messages you’re getting. I know I seem to have been getting them more and more frequently over the past few months. Apple has noticed it, too, and has released a security support page on its website warning people about this uptick in fraudulent messaging because it has become so prevalent.
Number one, it reminds people who get messages that appear to be from Apple or any other legitimate company to look twice before doing anything. Netflix and Amazon seem to be the companies these hackers most like to impersonate. But Apple has taken it a step further, telling its users to be really careful not only of fraudulent messages but also of pop-ups that appear on a device claiming there’s a security problem.
They also warn of scam phone calls that impersonate Apple support, which goes back to what we were just discussing: people can now run phone scams much more convincingly. And beware of unwanted calendar invitations that arrive in your email on any of your devices, because Apple is seeing an uptick in these scams hitting its devices.

But really, this applies to any device. I know people on Android who are experiencing the same thing. So it’s more of a general warning. Nothing specifically more dangerous has emerged, but Apple has noticed this uptick, just as individuals have, and wants its users to be careful.
How to fight back against increased phishing
- If you get messages that pretend to be from Apple specifically, report them to a dedicated email address: reportphishing@apple.com. Apple asks that people send those in to help it fight back.
- There’s also another address: if you get fake Netflix or Amazon login messages, you can report them to abuse@icloud.com.
- They also remind people that on an iPhone, you can block and report any suspicious messages that come in. Instead of just deleting a message, you have the option to report it. Again, that helps Apple build its library of scam messages so it can filter them out better.
Sean: Hey, it’s definitely a scary situation for sure. All right, and well we’re hitting all the big tech companies today. What about Microsoft? They’re coming to the rescue maybe with AI-powered security. What’s that all about?
Microsoft AI helping cybersecurity pros
Devin: Microsoft has created its own AI software for cybersecurity professionals. Again, I expect we’ll see a lot of this in the next few months because there is a real need. They’ve released what they’re calling Security Copilot, and it’s designed for cybersecurity professionals, not everyday users, to help them do their jobs.
It can help create responses to malicious code. Typically, when malicious code is discovered, a professional has to write code to fix the security flaw, update the patch, and push it out to the public.
The idea with this is that the AI software would be able to not only identify the strain faster, but come up with code that would block it in minutes instead of hours or days, which is how long a professional human being would probably take.
“With Security Copilot, the idea here is that it’s going to help professionals identify and respond to these malicious attacks more quickly than they would be able to on their own.”
Of course, it’s not perfect yet, but it gives professionals a starting point to work from, so they’re not writing code from scratch. The idea is that it will have some capability to help professionals identify and then respond to malicious attacks more quickly than they could on their own. It’s not replacing any jobs or taking the place of a cybersecurity professional; it’s enhancing their ability to do their jobs more quickly and efficiently.
Sean: And I think that’s the key thing to understand. In the last couple of months, with the release of ChatGPT, there’s been massive renewed interest in, and scare warnings about, artificial intelligence. It reminds me of the early days of the Internet, when some people thought it was great and others thought it was crazy.

And the truth of the matter is that it’s a neutral thing: it can be used for good or for bad. We all just have to keep our wits about us and bring this new technology on board to boost our own security and our own knowledge base. I think everybody will probably be better off for it. All right, Devin, I think that does it for this month, right?
Devin: Yep, that’s it. Everyone, keep staying safe out there. Keep an eye out for those scams and report them if you get them.
Sean: Yeah, for sure. All right everyone. Thanks. We’ll see you next month. Take care.
Devin: Bye.