AI voice cloning and deepfakes are supercharging scams. One method to protect your loved ones and yourself is to create secret code words to verify someone’s identity in real time.
Or, crazy idea, we could stop blaming the victims (“oh, so you didn’t have a secret password with your family? no wonder they got phished!”) and start blaming the companies that create these tools, make money hand over fist, and don’t give a fuck about the consequences.
The two are not mutually exclusive. To overuse an American fantasy: it’s the difference between blaming the police for not solving murders and stopping the attacker in their tracks yourself.
It is way easier to deal with a scam you didn’t fall for in the first place.
I am not saying they are mutually exclusive.
I am saying that the focus of this article seems to be on blaming the victims instead of demanding accountability from the providers of the tools that make these scams possible. The article treats these tools as an inevitable part of the environment, when in fact creating them and making them publicly available is a choice, a business decision by for-profit corporations making bank on scammers using them to scam people.
So while these approaches are not mutually exclusive, the focus is dreadfully misplaced. It’s like saying “You Need To Wear An Air Filter At All Times” instead of calling out the companies making the air unbreathable.
And that’s something I object to on a very basic level.