The biggest AI story this week was the showdown between OpenAI and actress Scarlett Johansson over the company's use of a voice strikingly similar to hers after she had declined to collaborate with OpenAI. This is quickly becoming the typical deepfake story: someone steals the likeness of a famous person, and that person fights back.

The less publicized but perhaps more concerning story is the growth of sanctioned deepfakes. This week, WIRED detailed how deepfakes are playing a role in India's upcoming election, where a large number of politicians are intentionally creating deepfake versions of themselves to personalize messages to voters across the vast country.
It’s easy to see the appeal. Who hasn’t had a moment when they wished they could scale themselves? Heading into high-stakes situations like a national election, these sorts of deepfakes can change minds and swing voters. Thankfully, the growth is also attracting innovators trying to reduce the harm deepfakes cause. Last week, I met an entrepreneur building a platform called TrueMedia.org, designed specifically to detect political deepfakes. The more growth we see in sanctioned and ill-intentioned deepfakes alike, the more we’ll need tools like this to help us separate truth from fabricated reality.