
The Incident That Changed the Conversation

On 28 October 2025, Jensen Huang, the CEO of NVIDIA, was delivering his keynote address at the company's GPU Technology Conference in Washington D.C. While he spoke, a parallel version of him was also live on YouTube.

The second Jensen Huang was not him. It was a deepfake: an AI-generated video hosted by a channel calling itself "NVIDIA Live," which appeared at the top of YouTube's search results for anyone looking for the event. The fake stream attracted 95,000 concurrent viewers. The real one had 12,000.

The deepfake told viewers he was pausing the keynote to announce a "crypto mass adoption event tied directly to NVIDIA's mission." He directed them to scan a QR code. By the time YouTube removed the stream, nearly an hour later, criminals had stolen an estimated $115,000 in cryptocurrency from viewers who believed they were watching the real thing.

No NVIDIA system was compromised. No password was stolen. Jensen Huang's identity was weaponised against tens of thousands of people using nothing more than publicly available footage of him speaking on stage.

“The attack surface is no longer a system. It is the individual's public identity itself: their voice, their face, their words, their reputation built over decades.”

The Implications for High-Profile Individuals and Their Families

The NVIDIA case is instructive precisely because of the scale of the target. If the world's most visible technology executive, with a global communications team and a $3 trillion company behind him, cannot prevent his identity from being deployed as a fraud instrument in real time, the question for every high-profile individual is not whether this threat applies to them, but when.

The exposure is not limited to public statements or corporate communications. The same technology that cloned Jensen Huang's voice and mannerisms from earnings calls and keynotes can clone a family member's voice from a handful of social media videos, and deploy it against the people who love them most.

The fake kidnapping variant is particularly relevant to family offices and high-net-worth households. A parent receives a call. The voice on the line is their child's: the precise cadence, the particular way they say certain words, the exact pitch of distress. The parent is told their child has been in an accident, has been arrested, or is being held. A wire transfer is requested immediately.

Just three to five seconds of publicly available audio is now sufficient to generate a voice clone with 85% accuracy. For individuals with social media profiles, podcast appearances, media interviews, or public speaking engagements, the raw material for this attack is already in circulation.

The Reputational Dimension Is Distinct from the Financial One

For the individuals and families Pavesen works with, the reputational exposure is a separate, and in many ways more durable, threat than the financial loss.

A deepfake does not need to defraud anyone to cause reputational damage. It needs only to circulate. A fabricated video of an executive making controversial statements, a cloned audio clip placed in the wrong context, a synthetic interview: any of these can be published and shared at scale in the time it takes a communications team to become aware of its existence.

The defining characteristic of this threat is the absence of a breach. There is no hack to detect, no intrusion to trace, no system to harden. The attack surface is the individual's public identity itself.

“When a deepfake circulates, the damage is done in the first hour. Organisations with pre-established monitoring and clear escalation protocols are significantly better positioned than those responding from a standing start.”

What Proactive Management Looks Like

The first line of defence is not technical. It is strategic. Individuals who maintain a strong, consistent, and well-documented digital presence create a clearer baseline against which fabrications can be identified and contested.

For families and private offices, practical measures are specific. One measure now considered standard protocol at family office level is establishing private code words: phrases that cannot be harvested from public sources, agreed offline, and used to verify identity in any unexpected communication requesting urgent action. The logic is simple: an AI cannot guess a code word it was never trained on.
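As an illustration of the code-word logic, the sketch below shows a minimal verification check. Everything here is hypothetical: the function name, the normalisation choices, and the example phrase are illustrative, and the only real requirement is that the agreed word is exchanged in person and never stored anywhere that could be scraped or synced.

```python
import hmac

def verify_code_word(supplied: str, expected: str) -> bool:
    """Check a caller-supplied code word against the privately agreed one.

    hmac.compare_digest performs a constant-time comparison, so an
    attacker cannot recover the word character-by-character by timing
    repeated guesses. Input is normalised so a whispered or hurried
    rendition ("  Blue Heron ") still matches.
    """
    return hmac.compare_digest(
        supplied.strip().lower().encode("utf-8"),
        expected.strip().lower().encode("utf-8"),
    )

if __name__ == "__main__":
    agreed = "blue heron"  # hypothetical phrase, agreed offline only
    # An urgent call claiming to be a family member must pass this check
    # before any transfer or action is taken.
    print(verify_code_word("Blue Heron ", agreed))  # matches
    print(verify_code_word("bluebird", agreed))     # does not match
```

The design point is that the check depends on a shared secret outside the attacker's training data, not on how convincing the voice sounds.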

At the communications level, the priority is speed. When a deepfake circulates, the damage is done in the first hour. Individuals and organisations with pre-established monitoring, clear escalation protocols, and platform relationships are significantly better positioned to contain the spread than those responding from a standing start.

The NVIDIA case was resolved. The fake stream was removed. But the harm to the 95,000 people who watched it was not undone. In reputation management, the cost of being unprepared is paid long after the incident is over.