Barry Diller Trusts Sam Altman, But 'Trust is Irrelevant' as AGI Nears

Overview

Billionaire media mogul Barry Diller defended OpenAI CEO Sam Altman at The Wall Street Journal's "Future of Everything" conference, addressing concerns about Altman's trustworthiness. However, Diller emphasized that personal trust is becoming irrelevant as artificial general intelligence (AGI) approaches—because the consequences of AI development extend far beyond the intentions of any single leader.

Key Insights

Trust vs. The Unknown

  • Trust is not the main issue: Diller argues that focusing on whether Sam Altman is trustworthy misses the larger point. The real concern is the unknown and unpredictable consequences of AI development.
  • AI creators are surprised too: Even the people building AI systems don't fully understand what will emerge. Diller noted that AI leaders themselves have a "sense of wonder" about what they're creating.
  • AGI is approaching rapidly: "We're close to it. We're not there yet, but we're getting closer and closer, quicker and quicker."

The Need for Guardrails

  • Guardrails are essential: Diller stressed that humanity must establish boundaries for AI development before it's too late.
  • The alternative is AGI self-regulation: If humans don't create guardrails, "another force, an AGI force, will do it themselves. And once that happens, once you unleash that, there's no going back."

On Sam Altman

  • Diller vouched for Altman personally, calling him "sincere" and "a decent person with good values."
  • However, he emphasized that individual character matters little when the technology in question could fundamentally alter everything.

The Bigger Picture

"One of the big issues with AI is it goes way beyond trust. It may be that trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen."

Diller believes most AI leaders are "good stewards," but the transformation AI will bring is beyond anyone's control—making the conversation about trust secondary to the need for systemic safeguards.

Takeaways

  • AGI is an unpredictable force: Even its creators don't fully understand what it will become.
  • Guardrails are urgent: Humanity must act now to establish boundaries before AGI develops self-regulation.
  • Personal trust is secondary: The scale and unknowability of AI's impact make individual intentions less important than systemic oversight.
  • Progress is inevitable: Diller believes AI will continue to advance regardless of investment outcomes or public debate.