
Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.



Billionaire media mogul Barry Diller doesn’t think OpenAI CEO Sam Altman is untrustworthy, despite recent reporting to the contrary. Onstage at The Wall Street Journal’s “Future of Everything” conference this week, Diller vouched for the AI exec, who has been accused by some former colleagues and board members of being manipulative and deceptive at times.

Diller, who is friendly with Altman, was responding to a question about whether people should put their faith in Altman to ensure that artificial intelligence benefits humanity.

In particular, he was asked about the theoretical form of AI known as artificial general intelligence, or AGI, which could one day outperform humans at any task.

The media exec, a co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said that while he believes Altman is sincere in his pursuits, that’s not really the area of concern people should be focused on. Rather, it’s the unknown consequences that will result from AI.

“One of the big issues with AI is it goes way beyond trust,” Diller said. “It may be that trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen. And I’ve spent a lot of time with various people who’ve been in the creation mode of AI, and they have a sense of wonder themselves. So…it’s the great unknown. We don’t know. They don’t know,” he explained.

“We have embarked on something that is going to change almost everything. It is not under-reported. Now, whether these huge investments are going to come through — I couldn’t care less. I’m not invested in it, but progress is going to be made,” Diller added.

Still, the media mogul said he believes that most of the people leading the charge are good stewards, and that Altman in particular is sincere and "a decent person with good values." (Notably, Diller wouldn't say which AI leaders he thinks are insincere.)


“But the issue is not their stewardship. The issue is … it’s dealing truly with the unknown. They don’t know what can happen once you get AGI, and we’re close to it. We’re not there yet, but we’re getting closer and closer, quicker and quicker. And we must think about guardrails,” Diller noted.

Plus, he warned, if humans don’t think about guardrails, then the alternative is that “another force, an AGI force, will do it themselves. And once that happens, once you unleash that, there’s no going back,” Diller said.



