July 19, 2023 at 6:11 am

‘LLMs don’t hallucinate.’ Stability.ai’s CEO Speculates That Wild Answers From AI Are The Result Of Peering Into Other Realities

by Trisha Leigh


If you have always wanted to be able to travel to alternate realities, some say you should be super jealous of AI – because it’s been there and back.

You’ve probably heard that one of the reasons Large Language Models (LLMs) spit out wild answers sometimes is because of machine “hallucinations.”

Well, according to Stability.ai’s CEO Emad Mostaque, those don’t exist – but alternate realities do.

“LLMs don’t hallucinate. They’re just windows into alternate realities in the latent space. Just broaden your minds.”

It sounds like he was born in the wrong generation – or at least, he really wants to believe there’s a good reason for the growing pains AI is going through as it begins to interact with humans on a regular basis.

Some think the comment could at least help explain the concept of latent space, which, in an image model like Stable Diffusion, is the compressed internal representation the model works in between an input and an output image.
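For anyone who wants something more concrete than alternate realities: a latent space is just the compressed vector a model squeezes its input into before reconstructing or generating an output. Below is a minimal, hypothetical sketch in PyTorch (not Stability.ai's actual code, and not how Stable Diffusion or an LLM is really built), showing a tiny autoencoder whose bottleneck layer plays the role of the "latent space."

```python
import torch
import torch.nn as nn

# Toy autoencoder: the 2-dimensional bottleneck is the "latent space".
# Illustrative only -- dimensions and architecture are made up for this example.
class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # compress the input into latent space
        return self.decoder(z), z  # reconstruct an output from that compressed point

model = TinyAutoencoder()
x = torch.rand(1, 784)             # a fake "image", flattened into a vector
reconstruction, z = model(x)
print(z)                           # the point in latent space representing x

# Decoding a point the model never saw during training still produces *some* output --
# which is roughly what people mean when generated content looks "hallucinated".
weird_output = model.decoder(torch.tensor([[40.0, -40.0]]))
```

On that reading, Mostaque's "alternate realities" are just regions of this compressed space that don't correspond to anything in the training data.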

Whether it’s a hallucination or something else, that doesn’t change the fact that what the AI is spitting out doesn’t make sense in our reality. The mistakes and weird feedback aren’t exactly helpful if you need AI to, say, assist with your resume.

Others were quick to call Mostaque out on his nonsense, arguing that flawed tech shouldn’t get a pass, no matter how cutting-edge it is.

“The next time someone lies to you, to your face, with every confidence, I want you to say ‘it’s ok, that was just a window into alternate realities in latent space.'”

I mean. It’s hard to argue with that.

Right?