"I just found out that it's been hallucinating numbers this entire time."
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay "Here's what a functional company's numbers would look like"
Unfortunately, these are not *your* company's numbers
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay This offers a glimmer of hope that the managerial class may self-destruct without doing too much more damage to the rest of us.
-
@Natasha_Jay grounded humans will be needed for oversight for quite some time
@Powerfromspace1 @Natasha_Jay you're so close to getting it
-
By the way, always delighted to see what is, to my understanding, a classic prank Aussies play on tourists
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay @wendynather the "making up plausible shit" machine once again made up plausible shit, continually surprising everyone
-
"I just found out that it's been hallucinating numbers this entire time."
Source is here for anyone interested in it
https://www.reddit.com/r/analytics/comments/1r4dsq2/we_just_found_out_our_ai_has_been_making_up/
-
@Natasha_Jay @wendynather the "making up plausible shit" machine once again made up plausible shit, continually surprising everyone
@ferrix @Natasha_Jay @wendynather "Making up plausible shit" is a human consultant's job! AI is going to replace us!
-
"I just found out that it's been hallucinating numbers this entire time."
No one should be surprised.
It is mathematically impossible to stop an LLM from “hallucinating” because “hallucinating” is what LLMs are doing 100% of the time.
It’s only human beings who distinguish (ideally) between correct and incorrect synthetic text.
And yet, it’s like forbidden candy to a child. Even well-educated, thoughtful people so desperately want to believe that this tech “works”.
-
@ferrix @Natasha_Jay @wendynather "Making up plausible shit" is a human consultant's job! AI is going to replace us!
@maccruiskeen @Natasha_Jay @wendynather MUPSaaS
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay "To an artificial mind, all reality is virtual."
-
@Natasha_Jay I suspect that one of the few applications where a heavily hallucinating LLM can outperform a human would be in replacing board members, C-suite executives, and their direct reports. I propose a longitudinal study with a control group of high-level executives using real data, and an experimental group using hallucinated, or maybe even totally random, data filtered into plausible ranges, using executive compensation deltas as the metric.
Remember who funds AI.
https://www.al-monitor.com/originals/2024/05/saudi-prince-alwaleed-bin-talal-invests-elon-musks-24b-ai-startup
https://www.washingtonpost.com/technology/2025/05/13/trump-tech-execs-riyadh/
Disaster Capitalists -- they want another bubble bursting that sends another generation of wealth upwards to the 1%
Global financial crashes & mass dissatisfaction provide the makings for fascist movements & profitable imperialist wars of extraction.
https://www.taxresearch.org.uk/Blog/2025/09/03/austerity-is-the-midwife-of-fascism/
https://www.euractiv.com/opinion/the-next-financial-crisis-is-unavoidable-and-american/
https://www.ips-journal.eu/topics/economy-and-ecology/what-will-it-take-to-beat-the-far-right-8599/
-
"I just found out that it's been hallucinating numbers this entire time."
Hallucinating numbers, you say?
It must be learning from the current American administration.
-
"I just found out that it's been hallucinating numbers this entire time."
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay someone is getting fired
-
"I just found out that it's been hallucinating numbers this entire time."
-
@Natasha_Jay This part in the original post is fantastic:
“The worst part I raised concerns about needing validation in November and got told I was slowing down innovation.”
hxxps://www.reddit.com/r/analytics/comments/1r4dsq2/we_just_found_out_our_ai_has_been_making_up/
@drahardja @Natasha_Jay I believe it’s supposed to be h*tt*ps
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay It seems to me that AI tools have been released prematurely to generate revenue from the massive investments AI tech companies are making. This type of hallucination could cause serious damage to any business, organisation or person using it. Who is accountable in the end? Given the history of big tech's social media platforms and the harm they are causing, you can bet they'll accept no responsibility.
-
"I just found out that it's been hallucinating numbers this entire time."
"... just inventing plausible sounding [answers]"
This shit is so tiring - that is literally all any AI is even *meant* to do. They are not even designed to give correct answers to questions, but just examples of what a plausible answer could sound like.
(edit: sorry, I know I'm likely preaching to the choir here, but it's just so fucking tiring seeing people surprised by this crap.)
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay Bwahahahahahaha
-
"I just found out that it's been hallucinating numbers this entire time."
@Natasha_Jay .. and you really want to blame the technology for this... If there was no process for checking the facts that were used, it is simply bad implementation; everyone knows you have to check for, or do something about, hallucinations with GenAI.
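For what it's worth, "do something about hallucinations" doesn't have to be exotic. The obvious baseline is recomputing any figure the model reports from the source data before anyone acts on it. A minimal sketch in Python, purely illustrative: the function name, tolerance, and the numbers below are assumptions for the example, not from the thread.

# Purely illustrative: recompute an LLM-reported aggregate from the raw rows
# and refuse to publish it if the two disagree beyond a small tolerance.

def matches_source(llm_reported_total: float, raw_values: list[float],
                   tolerance: float = 0.01) -> bool:
    """True only if the model's number agrees with a recomputation from source data."""
    actual_total = sum(raw_values)
    if actual_total == 0:
        return llm_reported_total == 0
    return abs(llm_reported_total - actual_total) / abs(actual_total) <= tolerance

# Hypothetical example: the model claims a quarterly total the warehouse rows don't support.
rows = [310_000.0, 295_000.0, 402_000.0]   # ground-truth figures pulled from the warehouse
claim = 1_250_000.0                        # figure produced by the model
if not matches_source(claim, rows):
    print("Reject: model's figure does not match the source data.")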
