Theory (which is mine) (and what it is, too): Enshittification has led directly to acceptance of LLMs, because the public is already used to software that is unfit for purpose.
-
@wendynather @pluralistic and it simply overshadows all of the things this technology can be useful for.
-
Theory (which is mine) (and what it is, too): Enshittification has led directly to acceptance of LLMs, because the public is already used to software that is unfit for purpose.
@wendynather @pluralistic I suppose there's something worse here: if you post a question on some social network, you get offensive answers and misleading, wrong answers, all of that from natural intelligence (or stupidity), and an LLM, despite the problems it has, beats all of that.
There might be better humans out there, but the enshittification of those platforms makes sure you meet the assholes first.
-
@wendynather @pluralistic and: already used to working in bullshit jobs
-
@wendynather @pluralistic
Yep. Search engines too: people used to search for an answer; now they ask an LLM. They didn't start doing it because they wanted a machine to think for them; they started because they stopped getting useful search results.
-
To be fair, any human you ask a question is going to give you wrong answers SOME of the time, often with complete certainty. To expect that an LLM will always get it right is unrealistic and misguided. Trust, but verify if the answer is important. Not sure why we would expect any computer to come up with the right answer every time.
