@joeinwynnewood @emilymbender I've got AI turning itself on every session across devices despite preferences. If you disallow history, tracking, storage, etc., it is a fresh instance every use. These are all options available in browser settings. So, if you allow for nothing else, then it works as advertised. Which is my point. It's the opposite of what it ought to be, and isn't actually functional if you exercise the options available to you. I hope that makes sense. I appreciate the dialogue.
Sounds like turning off storage would be the likely culprit. If you don't store the settings, they will have to revert to the default.
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with it so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 Boy, are they not even subtly advertising that they're running from the coming guillotines. Just cracks me up how little they think they need to spend to avoid their own collapse.
-
@emilymbender we're living through a mass psychological engineering campaign and the results have been, and will continue to be, horrifying https://azhdarchid.com/are-llms-useful
That blog post was an interesting read, thanks for sharing.
-
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
@emilymbender The developers do not know what answer an AI will give to a specific prompt in advance. It's a black box of answers. Therefore there is no QA of the product. It is unpredictable and therefore dangerous. Need I continue?
-
@Npars01 ah yes, bill gates, the "good billionaire"
-
@emilymbender The theme to I DREAM OF JEANNIE is now playing in your head!
It's outta that bottle!
-
I've been shouting about this since Google first floated the idea of LLMs as a replacement for search in 2021. Synthetic text is not a suitable tool for information access!
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
/fin (for now)
@emilymbender Right now I’m finding that when I have a conversation with chatGPT about a topic it’s a much better and more accurate and useful experience than using Google or DuckDuckGo searches. It includes links to sources but it also has a useful wider context that informs where it looks (like whether I’m looking for UK or USA based info), and is persistent, so I can come back to the topic months later and continue.
-
@Npars01 ah yes, bill gates, the "good billionaire"
Lol. Are there good billionaires?
-
@adrianco @emilymbender that's because an LLM is now more effective than the broken Google Search algorithm at surfacing the better results out of all the Gen AI slop the web is now drowning in.
I don't have conversations because there isn't anything to converse with, but I just type what I would have typed into Google back when it was useful. I find it supremely ironic that this is my main use case for LLMs.
-
@emilymbender It's fine, they have n-dimensional guardrails.
-
@joeinwynnewood @emilymbender Correct, which is why I think that the default position is that DDG doesn't actually provide an opt-out, despite having a toggle switch which says otherwise. If my anonymity or preference is enforced only after presenting an ID and pulling a personal file, that ain't an opt-out. Chart the decision tree out and the opt-out works for one specific path, coincidentally the path of idgaf and rawdog the internet. But, yeah, they have a toggle.
-
@joeinwynnewood @emilymbender I'm sorry. That was uncalled for. I am tired. I am a cog in a machine that keeps people alive, and though only a cog I share the same planet and have a personal life, like you, and like the developers of DDG -- look, that sloppiness wouldn't stand in my workplace. Things have to do, or do not, and withstand and adapt. The annoyance is supreme, personally. So I get frustrated and shout at strangers. Context, I guess. Anyhow, apologies and good day/night.
-
@Npars01 @emilymbender yeah, it's clearly one of those technologies that accelerates just about everything patriarchal white supremacist capitalism does in various ways, and provides a greater means of plausible deniability to the people behind that than previous systems. it enables the monsters running the world to Capitalism Harder, at the exact moment when we need to be doing the opposite and take better care of one another and our planet. so in that sense it's definitely working as designed.
@jplebreton @Npars01 @emilymbender or not designed, as this shit was ripped from labs before it was ready, because moneydudes got the fomo. the devil finds work in underengineered solutions.
-
@emilymbender I play a game now and then with the latest and greatest LLMs to see how easy it is to get them to make me a recipe which includes something grossly poisonous. They still fail badly.
-
@emilymbender For example, here’s a conversation about what species of bat I’m looking at. It’s a much better experience than web search. Regardless of how accurate it is, the experience is going to drive usage. However it asked good clarifying questions and the answers are correct as far as I can tell. https://chatgpt.com/share/6964daa3-4a64-8009-86e9-4a1b804998a7
-
@emilymbender That Guardian headline is itself misleading; it should say something like, "Google's false and inaccurate AI overviews continue to put users' health at risk."
-
@emilymbender The world cries out for a better search. One that can work even on an Internet full of malicious SEO engineered to generate false positives for fake reviews and other scam sites. Such a technology is desperately needed. Unfortunately we got LLMs instead, which exchange one set of problems for another: They can only return information that is true most of the time.