Sorry, no. I think DuckDuckGo is a decent option. They don't keep your search history, and they produce good search results.
@joeinwynnewood @emilymbender No worries, and agreed on it being a decent option! It's what I've been using for a couple years now. I've just been troubled by some recent stuff the company's been doing, so I wanna keep my options open.
-
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
@emilymbender I just wanna say, I'm thrilled that rugbies are back. I always loved rugbies.
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with it so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 @jplebreton @emilymbender Once again I am thinking: I never expected the end of this era to be so *stupid*.
-
@joeinwynnewood @emilymbender I've got AI turning itself on every session across devices despite my preferences. If you disallow history, tracking, storage, etc., it is a fresh instance every use. These are all options available in browser settings. So, only if you allow everything does it work as advertised. Which is my point. It's the opposite of what it ought to be, and isn't actually functional if you exercise the options available to you. I hope that makes sense. I appreciate the dialogue.
-
@joeinwynnewood @emilymbender Sorry, let me rephrase a little. If you clear data between uses, or block trackers, it doesn't work. Having a flag from the developers that could be passed to the browser as another category to toggle under the various browser privacy settings (e.g. cookies, tracking, local access) would ensure the intended function, set a functional precedent for browser developers and users, and still allow for privacy.
-
@emilymbender Incredibly, I still hear scientists defending this damn technology. It's like, "I'm always on time for my morning classes thanks to my radium watch! Golly gee, the bus to the math conference runs so much smoother on leaded gasoline."
-
@Npars01 @jplebreton @emilymbender so they pledge to give 1/320th of their net worth so that they can pretend even a 1% tax on their total net worth would not work better. Got it.
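For the arithmetic behind that comparison, taking the ratio in the post at face value (no actual pledge or net-worth figures are given here, so W is just a placeholder for net worth):

```latex
% One-off pledge versus a recurring wealth tax, for net worth W.
\underbrace{\tfrac{W}{320} \approx 0.31\%\,W}_{\text{pledged once}}
\quad\text{vs.}\quad
\underbrace{0.01\,W = 1\%\,W}_{\text{collected every year}}
```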
-
I've been shouting about this since Google first floated the idea of LLMs as a replacement for search in 2021. Synthetic text is not a suitable tool for information access!
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
/fin (for now)
This is the bit that really gets my goat. As someone who focused quite a bit on epistemology and empiricism and science vs. pseudoscience in college + grad school, it's like watching a horror film where you know the mistake that everyone is making but they refuse to listen.
That’s really why I listen to the podcast - to know that I’m not alone.
-
Back to the current one, the quotes from Google in the Guardian piece are so disingenuous:
>>
@emilymbender Good News! We told our AI to audit the results of our AI and it told us it passed!
-
Sounds like turning off storage would be the likely culprit for the settings not sticking. If you don't store the settings, they will have to revert to the defaults.
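To make that concrete, here is a minimal browser-side sketch (TypeScript) of how a preference kept only in client-side storage behaves when storage is blocked or cleared. The key name, default value, and helper functions are hypothetical, not DuckDuckGo's actual implementation:

```typescript
// Hypothetical preference stored only on the client.
const PREF_KEY = "ai_assist"; // made-up storage key
const DEFAULT = "on";         // vendor default the setting falls back to

function readPref(): string {
  try {
    // If storage is blocked, access can throw; if data is cleared between
    // uses, the stored value is simply gone. Either way we land on DEFAULT.
    return window.localStorage.getItem(PREF_KEY) ?? DEFAULT;
  } catch {
    return DEFAULT; // storage denied: the "opt-out" silently resets
  }
}

function writePref(value: string): void {
  try {
    window.localStorage.setItem(PREF_KEY, value);
  } catch {
    // With storage disallowed, this write never survives the session,
    // which matches the behaviour described in the posts above.
  }
}
```

If nothing persists between sessions, every visit starts from the vendor's default, which is why a toggle like this only "sticks" when you let the site store data.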
-
@Npars01 Boy are they not even subtly advertising their running from the coming guillotines. Just cracks me up how little they think they need to spend to avoid their own collapse.
-
@emilymbender we're living through a mass psychological engineering campaign and the results have been, and will continue to be, horrifying https://azhdarchid.com/are-llms-useful
That blog post was an interesting read, thanks for sharing.
-
@emilymbender The developers do not know what answer an AI will give to a specific prompt in advance. It's a black box of answers. Therefore there is no QA of the product. It is unpredictable and therefore dangerous. Need I continue?
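A toy sketch of why that is, assuming the usual setup where the next word is sampled from a probability distribution rather than looked up. The vocabulary and probabilities below are made up for illustration; this is not any vendor's actual decoding code:

```typescript
// Made-up next-word distribution for one prompt; numbers are illustrative only.
const vocab = ["normal", "elevated", "dangerous"];
const probs = [0.5, 0.3, 0.2];

// Draw one word at random, weighted by the probabilities above.
function sampleNextWord(): string {
  let r = Math.random();
  for (let i = 0; i < vocab.length; i++) {
    r -= probs[i];
    if (r <= 0) return vocab[i];
  }
  return vocab[vocab.length - 1]; // guard against floating-point leftovers
}

// The same "prompt" run twice can produce different continuations,
// so there is no single fixed answer anyone could have reviewed in advance.
console.log(sampleNextWord());
console.log(sampleNextWord());
```

Run it a few times and the two lines will not always match; that unpredictability is the point being made above.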
-
@Npars01 ah yes, bill gates, the "good billionaire"
-
@emilymbender The theme to I DREAM OF JEANNIE is now playing in your head!
It's outta that bottle!
-
@emilymbender Right now I'm finding that when I have a conversation with ChatGPT about a topic, it's a much better, more accurate, and more useful experience than using Google or DuckDuckGo searches. It includes links to sources, but it also has a useful wider context that informs where it looks (like whether I'm looking for UK or USA based info), and is persistent, so I can come back to the topic months later and continue.
-
@Npars01 ah yes, bill gates, the "good billionaire"
Lol. Are there good billionaires?
-
@adrianco @emilymbender that's because an LLM is now more effective than the broken Google Search algorithm at surfacing the better results out of all the Gen AI slop the web is now drowning in.
I don't have conversations because there isn't anything to converse with, but I just type what I would have typed into Google back when it was useful. I find it supremely ironic that this is my main use case for LLMs.
-
@emilymbender It's fine, they have n-dimensional guardrails.
-
@joeinwynnewood @emilymbender Correct, and settings reverting to defaults is exactly why I think the default position is that DDG doesn't actually provide an opt-out, despite having a toggle switch which says otherwise. If my anonymity or preference is enforced only after presenting an ID and pulling a personal file, that ain't an opt-out. Chart the decision tree out and the opt-out works for one specific path, coincidentally the path of idgaf and rawdog the internet. But, yeah, they have a toggle.