Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
-
@joeinwynnewood @emilymbender Do you have recommendations for ones that aren't DuckDuckGo?
@evannakita @joeinwynnewood @emilymbender I use Startpage at times! @StartpageSearch Also, cool profile and background picture!
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with them so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 @jplebreton @emilymbender
Bang on.
-
Measuring and reviewing the quality of its summaries? How does a great AI house do that?
Unleashing one AI instance on another, and letting them "duke it out"?
I think I like it.
I was being ironic. I hate everything about generative, text-puking "AI". Its environmental impact. Degradation of workers. Shoddy, unreliable output. Theft of thoughtful, imaginative or well-researched content. Fooling people into thinking they are being "helped" by a sentient, caring being. Erroneous "correcting" of my work.
Sorry for my misleading attempt at irony. (But I would love to see AI battle-bots disrupt the blasted AI machines and destroy them.)
-
I've been shouting about this since Google first floated the idea of LLMs as a replacement for search in 2021. Synthetic text is not a suitable tool for information access!
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
/fin (for now)
@emilymbender
The con is to allow the user to imagine that the chatbot is AGI and not an LLM. Once it is clear what an LLM is, it is useful for finding normative language on any number of topics; that has been my experience. The AI market capitalization is criticized as a bubble. I think it is, too. I think that the misunderstanding regarding the chatbot will bite. I discussed this matter with DeepSeek: https://johntinker.substack.com/p/misunderstanding-as-a-commutator
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with them so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 @jplebreton @emilymbender
Forbes is full of shit. Those five mega-billionaires, including Bill Gates and Charles Koch, are not giving a bent dime to "boost economic mobility" in the US, using AI or anything else.
They are trying to recoup the many more billions their organizations have spent and will spend on AI by forcing the world to accept their flawed products. Their purpose is to dominate the citizenry, not make it more able to climb a rigged economic ladder.
-
@Npars01 @jplebreton @emilymbender
Forbes is full of shit. Those five mega-billionaires, including Bill Gates and Charles Koch, are not giving a bent dime to "boost economic mobility" in the US, using AI or anything else.
They are trying to recoup the many more billions their organizations have spent and will spend on AI by forcing the world to accept their flawed products. Their purpose is to dominate the citizenry, not make it more able to climb a rigged economic ladder.
@huntingdon AI will be a drug that we will be addicted to.
-
@joeinwynnewood @emilymbender Do you have recommendations for ones that aren't DuckDuckGo?
Sorry, no. I think DuckDuckGo is a decent option. They don't keep your search history, and they produce good search results.
-
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
@emilymbender If the approach to making AI return good results is to put in a bunch of special cases, can we just go back to writing proper expert systems?
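(For illustration only, a minimal sketch of what query-by-query blocking amounts to; the query strings and function names here are hypothetical, not anything Google has published:)

```typescript
// Hypothetical sketch of query-by-query patching: a hand-maintained
// special-case list, not a fix to the underlying system.
const blockedOverviewQueries = new Set<string>([
  "liver function test results explained",
  "are my liver function tests normal",
]);

function shouldShowAiOverview(query: string): boolean {
  // Every newly reported bad answer needs another manual entry above.
  return !blockedOverviewQueries.has(query.trim().toLowerCase());
}

// A slight rewording slips straight past the "fix":
console.log(shouldShowAiOverview("liver function test results explained"));  // false (patched)
console.log(shouldShowAiOverview("explain my liver function test results")); // true (not patched)
```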
-
@emilymbender
The con is to allow the user to imagine that the chatbot is AGI and not an LLM. Once it is clear what an LLM is, it is useful for finding normative language on any number of topics; that has been my experience. The AI market capitalization is criticized as a bubble. I think it is, too. I think that the misunderstanding regarding the chatbot will bite. I discussed this matter with DeepSeek: https://johntinker.substack.com/p/misunderstanding-as-a-commutator
@johntinker See pinned toot:
https://dair-community.social/@emilymbender/109339391065534153
Also, no, LLMs are not what you think they are, if you are "discussing" anything with them.
-
@joeinwynnewood @emilymbender Ah, yet when you use privacy features within DuckDuckGo, or in numerous available browsers, disabling AI is (predictably) only per session. It would seem the better way would be opt-out by default, with a locally stored flag for enabling AI; that would be useful for those who have... not bothered with privacy -- which is not their target audience? So, I am sorry, but DuckDuckGo does not really allow for disabling AI; it is a misleading claim.
-
@evannakita @joeinwynnewood @emilymbender I use Startpage at times! @StartpageSearch Also, cool profile and background picture!
@Brian @joeinwynnewood @emilymbender Thank you so much, both for the recommendation and for the compliment! I drew both of those and it means a lot to hear.

-
Sorry, no. I think DuckDuckGo is a decent option. They don't keep your search history, and they produce good search results.
@joeinwynnewood @emilymbender No worries, and agreed on it being a decent option! It's what I've been using for a couple years now. I've just been troubled by some recent stuff the company's been doing, so I wanna keep my options open.
-
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
@emilymbender I just wanna say, I'm thrilled that rugbies are back. I always loved rugbies.
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with them so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 @jplebreton @emilymbender Once again I am thinking: I never expected the end of this era to be so *stupid*.
-
@joeinwynnewood @emilymbender I've got AI turning itself on every session across devices despite preferences. If you disallow history, tracking, storage, etc., it is a fresh instance every use. These are all options available in browser settings. So, if you allow for nothing else, then it works as advertised. Which is my point. It's the opposite of what it ought to be, and isn't actually functional if you exercise the options available to you. I hope that makes sense. I appreciate the dialogue.
-
@joeinwynnewood @emilymbender Sorry, let me rephrase a little. If you clear data between uses, or block trackers, it doesn't work. Having a flag from the developers which could be passed to the browser as another category to toggle under the various browser privacy settings, e.g. cookies, tracking, local access, etc., would ensure the intended function and set a functional precedent for browser developers and users, all while allowing for privacy.
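(A rough sketch of the distinction being suggested, with assumed key names and toggle values; this is not DuckDuckGo's or any browser's actual implementation:)

```typescript
// Hypothetical sketch contrasting the current behaviour with the suggestion.
// The storage key and toggle values are assumptions for illustration.
const AI_OPT_OUT_KEY = "ai-features-disabled";

// Roughly today's pattern: the opt-out lives in ordinary site storage, so
// clearing data or blocking storage between uses wipes it, and the feature
// falls back to its default of "on" every fresh session.
function aiEnabledToday(siteStorage: Storage): boolean {
  return siteStorage.getItem(AI_OPT_OUT_KEY) !== "true";
}

// The suggested pattern: off by default, governed by a browser-level privacy
// toggle (alongside cookies, tracking, local access) that is not treated as
// clearable site data, so privacy-conscious settings don't re-enable AI.
function aiEnabledSuggested(privacyToggle: "allow" | "block" | null): boolean {
  return privacyToggle === "allow"; // null (never touched) stays off
}
```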
-
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.
Google: Okay, we'll block "AI" overviews on that query.
The product is fundamentally flawed and cannot be "fixed" by patching query by query.
A short 🧵>>
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
@emilymbender Incredibly, I still hear scientists defending this damn technology. It's like, "I'm always on time for my morning classes thanks to my radium watch! Golly gee, the bus to the math conference runs so much smoother on leaded gasoline."
-
The question is also "LLMs are useful to whom?"
The wealthiest seem overjoyed with them so far.
So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.
It's the best at:
1. Election interference
2. Malign influence campaigns
3. Automated cyberwarfare
4. Manipulation of public sentiment
5. Automated hate campaigns
6. Plausible deniability for funding a fascist movement
7. Frying the planet
@Npars01 @jplebreton @emilymbender so they pledge to give 1/320th of their net worth so that they can pretend even a 1% tax on their total net worth would not work better. Got it.
-
I've been shouting about this since Google first floated the idea of LLMs as a replacement for search in 2021. Synthetic text is not a suitable tool for information access!
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
/fin (for now)
This is the bit that really gets my goat. As someone who focused quite a bit on epistemology and empiricism and science v pseudoscience in college + grad school, it’s like watching a horror film where you know the mistake that everyone is making but they refuse to listen.
That’s really why I listen to the podcast - to know that I’m not alone.
-
Back to the current one, the quotes from Google in the Guardian piece are so disingenuous:
>>
@emilymbender Good News! We told our AI to audit the results of our AI and it told us it passed!