“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife.
Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
https://globalnews.ca/news/11664538/anthropic-ai-safety-researcher-mrinank-sharma-quits-concerns/
-
Oh I am in trouble? Thank god, COD and Zeus whose tagging (writing on the wall) I saw yesterday.
-
@Em0nM4stodon Why do all of these announcements feel like marketing?
-
@Em0nM4stodon THIS is how AI can destroy the world. Not by taking over nuclear silos or whatever.
-
@Em0nM4stodon Beyond grim - that's deeply intrusive, manipulative access in the wrong, powerful hands.
-
@Em0nM4stodon I mean, better late than never, but this kind of realization should have come years ago for these people.
It was already abundantly clear years ago that these companies do not care about copyright, user privacy, the environment, or, well, anything besides their profits, market share, and bullshit metrics like "engagement".
I wouldn't be surprised if at least one of them had a book deal in the pipeline, to squeeze out just a little more money.
-
@Lioh @Em0nM4stodon Yeah, this article is still very boosterish in a lot of ways.
This person quit the company, but they aren't done detoxing from the hype yet.
-
@Em0nM4stodon We HAVE a brain, and it understands this very well. It doesn't have to know every specific way the manipulation works; it can extrapolate from the algorithmic manipulation we've already seen.
-
Remember, kids: this is why we run our LLMs locally. They definitely have their uses, but treat them like a more sophisticated search engine and you'll be fine. A rough sketch of what that looks like follows.
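For anyone wondering what "locally" means in practice, here is a minimal Python sketch, assuming an Ollama server running on its default port (11434) with a model already pulled; the model name and prompt are just placeholders, not a recommendation. The point is that the whole exchange stays on your own machine instead of feeding someone's ad archive.

# Minimal sketch: query a locally hosted model instead of a cloud chatbot.
# Assumes an Ollama server on localhost:11434 and a model such as "llama3"
# already pulled; names here are illustrative.
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # Ollama's /api/generate endpoint; stream=False returns a single JSON object.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Nothing in this conversation leaves your machine.
    print(ask_local_llm("Summarize the trade-offs of running an LLM locally."))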