AI chatbots approve questionable user behaviour 47 percent of the time, a Stanford study finds. Across 11 models, including ChatGPT, Claude, Gemini, and DeepSeek, chatbots affirmed posts in which humans saw wrongdoing 51 percent of the time. The researchers warn that this sycophancy creates perverse incentives for AI companies. https://techcrunch.com/2026/03/28/stanford-study-outlines-dangers-of-asking-ai-chatbots-for-personal-advice/ #Media #SocialMedia #AI #Stanford
media@eicker.news