@emilymbender Excellent article, thanks!
Part of the issue is that LLMs often work best when they are prompted to act as a person in a specific role: "You are a skilled X" is a common type of system prompt. This kind of role-playing setup seems to elicit outputs resembling what a human in that role would produce, which is what the user wanted (or at least the closest the model can get, within its limitations). So the anthropomorphism cuts deep into the behaviour of the model.
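For anyone who hasn't seen it in practice, here's a minimal sketch of what that looks like, assuming the OpenAI Python client (the model name and role text are illustrative; any chat API with a system/user message split works the same way):

```python
# Minimal sketch of role prompting, assuming the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        # The system message assigns the persona that the rest of the
        # conversation is conditioned on.
        {"role": "system", "content": "You are a skilled copy editor."},
        {"role": "user", "content": "Tighten this paragraph: ..."},
    ],
)
print(response.choices[0].message.content)
```

The persona line is doing a lot of work there: swap "skilled copy editor" for something else and the tone and content of the output shift with it.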