An "option to turn off ai features" isn't even the bare minimum.
The bare minimum of responsibility would be not to push epistemic vandalism such as "ai summaries" on users by default.
(No, "ai" can't summarize dependably. Because that requires understanding.
It will happily introduce biases and distortions, though.)
-
(No, "ai" can't summarize dependably. Because that requires understanding.
It will happily introduce biases and distortions, though.)
@quincy this isn't true in our experience at Discourse when summarizing discussion topics, nor is it my experience when summarizing books, etc. Do you have to read critically? Well, yes, that's true of everything you read. Including this.
-
@quincy this isn't true in our experience at Discourse when summarizing discussion topics, nor is it my experience when summarizing books, etc. Do you have to read critically? Well, yes, that's true of everything you read. Including this.
@codinghorror What some of us who would rather not read LLM summaries are getting at is that they double the amount you need to read critically. In human dialogue, we sometimes summarize or paraphrase each other, but then we usually ask the source a question ("Did I get you?") and wait for an answer (yes/no). Here, every reader would need to read the LLM summary, then the original text, to critically check whether the summary got it right. @quincy