I am reading Anthropic's new "Constitution" for Claude. It is lengthy, thoughtful, thorough...and delusional.
Throughout this document, Claude is addressed as an entity with decision-making ability, empathy, and true agency. This is Anthropic's framing, but it is a dangerous way to think about generative AI. Even if we accept that such a constitution would govern an eventual (putative, speculative, improbable) sentient AI, that's not what Claude is, and as such the document has little bearing on reality.
-
It is worth noting that two of the primary authors—Joe Carlsmith and Christopher Olah—have CVs that do not extend much beyond their employment with Anthropic.
For all the talk of ethics, as near as I can tell, Dr. Carlsmith is the only ethicist involved in the creation of this document. Is there any conflict of interest in the in-house ethicist driving the ethical framework for the product? I'm not certain, but I am certain that more voices (especially some more experienced ones) would have benefited this document.
But ultimately, having read this, I'm left much more afraid of Anthropic than I was before. Despite their reputation for producing one of the "safest" models, it is clear that their ethical thinking is extremely limited. What's more, they've convinced themselves they are building a new kind of life, and have taken it upon themselves to shape its (and our) future.
To be clear: Claude is nothing more than an LLM. Everything else exists in the fabric of meaning that humans weave above the realm of fact. But in this case, that is sufficient to cause factual harm to our world. The belief that this thing is what they purport it to be is itself dangerous.
I again dearly wish we could put this technology back in the box, forget we ever experimented with this antithesis to human thought. Since we can't, I won't stop trying to thwart it.
@mttaggart This is what happens when we let venture capitalists invent folk religions. The music isn't even any good...
-
@mttaggart “We made it long to deter people from reading it” —them probably
-
@hotsoup I honestly believe they were high-fiving, thinking they'd crafted a seminal document in the history of our species.
-
@mttaggart your analysis (and "the new entity" part) made me think about a GIF of the Madagascar penguins high-fiving themselves, on a loop. And my eyes rolled into the back of my head.
-
@mttaggart my brain just keeps going back to Roche's biochemical pathway map: essentially a map of all the chemical interactions in the human body (the ones we know about) and how they relate to each other. It's big. And it's complicated. Each component is relatively simple, but altogether it's a giant mess. Just like the human body.
And I know only a portion of it is related to cognition and emotion. We haven't created that. We haven't even come close. We haven't simulated it. We haven't made a simulacrum of it. And we shouldn't be trying. We can't even get humanity right.
-
@mttaggart
Thank you. You're not alone.
-
@mttaggart
Like many things in this AI hype, this document looks like a PR stunt to catch attention.
-
@gdupont If it were shorter, if it were less considered, if it were less serious in its tone, I'd agree. But no. These are true believers, and this is either apologia or prophecy.
@mttaggart @gdupont This whole thing reminds me of kids playing war games on the playground. They are playing "revolution" now. They heard revolutions need constitutions, and they happen to have these text-writing toys and potato stamps, so they worked *really* hard to produce a "constitution" they can show their shareho^W parents and the enemy kids over at the sandbox.
-
@mttaggart it's worth remembering that Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO), wrote:
"We have published research showing that the models have started growing neuron clusters that are highly similar to humans and that they experience something like anxiety and fear. The moral status might be something like the moral status of, say, a goldfish, but they do indeed have latent wants and desires."
-
@theorangetheme @mttaggart Thing is, folk religion and folklore had a better handle on boundaries and cleanup than VCs do.
We failed the first test the moment we gave our real names to Facebook.
Folklore is very clear on not giving out your name to the Fae.
-
I'm screenshotting the "hard constraints" (with alt text) for easy access.
What is "serious uplift?" The document doesn't define it, so how can the model adhere to this constraint? Also, why only mass casualties? We cool with, like, room-sized mustard gas grenades? Molotovs?
We know Claude has already created malicious code. Anthropic themselves have documented this usage, and I don't think it's stopping anytime soon.
Why is the kill constraint tied to "all or the vast majority?" We cool with Claude assisting with small-scale murder?
Who decides what "illegitimate" control is? The model? Can it be coerced otherwise?
Finally, CSAM. Note that generating pornographic images generally is not a hard constraint. Consequently, this line is as blurry, this slope as slippery, as they come.
This is not a serious document.
@mttaggart those kinda presuppose that Claude *understands* that a prompt will have an impact on critical infrastructure, which is utterly outside of the scope of an LLM o.O
-
@mttaggart Oh crap, can we please just stick, for now, to using LLMs for the tasks they're asked to do, maybe even some decision-making within that task, and drop the bullshit about empathy and other things you need a soul for (I don't believe in God)? Let's forget about that whole AGI crap and focus on how to make this technology a helpful tool, one that will actually be an asset.
-
@mttaggart I'm no biologist or philosopher, but an entity isn't alive unless it can FEEL PAIN and DIE, right?
LLMs like Clod are a weird class of object that are intelligent, sure - but they damn well don't rank above that cow who uses tools, even if the best "life" Clod can hope for is being enslaved by billionaires.
-
@mttaggart The majority of humanity? Cool. Who can complain, as long as it is not the *vast* majority?