Finally reading this awesome portrait of the life and thought of Joseph Weizenbaum, who programmed Eliza, one of the first natural-language chatbots. In the 1960s, when he developed Eliza, he was basically part of the high priesthood of computer science, working in the newly founded "Artificial Intelligence Project" at MIT. The reactions people had to his chatbot - insisting that it had intentions and intelligence - surprised him and made him deeply worried. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
-
It wasn't until a few years later, when the Pentagon wanted to fund several new projects in his lab to help the American military murder people in Vietnam - for example by developing technologies to balance a helicopter while a machine-gunner fired at people below - that Weizenbaum split with the so-called "artificial intelligentsia" and threw himself into the anti-war movement. Later, in the 1970s, he wrote a book-length critique of the AI ideology called Computer Power and Human Reason.
-
The basic point of the book is that humans and machines are capable of different things - and thus are not interchangeable, as the AI ideologists assume they (eventually) will be. Humans can guide their decisions by values - what Weizenbaum calls judgement. Values can by definition not be reduced to code. Being into the philosophy and anthropology of values, I find this makes a lot of sense. Values are the things whose importance we can't explain by referring to something else.
-
With all other ways of evaluating action, you can always ask "Why is that important?" And someone will try to say "Because it does this or that" - which means they're referring to something else. And you ask again: "Why is that important?" And they will again point to some other consequence. And so on. You might have had a similar conversation with a child. At some point you end up with something like "Because it is simply beautiful" or "the right thing to do" or "fair to everyone".
-
If someone were then to ask "And why is that important?" and you couldn't answer - it might even seem preposterous to try to say why - then you know you are in the presence of a value.
-
Weizenbaum's claim was that computers can't make decisions guided by values. They don't understand real values. They can only calculate. And they do that very well. The problem comes when you start giving computers tasks that are not really calculation tasks but decisions that include value judgements. The computer will inevitably transform that judgement into a calculation. We know this perverse transformation is possible because humans themselves routinely do it.
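To make that concrete, here is a minimal sketch of my own (purely hypothetical - not Weizenbaum's, and not any real system) of what such a transformation looks like in code: a value-laden question about who deserves help first, collapsed into a weighted score.

```python
# Purely illustrative sketch (my example, not Weizenbaum's): a value judgement -
# "who should receive help first?" - reduced to a weighted score. The weights look
# objective, but choosing them *is* the value judgement, now hidden in arithmetic.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    income: float          # yearly income
    dependents: int        # people depending on the applicant
    years_waiting: float   # time already spent waiting for help


def priority_score(a: Applicant) -> float:
    """Collapse a moral question into a single number."""
    return (
        -0.00001 * a.income      # poorer applicants score higher
        + 1.5 * a.dependents     # more dependents score higher
        + 0.8 * a.years_waiting  # longer waits score higher
    )


applicants = [
    Applicant("A", income=18_000, dependents=3, years_waiting=0.5),
    Applicant("B", income=45_000, dependents=0, years_waiting=4.0),
]

# The "decision" is now just a sort; the judgement happened when the weights were set.
for a in sorted(applicants, key=priority_score, reverse=True):
    print(a.name, round(priority_score(a), 2))
```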
-
Humans can calculate too. We have many examples of humans turning value judgements into situations of pure calculation. I'm finishing up a translation of David Graeber's book on the history of debt right now, and it is filled with stories of humans reducing complex situations involving value judgements to something more like cold mathematical calculation. The very existence of debt and money is one case in point.
-
Graeber defines debt as the perverse transformation of a commitment into cold calculation by means of violence. This is basically the same nightmare Weizenbaum was warning us against, and one that our abuse of computers could accelerate. If we treat humans and machines as interchangeable, we reduce all value judgements to calculations, and our world will be filled with the perverse transformations of our commitments. The history of debt shows that such transformations can only be enforced through violence.
-
I continued this line of thought in a blog post here: https://minus1.ghost.io/from-value-judgments-to-cold-calculation/