Finally reading this awesome portrait of the life and thought of Joseph Weizenbaum, the programmer of one of the first natural language chatbots called Eliza.

malte@radikal.social (#1):

Finally reading this awesome portrait of the life and thought of Joseph Weizenbaum, the programmer of one of the first natural language chatbots, called Eliza. In the 1960s, when he developed Eliza, he was basically part of the high priesthood of computer science, working in the newly founded "Artificial Intelligence Project" at MIT. The reactions that people had to his chatbot - insisting that it had intentions and intelligence - surprised him and made him deeply worried. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
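To get a sense of how little machinery was behind those reactions, here is a minimal sketch of an ELIZA-style responder: a handful of regex rules plus pronoun "reflection". This is not Weizenbaum's original program (that was written in MAD-SLIP and driven by scripts such as DOCTOR); the rules below are invented purely to illustrate the pattern-matching idea.

    # Minimal ELIZA-style responder: a few regex rules plus pronoun reflection.
    # Illustrative sketch only; the rules are invented for demonstration.
    import re

    # Swap first- and second-person words so the reply mirrors the speaker.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "are": "am",
    }

    # (pattern, response template) pairs, tried in order; the last always matches.
    RULES = [
        (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
        (re.compile(r".*"), "Please go on."),
    ]

    def reflect(fragment: str) -> str:
        """Turn 'my mother' into 'your mother', 'me' into 'you', and so on."""
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(utterance: str) -> str:
        """Return the canned response of the first rule whose pattern matches."""
        for pattern, template in RULES:
            match = pattern.match(utterance.strip().rstrip(".!?"))
            if match:
                return template.format(*[reflect(g) for g in match.groups()])
        return "Please go on."  # unreachable: the catch-all rule always matches

    print(respond("I feel nobody ever listens to me"))
    # -> Why do you feel nobody ever listens to you?
    print(respond("My code does not work."))
    # -> Tell me more about your code does not work.

That is all there is: no model of the conversation, no understanding, just canned templates echoing the user's own words back. Yet people confided in it and insisted it understood them.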

malte@radikal.social (#2):

It wasn't until a few years later, when the Pentagon wanted to fund several new projects in his lab to help the American military murder people in Vietnam, e.g. by developing technologies to balance a helicopter while a machine-gunner fired at people below, that Weizenbaum split with the so-called "artificial intelligentsia" and threw himself into the anti-war movement. Later in the 70s he would write a book-length critique of the AI ideology called Computer Power and Human Reason.

malte@radikal.social (#3):

The basic point of the book is that humans and machines are capable of different things - and thus are not interchangeable, as the AI ideologists assume they (eventually) will be. Humans can guide their decisions by using values - what Weizenbaum calls judgement. Values can by definition not be reduced to code. Being into the philosophy and anthropology of values, this makes a lot of sense to me. Values are the things that we can't explain the importance of by referring to something else.

malte@radikal.social (#4):

With all other ways of evaluating action, you can always ask "Why is that important?" And someone will try to say "Because it does this or that", which means they're referring to something else. And you ask again: "Why is that important?" And they will again name some other consequence. Etc. You might have had a similar conversation with a child. At some point you end up with something like "Because it is simply just beautiful" or "the right thing to do" or "fair to everyone".

malte@radikal.social (#5):

If someone were then to ask "And why is that important?" and you weren't able to answer - it might even seem preposterous to say why it is important - then you know you are in the presence of a value.

malte@radikal.social (#6):

Weizenbaum's claim was that computers can't make decisions guided by values. They don't understand real values. They can only calculate. And they do that very well. The problem is when you start giving computers tasks that are actually not calculating tasks but decisions that include value judgements. The computer will inevitably transform that judgement into a calculation. We know this perverse transformation is possible because humans themselves routinely do it.

malte@radikal.social (#7):

Humans can calculate. We have many examples where humans turn value judgements into situations of pure calculation. I'm finishing up a translation of a book by David Graeber on the history of debt right now, and it is filled with stories about humans reducing complex situations involving value judgements to something more like cold mathematical calculation. The very existence of debt and money is one case in point.

malte@radikal.social (#8):

Graeber defines debt as the perverse transformation of a commitment into cold calculation by means of violence. This is basically the same nightmare that Weizenbaum was warning us against and that our abuse of computers could accelerate. If we treat humans and machines as interchangeable, we reduce all value judgements to calculations. Our world will be filled with the perverse transformations of our commitments. The history of debt shows that this can only be enforced through violence.

malte@radikal.social (#9):

I continued writing and posted it as a blog post here: https://minus1.ghost.io/from-value-judgments-to-cold-calculation/
