FARVEL BIG TECH

ugh I remember this mf from 90s usenet, he would pontificate endlessly but never seemed to actually work on anything

Uncategorized · 15 posts · 7 posters · 0 views

This topic has been deleted. Only users with topic management privileges can see it.

• regehr@mastodon.social

  ugh I remember this mf from 90s usenet, he would pontificate endlessly but never seemed to actually work on anything

• darkuncle@infosec.exchange #4

  @regehr some folks I respect believe him, but history kind of shows that if anything *can* be built, it will be … so we better get to working on risk mitigation here beyond hoping somebody doesn’t do the thing.

• glitzersachen@hachyderm.io #5

  @darkuncle @regehr

  I am relying on the idea that it cannot be built. Not by these people. My bet is that AGI is still 250 years out. The history of science is actually on my side there: the effort needed to build "artificial creatures" has been underestimated since at least 1800 (plus/minus). I'd be surprised if it came out differently this time around.

  And OpenAI's "everything is a neural network, the rest will emerge / can be trained" stance totally ignores previous work on possible architectures of mind.

  After the bubble bursts, the topic will be so toxic that it won't be touched again until 50 years later (2075) at the earliest. In the time in between we'll be busy pushing back other apocalypses ... so it's even likely we'll not be in the mood in 2075 to start a new AI research program.

  For people who think (technological) progress is steadily upward, I'd like to point to the space program.

• pozorvlak@mathstodon.xyz #6

  @glitzersachen "current approaches won't scale to ASI" seems plausible (though not so plausible I want to bet the farm on it), but you totally lost me at "...and then there will be a fifty-year AI winter". I give it five years max after the current AI bubble bursts before the next one starts inflating.

  @darkuncle @regehr

• futurebird@sauropods.win #7

  @pozorvlak @glitzersachen @darkuncle @regehr

  I will bet the farm on it. Or the condo... or whatever.

  Intelligence is hard, just like robotics is hard.

  We have programs that can make plausible text if you give them nearly all the text ever made. The world isn't made of text. Thinking isn't text.

  What we don't have are systems that can reason deductively while adjusting their foundational assumptions inductively. The whole approach isn't even right.

• futurebird@sauropods.win #8

  @pozorvlak @glitzersachen @darkuncle @regehr

  And you can't have thinking without the layer of emotion. Not because reasoning is emotionally motivated, but it's obviously important, so you'd need to build that into the system.

  These people think the whole brain is just emergent and not tailored to managing the human body in human contexts over deep time.

  It's nonsense!

• futurebird@sauropods.win #9

  @pozorvlak @glitzersachen @darkuncle @regehr

  For most of human history paragraphs of text have been a reliable sign that there is a thinking human mind that reasoned to create that text. This isn't true anymore.

  But text is just like footprints. It's not the thing itself. And it's possible to fake convincing footprints and possible to fake text.

  That is all that is happening.

• futurebird@sauropods.win #10

  @pozorvlak @glitzersachen @darkuncle @regehr

  I remember when there was a debate about whether people who couldn't use language were really able to think. Wildly ableist stuff. In the course of the debate some people said that if they didn't "hear" a voice, kind of like narration, in their mind, they weren't thinking.

  Which is wild to me as someone whose thoughts are these things I struggle to condense into the limited and awkward strictures of words.

• jwcph@helvede.net shared this topic

• jwcph@helvede.net #11

  @futurebird @pozorvlak @glitzersachen @darkuncle @regehr This is an extremely important point, so for anyone interested in extremely important points, here is Karawynn Long's article about how language is an incredibly bad & harmful shorthand for intelligence 👉 https://ninelives.karawynnlong.com/language-is-a-poor-heuristic-for-intelligence/

• pozorvlak@mathstodon.xyz #12

  @jwcph will read, thanks! If you read Turing's 1950 paper then it's clear he used conversation *as a way of administering arbitrary cognitive tests to the machine*, not because he thought there was anything special about conversation itself. Still not a perfect test, but not bad for a first cut - sadly we haven't really moved on since!

  @futurebird @glitzersachen @darkuncle @regehr

• jwcph@helvede.net #13

  @pozorvlak @futurebird @glitzersachen @darkuncle @regehr I haven't actually read Turing's paper, but as far as I understand he was well aware that his test concerned whether a machine can convince a human counterpart that it is intelligent, not proving whether it actually *is* intelligent. So basically faking it.

• pozorvlak@mathstodon.xyz #14

  @jwcph (a) yes, (b) no. The idea is to operationalise the nebulous question "can machines think?" by replacing it with "can a machine successfully play the Imitation Game?", just as Scoville operationalised "how hot is this pepper?" by replacing it with "by what factor must we dilute an extract of this pepper so that a panel of trained judges can no longer detect the heat?" Turing admits (page 2) that it may be possible to construct a machine whose operations are worthy of the name "thinking" but which cannot play the Imitation Game, but he thinks that if a machine can successfully play the Imitation Game against a sceptical judge, asking questions drawn from "almost any one of the fields of human endeavour that we wish to include", then whatever it's doing deserves to be called "thinking". That's a *much* harder challenge than producing text which is human-like enough to fool the casual observer: arguably that easier test was passed by Eugene Goostman back in 2014.

  Anyway, I strongly recommend reading the paper: it's short, beautifully written, and answers most of the common objections that are raised to it. There's a copy at
  https://courses.cs.umbc.edu/471/papers/turing.pdf.

  @futurebird @glitzersachen @darkuncle @regehr
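
A minimal sketch of the structure the post above describes, for anyone who wants it concrete. Everything in it is illustrative scaffolding rather than anything from Turing's paper or from this thread: the names (`Respondent`, `imitation_game`), the placeholder judge, and the fixed question list are assumptions, and in the real game the judge questions the players interactively rather than from a list. The two sample questions are paraphrased from the specimen dialogue in the paper.

```python
# Illustrative sketch only: names and structure are assumptions, not an
# implementation from Turing (1950). The real Imitation Game is interactive;
# here the judge works from a fixed list of written questions for simplicity.
import random
from typing import Callable, Dict, List, Tuple

Respondent = Callable[[str], str]               # maps a written question to a written answer
Transcripts = Dict[str, List[Tuple[str, str]]]  # label -> list of (question, answer) pairs

def imitation_game(questions: List[str],
                   judge: Callable[[Transcripts], str],
                   human: Respondent,
                   machine: Respondent) -> bool:
    """Return True if the judge fails to point at the machine."""
    # Hide the players behind arbitrary labels so only the text channel matters.
    players: Dict[str, Respondent] = {"X": human, "Y": machine}
    if random.random() < 0.5:
        players = {"X": machine, "Y": human}

    # Every question goes to both hidden players; the judge never sees who is who.
    transcripts: Transcripts = {label: [] for label in players}
    for q in questions:
        for label, respond in players.items():
            transcripts[label].append((q, respond(q)))

    # The judge reads only the transcripts and names the label they believe is the machine.
    accused = judge(transcripts)
    machine_label = next(label for label, p in players.items() if p is machine)
    return accused != machine_label             # the machine "wins" if the judge guesses wrong

# Toy run; questions paraphrased from the specimen dialogue in Turing's paper.
if __name__ == "__main__":
    passed = imitation_game(
        questions=["Please write me a sonnet on the subject of the Forth Bridge.",
                   "Add 34957 to 70764."],
        judge=lambda transcripts: "X",          # placeholder judge: always accuses "X"
        human=lambda q: "I would rather not write poetry on demand.",
        machine=lambda q: "Count me out on this one. I never could write poetry.",
    )
    print("machine passed:", passed)
```

The point of the sketch is only that the test is a protocol (hidden players, a text-only channel, a sceptical judge free to probe any field), not a property of the text itself.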

• jwcph@helvede.net #15

  @pozorvlak @futurebird @glitzersachen @darkuncle @regehr Thank you for clarifying - I will 😊
