ugh I remember this mf from 90s usenet, he would pontificate endlessly but never seemed to actually work on anything
-
@regehr some folks I respect believe him, but history kind of shows that if anything *can* be built, it will be … so we'd better get to work on risk mitigation here beyond hoping somebody doesn't do the thing.
I am relying on the idea that it cannot be built. Not by these people. My bet is that AGI is still 250 years out. Science history is actually on my side there: the effort needed to build "artificial creatures" has been underestimated since at least 1800 (plus or minus). I'd be surprised if it came out different this time around.
And the OpenAI view that "everything is a neural network; the rest will emerge / can be trained" totally ignores previous work on possible architectures of mind.
After the bubble bursts, the topic will be so toxic that it won't be touched again until 50 years later (2075) at the earliest. In the time in between we'll be busy pushing back other apocalypses ... so it's even likely we won't be in the mood in 2075 to start a new AI research program.
For people who think (technological) progress is steady upward, I'd like to point to the space program.
-
@glitzersachen "current approaches won't scale to ASI" seems plausible (though not so plausible I want to bet the farm on it), but you totally lost me at "...and then there will be a fifty-year AI winter". I give it five years max after the current AI bubble bursts before the next one starts inflating.
-
@pozorvlak @glitzersachen @darkuncle @regehr
I will bet the farm on it. Or the condo... or whatever.
intelligence is hard just like robotics is hard.
We have programs that can make plausible text if you give them nearly all the text ever made. The world isn't made of text. Thinking isn't text.
What we don't have are systems that can reason deductively while adjusting their foundational assumptions inductively. The whole approach isn't even right.
-
@pozorvlak @glitzersachen @darkuncle @regehr
And you can't have thinking without the layer of emotion. Not because reasoning is emotionally motivated, but because emotion is obviously important, so you'd need to build it into the system.
These people think the whole brain is just emergent and not tailored to managing the human body in human contexts over deep time.
It's nonsense!
-
@pozorvlak @glitzersachen @darkuncle @regehr
For most of human history paragraphs of text have been a reliable sign that there is a thinking human mind that reasoned to create that text. This isn't true anymore.
But text is just like footprints. It's not the thing itself. And it's possible to fake convincing footprints and possible to fake text.
That is all that is happening.
-
@pozorvlak @glitzersachen @darkuncle @regehr
I remember when there was a debate about whether people who couldn't use language were really able to think. Wildly ableist stuff. In the course of the debate some people said that if they didn't "hear" a voice, kind of like narration, in their mind, they weren't thinking.
Which is wild to me as someone whose thoughts are these things I struggle to condense into the limited and awkward strictures of words.
-
@futurebird @pozorvlak @glitzersachen @darkuncle @regehr This is an extremely important point, so for anyone interested in extremely important points, Karawynn Long's article about how language is an incredibly bad & harmful shorthand for intelligence
https://ninelives.karawynnlong.com/language-is-a-poor-heuristic-for-intelligence/
-
@jwcph will read, thanks! If you read Turing's 1950 paper then it's clear he used conversation *as a way of administering arbitrary cognitive tests to the machine*, not because he thought there was anything special about conversation itself. Still not a perfect test, but not bad for a first cut - sadly we haven't really moved on since!
-
@pozorvlak @futurebird @glitzersachen @darkuncle @regehr I haven't actually read Turing's paper, but as far as I understand he was well aware that his test concerned whether a machine can convince a human counterpart that it is intelligent, not proving whether it actually *is* intelligent. So basically faking it.
-
@jwcph (a) yes, (b) no. The idea is to operationalise the nebulous question "can machines think?" by replacing it with "can a machine successfully play the Imitation Game?", just as Scoville operationalised "how hot is this pepper?" by replacing it with "by what factor must we dilute an extract of this pepper so that a panel of trained judges can no longer detect the heat?" Turing admits (page 2) that it may be possible to construct a machine whose operations are worthy of the name "thinking" but which cannot play the Imitation Game, but he thinks that if a machine can successfully play the Imitation Game against a sceptical judge, asking questions drawn from "almost any one of the fields of human endeavour that we wish to include", then whatever it's doing deserves to be called "thinking". That's a *much* harder challenge than producing text which is human-like enough to fool the casual observer: arguably that easier test was passed by Eugene Goostman back in 2014.
Anyway, I strongly recommend reading the paper: it's short, beautifully written, and answers most of the common objections that are raised to it. There's a copy at
https://courses.cs.umbc.edu/471/papers/turing.pdf
-
@pozorvlak @futurebird @glitzersachen @darkuncle @regehr Thank you for clarifying - I will