AI is not inevitable.
-
@abucci @UlrikeHahn @apostolis @olivia
Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the politics/economics of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard real wonder of actual nature. Is AI part of our nature?
@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept them as students)? if yes, I don’t personally really want to make AI systems ‘better’ - they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.
-
@UlrikeHahn @abucci @apostolis @olivia
I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it be better versus creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now, I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2
-
@UlrikeHahn @abucci @apostolis @olivia 2/2 tools generally need calibration. Is it possible to use the disruption you speak of as a teaching moment? I don't know. Am I being foolish about the political/economic consequences of those benefitting from the disruption? Maybe. I agree with the original poster. We should have free education, health care, and representation in the way we govern ourselves. The problem there is, IMHO, the white elephant that is religion working against secular human values.
-
@lednaBM @abucci @apostolis @olivia I think one of the problems, particularly in the context of education, lies in the idea that “now I can use AI to give me an answer and check the results”. It is precisely the “ability to check the results” in a particular scientific or academic discipline that higher education degrees are trying to provide. Students leaning on AI to “find” answers undermines the learning of the skills that underpin “the ability to check”.
-
@UlrikeHahn @abucci @apostolis @olivia
That's a great point. Teaching youth only to rely upon AI sounds like a mistake. I guess I have trouble with the notion that AI is anything more than a tool. Its applications threaten a lot, probably a lot beyond its scope, but not beyond its profit scam. Hopefully, some applications are identified as misapplications. I'm reminded of Huxleys Brave New World. Will AI be the soma drug to placate the masses, even though they were designed to be placated. -
-
@lednaBM @UlrikeHahn @abucci @apostolis @olivia At the risk of butting into this conversation, I think the problem here is that you think that "just a tool" is a neutral concept.
Tools, by their very nature, change the way we interact with the world. Cars are "just a tool", but dependence on cars for transport has both positive and negative effects, because of how their use changes how we behave (and what other things we want to change about the world now "we" want to use cars all the time). Is "car-using humanity" healthier than "pre-car humanity"?
In this sense, even if "AI is just a tool", the existence of cognitive tools *clearly* implies that use of them will change the way people behave - *regardless* of any concept of "applications being identified as misapplications". Dependence on a tool for *thinking* feels inherently more problematic than dependence on a tool for travelling distances...
-
@aoanla @lednaBM @abucci @apostolis @olivia that’s very well put!
-
@lednaBM@stranger.social @UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social The fact that you can selectively ignore the strings of a marionette does not mean it is alive, part of our nature, or able to attend and pass a course. I suspect this is even obvious to AI!
-
@abucci @apostolis @olivia I’m going to point you toward the scare quotes around the word “organic” in my post, which are there for precisely those reasons.
I am also going to push back against the notion that I am “placing the responsibility at the feet of students”: I am simply describing the (widely documented) problem in higher education that students are using AI tools in significant volumes *even where their use is explicitly forbidden and subject to sanctions*.
That is the concrete problem of AI now undermining higher education. Asking what “resisting AI” is supposed to mean for me in that context seems legitimate to me, and if it’s not, Olivia (who I’ve known for a long time as an academic colleague) is more than capable of telling me that herself.
@UlrikeHahn@fediscience.org You stated you were pushing back against the characterization of your stance that you were laying responsibility at the feet of your students, and then immediately placed responsibility at the feet of the students! Are you really unable to see this in your own post?
@apostolis@social.coop @olivia@scholar.social
-
@aoanla @lednaBM @UlrikeHahn @abucci @apostolis
Indeed! FWIW I touch on tools versus technologies in this context here if useful. https://scholar.social/@olivia/114937376930475208
-
@aoanla@hachyderm.io @lednaBM@stranger.social @UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social The "just a tool" framing also does a great deal of heavy lifting for the political project that AI represents and forwards. What saddens me most is that this project is nearly transparent, its actors almost totally honest about what they are attempting to accomplish even as they dissemble about it. Yet we go around and around in circles about whether these things are "just" tools, or wring our hands about what to do about students using them, or waffle about whether the tools are useful or have this or that impact on productivity. These things are symptoms, not causes.
-
@abucci @apostolis @olivia let me say this then: I find your original reply to me, someone you have never met, aggressive and inflammatory.
One of the main benefits of exchange on platforms like this, to me, lies in being able to talk things through with others whose opinion and expertise I value but who disagree with me - that allows me to learn things and clarify my thoughts, and I’ve found this exchange with Olivia really helpful in that regard.
Trying to navigate disagreement in a way that doesn’t lead to conflict is incredibly hard. In a context like this thread, where people are investing significant effort in trying to navigate disagreement in a constructive way, I don’t personally have time, energy, or interest in exchanges with people who aren’t making that effort. The world is fraught enough as it is.
-
@olivia Olivia, what would it mean for me to “refuse adoption” in universities when it is students who are the drivers for my courses and they are widely using AI in ways that are already forbidden?
I feel like the “resistance” and critique-of-inevitability talk isn’t quite connecting with my reality on the ground.
I teach those students as well. We see them coming in with 85%+ self-reported past-week AI use, so we started surveying them on K, A & B (knowledge, attitudes, and behaviors) regarding that use. They return the party line: it's everywhere and they have to use it in order to succeed. We switched to a Write To Learn model and moved our assessments away from format, grammar, and spelling and toward content, personal insight, and creativity. That's after a module on the history and mechanics of LLMs, not just the problem with hallucinations but how they form as AI constructs its responses.
The feedback was and remains positive for short, low-threat assignments. If I can personally generate bursts of dopamine similar to a chatbot's injections of "great question" and other disingenuous slop, then perhaps I can actually engage with the learner.
One last point: my institution just bought a one-year instance of openai.edu, and the students are HAMMERING leadership over the expense, environmental impacts, and stolen creativity. Our shared governance organization is pushing back, citing this industry interaction as a failure of the shared decision making articulated in our governing constitution. AI is pedagogy, and that's faculty business, not the job of leadership.
-
@abucci @UlrikeHahn @apostolis @olivia I mean, it is a fact that students are massively relying on AI in a way that is impacting education. One can wonder about the causes or what to do about it, but merely stating that fact is not putting any responsibility on anyone.
-
@lednaBM @abucci @UlrikeHahn @apostolis @olivia
The tacit assumption here, that LLMs possess intelligence, is false. Their purpose is not to give intelligent answers. Their purpose is surveillance.
AGI = Automated Gathering of Intel
-
@mycotropic @olivia I’ve tried to use a course on cognition (and the computational metaphor) to give students a better understanding of the basics of LLMs along the same lines and with similar intentions. But there’s a limit to the “educational” approach (here or elsewhere) because it’s not going to be effective with those using the tools in bad faith.
So part of the response to AI use has to take those bad-faith cases as given (at least currently) and find ways to deal with them, and one of the difficulties with that is finding effective ways to do this that don’t then just further embed AI.
Likewise, I feel that preparing for future disruption requires us to anticipate ways in which these tools might be used.
Both of these require (to my mind) engagement with the tools in ways that, to some extent, take their use as given and try to work from that.
I think it’s regarding those activities that the “resist the inevitability” narrative, and the focus that has gone along with it in practice on telling people these tools are morally problematic and no good, is not really helpful, and maybe even counterproductive.
-
@teledyn @lednaBM @abucci @apostolis @olivia I think the “intelligence” issue is a red herring, personally
in the contexts I’m concerned with, people’s use is driven by the practical value they find in the actual outputs
(I also don’t personally see anyone in this thread who has been assuming that)
-
@UlrikeHahn @lednaBM @abucci @apostolis @olivia
We have already seen arrests, and then the shooting in BC; all US-based services are required to retain user data; and the #ELIZAeffect.
So carry on. Don't mind me. Enjoy.