AI is not inevitable.
-
@olivia @apostolis ok, now that we have the contrast clear between contexts in which damage arises from someone ordering people to use AI and ones where the problems stem from individuals voluntarily adopting it (and, in fact, adopting it even in the face of explicit sanction), what form do you think “resistance” should take in the latter?
that is, what, concretely, do you think academics in my position should do?
@UlrikeHahn @apostolis sorry to zoom it out, but why are you so interested in my position over texts when it's so long form all over my website and papers? I think your university does pay AI companies for services, so yes, you can push back on that, so you are the one who is pushing a distinction I personally disagree with!
-
@UlrikeHahn @apostolis yeah, I know many do not like many of the quotes and have trouble with my position
But yes, I do think we need to educate the students: Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243
@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is up-ending academic science, they aren’t. Not because they are wrong, but because they are insufficient.
so it’s important to me to figure out why they’re insufficient and what else we could/should be doing
-
@UlrikeHahn @apostolis sorry to zoom it out, but why are you so interested in my position over texts when it's so long form all over my website and papers? I think your university does pay AI companies for services, so yes, you can push back on that, so you are the one who is pushing a distinction I personally disagree with!
@olivia @apostolis we just crossed replies… maybe the one I just sent answers that?
-
@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is up-ending academic science, they aren’t. Not because they are wrong, but because they are insufficient.
so it’s important to me to figure out why they’re insufficient and what else we could/should be doing
@UlrikeHahn @apostolis ok, I'm excited to see what you come up with!
-
@olivia @apostolis I don’t have trouble with your position, Olivia. I have trouble with the fact that I don’t think the recommendations (including in the linked preprint) are connecting fully with the problem. It would be great if they were, but, from my day-to-day experience of how AI is up-ending academic science, they aren’t. Not because they are wrong, but because they are insufficient.
so it’s important to me to figure out why they’re insufficient and what else we could/should be doing
Sorry to interject my uneducated opinion, but both directions are insufficient alone.
You can look at it from both directions, top-down and bottom-up. And both are necessary.
-
@UlrikeHahn @apostolis ok, I'm excited to see what you come up with!
@olivia @apostolis I don’t have any solution…it all feels pretty intractable to me at the moment, so I’m mainly struggling to understand the problem
what AI is doing to publishing reform is as good an example as any (see below). There is an “industry force” at play here only in as much as there is an industry irresponsibly making available particular products.
The actual causal pathways by which AI is breaking the system involve multiple distinct actors with very different motivations (outright AI slop/fraud, malicious actors, scientists using AI for research in ways that increase productivity but still leave them in charge). Each of these is different, but they are all combining into an overall negative effect.
what I don’t see is how we can solve anything (if we indeed can) without unpacking all that in detail
-
Sorry to interject my uneducated opinion, but both directions are insufficient alone.
You can look at it from both directions, top-down and bottom-up. And both are necessary.
@apostolis @olivia no disagreement with that!
-
@apostolis @olivia no disagreement with that!
@UlrikeHahn @apostolis it's funny mine is seen as top-down tho, but sure, both in this schema are needed — but I am not by any means at any top in any sense
-
@olivia @apostolis I don’t have any solution…it all feels pretty intractable to me at the moment, so I’m mainly struggling to understand the problem
what AI is doing to publishing reform is as good an example as any (see below). There is an “industry force” at play here only in as much as there is an industry irresponsibly making available particular products.
The actual causal pathways by which AI is breaking the system involve multiple distinct actors with very different motivations (outright AI slop/fraud, malicious actors, scientists using AI for research in ways that increase productivity but still leave them in charge). Each of these is different, but they are all combining into an overall negative effect.
what I don’t see is how we can solve anything (if we indeed can) without unpacking all that in detail
@UlrikeHahn @apostolis I don't fully grasp what I did that makes one think I am against different analyses here? So each featured paper here analyses AI from a different angle pretty clearly with different actors: https://olivia.science/ai/#featuredresearch e.g. https://doi.org/10.31234/osf.io/dkrgj_v1
-
@UlrikeHahn @apostolis I don't fully grasp what I did that makes one think I am against different analyses here? So each featured paper here analyses AI from a different angle pretty clearly with different actors: https://olivia.science/ai/#featuredresearch e.g. https://doi.org/10.31234/osf.io/dkrgj_v1
@olivia @apostolis I don’t think I said you are against different analyses?
the point I was trying to make is simply that what is breaking things right now is a confluence of forces and actors. If we are going to counter the destructive effects we need a systemic analysis of how these forces are interacting.
I don’t take you to be someone who would object to that in principle

I suspect what we do have disagreements on is what the relative importance of these different forces and actors is, and what’s required to push back as a result (even in principle)
-
AI is not inevitable. Nothing in human societies is inevitable because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and they can even receive a stipend to live off. Don't let companies dictate the future.
Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099
@olivia Absofinglutely.... true!!!
-
@olivia @apostolis I don’t think I said you are against different analyses?
the point I was trying to make is simply that what is breaking things right now is a confluence of forces and actors. If we are going to counter the destructive effects we need a systemic analysis of how these forces are interacting.
I don’t take you to be someone who would object to that in principle

I suspect what we do have disagreements on is what the relative importance of these different forces and actors is, and what’s required to push back as a result (even in principle)
"Most importantly of all, resistance can and should take on many forms. Remember to rest and take care of yourself and your community. If talking to friends and colleagues is easy, then try to engage them on these issues. If it is not possible to do so, you can instead (or in addition) seek out allies online."
-
@apostolis @olivia the reason why this ultimately matters is that pushing back against the real driver (the “organic” adoption of these tools by individuals) requires me to understand and engage with the perceived value and function these tools have for them…
…and that means trying to understand both what they can and what they can’t do. Simply declaring that these tools are garbage (“semantically meaningless random text generator”) isn’t useful for actually productively countering AI use in this configuration…(if they genuinely were meaningless random text generators I wouldn’t be faced with the negative effects in the first place).
the Fodor quote doesn’t feel like it’s aimed at that kind of understanding
@UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social There is very little that could be credibly called organic adoption when it comes to AI. It is being fiercely pushed in support of hundreds of billions of dollars of investment. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.
-
AI is not inevitable. Nothing in human societies is inevitable because we design them. Healthcare can be free for the public. Books can be bought instead of bombs. Universities can be free for students, and they can even receive a stipend to live off. Don't let companies dictate the future.
Read more in section 3.2 here https://doi.org/10.5281/zenodo.17065099
@olivia this feels good to read again just the day after we (finally) had a first meeting in school to discuss AI (and of course I shared the paper!).
But god, the belief in inevitability is so deeply engrained.
-
@apostolis @olivia no disagreement with that!
@UlrikeHahn @apostolis @olivia
Wow.... interesting discussion, folks. Thank you. I'm a long way from university-level experience, being an engineer in the electronic design industry for over 40 years. We've gone from one computer shared among engineers through to AI assistance on our individual computers. IMHO, we need to separate what AI can do from what they do. Humans almost instinctively anthropomorphise everything. FFS... people still worship an imaginary AI in the sky and.... 1/2
-
@UlrikeHahn @apostolis @olivia
Wow.... interesting discussion, folks. Thank you. I'm a long way from university-level experience, being an engineer in the electronic design industry for over 40 years. We've gone from one computer shared among engineers through to AI assistance on our individual computers. IMHO, we need to separate what AI can do from what they do. Humans almost instinctively anthropomorphise everything. FFS... people still worship an imaginary AI in the sky and.... 1/2
@UlrikeHahn @apostolis @olivia 2/2 ... call it their god(s). I think the best resistance is to cooperate. After all, no matter how human these things can seem, they will never be more than tools. As humans, we "feel" a lot. We need to not let our feelings blind us to what these new tools can do. I'm no teacher. I've found that my method of communication doesn't do well at explaining to others how to think, instead of what to think. I just know the tools we use evolve all the time....
-
@UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social There is very little that could be credibly called organic adoption when it comes to AI. It is being fiercely pushed in support of hundreds of billions of dollars of investment. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.
@abucci @apostolis @olivia I’m going to point you toward the scare quotes around the word “organic” in my post, which are there for precisely those reasons.
I am also going to push back against the notion that I am “placing the responsibility at the feet of students”: I am simply describing the (widely documented) problem in higher education that students are using AI tools in significant volumes *even where their use is explicitly sanctioned and forbidden*.
That is the concrete problem of AI now undermining higher education. Asking what “resisting AI” is supposed to mean for me in that context seems legitimate to me, and if it’s not, Olivia (who I’ve known for a long time as an academic colleague) is more than capable of telling me that herself.
-
@UlrikeHahn@fediscience.org @apostolis@social.coop @olivia@scholar.social There is very little that could be credibly called organic adoption when it comes to AI. It is being fiercely pushed in support of multiple hundreds of billion dollar investment. People are being told repeatedly, in every channel, that AI is inevitable, is here to stay, etc. It is disingenuous to place this responsibility at the feet of students, throw up your hands, or ask someone else to tell you what to do about it. That kind of behavior from people empowered to know and do better is the problem.
@abucci @UlrikeHahn @apostolis @olivia
Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economic side of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard real wonder of actual nature. Is AI part of our nature?
-
@abucci @UlrikeHahn @apostolis @olivia
Wouldn't an approach where the AIs have to pass the class as students be better? After all, regurgitating data is not the way to learn how to think. As for the political/economic side of the whole mess, well, that's on us to some extent. It's a problem educated people deal with all the time, even among each other. IMHO, humanity is still growing up. We've not abandoned our superstitions for the hard real wonder of actual nature. Is AI part of our nature?
@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept it as students)? If yes, I don’t personally really want to make AI systems ‘better’: they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.
-
@lednaBM @abucci @apostolis @olivia if I understand you correctly, you are suggesting we, in a sense, embrace AI and treat it in such a way that makes it better (i.e. accept it as students)? If yes, I don’t personally really want to make AI systems ‘better’: they are causing huge damage and disruption at current levels of performance. I’d personally rather put a brake on that.
@UlrikeHahn @abucci @apostolis @olivia
I understand what you're saying, and maybe language is not serving us well. You seem to have juxtaposed helping it be better versus creating a disruption. And again, maybe I don't fully understand the dilemma. When I need very technical information that I cannot recall or need help with, I would go to a book or a specification. Now, I can ask AI, check its results, and decide whether I can rely upon what's being presented. It's a tool... 1/2