We'll see how I feel in the morning, but for now I seem to have convinced myself to actually read that fuckin Anthropic paper
-
There's a whole series of recent studies from MIT, CMU, Boston Consulting Group, BBC, and Oxford Economics arguing that AI/LLM assistants do NOT improve productivity.
Walk-through here:
-
And now we have actual research questions! It feels like it shouldn't take this long to get these, but w/e
1. Does AI assistance improve task completion productivity when new skills are required?
2. How does using AI assistance affect the development of these new skills?
We'll learn how the authors propose to answer these questions in the next chapter: Methods.
But first, there is a 6 year old in here demanding I play minecraft, and I'd rather do that.
To be continued... probably
Chapter 4. Methods.
Let's go
First, the task. It's uh. It's basically a shitty whiteboard coding interview. The assignment is to build a couple of demo projects for an async Python library. One is a non-blocking ticker. The other is some I/O ("record retrieval"; not clear if this is the local filesystem or what, but probably the local fs) with handling for missing files.
Both are implemented in a literal whiteboard coding interview tool. The test group gets an AI chatbot button, and encouragement to use it. The control group doesn't.
/sigh
I just. Come on. If you were serious about this, it would be pocket change to do an actual study
-
@jenniferplusplus thank you so much for doing this. I skimmed and just couldn’t bring myself to read it all, and it’s nice to see someone doing a much deeper read but coming to largely the same conclusions.
-
@glyph I would do this more, but the format of academic papers is so cumbersome. The time I actually have available for it is on the couch, after the kid's in bed. But reading these things on a phone is basically impossible
-
@jenniferplusplus @glyph I had only read the anthropic summary. I was struck by how even if all their methods and study design were great (& a good sample etc) the results seemed to very much indicate LLM use isn't as transformative as the hype with major risks of deskilling impacts. I was surprised they published it just reading their own summary. I guess they had to make lemonade from lemons??
-
@jenniferplusplus all the more reason I appreciate you putting the effort in!
-
@r343l @jenniferplusplus as I put it earlier today: https://mastodon.social/@glyph/115992279951399934
-
@jenniferplusplus @glyph The state of the industry today is a real milestone, but it has 60 years of history behind it. The Turing test and Joseph Weizenbaum's "Eliza" (the same kind of test) are passed easily on any machine now. But for many people, the old myths about AI never changed.
-
@jenniferplusplus @glyph I'm older too, and I often compare AI to the moon landing of the 1960s, when AI also got its professional start at MIT. The most confounding question about Apollo's success was: now that we've reached this goal that millions dreamed about, what do we want there? And what is our next stepping stone?
-
@jenniferplusplus @glyph My beloved fantasy and sci-fi book was and is Solaris by Stanisław Lem (Poland, 1961):
https://en.wikipedia.org/wiki/Solaris_%28novel%29
A mystic ocean on a distant planet materializes the traumas in human minds. The astronauts there suffer visitations from a deceased child or partner, e.g. lost to suicide.
The upshot is that humanity pushes its frontiers as far as possible to escape daily routine on Earth, and only ends up facing itself, as in a mirror.
-
@r343l @glyph
As I've learned, they did some preregistration for the study. That might have influenced them. And a whole bunch of these AI researchers really do seem to think of themselves as serious scientists doing important work. Particularly at Anthropic, as that's where a lot of the true believers ended up
-
Found it! n=52. wtf. I reiterate: 20 billion dollars, just for this current funding round, and they only managed to do this study with 52 people.
But anyway, let's return to the methods themselves. They start with the design of the evaluation component, so I will too. It's organized around 4 evaluative practices they say are common in CS education. That seems fine, but their explanation for why these things are relevant is weird.
1. Debugging. According to them, "this skill is crucial for detecting when AI-generated code is incorrect and understanding why it fails."
Maybe their definition is more expansive than it seems here? But it's been my experience, professionally, that this is just not the case. The only even sort-of reliable mechanism for detecting and understanding the shit behavior of slop code is extensive validation suites.
-
2. Code Reading. "This skill enables humans to understand and verify AI-written code before deployment."
Again, not in my professional experience. It's just too voluminous and bland. And no one has time for that shit, even if they can make themselves do it. Plus, I haven't found anyone who can properly review slop code, because we can't operate without the assumptions of comprehension, intention, and good faith that simply do not hold in that case.
-
@jenniferplusplus the latter part is especially true and I don't have any sort of strategy for handling it. I have to read every single line of LLM code because the space of possible mistakes it can make is so large. With humans, even if someone really doesn't know what they are doing, there are only so many kinds of things they could conceivably screw up.
-
3. Code writing. Honestly, I don't get the impression they even understand what this means. They say "Low-level code writing, like remembering the syntax of functions, will be less important with further integration of AI coding tools than high-level system design."
Neither of those things is a meaningful facet of actually writing code. Writing code exists entirely in between those two things. Code completion tools basically eliminate having to think about syntax (but we will return to this). And system design happens in the realm of abstract behaviors and responsibilities.
-
4. Conceptual. As they put it, "Conceptual understanding is critical to assess whether AI-generated code uses appropriate design patterns that adheres to how the library should be used."
IIIIIII guess. That's not wrong, exactly? But it's such a reverse-centaur world view. I don't want to be the conceptual bounds checker for the code extruder. And I don't understand why they don't understand that.
-
@jenniferplusplus they don't understand it because their job depends on them not understanding it
-
So anyway, all of this is, apparently, in service to the "original motivation of developing and retaining the skills required for supervising automation."
Which would be cool, I'd like to read that study, because it isn't this one. This study is about whether the tools used to rapidly spit out meaningless code will impact one's ability to answer questions about the code that was spat. And even then, I'm not sure the design of the study can answer that question.
-
So, back to the paper.
"How AI Impacts Skill Formation"
https://arxiv.org/abs/2601.20245
The very first sentence of the abstract:
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers.
1. The evidence for this is mixed, and the effect is small.
2. That's not even the purpose of this study. The design of the study doesn't support drawing conclusions in this area.
Of course, the authors will repeat this claim frequently. Which brings us back to MY priors, which is that this is largely a political document.
@jenniferplusplus oh gods I need to read this.
-
@mattly I mean, yes. But still.
Maybe what I don't understand is why everyone else goes along with it.
