Actually, hang on. One more thing occurred to me. Does this exacerbate the difficulty of replication, given that the simple passage of time will render this library no longer new?
And now I'm done for the night, for real
But now it's 1am. I may pick this up tomorrow, I'm not sure. If I do, the next chapter is their analysis. Seems like there would be things in there that merit comment
@hrefna I'm finding it frustrating, mainly
@jsbarretto That's not what people mean when they say system design.
They mean which way do dependencies flow. What is the scope of responsibility for this thing. How will it communicate with other things. How does the collection of things remain in a consistent state.
For example.
@realn2s Lower grades are, indeed, worse.
The AI did seem to speed things up, but not enough to achieve statistical significance. And as I describe further down the thread (just now, not suggesting you didn't read far enough), the AI chatbot seems to have been the only supportive tooling that was available. So it's not so much the difference between AI or not, as the difference between support tools or not.
And then there's one more detail. I'm not sure how I should be thinking about this, but it feels very relevant. All of the study subjects were recruited through a crowd working platform. That adds a whole extra concern about the subjects' standing on the platform. It means that in some sense undertaking this study was their job, and the instructions given in the project brief were not just instructions to a participant in a study, but requirements given to a worker.
I know this kind of thing is not unusual in studies like this. But it feels like a complicating factor that I can't see the edges of.
So. The test conditions were weirdly high stress, for no particular reason the study makes clear. Or even acknowledges. The stress was *higher* on the control group. And the control group had to use inferior tooling.
I don't see how this data can be used to support any quantitative conclusion at all.
Qualitatively, I suspect there is some value in the clusters of AI usage patterns they observed. But that's not what anyone is talking about when they talk about this study.
Given all of that, I don't actually think they measured the impact of the code extruding chatbots at all. On anything. What they measured was stress. This is a stress test.
And, to return to their notion of what "code writing" consists of: the control subjects didn't have code completion, and the test subjects did. I know this, because they said so. It came up in their pilot studies. The control group kept running out of time because they struggled with syntax for try/catch, and for string formatting. They only stopped running out of time after the researchers added specific reminders for those 2 things to the project's instructions.
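(For context, the syntax in question is presumably Python's try/except and f-strings. Purely as an illustration of how little is at stake here, with a made-up function that is not from the paper, something like:

def read_record(path):
    # "try/catch" in Python is spelled try/except
    try:
        with open(path) as f:
            data = f.read()
    except FileNotFoundError as err:
        # and this is the string formatting bit, an f-string
        return f"missing record at {path}: {err}"
    return data

That's the level of recall the control group was losing time to.)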
I guess this brings me to the study design. I'm struggling a little to figure out how to talk about this. The short version is that I don't think they're testing any of the effects they think they're testing.
So, they start with a warmup coding round, which seems to be mostly to let people become familiar with the tool. That's important, because the tool is commercial software for conducting coding interviews in a browser. They don't say which one, that I've seen.
Then they have two separate toy projects that the subjects should complete. One is a non-blocking ticker, using a specific async library. The other is some async I/O record retrieval with basic error handling, using the same async library.
And then they take a quiz about that async library.
But there are some very important details. The coding portion and quiz are both timed. The subjects were instructed to complete them as fast as possible. And the testing platform did not seem to have code completion or, presumably, any other modern development affordance.
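To make the shape of the tasks concrete, here's roughly what they seem to amount to. The paper's async library isn't named (that I've seen), so this is a guess using plain asyncio, with made-up names, not the study's actual code:

import asyncio

# Project 1: a non-blocking ticker (illustrative sketch only).
async def ticker(interval, count):
    for i in range(count):
        print(f"tick {i}")
        await asyncio.sleep(interval)  # yields to the event loop instead of blocking

# Project 2: async record retrieval with basic error handling.
async def fetch_record(path):
    def _read():
        with open(path) as f:
            return f.read()
    try:
        return await asyncio.to_thread(_read)  # assuming local-filesystem records
    except FileNotFoundError:
        return f"no record at {path}"

async def main():
    results = await asyncio.gather(ticker(0.5, 3), fetch_record("records/42.txt"))
    print(results[1])

asyncio.run(main())

That's the whole scope, give or take whatever the unnamed library changes about the spelling.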
@hrefna it certainly doesn't make them look good. But I'm honestly not sure we can draw *any* conclusion from this study. Which I'm getting into now
@mattly I mean, yes. But still.
Maybe what I don't understand is why everyone else goes along with it.
So anyway, all of this is, apparently, in service to the "original motivation of developing and retaining the skills required for supervising automation."
Which would be cool, I'd like to read that study, because it isn't this one. This study is about whether the tools used to rapidly spit out meaningless code will impact one's ability to answer questions about the code that was spat. And even then, I'm not sure the design of the study can answer that question.
4. Conceptual. As they put it, "Conceptual understanding is critical to assess whether AI-generated code uses appropriate design patterns that adheres to how the library should be used."
IIIIIII guess. That's not wrong, exactly? But it's such a reverse centaur world view. I don't want to be the conceptual bounds checker for the code extruder. And I don't understand why they don't understand that.
3. Code writing. Honestly, I don't get the impression they even understand what this means. They say "Low-level code writing, like remembering the syntax of functions, will be less important with further integration of AI coding tools than high-level system design."
Neither of those things is a meaningful facet of actually writing code. Writing code exists entirely in-between those two things. Code completion tools basically eliminate having to think about syntax (but we will return to this). And system design happens in the realm of abstract behaviors and responsibilities.
2. Code Reading. "This skill enables humans to understand and verify AI-written code before deployment."
Again, not in my professional experience. It's just too voluminous and bland. And no one has time for that shit, even if they can make themselves do it. Plus, I haven't found anyone who can properly review slop code, because we can't operate without the assumptions of comprehension, intention, and good faith that simply do not hold in that case.
Found it! n=52. wtf. I reiterate: 20 billion dollars, just for this current funding round, and they only managed to do this study with 52 people.
But anyway, let's return to the methods themselves. They start with the design of the evaluation component, so I will too. It's organized around 4 evaluative practices they say are common in CS education. That seems fine, but their explanation for why these things are relevant is weird.
1. Debugging. According to them "this skill is crucial for detecting when AI-generated code is incorrect and understanding why it fails."
Maybe their definition is more expansive than it seems here? But it's been my experience, professionally, that this is just not the case. The only even sort-of reliable mechanism for detecting and understanding the shit behavior of slop code is extensive validation suites.
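To be concrete about what I mean by validation, something in this direction, with a made-up function name (not from the paper) and assuming pytest:

import pytest
from myproject.text import slugify  # hypothetical function under test

@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  extra  spaces  ", "extra-spaces"),
    ("", ""),
])
def test_known_cases(raw, expected):
    assert slugify(raw) == expected

def test_idempotent():
    # running it twice shouldn't change the result, whoever (or whatever) wrote it
    once = slugify("Mixed CASE input")
    assert slugify(once) == once

You're pinning down observable behavior, because reading the code itself tells you nothing you can trust.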
@r343l @glyph
As I've learned, they did some preregistration for the study. That might have influenced them.
And, a whole bunch of these AI researchers really do seem to think of themselves as serious scientists doing important work. Particularly at Anthropic, as that's where a lot of the true believers ended up
@glyph I would do this more, but the format of academic papers is so cumbersome. The time I actually have available for it is on the couch, after the kid's in bed. But reading these things on a phone is basically impossible
Chapter 4. Methods.
Let's go
First, the task. It's uh. It's basically a shitty whiteboard coding interview. The assignment is to build a couple of demo projects for an async python library. One is a non-blocking ticker. The other is some I/O ("record retrieval", not clear if this is the local filesystem or what, but probably the local fs) with handling for missing files.
Both are implemented in a literal whiteboard coding interview tool. The test group gets an AI chatbot button, and encouragement to use it. The control group doesn't.
/sigh
I just. Come on. If you were serious about this, it would be pocket change to do an actual study