We'll see how I feel in the morning, but for now I seem to have convinced myself to actually read that fuckin Anthropic paper
-
@jenniferplusplus …and good struggles, which are what good instructors help create
Yes, that's one important aspect during teaching/learning. @inthehands @jenniferplusplus
-
Given all of that, I don't actually think they measured the impact of the code extruding chatbots at all. On anything. What they measured was stress. This is a stress test.
And, to return to their notion of what "code writing" consists of: the control subjects didn't have code completion, and the test subjects did. I know this because they said so; it came up in their pilot studies. The control group kept running out of time because they struggled with the syntax for try/catch and for string formatting. They only stopped running out of time after the researchers added specific reminders about those two things to the project's instructions.
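For reference, the two constructs in question are trivial once you know them, which is what makes the time pressure notable. A minimal sketch in Python (the excerpt doesn't name the study's actual language, so this is purely illustrative):

```python
# Illustrative only: the two syntax points the control group reportedly
# struggled with — exception handling (try/catch) and string formatting.
def parse_port(raw: str) -> int:
    try:
        port = int(raw)
    except ValueError:
        # try/except: catch the specific failure rather than everything
        raise SystemExit(f"invalid port: {raw!r}")
    # f-string formatting: interpolate the value into the message
    print(f"using port {port}")
    return port
```

Neither construct is conceptually hard; fumbling them under a clock is a stress effect, not a skill signal.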
-
So. The test conditions were weirdly high stress, for no particular reason the study makes clear. Or even acknowledges. The stress was *higher* on the control group. And the control group had to use inferior tooling.
I don't see how this data can be used to support any quantitative conclusion at all.
Qualitatively, I suspect there is some value in the clusters of AI usage patterns they observed. But that's not what anyone is talking about when they talk about this study.
And then there's one more detail. I'm not sure how I should be thinking about this, but it feels very relevant. All of the study subjects were recruited through a crowd working platform. That adds a whole extra concern about the subjects' standing on the platform. It means that in some sense undertaking this study was their job, and the instructions given in the project brief were not just instructions to a participant in a study, but requirements given to a worker.
I know this kind of thing is not unusual in studies like this. But it feels like a complicating factor that I can't see the edges of.
-
@jenniferplusplus
I'm a bit confused. Aren't lower grades worse?
And it even took longer because of "AI distractions"?
@realn2s Lower grades are, indeed, worse.
The AI did seem to speed things up, but not enough to achieve statistical significance. And as I describe further down the thread (just now, not suggesting you didn't read far enough), the AI chatbot seems to have been the only supportive tooling that was available. So it's not so much the difference between AI or not, as the difference between support tools or not.
-
@jenniferplusplus Kind of a funny statement given that the whole point of abstraction, encapsulation, high level languages, etc. is to provide a formal basis for much of a program to be designed in terms of high level concepts
@jsbarretto That's not what people mean when they say system design.
They mean which way do dependencies flow. What is the scope of responsibility for this thing. How will it communicate with other things. How does the collection of things remain in a consistent state.
For example.
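A toy sketch of those concerns (all names hypothetical, just to make the dependency direction and responsibility scope concrete):

```python
# System design concerns, not abstraction-in-the-small:
# dependencies flow inward — the domain defines an interface,
# the storage detail implements it, and the domain never imports storage.
from typing import Protocol


class OrderStore(Protocol):
    def save(self, order_id: str, total: float) -> None: ...


class Checkout:
    """Scope of responsibility: price and record an order; nothing else."""

    def __init__(self, store: OrderStore) -> None:
        self.store = store  # communicates with other things via the interface only

    def place(self, order_id: str, items: list[float]) -> float:
        total = sum(items)
        self.store.save(order_id, total)  # one writer keeps state consistent
        return total


class MemoryStore:
    """A storage detail; swappable without touching Checkout."""

    def __init__(self) -> None:
        self.rows: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.rows[order_id] = total
```

None of that is captured by "high-level language constructs" alone; it's about which pieces are allowed to know about which others.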
-
@jenniferplusplus Holy carp this is a fabulous (slash shocking) thread. Thanks for taking the time.
-
@jenniferplusplus oh gods I need to read this.
@hrefna I'm finding it frustrating, mainly
-
But now it's 1am. I may pick this up tomorrow, I'm not sure. If I do, the next chapter is their analysis. Seems like there would be things in there that merit comment
-
Actually, hang on. One more thing occurred to me. Does this exacerbate the difficulty of replication, given that the simple passage of time will render this library no longer new?
And now I'm done for the night, for real
-
@jenniferplusplus I did indeed ask the question before I had finished the thread.
I was very confused, and in some ways still am.
How can the authors of the paper think all this is an argument for AI (which I believe they do)?
-