FARVEL BIG TECH

We'll see how I feel in the morning, but for now I seem to have convinced myself to actually read that fuckin Anthropic paper

92 posts, 29 posters, 13 views
This thread has been deleted. Only users with topic management privileges can see it.

  • mikalai@privacysafe.social wrote:

    @seanwbruno @jenniferplusplus
    Will "is peer reviewed" change validity/or-lack of the paper?
    Should it?

    kevingranade@mastodon.gamedev.place wrote (#18):

    @mikalai @seanwbruno @jenniferplusplus the thing that is a positive signal is that it *survived* peer review, which implies that there are multiple, knowledgeable, independent scientists in the area of study of the paper that read it and came to the conclusion, "the conclusions stated by this paper are supported by the data and arguments presented in the paper".

    This paper would not survive peer review.

    It is a flawed system but it is not worthless.

  • jenniferplusplus@hachyderm.io wrote:

      I just

      I'm not actually in the habit of reading academic research papers like this. Is it normal to begin these things by confidently asserting your priors as fact, unsupported by anything in the study?

      I suppose I should do the same, because there's no way it's not going to inform my read on this

      atax1a@infosec.exchange wrote (#19):

      @jenniferplusplus no, usually academic studies have a null hypothesis of "the effect we're trying to study does not exist" and are required to provide evidence sufficient to reject that hypothesis
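
A minimal sketch of the null-hypothesis workflow described in the post above, using a standard two-sample t-test on made-up scores; scipy is assumed to be available, and none of the numbers come from the paper or this thread.

```python
# Null-hypothesis testing in miniature: assume "no effect" (H0), then ask whether
# the observed data would be surprising enough under H0 to justify rejecting it.
# Both score lists are invented purely for illustration.
from scipy import stats

control = [72, 80, 68, 75, 77, 83, 70, 74]    # hypothetical scores without the intervention
treatment = [63, 70, 58, 66, 69, 72, 61, 64]  # hypothetical scores with the intervention

# Two-sample t-test: H0 is "both groups have the same mean".
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Conventional threshold: reject H0 only if p < 0.05.
if p_value < 0.05:
    print("Reject H0: a difference this large is unlikely to be chance alone.")
else:
    print("Fail to reject H0: the data are compatible with 'no effect'.")
```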

    • jenniferplusplus@hachyderm.io wrote:

        "AI" is not actually a technology, in the way people would commonly understand that term.

        If you're feeling extremely generous, you could say that AI is a marketing term for a loose and shifting bundle of technologies that have specific useful applications.

        I am not feeling so generous.

        AI is a technocratic political project for the purpose of industrializing knowledge work. The details of how it works are a distant secondary concern to the effect it has, which is to enclose and capture all knowledge work and make it dependent on capital.

        n_dimension@infosec.exchange wrote (#20):

        @jenniferplusplus

        #regulateai

      • jenniferplusplus@hachyderm.io wrote:

          And now for a short break

          jenniferplusplus@hachyderm.io wrote (#21):

          I have eaten. I may be _slightly_ less cranky.

          Ok! The results section! For the paper "How AI Impacts Skill Formation"

          > we design a coding task and evaluation around a relatively new asynchronous Python library and conduct randomized experiments to understand the impact of AI assistance on task completion time and skill development

          ...

          Task completion time. Right. So, unless the difference is large enough that it could change whether or not people can learn things at all in a given practice or instructional period, I don't know why we're concerned with task completion time.

          Well, I mean, I have a theory. It's because "AI makes you more productive" is the central justification behind the political project, and this is largely a political document.

            • jenniferplusplus@hachyderm.io wrote (#22), in reply to #21:

            > We find that using AI assistance to complete tasks that involve this new library resulted in a reduction in the evaluation score by 17% or two grade points (Cohen’s d = 0.738, p = 0.010). Meanwhile, we did not find a statistically significant acceleration in completion time with AI assistance.

            I mean, that's an enormous effect. I'm very interested in the methods section, now.

            > Through an in-depth qualitative analysis where we watch the screen recordings of every participant in our main study, we explain the lack of AI productivity improvement through the additional time some participants invested in interacting with the AI assistant.

            ...

            Is this about learning, or is it about productivity!? God.

            > We attribute the gains in skill development of the control group to the process of encountering and subsequently resolving errors independently

            Hm. Learning with instruction is generally more effective than learning through struggle. A surface level read would suggest that the stochastic chatbot actually has a counter-instructional effect. But again, we'll see what the methods actually are.

            Edit: I should say, doing things with feedback from an instructor generally has better learning outcomes than doing things in isolation. I phrased that badly.
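
For scale on the Cohen’s d = 0.738 quoted above, here is a minimal sketch of how that statistic is computed; the scores are invented and chosen only to land in the same ballpark, not taken from the paper.

```python
# Cohen's d: the standardized mean difference between two groups,
# (mean_A - mean_B) divided by the pooled standard deviation.
# All scores below are invented for illustration.
import statistics

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

control = [62, 71, 85, 78, 66, 90, 74, 81]      # hypothetical evaluation scores, no AI
ai_assisted = [58, 64, 77, 70, 60, 83, 66, 74]  # hypothetical evaluation scores, with AI

print(f"d = {cohens_d(control, ai_assisted):.2f}")
# Prints roughly d = 0.76. The usual rule of thumb is ~0.2 small, ~0.5 medium,
# ~0.8 large, so a d around 0.74 is a sizeable effect for a behavioral study.
```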

              • jenniferplusplus@hachyderm.io wrote (#23), in reply to #22:

              They reference these figures a lot, so I'll make sure to include them here.

              > Figure 1: Overview of results: (Left) We find a significant decrease in library-specific skills (conceptual understanding, code reading, and debugging) among workers using AI assistance for completing tasks with a new python library. (Right) We categorize AI usage patterns and found three high skill development patterns where participants stay cognitively engaged when using AI assistance

                • inthehands@hachyderm.io wrote (#24), in reply to #22:

                @jenniferplusplus

                > Learning with instruction is generally more effective than learning through struggle.

                I don’t think this is necessarily a true statement? Guided learning beats unproductive struggle, but learning through struggle that eventually succeeds produces far better retention etc. than guided learning that becomes passive/receptive. There’s a huge literature on this that I’m not up on at all, but I’m pretty sure it doesn’t break cleanly along that particular line.

                (I don’t think my quibble derails your larger train of thought here)

                  • mikalai@privacysafe.social wrote (#25), in reply to #23:

                  @jenniferplusplus
                  Should the title instead read: the impact of not forming mental models, due to trusting and outsourcing thinking to AI in this case?

                    • c0dec0dec0de@hachyderm.io wrote (#26), in reply to #24:

                    @inthehands @jenniferplusplus I would say that, regardless, both guided learning from an entity that actually knows the material and independent learning tested against reality beat working with a jumped-up autocorrect. The machine will tell you that you’re doing great things while spitting out garbage; counter-instructional is certainly one way to put it.

                      • jenniferplusplus@hachyderm.io wrote (#27), in reply to #23:

                      > As AI development progresses, the problem of supervising more and more capable AI systems becomes more difficult if humans have weaker abilities to understand code [Bowman et al., 2022]. When complex software tasks require human-AI collaboration, humans still need to understand the basic concepts of code development even if their software skills are complementary to the strengths of AI [Wang et al., 2020].

                      Right, sure. Except, there is actually a third option. But it's one that seems inconceivable to the authors. That is to not use AI in this context. I'm not even necessarily arguing* that's better. But if this is supposed to be sincere scholarship, how is that not even under consideration?

                      *well, I am arguing that, in the context of AI as a political project. If you had similar programs that were developed and deployed in a way that empowers people, rather than disempowers them, this would be a very different conversation. Of course, I would also argue that very same political project is why it's inconceivable to the authors, soooo

                        • aoanla@hachyderm.io wrote (#28), in reply to #26:

                        @c0dec0dec0de @inthehands @jenniferplusplus I think the problem is actually *engagement* - as well as correct challenge, learning requires active engagement with material (and effort to internalise it). Getting an LLM etc to "help" tends to reward disengagement (as well as potentially allowing you to "reduce the challenge" to the point where you're not actually doing anything hard yourself).

                          • r343l@freeradical.zone wrote (#29), in reply to #24:

                          @inthehands @jenniferplusplus One of my personal hesitations about using the LLM tools much (despite incredible professional pressure to do so) is that my use of them (again, under professional necessity) has reinforced my pre-existing belief that struggling through a problem, debugging and digging through source and so on has been CRITICAL to my skill development. It is something I have for (uh) 15+ years told less experienced software developers is critical to getting better / faster!

                            • r343l@freeradical.zone wrote (#30), in reply to #29:

                            @inthehands @jenniferplusplus Maybe there is a way to use things like Claude Code in ways that don’t disrupt this struggle learning pattern. This is one thing I’ve been trying to work out for myself! But so far I’ve not seen much about this concern or how the tools could be used in a way that results in the equivalent learning.

                              • jenniferplusplus@hachyderm.io wrote (#31), in reply to #24:

                              @inthehands Right, it's not universally the case. There are bad instructors and bad instructional contexts.

                                • kdedude@kde.social wrote (#32), in reply to #21:

                                @jenniferplusplus you have inspired me to read it as well (over beer and pizza) and ... yeah, what she said. I think I gave up before the results section. I did feel that the prep-work to calibrate the experiment (e.g. the local item dependence in the quiz) was pretty well done, but I will defer to any sociologist who says otherwise.

                                Why is all the so-called productivity in the paper at all?

                                  • jenniferplusplus@hachyderm.io wrote:

                                  So, back to the paper.

                                  "How AI Impacts Skill Formation"
                                  https://arxiv.org/abs/2601.20245

                                  The very first sentence of the abstract:

                                  > AI assistance produces significant productivity gains across professional domains, particularly for novice workers.

                                  1. The evidence for this is mixed, and the effect is small.
                                  2. That's not even the purpose of this study. The design of the study doesn't support drawing conclusions in this area.

                                  Of course, the authors will repeat this claim frequently. Which brings us back to MY priors, which is that this is largely a political document.

                                    dalias@hachyderm.io wrote (#33):

                                  @jenniferplusplus It's less a claim and more an intentionally-unsubstantiated background premise which the supposed research will treat as an assumed truth.

                                    • inthehands@hachyderm.io wrote (#34), in reply to #31:

                                    @jenniferplusplus
                                    …and good struggles, which are what good instructors help create

                                      • jenniferplusplus@hachyderm.io wrote (#35), in reply to #33:

                                      @dalias Honestly, yes. I suspect the purpose of this paper is to reinforce that production is a correct and necessary factor to consider when making decisions about AI.

                                      And secondarily, I suspect it's establishing justification for blaming workers for undesirable outcomes; it's our fault for choosing to learn badly.

                                        • jenniferplusplus@hachyderm.io wrote (#36), in reply to #27:

                                        And then we switch back to background context. We get 11 sentences of AI = productivity. Then 3 sentences on "cognitive offloading". 4 sentences on skill retention. And 4 on "over reliance". So, fully 50% (11 of 22 sentences) of the background section of the "How AI Impacts Skill Formation" paper is about productivity.

                                          • joshg@mathstodon.xyz wrote (#37), in reply to jenniferplusplus@hachyderm.io's "AI is not actually a technology" post quoted above:

                                          @jenniferplusplus
                                          bookmarked for future reference, boosting is not enough
