FARVEL BIG TECH
This "careful" "AI Safety" company that just accidentally leaked its entire source code to the world is the one that African governments are entering into agreements with to include in infrastructures from health care to god knows what.

25 posts, 9 posters
  • rysiek@mstdn.social:

    @timnitGebru I think this is relevant to these questions, albeit it handles them on a different level:
    https://freakonometrics.hypotheses.org/89367

    > Someone still has to reread, compare, test, contextualize, and sometimes rewrite. And if no one seriously takes on that work, the cost does not disappear. It reappears later in the form of errors, urgent fixes, loss of trust, and eventually litigation. What is presented as a productivity gain is often just an accounting displacement.

    timnitgebru@dair-community.social (#11):

    @rysiek Great article.

    • rysiek@mstdn.social (#12):

      @timnitGebru it really is.

      And boy does the Claude Code leaked codebase support that assessment. Have you seen @jonny 's thread on this? If not:
      https://neuromatch.social/@jonny/116324676116121930

      • rysiek@mstdn.social (#13):

        @timnitGebru the whole thing is great, but somewhere down the thread there are truly astonishing gems like:

        > So the reason that Claude code is capable of outputting valid json is because if the prompt text suggests it should be JSON then it enters a special loop in the main query engine that just validates it against JSON schema for JSON and then feeds the data with the error message back into itself until it is valid JSON or a retry limit is reached.

        Thousand monkeys, thousand typewriters…
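        The loop described in that quote can be sketched as follows. This is an illustration, not Anthropic's actual code: every name here is hypothetical, plain `json.loads` stands in for the JSON-Schema validation the post mentions, and the retry limit is assumed.

```python
import json

RETRY_LIMIT = 3  # assumed; the real limit is not public

def ask_model(prompt: str, canned: list[str]) -> str:
    """Hypothetical stand-in for the model call; pops pre-scripted replies."""
    return canned.pop(0)

def get_valid_json(prompt: str, canned: list[str]):
    """Re-prompt with the parse error until the output is valid JSON or retries run out."""
    for _ in range(RETRY_LIMIT):
        raw = ask_model(prompt, canned)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            # the "special loop": the validation error is fed back into the prompt
            prompt = f"{prompt}\nYour previous output was invalid JSON: {err}"
    return None  # retry limit reached

# First scripted reply has a trailing comma (invalid JSON), the second is valid:
result = get_valid_json("Return JSON", ['{"a": 1,}', '{"a": 1}'])
print(result)  # → {'a': 1}
```

        Each failed attempt burns another full model call, which is the point marcel makes further down the thread.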

        • bms48@mastodon.social:

          @timnitGebru EMC++S: Embracing Modern C++ Safely. My appetite for actually using GenAI is wearing thin after the severe information security risk Claude Code and other frontends are known to pose, following the leak <48 hours ago. LLMs have suggested regular expressions to me, but their role has been pretty much limited to that of an error-prone natural-language search processor. This suggests a far lower economic point of inflexion for GenAI-driven advantage than the one promoted for it.

          bms48@mastodon.social (#14):

          @timnitGebru Also, a lot of the FreeBSD related work I've been doing lately hasn't been writing software itself in anger, but hardware qualification: physically plugging hardware together, usually network adapters, switches, and routers, and evaluating compatibility. Using agents for any of this, whilst possible, would be like putting a hat on a hat, to borrow an expression from Seth MacFarlane in Family Guy. The human factor reigns supreme because of ISO OSI Layer 1.

          • rysiek@mstdn.social (#15):

            @timnitGebru of course it makes total sense for Claude Code to waste developer tokens like that, since Anthropic charges per token… 🙄

            • timnitgebru@dair-community.social:

              I appreciated this article by @mttaggart on infosec.exchange.

              I get the temptation especially in this world we're all living in where you have to produce something super fast all the time.

              But my question is, what are people's arguments for how functioning software can be created with these tools?

              What about new architectures, new ways of thinking, new programming languages, etc? Who will create those?

              https://taggart-tech.com/reckoning/

              kwazekwaze@mastodon.social (#16):

              @timnitGebru that blogpost strikes me as incredibly irresponsible

              The legalistic use of the word "works" - the post itself includes the keyphrase "works with caveats"! - makes for an otherwise reasonable conclusion that becomes absolutely heinous anywhere that isn't a vacuum. Suggesting people need to be more accommodating towards LLM users is a joke when this is the cohort attempting to force their (by the authors' recognition horrifically joyless to use) toys onto and into everyone else's life.

              • kwazekwaze@mastodon.social (#17):

                @timnitGebru In a perfect world I'd accept people that love their codegen chatbots as no different from people that prefer the command line or tabs over spaces!

                But we're not in that world and they're actively forcing their products on everyone else and posts like these reek of someone that has the privilege of not having that be done to them.

                • timnitgebru@dair-community.social (#18):

                  @rysiek @jonny No, just read it now.

                  • timnitgebru@dair-community.social (#19):

                    @rysiek Literally the question of "what if computer science was no longer about figuring out the most efficient way to do X but the brute-force way to do X"?
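                    As a toy illustration of that shift (an editor's sketch, not from the thread): checking membership in a sorted list by brute-force scan versus by exploiting the sorted order. Both "work", at very different cost.

```python
import bisect

data = list(range(1_000_000))  # a sorted list

def contains_brute_force(xs, target):
    """Inspect every element: O(n), no knowledge of the structure needed."""
    return any(x == target for x in xs)

def contains_binary_search(xs, target):
    """Exploit the sorted order: O(log n) comparisons."""
    i = bisect.bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

# Same answer either way; only the cost differs.
print(contains_brute_force(data, 999_999))   # → True
print(contains_binary_search(data, 999_999)) # → True
```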

                    • bms48@mastodon.social (#20):

                      @rysiek @timnitGebru The illusion of progress, indeed! I plan to do my initial experiments with Gemini as it is being massively subsidised at the open API gateway level via Opencode.AI, as opposed to using monthly subscriptions for the now arguably massively discredited Claude Code. That's if I even get around to it. So far just using project-wide find/grep/sed magic is working just fine for me, and traditional clang-tidy abstract syntax tree (AST) based refactoring is closer in grasp.

                      • marcel@waldvogel.family (#21):

                        @rysiek @timnitGebru
                        I was so baffled to learn how *mandatory* output verification is implemented. Any sane developer would have resorted to a compact loop along the lines of

                        `do { result = tool_call(…) } while (!is_valid(result));`

                        Zero overhead besides the wasteful repetitive tool calls in the hope of eventually getting the format right.

                        Instead, they have complex, expensive instructions for the LLM to do that.
                        https://neuromatch.social/@jonny/116326861737478342

                        • kwazekwaze@mastodon.social (#22):

                          @timnitGebru There's something especially heinous about using the word "works" like this despite knowing all of the issues and I feel like it's been litigated to death at this point and people should know better by now.

                          Leaded gasoline "works". Downtown freeways "work". Asbestos "works". The list goes on. It's tiresome! It's irksome! It strikes me as if this author thought the theft machine wasn't capable of reproducing the working content it stole! Yes! That's why we call it a theft machine!

                          • kwazekwaze@mastodon.social (#23):

                            @timnitGebru
                            And sorry, none of this is directed at you.

                            • jdp23@neuromatch.social (#24):

                              Yeah @jonny's thread is great, really eye-opening.

                              It's an interesting question. There are a few different arguments that advocates for using these tools make.

                              • skilled software engineers are very good at using imperfect tools -- figuring out the scenarios they work well in and how to work around the problems. @mttaggart's article was a great example of how this can work in practice, and @glyph has some thoughtful posts along these lines (not that either of them are advocates of the tools, but they illustrate the point). Static analysis tools (my software engineering claim to fame) are a great example of this general tendency: they can be extremely useful despite high numbers of false positives and false negatives.

                              • the tools will radically democratize who can create personal-use software -- stuff that addresses their own (and their friends' and family's) problems without being intended for broader use. For a lot of scenarios, attributes like scalability / reliability / security don't necessarily matter that much; so being able to start with a natural language definition and get something "good enough" can potentially be useful.

                              • agentic software development is a transformative approach that leverages today's immense computing power and so can produce software at least as good as today's hand-crafted software (which to be fair mostly sucks) far more quickly.

                              Then again, on top of the issues that excellent article from @rysiek discusses, advocates in general don't take Gender HCI, Feminist HCI, Post-Colonial Computing, Anti-Oppressive Design, Design Justice, Accessibility, Security, Algorithmic Discrimination, or Design from the Margins into account. Neither do the people creating these tools, and neither does the overwhelming majority of the existing software these tools have been trained on. So software generated by these tools is at best going to replicate the existing problems in these areas -- and more likely magnify them.

                              So this to me is where the bullet points above break down.

                              • Few if any software developers are "skilled" in all of these areas, so don't know how to compensate for imperfect tools (and quite possibly aren't even aware of the tools' imperfections).

                              • "Personal use" tools that aren't accessible or designed from the margins, or embed algorithmic discrimination, aren't useful for most people.

                              • Generating more software more quickly that magnifies (or even reproduces) today's problems in all these areas magnifies oppressions.

                              And as you say there's also the data stealing, exploitation, environmental racism, etc., of the current generation of tools -- and let's not forget fascism, eugenics, and cognitive issues!

                              In theory there are alternate approaches that can avoid these problems; @anildash has talked about using small models trained locally on his own code, and that seems like a potentially promising direction. In practice, though, the vast majority of advocates today seem to be using stuff from Anthropic, OpenAI, Meta ... even the ones who acknowledge the ethical issues don't actually address them.

                              @timnitGebru

                              • jdp23@neuromatch.social (#25):

                                Also I think this question is very related to @tarakiyee's excellent On The Enshittification of Audre Lorde: "The Master's Tools" in Tech Discourse. Of course, as Tara points out, Lorde wasn't talking about "tools" in the tech sense.

                                "[T]he "tools" Lorde was naming were not literal instruments but epistemic ones: the frameworks of thought, the methods of inquiry, the structures of inclusion and exclusion that had been built by and for a particular kind of subject (white, heterosexual, Western) and that continued to operate even within movements ostensibly committed to liberation. The questions she poses make this frame legible: what does it mean to conduct feminist analysis while systematically excluding the voices of poor women, Black women, Third World women, lesbians? What does it mean to theorize liberation using categories that treat those exclusions as incidental rather than structural?"

                                And heaven knows most of the discussion about "AI" tools in software engineering isn't being done as "feminist analysis"!

                                Still ... "AI" is indeed an epistemic framework, and the pattern of systematically excluding the voices of poor women, Black women, Third World women, lesbians (and disabled people, etc etc etc) and then treating those exclusions as incidental rather than structural is exactly what's going on here.

                                @timnitGebru

Powered by NodeBB · Graciously hosted by data.coop